www.elsevier.com/locate/anihpb

Conditioned Brownian trees

Jean-François Le Gall, Mathilde Weill

DMA – Ecole normale supérieure de Paris, 45, rue d'Ulm, 75005 Paris, France

Received 30 November 2004; accepted 30 August 2005

Available online 19 January 2006

Abstract

We consider a Brownian tree consisting of a collection of one-dimensional Brownian paths started from the origin, whose genealogical structure is given by the Continuum Random Tree (CRT). This Brownian tree may be generated from the Brownian snake driven by a normalized Brownian excursion, and thus yields a convenient representation of the so-called Integrated Super-Brownian Excursion (ISE), which can be viewed as the uniform probability measure on the tree of paths. We discuss different approaches that lead to the definition of the Brownian tree conditioned to stay on the positive half-line. We also establish a Vervaat-like theorem showing that this conditioned Brownian tree can be obtained by re-rooting the unconditioned one at the vertex corresponding to the minimal spatial position. In terms of ISE, this theorem yields the following fact: conditioning ISE to put no mass on ]−∞, −ε[ and letting ε go to 0 is equivalent to shifting the unconditioned ISE to the right so that the left-most point of its support becomes the origin. We derive a number of explicit estimates and formulas for our conditioned Brownian trees. In particular, the probability that ISE puts no mass on ]−∞, −ε[ is shown to behave like 2ε⁴/21 when ε goes to 0. Finally, for the conditioned Brownian tree with a fixed height h, we obtain a decomposition involving a spine whose distribution is absolutely continuous with respect to that of a nine-dimensional Bessel process on the time interval [0, h], and Poisson processes of subtrees originating from this spine.

©2005 Elsevier SAS. All rights reserved.

Résumé

We consider a Brownian tree formed by a family of Brownian paths on the real line started from the origin, whose genealogical structure is the Continuum Random Tree (CRT). This Brownian tree can be constructed from the Brownian snake driven by a normalized Brownian excursion, and provides a simple representation of the random measure called ISE, which can be viewed as the uniform probability measure on the tree of paths. We discuss different approaches leading to the definition of the Brownian tree conditioned to stay on the positive side. We also establish a Vervaat-type theorem showing that this conditioned Brownian tree can be obtained by re-rooting the unconditioned tree at the vertex achieving the left-most spatial position. For the measure ISE, this yields the following result: conditioning ISE to put no mass on the interval ]−∞, −ε[ and letting ε tend to 0 amounts to translating the unconditioned measure to the right so that the origin becomes the left-most point of its support. We give explicit estimates and formulas for the conditioned Brownian tree. In particular, the probability that ISE puts no mass on the interval ]−∞, −ε[ behaves like 2ε⁴/21 as ε tends to 0. Finally, for the conditioned Brownian tree with height h, we obtain a decomposition combining a spine, whose law is absolutely continuous with respect to that of a nine-dimensional Bessel process on the interval [0, h], with a Poisson process of subtrees originating from this spine.

©2005 Elsevier SAS. All rights reserved.

* Corresponding author. Tel.: +(33) 1 44 32 31 68; fax: +(33) 1 44 32 20 80.

E-mail addresses: legall@dma.ens.fr (J.-F. Le Gall), weill@dma.ens.fr (M. Weill).

0246-0203/$ – see front matter ©2005 Elsevier SAS. All rights reserved.

doi:10.1016/j.anihpb.2005.08.001


MSC: 60J80; 60J65; 60G57

Keywords: Random tree; CRT; ISE; Conditioned tree; Brownian snake; Re-rooting

1. Introduction

In this work, we define and study a continuous tree of one-dimensional Brownian paths started from the origin, which is conditioned to remain in the positive half-line. An important motivation for introducing this object comes from its relation with analogous discrete models which are discussed in several recent papers.

In order to present our main results, let us briefly describe a construction of unconditioned Brownian trees. We start from a positive Brownian excursion conditioned to have duration 1 (a normalized Brownian excursion in short), which is denoted by $(\mathbf{e}(s),\,0\le s\le 1)$. This random function can be viewed as coding a continuous tree via the following simple prescriptions. For every $s,s'\in[0,1]$, we set
$$m_{\mathbf e}(s,s') := \inf_{s\wedge s'\le r\le s\vee s'} \mathbf e(r).$$
We then define an equivalence relation on $[0,1]$ by setting $s\sim s'$ if and only if $\mathbf e(s)=\mathbf e(s')=m_{\mathbf e}(s,s')$. Finally we put
$$d_{\mathbf e}(s,s') = \mathbf e(s)+\mathbf e(s')-2\,m_{\mathbf e}(s,s')$$
and note that $d_{\mathbf e}(s,s')$ only depends on the equivalence classes of $s$ and $s'$. Then the quotient space $\mathcal T_{\mathbf e} := [0,1]/\!\sim$ equipped with the metric $d_{\mathbf e}$ is a compact R-tree (see e.g. Section 2 of [15]). In other words, it is a compact metric space such that for any two points $\sigma$ and $\sigma'$ there is a unique arc with endpoints $\sigma$ and $\sigma'$, and furthermore this arc is isometric to a compact interval of the real line. We view $\mathcal T_{\mathbf e}$ as a rooted R-tree, whose root $\rho$ is the equivalence class of 0. For every $\sigma\in\mathcal T_{\mathbf e}$, the ancestral line of $\sigma$ is the line segment joining $\rho$ to $\sigma$. This line segment is denoted by $\llbracket\rho,\sigma\rrbracket$. We write $\dot s$ for the equivalence class of $s$, which is a vertex in $\mathcal T_{\mathbf e}$ at generation $\mathbf e(s)=d_{\mathbf e}(0,s)$.
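The coding of the tree by the excursion is easy to experiment with on a computer. The following sketch (an illustration added here, not part of the paper; it assumes numpy and uses a crude stand-in for the normalized excursion) discretizes a nonnegative path vanishing at 0 and 1 and evaluates $m_{\mathbf e}(s,s')$ and the tree distance $d_{\mathbf e}(s,s')$ on the grid.

```python
import numpy as np

def m_e(e, s, t, n):
    """Minimum of the discretized excursion e over [s ∧ t, s ∨ t] (grid k/n, k = 0..n)."""
    i, j = sorted((int(round(s * n)), int(round(t * n))))
    return e[i:j + 1].min()

def d_e(e, s, t, n):
    """Tree distance d_e(s, t) = e(s) + e(t) - 2 m_e(s, t)."""
    i, j = int(round(s * n)), int(round(t * n))
    return e[i] + e[j] - 2.0 * m_e(e, s, t, n)

# Crude stand-in for a normalized Brownian excursion: a reflected Brownian-like bridge.
# Any nonnegative path vanishing at both endpoints is enough to illustrate the formulas.
n = 1000
rng = np.random.default_rng(0)
b = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n) / np.sqrt(n))))
e = np.abs(b - np.linspace(0.0, b[-1], n + 1))

print(d_e(e, 0.2, 0.7, n))    # distance between the vertices coded by 0.2 and 0.7
print(d_e(e, 0.3, 0.3, n))    # 0.0: a point is at distance 0 from itself
```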

Up to unimportant scaling constants, $\mathcal T_{\mathbf e}$ is the Continuum Random Tree (CRT) introduced by Aldous [3]. The preceding presentation is indeed a reformulation of Corollary 22 in [5], which was proved via a discrete approximation (a more direct approach was given in [22]). As Aldous [5] has shown, the CRT is the scaling limit of critical Galton–Watson trees conditioned to have a large fixed progeny (see [14] and [15] for recent generalizations of Aldous' result). The fact that Brownian excursions can be used to model continuous genealogies had been used before, in particular in the Brownian snake approach to superprocesses (see [21]).

We can now combine the branching structure of the CRT with independent spatial motions. We restrict ourselves to spatial displacements given by linear Brownian motions, which is the case of interest in this work. Conditionally given $\mathbf e$, we introduce a centered Gaussian process $(V_\sigma,\,\sigma\in\mathcal T_{\mathbf e})$ with covariance
$$\mathrm{cov}(V_{\dot s}, V_{\dot s'}) = m_{\mathbf e}(s,s'), \qquad s,s'\in[0,1].$$
This definition should become clear if we observe that $m_{\mathbf e}(s,s')$ is the generation of the most recent common ancestor to $\dot s$ and $\dot s'$ in the tree $\mathcal T_{\mathbf e}$. It is easy to verify that the process $(V_\sigma,\,\sigma\in\mathcal T_{\mathbf e})$ has a continuous modification. The random measure $\mathcal Z$ on $\mathbb R$ defined by
$$\langle\mathcal Z,\varphi\rangle = \int_0^1 \varphi(V_{\dot s})\,ds$$
is then the one-dimensional Integrated Super-Brownian Excursion (ISE, see Aldous [6]). Note that ISE in higher dimensions, and related Brownian trees, have appeared recently in various asymptotic results for statistical mechanics models (see e.g. [13,17,29]). The support, or range, of ISE is
$$\mathcal R := \{V_\sigma:\ \sigma\in\mathcal T_{\mathbf e}\}.$$
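Conditionally on $\mathbf e$, the labels $(V_{\dot s})_{0\le s\le1}$ form a centered Gaussian process whose covariance matrix on any finite grid is read off $m_{\mathbf e}$, so a discretized version of $\mathcal Z$ can be sampled directly from that matrix. The sketch below is only a naive illustration of this definition (it reuses `e`, `n`, `rng` and `m_e` from the previous sketch and assumes numpy).

```python
# Covariance of the labels at the grid times: cov(s, s') = minimum of e between the times.
m = 50
grid = np.linspace(0.0, 1.0, m)
cov = np.array([[m_e(e, grid[a], grid[b], n) for b in range(m)] for a in range(m)])

# Sample the labels V and a crude histogram approximation of the random measure Z.
V = rng.multivariate_normal(np.zeros(m), cov + 1e-10 * np.eye(m))
hist, edges = np.histogram(V, bins=20, density=True)
print("left-most and right-most sampled labels:", V.min(), V.max())
```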

For our purposes, it is also convenient to reinterpret the preceding notions in terms of the Brownian snake. The Brownian snake $(W_s,\,0\le s\le1)$ driven by the normalized excursion $\mathbf e$ is obtained as follows (see Subsection 2.1 for a more detailed presentation). For every $s\in[0,1]$, $W_s=(W_s(t),\,0\le t\le\mathbf e(s))$ is the finite path which gives the spatial positions along the ancestral line of $\dot s$: $W_s(t)=V_\sigma$ if $\sigma$ is the vertex at distance $t$ from the root on the segment $\llbracket\rho,\dot s\rrbracket$. Note that $W_s$ only depends on the equivalence class $\dot s$. We view $W_s$ as a random element of the space $\mathcal W$ of finite paths.

Our first goal is to give a precise definition of the Brownian tree $(V_\sigma,\,\sigma\in\mathcal T_{\mathbf e})$ conditioned to remain positive. Equivalently this amounts to conditioning ISE to put no mass on the negative half-line. Our first theorem gives a precise meaning to this conditioning in terms of the Brownian snake. We denote by $\mathbb N_0^{(1)}$ the distribution of $(W_s,\,0\le s\le1)$ on the canonical space $C([0,1],\mathcal W)$ of continuous functions from $[0,1]$ into $\mathcal W$, and we abuse notation by still writing $(W_s,\,0\le s\le1)$ for the canonical process on this space. The range $\mathcal R$ is then defined under $\mathbb N_0^{(1)}$ by
$$\mathcal R = \{\widehat W_s:\ 0\le s\le1\},$$
where $\widehat W_s$ denotes the endpoint of the path $W_s$.

Theorem 1.1. We have
$$\lim_{\varepsilon\to0}\ \varepsilon^4\,\mathbb N_0^{(1)}\bigl(\mathcal R\subset\,]-\varepsilon,\infty[\bigr) = \frac{2}{21}.$$
There exists a probability measure on $C([0,1],\mathcal W)$, which is denoted by $\overline{\mathbb N}_0^{(1)}$, such that
$$\lim_{\varepsilon\to0}\ \mathbb N_0^{(1)}\bigl(\cdot\mid\mathcal R\subset\,]-\varepsilon,\infty[\bigr) = \overline{\mathbb N}_0^{(1)},$$
in the sense of weak convergence in the space of probability measures on $C([0,1],\mathcal W)$.

Our second theorem gives an explicit representation of the conditioned measure $\overline{\mathbb N}_0^{(1)}$, which is analogous to a famous theorem of Vervaat [30] relating the normalized Brownian excursion to the Brownian bridge. To state this result, we need the notion of re-rooting. For $s\in[0,1]$, we write $\mathcal T_{\mathbf e}^{[s]}$ for the "same" tree $\mathcal T_{\mathbf e}$ but with root $\dot s$ instead of $\rho=\dot 0$. We then shift the spatial positions by setting $V^{[s]}_\sigma = V_\sigma - V_{\dot s}$ for every $\sigma\in\mathcal T_{\mathbf e}$, in such a way that the spatial position of the new root is still the origin. (Notice that both $\mathcal T_{\mathbf e}^{[s]}$ and $V^{[s]}$ only depend on $\dot s$, and we could as well define $\mathcal T_{\mathbf e}^{[\sigma]}$ and $V^{[\sigma]}$ for $\sigma\in\mathcal T_{\mathbf e}$.) Finally, the re-rooted snake $W^{[s]}=(W^{[s]}_r,\,0\le r\le1)$ is defined analogously as before: for every $r\in[0,1]$, $W^{[s]}_r$ is the path giving the spatial positions $V^{[s]}_\sigma$ along the ancestral line (in the re-rooted tree) of the vertex $s+r \bmod 1$.

Theorem 1.2. Let $s_*$ be the unique time of the minimum of $\widehat W$ on $[0,1]$. The probability measure $\overline{\mathbb N}_0^{(1)}$ is the law under $\mathbb N_0^{(1)}$ of the re-rooted snake $W^{[s_*]}$.

If we want to define one-dimensional ISE conditioned to put no mass on the negative half-line, the most natural way is to condition it to put no mass on $]-\infty,-\varepsilon[$ and then to let $\varepsilon$ go to 0. As a consequence of the previous two theorems, this is equivalent to shifting the unconditioned ISE to the right, so that the left-most point of its support becomes the origin. Another method would be to condition the mass in $]-\infty,0]$ to be less than $\varepsilon$ and then to let $\varepsilon$ go to 0. Proposition 3.7 below shows that this leads to the same measure $\overline{\mathbb N}_0^{(1)}$.

Both Theorems 1.1 and 1.2 could be presented in a different and perhaps more elegant manner by using the formalism of spatial trees as in Section 5 of [15]. In this formalism, a spatial tree is a pair $(\mathcal T,U)$ where $\mathcal T$ is a compact rooted R-tree (in fact an equivalence class of such objects modulo root-preserving isometries) and $U$ is a continuous mapping from $\mathcal T$ into $\mathbb R^d$. Then the second assertion of Theorem 1.1 can be rephrased by saying that the conditional distribution of the spatial tree $(\mathcal T_{\mathbf e},V)$ knowing that $\mathcal R\subset\,]-\varepsilon,\infty[$ has a limit when $\varepsilon$ goes to 0, and Theorem 1.2 says that this limit is the distribution of $(\mathcal T_{\mathbf e}^{[\sigma_*]},V^{[\sigma_*]})$ where $\sigma_*$ is the unique vertex minimizing $V$. We have chosen the above presentation because the Brownian snake plays a fundamental role in our proofs and also because the resulting statements are stronger than the ones in terms of spatial trees.

Let us discuss the relationship of the above theorems with previous results. The first assertion of Theorem 1.1 is closely related to some estimates of Abraham and Werner [1]. In particular, Abraham and Werner proved that the probability for a Brownian snake driven by a Brownian excursion of height 1 not to hit the set $]-\infty,-\varepsilon[$ behaves like a constant times $\varepsilon^4$ (see Section 4 below). The d-dimensional Brownian snake conditioned not to exit a domain $D$ was studied by Abraham and Serlet [2], who observed that this conditioning gives rise to a particular instance of the Brownian snake with drift. The setting in [2] is different from the present work, in that the initial point of the snake lies inside the domain, and not at its boundary as here. We also mention the paper [19] by Jansons and Rogers, who establish a decomposition at the minimum for a Brownian tree where branchings occur only at discrete times.

An important motivation for the present work came from several recent papers that discuss asymptotics for planar maps. Cori and Vauquelin [11] proved that there exists a bijection between rooted planar quadrangulations and certain discrete trees called well-labeled trees (see also Chassaing and Schaeffer [10] for a more tractable description of this bijection). Roughly, a well-labeled tree consists of a (discrete) plane tree whose vertices are given labels which are positive integers, with the constraints that the label of the root is 1 and the labels of two neighboring vertices can differ by at most 1. Our conditioned Brownian snake should then be viewed as a continuous model for well-labeled trees. This idea was exploited in [10] and especially in Marckert and Mokkadem [27], where the re-rooted snake $W^{[s_*]}$ appears in the description of the Brownian map, which is the continuous object describing scaling limits of planar quadrangulations. In contrast with the present work, the re-rooted snake $W^{[s_*]}$ is not interpreted in [27] as a conditioned object, but rather as a scaling limit of re-rooted discrete snakes. Closely related models of discrete labeled trees are also of interest in theoretical physics: see in particular [7] and [8]. The article [25], which was motivated by [10] and [27], proves that our conditioned Brownian tree is the scaling limit of discrete spatial trees conditioned to remain positive. To be specific, consider a Galton–Watson tree whose offspring distribution is critical and has (small) exponential moments, and condition this tree to have exactly $n$ vertices (in the special case of the geometric distribution, this gives rise to a tree that is uniformly distributed over the set of plane trees with $n$ vertices). This branching structure is combined with a spatial displacement which is a symmetric random walk with bounded jump size on $\mathbb Z$. Assuming that the root is at the origin of $\mathbb Z$, the spatial tree is then conditioned to remain on the positive side. According to the main theorem of [25], the scaling limit of this conditioned discrete tree when $n\to\infty$ leads to the measure $\overline{\mathbb N}_0^{(1)}$ discussed above. The convergence here, and the precise form of the scaling transformation, are as in Theorem 2 of [18], which discusses scaling limits for unconditioned discrete snakes.
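As a reading aid only (not code from [25] or from this paper), here is a toy rejection sampler for the discrete model just described, specialized to the critical geometric offspring distribution and to ±1 displacements; the value of `n`, the offspring law and the acceptance loop are all our own simplifying choices.

```python
import numpy as np

def conditioned_spatial_tree(n, rng, max_tries=100_000):
    """Critical geometric Galton-Watson tree conditioned (by rejection) on having exactly
    n vertices, with +-1 random-walk labels along the edges, conditioned (again by
    rejection) on all labels staying nonnegative.  Root label is 0, parent[0] = -1."""
    for _ in range(max_tries):
        parent, label, stack = [-1], [0], [0]
        while stack and len(parent) <= n:
            v = stack.pop()
            for _ in range(rng.geometric(0.5) - 1):      # offspring law: P(k) = 2^{-(k+1)}
                parent.append(v)
                label.append(label[v] + int(rng.choice((-1, 1))))
                stack.append(len(parent) - 1)
        if len(parent) == n and not stack and min(label) >= 0:
            return np.array(parent), np.array(label)
    raise RuntimeError("no accepted sample; increase max_tries")

rng = np.random.default_rng(3)
print(conditioned_spatial_tree(8, rng))
```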

Let us now describe the other contributions of this paper. Although the preceding theorems have been stated for the measure $\mathbb N_0^{(1)}$, a more fundamental object is the excursion measure $\mathbb N_0$ of the Brownian snake (see e.g. [24]). Roughly speaking, $\mathbb N_0$ is obtained by the same construction as above, but instead of considering a normalized Brownian excursion, we now let $\mathbf e$ be distributed according to the (infinite) Itô measure of Brownian excursions. If $\sigma(\mathbf e)$ denotes the duration of excursion $\mathbf e$, we have $\mathbb N_0^{(1)} = \mathbb N_0(\cdot\mid\sigma=1)$. It turns out that many calculations are more tractable under the infinite measure $\mathbb N_0$ than under $\mathbb N_0^{(1)}$. For this reason, both Theorems 1.1 and 1.2 are proved in Section 3 as consequences of Theorem 3.1, which deals with $\mathbb N_0$. Motivated by Theorem 3.1 we introduce another infinite measure denoted by $\overline{\mathbb N}_0$, which should be interpreted as $\mathbb N_0$ conditioned on the event $\{\mathcal R\subset[0,\infty[\}$, even though the conditioning requires some care as we are dealing with infinite measures. In the same way as for unconditioned measures, we have $\overline{\mathbb N}_0^{(1)} = \overline{\mathbb N}_0(\cdot\mid\sigma=1)$. Another motivation for considering the measure $\overline{\mathbb N}_0$ comes from connections with superprocesses: analogously to Chapter IV of [24] in the unconditioned case, $\overline{\mathbb N}_0$ could be used to define and to analyze a one-dimensional super-Brownian motion started from the Dirac measure $\delta_0$ and conditioned never to charge the negative half-line.

In Section 4, we present a different approach that leads to the same limiting measures. If $H(\mathbf e)$ stands for the height of excursion $\mathbf e$, we consider for every $h>0$ the measure $\mathbb N_0^h := \mathbb N_0(\cdot\mid H=h)$. In the above construction this amounts to replacing the normalized excursion $\mathbf e$ by a Brownian excursion with height $h$. By using a famous decomposition theorem of Williams, we can then analyze the behavior of the measure $\mathbb N_0^h$ conditioned on the event that the range does not intersect $]-\infty,-\varepsilon[$ and show that it has a limit denoted by $\overline{\mathbb N}_0^h$ when $\varepsilon\to0$. The method also provides information about the Brownian tree under $\overline{\mathbb N}_0^h$: this Brownian tree consists of a spine whose distribution is absolutely continuous with respect to that of the nine-dimensional Bessel process, and as usual a Poisson collection of subtrees originating from the spine, which are Brownian snake excursions conditioned not to hit the negative half-line. The connection with the measures $\overline{\mathbb N}_0^{(1)}$ and $\overline{\mathbb N}_0$ is made by proving that $\overline{\mathbb N}_0^h = \overline{\mathbb N}_0(\cdot\mid H=h)$. Several arguments in this section have been inspired by Abraham and Werner's paper [1]. It should also be noted that a discrete version of the nine-dimensional Bessel process already appears in the paper [9] by Chassaing and Durhuus.

At the end of Section 4, we also discuss the limiting behavior of the measures $\overline{\mathbb N}_0^h$ as $h\to\infty$. This leads to a probability measure that should be viewed as the law of an infinite Brownian snake excursion conditioned to stay positive. We again get a description of the Brownian tree coded by this limiting measure in terms of a spine and conditioned Brownian snake excursions originating from this spine. Moreover, the description is simpler in the sense that the spine is exactly distributed as a nine-dimensional Bessel process started at the origin.

Section 5 gives an explicit formula for the finite-dimensional marginal distributions of the Brownian tree under $\overline{\mathbb N}_0$, that is for
$$\overline{\mathbb N}_0\Bigl(\int_{]0,\sigma[^p} ds_1\cdots ds_p\ F(W_{s_1},\ldots,W_{s_p})\Bigr)$$
where $p\ge1$ is an integer and $F$ is a symmetric nonnegative measurable function on $\mathcal W^p$. In a way similar to the corresponding result for the unconditioned Brownian snake (see (1) below), this formula involves combining the branching structure of certain discrete trees with spatial displacements. Here however, because of the conditioning, the spatial displacements turn out to be given by nine-dimensional Bessel processes rather than linear Brownian motions. In the same way as the finite-dimensional marginal distributions of the CRT can be derived from the analogous formula under the Itô measure (see Chapter III of [24]), one might hope to derive the expression of the finite-dimensional marginals under $\overline{\mathbb N}_0^{(1)}$ from the case of $\overline{\mathbb N}_0$. This idea apparently leads to intractable calculations, but we still expect Theorem 5.1 to have useful applications in future work about conditioned trees.

Basic facts about the Brownian snake are recalled in Section 2, which also establishes a few important preliminary results, some of which are of independent interest. In particular, we state and prove a general version of the invariance property of N0 under re-rooting (Theorem 2.3). This result is clearly related to the invariance of the CRT under uniform re-rooting, which was observed by Aldous [4] (and generalized to Lévy trees in Proposition 4.8 of [15]). An equivalent form of Theorem 2.3 already appears as Proposition 4.9 of [27]: see the discussion after the statement of this theorem in Subsection 2.3.

2. Preliminaries

In this section, we recall the basic facts about the Brownian snake that we will use later, and we also establish a few important preliminary results. We refer to [24] for a more detailed presentation of the Brownian snake and its connections with partial differential equations. In the first four subsections below, we deal with the d-dimensional Brownian snake since the proofs are not more difficult in that case, and the results may have other applications.

2.1. The Brownian snake

The (d-dimensional) Brownian snake is a Markov process taking values in the space $\mathcal W$ of finite paths in $\mathbb R^d$. Here a finite path is simply a continuous mapping $\mathrm w:[0,\zeta]\to\mathbb R^d$, where $\zeta=\zeta_{(\mathrm w)}$ is a nonnegative real number called the lifetime of w. The set $\mathcal W$ is a Polish space when equipped with the distance
$$d(\mathrm w,\mathrm w') = \bigl|\zeta_{(\mathrm w)}-\zeta_{(\mathrm w')}\bigr| + \sup_{t\ge0}\bigl|\mathrm w(t\wedge\zeta_{(\mathrm w)})-\mathrm w'(t\wedge\zeta_{(\mathrm w')})\bigr|.$$
The endpoint (or tip) of the path w is denoted by $\widehat{\mathrm w}$. The range of w is denoted by $\mathrm w[0,\zeta_{(\mathrm w)}]$.
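For paths stored as sampled arrays, this distance can be written down directly; the sketch below (our own illustration, assuming a fixed sampling step `dt` and numpy) freezes each path at its tip beyond its lifetime, exactly as in $\mathrm w(t\wedge\zeta_{(\mathrm w)})$.

```python
import numpy as np

def path_metric(w1, w2, dt):
    """d(w, w') = |ζ(w) - ζ(w')| + sup_t |w(t ∧ ζ(w)) - w'(t ∧ ζ(w'))|
    for finite paths sampled every dt (w[k] is the position at time k * dt)."""
    zeta1, zeta2 = (len(w1) - 1) * dt, (len(w2) - 1) * dt
    n = max(len(w1), len(w2))
    w1f = np.concatenate([w1, np.full(n - len(w1), w1[-1])])   # frozen at the tip
    w2f = np.concatenate([w2, np.full(n - len(w2), w2[-1])])
    return abs(zeta1 - zeta2) + float(np.max(np.abs(w1f - w2f)))

w = np.array([0.0, 0.1, 0.3])     # lifetime 0.02
wp = np.array([0.0, 0.2])         # lifetime 0.01
print(path_metric(w, wp, 0.01))
```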

In this work, it will be convenient to use the canonical space $\Omega := C(\mathbb R_+,\mathcal W)$ of continuous functions from $\mathbb R_+$ into $\mathcal W$, which is equipped with the topology of uniform convergence on every compact subset of $\mathbb R_+$. The canonical process on $\Omega$ is then denoted by
$$W_s(\omega) = \omega(s),\qquad \omega\in\Omega,$$
and we write $\zeta_s = \zeta_{(W_s)}$ for the lifetime of $W_s$.

Let $\mathrm w\in\mathcal W$. The law of the Brownian snake started from w is the probability measure $P_{\mathrm w}$ on $\Omega$ which can be characterized as follows. First, the process $(\zeta_s)_{s\ge0}$ is under $P_{\mathrm w}$ a reflected Brownian motion in $[0,\infty[$ started from $\zeta_{(\mathrm w)}$. Secondly, the conditional distribution of $(W_s)_{s\ge0}$ knowing $(\zeta_s)_{s\ge0}$, which is denoted by $\Theta^\zeta_{\mathrm w}$, is characterized by the following properties:

(i) $W_0=\mathrm w$, $\Theta^\zeta_{\mathrm w}$ a.s.

(ii) The process $(W_s)_{s\ge0}$ is time-inhomogeneous Markov under $\Theta^\zeta_{\mathrm w}$. Moreover, if $0\le s\le s'$,

$W_{s'}(t)=W_s(t)$ for every $t\le m(s,s'):=\inf_{[s,s']}\zeta_r$, $\Theta^\zeta_{\mathrm w}$ a.s.;

$(W_{s'}(m(s,s')+t)-W_{s'}(m(s,s')))_{0\le t\le\zeta_{s'}-m(s,s')}$ is independent of $W_s$ and distributed as a d-dimensional Brownian motion started at 0 under $\Theta^\zeta_{\mathrm w}$.

Informally, the value $W_s$ of the Brownian snake at time $s$ is a random path with a random lifetime $\zeta_s$ evolving like reflecting Brownian motion in $[0,\infty[$. When $\zeta_s$ decreases, the path is erased from its tip, and when $\zeta_s$ increases, the path is extended by adding "little pieces" of Brownian paths at its tip.
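A discrete caricature of this dynamic (an illustration under our own simplifying conventions, not a construction used in the paper) keeps the current path as a list, pops the tip when the lifetime goes down and appends an independent Gaussian increment when it goes up:

```python
import numpy as np

def toy_snake(n_steps, dt, rng):
    """Toy Brownian-snake dynamics: the lifetime performs a reflected simple random walk
    (one sample point per step); the path is erased from its tip when the lifetime
    decreases and is extended by an independent N(0, dt) step when it increases.
    Returns the endpoint (W-hat) process."""
    path = [0.0]                                  # the current finite path, started at 0
    tips = []
    for _ in range(n_steps):
        if len(path) == 1 or rng.random() < 0.5:  # reflection of the lifetime at 0
            path.append(path[-1] + np.sqrt(dt) * rng.standard_normal())
        else:
            path.pop()                            # erase the path from its tip
        tips.append(path[-1])
    return np.array(tips)

rng = np.random.default_rng(1)
print(toy_snake(10, 0.01, rng))
```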

Excursion measures play a fundamental role throughout this work. We denote by $n(d\mathbf e)$ the Itô measure of positive Brownian excursions. This is a σ-finite measure on the space $C(\mathbb R_+,\mathbb R_+)$ of continuous functions from $\mathbb R_+$ into $\mathbb R_+$. We write
$$\sigma(\mathbf e) = \inf\{s>0:\ \mathbf e(s)=0\}$$
for the duration of excursion $\mathbf e$. For $s>0$, $n_{(s)}$ will denote the conditioned measure $n(\cdot\mid\sigma=s)$. Our normalization of the excursion measure is fixed by the relation
$$n = \int_0^\infty \frac{ds}{2\sqrt{2\pi s^3}}\ n_{(s)}.$$
If $x\in\mathbb R^d$, the excursion measure $\mathbb N_x$ of the Brownian snake from $x$ is then defined by
$$\mathbb N_x = \int_{C(\mathbb R_+,\mathbb R_+)} n(d\mathbf e)\,\Theta^{\mathbf e}_{\bar x},$$
where $\bar x$ denotes the trivial element of $\mathcal W$ with lifetime 0 and initial point $x$. Alternatively, we can view $\mathbb N_x$ as the excursion measure of the Brownian snake from the regular point $\bar x$. With a slight abuse of notation we will also write $\sigma(\omega)=\inf\{s>0:\ \zeta_s(\omega)=0\}$ for $\omega\in\Omega$. We can then consider the conditioned measures
$$\mathbb N_x^{(s)} = \mathbb N_x(\cdot\mid\sigma=s) = \int_{C(\mathbb R_+,\mathbb R_+)} n_{(s)}(d\mathbf e)\,\Theta^{\mathbf e}_{\bar x}.$$
Note that in contrast to the introduction we now view $\mathbb N_x^{(s)}$ as a measure on $\Omega$ rather than on $C([0,s],\mathcal W)$. The range $\mathcal R=\mathcal R(\omega)$ is defined by $\mathcal R=\{\widehat W_s:\ s\ge0\}$.

Lemma 2.1. Suppose that $d=1$ and let $x>0$.
(i) We have
$$\mathbb N_x\bigl(\mathcal R\,\cap\,]-\infty,0]\neq\varnothing\bigr) = \frac{3}{2x^2}.$$
(ii) For every $\lambda>0$,
$$\mathbb N_x\Bigl(1-\mathbf 1_{\{\mathcal R\,\cap\,]-\infty,0]\,=\,\varnothing\}}\,\mathrm e^{-\lambda\sigma}\Bigr) = \sqrt{\frac{\lambda}{2}}\,\Bigl(3\coth^2\bigl(2^{1/4}\lambda^{1/4}x\bigr)-2\Bigr),$$
where $\coth(y)=\cosh(y)/\sinh(y)$.

Proof. (i) According to Section VI.1 of [24], the function $u(x)=\mathbb N_x(\mathcal R\,\cap\,]-\infty,0]\neq\varnothing)$ solves $u''=4u^2$ in $]0,\infty[$, with boundary condition $u(0+)=+\infty$. The desired result follows.

(ii) See Lemma 7 in [12]. □
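Both formulas can be checked against the differential equations quoted in the proof with a few lines of symbolic computation (a sanity check added here, assuming sympy is available; it is of course not part of the original argument).

```python
import sympy as sp

x, lam = sp.symbols('x lambda', positive=True)

# (i) u(x) = 3 / (2 x^2) solves u'' = 4 u^2 on ]0, oo[ with u(0+) = +oo and u(oo) = 0.
u = 3 / (2 * x**2)
print(sp.simplify(sp.diff(u, x, 2) - 4 * u**2))            # 0

# (ii) w(x) = sqrt(lambda/2) * (3 coth(2^(1/4) lambda^(1/4) x)^2 - 2)
#      solves w'' = 4 w^2 - 2 lambda with w(0+) = +oo and w(oo) = sqrt(lambda/2).
w = sp.sqrt(lam / 2) * (3 * sp.coth(2**sp.Rational(1, 4) * lam**sp.Rational(1, 4) * x)**2 - 2)
resid = sp.diff(w, x, 2) - (4 * w**2 - 2 * lam)
print([sp.N(resid.subs({x: xv, lam: 2}), 8) for xv in (0.3, 1.0, 2.5)])   # ~ 0
```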

2.2. Finite-dimensional marginal distributions

In this subsection we state a result giving information about the joint distribution of the values of the Brownian snake at a finite number of times and its range. In order to state this result, we need some formalism for trees. We first introduce the set of labels
$$\mathcal U = \bigcup_{n=0}^{\infty}\{1,2\}^n,$$
where by convention $\{1,2\}^0=\{\varnothing\}$. An element of $\mathcal U$ is thus a sequence $u=u_1\ldots u_n$ of elements of $\{1,2\}$, and we set $|u|=n$, so that $|u|$ represents the "generation" of $u$. In particular, $|\varnothing|=0$. The mapping $\pi:\mathcal U\setminus\{\varnothing\}\to\mathcal U$ is defined by $\pi(u_1\ldots u_n)=u_1\ldots u_{n-1}$ ($\pi(u)$ is the "father" of $u$). In particular, if $k=|u|$, we have $\pi^k(u)=\varnothing$.

A binary (plane) tree $\mathcal T$ is a finite subset of $\mathcal U$ such that:

(i) $\varnothing\in\mathcal T$.

(ii) $u\in\mathcal T\setminus\{\varnothing\}\ \Rightarrow\ \pi(u)\in\mathcal T$.

(iii) For every $u\in\mathcal T$, either $u1\in\mathcal T$ and $u2\in\mathcal T$, or $u1\notin\mathcal T$ and $u2\notin\mathcal T$ ($u$ is called a leaf in the second case).

We denote by $\mathbb A$ the set of all binary trees. A marked tree is then a pair $(\mathcal T,(h_u)_{u\in\mathcal T})$ where $\mathcal T\in\mathbb A$ and $h_u\ge0$ for every $u\in\mathcal T$. We denote by $\mathbb T$ the space of all marked trees. In this work it will be convenient to view marked trees as R-trees in the sense of [15] or [16] (see also Section 1 above). This can be achieved through the following explicit construction. Let $\theta=(\mathcal T,(h_u)_{u\in\mathcal T})$ be a marked tree and let $\mathbb R^{\mathcal T}$ be the vector space of all mappings from $\mathcal T$ into $\mathbb R$. Write $(\varepsilon_u,\,u\in\mathcal T)$ for the canonical basis of $\mathbb R^{\mathcal T}$. Then consider the mapping
$$p_\theta:\ \bigcup_{u\in\mathcal T}\{u\}\times[0,h_u]\ \longrightarrow\ \mathbb R^{\mathcal T}$$
defined by
$$p_\theta(u,\ell) = \sum_{k=1}^{|u|} h_{\pi^k(u)}\,\varepsilon_{\pi^k(u)} + \ell\,\varepsilon_u.$$
As a set, the R-tree associated with $\theta$ is the range $\tilde\theta$ of $p_\theta$. Note that this is a connected union of line segments in $\mathbb R^{\mathcal T}$. It is equipped with the distance $d_\theta$ such that $d_\theta(a,b)$ is the length of the shortest path in $\tilde\theta$ going from $a$ to $b$. By definition, the range of this path is the segment between $a$ and $b$ and is denoted by $\llbracket a,b\rrbracket$. Finally, we will write $L_\theta$ for (one-dimensional) Lebesgue measure on $\tilde\theta$.

By definition, leaves of $\tilde\theta$ are points of the form $p_\theta(u,h_u)$ where $u$ is a leaf of $\mathcal T$. Points of the form $p_\theta(u,h_u)$ when $u$ is not a leaf are called nodes of $\tilde\theta$. We write $\mathcal L(\theta)$ for the set of leaves of $\tilde\theta$, and $I(\theta)$ for the set of its nodes. The root of $\tilde\theta$ is just the point $0=p_\theta(\varnothing,0)$.

We will consider Brownian motion indexed by $\tilde\theta$, with initial point $x\in\mathbb R^d$. Formally, we may consider, under the probability measure $Q^\theta_x$, a collection $(\xi_u)_{u\in\mathcal T}$ of independent d-dimensional Brownian motions all started at 0 except $\xi_\varnothing$ which starts at $x$, and define a continuous process $(V_a,\,a\in\tilde\theta)$ by setting
$$V_{p_\theta(u,\ell)} = \sum_{k=1}^{|u|}\xi_{\pi^k(u)}\bigl(h_{\pi^k(u)}\bigr) + \xi_u(\ell),$$
for every $u\in\mathcal T$ and $\ell\in[0,h_u]$. Finally, with every leaf $a$ of $\tilde\theta$ we associate a stopped path $\mathrm w^{(a)}$ with lifetime $d_\theta(0,a)$: for every $t\in[0,d_\theta(0,a)]$, $\mathrm w^{(a)}(t)=V_{r(a,t)}$ where $r(a,t)$ is the unique element of $\llbracket0,a\rrbracket$ such that $d_\theta(0,r(a,t))=t$.

For every integer $p\ge1$, denote by $\mathbb A_p$ the set of all binary trees with $p$ leaves, and by $\mathbb T_p$ the corresponding set of marked trees. The uniform measure $\Lambda_p$ on $\mathbb T_p$ is defined by
$$\int_{\mathbb T_p}\Lambda_p(d\theta)\,F(\theta) = \sum_{\mathcal T\in\mathbb A_p}\int \prod_{v\in\mathcal T} dh_v\ F\bigl(\mathcal T,(h_v)_{v\in\mathcal T}\bigr).$$
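To make the objects $Q^\theta_x$ and $\mathrm w^{(a)}$ concrete, here is a small simulation sketch (our own discretized illustration, with `theta` stored as a dict from label tuples over {1, 2} to marks; none of these conventions come from the paper): it runs an independent Brownian piece along each branch and concatenates the pieces along the ancestral line of each leaf.

```python
import numpy as np

def sample_tree_paths(theta, x, dt, rng):
    """theta: marked binary tree as {label tuple: mark h_u}, with () the root.
    Runs independent Brownian pieces along the branches, started at x at the root, and
    returns for each leaf u the discretized stopped path w^(u) along its ancestral line."""
    branch = {}
    for u in sorted(theta, key=len):                     # parents before children
        n = max(1, int(round(theta[u] / dt)))
        start = x if u == () else branch[u[:-1]][-1]     # value at the parent node
        branch[u] = start + np.concatenate(
            [[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])
    leaves = [u for u in theta if u + (1,) not in theta]
    line = lambda u: [u[:k] for k in range(len(u) + 1)]  # labels on the ancestral line
    return {u: np.concatenate([branch[v] if v == () else branch[v][1:] for v in line(u)])
            for u in leaves}

rng = np.random.default_rng(2)
theta = {(): 1.0, (1,): 0.5, (2,): 0.3}                  # one node and two leaves (p = 2)
paths = sample_tree_paths(theta, 0.0, 0.01, rng)
print({u: (len(w), round(float(w[-1]), 3)) for u, w in paths.items()})
```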

With this notation, Proposition IV.2 of [24] states that, for every integer $p\ge1$ and every symmetric nonnegative measurable function $F$ on $\mathcal W^p$,
$$\mathbb N_x\Bigl(\int_{]0,\sigma[^p} ds_1\ldots ds_p\ F(W_{s_1},\ldots,W_{s_p})\Bigr) = 2^{p-1}p!\int\Lambda_p(d\theta)\,Q^\theta_x\Bigl(F\bigl(\bigl(\mathrm w^{(a)}\bigr)_{a\in\mathcal L(\theta)}\bigr)\Bigr). \tag{1}$$

We will need a stronger result concerning the case where the function $F$ also depends on the range $\mathcal R$ of the Brownian snake. To state this result, denote by $\mathcal K$ the space of all compact subsets of $\mathbb R^d$, which is equipped with the Hausdorff metric and the associated Borel σ-field. Suppose that under the probability measure $Q^\theta_x$ (for each choice of $\theta$ in $\mathbb T$), in addition to the process $(V_a,\,a\in\tilde\theta)$, we are also given an independent Poisson point measure on $\tilde\theta\times\Omega$, denoted by
$$\sum_{i\in I}\delta_{(a_i,\omega^i)},$$
with intensity $4L_\theta(da)\otimes\mathbb N_0(d\omega)$.

Theorem 2.2. For every nonnegative measurable function $F$ on $\mathcal W^p\times\mathcal K\times\mathbb R_+$, which is symmetric in the first $p$ variables, we have
$$\mathbb N_x\Bigl(\int_{]0,\sigma[^p} ds_1\cdots ds_p\ F(W_{s_1},\ldots,W_{s_p},\mathcal R,\sigma)\Bigr) = 2^{p-1}p!\int\Lambda_p(d\theta)\,Q^\theta_x\biggl(F\Bigl(\bigl(\mathrm w^{(a)}\bigr)_{a\in\mathcal L(\theta)},\ \mathrm{cl}\Bigl(\bigcup_{i\in I}\bigl(V_{a_i}+\mathcal R(\omega^i)\bigr)\Bigr),\ \sum_{i\in I}\sigma(\omega^i)\Bigr)\biggr),$$
where $\mathrm{cl}(A)$ denotes the closure of the set $A$.

Remark. It is immediate to see that
$$\mathrm{cl}\Bigl(\bigcup_{i\in I}\bigl(V_{a_i}+\mathcal R(\omega^i)\bigr)\Bigr) = \bigcup_{a\in\mathcal L(\theta)}\mathrm w^{(a)}\bigl[0,\zeta_{(\mathrm w^{(a)})}\bigr]\ \cup\ \bigcup_{i\in I}\bigl(V_{a_i}+\mathcal R(\omega^i)\bigr),\qquad Q^\theta_x\ \text{a.e.}$$

Proof. Consider first the case $p=1$. Let $F_1$ be a nonnegative measurable function on $\mathcal W$, and let $F_2$ and $F_3$ be two nonnegative measurable functions on $\Omega$. By applying the Markov property under $\mathbb N_x$ at time $s$, then using the time-reversal invariance of $\mathbb N_x$ (which is easy from the analogous property for the Itô measure $n(d\mathbf e)$), and finally using the Markov property at time $s$ once again, we get
$$\mathbb N_x\Bigl(\int_0^\sigma ds\ F_1(W_s)\,F_2\bigl((W_{(s-r)^+})_{r\ge0}\bigr)\,F_3\bigl((W_{s+r})_{r\ge0}\bigr)\Bigr)$$
$$= \mathbb N_x\Bigl(\int_0^\sigma ds\ F_1(W_s)\,F_2\bigl((W_{(s-r)^+})_{r\ge0}\bigr)\,E_{W_s}\bigl[F_3\bigl((W_{r\wedge\sigma})_{r\ge0}\bigr)\bigr]\Bigr)$$
$$= \mathbb N_x\Bigl(\int_0^\sigma ds\ F_1(W_s)\,F_2\bigl((W_{s+r})_{r\ge0}\bigr)\,E_{W_s}\bigl[F_3\bigl((W_{r\wedge\sigma})_{r\ge0}\bigr)\bigr]\Bigr)$$
$$= \mathbb N_x\Bigl(\int_0^\sigma ds\ F_1(W_s)\,E_{W_s}\bigl[F_2\bigl((W_{r\wedge\sigma})_{r\ge0}\bigr)\bigr]\,E_{W_s}\bigl[F_3\bigl((W_{r\wedge\sigma})_{r\ge0}\bigr)\bigr]\Bigr).$$

We then use the case $p=1$ of (1) to see that the last quantity is equal to
$$\int_0^\infty dt\int P^t_x(d\mathrm w)\ F_1(\mathrm w)\,E_{\mathrm w}\bigl[F_2\bigl((W_{r\wedge\sigma})_{r\ge0}\bigr)\bigr]\,E_{\mathrm w}\bigl[F_3\bigl((W_{r\wedge\sigma})_{r\ge0}\bigr)\bigr],$$
where $P^t_x$ denotes the law of Brownian motion started at $x$ and stopped at time $t$ (this law is viewed as a probability measure on $\mathcal W$). Now if we specialize to the case where $F_2$ is a function of the form $F_2(\omega)=G_2(\{\widehat W_s(\omega):\ s\ge0\},\sigma)$, an immediate application of Lemma V.2 in [24] shows that
$$E_{\mathrm w}\bigl[F_2\bigl((W_{r\wedge\sigma})_{r\ge0}\bigr)\bigr] = E\Bigl[G_2\Bigl(\mathrm{cl}\Bigl(\bigcup_{j\in J}\bigl(\mathrm w(t_j)+\mathcal R(\omega^j)\bigr)\Bigr),\ \sum_{j\in J}\sigma(\omega^j)\Bigr)\Bigr],$$
where $\sum_{j\in J}\delta_{(t_j,\omega^j)}$ is a Poisson point measure on $[0,\zeta_{(\mathrm w)}]\times\Omega$ with intensity $2\,dt\,\mathbb N_0(d\omega)$. Applying the same observation to $F_3$, we easily get the case $p=1$ of the theorem.

The general case can be derived along similar lines by using Theorem 3 in [22]. Roughly speaking, the case $p=1$ amounts to combining Bismut's decomposition of the Brownian excursion (Lemma 1 in [22]) with the spatial displacements of the Brownian snake. For general $p$, the second assertion of Theorem 3 in [22] provides the analogue of Bismut's decomposition, which when combined with spatial displacements leads to the statement of Theorem 2.2. Details are left to the reader. □

2.3. The re-rooting theorem

In this subsection, we state and prove an important invariance property of the Brownian snake under $\mathbb N_0$, which plays a major role in Section 3 below. We first need to introduce some notation. For every $s,r\in[0,\sigma]$, we set
$$s\oplus r = \begin{cases} s+r & \text{if } s+r\le\sigma,\\ s+r-\sigma & \text{if } s+r>\sigma.\end{cases}$$
We also use the following convenient notation for closed intervals: if $u,v\in\mathbb R$, $[u,v]=[v,u]=[u\wedge v,u\vee v]$.

Let $s\in[0,\sigma[$. In order to define the re-rooted snake $W^{[s]}$, we first set
$$\zeta^{[s]}_r = \zeta_s + \zeta_{s\oplus r} - 2\inf_{u\in[s,s\oplus r]}\zeta_u,$$
if $r\in[0,\sigma]$, and $\zeta^{[s]}_r=0$ if $r>\sigma$. We also want to define the stopped paths $W^{[s]}_r$, in such a way that
$$\widehat W^{[s]}_r = \widehat W_{s\oplus r} - \widehat W_s,$$
if $r\in[0,\sigma]$, and $\widehat W^{[s]}_r=0$ if $r>\sigma$. To this end, we may notice that $\widehat W^{[s]}$ satisfies the property
$$\widehat W^{[s]}_r = \widehat W^{[s]}_{r'} \quad\text{if}\quad \zeta^{[s]}_r = \zeta^{[s]}_{r'} = \inf_{u\in[r,r']}\zeta^{[s]}_u,$$
and so in the terminology of [26], $(W^{[s]}_r)_{0\le r\le\sigma}$ is uniquely determined as the snake whose tour is $(\zeta^{[s]}_r,\widehat W^{[s]}_r)_{0\le r\le\sigma}$ (see the homeomorphism theorem of [26]). We have the explicit formula, for $r\ge0$ and $0\le t\le\zeta^{[s]}_r$,
$$W^{[s]}_r(t) = \widehat W^{[s]}_{\sup\{u\le r:\ \zeta^{[s]}_u=t\}}. \tag{2}$$
As explained in the introduction, $(\zeta^{[s]}_r)_{r\ge0}$ codes the same R-tree as the one coded by $(\zeta_r)_{r\ge0}$, but with a new root which is the vertex originally labeled by $s$, and $W^{[s]}_r$ gives the spatial displacements along the line segment from the (new) root to the vertex coded by $r$ (in the coding given by $\zeta^{[s]}$).
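In discretized form (our own illustration, not notation from the paper), the re-rooting acts on the tour $(\zeta_r,\widehat W_r)$ by a cyclic shift of the time index combined with the shifts written above; note that the infimum is taken over the ordinary interval between $s$ and $s\oplus r$, not over the cyclic one.

```python
import numpy as np

def reroot_tour(zeta, what, k):
    """Re-root a discretized tour (zeta, what) at grid index k (both arrays should start
    and end at 0): zeta^[s]_r = zeta_s + zeta_{s+r (mod sigma)} - 2 min zeta over the
    interval between the two times, and what^[s]_r = what_{s+r (mod sigma)} - what_s."""
    n = len(zeta)
    fwd_min = np.minimum.accumulate(zeta[k:])            # min over [k, k + j], no wrap
    bwd_min = np.minimum.accumulate(zeta[k::-1])[::-1]   # min over [i, k] for i <= k
    mins = np.concatenate([fwd_min, bwd_min[:k]])        # ordered as r = 0, 1, ..., n - 1
    idx = (k + np.arange(n)) % n                         # the shifted times s ⊕ r
    return zeta[k] + zeta[idx] - 2.0 * mins, what[idx] - what[k]

zeta = np.array([0.0, 1.0, 2.0, 1.0, 1.5, 0.5, 0.0])
what = np.array([0.0, 0.3, 0.1, 0.4, 0.2, -0.1, 0.0])
print(reroot_tour(zeta, what, 3))
```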

Theorem 2.3. For every nonnegative measurable function $F$ on $\mathbb R_+\times\Omega$,
$$\mathbb N_0\Bigl(\int_0^\sigma ds\ F\bigl(s,W^{[s]}\bigr)\Bigr) = \mathbb N_0\Bigl(\int_0^\sigma ds\ F(s,W)\Bigr).$$

Remark. For every $s\in[0,\sigma[$, the duration of the re-rooted snake excursion $W^{[s]}$ is the same as that of the original one. Using this simple observation, and replacing $F$ by $\mathbf 1_{\{1-\varepsilon<\sigma\le1\}}F$, we can easily get a version of Theorem 2.3 for the normalized Brownian snake excursion. Precisely, the formula of Theorem 2.3 still holds if $\mathbb N_0$ is replaced by $\mathbb N_0^{(1)}$ (or by $\mathbb N_0^{(r)}$ for any $r>0$). Via a continuity argument, it follows that, for every $s\in[0,1[$, and every nonnegative measurable function $G$ on $\Omega$,
$$\mathbb N_0^{(1)}\bigl(G\bigl(W^{[s]}\bigr)\bigr) = \mathbb N_0^{(1)}\bigl(G(W)\bigr). \tag{3}$$

The identity (3) appears as Proposition 4.9 of [27], which is proved via discrete approximations. Note that conversely, it would be easy to derive Theorem 2.3 from (3). We have chosen to give an independent proof of Theorem 2.3 because this result plays a major role in the present work, and also because the proof below fits in better with our general strategy, which is to deal first with unnormalized excursion measures before conditioning with respect to the duration.


Proof. By (2), $W^{[s]}$ can be written $\mathbb N_0$ a.e. as $\Phi(\zeta^{[s]},\widehat W^{[s]})$, where the deterministic function $\Phi$ does not depend on $s$. Also note that when $s=0$, $W=W^{[0]}=\Phi(\zeta,\widehat W)$, $\mathbb N_0$ a.e. In view of these considerations, it will be sufficient to treat the case when
$$F(s,W) = F_1(s,\zeta)\,F_2\bigl(s,\widehat W\bigr),$$
where $F_1$ and $F_2$ are nonnegative measurable functions defined respectively on $\mathbb R_+\times C(\mathbb R_+,\mathbb R_+)$ and on $\mathbb R_+\times C(\mathbb R_+,\mathbb R^d)$. We first deal with the special case $F_2=1$.

For $s\in[0,\sigma[$ and $r\ge0$, set
$$\zeta^{1,s}_r = \zeta_{(s-r)^+} - \zeta_s, \qquad \zeta^{2,s}_r = \zeta_{s+r} - \zeta_s.$$
Let $G$ be a nonnegative measurable function on $\mathbb R_+\times C(\mathbb R_+,\mathbb R)\times\mathbb R_+\times C(\mathbb R_+,\mathbb R)$. From the Bismut decomposition of the Brownian excursion (see e.g. Lemma 1 in [22]), we have
$$\mathbb N_0\Bigl(\int_0^\sigma ds\ G\bigl(s,(\zeta^{1,s}_r)_{r\ge0},\ \sigma-s,(\zeta^{2,s}_r)_{r\ge0}\bigr)\Bigr) = \int_0^\infty da\ E\Bigl[G\bigl(T_a,(B_{r\wedge T_a})_{r\ge0},\ T'_a,(B'_{r\wedge T'_a})_{r\ge0}\bigr)\Bigr],$$
where $B$ and $B'$ are two independent linear Brownian motions started at 0, and
$$T_a = \inf\{r\ge0:\ B_r=-a\}, \qquad T'_a = \inf\{r\ge0:\ B'_r=-a\}.$$

Now observe that
$$\zeta^{[s]}_r = \zeta^{2,s}_r - 2\inf_{0\le u\le r}\zeta^{2,s}_u \quad\text{if } 0\le r\le\sigma-s, \qquad \zeta^{[s]}_{\sigma-r} = \zeta^{1,s}_r - 2\inf_{0\le u\le r}\zeta^{1,s}_u \quad\text{if } 0\le r\le s,$$
and note that $R_t := B_t - 2\inf_{r\le t}B_r$ and $R'_t := B'_t - 2\inf_{r\le t}B'_r$ are two independent three-dimensional Bessel processes, for which
$$L_a := \sup\{t\ge0:\ R_t=a\} = T_a, \qquad L'_a := \sup\{t\ge0:\ R'_t=a\} = T'_a.$$
(This is Pitman's theorem, see e.g. [28], Theorem VI.3.5.) It follows that
$$\mathbb N_0\Bigl(\int_0^\sigma ds\ G\bigl(\sigma-s,(\zeta^{[s]}_{r\wedge(\sigma-s)})_{r\ge0},\ s,(\zeta^{[s]}_{\sigma-(r\wedge s)})_{r\ge0}\bigr)\Bigr) = \int_0^\infty da\ E\Bigl[G\bigl(L_a,(R_{r\wedge L_a})_{r\ge0},\ L'_a,(R'_{r\wedge L'_a})_{r\ge0}\bigr)\Bigr] = \mathbb N_0\Bigl(\int_0^\sigma ds\ G\bigl(s,(\zeta_{r\wedge s})_{r\ge0},\ \sigma-s,(\zeta_{(\sigma-r)\vee s})_{r\ge0}\bigr)\Bigr),$$
where the last equality is again a consequence of the Bismut decomposition, together with the Williams reversal theorem ([28], Corollary XII.4.4). Changing $s$ into $\sigma-s$ in the last integral gives the desired result when $F_2=1$.

Let us consider the general case. For simplicity we take $d=1$, but the argument can obviously be extended. From the definition of the Brownian snake, we have
$$\mathbb N_0\Bigl(\int_0^\sigma ds\ F_1(s,\zeta)\,F_2\bigl(s,\widehat W\bigr)\Bigr) = \mathbb N_0\Bigl(\int_0^\sigma ds\ F_1(s,\zeta)\,\Theta^\zeta_0\bigl(F_2\bigl(s,\widehat W\bigr)\bigr)\Bigr),$$
and $\widehat W$ is under $\Theta^\zeta_0$ a centered Gaussian process with covariance
$$\mathrm{cov}_{\Theta^\zeta_0}\bigl(\widehat W_s,\widehat W_{s'}\bigr) = \inf_{r\in[s,s']}\zeta_r.$$
We have in particular
$$\mathbb N_0\Bigl(\int_0^\sigma ds\ F_1\bigl(s,\zeta^{[s]}\bigr)\,F_2\bigl(s,\widehat W^{[s]}\bigr)\Bigr) = \mathbb N_0\Bigl(\int_0^\sigma ds\ F_1\bigl(s,\zeta^{[s]}\bigr)\,\Theta^\zeta_0\Bigl(F_2\bigl(s,(\widehat W_{s\oplus r}-\widehat W_s)_{r\ge0}\bigr)\Bigr)\Bigr).$$

Now note that $(\widehat W_{s\oplus r}-\widehat W_s)_{r\ge0}$ is under $\Theta^\zeta_0$ a Gaussian process with covariance
$$\mathrm{cov}\bigl(\widehat W_{s\oplus r}-\widehat W_s,\ \widehat W_{s\oplus r'}-\widehat W_s\bigr) = \inf_{[s\oplus r,\,s\oplus r']}\zeta_u - \inf_{[s\oplus r,\,s]}\zeta_u - \inf_{[s\oplus r',\,s]}\zeta_u + \zeta_s = \inf_{[r,r']}\zeta^{[s]}_u,$$
where the last equality follows from an elementary verification. Hence,

$$\Theta^\zeta_0\Bigl(F_2\bigl(s,(\widehat W_{s\oplus r}-\widehat W_s)_{r\ge0}\bigr)\Bigr) = \Theta^{\zeta^{[s]}}_0\bigl(F_2\bigl(s,\widehat W\bigr)\bigr),$$
and, using the first part of the proof,
$$\mathbb N_0\Bigl(\int_0^\sigma ds\ F_1\bigl(s,\zeta^{[s]}\bigr)\,F_2\bigl(s,\widehat W^{[s]}\bigr)\Bigr) = \mathbb N_0\Bigl(\int_0^\sigma ds\ F_1\bigl(s,\zeta^{[s]}\bigr)\,\Theta^{\zeta^{[s]}}_0\bigl(F_2\bigl(s,\widehat W\bigr)\bigr)\Bigr) = \mathbb N_0\Bigl(\int_0^\sigma ds\ F_1(s,\zeta)\,\Theta^\zeta_0\bigl(F_2\bigl(s,\widehat W\bigr)\bigr)\Bigr) = \mathbb N_0\Bigl(\int_0^\sigma ds\ F_1(s,\zeta)\,F_2\bigl(s,\widehat W\bigr)\Bigr).$$
This completes the proof. □

2.4. The special Markov property

Let $D$ be a domain in $\mathbb R^d$, and fix a point $x\in D$. For every $\mathrm w\in\mathcal W$, we set
$$\tau(\mathrm w) := \inf\{t\ge0:\ \mathrm w(t)\notin D\},$$
where $\inf\varnothing=+\infty$ as usual. The random set
$$\{s\ge0:\ \tau(W_s)<\zeta_s\}$$
is open $\mathbb N_x$ a.e., and can thus be written as a disjoint union of open intervals $]a_i,b_i[$, $i\in I$. It is easy to verify that $\mathbb N_x$ a.e. for every $i\in I$ and every $s\in\,]a_i,b_i[$,
$$\tau(W_s) = \tau(W_{a_i}) = \tau(W_{b_i}) = \zeta_{a_i} = \zeta_{b_i},$$
and moreover the paths $W_s$, $s\in[a_i,b_i]$, coincide up to their exit time from $D$.

For every $i\in I$, we define a random element $W^{(i)}$ of $\Omega$ by setting for every $s\ge0$
$$W^{(i)}_s(t) = W_{(a_i+s)\wedge b_i}(\zeta_{a_i}+t), \qquad \text{for } 0\le t\le \zeta\bigl(W^{(i)}_s\bigr) := \zeta_{(a_i+s)\wedge b_i} - \zeta_{a_i}.$$
Informally, the $W^{(i)}$'s represent the excursions of the Brownian snake outside $D$ (the word "outside" is a bit misleading since these excursions may come back into $D$ even though they start from the boundary of $D$).

Finally, we also need a process that contains the information given by the Brownian snake paths before they exit $D$. We set $W^D_s = W_{\eta^D_s}$, where for every $s\ge0$,
$$\eta^D_s := \inf\Bigl\{r\ge0:\ \int_0^r du\ \mathbf 1_{\{\tau(W_u)\ge\zeta_u\}} > s\Bigr\}.$$
The σ-field $\mathcal E^D$ is by definition generated by the process $W^D$ and by the class of $\mathbb N_x$-negligible subsets of $\Omega$ (the point $x$ is fixed throughout this subsection). The following statement is proved in [23] (Proposition 2.3 and Theorem 2.4).
