www.imstat.org/aihp 2011, Vol. 47, No. 2, 539–558

DOI:10.1214/10-AIHP371

© Association des Publications de l’Institut Henri Poincaré, 2011

An integral test for the transience of a Brownian path with limited local time

Itai Benjamini^a and Nathanaël Berestycki^b

^a Weizmann Institute of Science, Rehovot, Israel. E-mail: itai.benjamini@weizmann.ac.il
^b University of Cambridge, CMS, 3 Wilberforce Road, Cambridge, CB3 0WB, UK. E-mail: N.Berestycki@statslab.cam.ac.uk

Received 24 November 2009; revised 20 April 2010; accepted 23 April 2010

Abstract. We study a one-dimensional Brownian motion conditioned on a self-repelling behaviour. Given a nondecreasing positive function $f(t)$, $t\ge 0$, consider the measures $\mu_t$ obtained by conditioning a Brownian path so that $L_s \le f(s)$ for all $s\le t$, where $L_s$ is the local time spent at the origin by time $s$. It is shown that the measures $\mu_t$ are tight, and that any weak limit of $\mu_t$ as $t\to\infty$ is transient provided that $t^{-3/2}f(t)$ is integrable. We conjecture that this condition is sharp and present a number of open problems.

Résumé. Given a nondecreasing function $f(t)$, $t\ge 0$, consider the measure $\mu_t$ obtained by conditioning a Brownian motion so that $L_s \le f(s)$ for all $s\le t$, where $L_s$ is the local time accumulated at the origin by time $s$. We show that the measures $\mu_t$ are tight, and that any weak limit of $\mu_t$ as $t\to\infty$ is the law of a transient process if $t^{-3/2}f(t)$ is integrable. We conjecture that this condition is also necessary for transience, and propose a number of open questions.

MSC: 60G17; 60J65; 60K37

Keywords: Brownian motion; Conditioning; Local time; Entropic repulsion; Integral test; Transience; Recurrence

1. Introduction

Let $(X_t, t\ge 0)$ be a Brownian motion in $\mathbb{R}^d$. It is well known that $d=2$ is a critical value for the recurrence or transience of $X$. In this paper we show, however, that even in dimension 1 a very small perturbation of the Brownian path may result in the transience of the process. Let $f:[0,\infty)\to[0,\infty)$ be a given nonnegative Borel function such that $f(0)>0$, and consider the event

$$K_t = \big\{L_s \le f(s) \text{ for all } s\le t\big\}.$$

Here $(L_s, s\ge 0)$ is a continuous determination of the local time process of $X$ at the origin $x=0$. Our goal in this paper is to analyse the limiting behaviour of the Wiener measure conditioned on $K_t$, as $t\to\infty$. Since $(L_s, s\ge 0)$ is almost surely nondecreasing, we may and will assume without loss of generality that $f$ is nondecreasing. (Otherwise one may always consider $\hat f(t) = \inf_{s\ge t} f(s)$.) Note that by Brownian scaling, $L_t$ is of order $\sqrt{t}$ for an unconditioned Brownian path. Hence when $f(t)\ll t^{1/2}$, this constraint is of a self-repelling nature, since it forces the Brownian path to spend less time than it would naturally want to at the origin. We will thus assume that $f$ is nondecreasing and that $t^{-1/2}f(t)$ is nonincreasing.
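The $\sqrt{t}$ scaling of the local time at the origin has an elementary discrete counterpart: for a simple random walk, the expected number of visits to 0 by time $2n$ grows like $2\sqrt{n/\pi}$. The following minimal sketch (an illustration of the scaling, not part of the paper's argument; the function name is ours) checks this numerically.

```python
from math import pi, sqrt

def expected_visits_to_zero(n: int) -> float:
    # E[number of visits to 0 of a simple random walk by time 2n]
    #   = sum_{k=0}^{n} P(S_{2k} = 0),
    # with P(S_{2k} = 0) = C(2k, k) / 4^k computed by the iterative
    # ratio (2k-1)/(2k) to avoid huge binomial coefficients.
    p = 1.0      # P(S_0 = 0)
    total = 1.0
    for k in range(1, n + 1):
        p *= (2 * k - 1) / (2 * k)
        total += p
    return total

# Discrete analogue of "L_t is of order sqrt(t)": by time 2n the
# expected number of zero visits behaves like 2 * sqrt(n / pi).
ratio = expected_visits_to_zero(2000) / (2 * sqrt(2000 / pi))
```

For $n=2000$ the ratio is within a fraction of a percent of 1, consistent with the Brownian-scaling heuristic in the text.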

Our main result is that if $f$ is only logarithmically smaller than $t^{1/2}$, then $X$ becomes transient almost surely in the limit $t\to\infty$. Here, a probability measure $P$ on the space $C$ of continuous sample paths is called transient (almost surely) if $P(\lim_{t\to\infty}|X_t| = +\infty)=1$. If $\limsup_{t\to\infty} X_t = +\infty$ and $\liminf_{t\to\infty} X_t = -\infty$ with $P$-probability 1, then $P$ is called recurrent. We equip $C$ with the topology of uniform convergence on compact sets, which turns $C$ into a Polish space, and discuss weak convergence of probability measures on $C$ with respect to this topology.

Theorem 1. Let $W$ denote the Wiener measure on $C$, and let $W_t = W(\cdot\mid K_t)$. Then $\{W_t, t\ge 0\}$ is a tight family. Assume further that
$$\int_1^\infty \frac{f(t)}{t^{3/2}}\,dt < \infty. \tag{1}$$
Then, for any weak subsequential limit $P$ of $W_t$ as $t\to\infty$, $P$ is transient almost surely.

In particular, if $f(t)\sim\sqrt{t}\,(\log t)^{-\gamma}$ with $\gamma\ge 0$, then $P$ is transient as soon as $\gamma>1$. We believe, but have not succeeded in proving, that condition (1) is sharp, in the following sense.
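For this family of functions the integral test reduces to a one-line computation. Writing $f(t)=\sqrt{t}\,(\log t)^{-\gamma}$ (our reading of the example, with the exponent's sign restored), the substitution $u=\log t$ gives:

```latex
\int_{e}^{\infty} \frac{f(t)}{t^{3/2}}\,dt
  = \int_{e}^{\infty} \frac{\sqrt{t}\,(\log t)^{-\gamma}}{t^{3/2}}\,dt
  = \int_{e}^{\infty} \frac{dt}{t\,(\log t)^{\gamma}}
  = \int_{1}^{\infty} \frac{du}{u^{\gamma}},
```

which is finite precisely when $\gamma>1$, matching the stated threshold.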

Conjecture 1. If
$$\int_1^\infty \frac{f(t)}{t^{3/2}}\,dt = \infty, \tag{2}$$
then any weak limit $P$ of $W_t$ as $t\to\infty$ is recurrent almost surely.

It should be noted that this problem is open even in the basic case where $f(t)\sim\sqrt{t}$ as $t\to\infty$, for which it is still the case that $W(K_t)\to 0$ as $t\to\infty$.

Figure 1 below illustrates this result.

In the recurrent regime, i.e. if (2) holds, we further believe that the local time process of $X$ is well defined almost surely under $P$, but that $X$ is "far away from breaking the constraint $K_t$," in the following sense:

Conjecture 2. Assume (2), and let $P$ be any weak limit of $W_t$. Then there exists a nonnegative deterministic function $\omega(t)\to\infty$ as $t\to\infty$ such that
$$P\left(L_t \le \frac{f(t)}{\omega(t)}\right) \to 1 \tag{3}$$
as $t\to\infty$.

Furthermore, when $f(t)\sim t^{1/2}(\log t)^{-\gamma}$ as $t\to\infty$, with $0<\gamma<1$, we believe that
$$L_t \le f(t)\exp\big(-C(\log t)^{\gamma}\big), \tag{4}$$
with probability asymptotically 1, for some $C>0$. It may seem surprising at first that, in the recurrent regime, the process shouldn't use its full allowance of local time. This phenomenon is related to entropic effects, which cause the process to stay far away from breaking the constraint in order to allow for more fluctuations. In [2], we already observed a similar behaviour in the case where the local time profile of the process is conditioned to remain bounded at every point, and we have called this phenomenon "Brownian entropic repulsion." This aspect is crucially exploited in our proof, which relies on considering a suitably softer constraint $K_t'$ (easier to analyse, because more "Markovian"), which nevertheless turns out to be equivalent to the constraint $K_t$.

Discussion and relation to previous works. The integral test (1) is reminiscent of classical integral tests for Brownian motion. Indeed, an indication that this test provides the right answer follows from a fairly basic calculation. This calculation is carried out in Lemma 4, where the probability of hitting 0 during the interval $[t/2, t]$ given $K_t$ is estimated. However, we stress that one of the major difficulties of this problem is to control the long-range interactions induced by conditioning far away into the future, and to show that this propagates down to an arbitrarily large but finite window close to the origin in a manageable way. It is this long-range interaction, inherent to the study of self-interacting processes, which is at the source of our difficulties in proving Conjectures 1 and 2.

Fig. 1. Simulations of trajectories up to time $t=10^4$. From top to bottom: $\gamma=0.5$, $\gamma=0.9$ and $\gamma=1.1$.

Putting a bound on the local time can be viewed as introducing some form of self-repellence of the process. The problems studied here are reminiscent of some problems arising in the mathematical study of random polymers, of which an excellent review can be found in [6]. Our work is also somewhat related in spirit to a series of papers by Roynette et al. (see, e.g., [12], or the forthcoming monograph by Roynette and Yor [13]) and Najnudel [8], although our goals and methods are quite different.

Theorem 1 establishes tightness of the measures $P_t$ but not weak convergence. One possible approach to proving uniqueness would be to identify the limiting process as the unique solution to a certain stochastic differential equation.

We note that this is related to the work of Barlow and Perkins [1], which describes the behaviour of Brownian motion near a typical slow point, i.e., a time $t$ near which the growth of $B$ satisfies
$$\limsup_{h\to 0}\, h^{-1/2}\,|B_{t+h}-B_t| = 1.$$
Blowing up the trajectory near this slow point, their Theorem 3.3 gives precisely a description of the process as a solution to a certain stochastic differential equation.

Organization of the paper. In Section 2, we prove some preliminary results, several of which are interesting in their own right. Namely, it is shown in Theorem 2 that a Brownian motion conditioned to have local time at the origin bounded by 1 is transient, and that the total local time accumulated by this process is a uniform random variable on $(0,1)$. Note that this is smaller than 1 almost surely, so here again the process does not use its full allowance of local time. It is also shown, in Theorem 3, that a Brownian motion conditioned on the event $E_t = \{L_t\le f(t)\}$ is recurrent in the limit $t\to\infty$ as soon as $f(t)\to\infty$, no matter how slowly.

In Section 3, we give a proof of the main result (Theorem 1). This is based partly on Theorem 3 and on a general result which shows that any conditioning of the Brownian motion based on its zero set cannot grow faster than diffusively (Lemma 1); in the case of $K_t$, this is matched by a lower bound of the same order of magnitude (Lemma 2). These various ingredients are put together using a coupling method, which then gives the proof of the result.

Finally, in Section 4, we study a slightly different but related problem, where a Brownian path is conditioned to spend no more than one unit of time in the negative half-line. It is shown that there, again, the measures converge weakly to a limiting process, which is (unsurprisingly) transient, and that the total amount of time spent in the forbidden region by this process is equal to $U^2$, where $U$ is a uniform random variable on $(0,1)$. Hence here again the process does not use its full allowance, another expression of the entropic repulsion principle.

2. Preliminaries

2.1. Brownian motion with bounded local time

It will be convenient to define various processes on the same space, but governed by different probability measures on this space. We take for this common space the space $C = C([0,\infty),\mathbb{R})$ of continuous functions from $[0,\infty)$ into $\mathbb{R}$. $X_s$ will denote the $s$th coordinate function on $C$; we shall also write $X(s)$ for $X_s$ occasionally, when $s$ is a complicated expression. In this setup, Brownian motion is obtained by putting the Wiener measure $W$ on $C$; $W$ is concentrated on the paths which start at $X(0)=0$ and makes increments over disjoint intervals independent with suitable Gaussian distributions. We now take $L(\cdot,\cdot)$ to be a jointly continuous local time of the Brownian motion. This is a continuous function $L(s,x)$ which satisfies

$$\big|\{s\le t: X_s\in B\}\big| = \int_0^t \mathbf{1}[X_s\in B]\,ds = \int_B L(t,x)\,dx \tag{5}$$

$W$-almost surely, simultaneously for all Borel sets $B$ and $t\ge 0$ ([7], Section 3.4). Under the measure $W$ such a jointly continuous function exists almost surely, and it is clearly unique for any sample function for which it exists.

As a first step towards the proof of Theorem 1, we prove the following simple result. Assume that $f(t)\equiv 1$, so that $K_t := \{L_t\le 1\}$. Our first theorem describes the weak limit of $W_t$ as $t\to\infty$. This description involves the Bessel-3 process, a description of which can be found, for instance, in [10].


Theorem 2. The measures $W_t$ converge weakly on $C$ to a measure $P$. Under $P$, the process $X$ is transient, and $P$ can be described as follows. On some probability space, let $U$, $\{B(s), s\ge 0\}$, $\epsilon$ and $\{B^{(3)}(s), s\ge 0\}$ respectively be a random variable with uniform distribution on $[0,1]$, a Brownian motion, a random variable uniform on $\{-1,1\}$, and a Bessel-3 process, and assume that these four random elements are independent of each other. Define
$$\tau = \sup\big\{v: L(v,0) < U\big\} \tag{6}$$
(where $L$ is the local time of $B$), and
$$Y(t) = \begin{cases} B(t) & \text{if } t\le\tau,\\ \epsilon\,B^{(3)}(t-\tau) & \text{if } t>\tau.\end{cases}$$
Then $P$ is the distribution of $\{Y(t), t\ge 0\}$.

Somewhat informally, the theorem says that under $P$, $X$ can be described by first drawing an independent uniform random variable $U$. Then $X$ is a standard Brownian motion until it has accumulated a local time at 0 equal to $U$, and performs a three-dimensional Bessel process afterwards.

It is well known that a Bessel-3 process starting at the origin diverges to infinity almost surely. This is, of course, the reason why the process governed by $P$ is transient. However, we can say more. It is also well known that $L_t$, the local time at 0, can change only at times $t$ when $X_t=0$. This fact is also clear from (5). Together with the description of the process under $P$, this implies that $L_t$ is a.s. constant on $t\ge\tau$, where it takes the value $U$ (by definition and continuity of the inverse local time $\tau$). Thus, the theorem implies
$$L_\infty = L_\tau = U. \tag{7}$$
Since $U<1$ almost surely, this shows that under $P$, $X$ does not use its full allowance of local time, which is another expression of the entropic repulsion principle.

Proof of Theorem 2. Step 1. In this step we give a representation of Brownian motion by means of excursions, which will turn out to be useful for the proof. Readers familiar with this sort of thing are encouraged to skip this step and go directly to Step 2. To help with the intuition, consider the set $Z := \{t: X_t=0\}$. If $X_t$ is a continuous function of $t$, then $Z$ is a closed set, and its complement $\mathbb{R}\setminus Z$ is a countable union of maximal open intervals. On each such interval $X\ne 0$. The piece of the path of $X$ on such an interval is called an excursion of $X$. One can now try to construct a process equivalent to $X$ by first picking excursions on some probability space, according to a suitable distribution, and then putting these excursions together. For $X$ a Brownian motion, this can be done rather explicitly. The following description can be found in a number of references (see, e.g., [7], Section III.4.3, [10], Chapter XII). The excursions are elements of $\mathcal{W}$, the collection of continuous functions $w:[0,\infty)\to\mathbb{R}$ such that $w(0)=0$ and for which there exists $\zeta(w)>0$ such that $w(t)>0$ or $w(t)<0$ for all $0<t<\zeta(w)$, and $w(t)=0$ for $t\ge\zeta(w)$. $\zeta(w)$ is called the length of the excursion $w$, or sometimes its duration.

Itô's fundamental result about the excursions of a Brownian motion states that there exists a $\sigma$-finite measure $\nu$ on the space $\mathcal{W}$, called Itô's measure, such that the Brownian motion, viewed in the correct time-scale, can be seen as a point process of excursions with intensity measure $\nu$. To state this result precisely, note that $X$ has only countably many excursions (since there are only finitely many excursions longer than $1/n$, for each finite $n\ge 1$, in any compact time-interval). Let $(e_i)_{i\ge 1}$ be an enumeration of these excursions. Since $L_t$ only increases on the zero set of $X$, let $\ell_i$ be the common value of $L_t$ throughout the excursion $e_i$, for $i\ge 1$. Then Itô's theorem states that

$$M := \sum_{i\ge 1}\delta_{(\ell_i, e_i)} \tag{8}$$

is a Poisson point process on $(0,\infty)\times\mathcal{W}$, with intensity measure $d\lambda\otimes d\nu$, where $d\lambda$ is the Lebesgue measure on $(0,\infty)$. That is, for any Borel set $B$ in $(0,\infty)\times\mathcal{W}$, if

$$P(B) := \text{the number of points of this process in the set } B,$$

then $P(B)$ has a Poisson distribution with mean $\int_B d\lambda\,d\nu$, and for disjoint sets $B_i$ in $(0,\infty)\times\mathcal{W}$ the $P(B_i)$ are independent. With a slight abuse of notation, we say that $(\ell,e)\in M$ if $M(\{(\ell,e)\})=1$. Note that the collection of points $(\ell_i, e_i)$ entirely determines the path of $X$. [Indeed, if we define, for all $u>0$,
$$\tau(u) = \sum_{\ell\le u,\ (\ell,e)\in M}\zeta(e),$$
then for all $i\ge 1$ the function $\tau(u)$ has an upward jump of size $\zeta(e_i)$ at time $\ell_i$, and these are the only jumps of $\tau$. If $t>0$, let $u = \inf\{v>0: \tau(v) > t\}$, and let $e$ be the excursion associated with the jump of $\tau$ at time $u$. Then it is easy to check that we have the formula
$$e\big(t - \tau(u-)\big) = X_t,$$
where $X$ is the original process that we started with, and, for all $u>0$,
$$\tau(u) = \inf\{t>0: L_t > u\}. \tag{9}$$
Thus the excursions can easily be put together.]

A well-known description of Itô's measure (see, e.g., [10], Section XII.4), which we will use in this proof, is the following determination of the "law" of the duration of an Itô excursion:
$$\nu(\zeta>t) = \int_t^\infty \frac{ds}{\sqrt{2\pi s^3}} = \sqrt{\frac{2}{\pi t}}. \tag{10}$$
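The closed form in (10) can be sanity-checked numerically. The sketch below (purely illustrative; the helper names are ours) compares a midpoint-rule integral of the duration density $1/\sqrt{2\pi s^3}$ against the difference of the tail function at the two endpoints.

```python
from math import pi, sqrt

def nu_tail(t: float) -> float:
    # Closed form of Ito's measure of excursions of duration > t, eq. (10).
    return sqrt(2.0 / (pi * t))

def nu_tail_numeric(t: float, upper: float, steps: int) -> float:
    # Midpoint-rule integral of the duration density 1/sqrt(2*pi*s^3)
    # over [t, upper]; approximates nu(t < zeta <= upper).
    h = (upper - t) / steps
    return sum(h / sqrt(2.0 * pi * (t + (i + 0.5) * h) ** 3)
               for i in range(steps))

# The integral over [1, 100] should match nu_tail(1) - nu_tail(100).
approx = nu_tail_numeric(1.0, 100.0, 200_000)
exact = nu_tail(1.0) - nu_tail(100.0)
```

With $2\times 10^5$ steps the midpoint rule agrees with the closed form to well below $10^{-4}$.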

Step 2. Consider the event $E_t'$ that by time $\tau(1)$ (defined by (9)) there is an excursion of duration greater than $t$. That is, formally,
$$E_t' = \big\{\text{there exists } (\ell,e)\in M \text{ such that } \ell\le 1 \text{ and } \zeta(e)>t\big\}.$$
Observe that, on the one hand, $E_t'\subset E_t$. Indeed, $E_t$ can be written as
$$E_t = \big\{\tau(1) > t\big\},$$
and it is clear that this occurs on $E_t'$. On the other hand, we claim that the two events have asymptotically the same probability. Indeed, if $B=\{e\in\mathcal{W}: \zeta(e)>t\}$, then $E_t'$ is the event that by time 1 at least one point of $M$ has fallen in $B$. Since the number of such points is Poisson with the parameter given in (10), we obtain
$$P\big(E_t'\big) = 1-\exp\left(-\sqrt{\frac{2}{\pi t}}\right),$$
from which we deduce
$$P\big(E_t'\big) \sim \sqrt{\frac{2}{\pi t}} \tag{11}$$

as $t\to\infty$. To compute $P(E_t)$, we appeal to Lévy's reflection principle. Let
$$S_t = \sup_{s\le t} X_s.$$
Then Lévy showed that, under the Wiener measure $W$, the two processes $(L(t,0), t\ge 0)$ and $(S_t, t\ge 0)$ have the same distribution. On the other hand, for fixed $t\ge 0$, by the standard reflection principle, $S_t$ has the same distribution as $|X_t|$ (see, e.g., Durrett [4], Section 7.4, or [10], Sections III.3.7 and VI.2.3). Consequently,
$$W(E_t) = W\big(|X_t|\le 1\big) = \frac{1}{\sqrt{2\pi t}}\int_{-1}^{1} e^{-x^2/(2t)}\,dx \sim \sqrt{\frac{2}{\pi t}}. \tag{12}$$
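The Gaussian integral in (12) has the exact value $\operatorname{erf}(1/\sqrt{2t})$, so the asymptotic equivalence with $\sqrt{2/(\pi t)}$ can be verified directly. A small numerical sketch (illustrative only; the function names are ours):

```python
from math import erf, pi, sqrt

def prob_abs_bm_le_one(t: float) -> float:
    # W(|X_t| <= 1): X_t ~ N(0, t), so this probability equals
    # erf(1 / sqrt(2 t)), the Gaussian integral appearing in (12).
    return erf(1.0 / sqrt(2.0 * t))

def tail_asymptotic(t: float) -> float:
    # The asymptotic value sqrt(2 / (pi t)) from (11)-(12).
    return sqrt(2.0 / (pi * t))

# For large t the exact probability and the asymptotic agree
# to leading order, e.g. at t = 1e8:
ratio = prob_abs_bm_le_one(1e8) / tail_asymptotic(1e8)
```

At $t=10^8$ the ratio differs from 1 only at the $10^{-9}$ level, since $\operatorname{erf}(x) = (2/\sqrt{\pi})(x - x^3/3 + \cdots)$ for small $x$.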


Let $A\in\mathcal{F} = \bigcup_{t\ge 0}\mathcal{F}_t$. It follows from the above that
$$W(A\mid E_t)\xrightarrow[t\to\infty]{} P(A) \quad\text{if and only if}\quad W\big(A\mid E_t'\big)\xrightarrow[t\to\infty]{} P(A). \tag{13}$$

It thus suffices to prove Theorem 2 when the event we condition on is $E_t'$ rather than $E_t$. In fact, for the same reason, one can condition on the event $E_t^{(2)}$ that there is exactly one excursion of duration greater than $t$ prior to time $\tau(1)$. Indeed, $E_t^{(2)}\subset E_t'$, and we also have $P(E_t^{(2)})\sim P(E_t')$, since the probability that there are two or more such excursions is $O(t^{-1})$.

Now, for $B\subset\mathcal{W}$, let $(N_u^B, u\ge 0) = \big(P([0,u)\times B), u\ge 0\big)$ be the number of points of $M$ that have fallen in $B$ by time $u$, and take $B$ to be the set $B=\{e\in\mathcal{W}: \zeta(e)>t\}$ defined above (11). Note that $N_\cdot^B$ is a Poisson process with rate $\nu(B)$.

It is well known that, conditionally on the number of jumps of a Poisson process by time 1, the jump times have the distribution of the uniform order statistics. In particular, since
$$E_t^{(2)} = \big\{N_1^B = 1\big\},$$
we see that, conditionally upon $E_t^{(2)}$, there exist a uniform random variable $U$ in $(0,1)$ and $e\in B$ such that $(U,e)\in M$.

Moreover, $e$ is independent of $U$ and is distributed according to $\nu(\cdot\mid B)$. That is,
$$P(e\in\cdot) = \frac{\nu(e\in\cdot;\ \zeta>t)}{\nu(\zeta>t)}. \tag{14}$$

Consider now the conditional distribution of $\sum_{i\ge 1:\, e_i\in B^c}\delta_{(\ell_i,e_i)}$ given $E_t^{(2)}$. By the independence property of Poisson point processes in disjoint sets, this distribution is simply equal to the unconditional distribution of the restriction of $M$ to $(0,\infty)\times B^c$. Therefore, let $M'$ be an independent realization of $M$, and let
$$\tilde{M} = M|_{(0,1]\times B^c} + \delta_{(U,e)} + M'|_{(1,\infty)\times\mathcal{W}}. \tag{15}$$

Let $\tilde{X}$ be the process obtained by reconstructing the path from the point process $\tilde{M}$. The above reasoning shows that for a set $A\in\mathcal{F}_s$, where $s>0$ is fixed (while $t>s$ tends to infinity),
$$P(X\in A\mid E_t) \sim P\big(X\in A\mid E_t^{(2)}\big) \sim P(\tilde{X}\in A).$$
Thus it suffices to show that $\tilde{X}$ converges in distribution to the law $Q$ of the process $Y$ in Theorem 2. Note that $(\tilde{X}_t, 0\le t\le\tau(U))$ depends only on the points of $M$, and is thus independent of $(U,e)$. Moreover, provided $M$ did not have any point in $B$ on the time-interval $[0,1]$ (an event of probability $1-o(1)$), $(\tilde{X}_t, 0\le t\le\tau(U))$ has the same distribution as $(X_t, 0\le t\le\tau(U))$. Since $e$ is also independent from $U$, it thus suffices to prove that
$$\nu(\cdot\mid\zeta>t)\xrightarrow[t\to\infty]{} P\big(B^{(3)}\in\cdot\big) \tag{16}$$
weakly. There are many ways to prove (16), and we propose one below. Let us postpone the proof of this statement for a few moments and finish the proof of Theorem 2. What we have proved is that for every $s>0$,

$$W(A\mid E_t)\to P(A), \quad A\in\mathcal{F}_s,\ s>0, \tag{17}$$
where $P$ is the measure described in the statement of Theorem 2. It is not hard (but not immediate) to deduce weak convergence of $W(\cdot\mid E_t)$ towards $P$. The problem is that one cannot directly apply Dynkin's $\pi$–$\lambda$ theorem to conclude that (17) holds for all $A\in\mathcal{F}$. Instead, note that, as in the proof of Lemma 6 in [2], (17) implies tightness (all events involved in the verification of tightness are measurable with respect to $\mathcal{F}_s$ for some $s>0$), and furthermore any weak limit must be identical to $P$, because, for instance, of the convergence of the finite-dimensional distributions. Thus $W_t$ converges weakly towards $P$.

Turning to the proof of (16), which is well known in the folklore (but for which we have not been able to find a precise reference), we propose the following simple argument. First note that under $\nu$, $\operatorname{sign}(e)$ is uniform on $\{-1,+1\}$ and is independent from $(|e(x)|, x\ge 0)$, which has a "distribution" equal to $\nu^+$, the restriction of $\nu$ to positive excursions.

Let $\nu^+(\cdot\mid\zeta=t)$ denote the law of a positive Itô excursion conditioned to have duration equal to $t$, that is, the weak limit of
$$\frac{\nu^+(\cdot;\ \zeta\in(t,t+\varepsilon))}{\nu^+(\zeta\in(t,t+\varepsilon))} \tag{18}$$
as $\varepsilon\to 0$. Since $\nu^+(\cdot\mid\zeta>t)$ is the mixture
$$\nu^+(A\mid\zeta>t) = \int_{s>t}\nu^+(A\mid\zeta=s)\,\frac{\nu^+(\zeta\in ds)}{\nu^+(\zeta>t)}, \tag{19}$$

it suffices to prove that
$$\nu^+(\cdot\mid\zeta=t)\xrightarrow[t\to\infty]{} P\big(B^{(3)}\in\cdot\big). \tag{20}$$
It is not hard to show (see, e.g., Pitman [9], formula (28)) that a Brownian excursion conditioned to have duration equal to $t$ is equal in distribution to a 3-dimensional Bessel bridge of duration $t$, that is, can be written as
$$e_u = \sqrt{b_{1,u}^2 + b_{2,u}^2 + b_{3,u}^2}, \quad 0\le u\le t, \tag{21}$$

where $(b_{i,u}, 0\le u\le t)_{i=1}^3$ are three independent one-dimensional Brownian bridges. Now, it is easy to check (see, e.g., Yor [14], Section 0.5) that if $W^{(t)}$ is the law of a one-dimensional bridge of duration $t$, then for $s<t$,
$$\left.\frac{dW^{(t)}}{dW}\right|_{\mathcal{F}_s} = \sqrt{\frac{t}{t-s}}\,\exp\left(-\frac{X_s^2}{2(t-s)}\right). \tag{22}$$
Letting $t\to\infty$ with $s>0$ fixed, we see that the above Radon–Nikodym derivative converges to 1. This means that the restrictions of $(b_{i,u}, 0\le u\le t)_{i=1}^3$ to $\mathcal{F}_s$ converge to three independent Brownian motions. By (21), it follows that the restriction of $(e_u, 0\le u\le t)$ to $[0,s]$ converges in distribution to $(\sqrt{X_{1,u}^2+X_{2,u}^2+X_{3,u}^2},\ 0\le u\le s)$, where $(X_{i,u}, u\ge 0)_{i=1}^3$ are three independent Brownian motions. The law of this process is, of course, that of a Bessel-3 process restricted to $\mathcal{F}_s$, and hence Theorem 2 is proved.
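The Radon–Nikodym derivative (22) is a genuine change of measure on $\mathcal{F}_s$: integrating it against the $N(0,s)$ law of $X_s$ must give exactly 1. A small numerical sketch of this normalisation (illustrative only; the function names are ours):

```python
from math import exp, pi, sqrt

def bridge_rn_derivative(x: float, s: float, t: float) -> float:
    # Eq. (22): density of the bridge law W^(t) w.r.t. Wiener measure
    # on F_s, evaluated on a path with X_s = x.
    return sqrt(t / (t - s)) * exp(-x * x / (2.0 * (t - s)))

def expectation_under_wiener(s: float, t: float,
                             half_width: float, steps: int) -> float:
    # E_W[ dW^(t)/dW |_{F_s} ]: midpoint-rule integral of the derivative
    # against the N(0, s) density of X_s; must equal 1.
    h = 2.0 * half_width / steps
    total = 0.0
    for i in range(steps):
        x = -half_width + (i + 0.5) * h
        gauss = exp(-x * x / (2.0 * s)) / sqrt(2.0 * pi * s)
        total += gauss * bridge_rn_derivative(x, s, t) * h
    return total
```

Analytically, the combined exponent is $-x^2 t/(2s(t-s))$, whose Gaussian integral exactly cancels the prefactor $\sqrt{t/(t-s)}$, and the quadrature confirms the total mass 1.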

2.2. Slowly growing local time

We now consider a problem which may be regarded as the basic building block for the proof of Theorem 1. Let $f$ be a nonnegative nondecreasing function, and let
$$E_t = \big\{L_t \le f(t)\big\}. \tag{23}$$

Let
$$Q_t(\cdot) = W(\cdot\mid E_t), \tag{24}$$
with $\{X_t, t\ge 0\}$ distributed according to the Wiener measure $W$. Note the difference between $E_t$ and $K_t$: in $K_t$ the conditioning concerns the entire growth of the local time profile up to time $t$, whereas $E_t$ concerns only the value of $L$ at time $t$.

Theorem 3. Assume that $f(t)\to\infty$ as $t\to\infty$. Then $Q_t$ converges weakly to the standard Wiener measure $W$ on $C$.

This result may be a little surprising at first: even if $f$ grows as slowly as $\log\log t$, we still obtain a recurrent process, indeed a Brownian motion in the limit. What is going on is that the effect of the conditioning is to create one very long excursion, whose starting point escapes to infinity as $t\to\infty$. As a result, the conditioning becomes trivial in the weak limit.


Proof of Theorem 3. The strategy for the proof of Theorem 3 is similar to that used in the proof of Theorem 2, and we will thus give fewer details. We start again by noticing that the event $E_t=\{L_t\le f(t)\}$ is equal to the event $\{\tau(f(t))>t\}$. If we define the event $E_t'$ that there is at least one excursion of duration greater than $t$ by time $\tau(f(t))$, then we have again $E_t'\subset E_t$, and, letting $\lambda = f(t)\,\nu(\zeta>t) = f(t)\sqrt{2/(\pi t)}$,
$$W\big(E_t'\big) = 1-e^{-\lambda} \sim f(t)\sqrt{\frac{2}{\pi t}}, \tag{25}$$

while once again, by Lévy's identity and the reflection principle,
$$W(E_t) = W\big(|X_t|\le f(t)\big) = \frac{1}{\sqrt{2\pi t}}\int_{|x|\le f(t)} e^{-x^2/(2t)}\,dx \sim f(t)\sqrt{\frac{2}{\pi t}}. \tag{26}$$

Therefore, this time again, if $A\in\mathcal{F}$ then $W(A\mid E_t)\to P(A)$ as $t\to\infty$ if and only if $W(A\mid E_t')\to P(A)$. Thus let $M=\sum_{i\ge1}\delta_{(\ell_i,e_i)}$ be Itô's excursion point process, and let $\tilde M$ be a realisation of $M$ conditionally given $E_t'$. Then, reasoning as in (15), we may write
$$\tilde{M} = M|_{(0,f(t)]\times B^c} + \delta_{(Uf(t),e)} + M'|_{(f(t),\infty)\times\mathcal{W}}, \tag{27}$$
where $M'$ is an independent realisation of $M$, $U$ is an independent uniform random variable on $(0,1)$, and $e$ has the distribution (14). Letting $\tilde X$ be the path reconstructed from the points of $\tilde M$, we see that the distribution of $\tilde X$ up to time $\tau(Uf(t))$ is that of a standard Brownian motion. (We also have that $\tilde X_t = e(t-\tau(Uf(t)))$ for $\tau(Uf(t))\le t\le \tau(Uf(t))+\zeta(e)$, and that the process $e$ still converges to a 3-dimensional Bessel process as $t\to\infty$, with a random sign, but as we will see this is irrelevant.) Observe that now the random variable $\tau(Uf(t))$ tends to infinity almost surely, since $f(t)\to\infty$ as $t\to\infty$. In particular, this convergence holds in probability. Fix any $s\ge 0$ and $A\in\mathcal{F}_s$. Then

$$\Big|P\big((\tilde X_u, 0\le u\le s)\in A\big) - W(A)\Big| \le 2P\big(\tau(Uf(t))\le s\big) \to 0$$
as $t\to\infty$. This proves that the law of $(X_u, 0\le u\le s)$ conditionally given $E_t'$ converges weakly as $t\to\infty$ (indeed, in total variation) towards $W|_{\mathcal{F}_s}$. Therefore, by (25) and (26), the law of $(X_u, 0\le u\le s)$ given $E_t$ also converges weakly towards $W|_{\mathcal{F}_s}$. This proves Theorem 3.

In words, the conditioning "becomes invisible" asymptotically: the effect of conditioning on $E_t$ is to add a three-dimensional Bessel excursion which starts far away in the future. Observe that, as a byproduct, given $E_t$,
$$\frac{L_t}{f(t)} \xrightarrow[t\to\infty]{} U \tag{28}$$
in distribution, where $U$ is a uniform random variable on $(0,1)$.

3. Proof of the main result

3.1. Preliminary estimates

To begin the proof of Theorem 1, we first divide the positive half-line into dyadic blocks. Let $t_j = 2^j$, and let $I_j = [t_{j-1}, t_j)$. We use as a shorthand notation $K_n = K_{t_n}$. Our strategy consists in studying a less stringent constraint for which the analysis is easier. Let $C_n$ be the following modified constraint:

$$C_n = \big\{L_s - L_{t_{n-1}} \le f(s) \text{ for all } s\in I_n\big\}, \tag{29}$$
and define analogously
$$K_n' = \bigcap_{j=1}^{n} C_j. \tag{30}$$

Note that $K_n'$ is obviously a "weaker" constraint, in the sense that $K_n\subset K_n'$. The difference between $K_n$ and $K_n'$ is that $K_n'$ is "more Markovian" and hence easier to analyse. We will show, however, that $K_n$ has a positive probability to occur given $K_n'$ (bounded away from zero), which means that any result which is true for $K_n'$ with probability 1 will also be true for $K_n$. Note also that $K_n'$ may be realised as the event $K_{t_n}$ associated with a modified function $\hat f(t)$, which stays constant on each interval $I_j$ and takes the value $f(t_j)$ on this interval.

Let $H_j = \{X_s = 0 \text{ for some } s\in I_j\}$, and let $A_j = H_j\cap C_j$.

We start the proof with the following general result, which we will use extensively as an upper bound on the growth of $X$ given $K_t$. For a function $\omega\in C$, let $Z(\omega) = \{x\ge 0: \omega(x)=0\}$ be the set of its zeros; we view $Z$ as a random variable under the probability measure $W$. That is, we equip the set $\Omega$ of all closed subsets of the nonnegative real half-line with the $\sigma$-field $\mathcal{B}$ defined by $\mathcal{Z}\in\mathcal{B}$ if and only if $\{\omega\in C: Z(\omega)\in\mathcal{Z}\}\in\mathcal{F}$, where $\mathcal{F}$ is the Borel $\sigma$-field on $C$ generated by the open sets associated with local uniform convergence. Let $\mathcal{Z}$ be a set of closed subsets of $\mathbb{R}_+$ such that $W(Z\in\mathcal{Z})>0$. Let $\preceq$ be the order defined by $\omega\preceq\omega'$ if and only if $\omega(x)\le\omega'(x)$ for all $x\ge 0$. Finally, let $P^{(\mathrm{Bess})}$ denote the law of a three-dimensional Bessel process, which is obtained when one considers, e.g., the Euclidean norm of a Brownian motion in $\mathbb{R}^3$.

Lemma 1. Conditionally on $\{Z\in\mathcal{Z}\}$, the process $|X|$ is stochastically dominated by a three-dimensional Bessel process. More precisely, for any continuous bounded functional $F$ on $C$ which is nondecreasing for the order $\preceq$,
$$W\big(F(|X_s|, s\ge 0)\mid Z\in\mathcal{Z}\big) \le P^{(\mathrm{Bess})}\big(F(X_s, s\ge 0)\big), \tag{31}$$
where $P^{(\mathrm{Bess})}$ denotes the law of a 3-dimensional Bessel process.

Proof. Let $(R_s, s\ge 0)$ be a three-dimensional Bessel process. Recall that $(R_s, s\ge 0)$ is a solution of the stochastic differential equation
$$dR_s = dB_s + \frac{1}{R_s}\,ds. \tag{32}$$

We work conditionally on $\{Z(\omega)=z\}$ for a given $z\in\mathcal{Z}$. Since $z$ is closed, its complement is open and defines open intervals which are referred to as excursion intervals. Given $\{Z=z\}$, the law of $X$ can be described as a concatenation of independent processes whose laws are precisely Itô excursions conditioned on their duration $\zeta$, where $\zeta$ is the length of the excursion interval of $z$ under consideration. More formally, for $\zeta>0$, let $n_\zeta$ denote the law of a Brownian excursion conditioned to have duration $\zeta$. Note that a priori this only makes sense for Lebesgue-almost every $\zeta$ (see, e.g., Section V.10 in Feller [5]). However, the Brownian scaling property implies that $n_\zeta$ is weakly continuous in $\zeta$ for the topology induced by local uniform convergence, so that $n_\zeta$ is defined unambiguously for all $\zeta>0$. Furthermore, it is not hard to show (see, e.g., Pitman [9], formula (28)) that under $n_1$, $X$ is a solution to the stochastic differential equation

$$dX_s = dB_s + \frac{1}{X_s}\,ds - \frac{X_s}{1-s}\,ds, \tag{33}$$
for which there is strong uniqueness. By Brownian scaling, under $n_\zeta$, $X$ is thus the unique (in law) solution to the stochastic differential equation
$$dX_s = dB_s + \frac{1}{X_s}\,ds - \frac{X_s}{\zeta-s}\,ds. \tag{34}$$

Start with a small parameter $\delta>0$. If $X\in C$ is a continuous function, let $X^{(\delta)}$ denote the element of $C$ obtained by removing all the excursions of $X$ of duration smaller than $\delta$ (alternatively, retaining all the excursions of length greater than $\delta$). Since there are never more than a finite number of excursions longer than $\delta$ on any compact interval, there is no problem in ordering these excursions chronologically. Likewise, let $X_{(\delta)}$ denote the process $X$ where all the excursions longer than $\delta$ have been removed. Then Itô's excursion theorem tells us that under $W$, the processes $X_{(\delta)}$ and $X^{(\delta)}$ are independent. Furthermore, the law of $X^{(\delta)}$ can be obtained by concatenating an i.i.d. sequence of excursions conditioned to have length greater than $\delta$. As a consequence, let $Z^{(\delta)}$ be the zero set of $X^{(\delta)}$, and for $z\in\Omega$ let $z^{(\delta)}\in\Omega$ be the closed subset of $\mathbb{R}_+$ whose complement consists of the intervals of $z^c$ of length greater than $\delta$. Then, given $\{Z(\omega)=z\}$, $X^{(\delta)}$ may be described as a concatenation of independent excursions of respective laws $n_{\zeta_1},\ldots$, together with independent random signs, where $\zeta_1,\ldots$ are the chronologically ordered interval lengths of $(z^{(\delta)})^c$. Since $W(Z\in\mathcal{Z})>0$, it follows that

$$E\big[F\big(X^{(\delta)}\big)\mid Z\in\mathcal{Z}\big] = \frac{1}{W(Z\in\mathcal{Z})}\int_\Omega \mathbf{1}_{\{z\in\mathcal{Z}\}}\,\mu(dz)\,E\big[F\big(X^{(\delta)}\big)\mid Z=z\big], \tag{35}$$
where $\mu$ denotes the law of $Z$ under $W$.

By the above two observations, given $\{Z=z\}$, $X^{(\delta)}$ may be obtained as a concatenation of independent solutions to the stochastic differential equation (34). We may thus construct a coupling of $X^{(\delta)}$ with a Bessel process $(R_s, s\ge 0)$ as follows. Fix a Brownian motion $(B_s, s\ge 0)$. Let $\zeta_1,\zeta_2,\ldots$ denote the lengths of the excursions of $z^{(\delta)}$, ordered chronologically. We first construct $(X_s^{(\delta)}, 0\le s\le\zeta_1)$ and $(R_s, 0\le s\le\zeta_1)$ on the same probability space by solving (32) and (34) using the same Brownian motion $(B_s, 0\le s\le\zeta_1)$ (this is possible by strong uniqueness), and by giving $X^{(\delta)}$ a random sign on that interval. Since (34) contains an additional negative drift term compared to (32), it follows that
$$\big|X_s^{(\delta)}\big| \le R_s \tag{36}$$
holds pointwise on $[0,\zeta_1]$. By the Markov property of Brownian motion at time $\zeta_1$, there is no problem in extending this construction to the interval $[\zeta_1,\zeta_1+\zeta_2]$, this time using the Brownian motion $(B_s - B_{\zeta_1}, \zeta_1\le s\le\zeta_1+\zeta_2)$ for both (32) and (34). Since $R_{\zeta_1}\ge |X_{\zeta_1}^{(\delta)}|$ by (36), and since (34) still contains an additional negative drift term compared to (32), we conclude that the comparison (36) still holds on the interval $[\zeta_1,\zeta_1+\zeta_2]$. It is straightforward to extend this construction to all of $\mathbb{R}_+$ by induction, and to obtain (36) pointwise on all of $\mathbb{R}_+$. Since $F$ is nondecreasing, we deduce that
$$E\big[F\big(X^{(\delta)}\big)\mid Z=z\big] \le E^{(\mathrm{Bess})}\big[F(X)\big].$$

Plugging this into (35), we get
$$E\big[F\big(X^{(\delta)}\big)\mid Z\in\mathcal{Z}\big] \le E^{(\mathrm{Bess})}\big[F(X)\big]. \tag{37}$$
On the other hand, $F$ is by assumption continuous for local uniform convergence, and it is easy to see that $X^{(\delta)}$ converges almost surely to $X$ locally uniformly as $\delta\to 0$. Since $F$ is furthermore bounded, we conclude by the dominated convergence theorem that
$$E\big[F(|X|)\mid Z\in\mathcal{Z}\big] \le E^{(\mathrm{Bess})}\big[F(X)\big],$$
as requested.
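The heart of the coupling is that (34) differs from (32) only by the extra negative drift $-X_s/(\zeta-s)\,ds$, so driving both equations with the same Brownian increments preserves the ordering (36). A rough Euler-scheme sketch of this comparison (illustrative only; the function names and the floor at $10^{-9}$ guarding the discretisation near 0 are ours, not part of the proof):

```python
import random
from math import sqrt

def bessel3_drift(x: float) -> float:
    # Drift of the Bessel-3 SDE (32): dR = dB + ds / R.
    return 1.0 / x

def excursion_drift(x: float, s: float, zeta: float) -> float:
    # Drift of the conditioned-excursion SDE (34):
    # dX = dB + ds / X - X ds / (zeta - s); for x > 0 the second term
    # is the additional negative drift pulling the excursion back to 0.
    return 1.0 / x - x / (zeta - s)

def coupled_paths(zeta: float, dt: float, x0: float, seed: int):
    # Euler scheme for (32) and (34) driven by the SAME Brownian
    # increments, as in the coupling of the proof; both paths are
    # floored at 1e-9 to keep the discretisation from crossing zero.
    rng = random.Random(seed)
    x = r = x0
    s = 0.0
    xs, rs = [x], [r]
    while s + dt < 0.9 * zeta:  # stop before the endpoint singularity
        db = rng.gauss(0.0, sqrt(dt))
        x = max(x + db + excursion_drift(x, s, zeta) * dt, 1e-9)
        r = max(r + db + bessel3_drift(r) * dt, 1e-9)
        s += dt
        xs.append(x)
        rs.append(r)
    return xs, rs
```

The drift inequality itself, $1/x - x/(\zeta-s) \le 1/x$ for $x>0$ and $s<\zeta$, is what makes the pathwise comparison (36) work under the exact dynamics.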

Lemma 1 has the following concrete and useful consequence. Observe that the event $K_n$ may be written as $\{Z(\omega)\in\mathcal{Z}_n\}$ for some set $\mathcal{Z}_n$. Indeed, recall that, almost surely, for every $t\ge 0$,
$$L_t = \lim_{\delta\to 0}\left(\frac{\pi\delta}{2}\right)^{1/2} N_t^\delta,$$
where $N_t^\delta$ is the number of excursions longer than $\delta$ by time $t$. (See, e.g., Proposition (2.9) of Chapter XII in [10] for a proof.) Thus we obtain in particular:
$$W\big(F(|X_s|, s\ge 0)\mid K_n\big) \le P^{(\mathrm{Bess})}\big(F(X_s, s\ge 0)\big). \tag{38}$$

Remark 1. An easy modification of the above proof shows that if $X_0 = x \ne 0$ almost surely (i.e., if $W$ is replaced by the law of $(x+B_t, t\ge 0)$ in (31)), then the same result holds, where $R$ is a Bessel process also started at $x$.

Corresponding to this upper bound on the growth of $X$ given $K_t$, we will prove a matching lower bound and show that, given $K_t$, $|X|$ dominates a reflecting Brownian motion.

Lemma 2. For all $T\ge 2$, and for any continuous bounded functional $F$ on $C$ which is nondecreasing for the order $\preceq$,
$$W_T\big(F(|X_s|, s\ge 0)\big) \ge W\big(F(|X_s|, s\ge 0)\big).$$

Proof. It is well known that a conditioned Brownian motion can be seen as an $h$-transform of the process, and hence as adding a drift to the Brownian motion. It suffices to show that this drift always has the same sign as the current position of the process. Formally, fix $T\ge 0$ and let
$$A = \big\{\omega\in\Omega: L_s(\omega)\le f(s) \text{ for all } 0\le s\le T\big\}. \tag{39}$$

For $t\le T$ and $0\le\ell\le f(t)$, define
$$A(t,\ell) = \big\{\omega\in\Omega: \ell + L_s(\omega) \le f(s+t) \text{ for all } 0\le s\le T-t\big\}. \tag{40}$$
That is, $A$ is the initial constraint, and $A(t,\ell)$ is what remains of that constraint after $t$ units of time, having already accumulated a local time $\ell$ at zero by that time. Let

$$h(x,t,\ell) = W_x\big(A(t,\ell)\big). \tag{41}$$
Then the conditioned process $P_T$ may be described by using Girsanov's theorem, to get that under $P_T$, $X$ is a solution to
$$dX_t = dW_t + \nabla_x \log h(X_t, t, L_t)\,dt, \tag{42}$$
where $L$ is the local time at the origin of $X$ and $W$ is a one-dimensional Brownian motion. Details can be found, for instance, in [11], Section IV.39, in the case where the constraint depends only on the position of $X$ (and not on its local time as well), but the proof remains unchanged in this case. Assume for simplicity that $x\ge 0$, and let us show that $\nabla_x\log h(x,t,\ell)\ge 0$ for all $t\le T$ and $0\le\ell\le f(t)$. It suffices to prove that $(\partial h/\partial x)(x,t,\ell)\ge 0$, and thus that for all $y\ge x$ sufficiently close to $x$,

h(y, t, )h(x, t, ). (43)

In other words, we want to show that the probability of $A(t,\ell)$ is monotone in the starting point. We use a coupling argument similar to the one we used in [2], Lemma 8. Let $X$ be a standard Brownian motion started at $x$ and let $Y$ be another Brownian motion, started at $y$. Consider the stopping time
$$\tau=\inf\bigl\{t>0:\ Y_t=|X_t|\bigr\} \tag{44}$$
and construct the process $Z$ defined by $Z_t=Y_t$ for $t\le\tau$ and, for $t>\tau$,
$$Z_t=\begin{cases}X_t&\text{if }X_\tau=Y_\tau,\\ -X_t&\text{if }X_\tau=-Y_\tau.\end{cases}$$

Then $Z$ has the law of a Brownian motion started at $y$. Moreover, we claim that for any $s>0$, in this coupling,
$$L_s(Z)\le L_s(X).$$
Indeed, note that for any $s\le\tau$ we have $L_s(Z)=0$, since $Z$ cannot hit $0$ before $\tau$. Afterwards, the local time of $Z$ increases exactly as that of $X$; hence, in general, for all $s\ge 0$,
$$L_s(Z)=\bigl(L_s(X)-L_\tau(X)\bigr)_+. \tag{45}$$

From (45), we also see that the increment of $L_s(Z)$ over any time interval is smaller than or equal to the increment of $L_s(X)$ over the same time interval. Therefore, if $X$ satisfies $A(t,\ell)$, then so does $Z$. As a consequence,
$$\mathbb{W}_x\bigl(A(t,\ell)\bigr)\le\mathbb{W}_y\bigl(A(t,\ell)\bigr),$$
which is the same as (43). Thus Lemma 2 is proved. □
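For completeness, the increment comparison invoked after (45) is simply the fact that $x\mapsto x_+$ is nondecreasing and 1-Lipschitz; the following one-line check (a verification sketch, not part of the original argument) makes this explicit for any $0\le a\le b$:

```latex
L_b(Z)-L_a(Z)
  =\bigl(L_b(X)-L_\tau(X)\bigr)_+ - \bigl(L_a(X)-L_\tau(X)\bigr)_+
  \le L_b(X)-L_a(X),
% using |u_+ - v_+| \le |u - v| together with the monotonicity of s \mapsto L_s(X).
```

In particular, on any time interval the local time accumulated by $Z$ never exceeds that accumulated by $X$.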


A first consequence of these two dominations is a simple proof that the conditioned processes form a tight family of random paths.

Lemma 3. The family of measures $\{P_t\}_{t\ge 0}$ is tight.

Proof. By classical weak convergence arguments (see, e.g., Billingsley [3]), it suffices to prove that for each fixed $A>0$, the following two limit relations hold:
$$\lim_{b\to\infty}\limsup_{T\to\infty}P_T\Bigl(\sup_{0\le s\le A}X(s)>b\Bigr)=0, \tag{46}$$
and, for each $\eta>0$,
$$\lim_{\varepsilon\to 0}\limsup_{T\to\infty}P_T\Bigl(\sup_{0\le s,s'\le A,\,|s-s'|\le\varepsilon}\bigl|X(s)-X(s')\bigr|>\eta\Bigr)=0. \tag{47}$$

Equation (46) is a direct consequence of Lemma 1, so we move to the proof of (47). The domination from above by a Bessel process and from below by a reflecting Brownian motion gives a uniform Kolmogorov estimate:
$$E_T\bigl(|X_t-X_s|^4\bigr)\le C(t-s)^2$$
for some $C>0$ and all $t,s>0$. For instance, to get a bound on $E\bigl((X_t-X_s)_+^4\bigr)$, it suffices to note that if $s<t$ (without loss of generality), then conditionally on $\mathcal F_s$, the process $(X_{s+u},\,u\ge 0)$ under $\mathbb{W}_T$ may be realised again as a Brownian motion conditioned upon satisfying a constraint of the form $K_{T-s}$, and hence the domination by a 3-dimensional Bessel process started from $X_s$ holds. The bound on $E\bigl((X_t-X_s)_-^4\bigr)$ follows similarly.

Thus, if $n$ is an integer, then by Markov's inequality and a union bound over $0\le k\le A2^n$,
$$\mathbb{W}_T\Bigl(\sup_{0\le k\le A2^n}\bigl|X_{(k+1)2^{-n}}-X_{k2^{-n}}\bigr|\ge 2^{-n/8}\Bigr)\le A2^n\cdot C2^{-2n}\cdot 2^{n/2}=CA2^{-n/2}.$$
Thus, summing over $n\ge N$ for some $N\ge 1$:
$$\mathbb{W}_T\Bigl(\exists\,n\ge N:\ \sup_{0\le k\le A2^n}\bigl|X_{(k+1)2^{-n}}-X_{k2^{-n}}\bigr|\ge 2^{-n/8}\Bigr)\le CA2^{-N/2}, \tag{48}$$
where $C$ absorbs the geometric factor $(1-2^{-1/2})^{-1}$.
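Spelling out the arithmetic behind (48) (a verification sketch, with $C$ allowed to change from line to line): for a single dyadic increment of length $2^{-n}$, the Kolmogorov estimate and Markov's inequality give

```latex
W_T\bigl(|X_{(k+1)2^{-n}}-X_{k2^{-n}}|\ge 2^{-n/8}\bigr)
  \le \frac{E_T\,|X_{(k+1)2^{-n}}-X_{k2^{-n}}|^4}{(2^{-n/8})^4}
  \le C\,(2^{-n})^2\,2^{n/2} = C\,2^{-3n/2};
% union bound over the A2^n increments at scale n, then a geometric sum over n >= N:
A2^n\cdot C2^{-3n/2} = CA2^{-n/2},
\qquad
\sum_{n\ge N}CA2^{-n/2} = \frac{CA}{1-2^{-1/2}}\,2^{-N/2}.
```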

Now, let $s,t$ be two dyadic rationals such that $s<t$ and $|s-t|<2^{-N}$, and assume that the complement of the event on the left-hand side of (48) holds. Let $r\ge N$ be the least integer such that $|s-t|>2^{-r-1}$. Then there exist an integer $0\le k\le A2^r$, as well as two integers $\ell,m$, such that
$$s=k2^{-r}-\varepsilon'_1 2^{-r-1}-\cdots-\varepsilon'_\ell 2^{-r-\ell},\qquad t=k2^{-r}+\varepsilon_1 2^{-r-1}+\cdots+\varepsilon_m 2^{-r-m},$$
with $\varepsilon_i,\varepsilon'_i\in\{0,1\}$. For $0\le i\le\ell$ and $0\le j\le m$, let
$$s_i=k2^{-r}-\varepsilon'_1 2^{-r-1}-\cdots-\varepsilon'_i 2^{-r-i},\qquad t_j=k2^{-r}+\varepsilon_1 2^{-r-1}+\cdots+\varepsilon_j 2^{-r-j}.$$
Thus, by the triangle inequality,
$$|X_t-X_s|\le |X_{t_0}-X_{s_0}|+\sum_{i=1}^{\ell}|X_{s_i}-X_{s_{i-1}}|+\sum_{j=1}^{m}|X_{t_j}-X_{t_{j-1}}|\le 2^{-r/8}+\sum_{i=1}^{\ell}2^{-(r+i)/8}+\sum_{j=1}^{m}2^{-(r+j)/8}\le \frac{3}{1-2^{-1/8}}\,2^{-r/8}\le c|t-s|^{1/8}$$
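The last two inequalities can be checked directly; the final step uses $|t-s|>2^{-r-1}$:

```latex
2^{-r/8}+\sum_{i=1}^{\ell}2^{-(r+i)/8}+\sum_{j=1}^{m}2^{-(r+j)/8}
  \le 2^{-r/8}\Bigl(1+\frac{2\cdot 2^{-1/8}}{1-2^{-1/8}}\Bigr)
  = \frac{1+2^{-1/8}}{1-2^{-1/8}}\,2^{-r/8}
  \le \frac{3}{1-2^{-1/8}}\,2^{-r/8};
% and since |t-s| > 2^{-r-1}, i.e. 2^{-r} < 2|t-s|, we get
2^{-r/8} < 2^{1/8}\,|t-s|^{1/8}.
```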


with $c=3\cdot 2^{1/8}/(1-2^{-1/8})$. Therefore, by the density of the dyadic rationals in $[0,A]$, it follows that
$$\mathbb{W}_T\Bigl(\sup_{s,t\in[0,A];\,|s-t|\le\varepsilon}|X_s-X_t|>c|s-t|^{1/8}\Bigr)\le CA\varepsilon^{1/2},$$
where $\varepsilon=2^{-N}$. Since the right-hand side does not depend on $T$, we may take the lim sup as $T\to\infty$, and (47) follows directly. □

3.2. Proof of Theorem 1

Lemma 4. Let $\gamma>0$. There exists $C>0$ such that for all $n$ large enough,
$$\mathbb{W}\bigl(A_j\mid K_{j-1}\bigr)\le C\,\frac{f(t_j)}{\sqrt{t_j}}. \tag{49}$$

Proof. We use an a priori rough upper bound, based on the local time accumulated by the end of the interval $I_j$. Let $\tau_j=\inf\{t>t_{j-1}:\ X_t=0\}$. Conditionally on $H_j$ and $\mathcal F_{\tau_j}$, the process $(X_{\tau_j+t},\,t\ge 0)$ is a Brownian motion started at $0$, by the strong Markov property. Thus, by (25),
$$\mathbb{W}\bigl(C_j\mid \mathcal F_{\tau_j};K_{j-1},H_j\bigr)\le C\,\frac{f(t_j)}{\sqrt{t_j-\tau_j}}.$$

Now note that if $T_0$ denotes the hitting time of zero, $\mathbb{W}_x$ denotes the Wiener measure started from $x$, and $S_t=\sup_{s\le t}X_s$, then by translation invariance and the reflection principle we have, for all $t>0$ and $x\neq 0$,
$$\frac{\mathbb{W}_x(T_0\in dt)}{dt}=\frac{d}{dt}\mathbb{W}(S_t>x)=\frac{d}{dt}\,2\int_{x/\sqrt{t}}^{\infty}e^{-v^2/2}\,\frac{dv}{\sqrt{2\pi}}=ct^{-3/2}xe^{-x^2/2t}\le ct^{-1}, \tag{50}$$
where $c>0$ is a universal constant independent of $t$ and $x$. As a consequence, conditioning on $\mathcal F_{t_{j-1}}$ and letting $x=X_{t_{j-1}}$,
$$\mathbb{W}\bigl(A_j\mid \mathcal F_{t_{j-1}},X_{t_{j-1}}=x\bigr)\le \int_{t_{j-1}}^{t_j}\mathbb{W}\bigl(\tau_j\in ds\mid \mathcal F_{t_{j-1}},X_{t_{j-1}}=x\bigr)\,\frac{Cf(t_j)}{\sqrt{t_j-s}}\le Cf(t_j)\int_{t_{j-1}}^{t_j}\frac{ds}{s\sqrt{t_j-s}}\le C\,\frac{f(t_j)}{\sqrt{t_{j-1}}}\int_1^2\frac{1}{\sqrt{2-u}}\,\frac{du}{u}\le C\,\frac{f(t_j)}{\sqrt{t_j}}. \tag{51}$$
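Two elementary computations used above, spelled out as a verification sketch. The final bound in (50) follows by maximising over $u=x/\sqrt{t}$; the middle integral in (51) is evaluated by the substitution $s=t_{j-1}u$, assuming (as the limits of integration $1$ and $2$ indicate) that $t_j=2t_{j-1}$:

```latex
% (50): writing u = x/\sqrt{t},
t^{-3/2}\,x\,e^{-x^2/2t} = t^{-1}\,u\,e^{-u^2/2} \le e^{-1/2}\,t^{-1},
% since u e^{-u^2/2} attains its maximum e^{-1/2} at u = 1.

% (51): with s = t_{j-1}u, so ds = t_{j-1}\,du and t_j - s = t_{j-1}(2-u),
\int_{t_{j-1}}^{t_j}\frac{ds}{s\sqrt{t_j-s}}
  = \int_1^2\frac{t_{j-1}\,du}{t_{j-1}u\,\sqrt{t_{j-1}(2-u)}}
  = \frac{1}{\sqrt{t_{j-1}}}\int_1^2\frac{1}{\sqrt{2-u}}\,\frac{du}{u},
% the u-integral being a finite constant, while 1/\sqrt{t_{j-1}} = \sqrt{2}/\sqrt{t_j},
% which is absorbed into C.
```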

Taking the expectation of both sides, we deduce the result.

We now show how the above lemma can be extended to show that the probability of hitting zero during the interval $I_j$ is uniformly small as $n\to\infty$. This is the key lemma of the proof, since it deals precisely with the way the self-interaction of the process "propagates" down from $+\infty$ to the finite window $[t_{j-1},t_j]$ we are considering.

Lemma 5. There exist $C>0$ and $j_0\ge 1$ such that
$$\mathbb{W}\bigl(H_j\mid K_n\bigr)\le C\,\frac{f(t_j)}{\sqrt{t_j}}, \tag{52}$$
uniformly in $n\ge j+1$ and $j\ge j_0$.
