

HAL Id: hal-00917448

https://hal.archives-ouvertes.fr/hal-00917448

Submitted on 11 Dec 2013


Random walk with heavy tail and negative drift conditioned by its minimum and final values

Vincent Bansaye, Vladimir Vatutin

To cite this version:

Vincent Bansaye, Vladimir Vatutin. Random walk with heavy tail and negative drift conditioned by its minimum and final values. 2013. ⟨hal-00917448⟩


Random walk with heavy tail and negative drift conditioned by its minimum and final values

Vincent Bansaye and Vladimir Vatutin, December 11, 2013

Abstract

We consider random walks with finite second moment which drift to −∞ and have a heavy tail. We focus on the events when the minimum and the final value of this walk belong to some compact set. We first specify the associated probability. Then, conditionally on such an event, we finely describe the trajectory of the random walk. It yields a decomposition theorem with respect to a random time giving a big jump whose distribution can be described explicitly.

1 Introduction and main results

We consider a random walk S = (S_n : n ≥ 0) generated by a sequence (X_n : n ≥ 1) of i.i.d. random variables distributed as a random variable X. Thus,

S_n = Σ_{i=1}^n X_i, (S_0 = 0).
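A concrete toy example may help fix ideas. The following sketch (our own illustration, not taken from the paper) simulates such a walk with a Pareto-type step satisfying the assumptions below: X = Z − 5/2 with Z standard Pareto of index 3, so that E[X] = −1 and P(X > x) = (x + 5/2)^{−3}, a heavy tail with slowly varying part l ≡ 1.

```python
import random

# Toy example (ours, not from the paper): a walk S_n = X_1 + ... + X_n whose
# step X = Z - 2.5 has a Pareto(3) part Z, so P(X > x) = (x + 2.5)**(-3) for
# x > -1.5, and drift E[X] = E[Z] - 2.5 = 3/2 - 5/2 = -1.
BETA = 3.0
SHIFT = 2.5  # chosen so that E[X] = beta/(beta - 1) - SHIFT = -1

def step(rng):
    # Inverse-transform sampling of Pareto(beta): P(Z > z) = z**(-beta), z >= 1.
    return (1.0 - rng.random()) ** (-1.0 / BETA) - SHIFT

def walk(n, rng):
    s, path = 0.0, [0.0]
    for _ in range(n):
        s += step(rng)
        path.append(s)
    return path

rng = random.Random(0)
path = walk(20000, rng)
print(path[-1] / 20000)  # by the law of large numbers, close to E[X] = -1
```

Since the tail index is 3 > 2, the step has finite variance, matching the standing assumptions of the paper.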

We assume that the random walk has a negative drift

E[X] = −a < 0, (1)

and a heavy tail

A(x) = P(X > x) = l(x)/x^β, (2)

where β > 2 and l(x) is a function slowly varying at infinity. Thus, the random variable X under the measure P does not satisfy the Cramér condition and has finite variance. We further suppose that, for any fixed ∆ > 0,

x (l(x + ∆)/l(x) − 1) → 0 as x → ∞,

CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France; e-mail:

bansaye@polytechnique.edu

Department of Discrete Mathematics, Steklov Mathematical Institute, 8, Gubkin str., 119991, Moscow, Russia; e-mail: vatutin@mi.ras.ru


which is equivalent to

P(X ∈ (x, x + ∆]) = ∆β P(X > x)/x · (1 + o(1)) = ∆β A(x)/x · (1 + o(1)) (3)

as x → ∞.

To formulate the results of the present paper we introduce two important random variables

M_n = max(S_1, ..., S_n), L_n = min(S_1, ..., S_n)

and two right-continuous functions U : R → R_{≥0} and V : R → R_{≥0} given by

U(x) = 1 + Σ_{k=1}^∞ P(S_k ≥ −x, M_k < 0), x ≥ 0,

V(x) = 1 + Σ_{k=1}^∞ P(S_k ≤ x, L_k ≥ 0), x ≥ 0,

and 0 elsewhere. In particular U(0) = V(0) = 1. It is well known that U(x) = O(x) as x → ∞. Moreover, V(x) is uniformly bounded in x in view of E[X] < 0.
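The boundedness of V is easy to see numerically. The sketch below (our own check, for the hypothetical toy step X = Z − 5/2 with Z Pareto(3), drift −1, and assuming the reading V(x) = 1 + Σ_{k≥1} P(S_k ≤ x, L_k ≥ 0)) estimates the truncated series by Monte Carlo; since the drift is negative, P(L_k ≥ 0) decays fast and a moderate truncation K suffices.

```python
import random

# Monte Carlo estimate (ours) of the truncated series
# V(x) ~ 1 + sum_{k=1..K} P(S_k <= x, L_k >= 0)
# for the toy step X = Z - 2.5, Z Pareto(3), which has drift -1.
def v_estimate(x, K=60, trials=4000, seed=1):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        s, low = 0.0, 0.0
        for k in range(1, K + 1):
            s += (1.0 - rng.random()) ** (-1.0 / 3.0) - 2.5
            low = min(low, s)
            if low < 0.0:
                break  # L_k >= 0 already fails, hence it fails for all later k
            if s <= x:
                acc += 1.0
    return 1.0 + acc / trials

print(v_estimate(1.0), v_estimate(10.0), v_estimate(100.0))  # bounded in x
```

The three printed values increase in x but stay close to each other, illustrating that V(x) converges to the finite constant 1 + Σ_k P(L_k ≥ 0) as x → ∞.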

It will be convenient to write 1_n for the n-dimensional vector whose coordinates are all equal to 1 and to set S_{j,n} = (S_j, S_{j+1}, ..., S_n) for j ≤ n, with S_n = S_{0,n} and S_{n,0} = (S_n, S_{n−1}, ..., S_0). Similar notation will be used for nonrandom vectors; say, s_{n,0} = (s_n, s_{n−1}, ..., s_0). Let

b_n = βP(X > an)/(an).

With this notation in hand, we first describe the asymptotic behavior of the probability of the event that the random walk remains above some level −x within the time interval [0, n] and ends up at time n below the level T.

Theorem 1 For any x ≥ 0 and T > −x, as n → ∞,

P(S_n < T, L_n ≥ −x) ~ b_n U(x) ∫_0^{x+T} V(z) dz (4)

and for any x ≥ 0 and T < x, as n → ∞,

P(S_n > T, M_n < x) ~ b_n V(x) ∫_0^{x−T} U(z) dz. (5)

Our second goal is to demonstrate that if the event {S_n < T, L_n ≥ −x} occurs, then the trajectory of the random walk on [0, n] has a big jump of order an + O(√n); such a jump is unique and happens at the beginning of the trajectory. Using this fact we also describe the full trajectory of the random walk.
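The "one big jump" mechanism behind these results can be seen directly by simulation. The sketch below (our toy example with step X = Z − 5/2, Z Pareto(3), a = 1; not the paper's construction) checks that the unlikely event {S_n ≥ 0} has probability comparable to n·P(X > an), and that it is essentially always realized through a single jump of order an; n is kept small so that plain Monte Carlo can see the event.

```python
import random

# Numerical illustration (ours) of the one-big-jump heuristic: for a
# heavy-tailed walk with drift -1, P(S_n >= 0) is comparable to n * P(X > n),
# and on {S_n >= 0} there is typically one step of order n.
n, trials = 20, 150_000
rng = random.Random(2)
hits = big_jump_hits = 0
for _ in range(trials):
    s, mx = 0.0, 0.0
    for _ in range(n):
        x = (1.0 - rng.random()) ** (-1.0 / 3.0) - 2.5  # Pareto(3) step, drift -1
        s += x
        mx = max(mx, x)
    if s >= 0.0:
        hits += 1
        if mx >= n / 2.0:          # was there a jump of order an = n?
            big_jump_hits += 1

p_hat = hits / trials
benchmark = n * (n + 2.5) ** (-3.0)  # n * P(X > an) with a = 1
print(p_hat, benchmark, big_jump_hits / max(hits, 1))
```

The empirical probability and the benchmark agree within a moderate constant factor (the asymptotics hold only as n → ∞), and the overwhelming majority of the conditioned trajectories do contain a jump exceeding n/2.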


Theorem 2 For all x > 0 and T ∈ R there exists a sequence of numbers π_j = π_j(x) > 0, Σ_{j≥1} π_j = 1, such that for each j the following properties hold:

(i) lim_{n→∞} P(X_j ≥ an/2 | L_n ≥ −x, S_n ≤ T) = π_j;

(ii) For each measurable and bounded function F : R^j → R and each family of measurable uniformly bounded functions F_n : R^{n+1} → R such that

lim_{ε→0} sup_{n∈N, s_n∈R^{n+1}} |F_n(s_n + ε1_{n+1}) − F_n(s_n)| = 0, (6)

we have, as n → ∞,

E[F(S_{j−1}) F_{n−j}(S_{j,n}) | L_n ≥ −x, S_n ≤ T, X_j ≥ an/2] − E[F(S_{j−1}) | L_{j−1} ≥ −x] · E_µ[F_{n−j}(S′_{n−j,0}) | L′ ≥ −x] → 0,

where S′ is a random walk whose step is distributed as −X (hence with positive drift), L′ is its global minimum and µ is a probability measure given by:

µ(dy) = 1_{y∈[−x,T]} θ^{−1} P_y(L′ ≥ −x) dy, θ = ∫_{−x}^T P_y(L′ ≥ −x) dy. (7)

In words, this theorem yields the decomposition of the trajectory of (S_i : i ≤ n) conditioned by its minimum L_n and final value S_n. It says that, conditionally on L_n ≥ −x and S_n = s, S jumps with probability π_j at some (finite) time j. Before this time, S is simply conditioned to be larger than −x. After this time, reversing the trajectory yields a random walk S′ (with positive drift) conditioned to be larger than −x. The size of the jump at time j links the value S_{j−1} to S′_{n−j−1} = s + a(n−j−1) + √n W_n, where, thanks to the central limit theorem, W_n converges in distribution, as n → ∞, to a Gaussian random variable. Thus this big jump is of order an + W_n√n, as stated below. The proof is deferred to Section 5.

Corollary 3 Let κ = inf{j ≥ 1 : X_j ≥ an/2}. Under P, conditionally on L_n ≥ −x and S_n ≤ T, κ converges in distribution to a proper random variable whose distribution (π_j : j ≥ 1) is specified by

π_j = π_j(x) = P(L_{j−1} ≥ −x) / Σ_{k≥0} P(L_k ≥ −x)

and (X_κ − an)/√n converges in distribution to a centered Gaussian law with variance σ² = Var(X).

Proof. The expression of π_j can be found in STEP 4 of the proof of Theorem 2, see (23). The second part of the corollary is an application of the second part of the mentioned theorem with

F ≡ 1, F_n(s_1, ..., s_{n+1}) = g((s_1 − an)/√n)

for g uniformly continuous and bounded, if one takes into account the positivity of the drift of S′, which allows us to neglect the condition L′ ≥ −x and to use the central limit theorem.
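The qualitative content of Corollary 3 — a unique early jump of size about an under the joint conditioning — can be observed on a toy walk by brute-force rejection sampling. The sketch below is our own illustration (step X = Z − 5/2, Z Pareto(3), a = 1; the thresholds and the small n are arbitrary choices, not the paper's):

```python
import random

# Rejection sampling (ours) of the conditioned walk: keep trajectories with
# L_n >= -x and S_n <= T, then record the first time kappa with X_kappa >= an/2
# and the corresponding jump size (an = n here since a = 1).
n, x, T = 15, 5.0, 0.0
rng = random.Random(3)
kappas, jumps = [], []
hits = 0
for _ in range(200_000):
    steps, s, low = [], 0.0, 0.0
    for _ in range(n):
        xi = (1.0 - rng.random()) ** (-1.0 / 3.0) - 2.5
        steps.append(xi)
        s += xi
        low = min(low, s)
    if low >= -x and s <= T:       # the conditioning event of Theorem 1
        hits += 1
        for j, xi in enumerate(steps, start=1):
            if xi >= n / 2.0:      # the big-jump event X_j >= an/2
                kappas.append(j)
                jumps.append(xi)
                break

print(hits, len(kappas))
if kappas:
    print(sum(kappas) / len(kappas), sum(jumps) / len(jumps))
```

On the accepted trajectories the big jump typically occurs within the first few steps (κ small), and its size stays in a window around an — in contrast with the minimum-only conditioning, under which the jump size is spread over a scale of order n.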

We note that random walks with negative (or positive) drift satisfying conditions (1) and (2) (and even weaker assumptions) have been investigated by many authors (see, for instance, [6, 11, 14, 15] and the monograph [9] with references therein). Article [11] is the closest to the subject of the present paper.

Durrett obtained there scaling limit results for random walks meeting conditions (1) and (2), but conditioned on the minimum value only. He showed that, for each M ≥ a, the size of the big jump may exceed the value Mn with positive probability. The additional condition on the final value that we impose in Theorem 1 modifies the size of the big jump by forcing it to be concentrated in a vicinity of the point an, with deviations of order √n, and allows us to provide in Theorem 2 a non-scaled decomposition of the asymptotic conditional path.

We stress that when the increments of the random walk are in the domain of attraction of a Gaussian law with zero mean, such problems have already been investigated. See, in particular, [16] for the convergence of the scaled random walk to the Brownian excursion.

The initial motivation for the results of the present paper comes from branching processes in random environment (BPRE). The survival probability of such processes is deeply linked to the behavior of the random walk associated with the successive log mean offspring sizes log m [13, 3, 4, 1, 2, 8, 17]. The fine results given here are required to get the asymptotic survival probability of the subcritical class of BPRE such that log m has density

p(x) = l_0(x)/x^{β+1} · e^{−ρx}, (8)

where l_0(x) is a function slowly varying at infinity, β > 2 and ρ ∈ (0, 1). We refer to [5] for precise statements and proofs.

2 Preliminaries: Some classical results on random walks

Our arguments essentially use a number of statements from the theory of random walks, which are collected in this section.

In the sequel we shall meet situations in which the random walk starts from an arbitrary point x ∈ R. In such cases we write, as usual, P_x(·) for the corresponding probabilities, and use for brevity P instead of P_0.

We define

τ_n = min{0 ≤ k ≤ n : S_k = min(0, L_n)}, τ = min{k > 0 : S_k < 0}


and let

D = Σ_{k=1}^∞ (1/k) P(S_k ≥ 0).
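For a concrete feel for this constant, the following sketch (our own toy example: X = Z − 5/2 with Z Pareto(3), so the drift is −1) estimates the truncated sum Σ_{k≤K}(1/k)P(S_k ≥ 0) by Monte Carlo. With a negative drift P(S_k ≥ 0) decays fast, so the series converges quickly and a small K suffices.

```python
import random

# Monte Carlo estimate (ours) of the truncated series
# D ~ sum_{k=1..K} (1/k) * P(S_k >= 0)
# for the toy step X = Z - 2.5, Z Pareto(3), which drifts to -infinity at rate 1.
def d_estimate(K=30, trials=20000, seed=4):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        s = 0.0
        for k in range(1, K + 1):
            s += (1.0 - rng.random()) ** (-1.0 / 3.0) - 2.5
            if s >= 0.0:
                acc += 1.0 / k
    return acc / trials

print(d_estimate())  # a small positive constant for this strongly drifting walk
```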

Now we list some known statements for convenience of reference. The first lemma is taken directly from [9], Theorem 8.2.4 (page 376) and Theorem 8.2.18 (page 389).

Lemma 4 Under conditions (1) and (2), as n → ∞,

P(L_n ≥ 0) = P(τ > n) ~ e^D P(X > an) (9)

and for any fixed x > 0,

lim_{n→∞} P(L_n ≥ −x)/P(τ > n) = U(x). (10)

The next statement is an easy corollary of Theorem 4.7.1, page 218, of the monograph [9].

Lemma 5 Let X be a non-lattice random variable with E[X] = −a < 0 whose distribution satisfies condition (3). If S̃_n = X_1 + ··· + X_n + an, then for any ∆ > 0, uniformly in x ≥ n^{2/3},

P(S̃_n ∈ [x, x + ∆)) = ∆β n A(x)/x · (1 + o(1)).

In the sequel we use several times the following lemma, in which i) does not require a proof and ii) is a special case of Theorem 1 in [10].

Lemma 6 Let (r_n) be a regularly varying sequence with Σ_{k=0}^∞ r_k < ∞.

i) If δ_n ~ d·r_n and η_n ~ e·r_n, then Σ_{i=0}^n δ_i η_{n−i} ~ c·r_n as n → ∞, with c = d Σ_{k=0}^∞ η_k + e Σ_{k=0}^∞ δ_k.

ii) If Σ_{k=0}^∞ α_k t^k = exp{Σ_{k=0}^∞ r_k t^k} for |t| < 1, then α_n ~ c·r_n as n → ∞, with c = Σ_{k=0}^∞ α_k.
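Part ii) is easy to check numerically. The sketch below (our own check, with the hypothetical choice r_k = (k+1)^{−3}, a summable regularly varying sequence) builds the coefficients α_n of exp{Σ_k r_k t^k} through the standard recursion n·α_n = Σ_{j=1}^n j·r_j·α_{n−j} for the exponential of a power series, and compares α_n/r_n with Σ_k α_k:

```python
import math

# Numeric check (ours) of Lemma 6 ii) for r_k = (k+1)**(-3): if
# sum alpha_k t^k = exp(sum r_k t^k), then alpha_n / r_n -> c = sum alpha_k.
# Coefficients of exp of a power series satisfy n*alpha_n = sum_j j*r_j*alpha_{n-j}.
N = 400
r = [(k + 1) ** (-3.0) for k in range(N + 1)]
alpha = [math.exp(r[0])]
for n in range(1, N + 1):
    alpha.append(sum(j * r[j] * alpha[n - j] for j in range(1, n + 1)) / n)

c = sum(alpha)           # the predicted constant, approximately exp(zeta(3))
ratio = alpha[N] / r[N]
print(ratio, c)          # the two numbers should be close
```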

We introduce two functions

K_1(λ) = (1/λ) exp{Σ_{n=1}^∞ (1/n) E[e^{λS_n}; S_n < 0]} = (1/λ) (1 + Σ_{n=1}^∞ E[e^{λS_n}; M_n < 0]) = ∫_0^∞ e^{−λx} U(x) dx, (11)

K_2(λ) = (1/λ) exp{Σ_{n=1}^∞ (1/n) E[e^{−λS_n}; S_n ≥ 0]} = (1/λ) (1 + Σ_{n=1}^∞ E[e^{−λS_n}; L_n ≥ 0]) = ∫_0^∞ e^{−λz} V(z) dz, (12)

being well defined for λ > 0.

Note that the intermediate equalities in (11) and (12) are simply versions of the Baxter identities (see, for instance, Chapter XVIII.3 in [12] or Chapter 8.9 in [7]).

3 Asymptotic behavior of the distribution of (S_n, L_n)

Based on the three previous lemmas, we prove the following statement.

Lemma 7 Assume that E[X] < 0 and that A(x) meets condition (3). Then, for any λ > 0, as n → ∞,

E[e^{λS_n}; τ_n = n] = E[e^{λS_n}; M_n < 0] ~ K_1(λ) b_n (13)

and

E[e^{−λS_n}; τ > n] = E[e^{−λS_n}; L_n ≥ 0] ~ K_2(λ) b_n. (14)

Proof. We prove (14) only. Statement (13) (proved in [17] under slightly stronger conditions) may be checked in a similar way. First we evaluate the quantity

E[e^{−λS_n}; S_n ≥ 0] = E[e^{−λS_n}; 0 ≤ S_n < λ^{−1}(β + 2) log n] + O(n^{−β−2}). (15)

Clearly, for any ∆ > 0,

Σ_{0≤k≤(β+2)λ^{−1}∆^{−1} log n} e^{−λ(k+1)∆} P(k∆ + an ≤ S̃_n < (k+1)∆ + an) ≤ E[e^{−λS_n}; 0 ≤ S_n < λ^{−1}(β+2) log n] ≤ Σ_{0≤k≤(β+2)λ^{−1}∆^{−1} log n} e^{−λk∆} P(k∆ + an ≤ S̃_n < (k+1)∆ + an).

Recall that by Lemma 5, in the range of k under consideration,

P(k∆ + an ≤ S̃_n < (k+1)∆ + an) = ∆β n A(k∆ + an)/(k∆ + an) · (1 + o(1)) = (∆β/a) A(an) (1 + o(1)),

where o(1) is uniform in 0 ≤ k ≤ (β+2)λ^{−1}∆^{−1} log n. Now passing to the limit as n → ∞ we get

∆ Σ_{k=0}^∞ e^{−λ(k+1)∆} ≤ liminf_{n→∞} a E[e^{−λS_n}; 0 ≤ S_n < λ^{−1}(β+2) log n] / (β A(an)) ≤ limsup_{n→∞} a E[e^{−λS_n}; 0 ≤ S_n < λ^{−1}(β+2) log n] / (β A(an)) ≤ ∆ Σ_{k=0}^∞ e^{−λk∆}.


Letting ∆ → 0+, we see that

lim_{n→∞} a E[e^{−λS_n}; 0 ≤ S_n < λ^{−1}(β+2) log n] / (β A(an)) = λ^{−1}.

Combining this with (15) we conclude that, as n → ∞,

E[e^{−λS_n}; S_n ≥ 0] = (β/(λa)) A(an) (1 + o(1)) = (β/(λa)) P(X > an) (1 + o(1)). (16)

We know by the Baxter identity (see, for instance, Chapter 8.9 in [7]) that for λ > 0 and t ∈ [0, 1),

1 + Σ_{n=1}^∞ t^n E[e^{−λS_n}; L_n ≥ 0] = exp{Σ_{n=1}^∞ (t^n/n) E[e^{−λS_n}; S_n ≥ 0]}.

From (16) and part ii) of Lemma 6, applied with r_n = (1/n) E[e^{−λS_n}; S_n ≥ 0] ~ b_n/λ, we get, as n → ∞,

E[e^{−λS_n}; L_n ≥ 0] ~ K_2(λ) βP(X > an)/(an) = K_2(λ) b_n,

where K_2(λ) is specified by (12). This gives statement (14) of the lemma.

Lemma 8 For x ≥ 0 and λ > 0 we have, as n → ∞:

E_{−x}[e^{λS_n}; M_n < 0] ~ b_n V(x) ∫_0^∞ e^{−λz} U(z) dz, (17)

E_x[e^{−λS_n}; L_n ≥ 0] ~ b_n U(x) ∫_0^∞ e^{−λz} V(z) dz. (18)

Proof. This proof follows the lines of the proof of Proposition 2.1 in [1]. By the continuity theorem for Laplace transforms, Lemmas 6 and 7 give for any x ∈ [0, ∞) and λ > 0

b_n^{−1} E[e^{λS_n}; M_n < 0, S_n > −x] → ∫_0^x e^{−λz} U(z) dz, (19)

b_n^{−1} E[e^{−λS_n}; L_n ≥ 0, S_n < x] → ∫_0^x e^{−λz} V(z) dz. (20)

Further, using duality we have

E[e^{λS_n}; M_n < x] = Σ_{i=0}^{n−1} E[e^{λS_n}; S_0, ..., S_{i−1} ≤ S_i < x, S_i > S_{i+1}, ..., S_n] + E[e^{λS_n}; S_0, ..., S_{n−1} ≤ S_n < x]

= Σ_{i=0}^{n−1} E[e^{λS_i}; L_i ≥ 0, S_i < x] · E[e^{λS_{n−i}}; M_{n−i} < 0] + E[e^{λS_n}; L_n ≥ 0, S_n < x].


This formula, together with (13), (20), the right continuity of V(z) for z ≥ 0 (implying V(0+) = V(0) = 1) and the equations

1 + Σ_{k=1}^∞ E[e^{λS_k}; L_k ≥ 0, S_k < x] = 1 + ∫_{(0,x)} e^{λz} dV(z) = e^{λx} V(x) − λ ∫_0^x e^{λz} V(z) dz,

1 + Σ_{k=1}^∞ E[e^{λS_k}; M_k < 0] = λ ∫_0^∞ e^{−λz} U(z) dz,

yield by Lemma 6 i) that for λ > 0 and x > 0

b_n^{−1} E[e^{λS_n}; M_n < x] → V(x) e^{λx} ∫_0^∞ e^{−λz} U(z) dz,

which gives (17) after multiplication by e^{−λx}. Using similar arguments one can get (18).

The continuity theorem for Laplace transforms together with (17) and (18) yields the asymptotic distribution of (S_n, L_n) on compact sets.

Proof of Theorem 1. By (18) and the continuity theorem for Laplace transforms, for any x ≥ 0 and y > 0 we have

E_x[e^{−λS_n}; S_n < y, L_n ≥ 0] ~ b_n U(x) ∫_0^y e^{−λz} V(z) dz,

giving

P_x(S_n < y, L_n ≥ 0) ~ b_n U(x) ∫_0^y V(z) dz

or

P(S_n < y − x, L_n ≥ −x) ~ b_n U(x) ∫_0^y V(z) dz,

justifying (4) with T = y − x.

The asymptotic representation (5) may be checked by the same arguments.

4 Conditional description of the random walk

In this and subsequent sections we agree to denote by C, C1, C2, ... positive constants which may be different in different formulas or even within one and the same complicated expression.

Our first result shows that the random walk may stay above a fixed level for a long time only if it has at least one big jump. Let

B_j(y) = {X_j + a < y}, B^{(n)}(y) = ∩_{j=1}^n B_j(y).


Lemma 9 If E[X] = −a < 0 and condition (3) is valid, then there exists δ_0 ∈ (0, 1/4) such that for all δ ∈ (0, δ_0), k ∈ Z and −an/2 ≤ u ≤ M,

P_u(max_{1≤j≤n} X_j ≤ δan, S_n ≥ k) ≤ ε_M(k) n^{−β−1},

where ε_M(k) → 0 as M → ∞.

Proof. Set Y_n = (S_n + an)/σ, where σ² = Var(X) and S_0 = 0. It follows from Theorem 4.1.2 and Corollary 4.1.3 (i) in [9] (see also estimate (4.7.7) in the mentioned book) that if r > 2 and δ > 0 are fixed, then for x ≥ n^{2/3} and all sufficiently large n,

P(B^{(n)}(σx r^{−1}), Y_n ≥ x) ≤ (n P(X + a ≥ σx r^{−1}))^{r−δ}.

Since l(x) in (2) is slowly varying, x^{−1/4} l(x) → 0 as x → ∞. Hence we get for all sufficiently large n and β > 2

(n P(X + a ≥ σx r^{−1}))^{r−δ} ≤ (C n l(x)/x^β)^{r−δ} ≤ (C n/x^{β−1/4})^{r−δ} ≤ (C/n^{1/6})^{r−δ}.

We now fix r > 2 and δ_0 < 1/4 with rδ_0 = 1/2 such that (r − δ)/6 > β + 1 for all δ ∈ (0, δ_0). As a result we obtain that there exists γ > 0 such that

P_u(B^{(n)}(σx r^{−1}), S_n ≥ σx − an + u) ≤ C n^{−β−1−γ}

for all x ≥ n^{2/3}, where now S_0 = u. Setting σx = an/2 (so that σx r^{−1} = δ_0 an) we get

P_u(B^{(n)}(δ_0 an), S_n ≥ −an/2 + u) ≤ C n^{−β−1−γ}.

Therefore, for every k ∈ Z,

P_u(max_{1≤j≤n} X_j ≤ δ_0 an; S_n ≥ k) ≤ C n^{−β−1−γ} (21)

for all −an/2 ≤ u ≤ M. Since the left-hand side is decreasing as δ_0 ↓ 0, the desired statement follows.

We know by (4) that for any fixed N and l ≥ −N,

P(L_n ≥ −N, S_n ∈ [l, l+1)) ~ b_n U(N) ∫_{N+l}^{N+l+1} V(z) dz, n → ∞.

Hence, applying Lemma 9 with u = 0 we conclude that, as n → ∞,

P(L_n ≥ −N, S_n ∈ [l, l+1)) ~ P(L_n ≥ −N, S_n ∈ [l, l+1); B̄^{(n)}(δ_0 an)),

meaning that for the event {L_n ≥ −N, S_n ∈ [l, l+1)} to occur it is necessary to have at least one jump exceeding δ_0 an. The next statement shows that, in fact, there is exactly one such big jump on the interval [0, n], that it gives the contribution of order b_n to (4), and that the jump occurs at the beginning of the interval.


Lemma 10 Under conditions E[X] = −a < 0 and (3), for any fixed l and δ > 0,

lim_{J→∞} limsup_{n→∞} b_n^{−1} P(L_n ≥ −N, max_{J≤j≤n} X_j ≥ δan, S_n ∈ [l, l+1)) = 0

and, for any fixed J,

lim_{n→∞} b_n^{−1} P(∪_{i≠j, max(i,j)≤J} {X_i ≥ δan, X_j ≥ δan}) = 0.

Proof. Write for brevity S_n ∈ [l) if S_n ∈ [l, l+1). Then

P(L_n ≥ −N, X_j ≥ δan, S_n ∈ [l))
= ∫_{−N}^∞ P(S_{j−1} ∈ ds, L_{j−1} ≥ −N) ∫_{δan}^∞ P(X_j ∈ dt) P(S_{n−j} ∈ [l − t − s), L_{n−j} ≥ −t − s − N)
≤ ∫_{−N}^∞ P(S_{j−1} ∈ ds, L_{j−1} ≥ −N) ∫_{δan}^∞ P(X_j ∈ dt) P(S_{n−j} ∈ [l − t − s)).

By condition (3),

P(X_j ∈ [t, t+1)) ≤ C P(X > t)/t, t > 0.

This estimate and its monotonicity in t give

∫_{−N}^∞ P(S_{j−1} ∈ ds; L_{j−1} ≥ −N) ∫_{δan}^∞ P(X_j ∈ dt) P(S_{n−j} ∈ [l − t − s))
≤ C_1 (P(X > δan)/n) ∫_{−N}^∞ P(S_{j−1} ∈ ds; L_{j−1} ≥ −N) ∫_{δan}^∞ P(S_{n−j} ∈ [l − t − s)) dt.

Now

∫_{δan}^∞ P(S_{n−j} ∈ [l − t − s)) dt ≤ ∫_{−∞}^∞ dt ∫_{l−t−s}^{l−t−s+1} P(S_{n−j} ∈ dw) = ∫_{−∞}^∞ P(S_{n−j} ∈ dw) ∫_{l−s−w}^{l−s−w+1} dt = 1.

Thus,

P(L_n ≥ −N, X_j ≥ δan, S_n ∈ [l)) ≤ C (P(X > an)/n) ∫_{−N}^∞ P(S_{j−1} ∈ ds, L_{j−1} ≥ −N)
= C (P(X > an)/n) P(L_{j−1} ≥ −N) = C_1 b_n P(L_{j−1} ≥ −N).

By (10) the series Σ_{j≥1} P(L_{j−1} ≥ −N) converges, meaning that a big jump may occur only at the beginning. Moreover, it is unique on account of the estimate

P(X_i ≥ δan, X_j ≥ δan) = O(l²(n) n^{−2β}) = o(b_n)

for all i ≠ j with max(i, j) ≤ J and β > 2.

The next lemma gives additional information about the properties of the random walk in the presence of a big jump. Let

R_δ(M, K) = {δan ≤ X_1 ≤ an − M√n, |S_n| ≤ K}

and

R(M, K) = {X_1 ≥ an + M√n, |S_n| ≤ K}.

Lemma 11 Under conditions E[X] = −a < 0 and (3), for any δ ∈ (0, 1) and each fixed K,

lim_{M→∞} limsup_{n→∞} b_n^{−1} P(R_δ(M, K) ∪ R(M, K)) = 0.

Proof. Similarly to the previous lemma we have

P(R_δ(M, K)) = ∫_{δan}^{an−M√n} P(S_{n−1} ∈ [−K − x, K − x]) P(X_1 ∈ dx)
≤ C (P(X > δan)/(δan)) ∫_{δan}^{an−M√n} P(S_{n−1} ∈ [−2K − x, 2K − x]) dx
= C (P(X > δan)/(δan)) ∫_{δan}^{an−M√n} dx ∫_{−2K−x}^{2K−x} P(S_{n−1} ∈ dv)
≤ 4KC (P(X > δan)/(δan)) ∫_{−2K−an+M√n}^{2K−δan} P(S_{n−1} ∈ dv)
≤ 4KC (P(X > δan)/(δan)) P(S_{n−1} ≥ −2K − an + M√n)

and

P(R(M, K)) = ∫_{an+M√n}^∞ P(S_{n−1} ∈ [−K − x, K − x]) P(X_1 ∈ dx)
≤ C (P(X > an)/(an)) ∫_{an+M√n}^∞ P(S_{n−1} ∈ [−2K − x, 2K − x]) dx
≤ 4KC (P(X > an)/(an)) P(S_{n−1} ≤ 2K − an − M√n).

Since limsup_{n→∞} P(|S_{n−1} + an| ≥ M√n) decreases to 0 as M → ∞ by the central limit theorem, the desired statement follows.

5 Proof of Theorem 2

We start with the following important statement.
