HAL Id: hal-00844584

https://hal.archives-ouvertes.fr/hal-00844584v2

Submitted on 19 Dec 2013


On the survival of a class of subcritical branching processes in random environment

Vincent Bansaye, Vladimir Vatutin

To cite this version: Vincent Bansaye, Vladimir Vatutin. On the survival of a class of subcritical branching processes in random environment. 2013. hal-00844584v2

On the survival of a class of subcritical branching processes in random environment

Vincent Bansaye (CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France; e-mail: bansaye@polytechnique.edu) and Vladimir Vatutin (Department of Discrete Mathematics, Steklov Mathematical Institute, 8, Gubkin str., 119991, Moscow, Russia; e-mail: vatutin@mi.ras.ru)

December 19, 2013

Abstract

Let Z_n be the number of individuals in a subcritical BPRE evolving in an environment generated by iid probability distributions. Let X be the logarithm of the expected offspring size per individual given the environment. Assuming that the density of X has the form

p_X(x) = x^{−β−1} l_0(x) e^{−ρx}

for some β > 2, a slowly varying function l_0(x) and ρ ∈ (0, 1), we find the asymptotics of the survival probability P(Z_n > 0) as n → ∞, prove a Yaglom type conditional limit theorem for the process and describe the conditioned environment. The survival probability decreases exponentially with an additional polynomial term related to the tail of X. The proof uses in particular a fine study of a random walk (with negative drift and heavy tails) conditioned to stay positive until time n and to have a small positive value at time n, with n → ∞.

1 Introduction

We consider the model of branching processes in random environment introduced by Smith and Wilkinson [16]. The formal definition of these processes looks as follows. Let N be the space of probability measures on N_0 = {0, 1, 2, ...}. Equipped with the metric of total variation, N becomes a Polish space. Let e be a random variable taking values in N. An infinite sequence E = (e_1, e_2, ...) of i.i.d. copies of e is said to form a random environment. A sequence of N_0-valued random variables Z_0, Z_1, ... is called a branching process in the random environment E if Z_0 is independent of E and, given E, the process Z = (Z_0, Z_1, ...) is a Markov chain with

L( Z_n | Z_{n−1} = z_{n−1}, E = (e_1, e_2, ...) ) = L( ξ_{n1} + ⋯ + ξ_{n z_{n−1}} )   (1)

for every n ≥ 1, z_{n−1} ∈ N_0 and e_1, e_2, ... ∈ N, where ξ_{n1}, ξ_{n2}, ... are i.i.d. random variables with distribution e_n. Thus,

Z_n = Σ_{i=1}^{Z_{n−1}} ξ_{ni}   (2)

and, given the environment, Z is an ordinary inhomogeneous Galton-Watson process. We will denote the corresponding probability measure and expectation on the underlying probability space by P and E, respectively.
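The quenched dynamics (1)-(2) are straightforward to simulate. The following minimal sketch is an illustration and not part of the paper; the choice of geometric offspring laws with lognormal means is an arbitrary concrete environment, not the one of Hypothesis A below.

import numpy as np

rng = np.random.default_rng(1)

def simulate_bpre(n, z0=1, mu=-0.3, sigma=1.0):
    # One trajectory Z_0, ..., Z_n. Environment e_k: geometric law on {0,1,2,...}
    # whose mean exp(X_k) is lognormal, so X_k is Gaussian with mean mu < 0
    # (an illustrative subcritical choice).
    z, traj = z0, [z0]
    for _ in range(n):
        m = np.exp(rng.normal(mu, sigma))   # mean offspring number in this environment
        p = 1.0 / (1.0 + m)                 # geometric(p) on {0,1,...} has mean (1-p)/p = m
        z = rng.negative_binomial(z, p) if z > 0 else 0   # sum of z iid geometric(p) draws
        traj.append(z)
    return traj

n, trials = 30, 20000
print(sum(simulate_bpre(n)[-1] > 0 for _ in range(trials)) / trials)  # crude estimate of P(Z_n > 0)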

Let

X = log( Σ_{k≥0} k e({k}) ),   X_n = log( Σ_{k≥0} k e_n({k}) ),   n = 1, 2, ...,

be the logarithms of the expected offspring size per individual in the environments, and

S_0 = 0,   S_n = X_1 + ⋯ + X_n,   n ≥ 1,

be their partial sums.

This paper deals with the subcritical branching processes in random environment, i.e., in the sequel we always assume that

E[X] = −b < 0.   (3)

The subcritical branching processes in random environment admit an additional classification, which is based on the properties of the moment generating function

ϕ(t) = E[e^{tX}] = E[ ( Σ_{k≥0} k e({k}) )^t ],   t ≥ 0.

Clearly, ϕ′(0) = E[X]. Let

ρ_+ = sup{ t ≥ 0 : ϕ(t) < ∞ }

and let ρ_min be the point where ϕ(t) attains its minimal value on the interval [0, ρ_+ ∧ 1]. Then a subcritical branching process in random environment is called

weakly subcritical if ρ_min ∈ (0, ρ_+ ∧ 1),
intermediately subcritical if ρ_min = ρ_+ ∧ 1 > 0 and ϕ′(ρ_min) = 0,
strongly subcritical if ρ_min = ρ_+ ∧ 1 and ϕ′(ρ_min) < 0.

Note that this classification is slightly different from that given in [9]. Weakly subcritical and intermediately subcritical branching processes have been studied in detail in [14, 1, 2, 3]. Let us recall that ϕ′(ρ_+ ∧ 1) > 0 in the weakly subcritical case.
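The trichotomy above is easy to check numerically for a concrete environment. A hedged sketch follows, assuming X Gaussian, for which ϕ(t) = exp(µt + σ²t²/2) is explicit and ρ_+ = ∞; none of these choices come from the paper.

import numpy as np

def classify(phi, dphi, rho_plus):
    # Minimise phi on [0, rho_+ ∧ 1] on a grid and inspect the derivative there,
    # following the trichotomy in the text.
    t_max = min(rho_plus, 1.0)
    grid = np.linspace(0.0, t_max, 10001)
    rho_min = grid[np.argmin(phi(grid))]
    if 0.0 < rho_min < t_max:
        return "weakly subcritical"
    if np.isclose(rho_min, t_max) and abs(dphi(t_max)) < 1e-6:
        return "intermediately subcritical"
    return "strongly subcritical"

mu, sig = -0.5, 0.6
phi  = lambda t: np.exp(mu * t + 0.5 * (sig * t) ** 2)
dphi = lambda t: (mu + sig ** 2 * t) * phi(t)
print(classify(phi, dphi, np.inf))
# phi is minimised at t* = -mu/sig^2 ≈ 1.39 > 1, so rho_min = 1 and phi'(1) < 0:
# this example is strongly subcritical.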

The strongly subcritical case is also well studied for ρ_+ ≥ 1, i.e., if ρ_min = ρ_+ ∧ 1 = 1 and ϕ′(1) < 0. In particular, it was shown in [14] and refined in [5] that if ϕ′(1) = E[X e^X] < 0 and E[Z_1 log^+ Z_1] < ∞ then, as n → ∞,

P(Z_n > 0) ∼ K (E[ξ])^n,   K > 0,   (4)

and, in addition,

lim_{n→∞} E[ s^{Z_n} | Z_n > 0 ] = Ψ(s),   (5)

where Ψ(s) is the probability generating function of a proper nondegenerate random variable on Z_+. This statement is actually an extension of the classical result for ordinary subcritical Galton-Watson branching processes.

2 Main results

Our main concern in this paper is the strongly subcritical branching processes in random environment with ρ_+ ∈ (0, 1). More precisely, we assume that the following condition is valid.

Hypothesis A. The distribution of X has density

p_X(x) = l_0(x) x^{−β−1} e^{−ρx},   (6)

where l_0(x) is a function slowly varying at infinity, β > 2, ρ ∈ (0, 1) and, in addition,

ϕ′(ρ) = E[X e^{ρX}] < 0.   (7)

This assumption can be relaxed by assuming that p_X(x) is the density of X only for x large enough, or that the tail distribution satisfies P(X ∈ [x, x + ∆)) ∼ ∫_x^{x+∆} p_X(y) dy as x → ∞ (uniformly with respect to ∆ ≤ 1).

Clearly, ρ = ρ_+ < 1 under Hypothesis A. Observe that the case ρ = ρ_+ = 0, which is not included in Hypothesis A, has been studied in [17] and yields a new type of asymptotic behavior of subcritical branching processes in random environment. Namely, it was established that, as n → ∞,

P(Z_n > 0) ∼ K P(X > nb) = K l_0(nb)/(nb)^β,   K > 0,   (8)

so that the survival probability decays at a polynomial rate only. Moreover, for any ε > 0, some constant σ > 0 and any x ∈ R,

lim_{n→∞} P( (log Z_n − log Z_{[nε]} + n(1 − ε)b)/(σ√n) ≤ x | Z_n > 0 ) = P(B_1 − B_ε ≤ x),

where B_t is a standard Brownian motion. Therefore, given the survival of the population up to time n, the number of individuals in the process at this moment tends to infinity as n → ∞, which is not the case for the other types of subcritical processes in random environment.

The goal of the paper is to investigate the asymptotic behavior of the survival probability of the process meeting Hypothesis A and to prove a Yaglom-type conditional limit theorem for the distribution of the number of individuals. To this aim we additionally assume that the sequence of conditional probability measures

P^{[x]}(·) = P(· | X = x)

is well defined for x large enough under Hypothesis A. We provide in Section 3 natural examples where this assumption and Hypothesis B below are valid.

Denote by L = {L} the set of all proper probability measures L(·) of nonnegative random variables. Our next condition concerns the behavior of the measures P^{[x]} as x → ∞.

Hypothesis B. There exists a probability measure P* on L such that, as x → ∞,

P^{[x]} ⇒ P*,

where the symbol ⇒ stands for the weak convergence of measures.

Setting

a = −ϕ′(ρ)/ϕ(ρ) > 0,

we are now ready to formulate the first main result of the paper.

Theorem 1 If

E[ −log(1 − e({0})) ] < ∞,   E[ e^{−X} Σ_{k≥1} e({k}) k log k ] < ∞   (9)

and Hypotheses A and B are valid, then there exists a constant C_0 > 0 such that, as n → ∞,

P(Z_n > 0) ∼ C_0 ρ ϕ^{n−1}(ρ) e^{aρn} P(X > an) ∼ C_0 ϕ^{n−1}(ρ) l_0(n)/(an)^{β+1}.   (10)

We stress that ϕ(ρ) ∈ (0, 1). Moreover, the explicit form of C_0 can be found in (49). The proof is given in Section 6. We now quickly explain this asymptotic behavior and give at the same time an idea of the proof. In the next section, some examples of processes satisfying the assumptions required in Theorem 1 can be found.
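To see why the two expressions in (10) agree, integrate the tail of the density (6) by a standard Laplace-type estimate (a short verification supplied here for the reader's convenience; it is not part of the original text):

  P(X > an) = \int_{an}^{\infty} \frac{l_0(y)}{y^{\beta+1}}\, e^{-\rho y}\, dy
            \sim \frac{1}{\rho}\,\frac{l_0(an)}{(an)^{\beta+1}}\, e^{-\rho a n},
            \qquad n \to \infty,

so that ρ e^{aρn} P(X > an) ∼ l_0(an)/(an)^{β+1}, and l_0(an) may be replaced by l_0(n) because l_0 is slowly varying.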

For the proof, we introduce in Section 4.1 a new probability measure 𝐏. Under this new measure, the random walk S = (S_n, n ≥ 0) has drift −a < 0 and the heavy-tailed distribution of its increments has polynomial decay of order β. Recalling that E[exp(ρX)] = ϕ(ρ), we will get the survival probability as

ϕ^n(ρ) 𝐄[ e^{−ρS_n} P(Z_n > 0 | E) ] ≈ const × ϕ^n(ρ) 𝐏(L_n ≥ 0, S_n ≤ N),

where L_n is the minimum of the random walk up to time n and N is (large but) fixed.

We then make use of the properties of random walks with negative drift and heavy-tailed increments established in [7] to show that

𝐏(L_n ≥ 0, S_n ≤ N) ≈ const × 𝐏( X_1 ∈ [an − M√n, an + M√n], S_n ∈ [0, 1] )

for n large enough, and conclude using the central limit theorem.

Our second main result is a Yaglom-type conditional limit theorem.

Theorem 2 Under the conditions of Theorem 1,

lim_{n→∞} E[ s^{Z_n} | Z_n > 0 ] = Ω(s),

where Ω(s) is the probability generating function of a proper nondegenerate random variable on Z_+.

We see that, contrary to the case ρ_min = ρ_+ ∧ 1 = 0, this Yaglom-type limit theorem has the same form as for ordinary subcritical Galton-Watson processes.

Introduce a sequence of generating functions

f_n(s) = f(s; e_n) = Σ_{k=0}^∞ e_n({k}) s^k,   0 ≤ s ≤ 1,

specified by the environmental sequence (e_1, e_2, ..., e_n, ...) and denote

f_{j,n} = f_{j+1} ∘ ⋯ ∘ f_n,   f_{n,j} = f_n ∘ ⋯ ∘ f_{j+1}   (j < n),   f_{n,n} = Id.   (11)

For every pair n ≥ j ≥ 1, we define a random function g_j : R_+ → [0, 1], a tuple of random variables

W_{n,j} = (1 − f_{n,j}(0)) e^{−(S_n − S_j)}   (12)

and a random variable W_j on [0, 1] such that

(i) the distribution of g_j is given by P* and that of W_j is given by the (common) distribution of lim_{n→∞} W_{n,j}, which exists by monotonicity;

(ii) f_{0,j−1}, g_j and (W_{n,j}, W_j, f_k : k ≥ j + 1) are independent for each n ≥ j (this is always possible, the initial probability space being extended if required).
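For the linear-fractional environments of Section 3 the compositions in (11)-(12) are fully explicit, so W_{n,j} can be evaluated directly. A small sketch follows; the parameter choices are illustrative, and the linear-fractional class is an example rather than the general setting of the paper.

import numpy as np

rng = np.random.default_rng(7)

def lf_pgf(t, y):
    # linear-fractional pgf f(s) = 1 - t + t^2/(t + y(1 - s)), with mean f'(1) = y
    return lambda s: 1.0 - t + t * t / (t + y * (1.0 - s))

n, j = 200, 5
ts = rng.uniform(0.2, 1.0, size=n)          # illustrative environment parameters
Xs = rng.normal(-0.4, 0.8, size=n)          # X_k with negative mean, y_k = exp(X_k)
fs = [lf_pgf(ts[k], np.exp(Xs[k])) for k in range(n)]   # fs[k] plays the role of f_{k+1}

s = 0.0
for k in range(j, n):                        # f_{n,j}(0) = f_n(...f_{j+1}(0)...)
    s = fs[k](s)
W_nj = (1.0 - s) * np.exp(-Xs[j:n].sum())    # W_{n,j} = (1 - f_{n,j}(0)) e^{-(S_n - S_j)}
print(W_nj)                                  # lies in (0, 1] and decreases in n, cf. (43)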

Then we can set

c_j = ∫_{−∞}^{∞} 𝐄[ 1 − f_{0,j−1}(g_j(e^v W_j)) ] e^{−ρv} dv

and state the following result. It describes the environments that provide survival of the population until time n.

Theorem 3 For each j ≥ 1,

i) the following limit exists:

π_j = lim_{n→∞} P(X_j ≥ an/2 | Z_n > 0) = c_j ϕ^j(ρ) / Σ_{k≥1} c_k ϕ^k(ρ);

ii) for each measurable and bounded function F : R^j → R and each family of measurable, uniformly bounded functions F_n : R^{n+1} → R, the difference

E[ F(S_0, ..., S_{j−1}) F_{n−j}(S_n − S_{j−1}, X_{j+1}, ..., X_n) | Z_n > 0, X_j ≥ an/2 ]
 − c_j^{−1} 𝐄[ F(S_0, ..., S_{j−1}) ∫_{−∞}^{∞} F_{n−j}(v, X_n, ..., X_{j+1}) G_{j,n}(v) dv ]

goes to 0 as n → ∞, where

G_{j,n}(v) := (1 − f_{0,j−1}(g_j(e^v W_{n,j}))) e^{−ρv}.

Detailed descriptions of the properties of the random function g_j and of the random variable W are given in (21) and before the proof of Lemma 16, respectively. We refer to [2, 3, 4, 5] for similar questions in the subcritical and critical regimes. Here the conditioned environment is different, since a big jump appears, whereas the rest of the random walk looks like the original one. Let us now focus on this exceptional environment explaining the survival event and give a more explicit result.

Corollary 4 Let κ = inf{ j ≥ 1 : X_j ≥ an/2 }. Under P, conditionally on Z_n > 0, κ converges in distribution to a proper random variable whose distribution is given by (π_j : j ≥ 1). Moreover, conditionally on {Z_n > 0, X_j ≥ an/2}, the law of (X_κ − an)/√n converges to a law µ specified by

µ(B) = c_j^{−1} 𝐄[ 1(G ∈ B) ∫_{−∞}^{∞} (1 − f_{0,j−1}(g_j(e^v W_j))) e^{−ρv} dv ]

for any Borel set B ⊂ R, where G is a centered Gaussian random variable with variance Var X, which is independent of (f_{0,j−1}, g_j).

3 Examples

We provide here some examples meeting the conditions of Theorem 1. Thus, we assume that Hypothesis A is valid and we focus on the existence and convergence of P^{[x]}. Let us first deal with the existence of random reproduction laws e for which the conditional probability

P^{[x]}(·) = P(· | X = x)

is well defined.

Example 0. Assume that the environment e takes its values in some set M of probability measures such that, for all µ, ν ∈ M,

Σ_{k≥0} k µ({k}) < Σ_{k≥0} k ν({k}) ⇒ µ ≤ ν,

where µ ≤ ν means that µ[l, ∞) ≤ ν[l, ∞) for all l ∈ N. We note that Hypothesis A ensures that P(· | X ∈ [x, x + ε)) is well defined. Then, for every H : M → R_+ which is nondecreasing, in the sense that µ ≤ ν implies H(µ) ≤ H(ν), the functional

E[ H(e) | X ∈ [x, x + ε) ]

decreases to some limit p(H) as ε → 0. Thus, writing H_{l,y}(µ) = 1 if µ[l, ∞) ≥ y and 0 otherwise, we can define P^{[x]} via

P^{[x]}( e[l, ∞) ≥ y ) = p(H_{l,y})

to get the expected conditional probability.

Let us now focus on Hypothesis B.

Example 1. Let f(s; e) = Σ_{k≥0} e({k}) s^k be the (random) probability generating function corresponding to the random measure e ∈ N and let (with a slight abuse of notation) ξ = ξ(e) ≥ 0 be the integer-valued random variable with probability generating function f(s; e), i.e., f(s; e) = E[s^{ξ(e)}].

It is not difficult to understand that if E[log f′(1; e)] < 0 and there exists a deterministic function g(λ), λ ≥ 0, with g(λ) < 1 for λ > 0 and g(0) = 1, such that, for every ε > 0,

lim_{y→∞} P( e : sup_{0≤λ<∞} | f(e^{−λ/y}; e) − g(λ) | > ε  |  f′(1; e) = y = e^x ) = 0,

then Hypothesis B is satisfied for the respective subcritical branching process.

We now give two more explicit examples for which Hypothesis B holds true, and note that mixing the two classes described in these examples would provide a more general family satisfying Hypothesis B.

Let N_f ⊂ N be the set of probability measures on N_0 such that

e = e(t, y) ∈ N_f ⇔ f(s; e) = 1 − t + t²/(t + y(1 − s)),

where t ∈ (0, 1] and y ∈ (0, ∞), and let L_g ⊂ L be the set of probability measures such that

L = L(t, y) ∈ L_g ⇔ g(t, λ) = ∫ e^{−λy} L(t, dy) = 1 − t + t²/(t + λ).

Let, further, B = B_1 × B_2 ⊂ (0, 1] × (0, ∞) be a Borel set. We write e = e(t, y) ∈ T(B) ⊆ N_f if (t, y) ∈ B, and L = L(t, y) ∈ T(B_1) ⊆ L_g if t ∈ B_1.

Let (θ, ζ) be a pair of random variables with values in (0, 1] × (0, ∞) such that, for a measure P*(·) with support on (0, 1] and any Borel set B_1 ⊆ (0, 1], the limit

lim_{x→∞} P(θ ∈ B_1 | ζ = x) = P*(θ ∈ B_1)

exists.

With this notation in view we describe the desired two examples.

Example 2. Assume that the support of the probability measure P is concentrated on the set N_f only and the random environment e is specified by the relation

e = e(θ, ζ) ⇔ f(s; e) = 1 − θ + θ²/(θ + ζ(1 − s)).

Clearly, log f′(1; e) = log ζ. Thus,

P( e(θ, ζ) ∈ T(B) ) = P( f(s; e) : (θ, ζ) ∈ B ),

and if B = B_1 × {x}, then

lim_{x→∞} P( f(e^{−λζ^{−1}}; e) : (θ, ζ) ∈ B | ζ = e^x ) = P*(θ ∈ B_1) = P*( g(λ; θ) : θ ∈ B_1 ) = P*( L(θ; y) ∈ T(B_1) ).

Note that if P(θ = 1 | ζ = x) = 1 for all sufficiently large x, we get a particular case of Example 1.

Example 3. If the support of the environment is concentrated on probability measures e ∈ N such that, for any ε > 0,

lim_{y→∞} P( e : | ξ(e)/f′(1; e) − 1 | > ε  |  f′(1; e) = e^X = y ) = 0   (13)

and the density of the random variable X = log f′(1; e) is positive for all sufficiently large x, then g(λ) = e^{−λ}. Condition (13) is satisfied if, for instance,

lim_{y→∞} P( e : Var ξ(e)/(f′(1; e))² > ε | f′(1; e) = y ) = 0.

4 Preliminaries

4.1 Change of probability measure

A nowadays classical technique for studying subcritical branching processes in random environment (see, for instance, [14, 4, 2, 3]) is similar to the one used to investigate standard random walks satisfying the Cramér condition. Namely, denote by F_n the σ-algebra generated by the tuple (e_1, e_2, ..., e_n; Z_0, Z_1, ..., Z_n) and let P^{(n)} be the restriction of P to F_n. Setting

m = ϕ(ρ) = E[e^{ρX}],

we introduce another probability measure 𝐏 by the following change of measure:

d𝐏^{(n)} = m^{−n} e^{ρS_n} dP^{(n)},   n = 1, 2, ...,   (14)

or, what is the same, for any random variable Y_n measurable with respect to F_n we let

𝐄[Y_n] = m^{−n} E[Y_n e^{ρS_n}].   (15)

By (7),

𝐄[X] = m^{−1} E[X e^{ρX}] = ϕ′(ρ)/ϕ(ρ) = −a < 0.   (16)

Applying a Tauberian theorem we get

A(x) = 𝐏(X > x) = m^{−1} E[ I{X > x} e^{ρX} ] = m^{−1} ∫_x^∞ e^{ρy} p_X(y) dy
     = m^{−1} ∫_x^∞ l_0(y) y^{−β−1} dy ∼ (mβ)^{−1} l_0(x) x^{−β} = l(x)/x^β,   (17)

where l(x) is a function slowly varying at infinity. Thus, under the measure 𝐏 the random variable X does not satisfy the Cramér condition and has finite variance.

The density of X under 𝐏, which we denote p̄_X, is

p̄_X(x) = −A′(x) = m^{−1} l_0(x) x^{−β−1},

and it satisfies (see Theorem 1.5.2, p. 22 in [8]), for each M ≥ 0 and each ε(x) → 0 as x → ∞,

p̄_X(x + tε(x)x)/p̄_X(x) → 1,   x → ∞,   (18)

uniformly with respect to t ∈ [−M, M]. In particular,

A(x + ∆) − A(x) = −(∆ β A(x)/x)(1 + o(1))   (19)

as x → ∞, and setting

b_n = β A(an)/(an),

we have

b_n^{−1} p̄_X(an + t√n) → 1,   n → ∞,   (20)

uniformly with respect to t ∈ [−M, M].
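The change of measure (14)-(16) is an exponential tilt, and it can be illustrated by importance weighting: weighting samples of X by e^{ρX}/m reproduces 𝐄-quantities. Below is a hedged Monte Carlo sketch with an ad hoc increment law (a point mass mixed with a tilted Pareto tail); the parameters are chosen so that ϕ′(ρ) < 0 and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(3)
beta, rho = 2.5, 0.4                 # beta > 2 and rho in (0, 1), as in Hypothesis A

def sample_X(size):
    # X = -4 with prob 0.8; otherwise density proportional to x^{-(beta+1)} e^{-rho x} on [1, oo),
    # drawn by rejection from a Pareto proposal with density beta * x^{-(beta+1)} on [1, oo).
    out = np.empty(size)
    for i in range(size):
        if rng.random() < 0.8:
            out[i] = -4.0
        else:
            while True:
                x = rng.pareto(beta) + 1.0
                if rng.random() < np.exp(-rho * (x - 1.0)):  # accept with the tilting factor
                    out[i] = x
                    break
    return out

X = sample_X(100_000)
w = np.exp(rho * X)                  # change-of-measure weights e^{rho X}
m = w.mean()                         # Monte Carlo estimate of m = phi(rho)
print(m, (X * w).mean() / m)         # second value estimates E-bar[X] = phi'(rho)/phi(rho) = -a < 0

Under the tilted law the factor e^{−ρx} disappears from the positive tail, leaving the purely polynomial tail computed in (17).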

Let Φ = {Φ} be the metric space of the Laplace transforms

Φ(λ) = ∫_0^∞ e^{−λu} L(du),   λ ∈ [0, ∞),

of the laws from L, endowed with the metric

d(Φ_1, Φ_2) = sup_{2^{−1} ≤ λ ≤ 2} | Φ_1(λ) − Φ_2(λ) |.

Since the Laplace transform of the distribution of a nonnegative random variable is completely determined by its values on any interval of the positive half-line, convergence Φ_n → Φ as n → ∞ in the metric d is equivalent to weak convergence L_n →_w L of the respective probability measures.
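In code, the metric is a one-liner (a grid approximation of the supremum, for illustration only):

import numpy as np

def d(phi1, phi2, grid=np.linspace(0.5, 2.0, 401)):
    # sup-distance of two Laplace transforms over [1/2, 2], as in the definition above
    return float(np.max(np.abs(phi1(grid) - phi2(grid))))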

From now on, to avoid confusion, we agree to use P and E as the symbols of probability and expectation in the case when the respective distributions are not associated with the measures P or 𝐏.

Let F = {f(s)} be the set of all probability generating functions of integer-valued random variables η ≥ 0, i.e., f(s) = E[s^η], and let Φ^{(f)} ⊂ Φ be the closure (in the metric d) of the set of all Laplace transforms of the form

Φ(λ; f) = f( exp{−λ/f′(1)} ),   f ∈ F.

The probability measure P on N generates a natural probability measure on the metric space Φ^{(f)}, which we denote by the same symbol P.

Introduce a sequence of probability measures on Φ^{(f)} by the equality

P^{[x]}(·) = P(· | f′(1; e) = e^x).

With this new probability measure, Hypothesis B is now equivalent to

Hypothesis B′. There exists a measure P*(·) on Φ^{(f)} (with support on {Φ : Φ(0) = 1, Φ(λ) < 1 for λ > 0}) such that, as x → ∞,

P^{[x]} ⇒ P*.

In other words, Hypothesis B′ means that there exists a (random) function g(·), a.s. continuous on [0, ∞) with values in Φ^{(f)}, such that, for every continuous bounded functional H on Φ^{(f)},

lim_{x→∞} E^{[x]}[ H(Φ) ] = E*[ H(g) ].   (21)

Since, for any fixed λ ≥ 0, the functional H_λ(Φ) = Φ(λ) is continuous on Φ^{(f)}, we have, for y = e^x,

lim_{y→∞} E[ f(e^{−λ/y}; e) | f′(1; e) = y ] = E*[ g(λ) ],   λ ∈ [0, ∞),   (22)

and E*[g(0)] = 1, E*[g(λ)] < 1 if λ > 0. The prelimiting functions on the left-hand side of (22) have the form

E[ f(e^{−λ/y}; e) | f′(1; e) = y ] = E[ e^{−λξ(e)/y} | f′(1; e) = y ]

and, therefore, are the Laplace transforms of the distributions of some random variables. Hence, by the continuity theorem for Laplace transforms, there exists a proper nonnegative random variable θ such that

lim_{y→∞} E[ f(e^{−λ/y}; e) | f′(1; e) = y ] = E[ e^{−λθ} ],   λ ∈ [0, ∞).

Let now

h(s) = E[s^υ] = Σ_{k=0}^∞ h_k s^k,   h(1) = 1,

be the (deterministic) probability generating function of a nonnegative integer-valued random variable υ. Since, for any fixed λ ≥ 0, the functional H_{λ,h}(Φ) = h(Φ(λ)) is continuous on Φ^{(f)}, we have

lim_{y→∞} E[ h( f(e^{−λ/y}; e) ) | f′(1; e) = y ] = E*[ h(g(λ)) ],   λ ∈ [0, ∞).   (23)

The prelimiting and limiting functions are monotone and continuous on [0, ∞). Therefore, the convergence in (23) is uniform in λ ∈ [0, ∞).

Further, denoting by ξ_i(e), i = 1, 2, ..., independent copies of ξ(e), we get

E[ h( f(e^{−λ/y}; e) ) | f′(1; e) = y ] = Σ_{k=0}^∞ h_k E[ f^k(e^{−λ/y}; e) | f′(1; e) = y ]
 = Σ_{k=0}^∞ h_k E[ exp{ −(λ/y) Σ_{i=1}^k ξ_i(e) } | f′(1; e) = y ]
 = E[ exp{ −(λ/y) Ξ(e) } | f′(1; e) = y ],

where

Ξ(e) = Σ_{i=1}^υ ξ_i(e).

Thus, similarly to the previous arguments, there exists a proper random variable Θ such that

lim_{y→∞} E[ exp{ −(λ/y) Ξ(e) } | f′(1; e) = y ] = E[ e^{−λΘ} ],   λ ∈ [0, ∞).   (24)

As above, this convergence is uniform with respect to λ ∈ [0, ∞).

4.2 Some useful results on random walks

We collect here from [7] several results on random walks with negative drift and heavy tails which are useful for the forthcoming proofs. Recall that b_n = βA(an)/(an), and introduce three important random variables

M_n = max(S_1, ..., S_n),   L_n = min(S_1, ..., S_n),

and

τ_n = min{ 0 ≤ k ≤ n : S_k = L_n },

and two right-continuous functions U : R → R_0 = {x ≥ 0} and V : R → R_0 given by

U(x) = 1 + Σ_{k=1}^∞ 𝐏( −S_k ≤ x, M_k < 0 ),   x ≥ 0,

V(x) = 1 + Σ_{k=1}^∞ 𝐏( −S_k > x, L_k ≥ 0 ),   x ≤ 0,

and 0 elsewhere. In particular, U(0) = V(0) = 1. It is well known that U(x) = O(x) as x → ∞. Moreover, V(−x) is uniformly bounded in x, in view of 𝐄X < 0.

With this notation in hand we recall the following result, established in Lemma 7 of [7].

Lemma 5 Assume that 𝐄[X] < 0 and that A(x) meets condition (19). Then, for any λ > 0, as n → ∞,

𝐄[ e^{λS_n}; τ_n = n ] = 𝐄[ e^{λS_n}; M_n < 0 ] ∼ b_n ∫_0^∞ e^{−λz} U(z) dz   (25)

and

𝐄[ e^{−λS_n}; τ > n ] = 𝐄[ e^{−λS_n}; L_n ≥ 0 ] ∼ b_n ∫_0^∞ e^{−λz} V(−z) dz.   (26)

Moreover, from (19) and (20) in [7] we know that, for λ > 0,

b_n^{−1} 𝐄[ e^{λS_n}; M_n < 0, S_n < −x ] → ∫_x^∞ e^{−λz} U(z) dz,   (27)

b_n^{−1} 𝐄[ e^{−λS_n}; L_n ≥ 0, S_n > x ] → ∫_x^∞ e^{−λz} V(−z) dz,   (28)

and gathering Lemmas 9, 10 and 11 in [7] yields

Lemma 6 If 𝐄[X] = −a < 0 and condition (19) is valid, then

(i) there exists δ_0 ∈ (0, 1/4) such that, for an/2 − u ≥ M, all δ ∈ (0, δ_0) and k ∈ Z,

𝐏_u( max_{1≤j≤n} X_j ≤ δan, S_n ≥ k ) ≤ ε_M(k) n^{−(β+1)},

where ε_M(k) ↓ 0 as M → ∞. Moreover, for any fixed l and δ ∈ (0, 1),

lim_{J→∞} lim sup_{n→∞} b_n^{−1} 𝐏( L_n ≥ −N, max_{J≤j≤n} X_j ≥ δan, S_n ∈ [l, l + 1) ) = 0;

(ii) for any fixed δ ∈ (0, 1) and K ≥ 0,

lim_{M→∞} lim sup_{n→∞} b_n^{−1} 𝐏( δan ≤ X_1 ≤ an − M√n or X_1 ≥ an + M√n; |S_n| ≤ K ) = 0;

(iii) for each fixed δ > 0 and J ≥ 2,

lim_{n→∞} b_n^{−1} 𝐏( ∪_{1≤i≠j≤J} { X_i ≥ δan, X_j ≥ δan } ) = 0.

Combining the limit for J → ∞ in (i) with (iii), we get that, for any fixed N, K ≥ 0,

lim_{n→∞} b_n^{−1} 𝐏( ∪_{1≤i≠j≤n} { X_i ≥ δan, X_j ≥ δan }; L_n ≥ −N, |S_n| ≤ K ) = 0.   (29)
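Lemma 6 and (29) express the one-big-jump principle: on the rare event {L_n ≥ 0, |S_n| ≤ K}, the heavy-tailed walk with drift −a makes exactly one increment of size about an. This can be seen by naive Monte Carlo; the sketch below uses illustrative shifted-Pareto increments (not the law of the paper) and is only feasible for small n because the event is rare.

import numpy as np

rng = np.random.default_rng(5)
n, N = 30, 2.0
a = 1.5 - 1.0 / (2.5 - 1.0)          # -E[X] = 5/6 for the increment law below

hits = big = 0
for _ in range(20):                   # 20 batches of 100000 walks
    X = rng.pareto(2.5, size=(100_000, n)) - 1.5    # tail index 2.5, negative drift
    S = np.cumsum(X, axis=1)
    cond = (S.min(axis=1) >= 0.0) & (S[:, -1] <= N) # the rare event {L_n >= 0, S_n <= N}
    hits += int(cond.sum())
    big += int((X[cond].max(axis=1) >= a * n / 2).sum())  # largest increment >= an/2 ?
print(hits, big / max(hits, 1))       # conditioned walks almost always carry one big jump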

5 Proofs

In this section we use the notation

E_e[·] = E[· | E],   P_e(·) = P(· | E),

i.e., we consider the expectation and probability given the environment E. Our aim is to prove the following statement.

Lemma 7 If Hypotheses A and B are valid, then there exists a constant C_0 > 0 such that, as n → ∞,

P(Z_n > 0) ∼ C_0 m^n β 𝐏(X > an)/(an) = C_0 m^n b_n.   (30)

We recall from the discussion in the Preliminaries that Hypotheses A and B (or B′) ensure that there exists g(λ), a.s. continuous on [0, ∞) with g(0) = 1 and E*[g(λ)] < 1 for λ > 0, such that, for every continuous bounded function H on [0, 1],

lim_{y→∞} sup_{λ≥0} | E[ H(f(e^{−λ/y})) | f′(1) = y ] − E*[ H(g(λ)) ] | = 0.   (31)

Making the change of measure in accordance with (14) and (15), we see that it is necessary to show that, as n → ∞,

𝐄[ P_e(Z_n > 0) e^{−ρS_n} ] ∼ C_0 b_n.   (32)

The proof of this fact is conducted in several steps, which we split into subsections.

5.1 Time of the minimum of S

First, we prove that the contribution to 𝐄[P_e(Z_n > 0) e^{−ρS_n}] may be of order b_n only if the minimal value of S within the interval [0, n] is attained at the beginning or at the end of this interval. To this aim we use, as earlier, the notation τ_n = min{0 ≤ k ≤ n : S_k = L_n} and show that the following statement is valid.

Lemma 8 Given Hypotheses A and B, we have

lim_{M→∞} lim_{n→∞} b_n^{−1} 𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n ∈ [M, n − M] ] = 0.

Proof. In view of the estimate

P_e(Z_n > 0) ≤ min_{0≤k≤n} P_e(Z_k > 0) ≤ exp( min_{0≤k≤n} S_k ) = e^{S_{τ_n}},

we have

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n ∈ [M, n − M] ] ≤ 𝐄[ e^{S_{τ_n} − ρS_n}; τ_n ∈ [M, n − M] ]
 = Σ_{k=M}^{n−M} 𝐄[ e^{(1−ρ)S_k + ρ(S_k − S_n)}; τ_n = k ]
 = Σ_{k=M}^{n−M} 𝐄[ e^{(1−ρ)S_k}; τ_k = k ] 𝐄[ e^{−ρS_{n−k}}; L_{n−k} ≥ 0 ].   (33)

Hence, using Lemma 5, we get

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n ∈ [M, n − M] ]
 ≤ ( Σ_{k=M}^{[n/2]} + Σ_{k=[n/2]+1}^{n−M} ) 𝐄[ e^{(1−ρ)S_k}; τ_k = k ] 𝐄[ e^{−ρS_{n−k}}; L_{n−k} ≥ 0 ]
 ≤ C_1 n^{−1} 𝐏(X > an/2) Σ_{k=M}^{[n/2]} 𝐄[ e^{(1−ρ)S_k}; τ_k = k ]
  + C_2 n^{−1} 𝐏(X > an/2) Σ_{k=M}^{[n/2]} 𝐄[ e^{−ρS_k}; L_k ≥ 0 ]
 ≤ ε_M b_n,   (34)

where ε_M → 0 as M → ∞.

The following statement easily follows from (34) by taking M = 0.

Corollary 9 Given Hypotheses A and B, there exists C ∈ (0, ∞) such that, for all n = 1, 2, ...,

𝐄[ P_e(Z_n > 0) e^{−ρS_n} ] ≤ 𝐄[ e^{S_{τ_n} − ρS_n} ] ≤ C b_n.

5.2 Fluctuations of the random walk S

Introduce the event

C_N = { −N < S_{τ_n} ≤ S_n ≤ N + S_{τ_n} < N }.

In particular, on C_N,

−N < S_n < N.

In what follows we agree to denote by ε_N, ε_{N,n} or ε_{N,K,n} functions of the lower indices such that

lim_{N→∞} ε_N = lim_{N→∞} lim sup_{n→∞} |ε_{N,n}| = lim_{N→∞} lim sup_{K→∞} lim sup_{n→∞} |ε_{N,K,n}| = 0,

i.e., the lim sup (or lim) are taken sequentially with respect to the indices of ε_{···} in reverse order. Note that these functions are not necessarily the same in different formulas, or even within one and the same complicated expression.

Lemma 10 Given Hypotheses A and B, for any fixed k,

lim_{N→∞} lim sup_{n→∞} b_n^{−1} 𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n = k, C̄_N ] = 0

and

lim_{N→∞} lim sup_{n→∞} b_n^{−1} 𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n = n − k, C̄_N ] = 0,

where C̄_N denotes the complement of C_N.

Proof. In view of (28),

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n = k, S_n − S_{τ_n} ≥ N ]
 ≤ 𝐄[ e^{(1−ρ)S_{τ_n}} e^{−ρ(S_n − S_{τ_n})}; τ_n = k, S_n − S_{τ_n} ≥ N ]
 ≤ 𝐄[ e^{−ρS_{n−k}}; L_{n−k} ≥ 0, S_{n−k} ≥ N ] ≤ ε_N b_n,

where ε_N → 0 as N → ∞, since ∫_0^∞ e^{−ρz} V(−z) dz < ∞. Further,

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n = k, S_{τ_n} ≤ −N ]
 ≤ 𝐄[ e^{(1−ρ)S_{τ_n}} e^{−ρ(S_n − S_{τ_n})}; τ_n = k, S_{τ_n} ≤ −N ]
 ≤ e^{−(1−ρ)N} 𝐄[ e^{−ρS_{n−k}}; L_{n−k} ≥ 0 ] ≤ ε_N b_n.   (35)

This, in particular, means that

𝐄[ e^{(1−ρ)S_{τ_n}} e^{−ρ(S_n − S_{τ_n})}; τ_n = k, S_n ∉ (−N, N) ] = ε_{N,n} b_n   (36)

and

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n = k ]
 = 𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n = k, S_{τ_n} ≥ −N, S_n − S_{τ_n} ≤ N ] + ε_{N,n} b_n.

Similarly, by (27),

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n = n − k, S_{τ_n} ≤ −N ]
 ≤ 𝐄[ e^{(1−ρ)S_{τ_n}} e^{−ρ(S_n − S_{τ_n})}; τ_n = n − k, S_{τ_n} ≤ −N ]
 ≤ 𝐄[ e^{(1−ρ)S_{n−k}}; τ_{n−k} = n − k, S_{n−k} ≤ −N ]
 = 𝐄[ e^{(1−ρ)S_{n−k}}; M_{n−k} < 0, S_{n−k} ≤ −N ] = ε_{N,n} b_n

and

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n = n − k, S_n − S_{τ_n} ≥ N ]
 ≤ 𝐄[ e^{(1−ρ)S_{τ_n}} e^{−ρ(S_n − S_{τ_n})}; τ_n = n − k, S_n − S_{τ_n} ≥ N ]
 ≤ e^{−ρN} 𝐄[ e^{(1−ρ)S_{n−k}}; τ_{n−k} = n − k ]
 = e^{−ρN} 𝐄[ e^{(1−ρ)S_{n−k}}; M_{n−k} < 0 ] = ε_{N,n} b_n.

As a result we get

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n = n − k ]
 = 𝐄[ P_e(Z_n > 0) e^{−ρS_n}; τ_n = n − k, S_{τ_n} ≥ −N, S_n − S_{τ_n} ≤ N ] + ε_{N,n} b_n.

This completes the proof of the lemma.

Lemmas 8 and 10 easily imply the following statement.

Corollary 11 Under Hypotheses A and B,

𝐄[ P_e(Z_n > 0) e^{−ρS_n} ]
 = 𝐄[ P_e(Z_n > 0) e^{−ρS_n}; |S_n| < N; τ_n ∈ [0, M] ∪ [n − M, n] ] + ε_{N,M,n} b_n
 = 𝐄[ P_e(Z_n > 0) e^{−ρS_n}; |S_n| < N ] + ε_{N,n} b_n
 = 𝐄[ P_e(Z_n > 0) e^{−ρS_n}; S_{τ_n} ≥ −N, S_n < N ] + ε̃_{N,n} b_n,   (37)

where

lim_{N→∞} lim sup_{M→∞} lim sup_{n→∞} |ε_{N,M,n}| = lim_{N→∞} lim sup_{n→∞} ( |ε_{N,n}| + |ε̃_{N,n}| ) = 0.

5.3 Asymptotics of the survival probability

In this section we investigate in detail the properties of the survival probability for processes meeting Hypotheses A and B. As we know (see (15)), this probability is expressed as

P(Z_n > 0) = m^n 𝐄[ P_e(Z_n > 0) e^{−ρS_n} ].

We wish to show that 𝐄[P_e(Z_n > 0) e^{−ρS_n}] is of order b_n as n → ∞.

First we get rid of the trajectories giving a contribution of order o(b_n) to the quantity in question. Let

D_N(j) = { −N < S_{τ_n} ≤ S_n < N, X_j ≥ δan }.

Lemma 12 If Hypotheses A and B are valid, then there exists δ ∈ (0, 1/4) such that

𝐄[ P_e(Z_n > 0) exp(−ρS_n) ] = Σ_{j=1}^{J} 𝐄[ P_e(Z_n > 0) exp(−ρS_n); D_N(j) ] + ε_{N,J,n} b_n.   (38)

Proof. In view of Corollary 11, we just need to prove that

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; S_{τ_n} ≥ −N, S_n < N ]
 = Σ_{j=1}^{J} 𝐄[ P_e(Z_n > 0) exp(−ρS_n); D_N(j) ] + ε_{N,J,n} b_n.   (39)

From the estimate

P_e(Z_n > 0) exp(−ρS_n) ≤ exp(S_{τ_n} − ρS_n) = exp( (1 − ρ)S_{τ_n} − ρ(S_n − S_{τ_n}) ) ≤ 1   (40)

we deduce by Lemma 6 (i) that

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; S_{τ_n} ≥ −N, S_n < N, max_{0≤j≤n} X_j < δan ] = ε_{N,n} b_n

for δ ∈ (0, δ_0), and

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; S_{τ_n} ≥ −N, S_n < N, max_{J≤j≤n} X_j ≥ δan ] = ε_{N,J,n} b_n.

Thus,

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; S_{τ_n} ≥ −N, S_n < N ]
 = 𝐄[ P_e(Z_n > 0) e^{−ρS_n}; S_{τ_n} ≥ −N, S_n < N, max_{0≤j≤J} X_j ≥ δan ] + ε_{N,J,n} b_n.

Finally, thanks to Lemma 6 (iii), there is only one big jump (before J), i.e.,

𝐄[ P_e(Z_n > 0) e^{−ρS_n}; S_{τ_n} ≥ −N, S_n < N, ∪_{1≤i≠j≤J} {X_i ≥ δan, X_j ≥ δan} ] = ε_{N,J,n} b_n.

This yields (39) and ends the proof.

Now we fix j ∈ [1, J] and investigate the quantity 𝐄[ P_e(Z_n > 0) exp(−ρS_n); D_N(j) ]. First, we check that S_{j−1} is bounded on the event we focus on.

Lemma 13 If Hypotheses A and B are valid then, for every fixed j,

𝐄[ P_e(Z_n > 0) exp(−ρS_n); |S_{j−1}| ≥ N, X_j ≥ δan ] = ε_{N,n} b_n.

Proof. First observe that

𝐄[ P_e(Z_n > 0) exp(−ρS_n); S_{j−1} ≤ −N, X_j ≥ δan ]
 ≤ 𝐄[ exp( (1−ρ)S_{τ_n} − ρ(S_n − S_{τ_n}) ); S_{j−1} ≤ −N, X_j ≥ δan ]
 ≤ 𝐄[ exp( (1−ρ)S_{τ_n} ); S_{j−1} ≤ −N, X_j ≥ δan ]
 ≤ 𝐄[ exp( −(1−ρ)N ); X_j ≥ δan ]
 = exp( −(1−ρ)N ) 𝐏(X ≥ δan) = ε_{N,n} b_n.

Further, taking γ ∈ (0, 1) such that γβ > 1, we get

𝐄[ exp(S_{τ_n} − ρS_n); S_{j−1} ≥ n^γ, X_j ≥ δan ] ≤ 𝐏(S_{j−1} ≥ n^γ) 𝐏(X ≥ δan)
 ≤ j 𝐏(X ≥ n^γ/j) 𝐏(X ≥ δan) ∼ j^{β+1} n^{−γβ} l(n^γ) 𝐏(X ≥ δan) = ε_n b_n.   (41)

Consider now the situation S_{j−1} ∈ [N, n^γ], j ≥ 2, and write

𝐄[ exp(S_{τ_n} − ρS_n); S_{j−1} ∈ [N, n^γ], X_j ≥ δan ]
 = ∫_N^{n^γ} ∫_{−∞}^0 𝐏(S_{j−1} ∈ dy, L_{j−1} ∈ dz) H_{n,δ}(y, z),

where

H_{n,δ}(y, z) = ∫_{δan}^∞ 𝐏(X ∈ dt) ∫_{−∞}^0 ∫_v^∞ 𝐏_{y+t}(L_{n−j} ∈ dv, S_{n−j} ∈ dw) e^{z∧v} e^{−ρw}
 = ∫_{δan+y}^∞ 𝐏(X ∈ dt − y) ∫_{−∞}^0 ∫_v^∞ 𝐏_t(L_{n−j} ∈ dv, S_{n−j} ∈ dw) e^{z∧v} e^{−ρw}.

By our conditions, 𝐏(X ∈ dt − y) = 𝐏(X ∈ dt)(1 + o(1)) uniformly in y ∈ [0, n^γ] and t ≥ δan. Thus, for all sufficiently large n,

H_{n,δ}(y, z) ≤ 2 ∫_{δan}^∞ 𝐏(X ∈ dt) ∫_{−∞}^0 ∫_v^∞ 𝐏_t(L_{n−j} ∈ dv, S_{n−j} ∈ dw) e^{z∧v} e^{−ρw}
 ≤ 2 ∫_{δan}^∞ 𝐏(X ∈ dt) ∫_{−∞}^0 ∫_v^∞ 𝐏_t(L_{n−j} ∈ dv, S_{n−j} ∈ dw) e^{v} e^{−ρw}
 = 2 ∫_{δan}^∞ 𝐏(X ∈ dt) 𝐄_t[ e^{S_{τ_{n−j}} − ρS_{n−j}} ]
 ≤ 2 𝐄_0[ e^{S_{τ_{n−j+1}} − ρS_{n−j+1}}; X_1 ≥ δan ] = 2 H_{n,δ}(0, 0).

By integrating this inequality we get, for sufficiently large n,

∫_N^{n^γ} ∫_{−∞}^0 𝐏(S_{j−1} ∈ dy, L_{j−1} ∈ dz) H_{n,δ}(y, z)
 ≤ 2 ∫_N^{n^γ} ∫_{−∞}^0 𝐏(S_{j−1} ∈ dy, L_{j−1} ∈ dz) H_{n,δ}(0, 0)
 ≤ 2 𝐏(S_{j−1} ≥ N) 𝐄_0[ e^{S_{τ_{n−j+1}} − ρS_{n−j+1}}; X_1 ≥ δan ].

Since

b_n^{−1} H_{n,δ}(0, 0) = b_n^{−1} 𝐄[ e^{S_{τ_{n−j+1}} − ρS_{n−j+1}}; X_1 ≥ δan ] ≤ b_n^{−1} 𝐄[ e^{S_{τ_{n−j+1}} − ρS_{n−j+1}} ] = O(1)

as n → ∞ (see Corollary 9) and 𝐏(S_{j−1} ≥ N) → 0 as N → ∞, we obtain

𝐄[ exp(S_{τ_n} − ρS_n); S_{j−1} ∈ [N, n^γ], X_j ≥ δan ] = ε_{N,n} b_n.   (42)

Combining (41) and (42) proves the lemma.

Lemma 14 Given Hypotheses A and B, we have for each fixed j

𝐄[ P_e(Z_n > 0) exp(−ρS_n); |S_n − S_{j−1}| > K, X_j ≥ δan ] = ε_{K,n}(j) b_n.

Proof. We know from Lemma 13 that only the values S_{j−1} ≤ N, for a sufficiently large but fixed N, are of importance. Thus, we just need to prove that, for fixed N,

𝐄[ e^{S_{τ_n} − ρS_n}; S_{j−1} ≤ N, |S_n − S_{j−1}| > K, X_j ≥ δan ] = ε_{N,K,n}(j) b_n,

where lim_{K→∞} lim sup_{n→∞} |ε_{N,K,n}(j)| = 0. To this aim we set

L_{j,n} = min{ S_k − S_{j−1} : j − 1 ≤ k ≤ n }

and, using the inequality S_{τ_n} ≤ S_{j−1} + L_{j,n}, deduce the estimate

𝐄[ e^{S_{τ_n} − ρS_n}; S_{j−1} ≤ N, |S_n − S_{j−1}| > K, X_j ≥ δan ]
 ≤ 𝐄[ e^{S_{j−1} + L_{j,n} − ρ(S_n − S_{j−1}) − ρS_{j−1}}; S_{j−1} ≤ N, |S_n − S_{j−1}| > K ]
 = 𝐄[ e^{(1−ρ)S_{j−1}}; S_{j−1} ≤ N ] 𝐄[ e^{L_{j,n} − ρ(S_n − S_{j−1})}; |S_n − S_{j−1}| > K ].

We conclude with 𝐄[ e^{(1−ρ)S_{j−1}}; S_{j−1} ≤ N ] < ∞, and we can now control the term

𝐄[ e^{L_{j,n} − ρ(S_n − S_{j−1})}; |S_n − S_{j−1}| > K ] = 𝐄[ e^{S_{τ_{n−j+1}} − ρS_{n−j+1}}; |S_{n−j+1}| > K ]

by ε_{K,n} b_n; indeed, it is exactly the term controlled in a similar situation in (36).

We now give one last technical lemma.

Lemma 15 Assume that g is a random function which satisfies (31). Then for every (deterministic) probability generating function h(s) and every ε > 0 there exists κ > 0 such that

| E[ 1 − h(g(e^v w)) ] − E[ 1 − h(g(e^{v′} w)) ] | ≤ h′(1) ε   for |v − v′| ≤ κ, w ∈ [0, 2].

Proof. Clearly,

| E[ 1 − h(g(e^v w)) ] − E[ 1 − h(g(e^{v′} w)) ] | ≤ h′(1) E[ | g(e^v w) − g(e^{v′} w) | ].

We know that 0 ≤ g(λ) ≤ 1 for all λ ∈ [0, ∞), that g(λ) is a.s. nonincreasing in λ, and that it has a finite limit as λ → ∞. Therefore, g(λ) is a.s. uniformly continuous on [0, ∞), implying that a.s.

lim_{κ→0} sup_{|v−v′|≤κ, w∈[0,2]} | g(e^v w) − g(e^{v′} w) | = 0.

Hence, by the bounded convergence theorem,

sup_{|v−v′|≤κ, w∈[0,2]} E[ | g(e^v w) − g(e^{v′} w) | ] ≤ E[ sup_{|v−v′|≤κ, w∈[0,2]} | g(e^v w) − g(e^{v′} w) | ]

goes to zero as κ → 0, which completes the proof.

Let σ² = Var X, S_{n,j} = S_n − S_j for 0 ≤ j ≤ n, and

G_{n,j} = (−S_{n,j} − an)/(σ√n).

Using the notation (11), we write

P_e(Z_n > 0) = 1 − f_{0,n}(0),

put X_{j,n} = (X_{j+1}, ..., X_n), X_{n,j} = (X_n, ..., X_{j+1}), and set

Y_j = F(S_0, ..., S_{j−1}),   Y_{j,n} = F_n(S_n − S_{j−1}, X_{j,n}),   Y_{n,j} = F_n(S_n − S_{j−1}, X_{n,j}),

where F and the F_n are positive, equibounded, measurable functions.

Since f_{j,n} is distributed as f_{n,j}, we write

𝐄[ Y_j Y_{j,n} P_e(Z_n > 0) e^{−ρS_n}; X_j ≥ δan ]
 = 𝐄[ Y_j Y_{j,n} (1 − f_{0,n}(0)) e^{−ρS_n}; X_j ≥ δan ]
 = 𝐄[ Y_j Y_{n,j} (1 − f_{0,j−1}(f_j(f_{n,j}(0)))) e^{−ρS_n}; X_j ≥ δan ]
 = 𝐄[ Y_j e^{−ρS_{j−1}} Y_{n,j} (1 − f_{0,j−1}(f_j(f_{n,j}(0)))) e^{−ρ(S_n − S_{j−1})}; X_j ≥ δan ]
 = 𝐄[ Y_j e^{−ρS_{j−1}} Y_{n,j} (1 − f_{0,j−1}(f_j(1 − e^{S_{n,j}} W_{n,j}))) e^{−ρS_{n,j−1}}; X_j ≥ δan ],

where the W_{n,j} were defined in (12). Our aim is to obtain an approximation to this expression.

To simplify notation we let

h̄(s) = 1 − h(s)

for a probability generating function h(s). For fixed positive M and K we set

B_{j,n} = { S_{n,j−1} ∈ [−K, K], X_j − an ∈ [−M√n, M√n] }

and define

F_{n,j}(h, K, M) = 𝐄[ e^{−ρS_{n,j−1}} Y_{n,j} h̄( f_j( exp(−e^{S_{n,j}} W_{n,j}) ) ); B_{j,n} ].

We now introduce a random function g_j on the probability space (Ω, 𝐏) whose distribution is specified by P* (i.e., 𝐄[H(g_j)] = E*[H(g)] for any bounded H). Moreover, we choose g_j such that g_j is independent of (f_k : k ≠ j). As we have mentioned, this is always possible by extending the initial probability space if required. We denote Y_{n,j}(v) = F_n(v, X_{n,j}) and consider

O_{n,j}(h, K, M) = ∫_{−K}^{K} e^{−ρv} 𝐄[ Y_{n,j}(v) h̄(g_j(e^v W_{n,j})); σG_{n,j} ∈ [−M, M] ] dv,

where g_j is independent of (S_k : k ≥ 0) and (f_k : k ≠ j).

Lemma 16 For all K, M ≥ 0 and any probability generating function h, we have

lim_{n→∞} | b_n^{−1} F_{n,j}(h, K, M) − O_{n,j}(h, K, M) | = 0.

Proof. Let F_{j,n} be the σ-algebra generated by the random variables

(f_k, X_k),   k = 1, 2, ..., j − 1, j + 1, ..., n,

and set

V(y, X_{n,j}) = e^{−ρy} F_n(y; X_{n,j}) 1{ |y| ≤ K }.

Using the uniform convergence (20) and the change of variables t = (x_j − an − M√n)/√n, we get

b_n^{−1} F_{n,j}(h, K, M)
 = b_n^{−1} 𝐄[ ∫_{an−M√n}^{an+M√n} V(S_{n,j} + x_j, X_{n,j}) 𝐄[ h̄( f_j( exp(−e^{S_{n,j}} W_{n,j}) ) ) | F_{j,n}; X_j = x_j ] p̄_X(x_j) dx_j ]
 ∼ 𝐄[ ∫_{an−M√n}^{an+M√n} V(S_{n,j} + x_j, X_{n,j}) 𝐄[ h̄( f_j( exp(−e^{S_{n,j}} W_{n,j}) ) ) | F_{j,n}; X_j = x_j ] dx_j ]

as n → ∞. Moreover, the uniform convergence in (23) with respect to λ in compact subsets of [0, ∞) ensures that, uniformly for |x − an| ≤ M n^{1/2}, w ∈ [0, 2] and |v| ≤ K,

| E[ h̄(f_j(exp(−e^v w))) | X_j = x ] − E[ h̄(g_j(e^v w)) ] | ≤ ε_n.

Denoting by F′_{j,n} the σ-algebra generated by the random variables

X_k,   k = 1, 2, ..., j − 1, j + 1, ..., n,

we get, as n → ∞, with x_{n,j} = (x_n, ..., x_{j+1}),

b_n^{−1} F_{n,j}(h, K, M)
 ∼ 𝐄[ ∫_{an−M√n}^{an+M√n} V(S_{n,j} + x_j, X_{n,j}) 𝐄[ h̄(g_j(e^{S_{n,j}+x_j} W_{n,j})) | F′_{j,n} ] dx_j ]
 = 𝐄[ ∫_{an−M√n}^{an+M√n} V(S_{n,j} + x_j, X_{n,j}) h̄(g_j(e^{S_{n,j}+x_j} W_{n,j})) dx_j ]
 = ∫_{an−M√n}^{an+M√n} dx_j ∫_{|x_{n,j−1}|≤K} V(|x_{n,j−1}|, x_{n,j}) 𝐄[ h̄(g_j(e^{|x_{n,j−1}|} W_{n,j})) | X_{n,j} = x_{n,j} ] ∏_{i=j+1}^n p̄_X(x_i) dx_i,

where |x_{n,j−1}| stands for the sum x_j + x_{j+1} + ⋯ + x_n. Making the change of variables

v = |x_{n,j−1}| = x_n + x_{n−1} + ⋯ + x_j;   z_i = x_i,   i = j + 1, ..., n,

and setting

D_{n,j}(K, M) = { |v| ≤ K, |v − x_{j+1} − x_{j+2} − ⋯ − x_n − an| ≤ M√n },

we arrive at

b_n^{−1} F_{n,j}(h, K, M)
 ∼ ∫_{D_{n,j}(K,M)} e^{−ρv} F_n(v, x_{n,j}) 𝐄[ h̄(g_j(e^v W_{n,j})) | X_{n,j} = x_{n,j} ] ∏_{i=j+1}^n p̄_X(x_i) dx_i dv
 ∼ ∫_{|v|≤K} e^{−ρv} 𝐄[ Y_{n,j}(v) h̄(g_j(e^v W_{n,j})); σG_{n,j} ∈ [−M, M] ] dv.

This completes the proof.

Observe that by monotonicity

lim_{n→∞} W_{n,j} = lim_{n→∞} (1 − f_{n,j}(0)) e^{−(S_n − S_j)} = W_j   a.s.   (43)

and W_j =_d W, j = 1, 2, ..., where 𝐏(W ∈ (0, 1]) = 1 in view of conditions (9) and Theorem 5 in [6], II.

We can now state the key result:
