Published in Markov Processes and Related Fields 22 (2016), pp. 629–652. HAL preprint hal-01429031 (https://hal.archives-ouvertes.fr/hal-01429031), submitted on 6 Jan 2017.

Random dynamical systems with systematic drift competing with heavy-tailed randomness

Vladimir BELITSKY (a), Mikhail MENSHIKOV (b), Dimitri PETRITIS (c), Marina VACHKOVSKAIA (d)

a. Instituto de Matemática e Estatística, Universidade de São Paulo, rua do Matão 1010, CEP 05508-090, São Paulo, SP, Brazil, belitsky@ime.usp.br

b. Department of Mathematical Sciences, University of Durham, South Road, Durham DH1 3LE, United Kingdom, Mikhail.Menshikov@durham.ac.uk

c. Institut de Recherche Mathématique, Université de Rennes I, Campus de Beaulieu, 35042 Rennes, France, dimitri.petritis@univ-rennes1.fr

d. Department of Statistics, Institute of Mathematics, Statistics and Scientific Computation, University of Campinas–UNICAMP, Rua Sergio Buarque de Holanda, 651, Campinas, SP, CEP 13083-859, Brazil, marinav@ime.unicamp.br

10 May 2016

Abstract

Motivated by the study of the time evolution of random dynamical systems arising in a vast variety of domains — ranging from physics to ecology — we establish conditions for the occurrence of a non-trivial asymptotic behaviour for these systems in the absence of an ellipticity condition. More precisely, we classify these systems according to their type and, in the recurrent case, provide sharp conditions quantifying the nature of recurrence by establishing which moments of passage times exist and which do not. The problem is tackled by mapping the random dynamical systems into Markov chains on $\mathbb{R}$ with heavy-tailed innovation and then using powerful methods stemming from Lyapunov functions to map the resulting Markov chains into positive semi-martingales.

Keywords: Markov chains, recurrence, heavy tails, moments of passage times, random dynamical systems.

AMS 2010 subject classifications: Primary: 60J05; Secondary: 37H10, 34F05.


1 Introduction

1.1 Motivation and description of the model

Dynamical systems arise in several applied domains (economy, ecology, etc.) as models of evolution. We study in this paper the combined action of randomness and systematic deterministic bias, which leads to a subtle competition between two antagonistic effects.

Suppose that there exists a universal mapping $f : \mathbb{R}_+ \to \mathbb{R}_+$ — verifying certain conditions that will be made precise later — and consider the following random dynamical system $X_{n+1} = A_{n+1} X_n f(A_{n+1} X_n)$, where $(A_n)_{n\ge 1}$ is a sequence of independent and identically distributed $\mathbb{R}_+$-valued random variables with law $\nu$ and $X_n \in \mathbb{R}_+$ for all $n \ge 0$. Not to complicate the model unnecessarily, we assume that $\nu$ always has a density, with respect to either the Lebesgue measure on the non-negative axis or the counting measure of some infinitely denumerable unbounded subset of $\mathbb{R}_+$. We address the question of the asymptotic behaviour of $X_n$ as $n \to \infty$. The situation $\lim_n X_n = 0$ has a special significance since it can be interpreted as the extinction of certain natural resources, or the bankruptcy of certain financial assets, etc. The dual situation $\lim_n X_n = \infty$ can be interpreted as the proliferation of certain species, or the creation of instabilities due to the formation of speculative bubbles, etc. (see [2] for instance).

Since the previous Markov chain is multiplicative, it is natural to work at logarithmic scale and consider the additive version of the dynamical system $\xi_{n+1} = \xi_n + \alpha_{n+1} + \psi(\xi_n + \alpha_{n+1})$, with $\alpha_n = \ln A_n$ and $\psi(z) = \ln f(e^z)$ for $z \in \mathbb{R}$. The Markov chain therefore becomes an $\mathbb{R}$-valued one. Obviously, $\xi_n \to +\infty$ a.s. $\Leftrightarrow X_n \to +\infty$ a.s., and $\xi_n \to -\infty$ a.s. $\Leftrightarrow X_n \to 0$ a.s.

An important class of non-uniformly elliptic random dynamical systems consists of those $(X_n)$ that — when considered at logarithmic scale as above — have $\psi(t) = \pm|t|^\gamma$, for $0 < \gamma < 1$ and $t \in \mathbb{R}_+$. Using the elementary inequalities (see [5, §19, p. 28], for instance) $a^\gamma - |b|^\gamma \le (a+b)^\gamma \le a^\gamma + |b|^\gamma$, it turns out that the dynamical system reads
$$\xi_{n+1} = \xi_n + \alpha_{n+1} \pm |\xi_n + \alpha_{n+1}|^\gamma = \xi_n + \alpha_{n+1} \pm |\xi_n|^\gamma + O(\alpha_{n+1}^\gamma).$$
For $\gamma \in\, ]0,1[$, the term $O(\alpha_{n+1}^\gamma)$ in the above expression turns out to be subdominant.

For the aforementioned reasons, we study in this paper the Markov chains on $\mathsf{X} = \mathbb{R}_+$ defined by one of the following recursions:
$$\zeta_{n+1} = (\zeta_n + \alpha_{n+1} - \zeta_n^\gamma)^+, \quad\text{or}\quad \zeta_{n+1} = (\zeta_n + \alpha_{n+1} + \zeta_n^\gamma)^+,$$
with $\gamma \in\, ]0,1[$ and $\zeta_0 = x$ a.s.; here $z^+ = \max(0, z)$ and $x \in \mathsf{X}$. The sequence $(\alpha_n)_{n \ge 1}$ is a family of independent $\mathbb{R}$-valued random variables with a common distribution. This distribution may be discrete or continuous but will always be assumed to have one- or two-sided heavy tails. The heaviness of the tails is quantified by the order of the fractional moments failing to exist.
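To fix ideas, here is a minimal simulation sketch of the two recursions above; it is not part of the paper, and the Pareto-type innovation law, function names and parameter values are illustrative assumptions only.

```python
# Minimal simulation sketch (illustrative only): the two recursions above with a
# heavy-tailed innovation.  The Pareto law is an assumption made for the example;
# the paper only requires a density behaving like c_y / y^(1+theta) for large y.
import numpy as np

rng = np.random.default_rng(0)

def heavy_tail(theta):
    """One Pareto(theta) sample: P(alpha > y) = y**(-theta) for y >= 1."""
    return (1.0 - rng.random()) ** (-1.0 / theta)

def step(zeta, alpha, gamma, drift_sign):
    """One step of zeta_{n+1} = (zeta_n + alpha_{n+1} + drift_sign * zeta_n**gamma)^+."""
    return max(zeta + alpha + drift_sign * zeta**gamma, 0.0)

def simulate(x0, gamma, theta, drift_sign, innovation_sign=+1, n_steps=10_000):
    zeta, path = x0, [x0]
    for _ in range(n_steps):
        zeta = step(zeta, innovation_sign * heavy_tail(theta), gamma, drift_sign)
        path.append(zeta)
    return np.array(path)

# First recursion: downward drift -zeta^gamma against heavy-tailed upward jumps.
print(simulate(10.0, gamma=0.5, theta=0.8, drift_sign=-1)[-5:])
# Second recursion with negative innovations: upward drift +zeta^gamma against
# heavy-tailed downward jumps (the regime studied in Theorem 1.4 below).
print(simulate(10.0, gamma=0.5, theta=0.8, drift_sign=+1, innovation_sign=-1)[-5:])
```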

1.2 Main results

In all statements below, we make the

Global assumption 1.1. The sequence $(\alpha_n)_{n \in \mathbb{N}}$ consists of independent and identically distributed real random variables. The common law is denoted by $\mu$ and is supposed to satisfy $\mu \ll \lambda$, where $\lambda$ is a reference measure on $\mathbb{R}$; we denote by $m = \frac{d\mu}{d\lambda}$ the corresponding density. Additionally, $\mu$ is supposed to be heavy-tailed (thus preventing integrability of the random variables $\alpha_n$).

Let $(\zeta_n)_{n \in \mathbb{N}}$ be a Markov chain on a measurable space $(\mathsf{X}, \mathcal{X})$; denote, as usual, by $\mathbb{P}_x$ the probability on the trajectory space conditioned on $\zeta_0 = x$ and, for $A \in \mathcal{X}$, define $\tau_A = \inf\{n \ge 1 : \zeta_n \in A\}$. Our paper is devoted to establishing conditions under which the time $\tau_A$ is finite (a.s.) or infinite (with strictly positive probability) and, in case it is a.s. finite, which of its moments exist. These results constitute the first step toward establishing more general results on the Markov chain, like recurrence or transience, positive recurrence and existence of an invariant probability, etc.

However, the latter need more detailed conditions on the communication structure of the states of the chain, like $\phi$-accessibility, $\phi$-recurrence, maximal irreducibility measures and so on (see [9, 8, 7] for instance). All those questions are important but introduce technicalities that blur the picture we wish to reveal here, namely that questions on $\tau_A$ can be answered with extreme parsimony in the hypotheses imposed on the Markov chain, by using Lyapunov functions. As a matter of fact, the only communication property imposed on the Markov chain is mere accessibility, whose definition is recalled here for the sake of completeness.

Definition 1.2. Let $(Z_n)$ be a Markov chain on $(\mathsf{X}, \mathcal{X})$ with stochastic kernel $P$ and $A \in \mathcal{X}$. Denote by $\mathbb{P}$ the probability on its trajectory space induced by $P$ and by $\mathbb{P}_x$ the law of trajectories conditioned on $\{Z_0 = x\}$. We say that $A$ is accessible from $x \notin A$ if $\mathbb{P}_x(\tau_A < \infty) > 0$.

Theorem 1.3. Let $(\zeta_n)$ be the Markov chain defined by the recursion
$$\zeta_{n+1} = \zeta_n - \zeta_n^\gamma + \alpha_{n+1}, \quad n \ge 0,$$
where $0 < \gamma < 1$ and the random variables $(\alpha_n)$ have a common law $\mu$ supported by $\mathbb{R}_+$, satisfying the condition $\mu([0,1]) > 0$ and whose density with respect to the Lebesgue measure, for large $y > 0$, reads $m(y) = \mathbb{1}_{\mathbb{R}_+}(y)\, c_y\, y^{-1-\theta}$, with $\theta \in\, ]0,1[$. Let $a > 1$ and denote $A := A_a = [0,a]$. Then $A$ is accessible from any point $x > a$. Additionally, the following hold.

1. Assume that there exist constants $0 < b_1 < b_2 < \infty$ such that $b_1 \le c_y \le b_2$ for all $y \in \mathsf{X}$.

   (a) If $\theta > 1-\gamma$ then $\mathbb{P}_x(\tau_A < \infty) = 1$. Additionally,
       • if $q < \frac{\theta}{1-\gamma}$ then $\mathbb{E}_x(\tau_A^q) < \infty$, and
       • if $q \ge \frac{\theta}{1-\gamma}$ then $\mathbb{E}_x(\tau_A^q) = \infty$.

   (b) If $\theta < 1-\gamma$ then $\mathbb{P}_x(\tau_A < \infty) < 1$, and this implies transience.

2. Assume further that $\lim_{y\to\infty} c_y = c > 0$ and $\theta = 1-\gamma$.

   (a) If $c\pi\csc(\pi\theta) < \theta$ then $\mathbb{P}_x(\tau_A < \infty) = 1$. Additionally, there exists a unique $\delta_0 \in\, ]0,\theta[$ such that
       • if $q < \frac{\delta_0}{1-\gamma}$ then $\mathbb{E}_x(\tau_A^q) < \infty$, and
       • if $q > \frac{\delta_0}{1-\gamma}$ then $\mathbb{E}_x(\tau_A^q) = \infty$.

   (b) If $c\pi\csc(\pi\theta) > \theta$ then $\mathbb{P}_x(\tau_A < \infty) < 1$, and this implies transience.

Theorem 1.4. Let $(\zeta_n)$ be the Markov chain defined by the recursion
$$\zeta_{n+1} = (\zeta_n + \zeta_n^\gamma - \alpha_{n+1})^+, \quad n \ge 0,$$
where $0 < \gamma < 1$ and the common law of the random variables $(\alpha_n)$ is supported by $\mathbb{R}_+$ and has density $m$ with respect to the Lebesgue measure verifying $m(y) = \mathbb{1}_{\mathbb{R}_+}(y)\, c_y\, y^{-1-\theta}$ for large $y > 0$, with $\theta \in\, ]0,1[$. Assume further that $\lim_{y\to\infty} c_y = c > 0$. Then the state $0$ is accessible and:

1. If $\theta < 1-\gamma$ then $\mathbb{E}_x(\tau_0^q) < \infty$ for all $q > 0$.

2. If $\theta = 1-\gamma$ then $\mathbb{P}_x(\tau_0 < \infty) = 1$. Additionally, there exists a unique $\delta_0 \in\, ]0,\infty[$ such that
   • if $q < \frac{\delta_0}{\theta}$ then $\mathbb{E}_x(\tau_0^q) < \infty$, and
   • if $q > \frac{\delta_0}{\theta}$ then $\mathbb{E}_x(\tau_0^q) = \infty$.

3. If $\theta > 1-\gamma$ then $\mathbb{P}_x(\tau_0 < \infty) < 1$, and this implies transience.

Remark 1.5. If $b_1 \le c_y \le b_2$ but $c_y \not\to c$, then the conclusions established in the cases of strict inequalities $\theta < 1-\gamma$ or $\theta > 1-\gamma$ remain valid. Nevertheless, we are unable to treat the critical case $\theta = 1-\gamma$.

Remark 1.6. In both the above theorems, the boundedness or existence-of-limit conditions on $(c_y)$ imply that the tails have power decay, i.e. there exists $C$ such that the tail estimate $\mathbb{P}(\alpha_1 > y) \ge C y^{-\theta}$ holds. Nevertheless, the control we impose is much sharper, because we wish to treat the critical case. If we are not interested in the critical case, the control on $(c_y)$ can be considerably weakened by assuming only the tail estimate. Results established with such weakened control on the tails are given in Theorems 1.8 and 1.9 below.

Remark 1.7. Although, in both the above theorems, the existence and uniqueness of $\delta_0$, occurring in the critical case $\theta = 1-\gamma$, can be established by general convexity arguments, the sharp determination of its value requires the complete knowledge of the distribution function of $(\alpha_n)$. Nevertheless, if we define $K_{\delta,\theta} = \frac{\Gamma(1-\theta)\,\Gamma(\theta-\delta)}{\theta\,\Gamma(1-\delta)}$, the value of $\delta_0$ in Theorem 1.3 can be approximated by the solution of the transcendental equation $c K_{\delta_0,\theta} = 1$. Similarly, if we define¹ $L_{\delta,\theta} = \frac{\Gamma(1+\delta)\,\Gamma(-\theta)}{\Gamma(1-\theta+\delta)}$, the value of $\delta_0$ in Theorem 1.4 can be approximated by the solution of the transcendental equation $c L_{\delta_0,\theta} + \delta_0 = 0$.

¹ It is recalled that the transcendental function $\Gamma$, defined by $\Gamma(z) := \int_0^\infty \exp(-t)\, t^{z-1}\, dt$ for $\Re z > 0$, can be analytically continued on $\mathbb{C} \setminus \{0, -1, -2, -3, \dots\}$; its analytic continuation can be expressed by $\Gamma(z) = \int_0^\infty \big[\exp(-t) - \sum_{m=0}^{n} \frac{(-t)^m}{m!}\big]\, t^{z-1}\, dt$ for $-(n+1) < \Re z < -n$ and $n \in \mathbb{N}$ (see [3, §1.1 (9), p. 2] for instance).
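A short numerical sketch of these approximations (our own illustration, assuming $c_y \equiv c$ constant, with arbitrarily chosen values of $c$ and $\theta$) can be obtained with standard scientific-computing tools:

```python
# Numerical sketch (illustrative): solve the transcendental equations of Remark 1.7
# for delta_0, assuming c_y = c constant.  The values of c and theta are arbitrary.
from scipy.special import gamma
from scipy.optimize import brentq

def K(delta, theta):
    # K_{delta,theta} = Gamma(1-theta) Gamma(theta-delta) / (theta Gamma(1-delta))
    return gamma(1.0 - theta) * gamma(theta - delta) / (theta * gamma(1.0 - delta))

def L(delta, theta):
    # L_{delta,theta} = Gamma(1+delta) Gamma(-theta) / Gamma(1-theta+delta)
    return gamma(1.0 + delta) * gamma(-theta) / gamma(1.0 - theta + delta)

theta, c = 0.5, 0.1   # recurrence in Theorem 1.3, case 2(a), needs c*pi*csc(pi*theta) < theta

# Theorem 1.3: unique root of c*K(delta, theta) = 1 in ]0, theta[.
delta0_thm13 = brentq(lambda d: c * K(d, theta) - 1.0, 1e-9, theta - 1e-9)
# Theorem 1.4: unique root of c*L(delta, theta) + delta = 0 in ]0, infinity[.
delta0_thm14 = brentq(lambda d: c * L(d, theta) + d, 1e-9, 100.0)

print(delta0_thm13, delta0_thm14)
```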


The faster the convergence $c_y \to c$, the better these approximations are; the determination becomes exact when $c_y$ is constant (depending on $\theta$). Not to complicate the study, in the sequel we sketch the proofs (for the critical cases) of Theorems 1.3 and 1.4 for $c_y$ constant.

Theorem 1.8. Let $(\zeta_n)$ be the Markov chain defined by the recursive relation
$$\zeta_{n+1} = (\zeta_n - \zeta_n^\gamma + \alpha_{n+1})^+, \quad n \ge 0, \qquad (1)$$
where $0 < \gamma < 1$ and the random variables $(\alpha_n)$ have a common law with support extending to both the negative and positive parts of the real axis. Let $a > 1$ and denote $A := A_a = [0,a]$. Then $A$ is accessible and the following statements hold.

1. Suppose that there exist a positive constant $C$ and a parameter $\theta \in\, ]0,1[$ such that $\mathbb{P}(\alpha_1 > y) \le C y^{-\theta}$. If $\theta > 1-\gamma$, then $\mathbb{E}_x(\tau_A^q) < \infty$ for all $q < \frac{\theta}{1-\gamma}$.

2. Suppose that there exist positive constants $C, C'$ and parameters $\theta, \theta'$ with $0 < \theta < \theta' < 1$ such that $\mathbb{P}(\alpha_1 > y) \ge C' y^{-\theta}$ and $\mathbb{P}(\alpha_1 < -y) \le C y^{-\theta'}$ (the right tails are heavier than the left ones). If $\theta < 1-\gamma$, then $\mathbb{P}_x(\tau_A < \infty) < 1$, and this implies transience.

Theorem 1.9. Assume that the Markov chain $(\zeta_n)$ is defined by the recursive relation
$$\zeta_{n+1} = (\zeta_n + \zeta_n^\gamma + \alpha_{n+1})^+, \quad n \ge 0,$$
where $0 < \gamma < 1$ and the random variables $(\alpha_n)$ have a common law with support extending to both the negative and positive parts of the real axis. Let $a > 1$ and suppose that the set $A := A_a = [0,a]$ is accessible.

1. Suppose there exist a positive constant $C$ and a parameter $\theta$ with $0 < \theta < 1$ such that $\mathbb{P}(\alpha_1 < -y) \le C y^{-\theta}$. If $\theta > 1-\gamma$, then $\mathbb{P}_x(\tau_A < \infty) < 1$, and this implies transience.

2. Suppose there exist positive constants $C, C'$ and parameters $\theta$ and $\theta'$, with $0 < \theta < \theta' < 1$, such that $\mathbb{P}(\alpha_1 > y) \le C' y^{-\theta'}$ and $\mathbb{P}(\alpha_1 < -y) \ge C y^{-\theta}$. If $\theta < 1-\gamma$ then the state $0$ is recurrent and $\mathbb{E}_x(\tau_A^q) < \infty$ for all $q < 1$.

Remark 1.10. Some comments are due:


1. The systematic drift term $\zeta_n^\gamma$, although subdominant with respect to $\zeta_n$, is far from being trivial. As a matter of fact, this term is responsible for the failure of uniform ellipticity when we consider the system at exponential scale. In particular, if the random innovation $\alpha_{n+1}$ is integrable, then the asymptotic behaviour of the random dynamical system is determined solely by the systematic drift term. Only when the innovation has heavy tails can a competition between systematic drift and random perturbation build up, leading to interesting critical phenomena.

2. It is worth noting that the previous results establish not only recurrence or transience properties but a fine stratification of the parameter space according to which moments of the passage time $\tau_A$ exist. For the cases where moments $\mathbb{E}(\tau_A^q) < \infty$, $q \ge 1$, exist (see for instance Theorem 1.4), the above results immediately lead to the existence of an invariant probability. (A Monte Carlo illustration of this stratification is sketched after this remark.)
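As an illustration of this stratification (our own sketch, with an arbitrarily chosen Lomax-type innovation; it is not part of the paper), one can estimate the passage time $\tau_A$ by simulation and inspect its empirical behaviour:

```python
# Monte Carlo sketch (illustrative): empirical behaviour of tau_A for the chain
# zeta_{n+1} = zeta_n - zeta_n^gamma + alpha_{n+1} of Theorem 1.3, with a Lomax
# (Pareto II) innovation of tail index theta; n_max acts as a censoring level.
import numpy as np

rng = np.random.default_rng(1)

def tau_A(x, a, gamma, theta, n_max=100_000):
    zeta, n = x, 0
    while zeta > a and n < n_max:
        zeta = zeta - zeta**gamma + rng.pareto(theta)   # rng.pareto(theta): tail ~ y**(-theta)
        n += 1
    return n

gamma_, theta_, a, x = 0.5, 0.8, 2.0, 10.0              # theta > 1 - gamma: recurrent regime
samples = np.array([tau_A(x, a, gamma_, theta_) for _ in range(2_000)])
# Theorem 1.3 predicts E[tau_A^q] < infinity exactly for q < theta/(1-gamma) = 1.6 here,
# which shows up as a heavy (but integrable) empirical tail of the samples.
print(samples.mean(), np.quantile(samples, 0.99))
```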

2 Proofs

2.1 Results from the constructive theory of Markov chains

The Markov chains we consider evolve on the set $\mathsf{X} = \mathbb{R}_+$. Our proofs rely on the possibility of constructing measurable functions $g : \mathsf{X} \to \mathbb{R}_+$ (with some special properties regarding their asymptotic behaviour) that are superharmonic with respect to the discrete Laplacian operator $D = P - I$; consequently, the image of the Markov chain under $g$ becomes a supermartingale outside some specific sets. For the convenience of the reader, we state here the principal theorems from the constructive theory, developed in [4] and in [1], rephrased and adapted to the needs and notation of the present paper. We shall use these theorems repeatedly in the sequel.

In the sequel $(Z_n)$ denotes a Markov chain on $\mathsf{X}$, having stochastic kernel $P$. We denote by
$$\mathrm{Dom}_+(P) := \Big\{ f : \mathsf{X} \to \mathbb{R}_+ \text{ measurable such that } \forall x \in \mathsf{X},\ \int_{\mathsf{X}} P(x,dy)\, f(y) < \infty \Big\}.$$
We denote by $D = P - I$ the Markov operator whose action on $\mathrm{Dom}_+(P) \ni g \mapsto Dg$ reads
$$Dg(x) = \int_{\mathsf{X}} P(x,dy)\, g(y) - g(x) = \mathbb{E}\big(g(Z_{n+1}) - g(Z_n) \,\big|\, Z_n = x\big).$$
Notice that when $g$ is $P$-superharmonic, then $(g(Z_n))$ is a positive supermartingale.

Let $f : \mathsf{X} \to \mathbb{R}_+$ and $a > 0$. We denote by $S_a(f) = \{x \in \mathsf{X} : f(x) \le a\}$ the sublevel set of $f$. We say that the function $f$ tends to infinity, $f \to \infty$, if $\forall n \in \mathbb{N}$, $\mathrm{card}\, S_n(f) < \infty$.

Theorem 2.1 (Fayolle, Malyshev, Menshikov [4, Theorems 2.2.1 and 2.2.2]). Let $(Z_n)$ be a Markov chain on $\mathsf{X}$ with kernel $P$ and, for $a \ge 0$, denote $A := A_a = [0,a]$.

1. If there exists a pair $(f, x_0)$, where $x_0 > 0$ and $f \in \mathrm{Dom}_+(P)$, such that $f \to \infty$, $Df(x) \le 0$ for all $x \ge x_0$, and $A := A_{x_0}$ is accessible, then $\mathbb{P}_{x_0}(\tau_A < \infty) = 1$.

2. If there exists a pair $(f, A)$, where $A$ is a subset of $\mathsf{X}$ and $f \in \mathrm{Dom}_+(P)$, such that
   (a) $Df(x) \le 0$ for $x \notin A$, and
   (b) there exists $y \in A^c$ with $f(y) < \inf_{x \in A} f(x)$,
then $\mathbb{P}_y(\tau_A < \infty) < 1$.

Theorem 2.2 (Aspandiiarov, Iasnogorodski, Menshikov [1, Theorems 1 and 2]). Let $(Z_n)$ be a Markov chain on $\mathsf{X}$ with kernel $P$ and $f \in \mathrm{Dom}_+(P)$ such that $f \to \infty$.

1. If there exist strictly positive constants $a, p, c$ such that the set $A := S_a(f)$ is accessible, $f^p \in \mathrm{Dom}_+(P)$, and $Df^p(x) \le -c f^{p-2}(x)$ on $A^c$, then $\mathbb{E}_x(\tau_A^q) < \infty$ for all $q < p/2$.

2. If there exist $g \in \mathrm{Dom}_+(P)$ and
   (a) a constant $b > 0$ such that $f \le b g$,
   (b) constants $a, c_1 > 0$ such that $Dg(x) \ge -c_1$ on $\{g > a\}$,
   (c) constants $c_2 > 0$ and $r > 1$ such that $g^r \in \mathrm{Dom}_+(P)$ and $Dg^r(x) \le c_2 g^{r-1}(x)$ on $\{g > a\}$,
   (d) a constant $p > 0$ such that $f^p \in \mathrm{Dom}_+(P)$ and $Df^p(x) \ge 0$ on $\{f > ab\}$,
then $\mathbb{E}_x(\tau_{S_{ab}(f)}^q) = \infty$ for all $q > p$.

Notation 2.3. For $h : \mathbb{R}_+ \to \mathbb{R}_+$ and $\rho \in \mathbb{R}$, we write $h(x) \approx x^\rho$ if $\lim_{x\to\infty} h(x)\, x^{-\rho} = 1$, and $h(x) \lesssim x^\rho$ if there exists a function $h_1$ such that $h(x) \le h_1(x)$ and $h_1(x) \approx x^\rho$.

2.2 Proof of Theorems 1.3 and 1.4

The main theorems are stated under the condition that the reference measure $\lambda$ is the Lebesgue measure on $\mathbb{R}$ (or on $\mathbb{R}_+$). To simplify notation, we write $\lambda(dy) = dy$ for the Lebesgue measure. The case of $\mu$ having a density with respect to the counting measure on $\mathbb{Z}$ requires a small additional technical step, explained in Remark 2.11 below.

In the sequel, we shall use a Lyapunov function $g$, depending on a parameter $\delta \ne 0$, reading
$$g(x) = \begin{cases} x^\delta, & x \ge 1 \\ 1, & x < 1 \end{cases} \quad (\text{if } \delta < 0), \qquad\text{and}\qquad g(x) = x^\delta \quad (\text{if } \delta > 0).$$
In general the choice $\delta > 0$ is made to prove recurrence and $\delta < 0$ to prove transience. The range of values of $\delta$ will be determined by the specific context, as explained below.
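The sign of the drift $Dg$ can also be probed numerically before turning to the rigorous estimates; the following Monte Carlo sketch (ours, with an arbitrarily chosen Lomax innovation law) estimates $Dg(x) = \mathbb{E}[g(\zeta_1) - g(\zeta_0) \mid \zeta_0 = x]$ for the chain of Theorem 1.3 and $g(x) = x^\delta$:

```python
# Monte Carlo sketch (illustrative): estimate Dg(x) = E[g(zeta_1) - g(zeta_0) | zeta_0 = x]
# for the chain zeta_{n+1} = zeta_n - zeta_n^gamma + alpha_{n+1} and g(x) = x**delta.
import numpy as np

rng = np.random.default_rng(2)

def Dg_estimate(x, delta, gamma, theta, n_samples=1_000_000):
    alpha = rng.pareto(theta, size=n_samples)   # Lomax innovations, tail index theta
    zeta1 = x - x**gamma + alpha                # alpha >= 0, so no clipping needed at large x
    return np.mean(zeta1**delta - x**delta)

# theta = 0.8 > 1 - gamma = 0.5: for small delta > 0 the estimated drift should come out
# negative at large x, consistent with the recurrence criterion of Theorem 2.1.
print(Dg_estimate(x=1e4, delta=0.2, gamma=0.5, theta=0.8))
```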

Lemma 2.4. Let $(\zeta_n)$ be the Markov chain of Theorem 1.3 and suppose that $x$ is very large. For arbitrary $y_0 \ge 1$ and $\delta < \theta$,
$$Dg(x) \lesssim (x - x^\gamma)^\delta \left[ \int_{[y_0,\infty[} \left( \Big(1 + \frac{y}{x - x^\gamma}\Big)^{\delta} - 1 \right) m(y)\, dy \;-\; \delta\, \frac{x^\gamma}{x - x^\gamma} \right].$$

Proof. Assume everywhere in the sequel that $x$ is very large. The parameter $\delta$ is allowed to be positive or negative. Write
$$Dg(x) = \int_{\mathbb{R}_+} \big[(x - x^\gamma + y)^\delta - x^\delta\big]\, m(y)\, dy = (x - x^\gamma)^\delta \int_{\mathbb{R}_+} \left[ \Big(1 + \frac{y}{x - x^\gamma}\Big)^{\delta} - \Big(1 + \frac{x^\gamma}{x - x^\gamma}\Big)^{\delta} \right] m(y)\, dy$$
$$\lesssim (x - x^\gamma)^\delta \left[ \int_{\mathbb{R}_+} \Big(1 + \frac{y}{x - x^\gamma}\Big)^{\delta} m(y)\, dy - 1 - \delta\, \frac{x^\gamma}{x - x^\gamma} \right].$$
For arbitrary $y_0 \in \mathbb{R}_+$, the integral $\int_{\mathbb{R}_+}$ in the previous formula can be split into $\int_{]0,y_0[} + \int_{[y_0,\infty[}$. In the sequel we shall consider only the case $x \gg y_0$. If $\delta < 0$ then the function $y \mapsto (1 + \frac{y}{x - x^\gamma})^{\delta}$ is decreasing, hence $\sup_{y \in\, ]0,y_0[} (1 + \frac{y}{x - x^\gamma})^{\delta} \le 1$. On the contrary, when $\delta > 0$, the corresponding function is increasing and we have $\sup_{y \in\, ]0,y_0[} (1 + \frac{y}{x - x^\gamma})^{\delta} \le (1 + \frac{y_0}{x - x^\gamma})^{\delta} \approx 1 + \delta\, \frac{y_0}{x - x^\gamma}$. In any situation,
$$\int_{]0,y_0[} \Big(1 + \frac{y}{x - x^\gamma}\Big)^{\delta} m(y)\, dy \lesssim \mu(]0,y_0[) + |\delta|\, \frac{y_0}{x - x^\gamma}.$$
The remaining integral can be written as
$$\int_{[y_0,\infty[} \Big(1 + \frac{y}{x - x^\gamma}\Big)^{\delta} m(y)\, dy = \int_{[y_0,\infty[} \left[ \Big(1 + \frac{y}{x - x^\gamma}\Big)^{\delta} - 1 \right] m(y)\, dy + \mu([y_0,\infty[).$$
Replacing these expressions into the formula for $Dg(x)$ yields
$$Dg(x) \lesssim (x - x^\gamma)^\delta \left[ \int_{[y_0,\infty[} \left( \Big(1 + \frac{y}{x - x^\gamma}\Big)^{\delta} - 1 \right) m(y)\, dy - \delta\, \frac{x^\gamma}{x - x^\gamma} \right],$$
because, for $x$ sufficiently large, $\frac{y_0}{x - x^\gamma}$ is negligible compared to $\frac{x^\gamma}{x - x^\gamma}$.

Remark 2.5. Note that since $0 < \gamma < 1$, the asymptotic majorisation
$$Dg(x) \lesssim x^\delta \left[ \int_{[y_0,\infty[} \left( \Big(1 + \frac{y}{x - x^\gamma}\Big)^{\delta} - 1 \right) m(y)\, dy - \delta\, \frac{x^\gamma}{x - x^\gamma} \right]$$
is equivalent to the one established in Lemma 2.4.


Lemma 2.6. Let $\delta < \theta < 1$. Suppose further that there exist constants $0 < b_1 \le b_2 < \infty$ such that $b_1 \le c_y \le b_2$ for all $y \ge y_0$, for some $y_0 > 0$. Then the integral
$$I(x) := \int_{[y_0,\infty[} \left( \Big(1 + \frac{y}{x - x^\gamma}\Big)^{\delta} - 1 \right) m(y)\, dy,$$
asymptotically for large $x$, satisfies
$$\delta B_1 K_{\delta,\theta}\, x^{-\theta} \lesssim I(x) \lesssim \delta B_2 K_{\delta,\theta}\, x^{-\theta},$$
where $(B_1, B_2) = (b_1, b_2)$ if $\delta > 0$, and $(B_1, B_2) = (b_2, b_1)$ when $\delta < 0$.

Proof. Write
$$I(x) := \int_{[y_0,\infty[} \left( \Big(1 + \frac{y}{x - x^\gamma}\Big)^{\delta} - 1 \right) m(y)\, dy = \int_{[y_0,\infty[} c_y\, \frac{\big(1 + \frac{y}{x - x^\gamma}\big)^{\delta} - 1}{y^{1+\theta}}\, dy.$$
Consider first $\delta > 0$; in this case the integrand is positive, hence
$$b_1 I_1(x) \le I(x) \le b_2 I_1(x),$$
where $I_1(x) := \int_{[y_0,\infty[} \frac{(1 + \frac{y}{x - x^\gamma})^{\delta} - 1}{y^{1+\theta}}\, dy$. We then estimate, for fixed $y_0$ and large $x$ (so that $y_0$ is small compared to $x$), performing the change of variable $u = \frac{y}{x - x^\gamma}$,
$$I_1(x) = (x - x^\gamma)^{-\theta} \int_{\frac{y_0}{x - x^\gamma}}^{\infty} \frac{(1+u)^\delta - 1}{u^{1+\theta}}\, du \;\approx\; x^{-\theta} \int_0^\infty \frac{(1+u)^\delta - 1}{u^{1+\theta}}\, du.$$
Now, for $\delta < \theta < 1$ (recall that $\theta > 0$),
$$\int_0^\infty \frac{(1+u)^\delta - 1}{u^{1+\theta}}\, du = -\frac{1}{\theta} \int_{u=0}^{u=\infty} \big[(1+u)^\delta - 1\big]\, d(u^{-\theta}) = \Big[-\frac{1}{\theta}\, \frac{(1+u)^\delta - 1}{u^{\theta}}\Big]_{u=0}^{u=\infty} + \frac{\delta}{\theta} \int_0^\infty \frac{(1+u)^{\delta-1}}{u^{\theta}}\, du = 0 + \frac{\delta}{\theta}\, \frac{\Gamma(1-\theta)\,\Gamma(\theta-\delta)}{\Gamma(1-\delta)} = \delta K_{\delta,\theta}.$$
The claimed majorisation $I_1(x) \lesssim \delta K_{\delta,\theta}\, x^{-\theta}$ is obtained immediately. The minoration is obtained similarly. If $\delta < 0$, the integrand is negative, hence the roles of $b_1$ and $b_2$ must be interchanged.

Lemma 2.7. Let $\delta < \theta < 1$. Suppose further that $c_y \to c$. Then for all $\varepsilon > 0$ there exists a $y_0$ such that for $x \gg y_0$ we have
$$I(x) := \int_{[y_0,\infty[} \left( \Big(1 + \frac{y}{x - x^\gamma}\Big)^{\delta} - 1 \right) m(y)\, dy = c\, \delta K_{\delta,\theta}\, (x - x^\gamma)^{-\theta}\, \big(1 + \varepsilon\, O(1)\big).$$

Proof. Observe that $I(x) = c\, I_1(x) + \int_{[y_0,\infty[} (c_y - c)\, \frac{(1 + \frac{y}{x - x^\gamma})^{\delta} - 1}{y^{1+\theta}}\, dy$. Now, since $c_y \to c$, it follows that for all $\varepsilon > 0$ one can choose $y_0$ such that for $y \ge y_0$ we have $|c_y - c| \le \varepsilon$. We then immediately conclude that the absolute value of the above integral is majorised by $\varepsilon\, I_1(x)$.

Lemma 2.8.
1. Let $\theta \in\, ]0,1[$ and $c > 0$. If $c\pi\csc(\pi\theta) < \theta$ then there exists a unique $\delta_0 := \delta_0(c,\theta) \in\, ]0,\theta[$ such that $c K_{\delta_0,\theta} - 1 = 0$.
2. Let $\theta \in\, ]0,1[$ and $c > 0$. Then, for every fixed $\theta \in\, ]0,1[$, $L_{\delta,\theta} + \frac{1}{\theta} < 0$ for all $\delta > 0$, and there exists a unique $\delta_0 := \delta_0(c,\theta) \in\, ]0,\infty[$ such that $c L_{\delta_0,\theta} + \delta_0 = 0$.

Proof. For all $\delta \in\, ]-\infty,\theta[$, the quantities $K_{\delta,\theta}$ and $L_{\delta,\theta}$ are well defined.

1. By standard results² on $\Gamma$ functions, $K_{0,\theta} = \frac{\pi}{\theta}\csc(\pi\theta)$. For fixed $\theta$, the function $K_{\delta,\theta}$ is strictly increasing and continuous in $\delta$ — as follows from its integral representation — and $\lim_{\delta \to \theta} K_{\delta,\theta} = \infty$, from which follows the existence of $\delta_0 \in\, ]0,\theta[$ verifying the claimed equality.

2. From [3, §1.2, formulæ (4) and (1)] it follows immediately that $L_{0,\theta} = -\frac{1}{\theta} < 0$ and $\lim_{\delta\to\infty} \frac{L_{\delta,\theta}}{\Gamma(-\theta)\,\delta^{\theta}} = 1$. The strict monotonicity and continuity (in $\delta$) of the function $L_{\delta,\theta}$ follow from its integral representation: $L_{\delta,\theta} + \frac{1}{\theta} = \int_0^1 \frac{(1-u)^\delta - 1}{u^{1+\theta}}\, du$. Hence $L_{\delta,\theta} \approx \Gamma(-\theta)\,\delta^{\theta}$ for large $\delta$. Since the asymptotic behaviour of $L_{\delta,\theta}$ is negative and sublinear in $\delta$, it follows that there exists a sufficiently large $\delta_0$ for which the claimed equality holds. Additionally, the strict monotonicity of $L_{\delta,\theta}$, combined with the fact that $L_{0,\theta} = -\frac{1}{\theta} < 0$, guarantees that $L_{\delta,\theta} + \frac{1}{\theta} < 0$ for all $\delta > 0$.

² We used the identity $\Gamma(z)\Gamma(1-z) = \pi\csc(\pi z)$ (see [3, formula §1.2 (6)] for instance).
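Both Gamma-function identities used above can be checked numerically; the following quadrature sketch (ours, with arbitrary admissible values of $\delta$ and $\theta$) compares the integrals with $\delta K_{\delta,\theta}$ and $L_{\delta,\theta} + 1/\theta$:

```python
# Quadrature sketch (illustrative): check the two identities
#   int_0^inf [(1+u)^delta - 1] / u^(1+theta) du = delta * K_{delta,theta}    (0 < delta < theta)
#   int_0^1   [(1-u)^delta - 1] / u^(1+theta) du = L_{delta,theta} + 1/theta  (delta > 0)
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

theta, delta = 0.6, 0.3

K = gamma(1 - theta) * gamma(theta - delta) / (theta * gamma(1 - delta))
L = gamma(1 + delta) * gamma(-theta) / gamma(1 - theta + delta)

f = lambda u: ((1 + u) ** delta - 1) / u ** (1 + theta)
I1 = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]       # split at 1: integrable singularity at 0

g = lambda u: ((1 - u) ** delta - 1) / u ** (1 + theta)
I2 = quad(g, 0, 1)[0]

print(I1, delta * K)        # these two numbers should agree
print(I2, L + 1 / theta)    # and so should these
```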


Lemma 2.9. Let $f : \mathbb{R}_+ \to \mathbb{R}_+$ be a given function; define the dynamical system $(X_t)_{t \in \mathbb{N}}$ by $X_0 = x_0$ and recursively $X_{t+1} = f(X_t)$, for $t \in \mathbb{N}$. For $a > 0$ define $T_{[0,a]}(x_0) = \inf\{t \ge 1 : X_t \le a\}$.

1. If $f(x) = x - x^\gamma$ for some $\gamma \in\, ]0,1[$ and $x_0 \gg 1$, then $T_{[0,a]}(x_0) \approx \frac{x_0^{1-\gamma}}{1-\gamma}$.

2. If $f(x) = x - x^\gamma + 1$ for some $\gamma \in\, ]0,1[$, $a > 1$, and $x_0 \gg a$, then $T_{[0,a]}(x_0) \approx \frac{x_0^{1-\gamma}}{1-\gamma}$.

Proof. 1. The derivative of the function $f$ satisfies $0 < f'(x) < 1$ for all $x > 1$. Therefore, the successive iterates $f^n(x_0)$ eventually reach the interval $[0,1]$ for all $x_0 > 1$ in a finite number of steps $T_{[0,1]}(x_0)$. To estimate this number, start by approximating, for $X_t = x$ and $1 < x < x_0$, the difference $X_{t+1} - X_t = \Delta X_t = -X_t^\gamma$ by the differential $dX_t = -X_t^\gamma\, dt$. Then
$$T_{[0,a]}(x_0) = \int_0^{T_{[0,a]}(x_0)} dt = -\int_{x_0}^{a} X_t^{-\gamma}\, dX_t = \frac{1}{1-\gamma}\big(x_0^{1-\gamma} - a^{1-\gamma}\big) \approx \frac{x_0^{1-\gamma}}{1-\gamma}.$$

2. Using the same arguments, and denoting by $F$ the hypergeometric function, we estimate (see [3] for instance)
$$T_{[0,a]}(x_0) = \int_{x_0}^{a} \frac{dX_t}{1 - X_t^\gamma} = x_0\, F\Big(1, \tfrac{1}{\gamma}; 1 + \tfrac{1}{\gamma}; x_0^\gamma\Big) - a\, F\Big(1, \tfrac{1}{\gamma}; 1 + \tfrac{1}{\gamma}; a^\gamma\Big) \approx \frac{1}{1-\gamma}\, x_0^{1-\gamma}.$$
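The asymptotics of Lemma 2.9 are easy to check numerically; the following sketch (ours) iterates $f(x) = x - x^\gamma$ and compares the exact hitting time of $[0,a]$ with $x_0^{1-\gamma}/(1-\gamma)$:

```python
# Deterministic sketch (illustrative): hitting time of [0, a] for the map f(x) = x - x**gamma,
# compared with the approximation x0**(1-gamma) / (1-gamma) of Lemma 2.9.
def hitting_time(x0, a, gamma):
    x, t = x0, 0
    while x > a:
        x, t = x - x**gamma, t + 1
    return t

gamma, a, x0 = 0.5, 1.0, 10_000.0
print(hitting_time(x0, a, gamma), x0 ** (1 - gamma) / (1 - gamma))  # both close to 2*sqrt(x0) = 200 here
```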

Proof of Theorem 1.3. First we need to prove accessibility of $A = [0,a]$, with $a > 1$, from any point $x > a$. Denote $r := \mu([0,1]) > 0$. Since the dynamical system evolving according to the iteration of the function $f(x) = x - x^\gamma + 1$ reaches $A$ in finite time $T_A(x)$, as proven in Lemma 2.9, the Markov chain can reach $A$ in a time $\tau_A$ verifying $\mathbb{P}_x(\tau_A \le T_A + 1) \ge C\, r^{T_A(x)} > 0$ for all $x \gg a$.

We substitute the estimates obtained in Lemmata 2.6 and 2.7 into the expression for $Dg$ obtained in Lemma 2.4.

1. Assume that $b_1 \le c_y \le b_2$.


(a) Choose $0 < \delta < \theta$. Then
$$Dg(x) = (x - x^\gamma)^\delta \Big[ -\delta\, \frac{x^\gamma}{x - x^\gamma} + b_2\, \delta K_{\delta,\theta}\, (x - x^\gamma)^{-\theta} + O(x^{-1}) \Big] = -\delta\, x^{\delta+\gamma-1} + \delta\, b_2 K_{\delta,\theta}\, x^{\delta-\theta} + O(x^{\delta-\theta-1}).$$
If $\theta > 1-\gamma$, the dominant term reads $-\delta\, x^{\delta+\gamma-1}$, which is negative. Hence $(g(\zeta_n))$ is a supermartingale tending to infinity if $\zeta_n \to \infty$. We then conclude by Theorem 2.1.

• To prove finiteness of moments up to $\theta/(1-\gamma)$, consider $p$ such that $0 < p\delta < \theta$. Then
$$Dg^p(x) \lesssim -\delta p\, x^{\delta p + \gamma - 1} = -\delta p\, g(x)^{p - \frac{1-\gamma}{\delta}} \le -C g(x)^{p-2},$$
provided that $\frac{1}{\delta} < \frac{2}{1-\gamma}$. The latter, combined with the inequality $p\delta < \theta$, establishes the majorisation by $-C g(x)^{p-2}$. This allows us to conclude by Theorem 2.2.

• To prove the non-existence of moments for $q \ge \theta/(1-\gamma)$, denote $f(x) = x - x^\gamma$. Define $Z_0 = x$ and recursively $Z_{n+1} = f(Z_n)$ as in Lemma 2.9; similarly, the Markov chain can be rewritten as $\zeta_0 = x$ and recursively $\zeta_{n+1} = f(\zeta_n + \alpha_{n+1})$ as long as $\zeta_n > 1$. Now remark that $Z_1 = f(x) < f(x + \alpha_1) = \zeta_1$; a simple recursion shows that $Z_{n+1} = f^n(x + \alpha_1) < \zeta_{n+1}$. Obviously $T_{[0,1]}(x + \alpha_1) < \tau_0$. Hence $\tau_0 > C\,(x + \alpha_1)^{1-\gamma} > C\, \alpha_1^{1-\gamma}$ by Lemma 2.9 and subsequently $\mathbb{E}_x(\tau_0^q) \ge C\, \mathbb{E}\big(\alpha_1^{q(1-\gamma)}\big) = \infty$ whenever $q(1-\gamma) \ge \theta$.

(b) Choose now $\delta < 0$. Using the same arguments as above, we see that the dominant term is $\delta\, b_1 K_{\delta,\theta}\, x^{\delta-\theta}$, which is again negative. Hence $(g(\zeta_n))$ is a bounded supermartingale. We conclude by using Theorem 2.1.

2. Assume now that $\theta = 1-\gamma$ and $c_y \to c > 0$. In this situation, for every $\varepsilon > 0$ we can choose $y_0$ such that for $y \ge y_0$ we have asymptotically, for $x \gg y_0$ and every $\delta \ne 0$,
$$Dg(x) = \delta\, x^{\delta+\gamma-1}\big(c K_{\delta,\theta} - 1 + O(x^{-1}) + \varepsilon\, O(1)\big).$$


Therefore, the dominant term is $\delta\,(c K_{\delta,\theta} - 1)\, x^{\delta+\gamma-1}$; the sign of $\delta$ will thus be multiplied by the sign of the difference $c K_{\delta,\theta} - 1$.

(a) If $c\pi\csc(\pi\theta) < \theta$, by Lemma 2.8 we can choose $\delta \in\, ]0,\delta_0[$, so that $Dg(x) \le 0$ while $g$ tends to infinity. We conclude by Theorem 2.1.

• To prove finiteness of moments of the time $\tau_A$, for the $\delta$ chosen to establish recurrence we can further choose $p > 1$ so that $p\delta < \delta_0$. Then
$$Dg^p(x) \lesssim -p\delta\, x^{p\delta+\gamma-1} = -p\delta\, g(x)^{p - \frac{1-\gamma}{\delta}} \le -C g(x)^{p-2}$$
whenever $\frac{1-\gamma}{\delta} \le 2$, i.e. $\frac{1}{\delta} \le \frac{2}{1-\gamma}$. Combining with the condition $p\delta < \delta_0$ we get $p < \frac{2\delta_0}{1-\gamma}$, and we conclude by Theorem 2.2 that all moments up to $\frac{\delta_0}{1-\gamma}$ are finite.

• To prove non-existence of moments for $q > \frac{\delta_0}{1-\gamma}$: for any $\delta \in\, ]0,\delta_0[$, we check immediately that $Dg(x) \ge -\varepsilon$. Now choose $r > 1$ such that $r\delta > \delta_0$ and determine under which circumstances $Dg^r(x) \le C g(x)^{r-1}$. Computing explicitly, we get
$$Dg^r(x) \approx r\delta\,(c K_{r\delta,\theta} - 1)\, x^{r\delta+\gamma-1} \le C g(x)^{r - \frac{1-\gamma}{\delta}} \le C g(x)^{r-1}$$
whenever $\frac{1-\gamma}{\delta} > 1$, or equivalently $\frac{1}{\delta} > \frac{1}{1-\gamma}$. But the latter inequalities are always verified for $0 < \delta < \delta_0$. Similarly, for any $p$ such that $p\delta > \delta_0$, i.e. for $p > \frac{\delta_0}{\delta} > \frac{\delta_0}{1-\gamma}$, we get $Dg^p(x) \ge 0$. We conclude, by Theorem 2.2, that all moments of order $q > \frac{\delta_0}{1-\gamma}$ of $\tau_A$ fail to exist.

(b) If $\delta < 0$ and $c\pi\csc(\pi\theta) > \theta$, then $(g(\zeta_n))$ is a bounded supermartingale. We conclude by Theorem 2.1.

Lemma 2.10. Let $(\zeta_n)$ be the Markov chain of Theorem 1.4 and assume that $x$ is very large. For the Lyapunov function $g$ with $\delta < \theta < 1$, we have
$$Dg(x) = (x + x^\gamma)^\delta \int_0^{x+x^\gamma} \left[ \Big(1 - \frac{y}{x + x^\gamma}\Big)^{\delta} - 1 \right] \mu(dy) + \big((x + x^\gamma)^\delta - x^\delta\big)\, \mu([0, x + x^\gamma]) - x^\delta\, \mu([x + x^\gamma, \infty[).$$


Proof. Write simply
$$Dg(x) = \int_{\mathbb{R}_+} \big((x + x^\gamma - y)^+\big)^{\delta}\, \mu(dy) - x^\delta = (x + x^\gamma)^\delta \int_0^{x+x^\gamma} \Big(1 - \frac{y}{x + x^\gamma}\Big)^{\delta} \mu(dy) - x^\delta$$
$$= (x + x^\gamma)^\delta \int_0^{x+x^\gamma} \left[ \Big(1 - \frac{y}{x + x^\gamma}\Big)^{\delta} - 1 \right] \mu(dy) + (x + x^\gamma)^\delta\, \mu([0, x + x^\gamma]) - x^\delta.$$

Proof of Theorem 1.4. First we need to establish accessibility of the state $0$. But this is obvious, since from any $x > 0$ we have $\mathbb{P}(\alpha_1 > x + x^\gamma) > 0$.

We only sketch the proof, since it uses the same arguments as the proof of Theorem 1.3. It is enough to consider the case $c_y = c$, since the case $c_y \to c$ will only give rise to an additional corrective term that is negligible. With this proviso, the integral appearing in the right-hand side of the expression for $Dg(x)$ in Lemma 2.10 reads

$$\int_0^{x+x^\gamma} \left[ \Big(1 - \frac{y}{x + x^\gamma}\Big)^{\delta} - 1 \right] \mu(dy) = c\,(x + x^\gamma)^{-\theta} \int_0^1 \frac{(1-u)^\delta - 1}{u^{1+\theta}}\, du = c\,(x + x^\gamma)^{-\theta}\Big(L_{\delta,\theta} + \frac{1}{\theta}\Big),$$

where $L_{\delta,\theta}$ is defined in Lemma 2.8. It is further worth noting that $L_{\delta,\theta} \le 0$ for all $\delta \in \mathbb{R}_+$. Therefore,

1. If $\theta < 1-\gamma$, then the dominant terms in the expression of $Dg$ are those in $x^{\delta-\theta}$; hence, choosing $\delta > 0$, we get $Dg(x) \lesssim c\, x^{\delta-\theta}\, L_{\delta,\theta}$. Since the value of $Dg(x)$ is always negative, the process $(g(\zeta_n))$ is a supermartingale tending to infinity. We conclude by Theorem 2.1.

To establish the existence of all moments, it is enough to check that
$$Dg^p(x) \lesssim c\, x^{p\delta-\theta}\, L_{p\delta,\theta} \le -C g(x)^{p - \frac{\theta}{\delta}} \le -C g(x)^{p-2}$$
whenever $\delta > \theta/2$. But since $L_{\delta,\theta}$ is defined and negative for all positive $\delta$, we conclude that all positive moments of $\tau_0$ exist by Theorem 2.2.


2. When $\theta = 1-\gamma$, all terms are of the same order and
$$Dg(x) \approx x^{\delta-\theta}\,\big(c L_{\delta,\theta} + \delta\big).$$
From Lemma 2.8, for fixed $\theta$ and $c > 0$, there exists $\delta_0 > 0$ such that $c L_{\delta_0,\theta} + \delta_0 = 0$. We then conclude that, asymptotically for large $x$, the sign of the discrete Laplacian $Dg(x)$ is negative (positive) depending on the value of $\delta$ being smaller (larger) than $\delta_0$.

Choose $\delta > 0$ and $p$ such that $p\delta < \delta_0$. Then $Dg^p(x) \lesssim -C g(x)^{p - \frac{\theta}{\delta}} \le -C g(x)^{p-2}$ whenever $\frac{1}{\delta} \le \frac{2}{\theta}$ and, consequently, $p < \frac{2\delta_0}{\theta}$. We then conclude by Theorem 2.2 that $\mathbb{E}_x(\tau_0^q) < \infty$ for all $q < \frac{\delta_0}{\theta}$, as claimed.

To show that moments higher than $\frac{\delta_0}{\theta}$ fail to exist, choose $\delta < \delta_0$. It is then evident that $Dg(x) \gtrsim -C x^{\delta-\theta} \ge -\varepsilon$ for some $\varepsilon > 0$. There exists then $r > 1$ such that $r\delta > \delta_0$; estimating $Dg^r(x) \lesssim C g(x)^{r - \frac{\theta}{\delta}}$, we conclude immediately that $0 \le Dg^r(x) \le C g(x)^{r-1}$ whenever $\frac{\theta}{\delta} > 1$. We then conclude by Theorem 2.2 that $\mathbb{E}_x(\tau_0^q) = \infty$ for all $q > \frac{\delta_0}{\theta}$.

3. If $\theta > 1-\gamma$, the dominant term is $\delta\, x^{\delta+\gamma-1}\, \mu([0, x + x^\gamma])$, which can be made negative by choosing $\delta < 0$ and $x$ sufficiently large. We conclude by Theorem 2.1.

Remark 2.11. In this subsection, we assumed that the law $\mu$ of the random variables $(\alpha_n)$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}_+$. If instead the law is absolutely continuous with respect to the counting measure on the positive integers, the integrals in the expression of $Dg$ become sums. Now, the sums over the positive integers can be replaced by integrals. It turns out that the error committed in such a replacement is always a subleading term in the expression of $Dg$, leaving the conclusion unaffected.

Remark 2.12. The two previous theorems have been established by assuming that the random variables $(\alpha_n)$ are always positive and act in the opposite direction of the systematic drift $x^\gamma$. By examining the proofs of the theorems, however, it is evident that nothing will change if the random variables are two-sided, even with two-sided heavy tails, provided that
