Passage of Lévy Processes across Power Law Boundaries at Small Times

Jean Bertoin, Ronald Doney, Ross Maller

To cite this version: Jean Bertoin, Ronald Doney, Ross Maller. Passage of Lévy Processes across Power Law Boundaries at Small Times. Annals of Probability, Institute of Mathematical Statistics, 2008, 36 (1), pp. 160–197. DOI: 10.1214/009117907000000097. hal-00020500v2.


Abstract

We wish to characterise when a Lévy process X_t crosses boundaries like t^κ, κ > 0, in a one- or two-sided sense, for small times t; thus, we enquire when lim sup_{t↓0} |X_t|/t^κ, lim sup_{t↓0} X_t/t^κ and/or lim inf_{t↓0} X_t/t^κ are almost surely (a.s.) finite or infinite. Necessary and sufficient conditions are given for these possibilities for all values of κ > 0. Often (for many values of κ), when the limsups are finite a.s., they are in fact zero, as we show, but the limsups may in some circumstances take finite, nonzero, values, a.s. In general, the process crosses one- or two-sided boundaries in quite different ways, but surprisingly this is not so for the case κ = 1/2. An integral test is given to distinguish the possibilities in that case. Some results relating to other norming sequences for X, and when X is centered at a nonstochastic function, are also given.

2000 MSC Subject Classifications: primary: 60J30, 60F15; secondary: 60G10, 60G17, 60J65.

Keywords: Lévy processes, crossing power law boundaries, limiting and limsup behaviour.

This research was partially supported by ARC grants DP0210572 and DP0664603.

Laboratoire des Probabilités, Université Pierre et Marie Curie, Paris, France; email: jbe@ccr.jussieu.fr

Department of Mathematics, University of Manchester, Manchester M13 9PL, UK; email: rad@ma.man.ac.uk

Centre for Mathematical Analysis, and School of Finance and Applied Statistics, Australian National University, Canberra, Australia.

1 Introduction

Let X = (X_t, t ≥ 0) be a Lévy process with characteristic triplet (γ, σ², Π), where γ ∈ R, σ² ≥ 0, and the Lévy measure Π has ∫(x² ∧ 1) Π(dx) finite. See [1] and [15] for basic definitions and properties. Since we will only be concerned with local behaviour of X_t, as t ↓ 0, we can ignore the "big jumps" in X (those with modulus exceeding 1, say), and write its characteristic exponent, Ψ(θ) = (1/t) log E e^{iθX_t}, θ ∈ R, as

    Ψ(θ) = iγθ − σ²θ²/2 + ∫_{[−1,1]} (e^{iθx} − 1 − iθx) Π(dx).    (1.1)

The Lévy process is of bounded variation, for which we use the notation X ∈ bv, if and only if σ² = 0 and ∫_{|x|≤1} |x| Π(dx) < ∞, and in that case, we denote by

    δ := γ − ∫_{[−1,1]} x Π(dx)

its drift coefficient.

We will continue the work of Blumenthal and Getoor [3] and Pruitt [13], in a sense, by investigating the possible limiting values taken by t^{−κ} X_t as t ↓ 0, where κ > 0 is some parameter. Recall that Blumenthal and Getoor [3] introduced the upper-index

    β := inf{ α > 0 : ∫_{|x|≤1} |x|^α Π(dx) < ∞ } ∈ [0, 2],

which plays a critical role in this framework. Indeed, assuming for simplicity that the Brownian coefficient σ² is zero, and further that the drift coefficient δ is also 0 when X ∈ bv, then with probability one,

    lim sup_{t↓0} |X_t|/t^κ = 0 or ∞  according as  κ < 1/β or κ > 1/β.    (1.2)

See also Pruitt [13]. Note that the critical case when κ = 1/β is not covered by (1.2).

One application of this kind of study is to get information on the rate of growth of the process relative to power law functions, in both a one- and a two-sided sense, at small times. More precisely, we are concerned with the values of lim sup_{t↓0} |X_t|/t^κ and of lim sup_{t↓0} X_t/t^κ (and the behaviour of lim inf_{t↓0} X_t/t^κ can be deduced from the limsup behaviour by performing a sign reversal). For example, when

    lim sup_{t↓0} |X_t|/t^κ = ∞  a.s.,    (1.3)

then the regions {(t, y) ∈ [0, ∞) × R : |y| > a t^κ} are entered infinitely often for arbitrarily small t, a.s., for all a > 0. This can be thought of as a kind of "regularity" of X for these regions, at 0. We will refer to this kind of behaviour as crossing a "two-sided" boundary. On the other hand, when

    lim sup_{t↓0} X_t/t^κ = ∞  a.s.,    (1.4)

we have "one-sided" (up)crossings; and similarly for downcrossings, phrased in terms of the liminf. In general, the process crosses one- or two-sided boundaries in quite different ways, and, often (for many values of κ), when the limsups are finite a.s., they are in fact zero, a.s., as we will show. But the limsups may in some circumstances take finite, nonzero, values, a.s. Our aim here is to give necessary and sufficient conditions (NASC) which distinguish all these possibilities, for all values of κ > 0.

Let us eliminate at the outset certain cases which are trivial or easily deduced from known results. A result of Khintchine [10] (see Sato ([15], Prop. 47.11, p. 358)) is that, for any Lévy process X with Brownian coefficient σ² ≥ 0, we have

    lim sup_{t↓0} |X_t| / √(2t log |log t|) = σ  a.s.    (1.5)

Thus we immediately see that (1.3) and (1.4) cannot hold for 0 < κ < 1/2; we always have lim_{t↓0} X_t/t^κ = 0 a.s. in these cases. Of course, this also agrees with (1.2), since, always, β ≤ 2. More precisely, recall the decomposition X_t = σB_t + X_t^{(0)}, where X^{(0)} is a Lévy process with characteristics (γ, 0, Π) and B is an independent Brownian motion. Khintchine's law of the iterated logarithm for B and (1.5) applied to X^{(0)} give

    − lim inf_{t↓0} X_t / √(2t log |log t|) = lim sup_{t↓0} X_t / √(2t log |log t|) = σ  a.s.    (1.6)

So the one- and two-sided limsup behaviours of X are precisely determined when σ² > 0 (regardless of the behaviour of Π(·), which may not even be present). With these considerations, it is clear that throughout, we can assume

    σ² = 0.    (1.7)

Furthermore, we can restrict attention to the cases κ ≥ 1/2.

A result of Shtatland [16] and Rogozin [14] is that X ∉ bv if and only if

    − lim inf_{t↓0} X_t/t = lim sup_{t↓0} X_t/t = ∞  a.s.,

so (1.3) and (1.4) hold for all κ ≥ 1, in this case (and similarly for the liminf). On the other hand, when X ∈ bv, we have

    lim_{t↓0} X_t/t = δ  a.s.,

where δ is the drift of X (cf. [1], p. 84). Thus if δ > 0, (1.4) holds for all κ > 1, but for no κ ≤ 1, while if δ < 0, (1.4) can hold for no κ > 0; while (1.3) holds in either case, with κ > 1, but for no κ ≤ 1. Thus, when X ∈ bv, we need only consider the case δ = 0.

The main statements for two-sided (respectively, one-sided) boundary crossing will be given in Section 2 (respectively, Section 3) and proved in Section 4 (respectively, Section 5). We use throughout similar notation to [5], [6] and [7]. In particular, we write Π^# for the Lévy measure of −X, then Π^{(+)} for the restriction of Π to [0, ∞), Π^{(−)} for the restriction of Π^# to [0, ∞), and

    Π^{(+)}(x) = Π((x, ∞)),  Π^{(−)}(x) = Π((−∞, −x)),  Π(x) = Π^{(+)}(x) + Π^{(−)}(x),  x > 0,    (1.8)

for the tails of Π(·). Recall that we assume (1.7) and that the Lévy measure Π is restricted to [−1, 1]. We will often make use of the Lévy–Itô decomposition, which can be written as

    X_t = γt + ∫_{[0,t]×[−1,1]} x N(ds, dx),  t ≥ 0,    (1.9)

where N(dt, dx) is a Poisson random measure on R_+ × [−1, 1] with intensity dt Π(dx) and the Poissonian stochastic integral above is taken in the compensated sense. See Theorem 41 on page 31 in [12] for details.
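The representation (1.9) also suggests a quick numerical illustration of the dichotomy (1.2). The following sketch is our own illustration, not part of the original text: it simulates a driftless subordinator with the stable-like Lévy measure Π(dx) = α x^{−1−α} dx on (0, 1] (so β = α and δ = 0), discarding jumps below a small level eps; α, eps and the time grid are arbitrary choices, and the grid supremum is only a rough proxy for the a.s. limsup.

    import numpy as np

    # Illustrative sketch (not from the paper): driftless subordinator with
    # Pi(dx) = alpha * x**(-1-alpha) dx on (0,1], so beta = alpha and, by (1.2),
    # lim sup |X_t|/t^kappa is 0 for kappa < 1/alpha and infinite for kappa > 1/alpha.
    rng = np.random.default_rng(0)
    alpha, eps = 0.5, 1e-8                       # assumed parameters; 1/alpha = 2

    rate = eps**(-alpha) - 1.0                   # Pi((eps, 1]): rate of retained jumps
    n = rng.poisson(rate)                        # number of jumps on [0, 1]
    times = rng.uniform(0.0, 1.0, n)
    u = rng.uniform(size=n)
    # inverse-CDF sampling of jump sizes from Pi restricted to (eps, 1]
    sizes = (1.0 + u * (eps**(-alpha) - 1.0))**(-1.0/alpha)

    t_grid = np.logspace(-6, 0, 400)
    X = np.array([sizes[times <= t].sum() for t in t_grid])
    for kappa in (1.0, 1.5, 2.5):
        print(f"kappa = {kappa}: sup over grid of X_t/t^kappa = {np.max(X / t_grid**kappa):.3g}")

Runs of this kind should give moderate values for κ below 1/α and values that blow up as the grid is refined for κ above 1/α, in line with (1.2) and with Theorem 2.1 below.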

2 Two-Sided Case

In this section we study two-sided crossings of power law boundaries at small times. We wish to find a necessary and sufficient condition for (1.3) for each value of κ ≥ 1/2. This question is completely answered in the next two theorems, where the first can be viewed as a reinforcement of (1.2).

Theorem 2.1 Assume (1.7), and take κ > 1/2. When X ∈ bv, assume its drift is zero.

(i) If

    ∫_0^1 Π(x^κ) dx < ∞,    (2.1)

then we have

    lim_{t↓0} X_t/t^κ = 0  a.s.    (2.2)

(ii) Conversely, if (2.1) fails, then

    lim sup_{t↓0} |X_t − a(t)|/t^κ = ∞  a.s.,

for any deterministic function a(t) : [0, ∞) → R.

Remark 1 It is easy to check that (2.1) is equivalent to

    ∫_{[−1,1]} |x|^{1/κ} Π(dx) < ∞.

The latter holds for 0 < κ ≤ 1/2 for any Lévy process, as a fundamental property of the Lévy canonical measure ([1], p. 13). (2.2) always holds when 0 < κ < 1/2, as mentioned in Section 1, but not necessarily when κ = 1/2.
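As a quick illustration (ours, not the authors'): if the two-sided tail is stable-like, Π(x) = x^{−α} for 0 < x ≤ 1 with 0 < α < 2 (so that β = α), then

    ∫_0^1 Π(x^κ) dx = ∫_0^1 x^{−ακ} dx,

which is finite precisely when ακ < 1, i.e. when κ < 1/β. This matches the dichotomy (1.2), with Theorem 2.1 covering all κ > 1/2 away from the critical index κ = 1/β.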

The case κ = 1/2 which is excluded in Theorem 2.1 turns out to have interesting and unexpected features. To put these in context, let's first review some background. Khintchine [10] (see also Sato ([15], Prop. 47.12, p. 358)) showed that, given any function h(t), positive, continuous, and nondecreasing in a neighbourhood of 0, and satisfying h(t) = o(√(t log |log t|)) as t ↓ 0, there is a Lévy process with σ² = 0 such that lim sup_{t↓0} |X_t|/h(t) = ∞ a.s. In particular, taking h(t) = √t, we obtain a Lévy process for which (2.1) holds with κ = 1/2 (recall Remark 1), yet the process satisfies lim sup_{t↓0} |X_t|/√t = ∞ a.s. Thus the implication (2.1) ⇒ (2.2) is not in general true when κ = 1/2.

On the other hand, when κ = 1/2, Theorem 2.1 remains true for example when X ∈ bv, in the sense that both (2.1) and (2.2) then hold, as follows from the fact that X_t = O(t) a.s. as t ↓ 0.

Thus we can have lim sup_{t↓0} |X_t|/√t equal to 0 or ∞ a.s., and we are led to ask for a NASC to decide between the alternatives. We give such a condition in Theorem 2.2 and furthermore show that lim sup_{t↓0} |X_t|/√t may take a positive finite value, a.s. Remarkably, Theorem 2.2 simultaneously solves the one-sided problem. These one-sided cases are further investigated in Section 3, where it will be seen that, by contrast, the one- and two-sided situations are completely different when κ ≠ 1/2.

To state the theorem, we need the notation

    V(x) = ∫_{|y|≤x} y² Π(dy),  x > 0.    (2.3)

Theorem 2.2 (The case κ = 1/2.) Assume (1.7), and put

    I(a) = ∫_0^1 x^{−1} exp( −a²/(2V(x)) ) dx  and  λ*_I := inf{a > 0 : I(a) < ∞} ∈ [0, ∞]

(with the convention, throughout, that the inf of the empty set is +∞). Then, a.s.,

    − lim inf_{t↓0} X_t/√t = lim sup_{t↓0} X_t/√t = lim sup_{t↓0} |X_t|/√t = λ*_I.    (2.4)

Remark 2 (i) (2.4) forms a nice counterpart to the iterated log version in (1.5) and (1.6).

(ii) If (2.1) holds for some κ > 1/2, then V(x) = o(x^{2−1/κ}) as x ↓ 0, so ∫_0^1 exp{−a²/2V(x)} dx/x converges for all a > 0. Thus λ*_I = 0 and lim_{t↓0} X_t/√t = 0 a.s. in this case, according to Theorem 2.2. Of course, this agrees with Theorem 2.1(i).

(iii) The convergence of ∫_{|x|≤e^{−e}} x² log |log |x|| Π(dx) implies that V(x) = o(1/log |log x|) as x ↓ 0, hence that I(a) < ∞ for every a > 0; so λ*_I = 0 and we have lim_{t↓0} |X_t|/√t = 0 a.s. for all such Lévy processes. A finite positive value, a.s., for lim sup_{t↓0} |X_t|/√t can occur only in a small class of Lévy processes whose canonical measures have Π(dx) close to |x|^{−3} dx near 0. For example, we can find a Π such that, for small x, V(x) = 1/log |log x|. Then ∫_0^{1/2} exp{−a²/2V(x)} dx/x = ∫_0^{1/2} |log x|^{−a²/2} dx/x is finite for a > √2 but infinite for a ≤ √2. Thus lim sup_{t↓0} |X_t|/√t = √2 a.s. for this process; in fact, lim sup_{t↓0} X_t/√t = √2 a.s., and lim inf_{t↓0} X_t/√t = −√2 a.s.
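A short numerical check of the example just given (our illustration, assuming V(x) = 1/log |log x| exactly on (0, 1/2]): after the substitution u = −log x, the truncated integral of I(a) over [e^{−U}, 1/2] becomes ∫_{log 2}^U u^{−a²/2} du, which stabilises as U grows if and only if a > √2.

    import numpy as np
    from scipy.integrate import quad

    def truncated_I(a, U):
        # I(a) restricted to [exp(-U), 1/2] for the assumed V(x) = 1/log|log x|,
        # written in the variable u = -log x
        val, _ = quad(lambda u: u ** (-a ** 2 / 2.0), np.log(2.0), U, limit=200)
        return val

    for a in (1.2, 1.4, 2.0 ** 0.5, 1.6, 2.0):
        print(f"a = {a:.3f}:", [round(truncated_I(a, U), 2) for U in (1e2, 1e4, 1e6)])

The truncated values keep growing for a ≤ √2 and settle down for a > √2, which is the divergence/convergence pattern that produces λ*_I = √2 here.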

(iv) Theorem 2.2 tells us that the only possible a.s. limit, as t ↓ 0, of X_t/√t is 0, and that this occurs iff λ*_I = 0, i.e., iff I(a) < ∞ for all a > 0. Similarly, the iterated log version in (1.6) gives that the only possible a.s. limit, as t ↓ 0, of X_t/√(t log |log t|) is 0, and that this occurs iff σ² = 0. When κ > 1/2, Theorem 2.1 gives that lim_{t↓0} X_t/t^κ = 0 a.s. iff ∫_0^1 Π(x^κ) dx < ∞, provided, when κ ≥ 1, the drift δ = 0.

Another result in this vein is that we can have lim_{t↓0} X_t/t^κ = δ a.s. for a constant δ with 0 < |δ| < ∞, and κ > 0, iff κ = 1, X ∈ bv, δ is the drift, and δ ≠ 0.

The following corollary shows that centering has no effect in the two-sided case.

Corollary 2.1 Assume (1.7), and, if X ∈ bv, assume it has drift zero. Suppose lim sup_{t↓0} |X_t|/t^κ = ∞ a.s., for some κ ≥ 1/2. Then

    lim sup_{t↓0} |X_t − a(t)|/t^κ = ∞  a.s., for any nonstochastic a(t).    (2.5)

Finally, in this section, Table 1 summarises the conditions for (1.3):

Table 1

    Value of κ                    NASC for lim sup_{t↓0} |X_t|/t^κ = ∞ a.s. (when σ² = 0)
    0 ≤ κ < 1/2                   Never true
    κ = 1/2                       λ*_I = ∞ (see Theorem 2.2)
    1/2 < κ ≤ 1                   ∫_0^1 Π(x^κ) dx = ∞
    κ > 1, X ∈ bv, δ = 0          ∫_0^1 Π(x^κ) dx = ∞
    κ > 1, X ∈ bv, δ ≠ 0          Always true
    κ > 1, X ∉ bv                 Always true

3 One-Sided Case

We wish to test for the finiteness or otherwise of lim sup_{t↓0} X_t/t^κ, so we proceed by finding conditions for

    lim sup_{t↓0} X_t/t^κ = +∞  a.s.    (3.1)

In view of the discussion in Section 1, and the fact that the case κ = 1/2 is covered in Theorem 2.2, we have only two cases to consider:

(a) X ∉ bv, 1/2 < κ < 1;

(b) X ∈ bv, with drift δ = 0, κ > 1.

For Case (a), we need to define, for 1 ≥ y > 0, and for λ > 0,

    W(y) := ∫_0^y ∫_x^1 z Π^{(+)}(dz) dx,

and then

    J(λ) := ∫_0^1 exp{ −λ ( y^{(2κ−1)/κ} / W(y) )^{κ/(1−κ)} } dy/y.    (3.2)

Also let λ*_J := inf{λ > 0 : J(λ) < ∞}.

Theorem 3.1 Assume (1.7) and keep 1/2 < κ < 1. Then (3.1) holds if and only if

(i) ∫_0^1 Π^{(+)}(x^κ) dx = ∞, or

(ii) ∫_0^1 Π^{(+)}(x^κ) dx < ∞ = ∫_0^1 Π^{(−)}(x^κ) dx, and λ*_J = ∞.

When (i) and (ii) fail, we have in greater detail: suppose

(iii) ∫_0^1 Π(x^κ) dx < ∞, or

(iv) ∫_0^1 Π^{(+)}(x^κ) dx < ∞ = ∫_0^1 Π^{(−)}(x^κ) dx and λ*_J = 0.

Then

    lim sup_{t↓0} X_t/t^κ = 0  a.s.    (3.3)

Alternatively, suppose

(v) ∫_0^1 Π^{(+)}(x^κ) dx < ∞ = ∫_0^1 Π^{(−)}(x^κ) dx and λ*_J ∈ (0, ∞).

Then lim sup_{t↓0} X_t/t^κ is a.s. equal to a finite, positive constant.    (3.4)

Remark 3 (i) Using the integral criterion in terms of J(λ), it's easy to give examples of all three possibilities (0, ∞, or in (0, ∞)) for lim sup_{t↓0} X_t/t^κ, in the situation of Theorem 3.1.
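To illustrate (our computation, not the authors'; it presumes that Π^{(+)} can be chosen so that W has the stated behaviour near 0, as the remark asserts): if W(y) behaves like y^{(2κ−1)/κ} (log |log y|)^{−(1−κ)/κ} as y ↓ 0, then (y^{(2κ−1)/κ}/W(y))^{κ/(1−κ)} is log |log y| up to constants, so the integrand of J(λ) is comparable to |log y|^{−cλ}/y and J(λ) < ∞ precisely for λ large enough, giving λ*_J ∈ (0, ∞). Replacing the factor (log |log y|)^{−(1−κ)/κ} by |log y|^{−(1−κ)/κ} makes the integrand comparable to y^{c′λ−1}, so J(λ) < ∞ for every λ > 0 and λ*_J = 0; replacing it by a constant makes the integrand comparable to 1/y, so J(λ) = ∞ for every λ > 0 and λ*_J = ∞.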

(ii) Note that X ∉ bv when ∫_0^1 Π^{(+)}(x^κ) dx = ∞ or ∫_0^1 Π^{(−)}(x^κ) dx = ∞ in Theorem 3.1, because Π^{(±)}(x^κ) ≤ Π^{(±)}(x) when 0 < x < 1 and κ < 1, so ∫_0^1 Π(x) dx = ∞.

(iii) It may seem puzzling at first that a second moment-like function, V(·), appears in Theorem 2.2, whereas W(·), a kind of integrated first moment function, appears in Theorem 3.1. Though closely related, in general, V(x) is not asymptotically equivalent to W(x), as x → 0, and neither function is asymptotically equivalent to yet another second moment-like function on [0, ∞), U(x) := V(x) + x²Π(x). V(x) arises naturally in the proof of Theorem 2.2, which uses a normal approximation to certain probabilities, whereas W(x) arises naturally in the proof of Theorem 3.1, which uses Laplace transforms and works with spectrally one-sided Lévy processes. It is possible to reconcile the different expressions; in fact, Theorem 2.2 remains true if V is replaced in the integral I by U or by W. Thus these three functions are equivalent in the context of Theorem 2.2 (but not in general). We explain this in a little more detail following the proof of Theorem 3.1.

Next we turn to Case (b). When X ∈ bv we can define, for 0 < x < 1,

    A_+(x) = ∫_0^x Π^{(+)}(y) dy  and  A_−(x) = ∫_0^x Π^{(−)}(y) dy.    (3.5)

Theorem 3.2 Assume (1.7), suppose κ > 1, X ∈ bv, and its drift δ = 0. If

    ∫_{(0,1]} Π^{(+)}(dx) / ( x^{−1/κ} + A_−(x)/x ) = ∞    (3.6)

then (3.1) holds. Conversely, if (3.6) fails, then lim sup_{t↓0} X_t/t^κ ≤ 0 a.s.
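As an illustration of how (3.6) operates (our computation, for an assumed stable-like example): suppose Π^{(+)}(x) ≍ x^{−α_+} and Π^{(−)}(x) ≍ x^{−α_−} near 0, with 0 < α_± < 1 (so X ∈ bv), and δ = 0. Then A_−(x) ≍ x^{1−α_−}, so the denominator in (3.6) is of order x^{−max(1/κ, α_−)}, while Π^{(+)}(dx) contributes mass of order x^{−1−α_+} dx; the integral therefore diverges when α_+ > max(1/κ, α_−) and converges when α_+ < max(1/κ, α_−) (the boundary cases depend on the constants). In other words, the upcrossing (3.1) requires both κ > 1/α_+, so that t^κ eventually falls below the typical size t^{1/α_+} of the positive part, and α_+ > α_−, so that the positive jumps dominate the negative ones.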

Remark 4 (i) It's natural to enquire whether (3.6) can be simplified by considering separately integrals containing the components of the integrand in (3.6). This is not the case. For each κ > 1, it is possible to find a Lévy process X ∈ bv with drift 0 for which (3.6) fails but

    ∫_{(0,1]} x^{1/κ} Π^{(+)}(dx) = ∞ = ∫_{(0,1]} ( x/A_−(x) ) Π^{(+)}(dx).

The idea is to construct a continuous increasing concave function which is linear on a sequence of intervals tending to 0, which can serve as an A_−(x), and which oscillates around the function x ↦ x^{1−1/κ}. Note that (3.6) is equivalent to

    ∫_{(0,1]} min( x^{1/κ}, x/A_−(x) ) Π^{(+)}(dx) = ∞.

We will omit the details of the construction.

(ii) It is possible to have lim sup_{t↓0} X_t/t^κ < 0 a.s., in the situation of Theorem 3.2, when (3.6) fails; for example, when X is the negative of a subordinator with zero drift. The value of the limsup can then be determined by applying Lemma 5.3 in Section 5.

(iii) For another equivalence, we note that (3.6) holds if and only if

    ∫_0^1 Π^{(+)}( t^κ + X_t^{(−)} ) dt = ∞  a.s.,    (3.7)

where X^{(−)} is a subordinator with drift 0 and Lévy measure Π^{(−)}. This can be deduced from Erickson and Maller [8], Theorem 1, and provides a connection between the a.s. divergence of the Lévy integral in (3.7) and the upcrossing condition (3.1).

Table 2 summarises the conditions for (3.1):

Table 2

    Value of κ                        NASC for lim sup_{t↓0} X_t/t^κ = ∞ a.s. (when σ² = 0)
    0 ≤ κ < 1/2                       Never true
    κ = 1/2, X ∉ bv                   See Theorem 2.2
    1/2 < κ < 1, X ∉ bv               See Theorem 3.1
    1/2 ≤ κ ≤ 1, X ∈ bv               Never true
    κ > 1, X ∈ bv, δ < 0              Never true
    κ > 1, X ∈ bv, δ = 0              See Theorem 3.2
    κ > 1, X ∈ bv, δ > 0              Always true
    κ ≥ 1, X ∉ bv                     Always true

Our final theorem applies the foregoing results to give a criterion for

    lim_{t↓0} X_t/t^κ = +∞  a.s.    (3.8)

This is a stronger kind of divergence of the normed process to ∞, for small times. A straightforward analysis of cases, using our one- and two-sided results, shows that (3.8) never occurs if 0 < κ ≤ 1, if κ > 1 and X ∉ bv, or if κ > 1 and X ∈ bv with negative drift. If κ > 1 and X ∈ bv with positive drift, (3.8) always occurs. That leaves just one case to consider, in:

Theorem 3.3 Assume (1.7), suppose κ > 1, X ∈ bv, and its drift δ = 0. Then (3.8) holds iff

    K_X(d) := ∫_0^1 (dy/y) exp{ −d (A_+(y))^{κ/(κ−1)} / y } < ∞, for all d > 0,    (3.9)

and

    ∫_{(0,1]} ( x/A_+(x) ) Π^{(−)}(dx) < ∞.    (3.10)
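For orientation (our worked example, not in the text): take X spectrally positive, X = X^{(+)} a driftless subordinator with A_+(y) ≍ y^{1−α} near 0 for some 0 < α < 1, a stable-like case. Condition (3.10) is then vacuous, and the exponent in (3.9) is of order d y^{(1−α)κ/(κ−1) − 1}. When κ > 1/α this power of y is negative, so the integrand vanishes rapidly as y ↓ 0 and K_X(d) < ∞ for every d > 0; when κ ≤ 1/α the exponent stays bounded and K_X(d) = ∞ for every d. Thus, in this example, (3.8) holds exactly when κ > 1/α, matching the small-time behaviour X_t ≈ t^{1/α}.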

Concluding Remarks. It's natural to enquire about a one-sided version of Corollary 2.1: when is

    lim sup_{t↓0} ( X_t − a(t) )/t^κ < ∞  a.s., for some nonstochastic a(t)?    (3.11)

Phrased in such a general way the question is not interesting since we can always make X_t = o(a(t)) a.s. as t ↓ 0 by choosing a(t) large enough by comparison with X_t (e.g., a(t) such that a(t)/√(t log |log t|) → ∞, as t ↓ 0, will do, by (1.5)), so the limsup in (3.11) becomes negative. So we would need to restrict a(t) in some way. Section 3 deals with the case a(t) = 0. Another choice is to take a(t) as a natural centering function such as EX_t or as a median of X_t. However, in our small time situation, EX_t is essentially 0 or the drift of X, so we are led back to the case a(t) = 0 again (and similarly for the median). Of course there may be other interesting choices of a(t) in some applications, and there is the wider issue of replacing t^κ by a more general norming function. Some of our results in Sections 4 and 5 address the latter, but we will not pursue these points further here.

4 Proofs for Section 2

4.1 Proof of Theorem 2.1

The proof relies on a pair of technical results which we will establish first. Recall the notation V(x) in (2.3).

Proposition 4.1 Let b : R_+ → [0, ∞) be any non-decreasing function such that

    ∫_0^1 Π(b(x)) dx < ∞  and  ∫_0^1 V(b(x)) b^{−2}(x) dx < ∞.

Then

    lim sup_{t↓0} |X_t − a(t)|/b(4t) ≤ 1  a.s.,

where

    a(t) := γt − ∫_0^t ds ∫_{b(s)<|x|≤1} x Π(dx),  t ≥ 0.    (4.1)

Proof of Proposition 4.1: Recall the Lévy–Itô decomposition (1.9). In this setting, it is convenient to introduce

    X_t^{(1)} := ∫_{[0,t]×[−1,1]} 1_{{|x|≤b(s)}} x N(ds, dx)  and  X_t^{(2)} := γt + ∫_{[0,t]×[−1,1]} 1_{{b(s)<|x|≤1}} x N(ds, dx),

where again the stochastic integrals are taken in the compensated sense. Plainly, X = X^{(1)} + X^{(2)}.

The assumption ∫_0^1 Π(b(x)) dx < ∞ implies that

    N({(s, x) : 0 ≤ s ≤ t and b(s) < |x| ≤ 1}) = 0

whenever t > 0 is sufficiently small, a.s., and in this situation X^{(2)} is just γt minus the compensator, a.s.; i.e.,

    X_t^{(2)} = γt − ∫_0^t ds ∫_{b(s)<|x|≤1} x Π(dx) = a(t).

On the one hand, X^{(1)} is a square-integrable martingale with oblique bracket

    ⟨X^{(1)}⟩_t = ∫_0^t ds ∫_{|x|≤b(s)} x² Π(dx) = ∫_0^t V(b(s)) ds ≤ t V(b(t)).

By Doob's maximal inequality, we have for every t ≥ 0

    P( sup_{0≤s≤t} |X_s^{(1)}| > b(2t) ) ≤ 4t V(b(t)) b^{−2}(2t).

On the other hand, the assumptions that b(t) is non-decreasing and that ∫_0^1 dx V(b(x)) b^{−2}(x) < ∞ entail

    Σ_{n=1}^∞ 2^{−n} V(b(2^{−n})) b^{−2}(2^{−n+1}) < ∞.

By the Borel–Cantelli lemma, we thus see that

    lim sup_{n→∞} sup_{0≤s≤2^{−n}} |X_s^{(1)}| / b(2^{−n+1}) ≤ 1  a.s.,

and the proof is completed by a standard argument of monotonicity.

Proposition 4.2 Suppose there are deterministic functions a : R_+ → R and b : (0, ∞) → (0, ∞), with b measurable, such that

    P( lim sup_{t↓0} |X_t − a(t)|/b(t) < ∞ ) > 0.    (4.2)

Then there is some finite constant C such that

    ∫_0^1 Π(Cb(x)) dx < ∞.    (4.3)

Proof of Proposition 4.2: Symmetrise X by subtracting an independent equally distributed X′ to get X_t^{(s)} = X_t − X_t′, t ≥ 0. Then (4.2) and Blumenthal's 0–1 law imply there is some finite constant C such that

    lim sup_{t↓0} |X_t^{(s)}|/b(t) < C/2,  a.s.    (4.4)

Suppose now that (4.3) fails. Note that Π^{(s)}(·) = 2Π(·), where Π^{(s)} is the Lévy measure of X^{(s)}, so that ∫_0^1 Π^{(s)}(Cb(x)) dx = ∞. Then from the Lévy–Itô decomposition, we have that, for every ε > 0,

    #{ t ∈ (0, ε] : |Δ_t^{(s)}| > Cb(t) } = ∞,  a.s.,

where Δ_t^{(s)} = X_t^{(s)} − X_{t−}^{(s)}. But whenever |Δ_t^{(s)}| > Cb(t), we must have |X_{t−}^{(s)}| > Cb(t)/2 or |X_t^{(s)}| > Cb(t)/2; which contradicts (4.4). Thus (4.3) holds.

Finally, we will need an easy deterministic bound.

Lemma 4.1 Fix some κ ≥ 1/2 and assume

    ∫_{|x|<1} |x|^{1/κ} Π(dx) < ∞.    (4.5)

When κ ≥ 1, X ∈ bv and we suppose further that the drift coefficient δ = γ − ∫_{|x|≤1} x Π(dx) is 0. Then, as t → 0,

    a(t) = γt − ∫_0^t ds ∫_{s^κ<|x|≤1} x Π(dx) = o(t^κ).

Proof of Lemma 4.1: Suppose first κ < 1. For every 0 < ε < η < 1, we have

    ∫_{ε<|x|≤1} |x| Π(dx) ≤ ε^{1−1/κ} ∫_{|x|≤η} |x|^{1/κ} Π(dx) + ∫_{η<|x|≤1} |x| Π(dx) = ε^{1−1/κ} o_η + c(η), say,

where, by (4.5), lim_{η↓0} o_η = 0. Since κ < 1, it follows that

    lim sup_{t↓0} |a(t)| t^{−κ} ≤ κ^{−1} o_η,

and as we can take η arbitrarily small, we conclude that a(t) = o(t^κ).

In the case κ ≥ 1, X has bounded variation with zero drift coefficient. We may rewrite a(t) in the form

    a(t) = ∫_0^t ds ∫_{|x|≤s^κ} x Π(dx).

The assumption (4.5) entails ∫_{|x|≤ε} |x| Π(dx) = o(ε^{1−1/κ}) and we again conclude that a(t) = o(t^κ).

We now have all the ingredients to establish Theorem 2.1.

Proof of Theorem 2.1: Keep κ > 1/2 throughout.

(i) Suppose (2.1) holds, which is equivalent to (4.5). Writing |x|^{1/κ} = |x|^{1/κ−2} x², we see from an integration by parts (Fubini's theorem) that ∫_0^1 V(x) x^{1/κ−3} dx < ∞. Note that the assumption that κ ≠ 1/2 is crucial in this step. The change of variables x = y^κ now gives that ∫_0^1 V(y^κ) y^{−2κ} dy < ∞. We may thus apply Proposition 4.1 with b(x) = x^κ and get that

    lim sup_{t↓0} |X_t − a(t)|/t^κ ≤ 4^κ  a.s.,

where a(t) is as in Lemma 4.1. We thus have shown that when (2.1) holds,

    lim sup_{t↓0} |X_t|/t^κ ≤ 4^κ  a.s.

For every c > 0, the time-changed process X_{ct} is a Lévy process with Lévy measure cΠ, so we also have lim sup_{t↓0} |X_{ct}| t^{−κ} ≤ 4^κ a.s. As we may take c as large as we wish, we conclude that

    lim_{t↓0} |X_t|/t^κ = 0  a.s.

(ii) By Proposition 4.2, if

    P( lim sup_{t↓0} |X_t − a(t)|/t^κ < ∞ ) > 0,

then ∫_0^1 Π(Cx^κ) dx < ∞ for some finite constant C. By an obvious change of variables, this shows that (2.1) must hold. This completes the proof of Theorem 2.1.
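For completeness, the integration-by-parts step used in part (i) can be spelled out as follows (a routine verification, not written out in the original): by Fubini's theorem,

    ∫_0^1 V(x) x^{1/κ−3} dx = ∫_{|y|≤1} y² ( ∫_{|y|}^1 x^{1/κ−3} dx ) Π(dy) ≤ (2 − 1/κ)^{−1} ∫_{|y|≤1} |y|^{1/κ} Π(dy),

since 1/κ − 2 < 0 for κ > 1/2, so that ∫_{|y|}^1 x^{1/κ−3} dx ≤ |y|^{1/κ−2}/(2 − 1/κ); the right-hand side is finite by (4.5). At κ = 1/2 the inner integral is ∫_{|y|}^1 dx/x = log(1/|y|) instead, which is why the argument breaks down there.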

Finally we establish Corollary 2.1.

Proof of Corollary 2.1: This hinges on the fact that when (2.5) fails, then (X_t − a(t))/t^κ → 0 in probability, with a(t) defined as in (4.1) – even for κ = 1/2. We will

4.2 Proof of Theorem 2.2

We now turn our attention to Theorem 2.2 and develop some notation and material in this direction. Write, for b > 0,

    X_t = Y_t^{(b)} + Z_t^{(b)},    (4.6)

with

    Y_t^{(b)} := ∫_{[0,t]×[−1,1]} 1_{{|x|≤b}} x N(ds, dx),    (4.7)
    Z_t^{(b)} := γt + ∫_{[0,t]×[−1,1]} 1_{{b<|x|}} x N(ds, dx),

where N(ds, dx) is a Poisson random measure on [0, ∞) × [−1, 1] with intensity ds Π(dx), and the stochastic integrals are taken in the compensated sense.

Lemma 4.2 (No assumptions on X.) For every 0 < r < 1 and ε > 0, we have

    Σ_{n=1}^∞ P( sup_{0≤t≤r^n} |Z_t^{(r^{n/2})}| > ε r^{n/2} ) < ∞,

and as a consequence,

    lim_{n→∞} r^{−n/2} sup_{0≤t≤r^n} |Z_t^{(r^{n/2})}| = 0  a.s.

Proof of Lemma 4.2: Introduce, for every integer n, the set

    A_n := [0, r^n] × ( [−1, −r^{n/2}) ∪ (r^{n/2}, 1] ),

so that

    Σ_{n=1}^∞ P(N(A_n) > 0) ≤ Σ_{n=1}^∞ r^n Π(r^{n/2}) ≤ (1 − r)^{−1} Σ_{n=1}^∞ ∫_{r^{n+1}}^{r^n} Π(√x) dx ≤ (1 − r)^{−1} ∫_0^1 Π(√x) dx.

As the last integral is finite (always), we have from the Borel–Cantelli lemma that N(A_n) = 0 whenever n is sufficiently large, a.s.

On the other hand, on the event N(A_n) = 0, we have

    Z_t^{(r^{n/2})} = t ( γ − ∫_{r^{n/2}<|x|≤1} x Π(dx) ),  0 ≤ t ≤ r^n.

Again as a result of the convergence of ∫_{|x|≤1} x² Π(dx), the argument in Lemma 4.1 shows that the supremum over 0 ≤ t ≤ r^n of the absolute value of the right-hand side is o(r^{n/2}). The Borel–Cantelli lemma completes the proof.

In view of Lemma 4.2 we can concentrate on Y_t^{(√t)} in (4.6). We next prove:

Lemma 4.3 Let Y be a Lévy process with canonical measure Π_Y, satisfying EY_1 = 0 and m_4 < ∞, where m_k := ∫_{x∈R} |x|^k Π_Y(dx), k = 2, 3, ···.

(i) Then

    lim_{t↓0} (1/t) E|Y_t|³ = m_3.

(ii) For any x > 0, t > 0, we have the bound

    | P( Y_t > x √(t m_2) ) − F(x) | ≤ A m_3 / ( √t m_2^{3/2} (1 + x)³ ),    (4.8)

where

    F(x) = ∫_x^∞ e^{−y²/2} dy/√(2π) = (1/2) erfc(x/√2)

is the tail of the standard normal distribution function, and A is an absolute constant.

Proof of Lemma 4.3: (i) We can calculate EY_t⁴ = t m_4 + 3t² m_2². So by Chebychev's inequality for second and fourth moments, for x > 0, t > 0,

    (1/t) P(|Y_t| > x) ≤ (m_2/x²) 1_{{0<x≤1}} + ( (m_4 + 3t m_2²)/x⁴ ) 1_{{x>1}}.

We can also calculate

    (1/t) E|Y_t|³ = (3/t) ∫_0^∞ x² P(|Y_t| > x) dx.

By [1], Ex. 1, p. 39, P(|Y_t| > x)/t → Π_Y(x), as t ↓ 0, for each x > 0. The result (i) follows by dominated convergence.

(ii) Write Y_t = Σ_{i=1}^n Y(i, t), for n = 1, 2, ···, where Y(i, t) := Y(it/n) − Y((i − 1)t/n) are i.i.d., each with the distribution of Y(t/n). According to a non-uniform Berry–Esseen bound (Theorem 14, p. 125 of Petrov [11]), for each n = 1, 2, ···, (4.8) holds with the right-hand side replaced by

    A E|Y(t/n)|³ / ( √n (t m_2/n)^{3/2} (1 + x)³ ) = A ( E|Y(t/n)|³/(t/n) ) / ( √t m_2^{3/2} (1 + x)³ ).

By Part (i) this tends as n → ∞ to the right-hand side of (4.8).
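A small Monte Carlo sanity check of the normal approximation (our illustration, not from the paper; the uniform jump law, the rate, t and the sample size are arbitrary choices):

    import numpy as np
    from math import erfc, sqrt

    # Y is a compound Poisson process with jump measure lam * Uniform[-b, b](dx);
    # it is symmetric, so EY_1 = 0 and no compensation term is needed.
    rng = np.random.default_rng(3)
    lam, b, t, n_sim = 50.0, 1.0, 0.2, 100_000

    m2 = lam * b**2 / 3.0                    # int x^2 Pi_Y(dx)
    N = rng.poisson(lam * t, n_sim)
    Y = np.array([rng.uniform(-b, b, k).sum() for k in N])

    for x in (0.5, 1.0, 2.0):
        emp = np.mean(Y > x * sqrt(t * m2))
        gauss = 0.5 * erfc(x / sqrt(2.0))    # F(x), the standard normal tail
        print(f"x = {x}: empirical {emp:.4f}  vs  normal tail {gauss:.4f}")

The empirical tail probabilities should track the normal tail F(x) up to an error of the order described by the right-hand side of (4.8).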

Proposition 4.3 In the notation (4.7), we have, for a > 0, 0 < r < 1,

    Σ_{n≥0} P( Y_{r^n}^{(r^{n/2})} > a r^{n/2} ) < ∞  ⟺  ∫_0^1 √(V(x)) exp( −a²/(2V(x)) ) dx/x < ∞.    (4.9)

Proof of Proposition 4.3: For every fixed t > 0, Y_s^{(√t)} is the compensated sum of jumps of X smaller in magnitude than √t, up to time s. It is a centered Lévy process with canonical measure 1_{{|x|≤√t}} Π(dx), x ∈ R. Applying Lemma 4.3, we get m_2 = V(√t) and m_3 = ∫_{|y|≤√t} |y|³ Π(dy) = ρ(√t), say. Then we get, for x > 0,

    | P( Y_t^{(√t)} > x √(t V(√t)) ) − F(x) | ≤ A ρ(√t) / ( √(t V³(√t)) (1 + x)³ ).

Replacing x by a/√(V(√t)), a > 0, we have

    | P( Y_t^{(√t)} > a√t ) − F( a/√(V(√t)) ) | ≤ ε(t) := A ρ(√t) / ( √t a³ ),

and we claim that Σ ε(r^n) < ∞. In fact, for some c > 0,

    Σ_{n=0}^∞ ρ(r^{n/2})/r^{n/2}
      = Σ_{n=0}^∞ r^{−n/2} Σ_{j≥n} ∫_{r^{(j+1)/2}<|y|≤r^{j/2}} |y|³ Π(dy)
      = Σ_{j=0}^∞ ( Σ_{n=0}^j r^{−n/2} ) ∫_{r^{(j+1)/2}<|y|≤r^{j/2}} |y|³ Π(dy)
      ≤ c Σ_{j=0}^∞ r^{−j/2} ∫_{r^{(j+1)/2}<|y|≤r^{j/2}} |y|³ Π(dy)
      ≤ c Σ_{j=0}^∞ ∫_{r^{(j+1)/2}<|y|≤r^{j/2}} y² Π(dy)
      = c ∫_{|y|≤1} y² Π(dy) < ∞.

The result (4.9) follows, since the monotonicity of F shows that the convergence of Σ_{n≥1} F( a/√(V(r^{n/2})) ) is equivalent to that of

    ∫_0^1 F( a/√(V(√x)) ) dx/x = Σ_n ∫_{r^{n+1}}^{r^n} F( a/√(V(√x)) ) dx/x,

and it is well-known that F(x) ∼ (2π)^{−1/2} x^{−1} e^{−x²/2} as x → ∞.

We can now establish Theorem 2.2.

Proof of Theorem 2.2: Recall the definition of I(·) in the statement of Theorem 2.2. We will first show that for every given a > 0

    I(a) < ∞ ⇒ lim sup_{t↓0} X_t/√t ≤ a  a.s.    (4.10)

To see this, observe when I(a) < ∞, the integral in (4.9) converges, hence so does the series. Use the maximal inequality in Theorem 12, p. 50 of Petrov [11], to get, for t > 0, b > 0, x > 0,

    P( sup_{0<s≤t} Y_s^{(b)} > x )
      = lim_{k→∞} P( max_{1≤j≤⌈kt⌉} Y_{j/k}^{(b)} > x )
      = lim_{k→∞} P( max_{1≤j≤⌈kt⌉} Σ_{i=1}^j Δ(i, k, b) > x )
      ≤ lim sup_{k→∞} 2 P( Σ_{i=1}^{⌈kt⌉} Δ(i, k, b) > x − √(2⌈kt⌉V(b)/k) )
      = 2 P( Y_t^{(b)} > x − √(2tV(b)) ),    (4.11)

where we note that the Δ(i, k, b) := Y_{i/k}^{(b)} − Y_{(i−1)/k}^{(b)}, i ≥ 1, are i.i.d., each with expectation 0 and variance equal to V(b)/k. Given ε > 0, replace t by r^n, b by r^{n/2}, and x by a r^{n/2} + √(2r^n V(r^{n/2})), which is not larger than (a + ε) r^{n/2} once n is large enough, in (4.11). The convergence of the series in (4.9) then gives

    Σ_{n≥0} P( sup_{0<s≤r^n} Y_s^{(r^{n/2})} > (a + ε) r^{n/2} ) < ∞  for all ε > 0.

Hence by the Borel–Cantelli Lemma

    lim sup_{n→∞} sup_{0<t≤r^n} Y_t^{(r^{n/2})} / r^{n/2} ≤ a,  a.s.    (4.12)

Using (4.6), together with Lemma 4.2 and (4.12), gives

    lim sup_{n→∞} sup_{0<t≤r^n} X_t / r^{n/2} ≤ a,  a.s.

By an argument of monotonicity, this yields

    lim sup_{t↓0} X_t/√t ≤ a/√r,  a.s.

Then let r ↑ 1 to get lim sup_{t↓0} X_t/√t ≤ a a.s.

For a reverse inequality, we show that for every a > 0,

    I(a) = ∞ ⇒ lim sup_{t↓0} X_t/√t ≥ a  a.s.    (4.13)

To see this, suppose that I(a) = ∞ for a given a > 0. Then the integral in (4.9) diverges when a is replaced by a − ε for an arbitrarily small ε > 0, because V(x) ≥ ε exp(−ε/2V(x))/2, for ε > 0, x > 0. Hence, keeping in mind (4.6), Lemma 4.2 and Proposition 4.3, we deduce

    Σ_{n≥0} P( X_{r^n} > a′ r^{n/2} ) = ∞    (4.14)

for all a′ < a. For a given ε > 0, define for every integer n ≥ 0 the events

    A_n = { X_{r^n/(1−r)} − X_{r^{n+1}/(1−r)} > a′ r^{n/2} },
    B_n = { |X_{r^{n+1}/(1−r)}| ≤ ε r^{n/2} }.

Then the {A_n}_{n≥0} are independent, and each B_n is independent of the collection {A_n, A_{n−1}, ···, A_0}. Further, Σ_{n≥0} P(A_n) = ∞ by (4.14), so P(A_n i.o.) = 1. It can be deduced easily from [1], Prop. 2(i), p. 16, that X_t/√t → 0 in probability as t ↓ 0, since (1.7) is enforced. Thus P(B_n) → 1 as n → ∞, and then, by the Feller–Chung lemma ([4], p. 69) we can deduce that P(A_n ∩ B_n i.o.) = 1. This implies P( X_{r^n/(1−r)} > (a′ − ε) r^{n/2} i.o. ) = 1, thus

    lim sup_{t↓0} X_t/√t ≥ (a′ − ε) √(1 − r)  a.s.,

in which we can let a′ ↑ a, ε ↓ 0 and r ↓ 0 to get (4.13).

As just mentioned, we have X_t/√t → 0 in probability as t ↓ 0, so lim inf_{t↓0} X_t/√t ≤ 0 ≤ lim sup_{t↓0} X_t/√t a.s. Together with (4.10) and (4.13), this gives the statements in Theorem 2.2 (replace X by −X to deduce the liminf statements from the limsup, noting that this leaves V(·) unchanged).

5 Proofs for Section 3

5.1 Proof of Theorem 3.1

We start with some notation and technical results. Recall we assume (1.7). Take 0 < κ < 1 and suppose first

    ∫_0^1 Π^{(+)}(x^κ) dx = ∞.    (5.1)

Define, for 0 < x < 1,

    ρ_κ(x) = (1/κ) ∫_x^1 y^{1/κ−1} Π^{(+)}(y) dy = ∫_{x^{1/κ}}^1 Π^{(+)}(y^κ) dy.

Since Π^{(+)}(x) > 0 for all small x, ρ_κ(x) is strictly decreasing in a neighbourhood of 0, thus x^{−1/κ} ρ_κ(x) is also strictly decreasing in a neighbourhood of 0, and tends to ∞ as x ↓ 0 (because of (5.1)). Also define

    U_+(x) = 2 ∫_0^x y Π^{(+)}(y) dy.

By a similar argument, x^{−2} U_+(x) is strictly decreasing in a neighbourhood of 0, and tends to ∞ as x ↓ 0.

Next, given α ∈ (0, κ), define, for t > 0,

    c(t) = inf{ x > 0 : ρ_κ(x) x^{−1/κ} + x^{−2} U_+(x) ≤ α/t }.

Then 0 < c(t) < ∞ for t > 0, c(t) is strictly increasing, lim_{t↓0} c(t) = 0, and

    t ρ_κ(c(t)) / c^{1/κ}(t) + t U_+(c(t)) / c²(t) = α.    (5.2)

Since lim_{t↓0} ρ_κ(t) = ∞, we have lim_{t↓0} c(t)/t^κ = ∞.

We now point out that (5.1) can be reinforced as follows.

Lemma 5.1 The condition (5.1) implies that

    ∫_0^1 Π^{(+)}(c(t)) dt = ∞.

Proof of Lemma 5.1: We will first establish

    ∫_0^1 Π(dx) / ( x^{−1/κ} ρ_κ(x) + x^{−2} U_+(x) ) = ∞.    (5.3)

Suppose (5.3) fails. Since

    f(x) := 1 / ( x^{−1/κ} ρ_κ(x) + x^{−2} U_+(x) )

is nondecreasing (in fact, strictly increasing in a neighbourhood of 0) with f(0) = 0, for every ε > 0 there is an η > 0 such that, for all 0 < x < η,

    ε ≥ ∫_x^η f(z) Π(dz) ≥ f(x) ( Π^{(+)}(x) − Π^{(+)}(η) ),

giving

    f(x) Π^{(+)}(x) ≤ ε + f(x) Π^{(+)}(η) = ε + o(1), as x ↓ 0.

Letting ε ↓ 0 shows that

    lim_{x↓0} f(x) Π^{(+)}(x) = lim_{x↓0} ( Π^{(+)}(x) / ( x^{−1/κ} ρ_κ(x) + x^{−2} U_+(x) ) ) = 0.    (5.4)

It can be proved as in Lemma 4 of [6] that this implies

    lim_{x↓0} ( x^{−1/κ} ρ_κ(x) / ( x^{−2} U_+(x) ) ) = 0  or  lim inf_{x↓0} ( x^{−1/κ} ρ_κ(x) / ( x^{−2} U_+(x) ) ) > 0.    (5.5)

Then, since (5.3) has been assumed not to hold,

    ∫_0^{1/2} ( x^{1/κ} / ρ_κ(x) ) Π(dx) < ∞  or  ∫_0^{1/2} ( x² / U_+(x) ) Π(dx) < ∞.    (5.6)

Noting that, under (5.1),

    ρ_κ(x) = Π^{(+)}(1) − x^{1/κ} Π^{(+)}(x) + ∫_x^1 y^{1/κ} Π(dy) ≤ Π^{(+)}(1) + ∫_x^1 y^{1/κ} Π(dy) ∼ ∫_x^1 y^{1/κ} Π(dy), as x → 0,

we see that the first relation in (5.6) is impossible because it would imply the finiteness of

    ∫_0^{1/2} x^{1/κ} ( ∫_x^1 y^{1/κ} Π(dy) )^{−1} Π(dx);

but this is infinite by (5.1) and the Abel–Dini theorem. In a similar way, the second relation in (5.6) can be shown to be impossible. Thus (5.3) is proved.

Then note that the inverse function c^← of c exists and satisfies, by (5.2),

    c^←(x) = α / ( x^{−1/κ} ρ_κ(x) + x^{−2} U_+(x) ).

Thus, by (5.3),

    ∫_0^1 c^←(x) Π(dx) = ∞ = ∫_0^{c^←(1)} Π^{(+)}(c(x)) dx.    (5.7)

This proves our claim.

Proposition 5.1 For every κ < 1, (5.1) implies

    lim sup_{t↓0} X_t/t^κ = ∞  a.s.

Proof of Proposition 5.1: The argument relies on the analysis of the completely asymmetric case when the Lévy measure Π has support in [0, 1] or in [−1, 0]. Since κ < 1, we can assume γ = 0 without loss of generality, because of course γt = o(t^κ). The Lévy–Itô decomposition (1.9) then yields

    X_t = X̂_t + X̃_t    (5.8)

with

    X̂_t = ∫_{[0,t]×[0,1]} x N(ds, dx)  and  X̃_t = ∫_{[0,t]×[−1,0]} x N(ds, dx),    (5.9)

where, as usual, the Poissonian stochastic integrals are taken in the compensated sense.

Choose α so small that

    (1 + κ)α < 1/2  and  α/(1/2 − (1 + κ)α)² ≤ 1/2,

and then ε so small that c(ε) < 1. Observe that for every 0 < t < ε

    t ∫_{{c(t)<x≤1}} x Π(dx) = t c(t) Π^{(+)}(c(t)) + t λ(c(t)) ≤ t U_+(c(t))/c(t) + t λ(c(t)) ≤ α(1 + κ) c(t),    (5.10)

where the last inequality stems from (5.2) and

    λ(x) := ∫_x^1 Π^{(+)}(y) dy = ∫_x^1 y^{1−1/κ} y^{1/κ−1} Π^{(+)}(y) dy ≤ κ x^{1−1/κ} ρ_κ(x)

(since κ < 1), so

    t λ(c(t))/c(t) ≤ κ t ρ_κ(c(t)) / c^{1/κ}(t) ≤ κα  (by (5.2)).

We next deduce from Lemma 5.1 that for every ε > 0, the Poisson random measure N has infinitely many atoms in the domain {(t, x) : 0 ≤ t < ε and x > c(t)}, a.s. Introduce

    t_ε := sup{ t ≤ ε : N({t} × (c(t), 1]) = 1 },

the largest instant less than ε of such an atom. Our goal is to check that

    P( X_{t_ε−} ≥ −c(t_ε)/2 ) ≥ 1/33    (5.11)

for every ε > 0 sufficiently small, so that P( X_{t_ε} > c(t_ε)/2 ) > 1/33. Since t^κ = o(c(t)), it follows that for every a > 0

    P( ∃ t ≤ ε : X_t > a t^κ ) ≥ 1/33,

and hence lim sup_{t↓0} X_t/t^κ = ∞ with probability at least 1/33. The proof is completed by an appeal to Blumenthal's 0–1 law.

In order to establish (5.11), we will work henceforth conditionally on t_ε; recall from the Markov property of Poisson random measures that the restriction of N(dt, dx) to [0, t_ε) × [−1, 1] is still a Poisson random measure with intensity dt Π(dx).

Recalling (5.10) and discarding the jumps Δ̂ of X̂ such that Δ̂_s > c(t_ε) for 0 ≤ s < t_ε in the stochastic integral (5.9), we obtain the inequality

    X_{t_ε−} ≥ Ŷ_{t_ε−} − α(1 + κ) c(t_ε) + X̃_{t_ε−},    (5.12)

where Ŷ_{t_ε−} is given by the (compensated) Poissonian integral

    Ŷ_{t_ε−} := ∫_{[0,t_ε)×[0,c(t_ε)]} x N(ds, dx).

By a second moment calculation, there is the inequality

    P( |Ŷ_{t_ε−}| > (1/2 − α(1 + κ)) c(t_ε) )
      ≤ E|Ŷ_{t_ε−}|² / ( (1/2 − α(1 + κ))² c²(t_ε) )
      ≤ t_ε ∫_{{0<x≤c(t_ε)}} x² Π(dx) / ( (1/2 − α(1 + κ))² c²(t_ε) )
      ≤ t_ε U_+(c(t_ε)) / ( (1/2 − α(1 + κ))² c²(t_ε) )
      ≤ α / (1/2 − α(1 + κ))²,

where the last inequality derives from (5.2). By choice of α, the final expression does not exceed 1/2. We conclude that

    P( Ŷ_{t_ε−} − α(1 + κ) c(t_ε) ≥ −c(t_ε)/2 ) ≥ 1/2.    (5.13)

We will also use the fact that X̃ is a mean zero Lévy process which is spectrally negative (i.e., with no positive jumps), so

    lim inf_{t↓0} P( X̃_t > 0 ) ≥ 1/16;

see [9], p. 320. As furthermore X̃ is independent of Ŷ_{t_ε−}, we conclude from (5.12) and (5.13) that (5.11) holds provided that ε has been chosen small enough.

Now suppose (5.1) fails. The remaining results in Theorem 3.1 require the case κ > 1/2 of:

Proposition 5.2 Assume that Y is a spectrally negative Lévy process, has zero mean, and is not of bounded variation. Define, for y > 0, λ > 0,

    W_Y(y) := ∫_0^y ∫_x^1 z Π_Y^{(−)}(dz) dx,

and

    J_Y(λ) := ∫_0^1 exp{ −λ ( y^{(2κ−1)/κ} / W_Y(y) )^{κ/(1−κ)} } dy/y,    (5.14)

where Π_Y^{(−)} is the canonical measure of −Y, assumed carried on (0, 1], and let λ*_Y = inf{λ > 0 : J_Y(λ) < ∞}. Then with probability one, for 1/2 ≤ κ < 1,

    lim sup_{t↓0} Y_t/t^κ = ∞, ∈ (0, ∞), or = 0,  according as  λ*_Y = ∞, ∈ (0, ∞), or = 0.

The proof of Proposition 5.2 requires several intermediate steps. Take Y as described; then it has characteristic exponent

    Ψ_Y(θ) = ∫_{(0,1]} (e^{−iθx} − 1 + iθx) Π_Y^{(−)}(dx).

So we can work with the Laplace exponent

    ψ_Y(θ) = Ψ_Y(−iθ) = ∫_{(0,1]} (e^{−θx} − 1 + θx) Π_Y^{(−)}(dx),    (5.15)

such that E e^{θY_t} = e^{tψ_Y(θ)}, t ≥ 0, θ ≥ 0.

Let T = (T_t, t ≥ 0) denote the first passage process of Y; this is a subordinator whose Laplace exponent Φ is the inverse function to ψ_Y ([1], p. 189), and since Y(T_t) ≡ t we see that the alternatives in Proposition 5.2 can be deduced immediately from

    lim sup_{t↓0} Y_t/t^κ = ∞, ∈ (0, ∞), or = 0  ⟺  lim inf_{t↓0} T_t/t^{1/κ} = 0, ∈ (0, ∞), or = ∞.

The subordinator T must have zero drift since if lim_{t↓0} T_t/t := c > 0 a.s. then sup_{0<s≤T_t} Y_s = t (see [1], p. 191) would give lim sup_{t↓0} Y_t/t ≤ 1/c < ∞ a.s., thus Y ∈ bv, which is not the case. We can assume T has no jumps bigger than 1, and further exclude the trivial case when T is compound Poisson. So the main part of the proof of Proposition 5.2 is the following, which is a kind of analogue of Theorem 1 of Zhang [17].

Lemma 5.2 Let T be any subordinator with zero drift whose Lévy measure Π_T is carried by (0, 1] and has Π_T(0+) = ∞, where Π_T(x) = Π_T{(x, ∞)} for x > 0. Put m_T(x) = ∫_0^x Π_T(y) dy and for d > 0 let

    K_T(d) := ∫_0^1 (dy/y) exp{ −d (m_T(y))^{γ/(γ−1)} / y },  where γ > 1.    (5.16)

Let d*_K := inf{d > 0 : K_T(d) < ∞} ∈ [0, ∞]. Then, with probability one,

(i) d*_K = 0 iff lim_{t↓0} T_t/t^γ = ∞;

(ii) d*_K = ∞ iff lim inf_{t↓0} T_t/t^γ = 0;

(iii) d*_K ∈ (0, ∞) iff lim inf_{t↓0} T_t/t^γ = c, for some c ∈ (0, ∞).

Before beginning the proof of Lemma 5.2, we need some preliminary results. To start with, we need the following lemma.

Lemma 5.3 Let S = (S_t, t ≥ 0) be a subordinator, and a and γ positive constants. Then

    lim inf_{t↓0} S_t/t^γ ≤ a  a.s.    (5.17)

if and only if for every r ∈ (0, 1) and η > 0

    Σ_{n≥1} P( S_{r^n} ≤ (a + η) r^{nγ} ) = ∞.

Proof of Lemma 5.3: One way is obvious, so suppose

    Σ_n P( S_{r^n} ≤ a r^{nγ}/(1 − r) ) = ∞.    (5.18)

For a given ε > 0, define events

    A_n = { S_{r^n} − S_{r^{n+1}} ≤ a r^{nγ}/(1 − r) },  B_n = { S_{r^{n+1}} ≤ ε r^{(n+1)γ} },  n ≥ 0.

Then the {A_n}_{n≥0} are independent, and each B_n is independent of the collection {A_n, A_{n−1}, ···, A_0}. Further, Σ_{n≥0} P(A_n) = ∞ by (5.18) (recall S is a subordinator), so P(A_n i.o.) = 1. Then, by the Feller–Chung lemma ([4], p. 69) we can deduce that P(A_n ∩ B_n i.o.) = 1, provided P(B_n) is bounded away from 0: P(B_n) ≥ 1/2, say, for n large enough. To see that this is the case here, take b > 0 and ε ∈ (0, 1) and truncate the jumps of S (which is of bounded variation) at bε > 0, where b will be specified more precisely shortly. Thus, let S_t^ε = Σ_{0<s≤t} ΔS_s 1_{{ΔS_s≤bε}}. Now S_t − S_t^ε is nonzero only if there is at least one jump in S of magnitude greater than bε up till time t, and this has probability bounded above by t Π_S(bε), where Π_S is the Lévy measure of S. Then by a standard truncation argument, and using a first-moment Markov inequality,

    P( S_t > εt^γ ) ≤ t ∫_{(0,bε]} x Π_S(dx) / (εt^γ) + t Π_S(bε)
                   ≤ t ∫_{(0,bε]} x Π_S(dx) / (εt^γ) + t Π_S(bε)/(εt^γ)  (once εt^γ ≤ 1)
                   = t m_T(bε)/(εt^γ)
                   ≤ t m_T(b)/(εt^γ).

Now choose b = h(εt^{γ−1}/2), where h(·) is the inverse function to m_T(·). Then the last ratio is smaller than 1/2. Replacing t by r^{n+1} in this gives P(B_n) ≥ 1/2 for n large enough. Finally P(A_n ∩ B_n i.o.) = 1 implies P( S_{r^n} ≤ r^{nγ}( a/(1 − r) + ε r^γ ) i.o. ) = 1, thus

    lim inf_{t↓0} S_t/t^γ ≤ a/(1 − r) + ε r^γ  a.s.,

in which we can let ε ↓ 0 and r ↓ 0 to get (5.17).

Applying Lemma 5.3 to T_t, we see that the alternatives in (i)–(iii) of Lemma 5.2 hold iff, for some r < 1, for all, none, or some but not all, a > 0,

    Σ_{n≥1} P( T_{r^n} ≤ a r^{nγ} ) < ∞.    (5.19)

The next step is to get bounds for the probability in (5.19). One way is easy. Since Π_T(0+) = ∞, Π_T(x) is strictly positive, and thus m_T(x) is strictly increasing, on a neighbourhood of 0. Recall that we write h(·) for the inverse function to m_T(·).

Lemma 5.4 Let T be a subordinator with canonical measure Π_T satisfying Π_T(0+) = ∞. Then there is an absolute constant K such that, for any c > 0 and γ > 0,

    P( T_t ≤ c t^γ ) ≤ exp( −c t^γ / h(2c t^{γ−1}/K) ),  t > 0.    (5.20)

Proof of Lemma 5.4: We can write

    Φ(λ) = −(1/t) log E e^{−λT_t} = ∫_{(0,1]} (1 − e^{−λx}) Π_T(dx).

Markov's inequality gives, for any λ > 0, c > 0,

    P( T_t ≤ c t^γ ) ≤ e^{λct^γ} E(e^{−λT_t})
                    ≤ exp{ −λt( λ^{−1}Φ(λ) − c t^{γ−1} ) }
                    ≤ exp{ −λt( K m_T(1/λ) − c t^{γ−1} ) }, for some K > 0,

where we have used [1], Prop. 1, p. 74. Now choose λ = 1/h(2c t^{γ−1}/K) and we have (5.20).

The corresponding lower bound is trickier:

Lemma 5.5 Suppose that T is as in Lemma 5.2, and additionally satisfies lim_{t↓0} P( T_t ≤ dt^γ ) = 0 for some d > 0 and γ > 1. Then for any c > 0

    P( T_t ≤ c t^γ ) ≥ (1/4) exp( −c t^γ / h(c t^{γ−1}/4) )  for all small enough t > 0.    (5.21)

Proof of Lemma 5.5: Take γ > 1 and assume lim_{t↓0} P( T_t ≤ dt^γ ) = 0, where d > 0. First we show that

    t^γ / h(t^{γ−1}) → ∞ as t ↓ 0.    (5.22)

To do this we write, for each fixed t > 0, T_t = T_t^{(1)} + T_t^{(2)}, where the distributions of the independent random variables T_t^{(1)} and T_t^{(2)} are specified by

    log E e^{−λT_t^{(1)}} = −t ∫_{(0,h(εt^{γ−1})]} (1 − e^{−λx}) Π_T(dx)  and  log E e^{−λT_t^{(2)}} = −t ∫_{(h(εt^{γ−1}),1]} (1 − e^{−λx}) Π_T(dx),

for a given ε > 0. Observe that

    ET_t^{(1)} = t ∫_{(0,h(εt^{γ−1})]} x Π_T(dx) ≤ t m_T( h(εt^{γ−1}) ) = εt^γ,

so that

    P( T_t > dt^γ ) ≤ P( T_t^{(1)} > dt^γ ) + P( T_t^{(2)} ≠ 0 ) ≤ ET_t^{(1)}/(dt^γ) + 1 − P( T_t^{(2)} = 0 ) ≤ ε/d + 1 − P( T_t^{(2)} = 0 ).

Thus for all sufficiently small t,

    P( T_t^{(2)} = 0 ) ≤ ε/d + P( T_t ≤ dt^γ ) ≤ 2ε/d,

because of our assumption that lim_{t↓0} P( T_t ≤ dt^γ ) = 0. Now

    P( T_t^{(2)} = 0 ) = exp( −t Π_T( h(εt^{γ−1}) ) )  and  t Π_T( h(εt^{γ−1}) ) ≤ ( t / h(εt^{γ−1}) ) m_T( h(εt^{γ−1}) ) = εt^γ / h(εt^{γ−1}),

so we see, taking say ε = d/4, that h(t^{γ−1}) ≤ a t^γ for a constant a > 0, or, equivalently, h(t) ≤ a t^{1+β}, where β = 1/(γ − 1) > 0, for all sufficiently small t. However, m_T(·) is concave, so its inverse function h is convex, so h(t/2) ≤ h(t)/2 ≤ a(1/2)^β (t/2)^{1+β}, or h(t) ≤ a(1/2)^β t^{1+β}, for small t. Iterating this argument gives (5.22).

Now write η = η(t) = h(c t^{γ−1}/4) and define processes (Y_t^{(i)})_{t≥0}, i = 1, 2, 3, such that (T_t)_{t≥0} and (Y_t^{(1)})_{t≥0} are independent, and (Y_t^{(2)})_{t≥0} and (Y_t^{(3)})_{t≥0} are independent, and are such that log E(e^{−λY_t^{(i)}}) = −t ∫_{(0,1]} (1 − e^{−λx}) Π_T^{(i)}(dx), i = 1, 2, 3, where

    Π_T^{(1)}(dx) = Π_T(η) δ_η(dx),
    Π_T^{(2)}(dx) = Π_T(η) δ_η(dx) + 1_{(0,η]} Π_T(dx),
    Π_T^{(3)}(dx) = 1_{(η,1]} Π_T(dx),

and δ_η(dx) is the point mass at η. Then we have T_t + Y_t^{(1)} =_d Y_t^{(2)} + Y_t^{(3)}, and

    P( T_t ≤ c t^γ ) ≥ P( T_t + Y_t^{(1)} ≤ c t^γ ) ≥ P( Y_t^{(3)} = 0 ) P( Y_t^{(2)} ≤ c t^γ ) = e^{−tΠ_T(η)} P( Y_t^{(2)} ≤ c t^γ ).

Since tΠ_T(η) = η^{−1} t η Π_T(η) ≤ η^{−1} t m_T(η) ≤ c t^γ / h(c t^{γ−1}/4), (5.21) will follow when we show that lim inf_{t↓0} P( Y_t^{(2)} ≤ c t^γ ) ≥ 1/4. By construction we have

    EY_t^{(2)} = t ( ∫_0^η x Π_T(dx) + η Π_T(η) ) = t m_T(η) = c t^γ/4,

so if we put Z_t = Y_t^{(2)} − c t^γ/4 and write tσ_t² = EZ_t² we can apply Chebychev to get

    P( Y_t^{(2)} ≤ c t^γ ) ≥ P( Z_t ≤ c t^γ/2 ) ≥ 5/9

when tσ_t² ≤ c²t^{2γ}/9. To deal with the opposite case, tσ_t² > c²t^{2γ}/9, we use the Normal approximation in Lemma 4.3. In the notation of that lemma, m_3 = ∫_{|x|≤η} |x|³ Π_T^{(2)}(dx) and m_2 = σ_t², in the present situation. Since, then, m_3 ≤ η σ_t², and we have η = h(c t^{γ−1}/4) = o(t^γ), as t ↓ 0, by (5.22), we get

    sup_{x∈R} | P( Z_t ≤ x √t σ_t ) − 1 + F(x) | ≤ A η σ_t² / ( √t σ_t³ ) = o(1), as t ↓ 0.

Choosing x = c t^γ/(2√t σ_t) gives P( Z_t ≤ c t^γ/2 ) ≥ 1/4, hence (5.21).

We are now able to establish Lemma 5.2. Again, recall, h(·) is inverse to m_T(·).

Proof of Lemma 5.2: (i) Suppose that K_T(d) < ∞ for some d > 0 and write x_n = h(c r^{n(γ−1)}), where γ > 1 and 0 < r < 1. Note that since h(x)/x is increasing we have x_{n+1} ≤ r^{γ−1} x_n. Also we have m_T(x_n) = R m_T(x_{n+1}) where R = r^{1−γ} > 1. So for y ∈ [x_{n+1}, x_n),

    m_T(y)^{γ/(γ−1)} / y ≤ m_T(x_n)^{γ/(γ−1)} / x_{n+1} = R^{γ/(γ−1)} m_T(x_{n+1})^{γ/(γ−1)} / x_{n+1} = (Rc)^{γ/(γ−1)} r^{(n+1)γ} / h(c r^{(n+1)(γ−1)}).

Thus

    ∫_{x_{n+1}}^{x_n} (dy/y) exp{ −d m_T(y)^{γ/(γ−1)} / y }
      ≥ exp( −d (Rc)^{γ/(γ−1)} r^{(n+1)γ} / h(c r^{(n+1)(γ−1)}) ) log( x_n/x_{n+1} )
      ≥ (log R) exp( −c′ r^{(n+1)γ} / h(2c′ r^{(n+1)(γ−1)}/K) ),

where K is the constant in Lemma 5.4 and we have chosen c = (K/(2d))^{γ−1} R^{−γ} and c′ = Kc/2.

Then K_T(d) < ∞ gives Σ_1^∞ P( T_{r^n} ≤ c′ r^{nγ} ) < ∞, and so lim inf_{t↓0} T_t/t^γ ≥ c′ > 0 a.s., by Lemma 5.3. Thus we see that lim inf_{t↓0} T_t/t^γ = 0 a.s. implies that K_T(d) = ∞ for every d > 0, hence d*_K = ∞.

Conversely, assume that lim inf_{t↓0} T_t/t^γ > 0 a.s. Then by Lemma 5.3, Σ_1^∞ P( T_{r^n} ≤ c r^{nγ} ) < ∞ for some c > 0 and 0 < r < 1. Then P( T_{r^n} ≤ c r^{nγ} ) → 0, Lemma 5.5 applies, and we have

    Σ_1^∞ exp( −c r^{nγ} / h(c r^{n(γ−1)}/4) ) < ∞.

Putting x_n = h(c r^{n(γ−1)}/4) (similar to but not the same x_n as in the previous paragraph), and c′ = 4^{γ/(γ−1)} / c^{1/(γ−1)}, we see that

    Σ_1^∞ exp{ −c′ m_T(x_n)^{γ/(γ−1)} / x_n } < ∞.    (5.23)

We have m_T(x_{n−1}) = R m_T(x_n) where R = r^{1−γ} > 1. Take L > R and let k_n = min( k ≥ 1 : x_{n−1} L^{−k} ≤ x_n ), so that x_{n−1} L^{−k_n} ≤ x_n. Then for any d > 0

    ∫_{x_n}^{x_{n−1}} exp{ −d m_T(y)^{γ/(γ−1)} / y } y^{−1} dy
      ≤ Σ_{i=1}^{k_n} ∫_{x_{n−1}L^{−i}}^{x_{n−1}L^{1−i}} exp( −d m_T(y)^{1/(γ−1)} m_T(y)/y ) y^{−1} dy
      ≤ Σ_{i=1}^{k_n} ∫_{x_{n−1}L^{−i}}^{x_{n−1}L^{1−i}} exp( −d m_T(x_n)^{1/(γ−1)} m_T(x_{n−1}L^{−i}) / (x_{n−1}L^{−i}) ) y^{−1} dy
      ≤ (log L) Σ_{i=1}^∞ exp( −d m_T(x_n)^{1/(γ−1)} L^i m_T(x_n L^{−1}) / x_{n−1} )
      ≤ (log L) Σ_{i=1}^∞ exp( −d m_T(x_n)^{1/(γ−1)} L^{i−1} m_T(x_n) / x_{n−1} )
      = (log L) Σ_{i=1}^∞ exp( −d L^{i−1} R^{−γ/(γ−1)} m_T(x_{n−1})^{γ/(γ−1)} / x_{n−1} ).

Approximate this last sum with an integral of the form ∫_0^∞ a_n^{L^x} dx, where a_n = exp( −c′ m_T(x_{n−1})^{γ/(γ−1)} / x_{n−1} ), with d = c′ L R^{γ/(γ−1)} = 4^{γ/(γ−1)} L r^{−γ} / c^{1/(γ−1)}, to see that it is bounded above by a constant multiple of a_n. It follows from (5.23) that Σ a_n < ∞, hence we get K_T(d) < ∞, and Part (i) follows.

(ii) If K_T(d) < ∞ for all d > 0 then, because c′ → 0 as d → ∞ at the end of the proof of the forward part of Part (i), we have lim inf_{t↓0} T_t/t^γ = ∞ a.s., i.e., lim_{t↓0} T_t/t^γ = ∞ a.s. Conversely, if this holds, then because d → ∞ as c → 0 at the end of the proof of the converse part of Part (i), we have K_T(d) < ∞ for all d > 0. This completes the proof of Lemma 5.2.

Proof of Proposition 5.2: To finish the proof of Proposition 5.2, we need only show that J_Y(λ) = ∞ for all λ > 0 is equivalent to K_T(d) = ∞ for all d > 0, where K_T(d) is evaluated for the first-passage process T of Y, and γ = 1/κ.

We have from (5.15), after integrating by parts,

    ψ_Y(θ) = ∫_0^1 (e^{−θx} − 1 + θx) Π_Y^{(−)}(dx) = θ ∫_0^1 (1 − e^{−θx}) Π_Y^{(−)}(x) dx,

and differentiating (5.15) gives

    ψ_Y′(θ) = ∫_0^1 x (1 − e^{−θx}) Π_Y^{(−)}(dx).

So we see that θ^{−1}ψ_Y(θ) and ψ_Y′(θ) are Laplace exponents of driftless subordinators, and using the estimate in [1], p. 74, twice, we get

    ψ_Y(θ) ≍ θ² W̃_Y(1/θ)  and  ψ_Y′(θ) ≍ θ W_Y(1/θ),

where W̃_Y(x) = ∫_0^x A_Y(y) dy and A_Y(x) := ∫_x^1 Π_Y^{(−)}(y) dy, for x > 0, we recall the definition of W_Y just prior to (5.14), and "≍" means that the ratio of the quantities on each side of the symbol is bounded above and below by finite positive constants for all values of the argument. However, putting U_Y(x) = ∫_0^x 2z Π_Y^{(−)}(z) dz, for x > 0, we see that

    W_Y(x) = ∫_0^x ∫_y^1 z Π_Y^{(−)}(dz) dy = (1/2) U_Y(x) + W̃_Y(x),

and

    W̃_Y(x) = (1/2) U_Y(x) + x A_Y(x);

thus

    W̃_Y(x) ≤ W_Y(x) = U_Y(x) + x A_Y(x) ≤ 2 W̃_Y(x).

Hence we have

    θ² W_Y(1/θ) ≍ ψ_Y(θ) ≍ θ ψ_Y′(θ).    (5.24)

We deduce that J_Y(λ) = ∞ for all λ > 0 is equivalent to J̃_Y(λ) = ∞ for all λ > 0, where

    J̃_Y(λ) = ∫_0^1 exp{ −λ y^{−1/(1−κ)} ψ_Y(1/y)^{−κ/(1−κ)} } dy/y = ∫_1^∞ exp{ −λ y^{1/(1−κ)} ψ_Y(y)^{−κ/(1−κ)} } dy/y.

But we know that Φ, the exponent of the first-passage process T, is the inverse of ψ_Y, so making the obvious change of variable gives

    J̃_Y(λ) = ∫_{ψ_Y(1)}^∞ exp{ −λ Φ(z)^{1/(1−κ)} z^{−κ/(1−κ)} } Φ′(z) dz / Φ(z).

From (5.24) we deduce that zΦ′(z)/Φ(z) ≍ 1 for all z > 0, so J_Y(λ) = ∞ for all λ > 0 is equivalent to Ĵ_Y(λ) = ∞ for all λ > 0, where

    Ĵ_Y(λ) = ∫_1^∞ exp{ −λ Φ(z)^{1/(1−κ)} z^{−κ/(1−κ)} } dz/z = ∫_0^1 exp{ −λ Φ(z^{−1})^{1/(1−κ)} z^{κ/(1−κ)} } dz/z.

Since Φ(z^{−1}) is bounded above and below by multiples of z^{−1} m_T(z) ([1], p. 74), our claim is established.

We are now able to complete the proof of Theorem 3.1:

Proof of Theorem 3.1: The implications (i)⇒(3.1) and (iii)⇒(3.3) stem from Proposition 5.1 and Theorem 2.1, respectively. So we can focus on the situation when

    ∫_0^1 Π^{(+)}(x^κ) dx < ∞ = ∫_0^1 Π^{(−)}(x^κ) dx.

Recall the decomposition (5.8), where X̂_t has canonical measure Π^{(+)}(dx). Thus from Theorem 2.1, X̂_t is o(t^κ) a.s. as t ↓ 0. Further, X̃ is spectrally negative with mean zero. When (ii), (iv) or (v) holds, X ∉ bv (see Remark 3 (ii)), so X_t/t takes arbitrarily large positive and negative values a.s. as t ↓ 0, and lim inf_{t↓0} X_t/t^κ ≤ 0 ≤ lim sup_{t↓0} X_t/t^κ a.s. The implications (ii)⇒(3.1), (iv)⇒(3.3) and (v)⇒(3.4) now follow from Proposition 5.2.

Remark 5 Concerning Remark 3 (iii): perusal of the proof of Theorem 2.2 shows that we can add to X a compound Poisson process with masses f_±(t), say, at ±√t, provided Σ_{n≥1} √t_n f_±(t_n) converges, and the proof remains valid. The effect of this is essentially only to change the kind of truncation that is being applied, without changing the value of the limsup, and in the final result this shows up only in an alteration to V(x). Choosing f(t) appropriately, the new V(·) becomes U(·) or W(·), which are thus equivalent in the context of Theorem 2.2. Note that we allow κ = 1/2 in Proposition 5.2. We will omit further details, but the above shows there is no contradiction with Theorem 2.2.

5.2 Proof of Theorem 3.2

Theorem 3.2 follows by taking a(x) = x^κ, κ > 1, in Propositions 5.3 and 5.4 below, which are a kind of generalisation of Theorem 9 in Ch. III of [1]. Recall the definition of A_−(·) in (3.5).

Proposition 5.3 Assume X ∈ bv and δ = 0. Suppose a(x) is a positive deterministic measurable function on [0, ∞) with a(x)/x nondecreasing and a(0) = 0. Let a^←(x) be its inverse function. Suppose

    ∫_0^1 Π^{(+)}(dx) / ( 1/a^←(x) + A_−(x)/x ) = ∞.    (5.25)

Then we have

    lim sup_{t↓0} X_t/a(t) = ∞  a.s.    (5.26)

Proof of Proposition 5.3: Assume X and a as specified. Then the function a(x) is strictly increasing, so a^←(x) is well defined, positive, continuous, and

nondecreasing on [0, ∞), with a^←(0) = 0 and a^←(∞) = ∞. Note that the function

    1/a^←(x) + A_−(x)/x = 1/a^←(x) + ∫_0^1 Π^{(−)}(xy) dy

is continuous and nonincreasing, tends to ∞ as x → 0, and to 0 as x → ∞. Choose α ∈ (0, 1/2) arbitrarily small so that

    2α ( 2(1/2 − α)^{−2} + 1 ) ≤ 1,

and define, for t > 0,

    b(t) = inf{ x > 0 : 1/a^←(x) + A_−(x)/x ≤ α/t }.

Then 0 < b(t) < ∞ for t > 0, b(t) is strictly increasing, lim_{t↓0} b(t) = 0, and

    t/a^←(b(t)) + t A_−(b(t))/b(t) = α.    (5.27)

Also b(t) ≥ a(t/α), and the inverse function b^←(x) exists and satisfies

    b^←(x) = α / ( 1/a^←(x) + A_−(x)/x ).    (5.28)

Thus, by (5.25),

    ∫_0^1 b^←(x) Π^{(+)}(dx) = ∞ = ∫_0^1 Π^{(+)}(b(x)) dx.    (5.29)

Set

    U_−(x) := 2 ∫_0^x y Π^{(−)}(y) dy.

Then we have the upper-bounds

    t Π^{(−)}(b(t)) ≤ t A_−(b(t))/b(t) ≤ α,  and  t U_−(b(t))/b²(t) ≤ 2t A_−(b(t))/b(t) ≤ 2α.

Since X ∈ bv and δ = 0 we can express X in terms of its positive and negative jumps, Δ_s^{(+)} = max(0, Δ_s) and Δ_s^{(−)} = Δ_s^{(+)} − Δ_s:

    X_t = Σ_{0<s≤t} Δ_s^{(+)} − Σ_{0<s≤t} Δ_s^{(−)} = X_t^{(+)} − X_t^{(−)}, say.    (5.30)

Recall that Δ_s^{(±)} ≤ 1 a.s. We then have

    P( X_t^{(−)} > b(t)/2 ) ≤ P( Σ_{0<s≤t} (Δ_s^{(−)} ∧ b(t)) > b(t)/2 ) + P( Δ_s^{(−)} > b(t) for some s ≤ t )
                           ≤ P( Σ_{0<s≤t} (Δ_s^{(−)} ∧ b(t)) − t A_−(b(t)) > (1/2 − α) b(t) ) + t Π^{(−)}(b(t)).

Observe that the random variable Σ_{0<s≤t} (Δ_s^{(−)} ∧ b(t)) − t A_−(b(t)) is centered with variance t U_−(b(t)). Hence

    P( Σ_{0<s≤t} (Δ_s^{(−)} ∧ b(t)) − t A_−(b(t)) > (1/2 − α) b(t) ) ≤ t U_−(b(t)) / ( (1/2 − α)² b²(t) ),

so that, by the choice of α, we finally arrive at

    P( X_t^{(−)} > b(t)/2 ) ≤ ( 2/(1/2 − α)² + 1 ) α ≤ 1/2.    (5.31)

By (5.29), P( X_t^{(+)} > b(t) i.o. ) ≥ P( Δ_t^{(+)} > b(t) i.o. ) = 1. Choose t_n ↓ 0 such that P( X_{t_n}^{(+)} > b(t_n) i.o. ) = 1. Since the subordinators X^{(+)} and X^{(−)} are independent, we have

    P( X_{t_n} > b(t_n)/2 i.o. ) ≥ lim_{m→∞} P( X_{t_n}^{(+)} > b(t_n), X_{t_n}^{(−)} ≤ b(t_n)/2 for some n > m )
                                ≥ (1/2) P( X_{t_n}^{(+)} > b(t_n) i.o. )  (by (5.31))
                                = 1/2.

In the last inequality we used the Feller–Chung lemma ([4], p. 69). Thus lim sup_{t↓0} X_t/b(t) ≥ 1/2, a.s. Now since a(x)/x is nondecreasing, we have, for α < 1, b(t)/a(t) ≥ a(t/α)/a(t) ≥ 1/α, so lim sup_{t↓0} X_t/a(t) ≥ 1/α a.s. Letting α ↓ 0 gives lim sup_{t↓0} X_t/a(t) = ∞ a.s., as claimed in (5.26).

We now state a strong version of the converse of Proposition 5.3 which completes the proof of Theorem 3.2.

Proposition 5.4 The notation and assumptions are the same as in Proposition 5.3. If

    ∫_0^1 Π^{(+)}(dx) / ( 1/a^←(x) + A_−(x)/x ) < ∞,    (5.32)

then we have

    lim sup_{t↓0} X_t/a(t) ≤ 0  a.s.    (5.33)

We will establish Proposition 5.4 using a coupling technique similar to that in [2]. For this purpose, we first need a technical lemma, which is intuitively obvious once the notation has been assimilated. Let Y be a Lévy process and ((t_i, x_i), i ∈ I) a countable family in (0, ∞) × (0, ∞) such that the t_i's are pairwise distinct. Let (Y^i, i ∈ I) be a family of i.i.d. copies of Y, and set for each i ∈ I

    ρ_i := inf{ s ≥ 0 : Y_s^i ≥ x_i } ∧ a^←(x_i),

where a(·) is as in the statement of Proposition 5.3. More generally, we could as well take for ρ_i any stopping time in the natural filtration of Y^i, depending possibly on the family ((t_i, x_i), i ∈ I).

Now assume that

    T_t := Σ_{t_i≤t} ρ_i < ∞ for all t ≥ 0,  and  Σ_{i∈I} ρ_i = ∞, a.s.    (5.34)

Then T = (T_t, t ≥ 0) is a right-continuous non-decreasing process and (5.34) enables us to construct a process Y′ by pasting together the paths (Y_s^i, 0 ≤ s ≤ ρ_i) as follows. If t = T_u for some (unique) u ≥ 0, then we set

    Y_t′ = Σ_{t_i≤u} Y^i(ρ_i).

Otherwise, there exists a unique u > 0 such that T_{u−} ≤ t < T_u, and thus a unique index j ∈ I for which T_u − T_{u−} = ρ_j, and we set

    Y_t′ = Σ_{t_i<u} Y^i(ρ_i) + Y^j(t − T_{u−}).

Lemma 5.6 Under the assumptions above, Y′ is a version of Y; in particular its law does not depend on the family ((t_i, x_i), i ∈ I).

Proof of Lemma 5.6: The statement follows readily from the strong Markov property in the case when the family (t_i, i ∈ I) is discrete in [0, ∞). The general case is deduced by approximation.

We will apply Lemma 5.6 in the following framework. Consider a subordinator X^{(−)} with no drift and Lévy measure Π^{(−)}; X^{(−)} will play the role of the Lévy process Y above. Let also X^{(+)} be an independent subordinator with no drift and Lévy measure Π^{(+)}. We write ((t_i, x_i), i ∈ I) for the family of the times and sizes of the jumps of X^{(+)}. By the Lévy–Itô decomposition, ((t_i, x_i), i ∈ I) is the family of the atoms of a Poisson random measure on R_+ × R_+ with intensity dt ⊗ Π^{(+)}(dx).

Next, mark each jump of X^{(+)}, say (t_i, x_i), using an independent copy X^{(−,i)} of X^{(−)}. In other words, ((t_i, x_i, X^{(−,i)}), i ∈ I) is the family of atoms of a Poisson random measure on R_+ × R_+ × D with intensity dt ⊗ Π^{(+)}(dx) ⊗ P^{(−)}, where D stands for the space of càdlàg paths on [0, ∞) and P^{(−)} for the law of X^{(−)}. Finally, define for every i ∈ I,

    ρ_i := inf{ s ≥ 0 : X_s^{(−,i)} ≥ x_i } ∧ a^←(x_i).

Lemma 5.7 In the notation above, the family ((t_i, ρ_i), i ∈ I) fulfills (5.34). Further, the process

    T_t := Σ_{t_i≤t} ρ_i,  t ≥ 0,

is a subordinator with no drift.

Proof of Lemma 5.7: Plainly,

    Σ_{i∈I} δ_{(t_i, ρ_i)}

is a Poisson random measure on R_+ × R_+ with intensity dt ⊗ µ(dy), where

    µ(dy) := ∫_{(0,∞)} Π^{(+)}(dx) P^{(−)}( τ_x ∧ a^←(x) ∈ dy )

and τ_x denotes the first-passage time of X^{(−)} in [x, ∞). So it suffices to check that ∫_{(0,∞)} (1 ∧ y) µ(dy) < ∞.

In this direction, recall (e.g. Proposition III.1 in [1]) that there is some absolute constant c such that

    E^{(−)}(τ_x) ≤ c x / A_−(x),  ∀ x > 0.

As a consequence, we have

    ∫_{(0,∞)} y µ(dy) = ∫_{(0,∞)} Π^{(+)}(dx) E^{(−)}( τ_x ∧ a^←(x) )
                      ≤ ∫_{(0,∞)} Π^{(+)}(dx) ( E^{(−)}(τ_x) ∧ a^←(x) )
                      ≤ c ∫_{(0,∞)} Π^{(+)}(dx) ( x/A_−(x) ∧ a^←(x) ).

Recall that we assume that Π^{(+)} has support in [0, 1]. It is readily checked that convergence of the integral in (5.32) is equivalent to

    ∫_{(0,∞)} Π^{(+)}(dx) ( x/A_−(x) ∧ a^←(x) ) < ∞.

Our claim is established.

We can thus construct a process X′, as in Lemma 5.6, by pasting together the paths (X_s^{(−,i)}, 0 ≤ s ≤ ρ_i). This enables us to complete the proof of Proposition 5.4.

Proof of Proposition 5.4: An application of Lemma 5.6 shows that X′ is a subordinator which is independent of X^{(+)} and has the same law as X^{(−)}. As a consequence, we may suppose that the Lévy process X is given in the form X = X^{(+)} − X′.

Set Y_t := X_t′ + a(t). For every jump (t_i, x_i) of X^{(+)}, we have by construction

    Y(T_{t_i}) − Y(T_{t_i−}) = X^{(−,i)}(ρ_i) + a(T_{t_i−} + ρ_i) − a(T_{t_i−})
                            ≥ X^{(−,i)}(ρ_i) + a(ρ_i)  (as a(x)/x increases)
                            ≥ x_i.

By summation (recall that X^{(+)} has no drift), we get that Y(T_t) ≥ X_t^{(+)} for all t ≥ 0.

As T = (T_t, t ≥ 0) is a subordinator with no drift, we know from the result of Shtatland (1965) that T_t = o(t) as t → 0, a.s., thus with probability one, we have for every ε > 0

    X_t^{(+)} ≤ X_{εt}′ + a(εt),  ∀ t ≥ 0 sufficiently small.

Since a(x)/x increases, we deduce that for t sufficiently small

    X_t/a(t) ≤ ( X_t^{(+)} − X_{εt}′ ) / ( a(εt)/ε ) ≤ ε,

which completes the proof.

5.3 Proof of Theorem 3.3

Suppose (3.8) holds, so that X_t > 0 for all t ≤ some (random) t_0 > 0. Thus X is irregular for (−∞, 0) and (3.10) follows from [2]. But [2] has that (3.10) implies Σ_{0<s≤t} Δ_s^{(−)} = o( Σ_{0<s≤t} Δ_s^{(+)} ), a.s., as t ↓ 0, so, for arbitrary ε > 0, X_t ≥ (1 − ε) Σ_{0<s≤t} Δ_s^{(+)} := (1 − ε) X_t^{(+)}, a.s., when t ≤ some (random) t_0(ε) > 0. Now X_t^{(+)} is a subordinator with zero drift. Apply Lemma 5.2 with γ = κ to get (3.9).

Conversely, (3.10) implies Σ_{0<s≤t} Δ_s^{(−)} = o( Σ_{0<s≤t} Δ_s^{(+)} ) a.s., and (3.9) and Lemma 5.2 imply lim_{t↓0} Σ_{0<s≤t} Δ_s^{(+)} / t^κ = ∞ a.s., hence (3.8).

Acknowledgements. We are grateful to the participants in the Kiola Research Workshop, February 2005, where this work began, for stimulating input. The second author also thanks the Leverhulme Trust for their financial support, and the second and third authors thank the University of Western Australia for support and provision of facilities in the summer of 2006.

References

[1] Bertoin, J. (1996) Lévy Processes. Cambridge University Press, Cambridge.

[2] Bertoin, J. (1997) Regularity of the half-line for Lévy processes. Bull. Sci. Math. 121, 345–354.

[3] Blumenthal, R.M. and Getoor, R.K. (1961) Sample functions of stochastic processes with stationary independent increments. J. Math. Mech. 10, 492–516.

[4] Chow, Y.S. and Teicher, H. (1988) Probability Theory: Independence, Interchangeability, Martingales. 2nd Ed., Springer-Verlag.

[5] Doney, R. A. and Maller, R. A. (2002) Stability and attraction to normality for Lévy processes at zero and infinity. J. Theoretical Probab. 15, 751–792.

[6] Doney, R. A. and Maller, R. A. (2002) Stability of the overshoot for Lévy processes. Ann. Probab. 30, 188–212.

[7] Doney, R. A. and Maller, R. A. (2005) Passage times of random walks and Lévy processes across power law boundaries. Prob. Theor. Rel. Fields 133, 57–70.

[8] Erickson, K.B. and Maller, R.A. (2006) Finiteness of integrals of functions of Lévy processes, submitted.

[9] Gihman, I.I. and Skorohod, A.V. (1975) Theory of Stochastic Processes, Springer.

[10] Khintchine, A. Ya. (1939) Sur la croissance locale des processus stochastiques homogènes à accroissements indépendants. Izv. Akad. Nauk SSSR 3, 487–508.

[11] Petrov, V.V. (1975) Sums of Independent Random Variables. Springer, Berlin Heidelberg New York.

[12] Protter, Ph. (1990) Stochastic Integration and Differential Equations. A New Approach. Springer, Berlin.

[13] Pruitt, W.E. (1981) The growth of random walks and Lévy processes. Ann. Probab. 9, 948–956.

[14] Rogozin, B.A. (1968) Local behaviour of processes with independent increments. Theor. Probab. Appl. 13, 482–486.
