
www.imstat.org/aihp — 2012, Vol. 48, No. 3, 609–630
DOI: 10.1214/11-AIHP428
© Association des Publications de l'Institut Henri Poincaré, 2012

Stationary distributions for jump processes with memory

K. Burdzy^(a,1), T. Kulczycki^(b,c,2) and R. L. Schilling^(d,3)

^a Department of Mathematics, Box 354350, University of Washington, Seattle, WA 98195, USA. E-mail: burdzy@math.washington.edu
^b Institute of Mathematics, Polish Academy of Sciences, ul. Kopernika 18, 51-617 Wrocław, Poland. E-mail: t.kulczycki@impan.pl
^c Institute of Mathematics and Computer Science, Wrocław University of Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland
^d Institut für Mathematische Stochastik, TU Dresden, D-01062 Dresden, Germany. E-mail: rene.schilling@tu-dresden.de

Received 13 September 2010; revised 1 April 2011; accepted 7 April 2011

Abstract. We analyze a jump process Z with a jump measure determined by a "memory" process S. The state space of (Z, S) is the Cartesian product of the unit circle and the real line. We prove that the stationary distribution of (Z, S) is the product of the uniform probability measure and a Gaussian distribution.

Résumé. We propose to study a jump process Z with a jump measure determined by a process S representing a "memory". The state space of (Z, S) is the Cartesian product of the unit circle and the real line. We prove that the stationary distribution of (Z, S) is the product measure of a uniform law and a Gaussian law.

MSC: Primary 60J35; secondary 60H10; 60G51; 60J75; 60J55

Keywords: Stationary distribution; Stable Lévy process; Process with memory

1. Introduction

We are going to find stationary distributions for processes with jumps influenced by "memory." This paper is a companion to [3]. The introduction to that paper contains a review of various sources of inspiration for this project, related models and results.

We will analyze a pair of real-valued processes (Y, S) such that S is a "memory" in the sense that dS_t = W(Y_t) dt, where W is a C³ function. The process Y is a jump process "mostly" driven by a stable process, but the process S affects the rate of jumps of Y. We refer the reader to Section 2 for a formal presentation of this model, as it is too long for the introduction. The present article illustrates the advantages of the semi-discrete models introduced in [5], since the form of the stationary distribution for (Y, S) was conjectured in [5], Example 3.8. We would not find it easy to conjecture the stationary distribution for this process in a direct way.

The main result of this paper, Theorem 3.7, is concerned with the stationary distribution of a transformation of (Y, S). In order to obtain non-trivial results, we "wrap" Y on the unit circle, so that the state space of the transformed process is compact. In other words, we consider (Z_t, S_t) = (e^{iY_t}, S_t). The stationary distribution of (Z_t, S_t) is the product of the uniform distribution on the circle and the normal distribution.

The Gaussian distribution of the "memory" process appeared in the models discussed in [2,3]. In each of those papers, memory processes similar to S effectively represented "inert drift." A heuristic argument given in the introduction to [3] provides a justification for the Gaussian distribution, using the concepts of kinetic energy associated to drift and Gibbs measure. The conceptual novelty of the present paper is that the Gaussian distribution of S in the stationary regime cannot be explained by kinetic energy, because S affects the jump distribution and not the drift of Z.

^1 Supported in part by NSF Grant DMS-09-06743 and by grant N N201 397137, MNiSW, Poland.
^2 Supported in part by grant N N201 373136, MNiSW, Poland.
^3 Supported in part by DFG grant Schi 419/5-1.

Table 1
Frequently used symbols

a∨b, a∧b            max(a, b), min(a, b)
a_+, a_−            max(a, 0), min(a, 0)
|x|_1               Σ_{j=1}^m |x_j|, where x = (x_1, ..., x_m) ∈ R^m
e_k                 the kth unit base vector in the usual orthonormal basis for R^n
A_α                 α 2^{α−1} Γ((1+α)/2) / (π^{1/2} Γ(1−α/2)), α ∈ (0, 2)
D^α                 ∂^{|α|} / (∂x_1^{α_1} ··· ∂x_d^{α_d}), α = (α_1, ..., α_d) ∈ N_0^d
C^k                 k-times continuously differentiable functions
C_b^k, C_c^k, C_0^k functions in C^k which, together with all their derivatives up to order k, are "bounded," are "compactly supported," and "vanish at infinity," respectively
𝒞(R²)               all bounded and uniformly continuous functions f: R² → R such that supp(f) ⊂ R × [−N, N] for some N > 0
𝒞²(R²)              𝒞(R²) ∩ C_b²(R²)
S                   {z ∈ C: |z| = 1}, the unit circle in C

The product form of the stationary distribution for a two-component Markov process is obvious if the two components are independent Markov processes. The product form is far from obvious if the components are not independent, but it does appear in a number of contexts, from queuing theory to mathematical physics. The paper [5] was an attempt to understand this phenomenon for a class of models. The unexpected appearance of the Gaussian distribution in some stationary measures was noticed in [4] before it was explored more deeply in [2,5].

We turn to the technical aspects of the paper. The main effort is directed at determining the domain and a core of the generator of the process. A part of the argument is based on an estimate of the smoothness of the stochastic flow of solutions to (2.3).

1.1. Notation

Since the paper uses a large amount of notation, we collect some of the most frequently used symbols in Table 1 for easy reference. Constants c without sub- or superscript are generic and may change their value from line to line.

2. The construction of the process and its generator

Let S = {z ∈ C: |z| = 1} be the unit circle in C. Consider a C³ function V: S → R such that ∫_S V(z) dz = 0 and set W(x) = V(e^{ix}), x ∈ R. Assume that V is not identically constant. In this paper we will be interested in the Markov process (Y_t, S_t) with state space R² and generator G_{(Y,S)} of the following form

G_{(Y,S)} f(y, s) = −(−Δ_y)^{α/2} f(y, s) + Rf(y, s) + W(y) f_s(y, s),   (2.1)

with a domain that will be specified later. Here, (y, s) ∈ R², α ∈ (0, 2) and

−(−Δ_y)^{α/2} f(y, s) = A_α lim_{ε→0+} ∫_{|y−x|>ε} (f(x, s) − f(y, s)) / |y−x|^{1+α} dx,

Rf(y, s) = ∫_{−π+y}^{π+y} (f(x, s) − f(y, s)) ((W(y) − W(x)) s)_+ dx.   (2.2)

Since −(−Δ)^{α/2}, α ∈ (0, 2), is the generator of the symmetric α-stable process on R, we may think of the process Y_t as a perturbed symmetric α-stable process and of S_t as the memory which changes the jumping measure of the process Y_t.
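For orientation, consider the illustrative choice V(z) = Re z (ours, not one made in the paper), so that W(y) = cos y. Then the perturbation (2.2) reads

Rf(y, s) = ∫_{−π+y}^{π+y} (f(x, s) − f(y, s)) ((cos y − cos x) s)_+ dx,

and for s > 0 the extra jumps can only land at points x with cos x < cos y, i.e., where W is smaller. Since dS_t = W(Y_t) dt, a large positive memory promotes jumps that slow down its own further growth, a negative feedback which is heuristically in line with a Gaussian stationary law for S.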

(3)

The definition of (Y, S) given above is informal. Below we will construct this process in a direct way and we will show that it has the generator (2.1); see Proposition 2.4. Our construction is based on the so-called construction of Meyer; see, e.g., [7,8] or [1], Section 3.1.

For any (y, s) ∈ R² let

g(y, s, x) = ((W(y) − W(y + x)) s)_+ 1_{(−π,π)}(x),   x ∈ R,

and

‖g(y, s, ·)‖_1 = ∫_{−π}^{π} ((W(y) − W(y + x)) s)_+ dx.

Let g̃(y, s, x) := g(y, s, x)/‖g(y, s, ·)‖_1 if ‖g(y, s, ·)‖_1 ≠ 0; we let g̃(y, s, ·) be the delta function at 0 when ‖g(y, s, ·)‖_1 = 0. If ‖g(y, s, ·)‖_1 ≠ 0, we let F_{y,s}(·) denote the cumulative distribution function of a random variable with density g̃(y, s, ·). If ‖g(y, s, ·)‖_1 = 0, we let F_{y,s}(·) denote the cumulative distribution function of a random variable that is identically equal to 0. We have

F_{y,s}^{−1}(v) = inf { x ∈ R: ∫_{−∞}^{x} g(y, s, z)/‖g(y, s, ·)‖_1 dz ≥ v },

so for any v, the function (y, s) ↦ F_{y,s}^{−1}(v) is measurable. If U is a uniformly distributed random variable on (0, 1), then F_{y,s}^{−1}(U) has the density g̃(y, s, ·). Let (U_n)_{n∈N} be countably many independent copies of U and set η_n(y, s) = F_{y,s}^{−1}(U_n).
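To make the inverse-transform step concrete, here is a minimal numerical sketch (our own illustration, not code from the paper): it draws samples of η(y, s) = F_{y,s}^{−1}(U) on a grid, under the arbitrary choice W(y) = cos y; the function name sample_eta, the grid size, and the Riemann sum standing in for the integral are all our implementation conveniences.

import numpy as np

rng = np.random.default_rng(0)
W = np.cos                       # illustrative choice W(y) = cos y (our assumption)

def sample_eta(y, s, n, grid=np.linspace(-np.pi, np.pi, 2001)):
    """Draw n copies of eta(y, s) = F_{y,s}^{-1}(U) with U ~ Uniform(0, 1)."""
    vals = np.maximum((W(y) - W(y + grid)) * s, 0.0)   # g(y, s, x) on the grid
    if vals.sum() == 0.0:          # degenerate case ||g(y, s, .)||_1 = 0: eta = 0
        return np.zeros(n)
    cdf = np.cumsum(vals) / vals.sum()                 # discrete version of F_{y,s}
    return grid[np.searchsorted(cdf, rng.random(n))]   # inverse-transform sampling

draws = sample_eta(y=0.3, s=1.0, n=10_000)
print(draws.min(), draws.max())    # all jumps lie in (-pi, pi), as they must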

Let X(t) be a symmetric α-stable process on R, α ∈ (0, 2), starting from 0, and let N(t) be a Poisson process with intensity 1. We assume that (U_n)_{n∈N}, X(·) and N(·) are independent.

Let 0 < σ_1 < σ_2 < ··· be the times of the jumps of N(t). Consider any y, s ∈ R and for t ≥ 0 let

Y_t^1 = y + X_t,
S_t^1 = s + ∫_0^t W(Y_r^1) dr,
σ^1(t) = ∫_0^t ‖g(Y_r^1, S_r^1, ·)‖_1 dr,
τ_1 = inf { t ≥ 0: σ^1(t) = σ_1 }   (inf ∅ = ∞).

Now we proceed recursively. If Y_t^j, S_t^j, σ^j(t) are well defined on [0, τ_j) and τ_j < ∞, then we define for t ≥ τ_j,

Y_t^{j+1} = y + X_t + Σ_{n=1}^{j} η_n(Y^n(τ_n), S^n(τ_n)),
S_t^{j+1} = S^j(τ_j) + ∫_{τ_j}^t W(Y_r^{j+1}) dr,
σ^{j+1}(t) = σ^j(τ_j) + ∫_{τ_j}^t ‖g(Y_r^{j+1}, S_r^{j+1}, ·)‖_1 dr,
τ_{j+1} = inf { t ≥ τ_j: σ^{j+1}(t) = σ_{j+1} }.

Let τ_0 = 0 and (Y_t, S_t) = (Y_t^j, S_t^j) for τ_{j−1} ≤ t < τ_j, j ≥ 1. It is easy to see that (Y_t, S_t) is defined for all t ≥ 0, a.s. If we put σ(t) = ∫_0^t ‖g(Y_r, S_r, ·)‖_1 dr, then we can represent (Y_t, S_t) by the following closed-form expression,

Y_t = y + X_t + Σ_{n=1}^{N(σ(t))} η_n(Y(τ_n), S(τ_n)),   S_t = s + ∫_0^t W(Y_r) dr.   (2.3)


We define the semigroup {T_t}_{t≥0} of the process (Y_t, S_t) for f ∈ C_b(R²) by

T_t f(y, s) = E^{(y,s)} f(Y_t, S_t),   (y, s) ∈ R².

By G_{(Y,S)} we denote the generator of {T_t}_{t≥0}, and its domain by D(G_{(Y,S)}). We will show in Proposition 2.4 that 𝒞²(R²) ⊂ D(G_{(Y,S)}) and that G_{(Y,S)} f is given by (2.1) for f ∈ 𝒞²(R²); see Section 1.1 for the definition of 𝒞²(R²).

Our construction of (Y_t, S_t) is a deterministic map

((U_n)_{n∈N}, (N(t))_{t≥0}, (X(t))_{t≥0}) ↦ ((Y(t))_{t≥0}, (S(t))_{t≥0}).

This easily implies the strong Markov property of (Y, S). We will verify that (Z_t, S_t) := (e^{iY_t}, S_t) is also a strong Markov process. We first show that the transition function of (Y_t, S_t) is periodic.

Lemma 2.1. Let (Y_t, S_t) be the Markov process defined by (2.3). Then

P^{(y+2π,s)}(Y_t ∈ A + 2π, S_t ∈ B) = P^{(y,s)}(Y_t ∈ A, S_t ∈ B)

for all (y, s) ∈ R² and all Borel sets A, B ⊂ R.

Proof. Let X_t be a symmetric α-stable process starting from 0, α ∈ (0, 2), and let N(t) be a Poisson process with intensity 1. By (Y_t^y, S_t^s) we denote the process given by (2.3) with initial value (Y_0^y, S_0^s) = (y, s). The process (Ỹ_t, S̃_t) := (Y_t^{y+2π}, S_t^s) has the following representation

Ỹ_t = y + 2π + X_t + Σ_{n=1}^{N(σ̃(t))} η_n(Ỹ(τ̃_n), S̃(τ̃_n)),   S̃_t = s + ∫_0^t W(Ỹ_r) dr,

where σ̃(t) = ∫_0^t ‖g(Ỹ_r, S̃_r, ·)‖_1 dr and τ̃_k = inf_{t≥0} { σ̃(t) = σ_k }.

Note that for all x ∈ R,

g(y − 2π, s, x) = g(y, s, x)   and, therefore,   ‖g(y − 2π, s, ·)‖_1 = ‖g(y, s, ·)‖_1.

It follows that η_n(y − 2π, s) has the same distribution as η_n(y, s). Since the function W is periodic with period 2π, we have W(Ỹ_r) = W(Ỹ_r − 2π). Moreover, ‖g(Ỹ_r, S̃_r, ·)‖_1 = ‖g(Ỹ_r − 2π, S̃_r, ·)‖_1, and η_n(Ỹ(τ̃_n), S̃(τ̃_n)) has the same distribution as η_n(Ỹ(τ̃_n) − 2π, S̃(τ̃_n)). This means that we can rewrite the representation of (Y_t^{y+2π}, S_t^s) in the following way:

Ỹ_t = y + 2π + X_t + Σ_{n=1}^{N(σ̃(t))} η_n(Ỹ(τ̃_n) − 2π, S̃(τ̃_n)),   S̃_t = s + ∫_0^t W(Ỹ_r − 2π) dr,

where σ̃(t) = ∫_0^t ‖g(Ỹ_r − 2π, S̃_r, ·)‖_1 dr and τ̃_k = inf_{t≥0} { σ̃(t) = σ_k }.

By subtracting 2π from both sides of the first equation we get

Ỹ_t − 2π = y + X_t + Σ_{n=1}^{N(σ̃(t))} η_n(Ỹ(τ̃_n) − 2π, S̃(τ̃_n)),   S̃_t = s + ∫_0^t W(Ỹ_r − 2π) dr,

with σ̃(t) and τ̃_k as before. Substituting Ŷ_t := Ỹ_t − 2π, we see that this is the defining system of equations for the process (Y_t^y, S_t^s). Therefore, the processes (Y_t^y, S_t^s) and (Y_t^{y+2π} − 2π, S_t^s) have the same law. □

Table 2
Notation for processes and their generators

Process                                               Semigroup       Generator and domain
(Y_t, S_t)                                            T_t, t ≥ 0      (G_{(Y,S)}, D(G_{(Y,S)}))
(Z_t, S_t) = (e^{iY_t}, S_t)                          T_t^S, t ≥ 0    (G, D(G))
(Ŷ_t, Ŝ_t) = (Ŷ_0 + X_t, Ŝ_0 + ∫_0^t W(Ŷ_r) dr)      T̂_t, t ≥ 0      (G_{(Ŷ,Ŝ)}, D(G_{(Ŷ,Ŝ)}))
(Ẑ_t, Ŝ_t) = (e^{iŶ_t}, Ŝ_t)                          T̂_t^S, t ≥ 0    (Ĝ, D(Ĝ))

We can now argue exactly as in [3], Corollary 2.3, to see that (Z_t, S_t) = (e^{iY_t}, S_t) is indeed a strong Markov process. We define the transition semigroup of (Z_t, S_t) for f ∈ C_0(S × R) by

T_t^S f(z, s) = E^{(z,s)} f(Z_t, S_t),   (z, s) ∈ S × R.   (2.4)

The generator of {T_t^S}_{t≥0} and its domain will be denoted by G and D(G).

In the sequel we will need the following auxiliary processes:

Ŷ_t = Ŷ_0 + X_t,   Ŝ_t = Ŝ_0 + ∫_0^t W(Ŷ_r) dr,   Ẑ_t = e^{iŶ_t},

where X_t is a symmetric α-stable Lévy process on R, α ∈ (0, 2), starting from 0. We will use the notation from Table 2.

We will now identify the generators of the processes (Y_t, S_t) and (Z_t, S_t) and link them with the generators of the processes (Ŷ_t, Ŝ_t) and (Ẑ_t, Ŝ_t).

Proposition 2.2. Let (Y_t, S_t) be the process defined by (2.3) and let f ∈ 𝒞(R²). Then

lim_{t→0+} (T_t f − f)/t exists   ⟺   lim_{t→0+} (T̂_t f − f)/t exists,

in the norm ‖·‖_∞. If one, hence both, limits exist, then

lim_{t→0+} (T_t f − f)/t = lim_{t→0+} (T̂_t f − f)/t + Rf,   (2.5)

where Rf is given by (2.2).

Corollary 2.3. We have

f ∈ D(G) ∩ C_c(S × R)   ⟺   f ∈ D(Ĝ) ∩ C_c(S × R).

If f ∈ D(G) ∩ C_c(S × R), then Gf = Ĝf + R^S f, where

R^S f(z, s) = ∫_S (f(w, s) − f(z, s)) ((V(z) − V(w)) s)_+ dw.


Proposition 2.4. Let (Y_t, S_t) be the process defined by (2.3). Then 𝒞²(R²) ⊂ D(G_{(Y,S)}) and for f ∈ 𝒞²(R²) we have

G_{(Y,S)} f(y, s) = −(−Δ_y)^{α/2} f(y, s) + Rf(y, s) + W(y) f_s(y, s)   (2.6)

for all (y, s) ∈ R², with Rf given by (2.2).

Moreover, 𝒞²(R²) ⊂ D(G_{(Ŷ,Ŝ)}) and for f ∈ 𝒞²(R²) we have

G_{(Ŷ,Ŝ)} f(y, s) = −(−Δ_y)^{α/2} f(y, s) + W(y) f_s(y, s)   (2.7)

for all (y, s) ∈ R².

By Arg(z) we denote the argument of z ∈ C contained in (−π, π]. For g ∈ C²(S) let us put

Lg(z) = A_α lim_{ε→0+} ∫_{S ∩ {|Arg(w/z)| > ε}} (g(w) − g(z)) / |Arg(w/z)|^{1+α} dw
        + A_α Σ_{n∈Z\{0}} ∫_S (g(w) − g(z)) / |Arg(w/z) + 2nπ|^{1+α} dw,   (2.8)

where dw denotes the arc length measure on S; note that ∫_S dw = 2π. It is clear that for f ∈ C_c²(S × R), z = e^{iy}, y, s ∈ R, and f̃(y, s) := f(e^{iy}, s) we have

−(−Δ_y)^{α/2} f̃(y, s) = L_z f(z, s).   (2.9)

Corollary 2.5. We have C_c²(S × R) ⊂ D(G) and for f ∈ C_c²(S × R) we have

Gf(z, s) = L_z f(z, s) + R^S f(z, s) + V(z) f_s(z, s)

for all (z, s) ∈ S × R, where L is given by (2.8).

We also have C_c²(S × R) ⊂ D(Ĝ) and for f ∈ C_c²(S × R) we have

Ĝf(z, s) = L_z f(z, s) + V(z) f_s(z, s)

for all (z, s) ∈ S × R.

Remark 2.6. Proposition 2.4 shows that for f ∈ 𝒞²(R²) the generator of the process (Y_t, S_t) defined by (2.3) is of the form (2.1). This is a standard result, the so-called "construction of Meyer," but we include our own proof of this result so that the paper is self-contained. Moreover, Proposition 2.2 and Corollaries 2.3 and 2.5 are needed to identify a core for G. Corollary 2.5 is also needed to find the stationary measure for (Z_t, S_t).

We will need two auxiliary results.

Lemma 2.7. There exists a constant c = c(M) > 0 such that for any x ∈ [−π, π] and any u_1 = (y_1, s_1) ∈ R², u_2 = (y_2, s_2) ∈ R² with s_1, s_2 ∈ [−M, M] we have

|g(u_1, x) − g(u_2, x)| ≤ c (|u_2 − u_1| ∧ 1).

Proof. From a_+ = (a + |a|)/2 we conclude that |a_+ − b_+| ≤ |a − b| for all a, b ∈ R.
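(For completeness, this step can be spelled out as follows: a_+ − b_+ = (a − b)/2 + (|a| − |b|)/2, and ||a| − |b|| ≤ |a − b| by the reverse triangle inequality, whence |a_+ − b_+| ≤ |a − b|.)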

Let x ∈ [−π, π], u_1 = (y_1, s_1) ∈ R², u_2 = (y_2, s_2) ∈ R² and s_1, s_2 ∈ [−M, M]. We have

|g(u_1, x) − g(u_2, x)| = |((W(y_1) − W(y_1 + x)) s_1)_+ − ((W(y_2) − W(y_2 + x)) s_2)_+|
≤ |(W(y_1) − W(y_1 + x)) s_1 − (W(y_2) − W(y_2 + x)) s_2|
≤ |(W(y_1) − W(y_1 + x)) s_1 − (W(y_1) − W(y_1 + x)) s_2| + |(W(y_1) − W(y_1 + x)) s_2 − (W(y_2) − W(y_2 + x)) s_2|
≤ |W(y_1) − W(y_1 + x)| |s_1 − s_2| + (|W(y_1) − W(y_2)| + |W(y_1 + x) − W(y_2 + x)|) |s_2|
≤ 2‖W‖_∞ |s_1 − s_2| + 2M ‖W′‖_∞ |y_1 − y_2|.

Since, trivially, |g(u_1, x) − g(u_2, x)| ≤ 4‖W‖_∞ M, the claim follows with c = 4(‖W‖_∞ + ‖W′‖_∞)(M + 1). □

As an easy corollary of Lemma 2.7 we get

Lemma 2.8. There exists a constant c = c(M) > 0 such that for any u_1 = (y_1, s_1) ∈ R², u_2 = (y_2, s_2) ∈ R² with s_1, s_2 ∈ [−M, M] we have

|‖g(u_1, ·)‖_1 − ‖g(u_2, ·)‖_1| ≤ c (|u_2 − u_1| ∧ 1).

Proof of Proposition 2.2. Let f ∈ 𝒞(R²). Throughout the proof we will assume that supp(f) ⊂ R × (−M_0, M_0) for some M_0 > 0. Note that

|S_t| = |S_0 + ∫_0^t W(Y_r) dr| ≤ |S_0| + ‖W‖_∞ ≤ M_0 + ‖W‖_∞

for all starting points (Y_0, S_0) = (y, s) ∈ R × [−M_0, M_0] and all 0 ≤ t ≤ 1. Put M_1 = M_0 + ‖W‖_∞.

If (Y_0, S_0) = (y, s) ∉ R × [−M_1, M_1], then

|S_t| = |S_0 + ∫_0^t W(Y_r) dr| > M_1 − ‖W‖_∞ = M_0,   0 ≤ t ≤ 1,

so f(Y_t, S_t) = 0. It follows that for any (y, s) ∉ R × [−M_1, M_1] and 0 < h ≤ 1 we have

(E^{(y,s)} f(Y_h, S_h) − f(y, s))/h = 0.

By the same argument,

(E^{(y,s)} f(Ŷ_h, Ŝ_h) − f(y, s))/h = 0.

It now follows from the definition of Rf(y, s) that Rf(y, s) = 0 for (y, s) ∉ R × [−M_1, M_1]. It is, therefore, enough to consider (y, s) ∈ R × [−M_1, M_1].

The arguments above tell us that for all starting points (Y_0, S_0) = (y, s) ∈ R × [−M_1, M_1] and all 0 ≤ t ≤ 1, |S_t| ≤ |S_0| + ‖W‖_∞ ≤ M_1 + ‖W‖_∞. Setting

M = M_1 + ‖W‖_∞,

we get from the definition of the function g that

‖g(Y_r, S_r, ·)‖_1 ≤ 2π · 2‖W‖_∞ M,   0 ≤ r ≤ 1,

and so

σ(t) = ∫_0^t ‖g(Y_r, S_r, ·)‖_1 dr ≤ 4π‖W‖_∞ M t = c_0 t,   0 ≤ t ≤ 1,

with the constant c_0 = 4π‖W‖_∞ M.

From now on we will assume that (y, s) ∈ R × [−M_1, M_1] and 0 < h ≤ 1. We have

(T_h f(y, s) − f(y, s))/h = (E^{(y,s)} f(Y_h, S_h) − f(y, s))/h
= (1/h) E^{(y,s)}[f(Y_h, S_h) − f(y, s); N(σ(h)) = 0]
+ (1/h) E^{(y,s)}[f(Y_h, S_h) − f(y, s); N(σ(h)) = 1]
+ (1/h) E^{(y,s)}[f(Y_h, S_h) − f(y, s); N(σ(h)) ≥ 2]
= I + II + III.

Since σ(h) ≤ c_0 h we obtain

|III| ≤ (2‖f‖_∞/h) P^{(y,s)}(N(σ(h)) ≥ 2) ≤ (2‖f‖_∞/h) P^{(y,s)}(N(c_0 h) ≥ 2)
= 2‖f‖_∞ (1 − e^{−c_0 h} − c_0 h e^{−c_0 h})/h → 0 as h → 0+,

uniformly for all (y, s) ∈ R × [−M_1, M_1].

Now we will consider the expression I. We have

I = (1/h) E^{(y,s)}[f(y + X_h, s + ∫_0^h W(y + X_r) dr) − f(y, s); N(σ(h)) = 0]
= (1/h) E^{(y,s)}[f(Ŷ_h, Ŝ_h) − f(y, s); N(σ(h)) = 0]
= (1/h) E^{(y,s)}[f(Ŷ_h, Ŝ_h) − f(y, s)] − (1/h) E^{(y,s)}[f(Ŷ_h, Ŝ_h) − f(y, s); N(σ(h)) ≥ 1]
= I_1 + I_2.

Note that

I_1 = (T̂_h f(y, s) − f(y, s))/h.

It will suffice to prove that I_2 → 0 and II → Rf. We have

|I_2| ≤ (1/h) E^{(y,s)}[|f(Ŷ_h, Ŝ_h) − f(y, s)|; N(c_0 h) ≥ 1] = ((1 − e^{−c_0 h})/h) E^{(y,s)}|f(Ŷ_h, Ŝ_h) − f(y, s)|.

Recall that f ∈ 𝒞(R²) is bounded and uniformly continuous. We will use the following modulus of continuity:

ε(f; δ) = ε(δ) = sup_{(y,s)∈R², |y_1|∨|s_1|≤δ} |f(y + y_1, s + s_1) − f(y, s)|.

Clearly, ε(δ) ≤ 2‖f‖_∞ and lim_{δ→0+} ε(δ) = 0.

Note that for Ŷ_0 = y, Ŝ_0 = s we have Ŷ_h − y = X_h and Ŝ_h − s = ∫_0^h W(Ŷ_r) dr, which gives |Ŝ_t − s| ≤ h‖W‖_∞ for all t ≤ h. It follows that

E^{(y,s)}|f(Ŷ_h, Ŝ_h) − f(y, s)| ≤ E^{(y,s)} ε(sup_{0<t≤h} |X_t| ∨ h‖W‖_∞).

Since t ↦ X_t is right-continuous and X_0 ≡ 0 we have, a.s.,

sup_{0<t≤h} |X_t| → 0 as h → 0+,   and, therefore,   ε(sup_{0<t≤h} |X_t| ∨ h‖W‖_∞) → 0 as h → 0+.

By the bounded convergence theorem

E^{(y,s)} ε(sup_{0<t≤h} |X_t| ∨ h‖W‖_∞) → 0 as h → 0+

uniformly for all (y, s) ∈ R × [−M_1, M_1], because the expression ε(sup_{0<t≤h} |X_t| ∨ h‖W‖_∞) does not depend on (y, s). It follows that

|I_2| → 0 as h → 0+

uniformly for all (y, s) ∈ R × [−M_1, M_1].

Now we turn to II. We have

II = (1/h) E^{(y,s)}[f(Y_h, S_h) − f(Y_h, s); N(σ(h)) = 1] + (1/h) E^{(y,s)}[f(Y_h, s) − f(y, s); N(σ(h)) = 1]
= II_1 + II_2.

Since σ(h) ≤ c_0 h,

|II_1| ≤ (1/h) E^{(y,s)}[|f(Y_h, s + ∫_0^h W(Y_r) dr) − f(Y_h, s)|; N(c_0 h) ≥ 1]
≤ (1/h) E^{(y,s)}[ε(|∫_0^h W(Y_r) dr|); N(c_0 h) ≥ 1]
≤ (1/h) E^{(y,s)}[ε(h‖W‖_∞); N(c_0 h) ≥ 1]
= ((1 − e^{−c_0 h})/h) ε(h‖W‖_∞) → 0 as h → 0+,

uniformly for all (y, s) ∈ R × [−M_1, M_1]. It will suffice to show that II_2 → Rf. From now on we will use the following shorthand notation:

U_t := (Y_t, S_t),   Û_t := (Ŷ_t, Ŝ_t),   u := (y, s).

We have

II_2 = (1/h) E^{(y,s)}[f(y + X_h + η_1(U_{τ_1}), s) − f(y + η_1(U_{τ_1}), s); N(σ(h)) ≥ 1]
+ (1/h) E^{(y,s)}[f(y + η_1(U_{τ_1}), s) − f(y, s); N(σ(h)) ≥ 1]
− (1/h) E^{(y,s)}[f(y + X_h + η_1(U_{τ_1}), s) − f(y, s); N(σ(h)) ≥ 2]
= II_2a + II_2b + II_2c.

Observe that

|II_2c| ≤ 2‖f‖_∞ (1 − e^{−c_0 h} − c_0 h e^{−c_0 h})/h → 0 as h → 0+,

and that the convergence is uniform in (y, s) ∈ R × [−M_1, M_1].

Moreover,

|II_2a| ≤ (1/h) E^{(y,s)}[|f(y + X_h + η_1(U_{τ_1}), s) − f(y + η_1(U_{τ_1}), s)|; N(c_0 h) ≥ 1]
≤ (1/h) E^{(y,s)}[ε(sup_{0≤t≤h} |X_t|); N(c_0 h) ≥ 1]
= ((1 − e^{−c_0 h})/h) E^{(y,s)} ε(sup_{0≤t≤h} |X_t|) → 0 as h → 0+

uniformly for all (y, s) ∈ R × [−M_1, M_1]. It will suffice to show that II_2b → Rf. Note that

N(σ(h)) ≥ 1   ⟺   τ_1 ≤ h   ⟺   ∫_0^h ‖g(U_r, ·)‖_1 dr ≥ σ_1.   (2.10)

We claim that

∫_0^h ‖g(U_r, ·)‖_1 dr ≥ σ_1   ⟺   ∫_0^h ‖g(Û_r, ·)‖_1 dr ≥ σ_1.   (2.11)

First, we assume that ∫_0^h ‖g(U_r, ·)‖_1 dr ≥ σ_1. This implies that τ_1 ≤ h. Recall that U_r = Û_r for r < τ_1. Hence

∫_0^h ‖g(Û_r, ·)‖_1 dr ≥ ∫_0^{τ_1} ‖g(Û_r, ·)‖_1 dr = ∫_0^{τ_1} ‖g(U_r, ·)‖_1 dr = σ_1,

where the last equality follows from the definition of τ_1.

Now let us assume that ∫_0^h ‖g(U_r, ·)‖_1 dr < σ_1. This implies that τ_1 > h. Using again U_r = Û_r for r < h < τ_1, we obtain

σ_1 > ∫_0^h ‖g(U_r, ·)‖_1 dr = ∫_0^h ‖g(Û_r, ·)‖_1 dr,

which finishes the proof of (2.11).

By (2.10) and (2.11) we obtain

II_2b = (1/h) E^{(y,s)}[f(y + η_1(U_{τ_1}), s) − f(y, s); τ_1 ≤ h]
= (1/h) E^{(y,s)}[f(y + η_1(U_{τ_1∧h}), s) − f(y, s); τ_1 ≤ h]
= (1/h) E^{(y,s)}[f(y + η_1(Û_{τ_1∧h}), s) − f(y, s); ∫_0^h ‖g(Û_r, ·)‖_1 dr ≥ σ_1].


We will use the following abbreviations:

u = (y, s),
A = { ∫_0^h ‖g(Û_r, ·)‖_1 dr ≥ σ_1 },
B = { ∫_0^h ‖g(u, ·)‖_1 dr ≥ σ_1 },
F = (1/h) (f(y + η_1(Û_{τ_1∧h}), s) − f(y, s)).

This allows us to rewrite II_2b as

II_2b = E_u[F; A] = E_u[F; B] + E_u[F; A\B] − E_u[F; B\A].

Recall that X = (X_t)_{t≥0}, N = (N(t))_{t≥0} and U = (U_n)_{n∈N} are independent. Therefore the probability measure P can be written in the form P = P_X ⊗ P_N ⊗ P_U; the conditional probability, given N or U, is P_X and the corresponding expectation is denoted by E^X. In a similar way P_{(X,N)} = P_X ⊗ P_N and E^{(X,N)} denote conditional probability and conditional expectation if U is given. As usual, the initial (time-zero) value of the process under consideration is given as a superscript. Note that Û_t = (Ŷ_t, Ŝ_t) is a function of X and does not depend on N or U. In particular, Û_t and σ_1 are independent. Since σ_1 is the time of the first jump of the Poisson process N(t), it is exponentially distributed with parameter 1.
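(We spell out the computation behind the first equality in the next display: conditionally on X, the quantities a = ∫_0^h ‖g(Û_r, ·)‖_1 dr and b = ∫_0^h ‖g(u, ·)‖_1 dr are constants, and σ_1 is independent of X, so

P_u(a ≥ σ_1 > b) = E_u^X[(e^{−b} − e^{−a}) 1_{{a > b}}].)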

It follows that

|E_u[F; A\B]| ≤ (2‖f‖_∞/h) P_u(∫_0^h ‖g(Û_r, ·)‖_1 dr ≥ σ_1 > ∫_0^h ‖g(u, ·)‖_1 dr)
= (2‖f‖_∞/h) E_u^X[e^{−∫_0^h ‖g(u,·)‖_1 dr} − e^{−∫_0^h ‖g(Û_r,·)‖_1 dr}; ∫_0^h ‖g(Û_r, ·)‖_1 dr > ∫_0^h ‖g(u, ·)‖_1 dr]
≤ (2‖f‖_∞/h) E_u^X |e^{−∫_0^h ‖g(u,·)‖_1 dr} − e^{−∫_0^h ‖g(Û_r,·)‖_1 dr}|
≤ (2‖f‖_∞/h) E_u^X ∫_0^h |‖g(u, ·)‖_1 − ‖g(Û_r, ·)‖_1| dr
≤ 2‖f‖_∞ E_u^X sup_{0≤r≤h} |‖g(u, ·)‖_1 − ‖g(Û_r, ·)‖_1|.

For the penultimate inequality we used the elementary estimate |e^{−a} − e^{−b}| ≤ |a − b|, a, b ≥ 0. From Lemma 2.8 we infer that the last expression is bounded by

2‖f‖_∞ c E_u^X[sup_{0≤r≤h} |Û_r − u| ∧ 1] = 2‖f‖_∞ c E_u^X[sup_{0≤r≤h} |(X_r, ∫_0^r W(Ŷ_t) dt)| ∧ 1] → 0 as h → 0+

uniformly for all (y, s) ∈ R × [−M_1, M_1]. This convergence follows from the right-continuity of X_r and the fact that |∫_0^r W(Ŷ_t) dt| ≤ h‖W‖_∞.

A similar argument shows that |E_u[F; B\A]| → 0 as h → 0+, uniformly in (y, s) ∈ R × [−M_1, M_1]. It will suffice to show that E_u[F; B] → Rf.
