www.imstat.org/aihp 2012, Vol. 48, No. 3, 609–630
DOI:10.1214/11-AIHP428
© Association des Publications de l’Institut Henri Poincaré, 2012
Stationary distributions for jump processes with memory
K. Burdzy^{a,1}, T. Kulczycki^{b,c,2} and R. L. Schilling^{d,3}

^a Department of Mathematics, Box 354350, University of Washington, Seattle, WA 98195, USA. E-mail: burdzy@math.washington.edu
^b Institute of Mathematics, Polish Academy of Sciences, ul. Kopernika 18, 51-617 Wrocław, Poland. E-mail: t.kulczycki@impan.pl
^c Institute of Mathematics and Computer Science, Wrocław University of Technology, Wybrzeze Wyspianskiego 27, 50-370 Wrocław, Poland
^d Institut für Mathematische Stochastik, TU Dresden, D-01062 Dresden, Germany. E-mail: rene.schilling@tu-dresden.de

Received 13 September 2010; revised 1 April 2011; accepted 7 April 2011
Abstract. We analyze a jump process $Z$ with a jump measure determined by a "memory" process $S$. The state space of $(Z,S)$ is the Cartesian product of the unit circle and the real line. We prove that the stationary distribution of $(Z,S)$ is the product of the uniform probability measure and a Gaussian distribution.
Résumé. We study a jump process $Z$ with a jump measure determined by a process $S$ representing a "memory." The state space of $(Z,S)$ is the Cartesian product of the unit circle and the real line. We prove that the stationary distribution of $(Z,S)$ is the product measure of a uniform law and a Gaussian law.
MSC: Primary 60J35; secondary 60H10; 60G51; 60J75; 60J55

Keywords: Stationary distribution; Stable Lévy process; Process with memory
1. Introduction
We are going to find stationary distributions for processes with jumps influenced by "memory." This paper is a companion to [3]. The introduction to that paper contains a review of various sources of inspiration for this project, related models and results.
We will analyze a pair of real-valued processes $(Y,S)$ such that $S$ is a "memory" in the sense that $dS_t = W(Y_t)\,dt$, where $W$ is a $C^3$ function. The process $Y$ is a jump process "mostly" driven by a stable process, but the process $S$ affects the rate of jumps of $Y$. We refer the reader to Section 2 for a formal presentation of this model as it is too long for the introduction. The present article illustrates advantages of semi-discrete models introduced in [5], since the form of the stationary distribution for $(Y,S)$ was conjectured in [5], Example 3.8. We would not find it easy to conjecture the stationary distribution for this process in a direct way.
The main result of this paper, i.e. Theorem 3.7, is concerned with the stationary distribution of a transformation of $(Y,S)$. In order to obtain non-trivial results, we "wrap" $Y$ on the unit circle, so that the state space for the transformed process is compact. In other words, we consider $(Z_t,S_t) = (e^{iY_t}, S_t)$. The stationary distribution for $(Z_t,S_t)$ is the product of the uniform distribution on the circle and the normal distribution.
The Gaussian distribution of the "memory" process appeared in models discussed in [2,3]. In each of those papers, memory processes similar to $S$ effectively represented "inert drift." A heuristic argument given in the introduction to [3] provides a justification for the Gaussian distribution, using the concepts of kinetic energy associated to drift and Gibbs measure. The conceptual novelty of the present paper is that the Gaussian distribution of $S$ in the stationary regime cannot be explained by kinetic energy, because $S$ affects the jump distribution and not the drift of $Z$.

1 Supported in part by NSF Grant DMS-09-06743 and by grant N N201 397137, MNiSW, Poland.
2 Supported in part by grant N N201 373136, MNiSW, Poland.
3 Supported in part by DFG grant Schi 419/5-1.

Table 1
Frequently used symbols

$a\vee b = \max(a,b)$; $a\wedge b = \min(a,b)$
$a^+ = \max(a,0)$; $a^- = -\min(a,0)$
$|x|_1 = \sum_{j=1}^m |x_j|$, where $x=(x_1,\dots,x_m)\in\mathbb{R}^m$
$e_k$: the $k$th unit base vector in the usual orthonormal basis for $\mathbb{R}^n$
$A_\alpha = \dfrac{\alpha\,2^{\alpha-1}\Gamma(\frac{1+\alpha}{2})}{\sqrt{\pi}\,\Gamma(1-\frac{\alpha}{2})}$, $\alpha\in(0,2)$
$D^\alpha = \dfrac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}}$, $\alpha=(\alpha_1,\dots,\alpha_d)\in\mathbb{N}_0^d$
$C^k$: $k$-times continuously differentiable functions
$C_b^k$, $C_c^k$, $C_0^k$: functions in $C^k$ which, together with all their derivatives up to order $k$, are bounded, are compactly supported, and vanish at infinity, respectively
$C_*(\mathbb{R}^2)$: all bounded and uniformly continuous functions $f:\mathbb{R}^2\to\mathbb{R}$ such that $\operatorname{supp}(f)\subset\mathbb{R}\times[-N,N]$ for some $N>0$
$C_*^2(\mathbb{R}^2)$: $C_*(\mathbb{R}^2)\cap C_b^2(\mathbb{R}^2)$
$\mathbb{S} = \{z\in\mathbb{C}: |z|=1\}$, the unit circle in $\mathbb{C}$
The product form of the stationary distribution for a two-component Markov process is obvious if the two components are independent Markov processes. The product form is far from obvious if the components are not independent, but it does appear in a number of contexts, from queuing theory to mathematical physics. The paper [5] was an attempt to understand this phenomenon for a class of models. The unexpected appearance of the Gaussian distribution in some stationary measures was noticed in [4] before it was explored more deeply in [2,5].
We turn to the technical aspects of the paper. The main effort is directed at determining the domain and a core of the generator of the process. A part of the argument is based on an estimate of the smoothness of the stochastic flow of solutions to (2.3).
1.1. Notation
Since the paper uses a large amount of notation, we collect some of the most frequently used symbols in Table 1 for easy reference. Constants $c$ without sub- or superscript are generic and may change their value from line to line.
2. The construction of the process and its generator
Let $\mathbb{S}=\{z\in\mathbb{C}:|z|=1\}$ be the unit circle in $\mathbb{C}$. Consider a $C^3$ function $V:\mathbb{S}\to\mathbb{R}$ such that $\int_{\mathbb{S}} V(z)\,dz=0$ and set $W(x)=V(e^{ix})$, $x\in\mathbb{R}$. Assume that $V$ is not identically constant. In this paper we will be interested in the Markov process $(Y_t,S_t)$ with state space $\mathbb{R}^2$ and generator $\mathcal{G}^{(Y,S)}$ of the following form

$$\mathcal{G}^{(Y,S)}f(y,s) = -(-\Delta_y)^{\alpha/2}f(y,s) + \mathcal{R}f(y,s) + W(y)f_s(y,s), \qquad (2.1)$$

with a domain that will be specified later. Here, $(y,s)\in\mathbb{R}^2$, $\alpha\in(0,2)$ and

$$-(-\Delta_y)^{\alpha/2}f(y,s) = A_\alpha\lim_{\varepsilon\to0^+}\int_{|y-x|>\varepsilon}\frac{f(x,s)-f(y,s)}{|y-x|^{1+\alpha}}\,dx,$$
$$\mathcal{R}f(y,s) = \int_{-\pi+y}^{\pi+y}\big(f(x,s)-f(y,s)\big)\big((W(y)-W(x))s\big)^+\,dx. \qquad (2.2)$$
Since $-(-\Delta)^{\alpha/2}$, $\alpha\in(0,2)$, is the generator of the symmetric $\alpha$-stable process on $\mathbb{R}$, we may think of the process $Y_t$ as the perturbed symmetric $\alpha$-stable process and $S_t$ as the memory which changes the jumping measure of the process $Y_t$.
The definition of $(Y,S)$ is informal. Below we will construct this process in a direct way and we will show that this process has the generator (2.1); see Proposition 2.4. Our construction is based on the so-called construction of Meyer; see, e.g., [7,8] or [1], Section 3.1.
For any $(y,s)\in\mathbb{R}^2$ let
$$g(y,s,x) = \big((W(y)-W(y+x))s\big)^+\,\mathbf{1}_{(-\pi,\pi)}(x), \qquad x\in\mathbb{R},$$
and
$$\big\|g(y,s,\cdot)\big\|_1 = \int_{-\pi}^{\pi}\big((W(y)-W(y+x))s\big)^+\,dx.$$
Let $\bar g(y,s,x) := g(y,s,x)/\|g(y,s,\cdot)\|_1$ if $\|g(y,s,\cdot)\|_1 \ne 0$. We let $\bar g(y,s,\cdot)$ be the delta function at $0$ when $\|g(y,s,\cdot)\|_1 = 0$. If $\|g(y,s,\cdot)\|_1 \ne 0$, we let $F_{y,s}(\cdot)$ denote the cumulative distribution function of a random variable with density $\bar g(y,s,\cdot)$. If $\|g(y,s,\cdot)\|_1 = 0$, we let $F_{y,s}(\cdot)$ denote the cumulative distribution function of a random variable that is identically equal to $0$. We have
$$F_{y,s}^{-1}(v) = \inf\bigg\{x\in\mathbb{R}: \int_{-\infty}^{x}\frac{g(y,s,z)}{\|g(y,s,\cdot)\|_1}\,dz \ge v\bigg\},$$
so for any $v$, the function $(y,s)\mapsto F_{y,s}^{-1}(v)$ is measurable. If $U$ is a uniformly distributed random variable on $(0,1)$, then $F_{y,s}^{-1}(U)$ has the density $\bar g(y,s,\cdot)$. Let $(U_n)_{n\in\mathbb{N}}$ be countably many independent copies of $U$ and set $\eta_n(y,s) = F_{y,s}^{-1}(U_n)$.
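The inverse-CDF sampling of the jump sizes $\eta_n(y,s)$ can be illustrated numerically. The following sketch is ours, not part of the paper: it discretizes the density $g(y,s,\cdot)$ on a grid over $(-\pi,\pi)$, normalizes, and inverts the resulting CDF; the choice $W(x)=\cos x$ (i.e. $V(z)=\operatorname{Re}z$, which integrates to $0$ over the circle) and the grid size are arbitrary.

```python
import numpy as np

def W(x):
    # illustrative choice V(e^{ix}) = cos(x), which integrates to 0 over the circle
    return np.cos(x)

def g(y, s, x):
    # jump density g(y, s, x) = ((W(y) - W(y + x)) s)^+ on (-pi, pi)
    return np.maximum((W(y) - W(y + x)) * s, 0.0) * (np.abs(x) < np.pi)

grid = np.linspace(-np.pi, np.pi, 20001)

def sample_eta(y, s, u):
    """Approximate F_{y,s}^{-1}(u) by inverting the discretized CDF of g(y,s,.)."""
    dens = g(y, s, grid)
    dx = grid[1] - grid[0]
    mass = dens.sum() * dx            # approximates ||g(y,s,.)||_1
    if mass == 0.0:                   # degenerate case: the jump is 0
        return 0.0
    cdf = np.cumsum(dens) * dx / mass
    idx = min(np.searchsorted(cdf, u), grid.size - 1)
    return grid[idx]

rng = np.random.default_rng(0)
jumps = np.array([sample_eta(0.3, 1.0, u) for u in rng.uniform(size=1000)])
print(jumps.min() > -np.pi and jumps.max() < np.pi)   # all jumps land in (-pi, pi)
```

In the degenerate case $\|g(y,s,\cdot)\|_1=0$ the sampler returns $0$, matching the convention that $F_{y,s}$ is then the distribution function of the constant $0$.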
Let $X(t)$ be a symmetric $\alpha$-stable process on $\mathbb{R}$, $\alpha\in(0,2)$, starting from $0$, and $N(t)$ a Poisson process with intensity $1$. We assume that $(U_n)_{n\in\mathbb{N}}$, $X(\cdot)$ and $N(\cdot)$ are independent.
Let $0<\sigma_1<\sigma_2<\cdots$ be the times of jumps of $N(t)$. Consider any $y,s\in\mathbb{R}$ and for $t\ge0$ let
$$Y_t^1 = y+X_t, \qquad S_t^1 = s+\int_0^t W\big(Y_r^1\big)\,dr,$$
$$\sigma^1(t) = \int_0^t \big\|g\big(Y_r^1,S_r^1,\cdot\big)\big\|_1\,dr, \qquad \tau_1 = \inf\big\{t\ge0: \sigma^1(t)=\sigma_1\big\}$$
$(\inf\varnothing=\infty)$.
Now we proceed recursively. If $Y_t^j$, $S_t^j$, $\sigma^j(t)$ are well defined on $[0,\tau_j)$ and $\tau_j<\infty$, then we define for $t\ge\tau_j$,
$$Y_t^{j+1} = y+X_t+\sum_{n=1}^{j}\eta_n\big(Y^n(\tau_n-),S^n(\tau_n-)\big),$$
$$S_t^{j+1} = S^j(\tau_j-)+\int_{\tau_j}^{t}W\big(Y_r^{j+1}\big)\,dr,$$
$$\sigma^{j+1}(t) = \sigma_j+\int_{\tau_j}^{t}\big\|g\big(Y_r^{j+1},S_r^{j+1},\cdot\big)\big\|_1\,dr, \qquad \tau_{j+1} = \inf\big\{t\ge\tau_j: \sigma^{j+1}(t)=\sigma_{j+1}\big\}.$$
Let $\tau_0=0$ and $(Y_t,S_t)=(Y_t^j,S_t^j)$ for $\tau_{j-1}\le t<\tau_j$, $j\ge1$. It is easy to see that $(Y_t,S_t)$ is defined for all $t\ge0$, a.s. If we put $\sigma(t)=\int_0^t\|g(Y_r,S_r,\cdot)\|_1\,dr$, then we can represent $(Y_t,S_t)$ by the following closed-form expression,
$$Y_t = y+X_t+\sum_{n=1}^{N(\sigma(t))}\eta_n\big(Y(\tau_n-),S(\tau_n-)\big), \qquad S_t = s+\int_0^t W(Y_r)\,dr. \qquad (2.3)$$
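The recursive construction behind (2.3) suggests a simulation scheme. The sketch below is our own rough Euler discretization, not the authors' method: symmetric stable increments are generated by the Chambers-Mallows-Stuck formula, the clock $\sigma(t)$ accumulates $\|g(U_r,\cdot)\|_1$, and a jump drawn from the normalized density fires whenever $\sigma$ crosses the next arrival of a rate-1 Poisson process. The choices $W(x)=\cos x$, $\alpha=1.5$, the step size and the grids are all arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, dt, T = 1.5, 1e-3, 5.0     # arbitrary stability index, time step, horizon
W = np.cos                        # W(x) = V(e^{ix}) = cos(x), an arbitrary admissible choice

def stable_increments(n):
    """Symmetric alpha-stable increments over time dt (Chambers-Mallows-Stuck)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, n)
    E = rng.exponential(1.0, n)
    X = (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
         * (np.cos((1 - alpha) * V) / E) ** ((1 - alpha) / alpha))
    return dt ** (1 / alpha) * X

xgrid = np.linspace(-np.pi, np.pi, 2001)
dx = xgrid[1] - xgrid[0]

def intensity(y, s):
    """Riemann-sum approximation of ||g(y,s,.)||_1."""
    return np.maximum((W(y) - W(y + xgrid)) * s, 0.0).sum() * dx

def jump_from(y, s):
    """Draw a jump from the normalized density g(y,s,.)/||g(y,s,.)||_1 via the inverse CDF."""
    dens = np.maximum((W(y) - W(y + xgrid)) * s, 0.0)
    cdf = np.cumsum(dens)
    return xgrid[np.searchsorted(cdf / cdf[-1], rng.uniform())] if cdf[-1] > 0 else 0.0

n = int(T / dt)
dX = stable_increments(n)
Y, S = 0.0, 0.0
clock, next_arrival = 0.0, rng.exponential(1.0)   # sigma(t) and the Poisson arrival sigma_1
for k in range(n):
    clock += intensity(Y, S) * dt
    if clock >= next_arrival:                     # N(sigma(t)) increases: add a jump eta_n
        Y += jump_from(Y, S)
        next_arrival += rng.exponential(1.0)      # next arrival sigma_{n+1}
    Y += dX[k]                                    # stable driving noise
    S += W(Y) * dt                                # memory: dS_t = W(Y_t) dt
```

Since $|W|\le1$ here, the simulated memory satisfies $|S_T|\le T$, mirroring the bound $|S_t|\le|S_0|+t\|W\|_\infty$ used repeatedly below.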
We define the semigroup $\{T_t\}_{t\ge0}$ of the process $(Y_t,S_t)$ for $f\in C_b(\mathbb{R}^2)$ by $T_tf(y,s)=\mathbb{E}^{(y,s)}f(Y_t,S_t)$, $(y,s)\in\mathbb{R}^2$.
By $\mathcal{G}^{(Y,S)}$ we denote the generator of $\{T_t\}_{t\ge0}$ and its domain by $D(\mathcal{G}^{(Y,S)})$. We will show in Proposition 2.4 that $C_*^2(\mathbb{R}^2)\subset D(\mathcal{G}^{(Y,S)})$ and that $\mathcal{G}^{(Y,S)}f$ is given by (2.1) for $f\in C_*^2(\mathbb{R}^2)$; see Section 1.1 for the definition of $C_*^2(\mathbb{R}^2)$.
Our construction of $(Y_t,S_t)$ is a deterministic map
$$\big((U_n)_{n\in\mathbb{N}},\,(N(t))_{t\ge0},\,(X(t))_{t\ge0}\big) \mapsto \big((Y(t))_{t\ge0},\,(S(t))_{t\ge0}\big).$$
This easily implies the strong Markov property for $(Y,S)$. We will verify that $(Z_t,S_t):=(e^{iY_t},S_t)$ is also a strong Markov process. We first show that the transition function of $(Y_t,S_t)$ is periodic.
Lemma 2.1. Let $(Y_t,S_t)$ be the Markov process defined by (2.3). Then
$$\mathbb{P}^{(y+2\pi,s)}(Y_t\in A+2\pi,\,S_t\in B) = \mathbb{P}^{(y,s)}(Y_t\in A,\,S_t\in B)$$
for all $(y,s)\in\mathbb{R}^2$ and all Borel sets $A,B\subset\mathbb{R}$.
Proof. Let $X_t$ be a symmetric $\alpha$-stable process, starting from $0$, $\alpha\in(0,2)$, and let $N(t)$ be a Poisson process with intensity $1$. By $(Y_t^y,S_t^s)$ we denote the process given by (2.3) with initial value $(Y_0^y,S_0^s)=(y,s)$. The process $(\tilde Y_t,\tilde S_t):=(Y_t^{y+2\pi},S_t^s)$ has the following representation
$$\tilde Y_t = y+2\pi+X_t+\sum_{n=1}^{N(\tilde\sigma(t))}\eta_n\big(\tilde Y(\tilde\tau_n-),\tilde S(\tilde\tau_n-)\big), \qquad \tilde S_t = s+\int_0^tW(\tilde Y_r)\,dr,$$
where $\tilde\sigma(t)=\int_0^t\|g(\tilde Y_r,\tilde S_r,\cdot)\|_1\,dr$ and $\tilde\tau_k=\inf_{t\ge0}\{\tilde\sigma(t)=\sigma_k\}$.
Note that for all $x\in\mathbb{R}$,
$$g(y-2\pi,s,x)=g(y,s,x) \qquad\text{and, therefore,}\qquad \big\|g(y-2\pi,s,\cdot)\big\|_1=\big\|g(y,s,\cdot)\big\|_1.$$
It follows that $\eta_n(y-2\pi,s)$ has the same distribution as $\eta_n(y,s)$. Since the function $W$ is periodic with period $2\pi$, we have $W(\tilde Y_r)=W(\tilde Y_r-2\pi)$. Moreover, $\|g(\tilde Y_r,\tilde S_r,\cdot)\|_1=\|g(\tilde Y_r-2\pi,\tilde S_r,\cdot)\|_1$ and $\eta_n(\tilde Y(\tilde\tau_n-),\tilde S(\tilde\tau_n-))$ has the same distribution as $\eta_n(\tilde Y(\tilde\tau_n-)-2\pi,\tilde S(\tilde\tau_n-))$. This means that we can rewrite the representation of $(Y_t^{y+2\pi},S_t^s)$ in the following way:
$$\tilde Y_t = y+2\pi+X_t+\sum_{n=1}^{N(\tilde\sigma(t))}\eta_n\big(\tilde Y(\tilde\tau_n-)-2\pi,\tilde S(\tilde\tau_n-)\big), \qquad \tilde S_t = s+\int_0^tW(\tilde Y_r-2\pi)\,dr,$$
where $\tilde\sigma(t)=\int_0^t\|g(\tilde Y_r-2\pi,\tilde S_r,\cdot)\|_1\,dr$ and $\tilde\tau_k=\inf_{t\ge0}\{\tilde\sigma(t)=\sigma_k\}$.
By subtracting $2\pi$ from both sides of the first equation we get
$$\tilde Y_t-2\pi = y+X_t+\sum_{n=1}^{N(\tilde\sigma(t))}\eta_n\big(\tilde Y(\tilde\tau_n-)-2\pi,\tilde S(\tilde\tau_n-)\big), \qquad \tilde S_t = s+\int_0^tW(\tilde Y_r-2\pi)\,dr,$$
with $\tilde\sigma(t)$ and $\tilde\tau_k$ as before. Substituting $\hat Y_t := \tilde Y_t-2\pi$ we see that this is the defining system of equations for the process $(Y_t^y,S_t^s)$. Therefore, the processes $(Y_t^y,S_t^s)$ and $(Y_t^{y+2\pi}-2\pi,S_t^s)$ have the same law. □

Table 2
Notation for processes and their generators

$(Y_t,S_t)$; semigroup $T_t$, $t\ge0$; generator and domain $(\mathcal{G}^{(Y,S)},D(\mathcal{G}^{(Y,S)}))$
$(Z_t,S_t)=(e^{iY_t},S_t)$; semigroup $T_t^{\mathbb{S}}$, $t\ge0$; generator and domain $(\mathcal{G},D(\mathcal{G}))$
$(\hat Y_t,\hat S_t)=(\hat Y_0+X_t,\,\hat S_0+\int_0^tW(\hat Y_r)\,dr)$; semigroup $\hat T_t$, $t\ge0$; generator and domain $(\mathcal{G}^{(\hat Y,\hat S)},D(\mathcal{G}^{(\hat Y,\hat S)}))$
$(\hat Z_t,\hat S_t)=(e^{i\hat Y_t},\hat S_t)$; semigroup $\hat T_t^{\mathbb{S}}$, $t\ge0$; generator and domain $(\hat{\mathcal{G}},D(\hat{\mathcal{G}}))$
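The periodicity identity $g(y-2\pi,s,x)=g(y,s,x)$ that drives the proof of Lemma 2.1 is easy to check numerically. The snippet below is our illustration, with the arbitrary admissible choice $W(x)=\cos x$:

```python
import numpy as np

def W(x):
    # W(x) = V(e^{ix}) is 2*pi-periodic by construction; cos is an arbitrary admissible choice
    return np.cos(x)

def g(y, s, x):
    # g(y, s, x) = ((W(y) - W(y + x)) s)^+ 1_{(-pi,pi)}(x)
    return np.maximum((W(y) - W(y + x)) * s, 0.0) * (np.abs(x) < np.pi)

x = np.linspace(-np.pi, np.pi, 1001)
dx = x[1] - x[0]
for y, s in [(0.7, 2.0), (-1.3, -0.5), (3.0, 1.0)]:
    assert np.allclose(g(y - 2 * np.pi, s, x), g(y, s, x))
    # equal densities give equal L1 norms: ||g(y - 2*pi, s, .)||_1 = ||g(y, s, .)||_1
    assert np.isclose(g(y - 2 * np.pi, s, x).sum() * dx, g(y, s, x).sum() * dx)
print("periodicity verified")
```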
We can now argue exactly as in [3], Corollary 2.3, to see that $(Z_t,S_t)=(e^{iY_t},S_t)$ is indeed a strong Markov process. We define the transition semigroup of $(Z_t,S_t)$ for $f\in C_0(\mathbb{S}\times\mathbb{R})$ by
$$T_t^{\mathbb{S}}f(z,s) = \mathbb{E}^{(z,s)}f(Z_t,S_t), \qquad (z,s)\in\mathbb{S}\times\mathbb{R}. \qquad (2.4)$$
The generator of $\{T_t^{\mathbb{S}}\}_{t\ge0}$ and its domain will be denoted $\mathcal{G}$ and $D(\mathcal{G})$.
In the sequel we will need the following auxiliary processes
$$\hat Y_t = \hat Y_0+X_t, \qquad \hat S_t = \hat S_0+\int_0^tW(\hat Y_r)\,dr, \qquad \hat Z_t = e^{i\hat Y_t},$$
where $X_t$ is a symmetric $\alpha$-stable Lévy process on $\mathbb{R}$, $\alpha\in(0,2)$, starting from $0$. We will use notation from Table 2.
We will now identify the generators of the processes $(Y_t,S_t)$ and $(Z_t,S_t)$ and link them with the generators of the processes $(\hat Y_t,\hat S_t)$ and $(\hat Z_t,\hat S_t)$.
Proposition 2.2. Let $(Y_t,S_t)$ be the process defined by (2.3) and let $f\in C_*(\mathbb{R}^2)$. Then
$$\lim_{t\to0^+}\frac{T_tf-f}{t}\ \text{exists} \iff \lim_{t\to0^+}\frac{\hat T_tf-f}{t}\ \text{exists},$$
in the norm $\|\cdot\|_\infty$. If one, hence both, limits exist, then
$$\lim_{t\to0^+}\frac{T_tf-f}{t} = \lim_{t\to0^+}\frac{\hat T_tf-f}{t}+\mathcal{R}f, \qquad (2.5)$$
where $\mathcal{R}f$ is given by (2.2).
Corollary 2.3. We have
$$f\in D(\mathcal{G})\cap C_c(\mathbb{S}\times\mathbb{R}) \iff f\in D(\hat{\mathcal{G}})\cap C_c(\mathbb{S}\times\mathbb{R}).$$
If $f\in D(\mathcal{G})\cap C_c(\mathbb{S}\times\mathbb{R})$ then $\mathcal{G}f = \hat{\mathcal{G}}f+\mathcal{R}^{\mathbb{S}}f$, where
$$\mathcal{R}^{\mathbb{S}}f(z,s) = \int_{\mathbb{S}}\big(f(w,s)-f(z,s)\big)\big((V(z)-V(w))s\big)^+\,dw.$$
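The operator $\mathcal{R}^{\mathbb{S}}$ can be approximated by quadrature on the circle. The following sketch is our illustration, not part of the paper (the test functions $V(e^{i\theta})=\cos\theta$ and $f$ are arbitrary choices): parametrizing $w=e^{i\theta}$ turns $dw$ into $d\theta$, and two immediate properties are confirmed, namely that $\mathcal{R}^{\mathbb{S}}f$ vanishes when $s=0$ and when $f$ does not depend on the circle variable.

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 4001)   # w = e^{i*theta}, so dw becomes d(theta)
dtheta = theta[1] - theta[0]

def RS(f, V, z_angle, s):
    """Riemann-sum approximation of R^S f(z,s) = int_S (f(w,s)-f(z,s)) ((V(z)-V(w)) s)^+ dw."""
    integrand = (f(theta, s) - f(z_angle, s)) * np.maximum((V(z_angle) - V(theta)) * s, 0.0)
    return integrand.sum() * dtheta

V = np.cos                                        # V(e^{i*t}) = cos(t) = Re(z); integrates to 0 over S
f = lambda t, s: np.cos(t) * np.exp(-s ** 2)      # arbitrary test function on S x R
f_const = lambda t, s: np.exp(-s ** 2)            # constant in the circle variable

print(abs(RS(f, V, 0.5, 0.0)) < 1e-12)            # s = 0 kills the kernel: ((...)*0)^+ = 0
print(abs(RS(f_const, V, 0.5, 2.0)) < 1e-12)      # f(w,s) - f(z,s) = 0
```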
Proposition 2.4. Let $(Y_t,S_t)$ be the process defined by (2.3). Then $C_*^2(\mathbb{R}^2)\subset D(\mathcal{G}^{(Y,S)})$ and for $f\in C_*^2(\mathbb{R}^2)$ we have
$$\mathcal{G}^{(Y,S)}f(y,s) = -(-\Delta_y)^{\alpha/2}f(y,s)+\mathcal{R}f(y,s)+W(y)f_s(y,s) \qquad (2.6)$$
for all $(y,s)\in\mathbb{R}^2$, with $\mathcal{R}f$ given by (2.2).
Moreover, $C_*^2(\mathbb{R}^2)\subset D(\mathcal{G}^{(\hat Y,\hat S)})$ and for $f\in C_*^2(\mathbb{R}^2)$ we have
$$\mathcal{G}^{(\hat Y,\hat S)}f(y,s) = -(-\Delta_y)^{\alpha/2}f(y,s)+W(y)f_s(y,s) \qquad (2.7)$$
for all $(y,s)\in\mathbb{R}^2$.
By $\operatorname{Arg}(z)$ we denote the argument of $z\in\mathbb{C}$ contained in $(-\pi,\pi]$. For $g\in C^2(\mathbb{S})$ let us put
$$\mathcal{L}g(z) = A_\alpha\lim_{\varepsilon\to0^+}\int_{\mathbb{S}\cap\{|\operatorname{Arg}(w/z)|>\varepsilon\}}\frac{g(w)-g(z)}{|\operatorname{Arg}(w/z)|^{1+\alpha}}\,dw + A_\alpha\sum_{n\in\mathbb{Z}\setminus\{0\}}\int_{\mathbb{S}}\frac{g(w)-g(z)}{|\operatorname{Arg}(w/z)+2n\pi|^{1+\alpha}}\,dw, \qquad (2.8)$$
where $dw$ denotes the arc length measure on $\mathbb{S}$; note that $\int_{\mathbb{S}}dw = 2\pi$. It is clear that for $f\in C_c^2(\mathbb{S}\times\mathbb{R})$, $z=e^{iy}$, $y,s\in\mathbb{R}$, we have
$$-(-\Delta_y)^{\alpha/2}\tilde f(y,s) = \mathcal{L}_zf(z,s), \qquad (2.9)$$
where $\tilde f(y,s):=f(e^{iy},s)$.
Corollary 2.5. We have $C_c^2(\mathbb{S}\times\mathbb{R})\subset D(\mathcal{G})$ and for $f\in C_c^2(\mathbb{S}\times\mathbb{R})$ we have
$$\mathcal{G}f(z,s) = \mathcal{L}_zf(z,s)+\mathcal{R}^{\mathbb{S}}f(z,s)+V(z)f_s(z,s)$$
for all $(z,s)\in\mathbb{S}\times\mathbb{R}$, where $\mathcal{L}$ is given by (2.8).
We also have $C_c^2(\mathbb{S}\times\mathbb{R})\subset D(\hat{\mathcal{G}})$ and for $f\in C_c^2(\mathbb{S}\times\mathbb{R})$ we have
$$\hat{\mathcal{G}}f(z,s) = \mathcal{L}_zf(z,s)+V(z)f_s(z,s)$$
for all $(z,s)\in\mathbb{S}\times\mathbb{R}$.
Remark 2.6. Proposition 2.4 shows that for $f\in C_*^2(\mathbb{R}^2)$ the generator of the process $(Y_t,S_t)$ defined by (2.3) is of the form (2.1). This is a standard result, the so-called "construction of Meyer," but we include our own proof of this result so that the paper is self-contained. Moreover, Proposition 2.2, Corollaries 2.3 and 2.5 are needed to identify a core for $\mathcal{G}$. Corollary 2.5 is also needed to find the stationary measure for $(Z_t,S_t)$.
We will need two auxiliary results.
Lemma 2.7. There exists a constant $c=c(M)>0$ such that for any $x\in[-\pi,\pi]$ and any $u_1=(y_1,s_1)\in\mathbb{R}^2$, $u_2=(y_2,s_2)\in\mathbb{R}^2$ with $s_1,s_2\in[-M,M]$ we have
$$\big|g(u_1,x)-g(u_2,x)\big| \le c\big(|u_2-u_1|\wedge1\big).$$
Proof. From $a^+=(a+|a|)/2$ we conclude that $|a^+-b^+|\le|a-b|$ for all $a,b\in\mathbb{R}$.
Let $x\in[-\pi,\pi]$, $u_1=(y_1,s_1)\in\mathbb{R}^2$, $u_2=(y_2,s_2)\in\mathbb{R}^2$ and $s_1,s_2\in[-M,M]$. We have
$$\begin{aligned}
\big|g(u_1,x)-g(u_2,x)\big| &\le \Big|\big((W(y_1)-W(y_1+x))s_1\big)^+-\big((W(y_2)-W(y_2+x))s_2\big)^+\Big|\\
&\le \big|(W(y_1)-W(y_1+x))s_1-(W(y_2)-W(y_2+x))s_2\big|\\
&\le \big|(W(y_1)-W(y_1+x))s_1-(W(y_1)-W(y_1+x))s_2\big|\\
&\quad+ \big|(W(y_1)-W(y_1+x))s_2-(W(y_2)-W(y_2+x))s_2\big|\\
&\le \big|W(y_1)-W(y_1+x)\big|\,|s_1-s_2| + \big|W(y_1)-W(y_2)\big|\,|s_2| + \big|W(y_1+x)-W(y_2+x)\big|\,|s_2|\\
&\le 2\|W\|_\infty|s_1-s_2| + 2M\|W'\|_\infty|y_1-y_2|.
\end{aligned}$$
Since, trivially, $|g(u_1,x)-g(u_2,x)|\le 4\|W\|_\infty M$, the claim follows with $c=4(\|W\|_\infty+\|W'\|_\infty)(M+1)$. □
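The bound of Lemma 2.7 can be sanity-checked numerically. In the sketch below (our illustration; with $W=\cos$ we have $\|W\|_\infty=\|W'\|_\infty=1$), random pairs $u_1,u_2$ with $|s_i|\le M$ are tested against $c(|u_2-u_1|\wedge1)$ for $c=4(\|W\|_\infty+\|W'\|_\infty)(M+1)$:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 3.0
c = 4 * (1.0 + 1.0) * (M + 1)   # c = 4(||W||_inf + ||W'||_inf)(M + 1) for W = cos

def g(y, s, x):
    # x is already restricted to [-pi, pi], so the indicator is omitted
    return max((np.cos(y) - np.cos(y + x)) * s, 0.0)

ok = True
for _ in range(10000):
    x = rng.uniform(-np.pi, np.pi)
    y1, y2 = rng.uniform(-10, 10, size=2)
    s1, s2 = rng.uniform(-M, M, size=2)
    lhs = abs(g(y1, s1, x) - g(y2, s2, x))
    rhs = c * min(np.hypot(y2 - y1, s2 - s1), 1.0)
    ok = ok and lhs <= rhs + 1e-12
print(ok)
```

This only probes the inequality at random points, of course; the proof above gives it for all admissible arguments.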
As an easy corollary of Lemma 2.7 we get
Lemma 2.8. There exists a constant $c=c(M)>0$ such that for any $u_1=(y_1,s_1)\in\mathbb{R}^2$, $u_2=(y_2,s_2)\in\mathbb{R}^2$ with $s_1,s_2\in[-M,M]$ we have
$$\Big|\big\|g(u_1,\cdot)\big\|_1-\big\|g(u_2,\cdot)\big\|_1\Big| \le c\big(|u_2-u_1|\wedge1\big).$$
Proof of Proposition 2.2. Let $f\in C_*(\mathbb{R}^2)$. Throughout the proof we will assume that $\operatorname{supp}(f)\subset\mathbb{R}\times(-M_0,M_0)$ for some $M_0>0$. Note that
$$|S_t| = \Big|S_0+\int_0^tW(Y_r)\,dr\Big| \le |S_0|+\|W\|_\infty \le M_0+\|W\|_\infty$$
for all starting points $(Y_0,S_0)=(y,s)\in\mathbb{R}\times[-M_0,M_0]$ and all $0\le t\le1$. Put $M_1=M_0+\|W\|_\infty$.
If $(Y_0,S_0)=(y,s)\notin\mathbb{R}\times[-M_1,M_1]$, then
$$|S_t| = \Big|S_0+\int_0^tW(Y_r)\,dr\Big| > M_1-\|W\|_\infty = M_0, \qquad 0\le t\le1,$$
so $f(Y_t,S_t)=0$. It follows that for any $(y,s)\notin\mathbb{R}\times[-M_1,M_1]$ and $0<h\le1$ we have
$$\frac{\mathbb{E}^{(y,s)}f(Y_h,S_h)-f(y,s)}{h} = 0.$$
By the same argument,
$$\frac{\mathbb{E}^{(y,s)}f(\hat Y_h,\hat S_h)-f(y,s)}{h} = 0.$$
It now follows from the definition of $\mathcal{R}f(y,s)$ that $\mathcal{R}f(y,s)=0$ for $(y,s)\notin\mathbb{R}\times[-M_1,M_1]$. It is, therefore, enough to consider $(y,s)\in\mathbb{R}\times[-M_1,M_1]$.
The arguments above tell us that for all starting points $(Y_0,S_0)=(y,s)\in\mathbb{R}\times[-M_1,M_1]$ and all $0\le t\le1$, $|S_t|\le|S_0|+\|W\|_\infty\le M_1+\|W\|_\infty$. Setting
$$M = M_1+\|W\|_\infty,$$
we get from the definition of the function $g$ that
$$\big\|g(Y_r,S_r,\cdot)\big\|_1 \le 2\pi\cdot2\|W\|_\infty M, \qquad 0\le r\le1,$$
and so
$$\sigma(t) = \int_0^t\big\|g(Y_r,S_r,\cdot)\big\|_1\,dr \le 4\pi\|W\|_\infty Mt = c_0t, \qquad 0\le t\le1,$$
with the constant $c_0=4\pi\|W\|_\infty M$.
From now on we will assume that $(y,s)\in\mathbb{R}\times[-M_1,M_1]$ and $0<h\le1$. We have
$$\begin{aligned}
\frac{T_hf(y,s)-f(y,s)}{h} &= \frac{\mathbb{E}^{(y,s)}f(Y_h,S_h)-f(y,s)}{h}\\
&= \frac1h\mathbb{E}^{(y,s)}\big[f(Y_h,S_h)-f(y,s);\,N(\sigma(h))=0\big]\\
&\quad+ \frac1h\mathbb{E}^{(y,s)}\big[f(Y_h,S_h)-f(y,s);\,N(\sigma(h))=1\big]\\
&\quad+ \frac1h\mathbb{E}^{(y,s)}\big[f(Y_h,S_h)-f(y,s);\,N(\sigma(h))\ge2\big]\\
&= \mathrm{I}+\mathrm{II}+\mathrm{III}.
\end{aligned}$$
Since $\sigma(h)\le c_0h$ we obtain
$$|\mathrm{III}| \le \frac{2\|f\|_\infty}{h}\,\mathbb{P}^{(y,s)}\big(N(\sigma(h))\ge2\big) \le \frac{2\|f\|_\infty}{h}\,\mathbb{P}^{(y,s)}\big(N(c_0h)\ge2\big) = 2\|f\|_\infty\,\frac{1-e^{-c_0h}-c_0he^{-c_0h}}{h} \xrightarrow[h\to0^+]{} 0$$
uniformly for all $(y,s)\in\mathbb{R}\times[-M_1,M_1]$.
Now we will consider the expression I. We have
$$\begin{aligned}
\mathrm{I} &= \frac1h\mathbb{E}^{(y,s)}\Big[f\Big(y+X_h,\,s+\int_0^hW(y+X_r)\,dr\Big)-f(y,s);\,N(\sigma(h))=0\Big]\\
&= \frac1h\mathbb{E}^{(y,s)}\big[f(\hat Y_h,\hat S_h)-f(y,s);\,N(\sigma(h))=0\big]\\
&= \frac1h\mathbb{E}^{(y,s)}\big[f(\hat Y_h,\hat S_h)-f(y,s)\big] - \frac1h\mathbb{E}^{(y,s)}\big[f(\hat Y_h,\hat S_h)-f(y,s);\,N(\sigma(h))\ge1\big]\\
&= \mathrm{I}_1+\mathrm{I}_2.
\end{aligned}$$
Note that
$$\mathrm{I}_1 = \frac{\hat T_hf(y,s)-f(y,s)}{h}.$$
It will suffice to prove that $\mathrm{I}_2\to0$ and $\mathrm{II}\to\mathcal{R}f$. We have
$$|\mathrm{I}_2| \le \frac1h\mathbb{E}^{(y,s)}\big[\big|f(\hat Y_h,\hat S_h)-f(y,s)\big|;\,N(c_0h)\ge1\big] = \frac{1-e^{-c_0h}}{h}\,\mathbb{E}^{(y,s)}\big|f(\hat Y_h,\hat S_h)-f(y,s)\big|.$$
Recall that $f\in C_*(\mathbb{R}^2)$ is bounded and uniformly continuous. We will use the following modulus of continuity
$$\varepsilon(f;\delta) = \varepsilon(\delta) = \sup_{(y,s)\in\mathbb{R}^2}\ \sup_{|y_1|\vee|s_1|\le\delta}\big|f(y+y_1,s+s_1)-f(y,s)\big|.$$
Clearly, $\varepsilon(\delta)\le2\|f\|_\infty$ and $\lim_{\delta\to0^+}\varepsilon(\delta)=0$.
Note that for $\hat Y_0=y$, $\hat S_0=s$ we have $\hat Y_h-y=X_h$ and $\hat S_h-s=\int_0^hW(\hat Y_r)\,dr$, which gives $|\hat S_t-s|\le h\|W\|_\infty$ for all $t\le h$. It follows that
$$\mathbb{E}^{(y,s)}\big|f(\hat Y_h,\hat S_h)-f(y,s)\big| \le \mathbb{E}^{(y,s)}\,\varepsilon\Big(\sup_{0<t\le h}|X_t|\vee h\|W\|_\infty\Big).$$
Since $t\mapsto X_t$ is right-continuous and $X_0\equiv0$ we have, a.s.,
$$\sup_{0<t\le h}|X_t| \xrightarrow[h\to0^+]{} 0 \qquad\text{and, therefore,}\qquad \varepsilon\Big(\sup_{0<t\le h}|X_t|\vee h\|W\|_\infty\Big) \xrightarrow[h\to0^+]{} 0.$$
By the bounded convergence theorem
$$\mathbb{E}^{(y,s)}\,\varepsilon\Big(\sup_{0<t\le h}|X_t|\vee h\|W\|_\infty\Big) \xrightarrow[h\to0^+]{} 0$$
uniformly for all $(y,s)\in\mathbb{R}\times[-M_1,M_1]$, because the expression $\varepsilon(\sup_{0<t\le h}|X_t|\vee h\|W\|_\infty)$ does not depend on $(y,s)$. It follows that
$$|\mathrm{I}_2| \xrightarrow[h\to0^+]{} 0$$
uniformly for all $(y,s)\in\mathbb{R}\times[-M_1,M_1]$. Now we turn to II. We have
$$\mathrm{II} = \frac1h\mathbb{E}^{(y,s)}\big[f(Y_h,S_h)-f(Y_h,s);\,N(\sigma(h))=1\big] + \frac1h\mathbb{E}^{(y,s)}\big[f(Y_h,s)-f(y,s);\,N(\sigma(h))=1\big] = \mathrm{II}_1+\mathrm{II}_2.$$
Since $\sigma(h)\le c_0h$,
$$\begin{aligned}
|\mathrm{II}_1| &\le \frac1h\mathbb{E}^{(y,s)}\Big[\Big|f\Big(Y_h,\,s+\int_0^hW(Y_r)\,dr\Big)-f(Y_h,s)\Big|;\,N(c_0h)\ge1\Big]\\
&\le \frac1h\mathbb{E}^{(y,s)}\Big[\varepsilon\Big(\Big|\int_0^hW(Y_r)\,dr\Big|\Big);\,N(c_0h)\ge1\Big]\\
&\le \frac1h\mathbb{E}^{(y,s)}\big[\varepsilon(h\|W\|_\infty);\,N(c_0h)\ge1\big]\\
&= \frac{1-e^{-c_0h}}{h}\,\varepsilon(h\|W\|_\infty) \xrightarrow[h\to0^+]{} 0
\end{aligned}$$
uniformly for all $(y,s)\in\mathbb{R}\times[-M_1,M_1]$. It will suffice to show that $\mathrm{II}_2\to\mathcal{R}f$. From now on we will use the following shorthand notation
$$U_t := (Y_t,S_t), \qquad \hat U_t := (\hat Y_t,\hat S_t), \qquad u := (y,s).$$
We have
$$\begin{aligned}
\mathrm{II}_2 &= \frac1h\mathbb{E}^{(y,s)}\big[f\big(y+X_h+\eta_1(U_{\tau_1-}),s\big)-f\big(y+\eta_1(U_{\tau_1-}),s\big);\,N(\sigma(h))\ge1\big]\\
&\quad+ \frac1h\mathbb{E}^{(y,s)}\big[f\big(y+\eta_1(U_{\tau_1-}),s\big)-f(y,s);\,N(\sigma(h))\ge1\big]\\
&\quad- \frac1h\mathbb{E}^{(y,s)}\big[f\big(y+X_h+\eta_1(U_{\tau_1-}),s\big)-f(y,s);\,N(\sigma(h))\ge2\big]\\
&= \mathrm{II}_{2a}+\mathrm{II}_{2b}+\mathrm{II}_{2c}.
\end{aligned}$$
Observe that
$$|\mathrm{II}_{2c}| \le 2\|f\|_\infty\,\frac{1-e^{-c_0h}-c_0he^{-c_0h}}{h} \xrightarrow[h\to0^+]{} 0$$
and that the convergence is uniform in $(y,s)\in\mathbb{R}\times[-M_1,M_1]$.
Moreover,
$$\begin{aligned}
|\mathrm{II}_{2a}| &\le \frac1h\mathbb{E}^{(y,s)}\big[\big|f\big(y+X_h+\eta_1(U_{\tau_1-}),s\big)-f\big(y+\eta_1(U_{\tau_1-}),s\big)\big|;\,N(c_0h)\ge1\big]\\
&\le \frac1h\mathbb{E}^{(y,s)}\Big[\varepsilon\Big(\sup_{0\le t\le h}|X_t|\Big);\,N(c_0h)\ge1\Big]\\
&= \frac{1-e^{-c_0h}}{h}\,\mathbb{E}^{(y,s)}\,\varepsilon\Big(\sup_{0\le t\le h}|X_t|\Big) \xrightarrow[h\to0^+]{} 0
\end{aligned}$$
uniformly for all $(y,s)\in\mathbb{R}\times[-M_1,M_1]$. It will suffice to show that $\mathrm{II}_{2b}\to\mathcal{R}f$. Note that
$$N(\sigma(h))\ge1 \iff \tau_1\le h \iff \int_0^h\big\|g(U_r,\cdot)\big\|_1\,dr\ge\sigma_1. \qquad (2.10)$$
We claim that
$$\int_0^h\big\|g(U_r,\cdot)\big\|_1\,dr\ge\sigma_1 \iff \int_0^h\big\|g(\hat U_r,\cdot)\big\|_1\,dr\ge\sigma_1. \qquad (2.11)$$
First, we assume that $\int_0^h\|g(U_r,\cdot)\|_1\,dr\ge\sigma_1$. This implies that $\tau_1\le h$. Recall that $U_r=\hat U_r$ for $r<\tau_1$. Hence
$$\int_0^h\big\|g(\hat U_r,\cdot)\big\|_1\,dr \ge \int_0^{\tau_1}\big\|g(\hat U_r,\cdot)\big\|_1\,dr = \int_0^{\tau_1}\big\|g(U_r,\cdot)\big\|_1\,dr = \sigma_1,$$
where the last equality follows from the definition of $\tau_1$.
Now let us assume that $\int_0^h\|g(U_r,\cdot)\|_1\,dr<\sigma_1$. This implies that $\tau_1>h$. Using again $U_r=\hat U_r$ for $r<h<\tau_1$, we obtain
$$\sigma_1 > \int_0^h\big\|g(U_r,\cdot)\big\|_1\,dr = \int_0^h\big\|g(\hat U_r,\cdot)\big\|_1\,dr,$$
which finishes the proof of (2.11).
By (2.10) and (2.11) we obtain
$$\begin{aligned}
\mathrm{II}_{2b} &= \frac1h\mathbb{E}^{(y,s)}\big[f\big(y+\eta_1(U_{\tau_1-}),s\big)-f(y,s);\,\tau_1\le h\big]\\
&= \frac1h\mathbb{E}^{(y,s)}\big[f\big(y+\eta_1(U_{(\tau_1\wedge h)-}),s\big)-f(y,s);\,\tau_1\le h\big]\\
&= \frac1h\mathbb{E}^{(y,s)}\Big[f\big(y+\eta_1\big(\hat U_{(\tau_1\wedge h)-}\big),s\big)-f(y,s);\ \int_0^h\big\|g(\hat U_r,\cdot)\big\|_1\,dr\ge\sigma_1\Big].
\end{aligned}$$
We will use the following abbreviations:
$$u=(y,s), \qquad A=\Big\{\int_0^h\big\|g(\hat U_r,\cdot)\big\|_1\,dr\ge\sigma_1\Big\}, \qquad B=\Big\{\int_0^h\big\|g(u,\cdot)\big\|_1\,dr\ge\sigma_1\Big\},$$
$$F = \frac1h\Big(f\big(y+\eta_1\big(\hat U_{(\tau_1\wedge h)-}\big),s\big)-f(y,s)\Big).$$
This allows us to rewrite $\mathrm{II}_{2b}$ as
$$\mathrm{II}_{2b} = \mathbb{E}^u[F;A] = \mathbb{E}^u[F;B]+\mathbb{E}^u[F;A\setminus B]-\mathbb{E}^u[F;B\setminus A].$$
Recall that $X=(X_t)_{t\ge0}$, $N=(N(t))_{t\ge0}$ and $U=(U_n)_{n\in\mathbb{N}}$ are independent. Therefore the probability measure $\mathbb{P}$ can be written in the form $\mathbb{P}=\mathbb{P}_X\otimes\mathbb{P}_N\otimes\mathbb{P}_U$; the conditional probability, given $N$ and $U$, is $\mathbb{P}_X$ and the corresponding expectation is denoted by $\mathbb{E}_X$. In a similar way $\mathbb{P}_{(X,N)}=\mathbb{P}_X\otimes\mathbb{P}_N$ and $\mathbb{E}_{(X,N)}$ denote conditional probability and conditional expectation if $U$ is given. As usual, the initial (time-zero) value of the process under consideration is given as a superscript. Note that $\hat U_t=(\hat Y_t,\hat S_t)$ is a function of $X$ and does not depend on $N$ or $U$. In particular, $\hat U_t$ and $\sigma_1$ are independent. Since $\sigma_1$ is the time of the first jump of the Poisson process $N(t)$, it is exponentially distributed with parameter $1$. It follows that
$$\begin{aligned}
\big|\mathbb{E}^u[F;A\setminus B]\big| &\le \frac{2\|f\|_\infty}{h}\,\mathbb{P}^u\Big(\int_0^h\big\|g(\hat U_r,\cdot)\big\|_1\,dr\ge\sigma_1>\int_0^h\big\|g(u,\cdot)\big\|_1\,dr\Big)\\
&= \frac{2\|f\|_\infty}{h}\,\mathbb{E}_X^u\Big[e^{-\int_0^h\|g(u,\cdot)\|_1\,dr}-e^{-\int_0^h\|g(\hat U_r,\cdot)\|_1\,dr};\ \int_0^h\big\|g(\hat U_r,\cdot)\big\|_1\,dr>\int_0^h\big\|g(u,\cdot)\big\|_1\,dr\Big]\\
&\le \frac{2\|f\|_\infty}{h}\,\mathbb{E}_X^u\Big|e^{-\int_0^h\|g(u,\cdot)\|_1\,dr}-e^{-\int_0^h\|g(\hat U_r,\cdot)\|_1\,dr}\Big|\\
&\le \frac{2\|f\|_\infty}{h}\,\mathbb{E}_X^u\int_0^h\Big|\big\|g(u,\cdot)\big\|_1-\big\|g(\hat U_r,\cdot)\big\|_1\Big|\,dr\\
&\le 2\|f\|_\infty\,\mathbb{E}_X^u\sup_{0\le r\le h}\Big|\big\|g(u,\cdot)\big\|_1-\big\|g(\hat U_r,\cdot)\big\|_1\Big|.
\end{aligned}$$
For the penultimate inequality we used the elementary estimate $|e^{-a}-e^{-b}|\le|a-b|$, $a,b\ge0$. From Lemma 2.8 we infer that the last expression is bounded by
$$2\|f\|_\infty c\,\mathbb{E}_X^u\Big[\sup_{0\le r\le h}|\hat U_r-u|\wedge1\Big] = 2\|f\|_\infty c\,\mathbb{E}_X^u\Big[\sup_{0\le r\le h}\Big|\Big(X_r,\int_0^rW(\hat Y_t)\,dt\Big)\Big|\wedge1\Big] \xrightarrow[h\to0^+]{} 0$$
uniformly for all $(y,s)\in\mathbb{R}\times[-M_1,M_1]$. This convergence follows from the right-continuity of $X_r$ and the fact that $|\int_0^rW(\hat Y_t)\,dt|\le h\|W\|_\infty$.
A similar argument shows that $|\mathbb{E}^u[F;B\setminus A]|\xrightarrow[h\to0^+]{}0$ uniformly in $(y,s)\in\mathbb{R}\times[-M_1,M_1]$. It will suffice to show that $\mathbb{E}^u[F;B]\to\mathcal{R}f$.