Dynamic Markov Bridges Motivated by Models of Insider Trading


Luciano Campi (Université Paris Dauphine), Umut Çetin (London School of Economics), Albina Danilova (Carnegie Mellon University)

Back's model of insider trading

Inspired by Kyle (1985), Back (1992) studies a market for a bond and a risky asset with three types of participants:

1. Noise traders: they have no information about the future value of the risky asset and observe only their own cumulative demand, which is modelled by a standard Brownian motion B.

2. Informed trader: the insider knows the value of the risky asset at time 1, given by a standard normal random variable V independent of B. Being risk-neutral, she aims to maximize her expected profit.

3. Market maker: the market maker observes only the total order flow and sets the price of the risky asset on the basis of this information.

The pricing mechanism of the market

The market maker sets the price by looking at the total order X^θ, which is given by

X^θ_t = B_t + θ_t,

where θ_t is the position of the insider in the risky asset at time t.

Thus the filtration of the market maker is the one generated by X and is denoted by F^X. Note that θ is not necessarily adapted to F^X, i.e. the insider's trades are not observed directly by the market maker.

The market maker has a pricing rule H : [0, 1] × R → R, which assigns the price in the form S_t = H(t, X_t).

Objectives of the insider and the market maker

The market maker chooses a rational pricing rule, i.e. he chooses a pricing rule such that

H(t, X_t) = E[V | F^X_t]

for every t ∈ [0, 1].

The insider aims to maximize her expected profit from trading.

Equilibrium

Definition 1. A pair (H*, θ*) is said to form an equilibrium if H* is a pricing rule, θ* ∈ A (the set of admissible strategies), and the following conditions are satisfied:

1. Market efficiency condition: given θ*, H* is a rational pricing rule.

2. Optimality condition: given H*, θ* maximizes the insider's expected profit.

Equilibrium

In equilibrium the total order X* satisfies

dX*_t = dB_t + \frac{V − X*_t}{1 − t} dt,

so that X* is a Brownian bridge converging to V as t → 1. The price is given by S_t = X*_t.
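To illustrate the equilibrium dynamics above, here is a minimal Euler-scheme simulation (not part of the original slides; step count and seed are arbitrary choices). It draws V ~ N(0,1), runs the SDE for the total order, and checks that X is pinned to V at the end of trading.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_paths = 2000, 5000
dt = 1.0 / n_steps
t = np.linspace(0.0, 1.0, n_steps + 1)

V = rng.standard_normal(n_paths)     # insider's signal: asset value at time 1
X = np.zeros(n_paths)                # total order, X_0 = 0

for i in range(n_steps - 1):         # stop one step before t = 1 to avoid the 1/(1 - t) singularity
    dB = rng.standard_normal(n_paths) * np.sqrt(dt)      # noise traders' demand increment
    X += dB + (V - X) / (1.0 - t[i]) * dt                # insider's drift pulls X towards V

print("mean |X - V| near t = 1:", np.abs(X - V).mean())  # should be small: X_1 = V
```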

An equilibrium model for a defaultable bond

A company issues a bond that pays £1 at time 1 unless it defaults before that time. The default time is given by

τ := inf{t > 0 : Z_t = −1},

where Z is a Brownian motion starting at 0. Campi & Çetin (2008) study a similar problem in which the insider knows τ from the beginning. In equilibrium the total order solves

dX_t = dB_t + \left( \frac{1}{1 + X_t} − \frac{1 + X_t}{τ − t} \right) dt.

The corresponding pricing rule is

H(t, x) := \int_{1−t}^{∞} \frac{x + 1}{\sqrt{2π y^3}} \, e^{−(x+1)^2/(2y)} \, dy.

Note that, this time, 1 + X* is a 3-dimensional Bessel bridge of length τ from the insider's point of view. Moreover, τ is an F^X-stopping time. Indeed,

τ = inf{t > 0 : X*_t = −1}.

X* is a Brownian motion in its own filtration.

Related literature: Wu (1999), Cho (2003), Lasserre (2004), Biagini and Øksendal (2005).
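As a quick numerical sanity check (not on the slides): the integral above is the probability that a Brownian motion started at x does not hit −1 within the remaining time 1 − t, which by the reflection principle also equals erf((x + 1)/\sqrt{2(1 − t)}). A few lines of Python confirm the two expressions agree; the evaluation point (t, x) = (0.3, 0.2) is arbitrary.

```python
import numpy as np
from math import erf
from scipy.integrate import quad

def H_closed(t, x):
    # P(first hitting time of -1 by a BM started at x exceeds 1 - t) = erf((x + 1) / sqrt(2 (1 - t)))
    return erf((x + 1.0) / np.sqrt(2.0 * (1.0 - t)))

def H_integral(t, x):
    # direct integration of the hitting-time density over [1 - t, infinity)
    dens = lambda y: (x + 1.0) / np.sqrt(2.0 * np.pi * y**3) * np.exp(-(x + 1.0) ** 2 / (2.0 * y))
    val, _ = quad(dens, 1.0 - t, np.inf)
    return val

print(H_closed(0.3, 0.2), H_integral(0.3, 0.2))   # the two values should coincide
```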

One common mathematical characteristic

In the models above the insider's optimal strategy θ* satisfies

dθ*_t = ∂_x log ρ(t, X*_t, signal) dt,   where ρ(t, x, z) dz = P(signal ∈ dz | X*_t = x).

This ensures that the total demand is a Brownian motion in its own filtration. Indeed, by standard filtering theory,

dX*_t = dB^X_t + E[ ∂_x log ρ(t, X*_t, signal) | F^X_t ] dt
      = dB^X_t + \left( \int \frac{ρ_x(t, X*_t, z)}{ρ(t, X*_t, z)} ρ(t, X*_t, z) dz \right) dt
      = dB^X_t + \left( ∂_x \int ρ(t, X*_t, z) dz \right) dt
      = dB^X_t,

since \int ρ(t, x, z) dz = 1 for every (t, x).
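For instance, in Back's model the signal is V and, conditionally on X*_t = x, V is N(x, 1 − t); the recipe above then reproduces exactly the Brownian-bridge drift (a short check, not on the original slides):

ρ(t, x, z) = \frac{1}{\sqrt{2π(1 − t)}} \exp\left( −\frac{(z − x)^2}{2(1 − t)} \right)
  ⟹  ∂_x log ρ(t, x, z) = \frac{z − x}{1 − t},

so the insider's drift is (V − X*_t)/(1 − t), and

\int \frac{ρ_x(t, x, z)}{ρ(t, x, z)} ρ(t, x, z) dz = \int \frac{z − x}{1 − t} ρ(t, x, z) dz = 0,

because the conditional mean of V given X*_t = x is x.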

This is not a coincidence!

Consider a more general Markov setting in which the insider knows the price at time 1, Z_1, where

Z_t = \int_0^t a(Z_s) dB^Z_s.

Then in equilibrium the market maker uses the following process for pricing purposes:

dX_t = a(X_t) (dB_t + dθ_t).

It can be shown along similar lines that it is necessary for equilibrium that X be a martingale in its own filtration.

It is well known, at least since Fitzsimmons, Pitman & Yor (1993), that the process X which is a solution of

dX_t = a(X_t) dB_t + a^2(X_t) \frac{G_x(1 − t, X_t, z)}{G(1 − t, X_t, z)} dt

is a Markov process converging to z as t → 1, where G is the transition density of a diffusion solving

dξ_t = a(ξ_t) dβ_t,   (1)

where β is a standard Brownian motion. That is, X is the bridge of ξ conditioned to hit z at time 1.
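A concrete illustration of this bridge construction (not from the slides; the choice a(x) = x is an arbitrary example): for a(x) = x the driftless diffusion ξ is geometric Brownian motion, whose transition density G(t, x, y) is lognormal, and a short computation gives a^2(x) G_x(1 − t, x, z)/G(1 − t, x, z) = x (ln(z/x) + (1 − t)/2)/(1 − t). An Euler scheme for the resulting SDE is indeed pinned at z at time 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_paths = 2000, 2000
dt = 1.0 / n_steps
t = np.linspace(0.0, 1.0, n_steps + 1)

x0, z = 1.0, 2.0                    # starting point and target of the bridge
X = np.full(n_paths, x0)

for i in range(n_steps - 1):        # stop one step before t = 1, where the drift is singular
    drift = X * (np.log(z / X) + 0.5 * (1.0 - t[i])) / (1.0 - t[i])     # a^2 G_x / G for a(x) = x
    X += X * rng.standard_normal(n_paths) * np.sqrt(dt) + drift * dt    # dX = a(X) dB + drift dt

print("mean |X - z| near t = 1:", np.abs(X - z).mean())   # close to 0: the bridge hits z
```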

If Z_1, independent of B, has a probability density given by G(1, 0, ·), then defining

dX_t = a(X_t) dB_t + a^2(X_t) \frac{G_x(1 − t, X_t, Z_1)}{G(1 − t, X_t, Z_1)} dt

gives the process we want. Indeed, given Z_1 = z the law of X is that of ξ conditioned to hit z at time 1, i.e.

P[X_t ∈ dy | X_s = x, Z_1 = z] = \frac{G(t − s, x, y) G(1 − t, y, z)}{G(1 − s, x, z)} dy.

Thus,

P[X_t ∈ dy | X_s = x] = \left( \int \frac{G(t − s, x, y) G(1 − t, y, z)}{G(1 − s, x, z)} P[Z_1 ∈ dz | X_s = x] \right) dy.

Since we also want X to be a martingale, we must have P[X_t ∈ dy | X_s = x] = G(t − s, x, y) dy.

It follows by direct integration that if

P[Z_1 ∈ dz | X_s = x] = G(1 − s, x, z) dz,

then P[X_t ∈ dy | X_s = x] = G(t − s, x, y) dy.

It can then be verified by filtering methods that P[Z_1 ∈ dz | X_s = x] = G(1 − s, x, z) dz does hold for the specific X defined above.
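The direct integration is simply the normalization of G; spelling it out (not on the slides):

P[X_t ∈ dy | X_s = x]
  = \left( \int \frac{G(t − s, x, y) G(1 − t, y, z)}{G(1 − s, x, z)} \, G(1 − s, x, z) dz \right) dy
  = G(t − s, x, y) \left( \int G(1 − t, y, z) dz \right) dy
  = G(t − s, x, y) dy,

since G(1 − t, y, ·) is a probability density.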

In the models presented so far:

- there is a private signal Z of the insider giving the true price at the end of the trading horizon;
- the cumulative demand does not change its law, i.e. it stays a Brownian motion in its own filtration if the insider trades optimally.

Dynamic information asymmetry

Back and Pedersen (1998) analyze the same problem when the insider receives a continuous signal

Z_t = Z_0 + \int_0^t σ(u) dB^Z_u,

where Z_0 is a mean-zero normal random variable, B^Z is a Brownian motion independent of the noise demand B, and Var(Z_0) + \int_0^1 σ^2(s) ds = 1.

The asset value at time 1 is given by Z_1. The equilibrium demand in this case is given by

dX_t = dB_t + \frac{Z_t − X_t}{V(t) − t} dt,

where V(t) = Var(Z_0) + \int_0^t σ^2(s) ds. Moreover, S_t = X_t and lim_{t→1} S_t = Z_1.

Similar problems in varying generality are discussed in Wu (1999), Föllmer, Wu and Yor (1999) and Danilova (2008).
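A minimal simulation of this dynamic-signal equilibrium (not from the slides; the constant σ^2 ≡ 0.5 and Var(Z_0) = 0.5 are arbitrary choices satisfying Var(Z_0) + ∫_0^1 σ^2(s) ds = 1). It checks that X is pinned to Z_1 and that, unconditionally, X looks like a standard Brownian motion.

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_paths = 2000, 5000
dt = 1.0 / n_steps
t = np.linspace(0.0, 1.0, n_steps + 1)

c = 0.5                                  # Var(Z_0)
sigma = np.sqrt(0.5)                     # constant volatility of the signal
V = c + 0.5 * t                          # V(t) = Var(Z_0) + int_0^t sigma^2 ds

Z = rng.standard_normal(n_paths) * np.sqrt(c)    # Z_0 ~ N(0, c)
X = np.zeros(n_paths)

for i in range(n_steps - 1):             # stop before t = 1, where V(t) - t -> 0
    dB  = rng.standard_normal(n_paths) * np.sqrt(dt)   # noise traders
    dBZ = rng.standard_normal(n_paths) * np.sqrt(dt)   # signal noise
    X += dB + (Z - X) / (V[i] - t[i]) * dt
    Z += sigma * dBZ

print("mean |X - Z| near t = 1:", np.abs(X - Z).mean())  # X_1 = Z_1
print("std of X near t = 1    :", X.std())               # ~1: consistent with X being a standard BM
```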

Extension to a general diffusion setting

Goal: given

Z_t = Z_0 + \int_0^t σ(s) a(Z_s) dB^Z_s,

with a(z) satisfying certain conditions, construct a process X, starting from zero and adapted to F^{Z,B}_t, such that:

C1. (X, Z) is a Markov process.

C2. X_1 = Z_1, Q^z-a.s., where Q^z is the law of (X, Z) with Z_0 = z and X_0 = 0.

C3. X with X_0 = 0 is a martingale in its own filtration and, in its own filtration, has the same law as the diffusion ξ of (1).

Model assumptions

Assumption 1. Fix a real number c ∈ [0, 1]. σ : [0, 1] → R_+ and a : R → R_+ are two measurable functions such that:

1. V(t) := c + \int_0^t σ^2(u) du > t for every t ∈ [0, 1), and V(1) = 1.

2. σ^2(·) is bounded on [0, 1].

3. a(·) is uniformly bounded away from zero.

General solution

Keeping in mind the solution to the general Markov problem with initial information asymmetry, we postulate that the solution to our problem is given by a pair (X, Z) such that the probability density of Z_0 is given by G(c, 0, z) and X solves

dX_t = a(X_t) dB_t + a^2(X_t) \frac{ρ_x(t, X_t, Z_t)}{ρ(t, X_t, Z_t)} dt   for t < 1,

for some ρ which is expected to be the conditional density of Z_t given F^X_t, where F^X is the usual augmentation of the natural filtration of X.

It follows from nonlinear filtering theory that the conditional density, g, of Z_t satisfies the SPDE

g_t(z) = G(c, 0, z) + \int_0^t \frac{1}{2} σ^2(s) (a^2(z) g_s(z))_{zz} ds
         + \int_0^t g_s(z) \left[ \frac{ρ_x(s, X_s, z)}{ρ(s, X_s, z)} − \int_R g_s(y) \frac{ρ_x(s, X_s, y)}{ρ(s, X_s, y)} dy \right] dN_s,   (2)

where dN_s = dX_s − a^2(X_s) \left( \int_R g_s(y) \frac{ρ_x(s, X_s, y)}{ρ(s, X_s, y)} dy \right) ds.

If X is a martingale in its own filtration, the above turns into

g_t(z) = G(c, 0, z) + \int_0^t \frac{1}{2} σ^2(s) (a^2(z) g_s(z))_{zz} ds + \int_0^t g_s(z) \frac{ρ_x(s, X_s, z)}{ρ(s, X_s, z)} dX_s.   (3)

So in order for ρ to give the conditional density it must be a solution of the PDE

ρ_t + \frac{1}{2} a^2(x) ρ_{xx} − \frac{1}{2} σ^2(t) (a^2(z) ρ)_{zz} = 0   (4)

with the initial condition ρ(0, 0, z) = G(c, 0, z). Conversely, if one can solve this PDE, the solution also solves the SPDE in (2). Moreover, X is then a martingale in its own filtration, since (taking g_s = ρ(s, X_s, ·)) its drift is

a^2(X_s) \int_R g_s(z) \frac{ρ_x(s, X_s, z)}{ρ(s, X_s, z)} dz = a^2(X_s) \int_R ρ_x(s, X_s, z) dz = 0,

provided the interchange of differentiation and integration can be justified.

It might seem hopeless to solve this PDE, and in general such PDEs have no solution. However, we shall now see that

ρ(t, x, z) = G(V(t) − t, x, z)

solves the PDE above with the given initial condition. Note that as t → 1 this ρ converges to the delta function at x.

It is interesting to observe that placing X as a backward variable and Z as a forward variable gives a martingale that converges to the terminal value of Z.

First recall that, being the transition density of ξ, G(t − s, x, z) is, as a function of the forward variables (t, z), the fundamental solution of

u_t = \frac{1}{2} (a^2(z) u)_{zz}.   (5)

It is well known that, as a function of the backward variables (s, x), G(t − s, x, z) is also the fundamental solution of the adjoint equation

v_s + \frac{1}{2} a^2(x) v_{xx} = 0.

These two facts imply that G(V(t) − t, x, z) is a solution of (4).
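Spelling out this last step (a short computation, not on the slides): write u := V(t) − t for the time argument of G. The two facts above say that ∂_u G = \frac{1}{2}(a^2(z) G)_{zz} and ∂_u G = \frac{1}{2} a^2(x) G_{xx}. Hence, for ρ(t, x, z) = G(V(t) − t, x, z),

ρ_t = (σ^2(t) − 1) ∂_u G,   \frac{1}{2} a^2(x) ρ_{xx} = ∂_u G,   \frac{1}{2} σ^2(t) (a^2(z) ρ)_{zz} = σ^2(t) ∂_u G,

so that

ρ_t + \frac{1}{2} a^2(x) ρ_{xx} − \frac{1}{2} σ^2(t) (a^2(z) ρ)_{zz} = (σ^2(t) − 1) ∂_u G + ∂_u G − σ^2(t) ∂_u G = 0,

which is exactly (4), using V'(t) = σ^2(t).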

Existence of G

Let

A(x) := \int_0^x \frac{1}{a(y)} dy,

and ζ_t = A(ξ_t), where ξ is as defined in (1). An application of Itô's formula yields

dζ_t = dβ_t + b(ζ_t) dt,

where b(y) := −\frac{1}{2} a_z(A^{−1}(y)).

Assumption 2. b and b_y are bounded, and b_y is absolutely continuous with a bounded derivative.

Proposition 1. Under the assumptions made so far, there exists a fundamental solution, G, to (5). Moreover,

G(t − s, y, x) = Γ(t − s, A(y), A(x)) \frac{1}{a(x)},

where Γ is the transition density of ζ.
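As an informal illustration of this change of variables (not on the slides, and ignoring the boundedness requirements of the assumptions), take a(x) = x on x > 0, as in the bridge simulation earlier, and use the base point 1 in the definition of A. Then

A(x) = \int_1^x \frac{dy}{y} = \ln x,   ζ_t = \ln ξ_t = ζ_0 + β_t − \frac{t}{2},   b ≡ −\frac{1}{2},

so Γ(t, u, v) is the N(u − t/2, t) density and

G(t, y, x) = Γ(t, \ln y, \ln x) \frac{1}{x} = \frac{1}{x \sqrt{2π t}} \exp\left( −\frac{(\ln(x/y) + t/2)^2}{2t} \right),

the lognormal transition density of geometric Brownian motion.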

Is ρ indeed the conditional density?

We have seen that ρ is a good candidate for the conditional density. Now we shall see that it is indeed the density we are looking for.

Let U_t = A(Z_t) and R_t = A(X_t), so that

dU_t = σ(t) dβ_t + σ^2(t) b(U_t) dt,
dR_t = dB_t + \left( \frac{p_x(t, R_t, U_t)}{p(t, R_t, U_t)} + b(R_t) \right) dt,   (6)

where p(t, x, z) := a(A^{−1}(z)) ρ(t, A^{−1}(x), A^{−1}(z)).

Then p(t, R_t, ·) is the F^R_t-conditional density of U_t if and only if ρ(t, X_t, ·) is the F^X_t-conditional density of Z_t.

Zakai's integral equation

One way to show that our candidate function gives the conditional density is to show uniqueness of the solution of the associated SPDE, which is rather difficult in general. So we will use a different approach.

First note that

P(U_t ∈ dz) = \left( \int_R Γ(c, A(0), y) Γ(V(t) − V(0), y, z) dy \right) dz = Γ(V(t), A(0), z) dz.

This implies that U_t has a density which is bounded uniformly in t ∈ [0, 1].

Next fix T < 1 and let P_T be the measure induced by (U_t, R_t)_{t∈[0,T]}. Let Q_T be the equivalent measure on the same space defined by the Radon–Nikodym derivative π_T^{−1}, where

dπ_t := π_t \left( \frac{p_x(t, R_t, U_t)}{p(t, R_t, U_t)} + b(R_t) \right) dR_t,   π_0 = 1.

Proposition 2. Let π̃_t(z) := E^{Q_T}[ π_t | F^R_t, U_t = z ]. Then, for t ≤ T, the conditional density of U_t given F^R_t with respect to P_T is given by

\frac{π̃_t(z) Γ(V(t), A(0), z)}{\int_R π̃_t(y) Γ(V(t), A(0), y) dy}.

As in Zakai (1969), one can obtain an integral equation for the unnormalized conditional density

Π_t(z) := π̃_t(z) Γ(V(t), A(0), z).

This integral equation will have a unique solution in a suitable space of functions to which Π_t(z) belongs. To this end let H denote the space of real-valued functions φ(t, z, ω) defined on R_+ × R × Ω such that

‖φ‖ := E^{Q_T} \left[ \int_0^T \int_R φ^2(t, z, ω) dz dt \right] < ∞.

Zakai equation for the unnormalized density

Under our boundedness assumptions on the coefficients, we easily have that Π_t(z) ∈ H. Then:

Theorem 2 (Corollary 1 and Theorem 2, Zakai (1969)). Π_t(z) solves the integral equation

Π_t(z) = Γ(V(t), A(0), z) + \int_0^t \int_R \left( \frac{p_x(s, R_s, y)}{p(s, R_s, y)} + b(R_s) \right) Π_s(y) Γ(V(t) − V(s), y, z) dy dR_s,

and it is the unique solution of this equation in H.

Solution to the Zakai equation

Using the PDE that Γ(s, x; V(t), z) solves for fixed t and z, and integration by parts, we have:

Corollary 3. Let η satisfy

dη_t = b(t, R_t) η_t dR_t,

with η_0 = 1. Then Π_t(z) = Γ(t, R_t; V(t), z) η_t. Consequently, since η_t does not depend on z, normalizing gives that the F^R_t-conditional density of U_t is Γ(t, R_t; V(t), z) = p(t, R_t, z), as desired.

Gaussian case

When a ≡ 1, it is well known that

G(t − s, y, x) = \frac{1}{\sqrt{2π(t − s)}} \exp\left( −\frac{(x − y)^2}{2(t − s)} \right).

In this case the equation for X reduces to

dX_t = dB_t + \frac{Z_t − X_t}{V(t) − t} dt.

This is the optimal demand obtained by Back and Pedersen.

Back and Pedersen (1998) and Wu (1999) only prove the convergence lim_{t→1} X_t = Z_1 in a weaker (not almost sure) sense.

We shall now give a proof that lim_{t→1} X_t = Z_1, Q^z-a.s., i.e. convergence with respect to the insider's probability measure given Z_0 = z.

Let

λ(t) = \exp\left( −\int_0^t \frac{ds}{V(s) − s} \right)   and   Λ(t) = \int_0^t \frac{1 + σ^2(s)}{λ^2(s)} ds.

For fixed α > 0 define

φ(t, x, z) = \frac{α}{\sqrt{2(Λ(t) + α)}} \exp\left( \frac{(x − z)^2}{2 λ^2(t) (Λ(t) + α)} \right).

Then (φ(t, X_t, Z_t))_{t∈[0,1)} is a positive Q^z-supermartingale and, under some mild conditions on σ, lim_{t→1} λ^2(t) (Λ(t) + α) = 0.

Convergence

Let M_t := φ(t, X_t, Z_t). By the supermartingale convergence theorem, there exists M_1 ≥ 0 such that lim_{t→1} M_t = M_1, Q^z-a.s. Using Fatou's lemma and the fact that M is a supermartingale, we have

M_0 ≥ \liminf_{t→1} E^z[M_t] ≥ E^z\left[ \lim_{t→1} φ(t, X_t, Z_t) \right],

where E^z is the expectation operator with respect to Q^z. This yields Q^z(lim_{t→1} X_t ≠ Z_1) = 0 thanks to the limiting values of λ and Λ, which force φ(t, X_t, Z_t) to explode on the event where X_t − Z_t does not tend to 0.

Convergence of X_t

Let U_t = A(Z_t) and R_t = A(X_t). By Itô's formula,

dU_t = σ(t) dβ_t + σ^2(t) b(U_t) dt,
dR_t = dB_t + \left( \frac{p_x(t, R_t, U_t)}{p(t, R_t, U_t)} + b(R_t) \right) dt,

with p(t, x, z) := a(A^{−1}(z)) ρ(t, A^{−1}(x), A^{−1}(z)).

Note that the convergence of X_t to Z_1 is equivalent to the convergence of R_t to U_1.

It is easy to show that p(t, x, z) = Γ(V(t) − t, x, z), where Γ is the transition density of ζ.

As the law of ζ is equivalent to the Wiener measure, we can write

Γ(t, x, z) = h(t, x, z) q(t, x, z),

where q is the transition density of a standard Brownian motion, and we can get bounds on h and its log-derivative using some old and some new techniques. In fact the following is true:

Proposition 3. Let x_n → x, z_n → z and t_n → 0. Then

\lim_{n→∞} t_n \frac{h_x}{h}(t_n, x_n, z_n) = 0.

Sketch of proof for the convergence of R_t

Consider a new measure P^z under which, with an abuse of notation,

dU_t = σ(t) dβ_t,
dR_t = dB_t + \frac{p_x(t, R_t, U_t)}{p(t, R_t, U_t)} dt
     = dB_t + \frac{U_t − R_t}{V(t) − t} dt + \frac{h_x(V(t) − t, R_t, U_t)}{h(V(t) − t, R_t, U_t)} dt.   (7)

Let

r_t = R_t − e^{−\int_0^t \frac{ds}{V(s) − s}} \int_0^t e^{\int_0^s \frac{du}{V(u) − u}} \frac{h_x(V(s) − s, R_s, U_s)}{h(V(s) − s, R_s, U_s)} ds.

This r_t satisfies

dr_t = dB_t + \frac{U_t − r_t}{V(t) − t} dt.

So we need

\lim_{t→1} e^{−\int_0^t \frac{ds}{V(s) − s}} \int_0^t e^{\int_0^s \frac{du}{V(u) − u}} \frac{h_x(V(s) − s, R_s, U_s)}{h(V(s) − s, R_s, U_s)} ds = 0.

There could be a problem if

\lim_{t→1} \int_0^t e^{\int_0^s \frac{du}{V(u) − u}} \frac{h_x(V(s) − s, R_s, U_s)}{h(V(s) − s, R_s, U_s)} ds = ∞.

However, in that case L'Hôpital's rule shows that the limit would equal

\lim_{t→1} (V(t) − t) \frac{h_x(V(t) − t, R_t, U_t)}{h(V(t) − t, R_t, U_t)} = 0

by Proposition 3.

A straightforward corollary

Let Z be the unique strong solution of

Z_t = Z_0 + \int_0^t σ(s) dβ_s + \int_0^t σ^2(s) b(Z_s) ds,

where b ∈ C^2_b with bounded derivatives and σ is as before. Suppose P(Z_0 ∈ dz) = Γ(c, 0, z) dz for some c ∈ (0, 1). Let ρ(t, x, z) := Γ(V(t) − t, x, z), where V is as defined earlier, and define X by X_0 = 0 and

dX_t = dB_t + \left( b(X_t) + \frac{ρ_x(t, X_t, Z_t)}{ρ(t, X_t, Z_t)} \right) dt,   for t ∈ (0, 1).

Then:

1. In the filtration generated by X,

X_t − \int_0^t b(X_s) ds

defines a standard Brownian motion;

2. X_1 = Z_1, Q^z-a.s., where Q^z is the law of (X, Z) with Z_0 = z.

An example

Suppose Z is an Ornstein-Uhlenbeck type process, i.e.

dZ_t = σ(t) dβ_t − b σ^2(t) Z_t dt,

where b > 0 is a constant. Note that in this case

Γ(t, x, z) = q\left( \frac{1 − e^{−2bt}}{2b}, x e^{−bt}, z \right).

Let X be defined by X_0 = 0 and

dX_t = dB_t + \left( \frac{2b \left( Z_t − X_t e^{−b(V(t)−t)} \right)}{e^{b(V(t)−t)} − e^{−b(V(t)−t)}} − b X_t \right) dt,   for t ∈ (0, 1).

Then it follows from the previous corollary that if Z_0 has probability density Γ(c, 0, ·), then X is an Ornstein-Uhlenbeck process in its own filtration and X_1 = Z_1, Q^z-a.s.
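A minimal Euler sketch of this example (not from the slides; b = 1, c = 0.5 and σ^2 ≡ 0.5 are arbitrary choices, and the drift is coded exactly as displayed above). It checks that X hits Z_1 at the end of trading and that the marginal variance of X at t = 1/2 matches that of an Ornstein-Uhlenbeck process started at 0.

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_paths = 4000, 5000
dt = 1.0 / n_steps
t = np.linspace(0.0, 1.0, n_steps + 1)

b0 = 1.0                                   # the constant b > 0
c = 0.5
sigma = np.sqrt(0.5)                       # V(t) = c + 0.5 t, so V(1) = 1 and V(t) > t on [0, 1)
V = c + 0.5 * t

# Z_0 ~ Gamma(c, 0, .), i.e. N(0, (1 - exp(-2 b c)) / (2 b))
Z = rng.standard_normal(n_paths) * np.sqrt((1.0 - np.exp(-2.0 * b0 * c)) / (2.0 * b0))
X = np.zeros(n_paths)
X_mid = None

for i in range(n_steps - 1):               # stop one step before t = 1, where the drift is singular
    u = V[i] - t[i]                        # "time to go" V(t) - t
    dB  = rng.standard_normal(n_paths) * np.sqrt(dt)
    dBZ = rng.standard_normal(n_paths) * np.sqrt(dt)
    drift = 2.0 * b0 * (Z - X * np.exp(-b0 * u)) / (np.exp(b0 * u) - np.exp(-b0 * u)) - b0 * X
    X += dB + drift * dt
    Z += sigma * dBZ - b0 * sigma**2 * Z * dt
    if i == n_steps // 2:
        X_mid = X.copy()

print("mean |X - Z| near t = 1:", np.abs(X - Z).mean())                    # X_1 = Z_1
print("Var(X) at t = 1/2      :", X_mid.var())
print("OU variance at t = 1/2 :", (1.0 - np.exp(-2.0 * b0 * 0.5)) / (2.0 * b0))
```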
