HAL Id: hal-01591946
https://hal.archives-ouvertes.fr/hal-01591946v2
Preprint submitted on 22 Sep 2017
HAL is a multi-disciplinary open access archive for the deposit and dissemination of sci- entific research documents, whether they are pub- lished or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
PDE for joint law of the pair of a continuous diffusion and its running maximum
Laure Coutin, Monique Pontier
To cite this version:
Laure Coutin, Monique Pontier. PDE for joint law of the pair of a continuous diffusion and its running maximum. 2017. ⟨hal-01591946v2⟩
PDE for joint law of the pair of a continuous diffusion and its running maximum
Laure Coutin∗, Monique Pontier† September 22, 2017
Abstract
Let X be a d-dimensional diffusion process and M the running supremum of its first component. In this paper, we first show that for any t > 0, in dimension d, the law of the pair (M_t, X_t) admits a density with respect to Lebesgue measure. In the one-dimensional case, we compute this density. This allows us to show that for any t > 0, the pair formed by the random variable X_t and the running supremum M_t of X at time t can be characterized as the solution of a weak measure-valued partial differential equation.
Keywords: Partial differential equation, running supremum process, joint law.
A.M.S. Classification: 60J60, 60H07, 60H10.
In this paper we are interested in the joint law of the pair (a continuous diffusion process, its running maximum). In the case of Brownian motion the result is well known, see for instance [9]. For general Gaussian processes, the law of the maximum is studied in [1].
Concerning the law of the maximum, most of the literature is devoted to the maximum of martingales, their terminal value, and their maximum at terminal time; see for instance Rogers et al. [15, 7, 2]. The aim of Cox and Obloj [5], given a price process S, is to exhibit a hedging strategy for the so-called no-touch option, meaning that the payoff is the indicator of the set {S̄_T < b, S_T > a} (S̄ denoting the running maximum of S). They are not concerned with the law of the pair (process, its running maximum). Many papers are mainly interested in the hedging of barrier options, for instance [2].
∗coutin@math.univ-toulouse.fr, IMT.
†pontier@math.univ-toulouse.fr, IMT: Institut Mathématique de Toulouse, Université Paul Sabatier, 31062 Toulouse, France.
The case of general Lévy processes is studied by Doney and Kyprianou [6]. In the particular case driven by a Brownian motion and a compound Poisson process, Roynette, Vallois and Volpi [16] provide the Laplace transform of the law of the (undershoot, overshoot, hitting time) triple. In [11, 4] a weak partial integro-differential equation is derived for the law density of the pair (process, its running maximum). Lagnoux, Mercier and Vallois [10] provide the law density of such a pair, but in the case of a reflected Brownian motion.
Concerning diffusion processes, for instance the Ornstein-Uhlenbeck process, the density of the law of the running maximum is given in [13]. We quote Yor et al. [9] for one-dimensional diffusion processes: a PDE is obtained for the law density of the process stopped before hitting a moving barrier. In [8] the joint distribution of a multi-dimensional diffusion (whose diffusion vector fields are commutative) is studied at the time when one component attains its maximum on a finite time interval; under regularity and ellipticity conditions the smoothness of this joint distribution is proved.
In [4] a Lévy process (X_t, t ≥ 0) starting from zero, right continuous with left limits, is considered: X is the sum of a drifted Brownian motion and a compound Poisson process, called a mixed diffusive-jump process; the density function of the pair formed by the random variable X_t and its running supremum M_t is then provided. Finally we quote [3], which proves that the hitting time law admits a density; we use some of its basic ideas here.
Here we look for more general (but continuous) cases where this density exists. We have results in the d-dimensional case, but without a closed-form expression. In the one-dimensional case we obtain both the existence and a closed-form expression for the joint law density.
The model is as follows: on a filtered probability space (Ω, (F_t = σ(W_u, u ≤ t))_{t≥0}, P), where W := (W_u, u ≥ 0) is a d-dimensional Brownian motion, let X be a diffusion process taking its values in R^d, solution to

dX_t = B(X_t)dt + ∑_{i=1}^d A_i(X_t)dW_t^i, X_0 = x ∈ R^d, t > 0,

where B : R^d → R^d and A : R^d → R^{d×d} satisfy

A and B ∈ C_b^1. (1)
Let M_t := sup_{s≤t} X_s^1. We first prove that the law of V_t = (M_t, X_t) is absolutely continuous with respect to the Lebesgue measure in a general setting, under some standard assumptions on the coefficients A and B. Then in Section 2 we turn to the one-dimensional case; there the density of the pair (process, running maximum) is provided in a weak form. Section 3 is devoted to proving a PDE satisfied by this density. Finally, an Appendix gives some tools and intermediate results.
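The pair V_t = (M_t, X_t) studied below is easy to approximate numerically. The following sketch (a hypothetical illustration, not part of the paper) uses an Euler-Maruyama scheme for a two-dimensional diffusion with bounded C^1 coefficients and tracks the running supremum of the first component along the discrete path.

```python
import numpy as np

def simulate_pair(x0, b, a, t=1.0, n=2000, rng=None):
    """Euler-Maruyama scheme for dX_t = B(X_t)dt + A(X_t)dW_t in R^d;
    returns (M_t, X_t) with M_t = sup_{s<=t} X_s^1 along the discrete path."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    dt = t / n
    m = x[0]                      # running supremum of the first component
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt), size=d)
        x = x + b(x) * dt + a(x) @ dw
        m = max(m, x[0])
    return m, x

# Hypothetical coefficients: bounded, C^1, satisfying the ellipticity bound (2).
B = lambda x: np.tanh(x)                              # B in C_b^1
A = lambda x: np.eye(2) + 0.1 * np.diag(np.sin(x))    # A A' >= c Id, c = 0.81
m_t, x_t = simulate_pair([0.0, 0.0], B, A)
print(m_t, x_t)   # M_t dominates both X_t^1 and X_0^1 by construction
```

The coefficient names B, A and the helper `simulate_pair` are illustrative choices, not taken from the paper.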
1 The law of Vt is absolutely continuous
Here it is proved that for any t > 0 the joint law of V_t := (M_t, X_t) admits a density with respect to the Lebesgue measure. For this purpose we use Malliavin calculus, specifically Nualart's results [12].
Proposition 1.1. Assume that B and A satisfy Assumption (1) and that there exists a constant c > 0 such that

c∥v∥² ≤ v′A(x)A(x)′v, ∀v, x ∈ R^d. (2)

Then the joint law of V_t := (M_t, X_t) admits a density with respect to the Lebesgue measure for all t > 0.
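For instance (an illustrative example, not taken from the paper), the uniform ellipticity condition (2) holds for any small perturbation of the identity:

```latex
% Take A(x) = I_d + \varepsilon S(x) with \sup_x \|S(x)\|_{\mathrm{op}} \le 1
% and 0 < \varepsilon < 1. Then for all v, x:
\|A(x)'v\|^{2} \;\ge\; \bigl(\|v\| - \varepsilon\|S(x)'v\|\bigr)^{2}
               \;\ge\; (1-\varepsilon)^{2}\,\|v\|^{2},
% so v' A(x)A(x)' v \ge c\|v\|^2 with c = (1-\varepsilon)^2.
```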
The next subsection recalls some useful definitions and results.
1.1 Short Malliavin calculus summary
The material of this subsection is taken from Section 1.2 of [12]. Let H = L²([0,T], R^d), endowed with the usual scalar product ⟨·,·⟩_H and the associated norm ∥·∥_H.
For all h, h̃ ∈ H,

W(h) := ∫_0^T h(t)dW_t

is a centered Gaussian variable with variance equal to ∥h∥²_H. If ⟨h, h̃⟩_H = 0, then the random variables W(h) and W(h̃) are independent.
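As a quick numerical aside (not from the paper), one can discretize the Wiener integral W(h) and check both the Gaussian variance ∥h∥²_H and the independence of W(h), W(h̃) for H-orthogonal integrands; the step functions below are an assumed toy choice.

```python
import numpy as np

T, n, n_paths = 1.0, 200, 50_000
dt = T / n
t = np.linspace(0.0, T, n, endpoint=False)
rng = np.random.default_rng(1)

# Two H-orthogonal step functions: supported on [0, T/2) and [T/2, T).
h = (t < T / 2).astype(float)
h_tilde = (t >= T / 2).astype(float)
assert np.dot(h, h_tilde) * dt == 0.0      # <h, h_tilde>_H = 0 exactly

# Discretized Wiener integrals: W(h) ~ sum_i h(t_i) (W_{t_{i+1}} - W_{t_i}).
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
Wh = dW @ h
Wht = dW @ h_tilde

# Empirically: W(h) is centered with variance ||h||_H^2 = T/2, and W(h),
# W(h_tilde) are uncorrelated (here independent: disjoint increments).
print(Wh.mean(), Wh.var(), np.corrcoef(Wh, Wht)[0, 1])
```

With these disjointly supported integrands the independence is exact at the discrete level, since the two sums use disjoint Brownian increments.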
Let S denote the class of smooth random variables F defined as follows:

F = f(W(h_1), ..., W(h_n)), (3)

where n ∈ N, h_1, ..., h_n ∈ H and f belongs to C_b^∞(R^n).
Definition 1.2. The derivative of a smooth variable F as in (3) is the H-valued random variable given by

DF = ∑_{i=1}^n ∂_i f(W(h_1), ..., W(h_n)) h_i.
Proposition 1.3. The operator D is closable from L^p(Ω) into L^p(Ω, H) for any p ≥ 1.
For any p ≥ 1, we denote by D^{1,p} the domain of the operator D in L^p(Ω), meaning that D^{1,p} is the closure of the class of smooth random variables S with respect to the norm

∥F∥_{1,p} = [E(|F|^p) + E(∥DF∥_H^p)]^{1/p}.
Malliavin calculus is a powerful tool to prove the absolute continuity of the laws of random variables. Namely, Theorem 2.1.2 page 97 of [12] states:
Theorem 1.4. Let F = (F_1, ..., F_m) be a random vector satisfying the following conditions:
(i) F_i belongs to D^{1,p} for some p > 1, for all i = 1, ..., m;
(ii) the Malliavin matrix γ_F = (⟨DF_i, DF_j⟩_H)_{1≤i,j≤m} is invertible.
Then the law of F is absolutely continuous with respect to the Lebesgue measure on R^m.
According to this theorem, the proof of Proposition 1.1 reduces to the following two facts, which we now prove:
• X_t^i, i = 1, ..., d, and M_t belong to D^{1,p} for p > 1 (Lemma 1.5);
• the (d+1)×(d+1) matrix γ_V(t) := (⟨DV_t^i, DV_t^j⟩_H)_{1≤i,j≤d+1} is almost surely invertible (Proposition 1.6).
1.2 Malliavin differentiability of the supremum
Lemma 1.5. Assume that B and A satisfy Assumption (1). Then X_t^i, i = 1, ..., d, and M_t belong to D^{1,p} for all p ≥ 1 and all t > 0.
Proof. Using Theorem 2.2.1 of [12], under Assumption (1):
• X_t^i, i = 1, ..., d, belong to D^{1,∞} for all t > 0;
• for all t ≤ T, p > 0, i = 1, ..., d, there exists a constant C_T^p such that

sup_{0≤r≤t} E( sup_{r≤s≤T} |D_r X_s^i|^p ) = C_t ≤ C_T^p < ∞, (4)
• the Malliavin derivative D_r X_t satisfies D_r X_t = 0 for r > t almost surely, and for r ≤ t, almost surely, using Einstein's convention,

D_r X_t^i = A^i(X_r) + ∫_r^t A^i_{k,α}(s) D_r(X_s^k) dW_s^α + ∫_r^t B^i_k(s) D_r(X_s^k) ds, (5)

where A_{k,α}(s) := ∂_k A_α(X_s) and B_k(s) := ∂_k B(X_s) are in R^d.
In order to prove that M_t belongs to D^{1,p}, we follow the same lines as the proof of Nualart's Proposition 2.1.10, with index p instead of 2. Then, for any i = 1, ..., d, we establish that the H-valued process (D_.X_t^i, t ∈ [0,T]) has a continuous modification and satisfies E(∥D_.X_t^i∥_H^p) < ∞.
We now use Appendix (A.11) in Nualart [12], a corollary of Kolmogorov's continuity criterion. Namely, if there exist positive real numbers α, β, K such that

E[∥D_.X_{t+τ}^i − D_.X_t^i∥_H^α] ≤ Kτ^{1+β}, ∀t ≥ 0, τ ≥ 0,

then DX^i admits a continuous modification, and moreover E(sup_{s∈[0,T]} ∥D_.X_s^i∥_H^α) < ∞. Let τ > 0; Equation (5) yields
Δ_τ D_r(X_t^i) := D_r(X_{t+τ}^i) − D_r(X_t^i) = ∫_{max(r,t)}^{max(t+τ,r)} B^i_k(s) D_r(X_s^k) ds + ∫_{max(r,t)}^{max(t+τ,r)} A^i_{k,α}(s) D_r(X_s^k) dW_s^α.

Using the definition of H,
∥Δ_τ D_.(X_t^i)∥_H² = ∫_0^T | ∫_{max(r,t)}^{max(t+τ,r)} B^i_k(s) D_r(X_s^k) ds + ∫_{max(r,t)}^{max(t+τ,r)} A^i_{k,α}(s) D_r(X_s^k) dW_s^α |² dr.
According to Jensen's inequality, for p ≥ 2,

∥Δ_τ D_.(X_t^i)∥_H^p ≤ T^{p/2−1} ∫_0^T | ∫_{max(r,t)}^{max(t+τ,r)} B^i_k(s) D_r(X_s^k) ds + ∫_{max(r,t)}^{max(t+τ,r)} A^i_{k,α}(s) D_r(X_s^k) dW_s^α |^p dr.
Using (a+b)^p ≤ 2^{p−1}(a^p + b^p),

∥Δ_τ D_.(X_t)∥_H^p ≤ 2^{p−1} T^{p/2−1} ∫_0^T [ | ∫_t^{t+τ} B^i_k(s) D_r(X_s^k) ds |^p + | ∫_t^{t+τ} A^i_{k,α}(s) D_r(X_s^k) dW_s^α |^p ] dr.
The expectation of the first term is bounded using Jensen's inequality and (4), for any r ∈ [0,T]:

E[ | ∫_t^{t+τ} B^i_k(s) D_r(X_s^k) ds |^p ] ≤ ∥B∥_∞^p τ^{p−1} sup_r E[ sup_{r≤s≤T} |D_r(X_s^k)|^p ] τ = ∥B∥_∞^p τ^p C_T^p.

Using once again (4), the Burkholder-Davis-Gundy and Jensen inequalities, the expectation of the second term satisfies, for any r ∈ [0,T]:
E[ | ∫_t^{t+τ} A^i_{k,α}(s) D_r(X_s^k) dW_s^α |^p ] ≤ C_p E[ ( ∫_t^{t+τ} |A^i_{k,α}(s) D_r(X_s^k)|² ds )^{p/2} ] ≤ C_p ∥A∥_∞^p τ^{p/2−1} ∫_t^{t+τ} E( sup_{r≤s≤T} |D_r(X_s^i)|^p ) ds ≤ C_p ∥A∥_∞^p τ^{p/2−1} C_T^p τ,
thus for any τ ∈ [0,1] there exists a constant D = 2^{p−1} T^{p/2} C_T^p ( ∥B∥_∞^p τ^{p/2} + C_p ∥A∥_∞^p ) such that for any i = 1, ..., d,

E[ ∥D_.(X_{t+τ}^i) − D_.(X_t^i)∥_H^p ] ≤ D τ^{p/2}.
Kolmogorov's lemma applied to the process {D_.(X_t), t ∈ [0,T]}, taking its values in the Hilbert space H, proves the existence of a continuous version; that is, there exist positive real numbers α, β, K such that

E[ ∥D_.(X_{t+τ}^i) − D_.(X_t^i)∥_H^α ] ≤ K τ^{1+β}.

With α = p > 2, β = p/2 − 1 and K = D, we get the existence of a continuous version of the process t ↦ D_.(X_t) from [0,T] to the Hilbert space H. Finally, we conclude as in the proof of Nualart's Proposition 2.1.10, with index p instead of 2. •
1.3 Invertibility of the Malliavin matrix
Proposition 1.6. Assume that B and A are in C_b^1 and that there exists a constant c > 0 such that

c∥v∥² ≤ v′A(x)A(x)′v, ∀v ∈ R^d, ∀x ∈ R^d.

Then for all t > 0 the matrix γ_V(t) := (⟨DV_t^i, DV_t^j⟩_H)_{1≤i,j≤d+1} is almost surely invertible.
Proof. The key is to introduce a new matrix which will be invertible: for all (s,t), 0 < s < t,

γ_G(s,t) := (⟨DG^i(s,t), DG^j(s,t)⟩_H)_{1≤i,j≤2d}, (6)

where G^i(s,t) := X_t^i, i = 1, ..., d, and G^{i+d}(s,t) := X_s^i, i = 1, ..., d.
On the other hand we will prove that, t > 0 being fixed, P(X_t^1 = M_t) = 0.
Step 1: We introduce
• N_{1,t} := {ω, ∃s ∈ [0,t], DX_s^1 ≠ DM_t and X_s^1 = M_t},
• N_{2,t} := {ω, ∃s ∈ [0,t[, det(γ_G(s,t)) = 0},
• N_{3,t} := {ω, X_t^1 = M_t},
• N_t := {ω, det(γ_V(t)) = 0}.
Then

N_t ⊂ ( N_t ∩ ∩_{i=1}^3 N_{i,t}^c ) ∪ ∪_{i=1}^3 N_{i,t}.
Proof. Note that P(N_t ∩ ∩_{i=1}^3 N_{i,t}^c) = 0. Indeed, if ω ∈ N_t ∩ ∩_{i=1}^3 N_{i,t}^c, since X^1 admits a continuous modification there exists s_0 such that X_{s_0}^1 = M_t. The fact that ω ∈ N_{3,t}^c implies that s_0 < t, and γ_V(t) = (γ_G^{i,j}(s_0,t))_{(i,j)∈{1,...,d+1}²} is a submatrix of γ_G(s_0,t). The fact that γ_V(t) is not invertible then contradicts the invertibility of γ_G(s_0,t). So it remains to prove that P(N_{i,t}) = 0 for i = 1, 2, 3.
Step 2: Following the same lines as the proof of Proposition 2.1.11 of [12], we prove that almost surely

{s : X_s^1 = M_t} ⊂ {s : DM_t = DX_s^1},

meaning P(N_{1,t}) = 0. We skip the details for simplicity.
Step 3: For all t > 0, almost surely for all s < t, the 2d×2d matrix γ_G(s,t) is invertible, meaning that for all t the event N_{2,t} is negligible.
Proof. The matrix γ_G(s,t) is symmetric, and using (2.59) and (2.60) in [12] yields:

γ_G(s,t) = ( Y(t)C(t)Y(t)′  Y(s)C(s)Y(t)′
             Y(t)C(s)Y(s)′  Y(s)C(s)Y(s)′ ), (7)

where, using Einstein's convention to avoid ∑_k, ∑_{k′}, ∑_l, ...,
C_{i,j}(t) := ∫_0^t (Y^{-1}(u))^i_k A^k_l(X_u) (Y^{-1}(u))^j_{k′} A^{k′}_l(X_u) du,

Y^i_j(t) := δ_{i,j} + ∫_0^t A^i_{k,l}(u) Y^k_j(u) dW_u^l + ∫_0^t B^i_k(u) Y^k_j(u) du, i, j ∈ {1, ..., d}.
Let us denote

C_{i,j}(s,t) := C_{i,j}(t) − C_{i,j}(s).

According to (2.58) in [12], there exists a process Z such that almost surely Z(h)Y(h) = Id for all h ∈ [0,T]; thus for all t the matrices Y(t) are invertible.
Actually, for all i and j,

Y^i_j(t) = Y^i_j(s) + ∫_s^t A^i_{k,l}(u) Y^k_j(u) dW_u^l + ∫_s^t B^i_k(u) Y^k_j(u) du,

and multiplying this equality by Y(s)^{-1} one deduces:

Y(t)Y(s)^{-1} = Id + ∫_s^t A_{·,l}(u) Y(u)Y(s)^{-1} dW_u^l + ∫_s^t B(u) Y(u)Y(s)^{-1} du,

so the (d,d) matrix Y(s,t) := Y(t)Y(s)^{-1} is invertible.
Then γ_G(s,t) in (7) can be rewritten as a matrix composed of four (d,d) blocks:

γ_G(s,t) = ( Y(s,t)Y(s)[C(s) + C(s,t)]Y(s)′Y(s,t)′  Y(s)C(s)Y(s)′Y(s,t)′
             Y(s,t)Y(s)C(s)Y(s)′                    Y(s)C(s)Y(s)′ ).
Multiplying the second line of blocks by Y(s,t)′ and subtracting it from the first line yields:

det[γ_G(s,t)] = det( Y(s,t)Y(s)C(s,t)Y(s)′Y(s,t)′  0
                     Y(s,t)Y(s)C(s)Y(s)′           Y(s)C(s)Y(s)′ ).

The properties of determinants of block-triangular matrices then give

det[γ_G(s,t)] = det[Y(s,t)Y(s)C(s,t)Y(s)′Y(s,t)′] det[Y(s)C(s)Y(s)′].
The processes Z and Y are diffusion processes, so each of them admits a continuous modification satisfying Z(h)Y(h) = Id for all h ∈ [0,T]. Thus almost surely the continuous process Z is invertible, and hence satisfies, almost surely, for all 0 ≤ s ≤ t ≤ T,

∫_s^t det(Z(h))² dh > 0.
Let σ(x) := ∑_{l=1}^d A_l(x)A_l(x)′. Formula (2.61) page 127 of [12] shows

C(s) = ∫_0^s Y^{-1}(h) σ(X_h) (Y(h)^{-1})′ dh, C(s,t) = ∫_s^t Y^{-1}(h) σ(X_h) (Y(h)^{-1})′ dh.

We now follow the proof of Theorem 2.3.1 page 127 of [12]: for v ∈ R^d, using the uniform ellipticity Assumption (2),

v′σ(X_s)v ≥ c|v|², ∀s.
With v = (Y(h)^{-1})′u we get

u′Y(h)^{-1} σ(X_h) (Y(h)^{-1})′ u ≥ c u′Y(h)^{-1}(Y(h)^{-1})′u,

and

u′C(s)u = ∫_0^s u′Y(h)^{-1} σ(X(h)) (Y(h)^{-1})′ u dh ≥ c ∫_0^s u′Y(h)^{-1}(Y(h)^{-1})′u dh = c|u|² ∫_0^s det(Z(h))² dh.
Similarly,

u′C(s,t)u = ∫_s^t u′Y(h)^{-1} σ(X(h)) (Y(h)^{-1})′ u dh ≥ c|u|² ∫_s^t det(Z(h))² dh.
Thus almost surely, for all s ∈ ]0,t[, C(s) and C(s,t) are invertible; as a consequence, the matrix γ_G(s,t) is invertible. The process t ↦ D_.(X_t), taking its values in H, admits a continuous modification, and the set of invertible matrices is open; hence

P({ω, ∃s ∈ [0,t[, det(γ_G(s,t)) = 0}) = P(N_{2,t}) = 0.
Step 4: Under Assumptions (1) and (2), time t being fixed, almost surely M_t > X_t^1, meaning that the event N_{3,t} is negligible.
Proof. For the sake of completeness we prove this result, which is more or less contained in Proposition 18 of [8], where, however, stronger assumptions are used.
The set {M_t = X_t^1} decomposes as follows:

{ω, M_t(ω) = X_t^1(ω)} = {ω, ∃s < t | ∀u ∈ [s,t], X_u^1(ω) = X_t^1(ω)} ∪ {ω | ∀u < t, X_u^1(ω) < X_t^1(ω)}. (8)

Using (1) and (2), A^{-1}B is bounded, thus an equivalent change of probability measure can be performed using Girsanov's theorem: the probability measure P_0 is defined by

dP_0/dP|_{F_t} = L_t, L_t := exp( − ∫_0^t (BA^{-1})^i(X_s) dW_s^i − (1/2) ∫_0^t ∥(BA^{-1})(X_s)∥² ds ).
Then X^1 is an (F, P_0)-martingale:

X_t^1 = X_0^1 + ∫_0^t ∑_j A_{1,j}(X_s) dW̃_s^j, (9)

where W̃ is an (F, P_0) d-dimensional Brownian motion. The bracket of X^1, which is independent of the probability measure in the continuous case, is

⟨X^1, X^1⟩_t = ∫_0^t ∑_j (A_{1,j}(X_s))² ds.
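As a numerical aside (a sketch with assumed toy coefficients, not part of the proof), the bracket identity above can be checked pathwise in dimension one: the sum of squared increments of a simulated path approximates the integral of the squared diffusion coefficient, with the drift contributing only at higher order.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 100_000
dt = T / n

# Hypothetical one-dimensional coefficients (illustration only).
b = lambda x: np.tanh(x)
a = lambda x: 2.0 + np.sin(x)

x = 0.0
qv = 0.0   # realized quadratic variation: sum of squared increments of X^1
av = 0.0   # the bracket integral  int_0^t a(X_s)^2 ds  from the display above
for _ in range(n):
    dx = b(x) * dt + a(x) * rng.normal(0.0, np.sqrt(dt))
    av += a(x) ** 2 * dt
    qv += dx ** 2
    x += dx

print(qv, av)   # the two estimates agree up to the discretization error
```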