HAL Id: hal-02010344
https://hal.archives-ouvertes.fr/hal-02010344
Preprint submitted on 7 Feb 2019
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Existence and regularity of law density of a pair (diffusion, first component running maximum)
Laure Coutin, Monique Pontier
To cite this version:
Laure Coutin, Monique Pontier. Existence and regularity of law density of a pair (diffusion, first component running maximum). 2019. hal-02010344
ESAIM: Probability and Statistics — Manuscript Draft
Full Title: Existence and regularity of law density of a pair (diffusion, first component running maximum)
Short Title: Existence and regularity of law density of (diffusion, running maximum)
Article Type: Research article
Corresponding Author: Monique Pontier, PhD, Institut de Mathématiques de Toulouse, Toulouse, France (pontier@math.univ-toulouse.fr)
Order of Authors: Monique Pontier, PhD; Laure Coutin, PhD
Manuscript Region of Origin: FRANCE
Abstract: Let $X$ be a continuous $d$-dimensional diffusion process and $M$ the running supremum of its first component. We show that, for any $t>0$, the law of the $(d+1)$-dimensional random vector $(M_t, X_t)$ admits a density with respect to the Lebesgue measure, using Malliavin calculus. In the case $d=1$ we prove the regularity of this density.
Keywords: Running supremum process; joint law density; Malliavin calculus; regularity of the density.
Existence and regularity of law density of a pair (diffusion, first component running maximum)
Laure Coutin∗, Monique Pontier†
July 2, 2018
Abstract
Let $X$ be a continuous $d$-dimensional diffusion process and $M$ the running supremum of its first component. We show that, for all $t > 0$, the law of the $(d+1)$-dimensional random vector $(M_t, X_t)$ admits a density with respect to the Lebesgue measure, using Malliavin calculus. In the case $d = 1$ we prove the regularity of this density.
Keywords: Running supremum process, joint law density, Malliavin calculus, regularity of the density.
A.M.S. Classification: 60J60, 60H07, 60H10.
In this paper we are interested in the joint law of the random vector $(M, X)$, where $X$ is a $d$-dimensional diffusion process and $M$ is the running supremum of its first component. When the process $X$ is a Brownian motion the result is well known, see for instance [13]. For general Gaussian processes, the law of the maximum is studied in [2]. The cases where the process $X$ is a martingale or a Lévy process are deeply studied in the literature. Some specific diffusion processes, such as the Ornstein-Uhlenbeck process, were also investigated.
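For orientation, the Brownian case recalled above is explicit. The following formula is the standard consequence of the reflection principle (a classical fact, stated here for comparison, not taken from this paper): for a one-dimensional standard Brownian motion $W$ with running maximum $W^*_t = \sup_{s \le t} W_s$,

```latex
% Joint density of (W_t^*, W_t), from the reflection principle:
P(W_t^* \in db,\ W_t \in da)
  = \frac{2(2b-a)}{\sqrt{2\pi t^3}}
    \exp\!\Big(-\frac{(2b-a)^2}{2t}\Big)\, db\, da,
\qquad b \ge \max(a, 0).
```

The present paper extends the existence (and, for $d=1$, the regularity) of such a joint density to general elliptic diffusions, where no closed form is available.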
A lot of studies are devoted to the maximum of martingales, their terminal value, or their maximum at terminal time; see for instance Rogers et al. [18, 9, 3]. The aim of Cox and Obloj [7] is to exhibit a hedging strategy for the so-called no-touch option, whose payoff is the indicator of the set $\{\bar S_T < b,\ S_T > a\}$, the price process $S$ being given ($\bar S$ denoting its running maximum). They are not concerned with the law of the pair (process, its
∗ coutin@math.univ-toulouse.fr, IMT, UMR 5219.
† pontier@math.univ-toulouse.fr, IMT: Institut de Mathématiques de Toulouse, Université Paul Sabatier, 31062 Toulouse, France.
running maximum). A lot of papers are mainly interested in the hedging of barrier options, for instance [3].
The case of general Lévy processes is studied by Doney and Kyprianou [8]. In particular cases, driven by a Brownian motion and a compound Poisson process, Roynette et al. [19] provide the Laplace transform of the undershoot-overshoot-hitting time law. In [15, 6] the law density of the pair (process, its running maximum) is proved to be a weak solution of a partial differential equation. We quote [4], which is the starting point of our study. Lagnoux et al. [14] provide the law density of such a pair, but in the case of a reflected Brownian motion.
Concerning diffusion processes, for instance the Ornstein-Uhlenbeck process, the density of the running maximum law is given in [1]. Some analogous results are provided in [9] concerning random walks.
We quote Jeanblanc et al. [13] for the one-dimensional diffusion process: a PDE is obtained for the law density of the process stopped before hitting a moving barrier.
In [10], the joint distribution of a multi-dimensional diffusion (whose corresponding diffusion vector fields are commutative) is studied at the time when a component attains its maximum on a finite time interval; under regularity and ellipticity conditions the smoothness of this joint distribution is proved.
Here we look for more general (but continuous) cases where this density exists. In the $d$-dimensional case, the law of the $(d+1)$-dimensional random vector $(M_t, X_t)$ is absolutely continuous, where $M_t := \sup_{s \le t} X^1_s$. In the one-dimensional case we get the regularity of this density. Such results are used in our work in progress [5], where this density is proved to be a weak solution of a partial differential equation.
The model is as follows: let $(\Omega, (\mathcal{F}_t = \sigma(W_u, u \le t))_{t \ge 0}, P)$ be a filtered probability space, where $W := (W_u, u \ge 0)$ is a $d$-dimensional Brownian motion. Let $X$ be a $d$-dimensional diffusion process, solution to
$$dX_t = B(X_t)\,dt + \sum_{i=1}^d A^i(X_t)\,dW^i_t, \quad X_0 = x \in \mathbb{R}^d, \ t > 0,$$
where $B : \mathbb{R}^d \to \mathbb{R}^d$ and $A : \mathbb{R}^d \to \mathbb{R}^{d \times d}$ are bounded with bounded and continuous differentials.
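The setting above is easy to explore numerically. The sketch below is our illustration (not from the paper): it simulates a two-dimensional diffusion by the Euler-Maruyama scheme, with toy bounded coefficients chosen only for the example, and tracks the running maximum $M$ of the first component.

```python
import numpy as np

def simulate(T=1.0, n_steps=500, n_paths=2000, d=2, seed=0):
    """Euler-Maruyama scheme for dX = B(X)dt + A(X)dW, together with
    the running maximum M of the first component X^1."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.zeros((n_paths, d))            # start at x = 0
    M = X[:, 0].copy()                    # M_0 = X^1_0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, d))
        drift = np.sin(X)                 # toy bounded drift B
        # toy diffusion A(x) = I + 0.1 cos(x^1) J, J the column-reversal
        # matrix: A is bounded and A A^T has eigenvalues
        # (1 ± 0.1 cos(x^1))^2 >= 0.81 > 0, so uniform ellipticity holds
        X = X + drift * dt + dW + 0.1 * np.cos(X[:, :1]) * dW[:, ::-1]
        M = np.maximum(M, X[:, 0])
    return M, X

M, X = simulate()
```

With these samples, a two-dimensional histogram of `(M, X[:, 0])` gives a crude picture of the joint density whose existence Theorem 1.1 below establishes; the mass concentrates on $\{(m, x^1) : m \ge \max(0, x^1)\}$.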
In Section 1 we recall some elements of Malliavin calculus that we use to prove that the law of $V_t := (M_t, X_t)$ is absolutely continuous with respect to the Lebesgue measure in a general case, under some standard assumptions on the coefficients $A$ and $B$. In Section 2 we turn to the one-dimensional case and prove the regularity of the density of the law of $V_t$ under some more technical assumptions. Section 3 provides an example where the announced partial differential equation is exactly stated.
1 The law of $V_t$ is absolutely continuous
Here it is proved that for any $t > 0$, the joint law of $V_t := (M_t, X_t)$ admits a density with respect to the Lebesgue measure. For this purpose, we use Malliavin calculus, specifically Nualart's results [16]. Let us denote by $C^i_b(\mathbb{R}^d)$ the set of functions that are $i$ times differentiable, bounded, with bounded derivatives.
Theorem 1.1 We assume that $A$ and $B$ satisfy
$$A, B \in C^1_b(\mathbb{R}^d) \quad (1)$$
and that there exists a constant $c > 0$ such that
$$c\|v\|^2 \le v^T A(x) A^T(x) v, \quad \forall v, x \in \mathbb{R}^d. \quad (2)$$
Then the joint law of $V_t = (M_t, X_t)$ admits a density with respect to the Lebesgue measure for all $t > 0$.
The next subsection recalls some useful definitions and results.
1.1 Short Malliavin calculus summary
The material of this subsection can be found in Section 1.2 of [16]. Let $H = L^2([0,T], \mathbb{R}^d)$, endowed with the usual scalar product $\langle \cdot, \cdot \rangle_H$ and the associated norm $\|\cdot\|_H$. For all $(h, \tilde h) \in H^2$, $W(h) := \int_0^T h(t)\,dW_t$ is a centered Gaussian variable with variance equal to $\|h\|^2_H$. If $\langle h, \tilde h \rangle_H = 0$ then the random variables $W(h)$ and $W(\tilde h)$ are independent.
Let $\mathcal{S}$ denote the class of smooth random variables $F$ defined as follows:
$$F = f(W(h_1), \ldots, W(h_n)) \quad (3)$$
where $n \in \mathbb{N}$, $h_1, \ldots, h_n \in H$ and $f$ belongs to $C^\infty_b(\mathbb{R}^n)$.
Definition 1.2 The Malliavin derivative of a smooth variable $F$ defined in (3) is the $H$-valued random variable given by
$$DF = \sum_{i=1}^n \partial_i f(W(h_1), \ldots, W(h_n))\, h_i.$$
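On a discretized path, this definition can be checked numerically: shifting the Brownian path in a Cameron-Martin direction $h$ changes a smooth functional $F$ at first order by $\langle DF, h \rangle_H$. The sketch below is our illustration (the functional $f = \sin$ and the directions $h_1$, $h$ are arbitrary choices, not from the paper), for $F = f(W(h_1))$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000                                  # time grid on [0, 1)
t = np.linspace(0.0, 1.0, n, endpoint=False)
dt = 1.0 / n
dW = rng.normal(0.0, np.sqrt(dt), n)      # Brownian increments

h1 = t.copy()                             # h_1(t) = t, an element of H (toy choice)
h = np.ones(n)                            # perturbation direction h (toy choice)
f, fprime = np.sin, np.cos

Wh1 = np.sum(h1 * dW)                     # Wiener integral W(h_1)

# <DF, h>_H with DF = f'(W(h_1)) h_1, from the definition above
exact = fprime(Wh1) * np.sum(h1 * h) * dt

# finite difference along the Cameron-Martin shift dW -> dW + eps * h * dt
eps = 1e-6
fd = (f(np.sum(h1 * (dW + eps * h * dt))) - f(Wh1)) / eps
```

The two quantities `exact` and `fd` agree up to the finite-difference error, which is the discrete counterpart of the directional-derivative characterization of $D$.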
Proposition 1.3 The operator $D$ is closable from $L^p(\Omega)$ into $L^p(\Omega, H)$ for any $p \ge 1$.
For any $p \ge 1$, let us denote by $\mathbb{D}^{1,p}$ the domain of the operator $D$ in $L^p(\Omega)$, meaning that $\mathbb{D}^{1,p}$ is the closure of the class of smooth random variables $\mathcal{S}$ with respect to the norm
$$\|F\|_{1,p} = \big( E[|F|^p] + E[\|DF\|^p_H] \big)^{1/p}.$$
Malliavin calculus is a powerful tool to prove the absolute continuity of the law of random variables. Namely, Theorem 2.1.2 page 97 of [16] states:
Theorem 1.4 Let $F = (F_1, \ldots, F_m)$ be a random vector satisfying the following conditions:
(i) $F_i$ belongs to $\mathbb{D}^{1,p}$ for some $p > 1$, for all $i = 1, \ldots, m$;
(ii) the Malliavin matrix $\gamma_F := (\langle DF_i, DF_j \rangle_H)_{1 \le i,j \le m}$ is invertible.
Then the law of $F$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^m$.
Using this theorem, the proof of Theorem 1.1 will be a consequence of the following:
• $X^i_t$, $i = 1, \ldots, d$, and $M_t$ belong to $\mathbb{D}^{1,p}$, $p > 1$, see Lemma 1.5;
• the $(d+1) \times (d+1)$ matrix $\gamma_V(t) := (\langle DV^i_t, DV^j_t \rangle_H)_{1 \le i,j \le d+1}$ is almost surely invertible, see Proposition 1.6.
1.2 Malliavin differentiability of the supremum
Lemma 1.5 Under Assumption (1), $X^i_t$, $i = 1, \ldots, d$, and $M_t$ belong to $\mathbb{D}^{1,p}$ for all $p \ge 1$ and all $t > 0$.
Proof: Using Theorem 2.2.1 of [16], under Assumption (1),
• $X^i_t$, $i = 1, \ldots, d$, belong to $\mathbb{D}^{1,\infty}$ for all $t > 0$;
• for all $t \le T$, $p > 0$, $i = 1, \ldots, d$, there exists a constant $C^p_T$ such that
$$\sup_{0 \le r \le t} E\Big[ \sup_{r \le s \le T} |D_r X^i_s|^p \Big] = C_t \le C^p_T < \infty; \quad (4)$$
• the Malliavin derivative $D_r X_t$ satisfies $D_r X_t = 0$ for $r > t$ almost surely, and for $r \le t$, almost surely, using Einstein's convention ((2.56) page 125 of [16]):
$$D_r X^i_t = A^i(X_r) + \int_r^t A^i_{k,\alpha}(s) D_r(X^k_s)\,dW^\alpha_s + \int_r^t B^i_k(s) D_r(X^k_s)\,ds \quad (5)$$
where $A_{k,\alpha}(s) := \partial_k A_\alpha(X_s)$ and $B_k(s) := \partial_k B(X_s)$ are in $\mathbb{R}^d$.
(i) In order to prove that $M_t$ belongs to $\mathbb{D}^{1,p}$, we follow the same lines as in the proof of Proposition 2.1.10 of [16], with index $p$ instead of $2$. For any $i = 1, \ldots, d$, we establish that the $H$-valued process $(D_\cdot X^i_t, t \in [0,T])$ has a continuous modification and satisfies $E(\|D_\cdot X^i_t\|^p_H) < \infty$.
We now use Appendix (A.11) in Nualart [16], as a corollary of Kolmogorov's continuity criterion: if there exist positive real numbers $\alpha, \beta, K$ such that
$$E\big[ \|D_\cdot X^i_{t+\tau} - D_\cdot X^i_t\|^\alpha_H \big] \le K \tau^{1+\beta}, \quad \forall t \ge 0, \ \tau \ge 0, \quad (6)$$
then $DX^i$ admits a continuous modification.
(ii) As a second step we prove (6). Let $\tau > 0$; Equation (5) yields
$$\Delta_\tau D_r(X^i_t) := D_r(X^i_{t+\tau}) - D_r(X^i_t) = \int_{\max(r,t)}^{\max(t+\tau,r)} B^i_k(s) D_r(X^k_s)\,ds + \int_{\max(r,t)}^{\max(t+\tau,r)} A^i_{k,\alpha}(s) D_r(X^k_s)\,dW^\alpha_s.$$
Using the definition of $H$,
$$\|\Delta_\tau D_\cdot(X^i_t)\|^2_H = \int_0^T \Big| \int_{\max(r,t)}^{\max(t+\tau,r)} B^i_k(s) D_r(X^k_s)\,ds + \int_{\max(r,t)}^{\max(t+\tau,r)} A^i_{k,\alpha}(s) D_r(X^k_s)\,dW^\alpha_s \Big|^2\,dr.$$
According to Jensen's inequality, for $p \ge 2$,
$$\|\Delta_\tau D_\cdot(X^i_t)\|^p_H \le T^{p/2-1} \int_0^T \Big| \int_{\max(r,t)}^{\max(t+\tau,r)} B^i_k(s) D_r(X^k_s)\,ds + \int_{\max(r,t)}^{\max(t+\tau,r)} A^i_{k,\alpha}(s) D_r(X^k_s)\,dW^\alpha_s \Big|^p\,dr.$$
Using $(a+b)^p \le 2^{p-1}(a^p + b^p)$,
$$\|\Delta_\tau D_\cdot(X_t)\|^p_H \le 2^{p-1} T^{p/2-1} \int_0^T \Big| \int_t^{t+\tau} B^i_k(s) D_r(X^k_s)\,ds \Big|^p + \Big| \int_t^{t+\tau} A^i_{k,\alpha}(s) D_r(X^k_s)\,dW^\alpha_s \Big|^p\,dr.$$
The expectation of the first term is bounded using Jensen's inequality and (4), for any $r \in [0,T]$:
$$E\Big[ \Big| \int_t^{t+\tau} B^i_k(s) D_r(X^k_s)\,ds \Big|^p \Big] \le \|B\|^p_\infty \tau^{p-1} \sup_r E\Big[ \sup_{r \le s \le T} |D_r(X^k_s)|^p \Big] \tau = \|B\|^p_\infty \tau^p C^p_T.$$
Using once again (4), the Burkholder-Davis-Gundy and Jensen inequalities, the expectation of the second term satisfies, for any $r \in [0,T]$:
$$E\Big[ \Big| \int_t^{t+\tau} A^i_{k,\alpha}(s) D_r(X^k_s)\,dW^\alpha_s \Big|^p \Big] \le C_p E\Big[ \Big( \int_t^{t+\tau} |A^i_{k,\alpha}(s) D_r(X^k_s)|^2\,ds \Big)^{p/2} \Big] \le C_p \|A\|^p_\infty \tau^{p/2-1} \int_t^{t+\tau} E\Big( \sup_{r \le s \le T} |D_r(X^i_s)|^p \Big)\,ds \le C_p \|A\|^p_\infty \tau^{p/2-1} C^p_T \tau,$$
thus for any $\tau \in [0,1]$ there exists a constant $D = T^{p/2} 2^{p/2-1} C^p_T (\|B\|^p_\infty \tau^{p/2} + C_p \|A\|^p_\infty)$ such that, for any $i = 1, \ldots, d$,
$$E\big[ \|D_\cdot(X^i_{t+\tau}) - D_\cdot(X^i_t)\|^p_H \big] \le D \tau^{p/2}.$$
So Criterion (6) is satisfied with $\alpha = p > 2$, $\beta = p/2 - 1$, $K = D$. Kolmogorov's lemma, applied to the process $\{D_\cdot(X_t), t \in [0,T]\}$ taking its values in the Hilbert space $H$, proves the existence of a continuous version of the process $t \mapsto D_\cdot(X_t)$ from $[0,T]$ to $H$.
Finally, we conclude as in the proof of Nualart's Proposition 2.1.10, with index $p$ instead of $2$. •
1.3 Invertibility of the Malliavin matrix
Proposition 1.6 Assume that $B$ and $A$ satisfy Assumptions (1) and (2); then for all $t > 0$ the matrix $\gamma_V(t) := (\langle DV^i_t, DV^j_t \rangle_H)_{1 \le i,j \le d+1}$ is almost surely invertible.
Proof: The key is to introduce a new matrix which will be invertible: for all $(s,t)$, $0 < s < t$,
$$\gamma_G(s,t) := (\langle DG^i(s,t), DG^j(s,t) \rangle_H)_{1 \le i,j \le 2d} \quad (7)$$
where $G^i(s,t) := X^i_t$, $i = 1, \ldots, d$, and $G^{i+d}(s,t) := X^i_s$, $i = 1, \ldots, d$.
Step 1: We introduce
• $N_{1,t} := \{\omega : \exists s \in [0,t],\ DX^1_s \ne DM_t \text{ and } X^1_s = M_t\}$,
• $N_{2,t} := \{\omega : \exists s \in [0,t[,\ \det(\gamma_G(s,t)) = 0\}$,
• $N_{3,t} := \{\omega : X^1_t = M_t\}$,
• $N_t := \{\omega : \det(\gamma_V(t)) = 0\}$.
Remark that $N_t \subset \big( N_t \cap \cap_{i=1}^3 N^c_{i,t} \big) \cup \cup_{i=1}^3 N_{i,t}$.
Note that $P(N_t \cap \cap_{i=1}^3 N^c_{i,t}) = 0$. Indeed, if $\omega \in N_t \cap \cap_{i=1}^3 N^c_{i,t}$, since $X^1_\cdot$ admits a continuous modification there exists $s_0$ such that $X^1_{s_0} = M_t$. The fact that $\omega \in N^c_{3,t}$ implies that $s_0 < t$, and $\gamma_V(t) = (\gamma^{i,j}_G(s_0,t))_{(i,j) \in \{1,\ldots,d+1\}^2}$ is a submatrix of $\gamma_G(s_0,t)$. The fact that $\gamma_V(t)$ is not invertible contradicts the fact that $\gamma_G(s_0,t)$ is invertible.
Then, it remains to prove that $P(N_{i,t}) = 0$ for $i = 1, 2, 3$.
Step 2: Following the same lines as the proof of Proposition 2.1.11 of [16], we prove that almost surely
$$\{s : X^1_s = M_t\} \subset \{s : DM_t = DX^1_s\},$$
meaning $P(N_{1,t}) = 0$. We skip the details for simplicity.
Step 3: For all $t > 0$, almost surely, for all $s < t$ the $2d \times 2d$ matrix $\gamma_G(s,t)$ is invertible, meaning that for all $t$ the event $N_{2,t}$ is negligible.
Proof: The matrix $\gamma_G(s,t)$ is symmetric, and using (2.59) and (2.60) in [16] yields:
$$\gamma_G(s,t) = \begin{pmatrix} Y(t) C(t) Y^T(t) & Y(t) C(s) Y^T(s) \\ Y(s) C(s) Y^T(t) & Y(s) C(s) Y^T(s) \end{pmatrix} \quad (8)$$
where, using Einstein's convention,
$$C_{i,j}(t) := \int_0^t Y^{-1}(u)_{ik}\, A_{kl}(X_u) A_{k'l}(X_u)\, Y^{-1}(u)_{jk'}\,du \quad (9)$$
$$Y^i_j(t) := \delta_{i,j} + \int_0^t A^i_{k,l}(u) Y^k_j(u)\,dW^l_u + \int_0^t B^i_k(u) Y^k_j(u)\,du, \quad i,j \in \{1, \ldots, d\}.$$
Let us denote
$$C_{i,j}(s,t) := C_{i,j}(t) - C_{i,j}(s).$$
According to (2.58) of [16], there exists a process $Z$ such that almost surely, for all $h \in [0,T]$, $Z(h) Y(h) = I_d$; thus for all $t$ the matrix $Y(t)$ is invertible. As a consequence, for all $(s,t) \in [0,T]^2$, the $(d,d)$ matrix $Y(s,t) := Y(t) Y(s)^{-1}$ is invertible too. Then $\gamma_G(s,t)$ in (8) can be rewritten as a matrix composed of four $(d,d)$ blocks:
$$\gamma_G(s,t) = \begin{pmatrix} Y(s,t) Y(s) [C(s) + C(s,t)] Y^T(s) Y^T(s,t) & Y(s,t) Y(s) C(s) Y^T(s) \\ Y(s) C(s) Y^T(s) Y^T(s,t) & Y(s) C(s) Y^T(s) \end{pmatrix}.$$
Multiplying the second row of blocks by $Y(s,t)$ and subtracting it from the first row yields
$$\det[\gamma_G(s,t)] = \det \begin{pmatrix} Y(s,t) Y(s) C(s,t) Y^T(s) Y^T(s,t) & 0 \\ Y(s) C(s) Y^T(s) Y^T(s,t) & Y(s) C(s) Y^T(s) \end{pmatrix}.$$
The properties of block-triangular matrix determinants prove that
$$\det[\gamma_G(s,t)] = \det\big[ Y(s,t) Y(s) C(s,t) Y^T(s) Y^T(s,t) \big]\, \det\big[ Y(s) C(s) Y^T(s) \big]. \quad (10)$$
The processes $Z$ and $Y$ are diffusion processes, so each of them admits a continuous modification satisfying $Z(h) Y(h) = I_d$ for all $h \in [0,T]$. Thus, almost surely, the continuous process $Z$ is invertible, so it satisfies, almost surely, for all $0 \le s \le t \le T$,
$$\int_s^t \det(Z(h))^2\,dh > 0.$$
Let $\sigma(x) = \sum_{l=1}^d A^l(x) A^l(x)^T$. By definition of $C$ (see (9)),
$$C(s) = \int_0^s Y^{-1}(h) \sigma(X_h) (Y(h)^{-1})^T\,dh, \qquad C(s,t) = \int_s^t Y^{-1}(h) \sigma(X_h) (Y(h)^{-1})^T\,dh.$$
We now follow the proof of Theorem 2.3.1 page 127 of [16]: for $v \in \mathbb{R}^d$, the uniform ellipticity (Assumption (2)) gives
$$v^T \sigma(X_s) v \ge c|v|^2, \quad \forall s > 0, \ \forall v \in \mathbb{R}^d.$$
With $v = (Y(h)^{-1})^T u$ we get
$$u^T Y(h)^{-1} \sigma(X_h) (Y(h)^{-1})^T u \ge c\, u^T Y(h)^{-1} (Y(h)^{-1})^T u$$
and
$$u^T C(s) u = \int_0^s u^T Y(h)^{-1} \sigma(X_h) (Y(h)^{-1})^T u\,dh \ge c \int_0^s u^T Y(h)^{-1} (Y(h)^{-1})^T u\,dh \ge c|u|^2 \int_0^s \det(Z(h))^2\,dh.$$
Similarly,
$$u^T C(s,t) u = \int_s^t u^T Y(h)^{-1} \sigma(X_h) (Y(h)^{-1})^T u\,dh \ge c|u|^2 \int_s^t \det(Z(h))^2\,dh.$$
Thus, almost surely, for all $s \in\, ]0,t[$, $C(s)$ and $C(s,t)$ are invertible. Since $Y(s,t)$ and $Y(s)$ are invertible too, it follows from (10) that the matrix $\gamma_G(s,t)$ is invertible.
The process $t \mapsto D_\cdot(X_t)$ takes its values in $H$ and admits a continuous modification (see (6) above), and the set of invertible matrices is open; then
$$P(\{\omega : \exists s \in\, ]0,t[,\ \det(\gamma_G(s,t)) = 0\}) = P(N_{2,t}) = 0. \quad \square$$
Step 4: Under Assumptions (1) and (2), the time $t$ being fixed, almost surely $M_t > X^1_t$, meaning that the event $N_{3,t}$ is negligible.
Proof: For the sake of completeness we prove this result. Actually it is Proposition 18 of [10], but there under stronger assumptions than ours.
The set $\{M_t = X^1_t\}$ is detailed as follows:
$$\{\omega : M_t(\omega) = X^1_t(\omega)\} = \Omega_1 \cup \Omega_2, \quad (11)$$
where
$$\Omega_1 = \{\omega : \exists s < t \mid \forall u \in [s,t],\ X^1_u(\omega) = X^1_t(\omega)\}, \qquad \Omega_2 = \{\omega \mid \forall u < t,\ X^1_u(\omega) < X^1_t(\omega)\}.$$
Using Assumptions (1) and (2), $A^{-1}B$ is bounded; thus an equivalent change of probability measure can be operated using the Girsanov theorem: the probability measure $P_0$ is defined as
$$\frac{dP_0}{dP}\Big|_{\mathcal{F}_t} = L_t, \quad L_t := \exp\Big( -\int_0^t (A^{-1}B)^i(X_s)\,dW^i_s - \frac{1}{2} \int_0^t \|A^{-1}B(X_s)\|^2\,ds \Big). \quad (12)$$
Then $X^1$ is an $(\mathcal{F}, P_0)$ martingale:
$$X^1_t = X^1_0 + \int_0^t \sum_j A^{1,j}(X_s)\,d\tilde W^j_s \quad (13)$$
where $\tilde W$ is an $(\mathcal{F}, P_0)$ $d$-dimensional Brownian motion. The bracket of $X^1$, actually independent of the probability measure in the continuous case, is
$$\langle X^1, X^1 \rangle_t = \int_0^t \sum_j (A^{1,j}(X_s))^2\,ds.$$
Assumption (2) applied to $v = (1, 0, \ldots, 0)$ implies that for any $x$, $\sum_j A^{1,j}(x) A^{1,j}(x) \ge c > 0$. This has two consequences:
• For all rational numbers $q < q'$ in $[0,T]$,
$$\langle X^1, X^1 \rangle_{q'} - \langle X^1, X^1 \rangle_q \ge c(q' - q) > 0.$$
According to Proposition 1.13 page 119 of [17], for all rational numbers $q < q'$ in $[0,T]$, $X^1$ is not constant on the interval $[q, q']$. Let us remark that
$$\{\omega : \exists s < t \mid \forall u \in [s,t],\ X^1_u(\omega) = X^1_t(\omega)\} \subset \cup_{q < q' < T,\ q,q' \in \mathbb{Q}} \{\omega \mid \forall u \in [q,q'],\ X^1_u(\omega) = X^1_q(\omega)\},$$
thus
$$P_0\{\omega \mid \exists s < t,\ \forall u \in [s,t],\ X^1_u(\omega) = X^1_t(\omega)\} = 0.$$
The probability measures $P_0$ and $P$ are equivalent, so
$$P\{\omega : \exists s < t \mid \forall u \in [s,t],\ X^1_u(\omega) = X^1_t(\omega)\} = 0. \quad (14)$$
• Using the Dambis-Dubins-Schwarz theorem (Theorem 1.6 Chapter V [17]), and once again that $\sum_j (A^{1,j}(X_s))^2 \ge c$, we get
$$\langle X^1, X^1 \rangle_\infty = \int_0^\infty \sum_j (A^{1,j}(X_s))^2\,ds = +\infty.$$
So there exists a $P_0$ Brownian motion $B$ such that $X^1(t) = B_{\langle X^1, X^1 \rangle_t}$ for all $t > 0$.
Here we follow step by step the proof of Theorem 2.7 Chapter I of [17] (Lévy's modulus of continuity), but without the absolute value: for all $N \in \mathbb{R}$,
$$P_0\Big( \limsup_{\varepsilon \to 0}\ \sup_{\substack{0 \le t_1, t_2 \le N \\ t_1 - t_2 < \varepsilon}} \frac{B_{t_1} - B_{t_2}}{h(\varepsilon)} = 1 \Big) = 1$$
where $h(s) = \sqrt{2s \log(1/s)}$, $s \in [0,1]$.
This is equivalent to
$$P_0\Big( \liminf_{\varepsilon \to 0}\ \sup_{\substack{0 \le t_1, t_2 \le N \\ t_1 - t_2 < \varepsilon}} \frac{B_{t_1} - B_{t_2}}{h(\varepsilon)} \ne 1 \Big) = 0.$$
We remark that
$$\Omega_2 = \{\omega : \forall u \in [0,t[,\ X^1(u) < X^1(t)\} = \{\omega : \forall u \in [0,t[,\ B_{\langle X^1,X^1 \rangle_u} < B_{\langle X^1,X^1 \rangle_t}\}$$
$$\subset \cup_N \cup_n \cap_{k \ge n} \Big\{ \sup_{\substack{0 \le t_1, t_2 \le N \\ t_1 - t_2 < 1/k}} \frac{B_{t_1} - B_{t_2}}{h(1/k)} \le 0 \Big\} \subset \cup_N \Big\{ \liminf_{1/n \to 0}\ \sup_{\substack{0 \le t_1, t_2 \le N \\ t_1 - t_2 < 1/n}} \frac{B_{t_1} - B_{t_2}}{h(1/n)} \ne 1 \Big\},$$
thus $P_0(\Omega_2) = 0$. The probability measures $P$ and $P_0$ are equivalent, so
$$P(\Omega_2) = P\{\omega : \forall u \in [0,t[,\ X^1_u(\omega) < X^1_t(\omega)\} = 0. \quad (15)$$
Finally, (11), (14) and (15) prove that
$$P(M_t = X^1_t) = P(N_{3,t}) = 0. \quad (16)$$
These four steps conclude the proof of Proposition 1.6, so Assumption (ii) of Theorem 1.4 is satisfied and Theorem 1.1 is proved. $\square$
2 Case d = 1, regularity of the density
In the one-dimensional case we study the regularity of the density of the law of the vector $V_t$. We recall the model, the following stochastic differential equation (see [6]):
$$dX_t = B(X_t)\,dt + A(X_t)\,dW_t, \quad X_0 = x, \ t \in [0,T], \quad (17)$$
where $W$ is a Brownian motion and $A$ and $B$ satisfy Assumptions (1) and (2). Moreover, Assumption (2) yields that $A^{-1}B$ is bounded. We recall the notation $V := (M, X)$ where $M_t = \sup_{s \le t} X_s$.
From Theorem 1.1 in Section 1, the law of the pair $V_t = (M_t, X_t)$ admits a density. With Malliavin calculus tools, we are not able to prove the regularity of the density on the diagonal $\{(m,x) : m = x\}$. So we turn to the regularity of the density on the open subset $\{(m,x) : m > \max(0,x)\}$.
Theorem 2.1 Under Assumption (2), if $B \in C^4_b(\mathbb{R})$ and $A \in C^5_b(\mathbb{R})$, then for all $t > 0$ the density of the vector $V_t$, restricted to the open set $\Delta := \{(m,x) : m > \max(0,x)\}$, belongs to $C^{2,2}(\Delta)$.
2.1 Reduction to a drifted Brownian motion
Since we are in a one-dimensional setting, we use a Lamperti transformation in order to reduce the problem to the case of a diffusion with additive noise. A priori, $dX_t = B(X_t)\,dt + A(X_t)\,dW_t$. We look for an increasing function $\varphi \in C^2_b$ such that the coefficient of $dW$ becomes $1$. Itô's formula yields
$$d\varphi(X_t) = \varphi'(X_t) B(X_t)\,dt + \frac{1}{2} \varphi''(X_t) A^2(X_t)\,dt + \varphi'(X_t) A(X_t)\,dW_t.$$
A sufficient condition is to choose $\varphi$ such that $\varphi' = \frac{1}{A}$. As is $A$, $\varphi'$ is uniformly bounded above and below. Then $Y = \varphi(X)$ satisfies
$$dY_t = \Big( \frac{B}{A} \circ \varphi^{-1}(Y_t) - \frac{1}{2} A' \circ \varphi^{-1}(Y_t) \Big)\,dt + dW_t. \quad (18)$$
Remark that $\tilde B = \frac{B}{A} \circ \varphi^{-1} - \frac{1}{2} A' \circ \varphi^{-1} \in C^4_b$ as a consequence of the assumptions of Theorem 2.1. Moreover, $\varphi'$ being positive, $\varphi$ is increasing and $Y^*_t = \varphi(X^*_t)$. From Theorem 1.1, the law of the pair $(Y^*_t, Y_t)$ admits a density with respect to the Lebesgue measure, and the link between the law densities of $V_t$ and of $(Y^*_t, Y_t)$ is given by the following lemma.
Lemma 2.2 We assume that $A$ and $B$ satisfy the assumptions of Theorem 2.1. Then the law density of $(M_t, X_t)$, $p_V(\cdot, \cdot; t)$, satisfies
$$p_V(b, a; t) = \frac{p_{Y^*,Y}(\varphi(b), \varphi(a); t)}{A(b) A(a)}$$
where $\varphi$ is defined by $\varphi'(x) = \frac{1}{A(x)}$ and $p_{Y^*,Y}(\cdot, \cdot; t)$ is the law density of the pair $(Y^*_t, Y_t)$.
Proof: It is enough to identify the law density of the pair $V_t = (M_t, X_t)$ using, for any bounded measurable $F$, the following:
$$E[F(M_t, X_t)] = E[F(\varphi^{-1}(Y^*_t), \varphi^{-1}(Y_t))] = \int F(\varphi^{-1}(\beta), \varphi^{-1}(\alpha))\, p_{Y^*,Y}(\beta, \alpha; t)\,d\beta\,d\alpha.$$
The change of variables $b = \varphi^{-1}(\beta)$, $a = \varphi^{-1}(\alpha)$, so that $d\beta = \varphi'(b)\,db = \frac{1}{A(b)}\,db$ and $d\alpha = \varphi'(a)\,da = \frac{1}{A(a)}\,da$, gives the result. •
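The Lamperti transformation is easy to implement numerically. The sketch below is our illustration (the coefficients $A$ and $B$ are toy choices satisfying the ellipticity bound, not from the paper): it computes $\varphi$ by quadrature from $\varphi' = 1/A$, simulates $X$ by the Euler scheme, and checks that $Y = \varphi(X)$ has quadratic variation close to $T$, as expected for a unit diffusion coefficient; since $\varphi$ is increasing, the running maxima satisfy $Y^* = \varphi(X^*)$.

```python
import numpy as np

def A(x):
    return 1.5 + 0.5 * np.sin(x)          # toy coefficient, 1 <= A(x) <= 2

def B(x):
    return np.cos(x)                      # toy bounded drift

def phi(x, n=2001):
    """Lamperti map phi(x) = int_0^x du / A(u), by the trapezoidal rule."""
    u = np.linspace(0.0, x, n)
    du = u[1] - u[0]
    v = 1.0 / A(u)
    return du * (v.sum() - 0.5 * (v[0] + v[-1]))

# Euler scheme for dX = B(X)dt + A(X)dW, then transform: Y = phi(X)
rng = np.random.default_rng(2)
T, n_steps = 1.0, 4000
dt = T / n_steps
X = np.empty(n_steps + 1)
X[0] = 0.0
for k in range(n_steps):
    X[k + 1] = X[k] + B(X[k]) * dt + A(X[k]) * rng.normal(0.0, np.sqrt(dt))
Y = np.array([phi(x) for x in X])

qv = np.sum(np.diff(Y) ** 2)              # quadratic variation of Y on [0, T]
```

Here `qv` is close to `T = 1`, and `Y.max()` coincides with `phi(X.max())`, which is the discrete counterpart of $Y^*_t = \varphi(X^*_t)$ used in the proof above.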
With such a lemma, the regularity of the density $p_V$ will follow from that of $p_{Y^*,Y}$, $\varphi$ and $A$. Theorem 2.1 is a consequence of the following theorem, where $B$ stands for $\frac{B}{A} \circ \varphi^{-1} - \frac{1}{2} A' \circ \varphi^{-1}$.
Theorem 2.3 Assume that $B$ belongs to $C^4_b(\mathbb{R})$; then for all $t > 0$ the law of $(M_t, X_t)$ has a $C^{2,2}$ density with respect to the Lebesgue measure on $\Delta = \{(b,a) : b > a, b > 0\}$, where $M_t = \sup_{s \le t} X_s$ and $X_t$ is the solution of
$$dX_t = B(X_t)\,dt + dW_t.$$
2.2 Change of probability measure
We now consider the model with $A = 1$, meaning $dX_t = B(X_t)\,dt + dW_t$. Recall the equivalent change of probability measure $Q = LP$, where $L$ is defined by $dL_t = -L_t B(X_t)\,dW_t$. With a slight abuse of notation, remark that under $Q$ we write $W$ instead of $X$, which is a $Q$-Brownian motion. Denoting $Z = L^{-1}$, $P = ZQ$,
$$Z_t = \exp\Big[ \int_0^t B(W_s)\,dW_s - \frac{1}{2} \int_0^t B^2(W_s)\,ds \Big]. \quad (19)$$
Since $B$ is bounded, $Z_t \in L^p$ for any $p$.
Under $Q$, the law of $V_t$ is that of $(W^*_t, W_t)$, where $W^*_t := \sup_{s \le t} W_s$.
Lemma 2.4 As soon as $B \in C^4_b$, $Z_t \in \mathbb{D}^{4,p}$ for any $p > 1$, where $\mathbb{D}^{4,p}$ is the set of random variables that are four times differentiable in the Malliavin sense, with derivatives in $L^p$.
Proof: Using the definition of $Z_t$, we only have to prove that, for all $p > 1$,
$$\int_0^t B(W_s)\,dW_s - \frac{1}{2} \int_0^t B^2(W_s)\,ds \in \mathbb{D}^{4,p}.$$
Obviously, by linearity, $\int_0^t B^2(W_s)\,ds \in \mathbb{D}^{4,p}$, since actually
$$D_{s_{i_1}, \ldots, s_i} \int_0^t B^2(W_s)\,ds = \int_{\max_j s_j}^t (B^2)^{(i)}(W_s)\,ds.$$
Using the Malliavin derivative
$$D_r\Big[ \int_0^t B(W_s)\,dW_s \Big] = B(W_r) + \int_r^t B'(W_s)\,dW_s,$$
and recursively, $D^i Z_t$, $i = 1, \ldots, 4$, are well defined. Finally the lemma is proved using the boundedness of $B$ and its derivatives. •
From now on, using a change of time unit, it is enough to consider the case $t = 1$.
2.3 Local non-degeneracy of $F := V_1 = (W^*_1, W_1)$
Let us recall Definition 2.1.2 in Nualart [16].
Definition 2.5 A random vector $F = (F_1, \ldots, F_m)$ whose components are in $\mathbb{D}^{1,2}$ is locally non-degenerate in an open set $\Delta \subset \mathbb{R}^m$ if there exist elements $u^j_\Delta \in \mathbb{D}^\infty(H)$, $j = 1, \ldots, m$, and an $m \times m$ random matrix $\gamma_\Delta = (\gamma^{i,j}_\Delta)$ such that $\gamma^{i,j}_\Delta \in \mathbb{D}^\infty$, $|\det \gamma_\Delta|^{-1} \in L^q(\Omega)$ for all $q \ge 1$, and $\langle DF_i, u^j_\Delta \rangle = \gamma^{i,j}_\Delta$ on $\{F \in \Delta\}$ for any $i,j = 1, \ldots, m$.
Proposition 2.6 Let $\eta > 0$. The random vector $F := V_1 = (W^*_1, W_1)$ is locally non-degenerate in any open set $\Delta_\eta = \{(b,a) : b - a > \eta, b > \eta\}$.
Proof: The key is to prove that the random vector $F := V_1 = (W^*_1, W_1)$ satisfies Definition 2.5.
Step 1: The components of $F$ belong to $\mathbb{D}^{1,2}$. Indeed, $D_t W_1 = 1_{[0,1]}(t)$, and $D_t W^*_1 = 1_{[0,\tau]}(t)$ where $\tau := \inf\{s : W_s = W^*_1\}$.
Step 2: Let the process $\tilde W$ be defined as $\tilde W_t := W_{1-t} - W_1$.
Similarly to Proposition 2.1.12 page 112 of [16]: for $p > 2$, let $s \in [0,1]$ and let $\gamma$ be such that $2p\gamma \in (1, p-1)$; we introduce
$$Y_1(s) = \int_{[0,s]^2} \frac{(W_u - W_{u'})^{2p}}{|u - u'|^{1+2p\gamma}}\,du\,du', \qquad Y_2(s) = \int_{[0,s]^2} \frac{(\tilde W_u - \tilde W_{u'})^{2p}}{|u - u'|^{1+2p\gamma}}\,du\,du'.$$
Following the proof of this proposition, it can be proved that:
there exists $R(\eta, p, \gamma)$ such that
$$Y_1(s) \le R \Rightarrow \sup_{z \in [0,s]} W_z \le \eta; \quad (20)$$
similarly, $Y_2(s) \le R \Rightarrow \sup_{z \in [0,s]} \tilde W_z \le \eta$.
Let $\psi : \mathbb{R}_+ \to [0,1]$ be infinitely differentiable and such that $\psi(x) = 1$ when $x \in [0, R/2]$ and $\psi(x) = 0$ when $x \ge R$. Denote
$$u^1_\eta(s) = \psi(Y_1(s)), \quad u^2_\eta(s) = \psi(Y_2(1-s)), \quad (21)$$
and
$$\Gamma_\eta = \begin{pmatrix} \int_0^1 \psi(Y_1(s))\,ds & \int_0^1 \psi(Y_1(s))\,ds \\ 0 & \int_0^1 \psi(Y_2(1-s))\,ds \end{pmatrix}. \quad (22)$$
Step 3: The processes $u^i_\eta$ belong to $\mathbb{D}^\infty(H)$, $i = 1, 2$, by the construction of $Y_i$ and the definition of the function $\psi$. As a consequence, the random matrix $\Gamma_\eta$ belongs to $\mathbb{D}^\infty$.
Step 4: Note that $(\det \Gamma_\eta)^{-1} = \big[ \int_0^1 \psi(Y_1(s))\,ds \int_0^1 \psi(Y_2(1-s))\,ds \big]^{-1}$. We will prove that, for all $q \ge 1$,
$$\Big( \int_0^1 \psi(Y_1(s))\,ds \Big)^{-1} \text{ and } \Big( \int_0^1 \psi(Y_2(1-s))\,ds \Big)^{-1} \in L^q. \quad (23)$$
Indeed, using the trick of Proposition 2.1.12 page 114 of [16],
$$\int_0^1 \psi(Y_1(s))\,ds \ge \int_0^1 1_{\{Y_1(s) < R/2\}}\,ds = \mathrm{Leb}\{s : Y_1(s) < R/2\} = (Y_1)^{-1}(R/2) > 0,$$
since the non-decreasing function $Y_1$ is invertible; so for any $q$, $\big( \int_0^1 \psi(Y_1(s))\,ds \big)^{-q} \le [(Y_1)^{-1}(R/2)]^{-q}$. Recall that
$$E\big[ (Y_1)^{-1}(R/2) \big]^{-q} = \int_0^\infty P\{ (Y_1)^{-1}(R/2) < x^{-1/q} \}\,dx.$$
On the other hand, for any $y$, $P\{(Y_1)^{-1}(R/2) < y\} = P\{R/2 < Y_1(y)\} \le (\frac{2}{R})^q E[(Y_1)^q(y)]$. We compute $E[(Y_1)^q(y)]$ as
$$E\Big[ \Big( \int_{[0,y]^2} \frac{(W_u - W_{u'})^{2p}}{|u - u'|^{1+2p\gamma}}\,du\,du' \Big)^q \Big].$$
Using Jensen's inequality,
$$E[(Y_1)^q(y)] \le y^{2q-2} \int_{[0,y]^2} \frac{E[(W_u - W_{u'})^{2pq}]}{|u - u'|^{q(1+2p\gamma)}}\,du\,du'.$$
Since $W$ is a Brownian motion, $E[(W_u - W_{u'})^{2pq}] \le C|u - u'|^{pq}$, and using $2p\gamma < p - 1$,
$$y^{2q-2} \int_{[0,y]^2} \frac{E[(W_u - W_{u'})^{2pq}]}{|u - u'|^{q(1+2p\gamma)}}\,du\,du' \le C y^{q(1+p-2\gamma p)}.$$
Choosing $y = x^{-1/q}$,
$$P\{ (Y_1)^{-1}(R/2) < x^{-1/q} \} \le C_1 x^{-(1+p-2\gamma p)}.$$
Using this bound for $x \ge 1$, and since $2p\gamma < p - 1$ implies $2\gamma p - p - 1 < -2$,
$$E\big[ (Y_1)^{-1}(R/2) \big]^{-q} = \int_0^\infty P\{ (Y_1)^{-1}(R/2) < x^{-1/q} \}\,dx < \infty.$$
Recall that $\tilde W$ is a Brownian motion (time reversal), so both processes $(Y_i(s), 0 \le s \le 1)$, $i = 1, 2$, have the same law; hence the two factors in $\det \Gamma_\eta$ have the same law. So, for any $q$, $\big( \int_0^1 \psi(Y_2(1-s))\,ds \big)^{-1}$ also belongs to $L^q$. This achieves the proof of (23), and as a conclusion, $(\det \Gamma_\eta)^{-1} \in L^q$ for any $q$.
Step 5: According to Definition 2.5, we now prove that the random matrix
$$\begin{pmatrix} \langle DW^*_1, u^1_\eta \rangle & \langle DW_1, u^1_\eta \rangle \\ \langle DW^*_1, u^2_\eta \rangle & \langle DW_1, u^2_\eta \rangle \end{pmatrix} \quad (24)$$
coincides with $\Gamma_\eta$ on the set $\{V_1 \in \Delta_\eta\}$.
(i) By definition of the time $\tau$, $W_\tau = W^*_1 > \eta$; this contradicts (20), so $Y_1(s) \ge R$ and thus $\psi(Y_1(s)) = 0$ as soon as $s \ge \tau$; hence $\langle DW^*_1, u^1_\eta \rangle = \int_0^\tau \psi(Y_1(s))\,ds = \int_0^1 \psi(Y_1(s))\,ds$.
(ii) Concerning $\langle DW^*_1, u^2_\eta \rangle = \int_0^\tau u^2_\eta(s)\,ds$: it is null. On the set $\{W^*_1 > \eta, W^*_1 - W_1 > \eta\}$, using $W^*_1 = W_\tau$, so $W_\tau - W_1 > \eta$, we get $W_\tau - W_1 = \tilde W_{1-\tau} > \eta$, so $\eta \le \tilde W_{1-\tau} \le \sup_{0 \le u \le 1-s} \tilde W_u$ for $s \le \tau$. Once again this contradicts (20), so $Y_2(1-s) > R$, and the definition of $\psi$ yields $u^2_\eta(s) = 0$ for all $s \in [0,\tau]$.
(iii) Concerning $\langle DW_1, u^i_\eta \rangle$: since $DW_1 = 1$ on $[0,1]$, $\langle DW_1, u^2_\eta \rangle = \int_0^1 \psi(Y_2(1-s))\,ds$ and $\langle DW_1, u^1_\eta \rangle = \int_0^1 \psi(Y_1(s))\,ds$. •
2.4 Proof of Theorem 2.3
Let $v_0 = (x^*_0, x_0) \in \Delta_\eta := \{(x^*, x) : x^* > \eta, x^* - x > \eta\}$, and let $\delta < \frac{1}{2} d(v_0, \Delta^c_\eta)$ and $\delta < \delta' < d(v_0, \Delta^c_\eta)$. Recall the notations $u^i_\eta$, $i = 1, 2$, and $\Gamma_\eta$ from (21) and (22).
(i) Let $\varphi \in C^\infty_\eta(\mathbb{R}^m)$, meaning that $\mathrm{support}(\varphi) \subset \Delta_\eta$, and denote $F = (W^*_1, W_1)$; then, using the definition of $P$ and $Q$,
$$E(\partial^2_{1,2} \varphi(F)) = E_Q(\partial^2_{1,2} \varphi(F) Z_1), \quad (25)$$
where $Z$ is defined in (19).
Since $F$ is locally non-degenerate (Proposition 2.6), we apply Proposition 2.1.4, identity (2.25), of [16] to the multi-index $\alpha = (1,2)$ and $G = Z_1$. Since we use Proposition 2.1.4 of [16] with index $\alpha = (1,2)$, we only need $Z_1$ to belong to $\cap_{p>1} \mathbb{D}^{2,p}$. Later we will use these indices twice, so we will then need $Z_1$ to belong to $\cap_{p>1} \mathbb{D}^{4,p}$; see Lemma 2.4.
Applying identity (2.25) of Proposition 2.1.4 of [16], with the multi-index $\alpha = (1,2)$ and $G = Z_1$, to the right-hand side of (25), we obtain
$$E(\partial^2_{1,2} \varphi(F)) = E_Q(\varphi(F) H_{1,2}(Z_1)), \quad (26)$$
where, following formulas (2.26) and (2.27) of Proposition 2.1.4 of [16], expressed in our framework:
$$H_{(i)}(Z_1) = \sum_{j=1}^2 \delta\big( Z_1 (\Gamma^{-1}_\eta)_{i,j} u^j_\eta \big) = \delta\big( Z_1 (\Gamma^{-1}_\eta u_\eta)_i \big) = Z_1 \delta\big( (\Gamma^{-1}_\eta u_\eta)_i \big) - \int_0^1 D_s Z_1\, (\Gamma^{-1}_\eta u_\eta(s))_i\,ds, \quad i = 1, 2,$$
and for $\beta = (1,2)$,
$$H_{1,2}(Z_1) = H_{(2)}(H_{(1)}(Z_1)).$$
We detail this definition:
$$H_{1,2}(Z_1) = \delta\big[ H_{(1)}(Z_1) (\Gamma^{-1}_\eta u_\eta)_2 \big] = \delta\big[ \delta\big[ Z_1 (\Gamma^{-1}_\eta u_\eta)_1 \big] (\Gamma^{-1}_\eta u_\eta)_2 \big]. \quad (27)$$
Since
$$\varphi(W^*_1, W_1) = \int_{-\infty}^{W^*_1} \int_{-\infty}^{W_1} \partial^2_{1,2} \varphi(x^*, x)\,dx^*\,dx, \quad (28)$$
applying Fubini's theorem in (26) we obtain
$$E[\partial^2_{1,2} \varphi(F)] = \int_{\mathbb{R}^2} \partial^2_{1,2} \varphi(x^*, x)\, E_Q\big[ 1_{\{W_1 > x,\ W^*_1 > x^*\}}\, H_{1,2}(Z_1) \big]\,dx^*\,dx. \quad (29)$$
Then Equations (27) and (29) imply that the law of $F$ has a density with respect to the Lebesgue measure, given by
$$p_V(x^*, x; 1) = E_Q\big[ 1_{\{W_1 > x,\ W^*_1 > x^*\}}\, \delta\big[ \delta\big[ Z_1 (\Gamma^{-1}_\eta u_\eta)_1 \big] (\Gamma^{-1}_\eta u_\eta)_2 \big] \big].$$
(ii) We now turn to the differentiability of this density. Let $\alpha = (1,2)$ and $\beta = 1$, or $2$, or $(i,j)$, $i,j \le 2$; using the definition of $P$ and $Q$,
$$E(\partial^{2+|\beta|}_{\alpha,\beta} \varphi(F)) = E_Q[\partial^{2+|\beta|}_{\alpha,\beta} \varphi(F) Z_1]. \quad (30)$$
Then, using Proposition 2.1.4 of [16] for $\alpha + \beta$,
$$E(\partial^{|\beta|}_\beta \partial^2_{1,2} \varphi(F)) = E_Q[\varphi(F) H_\beta(H_\alpha(Z_1))]. \quad (31)$$
Using representation (28) of $\varphi$,
$$E[\partial^{2+|\beta|}_{\alpha,\beta} \varphi(F)] = \int_{\mathbb{R}^2} \partial^2_{1,2} \varphi(x^*, x)\, E_Q\big[ 1_{\{W^*_1 > x^*,\ W_1 > x\}}\, H_\beta(H_\alpha(Z_1)) \big]\,dx\,dx^*. \quad (32)$$
Let $\varphi$ be a regular function with support in the ball $B(v_0, \delta) \subset \Delta_\eta$ and let $\xi = \partial^2_\alpha \varphi$, so that $\partial^{2+|\beta|}_{\alpha,\beta} \varphi = \partial^{|\beta|}_\beta \xi$; we apply (32):
$$E[\partial^{|\beta|}_\beta \xi(F)] = \int_{\mathbb{R}^2} \xi(x^*, x)\, E_Q\big[ 1_{\{W^*_1 > x^*,\ W_1 > x\}}\, H_\beta(H_\alpha(Z_1)) \big]\,dx\,dx^*. \quad (33)$$
The left-hand side above is identified as $\int_{\mathbb{R}^2} \partial^{|\beta|}_\beta \xi(x^*, x)\, p_V(x^*, x; 1)\,dx^*\,dx$, which, using an integration by parts, is also $\pm \int_{\mathbb{R}^2}$