
Enlargement of filtration in discrete time



HAL Id: hal-01253214

https://hal.archives-ouvertes.fr/hal-01253214

Submitted on 8 Jan 2016

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Christophette Blanchet-Scalliet, Monique Jeanblanc, Ricardo Romo Roméro

To cite this version:

C. Blanchet-Scalliet, M. Jeanblanc, R. Romo Romero. Enlargement of filtration in discrete time. January 8, 2016.

Abstract

We present some results on enlargement of filtration in discrete time. Many results known in continuous time extend immediately to the discrete time setting; here, we provide direct proofs which are much simpler. We also study arbitrage conditions in a financial setting and we present some specific cases, such as immersion and pseudo-stopping times, for which we obtain new results.

Introduction

In this paper, we present classical results on enlargement of filtration in a discrete time framework. In such a setting, any F-martingale is a semimartingale for any filtration G larger than F, and one could think that there is not much left to do. From our point of view, one interest of our paper is that the proofs of the semimartingale decomposition formulae are simple, and give a pedagogical support to understand the general formulae obtained in the literature in continuous time. It can be noted that many results are established in continuous time under the hypothesis that all F-martingales are continuous or, in the progressive enlargement case, that the random time avoids the F-stopping times, and the extension to the general case is difficult. In discrete time, one cannot make any such assumption, since martingales are discontinuous and random times valued in the set of integers do not avoid F-stopping times.

In the first section, we recall some well known facts. Section 2 is devoted to the case of initial enlargement. Section 3 presents the case of progressive enlargement with a random time τ. We give a "model-free" definition of arbitrages in the context of enlargement of filtration, we study some examples in initial enlargement and give, in a progressive enlargement setting, necessary and sufficient conditions to avoid arbitrages before τ. We present the particular case of honest times (which are the standard example in continuous time) and we give conditions under which the immersion property holds. We also give various characterizations of pseudo-stopping times.

1 Some well known Results and Definitions

In this paper, we are working in a discrete time setting: X = (X_n, n ≥ 0) is a process on a probability space (Ω, P), and H = (H_n, n ≥ 0) is a filtration. We denote by ∆X_n := X_n − X_{n−1}, n ≥ 1, the increment of X at time n, and we set ∆X_0 = X_0. A process X is H-predictable if, for any n ≥ 1, the random variable X_n is H_{n−1}-measurable and X_0 is a constant. A process X is square integrable if E(X_n^2) < ∞ for all n ≥ 0. A random variable X is positive if X > 0 a.s., and a process A is increasing (resp. decreasing) if A_n ≥ A_{n−1} (resp. A_n ≤ A_{n−1}) a.s., for all n ≥ 1.

C. B-S.: Université de Lyon - CNRS, UMR 5208, Institut Camille Jordan - Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully Cedex, FRANCE. M.J., R.R.: LaMME, UMR CNRS 8071, Université d’Evry-Val-D’Essonne, 23 boulevard de France, 91037 Evry. The research of M. Jeanblanc and R. Romo Romero is supported by the Chaire Marchés en Mutation (Fédération Bancaire Française). M.J. thanks K. Kardaras for fruitful discussions on arbitrage opportunities.

1.1 H-martingales

We give some obvious results on the form of H-martingales. The set of processes of the form
$$\Big(\psi_0 + \sum_{k=1}^{n}\big(\psi_k - \mathbb E(\psi_k|\mathcal H_{k-1})\big),\ n \ge 0\Big),$$
where ψ is an H-adapted integrable process, is equal to the set of all H-martingales (here $\sum_{k=1}^{0} = 0$). The set of processes of the form
$$\Big(\psi_0 \prod_{k=1}^{n} \frac{\psi_k}{\mathbb E(\psi_k|\mathcal H_{k-1})},\ n \ge 0\Big),$$
where ψ is a positive integrable H-adapted process, is the set of all positive H-martingales (here $\prod_{k=1}^{0} = 1$).

1.2 Doob’s Decomposition and Applications

1.2.1 Doob’s decomposition

Lemma 1.1 Any integrable H-adapted process X is a special H-semimartingale¹ with (unique) decomposition X = M^{X,H} + V^{X,H}, where M^{X,H} is an H-martingale and V^{X,H} is an H-predictable process with V_0^{X,H} = 0. Furthermore,
$$\Delta V_n^{X,\mathbb H} = \mathbb E(\Delta X_n|\mathcal H_{n-1}),\ \forall n \ge 1.$$

Proof. In the proof, V^H := V^{X,H} and M^H := M^{X,H}. Setting V_0^H = 0 and, for n ≥ 1,
$$V_n^{\mathbb H} - V_{n-1}^{\mathbb H} = \mathbb E(X_n - X_{n-1}|\mathcal H_{n-1}),$$
we construct an H-predictable process. This leads to $\Delta M_n^{\mathbb H} = \Delta X_n - \Delta V_n^{\mathbb H} = X_n - \mathbb E(X_n|\mathcal H_{n-1})$, ∀n ≥ 1. Setting M_0^H = X_0, the process M^H is an H-martingale. □
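The construction in the proof can be checked numerically. Below is a minimal sketch (our own toy setup, not from the paper): the filtration is generated by N fair ±1 coin flips, conditional expectations are exact averages over the next flip, and we verify that M is a martingale and V is predictable for the submartingale X_n = S_n².

```python
import itertools

# Toy filtration: H_n generated by n fair +-1 coin flips.
N = 4
paths = list(itertools.product([-1, 1], repeat=N))  # all step sequences

def X(p, n):
    # X_n = S_n^2, with S_n the random walk: an integrable adapted process
    s = sum(p[:n])
    return s * s

def cond_exp(f, p, n):
    # E(f(., n) | H_{n-1}) on path p: average over the n-th flip
    return 0.5 * (f(p[:n-1] + (-1,), n) + f(p[:n-1] + (1,), n))

def V(p, n):
    # predictable part of Lemma 1.1: Delta V_k = E(Delta X_k | H_{k-1})
    return sum(cond_exp(X, p, k) - X(p, k - 1) for k in range(1, n + 1))

def M(p, n):
    # martingale part M = X - V (here M_0 = X_0 = 0)
    return X(p, n) - V(p, n)

for p in paths:
    for n in range(1, N + 1):
        # martingale property: E(M_n | H_{n-1}) = M_{n-1}
        assert abs(cond_exp(M, p, n) - M(p, n - 1)) < 1e-12
        # predictability: V_n does not depend on the n-th flip
        q = p[:n-1] + (-p[n-1],) + p[n:]
        assert abs(V(p, n) - V(q, n)) < 1e-12
```

For this particular X one recovers the classical decomposition S_n² = (S_n² − n) + n, i.e., V_n = n.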

In what follows, we shall also denote V^X (resp. V^H) the H-predictable part of X if there is no ambiguity on the choice of the filtration (resp. on the choice of the process).

As an immediate corollary, we obtain the Doob decomposition of supermartingales:

Corollary 1.2 If X is an H-adapted supermartingale, it admits a unique decomposition X = M^X − A^X, where M^X is an H-martingale and A^X is an increasing H-predictable process with A_0^X = 0.

1.2.2 Predictable brackets

Proposition 1.3 If X and Y are square integrable martingales, there exists a unique H-predictable process V^{X,Y} =: ⟨X, Y⟩ such that V_0^{X,Y} = 0 and XY − V^{X,Y} is an H-martingale. Furthermore,
$$\Delta V_n^{X,Y} = \mathbb E(Y_n \Delta X_n|\mathcal H_{n-1}) = \mathbb E(\Delta Y_n \Delta X_n|\mathcal H_{n-1}),\ n \ge 1.$$

Proof. Indeed, from Lemma 1.1, and using the martingale property of X and Y, we have, for n ≥ 1:
$$\Delta V_n^{X,Y} = V_n^{X,Y} - V_{n-1}^{X,Y} = \mathbb E(X_n Y_n - X_{n-1}Y_{n-1}|\mathcal H_{n-1}) = \mathbb E(Y_n\Delta X_n|\mathcal H_{n-1}) + \mathbb E(X_{n-1}\Delta Y_n|\mathcal H_{n-1}) = \mathbb E(Y_n\Delta X_n|\mathcal H_{n-1}) = \mathbb E(\Delta Y_n\Delta X_n|\mathcal H_{n-1}).$$
□


The predictable bracket of two semimartingales X, Y is defined in continuous time as the dual predictable projection of the covariation process, that is, ⟨X, Y⟩ := [X, Y]^p. For discrete time semimartingales, we adopt the same definition. The covariation process is
$$[X,Y]_0 = 0,\qquad [X,Y]_n := \sum_{k=1}^{n} \Delta X_k \Delta Y_k,\ n\ge 1,$$
and [X, Y]^p is the unique predictable (bounded variation) process null at time 0 such that [X, Y] − [X, Y]^p is a martingale, i.e., [X, Y]^p is the predictable part of the semimartingale [X, Y].

Lemma 1.4 Let X, Y be two H-adapted processes (hence, semimartingales). Then
$$\langle X,Y\rangle_0 = 0,\qquad \Delta\langle X,Y\rangle_n = \mathbb E(\Delta X_n \Delta Y_n|\mathcal H_{n-1}),\ n\ge 1.$$

Proof. From Doob’s decomposition (Lemma 1.1), for n ≥ 1,
$$(\Delta[X,Y]^p)_n = \mathbb E([X,Y]_n - [X,Y]_{n-1}|\mathcal H_{n-1}) = \mathbb E(\Delta X_n\Delta Y_n|\mathcal H_{n-1}).$$
□

As usual, two martingales X and Y are said to be orthogonal if their product is a martingale, i.e., if E(∆Y_n ∆X_n|H_{n−1}) = 0, ∀n ≥ 1.

1.2.3 Stochastic integral of optional processes and martingale property

Lemma 1.5 If Y is an adapted square integrable process and X a square integrable H-martingale, the process Y • X defined as
$$(Y \bullet X)_n := Y_0 + \sum_{k=1}^{n} Y_k \Delta X_k,\ n \ge 0,$$
is an H-martingale if and only if the H-martingale part of Y is orthogonal to X. In particular, if Y is H-predictable, Y • X is an H-martingale.

Proof. Let Y = M^Y + V^Y. Since
$$\mathbb E(Y_n\Delta X_n|\mathcal H_{n-1}) = \mathbb E(M_n^Y\Delta X_n|\mathcal H_{n-1}) + V_n^Y\,\mathbb E(\Delta X_n|\mathcal H_{n-1}) = \mathbb E(\Delta M_n^Y\,\Delta X_n|\mathcal H_{n-1}),$$
the result is obvious. □

1.3 Multiplicative decomposition

Theorem 1.6 Let X be an H-adapted (integrable) positive process; then X can be represented in a unique way as X = K^X N^X, where K^X is an H-predictable process with K_0^X = 1 and N^X is an H-martingale. More precisely,
$$N_0^X = X_0,\quad N_n^X = X_0\prod_{k=1}^{n}\frac{X_k}{\mathbb E(X_k|\mathcal H_{k-1})},\ \forall n\ge 1,\qquad K_0^X = 1,\quad K_n^X = \prod_{k=1}^{n}\frac{\mathbb E(X_k|\mathcal H_{k-1})}{X_{k-1}},\ \forall n\ge 1.$$

Proof. For each fixed n ≥ 1, the positive random variable N_n^X is integrable since, by recurrence,
$$\mathbb E[N_n^X] = \mathbb E\Big[N_{n-1}^X\,\mathbb E\Big(\frac{X_n}{\mathbb E(X_n|\mathcal H_{n-1})}\Big|\mathcal H_{n-1}\Big)\Big] = \mathbb E\big(N_{n-1}^X\big) = X_0,$$
and, from Subsection 1.1, N^X is a martingale. On the other hand, the process K^X, defined by
$$K_n^X = \frac{X_n}{X_0\prod_{k=1}^{n}\frac{X_k}{\mathbb E(X_k|\mathcal H_{k-1})}} = \prod_{k=1}^{n}\frac{\mathbb E(X_k|\mathcal H_{k-1})}{X_{k-1}},$$
is an H-predictable process. □
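The two factors of Theorem 1.6 can be computed explicitly on a finite tree. The following sketch (our own illustration, assuming the same coin-flip filtration as before) takes the positive process X_n = exp(S_n) and checks the identity X = K^X N^X, the martingale property of N^X and the predictability of K^X.

```python
import itertools, math

# Coin-flip filtration: H_n generated by n fair +-1 flips.
N = 4
paths = list(itertools.product([-1, 1], repeat=N))

def X(p, n):
    # positive adapted process X_n = exp(S_n)
    return math.exp(sum(p[:n]))

def cond_exp(f, p, n):
    # E(f(., n) | H_{n-1}): average over the n-th flip
    return 0.5 * (f(p[:n-1] + (-1,), n) + f(p[:n-1] + (1,), n))

def NX(p, n):
    # N^X_n = X_0 * prod_{k<=n} X_k / E(X_k | H_{k-1})  (the martingale part)
    out = X(p, 0)
    for k in range(1, n + 1):
        out *= X(p, k) / cond_exp(X, p, k)
    return out

def KX(p, n):
    # K^X_n = prod_{k<=n} E(X_k | H_{k-1}) / X_{k-1}  (the predictable part)
    out = 1.0
    for k in range(1, n + 1):
        out *= cond_exp(X, p, k) / X(p, k - 1)
    return out

for p in paths:
    for n in range(1, N + 1):
        assert abs(X(p, n) - KX(p, n) * NX(p, n)) < 1e-9   # X = K^X N^X
        assert abs(cond_exp(NX, p, n) - NX(p, n - 1)) < 1e-9  # N^X martingale
        q = p[:n-1] + (-p[n-1],) + p[n:]
        assert abs(KX(p, n) - KX(q, n)) < 1e-9             # K^X predictable
```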

Remark 1.7 In terms of Doob’s decomposition X = M^X + V^X, one has
$$K_n^X = \frac{M_{n-1}^X + V_n^X}{X_{n-1}}\,K_{n-1}^X = \prod_{k=1}^{n}\frac{M_{k-1}^X + V_k^X}{X_{k-1}},\qquad N_n^X = X_n/K_n^X,\quad n\ge 1.$$

Corollary 1.8 Any positive H-supermartingale Y admits a unique multiplicative decomposition Y = N D, where N is an H-martingale and D an H-predictable decreasing process with D_0 = 1.


1.4 Exponential process

Given an integrable process X such that X_0 = 0, we define the exponential of X, denoted E(X), as the solution of the following difference equation:
$$\begin{cases}\mathcal E(X)_n - \mathcal E(X)_{n-1} = \mathcal E(X)_{n-1}\,\Delta X_n, & \forall n\ge 1,\\ \mathcal E(X)_0 = 1.\end{cases}\tag{1}$$

Proposition 1.9 The solution of (1) is given by
$$\mathcal E(X)_n = \prod_{k=1}^{n}(1+\Delta X_k),\ \forall n\ge 1.\tag{2}$$
If ∆X_n > −1 for all n ≥ 0, then E(X) is positive.
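A one-line recursion makes the equivalence between (1) and (2) concrete. The sketch below (increments chosen arbitrarily for illustration) builds E(X) from the closed form (2) and verifies the difference equation (1) and the positivity claim.

```python
# Delta X_0 = X_0 = 0, then some arbitrary increments, all > -1
dX = [0.0, 0.5, -0.3, 0.2, -0.1]

E = [1.0]                        # E(X)_0 = 1
for d in dX[1:]:
    E.append(E[-1] * (1.0 + d))  # closed form (2), one factor at a time

# verify the difference equation (1): Delta E(X)_n = E(X)_{n-1} * Delta X_n
for n in range(1, len(dX)):
    assert abs((E[n] - E[n - 1]) - E[n - 1] * dX[n]) < 1e-12

# all increments are > -1, so E(X) stays positive (Proposition 1.9)
assert all(e > 0 for e in E)
print(round(E[-1], 6))  # prints 1.134, i.e. 1.5 * 0.7 * 1.2 * 0.9
```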

Lemma 1.10 Let ψ and γ be predictable and M and N be two martingales. Then
$$\mathcal E(\psi\bullet M)\,\mathcal E(\gamma\bullet N) = \mathcal E\big(\psi\bullet M + \gamma\bullet N + \psi\gamma\bullet[M,N]\big).$$

Proof. This result is known as Yor’s equality. By definition, the two sides are equal to 1 at time 0. For n ≥ 1, the left-hand side $K_n := \mathcal E(\psi\bullet M)_n\,\mathcal E(\gamma\bullet N)_n$ satisfies $K_n = K_{n-1}(1+\psi_n\Delta M_n)(1+\gamma_n\Delta N_n)$. The right-hand side $J_n := \mathcal E(\psi\bullet M+\gamma\bullet N+\psi\gamma\bullet[M,N])_n$ satisfies $J_n = J_{n-1}(1+\psi_n\Delta M_n+\gamma_n\Delta N_n+\psi_n\gamma_n\Delta M_n\Delta N_n)$. Assuming by recurrence that $K_{n-1}=J_{n-1}$, the result follows. □

1.5 Girsanov’s transformation

Theorem 1.11 Let X be a (P, H)-martingale, Q a probability measure such that Q ∼ P on H_n for all n ≥ 0, and $L_n := \frac{d\mathbb Q}{d\mathbb P}\big|_{\mathcal H_n}$. If X_n ∈ L¹(Q) for all n ≥ 0, then the process $\widetilde X$ defined as
$$\widetilde X_n = X_n - \sum_{k=1}^{n}\frac{1}{L_{k-1}}\,\Delta\langle X,L\rangle_k,\ \forall n\ge 0,$$
is a Q-martingale.

Proof. The result follows from the fact that, from Doob’s decomposition, the process X^Q defined, for n ≥ 0, as
$$X_n^{\mathbb Q} = X_n - \sum_{k=1}^{n}\mathbb E_{\mathbb Q}[\Delta X_k|\mathcal H_{k-1}] = X_n - \sum_{k=1}^{n}\frac{1}{L_{k-1}}\,\mathbb E_{\mathbb P}[L_k\Delta X_k|\mathcal H_{k-1}]$$
is a Q-martingale. □
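The theorem can be tested exactly on a finite tree. In the sketch below (our own example, not from the paper), P makes the ±1 flips fair while Q gives probability q to +1; X is the fair random walk, a P-martingale, and we check that the corrected process $\widetilde X$ is a Q-martingale. Here the correction term is the familiar drift 2q − 1.

```python
import itertools

N, q = 4, 0.7
paths = list(itertools.product([-1, 1], repeat=N))

def prob_Q(p, n):            # Q-probability of the first n flips of path p
    out = 1.0
    for e in p[:n]:
        out *= q if e == 1 else 1.0 - q
    return out

def L(p, n):                 # density L_n = dQ/dP on H_n (the P-weight is 2^-n)
    return prob_Q(p, n) * 2 ** n

def X(p, n):                 # fair random walk: a P-martingale
    return float(sum(p[:n]))

def bracket_incr(p, k):      # Delta<X,L>_k = E_P(DX_k DL_k | H_{k-1})
    return 0.5 * sum(e * (L(p[:k-1] + (e,), k) - L(p, k - 1))
                     for e in (-1, 1))

def X_tilde(p, n):           # Girsanov-corrected process of Theorem 1.11
    return X(p, n) - sum(bracket_incr(p, k) / L(p, k - 1)
                         for k in range(1, n + 1))

# check: E_Q(X_tilde_n | H_{n-1}) = X_tilde_{n-1} on every path
for p in paths:
    for n in range(1, N + 1):
        eq = sum(w * X_tilde(p[:n-1] + (e,), n)
                 for e, w in ((-1, 1 - q), (1, q)))
        assert abs(eq - X_tilde(p, n - 1)) < 1e-12
# here Delta<X,L>_k / L_{k-1} equals the Q-drift 2q - 1 of the walk
assert abs(bracket_incr(paths[0], 1) / L(paths[0], 0) - (2 * q - 1)) < 1e-12
```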

1.6 Enlargement of filtration

In continuous time, a difficult problem is to give conditions such that an F-martingale is a semimartingale for two filtrations satisfying F ⊂ G and, if it is the case, to give the G-semimartingale decomposition of an F-martingale. In discrete time, the following proposition is an easy consequence of Doob’s decomposition; it states that if F ⊂ G, then any F-martingale is a G-semimartingale, and it gives explicitly the decomposition of this semimartingale.

Proposition 1.12 In a discrete time setting, any integrable process is a special semimartingale in any filtration with respect to which it is adapted: if F ⊂ G and X is an F-martingale, it is a G special semimartingale with decomposition X = M^G + V^G, where M^G is a G-martingale and V^G is G-predictable, V_0^G = 0, and
$$\Delta V_n^{\mathbb G} = \mathbb E(\Delta X_n|\mathcal G_{n-1}),\ n\ge 1.$$


Comment 1.13 Note that results in continuous time can be directly applied to discrete time: if F is a discrete time filtration and X a discrete time process, one can study the right-continuous jumping filtration $\widetilde{\mathbb F}$ defined in continuous time, for n ≤ t < n+1, as $\widetilde{\mathcal F}_t = \mathcal F_n$, and the càdlàg process $\widetilde X_t = \sum_n X_n \mathbb 1_{\{n\le t<n+1\}}$. One interest of our computations relies on the fact that we do not need the hypotheses made in continuous time, and that our proofs are simple.

Another goal of this paper is to study how enlarging the filtration may introduce arbitrages. To do this, we first recall a definition of no arbitrage for a given filtration.

Definition 1.14 Let X be an H-semimartingale, where H is a given filtration. We say that the model (X, H) has no arbitrages if there exists a positive H-martingale L, with L_0 = 1, such that XL is an H-martingale.

We start with a general result, valid for any filtration H:

Lemma 1.15 Let Y be an integrable H-semimartingale. If there exists a positive H-adapted process ψ such that
$$\mathbb E(Y_n\psi_n|\mathcal H_{n-1}) = Y_{n-1}\,\mathbb E(\psi_n|\mathcal H_{n-1}),\ \forall n\ge 1,$$
then there exists a positive H-martingale L such that LY is an H-martingale.

Proof. Let Y be a (P, H)-semimartingale with decomposition Y = M^Y + V^Y, with ∆V_n^Y = E(∆Y_n|H_{n−1}) and where M^Y is a (P, H)-martingale. Define, for a given ψ, the (P, H)-martingale L by
$$L_0 = 1,\qquad L_n = \prod_{k=1}^{n}\frac{\psi_k}{\mathbb E(\psi_k|\mathcal H_{k-1})} = L_{n-1}\frac{\psi_n}{\mathbb E(\psi_n|\mathcal H_{n-1})},\ n\ge 1.$$
Then, setting dQ = L dP, the process M^Y decomposes as M^Y = m^M + V^M, where m^M is a (Q, H)-martingale and
$$\Delta V_n^M = \mathbb E_{\mathbb Q}(\Delta M_n^Y|\mathcal H_{n-1}) = \frac{1}{L_{n-1}}\mathbb E_{\mathbb P}(L_n\Delta M_n^Y|\mathcal H_{n-1}) = \frac{1}{L_{n-1}}\big(\mathbb E_{\mathbb P}(L_n M_n^Y|\mathcal H_{n-1})- L_{n-1}M_{n-1}^Y\big) = \frac{1}{\mathbb E_{\mathbb P}(\psi_n|\mathcal H_{n-1})}\mathbb E_{\mathbb P}(\psi_n\Delta M_n^Y|\mathcal H_{n-1}).$$
The process Y is a (Q, H)-martingale if V^M + V^Y = 0, or equivalently ∆V_n^Y + ∆V_n^M = 0, that is,
$$\mathbb E(\psi_n\Delta M_n^Y|\mathcal H_{n-1}) + \mathbb E(\psi_n|\mathcal H_{n-1})\,\mathbb E(\Delta Y_n|\mathcal H_{n-1}) = 0.$$
We develop and use that ∆M_n^Y = Y_n − E(Y_n|H_{n−1}) and obtain, after simplification, E(ψ_n Y_n|H_{n−1}) = E(ψ_n|H_{n−1}) Y_{n−1}. □

In the setting of enlargement of filtration, we introduce the following "model free" definition.

Definition 1.16 Let F ⊂ G. We say that the model (F, G) is arbitrage free if there exists a positive G-martingale L with L_0 = 1 (called a deflator) such that, for any F-martingale X, the process XL is a G-martingale.

Our definition is "model free" in the sense that we do not specify the price process in the filtration F. The study of conditions under which, for a given martingale X, there exists a deflator can be found in Choulli and Deng [7]. In the enlargement of filtration setting, we assume that there are no arbitrages in F, and we work under a risk neutral probability measure in the filtration F. Working under the historical probability does not create problems: it suffices to change the probability at the beginning.

2 Initial Enlargement

The filtration G = (G_n, n ≥ 0) is an initial enlargement of F with a random variable ξ, that is, G_n = F_n ∨ σ(ξ) for all n ≥ 0.


2.1 Bridge

We study the following particular example. Let (Y_i, i ≥ 1) be a sequence of i.i.d. random variables with zero mean and let X be the process X_0 := 0, X_n := Σ_{i=1}^n Y_i, n ≥ 1. For N fixed, we put ξ := X_N, we denote by F the natural filtration of X, and we note that X is an F-martingale.

We need to compute ∆V_n = E(∆X_n|F_{n−1} ∨ σ(X_N)). Using the fact that the (Y_i, i ≥ 1) are i.i.d., we have, for n ≤ j ≤ N,
$$(Y_j, X_1,\dots,X_{n-1},X_N) \overset{\text{law}}{=} (Y_n, X_1,\dots,X_{n-1},X_N),$$
hence
$$\mathbb E(Y_n|\mathcal F_{n-1}\vee\sigma(X_N)) = \mathbb E(Y_j|\mathcal F_{n-1}\vee\sigma(X_N)) = \frac{1}{N-(n-1)}\mathbb E(Y_n+\dots+Y_j+\dots+Y_N|\mathcal F_{n-1}\vee\sigma(X_N)) = \frac{1}{N-(n-1)}\mathbb E(X_N-X_{n-1}|\mathcal F_{n-1}\vee\sigma(X_N)) = \frac{X_N-X_{n-1}}{N-(n-1)}.$$

Therefore, the process X^G defined as
$$X_n^{\mathbb G} = X_n - \sum_{k=1}^{n\wedge N}\frac{X_N-X_{k-1}}{N-(k-1)},\ n\ge 0,$$
is a G-martingale.
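The bridge formula above can be verified exactly for a symmetric Bernoulli walk (a concrete instance we choose for illustration): conditioning on G_{n−1} amounts to averaging uniformly over all paths that share the first n−1 steps and the terminal value X_N.

```python
import itertools
from fractions import Fraction  # exact arithmetic, so checks are equalities

N = 4
paths = list(itertools.product([-1, 1], repeat=N))  # fair +-1 walk

def S(p, n):
    return sum(p[:n])

def XG(p, n):
    # X^G_n = X_n - sum_{k<=n} (X_N - X_{k-1}) / (N - (k-1))
    return S(p, n) - sum(Fraction(S(p, N) - S(p, k - 1), N - (k - 1))
                         for k in range(1, n + 1))

def cond_exp_G(f, p, n):
    # E(f_n | G_{n-1}) with G_{n-1} = F_{n-1} v sigma(X_N): uniform average
    # over paths with the same first n-1 steps and the same terminal value
    cands = [c for c in paths
             if c[:n-1] == p[:n-1] and S(c, N) == S(p, N)]
    return sum(f(c, n) for c in cands) / len(cands)

# check the G-martingale property of X^G on every path and date
for p in paths:
    for n in range(1, N + 1):
        assert cond_exp_G(XG, p, n) == XG(p, n - 1)
```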

Comment 2.1 This formula is similar to the one obtained for Lévy bridges: if X is an integrable Lévy process in continuous time (e.g., a Brownian motion) with natural filtration F^X, setting G = F^X ∨ σ(X_T) leads to
$$X_t^{\mathbb G} = X_t - \int_0^t \frac{X_T - X_s}{T-s}\,ds,\quad 0\le t\le T,$$
where X^G is a G-martingale.

2.2 Initial enlargement with ξ, a Z-valued random variable

Let X be an F-martingale, ξ be a r.v. taking values in Z and, for any j ∈ Z, let p(j) be the F-martingale defined as p_n(j) = P(ξ = j|F_n).

Then, the Doob decomposition of X in G is X = M^G + V^G, where M^G is a G-martingale and, for n ≥ 1, ∆V_n^G = E(∆X_n|F_{n−1} ∨ σ(ξ)), so that
$$(\Delta V_n^{\mathbb G})\mathbb 1_{\{\xi=j\}} = \mathbb 1_{\{\xi=j\}}\frac{\mathbb E(\mathbb 1_{\{\xi=j\}}\Delta X_n|\mathcal F_{n-1})}{\mathbb P(\xi=j|\mathcal F_{n-1})} = \mathbb 1_{\{\xi=j\}}\frac{\mathbb E(p_n(j)\Delta X_n|\mathcal F_{n-1})}{p_{n-1}(j)} = \mathbb 1_{\{\xi=j\}}\frac{\Delta\langle X, p(j)\rangle_n}{p_{n-1}(j)},\tag{3}$$
where we have used the tower property in the second equality. On the set {ξ = j}, one has p_n(j) ≠ 0, ∀n ≥ 0. Indeed,
$$\mathbb E(\mathbb 1_{\{p_n(j)=0\}}\mathbb 1_{\{\xi=j\}}) = \mathbb E\big(\mathbb 1_{\{p_n(j)=0\}}\mathbb E(\mathbb 1_{\{\xi=j\}}|\mathcal F_n)\big) = \mathbb E(\mathbb 1_{\{p_n(j)=0\}}p_n(j)) = 0.$$

Therefore, the process X^G defined as
$$X_n^{\mathbb G} = X_n - \sum_{k=1}^{n}\frac{\Delta\langle X, p(j)\rangle_k|_{j=\xi}}{p_{k-1}(\xi)}$$
is a G-martingale.

Comment 2.2 In continuous time, under Jacod’s hypothesis P(τ ∈ du|F_t) = p_t(u)P(τ ∈ du), the process X^G is a G-martingale, where
$$X_t^{\mathbb G} = X_t - \int_0^t \frac{d\langle X, p(u)\rangle_s|_{u=\tau}}{p_{s-}(\tau)}.$$


2.3 Arbitrages

Lemma 2.3 If ξ ∈ F_N for some N and ξ ∉ F_0, the model (F, G) is not arbitrage free.

Proof. Let X_n = E(ξ|F_n). If a G-deflator L existed, the process XL would be a G-martingale, and X_n L_n = E(X_N L_N|G_n). Using the fact that X_N = ξ ∈ G_n, 0 ≤ n ≤ N, we obtain E(X_N L_N|G_n) = X_N L_n, in particular X_N L_0 = X_0 L_0, which is not possible since X_N = ξ is not in F_0. □

3 Progressive Enlargement

We assume that τ is a random variable valued in N ∪ {+∞}, and introduce the filtration G where, for n ≥ 0, we set G_n = F_n ∨ σ(τ ∧ n). In particular, {τ = 0} ∈ G_0, so that, in general, G_0 is not trivial.

In continuous time, many results are obtained under the hypothesis that τ avoids F-stopping times, or that all F-martingales are continuous, which is not the case here. We present some basic results, and we refer to Romo Romero [9] for more information.

3.1 General results

We now assume that F_0 is trivial. If Y is a G-adapted process, then there exists an F-adapted process y such that
$$Y_n\mathbb 1_{\{n<\tau\}} = y_n\mathbb 1_{\{n<\tau\}},\ \forall n\ge 0.\tag{4}$$
If Y is a G-predictable process, there exists an F-predictable process y such that
$$Y_n\mathbb 1_{\{n\le\tau\}} = y_n\mathbb 1_{\{n\le\tau\}},\ \forall n\ge 0.$$

We introduce the supermartingale
$$Z_n = \mathbb P(\tau>n|\mathcal F_n),\ \forall n\ge 0,$$
and its Doob decomposition Z = M − A with
$$A_0 = 0,\qquad \Delta A_n = -\mathbb E(\Delta Z_n|\mathcal F_{n-1}) = \mathbb P(\tau=n|\mathcal F_{n-1}),\ \forall n\ge 1.$$
In particular, since F_0 is trivial, Z_0 = M_0 = P(τ > 0).

We also introduce the supermartingale
$$\widetilde Z_n = \mathbb P(\tau\ge n|\mathcal F_n),\ \forall n\ge 0,$$
and its Doob decomposition $\widetilde Z = \widetilde M - \widetilde A$, where $\widetilde M$ is an F-martingale and $\widetilde A$ the F-predictable increasing process satisfying $\widetilde A_0 = 0$, $\Delta\widetilde A_n = \mathbb P(\tau=n-1|\mathcal F_{n-1})$, ∀n ≥ 1. We shall often use the trivial equalities
$$\widetilde Z_n = \mathbb P(\tau>n-1|\mathcal F_n) = Z_n + \mathbb P(\tau=n|\mathcal F_n),\qquad Z_n = \mathbb P(\tau\ge n+1|\mathcal F_n),\qquad \mathbb E(\widetilde Z_n|\mathcal F_{n-1}) = Z_{n-1}.$$

Proposition 3.1 On the set {n ≤ τ}, $\widetilde Z_n$ and $Z_{n-1}$ are positive. On the set {n > τ}, $\widetilde Z_n$ and $Z_{n-1}$ are (strictly) smaller than 1.

Proof. The first assertion is obtained from the two following equalities:
$$\mathbb E(\mathbb 1_{\{n\le\tau\}}\mathbb 1_{\{Z_{n-1}=0\}}) = \mathbb E\big(\mathbb P(n\le\tau|\mathcal F_{n-1})\mathbb 1_{\{Z_{n-1}=0\}}\big) = \mathbb E(Z_{n-1}\mathbb 1_{\{Z_{n-1}=0\}}) = 0,$$
$$\mathbb E(\mathbb 1_{\{n\le\tau\}}\mathbb 1_{\{\widetilde Z_n=0\}}) = \mathbb E\big(\mathbb P(n\le\tau|\mathcal F_n)\mathbb 1_{\{\widetilde Z_n=0\}}\big) = \mathbb E(\widetilde Z_n\mathbb 1_{\{\widetilde Z_n=0\}}) = 0.$$
The second assertion is left to the reader. □


Lemma 3.2 One has, for any random time τ:
a) if the random variable Y is integrable,
$$\mathbb E(Y|\mathcal G_n)\mathbb 1_{\{\tau>n\}} = \mathbb 1_{\{\tau>n\}}\frac{\mathbb E(Y\mathbb 1_{\{n<\tau\}}|\mathcal F_n)}{Z_n},\ \forall n\ge 0;$$
b) for Y_n integrable and F_n-measurable,
$$\mathbb E(Y_n|\mathcal G_{n-1})\mathbb 1_{\{\tau\ge n\}} = \mathbb 1_{\{\tau\ge n\}}\frac{1}{Z_{n-1}}\mathbb E(Y_n\widetilde Z_n|\mathcal F_{n-1}),\qquad \mathbb E\Big(\frac{Y_n}{\widetilde Z_n}\Big|\mathcal G_{n-1}\Big)\mathbb 1_{\{\tau\ge n\}} = \mathbb 1_{\{\tau\ge n\}}\frac{1}{Z_{n-1}}\mathbb E(Y_n\mathbb 1_{\{\widetilde Z_n>0\}}|\mathcal F_{n-1}),\ \forall n\ge 1.\tag{5}$$

Proof. a) Taking Y_n = E(Y|G_n) in (4) and taking expectations w.r.t. F_n, we obtain E(Y 1_{{n<τ}}|F_n) = E(1_{{n<τ}}|F_n) y_n = Z_n y_n.
b) Only the second equality requires a proof. For n ≥ 1, we have
$$\mathbb E\Big(\frac{Y_n}{\widetilde Z_n}\mathbb 1_{\{\tau\ge n\}}\Big|\mathcal G_{n-1}\Big)\mathbb 1_{\{\tau\ge n\}} = \mathbb 1_{\{\tau\ge n\}}\frac{1}{Z_{n-1}}\mathbb E\Big(\frac{Y_n}{\widetilde Z_n}\mathbb 1_{\{\tau\ge n\}}\mathbb 1_{\{\widetilde Z_n>0\}}\Big|\mathcal F_{n-1}\Big) = \mathbb 1_{\{\tau\ge n\}}\frac{1}{Z_{n-1}}\mathbb E(Y_n\mathbb 1_{\{\widetilde Z_n>0\}}|\mathcal F_{n-1}),$$
where, for the last equality, we condition first on $\mathcal F_n$: $\mathbb E(\mathbb 1_{\{\tau\ge n\}}|\mathcal F_n) = \widetilde Z_n$ cancels the factor $1/\widetilde Z_n$ on $\{\widetilde Z_n>0\}$. □

Lemma 3.3 Let H_n = 1_{{τ≤n}}, n ≥ 0, and let Λ be the F-predictable process defined as
$$\Lambda_0 = 0,\qquad \Delta\Lambda_n := \frac{\Delta A_n}{Z_{n-1}}\mathbb 1_{\{Z_{n-1}>0\}},\ n\ge 1.$$
The process N defined as
$$N_n := H_n - \Lambda_{n\wedge\tau} = H_n - \sum_{k=1}^{n\wedge\tau}\lambda_k,\ n\ge 0,\tag{6}$$
where λ_n := ∆Λ_n, is a G-martingale.

Proof. It suffices to find the Doob decomposition of the G-semimartingale H. The predictable part of this decomposition is K with, for n ≥ 1,
$$\Delta K_n = \mathbb E(\Delta H_n|\mathcal G_{n-1}) = \mathbb 1_{\{\tau\le n-1\}}\cdot 0 + \mathbb 1_{\{\tau>n-1\}}\frac{\mathbb E(\Delta H_n\mathbb 1_{\{\tau>n-1\}}|\mathcal F_{n-1})}{Z_{n-1}} = \mathbb 1_{\{\tau\ge n\}}\frac{\mathbb E(Z_{n-1}-Z_n|\mathcal F_{n-1})}{Z_{n-1}} = \mathbb 1_{\{\tau\ge n\}}\frac{A_n-A_{n-1}}{Z_{n-1}}.$$
We conclude, noting that Z_{n−1} > 0 on {1 ≤ n ≤ τ}, so that, on {n ≤ τ}, one has ∆K_n = λ_n, where $\lambda_n = \frac{\Delta A_n}{Z_{n-1}}\mathbb 1_{\{Z_{n-1}>0\}}$ is F-predictable. Note for future use that $0\le\lambda_n\le \mathbb E\big(1-\frac{Z_n}{Z_{n-1}}\big|\mathcal F_{n-1}\big)\le 1$. □
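A minimal sketch of Lemma 3.3, under the simplifying assumption (ours, for illustration only) that F is trivial: then Z, A and λ are deterministic, λ_n is the hazard rate of τ, and the martingale check for N = H − Λ_{·∧τ} reduces to averaging over the atoms of σ(τ ∧ (n−1)).

```python
from fractions import Fraction  # exact arithmetic

# law of tau on {1,...,5}; with trivial F, G_n = sigma(tau ^ n)
pmf = {1: Fraction(1, 4), 2: Fraction(1, 8), 3: Fraction(1, 4),
       4: Fraction(1, 8), 5: Fraction(1, 4)}
T = 5

def Z(n):                     # Z_n = P(tau > n)
    return sum(q for t, q in pmf.items() if t > n)

def dA(n):                    # Delta A_n = P(tau = n | F_{n-1}) = P(tau = n)
    return pmf.get(n, Fraction(0))

def lam(n):                   # lambda_n = Delta A_n / Z_{n-1} on {Z_{n-1} > 0}
    return dA(n) / Z(n - 1) if Z(n - 1) > 0 else Fraction(0)

def Nproc(tau, n):            # N_n = 1_{tau<=n} - sum_{k<=n^tau} lambda_k
    return int(tau <= n) - sum(lam(k) for k in range(1, min(n, tau) + 1))

# G-martingale check: E(Delta N_n | G_{n-1}) = 0 on every atom of G_{n-1},
# i.e. on {tau = 1}, ..., {tau = n-1} and on {tau >= n}
for n in range(1, T + 1):
    for t in range(1, n):     # on {tau = t}, t < n: Delta N_n = 0
        assert Nproc(t, n) == Nproc(t, n - 1)
    e = sum(q * (Nproc(t, n) - Nproc(t, n - 1))
            for t, q in pmf.items() if t >= n) / Z(n - 1)
    assert e == 0             # conditional expectation on {tau >= n}
```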

Proposition 3.4 Suppose Z is positive. The multiplicative decomposition of Z is given by $Z_n = N_n^Z\,\mathcal E(-\Lambda)_n$, n ≥ 0, where N^Z is an F-martingale and Λ is defined in Lemma 3.3.

Proof. We have seen that there exist an F-martingale N^Z and an F-predictable process K^Z such that Z = N^Z K^Z with
$$K_n^Z = \prod_{k=1}^{n}\Big[1 - \frac{Z_{k-1}-\mathbb E(Z_k|\mathcal F_{k-1})}{Z_{k-1}}\Big],\ \forall n\ge 1.$$
From Lemma 3.3 and the positivity of Z, we have
$$\Delta\Lambda_n = \frac{Z_{n-1}-\mathbb E(Z_n|\mathcal F_{n-1})}{Z_{n-1}},\ \forall n\ge 1,$$
hence $K_n^Z = \prod_{k=1}^{n}(1-\Delta\Lambda_k) = \mathcal E(-\Lambda)_n$. □


3.1.1 Properties

Lemma 3.5 If $\widetilde Z$ is predictable, and N is defined in (6), then E(N_n|F_n) − N_0 = M_0 − M_n for all n ≥ 0, where M is the martingale part in the Doob decomposition of Z.

Proof. By definition of N, we have, for n ≥ 1,
$$\mathbb E(\Delta N_n|\mathcal F_n) = \mathbb E(\mathbb 1_{\{\tau\le n\}}-\mathbb 1_{\{\tau\le n-1\}}-\lambda_n\mathbb 1_{\{\tau\ge n\}}|\mathcal F_n) = \mathbb E(-\mathbb 1_{\{\tau>n\}}+\mathbb 1_{\{\tau\ge n\}}|\mathcal F_n)-\lambda_n\mathbb E(\mathbb 1_{\{\tau\ge n\}}|\mathcal F_n) = -Z_n+\widetilde Z_n-\lambda_n\widetilde Z_n = -\Delta Z_n-\lambda_n Z_{n-1},$$
where the last equality uses that, $\widetilde Z$ being predictable, $\widetilde Z_n = \mathbb E(\widetilde Z_n|\mathcal F_{n-1}) = Z_{n-1}$. Finally, using that ∆Z_n + ∆A_n = ∆M_n and $\lambda_n = \mathbb 1_{\{Z_{n-1}>0\}}\frac{\Delta A_n}{Z_{n-1}}$, which implies
$$\lambda_n Z_{n-1} = \lambda_n Z_{n-1}\mathbb 1_{\{Z_{n-1}>0\}}+\lambda_n Z_{n-1}\mathbb 1_{\{Z_{n-1}=0\}} = \Delta A_n,$$
we get E(∆N_n|F_n) = −∆M_n. □

We denote by H^o the increasing integrable F-adapted process²
$$H_n^o := \sum_{k=0}^{n}\mathbb E(\Delta H_k|\mathcal F_k) = \sum_{k=0}^{n}\mathbb P(\tau=k|\mathcal F_k),\ \forall n\ge 0,$$
which satisfies
$$\mathbb E(Y_\tau\mathbb 1_{\{\tau<\infty\}}) = \mathbb E\Big(\sum_{n=0}^{\infty}Y_n\Delta H_n\Big) = \mathbb E\Big(\sum_{n=0}^{\infty}Y_n\mathbb E(\Delta H_n|\mathcal F_n)\Big) = \mathbb E\Big(\sum_{n=0}^{\infty}Y_n\Delta H_n^o\Big)\tag{7}$$
for any F-adapted bounded process Y. We define $H_\infty^o := H_{\infty-}^o + \mathbb P(\tau=\infty|\mathcal F_\infty)$, where $H_{\infty-}^o = \lim_{n\to\infty}H_n^o = \sum_{k=0}^{\infty}\mathbb P(\tau=k|\mathcal F_k)$. Note that $\Delta\widetilde A_n = H_{n-1}^o - H_{n-2}^o$ and, since $\widetilde A_1 = H_0^o$, we have $\widetilde A_n = H_{n-1}^o$, hence
$$Z_n + H_n^o = Z_n + \Delta H_n^o + H_{n-1}^o = \widetilde Z_n + \widetilde A_n = \widetilde M_n.$$
Furthermore, since $\lim_{n\to\infty}Z_n = \mathbb 1_{\{\tau=\infty\}}$ and $\mathbb E(H_{\infty-}^o) = \lim\mathbb E(H_n^o)\le 1$, one has
$$\widetilde M_n = Z_n + H_n^o = \mathbb E\big(\mathbb 1_{\{\tau=\infty\}} + H_{\infty-}^o\big|\mathcal F_n\big).\tag{8}$$

Lemma 3.6 Let $J := H - \frac{1}{\widetilde Z}\mathbb 1_{[0,\tau]}\bullet H^o$. Then, for any integrable F-adapted process Y, the process Y • J is a G-martingale. In particular, J is a G-martingale.

Proof. From $\Delta J_n = \mathbb 1_{\{\tau=n\}} - \frac{1}{\widetilde Z_n}\mathbb 1_{\{\tau\ge n\}}\mathbb P(\tau=n|\mathcal F_n)$, n ≥ 1, one has $\Delta J_n\mathbb 1_{\{\tau<n\}} = 0$ and, from Lemma 3.2 (5),
$$\mathbb E(Y_n\Delta J_n|\mathcal G_{n-1}) = \mathbb 1_{\{\tau>n-1\}}\frac{1}{Z_{n-1}}\mathbb E(Y_n\Delta J_n\mathbb 1_{\{\tau>n-1\}}|\mathcal F_{n-1}) + \mathbb E(Y_n\Delta J_n\mathbb 1_{\{\tau<n\}}|\mathcal G_{n-1})$$
$$= \mathbb 1_{\{\tau>n-1\}}\frac{1}{Z_{n-1}}\mathbb E\Big(Y_n\Big(\mathbb P(\tau=n|\mathcal F_n)-\frac{1}{\widetilde Z_n}\mathbb P(\tau=n|\mathcal F_n)\mathbb P(\tau\ge n|\mathcal F_n)\mathbb 1_{\{\widetilde Z_n>0\}}\Big)\Big|\mathcal F_{n-1}\Big)$$
$$= \mathbb 1_{\{\tau>n-1\}}\frac{1}{Z_{n-1}}\mathbb E\Big(Y_n\mathbb P(\tau=n|\mathcal F_n)\big(1-\mathbb 1_{\{\widetilde Z_n>0\}}\big)\Big|\mathcal F_{n-1}\Big) = \mathbb 1_{\{\tau>n-1\}}\frac{1}{Z_{n-1}}\mathbb E\Big(Y_n\mathbb P(\tau=n|\mathcal F_n)\mathbb 1_{\{\widetilde Z_n=0\}}\Big|\mathcal F_{n-1}\Big),$$
where the fact that Y is F-adapted has been used in the second equality. It remains to note that, on $\{\widetilde Z_n=0\}$, one has P(τ = n|F_n) = 0 to obtain E(Y_n∆J_n|G_{n−1}) = 0. □

Note that, if Y is an F-martingale, then, from Lemma 1.5, the G-martingale part of Y is orthogonal to J. This result is similar to the one obtained by Choulli et al. [6].

² In fact, H^o is the F-dual optional projection of H, and many proofs are consequences of properties of dual optional projections.

3.1.2 Immersion in progressive enlargement

We recall that F is immersed in G (we shall write F ↪ G) if any F-martingale is a G-martingale. This is equivalent to Z_n = P(τ > n|F_∞) = P(τ > n|F_k) for any k ≥ n ≥ 0.

Lemma 3.7 F is immersed in G if and only if $\widetilde Z$ is predictable and $\widetilde Z_n = \mathbb P(\tau\ge n|\mathcal F_\infty)$, n ≥ 0.

Proof. Assume that F is immersed in G. Then, for n ≥ 0,
$$\widetilde Z_n = \mathbb P(\tau\ge n|\mathcal F_n) = \mathbb P(\tau>n-1|\mathcal F_n) = \mathbb P(\tau>n-1|\mathcal F_{n-1}) = \mathbb P(\tau>n-1|\mathcal F_\infty) = \mathbb P(\tau\ge n|\mathcal F_\infty),$$
where the second and the next to last equalities follow from the immersion assumption. The equality $\widetilde Z_n = \mathbb P(\tau>n-1|\mathcal F_{n-1}) = Z_{n-1}$ establishes the predictability of $\widetilde Z$.

Assume now that $\widetilde Z$ is predictable and $\widetilde Z_n = \mathbb P(\tau\ge n|\mathcal F_\infty)$. Then $\widetilde Z_n = \mathbb P(\tau\ge n|\mathcal F_{n-1})$ and
$$\mathbb P(\tau>n|\mathcal F_n) = \mathbb P(\tau\ge n+1|\mathcal F_n) = \widetilde Z_{n+1} = \mathbb P(\tau>n|\mathcal F_\infty).$$
The immersion property follows. □

Remark 3.8 We will see in the proof of Theorem 3.24 that $\widetilde Z$ predictable implies that τ is a pseudo-stopping time, hence Z (and $\widetilde Z$) is decreasing.

Theorem 3.9 Suppose F ↪ G. Then the following assertions are equivalent:
(i) Z is F-predictable.
(ii) For any G-predictable process U, one has $\mathbb E\big(\sum_{k=1}^{n} U_k\Delta N_k\big|\mathcal F_n\big) = 0$, ∀n ≥ 1; in particular, $\mathbb E(\Delta N_n|\mathcal F_n) = 0$, ∀n ≥ 1.
(iii) Any F-martingale X is orthogonal to N.

Proof. (i) ⇒ (ii). By uniqueness of Doob’s decomposition and the predictability of Z, Z_n = M_0 − A_n, hence ∆M_n = 0. By Lemmas 3.5 and 3.7, we have E(∆N_n|F_n) = −∆M_n and E(∆N_n|F_n) = 0. For k ≤ n, let $\bar U_k\in\mathcal F_{k-1}$ be such that $\bar U_k\mathbb 1_{\{\tau>k-1\}} = U_k\mathbb 1_{\{\tau>k-1\}}$; then
$$\mathbb E(U_k\Delta N_k|\mathcal F_n) = \bar U_k\big[\mathbb E(-\mathbb 1_{\{\tau>k\}}+\mathbb 1_{\{\tau\ge k\}}|\mathcal F_n)-\lambda_k\mathbb E(\mathbb 1_{\{\tau\ge k\}}|\mathcal F_n)\big],$$
which, using the immersion property, gives
$$\mathbb E(U_k\Delta N_k|\mathcal F_n) = \bar U_k\big[\mathbb E(-\mathbb 1_{\{\tau>k\}}+\mathbb 1_{\{\tau\ge k\}}|\mathcal F_k)-\lambda_k\mathbb E(\mathbb 1_{\{\tau\ge k\}}|\mathcal F_k)\big] = \bar U_k\,\mathbb E(\Delta N_k|\mathcal F_k) = 0;$$
taking the sum over all k ≤ n, we obtain the desired result.

(ii) ⇒ (iii). We prove that E(∆X_n ∆N_n|G_{n−1}) = 0 for all n ≥ 1. From Lemma 3.2, we have
$$\mathbb E(\Delta X_n\Delta N_n|\mathcal G_{n-1})\mathbb 1_{\{\tau\ge n\}} = \frac{1}{Z_{n-1}}\mathbb E\big[\Delta X_n\mathbb 1_{\{\tau>n-1\}}\Delta N_n\big|\mathcal F_{n-1}\big]\mathbb 1_{\{\tau\ge n\}};$$
since ∆X_n ∈ F_n and $\mathbb 1_{\{\tau>n-1\}}\in\mathcal G_{n-1}$, we have, from (ii),
$$\mathbb E(\Delta X_n\mathbb 1_{\{\tau>n-1\}}\Delta N_n|\mathcal F_{n-1}) = \mathbb E\big[\Delta X_n\,\mathbb E(\mathbb 1_{\{\tau>n-1\}}\Delta N_n|\mathcal F_n)\big|\mathcal F_{n-1}\big] = 0,$$
hence $\mathbb E(\Delta X_n\Delta N_n|\mathcal G_{n-1})\mathbb 1_{\{\tau\ge n\}} = 0$. On the set {τ < n}, using that {τ < n} ∈ G_{n−1} and that ∆N_n = 0 on {τ < n}, we obtain $\mathbb E(\Delta X_n\Delta N_n|\mathcal G_{n-1})\mathbb 1_{\{\tau<n\}} = 0$. Finally, we get E[∆(X_n N_n)|G_{n−1}] = 0.

(iii) ⇒ (i). Apply (iii) to the F-martingale M, the martingale part of Z: on the one hand, for n ≥ 1, E(∆M_n∆N_n|G_{n−1}) = 0, hence E(∆M_n∆N_n) = 0. On the other hand, E(∆M_n∆N_n) = E[∆M_n E(∆N_n|F_n)] and, applying Lemma 3.5, we obtain E(∆N_n∆M_n) = −E(|∆M_n|²), which implies E(|∆M_n|²) = 0. Therefore ∆M_n = 0, or equivalently E(Z_n|F_{n−1}) = Z_n, which is equivalent to the predictability of Z. □

Example 3.10 Assume that τ = inf{n : Γ_n ≥ Θ}, where Γ is an increasing F-adapted process and Θ is independent from F, with an exponential law. Then the immersion property holds and
$$Z_n = \mathbb P(\Theta>\Gamma_n|\mathcal F_n) = e^{-\Gamma_n}\quad\big(\text{and } \widetilde Z_n = \mathbb P(\tau>n-1|\mathcal F_n) = \mathbb P(\Theta>\Gamma_{n-1}|\mathcal F_n) = e^{-\Gamma_{n-1}} = Z_{n-1}\big).$$
If Γ is predictable (with Γ_0 = 0), the Doob decomposition of Z is Z_n = 1 − A_n with A_n = 1 − e^{−Γ_n}, and ∆Λ_n = ∆A_n/Z_{n−1}. Moreover, Z is predictable and the assertions of Theorem 3.9 hold. If Γ is not predictable,
$$\Delta\Lambda_n = \frac{\Delta A_n}{Z_{n-1}} = \frac{\mathbb E(-\Delta Z_n|\mathcal F_{n-1})}{Z_{n-1}} = e^{-\Gamma_{n-1}}\frac{1-\mathbb E(e^{-\Delta\Gamma_n}|\mathcal F_{n-1})}{Z_{n-1}} = 1-\mathbb E(e^{-\Delta\Gamma_n}|\mathcal F_{n-1}).$$

3.1.3 Construction of τ from a given supermartingale

We now answer the following question: let Z be a supermartingale on (Ω, F, P), valued in [0, 1], such that Z_∞ = 0. Is it possible to construct τ such that Z is its Azéma supermartingale? We mimic the general proof of Song [10]. It is rather obvious that one has to extend the probability space. Let us consider the space (Ω × N, F ⊗ N, Q), where N is the set of non-negative integers, N the associated σ-algebra and Q a probability to be constructed so that the identity map τ(ω, n) = n satisfies Q(τ > n|F_n) = Z_n and Q coincides with P on F. To do so, we need to construct a family of martingales M^k which will represent Q(τ ≤ k|F_n). The knowledge of these quantities will allow us to characterize Q({τ ≤ k} ∩ F_n) for any F_n ∈ F_n and, using Kolmogorov arguments, to construct Q on the product space. The measure Q will be a probability, since Q(Ω × N) = 1. This family must be valued in [0, 1] and increasing w.r.t. k, i.e.,
$$M_k^k = 1-Z_k,\qquad M_n^k\le M_n^{k+1},\ \forall k<n,\qquad\text{and}\quad \forall n\le k,\ M_n^k = \mathbb E(M_k^k|\mathcal F_n).$$

We assume that 1 > E(Z_n|F_{n−1}), ∀n ≥ 1. Let k be fixed and define
$$M_k^k := 1-Z_k,\qquad M_n^k = M_{n-1}^k\,\frac{1-Z_n}{1-\mathbb E(Z_n|\mathcal F_{n-1})},\ \forall n>k.$$

It is easy to check that M^k is an F-martingale valued in [0, 1] such that $M_n^k \le M_n^{k+1}$ (the supermartingale property of Z being used to obtain $M_n^k \ge 0$).

3.2 Study before τ

3.2.1 Semimartingale decomposition

Proposition 3.11 Any square integrable F-martingale X stopped at τ is a G-semimartingale with decomposition
$$X_n^\tau = X_n^{\mathbb G} + \sum_{k=1}^{n\wedge\tau}\frac{1}{Z_{k-1}}\,\Delta\langle\widetilde M, X\rangle_k^{\mathbb F},$$
where X^G is a G-martingale (stopped at τ). Here, $\widetilde M$ is the martingale part of the Doob decomposition of the supermartingale $\widetilde Z$.

Proof. We compute the predictable part of the G-semimartingale X on the set {0 ≤ n < τ} using Lemma 3.2:
$$\mathbb 1_{\{\tau>n\}}\mathbb E(\Delta X_{n+1}|\mathcal G_n) = \mathbb 1_{\{\tau>n\}}\frac{1}{Z_n}\mathbb E(\widetilde Z_{n+1}\Delta X_{n+1}|\mathcal F_n).$$
Using now the Doob decomposition of $\widetilde Z$ and the martingale property of X, we obtain
$$\mathbb E(\widetilde Z_{n+1}\Delta X_{n+1}|\mathcal F_n) = \mathbb E(\Delta\widetilde M_{n+1}\Delta X_{n+1}|\mathcal F_n) = \Delta\langle\widetilde M, X\rangle_{n+1}^{\mathbb F},$$
and finally
$$\mathbb 1_{\{\tau>n\}}\mathbb E(\Delta X_{n+1}|\mathcal G_n) = \mathbb 1_{\{\tau>n\}}\frac{1}{Z_n}\,\Delta\langle\widetilde M, X\rangle_{n+1}^{\mathbb F}.$$
□

Comment 3.12 We recall, for the ease of the reader, the Jeulin formula in continuous time: if G is the progressive enlargement of F with a random time τ, any F-martingale X stopped at τ is a G-semimartingale with decomposition
$$X_t^\tau = \widehat X_t + \int_0^{t\wedge\tau}\frac{1}{Z_{s-}}\,d\langle\widetilde M, X\rangle_s^{\mathbb F},$$
where $Z_t = \mathbb P(\tau>t|\mathcal F_t)$ and $\widetilde Z_t = \mathbb P(\tau\ge t|\mathcal F_t)$. Here, $\widetilde Z = \widetilde M - \widetilde A$, where $\widetilde M$ is an F-martingale and $\widetilde A$ is F-predictable. As $\widetilde Z$ is not càdlàg, this is not the standard Doob-Meyer decomposition, established only for càdlàg supermartingales.

3.2.2 Arbitrages

Lemma 3.13 Let $\mathbb G^{\tau-}$ be the filtration G "strictly before τ", i.e., $\mathcal G_n^{\tau-}=\mathcal G_{n\wedge(\tau-1)}$. The model (F, G^{τ−}) is arbitrage free.

Proof. In the case where X is an F-martingale and working in the progressively enlarged filtration, we will find ψ such that, on the set {1 ≤ n < τ} (strictly before τ),
$$\mathbb 1_{\{n-1\le\tau\}}\mathbb E(\psi_n X_n|\mathcal G_{n-1}) = \mathbb 1_{\{n-1\le\tau\}}X_{n-1}\mathbb E(\psi_n|\mathcal G_{n-1}),$$
that is,
$$\mathbb 1_{\{n-1\le\tau\}}\frac{1}{\widetilde Z_{n-1}}\mathbb E(\psi_n X_n Z_n|\mathcal F_{n-1}) = \mathbb 1_{\{n-1\le\tau\}}\frac{X_{n-1}}{\widetilde Z_{n-1}}\mathbb E(\psi_n Z_n|\mathcal F_{n-1}).$$
We are looking for a positive F-adapted process ψ satisfying
$$\mathbb E(\psi_n X_n Z_n|\mathcal F_{n-1}) = X_{n-1}\mathbb E(\psi_n Z_n|\mathcal F_{n-1}).$$
The choice $\psi = (1/Z)\mathbb 1_{\{Z>0\}}+\mathbb 1_{\{Z=0\}}$ provides a solution, valid for any martingale X. □

Theorem 3.14 Assume that τ is not an F-stopping time and denote by $\mathbb G^\tau$ the filtration $\mathcal G_n^\tau = \mathcal G_{\tau\wedge n}$, n ≥ 0. Then the model (F, G^τ) is arbitrage free if and only if, for any n, the set $\{0=\widetilde Z_n<Z_{n-1}\}$ is empty.

We mean here that, for any F-martingale X, the stopped process X^τ admits a deflator. This result was established in Choulli and Deng [7] and is a particular case of the general results obtained in Aksamit et al. [4]. We give here a slightly different proof, by means of the two following propositions.

Proposition 3.15 Assume that, for any n, the set $\{\widetilde Z_n=0<Z_{n-1}\}$ is empty. The process L = E(Y), where Y is the G-martingale defined by $\Delta Y_k = \mathbb 1_{\{\tau\ge k\}}\big(\frac{Z_{k-1}}{\widetilde Z_k}-1\big)$ for k ≥ 1 and Y_0 = 0, is a positive G-martingale. If X is an F-martingale, the process X^τ L is a (G, P)-martingale.

Proof. The process Y is a martingale: for n ≥ 1,
$$\mathbb E(\Delta Y_n|\mathcal G_{n-1}) = \mathbb E\Big(\mathbb 1_{\{\tau\ge n\}}\frac{Z_{n-1}-\widetilde Z_n}{\widetilde Z_n}\Big|\mathcal G_{n-1}\Big) = \mathbb 1_{\{\tau\ge n\}}\frac{1}{Z_{n-1}}\mathbb E\big(\mathbb 1_{\{\widetilde Z_n>0\}}(Z_{n-1}-\widetilde Z_n)\big|\mathcal F_{n-1}\big)$$
$$= \mathbb 1_{\{\tau\ge n\}}\frac{1}{Z_{n-1}}\mathbb E\big(Z_{n-1}-\widetilde Z_n-\mathbb 1_{\{\widetilde Z_n=0\}}(Z_{n-1}-\widetilde Z_n)\big|\mathcal F_{n-1}\big) = \mathbb 1_{\{\tau\ge n\}}\frac{1}{Z_{n-1}}\mathbb E(Z_{n-1}-\widetilde Z_n|\mathcal F_{n-1}) = 0,$$
where we have used (5), the fact that $\mathbb E(\widetilde Z_n|\mathcal F_{n-1}) = Z_{n-1}$ and that, by assumption, $\{\widetilde Z_n=0\}\subset\{Z_{n-1}=0\}$, hence $\mathbb 1_{\{\widetilde Z_n=0\}}(Z_{n-1}-\widetilde Z_n) = 0$.


By assumption, the set $\{\widetilde Z_n=0<Z_{n-1}\}$ is empty. On the set {τ ≥ k}, one has $Z_{k-1}>0$ and $\widetilde Z_k>0$, which implies that $\Delta Y_k = \frac{Z_{k-1}}{\widetilde Z_k}-1 > -1$, hence L is positive. Furthermore, for X an F-martingale,
$$\mathbb E\Big(X_{n+1}^\tau\frac{L_{n+1}}{L_n}\Big|\mathcal G_n\Big) = \mathbb E\Big(X_{(n+1)\wedge\tau}\Big(1+\mathbb 1_{\{\tau\ge n+1\}}\frac{Z_n-\widetilde Z_{n+1}}{\widetilde Z_{n+1}}\Big)\Big|\mathcal G_n\Big) = \mathbb E\Big(X_{n+1}\mathbb 1_{\{\tau\ge n+1\}}\frac{Z_n}{\widetilde Z_{n+1}}\Big|\mathcal G_n\Big)+\mathbb E(X_\tau\mathbb 1_{\{\tau<n+1\}}|\mathcal G_n)$$
$$= \mathbb E\Big(X_{n+1}\mathbb 1_{\{\tau\ge n+1\}}\frac{Z_n}{\widetilde Z_{n+1}}\Big|\mathcal G_n\Big)+X_\tau\mathbb 1_{\{\tau\le n\}} = \mathbb 1_{\{\tau>n\}}\frac{1}{Z_n}\mathbb E\big(X_{n+1}Z_n\mathbb 1_{\{\widetilde Z_{n+1}>0\}}\big|\mathcal F_n\big)+X_\tau\mathbb 1_{\{\tau\le n\}}$$
$$= \mathbb 1_{\{\tau>n\}}\frac{1}{Z_n}\mathbb E\big(X_{n+1}Z_n(1-\mathbb 1_{\{\widetilde Z_{n+1}=0\}})\big|\mathcal F_n\big)+X_\tau\mathbb 1_{\{\tau\le n\}} = \mathbb 1_{\{\tau>n\}}\frac{1}{Z_n}\mathbb E(X_{n+1}Z_n|\mathcal F_n)+X_\tau\mathbb 1_{\{\tau\le n\}} = X_{n\wedge\tau},$$
where we have used that, by assumption, $Z_n\mathbb 1_{\{\widetilde Z_{n+1}=0\}}=0$. Hence the deflator property. □

Remark 3.16 In case of immersion, there are no arbitrages (indeed, any e.m.m. in F will be an e.m.m. in G). This can also be obtained using the previous result since, under the immersion hypothesis, one has $Z_{n-1}=\widetilde Z_n$.

Proposition 3.17 If there exists n ≥ 1 such that the set $\{0=\widetilde Z_n<Z_{n-1}\}$ is not empty, and if τ is not an F-stopping time, there exists an F-martingale X such that X^τ is a G-adapted increasing process with $X_0^\tau = 1$, $\mathbb P(X_\tau^\tau>1)>0$. Hence, the model (F, G^τ) is not arbitrage free.

Proof. The proof is the discrete version of Acciaio et al. [1]. Let $\vartheta = \inf\{n : 0=\widetilde Z_n<Z_{n-1}\}$. The random time ϑ is an F-stopping time satisfying τ ≤ ϑ and P(τ < ϑ) > 0. Let $I_n = \mathbb 1_{\{\vartheta\le n\}}$ and denote by D the F-predictable part of the Doob decomposition of I. One has D_0 = 0 and $\Delta D_n = \mathbb P(\vartheta=n|\mathcal F_{n-1})$. We introduce the F-predictable increasing process U setting $U_n = \frac{1}{\mathcal E(-D)_n}$. Then,
$$\Delta U_n = \frac{1}{\mathcal E(-D)_{n-1}}\Big(\frac{1}{1-\Delta D_n}-1\Big) = \frac{1}{\mathcal E(-D)_{n-1}}\,\frac{\Delta D_n}{1-\Delta D_n} = U_n\Delta D_n.$$
We consider the process X = U K, where K = 1 − I. Then,
$$\Delta X_n = -U_n\Delta I_n + K_{n-1}\Delta U_n = -U_n(\Delta I_n - K_{n-1}\Delta D_n)$$
and
$$\mathbb E(\Delta X_n|\mathcal F_{n-1}) = -U_n\,\mathbb E(\Delta I_n-K_{n-1}\Delta D_n|\mathcal F_{n-1}) = -U_n\big(\mathbb P(\vartheta=n|\mathcal F_{n-1})-K_{n-1}\mathbb P(\vartheta=n|\mathcal F_{n-1})\big) = 0,$$
where we have used that $K_{n-1}\mathbb P(\vartheta=n|\mathcal F_{n-1}) = \mathbb E(K_{n-1}\mathbb 1_{\{\vartheta=n\}}|\mathcal F_{n-1}) = \mathbb P(\vartheta=n|\mathcal F_{n-1})$. Hence X is an F-martingale.

We now prove that X_τ ≥ 1 and P(X_τ > 1) > 0, equivalently that D_τ ≥ 0 and P(D_τ > 0) > 0. For that, we compute
$$\mathbb E(D_\tau\mathbb 1_{\{\tau<\infty\}}) = \sum_{n=0}^{\infty}\mathbb E(D_n\mathbb 1_{\{\tau=n\}}) = \sum_{n=0}^{\infty}\mathbb E(D_n\mathbb P(\tau=n|\mathcal F_n)) = \sum_{n=1}^{\infty}\mathbb E\big(D_n(\mathbb P(\tau>n-1|\mathcal F_n)-\mathbb P(\tau>n|\mathcal F_n))\big) + D_0\mathbb P(\tau=0).$$
Since D is predictable, $\mathbb E(D_n\mathbb P(\tau>n-1|\mathcal F_n)) = \mathbb E(D_n\mathbb P(\tau>n-1|\mathcal F_{n-1})) = \mathbb E(D_nZ_{n-1})$, so that, D_0 being null,
$$\mathbb E(D_\tau\mathbb 1_{\{\tau<\infty\}}) = \sum_{n=1}^{\infty}\mathbb E(D_n(Z_{n-1}-Z_n)) = -\sum_{n=1}^{\infty}\mathbb E(D_n\Delta Z_n) = \mathbb E\Big(\sum_{n=1}^{\infty}Z_{n-1}\Delta D_n\Big) = \mathbb E(Z_{\vartheta-1}\mathbb 1_{\{\vartheta<\infty\}}) > 0,$$
which implies P(D_τ > 0) > 0. □


3.3 After τ

As we mentioned at the beginning, any F-martingale is a G-semimartingale (which is not the case in continuous time). In a progressive enlargement of filtration with a random time valued in N, one can give the decomposition formula. We start with the general case; then we study the particular case where τ is honest, to provide a comparison with the classical results.

3.3.1 General case

Mixing the results obtained in initial enlargement and progressive enlargement before τ, we have, for any F-martingale X,
$$X_n = X_n^{\mathbb G} + \sum_{k=1}^{n\wedge\tau}\frac{1}{Z_{k-1}}\,\Delta\langle\widetilde M, X\rangle_k + \sum_{k=\tau+1}^{n}\frac{\Delta\langle X, p(u)\rangle_k|_{u=\tau}}{p_{k-1}(\tau)},\tag{9}$$
where X^G is a G-martingale.

3.3.2 Honest times

In continuous time, strong conditions are needed to keep the semimartingale property after τ; here, this is no longer the case. However, we now consider the case where τ is honest (and valued in N). We recall the definition (see Barlow [5]) and some of the main properties.

A random time is honest if, for any n ≥ 0, there exists an F_n-measurable random variable τ(n) such that
$$\mathbb 1_{\{\tau\le n\}}\tau = \mathbb 1_{\{\tau\le n\}}\tau(n).\tag{10}$$

Remark 3.18 Following Jeulin, τ is honest if there exists an F_n-measurable random variable $\widehat\tau(n)$ such that
$$\mathbb 1_{\{\tau<n\}}\tau = \mathbb 1_{\{\tau<n\}}\widehat\tau(n).\tag{11}$$
The two definitions are equivalent. Indeed, starting with the equality (11), one can define $\tau(n) = \widehat\tau(n)\wedge n$; then, on {τ = n}, τ(n) = n and $\mathbb 1_{\{\tau\le n\}}\tau = \mathbb 1_{\{\tau\le n\}}\tau(n)$.

It follows that any G-predictable process V can be written as $V_n = V_n^b\mathbb 1_{\{n\le\tau\}} + V_n^a\mathbb 1_{\{\tau<n\}}$, where V^a and V^b are F-predictable processes.

Lemma 3.19 If τ is honest, then $Z_n = \widetilde Z_n$ on the set {n > τ} and $\widetilde Z_\tau = 1$. Conversely, if $\widetilde Z_\tau = 1$, then τ is honest.

Proof. For any $n\ge 0$,
\[
\mathbb P(\tau = n \mid \mathcal F_n)\mathbf 1_{\{n>\tau\}} = \mathbb P(\tau = n \mid \mathcal F_n)\mathbf 1_{\{n>\tau\}}\mathbf 1_{\{n>\tau(n)\}} = \mathbb E(\mathbf 1_{\{\tau=n\}}\mathbf 1_{\{n>\tau(n)\}} \mid \mathcal F_n)\mathbf 1_{\{n>\tau\}}
= \mathbb E(\mathbf 1_{\{\tau=n\}}\mathbf 1_{\{n>\tau(n)\}}\mathbf 1_{\{n>\tau\}} \mid \mathcal F_n)\mathbf 1_{\{n>\tau\}} = 0\,.
\]
It follows that $Z_n\mathbf 1_{\{\tau<n\}} = \widetilde Z_n\mathbf 1_{\{\tau<n\}}$. Furthermore,
\[
\widetilde Z_n\mathbf 1_{\{\tau=n\}} = \mathbf 1_{\{\tau=n\}}\mathbb P(\tau \ge n \mid \mathcal F_n) = \mathbf 1_{\{\tau=n\}}\mathbf 1_{\{\tau(n)=n\}}\mathbb P(\tau \ge n \mid \mathcal F_n) = \mathbf 1_{\{\tau=n\}}\mathbb E(\mathbf 1_{\{\tau(n)=n\}}\mathbf 1_{\{\tau\ge n\}} \mid \mathcal F_n) = \mathbf 1_{\{\tau=n\}}\,,
\]
which implies $\widetilde Z_\tau = 1$.

Conversely, if $\widetilde Z_\tau = 1$, let $\ell(n) = \sup\{k\le n : \widetilde Z_k = 1\}$. Then, for any $n\ge 0$, one has $\tau = \ell(n)$ on the set $\{\tau\le n\}$, and τ is honest. □
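Both assertions of Lemma 3.19 can be checked by exact computation on a finite coin-toss space, again for the honest time "last visit to 0" of a random walk. This is a toy sketch of ours (`tau` and `cond_prob` are hypothetical helper names): conditional probabilities given $\mathcal F_n$ are computed by averaging over all paths sharing the same first $n$ steps.

```python
from itertools import product

N = 5
paths = list(product([1, -1], repeat=N))

def tau(eps):
    # last visit of the walk to 0 (an honest time on this toy space)
    S = [0]
    for e in eps:
        S.append(S[-1] + e)
    return max(k for k in range(N + 1) if S[k] == 0)

def cond_prob(event, eps, n):
    # P(event | F_n) at omega = eps, F_n generated by the first n tosses
    same = [w for w in paths if w[:n] == eps[:n]]
    return sum(event(w) for w in same) / len(same)

for eps in paths:
    t = tau(eps)
    for n in range(N + 1):
        Z = cond_prob(lambda w: tau(w) > n, eps, n)    # Z_n
        Zt = cond_prob(lambda w: tau(w) >= n, eps, n)  # tilde Z_n
        if n > t:
            assert Z == Zt       # Z_n = tilde Z_n on {n > tau}
        if n == t:
            assert Zt == 1.0     # tilde Z_tau = 1
print("Lemma 3.19 verified on all", 2 ** N, "paths")
```

The second assertion holds because $S_\tau = 0$ is visible at time $\tau$, and every continuation of such a prefix has a zero at $\tau$, hence satisfies $\tau(\omega)\ge \tau$.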

Proposition 3.20 Let τ be an honest time and $X$ an F-martingale. Then
\[
X_n = X_n^{\mathbb G} + \sum_{k=0}^{n\wedge\tau} \frac{1}{Z_{k-1}}\,\langle \widetilde M , X\rangle_k - \sum_{k=\tau}^{n} \frac{1}{1-Z_{k-1}}\,\langle \widetilde M , X\rangle_k\,,
\]
where $X^{\mathbb G}$ is a $\mathbb G$-martingale.

Proof. Let $X = M^{\mathbb G} + V^{\mathbb G}$ be the G-semimartingale decomposition of $X$. Let $n\ge 0$ be fixed. From the property of honest times, there exists an F-predictable process $\widetilde V$ such that $V_n^{\mathbb G}\mathbf 1_{\{\tau\le n\}} = \widetilde V_n\mathbf 1_{\{\tau\le n\}}$. Then,
\[
\mathbf 1_{\{\tau\le n\}}(V^{\mathbb G}_{n+1} - V^{\mathbb G}_n) = \mathbf 1_{\{\tau\le n\}}(\widetilde V_{n+1} - \widetilde V_n) = \mathbf 1_{\{\tau\le n\}}\mathbb E(X_{n+1} - X_n \mid \mathcal G_n) = \mathbb E(\mathbf 1_{\{\tau\le n\}}(X_{n+1} - X_n) \mid \mathcal G_n)\,. \tag{12}
\]
We now take the conditional expectation w.r.t. $\mathcal F_n$ in (12). Taking into account that $\widetilde V$ is F-predictable, and the fact that $\mathbb F\subset\mathbb G$, we get
\[
\mathbb E(\mathbf 1_{\{\tau\le n\}} \mid \mathcal F_n)(\widetilde V_{n+1} - \widetilde V_n) = \mathbb E(\mathbf 1_{\{\tau\le n\}}(X_{n+1} - X_n) \mid \mathcal F_n) = \mathbb E\big(\mathbb E(\mathbf 1_{\{\tau\le n\}} \mid \mathcal F_{n+1})(X_{n+1} - X_n) \mid \mathcal F_n\big)\,.
\]
Now, using the facts that
\[
\mathbb E(\mathbf 1_{\{\tau\le n\}} \mid \mathcal F_n) = 1 - Z_n\,,\qquad
\mathbb E(\mathbf 1_{\{\tau\le n\}} \mid \mathcal F_{n+1}) = 1 - \mathbb E(\mathbf 1_{\{\tau\ge n+1\}} \mid \mathcal F_{n+1}) = 1 - \widetilde Z_{n+1}\,,
\]
and that $X$ is an F-martingale, we obtain, on the set $\{\tau\le n\}$,
\[
(1 - Z_n)(\widetilde V_{n+1} - \widetilde V_n) = -\mathbb E(\widetilde Z_{n+1}(X_{n+1} - X_n) \mid \mathcal F_n) = -\langle \widetilde M , X\rangle_n\,. \qquad\square
\]

Remark 3.21 It seems important to note that the Doob decomposition of $Z$ is not needed. As we have seen, one can write an optional decomposition of $Z$ as $Z = \widetilde M - H^o$. This "explains" why, in continuous time, such an optional decomposition of $Z$ is required.
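The optional decomposition can be verified numerically: with $H^o_n = \sum_{k\le n}\mathbb P(\tau = k\mid\mathcal F_k)$, the process $\widetilde M_n = Z_n + H^o_n$ should be an F-martingale for any random time. Below is an exact toy check of ours on a finite coin space (helper names are hypothetical, not from the paper):

```python
from itertools import product

N = 4
paths = list(product([1, -1], repeat=N))

def tau(eps):
    # last visit of the walk to 0 (any random time would do here)
    S = [0]
    for e in eps:
        S.append(S[-1] + e)
    return max(k for k in range(N + 1) if S[k] == 0)

def cond_exp(f, eps, n):
    # E(f | F_n) at omega = eps; F_n generated by the first n tosses
    same = [w for w in paths if w[:n] == eps[:n]]
    return sum(f(w) for w in same) / len(same)

def Mtilde(eps, n):
    # tilde M_n = Z_n + H^o_n, with H^o_n = sum_{k<=n} P(tau = k | F_k)
    Z = cond_exp(lambda w: tau(w) > n, eps, n)
    Ho = sum(cond_exp(lambda w, k=k: tau(w) == k, eps, k) for k in range(n + 1))
    return Z + Ho

# martingale property: E(tilde M_n | F_{n-1}) = tilde M_{n-1} at every node
for eps in paths:
    for n in range(1, N + 1):
        lhs = cond_exp(lambda w: Mtilde(w, n), eps, n - 1)
        assert abs(lhs - Mtilde(eps, n - 1)) < 1e-12
print("Z = tilde M - H^o with tilde M an F-martingale: verified")
```

The check succeeds for any random time because $\Delta H^o_n$ compensates exactly the conditional increment $\mathbb E(\Delta Z_n\mid\mathcal F_{n-1})$.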

Comment 3.22 We recover Jeulin's formula for honest times. We recall, for the ease of the reader, Jeulin's formula in continuous time:
\[
X_n = X_n^{\mathbb G} + \int_0^{n\wedge\tau} \frac{1}{Z_{s-}}\, d\langle \widetilde M , X\rangle_s - \int_\tau^{n} \frac{1}{1-Z_{s-}}\, d\langle \widetilde M , X\rangle_s\,.
\]

Comment 3.23 Let τ be an honest time. We have obtained a formula using Jacod's hypothesis in (2.2). In continuous time, one can show that honest times do not satisfy Jacod's equivalence hypothesis. The goal here is to check that the decomposition obtained in (9) and the one for honest times are the same. We proceed as in Aksamit [2]. Let $n\ge 1$ be fixed. On $\{\tau < n\}$, we have $\tau = \tau(n-1)$, where $\tau(n-1)$ is $\mathcal F_{n-1}$-measurable, hence $\mathcal F_n$-measurable. We now restrict our attention to $k < n$. On the one hand,
\[
\mathbf 1_{\{\tau=k\}}(1 - Z_{n-1}) = \mathbf 1_{\{\tau=k=\tau(n-1)\}}\mathbb P(\tau\le n-1 \mid \mathcal F_{n-1}) = \mathbf 1_{\{\tau=k\}}\mathbb E(\mathbf 1_{\{\tau(n-1)=k\}}\mathbf 1_{\{\tau\le n-1\}} \mid \mathcal F_{n-1})
= \mathbf 1_{\{\tau=k\}}\mathbb E(\mathbf 1_{\{\tau(n-1)=k\}}\mathbf 1_{\{\tau=k\}} \mid \mathcal F_{n-1}) = \mathbf 1_{\{\tau=k\}}\mathbb E(\mathbf 1_{\{\tau=k\}} \mid \mathcal F_{n-1}) = \mathbf 1_{\{\tau=k\}}\,p^k_{n-1}\,.
\]
On the other hand,
\[
\mathbf 1_{\{\tau=k\}}\mathbb E(\widetilde M_n\Delta X_n \mid \mathcal F_{n-1}) = -\mathbf 1_{\{\tau=k\}}\mathbb E((1-\widetilde M_n)\Delta X_n \mid \mathcal F_{n-1}) = -\mathbf 1_{\{\tau=k=\tau(n-1)\}}\mathbb E((1-\widetilde Z_n)\Delta X_n \mid \mathcal F_{n-1})
= -\mathbf 1_{\{\tau=k\}}\mathbb E(\mathbf 1_{\{k=\tau(n-1)\}}\mathbf 1_{\{\tau<n\}}\Delta X_n \mid \mathcal F_{n-1}) = -\mathbf 1_{\{\tau=k\}}\mathbb E\big(\mathbb E(\mathbf 1_{\{\tau=k\}} \mid \mathcal F_n)\Delta X_n \mid \mathcal F_{n-1}\big) = -\mathbf 1_{\{\tau=k\}}\mathbb E(p^k_n\,\Delta X_n \mid \mathcal F_{n-1})\,.
\]
Comparing the two identities shows that, on $\{\tau = k\}$, one has $-\langle \widetilde M, X\rangle_{n-1}/(1-Z_{n-1}) = \mathbb E(p^k_n\Delta X_n \mid \mathcal F_{n-1})/p^k_{n-1}$, so the two decompositions coincide after τ.

3.3.3 Arbitrages before τ

Let τ be a bounded honest time which is not an F-stopping time. Assuming the existence of a deflator $L$ implies that $\widetilde M L$ is a G-martingale. Since $\widetilde Z_\tau = 1$, one has $\widetilde M_\tau \ge 1$ and $\mathbb P(\widetilde M_\tau > 1) > 0$. Therefore, using the optional sampling theorem, $1 = \mathbb E(\widetilde M_\tau L_\tau) > \mathbb E(L_\tau) = 1$ yields a contradiction, hence the existence of arbitrages.
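As an illustration of ours (not from the paper), the two properties used in this argument, $\widetilde M_\tau \ge 1$ and $\mathbb P(\widetilde M_\tau > 1) > 0$, can be checked exactly for the honest time "last visit to 0" of a random walk on a finite coin space; the helper names are hypothetical.

```python
from itertools import product

N = 5
paths = list(product([1, -1], repeat=N))

def tau(eps):
    # last visit of the walk to 0: a bounded honest time, not a stopping time
    S = [0]
    for e in eps:
        S.append(S[-1] + e)
    return max(k for k in range(N + 1) if S[k] == 0)

def cond_prob(event, eps, n):
    # P(event | F_n) at omega = eps, by averaging over same-prefix paths
    same = [w for w in paths if w[:n] == eps[:n]]
    return sum(event(w) for w in same) / len(same)

def Mtilde(eps, n):
    # tilde M_n = Z_n + H^o_n
    Z = cond_prob(lambda w: tau(w) > n, eps, n)
    Ho = sum(cond_prob(lambda w, k=k: tau(w) == k, eps, k) for k in range(n + 1))
    return Z + Ho

M_at_tau = [Mtilde(eps, tau(eps)) for eps in paths]
assert min(M_at_tau) >= 1.0   # tilde M_tau >= 1
assert max(M_at_tau) > 1.0    # P(tilde M_tau > 1) > 0
print("tilde M at tau ranges over [", min(M_at_tau), ",", max(M_at_tau), "]")
```

Indeed $\widetilde M_\tau = \widetilde Z_\tau + H^o_{\tau-1} = 1 + H^o_{\tau-1}$, and $H^o_{\tau-1}$ is strictly positive on paths where the walk has already visited 0 with positive conditional probability of stopping there.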

We now check that the condition given in Proposition 3.14 is satisfied. Let $n_0$ be such that $\mathbb P(\tau\le n_0) = 1$ and $\mathbb P(\tau\le n_0-1) < 1$. On the set $A := \{\tau\le n_0-1\}$, the honesty of τ implies that
\[
Z_{n_0-1}\,\mathbb E(\mathbf 1_{\{\tau\le n_0-1\}} \mid \mathcal F_{n_0-1}) = 0 = Z_{n_0-1}(1 - Z_{n_0-1})\,,
\]
hence $\mathbb E(Z_{n_0-1}) = \mathbb E(Z_{n_0-1}^2)$ which, using the fact that $(\mathbb E(Z_{n_0-1}))^2 \le \mathbb E(Z_{n_0-1}^2)$, implies that $\mathbb E(Z_{n_0-1}) = 0$ or $\mathbb E(Z_{n_0-1}) = 1$, i.e., $Z_{n_0-1}\equiv 0$ or $Z_{n_0-1}\equiv 1$. The first equality would imply that $\tau\le n_0-1$ a.s., contradicting $\mathbb P(\tau\le n_0-1) < 1$; the second would imply that $\tau = n_0$ a.s., so that τ would be an F-stopping time, again a contradiction.

We refer to Choulli and Deng [7] for a necessary and sufficient condition to avoid arbitrages after τ .

3.4 Pseudo-stopping times

We end the study of progressive enlargement with a specific class of random times. We assume that $\mathcal F_0$ is trivial. We recall that a random time τ is an F-pseudo-stopping time if $\mathbb E(X_\tau) = \mathbb E(X_0)$ for any bounded F-martingale $X$ (see [8]).

Theorem 3.24 The following statements are equivalent:

(i) τ is an F-pseudo-stopping time.
(ii) $H^o_{\infty-} = \mathbb P(\tau<\infty \mid \mathcal F_\infty)$; in particular, $H^o_{\infty-} = 1$ when τ is finite.
(iii) $\widetilde M\equiv 1$.
(iv) $\widetilde Z$ is F-predictable.
(v) Every F-martingale stopped at τ is a G-martingale.

Proof. We prove (i) ⇒ (ii) ⇒ (iii) ⇒ (iv) ⇒ (v) ⇒ (i).

(i) ⇒ (ii): Using that the bounded martingale $X$ is closed, $X_\infty := \lim_{n\to\infty} X_n$ exists and one can write the integration by parts formula
\[
H^o_{\infty-}X_\infty = \sum_{n=1}^{\infty} H^o_{n-1}\Delta X_n + \sum_{n=0}^{\infty} X_n\Delta H^o_n\,.
\]
Taking expectations, and using the fact that $X$ is an F-martingale, we obtain
\[
\mathbb E(H^o_{\infty-}X_\infty) = \mathbb E\Big(\sum_{n=0}^{\infty} X_n\Delta H^o_n\Big)
\]
and, from property (7) of $H^o$, one has $\mathbb E(H^o_{\infty-}X_\infty) = \mathbb E(X_\tau\mathbf 1_{\{\tau<\infty\}})$. It follows that
\[
X_0 = \mathbb E(X_\tau) = \mathbb E(X_\tau\mathbf 1_{\{\tau<\infty\}}) + \mathbb E(X_\infty\mathbf 1_{\{\tau=\infty\}}) = \mathbb E(X_\tau\mathbf 1_{\{\tau<\infty\}}) + \mathbb E(X_\infty) - \mathbb E(X_\infty\mathbf 1_{\{\tau<\infty\}})
= \mathbb E(H^o_{\infty-}X_\infty) + X_0 - \mathbb E(X_\infty\mathbf 1_{\{\tau<\infty\}})\,,
\]
hence $\mathbb E(H^o_{\infty-}X_\infty) = \mathbb E(X_\infty\mathbf 1_{\{\tau<\infty\}}) = \mathbb E(X_\infty\,\mathbb P(\tau<\infty \mid \mathcal F_\infty))$, which implies $H^o_{\infty-} = \mathbb P(\tau<\infty \mid \mathcal F_\infty)$.

(ii) ⇒ (iii): Obvious.

(iii) ⇒ (iv): By the definition of $H^o$, and (8), we have
\[
\widetilde M_n = H^o_n + Z_n = H^o_{n-1} + \widetilde Z_n\,,\quad \forall n\ge 1\,, \tag{13}
\]
therefore, by (iii), we deduce that $\widetilde Z_n = 1 - H^o_{n-1}$ which, since $H^o$ is F-adapted, is $\mathcal F_{n-1}$-measurable for all $n\ge 1$, i.e., $\widetilde Z$ is F-predictable.

(iv) ⇒ (v): If $\widetilde Z$ is predictable, $\widetilde M$ is a predictable martingale, hence a constant (indeed, $\mathbb E(\widetilde M_n \mid \mathcal F_{n-1}) = \widetilde M_n = \widetilde M_{n-1}$) and $\Delta\langle X, \widetilde M\rangle_n = 0$ for all $n\ge 1$. The result follows from Proposition 3.11.

(v) ⇒ (i): For any bounded F-martingale $X$, the stopped process $X^\tau$ is a G-martingale. Then, as a consequence of the optional stopping theorem applied in G at time τ, we get $\mathbb E(X_\tau) = \mathbb E(X_0)$, i.e., τ is an F-pseudo-stopping time. □
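The equivalences can be illustrated numerically (a toy check of ours, with hypothetical helper names): a bounded stopping time is in particular a pseudo-stopping time, with $\widetilde M\equiv 1$; the time of the maximum of a random walk is not, since $\mathbb E(S_{\text{argmax}}) = \mathbb E(\max_k S_k) > 0$, and the theorem then forces $\widetilde M\not\equiv 1$.

```python
from itertools import product

N = 4
paths = list(product([1, -1], repeat=N))

def walk(eps):
    S = [0]
    for e in eps:
        S.append(S[-1] + e)
    return S

def cond_prob(event, eps, n):
    same = [w for w in paths if w[:n] == eps[:n]]
    return sum(event(w) for w in same) / len(same)

def Mtilde(rho, eps, n):
    # tilde M_n = Z_n + H^o_n for the random time rho
    Z = cond_prob(lambda w: rho(w) > n, eps, n)
    Ho = sum(cond_prob(lambda w, k=k: rho(w) == k, eps, k) for k in range(n + 1))
    return Z + Ho

def mean_S_at(rho):
    # E(S_rho) for the martingale S (S_0 = 0)
    return sum(walk(w)[rho(w)] for w in paths) / len(paths)

# first hitting time of level 1, capped at N: a bounded stopping time
hit = lambda w: min([k for k in range(1, N + 1) if walk(w)[k] == 1] + [N])
# (first) time of the maximum: honest, but not a pseudo-stopping time
argmax = lambda w: max(range(N + 1), key=lambda k: walk(w)[k])

assert all(Mtilde(hit, eps, n) == 1.0 for eps in paths for n in range(N + 1))
assert mean_S_at(hit) == 0.0
assert any(Mtilde(argmax, eps, n) != 1.0 for eps in paths for n in range(N + 1))
assert mean_S_at(argmax) > 0
print("E(S) at stopping time:", mean_S_at(hit), "; at time of maximum:", mean_S_at(argmax))
```

For the stopping time, $\mathbb P(\rho = k\mid\mathcal F_k) = \mathbf 1_{\{\rho=k\}}$, so $\widetilde M_n = \mathbf 1_{\{\rho>n\}} + \mathbf 1_{\{\rho\le n\}} = 1$ exactly, as in criterion (iii).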


References

[1] Acciaio, B., Fontana, C. and Kardaras, C., Arbitrage of the first kind and filtration enlargements in semimartingale financial models, Preprint, arXiv:1401.7198, 2014.

[2] Aksamit, A., Random times, enlargement of filtration and arbitrages, Thèse de doctorat, Université d'Évry, 2014.

[3] Aksamit, A. and Li, L., Pseudo-stopping times and the immersion property, Preprint, arXiv:1409.0298, 2015.

[4] Aksamit, A., Choulli, T., Deng, J. and Jeanblanc, M., Non-arbitrage up to random horizon for semimartingale models, Preprint, arXiv:1310.1142, 2014.

[5] Barlow, M.T., Study of a filtration expanded to include an honest time, Z. Wahrscheinlichkeitstheorie verw. Gebiete, 44, 307-323, 1978.

[6] Choulli, T., Daveloose, C. and Vanmaele, M., Hedging mortality risk and optional martingale representation theorem for enlarged filtration, Preprint, arXiv:1510.05858, 2015.

[7] Choulli, T. and Deng, J., Non-arbitrage for informational discrete time market models, Preprint, arXiv:1407.1453, 2014.

[8] Nikeghbali, A. and Yor, M., A definition and some properties of pseudo-stopping times, The Annals of Probability, 33, 1804-1824, 2005.

[9] Romo Romero, R., Thèse de doctorat, Université d'Évry, in preparation, 2016.
