HAL Id: hal-01247004

https://hal.archives-ouvertes.fr/hal-01247004

Submitted on 20 Dec 2015

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Optimal Skorokhod embedding under finitely-many marginal constraints *

Gaoyue Guo, Xiaolu Tan, Nizar Touzi

To cite this version:
Gaoyue Guo, Xiaolu Tan, Nizar Touzi. Optimal Skorokhod embedding under finitely-many marginal constraints. SIAM Journal on Control and Optimization, Society for Industrial and Applied Mathematics, 2016. hal-01247004


arXiv:1506.04063v1 [math.PR] 12 Jun 2015

Optimal Skorokhod embedding under finitely-many marginal constraints

Gaoyue Guo, Xiaolu Tan, Nizar Touzi

June 15, 2015

Abstract

The Skorokhod embedding problem aims to represent a given probability measure on the real line as the distribution of Brownian motion stopped at a chosen stopping time. In this paper, we consider an extension of the optimal Skorokhod embedding problem in Beiglböck, Cox & Huesmann [1] to the case of finitely-many marginal constraints¹. Using the classical convex duality approach together with optimal stopping theory, we obtain duality results formulated by means of probability measures on an enlarged space. We also relate these results to the problem of martingale optimal transport under multiple marginal constraints.

Key words. Skorokhod embedding, martingale optimal transport, model-free pricing, robust hedging.

AMS subject classification (2010). Primary: 60G40, 60G05; Secondary: 49M29.

1 Introduction

Let µ be a centered probability measure on R with finite first moment. The Skorokhod embedding problem (SEP) consists in finding a stopping time τ on a Brownian motion W such that W_τ ∼ µ and the stopped process W_{τ∧·} := (W_{τ∧t})_{t≥0} is uniformly integrable. We refer the reader to the survey paper [36] of Obłój for a comprehensive account of the field.

In this paper, we consider its extension to the case of multiple marginal constraints. Namely, let µ := (µ_1, · · · , µ_n) be a given family of centered probability

* We gratefully acknowledge the financial support of the ERC 321111 Rofirm, the ANR Isotace, and the Chairs Financial Risks (Risk Foundation, sponsored by Société Générale) and Finance and Sustainable Development (IEF sponsored by EDF and CA).

CMAP, Ecole Polytechnique, France. guo@cmap.polytechnique.fr
CEREMADE, University of Paris-Dauphine, France. tan@ceremade.dauphine.fr
§ CMAP, Ecole Polytechnique, France. nizar.touzi@polytechnique.edu

¹ While producing the final version of this paper, we learned from Mathias Beiglböck about the new development in [2], which is also in its final stage of preparation and extends the previous work [1] to the case of finitely-many marginal constraints. We emphasize that our approach is of a completely different nature.


measures such that the family is increasing in convex ordering, i.e. for every convex function φ : R → R, one has

∫_R φ(x) µ_k(dx) ≤ ∫_R φ(x) µ_{k+1}(dx) for all k = 1, · · · , n − 1.
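To make the convex ordering concrete, here is a small numerical check (our illustration, not part of the paper): for measures with finitely many atoms, µ_k precedes µ_{k+1} in convex order exactly when the means agree and the call functions K ↦ µ((x − K)^+) are ordered, and for discrete measures it suffices to test strikes at the union of the atoms, where the piecewise-linear call functions have their kinks. The measures below are illustrative.

```python
import numpy as np

def call_price(xs, ps, k):
    # E[(X - k)^+] under the discrete measure sum_i ps[i] * delta_{xs[i]}
    return float(np.sum(ps * np.maximum(xs - k, 0.0)))

def in_convex_order(xs1, ps1, xs2, ps2, tol=1e-9):
    """Check mu1 <= mu2 in convex order for discrete measures.

    Classical criterion: equal means and ordered call functions
    E[(X - k)^+] for all strikes k. For discrete measures the call
    functions are piecewise linear with kinks at the atoms, so it
    suffices to compare them on the union of the atoms.
    """
    if abs(np.sum(ps1 * xs1) - np.sum(ps2 * xs2)) > tol:
        return False
    strikes = np.union1d(xs1, xs2)
    return all(call_price(xs1, ps1, k) <= call_price(xs2, ps2, k) + tol
               for k in strikes)

# delta_0 precedes the centered two-point measure on {-1, 1}, not conversely
print(in_convex_order(np.array([0.0]), np.array([1.0]),
                      np.array([-1.0, 1.0]), np.array([0.5, 0.5])))
print(in_convex_order(np.array([-1.0, 1.0]), np.array([0.5, 0.5]),
                      np.array([0.0]), np.array([1.0])))
```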

The extended SEP is to find an increasing family of stopping times τ := (τ_1, · · · , τ_n) such that W_{τ_k} ∼ µ_k for all k = 1, · · · , n and the stopped process W_{τ_n∧·} is uniformly integrable. We study an associated optimization problem, which consists in maximizing the expected value of some reward function among all such embeddings.

One of the motivations to study this problem is its application in finance to compute the arbitrage-free model-independent price bounds of contingent claims consistent with the market prices of vanilla options. Mathematically, the underlying asset is required to be a martingale according to the no-arbitrage condition, and market calibration allows one to recover the marginal laws of the underlying at certain maturities (see e.g. Breeden & Litzenberger [7]). Then, by considering all martingales fitting the given marginal distributions, one can obtain the arbitrage-free price bounds. Based on the fact that every continuous martingale can be considered as a time-changed Brownian motion by the Dambis-Dubins-Schwarz theorem, Hobson studied the model-free hedging of lookback options in his seminal paper [25] by means of the SEP. The main idea of his pioneering work is to exploit a solution of the SEP satisfying some optimality criterion, which yields the model-free hedging strategy and allows the model-free pricing and hedging problems to be solved together. Since then, the optimal SEP has received substantial attention from the mathematical finance community, and various extensions were achieved in the literature, such as Cox & Hobson [9], Hobson & Klimmek [27], Cox, Hobson & Obłój [10], Cox & Obłój [11], Davis, Obłój & Raval [12], Obłój & Spoida [37], etc. A thorough literature review is provided in Hobson's survey paper [26].
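The Breeden-Litzenberger observation mentioned above can be tested numerically in a toy model (a sketch of ours; the Gaussian marginal and all grid parameters are illustrative assumptions, not from the paper): the density of the marginal law is the second strike-derivative of the call price curve, recovered here by a central finite difference.

```python
import numpy as np

# Toy Breeden-Litzenberger: mu(dK) = (d^2 C / dK^2) dK, where C(K) is the
# call price curve. Call prices are computed under X ~ N(0, 1) (illustrative
# model) and the density is recovered by a central second difference.

xs = np.linspace(-8.0, 8.0, 20_001)
w = np.exp(-xs ** 2 / 2.0)
w /= w.sum()                                  # discretized N(0, 1) weights

def call(k):
    # E[(X - k)^+] under the discretized measure
    return float(np.sum(w * np.maximum(xs - k, 0.0)))

h = 0.01
ks = np.arange(-2.0, 2.0 + h, h)
c = np.array([call(k) for k in ks])
density = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / h ** 2  # approximate density
peak = float(density.max())                          # near 1/sqrt(2*pi)
```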

Beiglböck, Cox and Huesmann generalized this heuristic idea and formulated the optimal SEP in [1], recovering the previously known results in a unified formulation. Their main results are twofold. First, they establish the expected identity between the optimal SEP and the corresponding model-free superhedging problem. Second, they derive a characterization of the optimal embedding by a geometric pathwise property, which allows one to recover all previously known embeddings in the literature.

The problem of model-free hedging has also been approached by means of martingale optimal transport, as initiated by Beiglböck, Henry-Labordère & Penkner [3] in the discrete-time case and Galichon, Henry-Labordère & Touzi [20] in the continuous-time case. Further developments have enriched this literature, such as Beiglböck & Juillet [4], Henry-Labordère & Touzi [24], Henry-Labordère, Tan & Touzi [23]. A remarkable contribution for continuous-time martingale optimal transport is due to Dolinsky and Soner [14, 15]. We also refer to Tan and Touzi [42] for the optimal transport problem under more general controlled stochastic dynamics.

Our objective in this paper is to revisit the duality result of [1] and to extend the duality to the case of multiple marginal constraints. Our approach uses tools of a completely different nature. First, following the convex duality approach, we convert the optimal SEP into an infimum of classical optimal stopping problems. Next, we use the standard dynamic programming approach to relate such optimal stopping problems to model-free superhedging problems. We observe that the derived duality allows one to reproduce the geometric characterization of the optimal embedding introduced in [1], see e.g. [21]. Finally, we show that our result induces the duality for a class of martingale optimal transport problems in the space of continuous paths.

The paper is organized as follows. In Section 2, we formulate our optimal SEP under finitely-many marginal constraints and provide two duality results. In Section 3, the duality of the optimal SEP together with time-change arguments gives the duality for the martingale optimal transport problem under multiple marginal constraints. We finally provide the related proofs in Section 4.

Notations. (i) Let Ω := C(R_+, R) be the space of all continuous paths ω on R_+ such that ω_0 = 0, B be the canonical process, P_0 be the Wiener measure, F := (F_t)_{t≥0} be the canonical filtration generated by B, and F^a := (F^a_t)_{t≥0} be the augmented filtration under P_0.

(ii) Define, for some fixed integer n ≥ 1, the enlarged canonical space Ω̄ := Ω × Θ (see El Karoui and Tan [18, 19]), where Θ := {(θ_1, · · · , θ_n) ∈ R_+^n : θ_1 ≤ · · · ≤ θ_n}. All the elements of Ω̄ are denoted by ω̄ := (ω, θ) with θ := (θ_1, · · · , θ_n).

Denote further by (B, T) (with T := (T_1, · · · , T_n)) the canonical element on Ω̄, i.e. B_t(ω̄) := ω_t and T(ω̄) := θ for every ω̄ = (ω, θ) ∈ Ω̄. The enlarged canonical filtration is denoted by F̄ := (F̄_t)_{t≥0}, where F̄_t is generated by (B_s)_{0≤s≤t} and all the sets {T_k ≤ s} for s ∈ [0, t] and k = 1, · · · , n. In particular, all random variables T_1, · · · , T_n are F̄-stopping times.

(iii) We endow Ω with the topology of compact convergence, and Θ with the classical Euclidean topology; then Ω and Ω̄ are both Polish spaces (separable, completely metrizable spaces). In particular, F̄_∞ := ∨_{t≥0} F̄_t is the Borel σ-field of the Polish space Ω̄ (see Lemma A.1).

(iv) Denote by C_1 := C_1(R) the space of all continuous functions on R with linear growth.

(v) Throughout the paper, UI, a.s. and q.s. are respectively the abbreviations of uniformly integrable, almost surely and quasi-surely. Moreover, given a set N of probability measures (e.g. N = P and N = M in the following) on some measurable space, we write N-q.s. to mean that a property holds under every probability measure in N.

2 An optimal Skorokhod embedding problem and the dualities

In this section, we formulate an optimal Skorokhod embedding problem (SEP) under finitely-many marginal constraints, as well as its dual problems. We then provide two duality results.

2.1 An optimal Skorokhod embedding problem

Throughout the paper, µ := (µ_1, · · · , µ_n) is a vector of n probability measures on R, and we denote, for any integrable function φ : R → R,

µ_k(φ) := ∫_R φ(x) µ_k(dx) for all k = 1, · · · , n.


The vector µ is said to be a peacock if each probability µ_k has a finite first moment, i.e. µ_k(|x|) < +∞, and µ is increasing in convex ordering, i.e. k ↦ µ_k(φ) is non-decreasing for every convex function φ. A peacock µ is called centered if µ_k(x) = 0 for all k = 1, · · · , n. Denote by P^⪯ the collection of all centered peacocks.

Optimal SEP. As in Beiglböck, Cox & Huesmann [1], we shall consider the problem in a weak setting, i.e. the stopping times are identified with probability measures on the enlarged space Ω̄. Recall that the elements of Ω̄ are denoted by ω̄ := (ω, θ = (θ_1, · · · , θ_n)) and the canonical element is denoted by (B, T = (T_1, · · · , T_n)); in particular, T_1, · · · , T_n are all F̄-stopping times. Let P(Ω̄) be the space of all probability measures on Ω̄, and define

P := { P ∈ P(Ω̄) : B is an F̄-Brownian motion and B_{T_n∧·} is UI under P }.   (2.1)

Set, for any given family of probability measures µ = (µ_1, · · · , µ_n),

P(µ) := { P ∈ P : B_{T_k} ∼^P µ_k for all k = 1, · · · , n }.   (2.2)

As a consequence of Kellerer's theorem in [33], P(µ) is nonempty if and only if µ ∈ P^⪯.

Let Φ : Ω̄ → R be a measurable function; Φ is called non-anticipative if Φ(ω, θ) = Φ(ω_{θ_n∧·}, θ) for every (ω, θ) ∈ Ω̄. Define the optimal SEP for a non-anticipative function Φ by

P(µ) := sup_{P∈P(µ)} E^P[Φ(B, T)],   (2.3)

where the expectation of a random variable ξ is defined by E^P[ξ] := E^P[ξ^+] − E^P[ξ^−] with the convention ∞ − ∞ = −∞. The problem is well-posed if there exists at least one P ∈ P(µ) such that E^P[|Φ(B, T)|] < +∞. We emphasize that Φ is non-anticipative throughout the paper.

Remark 2.1. A µ-embedding is a collection

α = (Ω^α, F^α, P^α, F^α = (F^α_t)_{t≥0}, W^α, τ^α = (τ^α_1, · · · , τ^α_n)),

where W^α is an F^α-Brownian motion, τ^α_1, · · · , τ^α_n are increasing F^α-stopping times such that W^α_{τ^α_n∧·} is uniformly integrable, and W^α_{τ^α_k} ∼ µ_k for all k = 1, · · · , n. We observe that for every centered peacock µ, there exists a one-to-one correspondence between P(µ) and the set of µ-embeddings. Indeed, it is clear that every µ-embedding α induces a probability measure P := P^α ◦ (W^α, τ^α)^{-1} ∈ P(µ). Conversely, every P ∈ P(µ) induces a µ-embedding.

2.2 The duality results

We introduce two dual problems. Recall that P_0 is the Wiener measure on Ω = C(R_+, R), under which the canonical process B is a standard Brownian motion, F = (F_t)_{t≥0} is the canonical filtration and F^a = (F^a_t)_{t≥0} is the filtration augmented by P_0. Denote by T^a the collection of all increasing families of F^a-stopping times


τ = (τ_1, · · · , τ_n) such that the process B_{τ_n∧·} is uniformly integrable. Define also the class of functions

Λ := C_1^n = { λ := (λ_1, · · · , λ_n) : λ_k ∈ C_1 for all k = 1, · · · , n }.   (2.4)

For µ = (µ_1, · · · , µ_n), λ = (λ_1, · · · , λ_n) and (ω, θ = (θ_1, · · · , θ_n)) ∈ Ω̄, we denote

µ(λ) := Σ_{k=1}^n µ_k(λ_k) and λ(ω_θ) := Σ_{k=1}^n λ_k(ω_{θ_k}) with ω_θ := (ω_{θ_1}, · · · , ω_{θ_n}).

Then the first dual problem for the optimal SEP (2.3) is given by

D_0(µ) := inf_{λ∈Λ} { sup_{τ∈T^a} E^{P_0}[Φ(B, τ) − λ(B_τ)] + µ(λ) }.   (2.5)
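To illustrate the structure of (2.5) (a sketch of ours, not the algorithm of Bonnans and Tan [6]): for each fixed λ, the inner supremum is a classical optimal stopping problem, which in a discrete toy model (n = 1, with a symmetric random walk as a proxy for B; the choices of Φ and λ below are arbitrary illustrations) can be solved by backward induction on the Snell envelope.

```python
import numpy as np

def snell_value(phi_lam, n_steps):
    """sup over stopping times of E[phi_lam(B_tau, tau)] for a symmetric
    random walk B on a binomial tree, by backward induction on the Snell
    envelope: V_T = phi, V_t = max(phi, E[V_{t+1} | B_t])."""
    def payoff(t):
        # node j at time t carries the walk value b = 2j - t
        return np.array([phi_lam(2 * j - t, t) for j in range(t + 1)])

    value = payoff(n_steps)
    for t in range(n_steps - 1, -1, -1):
        cont = 0.5 * (value[1:] + value[:-1])  # E[V_{t+1} | current node]
        value = np.maximum(payoff(t), cont)    # stop now vs continue
    return float(value[0])

lam = lambda b: 0.1 * b * b            # an illustrative static penalty
phi = lambda b, t: -abs(b - 1.0)       # an illustrative reward at stopping
v = snell_value(lambda b, t: phi(b, t) - lam(b), n_steps=20)
```

The outer infimum over λ would then be a minimization wrapped around this inner value.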

As for the second dual problem, we return to the enlarged space Ω̄. Given P ∈ P, an F̄-adapted process M = (M_t)_{t≥0} is called a strong P-supermartingale if

E^P[M_{τ_2} | F̄_{τ_1}] ≤ M_{τ_1}, P-a.s.

for all F̄-stopping times τ_1 ≤ τ_2

. Let L²_loc be the space of all F̄-progressively measurable processes H = (H_t)_{t≥0} such that

∫_0^t H_s² ds < +∞ for every t ≥ 0, P-q.s.

For H ∈ L²_loc, the stochastic integral (H · B) := ∫_0^· H_s dB_s is well defined P-a.s. for all P ∈ P. We introduce a subset of processes:

H := { H ∈ L²_loc : (H · B) is a strong P-supermartingale for all P ∈ P }.

Denote further

D := { (λ, H) ∈ Λ × H : λ(B_T) + (H · B)_{T_n} ≥ Φ(B, T), P-q.s. },

and the second dual problem is given by

D(µ) := inf_{(λ,H)∈D} µ(λ).   (2.6)

Loosely speaking, the two dual problems dualize different constraints of the primal problem (2.3). By penalizing the marginal constraints, we obtain the first dual problem D_0(µ) of (2.5), where a multi-period optimal stopping problem appears for every fixed λ ∈ Λ. The second dual problem D(µ) of (2.6) then follows by solving this optimal stopping problem via the Snell envelope approach and the Doob-Meyer decomposition.

Our main result of this section is the following duality.

Assumption 2.2. The non-anticipative function Φ is bounded from above, and θ ↦ Φ(ω, θ) is upper-semicontinuous for each ω ∈ Ω.


Assumption 2.3. One of the following conditions holds true.
(i) The reward function Φ admits the representation Φ(ω̄) = Σ_{k=1}^n Φ_k(ω, θ_1, · · · , θ_k), where for each k = 1, · · · , n, Φ_k : Ω × R_+^k → R satisfies Φ_k(ω, θ_1, · · · , θ_k) = Φ_k(ω_{θ_k∧·}, θ_1, · · · , θ_k), the map θ_k ↦ Φ_k(ω̄) is càdlàg, and

(θ_1, · · · , θ_{k−1}) ↦ Φ_k(ω, θ_1, · · · , θ_k)

is uniformly continuous for 0 ≤ θ_1 ≤ · · · ≤ θ_{k−1} ≤ θ_k, uniformly in θ_k.
(ii) The map ω̄ ↦ Φ(ω̄) is upper-semicontinuous.

Theorem 2.4. (i) Under Assumption 2.2, there is some P* ∈ P(µ) such that E^{P*}[Φ] = P(µ) = D_0(µ).
(ii) Suppose in addition that Assumption 2.3 holds true; then P(µ) = D_0(µ) = D(µ).

Remark 2.5. (i) Based on the first dual problem D_0(µ) in (2.5), a numerical algorithm for the above optimal SEP has been obtained in Bonnans and Tan [6].
(ii) The second dual problem D(µ) in (2.6) is similar to that in Beiglböck, Cox and Huesmann [1], where a continuous martingale is used instead of the stochastic integral (H · B) of our formulation.
(iii) The duality result in [1, 2] is established under Assumptions 2.2 and 2.3 (ii). Our Assumption 2.3 (i) does not involve any regularity in ω, and allows us to handle examples such as Φ(ω̄) = Σ_{k=1}^n ϕ(ω, θ_k) for some non-anticipative càdlàg process ϕ : Ω × R_+ → R.

3 Application to a class of martingale transport problems

In this section, we use the previous duality results for the optimal SEP to study a continuous-time martingale transport problem under multiple marginal constraints. As an application in finance to the robust superhedging problem, the multi-marginal case is very natural: when vanilla options are available for trading at several maturities, which allows one to recover the marginal distributions of the underlying asset at those times, the robust superhedging problem can be formulated as a martingale transport problem under multiple marginal constraints.

3.1 Robust superhedging and martingale transport

Define the canonical process X := (X_t)_{0≤t≤1} by X_t := B_{1∧t} for all t ∈ [0, 1], and its natural filtration F̃ := (F̃_t)_{0≤t≤1}. Denote further by M the collection of all martingale measures P̃, i.e. the probability measures under which X is a martingale.

Let I := (0 < t_1 < · · · < t_n = 1) be a set of time instants, and define the set of martingale transport plans for µ ∈ P^⪯:

M(µ) := { P̃ ∈ M : X_{t_k} ∼ µ_k for all k = 1, · · · , n }.


By Karandikar [31], there is a non-decreasing F̃-progressive process ⟨X⟩ which coincides with the quadratic variation of X, P̃-a.s. for every martingale measure P̃ ∈ M. Denote

⟨X⟩_t^{-1} := inf{ s ≥ 0 : ⟨X⟩_s > t } and W_t := X_{⟨X⟩_t^{-1}} for every t ≥ 0.
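The time change above can be illustrated numerically. The following sketch makes simplifying assumptions not in the paper: a deterministic clock a(t) = t², so that ⟨X⟩_t = t² and ⟨X⟩_t^{-1} = √t, and linear interpolation on a discrete grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a Brownian path w, build the time-changed martingale
# x_t = w_{a(t)} with the deterministic clock a(t) = t^2, then recover w
# by sampling x at the inverse clock: w_t = x_{<X>^{-1}_t} = x_{sqrt(t)}.

dt = 1e-4
s = np.arange(0.0, 1.0 + dt, dt)                 # time grid
dw = rng.normal(0.0, np.sqrt(dt), len(s) - 1)
w = np.concatenate([[0.0], np.cumsum(dw)])       # Brownian path on [0, 1]

x = np.interp(s ** 2, s, w)                      # x_t = w_{t^2}
w_rec = np.interp(np.sqrt(s), s, x)              # w_rec_t = x_{sqrt(t)}

err = float(np.max(np.abs(w_rec - w)))           # small up to grid effects
```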

For a measurable function ξ : Ω → R, the martingale transport problem under multiple marginal constraints is defined by

P̃(µ) := sup_{P̃∈M(µ)} E^{P̃}[ξ(X)].   (3.7)

Denote by H̃ the collection of all F̃-progressive processes H̃ := (H̃_t)_{0≤t≤1} such that

∫_0^1 H̃_s² d⟨X⟩_s < +∞, M-q.s., and (H̃ · X) is a P̃-supermartingale for all P̃ ∈ M.

Then the two dual problems are given by

D̃_0(µ) := inf_{λ∈Λ} { sup_{P̃∈M} E^{P̃}[ξ(X) − λ(X_I)] + µ(λ) } and D̃(µ) := inf_{(λ,H̃)∈D̃} µ(λ),

where

λ(X_I) := Σ_{i=1}^n λ_i(X_{t_i}) with X_I := (X_{t_1}, · · · , X_{t_n}), and

D̃ := { (λ, H̃) ∈ Λ × H̃ : λ(X_I) + (H̃ · X)_1 ≥ ξ(X), M-q.s. }.

It is easy to check that the weak dualities hold:

P̃(µ) ≤ D̃_0(µ) ≤ D̃(µ).   (3.8)

Remark 3.1. The problem D̃(µ) has a natural interpretation in financial mathematics as the minimal robust superhedging cost of the exotic option ξ, where λ and H̃ are called the static and the dynamic strategy, respectively. Here, robustness refers to the fact that the underlying probability measure is not fixed a priori, so that the superhedging requirement is imposed under all possible models.

3.2 Duality results

Using the duality results of the optimal SEP in Theorem 2.4, we can establish the duality for the above martingale transport problem.

Theorem 3.2. Assume that the reward function ξ admits the representation ξ(X) = Φ(W, ⟨X⟩_{t_1}, · · · , ⟨X⟩_{t_n}) for some Φ : Ω̄ → R satisfying Assumptions 2.2 and 2.3. Then

P̃(µ) = D̃_0(µ) = D̃(µ).


Example 3.3. Let φ : R_+^n × R^{3n} → R be upper-semicontinuous and bounded from above. Then the ξ defined below satisfies the condition in Theorem 3.2:

ξ(X) = φ(⟨X⟩_{t_i}, X_{t_i}, X̄_{t_i}, X̲_{t_i}, i = 1, · · · , n),   (3.9)

where X̄_t := sup_{0≤s≤t} X_s and X̲_t := inf_{0≤s≤t} X_s.
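A payoff of the form (3.9) is straightforward to evaluate pathwise. The following sketch (the path model, the maturities and the particular φ are all illustrative choices of ours) computes the running maximum, running minimum and pathwise quadratic variation at the maturities and evaluates a lookback-type example:

```python
import numpy as np

rng = np.random.default_rng(2)

# One simulated path of a Brownian proxy for X, together with the
# quantities entering (3.9): X, its running max/min and its pathwise
# quadratic variation, sampled at the maturities t_i.

dt = 1e-3
t = np.arange(0.0, 1.0 + dt, dt)
dx = rng.normal(0.0, np.sqrt(dt), len(t) - 1)
x = np.concatenate([[0.0], np.cumsum(dx)])

run_max = np.maximum.accumulate(x)                # Xbar_t
run_min = np.minimum.accumulate(x)                # Xlow_t
qv = np.concatenate([[0.0], np.cumsum(dx ** 2)])  # pathwise <X>_t

maturities = (0.5, 1.0)
idx = [min(int(np.searchsorted(t, ti)), len(t) - 1) for ti in maturities]

# an example phi: the sum over maturities of the range (lookback-type)
payoff = float(sum(run_max[i] - run_min[i] for i in idx))
```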

Remark 3.4. In Dolinsky & Soner [14], the duality is established (in a stronger sense) for the case n = 1, for a general payoff function ξ which is Lipschitz with respect to the uniform metric. In our Theorem 3.2, the reward function ξ is more specific, but it may depend on the quadratic variation of the underlying process, which relates to variance options in finance. Moreover, our result is given for the multiple-marginal case, and such an extension of their technique does not seem obvious; see also the work of Hou & Obłój [28] and Biagini, Bouchard, Kardaras & Nutz [5]. More recently, an analogous duality has been proved in the Skorokhod space under suitable conditions in Dolinsky & Soner [15], where the underlying asset is assumed to take values in some subspace of càdlàg functions.

Proof of Theorem 3.2. Combining the dualities P(µ) = D_0(µ) = D(µ) in Theorem 2.4 and the weak dualities P̃(µ) ≤ D̃_0(µ) ≤ D̃(µ), it is enough to prove

P(µ) ≤ P̃(µ) and D̃(µ) ≤ D(µ),

where P(µ) and D(µ) are defined respectively in (2.3) and (2.6) with reward function Φ.

(i) Define the process M := (M_t)_{0≤t≤1} by

M_t := B_{(T_k + (t − t_k)/(t_{k+1} − t)) ∧ T_{k+1}} for all t ∈ [t_k, t_{k+1}) and 0 ≤ k ≤ n − 1,

with T_0 = t_0 = 0 and M_1 := B_{T_n}

. It is clear that M is a continuous martingale under every probability P ∈ P and M_{t_k} = B_{T_k} for all k = 1, · · · , n, which implies in particular M_{t_k} ∼^P µ_k for every P ∈ P(µ). Let P ∈ P(µ) be arbitrary; then P̃ := P ◦ M^{-1} ∈ M(µ). Moreover, one finds, P-a.s., ⟨M⟩_{t_k} = T_k for all k = 1, · · · , n and B_t = M_{⟨M⟩_t^{-1}}, which yields

ξ(M) = Φ(B, ⟨M⟩_{t_1}, · · · , ⟨M⟩_{t_n}) = Φ(B, T), P-a.s.

Thus

P̃(µ) ≥ E^P[ξ(M)] = E^P[Φ(B, T)].   (3.10)

It follows that P(µ) ≤ P̃(µ).

(ii) Let us now prove D̃(µ) ≤ D(µ). Let (λ, H) ∈ D, i.e. (λ, H) ∈ Λ × H be such that

λ(B_T) + (H · B)_{T_n} ≥ Φ(B, T), P-q.s.

For every P̃ ∈ M, it follows by the Dambis-Dubins-Schwarz theorem that the time-changed process W_t = X_{⟨X⟩_t^{-1}}, with ⟨X⟩_t^{-1} = inf{s ≥ 0 : ⟨X⟩_s > t}, is a Brownian motion with respect to the time-changed filtration (F̃_{⟨X⟩_t^{-1}})_{t≥0} under P̃, and

X_t = W_{⟨X⟩_t} for every t ∈ [0, 1], P̃-a.s.


Moreover, ⟨X⟩_I := (⟨X⟩_{t_k})_{1≤k≤n} are stopping times w.r.t. the time-changed filtration (F̃_{⟨X⟩_t^{-1}})_{t≥0}. Let us define P := P̃ ◦ (W, ⟨X⟩_{t_1}, · · · , ⟨X⟩_{t_n})^{-1}; then P ∈ P, and thus we have, P̃-a.s.,

λ(W_{⟨X⟩_I}) + (H · W)_{⟨X⟩_1} ≥ Φ(W, ⟨X⟩_{t_1}, · · · , ⟨X⟩_{t_n}).

Define

H̃_s(X) := H_{⟨X⟩_s}(W, ⟨X⟩_{t_1}, · · · , ⟨X⟩_{t_n}),

then it follows by Propositions V.1.4 and V.1.5 of Revuz and Yor [39] that H̃ is F̃-progressively measurable and such that

∫_0^1 H̃_s² d⟨X⟩_s = ∫_0^{⟨X⟩_1} H_s² ds < +∞, P̃-a.s., and (H̃ · X)_t = (H · W)_{⟨X⟩_t} for every 0 ≤ t ≤ 1, P̃-a.s.

Hence

λ(X_I) + (H̃ · X)_1 ≥ Φ(W, ⟨X⟩_{t_1}, · · · , ⟨X⟩_{t_n}) = ξ(X), P̃-a.s.   (3.11)

Notice that H ∈ H, and hence (H · W) is a strong supermartingale under P̃, which implies by the time-change argument that the stochastic integral (H · W)_{⟨X⟩_·} is a supermartingale under P̃ (with respect to its natural filtration), and so is (H̃ · X). Hence H̃ ∈ H̃ and further (λ, H̃) ∈ D̃. It follows that D̃(µ) ≤ D(µ), which concludes the proof.

4 Proof of Theorem 2.4

To prove our main results in Theorem 2.4, we begin with a lemma which is used repeatedly in the sequel, and then cite some technical lemmas from classical optimal stopping theory.

4.1 Technical lemmas

Recall that P^⪯ denotes the collection of all centered peacocks, which is a collection of vectors of probability measures on R. We first introduce a notion of convergence W_1 on P^⪯ which is stronger than weak convergence. A sequence of centered peacocks (µ^m = (µ^m_1, · · · , µ^m_n))_{m≥1} ⊂ P^⪯ is said to converge under W_1 to µ^0 = (µ^0_1, · · · , µ^0_n) ∈ P^⪯ if µ^m_k converges to µ^0_k under the Wasserstein metric for all k = 1, · · · , n (see e.g. Definition 6.1 in Villani [43]). This convergence is denoted by →_{W_1}. It follows by Theorem 6.9 of [43] that the convergence µ^m →_{W_1} µ^0 holds if and only if, for any φ ∈ C_1,

lim_{m→∞} µ^m_k(φ) = µ^0_k(φ) for all k = 1, · · · , n.   (4.12)

In particular, the convergence →_{W_1} implies the (usual) weak convergence.
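On the real line, the W_1 distance between two empirical measures with equally many equally weighted atoms reduces to the mean absolute gap of the sorted samples, since the monotone coupling is optimal in dimension one. A small numerical sketch of the convergence (4.12), with an illustrative family µ^m of our choosing:

```python
import numpy as np

def wasserstein1(xs, ys):
    """W_1 between two empirical measures with the same number of equally
    weighted atoms: in one dimension the optimal coupling is the monotone
    rearrangement, so W_1 is the mean absolute gap of the sorted samples."""
    return float(np.mean(np.abs(np.sort(xs) - np.sort(ys))))

rng = np.random.default_rng(1)
target = rng.normal(0.0, 1.0, 10_000)            # a sample of mu^0

# mu^m = law of (1 - 1/m) X: here W_1(mu^m, mu^0) = E|X|/m -> 0 as m grows
dists = [wasserstein1((1.0 - 1.0 / m) * target, target) for m in (2, 10, 100)]
```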


Further, in order to apply the Fenchel-Moreau theorem, we shall consider a linear topological space containing all centered peacocks. Let M denote the space of all finite signed measures ν on R such that ∫_R (1 + |x|) |ν|(dx) < +∞. We endow M with a topology of Wasserstein kind, still denoted by W_1 without confusion. Let (ν^m)_{m≥1} ⊂ M and ν^0 ∈ M; we say ν^m converges to ν^0 under W_1 if

lim_{m→∞} ∫_R φ(x) ν^m(dx) = ∫_R φ(x) ν^0(dx) for all φ ∈ C_1.   (4.13)

Let M^n := M × · · · × M be the n-fold product of M, endowed with the product topology. It is clear that, under W_1, P^⪯ is a closed convex subspace of M^n, and the restriction of this convergence to P^⪯ is the same as the Wasserstein convergence.

It is well known that the space of all finite signed measures equipped with the weak convergence topology is a locally convex topological vector space, and its dual space is the space of all bounded continuous functions (see e.g. Section 3.2 of Deuschel and Stroock [14]). By exactly the same arguments, we have the following similar result.

Lemma 4.1. There exists a topology O_n on M^n, compatible with the convergence →_{W_1}, such that (M^n, O_n) is a Hausdorff locally convex space. Moreover, its dual space can be identified as (M^n)* = Λ.

We next turn to the space P(Ω̄) of all Borel probability measures on the Polish space Ω̄. Denote by C_b(Ω̄) the collection of all bounded continuous functions on Ω̄, and by B_mc(Ω̄) the collection of all bounded measurable functions φ such that θ ↦ φ(ω, θ) is continuous for all ω ∈ Ω. Notice that the weak convergence topology on P(Ω̄) is defined as the coarsest topology under which P ↦ E^P[ξ] is continuous for all ξ ∈ C_b(Ω̄). Following Jacod and Mémin [29], we introduce the stable convergence topology on P(Ω̄) as the coarsest topology under which P ↦ E^P[ξ] is continuous for all ξ ∈ B_mc(Ω̄). Recall that every probability measure in P (defined by (2.1)) has the same marginal law on Ω. Then, as an immediate consequence of Proposition 2.4 of [29], we have the following result.

Lemma 4.2. The weak convergence topology and the stable convergence topology coincide on the space P.

Lemma 4.3. Let (µ^m)_{m≥1} be a sequence of centered peacocks such that µ^m →_{W_1} µ^0, and let (P_m)_{m≥1} be a sequence of probability measures with P_m ∈ P(µ^m) for all m ≥ 1. Then (P_m)_{m≥1} is relatively compact under the weak convergence topology. Moreover, any accumulation point of (P_m)_{m≥1} belongs to P(µ^0).

Proof. (i) For any ε > 0, there exists a compact set D ⊂ Ω such that P_m(D × Θ) = P_0(D) ≥ 1 − ε for every m ≥ 1. In addition, by Proposition 7 of Monroe [35], one has, for any constant C > 0,

P_m[T_n ≥ C] ≤ C^{-1/3} (1 + µ^m_n(|x|))² ≤ C^{-1/3} (1 + sup_{m≥1} µ^m_n(|x|))².

Choose the cube [0, C]^n large enough such that P_m[T ∈ [0, C]^n] ≥ 1 − ε for all m ≥ 1. The tightness of (P_m)_{m≥1} under the weak convergence topology then follows from

P_m(D × [0, C]^n) ≥ P_m(D × Θ) + P_m(Ω × [0, C]^n) − 1 ≥ 1 − 2ε for all m ≥ 1.


Let P̄_0 be any limit point. By possibly extracting a subsequence, we may assume that P_m → P̄_0 weakly.

(ii) Notice that B is an F̄-Brownian motion under each P_m, and thus the process ϕ(B_t) − ∫_0^t ½ ϕ''(B_s) ds is an F̄-martingale under P_m whenever ϕ is bounded and smooth with bounded derivatives. Notice that the map (ω, θ) ↦ ϕ(ω_t) − ∫_0^t ½ ϕ''(ω_s) ds is also bounded continuous; then

E^{P̄_0}[ (ϕ(B_t) − ϕ(B_s) − ∫_s^t ½ ϕ''(B_u) du) ψ ] = 0   (4.14)

for every r < s < t and every bounded continuous F̄_r-measurable random variable ψ. Since F̄_r can be generated by bounded continuous random variables (see Lemma A.1), it follows that (4.14) still holds for every bounded F̄_r-measurable ψ. Letting s → r, by the dominated convergence theorem, (4.14) holds for every r = s < t and every bounded F̄_s-measurable random variable ψ. This implies that B is an F̄-Brownian motion under P̄_0.

(iii) We next assume that P_m ∈ P(µ^m) and prove that

B_{T_n∧·} is uniformly integrable under P̄_0.   (4.15)

The convergence of (µ^m)_{m≥1} to µ^0 implies in particular

E^{P_m}[(|B_{T_n}| − R)^+] = µ^m_n((|x| − R)^+) → µ^0_n((|x| − R)^+) < +∞.

Therefore, for every ε > 0, there is R_ε > 0 large enough such that µ^m_n((|x| − R_ε)^+) < ε for every m ≥ 1. It follows by Jensen's inequality and |x| 1_{|x|>2R} ≤ 2(|x| − R)^+ that

E^{P_m}[ |B_{T_n∧t}| 1_{|B_{T_n∧t}|>2R_ε} ] ≤ 2 E^{P_m}[(|B_{T_n}| − R_ε)^+] ≤ 2ε for all t ≥ 0.

Notice also that the function |x| 1_{|x|>2R_ε} is lower semicontinuous, and we obtain by Fatou's lemma

E^{P̄_0}[ |B_{T_n∧t}| 1_{|B_{T_n∧t}|>2R_ε} ] ≤ lim inf_{m→∞} E^{P_m}[ |B_{T_n∧t}| 1_{|B_{T_n∧t}|>2R_ε} ] ≤ 2ε,

which justifies the claim (4.15). Moreover, since the map (ω, θ) ↦ ω_{θ_k} is continuous, it follows that B_{T_k} ∼^{P̄_0} µ^0_k for all k = 1, · · · , n. Therefore, P̄_0 ∈ P(µ^0), which concludes the proof.

We next recall some useful results from the classical optimal stopping theory (see e.g. El Karoui [16], Peskir and Shiryaev [38], Karatzas and Shreve [32], etc.). Let (Ω*, F*, P*) be an abstract complete probability space, equipped with two filtrations F* = (F*_t)_{t≥0} and G* = (G*_t)_{t≥0}, where F*_t ⊆ G*_t for every t ≥ 0 and both filtrations satisfy the usual conditions. Denote F*_∞ := ∨_{t≥0} F*_t and G*_∞ := ∨_{t≥0} G*_t. We denote further by T^{F*} the class of all F*-stopping times, and by T^{G*} the collection of all G*-stopping times. Let Y be a càdlàg F*-optional process of class (D), defined on Ω*.

The first result concerns the equivalence of the optimal stopping problems for the two filtrations under the following so-called Assumption (K) from the optimal stopping theory.


Assumption 4.4 (K). For every t ≥ 0, every bounded G*_t-measurable random variable X satisfies

E[X | F*_t] = E[X | F*_∞], P*-a.s.

Lemma 4.5. Under Assumption 4.4, we have sup_{τ∈T^{F*}} E[Y_τ] = sup_{τ∈T^{G*}} E[Y_τ].

Proof. The result follows from Theorem 5 of Szpirglas and Mazziotto [41]. We notice that they assume Y to be positive, but this immediately yields the same result when Y is of class (D), since such a Y can be dominated from below by some martingale.

The next result recalls the so-called Snell envelope characterization of optimal stopping problems.

Lemma 4.6. There is a càdlàg F*-adapted process Z which is the smallest supermartingale dominating Y, and it satisfies E[Z_0] = sup_{τ∈T^{F*}} E[Y_τ].

4.2 Proof of the first duality

We now provide the proof for the first duality result in Theorem 2.4. The main idea is to show that µ 7→ P (µ) is concave and upper-semicontinuous and then to use the Fenchel-Moreau theorem.

Lemma 4.7. Under Assumption 2.2, the map µ ∈ P^⪯ ↦ P(µ) ∈ R is concave and upper-semicontinuous w.r.t. W_1. Moreover, for every µ ∈ P^⪯, there is some P* ∈ P(µ) such that E^{P*}[Φ] = P(µ).

Proof. (i) Let µ^1, µ^2 ∈ P^⪯, P^1 ∈ P(µ^1), P^2 ∈ P(µ^2) and α ∈ (0, 1). It follows by definition that αP^1 + (1 − α)P^2 ∈ P(αµ^1 + (1 − α)µ^2). Therefore, the map µ ↦ P(µ) is clearly concave.

(ii) We now prove that µ ↦ P(µ) is upper-semicontinuous w.r.t. W_1. Let (µ^m)_{m≥1} ⊂ P^⪯ with µ^m → µ^0 ∈ P^⪯ in W_1. After possibly passing to a subsequence, we may find a family (P_m)_{m≥1} such that

P_m ∈ P(µ^m) and lim sup_{m→∞} P(µ^m) = lim_{m→∞} E^{P_m}[Φ(B, T)].

By Lemma 4.3, we may find a subsequence, still denoted (P_m)_{m≥1}, which converges weakly to some P̄_0 ∈ P(µ^0). By Lemma 4.2, the map P ↦ E^P[Φ(B, T)] is upper-semicontinuous on P w.r.t. the weak convergence topology for all Φ satisfying Assumption 2.2. We then obtain by Fatou's lemma that

lim sup_{m→∞} P(µ^m) = lim_{m→∞} E^{P_m}[Φ(B, T)] ≤ E^{P̄_0}[Φ(B, T)] ≤ P(µ^0).

(iii) Let µ ∈ P^⪯; choosing µ^m = µ and using the same arguments, it follows immediately that there is some P* ∈ P(µ) such that E^{P*}[Φ] = P(µ).

The results of Lemma 4.7, together with the Fenchel-Moreau theorem, imply the first duality in Theorem 2.4. Before providing the proof, we consider the optimal stopping problem arising in the dual formulation (2.5). Denote, for every λ ∈ Λ,

Φ^λ(ω, θ) := Φ(ω, θ) − λ(ω_θ) for all (ω, θ) ∈ Ω̄.   (4.16)

(14)

Recall that $\mathcal{T}^a$ denotes the collection of all increasing families of $\mathbb{F}^a$-stopping times $\tau = (\tau_1, \cdots, \tau_n)$ such that $B_{\tau_n \wedge \cdot}$ is uniformly integrable. For $N > 0$, denote also by $\mathcal{T}^a_N \subset \mathcal{T}^a$ the subset of families $\tau = (\tau_1, \cdots, \tau_n)$ such that $\tau_n \le N$, $\mathbb{P}_0$-a.s. Denote further by $\mathcal{P}_N \subset \mathcal{P}$ the collection of all $\mathbb{P} \in \mathcal{P}$ such that $T_n \le N$, $\mathbb{P}$-a.s.

Lemma 4.8. Let $\Phi$ be bounded and continuous. Then for every $\lambda \in \Lambda$,
$$\sup_{\tau \in \mathcal{T}^a} \mathbb{E}^{\mathbb{P}_0}\big[\Phi^{\lambda}(B, \tau)\big] = \lim_{N \to \infty} \sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}^{\mathbb{P}_0}\big[\Phi^{\lambda}(B, \tau)\big] \tag{4.17}$$
$$= \lim_{N \to \infty} \sup_{\mathbb{P} \in \mathcal{P}_N} \mathbb{E}^{\mathbb{P}}\big[\Phi^{\lambda}(B, T)\big] = \sup_{\mathbb{P} \in \mathcal{P}} \mathbb{E}^{\mathbb{P}}\big[\Phi^{\lambda}(B, T)\big].$$
In particular, let $\phi \in C_1$ and denote by $\phi^{\mathrm{conc}}$ its concave envelope; then
$$\sup_{\tau \in \mathcal{T}^a} \mathbb{E}^{\mathbb{P}_0}\big[\phi(B_{\tau_n})\big] = \phi^{\mathrm{conc}}(0). \tag{4.18}$$

Proof. (i) Given $\lambda \in \Lambda$, there is some constant $C > 0$ such that
$$\big|\Phi^{\lambda}(B, \tau)\big| \le C \Big(1 + \sum_{k=1}^{n} |B_{\tau_k}|\Big). \tag{4.19}$$
Let $\tau \in \mathcal{T}^a$ and define $\tau^N := (\tau_1^N, \cdots, \tau_n^N)$ with $\tau_k^N := \tau_k \wedge N$; then it is clear that $\lim_{N \to \infty} \Phi^{\lambda}(B, \tau^N) = \Phi^{\lambda}(B, \tau)$, $\mathbb{P}_0$-a.s. By the domination in (4.19) and the fact that $B_{\tau_n \wedge \cdot}$ is uniformly integrable, we have $\lim_{N \to \infty} \mathbb{E}^{\mathbb{P}_0}[\Phi^{\lambda}(B, \tau^N)] = \mathbb{E}^{\mathbb{P}_0}[\Phi^{\lambda}(B, \tau)]$. It follows, by the arbitrariness of $\tau \in \mathcal{T}^a$ and the inclusion $\mathcal{T}^a_N \subset \mathcal{T}^a$, that
$$\sup_{\tau \in \mathcal{T}^a} \mathbb{E}^{\mathbb{P}_0}\big[\Phi^{\lambda}(B, \tau)\big] = \lim_{N \to \infty} \sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}^{\mathbb{P}_0}\big[\Phi^{\lambda}(B, \tau)\big].$$
By the same arguments, it is clear that we also have
$$\sup_{\mathbb{P} \in \mathcal{P}} \mathbb{E}^{\mathbb{P}}\big[\Phi^{\lambda}(B, T)\big] = \lim_{N \to \infty} \sup_{\mathbb{P} \in \mathcal{P}_N} \mathbb{E}^{\mathbb{P}}\big[\Phi^{\lambda}(B, T)\big].$$

(ii) We now apply Lemma 4.5 to prove that, for every fixed constant $N > 0$,
$$\sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}^{\mathbb{P}_0}\big[\Phi^{\lambda}(B, \tau)\big] = \sup_{\mathbb{P} \in \mathcal{P}_N} \mathbb{E}^{\mathbb{P}}\big[\Phi^{\lambda}(B, T)\big]. \tag{4.20}$$
First, suppose that $n = 1$. Let $\mathbb{P} \in \mathcal{P}_N$ and set $Y_t := \Phi^{\lambda}(B, t \wedge N)$; it is clear that $\mathbb{E}^{\mathbb{P}}\big[\sup_{t \ge 0} Y_t\big] < \infty$. Denote by $\mathbb{F}^{\mathbb{P}} = (\mathcal{F}^{\mathbb{P}}_t)_{t \ge 0}$ the augmented filtration of $\mathbb{F}$ under $\mathbb{P}$, by $\mathbb{F}^B$ the filtration generated by $B$ on $\Omega$, and by $\mathbb{F}^{B,\mathbb{P}} = (\mathcal{F}^{B,\mathbb{P}}_t)_{t \ge 0}$ its $\mathbb{P}$-augmentation. It is clear that $\mathcal{F}^{B,\mathbb{P}}_t \subset \mathcal{F}^{\mathbb{P}}_t$. More importantly, since $B$ is an $\mathbb{F}^{\mathbb{P}}$-Brownian motion under $\mathbb{P}$, it is easy to check that the probability space $(\Omega, \mathcal{F}^{\mathbb{P}}, \mathbb{P})$, equipped with the filtrations $\mathbb{F}^{\mathbb{P}}$ and $\mathbb{F}^{B,\mathbb{P}}$, satisfies Hypothesis (K) (Assumption 4.4). Then by Lemma 4.5,
$$\mathbb{E}^{\mathbb{P}}\big[\Phi^{\lambda}(B, T)\big] \le \sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}^{\mathbb{P}_0}\big[\Phi^{\lambda}(B, \tau)\big],$$
and hence $\sup_{\mathbb{P} \in \mathcal{P}_N} \mathbb{E}^{\mathbb{P}}[\Phi^{\lambda}(B, T)] \le \sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}^{\mathbb{P}_0}[\Phi^{\lambda}(B, \tau)]$. We then obtain the equality (4.20), since the reverse inequality is clear. Finally, when $n > 1$, it is enough to combine the same arguments with an induction to prove (4.20).

(iii) To prove (4.18), it suffices to take $\Phi \equiv 0$ and $n = 1$. Then by (4.17), it follows that
$$\sup_{\tau \in \mathcal{T}^a} \mathbb{E}^{\mathbb{P}_0}\big[\phi(B_{\tau})\big] = \lim_{N \to \infty} \sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}^{\mathbb{P}_0}\big[\phi(B_{\tau})\big] \le \phi^{\mathrm{conc}}(0).$$
The reverse inequality is obvious by considering the exit times of the Brownian motion from open intervals. We then conclude the proof. $\square$
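The role of interval exit times in (4.18) can be checked numerically: if $\tau_{a,b}$ is the exit time of $(-a, b)$ with $a, b > 0$ and $B_0 = 0$, then $B_{\tau_{a,b}}$ equals $b$ with probability $a/(a+b)$ and $-a$ with probability $b/(a+b)$, and optimizing this two-point value over $(a, b)$ recovers the concave envelope at $0$. A sketch (the payoff $\phi$ and the search grid are hypothetical choices, not from the paper):

```python
import numpy as np

def exit_time_value(phi, a, b):
    """E[phi(B_tau)] for tau the exit time of (-a, b), a, b > 0, B_0 = 0:
    B_tau = b with probability a/(a+b), and -a with probability b/(a+b)."""
    return (a * phi(b) + b * phi(-a)) / (a + b)

# a bounded, non-concave payoff (hypothetical choice)
phi = lambda x: min(abs(x), 1.0)

grid = np.linspace(0.1, 3.0, 30)
best = max(exit_time_value(phi, a, b) for a in grid for b in grid)
# best approximates phi's concave envelope at 0
```

For this $\phi$, the concave envelope at $0$ equals $1$ (the chord through $(\pm 1, 1)$), attained by the exit time of $(-1, 1)$.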

Proof of Theorem 2.4 (i). The existence of an optimal embedding is already proved in Lemma 4.7. For the first duality result, we shall use the Fenchel-Moreau theorem. Let us first extend the map $\mu \mapsto P(\mu)$ from $\mathcal{P}$ to $\mathcal{M}_n$ by setting $P(\mu) = -\infty$ for every $\mu \in \mathcal{M}_n \setminus \mathcal{P}$. It is easy to check, using Lemma 4.7, that the extended map $\mu \mapsto P(\mu)$ from the topological vector space $\mathcal{M}_n$ to $\mathbb{R}$ is still concave and upper-semicontinuous. Then by the Fenchel-Moreau theorem together with Lemma 4.1, it follows that
$$P(\mu) = P^{**}(\mu) = \inf_{\lambda \in \Lambda} \Big\{ \sup_{\nu \in \mathcal{P}} \sup_{\mathbb{P} \in \mathcal{P}(\nu)} \mathbb{E}^{\mathbb{P}}\big[\Phi^{\lambda}(B, T)\big] + \mu(\lambda) \Big\} = \inf_{\lambda \in \Lambda} \Big\{ \sup_{\tau \in \mathcal{T}^a} \mathbb{E}^{\mathbb{P}_0}\big[\Phi^{\lambda}(B, \tau)\big] + \mu(\lambda) \Big\},$$
where the last equality follows from (4.17). Hence $P(\mu) = D_0(\mu)$. $\square$
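The identity $P = P^{**}$ underlying this proof can be illustrated numerically in one dimension, where the dual variables reduce to linear functions $x \mapsto ax$: for a concave, upper-semicontinuous $f$, the Fenchel-Moreau theorem gives $f(\mu) = \inf_a \{\sup_x [f(x) - ax] + a\mu\}$. A grid sketch (the function $f$ and the grids are hypothetical choices):

```python
import numpy as np

# a concave, upper-semicontinuous function (hypothetical choice)
f = lambda x: -x ** 2

xs = np.linspace(-5.0, 5.0, 1001)    # grid for the inner supremum over x
als = np.linspace(-5.0, 5.0, 1001)   # grid of slopes a (linear dual variables)

def biconjugate(mu):
    """Concave biconjugate f**(mu) = inf_a { sup_x [f(x) - a*x] + a*mu }."""
    # row i holds f(x) - a_i * x over the x-grid; take sup over x for each slope
    sup_over_x = np.max(f(xs)[None, :] - als[:, None] * xs[None, :], axis=1)
    return float(np.min(sup_over_x + als * mu))

# Fenchel-Moreau: f** = f for concave u.s.c. f, e.g. f**(0.7) ≈ -0.49
```

Here $\sup_x(-x^2 - ax) = a^2/4$, so the outer infimum is attained at $a = -2\mu$ with value $-\mu^2$, matching $f$.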

Remark 4.9. When $\Phi$ is bounded (which is the relevant case by the reduction of Section 4.3.1), we can prove further that
$$D_0(\mu) = \inf_{\lambda \in \Lambda^+} \Big\{ \sup_{\tau \in \mathcal{T}^a} \mathbb{E}^{\mathbb{P}_0}\big[\Phi^{\lambda}(B, \tau)\big] + \mu(\lambda) \Big\}, \tag{4.21}$$
where
$$\Lambda^+ := \big\{ \lambda = (\lambda_1, \cdots, \lambda_n) \in \Lambda \,:\, \lambda_k \ge 0 \text{ for all } k = 1, \cdots, n \big\}.$$
Indeed, by (4.18) and the boundedness of $\Phi$, we have $\sup_{\tau \in \mathcal{T}^a} \mathbb{E}^{\mathbb{P}_0}[\Phi^{\lambda}(B, \tau)] = +\infty$ whenever $(-\lambda_k)^{\mathrm{conc}}(0) = \infty$ for some $k$. Hence, in the definition of $D_0(\mu)$, it is enough to take the infimum over the class of all $\lambda \in \Lambda$ whose convex envelopes satisfy $\lambda_k^{\mathrm{conv}}(0) > -\infty$ for all $k = 1, \cdots, n$; consequently each $\lambda_k$ is dominated from below by some affine function. Since $\mathbb{E}^{\mathbb{P}_0}[B_{\tau_k}] = 0$ for every $\tau \in \mathcal{T}^a$, we see, by possibly subtracting this affine function from $\lambda_k$, that it is enough to take the infimum over the class $\Lambda^+$.

4.3 Proof of the second duality

For the second duality, we shall use the Snell envelope characterization of the optimal stopping problem, together with the Doob-Meyer decomposition. We provide the proof progressively.

Recall that $\mathcal{T}^a$ denotes the collection of all increasing families of $\mathbb{F}^a$-stopping times $\tau = (\tau_1, \cdots, \tau_n)$ such that $B_{\tau_n \wedge \cdot}$ is uniformly integrable, and $\mathcal{T}^a_N \subset \mathcal{T}^a$ the subset of families $\tau = (\tau_1, \cdots, \tau_n)$ such that $\tau_n \le N$, $\mathbb{P}_0$-a.s. Denote also by $\mathcal{T}^0$ the collection of all $\mathbb{F}^a$-stopping times $\tau_0$ such that $B_{\tau_0 \wedge \cdot}$ is uniformly integrable, and by $\mathcal{T}^0_N$ the collection of all stopping times $\tau_0 \in \mathcal{T}^0$ such that $\tau_0 \le N$, $\mathbb{P}_0$-a.s.

4.3.1 Reduction to bounded reward functions

Proposition 4.10. To prove Theorem 2.4 (ii), it is enough to prove the duality $P(\mu) = D(\mu)$ under the additional condition that $\Phi$ is bounded.

Proof. Assume that the duality $P(\mu) = D(\mu)$ holds true whenever $\Phi$ is bounded and satisfies Assumptions 2.2 and 2.3. We now consider the case where $\Phi$ is not bounded. Let $\Phi_m := \Phi \vee (-m)$ (or $\Phi_m := \sum_{k=1}^{n} (-m) \vee \Phi_k$ in the case of Assumption 2.3 (i)); then $\Phi_m$ is bounded and satisfies Assumptions 2.2 and 2.3. Denote by $P_m(\mu)$ and $D_m(\mu)$ the primal and dual values associated with the reward function $\Phi_m$, so that we have the duality
$$P_m(\mu) = D_m(\mu).$$
Further, since $\Phi_m \ge \Phi$, one has $P_m(\mu) = D_m(\mu) \ge D(\mu) \ge P(\mu)$. It is then enough to show that
$$\limsup_{m \to \infty} P_m(\mu) \le P(\mu).$$
Let $\mathbb{P}_m \in \mathcal{P}(\mu)$ be such that $\limsup_{m \to \infty} P_m(\mu) = \limsup_{m \to \infty} \mathbb{E}^{\mathbb{P}_m}[\Phi_m]$. After possibly passing to a subsequence, we may assume that $\limsup_{m \to \infty} P_m(\mu) = \lim_{m \to \infty} \mathbb{E}^{\mathbb{P}_m}[\Phi_m]$. By Lemma 4.3, the sequence $(\mathbb{P}_m)_{m \ge 1}$ is tight and every limit point belongs to $\mathcal{P}(\mu)$. Let $\mathbb{P}_0$ be a limit point of $(\mathbb{P}_m)_{m \ge 1}$, and denote the convergent subsequence again by $m$, i.e. $\mathbb{P}_m \to \mathbb{P}_0$. Then, by the monotone convergence theorem,
$$P(\mu) \ge \mathbb{E}^{\mathbb{P}_0}[\Phi] = \lim_{m \to \infty} \mathbb{E}^{\mathbb{P}_0}[\Phi_m] = \lim_{m \to \infty} \lim_{l \to \infty} \mathbb{E}^{\mathbb{P}_l}[\Phi_m] \ge \limsup_{l \to \infty} \mathbb{E}^{\mathbb{P}_l}[\Phi_l] = \limsup_{l \to \infty} P_l(\mu),$$
which is the required result. $\square$

4.3.2 The second duality for a separable reward function

We first provide the duality result in a simplified context, where the analysis of the multiple stopping problem in (2.5) is much easier. We observe that the following result is not needed for the proof of our main duality result, and is provided here only for the convenience of the reader.

Let us first introduce a stronger version of the second dual problem. Denote by $\mathcal{H}$ the collection of all $\mathbb{F}^0$-predictable processes $H^0 : \mathbb{R}_+ \times \Omega \to \mathbb{R}$ such that the stochastic integral $(H^0 \cdot B)_t := \int_0^t H^0_s \, dB_s$ is a martingale under $\mathbb{P}_0$ and $(H^0 \cdot B)_t \ge -C(1 + |B_t|)$ for some constant $C > 0$. Denote further by $\mathcal{S}$ the set of all $\mathbb{F}$-strong càdlàg supermartingales $S$ on $(\Omega, \mathcal{F}, \mathbb{P}_0)$ such that $|S_t| \le C(1 + |B_t|)$ for some constant $C > 0$. Define then
$$\mathcal{D}' := \Big\{ (\lambda, H^1, \cdots, H^n) \in \Lambda \times \mathcal{H}^n \,:\, \sum_{k=1}^{n} \Big( \lambda_k(\omega_{\theta_k}) + \int_{\theta_{k-1}}^{\theta_k} H^k_s \, dB_s \Big) \ge \Phi(\omega, \theta) \ \text{for all } 0 \le \theta_1 \le \cdots \le \theta_n, \ \mathbb{P}_0\text{-a.e. } \omega \in \Omega \Big\}$$
and
$$\mathcal{D}'' := \Big\{ (\lambda, S^1, \cdots, S^n) \in \Lambda \times \mathcal{S}^n \,:\, \sum_{k=1}^{n} \big( \lambda_k(\omega_{\theta_k}) + S^k_{\theta_k} - S^k_{\theta_{k-1}} \big) \ge \Phi(\omega, \theta) \ \text{for all } 0 \le \theta_1 \le \cdots \le \theta_n, \ \mathbb{P}_0\text{-a.e. } \omega \in \Omega \Big\}.$$

Proposition 4.11. Suppose that Assumption 2.2 and Assumption 2.3 (i) hold true with $\Phi_k : \Omega \times (\mathbb{R}_+)^k \to \mathbb{R}$. Suppose in addition that $\Phi_k(\omega, \theta_1, \cdots, \theta_k)$ depends only on $(\omega, \theta_k)$. Then
$$P(\mu) = D'(\mu) := \inf_{(\lambda, H) \in \mathcal{D}'} \mu(\lambda) = D''(\mu) := \inf_{(\lambda, S) \in \mathcal{D}''} \mu(\lambda).$$

Proof. Given the first duality $P(\mu) = D_0(\mu)$, it suffices to study the optimal stopping problem
$$\sup_{\tau \in \mathcal{T}^a} \mathbb{E}\big[\Phi^{\lambda}(B, \tau)\big] = \lim_{N \to \infty} \sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}\big[\Phi^{\lambda}(B, \tau)\big], \tag{4.22}$$
for a given $\lambda \in \Lambda^+$ (Remark 4.9) and under the additional condition that each $\Phi_k$ is bounded (Proposition 4.10). Notice that in this case there is some constant $C$ such that
$$-C\Big(1 + \sum_{k=1}^{n} |\omega_{\theta_k}|\Big) \le \Phi^{\lambda}(\bar{\omega}) \le C.$$
Suppose first that $n = 1$. Then by Lemma 4.6, for every $N \in \mathbb{N}$ there is a supermartingale $Z^{1,N} = (Z^{1,N}_t)$, the Snell envelope of the optimal stopping problem $\sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}[\Phi^{\lambda}(B, \tau)]$. Clearly, $Z^{1,N}$ increases in $N$. Moreover, one has $-C(1 + |B_t|) \le Z^{1,N}_t \le C$ for some constant $C$ independent of $N$. Then, by the dominated convergence theorem together with Lemma 4.8, $Z^1 := \sup_{N \in \mathbb{N}} Z^{1,N}$ is still a supermartingale, of class (DL), such that
$$Z^1_0 = \sup_{\tau \in \mathcal{T}^a} \mathbb{E}\big[\Phi^{\lambda}(B, \tau)\big], \quad \text{and} \quad Z^1_t \ge \Phi^{\lambda}(B, t) \ \text{ for all } t \ge 0, \ \mathbb{P}_0\text{-a.s.}$$

We claim that the supermartingale $Z^1$ is right-continuous in expectation, and hence admits a right-continuous modification. Indeed, let $\tau$ be a bounded stopping time, and $(\tau_n)_{n \ge 1}$ a sequence of bounded stopping times such that $\tau_n \searrow \tau$. By the supermartingale property, we have $\mathbb{E}[Z^1_{\tau_n}] \le \mathbb{E}[Z^1_{\tau}]$. Further, for every $\varepsilon > 0$, by the definition of $Z^1$ there is some bounded stopping time $\sigma_{\varepsilon} \ge \tau$ such that $\mathbb{E}[Z^1_{\tau}] \le \mathbb{E}[\Phi^{\lambda}(B, \sigma_{\varepsilon})] + \varepsilon$. It follows that $\mathbb{E}[Z^1_{\tau_n}] \ge \mathbb{E}[\Phi^{\lambda}(B, \sigma_{\varepsilon} \vee \tau_n)] \to \mathbb{E}[\Phi^{\lambda}(B, \sigma_{\varepsilon})]$. Thus $\lim_{n \to \infty} \mathbb{E}[Z^1_{\tau_n}] = \mathbb{E}[Z^1_{\tau}]$ by the arbitrariness of $\varepsilon > 0$.

Now, by the Doob-Meyer decomposition for right-continuous supermartingales together with the martingale representation theorem, there is an $\mathbb{F}^a$-predictable process $H^1$ such that
$$\lambda_1(B_t) + (H^1 \cdot B)_t \ge \Phi(B, t) \ \text{ for all } t \ge 0, \ \mathbb{P}_0\text{-a.s.}$$
Further, we can also choose $H^1$ to be $\mathbb{F}$-predictable (see e.g. Theorem IV.78 and Remark IV.74 of Dellacherie and Meyer [13]). This proves in particular that $D'(\mu) \le D_0(\mu) = P(\mu)$. Hence, by the weak duality, we get
$$P(\mu) = D_0(\mu) = D'(\mu).$$
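The Doob-Meyer step has a transparent discrete-time analogue: on a binomial tree, the Snell envelope $Z$ decomposes as $Z_0$ plus a martingale transform $(H \cdot X)$ minus an increasing predictable part, and dropping the increasing part yields the pathwise superhedge $Z_0 + (H \cdot X)_t \ge$ payoff. A sketch (the payoff $\phi$ and horizon $T$ are hypothetical choices, not from the paper):

```python
from itertools import product

T = 4
phi = lambda x: max(2 - abs(x), 0)   # hypothetical bounded payoff

# Snell envelope Z[t][x] on the binomial tree (backward induction)
Z = [dict() for _ in range(T + 1)]
for x in range(-T, T + 1):
    Z[T][x] = phi(x)
for t in range(T - 1, -1, -1):
    for x in range(-t, t + 1):
        Z[t][x] = max(phi(x), 0.5 * (Z[t + 1][x + 1] + Z[t + 1][x - 1]))

# discrete martingale representation: the Doob-Meyer martingale part of Z
# has increments H[t][x] * (X_{t+1} - X_t) with H predictable
H = [dict() for _ in range(T)]
for t in range(T):
    for x in range(-t, t + 1):
        H[t][x] = 0.5 * (Z[t + 1][x + 1] - Z[t + 1][x - 1])

# dropping the increasing predictable part gives a pathwise superhedge:
# Z_0 + (H . X)_t >= phi(X_t) along every path and at every date
ok = True
for eps in product([-1, 1], repeat=T):
    x, port = 0, Z[0][0]
    for t, e in enumerate(eps):
        port += H[t][x] * e
        x += e
        ok = ok and port >= phi(x) - 1e-12
```

The check confirms the superhedging inequality along all $2^T$ paths; since this $\phi$ is concave, the Snell envelope coincides with $\phi$ and stopping immediately is optimal.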

Suppose now $n = 2$. We first consider the optimal stopping problem
$$\sup_{\tau \in \mathcal{T}^0_N} \mathbb{E}\big[\Phi_2(B, \tau) - \lambda_2(B_{\tau})\big],$$
whose Snell envelope $Z^{2,N}$ is given by Lemma 4.6, where in particular $-C(1 + |B_t|) \le Z^{2,N}_t \le C$ for some constant $C$ independent of $N$, and
$$Z^{2,N}_{\theta_2} \ge \Phi_2(B, \theta_2) - \lambda_2(B_{\theta_2}) \ \text{ for all } \theta_2 \le N, \ \mathbb{P}_0\text{-a.s.}$$
We then reduce the multiple optimal stopping problem (4.22) to the case $n = 1$, i.e.
$$\sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}\big[\Phi^{\lambda}(B, \tau)\big] = \sup_{\tau_1 \in \mathcal{T}^0_N} \mathbb{E}\big[Z^{2,N}_{\tau_1} + \Phi_1(B, \tau_1) - \lambda_1(B_{\tau_1})\big].$$
Using again the procedure for the case $n = 1$, we obtain a new Snell envelope, denoted by $Z^{1,N}$, such that $Z^{1,N}_t \ge -C(1 + |B_t|)$. Thus $Z^{1,N}$ and $Z^{2,N}$ are both supermartingales of class (D), bounded from above by $C$ and dominated from below by $-C(1 + |B_t|)$ for some constant $C > 0$ independent of $N$. More importantly, we have
$$Z^{1,N}_0 = \sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}\big[\Phi^{\lambda}(B, \tau)\big], \quad \text{and} \quad Z^{1,N}_{\theta_1} + Z^{2,N}_{\theta_2} - Z^{2,N}_{\theta_1} \ge \Phi^{\lambda}(B, \theta_1, \theta_2) \ \text{ for all } \theta_1 \le \theta_2 \le N, \ \mathbb{P}_0\text{-a.s.}$$

Since $Z^{1,N}$ and $Z^{2,N}$ both increase in $N$, define $Z^1 := \sup_N Z^{1,N}$ and $Z^2 := \sup_N Z^{2,N}$; it follows by the dominated convergence theorem that $Z^1$ and $Z^2$ are both supermartingales of class (DL). By similar arguments as in the case $n = 1$, we may consider $Z^1$ and $Z^2$ in their right-continuous modifications. Moreover, it follows from Lemma 4.8 that
$$Z^1_0 = \sup_{\tau \in \mathcal{T}^a} \mathbb{E}\big[\Phi^{\lambda}(B, \tau)\big] \quad \text{and} \quad Z^1_{\theta_1} + Z^2_{\theta_2} - Z^2_{\theta_1} \ge \Phi^{\lambda}(B, \theta_1, \theta_2) \ \text{ for all } \theta_1 \le \theta_2, \ \mathbb{P}_0\text{-a.s.}$$
Then $(S^1, S^2) := (Z^1, Z^2)$ are the required strong supermartingales in the dual formulation $\mathcal{D}''$. Further, using the Doob-Meyer decomposition together with the martingale representation applied to $Z^1$ and $Z^2$, we obtain the processes $H = (H^1, H^2)$ needed in the dual formulation $\mathcal{D}'$.

Finally, the case n > 2 can be handled by exactly the same recursive argument as for the case n = 2.

4.3.3 The second duality under Assumption 2.3 (i)

Let $N > 0$. We first study the multiple optimal stopping problem
$$\sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}\big[\Phi^{\lambda}(B, \tau)\big] = \sup_{\tau \in \mathcal{T}^a_N} \mathbb{E}\Big[\sum_{k=1}^{n} \big( \Phi_k(B, \tau_1, \cdots, \tau_k) - \lambda_k(B_{\tau_k}) \big)\Big], \tag{4.23}$$
where $\lambda \in \Lambda^+$ and each $\Phi_k$ is bounded, so that
$$-C\Big(1 + \sum_{k=1}^{n} |\omega_{\theta_k}|\Big) \le \Phi^{\lambda}(\bar{\omega}) \le C,$$
for some constant $C$. Denote $v^{n+1}_N(\omega, \theta_1, \cdots, \theta_n, \theta_n) := \Phi^{\lambda}(\omega, \theta_1, \cdots, \theta_n)$.
