BSDE formulation of combined regular and singular stochastic control problems

Bruno Bouchard∗   Patrick Cheridito†   Ying Hu‡

January 2018

Abstract

In this paper we study a class of combined regular and singular stochastic control problems that can be expressed as constrained BSDEs. In the Markovian case, this reduces to a characterization through a PDE with gradient constraint. But the BSDE formulation makes it possible to move beyond Markovian models and consider path-dependent problems. We also provide an approximation of the original control problem with standard BSDEs that yield a characterization of approximately optimal values and controls.

Keywords: singular stochastic control, constrained backward stochastic differential equation, minimal supersolution.

Mathematics Subject Classification. 93E20

1 Introduction

We consider a class of continuous-time stochastic control problems involving two different controls: a regular control affecting the state variable in an absolutely continuous way, and a singular control resulting in a cumulative impact of finite variation. For standard stochastic control in continuous time, we refer to the textbooks [13, 22, 14, 27, 26]. Singular stochastic control goes back to [2, 3] and has subsequently been studied by e.g. [4, 16, 17, 19, 18, 20, 9, 1, 23, 11, 21, 15]. For typical Markovian singular stochastic control problems, it can be deduced from dynamic programming arguments that the optimal value is given by a viscosity solution of a PDE with gradient constraint.

∗ Université Paris-Dauphine, PSL Research University, CNRS, CEREMADE, Paris, France (bouchard@ceremade.dauphine.fr). Partially supported by CAESARS - ANR-15-CE05-0024.

† Department of Mathematics, ETH Zurich, Rämistrasse 101, 8092 Zurich, Switzerland.

‡ Univ Rennes, CNRS, IRMAR - UMR 6625, F-35000 Rennes, France (ying.hu@univ-rennes1.fr) and School of Mathematical Sciences, Fudan University, Shanghai 200433, China. Partially supported by the Lebesgue center of mathematics “Investissements d’avenir” program - ANR-11-LABX-0020-01, by CAESARS - ANR-15-CE05-0024 and by MFG - ANR-16-CE40-0015-01.


On the other hand, it has been shown that PDEs with gradient constraints are related to BSDEs with Z-constraints; see, e.g., [8, 25].

In this paper we directly show that a wide variety of combined regular and singular stochastic control problems can be represented as Z-constrained BSDEs¹. This has the advantage that it allows one to study path-dependent problems. More precisely, we consider optimization problems of the form
\[
\sup_{\alpha,\beta} \mathbb{E}\Big[\int_0^T f(t, X^{\alpha,\beta}, \alpha_t)\,dt + \int_0^T g(t, X^{\alpha,\beta}, \alpha_t)\,d\beta_t + h(X^{\alpha,\beta})\Big] \qquad (1.1)
\]
for a d-dimensional controlled process with dynamics
\[
dX_t^{\alpha,\beta} = \mu(t, X^{\alpha,\beta}, \alpha_t)\,dt + \nu(t, X^{\alpha,\beta}, \alpha_t)\,d\beta_t + \sigma(t, X^{\alpha,\beta})\,dW_t, \qquad X_0 = x \in \mathbb{R}^d, \qquad (1.2)
\]

where (W_t) is an n-dimensional Brownian motion, (α_t) is a predictable process taking values in a compact subset A ⊆ R^k (the regular control) and (β_t) is an l-dimensional process with nondecreasing components (the singular control). The coefficients µ, ν, σ and the functions f, g, h are all allowed to depend in a non-anticipative way on the paths of X^{α,β}.
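For illustration only, the following sketch simulates the controlled dynamics (1.2) with an Euler–Maruyama scheme in dimension d = n = l = 1, for a singular control that is absolutely continuous in time. All coefficient functions and both feedback controls are hypothetical toy choices (and Markovian, while the paper allows path dependence); only the structure of (1.2) is taken from the text.

```python
import numpy as np

# Illustrative Euler-Maruyama simulation of the controlled dynamics (1.2).
# All coefficients and controls below are hypothetical toy choices; the paper
# only imposes the structural conditions of Assumption 2.1 on mu, nu, sigma.

rng = np.random.default_rng(0)
T, N = 1.0, 200                      # horizon and number of time steps
dt = T / N

def mu(t, x, a):    return -0.5 * x + a          # regular-control drift
def nu(t, x, a):    return 1.0                    # impact of the singular control
def sigma(t, x):    return 0.2                    # diffusion coefficient

def alpha(t, x):    return float(np.clip(-x, -1.0, 1.0))  # regular control in A = [-1, 1]
def dbeta(t, x):    return 0.1 * dt                        # absolutely continuous singular control

x = 0.0
for i in range(N):
    t = i * dt
    a = alpha(t, x)
    dW = rng.normal(scale=np.sqrt(dt))
    x += mu(t, x, a) * dt + nu(t, x, a) * dbeta(t, x) + sigma(t, x) * dW

print("X_T ≈", x)
```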

Our main representation result is that the optimal value of (1.1) is given by the initial value of the minimal supersolution of a BSDE
\[
Y_t = h(X) + \int_t^T p(s, X, Z_s)\,ds - \int_t^T Z_s\,dW_s \qquad (1.3)
\]
subject to a constraint of the form q(t, X, Z_t) ∈ R^l_−, where (X_t) is the unique strong solution of an SDE
\[
dX_t = \eta(t, X)\,dt + \sigma(t, X)\,dW_t, \qquad X_0 = x,
\]
with the same σ-coefficient as (1.2).

In addition, we show that the original problem (1.1) can be approximated with a sequence of standard BSDEs
\[
Y_t^j = h(X) + \int_t^T p^j(s, X, Z_s^j)\,ds - \int_t^T Z_s^j\,dW_s. \qquad (1.4)
\]
While the minimal supersolution of the constrained BSDE (1.3) gives the optimal value of the control problem (1.1), the BSDEs (1.4) can be used to characterize nearly optimal values as well as approximately optimal controls.

Due to the constraint q, it might happen that the minimal supersolution of (1.3) jumps at the final time T. In our last result, we show how this jump can be removed by replacing h with the smallest majorant ĥ of h that is consistent with q, the so-called face-lift of h.

The rest of the paper is structured as follows. In Section 2, we introduce the notation and state our main results. All proofs are given in Section 3.

¹ Independently, Elie, Moreau and Possamaï have been working on a similar idea in [10]. But the exact class of control problems studied in [10] is different. Moreover, they employ analytic methods, while we use purely probabilistic arguments.


2 Results

We consider a combined regular and singular stochastic control problem of the form
\[
I := \sup_{(\alpha,\beta)\in\mathcal{A}} \mathbb{E}\Big[\int_0^T f(t, X^{\alpha,\beta}, \alpha_t)\,dt + \int_0^T g(t, X^{\alpha,\beta}, \alpha_t)\,d\beta_t + h(X^{\alpha,\beta})\Big] \qquad (2.1)
\]

for a constant time horizon T ∈ R_+ and a d-dimensional controlled process evolving according to
\[
dX_t^{\alpha,\beta} = \mu(t, X^{\alpha,\beta}, \alpha_t)\,dt + \nu(t, X^{\alpha,\beta}, \alpha_t)\,d\beta_t + \sigma(t, X^{\alpha,\beta})\,dW_t, \qquad X_0 = x \in \mathbb{R}^d, \qquad (2.2)
\]

where (W_t) is an n-dimensional Brownian motion on a probability space (Ω, F, P) with corresponding augmented filtration F = (F_t). The set of controls 𝒜 consists of pairs (α, β), where (α_t)_{0≤t≤T} is an F-predictable process with values in a compact subset A ⊆ R^k (the regular control) and (β_t) is an l-dimensional F-adapted continuous process with nondecreasing components such that β_0 = 0 and β_T ∈ L^2(P) (the singular control). The coefficients µ, ν, σ and the performance functions f, g, h can depend in a non-anticipative way on the paths of X^{α,β}. Depending on their exact specification, there might exist an optimal control in 𝒜, or an optimal control might require (β_t) to jump and can only be approximated with controls in 𝒜.

Let us denote by C^d the space of all continuous functions from [0, T] to R^d and set
\[
\|x\|_t := \sup_{0 \le s \le t} |x_s|, \qquad x \in C^d,
\]
where |·| is the Euclidean norm on R^d. We make the following assumption.

Assumption 2.1

(i) σ : [0, T] × C^d → R^{d×n} is a measurable function such that
\[
\int_0^T |\sigma(t, 0)|^2\,dt < \infty \quad\text{and}\quad |\sigma(t, x) - \sigma(t, y)| \le L\|x - y\|_t \ \text{ for some constant } L \in \mathbb{R}_+.
\]

(ii) µ is of the form µ(t, x, a) = η(t, x) + σ(t, x)µ̃(t, x, a) for measurable functions η : [0, T] × C^d → R^d and µ̃ : [0, T] × C^d × A → R^n such that
\[
\int_0^T |\eta(t, 0)|^2\,dt < \infty \quad\text{and}\quad |\eta(t, x) - \eta(t, y)| \le L\|x - y\|_t \ \text{ for some constant } L \in \mathbb{R}_+,
\]
µ̃(t, x, a) is bounded and continuous in a, and
\[
\int_0^T \sup_{a\in A} |\mu(t, 0, a)|^2\,dt < \infty, \qquad \sup_{a\in A} |\mu(t, x, a) - \mu(t, y, a)| \le L\|x - y\|_t \ \text{ for some } L \in \mathbb{R}_+.
\]

(iii) ν is of the form ν(t, x, a) = σ(t, x)ν̃(t, x, a) for a measurable function ν̃ : [0, T] × C^d × A → R^{n×l} such that ν̃(t, x, a) is bounded and continuous in a, and
\[
\int_0^T \sup_{a\in A} |\nu(t, 0, a)|^2\,dt < \infty, \qquad \sup_{a\in A} |\nu(t, x, a) - \nu(t, y, a)| \le L\|x - y\|_t \ \text{ for some } L \in \mathbb{R}_+.
\]

(iv) The functions f : [0, T] × C^d × A → R and g : [0, T] × C^d × A → R^l are measurable; f(t, x, a) and g(t, x, a) are non-anticipative in x and upper semicontinuous in (x, a); h : C^d → R is upper semicontinuous in x; and the supremum in (1.1) is finite.

Under these assumptions, equation (2.2) has for every pair (α, β) ∈ 𝒜 a unique strong solution (X_t^{α,β}), and the SDE
\[
dX_t = \eta(t, X)\,dt + \sigma(t, X)\,dW_t \qquad (2.3)
\]
has a unique strong solution (X_t); see e.g. Protter (2004).

For our main representation result, Theorem 2.2, we need the mappings p : [0, T] × C^d × R^n → R and q : [0, T] × C^d × R^n → R^l given by
\[
p(t, x, z) := \sup_{a\in A}\,\{f(t, x, a) + z\,\tilde\mu(t, x, a)\} \quad\text{and}\quad q_i(t, x, z) := \sup_{a\in A}\Big\{g_i(t, x, a) + \sum_{j=1}^n z_j\,\tilde\nu_{ji}(t, x, a)\Big\}.
\]
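As a purely illustrative sketch, the next snippet evaluates p and q by a grid search over the compact control set A in dimension k = n = l = 1. The concrete choices of A, f, g, µ̃ and ν̃ are hypothetical; only the structure of the two suprema above is taken from the text.

```python
import numpy as np

# Illustrative evaluation of the driver p and the constraint function q by a
# grid search over the compact control set A (here A = [-1, 1], k = n = l = 1).
# The concrete f, g, mu_tilde, nu_tilde are hypothetical; only the structure
#   p = sup_a {f + z mu_tilde},   q_i = sup_a {g_i + sum_j z_j nu_tilde_ji}
# is taken from the definitions above.

A_grid = np.linspace(-1.0, 1.0, 201)

def f(t, x, a):        return -0.5 * a**2
def g(t, x, a):        return np.array([1.0 - 0.1 * a])     # R^l-valued
def mu_tilde(t, x, a): return np.array([a])                 # R^n-valued
def nu_tilde(t, x, a): return np.array([[1.0]])             # R^{n x l}-valued

def p(t, x, z):
    return max(f(t, x, a) + z @ mu_tilde(t, x, a) for a in A_grid)

def q(t, x, z):
    return np.array([max(g(t, x, a)[i] + z @ nu_tilde(t, x, a)[:, i] for a in A_grid)
                     for i in range(1)])

z = np.array([0.3])
print("p =", p(0.0, None, z), " q =", q(0.0, None, z))
```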

In this paper, a supersolution of the BSDE
\[
Y_t = h(X) + \int_t^T p(s, X, Z_s)\,ds - \int_t^T Z_s\,dW_s \quad\text{with constraint } q(t, X, Z_t) \in \mathbb{R}^l_-
\]
consists of a triplet (Y, Z, K) ∈ S^2 × H^2 × K^2 such that
\[
Y_t = h(X) + \int_t^T p(s, X, Z_s)\,ds + (K_T - K_t) - \int_t^T Z_s\,dW_s \quad\text{and}\quad q(t, X, Z_t) \in \mathbb{R}^l_- \ \text{for all } t, \qquad (2.4)
\]
where

• S^2 is the space of d-dimensional RCLL F-adapted processes (Y_t) such that E[sup_{0≤t≤T} |Y_t|^2] < ∞,

• H^2 the space of R^{d×n}-valued F-predictable processes (Z_t) such that E∫_0^T |Z_t|^2 dt < ∞, and

• K^2 the set of processes (K_t) in S^2 with nondecreasing components starting at 0.

Moreover, we call (Y, Z, K) a minimal supersolution of (2.4) if Y_t ≤ Y'_t, 0 ≤ t ≤ T, for any other supersolution (Y', Z', K'); see e.g. Peng [24].


Theorem 2.2 The constrained BSDE (2.4) has a minimal supersolution (Y, Z, K), and Y_0 = I.

The next result shows that problem (2.1) can be approximated by restricting the controls to regular piecewise constant controls: for j ∈ N, denote by 𝒜^j the set of all pairs (α, β) ∈ 𝒜 of the form α = Σ_{i=0}^{m−1} a_i 1_{(t_i, t_{i+1}]} and β_t = ∫_0^t b_s ds, where b = Σ_{i=0}^{m−1} b_i 1_{(t_i, t_{i+1}]}, the b_i are F_{t_i}-measurable with values in [0, j]^l and 0 = t_0 < t_1 < t_2 < ··· < t_m = T is a deterministic partition of [0, T]. The corresponding control problem is
\[
I^j := \sup_{(\alpha,\beta)\in\mathcal{A}^j} \mathbb{E}\Big[\int_0^T f(t, X^{\alpha,\beta}, \alpha_t)\,dt + \int_0^T g(t, X^{\alpha,\beta}, \alpha_t)\,d\beta_t + h(X^{\alpha,\beta})\Big], \qquad (2.5)
\]
and the following holds:

Proposition 2.3 One has I^j ↑ I as j → ∞.

Moreover, since (2.5) is a regular control problem, it admits a representation through a standard BSDE
\[
Y_t = h(X) + \int_t^T p^j(s, X, Z_s)\,ds - \int_t^T Z_s\,dW_s \qquad (2.6)
\]
with a driver given by
\[
p^j(t, x, z) := \sup_{a\in A,\; b\in[0,j]^l}\,\{f(t, x, a) + z\,\tilde\mu(t, x, a) + [g(t, x, a) + z\,\tilde\nu(t, x, a)]\,b\}.
\]
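Since the expression in the supremum defining p^j is affine in b, the supremum over b ∈ [0, j]^l is attained componentwise at 0 or j. The following sketch exploits this and computes p^j, together with a maximizing pair (a, b), for the same hypothetical coefficients as in the previous snippet; such maximizers are exactly what Theorem 2.4 below turns into approximately optimal controls.

```python
import numpy as np

# Sketch of the driver p^j of the approximating BSDE (2.6), with the same
# hypothetical coefficients as in the previous snippet. Because the term
# [g + z nu_tilde] b is affine in b, the inner supremum over b in [0, j]^l is
# attained componentwise at 0 or j, i.e.
#   p^j(t, x, z) = sup_a { f + z mu_tilde + j * sum_i (g_i + (z nu_tilde)_i)^+ }.

A_grid = np.linspace(-1.0, 1.0, 201)

def f(t, x, a):        return -0.5 * a**2
def g(t, x, a):        return np.array([1.0 - 0.1 * a])
def mu_tilde(t, x, a): return np.array([a])
def nu_tilde(t, x, a): return np.array([[1.0]])

def p_j(t, x, z, j):
    best, best_ab = -np.inf, None
    for a in A_grid:
        gain = g(t, x, a) + z @ nu_tilde(t, x, a)        # R^l-valued
        b = np.where(gain > 0.0, float(j), 0.0)          # optimal b in [0, j]^l
        val = f(t, x, a) + z @ mu_tilde(t, x, a) + gain @ b
        if val > best:
            best, best_ab = val, (a, b)
    return best, best_ab

z = np.array([0.3])
for j in (1, 10, 100):
    val, (a_star, b_star) = p_j(0.0, None, z, j)
    print(f"j = {j}: p^j = {val:.3f}, maximizer a = {a_star:.2f}, b = {b_star}")
```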

Compared to the constrained BSDE (2.4), which gives the optimal value of the control problem (2.1), the BSDE (2.6) provides a characterization of the optimal value of (2.5) as well as corresponding optimal controls.

Theorem 2.4 For every j ∈ N, the BSDE (2.6) has a unique solution (Y^j, Z^j) in S^2 × H^2. Moreover, Y_0^j = I^j, and for any pair of progressively measurable functionals α̂ : [0, T] × C → A, b̂ : [0, T] × C → [0, j]^l satisfying
\[
f(t, X, \hat\alpha_t(X)) + Z_t^j\,\tilde\mu(t, X, \hat\alpha_t(X)) + [g(t, X, \hat\alpha_t(X)) + Z_t^j\,\tilde\nu(t, X, \hat\alpha_t(X))]\,\hat b_t(X) = p^j(t, X, Z_t^j) \quad dt \times d\mathbb{P}\text{-a.e.},
\]
α_t = α̂_t(X^{α,β}) and β_t = ∫_0^t b̂_s(X^{α,β}) ds defines a pair in 𝒜 such that
\[
I^j = \mathbb{E}\Big[\int_0^T f(t, X^{\alpha,\beta}, \alpha_t)\,dt + \int_0^T g(t, X^{\alpha,\beta}, \alpha_t)\,d\beta_t + h(X^{\alpha,\beta})\Big].
\]

Our last result concerns the continuity of the minimal supersolution of (2.4) at the final time T. Due to the constraint q, Y might jump downwards at T. This can be avoided by modifying h. Define the face-lift ĥ : C → R by
\[
\hat h(x) := \sup\big\{h(x + \nu(T, x)\,\ell\,1_{\{T\}}) + g(T, x)\,\ell,\ \ell \in \mathbb{R}^l_+\big\}.
\]
Then, following an argument from [6], the next result holds.


Proposition 2.5 The BSDE
\[
Y_t = \hat h(X) + \int_t^T p(s, X, Z_s)\,ds - \int_t^T Z_s\,dW_s \quad\text{with constraint } q(t, X, Z_t) \in \mathbb{R}^l_- \qquad (2.7)
\]
admits a minimal supersolution (Ŷ, Ẑ, K̂), and one has ΔŶ_T = 0 as well as (Ŷ_t, Ẑ_t, K̂_t) = (Y_t, Z_t, K_t) for t ∈ [0, T), where (Y, Z, K) is the minimal supersolution of (2.4).
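For intuition, here is a small numerical sketch of the face-lift in the case where h depends only on the terminal value of the path and d = l = 1. The terminal reward h and the constants standing in for ν(T, x) and g(T, x) are hypothetical; the supremum over ℓ ≥ 0 is approximated on a grid.

```python
import numpy as np

# Illustrative computation of the face-lift
#   h_hat(x) = sup_{l >= 0} { h(x + nu(T, x) * l) + g(T, x) * l }
# in the scalar case where h depends only on the terminal value of the path.
# The terminal reward h and the constants standing in for nu(T, x) and g(T, x)
# are hypothetical; a negative g plays the role of a proportional cost per
# unit of singular control.

def h(xT):            return np.minimum(xT, 2.0)    # toy terminal reward
nu_T, g_T = 1.0, -0.3                                # hypothetical nu(T, x), g(T, x)

def h_hat(xT, l_grid=np.linspace(0.0, 10.0, 2001)):
    return float(np.max(h(xT + nu_T * l_grid) + g_T * l_grid))

for xT in (0.0, 1.0, 2.0, 3.0):
    print(f"h({xT}) = {float(h(xT)):.2f}   h_hat({xT}) = {h_hat(xT):.2f}")
```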

3 Proofs

We start with the

Proof of Proposition 2.3. It is straightforward to see that I^j is nondecreasing and I^j ≤ I. By a density argument, we can prove that
\[
\lim_{j\to\infty} I^j = I. \qquad \square
\]

Next, we show that the approximate problems (2.5) admit a weak formulation. To do that, we note that by Girsanov's theorem, the process
\[
W_t^{\alpha,\beta} := W_t - \int_0^t [\tilde\mu(s, X, \alpha_s) + \tilde\nu(s, X, \alpha_s)\,b_s]\,ds
\]
is, for every pair (α, β) ∈ 𝒜^j, a Brownian motion under the measure P^{α,β} given by
\[
\frac{d\mathbb{P}^{\alpha,\beta}}{d\mathbb{P}} = \mathcal{E}\Big(\int_0^{\cdot} [\tilde\mu(s, X, \alpha_s) + \tilde\nu(s, X, \alpha_s)\,b_s]\,dW_s\Big)_T.
\]
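As a rough illustration of this change of measure (not part of the proof), the following sketch computes the stochastic exponential along one Euler path of X in the scalar case; the coefficients and controls are again hypothetical toy choices. Such a density is what reweights Monte Carlo estimates of the reward in the weak formulation (3.1) below.

```python
import numpy as np

# Rough illustration of the measure change behind the weak formulation: along
# one Euler path of X (driven by W under P), the density of P^{alpha,beta}
# with respect to P is the stochastic exponential of
#   int_0^T [mu_tilde(s, X, alpha_s) + nu_tilde(s, X, alpha_s) b_s] dW_s.
# Coefficients and controls are hypothetical toy choices in the scalar case.

rng = np.random.default_rng(1)
T, N = 1.0, 200
dt = T / N

def eta(t, x):          return -0.5 * x      # drift of the reference SDE (2.3)
def sigma(t, x):        return 0.2
def mu_tilde(t, x, a):  return a
def nu_tilde(t, x, a):  return 1.0

def alpha(t, x):        return float(np.clip(-x, -1.0, 1.0))
def b(t, x):            return 0.1           # bounded density of the singular control

x, log_density = 0.0, 0.0
for i in range(N):
    t = i * dt
    dW = rng.normal(scale=np.sqrt(dt))
    theta = mu_tilde(t, x, alpha(t, x)) + nu_tilde(t, x, alpha(t, x)) * b(t, x)
    log_density += theta * dW - 0.5 * theta**2 * dt   # log of the stochastic exponential
    x += eta(t, x) * dt + sigma(t, x) * dW            # Euler step for X under P

print("dP^{alpha,beta}/dP along this path ≈", np.exp(log_density))
```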

Moreover, the following holds:

Lemma 3.1 For all (α, β) ∈ 𝒜^j, the augmented filtration generated by W^{α,β} equals F.

Proof. Denote the augmented filtration of W^{α,β} by F^{α,β} = (F_t^{α,β}). Since X is a strong solution of the SDE (2.3), it is F-adapted. So it follows from the definition of W^{α,β} that F^{α,β} is contained in F.

On the other hand, one has α = Σ_{i=0}^{m−1} a_i 1_{(t_i, t_{i+1}]} and b = Σ_{i=0}^{m−1} b_i 1_{(t_i, t_{i+1}]} for a_i and b_i F_{t_i}-measurable. In particular, a_0 and b_0 are deterministic. So it follows from Assumption 2.1 that on [0, t_1], (X_t) is the unique strong solution of
\[
dX_t = \mu(t, X, a_0)\,dt + \nu(t, X, a_0)\,b_0\,dt + \sigma(t, X)\,dW_t^{\alpha,\beta}, \qquad X_0 = x.
\]
Hence, (X_t)_{t∈[t_0,t_1]} is (F_t^{α,β})_{t∈[t_0,t_1]}-adapted, from which it follows that
\[
W_t = W_t^{\alpha,\beta} + \int_0^t [\tilde\mu(s, X, \alpha_s) + \tilde\nu(s, X, \alpha_s)\,b_s]\,ds, \qquad t \in [0, t_1],
\]
is (F_t^{α,β})_{t∈[0,t_1]}-adapted. This shows that a_1 and b_1 are F_{t_1}^{α,β}-measurable. Now the lemma follows by induction over i. □

Using Lemma 3.1, one can derive the following weak formulation of problem (2.5):

Lemma 3.2 One has
\[
I^j = \sup_{(\alpha,\beta)\in\mathcal{A}^j} \mathbb{E}^{\alpha,\beta}\Big[\int_0^T f(t, X, \alpha_t)\,dt + \int_0^T g(t, X, \alpha_t)\,d\beta_t + h(X)\Big], \qquad (3.1)
\]
where E^{α,β} denotes the expectation under P^{α,β}.

Proof. For all (α, β) ∈ 𝒜^j, X^{α,β} is the unique strong solution of
\[
dX_t^{\alpha,\beta} = \mu(t, X^{\alpha,\beta}, \alpha_t)\,dt + \nu(t, X^{\alpha,\beta}, \alpha_t)\,b_t\,dt + \sigma(t, X^{\alpha,\beta})\,dW_t, \qquad X_0 = x,
\]
and X the unique strong solution of
\[
dX_t = \mu(t, X, \alpha_t)\,dt + \nu(t, X, \alpha_t)\,b_t\,dt + \sigma(t, X)\,dW_t^{\alpha,\beta}, \qquad X_0 = x.
\]
Since a_0 and b_0 are deterministic, (α_t, β_t, X_t)_{t∈[0,t_1]} has the same distribution under the measure P^{α,β} as (α_t, β_t, X_t^{α,β})_{t∈[0,t_1]} under P. Moreover, a_1 and b_1 are functions of (W_t)_{t∈[0,t_1]}. So if one defines ã_1 and b̃_1 to be the same functions of (W_t^{α,β})_{t∈[0,t_1]}, then (α̃_t, β̃_t, X_t)_{t∈[t_1,t_2]} has the same distribution under P^{α̃,β̃} as (α_t, β_t, X_t^{α,β})_{t∈[t_1,t_2]} under P. Continuing like this, one sees that for every pair (α, β) ∈ 𝒜^j, there exists a pair (α̃, β̃) ∈ 𝒜^j such that (α̃, β̃, X) has the same distribution under P^{α̃,β̃} as (α, β, X^{α,β}) under P. Conversely, it can be deduced from Lemma 3.1 with the same argument that for every pair (α, β) ∈ 𝒜^j, there exists a pair (α̃, β̃) ∈ 𝒜^j such that (α̃, β̃, X^{α̃,β̃}) has the same distribution under P as (α, β, X) under P^{α,β}. This proves the lemma. □

Now, we are ready to give the

Proof of Theorem 2.4. It follows from our assumptions that the BSDE (2.6) satisfies the standard conditions. So it has a unique solution (Y^j, Z^j) in S^2 × H^2; see e.g. ... For each pair (α, β) ∈ 𝒜^j, we set
\[
Y_t^{\alpha,\beta} := \mathbb{E}^{\alpha,\beta}\Big[\int_t^T f(s, X, \alpha_s)\,ds + \int_t^T g(s, X, \alpha_s)\,d\beta_s + h(X) \,\Big|\, \mathcal{F}_t\Big].
\]
By Lemma 3.1 and the predictable representation theorem, there exists an R^n-valued F-predictable process Z^{α,β} such that
\[
\mathbb{E}^{\alpha,\beta}\Big[\int_0^T f(s, X, \alpha_s)\,ds + \int_0^T g(s, X, \alpha_s)\,d\beta_s + h(X) \,\Big|\, \mathcal{F}_t\Big] = Y_0^{\alpha,\beta} + \int_0^t Z_s^{\alpha,\beta}\,dW_s^{\alpha,\beta}.
\]
Hence,
\[
\begin{aligned}
Y_t^{\alpha,\beta} &= h(X) + \int_t^T f(s, X, \alpha_s)\,ds + \int_t^T g(s, X, \alpha_s)\,d\beta_s - \int_t^T Z_s^{\alpha,\beta}\,dW_s^{\alpha,\beta}\\
&= h(X) + \int_t^T \big\{f(s, X, \alpha_s) + Z_s^{\alpha,\beta}\tilde\mu(s, X, \alpha_s)\big\}\,ds + \int_t^T \big\{g(s, X, \alpha_s) + Z_s^{\alpha,\beta}\tilde\nu(s, X, \alpha_s)\big\}\,d\beta_s - \int_t^T Z_s^{\alpha,\beta}\,dW_s.
\end{aligned}
\]
By a comparison result for BSDEs (see e.g. ...), one has Y^j ≥ Y^{α,β}. On the other hand, it can be deduced from a measurable selection argument that there exist progressively measurable functions α̃ : [0, T] × C × R^n → A and b̃ : [0, T] × C × R^n → [0, j]^l such that
\[
f(t, x, \tilde\alpha(t, x, z)) + z\,\tilde\mu(t, x, \tilde\alpha(t, x, z)) + [g(t, x, \tilde\alpha(t, x, z)) + z\,\tilde\nu(t, x, \tilde\alpha(t, x, z))]\,\tilde b(t, x, z) = p^j(t, x, z).
\]
α_t = α̃(t, X, Z_t^j) and β_t = ∫_0^t b̃(s, X, Z_s^j) ds defines a pair in 𝒜 that can be approximated by a sequence of pairs (α^n, β^n) ∈ 𝒜^j in L^2. Then
\[
\mathbb{E}^{\alpha^n,\beta^n}\Big[\int_0^T f(s, X, \alpha_s^n)\,ds + \int_0^T g(s, X, \alpha_s^n)\,d\beta_s^n + h(X)\Big]
\]
converges to
\[
\mathbb{E}^{\alpha,\beta}\Big[\int_0^T f(s, X, \alpha_s)\,ds + \int_0^T g(s, X, \alpha_s)\,d\beta_s + h(X)\Big],
\]
and
\[
\begin{aligned}
Y_0^j &= h(X) + \int_0^T p^j(s, X, Z_s^j)\,ds - \int_0^T Z_s^j\,dW_s\\
&= h(X) + \int_0^T \big\{f(s, X, \alpha_s) + Z_s^j\tilde\mu(s, X, \alpha_s)\big\}\,ds + \int_0^T \big\{g(s, X, \alpha_s) + Z_s^j\tilde\nu(s, X, \alpha_s)\big\}\,b_s\,ds - \int_0^T Z_s^j\,dW_s\\
&= h(X) + \int_0^T \big\{f(s, X, \alpha_s) + g(s, X, \alpha_s)\,b_s\big\}\,ds - \int_0^T Z_s^j\,dW_s^{\alpha,\beta}\\
&= \mathbb{E}^{\alpha,\beta}\Big[\int_0^T f(s, X, \alpha_s)\,ds + \int_0^T g(s, X, \alpha_s)\,d\beta_s + h(X)\Big].
\end{aligned}
\]
This shows that Y_0^j = I^j.

Finally, if α̂ : [0, T] × C → A and b̂ : [0, T] × C → [0, j]^l are progressively measurable functionals such that
\[
f(t, X, \hat\alpha_t(X)) + Z_t^j\,\tilde\mu(t, X, \hat\alpha_t(X)) + [g(t, X, \hat\alpha_t(X)) + Z_t^j\,\tilde\nu(t, X, \hat\alpha_t(X))]\,\hat b_t(X) = p^j(t, X, Z_t^j) \quad dt \times d\mathbb{P}\text{-a.e.},
\]
it follows as above that
\[
Y_0^j = \mathbb{E}^{\hat\alpha(X),\hat\beta(X)}\Big[\int_0^T f(s, X, \hat\alpha_s(X))\,ds + \int_0^T g(s, X, \hat\alpha_s(X))\,d\hat\beta_s(X) + h(X)\Big].
\]
Moreover, α_t = α̂_t(X^{α,β}) and β_t = ∫_0^t b̂_s(X^{α,β}) ds defines a pair in 𝒜 such that (α, β, X^{α,β}) has the same distribution under P as (α̂(X), β̂(X), X) under P^{α̂(X),β̂(X)}. As a consequence,
\[
I^j = \mathbb{E}\Big[\int_0^T f(t, X^{\alpha,\beta}, \alpha_t)\,dt + \int_0^T g(t, X^{\alpha,\beta}, \alpha_t)\,d\beta_t + h(X^{\alpha,\beta})\Big],
\]
and the proof is complete. □

Proof of Theorem 2.2. We know that I^j ↑ I and I^j = Y_0^j, where (Y^j, Z^j) is the solution of (2.6). On the other hand, it follows from Peng [24] that Y^j increases to Y, where (Y, Z, K) is the minimal supersolution of (2.4). □

References

[1] F. Baldursson (1987). Singular stochastic control and optimal stopping. Stochastics 21(1), 1–40.

[2] J.A. Bather and H. Chernoff (1967). Sequential decisions in the control of a spaceship. Proc. Fifth Berkeley Symposium on Mathematical Statistics and Probability 3, 181–207.

[3] J.A. Bather and H. Chernoff (1967). Sequential decisions in the control of a spaceship (finite fuel). J. Appl. Prob. 4, 584–604.

[4] V.E. Benes, L.A. Shepp and H.S. Witsenhausen (1980). Some solvable stochastic control problems. Stochastics 4(1), 39–83.

[5] D.P. Bertsekas and S.E. Shreve (1978). Stochastic Optimal Control: The Discrete Time Case. Academic Press, New York.

[6] B. Bouchard, R. Elie and L. Moreau (2014). Regularity of BSDEs with a convex constraint on the gains-process. Preprint arXiv:1409.5369.


[7] B. Bouchard, D. Possamaï and X. Tan. A general Doob-Meyer-Mertens decomposition for g-supermartingale systems. Preprint arXiv:1505.00597.

[8] R. Buckdahn and Y. Hu (1998). Hedging contingent claims for a large investor in an incom-plete market. Adv. in Appl. Probab. 30(1), 239–255.

[9] P.L. Chow, J.L. Menaldi and M. Robin (1985). Additive control of stochastic linear systems with finite horizon. SIAM J. Control and Optimization 6, 858–899.

[10] R. Elie, L. Moreau and D. Possamaï (2017). On a class of path-dependent singular stochastic control problems. arXiv preprint.

[11] N. El Karoui and I. Karatzas (1988). Probabilistic aspects of finite-fuel, reflected follower problems. Acta Appl. Math. 11, 223–258.

[12] N. El Karoui, S. Peng and M.C. Quenez (1997). Backward stochastic differential equations in finance. Math. Finance 7(1), 1–71.

[13] W. Fleming and R. Rishel (1975). Deterministic and Stochastic Optimal Control. Springer-Verlag.

[14] W. Fleming and M. Soner (1993). Controlled Markov Processes and Viscosity Solutions. Springer-Verlag.

[15] U.G. Haussmann and W. Suo (1995). Singular optimal stochastic controls II. Dynamic programming. SIAM J. Control Optim. 33(3), 937–959.

[16] I. Karatzas (1981). The monotone follower problem in stochastic decision theory. Applied Mathematics and Optimization 7, 175–189.

[17] I. Karatzas (1983). A class of singular stochastic control problems. Adv. in Appl. Probab. 15(2), 225–254.

[18] I. Karatzas (1985). Probabilistic aspects of finite-fuel stochastic control. Proc Nat. Acad. Sci. USA 82(17), 5579–81.

[19] I. Karatzas and S.E. Shreve (1984). Connection between optimal stopping and singular stochastic control I. Monotone follower problems. SIAM J. on Control and Optimization 22, 856–877.

[20] I. Karatzas and S.E. Shreve (1985). Connection between optimal stopping and singular stochastic control II. Reflected follower problems. SIAM J. on Control and Optimization 23, 433–451.


[21] T.O. Kobila (1993). A class of solvable stochastic investment problems involving singular controls. Stoch. Stoch. Rep. 43, 29–63.

[22] N. Krylov (1980). Controlled Diffusion Processes. Springer-Verlag.

[23] J.L. Menaldi and M.I. Taksar (1987). Singular control of multidimensional Brownian motion. IFAC Proceedings Volume 20(5), 221–224.

[24] S. Peng (1999). Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob–Meyer’s type. Probab. Theory Related Fields 113(4), 473–499.

[25] S. Peng and M. Xu (2013). Constrained BSDEs, viscosity solutions of variational inequalities and their applications. Math. Control Relat. Fields 3(2), 233–244.

[26] N. Touzi (2013). Optimal Stochastic Control, Stochastic Target Problems, and Backward SDE. Fields Institute Monographs. Springer.

[27] J. Yong and X.Y. Zhou (2000). Stochastic Controls, Hamiltonian Systems and HJB Equations. Springer-Verlag.
