
Optimal investment with a risk penalization

In the document The DART-Europe E-theses Portal (Page 179-187)

\[
X_T^{\hat\pi_i} - \lambda_i \bar X_T^{i,\hat\pi} - \int_0^T g_i(\sigma_u \hat\pi_u^i)\,du. \tag{6.8}
\]

In order to simplify notations, from now on we will write $X_t^i$ and $\bar X_t^i$. We say that a random variable $X$ has exponential moments of any order if:

\[
\forall \delta > 0,\quad E\big[e^{\delta|X|}\big] < \infty, \tag{6.9}
\]

and we introduce the set:

\[
\mathcal{E} := \big\{(Y_t);\ (\mathcal{F}_t)\text{-predictable and } \forall \delta > 0,\ E\big[e^{\delta \sup_{t\in[0,T]}|Y_t|}\big] < \infty\big\}. \tag{6.10}
\]

Remark 6.3 Notice that, without further assumptions, the framework of the previous chapter is a particular case of this one. Indeed it corresponds to the functions:

\[
g_i(x) = \begin{cases} 0 & \text{if } x \in A_i \\ +\infty & \text{if } x \notin A_i. \end{cases}
\]

But in fact we will make stronger assumptions later which will exclude this setting.

In the first section of this paper, we characterize the value function of a problem of optimal investment with a penalization function. We then apply this result to our problem of optimal investment under relative performance concerns, and in particular show the existence and uniqueness of a Nash equilibrium for general dynamics of the assets. Next we take a closer look at the case of deterministic $\theta$ and $\sigma$. Finally we derive a few examples.

6.3 Optimal investment with a risk penalization

In this section, we consider a single agent who wants to track a contingent claim $F$, which is assumed to be $\mathcal{F}_T$-measurable and to have exponential moments of any order (recall the definition given by (6.9)). Given $\eta > 0$ and a Lipschitz penalization function $g : \mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$, we consider the following problem:

\[
V = \sup_{\pi \in \mathcal{A}} E\Big[-e^{-\frac{1}{\eta}\big(X_T^{\pi} - \int_0^T g(\sigma_u \pi_u)\,du - F\big)}\Big] \tag{6.11}
\]

where $\mathcal{A}$ is the set of admissible processes, i.e. the processes satisfying assumptions (6.6)-(6.7) with $\eta_i$ replaced by $\eta$.

Let us introduce the following function $f : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$:
\[
f(\theta, z) := \inf_{p \in \mathbb{R}^d} m(\theta, z, p) := \inf_{p \in \mathbb{R}^d}\Big\{g(p) + \frac{1}{2\eta}|p - z|^2 - p.\theta\Big\}. \tag{6.12}
\]
Introducing the convex dual of $h(p) := g(p) + \frac{1}{2\eta}|p|^2$ by means of the Fenchel-Legendre transform (which is convex):
\[
\forall x \in \mathbb{R}^d,\quad \tilde h(x) = \sup_{p \in \mathbb{R}^d}\{p.x - h(p)\}, \tag{6.13}
\]
we notice that $f$ can be rewritten as:
\[
f(\theta, z) = \frac{1}{2\eta}|z|^2 - \tilde h\Big(\frac{1}{\eta}z + \theta\Big), \tag{6.14}
\]

and $f$ is continuous w.r.t. $(\theta, z)$. In fact, as $\tilde h$ is convex, for any $\theta \in \mathbb{R}^d$ the gradient of $f(\theta, .)$ is defined for almost every $z \in \mathbb{R}^d$, and $f$ is locally Lipschitz w.r.t. $z$, uniformly on compact sets w.r.t. $\theta$, which means that for every compact set $K \subset \mathbb{R}^d$:
\[
\exists L_K > 0,\ \forall (\theta, z, z') \in K \times K \times K,\quad |f(\theta, z) - f(\theta, z')| \le L_K |z - z'|.
\]
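For completeness, the dual expression (6.14) follows from (6.12) by completing the square and absorbing the quadratic term into $h$:

```latex
\begin{align*}
g(p) + \tfrac{1}{2\eta}|p - z|^2 - p.\theta
  &= \underbrace{g(p) + \tfrac{1}{2\eta}|p|^2}_{=\,h(p)}
     - p.\big(\tfrac{1}{\eta}z + \theta\big) + \tfrac{1}{2\eta}|z|^2,
\intertext{so that, taking the infimum over $p \in \mathbb{R}^d$,}
f(\theta, z)
  &= \tfrac{1}{2\eta}|z|^2
     - \sup_{p \in \mathbb{R}^d}\big\{ p.\big(\tfrac{1}{\eta}z + \theta\big) - h(p) \big\}
   = \tfrac{1}{2\eta}|z|^2 - \tilde h\big(\tfrac{1}{\eta}z + \theta\big).
\end{align*}
```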

Moreover, as $g$ is Lipschitz, for any fixed $(\theta, z) \in \mathbb{R}^d \times \mathbb{R}^d$, $m(\theta, z, p) \to \infty$ as $|p| \to \infty$, so that the infimum in the definition of $f$ is attained, which implies that for any $\theta \in \mathbb{R}^d$ and $z \in \mathbb{R}^d$, there exists $p \in \mathbb{R}^d$ satisfying $f(\theta, z) = m(\theta, z, p)$. We introduce:
\[
I(\theta, z) := \operatorname*{arg\,min}_{p \in \mathbb{R}^d} m(\theta, z, p).
\]

Assuming the axiom of choice, we can simultaneously choose a representative in each $I(\theta, z)$ and therefore define a function $p : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ such that $f(\theta, z) = m(\theta, z, p(\theta, z))$. We hereafter call such a function a representation of $I$.

In order to simplify notations, we write $f_t(z) := f(\theta_t, z)$, $m_t(z, p) := m(\theta_t, z, p)$ and $I_t(z) := I(\theta_t, z)$.

Consider the following assumptions:

− $f_t$ is Lipschitz w.r.t. $z$, uniformly in $t, \omega$; (H1)

− $f_t$ is locally Lipschitz w.r.t. $z$, uniformly in $t, \omega$, and has linear growth w.r.t. $z$, uniformly in $t, \omega$; (H1′)

− for any representation $p$ of $I$, there exists $C > 0$ such that $\forall z, t, \omega$, $|p_t(z) - z| \le C$ a.s.; (H2)

− for any $(t, \omega, z) \in \mathbb{R}_+ \times \Omega \times \mathbb{R}^d$, $I_t(z)$ is a singleton. (H3)

In (H1′), by uniform linear growth we mean that:
\[
\exists L_1, L_2 > 0,\ \forall t, \omega, z,\quad |f_t(z)| \le L_1 + L_2|z|.
\]

We derive here a few sufficient conditions for assumptions (H1) to (H3) to hold.

Proposition 6.4 (i) If there exists a compact $K$ such that $g$ is $C^1$ on $\mathbb{R}^d \setminus K$ and $\nabla g$ is bounded on $\mathbb{R}^d \setminus K$, then (H1′) and (H2) are satisfied;

(ii) If moreover $p \mapsto \nabla g(p) + \frac{1}{\eta}p$ is one-to-one on $\mathbb{R}^d \setminus K$, and its inverse is Lipschitz on its domain, then (H1) and (H2) are satisfied;

(iii) If $h(p) = g(p) + \frac{1}{2\eta}|p|^2$ is strictly convex, then (H3) is satisfied.

Remark 6.5 In particular, if $g(x) = |x|$ is the canonical euclidean norm, assumptions (H1) to (H3) are satisfied.

If $g(x) = 0$, then we are in the same setting as in the complete market situation of the previous chapter, see section 4.3.
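To make Remark 6.5 concrete, here is a minimal numerical sketch for $g(x) = |x|$ in dimension $d = 1$ (the value $\eta = 0.5$ and the sample points are illustrative choices, not from the text). For this $g$, completing the square shows that the minimizer of $m(\theta, z, \cdot)$ is the soft-thresholding of $z + \eta\theta$ at level $\eta$, which makes both the attainment of the infimum in (6.12) and an (H2)-type bound $|p(\theta, z) - z| \le \eta(1 + |\theta|)$ easy to check against a brute-force grid minimization:

```python
import math

ETA = 0.5  # illustrative risk parameter

def m(theta, z, p):
    # m(theta, z, p) = g(p) + |p - z|^2 / (2*eta) - p.theta, with g(p) = |p|, d = 1
    return abs(p) + (p - z) ** 2 / (2 * ETA) - p * theta

def p_star(theta, z):
    # completing the square: m = |p| + (p - u)^2/(2*eta) + const with u = z + eta*theta,
    # whose unique minimizer is the soft-thresholding of u at level eta
    u = z + ETA * theta
    return math.copysign(max(abs(u) - ETA, 0.0), u)

def f_grid(theta, z, lo=-10.0, hi=10.0, n=40_001):
    # brute-force approximation of f(theta, z) = inf_p m(theta, z, p), cf. (6.12)
    step = (hi - lo) / (n - 1)
    return min(m(theta, z, lo + i * step) for i in range(n))

for theta, z in [(0.3, 1.7), (-1.1, -0.4), (0.0, 3.0)]:
    p = p_star(theta, z)
    assert abs(m(theta, z, p) - f_grid(theta, z)) < 1e-3  # infimum attained at p_star
    assert abs(p - z) <= ETA * (1 + abs(theta)) + 1e-12   # (H2)-type bound
```

Note that $h(p) = |p| + \frac{1}{2\eta}|p|^2$ is strictly convex here, so the minimizer is unique, in line with (iii) of Proposition 6.4.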

Proof. (i): As $g$ is Lipschitz, for any $(t, \omega, z) \in \mathbb{R}_+ \times \Omega \times \mathbb{R}^d$, the minimum is attained. Therefore it is either attained for a certain $p \in K$ or for a $p$ satisfying the first order condition:
\[
\nabla g(p) = \theta_t + \frac{1}{\eta}(z - p).
\]

Let us show that for $z$ large enough $p$ cannot be in $K$. Indeed, by convexity we have $2|p - z|^2 \ge |z|^2 - 2|p|^2$, so that:
\[
\inf_{p \in K}\Big\{g(p) + \frac{1}{2\eta}|p - z|^2 - p.\theta_t\Big\} \ge \inf_{p \in K}\Big\{g(p) - \frac{1}{2\eta}|p|^2 - \|\theta\||p|\Big\} + \frac{1}{4\eta}|z|^2 \ge A + \frac{1}{4\eta}|z|^2,
\]

for a certain constant $A$, independent of $z$, while for $p = z$, $B$ being the Lipschitz constant of $g$:
\[
g(z) - z.\theta_t \le |g(0)| + (B + \|\theta\|)|z|.
\]
Therefore:

\[
\inf_{p \in K}\Big\{g(p) + \frac{1}{2\eta}|p - z|^2 - p.\theta_t\Big\} - \inf_{p \in \mathbb{R}^d}\Big\{g(p) + \frac{1}{2\eta}|p - z|^2 - p.\theta_t\Big\} \ge A - |g(0)| + \frac{1}{4\eta}|z|^2 - (B + \|\theta\|)|z| \to \infty \quad \text{as } |z| \to \infty,
\]

which implies that there exists a compact $K'$ of $\mathbb{R}^d$ such that for $z \notin K'$, any minimizer of $f_t$ is not in $K$. And so for $z \notin K'$, let $t \in [0, T]$ and $p = p_t(z)$ such that $f_t(z) = m_t(z, p)$; then we must have:
\[
\nabla g(p) = \theta_t + \frac{1}{\eta}(z - p),
\]
so that, as $\nabla g$ is bounded by a certain $C > 0$, for all $t$, $z \notin K'$ and $p_t(z)$, we get that:
\[
|p_t(z) - z| \le \eta(C + \|\theta\|).
\]

Finally, if $z \in K'$, we show as before that $p$ cannot be too large. Indeed $g(z) - z.\theta_t$ is bounded for $z \in K'$, while:
\[
g(p) + \frac{1}{2\eta}|p - z|^2 - p.\theta_t \ge -(C + \|\theta\|)|p| - \frac{1}{2\eta}|z|^2 + \frac{1}{4\eta}|p|^2 \to \infty, \quad \text{as } |p| \to \infty,
\]
uniformly in $z \in K'$.

Then we already know that $f_t$ is uniformly locally Lipschitz, so we only need to check that it has uniform linear growth. We know that $p_t(z) - z$ is uniformly bounded by $C$ and $g$ is $B$-Lipschitz, so we have:
\[
|f_t(z)| = \Big|g(p_t(z)) + \frac{1}{2\eta}|p_t(z) - z|^2 - p_t(z).\theta_t\Big| \le |g(0)| + (B + \|\theta\|)|p_t(z)| + \frac{1}{2\eta}C^2 \le |g(0)| + \frac{1}{2\eta}C^2 + (B + \|\theta\|)(C + |z|),
\]
so we have the result.

(ii): Now if $p \mapsto \nabla g(p) + \frac{1}{\eta}p$ is one-to-one outside $K$, it is a bijection from $\mathbb{R}^d \setminus K$ onto a certain $D$ and we write its inverse $\varphi : D \to \mathbb{R}^d \setminus K$. As we have seen before, there exists a compact $C$ such that $(\mathbb{R}^d \setminus C) \subset D$. Moreover, $I_t(z)$ is a singleton for $|z|$ large enough, uniformly in $(t, \omega)$. Now if $\varphi$ is Lipschitz, then $z \mapsto p_t(z)$ is Lipschitz too for $|z|$ large enough, uniformly in $(t, \omega)$, and as (H2) is satisfied, $z \mapsto |p_t(z) - z|^2$ is also uniformly Lipschitz, so that $f_t$ is uniformly Lipschitz for $z$ large enough. Now because of (i), we know that $f_t$ is also uniformly locally Lipschitz, so it is uniformly globally Lipschitz on $\mathbb{R}^d$.

(iii): Using the expression (6.14) for $f_t$, it is immediate that the minimum is attained only at one point if $h$ is strictly convex. ✷

For any $m \in \mathbb{N}$, we denote by $\mathcal{H}^2(\mathbb{R}^m)$ and $\mathcal{S}^2(\mathbb{R}^m)$ the following spaces of processes:
\[
\mathcal{H}^2(\mathbb{R}^m) := \Big\{(Y_t);\ \mathbb{R}^m\text{-valued, predictable process with } E\int_0^T |Y_t|^2\,dt < \infty\Big\}, \tag{6.15}
\]
\[
\mathcal{S}^2(\mathbb{R}^m) := \Big\{(Y_t);\ \mathbb{R}^m\text{-valued, continuous and adapted process with } E\Big[\sup_{t \in [0,T]} |Y_t|^2\Big] < \infty\Big\}, \tag{6.16}
\]
and by $\mathcal{P}$ the $\sigma$-field of predictable sets of $[0, T] \times \Omega$. Finally let $\mathcal{T}$ be the family of stopping times less than or equal to $T$.

Then we have the following result:

Theorem 6.6 Assume that $g$ satisfies (H1) and (H2) and $F$ has exponential moments of any order. Then the value of the optimization problem (6.11) is given by:
\[
V(x) = -e^{-\frac{1}{\eta}(x - Y_0)},
\]
where $(Y, Z)$ is the unique solution in $\mathcal{S}^2(\mathbb{R}) \times \mathcal{H}^2(\mathbb{R}^d)$ of the following BSDE:
\[
dY_t = -f_t(Z_t)\,dt + Z_t.dW_t, \qquad Y_T = F,
\]
with $f_t$ defined by (6.12).

Moreover $Y \in \mathcal{E}$ and there exists an optimal portfolio $\hat\pi \in \mathcal{A}$ such that for each $t \in [0, T]$:
\[
\hat\pi_t \in \sigma_t^{-1} I_t(Z_t), \quad P\text{-a.s.}
\]
Assume also that (H3) is satisfied; then the optimal portfolio is unique.

Proof. From the definition of $f_t$, we immediately see that $f$ is $\mathcal{P} \times \mathcal{B}(\mathbb{R}^d)$-measurable, that $f_.(0) \in \mathcal{S}^2(\mathbb{R})$ and, thanks to assumption (H1), that $f_t$ is uniformly Lipschitz. $F \in L^2$ is clear too. Therefore the existence and uniqueness of $(Y, Z) \in \mathcal{S}^2(\mathbb{R}) \times \mathcal{H}^2(\mathbb{R}^d)$ are a well-known result (see Pardoux and Peng [63] or El Karoui, Peng and Quenez [26]). Moreover, as $F$ has exponential moments of any order, we can apply Corollary 4 of Briand and Hu [7], which guarantees that $Y \in \mathcal{E}$ (again it is straightforward that their assumptions are satisfied in our case).

Then we define for $\pi \in \mathcal{A}$:
\[
M_t^\pi := -e^{-\frac{1}{\eta}\big(X_t^\pi - \int_0^t g(\sigma_u \pi_u)\,du - Y_t\big)},
\]
and we will show that $M^\pi$ is a $P$-supermartingale for any $\pi \in \mathcal{A}$, and a $P$-martingale for a certain $\hat\pi$, which will then be an optimal control.

We write $p_t = \sigma_t \pi_t$ and we compute:
\[
\frac{dM_t^\pi}{M_t^\pi} = -\frac{1}{\eta}\Big[\Big(p_t.\theta_t - g(p_t) + f_t(Z_t) - \frac{1}{2\eta}|p_t - Z_t|^2\Big)dt + (p_t - Z_t).dW_t\Big].
\]
By definition of $f_t$,
\[
-\frac{1}{\eta}\Big(p_t.\theta_t - g(p_t) + f_t(Z_t) - \frac{1}{2\eta}|p_t - Z_t|^2\Big) \ge 0,
\]
while $M_t^\pi < 0$, so that $M^\pi$ is a local supermartingale for any $\pi \in \mathcal{A}$. If $\hat\pi = \sigma^{-1}p$, where $p$ is the process constructed in Lemma 6.9, then $M^{\hat\pi}$ is a local martingale. Moreover, if (H3) is satisfied, then $M^\pi$ is a strict local supermartingale for any $\pi \ne \hat\pi$.

In other words, there exists a sequence $(\tau_n^\pi)$ of stopping times such that $\tau_n^\pi \to \infty$ a.s. and, for any $n$ and $0 \le s \le t \le T$:
\[
E[M_{t \wedge \tau_n}^\pi \,|\, \mathcal{F}_s] \le M_{s \wedge \tau_n}^\pi \ \text{ for any } \pi \in \mathcal{A}, \qquad E[M_{t \wedge \tau_n}^{\hat\pi} \,|\, \mathcal{F}_s] = M_{s \wedge \tau_n}^{\hat\pi}.
\]

Now if $\pi \in \mathcal{A}$, because of condition (6.7), there exists $p > 1$ such that $\{e^{-\frac{1}{\eta}X_\tau^\pi};\ \tau \in \mathcal{T}\}$ is uniformly bounded in $L^p$ by a constant $C$, while $Y \in \mathcal{E}$. Let $r \in (1, p)$; then $\frac{p}{r} > 1$, so we define the conjugate of $\frac{p}{r}$ by $q$ such that $\frac{r}{p} + \frac{1}{q} = 1$. Then, using Hölder's inequality, we get for any $\tau \in \mathcal{T}$:
\[
E\Big[e^{-\frac{r}{\eta}\big(X_\tau^\pi - \int_0^\tau g(\sigma_u\pi_u)\,du - Y_\tau\big)}\Big] \le \Big(E\Big[e^{-\frac{p}{\eta}\big(X_\tau^\pi - \int_0^\tau g(\sigma_u\pi_u)\,du\big)}\Big]\Big)^{\frac{r}{p}} \Big(E\Big[e^{\frac{rq}{\eta}Y_\tau}\Big]\Big)^{\frac{1}{q}} \le C^{\frac{r}{p}} \Big(E\Big[e^{\frac{rq}{\eta}\sup_{t\in[0,T]}|Y_t|}\Big]\Big)^{\frac{1}{q}}.
\]
As $\frac{rq}{\eta} > 0$ and $Y \in \mathcal{E}$, the family $\{e^{-\frac{1}{\eta}(X_\tau^\pi - \int_0^\tau g(\sigma_u\pi_u)\,du - Y_\tau)};\ \tau \in \mathcal{T}\}$ is uniformly bounded in $L^r$ ($r > 1$) and therefore uniformly integrable. As a consequence, we can apply Lebesgue's theorem as we send $n$ to infinity, and we have:
\[
E[M_t^\pi \,|\, \mathcal{F}_s] = \lim_{n \to \infty} E[M_{t \wedge \tau_n}^\pi \,|\, \mathcal{F}_s] \le \lim_{n \to \infty} M_{s \wedge \tau_n}^\pi = M_s^\pi.
\]

Finally, let us show that $\hat\pi \in \mathcal{A}$. Then, as previously, we could apply Lebesgue's theorem, which would guarantee the martingale property of $M^{\hat\pi}$, and so the optimality of $\hat\pi$.

By definition, $f_t(Z_t) = m_t(Z_t, \sigma_t\hat\pi_t)$, so that we have for any $\tau \in \mathcal{T}$ and $r > 1$:
\[
e^{-\frac{r}{\eta}\big(X_\tau^{\hat\pi} - \int_0^\tau g(\sigma_u\hat\pi_u)\,du - Y_\tau\big)} = e^{-\frac{r}{\eta}(x - Y_0)}\, e^{-\frac{r}{\eta}\big(\int_0^\tau [\sigma_u\hat\pi_u.\theta_u - g(\sigma_u\hat\pi_u) + f_u(Z_u)]\,du + \int_0^\tau (\sigma_u\hat\pi_u - Z_u).dW_u\big)}
\]
\[
= e^{-\frac{r}{\eta}(x - Y_0)}\, e^{-\frac{r}{\eta}\big(\int_0^\tau \frac{1}{2\eta}|\sigma_u\hat\pi_u - Z_u|^2\,du + \int_0^\tau (\sigma_u\hat\pi_u - Z_u).dW_u\big)}.
\]

Thanks to assumption (H2), we know that $|\sigma_t\hat\pi_t - Z_t| \le C$. As $\theta$ is bounded, it implies that
\[
\Big(e^{-\frac{r}{\eta}\int_0^t (\sigma_u\hat\pi_u - Z_u).dW_u - \frac{r^2}{2\eta^2}\int_0^t |\sigma_u\hat\pi_u - Z_u|^2\,du}\Big)_{t \in [0,T]}
\]
is a $P$-martingale and that we can define the equivalent measure $Q^\eta$ by its Radon-Nikodym density:
\[
\frac{dQ^\eta}{dP} = e^{-\frac{r}{\eta}\int_0^T (\sigma_u\hat\pi_u - Z_u).dW_u - \frac{r^2}{2\eta^2}\int_0^T |\sigma_u\hat\pi_u - Z_u|^2\,du}.
\]
Then we have:

\[
E\Big[e^{-\frac{r}{\eta}\big(X_\tau^{\hat\pi} - \int_0^\tau g(\sigma_u\hat\pi_u)\,du - Y_\tau\big)}\Big] = e^{-\frac{r}{\eta}(x - Y_0)}\, E^{Q^\eta}\Big[e^{\frac{r(r-1)}{2\eta^2}\int_0^\tau |\sigma_u\hat\pi_u - Z_u|^2\,du}\Big] \le e^{-\frac{r}{\eta}(x - Y_0)}\, e^{\frac{r(r-1)}{2\eta^2} T C^2}.
\]

Let $p \in (1, r)$, and $q$ defined by $\frac{p}{r} + \frac{1}{q} = 1$; applying Hölder's inequality and using the fact that $Y \in \mathcal{E}$, we get:
\[
E\Big[e^{-\frac{p}{\eta}\big(X_\tau^{\hat\pi} - \int_0^\tau g(\sigma_u\hat\pi_u)\,du\big)}\Big] \le \Big(E\Big[e^{-\frac{r}{\eta}\big(X_\tau^{\hat\pi} - \int_0^\tau g(\sigma_u\hat\pi_u)\,du - Y_\tau\big)}\Big]\Big)^{\frac{p}{r}} \Big(E\Big[e^{-\frac{pq}{\eta}Y_\tau}\Big]\Big)^{\frac{1}{q}} \le A,
\]
where $A$ is a constant independent of $\tau$, so that $\hat\pi \in \mathcal{A}$.

Finally, if (H3) is satisfied, we see that $\hat\pi$ is the unique optimal portfolio, as for any other $\pi \in \mathcal{A}$, $M^\pi$ is a strict supermartingale. ✷
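As a sanity check of Theorem 6.6, the following sketch treats the degenerate case $g \equiv 0$ with constant scalar coefficients (all numerical values, and the normalization $\sigma = 1$, are illustrative assumptions, not from the text). There $f(\theta, z) = -z.\theta - \frac{\eta}{2}|\theta|^2$, so with $F = 0$ the BSDE has the deterministic solution $Y_t = -\frac{\eta}{2}|\theta|^2(T - t)$, $Z = 0$, and $\hat\pi_t = \eta\theta$; for a constant portfolio $p$ the expected utility is available in closed form, and maximizing it over a grid recovers the BSDE value:

```python
import math

# illustrative constants: risk tolerance, market price of risk, horizon, initial wealth
ETA, THETA, T, X0 = 0.8, 0.4, 2.0, 1.0

def utility(p):
    # E[-exp(-X_T/eta)] for the constant portfolio p, where
    # X_T = X0 + p*THETA*T + p*W_T and E[exp(a*W_T)] = exp(a^2*T/2)
    return -math.exp(-(X0 + p * THETA * T) / ETA + p * p * T / (2 * ETA * ETA))

Y0 = -0.5 * ETA * THETA ** 2 * T   # Y_t = -(eta/2)*theta^2*(T - t) solves the BSDE (g = 0, F = 0)
V = -math.exp(-(X0 - Y0) / ETA)    # value from Theorem 6.6: V(x) = -exp(-(x - Y0)/eta)
p_hat = ETA * THETA                # optimal portfolio sigma^{-1}(Z + eta*theta) with Z = 0, sigma = 1

# the candidate p_hat attains the value V and beats a grid of constant portfolios
assert abs(utility(p_hat) - V) < 1e-12
assert utility(p_hat) >= max(utility(-2 + 0.01 * k) for k in range(401)) - 1e-9
```

Restricting to constant portfolios only checks optimality within that subclass, but in this deterministic setting the optimizer given by the theorem is itself constant, so the grid maximum and the BSDE value agree.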

Remark 6.7 From the previous proof, we can see that in fact, for any $p > 1$, the family
\[
\Big\{e^{-\frac{1}{\eta_i}\big(X_\tau^{i,\hat\pi} - \int_0^\tau g_i(\sigma_u\hat\pi_u^i)\,du\big)};\ \tau \in \mathcal{T}\Big\}
\]
is uniformly bounded in $L^p(P)$.

Remark 6.8 In fact this result is also true if assumption (H1) is replaced by (H1′), as Hamadène [38] showed that under this weaker assumption, the existence and uniqueness of a solution to the BSDE still hold true. But as this is true only in dimension 1, we will not be able to use this result in the next section, and therefore we stated our result in the Lipschitz case.

We show here that we can indeed select in a predictable way the process p in the previous proof.

Lemma 6.9 Let $(Z_t)$ be a predictable process. Then there exists a predictable process $(p_t)$ satisfying, for all $t \in [0, T]$, $p_t \in I_t(Z_t)$, $P$-a.s.

Proof. We will show that there exists a $\mathcal{B}(\mathbb{R}^d) \otimes \mathcal{B}(\mathbb{R}^d)$-$\mathcal{B}(\mathbb{R}^d)$ measurable mapping $\psi$ such that $\psi(\theta, z) \in I(\theta, z)$.

Recall that $m(\theta, z, p) = g(p) + \frac{1}{2\eta}|p - z|^2 - p.\theta$. Moreover, $m$ is continuous w.r.t. $(\theta, z, p)$ and, as $g$ is Lipschitz, for any $n \in \mathbb{N}$, there exists $Y_n$ compact of $\mathbb{R}^d$ such that, for any $p \notin Y_n$ and $(\theta, z) \in \bar B(0, n)$, we have:
\[
m(\theta, z, p) > m(\theta, z, 0),
\]
where $\bar B(0, n)$ is the closed ball of radius $n$ in $\mathbb{R}^{2d}$. Therefore, for any $(\theta, z) \in \bar B(0, n)$, we have:
\[
f(\theta, z) = \inf_{p \in \mathbb{R}^d} m(\theta, z, p) = \inf_{p \in Y_n} m(\theta, z, p).
\]

Let us define the following sequence of functions $(f_n)$:
\[
f_n(\theta, z) := \inf_{p \in Y_n} m(\theta, z, p).
\]

We therefore have $f_n = f$ on $\bar B(0, n)$, and in particular $(f_n)$ converges pointwise to $f$. As $Y_n$ is compact, we can use Proposition 7.33, p.153 of Bertsekas and Shreve [5], which tells us that, for each $n$, there exists a $\mathcal{B}(\mathbb{R}^d) \otimes \mathcal{B}(\mathbb{R}^d)$-$\mathcal{B}(\mathbb{R}^d)$ measurable function
\[
\varphi_n : \mathbb{R}^{2d} \to Y_n,
\]
such that $m(\theta, z, \varphi_n(\theta, z)) = f_n(\theta, z)$ for any $(\theta, z) \in \mathbb{R}^{2d}$. Moreover, as $f_n = f$ on $\bar B(0, n) \subset \bar B(0, n+1)$, $f_n = f_{n+1}$ on $\bar B(0, n)$. Therefore, if we define $\psi_0 = \varphi_0$ and, for $n \ge 0$:
\[
\psi_{n+1} = \begin{cases} \psi_n & \text{on } \bar B(0, n) \\ \varphi_{n+1} & \text{otherwise,} \end{cases}
\]
then $\psi_n$ is $\mathcal{B}(\mathbb{R}^d) \otimes \mathcal{B}(\mathbb{R}^d)$-$\mathcal{B}(\mathbb{R}^d)$ measurable as well, and $m(\theta, z, \psi_n(\theta, z)) = f_n(\theta, z)$. Finally, as $\bigcup_{n \in \mathbb{N}} \bar B(0, n) = \mathbb{R}^{2d}$ while $\psi_{n+1} = \psi_n$ on $\bar B(0, n)$, $(\psi_n)$ converges pointwise to a function $\psi$ which is $\mathcal{B}(\mathbb{R}^d) \otimes \mathcal{B}(\mathbb{R}^d)$-$\mathcal{B}(\mathbb{R}^d)$ measurable and such that $\psi(\theta, z) \in I(\theta, z)$ for any $\theta$ and $z$.

As $(\theta_t)$ and $(Z_t)$ are predictable processes, $p_t = \psi(\theta_t, Z_t)$ satisfies the required conditions. ✷
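A loose numerical analogue of this patching construction, for $d = 1$ and $g(p) = |p|$ (with an illustrative $\eta = 0.5$; a grid minimization over the compact $Y_n$ stands in for the measurable selector $\varphi_n$ of [5]): since the minimizer satisfies $|p| \le |z| + \eta|\theta|$, the compact $Y_n = [-(1+\eta)n - 1,\ (1+\eta)n + 1]$ contains it whenever $(\theta, z) \in \bar B(0, n)$, so the restricted infimum $f_n$ already equals $f$ there and no longer changes as $n$ grows:

```python
import math

ETA = 0.5  # illustrative risk parameter

def m(theta, z, p):
    # m(theta, z, p) = |p| + |p - z|^2 / (2*eta) - p*theta   (d = 1, g = |.|)
    return abs(p) + (p - z) ** 2 / (2 * ETA) - p * theta

def phi_n(n, theta, z, step=1e-3):
    # stand-in for the selector phi_n: approximate argmin of m over the compact Y_n
    radius = (1 + ETA) * n + 1
    pts = int(2 * radius / step) + 1
    return min((-radius + i * step for i in range(pts)), key=lambda p: m(theta, z, p))

def f_n(n, theta, z):
    # restricted infimum over Y_n, as in the definition of f_n in the proof
    return m(theta, z, phi_n(n, theta, z))

# on the ball containing (theta, z), f_n has stabilized: f_n = f_{n+k} there
for theta, z in [(0.2, 0.5), (-1.0, 2.0)]:
    n = math.ceil(max(abs(theta), abs(z)))
    assert abs(f_n(n, theta, z) - f_n(n + 3, theta, z)) < 1e-4
```

This only illustrates the stabilization of $(f_n)$; the measurability of the patched limit $\psi$ is of course the content of the lemma itself.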

We next introduce the dynamic value function:
\[
V(\tau, X_\tau) = \operatorname*{ess\,sup}_{\pi \in \mathcal{A}} E\Big[-e^{-\frac{1}{\eta}\big(X_\tau + \int_\tau^T \sigma_u\pi_u.(dW_u + \theta_u\,du) - \int_\tau^T g(\sigma_u\pi_u)\,du - F\big)} \,\Big|\, \mathcal{F}_\tau\Big]. \tag{6.17}
\]
The following result shows that $\hat\pi$ is optimal in a stronger way.

Proposition 6.10 (Dynamic programming) For any stopping time $\tau \in \mathcal{T}$, we have:
\[
V(\tau, X_\tau) = -e^{-\frac{1}{\eta}(X_\tau - Y_\tau)},
\]
where $Y$ is the solution of the BSDE given in Theorem 6.6. Moreover, an optimal portfolio for the problem starting at $\tau$ is the one given in Theorem 6.6.

Proof. Using Doob’s optional sampling theorem, we can do exactly the same as in the

proof of Theorem 6.6, but starting from τ instead of 0. ✷
