
(1)

Risk measures

Proofs and additional remarks

Christian Y. Robert

ISFA - Université Lyon 1

October 2011

(2)

⊲ COMONOTONIC RISKS DEFINITIONS

1. A set S in R2 is said to be comonotonic if, for all (y1, y2) and (z1, z2) in this set, yi < zi for some i implies yj ≤ zj for j ≠ i.

◦ Notice that a comonotonic set is a ‘thin’ set, in the sense that it is contained in a one-dimensional subset of R2.

2. When the support of a random vector is a comonotonic set, the random vector itself and its joint distribution are called comonotonic.

(3)

PROPOSITION :

1. A random vector (X, Y) is comonotonic if and only if X and Y may be written as non-decreasing functions of the same random variable.

2. A random vector (X, Y) is comonotonic if and only if

P(X ≤ x, Y ≤ y) = min{P(X ≤ x), P(Y ≤ y)}

for all x, y ∈ R.

3. A random vector (X, Y) is comonotonic if and only if

(X, Y) =d (FX−1(U), FY−1(U)),

where FX−1 stands for the quantile function of X (see below), and U is a random variable that is uniformly distributed over the unit interval (0,1).
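The characterisation in point 3 is easy to probe numerically. The following Python sketch (not part of the original notes ; the exponential and lognormal marginals are arbitrary choices) builds a comonotonic pair from a common uniform U and checks the min-identity of point 2 on a few grid points :

```python
# Simulation sketch: the comonotonic pair (F_X^{-1}(U), F_Y^{-1}(U)) should satisfy
# the Frechet upper-bound identity P(X <= x, Y <= y) = min(F_X(x), F_Y(y)).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)          # common uniform driver U
x = stats.expon.ppf(u)                 # X = F_X^{-1}(U), exponential marginal
y = stats.lognorm.ppf(u, s=1.0)        # Y = F_Y^{-1}(U), lognormal marginal

for xq, yq in [(0.5, 1.0), (1.0, 0.5), (2.0, 2.0)]:
    joint = np.mean((x <= xq) & (y <= yq))                        # P(X <= xq, Y <= yq)
    frechet = min(stats.expon.cdf(xq), stats.lognorm.cdf(yq, s=1.0))
    print(f"P(X<={xq}, Y<={yq}) = {joint:.4f}   min(FX, FY) = {frechet:.4f}")
```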

(4)

⊲ COMPARISONS OF RISKS DEFINITION

Let Y be a set of univariate distribution functions. The binary relation ⪯ is a partial order on Y if, for any elements X with df FX, Y with df FY and Z with df FZ in Y, the following properties hold :

(i) If X ⪯ Y and Y ⪯ Z then X ⪯ Z (transitivity).

(ii) X ⪯ X (reflexivity).

(iii) If X ⪯ Y and Y ⪯ X then X = Y (antisymmetry).

If, in addition, for any given pair X and Y of elements of Y, either X ⪯ Y or Y ⪯ X holds, then ⪯ is said to be a total order.

Remark : We write X ⪯ Y but we actually mean FX ⪯ FY. In other words, when we say that a risk X is smaller than a risk Y for the stochastic order relation, we assert that this ordering holds for the respective dfs of these risks. Therefore, the joint distribution of X and Y is irrelevant ; only their marginal distributions are important.

(5)

⊲ First-order stochastic dominance

PROP : Risk Y dominates risk X stochastically at first order (written Y DS1 X) if and only if there exist random variables X̃ =d X and Ỹ =d Y such that P(X̃ ≤ Ỹ) = 1.

Remark : If Y DS1 X, then FY(d) ≤ FX(d), ∀d ∈ R.

If moreover FX and FY are increasing dfs, we have FY−1(FY(d)) = d, FX(X) =d U where U is uniformly distributed over the unit interval (0,1), and FY−1(U) =d Y (see below).

Therefore X̃ = X and Ỹ = FY−1(FX(X)) are suitable random variables, since X̃ ≤ Ỹ a.s.

by the previous relation.

(6)

PROP : Risk Y dominates risk X stochastically at first order if and only if

E[u(−X)] ≥ E[u(−Y)]

for all non-decreasing functions u (such that the expectations exist).

PROOF : First note that, letting v(x) = −u(−x), the condition

E[u(−X)] ≥ E[u(−Y)]

for all non-decreasing functions u is equivalent to the condition

E[v(X)] ≤ E[v(Y)]

for all non-decreasing functions v.

The ⇐ part is obvious since F̄X(z) = E[I{X>z}] and the function x → I{x>z} is non-decreasing for any z. To get the converse implication, it suffices to invoke the previous proposition and to write

E[v(X)] = E[v(X̃)] ≤ E[v(Ỹ)] = E[v(Y)].

(7)

PROP : If X and Y have probability density functions fX and fY such that there exists a real number c with

fX(d) ≥ fY(d) for d ∈ (−∞, c),
fX(d) ≤ fY(d) for d ∈ [c, ∞),

then Y DS1 X.

PROOF : For x < c, we get

FX(x) = ∫_{−∞}^{x} fX(u) du ≥ ∫_{−∞}^{x} fY(u) du = FY(x).

For x ≥ c, we get

FX(x) = 1 − ∫_{x}^{∞} fX(u) du ≥ 1 − ∫_{x}^{∞} fY(u) du = FY(x),

and this concludes the proof.
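A quick numerical illustration of this single-crossing criterion (illustrative choice, not from the notes : X ∼ Exp(1) and Y ∼ Exp(1/2), whose densities cross exactly once at c = 2 ln 2) :

```python
# The densities of X ~ Exp(rate 1) and Y ~ Exp(rate 1/2) cross once at c = 2 ln 2,
# so the proposition predicts Y DS1 X, i.e. F_Y <= F_X everywhere.
import numpy as np

d = np.linspace(0.01, 20, 2000)
f_X = np.exp(-d)                 # density of X
f_Y = 0.5 * np.exp(-0.5 * d)     # density of Y
F_X = 1 - np.exp(-d)             # df of X
F_Y = 1 - np.exp(-0.5 * d)       # df of Y

c = 2 * np.log(2)
assert np.all(f_X[d < c] >= f_Y[d < c] - 1e-12)   # f_X >= f_Y below the crossing
assert np.all(f_X[d > c] <= f_Y[d > c] + 1e-12)   # f_X <= f_Y above it
assert np.all(F_Y <= F_X + 1e-12)                 # conclusion: first-order dominance
print("single crossing of densities => F_Y <= F_X : verified on the grid")
```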

(8)

⊲ Second-order stochastic dominance

PROP : Risk Y dominates risk X stochastically at second order if and only if

E[u(−X)] ≥ E[u(−Y)]

for all non-decreasing and concave functions u (such that the expectations exist).

PROOF : First note that, letting v(x) = −u(−x), the condition

E[u(−X)] ≥ E[u(−Y)]

for all non-decreasing concave functions u is equivalent to the condition

E[v(X)] ≤ E[v(Y)]

for all non-decreasing convex functions v.

The ⇐ implication is obvious since the function x → (x − t)+ is convex for all t ∈ R. To get the converse, note that every continuous non-decreasing convex function v is the limit

(9)

of an increasing sequence of functions

vn(x) = α1 + α2 x + Σ_{j=0}^{n} βj(n) (x − tj(n))+

with α2 ≥ 0 and βj(n) ≥ 0. This allows us to write

E[vn(X)] = α1 + α2 E[X] + Σ_{j=0}^{n} βj(n) E[(X − tj(n))+]

≤ α1 + α2 E[Y] + Σ_{j=0}^{n} βj(n) E[(Y − tj(n))+] = E[vn(Y)]

for every n (E[X] ≤ E[Y] follows from the stop-loss inequalities E[(X − t)+] ≤ E[(Y − t)+] by letting t → −∞). Taking the limit (Monotone convergence theorem) yields E[v(X)] ≤ E[v(Y)].

(10)

PROP : Risk Y dominates risk X stochastically at second order if and only if there exists a random variable D such that

X + D =d Y and E[D|X] ≥ 0 a.s.

PROOF : The ⇐ implication is derived by using the conditional Jensen's inequality :

E[(Y − d)+] = E[(X + D − d)+] = EX[ E[(X + D − d)+ | X] ] ≥ EX[(X + E[D|X] − d)+] ≥ E[(X − d)+].

The other implication is difficult to prove.

(11)

PROP : If E[X] ≤ E[Y] and if there exists a real number c such that

FX(d) ≤ FY(d) for d ∈ (−∞, c),
FX(d) ≥ FY(d) for d ∈ [c, ∞),

then Y DS2 X.

PROOF : Note that, writing πX(d) = E[(X − d)+],

πX(d) = ∫_{d}^{∞} (x − d) dFX(x) = −[(x − d)(1 − FX(x))]_{d}^{∞} + ∫_{d}^{∞} F̄X(x) dx = ∫_{d}^{∞} F̄X(x) dx,

so that

πX′(d) = −(1 − FX(d)) = −F̄X(d).

Moreover lim_{d→∞} E[(X − d)+] = 0 and, since E[(X − d)+] + d = E[max(X, d)],

lim_{d→−∞} ( E[(X − d)+] + d ) = E[X].

Let us consider the function φ(d) = πY(d) − πX(d). We have lim_{d→−∞} φ(d) = E[Y] − E[X] ≥ 0, lim_{d→∞} φ(d) = 0 and φ′(d) = FY(d) − FX(d), which is ≥ 0 for d < c and ≤ 0 for d ≥ c. Hence φ is non-decreasing on (−∞, c) and non-increasing on [c, ∞) ; together with the boundary limits this gives φ(d) ≥ 0 for all d, i.e. E[(X − d)+] ≤ E[(Y − d)+], and therefore Y DS2 X.
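The stop-loss transforms πX and πY of the proof can be evaluated in closed form for normal risks. The sketch below (X ∼ N(0,1) and Y ∼ N(0,4) are an arbitrary pair satisfying the hypotheses with c = 0 and equal means) checks that πX ≤ πY everywhere :

```python
# Stop-loss check: X ~ N(0,1), Y ~ N(0,4) have equal means and dfs crossing once
# at c = 0, so the proposition predicts pi_X(d) <= pi_Y(d) for every retention d.
import numpy as np
from scipy import stats

def stop_loss_normal(d, sigma):
    # pi(d) = E[(X - d)_+] for X ~ N(0, sigma^2)
    z = d / sigma
    return sigma * (stats.norm.pdf(z) - z * stats.norm.sf(z))

d = np.linspace(-6, 6, 500)
pi_X = stop_loss_normal(d, sigma=1.0)
pi_Y = stop_loss_normal(d, sigma=2.0)
assert np.all(pi_X <= pi_Y + 1e-12)
print("pi_X(d) <= pi_Y(d) on the whole grid : second-order dominance verified")
```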

(12)

PROP : If Y DS1 X, then Y DS2 X.

PROOF : Y DS1 X if and only if

E[u(−X)] ≥ E[u(−Y)]

for all non-decreasing functions u (such that the expectations exist). Hence the inequality holds in particular if u is a non-decreasing and concave function, so it is clear that Y DS2 X.

Note that two risks may be stochastically comparable at second order but not at first order.

(13)

⊲ PROPERTIES OF RISK MEASURES

PROP : Π satisfies the properties 1) Monotonicity, 2) Objectivity, iff it satisfies the Invariance by first-order stochastic dominance property.

PROOF : The ⇐ implication is derived by noting that, if P(X ≤ Y) = 1, then P(X > d) ≤ P(Y > d) for all d ∈ R, hence Y DS1 X and the Monotonicity property is satisfied. Moreover, if X =d Y, then X DS1 Y and Y DS1 X. By the Invariance by first-order stochastic dominance property, we deduce that Π(X) = Π(Y).

The ⇒ part is proven by noting that Y DS1 X if and only if there exist random variables X̃ =d X and Ỹ =d Y such that P(X̃ ≤ Ỹ) = 1. By the Monotonicity property Π(X̃) ≤ Π(Ỹ), and by the Objectivity property Π(X) = Π(X̃) ≤ Π(Ỹ) = Π(Y).

(14)

PROP : If Π satisfies the Invariance by second-order stochastic dominance property, then it satisfies the Invariance by first-order stochastic dominance property.

PROOF : The proposition is proven by noting that, if Y DS1 X, then Y DS2 X. Indeed, assume that Y DS1 X ; then Y DS2 X and, by the Invariance by second-order stochastic dominance property, we deduce that Π(X) ≤ Π(Y). Therefore we have shown that

Y DS1 X ⇒ Π(X) ≤ Π(Y).

(15)

PROP : Assume that Π is a risk measure that satisfies the Positive homogeneity property. Π satisfies the Convexity property iff it satisfies the Subadditivity property.

PROOF : Assume that for any positive constant c and for all risks X, Π(cX) = cΠ(X).

i) ⇒ part : take α = 1/2 in the Convexity property,

(1/2)Π(X + Y) = Π((1/2)X + (1/2)Y) ≤ (1/2)Π(X) + (1/2)Π(Y),

to derive the Subadditivity property.

ii) ⇐ part : for α ∈ [0,1],

Π(αX + (1 − α)Y) ≤ Π(αX) + Π((1 − α)Y) = αΠ(X) + (1 − α)Π(Y).

(16)

PROP : Assume that Π is a risk measure that satisfies the Convexity property and Π(0) = 0. Π satisfies the Positive homogeneity property iff it satisfies the Subadditivity property.

PROOF : If Π is a risk measure that satisfies the Convexity property, then t → Π(tX)/t is a non-decreasing function for t > 0, since by taking 0 < t1 < t2 and α = t1/t2,

Π(t1X) = Π(αt2X + (1 − α) × 0) ≤ αΠ(t2X) + (1 − α)Π(0) = (t1/t2)Π(t2X).

i) ⇒ part : obvious since

Π(X + Y) = 2Π((1/2)X + (1/2)Y) ≤ Π(X) + Π(Y).

ii) ⇐ part : for k ∈ N and k ≥ 2, Π(kX) ≤ kΠ(X), i.e. Π(kX)/k ≤ Π(X). But since Π satisfies the Convexity property, t → Π(tX)/t is non-decreasing, so Π(X) = Π(1 · X)/1 ≤ Π(kX)/k ≤ Π(X) : the ratio takes the same value at t = 1 and at every integer k, hence it must be constant for t ≥ 1 ; applying the same argument to tX in place of X extends the constancy to all t > 0, and therefore Π(tX) = tΠ(X).

(17)

PROP : If Π satisfies the Monotonicity property and the “No unjustified loading” property, then it satisfies the “Non-excessive loading” property.

PROOF : Since P(X ≤ max[X]) = 1, we deduce that

Π(X) ≤ Π(max[X]) = max[X],

using the Monotonicity property for the inequality and the “No unjustified loading” property for the equality.

(18)

PROP : If Π satisfies the properties 1) “Non-excessive loading”, 2) Convexity, then it satisfies the Monotonicity property.

PROOF : For α ∈ (0,1),

Π(αX) = Π( αY + (1 − α) · [α/(1 − α)](X − Y) ) ≤ αΠ(Y) + (1 − α)Π( [α/(1 − α)](X − Y) ).

If P(X ≤ Y) = 1, then max[X − Y] ≤ 0, hence Π(α(1 − α)−1(X − Y)) ≤ 0 by the “Non-excessive loading” property, and Π(αX) ≤ αΠ(Y).

Let α ր 1 to conclude.

(19)

PROP : If Π satisfies the properties 1) Objectivity, 2) “No unjustified loading”, 3) Convexity, 4) Convergence in distribution, then it satisfies the “Non-negative loading” property.

PROOF : Let X, X1, X2, ... be iid random variables. First note that, by the Convexity and Objectivity properties,

Π((1/2)X1 + (1/2)X2) ≤ (1/2)Π(X1) + (1/2)Π(X2) = Π(X).

Let X̄k = (X1 + ... + Xk)/k. We show by induction, using the Convexity property, that Π(X̄2k) ≤ Π(X) for any k ∈ N, since

X̄2k+2 = [k/(k + 1)] X̄2k + [1/(k + 1)] ( (1/2)X2k+1 + (1/2)X2k+2 ).

Now by the law of large numbers X̄2k → E[X] in probability, hence in distribution, so

Π(X̄2k) → Π(E[X]) = E[X]

by the Convergence in distribution and “No unjustified loading” properties, and hence Π(X) ≥ E[X].

(20)

PROP : If Π satisfies the properties 1) Objectivity, 2) Monotonicity, 3) Convexity, 4) Convergence in distribution (if (Xn) converges in distribution to X then Π(Xn) → Π(X)), then the risk measure does not depend on risks : it is constant.

PROOF : Fix two numbers a < b. Define X0 = b and, for n = 1, 2, 3, ..., let

Xn = a + 2^n (b − a) I{U ∈ [0, 2^{−n})},
X′n = a + 2^n (b − a) I{U ∈ [2^{−n}, 2^{−n+1})},

where U is a random variable that is uniformly distributed over the unit interval (0,1). Thus Xn =d X′n,

Xn = (1/2)(Xn+1 + X′n+1) and Xn →d a.

Convexity and law-invariance (Objectivity) imply

Π(Xn) = Π( (1/2)(Xn+1 + X′n+1) ) ≤ (1/2)Π(Xn+1) + (1/2)Π(X′n+1) = Π(Xn+1).

(21)

Thus n → Π(Xn) is a non-decreasing sequence. Therefore monotonicity and convergence in distribution of the risk measure imply

Π(a) = lim_{n→∞} Π(Xn) ≥ Π(X0) = Π(b) ≥ Π(a).

Thus Π(a) = Π(b) = π, and by monotonicity we get, for any X with a ≤ X ≤ b, that Π(X) = π.

As a and b are arbitrary, this implies that Π is constant on the set of all bounded random variables.

Finally, approximate an unbounded random variable Y by the sequence Yn = (Y ∧ n) ∨ (−n) of bounded random variables to extend the result to unbounded random variables too.

Remark : Xn →d a but max[Xn] = a + 2^n (b − a) → ∞ ≠ a !
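The remark is the heart of the proof, and it also shows why a useful risk measure must give up property 4. The sketch below (an assumption of mine, not from the notes : it uses the standard empirical TVaR estimator, TVaR being law-invariant, monotone and convex, as proved later) illustrates that TVaR[Xn;α] stays near a + (b − a)/(1 − α) although Xn →d a :

```python
# Along X_n = a + 2^n (b - a) 1{U < 2^{-n}} we have X_n ->_d a, yet TVaR[X_n; alpha]
# does not converge to TVaR[a] = a: TVaR fails the convergence-in-distribution property.
import numpy as np

rng = np.random.default_rng(1)
a, b, alpha = 0.0, 1.0, 0.95

def tvar(sample, alpha):
    q = np.quantile(sample, alpha)                      # VaR at level alpha
    return q + np.mean(np.maximum(sample - q, 0)) / (1 - alpha)

u = rng.uniform(size=2_000_000)
for n in [6, 8, 10]:
    x_n = a + 2.0**n * (b - a) * (u < 2.0**-n)
    print(f"n={n:2d}  P(X_n != a) = {2.0**-n:.5f}  TVaR = {tvar(x_n, alpha):.3f}")
print(f"value predicted by the construction : a + (b-a)/(1-alpha) = {a + (b-a)/(1-alpha):.3f}")
```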

(22)

PROP : If Π satisfies the properties 1) Objectivity, 2) Comonotonic additivity, 3) Invariance by second-order stochastic dominance, then it satisfies the Subadditivity property.

PROOF : Let X and Y be two random variables, and U be a random variable that is uniformly distributed over the unit interval (0,1). Set

Xc = FX−1(U) and Yc = FY−1(U).

For d = d1 + d2, we have

(x + y − d)+ = ((x − d1) + (y − d2))+ ≤ ((x − d1)+ + (y − d2)+)+ = (x − d1)+ + (y − d2)+.

Let us now choose

d1 = FXc−1(FXc+Yc(d)) and d2 = FYc−1(FXc+Yc(d)),

and note that, for any d where FXc+Yc is increasing,

d1 + d2 = (FXc−1 + FYc−1)(FXc+Yc(d)) = FXc+Yc−1(FXc+Yc(d)) = d,

where the middle equality uses the fact that the quantile function of a sum of comonotonic risks is the sum of their quantile functions.

(23)

It follows that

E[(X + Y − d)+] ≤ E[(X − d1)+] + E[(Y − d2)+]

= E[(Xc − FXc−1(FXc+Yc(d)))+] + E[(Yc − FYc−1(FXc+Yc(d)))+]

= E[(FX−1(U) − FXc−1(FXc+Yc(d)))+] + E[(FY−1(U) − FYc−1(FXc+Yc(d)))+]

= E[( (FX−1 + FY−1)(U) − (FXc−1 + FYc−1)(FXc+Yc(d)) )+]

= E[(Xc + Yc − d)+],

and then

Xc + Yc DS2 X + Y.

If the risk measure Π is invariant by second-order stochastic dominance and is additive for comonotonic risks, then

Π(X + Y) ≤ Π(Xc + Yc) = Π(X) + Π(Y),

which proves the stated result.
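A simulation sketch of this subadditivity bound, with TVaR playing the role of Π (TVaR is shown later in these notes to be comonotonically additive and DS2-invariant) ; the Gaussian-copula dependence and the marginals are arbitrary choices of mine :

```python
# For an arbitrary dependence between X and Y, TVaR[X+Y] should not exceed
# TVaR[X^c + Y^c] = TVaR[X] + TVaR[Y], the comonotonic worst case.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, n = 0.99, 1_000_000

def tvar(sample, alpha):
    q = np.quantile(sample, alpha)
    return q + np.mean(np.maximum(sample - q, 0)) / (1 - alpha)

# some arbitrary (here: Gaussian-copula) dependence between the two risks
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
u1, u2 = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])
x, y = stats.lognorm.ppf(u1, s=0.8), stats.expon.ppf(u2)

u = rng.uniform(size=n)                               # comonotonic counterparts
xc, yc = stats.lognorm.ppf(u, s=0.8), stats.expon.ppf(u)

print(f"TVaR[X+Y]         = {tvar(x + y, alpha):.3f}")
print(f"TVaR[Xc+Yc]       = {tvar(xc + yc, alpha):.3f}")
print(f"TVaR[X] + TVaR[Y] = {tvar(x, alpha) + tvar(y, alpha):.3f}")
```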

(24)

⊲ VaR

There are basically two ways to define a generalized inverse for a distribution function.

DEFINITION

Given a df F, we define the inverse functions F−1 and F−1+ of F as

F−1(α) = inf{x ∈ R : F(x) ≥ α} = sup{x ∈ R : F(x) < α}

and

F−1+(α) = inf{x ∈ R : F(x) > α} = sup{x ∈ R : F(x) ≤ α}

for α ∈ [0,1], where, by convention, inf ∅ = +∞ and sup ∅ = −∞.

(25) – (27)

One can check that :

1. F−1 and F−1+ are both non-decreasing (they are continuous everywhere, except on an at most countable set of points) ;

2. F−1 is left-continuous while F−1+ is right-continuous ;

3. F−1(α) = F−1+(α) if, and only if, α does not correspond to a ‘flat part’ of F or equivalently, if, and only if, F−1 is continuous at α.
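These generalized inverses are straightforward to implement. A minimal sketch for a discrete df (the support points and weights are arbitrary test values of mine) ; note how the two inverses differ exactly at a flat part of F :

```python
# Direct implementation of F^{-1} and F^{-1+} for a discrete distribution.
import numpy as np

xs = np.array([0.0, 1.0, 3.0])        # support points
ps = np.array([0.25, 0.50, 0.25])     # probabilities
cdf = np.cumsum(ps)                   # F(0)=0.25, F(1)=0.75, F(3)=1.0

def F_inv(alpha):                     # F^{-1}(a)  = inf{x : F(x) >= a}
    return xs[np.searchsorted(cdf, alpha, side="left")]

def F_inv_plus(alpha):                # F^{-1+}(a) = inf{x : F(x) > a}
    return xs[np.searchsorted(cdf, alpha, side="right")]

print(F_inv(0.75), F_inv_plus(0.75))  # 1.0 vs 3.0 : alpha = 0.75 is at a flat part of F
print(F_inv(0.50), F_inv_plus(0.50))  # both 1.0   : alpha = 0.50 is not
```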

(28)

LEMMA : For all x ∈ R and for all α ∈ (0,1)

(i) F−1(α) ≤ x ⇔ α ≤ F(x)

(ii) F−1+(α) ≥ x ⇔ α ≥ F(x−) = P(X < x)

PROOF : We only prove (i) ; (ii) can be proven in a similar way. The ‘⇒’ part is proven if we can show that

α > F(x) ⇒ F−1(α) > x.

Assume that α > F(x). Then there exists an ε > 0 such that α > F(x + ε). From the sup-definition of F−1(α), we find that x + ε ≤ F−1(α), which implies that

F−1(α) > x.

We now prove the ‘⇐’ part. If α ≤ F(x), then we find that α ≤ F(x + ε) for all ε > 0. From the inf-definition of F−1(α), we can conclude that F−1(α) ≤ x + ε for all ε > 0. Taking the limit for ε ↓ 0, we obtain F−1(α) ≤ x.

(29)

PROPOSITION : Let X be an rv. For any 0 < α < 1, the following equalities hold :

(i) If t is non-decreasing and continuous, then Ft(X)−1(α) = t(FX−1(α)).

(ii) If t is non-decreasing and continuous, then Ft(X)−1+(α) = t(FX−1+(α)).

PROOF : We only prove (i) ; (ii) can be proven in a similar way. By application of the previous lemma, we find that the following equivalences hold for all real x :

Ft(X)−1(α) ≤ x ⇔ α ≤ Ft(X)(x) ⇔ α ≤ FX(t−1+(x)) ⇔ FX−1(α) ≤ t−1+(x) ⇔ t(FX−1(α)) ≤ x.

Note that the above proof only holds if t−1+ is finite. But one can verify that the equivalences also hold if t−1+(x) = ±∞.

Remark : The continuity assumption put on the function t can be relaxed as follows : in (i) it is enough for t to be left-continuous, whereas in (ii) it is enough for t to be right-continuous.

(30)

PROPOSITION :

(i) If an rv X has a continuous df F, then F(X) ∼ Uni(0,1).

(ii) Let X be an rv with df F, not necessarily continuous. If U ∼ Uni(0,1), then X =d F−1(U) =d F−1+(U).

PROOF :

(i) For all 0 < u < 1,

P(F(X) ≥ u) = P(X ≥ F−1(u)) = 1 − F(F−1(u)) = 1 − u,

from which we conclude that F(X) ∼ Uni(0,1).

(ii) We see from the lemma that

P(F−1(U) ≤ x) = P(U ≤ F(x)) = F(x).

The other statement has a similar proof.
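Statement (ii) is the basis of inverse-transform sampling. A one-line check (the Gamma df is an arbitrary choice of mine) via a Kolmogorov–Smirnov test :

```python
# X =_d F^{-1}(U) with U ~ Uni(0,1): sample via the quantile function, then test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
u = rng.uniform(size=100_000)
x = stats.gamma.ppf(u, a=2.0)                   # F^{-1}(U)
print(stats.kstest(x, stats.gamma(a=2.0).cdf))  # large p-value: same distribution
```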

(31)

PROP : VaR satisfies the “Non-excessive loading” property.

PROOF : Since X ≤ max[X] a.s., we have that VaR[X;α] ≤ max[X] whatever α, so that VaR is indeed no-ripoff.

PROP : VaR does not satisfy the “Non-negative loading” property.

PROOF : Let us define ᾱ = F(E[X]). It is clear that VaR[X;α] does not exceed the expected loss E[X] for probability levels α ≤ ᾱ, so the loading may be negative.

PROP : VaR satisfies the “No unjustified loading” property.

PROOF : It is easy to see that, for any probability level α ∈ (0,1), VaR[c;α] = c.

(32)

PROP : VaR satisfies the Objectivity property.

PROOF : This is a direct consequence of the definition of VaR, since it only depends on the df of X.

PROP : VaR satisfies the Translativity property.

PROOF : VaR possesses the very convenient stability property that the VaR of a non-decreasing (left-continuous) function t of some rv X is obtained by applying the same function to the initial VaR. Considering the function t : x → x + c, we deduce that VaR has the translativity property.

(33)

PROP : VaR fails to be subadditive.

i) A counter-example

Let us consider two independent risks with unit Pareto distribution, X ∼ Par(1,1) and Y ∼ Par(1,1), i.e.

P(X > t) = P(Y > t) = 1/(1 + t), t > 0.

On the one hand,

VaR[X;α] = VaR[Y;α] = 1/(1 − α) − 1.

On the other hand, one can show that

P(X + Y ≤ t) = 1 − 2/(2 + t) − 2 ln(1 + t)/(2 + t)².

(34)

Since

P(X + Y ≤ 2VaR[X;α]) = α − [(1 − α)²/2] ln( (1 + α)/(1 − α) ) < α,

we get

VaR[X;α] + VaR[Y;α] < VaR[X + Y;α]

and, in such a case, diversification will lead to more risk being reported.
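The counter-example can be checked numerically ; the closed forms below are those of the slide, and VaR[X + Y;α] is estimated by Monte Carlo :

```python
# Pareto(1,1) superadditivity check: VaR[X+Y] exceeds VaR[X] + VaR[Y].
import numpy as np

rng = np.random.default_rng(4)
alpha, n = 0.99, 2_000_000
u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
x, y = 1 / (1 - u1) - 1, 1 / (1 - u2) - 1     # unit Pareto: P(X > t) = 1/(1+t)

var_x = 1 / (1 - alpha) - 1                   # closed-form VaR of each risk
var_sum = np.quantile(x + y, alpha)           # empirical VaR of the sum
cdf_at_2var = alpha - (1 - alpha)**2 / 2 * np.log((1 + alpha) / (1 - alpha))

print(f"VaR[X] + VaR[Y] = {2*var_x:.1f},  VaR[X+Y] ~ {var_sum:.1f}")
print(f"P(X+Y <= 2 VaR[X;a]) = {cdf_at_2var:.5f} < alpha = {alpha}")
```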

(35)

ii) Elliptical distributions and subadditivity of VaR

DEFINITION :

1. A random vector X = (X1, . . . , Xn) has a spherical distribution if, for every orthogonal map U ∈ Rn×n (i.e. UᵀU = UUᵀ = Id),

UX =d X.

◦ The multivariate standard Gaussian distribution is a spherical distribution since (using |det U| = 1)

fUX(x) = fX(U−1x) = (2π)−n/2 exp( −(1/2)(U−1x)ᵀ(U−1x) ) = (2π)−n/2 exp( −(1/2)xᵀx ) = fX(x),

because (U−1)ᵀU−1 = UUᵀ = Id.

(36)

PROP : The following are equivalent.

(i) X is spherical.

(ii) There exists a function ψ such that, for all t ∈ Rn,

E[exp(i tᵀX)] = ψ(‖t‖²).

(iii) For every a ∈ Rn,

aᵀX =d ‖a‖ X1.

(iv) X =d R S, where S is uniformly distributed on the unit sphere Sn−1 = {t ∈ Rn : ‖t‖ = 1} and R ≥ 0 is a radial random variable, independent of S.

ψ is called the characteristic generator of the spherical distribution and we write X ∈ Sn(ψ).

(37)

2. A random vector X = (X1, . . . , Xn) has an elliptical distribution (X ∈ E(µ, A, ψ)) if there exist µ ∈ Rn, A ∈ Rn×d and Y ∈ Sd(ψ) such that

X =d µ + AY.

◦ It follows that any random vector with components that are linear combinations of the components of an elliptical distribution is again an elliptical distribution with the same characteristic generator.

◦ The Gaussian and the Student distributions are examples of elliptical distributions.

◦ Any multivariate elliptical distribution with mutually independent components and finite variance must necessarily be multivariate normal.

(38)

PROPOSITION : Let X ∈ E(µ, A, ψ) and M = {L : L = λ0 + λᵀX}. For any L1, L2 ∈ M and α ≥ 0.5,

VaR[L1 + L2;α] ≤ VaR[L1;α] + VaR[L2;α].

PROOF : Let L1 = λ0,1 + λ1ᵀX and L2 = λ0,2 + λ2ᵀX. We have

VaR[L1 + L2;α] = λ0,1 + λ0,2 + VaR[(λ1 + λ2)ᵀX;α]

= λ0,1 + λ0,2 + (λ1 + λ2)ᵀµ + VaR[ ‖(λ1 + λ2)ᵀA‖ Y1;α ]

= λ0,1 + λ0,2 + (λ1 + λ2)ᵀµ + ‖(λ1 + λ2)ᵀA‖ VaR[Y1;α].

If α ≥ 0.5, then VaR[Y1;α] ≥ 0 and, by the triangle inequality ‖(λ1 + λ2)ᵀA‖ ≤ ‖λ1ᵀA‖ + ‖λ2ᵀA‖,

VaR[L1 + L2;α] ≤ λ0,1 + λ0,2 + (λ1 + λ2)ᵀµ + ( ‖λ1ᵀA‖ + ‖λ2ᵀA‖ ) VaR[Y1;α]

= VaR[L1;α] + VaR[L2;α].
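In the Gaussian special case the proof can be traced numerically : the portfolio volatility obeys the triangle inequality, so VaR is subadditive for α ≥ 0.5. A sketch with arbitrary parameters of mine :

```python
# Gaussian (elliptical) portfolios: VaR is subadditive for alpha >= 0.5.
import numpy as np
from scipy import stats

alpha = 0.975
mu = np.array([1.0, -0.5])
Sigma = np.array([[4.0, 1.2], [1.2, 2.0]])     # covariance of X
l1 = np.array([1.0, 0.0])                      # L1 = l1' X
l2 = np.array([0.5, 2.0])                      # L2 = l2' X

def var_gauss(l, mu, Sigma, alpha):
    return l @ mu + np.sqrt(l @ Sigma @ l) * stats.norm.ppf(alpha)

lhs = var_gauss(l1 + l2, mu, Sigma, alpha)
rhs = var_gauss(l1, mu, Sigma, alpha) + var_gauss(l2, mu, Sigma, alpha)
print(f"VaR[L1+L2] = {lhs:.3f} <= VaR[L1] + VaR[L2] = {rhs:.3f}")
```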

(39)

PROP : VaR satisfies the Comonotonic additivity property.

PROOF : For all non-decreasing (left-continuous) functions h and g,

VaR[h(X) + g(X);α] = VaR[(h + g)(X);α] = (h + g)(VaR[X;α])

= h(VaR[X;α]) + g(VaR[X;α])

= VaR[h(X);α] + VaR[g(X);α].

PROP : VaR satisfies the Positive homogeneity property.

PROOF : Considering the function t : x → λx with λ > 0 in the stability property above, we deduce that VaR has the positive homogeneity property.

PROP : VaR satisfies the Monotonicity property.

PROOF : Clearly, if P(X ≤ Y) = 1 holds, then FX(x) ≥ FY(x) is true for any x.

Therefore VaR[X;α] ≤ VaR[Y;α] holds in such a case for any probability level α (the quantile functions are ordered the other way round, by symmetry with respect to the main diagonal).

(40)

PROP : VaR satisfies the Invariance by first-order stochastic dominance property.

PROOF : It is easy to show that

Y DS1 X ⇔ P(Y ≤ d) ≤ P(X ≤ d) ∀d ∈ R ⇔ VaR[X;α] ≤ VaR[Y;α] ∀α ∈ (0,1).

PROP : VaR does not satisfy the Invariance by second-order stochastic dominance property.

PROOF : Since Y DS2 X does not imply Y DS1 X, the dfs of X and Y may cross, and at a crossing the quantiles are ordered the wrong way : the Invariance by second-order stochastic dominance property may not be satisfied.

PROP : VaR does not satisfy the Convexity property.

PROOF : VaR is not subadditive although it satisfies the Positive homogeneity property ; by a previous proposition, it therefore cannot be convex.

(41)

PROP : VaR does not satisfy the Iterativity property.

PROOF : Let (X, Y) be bivariate normal with means (µX, µY), variances (σX², σY²) and correlation coefficient ρ ≥ 0. The conditional distribution of X given Y = y is

X | Y = y ∼ N( µX + ρ(σX/σY)(y − µY), σX²(1 − ρ²) ).

Hence

VaR[X;α] = µX + σX Φ−1(α),

VaR[X|Y;α] = µX + ρ(σX/σY)(Y − µY) + σX √(1 − ρ²) Φ−1(α),

VaR[ VaR[X|Y;α];α ] = µX + σX ( √(1 − ρ²) + ρ ) Φ−1(α),

and we deduce that VaR[VaR[X|Y;α];α] = VaR[X;α] for all α only if ρ = 0 (or in the degenerate case ρ = 1).
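A numerical check of the counter-example, using the closed forms above (parameter choices arbitrary) :

```python
# Iterated VaR vs direct VaR in the bivariate normal model.
import numpy as np
from scipy import stats

mu_x, s_x, rho, alpha = 0.0, 1.0, 0.5, 0.95
z = stats.norm.ppf(alpha)

var_direct = mu_x + s_x * z
var_iterated = mu_x + s_x * (np.sqrt(1 - rho**2) + rho) * z
print(f"VaR[X;a] = {var_direct:.4f},  VaR[VaR[X|Y;a];a] = {var_iterated:.4f}")
# the two differ whenever 0 < rho < 1
```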

(42)

PROP : VaR satisfies the Convergence in distribution property.

PROOF : It is well known that the weak convergence of the dfs ensures the convergence of the quantile functions at every continuity point.

PROP : VaR does not satisfy the Stability by mixing property.

PROOF : Consider for example the case where X =d (1/2)δ0 + (1/2)N(0,1), a 50/50 mixture of the constant 0 and a standard normal rv. It is easily seen that, for α > 3/4,

VaR[X;α] = Φ−1(2α − 1),

which differs from the mixture of the VaRs, (1/2) · 0 + (1/2)Φ−1(α).

(43)

PROP : Let (X, Y) be a random vector with pdf f(·,·), then

(∂/∂γ) VaR[X + γY;α] = E[ Y | X + γY = VaR[X + γY;α] ].

PROOF : It may be found for example in Gouriéroux C., Laurent J.P. and Scaillet O. (2000). Sensitivity Analysis of Values at Risk. Journal of Empirical Finance, 7, 225-245.
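For a bivariate normal pair, both sides of this sensitivity formula are available in closed form, so the result can be checked by finite differences. A sketch with arbitrary ρ, γ and α :

```python
# Check d VaR[X + g Y]/dg = E[Y | X + gY = VaR] in a bivariate normal model:
# X + gY ~ N(0, 1 + 2 g rho + g^2) and E[Y | X + gY = s] is linear in s.
import numpy as np
from scipy import stats

rho, gamma, alpha = 0.3, 0.7, 0.99
z = stats.norm.ppf(alpha)

def var_p(g):
    return np.sqrt(1 + 2 * g * rho + g**2) * z

eps = 1e-5
lhs = (var_p(gamma + eps) - var_p(gamma - eps)) / (2 * eps)   # finite difference
# E[Y | X + gY = s] = s (rho + g)/(1 + 2 g rho + g^2), evaluated at s = VaR:
rhs = var_p(gamma) * (rho + gamma) / (1 + 2 * gamma * rho + gamma**2)
print(f"finite difference = {lhs:.6f},  conditional-expectation formula = {rhs:.6f}")
```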

(44)

PROP : Let (X, Y) be a random vector with pdf f(·,·), then

(∂²/∂γ²) VaR[X + γY;α] = −[ (∂/∂s) V[Y | X + γY = s] + V[Y | X + γY = s] (∂/∂s) ln fX+γY(s) ],

evaluated at s = VaR[X + γY;α].

PROOF : It may be found for example in Gouriéroux C., Laurent J.P. and Scaillet O. (2000). Sensitivity Analysis of Values at Risk. Journal of Empirical Finance, 7, 225-245.

(45)

⊲ TVaR and associated risk measures

PROP : For any α ∈ (0,1),

TVaR[X;α] = VaR[X;α] + ES[X;α]/(1 − α)

CTE[X;α] = VaR[X;α] + ES[X;α]/(1 − F(VaR[X;α]))

CVaR[X;α] = ES[X;α]/(1 − F(VaR[X;α]))

where ES[X;α] = E[(X − VaR[X;α])+].

PROOF : The first expression follows from

ES[X;α] = ∫_0^1 (VaR[X;ξ] − VaR[X;α])+ dξ = ∫_α^1 VaR[X;ξ] dξ − VaR[X;α](1 − α).

(46)

The second and third expressions follow from

ES[X;α] = E[X − VaR[X;α] | X > VaR[X;α]] P(X > VaR[X;α]).

Remark :

1. If F is continuous and increasing (e.g. if X has a positive probability density function), then for any α ∈ (0,1), TVaR[X;α] = CTE[X;α].

2. For any α ∈ (0,1),

min_π ( E[(X − π)+] + (1 − α)π )

= E[(X − VaR[X;α])+] + (1 − α)VaR[X;α]

= (1 − α) TVaR[X;α].
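These identities are easy to verify on a sample. A sketch with an arbitrary (lognormal) sample, comparing VaR + ES/(1 − α) with the tail average and with the CTE :

```python
# Empirical verification of TVaR = VaR + ES/(1-alpha), with CTE for comparison.
import numpy as np

rng = np.random.default_rng(5)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)
alpha = 0.99

var_ = np.quantile(x, alpha)
es = np.mean(np.maximum(x - var_, 0))             # ES[X;a] = E[(X - VaR)_+]
tvar = np.mean(np.sort(x)[int(alpha * len(x)):])  # average of the top (1-a) quantiles
cte = x[x > var_].mean()                          # E[X | X > VaR]

print(f"VaR + ES/(1-a)  = {var_ + es/(1-alpha):.4f}")
print(f"TVaR (tail avg) = {tvar:.4f},  CTE = {cte:.4f}")
```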

(47)

EXAMPLES :

1. Consider a random variable X ∼ N(µ, σ²) which is normally distributed with mean µ and variance σ². We have

VaR[X;α] = µ + σ Φ−1(α)

TVaR[X;α] = µ + σ φ(Φ−1(α))/(1 − α)

CTE[X;α] = µ + σ φ(Φ−1(α))/(1 − α)

CVaR[X;α] = σ ( φ(Φ−1(α))/(1 − α) − Φ−1(α) )

ES[X;α] = σ ( φ(Φ−1(α)) − Φ−1(α)(1 − α) )

(48)

2. Consider a random variable that is lognormally distributed, i.e. ln X ∼ N(µ, σ²).

We have

VaR[X;α] = exp(µ + σ Φ−1(α))

TVaR[X;α] = exp(µ + σ²/2) Φ(σ − Φ−1(α))/(1 − α)

CTE[X;α] = exp(µ + σ²/2) Φ(σ − Φ−1(α))/(1 − α)

CVaR[X;α] = exp(µ + σ²/2) Φ(σ − Φ−1(α))/(1 − α) − exp(µ + σ Φ−1(α))

ES[X;α] = exp(µ + σ²/2) Φ(σ − Φ−1(α)) − exp(µ + σ Φ−1(α))(1 − α)
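A Monte Carlo check of the lognormal closed forms (µ = 0, σ = 1 chosen arbitrarily) :

```python
# Closed forms vs simulation for the lognormal VaR and TVaR.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
mu, sigma, alpha = 0.0, 1.0, 0.95
z = stats.norm.ppf(alpha)
x = rng.lognormal(mu, sigma, size=2_000_000)

var_cf = np.exp(mu + sigma * z)
tvar_cf = np.exp(mu + sigma**2 / 2) * stats.norm.cdf(sigma - z) / (1 - alpha)

var_mc = np.quantile(x, alpha)
tvar_mc = var_mc + np.mean(np.maximum(x - var_mc, 0)) / (1 - alpha)
print(f"VaR  : closed form {var_cf:.4f}  vs MC {var_mc:.4f}")
print(f"TVaR : closed form {tvar_cf:.4f} vs MC {tvar_mc:.4f}")
```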

(49)

PROP : TVaR satisfies the “Non-excessive loading” property.

PROOF : This comes from the fact that VaR is known to be no-ripoff, so that

TVaR[X;α] = (1/(1 − α)) ∫_α^1 VaR[X;ξ] dξ ≤ (1/(1 − α)) ∫_α^1 max[X] dξ = max[X].

PROP : TVaR satisfies the “No unjustified loading” property.

PROOF : This is again an immediate consequence of the corresponding property of VaR, since

TVaR[c;α] = (1/(1 − α)) ∫_α^1 VaR[c;ξ] dξ = (1/(1 − α)) ∫_α^1 c dξ = c.

(50)

PROP : TVaR satisfies the “Non-negative loading” property.

PROOF : If U ∼ Uni(0,1), then

E[X] = E[F−1(U)] = ∫_0^1 F−1(u) du = TVaR[X;0].

The claimed property will hold if we are able to show that TVaR is non-decreasing in the probability level. We clearly have that

TVaR[X;α] = (1/(1 − α)) ( E[X] − ∫_0^α VaR[X;ξ] dξ ).

Therefore, we can write

(d/dα) TVaR[X;α] = (1/(1 − α)) ( TVaR[X;α] − VaR[X;α] ).

Since α → VaR[X;α] is non-decreasing,

TVaR[X;α] = (1/(1 − α)) ∫_α^1 VaR[X;ξ] dξ ≥ (1/(1 − α)) ∫_α^1 VaR[X;α] dξ = VaR[X;α],

(51)

which gives

(d/dα) TVaR[X;α] ≥ 0.

We conclude

TVaR[X;α] ≥ TVaR[X;0] = E[X],

so that TVaR induces a non-negative loading whatever the probability level α.

PROP : TVaR satisfies the Objectivity property.

PROOF : Knowing α → TVaR[X;α] is equivalent to knowing α → VaR[X;α], since by definition

TVaR[X;α] = (1/(1 − α)) ∫_α^1 VaR[X;ξ] dξ

and

VaR[X;α] = TVaR[X;α] − (1 − α) (d/dα) TVaR[X;α].

Hence TVaR, like VaR, depends only on the df of X, and it satisfies the Objectivity property.

(52)

PROP : TVaR satisfies the Translativity property.

PROOF : This is immediate from the corresponding property of VaR :

TVaR[X + c;α] = (1/(1 − α)) ∫_α^1 VaR[X + c;ξ] dξ

= (1/(1 − α)) ∫_α^1 VaR[X;ξ] dξ + c = TVaR[X;α] + c.

PROP : TVaR satisfies the Subadditivity property.

PROOF : First note that

TVaR[X;α] = min_π { π + (1/(1 − α)) E[(X − π)+] }.

(53)

We thus have, for any 0 < λ < 1, that

TVaR[λX + (1 − λ)Y;α] ≤ [ π + (1/(1 − α)) E[(λX + (1 − λ)Y − π)+] ] evaluated at π = λVaR[X;α] + (1 − λ)VaR[Y;α]

= λVaR[X;α] + (1 − λ)VaR[Y;α] + (1/(1 − α)) E[(λX + (1 − λ)Y − λVaR[X;α] − (1 − λ)VaR[Y;α])+]

≤ λVaR[X;α] + (1 − λ)VaR[Y;α] + (λ/(1 − α)) E[(X − VaR[X;α])+] + ((1 − λ)/(1 − α)) E[(Y − VaR[Y;α])+]

= λ TVaR[X;α] + (1 − λ) TVaR[Y;α].

Hence TVaR is convex and, since it is positively homogeneous, it is subadditive.

PROP : TVaR satisfies the Comonotonic additivity property.

PROOF : This is immediate from the corresponding property of VaR.
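The minimisation formula used in the subadditivity proof can be illustrated numerically : π → π + E[(X − π)+]/(1 − α) is minimised at π = VaR[X;α], where it equals TVaR[X;α]. A sketch with an arbitrary exponential sample :

```python
# The objective pi + E[(X - pi)_+]/(1 - alpha) attains its minimum at pi = VaR,
# and the minimum value is TVaR.
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(scale=1.0, size=1_000_000)
alpha = 0.9

def objective(pi):
    return pi + np.mean(np.maximum(x - pi, 0)) / (1 - alpha)

var_ = np.quantile(x, alpha)
grid = np.linspace(var_ - 1.0, var_ + 1.0, 201)
vals = [objective(p) for p in grid]
print(f"argmin on grid = {grid[np.argmin(vals)]:.3f},  VaR  = {var_:.3f}")
print(f"min value      = {min(vals):.3f},  TVaR = {objective(var_):.3f}")
```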

(54)

PROP : TVaR satisfies the Positive homogeneity property.

PROOF : This is immediate from the corresponding properties of the VaRs.

PROP : TVaR satisfies the Monotonicity property.

PROOF : This is immediate from the corresponding properties of the VaRs.

PROP : TVaR satisfies the Invariance by first-order stochastic dominance property.

PROOF : TVaR satisfies the Invariance by second-order stochastic dominance property (see below), and so it satisfies the Invariance by first-order stochastic dominance property.

(55)

PROP : TVaR satisfies the Invariance by second-order stochastic dominance property.

PROP : For any random pair (X, Y) we have that Y DS2 X if and only if their respective TVaR's are ordered :

Y DS2 X ⇔ TVaR[X;α] ≤ TVaR[Y;α] ∀α ∈ (0,1).

PROOF : First we assume Y DS2 X and let α ∈ (0,1). Consider the function f defined by

f(d) = (1 − α)d + E[(X − d)+].

By the minimisation formula above, f attains its minimum at d = VaR[X;α], so

TVaR[X;α] = f(VaR[X;α])/(1 − α) ≤ f(VaR[Y;α])/(1 − α)

= VaR[Y;α] + (1/(1 − α)) E[(X − VaR[Y;α])+]

≤ VaR[Y;α] + (1/(1 − α)) E[(Y − VaR[Y;α])+] = TVaR[Y;α].

(56)

To prove the other implication, we assume that the TVaR's are ordered for all α ∈ (0,1). Note that for any random variable X, we have that

E[(X − d)+] = E[(FX−1(U) − d)+] = ∫_{FX(d)}^1 VaR[X;ξ] dξ − d(1 − FX(d)).

Hence, for d such that 0 < FX(d) < 1, we find

E[(X − d)+] = (TVaR[X;FX(d)] − d)(1 − FX(d))

≤ (TVaR[Y;FX(d)] − d)(1 − FX(d))

= E[(Y − d)+] + ∫_{FX(d)}^{FY(d)} (VaR[Y;ξ] − d) dξ.

Using the equivalence ξ ≤ FY(d) ⇔ VaR[Y;ξ] ≤ d, it is straightforward to prove that

∫_{FX(d)}^{FY(d)} (VaR[Y;ξ] − d) dξ ≤ 0.

(57)

If FX(d) = 1, we find E[(X − d)+] = 0 ≤ E[(Y − d)+]. Since E[X] ≤ E[Y] (let α ↓ 0 in the ordering of the TVaR's),

E[(X − d)+] = E[X] − d ≤ E[Y] − d ≤ E[(Y − d)+]

also holds for d such that FX(d) = 0.

Hence, we have proven that Y DS2 X.

PROP : TVaR satisfies the Convexity property.

PROOF : See the proof for the Subadditivity property.

PROP : TVaR does not satisfy the Iterativity property.

PROOF : Consider the case where

(X, Y) ∼ N( (µX, µY), ( σX² ρσXσY ; ρσXσY σY² ) ),

as in the corresponding counter-example for VaR.

(58)

PROP : TVaR satisfies the Convergence in distribution property if moreover E[Xn] → E[X].

PROOF : By using the Objectivity property and the integral representation of TVaR in terms of the quantile function.

PROP : TVaR does not satisfy the Stability by mixing property.

PROOF : Consider for example the case where X =d (1/2)δ0 + (1/2)N(0,1), as in the corresponding counter-example for VaR.

(59)

Remark : Let X and x be such that F̄(x) > 0. For any event A such that P(A) = F̄(x),

E[X|A] ≤ E[X|X > x].

It suffices to write

E[X|X > x] = x + E[X − x | X > x, A] P(A | X > x) + E[X − x | X > x, Ā] P(Ā | X > x)

≥ x + E[X − x | X > x, A] P(A | X > x)

= x + E[X − x | X > x, A] P(X > x | A)

≥ x + E[X − x | X > x, A] P(X > x | A) + E[X − x | X ≤ x, A] P(X ≤ x | A)

= E[X|A],

where the middle equality uses P(A | X > x) = P(X > x | A), which holds since P(A) = P(X > x).

(60)

It sheds a new light on CTE, which can be represented as a worst-case conditional expectation since

CTE[X;α] = sup{ E[X|A] : P(A) ≥ F̄(VaR[X;α]) },

which reduces to

CTE[X;α] = sup{ E[X|A] : P(A) ≥ 1 − α }

when F is continuous.

This result is closely related to the notion of scenario or stress testing : the CTE appears as the largest possible expected value of X under the set of all plausible scenarios (that is, those whose probabilities are at least 1 − α).

(61)

PROP : Let (X, Y) be a random vector with pdf f(·,·), then

(∂/∂γ) TVaR[X + γY;α] = E[ Y | X + γY ≥ VaR[X + γY;α] ].

PROOF : See for example Scaillet, O., 2004. Nonparametric Estimation and Sensitivity Analysis of Expected Shortfall. Mathematical Finance, 14, 115-129.

(62)

PROP : Let (X, Y) be a random vector with pdf f(·,·), then

(∂²/∂γ²) TVaR[X + γY;α] = (1/(1 − α)) V[ Y | X + γY = VaR[X + γY;α] ] × fX+γY( VaR[X + γY;α] ).

PROOF : See for example Scaillet, O., 2004. Nonparametric Estimation and Sensitivity Analysis of Expected Shortfall. Mathematical Finance, 14, 115-129.

(63)

⊲ RISK MEASURES BASED ON EXPECTED UTILITY THEORY

Remark : u may be chosen such that u(0) = 0, u′(0) = 1 and u′′(0) = −a ≤ 0. Recall that the zero-utility premium Π(X) solves E[u(Π(X) − X)] = 0.

PROP : Π(.) satisfies the “Non-excessive loading” property (no ripoff).

PROOF : Because u is non-decreasing and X ≤ max[X] a.s., we have

0 = E[u(Π(X) − X)] ≥ u(Π(X) − max[X]),

so that Π(X) ≤ max[X] holds, and the zero-utility premiums satisfy the no-ripoff condition.

PROP : Π(.) satisfies the “Non-negative loading” property.

PROOF : If u is concave, then Jensen's inequality ensures that

0 = E[u(Π(X) − X)] ≤ u(Π(X) − E[X]),

so that Π(X) ≥ E[X] and the zero-utility premiums contain a non-negative loading.

(64)

PROP : Π(.) satisfies the “No unjustified loading” property.

PROOF : Note that

0 = E[u(Π(c) − c)] = u(Π(c) − c).

Since u(0) = 0 and u′(0) > 0, we deduce that Π(c) = c.

PROP : Π(.) satisfies the Objectivity property.

PROOF : If X and Y have the same distribution, then

0 = E[u(Π(Y) − Y)] = E[u(Π(Y) − X)],

so Π(Y) solves the same equation E[u(π − X)] = 0 as Π(X) ; since π → E[u(π − X)] is non-decreasing, Π(X) = Π(Y).

PROP : Π(.) satisfies the Translativity property.

PROOF : We have

0 = E[u(Π(X + c) − (X + c))] = E[u((Π(X + c) − c) − X)] = E[u(Π(X) − X)],

and then Π(X + c) = Π(X) + c.

(65)

PROP : Π(.) does not satisfy the Subadditivity property.

PROOF : Consider the case where

(X, Y) ∼ N( (0, 0), ( 1 ρ ; ρ 1 ) )

and u(x) = −e−αx for α > 0 (after the affine normalisation u(0) = 0, u′(0) = 1, this is u(x) = (1 − e−αx)/α, which yields the same premium). Then

Π(X) = (1/α) ln E[e^{αX}] = α/2 and Π(X + Y) = (1 + ρ)α,

and therefore Π(X + Y) > Π(X) + Π(Y) = α whenever ρ > 0.

Note that Π(X) satisfies the Additivity for independent risks property iff u(x) = −e−αx or u(x) = x (up to an affine transformation).

PROP : Π(.) does not satisfy the Comonotonic additivity property.

PROOF : Π(.) does not satisfy the Positive homogeneity property, which comonotonic additivity would imply ; therefore it does not satisfy the Comonotonic additivity property.

(66)

PROP : Π(.) does not satisfy the Positive homogeneity property.

PROOF : Consider the case where X ∼ N(0,1). Then

Π(λX) = (1/α) ln E[e^{αλX}] = αλ²/2 = λ²Π(X) ≠ λΠ(X) for λ ≠ 1.
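The closed form Π(X) = (1/α) ln E[e^{αX}] makes this failure easy to see numerically. A sketch (the risk-aversion value and the standard normal sample are arbitrary ; `premium` is a hypothetical helper implementing the closed form by Monte Carlo) :

```python
# Exponential zero-utility premium: Pi(lambda X) = lambda^2 Pi(X) for X ~ N(0,1),
# so positive homogeneity fails (while additivity holds for independent risks).
import numpy as np

rng = np.random.default_rng(8)
a = 0.5                                  # risk-aversion parameter
x = rng.standard_normal(4_000_000)

def premium(sample, a):
    return np.log(np.mean(np.exp(a * sample))) / a

for lam in [1.0, 2.0, 3.0]:
    print(f"lambda={lam}: Pi(lam X) ~ {premium(lam * x, a):.4f}, "
          f"lam^2 * a/2 = {lam**2 * a / 2:.4f}")
```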

PROP : Π(.) satisfies the Monotonicity property.

PROOF : Π(.) satisfies the Invariance by first-order stochastic dominance property and therefore it satisfies the Monotonicity property.

PROP : Π(.) satisfies the Invariance by first-order stochastic dominance property.

PROOF : Π(.) satisfies the Invariance by second-order stochastic dominance property and therefore it satisfies the Invariance by first-order stochastic dominance property.

PROP : Π(.) satisfies the Invariance by second-order stochastic dominance property.

(67)

PROOF : Assume that Y DS2 X. Since t → u(Π(X) + t) is non-decreasing and concave, we have

E[u(Π(X) − X)] ≥ E[u(Π(X) − Y)].

But

0 = E[u(Π(X) − X)] = E[u(Π(Y) − Y)] ≥ E[u(Π(X) − Y)],

and since π → E[u(π − Y)] is non-decreasing, it follows that Π(Y) ≥ Π(X).

PROP : Π(.) satisfies the Convexity property if u′′ < 0.

PROOF : Consider two risks X and Y and define

g(t;X, Y) = Π(X + tV) where V = Y − X.

Assume that g(t) = g(t;X, Y) is convex for all X and Y and let α ∈ (0,1). Then

Π(αX + (1 − α)Y) = Π(X + (1 − α)V) = g(1 − α) ≤ αg(0) + (1 − α)g(1) = αΠ(X) + (1 − α)Π(Y).

(68)

It is now enough to show that

g′′(0;X, Y) ≥ 0 for all X and Y,

since

g′′(0;X + tV, Y) = (1 − t)² g′′(t;X, Y).

But, differentiating the identity E[u(g(t) − X − tV)] = 0 twice at t = 0,

g′′(0;X, Y) = −E[u′′(Π(X) − X)(g′(0;X, Y) − V)²] / E[u′(Π(X) − X)] ≥ 0.

(69)

PROP : Π(.) satisfies the Iterativity property iff u(x) = −e−αx or u(x) = x (up to an affine transformation).

PROOF : The ‘⇐’ part is proven if we can show that Π(X) = Π(Π(X|Y)) for u(x) = x (obvious) and for u(x) = −e−αx. But

Π(X) = (1/α) ln E[e^{αX}] = (1/α) ln E[E[e^{αX}|Y]]

= (1/α) ln E[ exp( α · (1/α) ln E[e^{αX}|Y] ) ] = Π(Π(X|Y)).

The ‘⇒’ part is proven the following way. Let z > 0 and y1, y2 ∈ [0,1]. Define :

Y =d (1/2)δ_{y1} + (1/2)δ_{y2} and X | Y = y =d X_{y,z} =d (1 − y)δ0 + yδz.

On the other hand, the unconditional distribution of X is the equally-weighted mixture of X_{y1,z} and X_{y2,z}, so that

X =d X_{q,z} with q = (1/2)(y1 + y2).

(70)

We will use the following notation :

π(yi) = Π(X | Y = yi),

which satisfies

yi u(π(yi) − z) + (1 − yi) u(π(yi)) = 0.

Differentiating once with respect to yi,

(∂/∂yi) [ yi u(π(yi) − z) + (1 − yi) u(π(yi)) ] = 0,

and letting yi tend to 0 leads to

u(−z) = −π′(0)

(recall that π(0) = 0, u(0) = 0 and u′(0) = 1). By iterativity, we have that Π(X) = Π(Π(X|Y)), i.e. :

(1/2) u(π(q) − π(y1)) + (1/2) u(π(q) − π(y2)) = 0.

(71)

Differentiate twice with respect to y1 :

(∂²/∂y1²) [ (1/2) u( π((y1 + y2)/2) − π(y1) ) + (1/2) u( π((y1 + y2)/2) − π(y2) ) ] = 0.

Choosing y1 = y2 = y, we get

π′′(y) + a π′(y)² = 0, with π(0) = 0, π(1) = z.

If a = 0, π(y) = yz and u(−z) = −z, so u is linear. If a > 0,

π(y) = a−1 ln(1 − y + y e^{az}) and u(−z) = −π′(0) = −a−1(e^{az} − 1),

which is the exponential utility up to an affine transformation.

PROP : Π(.) satisfies the Convergence in distribution property if lim_{n→∞} E[u(−Xn)] = E[u(−X)].

PROOF : Obvious.

PROP : Π(.) does not satisfy the Stability by mixing property.

PROOF : By choosing an appropriate counter-example.
