HAL Id: hal-03092800

https://hal.archives-ouvertes.fr/hal-03092800

Preprint submitted on 2 Jan 2021


Convexity conditions for normal mean-variance mixture distribution in joint probabilistic constraints

Hoang Nam Nguyen, Abdel Lisser

To cite this version:

Hoang Nam Nguyen, Abdel Lisser. Convexity conditions for normal mean-variance mixture distribution in joint probabilistic constraints. 2021. hal-03092800


Convexity conditions for normal mean-variance mixture distribution in joint probabilistic constraints

Hoang Nam NGUYEN, Abdel LISSER

Laboratory of signals and systems, Centrale Supélec

Abstract

In this paper, we study linear programs with joint probabilistic constraints. We suppose that the distribution of the constraint rows is a normal mean-variance mixture distribution and that the dependence of the rows is represented by an Archimedean copula. We prove the convexity of the feasible set under some additional conditions. Next, we propose a sequential approximation by linearization, which provides a lower bound, and a gradient descent method, which provides an upper bound, together with numerical results.

Keywords:

Probabilistic constraints, Archimedean copulas, Normal mean-variance mixture distributions, Convex optimization

1. Introduction

We study the following linear program with joint probabilistic constraints:

$$\min_{x}\; c^{T}x \quad \text{s.t.}\quad \mathbb{P}\{Vx \le D\} \ge 1-\epsilon,\qquad x \in Q, \tag{1}$$

where $Q$ is a closed convex subset of $\mathbb{R}^{n}$; $c \in \mathbb{R}^{n}$; $D := (D_{1},\dots,D_{K}) \in \mathbb{R}^{K}$ is a deterministic vector; $V := [v_{1},\dots,v_{K}]^{T}$ is a random matrix of size $K \times n$, where $v_{k}$ is a random vector in $\mathbb{R}^{n}$ for all $k = 1,\dots,K$; and $\epsilon \in [0,1]$.

1.1. Survey of literature

Optimization with probabilistic constraints has been studied for a long time. Prékopa studied concavity and quasi-concavity properties of probability distribution functions in his 1970 article [16]. Sen introduced a relaxation method for probabilistic constraint programming with discrete random variables in [21]. Lobo studied applications of second-order cone programming in [12], which gave a new approach for solving probabilistic constraint problems. Henrion gave a general structural property of linear probabilistic constraints in [10]. In 2014, Cheng used a second-order cone programming approach to solve a joint probabilistic constraint problem in [4]. He supposed that the constraint rows are elliptically distributed and that the dependence of the rows follows an Archimedean copula.

In this paper, we study the same probabilistic constraint problem as in [4]. We suppose that the row vectors follow a normal mean-variance mixture distribution and that the dependence of the rows is given by an Archimedean copula. Under some additional conditions, we prove the convexity of the feasible set of solutions. We propose two approximation methods, which give a lower bound and an upper bound, and present some numerical results.

1.2. Why the normal mean-variance mixture distribution?

Definition 1.1. A random variable $X$ in $\mathbb{R}^{n}$ follows a normal mean-variance mixture distribution if

$$X = \mu + \gamma W + \sqrt{W}\,AZ,$$

where

(1) $Z$ is an $n$-dimensional standard normal distribution $\mathcal{N}_{n}(0, I_{n})$;

(2) $W$ is a positive random variable independent of $Z$;

(3) $A \in \mathbb{R}^{n \times k}$ is a matrix such that $AA^{T} = \Sigma$, where $\Sigma \in \mathbb{R}^{n \times n}$ is a positive semidefinite matrix;

(4) $\mu$ and $\gamma$ are real $n$-vectors.
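As an illustration (not part of the original development), here is a minimal sampling sketch of Definition 1.1. NumPy is assumed; the mixing law for $W$ (uniform on a bounded interval here) and the square choice of $A$ are illustrative assumptions, not prescriptions of the paper.

```python
import numpy as np

def sample_nmvm(mu, gamma, A, sample_w, size, seed=None):
    """Draw samples X = mu + gamma*W + sqrt(W)*A*Z (Definition 1.1).
    A is taken square (A A^T = Sigma) so that Z can be n-dimensional."""
    rng = np.random.default_rng(seed)
    n = mu.shape[0]
    W = sample_w(rng, size)                    # positive mixing variable, independent of Z
    Z = rng.standard_normal((size, n))         # Z ~ N_n(0, I_n)
    return mu + np.outer(W, gamma) + np.sqrt(W)[:, None] * (Z @ A.T)

# Example: W uniform on [0.5, 1.0] (a bounded-support choice, as assumed in Section 2)
mu = np.array([-1.0, -1.0]); gamma = np.array([-1.0, -1.0]); A = np.eye(2)
X = sample_nmvm(mu, gamma, A, lambda rng, m: rng.uniform(0.5, 1.0, m), size=10_000, seed=0)
```

With $\gamma \ne 0$ the samples are skewed in the direction of $\gamma$, which is what distinguishes mean-variance mixtures from pure variance mixtures.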

The relation between normal mean-variance mixture distributions and elliptical distributions is given by the following proposition:

Proposition 1 ([13], Theorem 3.25). Denote by $\Psi_{\infty}$ the set of characteristic generators that generate a spherical distribution in dimension $n$ for all $n \ge 1$. Then $Y$ follows $S_{n}(\psi)$ with $\psi \in \Psi_{\infty}$ if and only if $Y = \sqrt{W}Z$, where $Z$ is an $n$-dimensional standard normal distribution $\mathcal{N}_{n}(0, I_{n})$ independent of $W \ge 0$.

Based on this proposition, we deduce that an elliptical distribution $U$ can be represented in the form $U = \mu + \sqrt{W}AZ$, with $\mu \in \mathbb{R}^{n}$, $A \in \mathbb{R}^{n \times k}$ and $Z$ an $n$-dimensional standard normal distribution $\mathcal{N}_{n}(0, I_{n})$, if and only if $U = \mu + AY$, where $Y$ follows $S_{n}(\psi)$ with $\psi \in \Psi_{\infty}$.


The family of normal mean-variance mixture distributions is a rich subclass within the elliptical framework: some elliptical distributions cannot be represented as a normal mean-variance mixture, but the mixtures nevertheless form a large subset of the elliptical family and play an important role in it.

Next, we define an important subset of the family of normal mean-variance mixture distributions: the family of hyperbolic distributions.

Definition 1.2. A random variable $X$ follows a hyperbolic distribution if it is a normal mean-variance mixture where the random variable $W$ in Definition 1.1 follows a generalized inverse Gaussian distribution, whose density with respect to the Lebesgue measure is

$$g(w) = C\,w^{\lambda-1}\exp\!\left(-\tfrac{1}{2}\left(\chi w^{-1} + \psi w\right)\right), \qquad \forall w \in (0, \infty),$$

where $C$ is a normalizing constant and the parameters vary in the domain

$$\chi > 0,\ \psi \ge 0 \ \text{ if } \lambda < 0; \qquad \chi > 0,\ \psi > 0 \ \text{ if } \lambda = 0; \qquad \chi \ge 0,\ \psi > 0 \ \text{ if } \lambda > 0.$$

The family of hyperbolic distributions generalizes many elliptical distributions. For example, the multivariate t-distribution with parameters $(\Sigma, \mu, \nu)$ is a particular case of hyperbolic distributions obtained with $\lambda = -\nu/2$, $\chi = \nu$, $\psi = 0$, $\mu = \mu$, $\Sigma = \Sigma$, $\gamma = 0$. We summarize some important elliptical distributions (in dimension $p$) in the following table:

| Distribution   | Density (up to a constant $C$)        | $\mu$ | $\Sigma$ | $\gamma$ | $\lambda$  | $\chi$ | $\psi$ |
|----------------|----------------------------------------|-------|----------|----------|------------|--------|--------|
| Normal         | $C\exp(-\tfrac{1}{2}\|x\|^{2})$        | $\mu$ | $\Sigma$ | $0$      | $0$        | $0$    | $0$    |
| t-distribution | $C(1 + \|x\|^{2}/\nu)^{-(p+\nu)/2}$    | $\mu$ | $\Sigma$ | $0$      | $-\nu/2$   | $\nu$  | $0$    |
| Cauchy         | $C(1 + \|x\|^{2})^{-(p+1)/2}$          | $\mu$ | $\Sigma$ | $0$      | $-1/2$     | $1$    | $0$    |
| Laplace        | —                                      | $\mu$ | $\Sigma$ | $0$      | $1$        | $0$    | $2$    |
| Pearson VII    | $C(1 + \|x\|^{2}/m)^{-N}$              | $\mu$ | $\Sigma$ | $0$      | $p/2 - N$  | $m$    | $0$    |
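For instance, the t-distribution row of the table can be checked by simulation: with $W$ following the inverse gamma mixing law $\mathrm{GIG}(-\nu/2, \nu, 0)$, i.e. $W = \nu/\chi^{2}_{\nu}$, the mixture $\mu + \sqrt{W}\,\Sigma^{1/2}Z$ is multivariate $t_{\nu}$. A short sketch (NumPy/SciPy assumed; data and sample size are arbitrary illustration choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nu, p, size = 5.0, 2, 200_000
mu, Sigma = np.zeros(p), np.eye(p)

W = nu / rng.chisquare(nu, size)               # GIG(-nu/2, nu, 0): inverse gamma mixing
Z = rng.standard_normal((size, p))
X = mu + (np.sqrt(W)[:, None] * Z) @ np.linalg.cholesky(Sigma).T

# the first marginal should match a Student t with nu degrees of freedom
print(stats.kstest(X[:, 0], stats.t(df=nu).cdf).pvalue)   # a large p-value is expected
```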

A disadvantage of elliptical distributions is that they are symmetric, so the elliptical family is not suitable for some modelling situations. Applications of hyperbolic distributions to financial modelling can be found in [7], [1], [20] and [19].

2. Normal mean-variance mixture constraints in linear programming

In this section, we study the linear programming problem (1). We suppose that the $v_{i}$ are random vectors, and we look for sufficient conditions (necessary if possible) such that $\mathbb{P}\{Vx \le D\}$ is a concave function with respect to $x$. A convex optimization problem can be defined formally as follows:

$$\min\; f(x) \quad \text{s.t.}\quad g_{k}(x) \le 0,\ k = 0,\dots,m-1, \qquad Gx \le h, \qquad Ax = b, \qquad x \in Q,$$

where $Q$ is a closed convex subset of $\mathbb{R}^{n}$; $f(x)$ and $g_{i}(x)$ are convex functions, $i = 0,\dots,m-1$; $h$ and $b$ are real vectors; $G$ and $A$ are deterministic matrices.

2.1. Preliminaries

Proposition 2 ([11], Lemma 3.1). Let $F : \mathbb{R} \to [0,1]$ be a distribution function with an $(r+1)$-decreasing density for some $r > 0$ and threshold $t^{*}(r+1) > 0$. Then the function $z \mapsto F(z^{-1/r})$ is concave on $\left(0,\ t^{*}(r+1)^{-r}\right)$. Moreover, $F(t) < 1$ for all $t \in \mathbb{R}$.

Definition 2.1 ([4]). A real function $f : \mathbb{R} \to \mathbb{R}$ is $K$-monotone on an open interval $I \subseteq \mathbb{R}$, with $K \ge 2$, if it is differentiable up to order $K-2$ and its derivatives satisfy

$$(-1)^{k}\,\frac{d^{k}}{dt^{k}} f(t) \ge 0, \qquad \forall\, 0 \le k \le K-2 \ \text{ and } \ \forall t \in I,$$

and the function $(-1)^{K-2}\,\frac{d^{K-2}}{dt^{K-2}} f(t)$ is non-increasing and convex on $I$.

Proposition 3 ([14]). Let $\psi : [0,1] \to [0, +\infty)$ be a strictly decreasing function with $\psi(1) = 0$. Then $\psi$ is the generator of an Archimedean copula in dimension $K$ if and only if $\psi^{-1}$ is $K$-monotone on $(0, \psi(0))$.
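As a concrete illustration of Proposition 3 (an assumed example, not taken from the paper): for the Clayton generator $\psi(t) = t^{-a} - 1$, the inverse $\psi^{-1}(s) = (1+s)^{-1/a}$ is completely monotone, hence $K$-monotone for every $K$, so $\psi$ generates an Archimedean copula in any dimension. A quick numerical check of the alternating-sign condition (NumPy assumed):

```python
import numpy as np

a = 2.0
s = np.linspace(0.01, 5.0, 2001)
psi_inv = (1.0 + s) ** (-1.0 / a)          # psi^{-1} for the Clayton generator psi(t) = t**(-a) - 1

vals, ok = psi_inv, True
for k in range(1, 5):                      # check (-1)^k d^k/ds^k psi^{-1}(s) >= 0 numerically
    vals = np.gradient(vals, s)
    ok &= bool(np.all((-1.0) ** k * vals >= -1e-6))
print(ok)                                  # expected: True, up to discretization error
```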

Definition 2.2. Let $f : Q \times \mathbb{R} \to \mathbb{R}$ be a real function, where $Q$ is a subset of $\mathbb{R}^{n}$. We say that $f$ is differentiable at $x \in Q$ if there exists a function $g : Q \to \mathcal{F}(\mathbb{R}, \mathbb{R}^{n})$ such that for all $\theta \in \mathbb{R}$ there exists a neighbourhood $N(\theta)$ of $\theta$ such that, denoting $f_{N(\theta)}$ the restriction of $f$ to $Q \times N(\theta)$,

$$\lim_{\epsilon \to 0,\ \epsilon \in \mathbb{R}^{n}} \frac{f_{N(\theta)}(x+\epsilon, \cdot) - f_{N(\theta)}(x, \cdot) - \langle \epsilon, g(x) \rangle}{\|\epsilon\|_{\max}} = 0,$$

where $\mathcal{F}(\mathbb{R}, \mathbb{R}^{n})$ denotes the set of functions from $\mathbb{R}$ to $\mathbb{R}^{n}$ and $\|\cdot\|_{\max}$ is the maximum norm.

Moreover, we say that $f$ is differentiable up to second order at $x$ if there exists a function $h : Q \to \mathcal{F}(\mathbb{R}, \mathbb{R}^{n \times n})$ such that

$$\lim_{\epsilon \to 0,\ \epsilon \in \mathbb{R}^{n}} \frac{f_{N(\theta)}(x+\epsilon, \cdot) - f_{N(\theta)}(x, \cdot) - \langle \epsilon, g(x) \rangle - \tfrac{1}{2}\epsilon^{T} h(x)\epsilon}{\|\epsilon\|_{\max}^{2}} = 0,$$

where $\mathcal{F}(\mathbb{R}, \mathbb{R}^{n \times n})$ denotes the set of functions from $\mathbb{R}$ to $\mathbb{R}^{n \times n}$.

Denote by $\frac{df}{dx} := g$ the first-order derivative of $f$ with respect to $x$ and by $\frac{d^{2}f}{dx^{2}} := h$ the second-order derivative of $f$ with respect to $x$.

In problem (1), we suppose that $v_{k}$ follows a normal mean-variance mixture distribution for $1 \le k \le K$. Next, we show that, under some additional conditions, the feasible set of (1) is a convex set.

2.2. Individual chance constraints

Suppose that $K = 1$ and $V$ follows a normal mean-variance mixture distribution with parameters $(\mu, \Sigma, \gamma)$ (cf. Definition 1.1). Suppose that $0 \notin Q$.

Lemma 2.1 (Proposition 5, [5]). The standard normal distribution has a 3-decreasing density with threshold $t^{*}(3) = \sqrt{3}$.
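A quick numerical illustration of Proposition 2 with $r = 2$ and the threshold of Lemma 2.1 (NumPy/SciPy assumed; the grid is an arbitrary choice): the map $z \mapsto \Phi(z^{-1/2})$ should be concave on $(0, (t^{*})^{-2}) = (0, 1/3)$.

```python
import numpy as np
from scipy.stats import norm

z = np.linspace(1e-3, 1.0 / 3.0 - 1e-3, 2000)
h = norm.cdf(z ** -0.5)                    # z -> Phi(z**(-1/2)) on (0, 1/3)
print(np.all(np.diff(h, 2) <= 1e-12))      # discrete second differences <= 0: concave on this grid
```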

The constraint in (1) can be rewritten as follows:

$$\mathbb{P}\left\{ v_{1}^{T}x \le D \right\} \ge 1-\epsilon$$
$$\Leftrightarrow\ \mathbb{P}_{W,Z}\left\{ (\mu^{T} + W\gamma^{T} + \sqrt{W}Z^{T}A^{T})x \le D \right\} \ge 1-\epsilon$$
$$\Leftrightarrow\ \mathbb{P}_{W,Z}\left\{ (W\gamma^{T} + \sqrt{W}Z^{T}A^{T})x \le D - x^{T}\mu \right\} \ge 1-\epsilon$$
$$\Leftrightarrow\ \mathbb{P}_{W,X}\left\{ \frac{W\,x^{T}\gamma}{\|x^{T}\Sigma^{1/2}\|_{2}} + \sqrt{W}X \le \frac{D - x^{T}\mu}{\|x^{T}\Sigma^{1/2}\|_{2}} \right\} \ge 1-\epsilon, \quad \text{by letting } X := \frac{Z^{T}A^{T}x}{\|x^{T}\Sigma^{1/2}\|_{2}}$$
$$\Leftrightarrow\ \mathbb{E}\left[ \mathbf{1}_{\left\{ X \le \frac{-x^{T}\gamma}{\|x^{T}\Sigma^{1/2}\|_{2}}\sqrt{W} + \frac{D - x^{T}\mu}{\sqrt{W}\,\|x^{T}\Sigma^{1/2}\|_{2}} \right\}} \right] \ge 1-\epsilon$$
$$\Leftrightarrow\ \mathbb{E}_{W}\left[ \mathbb{E}\left[ \mathbf{1}_{\left\{ X \le \frac{-x^{T}\gamma}{\|x^{T}\Sigma^{1/2}\|_{2}}\sqrt{W} + \frac{D - x^{T}\mu}{\sqrt{W}\,\|x^{T}\Sigma^{1/2}\|_{2}} \right\}} \,\Big|\, W \right] \right] \ge 1-\epsilon$$
$$\Leftrightarrow\ \mathbb{E}_{W}\left[ \Phi_{X}\!\left( \frac{-x^{T}\gamma}{\|x^{T}\Sigma^{1/2}\|_{2}}\sqrt{W} + \frac{D - x^{T}\mu}{\sqrt{W}\,\|x^{T}\Sigma^{1/2}\|_{2}} \right) \right] \ge 1-\epsilon, \tag{2}$$

where $\epsilon \in (0,1)$, $X$ follows a standard normal distribution $\mathcal{N}(0,1)$, $\Phi_{X}$ is the distribution function of $X$, and $W$ is a positive random variable independent of $X$.
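A minimal Monte Carlo sketch for evaluating the left-hand side of (2) at a candidate point $x$ (NumPy/SciPy assumed; the law of $W$, here uniform on a bounded interval, and the test data are illustrative assumptions only):

```python
import numpy as np
from scipy.stats import norm

def constraint_lhs(x, mu, gamma, Sigma, D, w_samples):
    """Monte Carlo estimate of E_W[ Phi_X( f(x, W) ) ] in constraint (2)."""
    s = np.sqrt(x @ Sigma @ x)                                   # ||x^T Sigma^{1/2}||_2
    f = (-(x @ gamma) / s) * np.sqrt(w_samples) \
        + (D - x @ mu) / (np.sqrt(w_samples) * s)
    return norm.cdf(f).mean()

rng = np.random.default_rng(0)
n, eps = 5, 0.05
mu = gamma = -np.ones(n); Sigma = np.eye(n); D = 200.0 * n
x = np.full(n, 10.0)
w = rng.uniform(0.5, 1.0, 100_000)                               # W supported on [t_min, t_max]
print(constraint_lhs(x, mu, gamma, Sigma, D, w) >= 1 - eps)      # feasibility of x for (2)
```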

We have the following theorem:

Theorem 2.2. Consider the linear programming problem (1) and let $M := \{ x \in Q \mid \mathbb{P}\{Vx \le D\} \ge 1-\epsilon \}$ be its feasible set. Suppose that $K = 1$ and that $V$ follows a normal mean-variance mixture distribution with parameters $(\mu, \Sigma, \gamma)$. Suppose that:

(1) $\Sigma$ is a positive definite matrix with $0 < \lambda_{\min} \le \lambda_{\max}$, where $\lambda_{\max}$ is the largest eigenvalue of $\Sigma$ and $\lambda_{\min}$ the smallest eigenvalue of $\Sigma$.

(2) $W$ is a random variable taking values in $[t_{\min}, t_{\max}]$ with $0 \le t_{\min} \le t_{\max} \le t^{+}$.

(3) For all $x \in Q$, we have
$$c > 0 \quad \text{and} \quad \left( \frac{1}{t^{+}} \ge \frac{\sqrt{b^{2} - 4ac} - b}{2c} \ \text{ or } \ b^{2} \le 4ac \right),$$
where
$$a = (-x^{T}\gamma)^{2}\lambda_{\min} + \alpha_{x,\gamma,\Sigma}, \qquad
b = 2(-x^{T}\gamma)(D - x^{T}\mu)\lambda_{\min} - 6(x^{T}\Sigma x)\|\gamma\|\|\mu\| + \beta_{x,\gamma,\mu,\Sigma}, \qquad
c = (D - x^{T}\mu)^{2}\lambda_{\min} + \theta_{x,\mu,\Sigma},$$
$$\alpha_{x,\gamma,\Sigma} := u^{T}z - \sqrt{(u^{T}z)^{2} + \sum_{1 \le i < j \le n}(u_{i}z_{j} - u_{j}z_{i})^{2}}, \qquad
\beta_{x,\gamma,\mu,\Sigma} := v^{T}z - \sqrt{(v^{T}z)^{2} + \sum_{1 \le i < j \le n}(v_{i}z_{j} - v_{j}z_{i})^{2}},$$
$$\theta_{x,\mu,\Sigma} := q^{T}z - \sqrt{(q^{T}z)^{2} + \sum_{1 \le i < j \le n}(q_{i}z_{j} - q_{j}z_{i})^{2}},$$
$$u = 4(-x^{T}\gamma)\gamma, \qquad v = 4(-x^{T}\gamma)\gamma + 4(D - x^{T}\mu)\mu, \qquad q = 4(D - x^{T}\mu)\mu, \qquad z = \Sigma x.$$

(4) For all $x \in Q$, we have
$$\frac{2\sqrt{(-x^{T}\gamma)(D - x^{T}\mu)}}{\sqrt{x^{T}\Sigma x}} > \sqrt{3}, \qquad -x^{T}\gamma > 0, \qquad D - x^{T}\mu > 0.$$

Then $M$ is a convex set.
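Assumption (3) can be verified numerically at a given point $x$; the sketch below (NumPy assumed, helper names hypothetical) computes $a$, $b$, $c$ and tests the quadratic condition on $[0, t^{+}]$.

```python
import numpy as np

def corner_term(u, z):
    """u^T z - sqrt( (u^T z)^2 + sum_{i<j} (u_i z_j - u_j z_i)^2 )."""
    cross = np.outer(u, z) - np.outer(z, u)
    return u @ z - np.sqrt((u @ z) ** 2 + 0.5 * np.sum(cross ** 2))

def assumption3_holds(x, mu, gamma, Sigma, D, t_plus):
    lam_min = np.linalg.eigvalsh(Sigma)[0]
    z = Sigma @ x
    u = 4.0 * (-(x @ gamma)) * gamma
    q = 4.0 * (D - x @ mu) * mu
    v = u + q
    a = (-(x @ gamma)) ** 2 * lam_min + corner_term(u, z)
    b = (2.0 * (-(x @ gamma)) * (D - x @ mu) * lam_min
         - 6.0 * (x @ Sigma @ x) * np.linalg.norm(gamma) * np.linalg.norm(mu)
         + corner_term(v, z))
    c = (D - x @ mu) ** 2 * lam_min + corner_term(q, z)
    if c <= 0:
        return False
    if b ** 2 <= 4.0 * a * c:
        return True
    return 1.0 / t_plus >= (np.sqrt(b ** 2 - 4.0 * a * c) - b) / (2.0 * c)
```

In practice one would check this condition over the whole region of interest rather than at a single point; the sketch only makes the formulas concrete.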

Proof. The proof follows this outline:

(i) We show that $f(x, W) := \dfrac{-x^{T}\gamma}{\|x^{T}\Sigma^{1/2}\|_{2}}\sqrt{W} + \dfrac{D - x^{T}\mu}{\sqrt{W}\,\|x^{T}\Sigma^{1/2}\|_{2}}$ is a $(-2)$-concave function of $x$ on $Q$, for all $W \ge 0$.

(ii) Using (i), we show that $\Phi_{X}(f(x, W))$ is a concave function of $x$, for all $W \ge 0$.

(iii) Using (ii), we show that $\mathbb{E}_{W}[\Phi_{X}(f(x, W))]$ is a concave function of $x$ on $Q$.

(iv) Using (iii), we deduce that the feasible set of (1) is a convex set.

Proof of (i). Let $f(x, W) := \dfrac{-x^{T}\gamma}{\|x^{T}\Sigma^{1/2}\|_{2}}\sqrt{W} + \dfrac{D - x^{T}\mu}{\sqrt{W}\,\|x^{T}\Sigma^{1/2}\|_{2}}$. The $(-2)$-concavity of $f(x, W)$ is equivalent to the convexity of the following function:

$$h(x, W) := \frac{x^{T}\Sigma x}{\left( -x^{T}\gamma\sqrt{W} + \tfrac{1}{\sqrt{W}}(D - x^{T}\mu) \right)^{2}}
= \frac{x^{T}\Sigma x}{W(x^{T}\gamma)^{2} + \tfrac{1}{W}(x^{T}\mu - D)^{2} + 2x^{T}\gamma(x^{T}\mu - D)}.$$

Let $M := W(x^{T}\gamma)^{2} + \tfrac{1}{W}(x^{T}\mu - D)^{2} + 2x^{T}\gamma(x^{T}\mu - D)$. Denote by $H_{x}h(x, W)$ the gradient vector of $h$ with respect to $x$ and by $H_{x^{2}}h(x, W)$ the Hessian matrix of $h$ with respect to $x$.

By a direct calculation, we obtain the gradient vector and the Hessian matrix of $h$ with respect to $x$:

$$(\ast)\quad H_{x}h(x, W) = 2M^{-1}\Sigma x - M^{-2}(x^{T}\Sigma x)\left[ 2W x^{T}\gamma\,\gamma + \frac{2}{W}(x^{T}\mu - D)\,\mu + 2(x^{T}\mu - D)\,\gamma + 2x^{T}\gamma\,\mu \right].$$

$$(\ast)\quad H_{x^{2}}h(x, W) = 2M^{-1}\Sigma - 8M^{-2}\left( W x^{T}\gamma + x^{T}\mu - D \right)\left( \Sigma x\gamma^{T} + \gamma x^{T}\Sigma \right) - 8M^{-2}\left( \tfrac{1}{W}(x^{T}\mu - D) + x^{T}\gamma \right)\left( \Sigma x\mu^{T} + \mu x^{T}\Sigma \right) + 6M^{-2}(x^{T}\Sigma x)\left[ W\gamma\gamma^{T} + \tfrac{1}{W}\mu\mu^{T} + \gamma\mu^{T} + \mu\gamma^{T} \right]$$
$$= 2M^{-2}\left\{ \left[ W(-x^{T}\gamma) + D - x^{T}\mu \right]\left[ (-x^{T}\gamma)\Sigma + 4(\Sigma x\gamma^{T} + \gamma x^{T}\Sigma) \right] + \frac{1}{W}\left[ (D - x^{T}\mu)\Sigma + 4(\Sigma x\mu^{T} + \mu x^{T}\Sigma) \right] \right\} + 6M^{-2}(x^{T}\Sigma x)\left[ W\gamma\gamma^{T} + \frac{1}{W}\mu\mu^{T} + \gamma\mu^{T} + \mu\gamma^{T} \right].$$

Hence

$$\frac{M^{2}}{2}\,H_{x^{2}}h(x, W) = W\left[ (-x^{T}\gamma)^{2}\Sigma + 4(-x^{T}\gamma)(\Sigma x\gamma^{T} + \gamma x^{T}\Sigma) + 3(x^{T}\Sigma x)\gamma\gamma^{T} \right]$$
$$+ \left[ 2(-x^{T}\gamma)(D - x^{T}\mu)\Sigma + 4(-x^{T}\gamma)(\Sigma x\gamma^{T} + \gamma x^{T}\Sigma) + 4(D - x^{T}\mu)(\Sigma x\mu^{T} + \mu x^{T}\Sigma) + 3(x^{T}\Sigma x)(\gamma\mu^{T} + \mu\gamma^{T}) \right]$$
$$+ \frac{1}{W}\left[ (D - x^{T}\mu)^{2}\Sigma + 4(D - x^{T}\mu)(\Sigma x\mu^{T} + \mu x^{T}\Sigma) + 3(x^{T}\Sigma x)\mu\mu^{T} \right]
= W A + B + \frac{1}{W}C,$$

where

$$A = (-x^{T}\gamma)^{2}\Sigma + 4(-x^{T}\gamma)(\Sigma x\gamma^{T} + \gamma x^{T}\Sigma) + 3(x^{T}\Sigma x)\gamma\gamma^{T},$$
$$B = 2(-x^{T}\gamma)(D - x^{T}\mu)\Sigma + 4(-x^{T}\gamma)(\Sigma x\gamma^{T} + \gamma x^{T}\Sigma) + 4(D - x^{T}\mu)(\Sigma x\mu^{T} + \mu x^{T}\Sigma) + 3(x^{T}\Sigma x)(\gamma\mu^{T} + \mu\gamma^{T}),$$
$$C = (D - x^{T}\mu)^{2}\Sigma + 4(D - x^{T}\mu)(\Sigma x\mu^{T} + \mu x^{T}\Sigma) + 3(x^{T}\Sigma x)\mu\mu^{T}.$$

For $h$ to be a convex function of $x$ for all $W \in [0, t^{+}]$, it suffices that the Hessian matrix $H_{x^{2}}h(x, W)$ be positive semidefinite for all $(x, W) \in Q \times [0, t^{+}]$; this is equivalent to the positive semidefiniteness of the matrix $WA + B + \frac{1}{W}C$ for all $(x, W) \in Q \times [0, t^{+}]$. We have:

(1) $A = (-x^{T}\gamma)^{2}\Sigma + 4(-x^{T}\gamma)(\Sigma x\gamma^{T} + \gamma x^{T}\Sigma) + 3(x^{T}\Sigma x)\gamma\gamma^{T}$.

Given two symmetric matrices $M$ and $N$, write $M \succeq N$ if $M - N$ is positive semidefinite. With this notation we have the following inequalities:

* $\Sigma \succeq \lambda_{\min}\,\mathrm{Id}_{n}$.

* $4(-x^{T}\gamma)(\Sigma x\gamma^{T} + \gamma x^{T}\Sigma)$ is a symmetric matrix, hence diagonalizable. One can show that it has $n-2$ eigenvalues equal to $0$ and two eigenvalues equal to
$$u^{T}z \pm \sqrt{(u^{T}z)^{2} + \sum_{1 \le i < j \le n}(u_{i}z_{j} - u_{j}z_{i})^{2}},$$
where $u = 4(-x^{T}\gamma)\gamma$ and $z = \Sigma x$.

* $\gamma\gamma^{T} \succeq 0$.

Let $\alpha_{x,\gamma,\Sigma} := u^{T}z - \sqrt{(u^{T}z)^{2} + \sum_{1 \le i < j \le n}(u_{i}z_{j} - u_{j}z_{i})^{2}}$. Hence we deduce the inequality
$$A \succeq \left[ (-x^{T}\gamma)^{2}\lambda_{\min} + \alpha_{x,\gamma,\Sigma} \right]\mathrm{Id}_{n}. \tag{3}$$

(2) $B = 2(-x^{T}\gamma)(D - x^{T}\mu)\Sigma + 4(-x^{T}\gamma)(\Sigma x\gamma^{T} + \gamma x^{T}\Sigma) + 4(D - x^{T}\mu)(\Sigma x\mu^{T} + \mu x^{T}\Sigma) + 3(x^{T}\Sigma x)(\gamma\mu^{T} + \mu\gamma^{T})$. We have:

* $\Sigma \succeq \lambda_{\min}\,\mathrm{Id}_{n}$.

* $\gamma\mu^{T} + \mu\gamma^{T} \succeq -2\|\gamma\|\|\mu\|\,\mathrm{Id}_{n}$.

* $4(-x^{T}\gamma)(\Sigma x\gamma^{T} + \gamma x^{T}\Sigma) + 4(D - x^{T}\mu)(\Sigma x\mu^{T} + \mu x^{T}\Sigma)$ is a symmetric matrix, hence diagonalizable. One can show that it has $n-2$ eigenvalues equal to $0$ and two eigenvalues equal to
$$v^{T}z \pm \sqrt{(v^{T}z)^{2} + \sum_{1 \le i < j \le n}(v_{i}z_{j} - v_{j}z_{i})^{2}},$$
where $z = \Sigma x$ and $v = 4(-x^{T}\gamma)\gamma + 4(D - x^{T}\mu)\mu$.

Let $\beta_{x,\gamma,\mu,\Sigma} := v^{T}z - \sqrt{(v^{T}z)^{2} + \sum_{1 \le i < j \le n}(v_{i}z_{j} - v_{j}z_{i})^{2}}$. Hence we deduce the inequality
$$B \succeq \left[ 2(-x^{T}\gamma)(D - x^{T}\mu)\lambda_{\min} - 6(x^{T}\Sigma x)\|\gamma\|\|\mu\| + \beta_{x,\gamma,\mu,\Sigma} \right]\mathrm{Id}_{n}. \tag{4}$$

(3) $C = (D - x^{T}\mu)^{2}\Sigma + 4(D - x^{T}\mu)(\Sigma x\mu^{T} + \mu x^{T}\Sigma) + 3(x^{T}\Sigma x)\mu\mu^{T}$. We have:

* $\Sigma \succeq \lambda_{\min}\,\mathrm{Id}_{n}$.

* $4(D - x^{T}\mu)(\Sigma x\mu^{T} + \mu x^{T}\Sigma)$ is a symmetric matrix, hence diagonalizable. One can show that it has $n-2$ eigenvalues equal to $0$ and two eigenvalues equal to
$$q^{T}z \pm \sqrt{(q^{T}z)^{2} + \sum_{1 \le i < j \le n}(q_{i}z_{j} - q_{j}z_{i})^{2}},$$
where $q := 4(D - x^{T}\mu)\mu$ and $z = \Sigma x$.

* $\mu\mu^{T} \succeq 0$.

Let $\theta_{x,\mu,\Sigma} := q^{T}z - \sqrt{(q^{T}z)^{2} + \sum_{1 \le i < j \le n}(q_{i}z_{j} - q_{j}z_{i})^{2}}$. Hence we deduce the inequality
$$C \succeq \left[ (D - x^{T}\mu)^{2}\lambda_{\min} + \theta_{x,\mu,\Sigma} \right]\mathrm{Id}_{n}. \tag{5}$$

By using (3), (4), (5), we deduce the following inequality:

$$WA + B + \frac{1}{W}C \succeq W\left[ (-x^{T}\gamma)^{2}\lambda_{\min} + \alpha_{x,\gamma,\Sigma} \right]\mathrm{Id}_{n} + \left[ 2(-x^{T}\gamma)(D - x^{T}\mu)\lambda_{\min} - 6(x^{T}\Sigma x)\|\gamma\|\|\mu\| + \beta_{x,\gamma,\mu,\Sigma} \right]\mathrm{Id}_{n} + \frac{1}{W}\left[ (D - x^{T}\mu)^{2}\lambda_{\min} + \theta_{x,\mu,\Sigma} \right]\mathrm{Id}_{n}, \tag{6}$$

where $\alpha_{x,\gamma,\Sigma}$, $\beta_{x,\gamma,\mu,\Sigma}$, $\theta_{x,\mu,\Sigma}$, $u$, $v$, $q$ and $z$ are as defined in assumption (3) of the theorem. Let
$$a = (-x^{T}\gamma)^{2}\lambda_{\min} + \alpha_{x,\gamma,\Sigma}, \qquad b = 2(-x^{T}\gamma)(D - x^{T}\mu)\lambda_{\min} - 6(x^{T}\Sigma x)\|\gamma\|\|\mu\| + \beta_{x,\gamma,\mu,\Sigma}, \qquad c = (D - x^{T}\mu)^{2}\lambda_{\min} + \theta_{x,\mu,\Sigma}.$$

Obviously, the positive semidefiniteness of the Hessian matrix $H_{x^{2}}h(x, W)$ follows from the positivity of $Wa + b + \frac{1}{W}c$ for all $W \in [0, t^{+}]$. We deduce the following sufficient condition:

$$c > 0 \quad \text{and} \quad \left( \frac{1}{t^{+}} \ge \frac{\sqrt{b^{2} - 4ac} - b}{2c} \ \text{ or } \ b^{2} \le 4ac \right),$$

with $a$, $b$, $c$ (and $\alpha_{x,\gamma,\Sigma}$, $\beta_{x,\gamma,\mu,\Sigma}$, $\theta_{x,\mu,\Sigma}$, $u$, $v$, $q$, $z$) defined as above; this is exactly assumption (3) of the theorem.

We deduce that $f(x, W)$ is a $(-2)$-concave function of $x$ on $Q$, for all $W \in [0, t^{+}]$.

Proof of (ii). Let $x_{1}, x_{2} \in Q$, $W \in [0, t^{+}]$ and $\alpha \in [0,1]$. We have

$$f[\alpha x_{1} + (1-\alpha)x_{2}, W] \ge \left[ \alpha f^{-2}(x_{1}, W) + (1-\alpha) f^{-2}(x_{2}, W) \right]^{-1/2} \quad \text{(by the $(-2)$-concavity of $f$ from (i))}$$
$$\Longleftrightarrow\ \Phi_{X}\left( f[\alpha x_{1} + (1-\alpha)x_{2}, W] \right) \ge \Phi_{X}\left( \left[ \alpha f^{-2}(x_{1}, W) + (1-\alpha) f^{-2}(x_{2}, W) \right]^{-1/2} \right) \quad \text{(since $\Phi_{X}$ is strictly increasing)}. \tag{7}$$

By Lemma 2.1, the function $\Phi_{X}(t^{-1/2})$ is concave on $(0, (t^{*})^{-2})$, where $t^{*} = \sqrt{3}$. We now show that $f^{-2}(x_{1}, W)$ lies in $(0, (t^{*})^{-2})$.

Proof. In fact, we have

$$|f(x_{1}, W)| = \left| \frac{-x_{1}^{T}\gamma}{\|x_{1}^{T}\Sigma^{1/2}\|_{2}}\sqrt{W} + \frac{D - x_{1}^{T}\mu}{\sqrt{W}\,\|x_{1}^{T}\Sigma^{1/2}\|_{2}} \right| \ge \frac{2\sqrt{(-x_{1}^{T}\gamma)(D - x_{1}^{T}\mu)}}{\sqrt{x_{1}^{T}\Sigma x_{1}}} \quad \text{(by the Cauchy inequality)}$$
$$> t^{*} \quad \text{(by assumption (4) of the theorem)}$$
$$\Longleftrightarrow\ 0 < f(x_{1}, W)^{-2} < (t^{*})^{-2}.$$

Hence, by the concavity of $\Phi_{X}(t^{-1/2})$ on $(0, (t^{*})^{-2})$, we have

$$\Phi_{X}\left( \left[ \alpha f^{-2}(x_{1}, W) + (1-\alpha) f^{-2}(x_{2}, W) \right]^{-1/2} \right) \ge \alpha\,\Phi_{X}(f(x_{1}, W)) + (1-\alpha)\,\Phi_{X}(f(x_{2}, W)).$$

Combining with (7), we deduce that $\Phi_{X}(f(x, W))$ is a concave function of $x$ on $Q$, for all $W \in [0, t^{+}]$.

Proof of (iii). Let $g(x, W) := \Phi_{X}(f(x, W))$. By (ii), $g(x, W)$ is a concave function of $x$ on $Q$, for all $W \in [0, t^{+}]$. Since $\mathbb{E}_{W} : g \mapsto \mathbb{E}_{W}(g)$ is a linear transformation and concavity is preserved by linear transformations, we deduce that $\mathbb{E}_{W}[\Phi_{X}(f(x, W))]$ is a concave function of $x$ on $Q$.

Proof of (iv). Let $x, y$ be two points satisfying the constraint (2) and let $\alpha \in [0,1]$. We have

$$\mathbb{E}_{W}[\Phi_{X}(f(x, W))] \ge 1-\epsilon, \qquad \mathbb{E}_{W}[\Phi_{X}(f(y, W))] \ge 1-\epsilon,$$
$$\mathbb{E}_{W}[\Phi_{X}(f(\alpha x + (1-\alpha)y, W))] \ge \alpha\,\mathbb{E}_{W}[\Phi_{X}(f(x, W))] + (1-\alpha)\,\mathbb{E}_{W}[\Phi_{X}(f(y, W))] \quad \text{(by the concavity established in (iii))}.$$

We deduce that $\mathbb{E}_{W}[\Phi_{X}(f(\alpha x + (1-\alpha)y, W))] \ge 1-\epsilon$, so $\alpha x + (1-\alpha)y$ also satisfies the constraint (2). Hence, the feasible set of (1) is a convex set.

2.3. Independent joint chance constraints

Suppose that $v_{i}$ follows a normal mean-variance mixture with parameters $(\mu_{i}, \Sigma_{i}, \gamma_{i})$, for $1 \le i \le K$, and that the vectors $v_{i}$ are independent. Suppose that $0 \notin Q$.

The constraint in (1) can be rewritten as follows:

$$\mathbb{P}\{Vx \le D\} \ge 1-\epsilon$$
$$\Leftrightarrow\ \prod_{i=1}^{K} \mathbb{P}\left\{ v_{i}^{T}x \le D_{i} \right\} \ge 1-\epsilon \quad \text{(by independence)}$$
$$\Leftrightarrow\ \prod_{i=1}^{K} \mathbb{E}_{W_{i}}\left[ \Phi_{X_{i}}\!\left( \frac{-x^{T}\gamma_{i}}{\|x^{T}\Sigma_{i}^{1/2}\|_{2}}\sqrt{W_{i}} + \frac{D_{i} - x^{T}\mu_{i}}{\sqrt{W_{i}}\,\|x^{T}\Sigma_{i}^{1/2}\|_{2}} \right) \right] \ge 1-\epsilon \quad \text{(by the transformation used for (2) in Section 2.2)}$$
$$\Leftrightarrow\ \sum_{i=1}^{K} \log \mathbb{E}_{W_{i}}\left[ \Phi_{X_{i}}\!\left( \frac{-x^{T}\gamma_{i}}{\|x^{T}\Sigma_{i}^{1/2}\|_{2}}\sqrt{W_{i}} + \frac{D_{i} - x^{T}\mu_{i}}{\sqrt{W_{i}}\,\|x^{T}\Sigma_{i}^{1/2}\|_{2}} \right) \right] \ge \log(1-\epsilon). \tag{8}$$
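Constraint (8) can be estimated row by row with the same Monte Carlo device used for (2). A compact sketch (NumPy/SciPy assumed; the data layout and names are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def row_lhs(x, mu_i, gamma_i, Sigma_i, D_i, w_i):
    """E_{W_i}[ Phi_{X_i}(...) ] for one row, estimated over samples w_i of W_i."""
    s = np.sqrt(x @ Sigma_i @ x)
    f = (-(x @ gamma_i) / s) * np.sqrt(w_i) + (D_i - x @ mu_i) / (np.sqrt(w_i) * s)
    return norm.cdf(f).mean()

def joint_log_lhs(x, rows, w_rows):
    """Left-hand side of (8): sum_i log E_{W_i}[ Phi_{X_i}(...) ].
    `rows` is a list of tuples (mu_i, gamma_i, Sigma_i, D_i)."""
    return sum(np.log(row_lhs(x, *row, w_i)) for row, w_i in zip(rows, w_rows))

# feasibility of x for (8):  joint_log_lhs(x, rows, w_rows) >= np.log(1 - eps)
```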

We have the following theorem:

Theorem 2.3. Consider the linear programming problem (1) and let $M := \{ x \in Q \mid \mathbb{P}\{Vx \le D\} \ge 1-\epsilon \}$ be its feasible set. Suppose that, for all $i = 1,\dots,K$, $v_{i}$ follows a normal mean-variance mixture with parameters $(\mu_{i}, \Sigma_{i}, \gamma_{i})$ and that the vectors $v_{i}$ are independent. Suppose that for all $i = 1,\dots,K$ we have:

(1) $\Sigma_{i}$ is a positive definite matrix with $0 < \lambda_{i,\min} \le \lambda_{i,\max}$, where $\lambda_{i,\max}$ is the largest eigenvalue of $\Sigma_{i}$ and $\lambda_{i,\min}$ the smallest eigenvalue of $\Sigma_{i}$.

(2) $W_{i}$ is a positive random variable taking values in $[t_{i,\min}, t_{i,\max}]$ with $0 \le t_{i,\min} \le t_{i,\max} \le t_{i}^{+}$.

(3) For all $x \in Q$ and $i = 1,\dots,K$, we have
$$c_{i} > 0 \quad \text{and} \quad \left( \frac{1}{t_{i}^{+}} \ge \frac{\sqrt{b_{i}^{2} - 4a_{i}c_{i}} - b_{i}}{2c_{i}} \ \text{ or } \ b_{i}^{2} \le 4a_{i}c_{i} \right),$$
where $a_{i}$, $b_{i}$, $c_{i}$ (together with $\alpha_{x,\gamma_{i},\Sigma_{i}}$, $\beta_{x,\gamma_{i},\mu_{i},\Sigma_{i}}$, $\theta_{x,\mu_{i},\Sigma_{i}}$, $u$, $v$, $q$, $z$) are defined as $a$, $b$, $c$ in assumption (3) of Theorem 2.2, with $(\mu, \Sigma, \gamma, D, \lambda_{\min})$ replaced by $(\mu_{i}, \Sigma_{i}, \gamma_{i}, D_{i}, \lambda_{i,\min})$.

(4) For all $x \in Q$, we have
$$\frac{2\sqrt{(-x^{T}\gamma_{i})(D_{i} - x^{T}\mu_{i})}}{\sqrt{x^{T}\Sigma_{i}x}} > \sqrt{3}, \qquad -x^{T}\gamma_{i} > 0, \qquad D_{i} - x^{T}\mu_{i} > 0, \qquad \forall i = 1,\dots,K.$$

Then $M$ is a convex set.

Proof. Based on the proof of Theorem 2.2, each term
$$\mathbb{E}_{W_{i}}\left[ \Phi_{X_{i}}\!\left( \frac{-x^{T}\gamma_{i}}{\|x^{T}\Sigma_{i}^{1/2}\|_{2}}\sqrt{W_{i}} + \frac{D_{i} - x^{T}\mu_{i}}{\sqrt{W_{i}}\,\|x^{T}\Sigma_{i}^{1/2}\|_{2}} \right) \right]$$
is a concave function of $x$, hence also log-concave. Therefore $\sum_{i=1}^{K} \log \mathbb{E}_{W_{i}}[\Phi_{X_{i}}(\cdot)]$ is a concave function of $x$, and the feasible set of (1) is convex.

2.3.1. Dependent joint chance constraints with independent copula

Suppose that $v_{i}$ follows a normal mean-variance mixture distribution with parameters $(\mu_{i}, \Sigma_{i}, \gamma_{i})$, for $1 \le i \le K$. Suppose that $0 \notin Q$.

The constraint in (1) can be rewritten as follows:

$$\mathbb{P}\{Vx \le D\} \ge 1-\epsilon$$
$$\Leftrightarrow\ \mathbb{P}\left( \bigcap_{i=1}^{K} \left\{ v_{i}^{T}x \le D_{i} \right\} \right) \ge 1-\epsilon$$
$$\Leftrightarrow\ \mathbb{E}_{W}\left[ \mathbb{E}\left[ \mathbf{1}_{\bigcap_{i=1}^{K} \left\{ X_{i}(x) \le \frac{-x^{T}\gamma_{i}}{\|x^{T}\Sigma_{i}^{1/2}\|_{2}}\sqrt{W_{i}} + \frac{D_{i} - x^{T}\mu_{i}}{\sqrt{W_{i}}\,\|x^{T}\Sigma_{i}^{1/2}\|_{2}} \right\}} \,\Big|\, W \right] \right] \ge 1-\epsilon \quad \text{(following the same procedure as for (2))}$$
$$\Leftrightarrow\ \mathbb{E}_{W}\left[ \Phi\left( g_{1}(x, W), \dots, g_{K}(x, W) \right) \right] \ge 1-\epsilon.$$

Here we suppose that $W$ and $X(x)$ are independent, with
$$W := (W_{1},\dots,W_{K}), \qquad X(x) := (X_{1}(x),\dots,X_{K}(x)), \qquad X_{i}(x) := \frac{Z_{i}^{T}A_{i}^{T}x}{\|x^{T}\Sigma_{i}^{1/2}\|_{2}},$$
$$g_{i}(x, W) := \frac{-x^{T}\gamma_{i}}{\|x^{T}\Sigma_{i}^{1/2}\|_{2}}\sqrt{W_{i}} + \frac{D_{i} - x^{T}\mu_{i}}{\sqrt{W_{i}}\,\|x^{T}\Sigma_{i}^{1/2}\|_{2}}, \tag{9}$$

where $\Phi$ is the distribution function of $X$.

Remark. In general, unfortunately, we cannot prove the concavity of the function $\Phi(g_{1}(x, W), \dots, g_{K}(x, W))$, i.e. of $\mathbb{E}_{W}[\Phi(g_{1}(x, W), \dots, g_{K}(x, W))]$.

Consequently, we suppose in addition that $t_{i,\min} \approx t_{i,\max}$ for $i = 1,\dots,K$. Hence, we deduce that
$$\mathbb{E}_{W}\left[ \Phi(g_{1}(x, W), \dots, g_{K}(x, W)) \right] \approx \prod_{i=1}^{K}(t_{i,\max} - t_{i,\min}) \times \Phi\left( g_{1}(x, w_{1}), \dots, g_{K}(x, w_{K}) \right),$$
where $w_{i}$ is an arbitrary point of $[t_{i,\min}, t_{i,\max}]$. The constraint in (1) can then be rewritten as

$$\Phi\left( g_{1}(x, w_{1}), \dots, g_{K}(x, w_{K}) \right) \ge \theta, \tag{10}$$

with $X(x)$, $X_{i}(x)$, $g_{i}(x, W)$ and $\Phi$ as in (9).

Suppose that there exists an Archimedean copula $C$, independent of $x$, such that
$$\Phi\left( g_{1}(x, w_{1}), \dots, g_{K}(x, w_{K}) \right) = C\left[ F_{1}(g_{1}(x, w_{1})), \dots, F_{K}(g_{K}(x, w_{K})) \right],$$
where $F_{i}$ is the distribution function of $X_{i}(x)$, which is a standard normal distribution, and $C(u) = \psi^{(-1)}\!\left( \sum_{i=1}^{K} \psi(u_{i}) \right)$, where $\psi$ is a generator of $C$. We reformulate the constraint (10) as follows:

$$\Phi\left( g_{1}(x, w_{1}), \dots, g_{K}(x, w_{K}) \right) \ge \theta$$
$$\Leftrightarrow\ C\left[ F_{1}(g_{1}(x, w_{1})), \dots, F_{K}(g_{K}(x, w_{K})) \right] \ge \theta$$
$$\Leftrightarrow\ \sum_{i=1}^{K} \psi\left[ F_{i}(g_{i}(x, w_{i})) \right] \le \psi(\theta) \quad \text{(by the decreasing property of $\psi$)}$$
$$\Leftrightarrow\ F_{i}(g_{i}(x, w_{i})) \ge \psi^{(-1)}\left[ \alpha_{i}\,\psi(\theta) \right], \ \forall i = 1,\dots,K, \qquad \sum_{i=1}^{K}\alpha_{i} = 1$$
$$\Leftrightarrow\ g_{i}(x, w_{i})^{-2} \le \left\{ F_{i}^{(-1)}\!\left( \psi^{(-1)}\left[ \alpha_{i}\,\psi(\theta) \right] \right) \right\}^{-2}, \ \forall i = 1,\dots,K, \qquad \sum_{i=1}^{K}\alpha_{i} = 1. \tag{11}$$
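The right-hand sides of (11) can be tabulated once the generator and the weights $\alpha_{i}$ are fixed. The sketch below uses a Clayton generator $\psi(t) = t^{-a} - 1$ as an assumed example (NumPy/SciPy assumed):

```python
import numpy as np
from scipy.stats import norm

def psi(t, a):      return t ** (-a) - 1.0        # Clayton generator (assumed example)
def psi_inv(s, a):  return (1.0 + s) ** (-1.0 / a)

def row_thresholds(alpha, theta, a):
    """F_i^{-1}( psi^{-1}[ alpha_i * psi(theta) ] ) for each row of (11),
    with F_i the standard normal distribution function."""
    alpha = np.asarray(alpha, dtype=float)
    assert np.isclose(alpha.sum(), 1.0) and np.all(alpha >= 0)
    return norm.ppf(psi_inv(alpha * psi(theta, a), a))

# example: K = 3 equal weights, theta above Phi(sqrt(3)) ~ 0.958
t = row_thresholds([1/3, 1/3, 1/3], theta=0.97, a=2.0)
print(t)        # each g_i(x, w_i) must satisfy g_i >= t_i, i.e. g_i^{-2} <= t_i^{-2}
```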

We have the following theorem:

Theorem 2.4. Consider the linear programming problem (1) and let $M := \{ x \in Q \mid \mathbb{P}\{Vx \le D\} \ge 1-\epsilon \}$ be its feasible set. Suppose that, for all $i = 1,\dots,K$, $v_{i}$ follows a normal mean-variance mixture with parameters $(\mu_{i}, \Sigma_{i}, \gamma_{i})$, and that $W_{i}$ is a positive random variable taking values in $[t_{i,\min}, t_{i,\max}]$ with $t_{i,\min} \approx t_{i,\max}$. Let $X_{i}(x) := \dfrac{Z_{i}^{T}A_{i}^{T}x}{\|x^{T}\Sigma_{i}^{1/2}\|_{2}}$ for all $1 \le i \le K$ and $x \in Q$.

Suppose that the multivariate copula of $X(x) := (X_{1}(x), \dots, X_{K}(x))$ does not depend on $x$, and that $W$ and $X(x)$ are independent. Denote by $C$ the multivariate copula of $X(x)$ and suppose that $C$ is an Archimedean copula with generator $\psi$. Suppose that $\theta \ge \Phi_{\mathcal{N}(0,1)}(\sqrt{3})$.

Suppose that for all $i = 1,\dots,K$ we have:

(1) $\Sigma_{i}$ is a positive definite matrix with $0 < \lambda_{i,\min} \le \lambda_{i,\max}$, where $\lambda_{i,\max}$ is the largest eigenvalue of $\Sigma_{i}$ and $\lambda_{i,\min}$ the smallest eigenvalue of $\Sigma_{i}$.

(2) $W_{i}$ is a random variable taking values in $[t_{i,\min}, t_{i,\max}]$ with $0 \le t_{i,\min} \le t_{i,\max} \le t_{i}^{+}$.

(3) For all $x \in Q$ and $i = 1,\dots,K$, we have
$$c_{i} > 0 \quad \text{and} \quad \left( \frac{1}{t_{i}^{+}} \ge \frac{\sqrt{b_{i}^{2} - 4a_{i}c_{i}} - b_{i}}{2c_{i}} \ \text{ or } \ b_{i}^{2} \le 4a_{i}c_{i} \right),$$
where $a_{i}$, $b_{i}$, $c_{i}$ are defined as in assumption (3) of Theorem 2.3.

(4) For all $x \in Q$, we have
$$\frac{2\sqrt{(-x^{T}\gamma_{i})(D_{i} - x^{T}\mu_{i})}}{\sqrt{x^{T}\Sigma_{i}x}} > \sqrt{3}, \qquad -x^{T}\gamma_{i} > 0, \qquad D_{i} - x^{T}\mu_{i} > 0, \qquad \forall i = 1,\dots,K.$$

Then $M$ is a convex set.

Proof. The proof follows this outline:

(i) We show that $g_{i}(x, w_{i})$ is a $(-2)$-concave function of $x$ on $Q$, for all $w_{i} \in [t_{i,\min}, t_{i,\max}]$.

(ii) We show that $\left\{ F_{i}^{(-1)}\!\left( \psi^{(-1)}[\alpha_{i}\psi(\theta)] \right) \right\}^{-2}$ is a concave function of $\alpha_{i}$ on $[0,1]$.

(iii) Using (i) and (ii), we deduce that the feasible set of (1) is a convex set.

Proof of (i). The proof follows from Theorem 2.2.

Proof of (ii). Let $H := \psi^{(-1)}[\alpha_{i}\psi(\theta)]$. By a direct calculation, we obtain
$$\frac{d^{2}H}{d\alpha_{i}^{2}} = -\psi(\theta)^{2} \times \frac{\psi''(H)}{\psi'(H)^{3}}.$$
Using the properties of a generator of an Archimedean copula, we deduce that $\frac{d^{2}H}{d\alpha_{i}^{2}} \ge 0$, i.e. $H$ is a convex function of $\alpha_{i}$ (*).

We prove that $u(x) := \left[ F_{i}^{(-1)}(x) \right]^{-2}$ is a concave function of $x$ if $x \ge F_{i}(\sqrt{3})$ (**).

Proof. In fact, setting $F_{i}^{(-1)}(x) = v$, we have (up to a positive constant factor)
$$u''(x) = \left( 6v^{-4} - 2v^{-2} \right)\frac{1}{\exp(-v^{2})}.$$
Hence $u''(x) \le 0$ if and only if $x \ge F_{i}(\sqrt{3})$.

We prove that $\psi^{(-1)}[\alpha_{i}\psi(\theta)] \ge F_{i}(\sqrt{3})$ (***).

Proof. In fact, this follows from $0 \le \alpha_{i} \le 1$ and $\theta \ge F_{i}(\sqrt{3})$.

We need the following lemma:

Lemma 2.5 ([15]). Let $M_{1}, M_{2} \subset \mathbb{R}$ be convex sets. Suppose that $f_{1} : M_{1} \to M_{2}$ is a convex function on $M_{1}$ and that $f_{2} : M_{2} \to \mathbb{R}$ is a decreasing and concave function on $M_{2}$. Then the composition $f_{2} \circ f_{1}$ is a concave function on $M_{1}$.

By applying (*), (**), (***) and Lemma 2.5, we deduce the concavity of $\left\{ F_{i}^{(-1)}\!\left( \psi^{(-1)}[\alpha_{i}\psi(\theta)] \right) \right\}^{-2}$ with respect to $\alpha_{i}$.

Proof of (iii). By using (i) and (ii), we deduce that all constraints are convex.

2.4. Dependent joint chance constraints with general copula

We rewrite the constraint of (1) from the previous section as follows:
$$\Phi\left( g_{1}(x, w_{1}), \dots, g_{K}(x, w_{K}) \right) \ge \theta$$
$$\Leftrightarrow\ C\left[ F_{1}(g_{1}(x, w_{1})), \dots, F_{K}(g_{K}(x, w_{K})) \right] \ge \theta$$
$$\Leftrightarrow\ g_{i}(x, w_{i})^{-2} \le \left\{ F_{i}^{(-1)}\!\left( \psi^{(-1)}[\alpha_{i}\psi(\theta)] \right) \right\}^{-2}, \ \forall i = 1,\dots,K, \qquad \sum_{i=1}^{K}\alpha_{i} = 1.$$

In this section, we suppose that the copula $C$ is a function $C(x)$ of $x$ and that, for every $x$, $C(x)$ is an Archimedean copula with generator $\psi_{x}$. The last constraint is then rewritten as
$$g_{i}(x, w_{i})^{-2} \le \left\{ F_{i}^{(-1)}\!\left( \psi_{x}^{(-1)}[\alpha_{i}\psi_{x}(\theta)] \right) \right\}^{-2}, \ \forall i = 1,\dots,K, \qquad \sum_{i=1}^{K}\alpha_{i} = 1.$$

The only difference between this case and the previous one is that there the right-hand side $\left\{ F_{i}^{(-1)}\!\left( \psi^{(-1)}[\alpha_{i}\psi(\theta)] \right) \right\}^{-2}$ does not depend on $x$ and is a function of $\alpha_{i}$ only. Following the same proof as in the previous section, we only need to find a family of generators $\psi_{x}$ such that $\psi_{x}^{(-1)}[\alpha_{i}\psi_{x}(\theta)]$ is a convex function with respect to $(x, \alpha_{i})$. Unfortunately, we do not have this convexity in general because of the $\alpha_{i}$. However, if there exists a lower bound $\epsilon > 0$ for the $\alpha_{i}$ (i.e. $\alpha_{i} \ge \epsilon$), we can prove convexity when $\theta \approx 1$. We have the following theorem:

Theorem 2.6. Consider the linear programming problem (1) and let $M$ be its feasible set. Suppose that $\psi_{x}(t) = g(x)\left( t^{-1/g(x)} - 1 \right)$ (a Clayton copula family with $g(x) > 0$). Suppose that $g$ is continuously differentiable up to second order (cf. Definition 2.2) with $g''(x) \le 0$, and that there exists $\delta > 0$ such that $g(x) \ge \delta$ for all $x \in Q$. Suppose that $\theta \ge \Phi_{\mathcal{N}(0,1)}(\sqrt{3})$.

Suppose that for all $i = 1,\dots,K$ we have:

(1) $\Sigma_{i}$ is a positive definite matrix with $0 < \lambda_{i,\min} \le \lambda_{i,\max}$, where $\lambda_{i,\max}$ is the largest eigenvalue of $\Sigma_{i}$ and $\lambda_{i,\min}$ the smallest eigenvalue of $\Sigma_{i}$.

(2) For all $x \in Q$ and $i = 1,\dots,K$, we have
$$c_{i} > 0 \quad \text{and} \quad \left( \frac{1}{t_{i}^{+}} \ge \frac{\sqrt{b_{i}^{2} - 4a_{i}c_{i}} - b_{i}}{2c_{i}} \ \text{ or } \ b_{i}^{2} \le 4a_{i}c_{i} \right),$$
where $a_{i}$, $b_{i}$, $c_{i}$ are defined as in assumption (3) of Theorem 2.3.

(4) For all $x \in Q$, we have $-x^{T}\gamma_{i} > 0$ and $D_{i} - x^{T}\mu_{i} > 0$.

Then there exists $\omega(\epsilon, \delta)$, depending on $\epsilon$ and $\delta$, such that for all $\theta \ge \omega(\epsilon, \delta)$ the set $M$ is convex.
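A small sketch of the generator family of Theorem 2.6 and of the quantity $U(\alpha_{i}, x)$ whose joint convexity is at stake (NumPy assumed; the particular $g$, a constant satisfying $g'' = 0 \le 0$ and $g \ge \delta$, is an illustrative assumption):

```python
import numpy as np

def psi_x(t, gx):      return gx * (t ** (-1.0 / gx) - 1.0)     # psi_x(t) = g(x)(t^{-1/g(x)} - 1)
def psi_x_inv(s, gx):  return (1.0 + s / gx) ** (-gx)

def U(alpha_i, x, g, theta):
    """U(alpha_i, x) = psi_x^{-1}[ alpha_i * psi_x(theta) ] from the proof of Theorem 2.6."""
    gx = g(x)
    return psi_x_inv(alpha_i * psi_x(theta, gx), gx)

g = lambda x: 2.0                      # constant g: g'' = 0 and g >= delta for any delta <= 2
print(U(0.5, np.zeros(3), g, theta=0.99))
```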

Proof. Following the same proof as for Theorem 2.4, we only need to prove that $U(\alpha_{i}, x) := \psi_{x}^{(-1)}[\alpha_{i}\psi_{x}(\theta)]$ is convex with respect to $(x, \alpha_{i})$ for all $\theta \ge \omega(\epsilon, \delta)$, for some $\omega(\epsilon, \delta)$.

Let $u_{x} : \mathbb{R} \to \mathbb{R}$ be a family of real functions depending on $x$ such that there exist $q_{x} \in \mathcal{F}(\mathbb{R}, \mathbb{R}^{n})$ and $v_{x} \in \mathcal{F}(\mathbb{R}, \mathbb{R}^{n \times n})$ with $q_{x} = \frac{d}{dx}u_{x}$ and $v_{x} = \frac{d^{2}}{dx^{2}}u_{x}$ (cf. Definition 2.2). For a function $f : \mathbb{R}^{n} \to \mathbb{R}$, we have the following equations:

$$\frac{d}{dx}\left[ u_{x}(f(x)) \right] = q_{x}(f(x)) + u_{x}'(f(x))\,f'(x), \tag{12}$$
$$\frac{d}{dx}\left[ q_{x}(f(x)) \right] = v_{x}(f(x)) + q_{x}'(f(x))\,f'(x)^{T}. \tag{13}$$

Let $J := \psi_{x}(\theta)$, $K_{x} := \psi_{x}^{(-1)}$, $L_{x} := \frac{d}{dx}K_{x}$ and $M_{x} := \frac{d^{2}}{dx^{2}}K_{x}$.

We deduce the following equations:

$$K_{x}(t) = \left( \frac{t}{g(x)} + 1 \right)^{-g(x)},$$
$$L_{x}(t) = K_{x}(t)\left[ \frac{t}{t + g(x)} - \log\!\left( \frac{t}{g(x)} + 1 \right) \right] g'(x),$$
$$M_{x}(t) = \left\{ \left[ \frac{t}{t + g(x)} - \log\!\left( \frac{t}{g(x)} + 1 \right) \right]^{2} + \frac{t}{t\,g(x) + g(x)^{2}} - \frac{t}{(t + g(x))^{2}} \right\} K_{x}(t)\, g'(x)g'(x)^{T} + K_{x}(t)\left[ \frac{t}{t + g(x)} - \log\!\left( \frac{t}{g(x)} + 1 \right) \right] g''(x),$$
$$K_{x}'(t) = -K_{x}(t)\,\frac{g(x)}{t + g(x)}, \qquad K_{x}''(t) = K_{x}(t)\,\frac{g(x)^{2} + g(x)}{(t + g(x))^{2}},$$
$$L_{x}'(t) = K_{x}(t)\left[ \frac{g(x)}{t + g(x)}\log\!\left( \frac{t}{g(x)} + 1 \right) - \frac{t\,(g(x) + 1)}{(t + g(x))^{2}} \right] g'(x). \tag{14}$$

We have $\frac{d^{2}}{d\alpha_{i}^{2}}U(\alpha_{i}, x) = J^{2}K_{x}''(\alpha_{i}J) > 0$. Hence, a necessary and sufficient condition for the convexity of $U(\alpha_{i}, x)$ is the positive semidefiniteness of the following symmetric matrix:

$$\left[ \frac{d^{2}}{dx^{2}} \cdot \frac{d^{2}}{d\alpha_{i}^{2}} - \frac{d^{2}}{dx\,d\alpha_{i}}\left( \frac{d^{2}}{dx\,d\alpha_{i}} \right)^{T} \right] \circ \left[ U(\alpha_{i}, x) \right]$$
$$= Q_{1} \times M_{x}(\alpha_{i}J) + Q_{2} \times \left[ L_{x}'(\alpha_{i}J)L_{x}(J)^{T} + L_{x}(J)L_{x}'(\alpha_{i}J)^{T} \right] + Q_{3} \times \left[ L_{x}(\alpha_{i}J)L_{x}(J)^{T} + L_{x}(J)L_{x}(\alpha_{i}J)^{T} \right] + Q_{4} \times L_{x}(J)L_{x}(J)^{T} + Q_{5} \times L_{x}'(\alpha_{i}J)L_{x}'(\alpha_{i}J)^{T}, \tag{15}$$

where
$$Q_{1} = J^{2}K_{x}''(\alpha_{i}J)\left[ 1 - \alpha_{i}\,\frac{K_{x}'(\alpha_{i}J)}{K_{x}'(J)} \right], \qquad Q_{2} = J\,\frac{K_{x}'(\alpha_{i}J)}{K_{x}'(J)}, \qquad Q_{3} = \alpha_{i}J^{2}\,\frac{K_{x}'(\alpha_{i}J)K_{x}''(\alpha_{i}J)}{K_{x}'(J)^{2}},$$
$$Q_{4} = -\alpha_{i}J^{2}\,\frac{K_{x}''(J)K_{x}'(\alpha_{i}J)K_{x}''(\alpha_{i}J)}{K_{x}'(J)^{3}} - 2\alpha_{i}J\,\frac{K_{x}''(\alpha_{i}J)K_{x}'(\alpha_{i}J)}{K_{x}'(J)^{2}} - \frac{K_{x}'(\alpha_{i}J)^{2}}{K_{x}'(J)^{2}}, \qquad Q_{5} = -J^{2}. \tag{16}$$

Using the equations in (14), we deduce that equation (15) is equivalent to the following equation:

$$\left[ \frac{d^{2}}{dx^{2}} \cdot \frac{d^{2}}{d\alpha_{i}^{2}} - \frac{d^{2}}{dx\,d\alpha_{i}}\left( \frac{d^{2}}{dx\,d\alpha_{i}} \right)^{T} \right] \circ \left[ U(\alpha_{i}, x) \right] = A \times g''(x) + \left( B_{1} + B_{2} + B_{3} + B_{4} + B_{5} \right) \times g'(x)g'(x)^{T}, \tag{17}$$

where

$$A = J^{2}K_{x}(\alpha_{i}J)^{2}\,\frac{g(x) + g(x)^{2}}{(\alpha_{i}J + g(x))^{2}}\left[ 1 - \alpha_{i}\left( \frac{\alpha_{i}J + g(x)}{J + g(x)} \right)^{-g(x)-1} \right]\left[ \frac{\alpha_{i}J}{\alpha_{i}J + g(x)} - \log\!\left( \frac{\alpha_{i}J}{g(x)} + 1 \right) \right],$$
$$B_{1} = J^{2}K_{x}(\alpha_{i}J)^{2}\,\frac{g(x) + g(x)^{2}}{(\alpha_{i}J + g(x))^{2}}\left[ 1 - \alpha_{i}\left( \frac{\alpha_{i}J + g(x)}{J + g(x)} \right)^{-g(x)-1} \right]\left[ \left( \frac{\alpha_{i}J}{\alpha_{i}J + g(x)} - \log\!\left( \frac{\alpha_{i}J}{g(x)} + 1 \right) \right)^{2} + \frac{\alpha_{i}J}{\alpha_{i}J + g(x)}\left( \frac{1}{g(x)} - \frac{1}{\alpha_{i}J + g(x)} \right) \right],$$
$$B_{2} = J K_{x}(\alpha_{i}J)^{2}\left[ \frac{g(x)}{\alpha_{i}J + g(x)}\log\!\left( \frac{\alpha_{i}J}{g(x)} + 1 \right) - \frac{\alpha_{i}J\,(g(x)+1)}{(\alpha_{i}J + g(x))^{2}} \right]\left[ \frac{J}{J + g(x)} - \log\!\left( \frac{J}{g(x)} + 1 \right) \right] + J K_{x}(\alpha_{i}J)^{2}\left[ \frac{g(x)}{J + g(x)}\log\!\left( \frac{J}{g(x)} + 1 \right) - \frac{J\,(g(x)+1)}{(J + g(x))^{2}} \right]\left[ \frac{\alpha_{i}J}{\alpha_{i}J + g(x)} - \log\!\left( \frac{\alpha_{i}J}{g(x)} + 1 \right) \right],$$
$$B_{3} = -2\alpha_{i}J^{2}K_{x}(\alpha_{i}J)^{2}\,\frac{(g(x)+1)(J + g(x))^{2}}{(\alpha_{i}J + g(x))^{3}}\left[ \frac{J}{J + g(x)} - \log\!\left( \frac{J}{g(x)} + 1 \right) \right]\left[ \frac{g(x)}{J + g(x)}\log\!\left( \frac{J}{g(x)} + 1 \right) - \frac{J\,(g(x)+1)}{(J + g(x))^{2}} \right],$$
$$B_{4} = K_{x}(\alpha_{i}J)^{2}\,\frac{J + g(x)}{(\alpha_{i}J + g(x))^{2}}\left[ \frac{J}{J + g(x)} - \log\!\left( \frac{J}{g(x)} + 1 \right) \right]^{2}\left[ \frac{\alpha_{i}J\,(g(x)+1)(J + g(x))}{\alpha_{i}J + g(x)} - \frac{\alpha_{i}J^{2}\,(g(x)+1)^{2}}{\alpha_{i}J + g(x)} - J - g(x) \right],$$
$$B_{5} = -J^{2}K_{x}(\alpha_{i}J)^{2}\left[ \frac{g(x)}{\alpha_{i}J + g(x)}\log\!\left( \frac{\alpha_{i}J}{g(x)} + 1 \right) - \frac{\alpha_{i}J\,(g(x)+1)}{(\alpha_{i}J + g(x))^{2}} \right]^{2}. \tag{18}$$

When $\theta \approx 1$, we deduce that $J \approx 0$. Using a Taylor development, we can show the following equivalences:

$$A \le 0, \quad \forall J \ge 0,$$
$$B_{1} \approx J^{4} \times \frac{\alpha_{i}^{2}(1 - \alpha_{i})\,(g(x)+1)}{g(x)^{4}}, \qquad B_{2} \approx J^{3} \times \frac{\alpha_{i} + \alpha_{i}^{2}}{2\,g(x)^{4}}, \qquad B_{3} \approx -J^{5} \times \frac{\alpha_{i}\,(g(x)+1)}{g(x)^{5}},$$
$$B_{4} \approx -J^{4} \times \frac{1}{4\,g(x)^{4}}, \qquad B_{5} \approx -J^{4} \times \frac{\alpha_{i}^{2}}{g(x)^{4}}.$$

Remark. Here we write $A \approx B$ if $A/B \to 1$ as $J \to 0$.

Using the assumptions $g''(x) \le 0$, $\alpha_{i} \ge \epsilon$ and $g(x) \ge \delta$, we deduce that there exists $\omega(\epsilon, \delta)$, depending on $(\epsilon, \delta)$, such that for all $\theta \ge \omega(\epsilon, \delta)$ we have

$$\left[ \frac{d^{2}}{dx^{2}} \cdot \frac{d^{2}}{d\alpha_{i}^{2}} - \frac{d^{2}}{dx\,d\alpha_{i}}\left( \frac{d^{2}}{dx\,d\alpha_{i}} \right)^{T} \right] \circ \left[ U(\alpha_{i}, x) \right] \succeq 0.$$

We deduce that $U(\alpha_{i}, x)$ is a convex function with respect to $(\alpha_{i}, x)$.

3. Approximation methods and numerical results

3.1. Individual chance constraints case

In the setting of Theorem 2.2 with $K = 1$ and dimension $n$, we choose $D = 200 \times n$, $t^{+} = 1$, $\mu = \gamma = (-1, -1, \dots, -1)$ and $\Sigma = \mathrm{Id}_{n}$ (the identity matrix).
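As a hedged, minimal baseline (not the sequential linearization or the gradient method of the paper), the instance above can be handled directly with a generic NLP solver on a Monte Carlo approximation of (2). SciPy's SLSQP is assumed, and the objective vector $c$, the set $Q$ and the law of $W$ below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, eps = 10, 0.05
mu = gamma = -np.ones(n)
Sigma = np.eye(n)
D = 200.0 * n
c = np.ones(n)                          # assumed objective vector
w = rng.uniform(0.5, 1.0, 20_000)       # W supported on [t_min, t_max] with t^+ = 1

def prob_margin(x):
    s = np.sqrt(x @ Sigma @ x) + 1e-12
    f = (-(x @ gamma) / s) * np.sqrt(w) + (D - x @ mu) / (np.sqrt(w) * s)
    return norm.cdf(f).mean() - (1.0 - eps)      # >= 0 iff constraint (2) holds

res = minimize(lambda x: c @ x, x0=np.full(n, 10.0), method="SLSQP",
               bounds=[(1.0, None)] * n,          # a closed convex Q with 0 not in Q (assumed)
               constraints=[{"type": "ineq", "fun": prob_margin}])
print(res.x, c @ res.x)
```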

Detection of signif- icant motifs [7] may be based on two different approaches: either by comparing the observed network with appropriately randomized networks (this requires