HAL Id: hal-03092800
https://hal.archives-ouvertes.fr/hal-03092800
Preprint submitted on 2 Jan 2021
Convexity conditions for normal mean-variance mixture distribution in joint probabilistic constraints

Hoang Nam NGUYEN^a, Abdel LISSER^a

^a Laboratory of Signals and Systems, CentraleSupélec
Abstract
In this paper, we study linear programs with probabilistic constraints. We suppose that the rows of the constraint matrix follow a normal mean-variance mixture distribution and that the dependence between the rows is represented by an Archimedean copula. Under additional conditions, we prove the convexity of the feasible set. We then propose a sequential approximation by linearization, which provides a lower bound, and a gradient descent method, which provides an upper bound, together with numerical results.
Keywords:
Probabilistic constraints, Archimedean copulas, Normal mean-variance mixture distributions, Convex optimization
1. Introduction
We study the following linear program with joint probabilistic constraints:

min c^T x
subject to P{ V x ≤ D } ≥ 1 − ǫ
x ∈ Q,   (1)

where Q is a closed convex subset of R^n, c ∈ R^n, D := (D_1, .., D_K) ∈ R^K is a deterministic vector, V := [v_1, .., v_K]^T is a random matrix of size K × n whose rows v_k are random vectors in R^n, ∀ k = 1, .., K, and ǫ ∈ [0, 1].
1.1. Survey of literature
Probabilistic constraint optimisation has been widely studied for a long time. Prékopa studied concavity and quasi-concavity properties of probability distribution functions in his 1970 article [16]. Sen introduced a relaxation method for probabilistic constraint programming with discrete random variables in [21]. Lobo studied applications of second-order cone programming in [12], which gave a new approach for solving probabilistic constraint problems. Henrion gave a general structural property of linear probabilistic constraints in [10]. In 2014, Cheng used a second-order cone programming approach to solve a joint probabilistic constraint problem in [4], under the assumptions that the constraint rows are elliptically distributed and that their dependence follows an Archimedean copula.
In this paper, we study the same probabilistic constraint problem as in [4].
We suppose that the distribution of the row vectors is normal mean-variance
mixture distributed and the dependence of the rows is an Archimedean cop-
ula. By assuming some conditions, we prove the convexity of the feasible set
of solutions. We propose two approximation methods, which provide a lower bound and an upper bound respectively, and we present numerical results.
1.2. Why the normal mean-variance mixture distribution?
Definition 1.1. A random variable X in R^n follows a normal mean-variance mixture distribution if

X = µ + γW + √W A Z,

where (1) Z is an n-dimensional standard normal distribution N_n(0, I_n);
(2) W is a positive random variable independent of Z;
(3) A ∈ R^{n×k} is a matrix such that AA^T = Σ, where Σ ∈ R^{n×n} is a positive semidefinite matrix;
(4) µ and γ are n-real vectors.
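As a quick illustration of Definition 1.1, the following Python sketch draws samples of X = µ + γW + √W AZ for an arbitrary choice of mixing variable W (here exponential; all numerical values are illustrative, not taken from the paper) and checks the mixture identity E[X] = µ + γ E[W]:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nmvm(mu, gamma, A, w_sampler, size):
    """Draw samples of X = mu + gamma*W + sqrt(W) * A Z (Definition 1.1)."""
    W = w_sampler(size)                                  # positive mixing variable
    Z = rng.standard_normal((size, A.shape[1]))          # standard normal factor
    return mu + np.outer(W, gamma) + np.sqrt(W)[:, None] * (Z @ A.T)

# illustrative parameters
mu = np.array([1.0, -1.0])
gamma = np.array([0.5, 0.2])
A = np.array([[1.0, 0.0], [0.3, 0.8]])                   # Sigma = A @ A.T
X = sample_nmvm(mu, gamma, A, lambda s: rng.exponential(1.0, s), 200_000)

# E[X] = mu + gamma * E[W]; here E[W] = 1, so the mean is close to [1.5, -0.8]
print(np.round(X.mean(axis=0), 1))
```

Since X | W ~ N(µ + γW, WΣ), the covariance of X is E[W]Σ + Var(W)γγ^T: the mixing variable W injects both heavier tails and, through γ, skewness.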
The relation between normal mean-variance mixture distributions and elliptical distributions is given by the following proposition:

Proposition 1 ([13], theorem 3.25). Denote by Ψ_∞ the set of characteristic generators which generate a spherical distribution in dimension n for all n ≥ 1. Then Y follows S_n(ψ) with ψ ∈ Ψ_∞ if and only if Y = √W Z, where Z is an n-dimensional standard normal distribution N_n(0, I_n) independent of W ≥ 0.

Based on this proposition, we deduce that an elliptical distribution U can be represented in the form U = µ + √W A Z, with µ ∈ R^n, A ∈ R^{n×k} and Z an n-dimensional standard normal distribution N_n(0, I_n), if and only if U = µ + AY, where Y follows S_n(ψ) with ψ ∈ Ψ_∞.
The family of normal mean-variance mixture distributions is a large subset of the family of elliptical distributions. Some elliptical distributions cannot be represented as normal mean-variance mixtures; nevertheless, the family covers a substantial part of the elliptical world and plays an important role in it.
Next, we define an important subset of the family of normal mean-variance mixture distributions: the family of hyperbolic distributions.

Definition 1.2. A random variable X follows a hyperbolic distribution if it is a normal mean-variance mixture where the random variable W of definition (1.1) follows a generalized inverse Gaussian distribution, whose density with respect to the Lebesgue measure is

g(w) = C w^{λ−1} exp( −(χ w^{−1} + ψ w)/2 ), ∀ w ∈ (0, ∞),

where C is a normalizing constant and the parameters vary in the domain

χ > 0, ψ ≥ 0 if λ < 0;
χ > 0, ψ > 0 if λ = 0;
χ ≥ 0, ψ > 0 if λ > 0.
The family of hyperbolic distributions generalizes many elliptical distributions. For example, the multivariate t-distribution with parameters (Σ, µ, ν) is the particular case λ = −ν/2, χ = ν, ψ = 0, µ = µ, Σ = Σ, γ = 0. We summarize some important elliptical distributions (in dimension p) in the following table:

Distribution   | Density                        | µ | Σ | γ | λ       | χ | ψ
Normal         | C·exp(−‖x‖²/2)                 | µ | Σ | 0 | 0       | 0 | 0
t-distribution | C·(1 + ‖x‖²/ν)^{−(p+ν)/2}      | µ | Σ | 0 | −ν/2    | ν | 0
Cauchy         | C·(1 + ‖x‖²)^{−(p+1)/2}        | µ | Σ | 0 | −1/2    | 1 | 0
Laplace        | —                              | µ | Σ | 0 | 1       | 0 | 2
Pearson VII    | C·(1 + ‖x‖²/m)^{−N}            | µ | Σ | 0 | p/2 − N | m | 0
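The t-distribution row of the table can be checked numerically: mixing a standard normal with an inverse gamma variable W (the generalized inverse Gaussian case λ = −ν/2, χ = ν, ψ = 0) should reproduce a Student t with ν degrees of freedom. A small sketch, with illustrative values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nu = 6.0
size = 100_000

# W ~ inverse gamma with shape nu/2 and scale nu/2
# (the GIG density of Definition 1.2 with lambda = -nu/2, chi = nu, psi = 0)
W = (nu / 2.0) / rng.gamma(nu / 2.0, 1.0, size)
X = np.sqrt(W) * rng.standard_normal(size)   # mu = 0, gamma = 0, Sigma = 1

# the mixture should match the Student t with nu degrees of freedom
ks = stats.kstest(X, stats.t(df=nu).cdf)
print(ks.statistic < 0.01)
```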
A disadvantage of elliptical distributions is that they are symmetric, so they should not be used for modelling in some situations. For example, applications of hyperbolic distributions in financial modelling can be found in [7], [1], [20] and [19].
2. Normal mean-variance mixture constraints in linear programming

In this section, we study the linear program (1). We suppose that the v_i are random vectors, and we look for sufficient conditions (necessary if possible) such that P{ V x ≤ D } is a concave function with respect to x. A convex optimization problem can be defined formally as follows:

min f(x)
subject to g_k(x) ≤ 0, k = 0, .., m − 1
Gx ⪯ h
Ax = b
x ∈ Q,

where Q is a closed convex subset of R^n; f(x) and g_i(x), i = 0, .., m − 1, are convex functions; h and b are n-real vectors; G and A are deterministic matrices.
2.1. Preliminaries
Proposition 2 ([11], lemma 3.1). Let F : R → [0, 1] be a distribution function with an (r + 1)-decreasing density for some r > 0 and threshold t*(r + 1) > 0. Then the function z ↦ F(z^{−1/r}) is concave on (0, t*(r + 1)^{−r}). Moreover, F(t) < 1, ∀ t ∈ R.
Definition 2.1 ([4]). A real function f : R → R is K-monotone on an open interval I ⊆ R, with K ≥ 2, if it is differentiable up to order (K − 2), its derivatives satisfy

(−1)^k d^k f(t)/dt^k ≥ 0, ∀ 0 ≤ k ≤ K − 2 and ∀ t ∈ I,

and the function (−1)^{K−2} d^{K−2} f(t)/dt^{K−2} is non-increasing and convex on I.
Proposition 3 ([14]). Let ψ : [0, 1] → [0, +∞) be a strictly decreasing function such that ψ(1) = 0. Then ψ is the generator of an Archimedean copula in dimension K if and only if ψ^{−1} is K-monotone on (0, ψ(0)).
Definition 2.2. Let f : Q × R → R be a real function, where Q is a subset of R^n. We say that f is differentiable at x ∈ Q if there exists a function g : Q → R^{R^n} such that for all θ ∈ R there exists a neighbourhood N(θ) of θ on which, denoting by f_{N(θ)} the restriction of f to N(θ),

lim_{ǫ→0, ǫ∈R^n} ‖ f_{N(θ)}(x + ǫ, ·) − f_{N(θ)}(x, ·) − ⟨ǫ, g(x)⟩ ‖_max / ‖ǫ‖ = 0,

where R^{R^n} denotes the set of functions from R to R^n and ‖·‖_max is the maximum norm.

Moreover, we say that f is differentiable up to second order at x if there exists a function h : Q → R^{R^{n×n}} such that

lim_{ǫ→0, ǫ∈R^n} ‖ f_{N(θ)}(x + ǫ, ·) − f_{N(θ)}(x, ·) − ⟨ǫ, g(x)⟩ − (1/2) ǫ^T h(x) ǫ ‖_max / ‖ǫ‖² = 0,

where R^{R^{n×n}} denotes the set of functions from R to R^{n×n}.

Denote by df/dx := g the first-order derivative of f with respect to x and by d²f/dx² := h the second-order derivative of f with respect to x.
In the program (1), we suppose that v_k follows a normal mean-variance mixture distribution, for 1 ≤ k ≤ K. We will show that, under additional conditions, the feasible set of (1) is convex.
2.2. Individual chance constraints
Suppose that K = 1 and that V follows a normal mean-variance mixture distribution with parameters (µ, Σ, γ) (cf. definition (1.1)). Suppose that 0 ∉ Q.

Lemma 2.1 (Proposition 5, [5]). The standard normal distribution has a 3-decreasing density with threshold t*(3) = √3.
The constraint in (1) can be rewritten as follows:

P{ v_1^T x ≤ D } ≥ 1 − ǫ
⇔ P_{W,Z}{ (µ^T + W γ^T + √W Z^T A^T) x ≤ D } ≥ 1 − ǫ
⇔ P_{W,Z}{ (W γ^T + √W Z^T A^T) x ≤ D − x^T µ } ≥ 1 − ǫ
⇔ P_{W,X}{ (W x^T γ)/‖x^T Σ^{1/2}‖_2 + √W X ≤ (D − x^T µ)/‖x^T Σ^{1/2}‖_2 } ≥ 1 − ǫ,

by letting X := Z^T A^T x / ‖x^T Σ^{1/2}‖_2. Hence,

⇔ E[ 1{ X ≤ (−x^T γ)/‖x^T Σ^{1/2}‖_2 · √W + (D − x^T µ)/(√W ‖x^T Σ^{1/2}‖_2) } ] ≥ 1 − ǫ
⇔ E_W[ E[ 1{ X ≤ (−x^T γ)/‖x^T Σ^{1/2}‖_2 · √W + (D − x^T µ)/(√W ‖x^T Σ^{1/2}‖_2) } | W ] ] ≥ 1 − ǫ
⇔ E_W[ Φ_X( (−x^T γ)/‖x^T Σ^{1/2}‖_2 · √W + (D − x^T µ)/(√W ‖x^T Σ^{1/2}‖_2) ) ] ≥ 1 − ǫ,   (2)

where ǫ ∈ (0, 1), X follows a standard normal distribution N(0, 1) with distribution function Φ_X, and W is a positive random variable independent of X.
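The chain of equivalences leading to (2) can be verified by Monte Carlo: the direct estimate of P{v^T x ≤ D} and the estimate of E_W[Φ_X(·)] should agree. The data below are illustrative, not from the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

n = 3
mu = np.array([0.2, -0.1, 0.4])
gamma = np.array([-0.3, -0.2, -0.1])
A = 0.5 * np.eye(n)
Sigma = A @ A.T
D = 1.5
x = np.ones(n)

size = 400_000
W = rng.uniform(0.5, 1.5, size)                          # bounded positive mixing variable
Z = rng.standard_normal((size, n))
v = mu + np.outer(W, gamma) + np.sqrt(W)[:, None] * (Z @ A.T)
p_direct = np.mean(v @ x <= D)

# reformulation (2): P{v^T x <= D} = E_W[Phi(f(x, W))]
s = np.sqrt(x @ Sigma @ x)                               # ||x^T Sigma^{1/2}||_2
f = (-(x @ gamma) / s) * np.sqrt(W) + (D - x @ mu) / (np.sqrt(W) * s)
p_reform = norm.cdf(f).mean()
print(abs(p_direct - p_reform) < 0.01)
```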
We have the following theorem:

Theorem 2.2. Consider the linear program (1). Let M := { x ∈ Q | P{ V x ≤ D } ≥ 1 − ǫ } be the feasible set of (1). Suppose that K = 1 and that V follows a normal mean-variance mixture distribution with parameters (µ, Σ, γ). Suppose that:

(1) Σ is a positive definite matrix with 0 < λ_min ≤ λ_max, where λ_max is the largest eigenvalue of Σ and λ_min is the smallest eigenvalue of Σ.
(2) W is a random variable with values in [t_min, t_max], where 0 ≤ t_min ≤ t_max ≤ t^+.
(3) For all x ∈ Q, we have c > 0, together with

1/t^+ ≥ ( √(b² − 4ac) − b ) / (2c)   or   b² ≤ 4ac,

where

a = (−x^T γ)² λ_min + α_{x,γ,Σ},
b = 2(−x^T γ)(D − x^T µ) λ_min − 6 (x^T Σ x) ‖γ‖ ‖µ‖ + β_{x,γ,µ,Σ},
c = (D − x^T µ)² λ_min + θ_{x,µ,Σ},

α_{x,γ,Σ} := u^T z − √( (u^T z)² + Σ_{1≤i<j≤n} (u_i z_j − u_j z_i)² ),
β_{x,γ,µ,Σ} := v^T z − √( (v^T z)² + Σ_{1≤i<j≤n} (v_i z_j − v_j z_i)² ),
θ_{x,µ,Σ} := q^T z − √( (q^T z)² + Σ_{1≤i<j≤n} (q_i z_j − q_j z_i)² ),

u = 4(−x^T γ) γ,  v = 4(−x^T γ) γ + 4(D − x^T µ) µ,  q = 4(D − x^T µ) µ,  z = Σ x.

(4) For all x ∈ Q, we have:

2 √( (−x^T γ)(D − x^T µ) ) / √(x^T Σ x) > √3,   (−x^T γ) > 0,   D − x^T µ > 0.

Then M is a convex set.
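Condition (3) of the theorem is checkable numerically. The sketch below computes a, b, c through the closed-form smallest eigenvalues α, β, θ and tests the sufficient condition; all inputs are toy values chosen for illustration:

```python
import numpy as np

def smallest_pair_eigenvalue(u, z):
    """Smallest eigenvalue of z u^T + u z^T:
    u^T z - sqrt((u^T z)^2 + sum_{i<j} (u_i z_j - u_j z_i)^2)."""
    cross = np.outer(u, z) - np.outer(z, u)
    return u @ z - np.sqrt((u @ z) ** 2 + 0.5 * np.sum(cross ** 2))

def condition_3(x, gamma, mu, Sigma, D, t_plus):
    lam_min = np.linalg.eigvalsh(Sigma)[0]
    z = Sigma @ x
    u = 4 * (-(x @ gamma)) * gamma
    q = 4 * (D - x @ mu) * mu
    v = u + q
    a = (x @ gamma) ** 2 * lam_min + smallest_pair_eigenvalue(u, z)
    b = (2 * (-(x @ gamma)) * (D - x @ mu) * lam_min
         - 6 * (x @ Sigma @ x) * np.linalg.norm(gamma) * np.linalg.norm(mu)
         + smallest_pair_eigenvalue(v, z))
    c = (D - x @ mu) ** 2 * lam_min + smallest_pair_eigenvalue(q, z)
    if c <= 0:
        return False
    disc = b * b - 4 * a * c
    return bool(disc <= 0 or 1.0 / t_plus >= (np.sqrt(disc) - b) / (2 * c))

# toy example: the condition holds for a small enough t^+ and fails otherwise
x = np.array([1.0, 1.0])
gamma = np.array([-0.5, -0.5])
mu = np.zeros(2)
Sigma = np.eye(2)
print(condition_3(x, gamma, mu, Sigma, 2.0, 0.4),
      condition_3(x, gamma, mu, Sigma, 2.0, 1.0))
```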
Proof. The proof follows this outline:

(i) We show that f(x, W) := (−x^T γ)/‖x^T Σ^{1/2}‖_2 · √W + (D − x^T µ)/(√W ‖x^T Σ^{1/2}‖_2) is a (−2)-concave function of x on Q, for all W ≥ 0.
(ii) Using (i), we show that Φ_X(f(x, W)) is a concave function of x, for all W ≥ 0.
(iii) Using (ii), we show that E_W[Φ_X(f(x, W))] is a concave function of x on Q.
(iv) Using (iii), we deduce that the feasible set of (1) is convex.
Proof of (i). The (−2)-concavity of f(x, W) is equivalent to the convexity of the function

h(x, W) := x^T Σ x / ( −x^T γ √W + (D − x^T µ)/√W )² = x^T Σ x / ( W (x^T γ)² + (1/W)(x^T µ − D)² + 2 x^T γ (x^T µ − D) ).

Let M := W (x^T γ)² + (1/W)(x^T µ − D)² + 2 x^T γ (x^T µ − D). Denote by H_x h(x, W) the gradient of h with respect to x and by H_x² h(x, W) the Hessian matrix of h with respect to x. A direct calculation gives

(∗) H_x h(x, W) = 2 M^{−1} Σ x − M^{−2} (x^T Σ x) · ( 2W x^T γ · γ + (2/W)(x^T µ − D) · µ + 2(x^T µ − D) · γ + 2 x^T γ · µ ),

(∗) H_x² h(x, W) = 2 M^{−1} Σ − 8 M^{−2} ( W x^T γ + x^T µ − D )( Σ x γ^T + γ x^T Σ ) − 8 M^{−2} ( (1/W)(x^T µ − D) + x^T γ )( Σ x µ^T + µ x^T Σ ) + 6 M^{−2} (x^T Σ x) ( W γγ^T + (1/W) µµ^T + γµ^T + µγ^T ),

so that

H_x² h(x, W) / (2 M^{−2})
= W [ (−x^T γ)² Σ + 4(−x^T γ)(Σ x γ^T + γ x^T Σ) + 3 (x^T Σ x) γγ^T ]
+ [ 2(−x^T γ)(D − x^T µ) Σ + 4(−x^T γ)(Σ x γ^T + γ x^T Σ) + 4(D − x^T µ)(Σ x µ^T + µ x^T Σ) + 3 (x^T Σ x)(γµ^T + µγ^T) ]
+ (1/W) [ (D − x^T µ)² Σ + 4(D − x^T µ)(Σ x µ^T + µ x^T Σ) + 3 (x^T Σ x) µµ^T ]
= W A + B + (1/W) C,

where

A = (−x^T γ)² Σ + 4(−x^T γ)(Σ x γ^T + γ x^T Σ) + 3 (x^T Σ x) γγ^T,
B = 2(−x^T γ)(D − x^T µ) Σ + 4(−x^T γ)(Σ x γ^T + γ x^T Σ) + 4(D − x^T µ)(Σ x µ^T + µ x^T Σ) + 3 (x^T Σ x)(γµ^T + µγ^T),
C = (D − x^T µ)² Σ + 4(D − x^T µ)(Σ x µ^T + µ x^T Σ) + 3 (x^T Σ x) µµ^T.
The function h is convex in x for all W ∈ [0, t^+] if and only if the Hessian matrix H_x² h(x, W) is positive semidefinite for all (x, W) ∈ Q × [0, t^+], which is equivalent to the positive semidefiniteness of the matrix W A + B + (1/W) C for all (x, W) ∈ Q × [0, t^+]. We have:
(1) A = (−x^T γ)² Σ + 4(−x^T γ)(Σ x γ^T + γ x^T Σ) + 3 (x^T Σ x) γγ^T.

Given any two symmetric matrices M, N, write M ⪰ N when M − N is positive semidefinite. With this notation:

* Σ ⪰ λ_min I_n.
* 4(−x^T γ)(Σ x γ^T + γ x^T Σ) is a symmetric matrix, hence diagonalizable. One can show that it has (n − 2) eigenvalues equal to 0 and the two eigenvalues

u^T z ± √( (u^T z)² + Σ_{1≤i<j≤n} (u_i z_j − u_j z_i)² ),

where u = 4(−x^T γ) γ and z = Σ x.
* γγ^T ⪰ 0.

Let α_{x,γ,Σ} := u^T z − √( (u^T z)² + Σ_{1≤i<j≤n} (u_i z_j − u_j z_i)² ). We deduce the inequality

A ⪰ ( (−x^T γ)² λ_min + α_{x,γ,Σ} ) I_n.   (3)

(2) B = 2(−x^T γ)(D − x^T µ) Σ + 4(−x^T γ)(Σ x γ^T + γ x^T Σ) + 4(D − x^T µ)(Σ x µ^T + µ x^T Σ) + 3 (x^T Σ x)(γµ^T + µγ^T). We have:

* Σ ⪰ λ_min I_n.
* γµ^T + µγ^T ⪰ −2 ‖γ‖ ‖µ‖ I_n.
* 4(−x^T γ)(Σ x γ^T + γ x^T Σ) + 4(D − x^T µ)(Σ x µ^T + µ x^T Σ) is a symmetric matrix, hence diagonalizable. One can show that it has (n − 2) eigenvalues equal to 0 and the two eigenvalues

v^T z ± √( (v^T z)² + Σ_{1≤i<j≤n} (v_i z_j − v_j z_i)² ),

where z = Σ x and v = 4(−x^T γ) γ + 4(D − x^T µ) µ.

Let β_{x,γ,µ,Σ} := v^T z − √( (v^T z)² + Σ_{1≤i<j≤n} (v_i z_j − v_j z_i)² ). We deduce the inequality

B ⪰ ( 2(−x^T γ)(D − x^T µ) λ_min − 6 (x^T Σ x) ‖γ‖ ‖µ‖ + β_{x,γ,µ,Σ} ) I_n.   (4)

(3) C = (D − x^T µ)² Σ + 4(D − x^T µ)(Σ x µ^T + µ x^T Σ) + 3 (x^T Σ x) µµ^T. We have:

* Σ ⪰ λ_min I_n.
* 4(D − x^T µ)(Σ x µ^T + µ x^T Σ) is a symmetric matrix, hence diagonalizable. One can show that it has (n − 2) eigenvalues equal to 0 and the two eigenvalues

q^T z ± √( (q^T z)² + Σ_{1≤i<j≤n} (q_i z_j − q_j z_i)² ),

where q := 4(D − x^T µ) µ and z = Σ x.
* µµ^T ⪰ 0.

Let θ_{x,µ,Σ} := q^T z − √( (q^T z)² + Σ_{1≤i<j≤n} (q_i z_j − q_j z_i)² ). We deduce the inequality

C ⪰ ( (D − x^T µ)² λ_min + θ_{x,µ,Σ} ) I_n.   (5)
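The spectral claim used three times above — that z u^T + u z^T has n − 2 zero eigenvalues plus the pair u^T z ± √((u^T z)² + Σ_{i<j}(u_i z_j − u_j z_i)²) — is easy to verify with numpy on random vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
u = rng.standard_normal(n)
z = rng.standard_normal(n)

M = np.outer(z, u) + np.outer(u, z)                      # symmetric, rank <= 2
eig = np.sort(np.linalg.eigvalsh(M))

cross = sum((u[i] * z[j] - u[j] * z[i]) ** 2
            for i in range(n) for j in range(i + 1, n))
lo = u @ z - np.sqrt((u @ z) ** 2 + cross)               # smallest eigenvalue
hi = u @ z + np.sqrt((u @ z) ** 2 + cross)               # largest eigenvalue

print(np.allclose(eig[0], lo), np.allclose(eig[-1], hi),
      np.allclose(eig[1:-1], 0.0))
```

By the Lagrange identity, (u^T z)² + Σ_{i<j}(u_i z_j − u_j z_i)² = ‖u‖²‖z‖², so the pair is also u^T z ± ‖u‖‖z‖.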
Using (3), (4) and (5), we deduce the inequality

W A + B + (1/W) C ⪰ W ( (−x^T γ)² λ_min + α_{x,γ,Σ} ) I_n + ( 2(−x^T γ)(D − x^T µ) λ_min − 6 (x^T Σ x) ‖γ‖ ‖µ‖ + β_{x,γ,µ,Σ} ) I_n + (1/W) ( (D − x^T µ)² λ_min + θ_{x,µ,Σ} ) I_n,   (6)

where α_{x,γ,Σ}, β_{x,γ,µ,Σ}, θ_{x,µ,Σ} and u, v, q, z are defined as in the statement of the theorem.
Let

a = (−x^T γ)² λ_min + α_{x,γ,Σ},
b = 2(−x^T γ)(D − x^T µ) λ_min − 6 (x^T Σ x) ‖γ‖ ‖µ‖ + β_{x,γ,µ,Σ},
c = (D − x^T µ)² λ_min + θ_{x,µ,Σ}.

The positive semidefiniteness of the Hessian matrix H_x² h(x, W) then follows from the positivity of W a + b + (1/W) c for all W ∈ [0, t^+]. A sufficient condition is

c > 0,   together with   1/t^+ ≥ ( √(b² − 4ac) − b ) / (2c)   or   b² ≤ 4ac,

which is exactly assumption (3) of the theorem. We deduce that f(x, W) is a (−2)-concave function of x on Q, for all W ∈ [0, t^+].
Proof of (ii). Let x_1, x_2 ∈ Q, W ∈ [0, t^+] and α ∈ [0, 1]. We have

f(αx_1 + (1 − α)x_2, W) ≥ ( α f^{−2}(x_1, W) + (1 − α) f^{−2}(x_2, W) )^{−1/2}

by the (−2)-concavity of f proved in (i), hence

Φ_X( f(αx_1 + (1 − α)x_2, W) ) ≥ Φ_X( ( α f^{−2}(x_1, W) + (1 − α) f^{−2}(x_2, W) )^{−1/2} ),   (7)

since Φ_X is strictly increasing. By lemma (2.1) and proposition 2, the function t ↦ Φ_X(t^{−1/2}) is concave on (0, (t^∗)^{−2}), where t^∗ = √3. We now show that f^{−2}(x_1, W) lies in (0, (t^∗)^{−2}). Indeed,

|f(x_1, W)| = | (−x_1^T γ)/‖x_1^T Σ^{1/2}‖_2 · √W + (D − x_1^T µ)/(√W ‖x_1^T Σ^{1/2}‖_2) | ≥ 2 √( (−x_1^T γ)(D − x_1^T µ) ) / √(x_1^T Σ x_1)

by the Cauchy inequality, and the right-hand side is > t^∗ by assumption (4) of the theorem; hence 0 < f(x_1, W)^{−2} < (t^∗)^{−2}.

By the concavity of t ↦ Φ_X(t^{−1/2}) on (0, (t^∗)^{−2}),

Φ_X( ( α f^{−2}(x_1, W) + (1 − α) f^{−2}(x_2, W) )^{−1/2} ) ≥ α Φ_X(f(x_1, W)) + (1 − α) Φ_X(f(x_2, W)).

Combining with (7), we deduce that Φ_X(f(x, W)) is a concave function of x on Q, for all W ∈ [0, t^+].
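The key analytic ingredient of step (ii) — concavity of z ↦ Φ(z^{−1/2}) below 3^{−1}, i.e. Proposition 2 applied with r = 2 and t*(3) = √3 — can be sanity-checked by second differences:

```python
import numpy as np
from scipy.stats import norm

# z -> Phi(z^{-1/2}) should be concave on (0, 1/3), since the standard
# normal density is 3-decreasing with threshold t*(3) = sqrt(3)
z = np.linspace(1e-3, 1 / 3 - 1e-3, 2000)
g = norm.cdf(z ** -0.5)

# second differences of a concave function are nonpositive
second_diff = g[:-2] - 2 * g[1:-1] + g[2:]
print(np.all(second_diff <= 1e-12))
```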
Proof of (iii). Let g(x, W) := Φ_X(f(x, W)). By (ii), g(x, W) is a concave function of x on Q, for all W ∈ [0, t^+]. Since g ↦ E_W[g] is a linear transformation and concavity is preserved under such transformations, we deduce that E_W[Φ_X(f(x, W))] is a concave function of x on Q.
Proof of (iv). Let x, y be two points satisfying constraint (2) and let α ∈ [0, 1]. We have

E_W[Φ_X(f(x, W))] ≥ 1 − ǫ,   E_W[Φ_X(f(y, W))] ≥ 1 − ǫ,

and, by the concavity of E_W[Φ_X(f(·, W))] proved in (iii),

E_W[Φ_X(f(αx + (1 − α)y, W))] ≥ α E_W[Φ_X(f(x, W))] + (1 − α) E_W[Φ_X(f(y, W))] ≥ 1 − ǫ,

so αx + (1 − α)y also satisfies constraint (2). Hence the feasible set of (1) is convex.
2.3. Independent joint chance constraints

Suppose that v_i follows a normal mean-variance mixture with parameters (µ_i, Σ_i, γ_i), for 1 ≤ i ≤ K, and that the vectors v_i are independent. Suppose that 0 ∉ Q. The constraint in (1) can be rewritten as follows:

P{ V x ≤ D } ≥ 1 − ǫ
⇔ ∏_{i=1}^K P{ v_i^T x ≤ D_i } ≥ 1 − ǫ   (by independence)
⇔ ∏_{i=1}^K E_{W_i}[ Φ_{X_i}( (−x^T γ_i)/‖x^T Σ_i^{1/2}‖_2 · √W_i + (D_i − x^T µ_i)/(√W_i ‖x^T Σ_i^{1/2}‖_2) ) ] ≥ 1 − ǫ   (by the transformation leading to (2) in section (2.2))
⇔ Σ_{i=1}^K log E_{W_i}[ Φ_{X_i}( (−x^T γ_i)/‖x^T Σ_i^{1/2}‖_2 · √W_i + (D_i − x^T µ_i)/(√W_i ‖x^T Σ_i^{1/2}‖_2) ) ] ≥ log(1 − ǫ).   (8)
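Under independence, the factorization behind (8) can be checked by Monte Carlo on toy data (illustrative values, not from the paper): the product of the per-row expectations from (2) should match the joint probability:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, K, size = 2, 3, 300_000

mus = [np.array([0.1 * i, -0.1]) for i in range(K)]
gammas = [np.array([-0.2, -0.3])] * K
A = 0.4 * np.eye(n)
Sigma = A @ A.T
D = np.array([0.5, 0.6, 0.7])
x = np.array([1.0, 0.5])
s = np.sqrt(x @ Sigma @ x)                               # ||x^T Sigma^{1/2}||_2

joint = np.ones(size, dtype=bool)
p_rows = []
for i in range(K):
    W = rng.uniform(0.8, 1.2, size)                      # bounded mixing variables
    Z = rng.standard_normal((size, n))
    v = mus[i] + np.outer(W, gammas[i]) + np.sqrt(W)[:, None] * (Z @ A.T)
    joint &= (v @ x <= D[i])
    f = (-(x @ gammas[i]) / s) * np.sqrt(W) + (D[i] - x @ mus[i]) / (np.sqrt(W) * s)
    p_rows.append(norm.cdf(f).mean())

# independence: the joint probability is the product of the row probabilities,
# and taking logs gives the form used in (8)
print(abs(np.prod(p_rows) - joint.mean()) < 0.01)
```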
We have the following theorem:

Theorem 2.3. Consider the linear program (1). Let M := { x ∈ Q | P{ V x ≤ D } ≥ 1 − ǫ } be the feasible set of (1). Suppose that, ∀ i = 1, .., K, v_i follows a normal mean-variance mixture with parameters (µ_i, Σ_i, γ_i) and that the vectors v_i are independent. Suppose that ∀ i = 1, .., K:

(1) Σ_i is a positive definite matrix with 0 < λ_{i,min} ≤ λ_{i,max}, where λ_{i,max} is the largest eigenvalue of Σ_i and λ_{i,min} the smallest.
(2) W_i is a positive random variable with values in [t_{i,min}, t_{i,max}], where 0 ≤ t_{i,min} ≤ t_{i,max} ≤ t_i^+.
(3) For all x ∈ Q and i = 1, .., K, we have c_i > 0, together with

1/t_i^+ ≥ ( √(b_i² − 4 a_i c_i) − b_i ) / (2 c_i)   or   b_i² ≤ 4 a_i c_i,

where

a_i = (−x^T γ_i)² λ_{i,min} + α_{x,γ_i,Σ_i},
b_i = 2(−x^T γ_i)(D_i − x^T µ_i) λ_{i,min} − 6 (x^T Σ_i x) ‖γ_i‖ ‖µ_i‖ + β_{x,γ_i,µ_i,Σ_i},
c_i = (D_i − x^T µ_i)² λ_{i,min} + θ_{x,µ_i,Σ_i},

α_{x,γ_i,Σ_i} := u^T z − √( (u^T z)² + Σ_{1≤i<j≤n} (u_i z_j − u_j z_i)² ),
β_{x,γ_i,µ_i,Σ_i} := v^T z − √( (v^T z)² + Σ_{1≤i<j≤n} (v_i z_j − v_j z_i)² ),
θ_{x,µ_i,Σ_i} := q^T z − √( (q^T z)² + Σ_{1≤i<j≤n} (q_i z_j − q_j z_i)² ),

u = 4(−x^T γ_i) γ_i,  v = 4(−x^T γ_i) γ_i + 4(D_i − x^T µ_i) µ_i,  q = 4(D_i − x^T µ_i) µ_i,  z = Σ_i x.

(4) For all x ∈ Q and i = 1, .., K, we have:

2 √( (−x^T γ_i)(D_i − x^T µ_i) ) / √(x^T Σ_i x) > √3,   (−x^T γ_i) > 0,   D_i − x^T µ_i > 0.

Then M is a convex set.
Proof. From the proof of theorem (2.2), each factor

E_{W_i}[ Φ_{X_i}( (−x^T γ_i)/‖x^T Σ_i^{1/2}‖_2 · √W_i + (D_i − x^T µ_i)/(√W_i ‖x^T Σ_i^{1/2}‖_2) ) ]

is a concave function of x, hence also log-concave. Therefore the sum Σ_{i=1}^K log E_{W_i}[ Φ_{X_i}(·) ] is a concave function of x, and we deduce that the feasible set of (1) is convex.
2.3.1. Dependent joint chance constraints with independent copula

Suppose that v_i follows a normal mean-variance mixture distribution with parameters (µ_i, Σ_i, γ_i), for 1 ≤ i ≤ K. Suppose that 0 ∉ Q. The constraint in (1) can be rewritten as follows:

P{ V x ≤ D } ≥ 1 − ǫ
⇔ P{ ∩_{i=1}^K { v_i^T x ≤ D_i } } ≥ 1 − ǫ
⇔ E_W[ E[ 1{ ∩_{i=1}^K { X_i(x) ≤ (−x^T γ_i)/‖x^T Σ_i^{1/2}‖_2 · √W_i + (D_i − x^T µ_i)/(√W_i ‖x^T Σ_i^{1/2}‖_2) } } | W ] ] ≥ 1 − ǫ,

following the same procedure as for (2),

⇔ E_W[ Φ( g_1(x, W), .., g_K(x, W) ) ] ≥ 1 − ǫ.

Here we suppose that W and X(x) are independent, with

W := (W_1, .., W_K),  X(x) := (X_1(x), .., X_K(x)),  X_i(x) := Z_i^T A_i^T x / ‖x^T Σ_i^{1/2}‖_2,
g_i(x, W) := (−x^T γ_i)/‖x^T Σ_i^{1/2}‖_2 · √W_i + (D_i − x^T µ_i)/(√W_i ‖x^T Σ_i^{1/2}‖_2),   (9)

where Φ is the distribution function of X(x).
Remark. In general, unfortunately, we cannot prove the concavity of the function Φ( g_1(x, W), .., g_K(x, W) ), i.e. of E_W[ Φ( g_1(x, W), .., g_K(x, W) ) ].

Consequently, we suppose additionally that t_{i,min} ≈ t_{i,max} for i = 1, .., K. We then have E_W[ Φ( g_1(x, W), .., g_K(x, W) ) ] ≈ Φ( g_1(x, w_1), .., g_K(x, w_K) ), where w_i is an arbitrary point of [t_{i,min}, t_{i,max}]. The constraint in (1) can then be rewritten as follows:
Φ( g_1(x, w_1), .., g_K(x, w_K) ) ≥ θ,   (10)

where X(x) := (X_1(x), .., X_K(x)), X_i(x) := Z_i^T A_i^T x / ‖x^T Σ_i^{1/2}‖_2, g_i(x, W) := (−x^T γ_i)/‖x^T Σ_i^{1/2}‖_2 · √W_i + (D_i − x^T µ_i)/(√W_i ‖x^T Σ_i^{1/2}‖_2), and Φ is the distribution function of X(x).

Suppose that there exists an Archimedean copula C which does not depend on x such that

Φ( g_1(x, w_1), .., g_K(x, w_K) ) = C[ F_1(g_1(x, w_1)), .., F_K(g_K(x, w_K)) ],

where F_i is the distribution function of X_i(x), which is a standard normal distribution, and C(u) = ψ^{(−1)}( Σ_{i=1}^K ψ(u_i) ), where ψ is a generator of C. We reformulate the constraint (10) as follows:
Φ( g_1(x, w_1), .., g_K(x, w_K) ) ≥ θ
⇔ C[ F_1(g_1(x, w_1)), .., F_K(g_K(x, w_K)) ] ≥ θ
⇔ Σ_{i=1}^K ψ[ F_i(g_i(x, w_i)) ] ≤ ψ(θ)   (by the decreasing property of ψ)
⇔ F_i(g_i(x, w_i)) ≥ ψ^{(−1)}[ α_i ψ(θ) ], ∀ i = 1, .., K, with Σ_{i=1}^K α_i = 1
⇔ g_i(x, w_i)^{−2} ≤ ( F_i^{(−1)}( ψ^{(−1)}[ α_i ψ(θ) ] ) )^{−2}, ∀ i = 1, .., K, with Σ_{i=1}^K α_i = 1.   (11)
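The decomposition step in (11) can be illustrated with a concrete Archimedean generator. Below, a Clayton-type generator ψ(t) = (t^{−a} − 1)/a (an assumed example, not a choice made by the paper) is used to check that meeting the per-coordinate thresholds ψ^{(−1)}[α_i ψ(θ)] forces the copula value above θ:

```python
import numpy as np

# Clayton generator psi(t) = (t^{-a} - 1)/a and its inverse (assumed example)
a = 2.0
psi = lambda t: (t ** -a - 1) / a
psi_inv = lambda s: (1 + a * s) ** (-1 / a)

theta = 0.95
alphas = np.array([0.5, 0.3, 0.2])            # any alphas summing to 1
u_min = psi_inv(alphas * psi(theta))          # thresholds F_i(g_i) must exceed

# if each marginal value meets its threshold, the copula value meets theta
u = u_min + 0.01                              # slightly above the thresholds
C = psi_inv(np.sum(psi(u)))
print(C >= theta)
```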
We have the following theorem:

Theorem 2.4. Consider the linear program (1). Let M := { x ∈ Q | P{ V x ≤ D } ≥ 1 − ǫ } be the feasible set of (1). Suppose that, ∀ i = 1, .., K, v_i follows a normal mean-variance mixture with parameters (µ_i, Σ_i, γ_i). Suppose that W_i is a positive random variable with values in [t_{i,min}, t_{i,max}], with t_{i,min} ≈ t_{i,max}. Let X_i(x) := Z_i^T A_i^T x / ‖x^T Σ_i^{1/2}‖_2, ∀ 1 ≤ i ≤ K and x ∈ Q. Suppose that the multivariate copula of X(x) := (X_1(x), .., X_K(x)) does not depend on x and that W and X(x) are independent. Denote by C the multivariate copula of X(x) and suppose that C is an Archimedean copula with generator ψ. Suppose that θ ≥ Φ_{N(0,1)}(√3).

Suppose moreover that conditions (1), (2), (3) and (4) of theorem (2.3) hold.

Then M is a convex set.
Proof. The proof follows this outline:

(i) We show that g_i(x, w_i) is a (−2)-concave function of x on Q, ∀ w_i ∈ [t_{i,min}, t_{i,max}].
(ii) We show that ( F_i^{(−1)}( ψ^{(−1)}[ α_i ψ(θ) ] ) )^{−2} is a concave function of α_i on [0, 1].
(iii) Using (i) and (ii), we deduce that the feasible set of (1) is convex.

Proof of (i). The proof follows from theorem (2.2).

Proof of (ii). Let H = ψ^{(−1)}[ α_i ψ(θ) ]. A direct calculation gives

d²H/dα_i² = −ψ(θ)² · ψ′′(H) / ψ′(H)³.

Using the properties of a generator of an Archimedean copula (ψ′ < 0 and ψ′′ ≥ 0), we deduce that d²H/dα_i² ≥ 0, i.e. H is a convex function of α_i (*).

We next prove that u(x) := ( F_i^{(−1)}(x) )^{−2} is a decreasing concave function of x when x ≥ F_i(√3) (**). Indeed, letting F_i^{(−1)}(x) = v, we have

u′′(x) = ( 6v^{−4} − 2v^{−2} ) / ( (1/(2π)) exp(−v²) ),

so u′′(x) ≤ 0 if and only if x ≥ F_i(√3).

We then prove that ψ^{(−1)}[ α_i ψ(θ) ] ≥ F_i(√3) (***). Indeed, this follows from 0 ≤ α_i ≤ 1 and θ ≥ F_i(√3), since ψ and ψ^{(−1)} are decreasing.

We need the following lemma:

Lemma 2.5 ([15]). Let M_1, M_2 ⊂ R be convex sets. Suppose that f_1 : M_1 → M_2 is a convex function on M_1 and that f_2 : M_2 → R is a decreasing and concave function on M_2. Then the composition f_2 ∘ f_1 is a concave function on M_1.

By applying (*), (**), (***) and lemma (2.5), we deduce the concavity of ( F_i^{(−1)}( ψ^{(−1)}[ α_i ψ(θ) ] ) )^{−2} as a function of α_i.

Proof of (iii). By (i), each left-hand side g_i(x, w_i)^{−2} of (11) is convex in x; by (ii), each right-hand side is concave in α_i. Hence every constraint in (11) defines a convex set, and the feasible set of (1) is convex.
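Step (ii) — the concave behaviour in α_i of the right-hand side of (11) when θ ≥ Φ(√3) — can be probed numerically with a Clayton-type generator (an assumed example):

```python
import numpy as np
from scipy.stats import norm

# Clayton-type generator (assumed): psi(t) = (t^{-a} - 1)/a
a = 1.5
psi = lambda t: (t ** -a - 1) / a
psi_inv = lambda s: (1 + a * s) ** (-1 / a)

theta = 0.96                                  # >= Phi(sqrt(3)) ~ 0.9584
alpha = np.linspace(0.01, 0.99, 500)

# alpha_i -> (F^{-1}(psi^{-1}[alpha_i psi(theta)]))^{-2}, F standard normal
rhs = norm.ppf(psi_inv(alpha * psi(theta))) ** -2.0

# second differences of a concave function are nonpositive
second_diff = rhs[:-2] - 2 * rhs[1:-1] + rhs[2:]
print(np.all(second_diff <= 1e-12))
```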
2.4. Dependent joint chance constraints with general copula

We rewrite the constraint of (1) from the previous section as follows:

Φ( g_1(x, w_1), .., g_K(x, w_K) ) ≥ θ
⇔ C[ F_1(g_1(x, w_1)), .., F_K(g_K(x, w_K)) ] ≥ θ
⇔ g_i(x, w_i)^{−2} ≤ ( F_i^{(−1)}( ψ^{(−1)}[ α_i ψ(θ) ] ) )^{−2}, ∀ i = 1, .., K, with Σ_{i=1}^K α_i = 1.

In this section, we suppose that the copula C is a function C(x) of x and that, for all x, C(x) is an Archimedean copula with generator ψ_x. The last constraint becomes

g_i(x, w_i)^{−2} ≤ ( F_i^{(−1)}( ψ_x^{(−1)}[ α_i ψ_x(θ) ] ) )^{−2}, ∀ i = 1, .., K, with Σ_{i=1}^K α_i = 1.

The difference from the independent copula case is that the right-hand side F_i^{(−1)}( ψ_x^{(−1)}[ α_i ψ_x(θ) ] )^{−2} now depends on x as well, and no longer only on α_i. Following the same proof as in the previous section, we only need to find a family of generators ψ_x such that ψ_x^{(−1)}[ α_i ψ_x(θ) ] is a convex function with respect to (x, α_i). Unfortunately, we do not have this convexity in general because of the α_i. However, if there exists a lower bound ǫ > 0 for the α_i (i.e. α_i ≥ ǫ), we can prove convexity when θ ≈ 1. We have the following theorem:
Theorem 2.6. Consider the linear program (1). Let M be the feasible set of (1). Suppose that ψ_x(t) = g(x)( t^{−1/g(x)} − 1 ) (a Clayton copula family with g(x) > 0). Suppose that g(x) is continuously differentiable up to second order (cf. definition (2.2)), with g′′(x) ≤ 0, and that there exists δ > 0 such that g(x) ≥ δ, ∀ x ∈ Q. Suppose that θ ≥ Φ_{N(0,1)}(√3).

Suppose that ∀ i = 1, .., K:

(1) Σ_i is a positive definite matrix with 0 < λ_{i,min} ≤ λ_{i,max}, where λ_{i,max} is the largest eigenvalue of Σ_i and λ_{i,min} the smallest.
(2) For all x ∈ Q and i = 1, .., K, we have c_i > 0, together with

1/t_i^+ ≥ ( √(b_i² − 4 a_i c_i) − b_i ) / (2 c_i)   or   b_i² ≤ 4 a_i c_i,

with a_i, b_i, c_i defined as in condition (3) of theorem (2.3).
(3) For all x ∈ Q and i = 1, .., K, we have (−x^T γ_i) > 0 and D_i − x^T µ_i > 0.

Then there exists ω(ǫ, δ), depending on ǫ and δ, such that ∀ θ ≥ ω(ǫ, δ), M is a convex set.
Proof. Following the proof of theorem (2.4), we only need to prove that U(α_i, x) := ψ_x^{(−1)}[ α_i ψ_x(θ) ] is convex with respect to (x, α_i), ∀ θ ≥ ω(ǫ, δ), for some ω(ǫ, δ).

Let u_x : R → R be a family of real functions depending on x such that there exist q_x ∈ L(R, R^n) and v_x ∈ M_{n×n}(R) with q_x = du_x/dx and v_x = d²u_x/dx² (cf. definition (2.2)). For a function f : R^n → R, we have the following equations:

d/dx [ u_x(f(x)) ] = q_x(f(x)) + u_x′(f(x)) · f′(x),   (12)
d/dx [ q_x(f(x)) ] = v_x(f(x)) + q_x′(f(x)) · f′(x)^T.   (13)

Let J := ψ_x(θ) and K_x := ψ_x^{(−1)}, so that U(α_i, x) = K_x(α_i J); let L_x := dK_x/dx and M_x := d²K_x/dx². We deduce the following equations:

K_x(t) = ( t/g(x) + 1 )^{−g(x)},
L_x(t) = K_x(t) ( t/(t + g(x)) − log( t/g(x) + 1 ) ) g′(x),
M_x(t) = [ ( t/(t + g(x)) − log( t/g(x) + 1 ) )² + t/( t g(x) + g(x)² ) − t/( t + g(x) )² ] K_x(t) g′(x) g′(x)^T + K_x(t) ( t/(t + g(x)) − log( t/g(x) + 1 ) ) g′′(x),
K_x′(t) = −K_x(t) g(x)/( t + g(x) ),
K_x′′(t) = K_x(t) ( g(x)² + g(x) )/( t + g(x) )²,
L_x′(t) = K_x(t) ( g(x)/( t + g(x) ) · log( t/g(x) + 1 ) − t ( g(x) + 1 )/( t + g(x) )² ) g′(x).   (14)
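The closed forms in (14) for the Clayton-type family can be double-checked symbolically; the sketch below verifies K′_x, K′′_x and the g-derivative factor of L_x with sympy at a sample point (the point values are arbitrary):

```python
import sympy as sp

t, g = sp.symbols('t g', positive=True)

# K_x(t) = psi_x^{(-1)}(t) = (t/g + 1)^{-g} for the Clayton-type generator
K = (t / g + 1) ** (-g)

# identities from (14), evaluated numerically at a sample point
e1 = sp.diff(K, t) + K * g / (t + g)                           # K'_t + K g/(t+g) = 0
e2 = sp.diff(K, t, 2) - K * (g**2 + g) / (t + g) ** 2          # K''_t matches (14)
e3 = sp.diff(K, g) - K * (t / (t + g) - sp.log(t / g + 1))     # dK/dg = L_x factor
pt = {t: 0.7, g: 2.3}
print(all(abs(float(e.subs(pt))) < 1e-10 for e in (e1, e2, e3)))
```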
We have d²U(α_i, x)/dα_i² = J² K_x′′(α_i J) > 0. Hence a necessary and sufficient condition for the convexity of U(α_i, x) is the positive semidefiniteness of the following symmetric matrix:

[ (d²/dx²)(d²/dα_i²) − (d²/dx dα_i)(d²/dx dα_i)^T ] ∘ [ U(α_i, x) ]
= Q_1 × M_x(α_i J) + Q_2 × [ L_x′(α_i J) L_x(J)^T + L_x(J) L_x′(α_i J)^T ] + Q_3 × [ L_x′(J) L_x(J)^T + L_x(J) L_x′(J)^T ] + Q_4 × [ L_x(J) L_x(J)^T ] + Q_5 × L_x′(α_i J) L_x′(α_i J)^T,   (15)

where

Q_1 = J² K_x′′(α_i J) ( 1 − α_i K_x′(α_i J)/K_x′(J) ),
Q_2 = J K_x′(α_i J)/K_x′(J),
Q_3 = α_i J² K_x′(α_i J) K_x′′(α_i J) / K_x′(J)²,
Q_4 = −α_i J² K_x′′(J) K_x′(α_i J) K_x′′(α_i J) / K_x′(J)³ − 2 α_i J K_x′′(α_i J) K_x′(α_i J) / K_x′(J)² − K_x′(α_i J)² / K_x′(J)²,
Q_5 = −J².   (16)
Using the equations of (14), equation (15) is equivalent to

[ (d²/dx²)(d²/dα_i²) − (d²/dx dα_i)(d²/dx dα_i)^T ] ∘ [ U(α_i, x) ] = A × g′′(x) + ( B_1 + B_2 + B_3 + B_4 + B_5 ) × g′(x) g′(x)^T,   (17)

where

A = J² K_x(α_i J)² ( g(x) + g(x)² )/( α_i J + g(x) )² [ 1 − α_i ( (α_i J + g(x))/(J + g(x)) )^{−g(x)−1} ] × ( α_i J/( α_i J + g(x) ) − log( α_i J/g(x) + 1 ) ),

B_1 = J² K_x(α_i J)² ( g(x) + g(x)² )/( α_i J + g(x) )² [ 1 − α_i ( (α_i J + g(x))/(J + g(x)) )^{−g(x)−1} ] × [ ( α_i J/( α_i J + g(x) ) − log( α_i J/g(x) + 1 ) )² + α_i J/( α_i J + g(x) ) · ( 1/g(x) − 1/( α_i J + g(x) ) ) ],

B_2 = J K_x(α_i J)² ( g(x)/( α_i J + g(x) ) · log( α_i J/g(x) + 1 ) − α_i J ( g(x) + 1 )/( α_i J + g(x) )² ) × ( J/( J + g(x) ) − log( J/g(x) + 1 ) ) + J K_x(α_i J)² ( g(x)/( J + g(x) ) · log( J/g(x) + 1 ) − J ( g(x) + 1 )/( J + g(x) )² ) × ( α_i J/( α_i J + g(x) ) − log( α_i J/g(x) + 1 ) ),

B_3 = −2 α_i J² K_x(α_i J)² ( g(x) + 1 )( J + g(x) )²/( α_i J + g(x) )³ × ( J/( J + g(x) ) − log( J/g(x) + 1 ) ) × ( g(x)/( J + g(x) ) · log( J/g(x) + 1 ) − J ( g(x) + 1 )/( J + g(x) )² ),

B_4 = K_x(α_i J)² ( J + g(x) )/( α_i J + g(x) )² × ( J/( J + g(x) ) − log( J/g(x) + 1 ) )² × ( 2 α_i J ( g(x) + 1 )( J + g(x) )/( α_i J + g(x) ) − α_i J² ( g(x) + 1 )²/( α_i J + g(x) ) − J − g(x) ),

B_5 = −J² K_x(α_i J)² ( g(x)/( α_i J + g(x) ) · log( α_i J/g(x) + 1 ) − α_i J ( g(x) + 1 )/( α_i J + g(x) )² )².   (18)
When θ ≈ 1, we have J ≈ 0. Using a Taylor expansion, we can show the following estimates:

A ≤ 0, ∀ J ≥ 0,
B_1 ≈ J⁴ × α_i² (1 − α_i)( g(x) + 1 )/g(x)⁴,
B_2 ≈ J³ × ( α_i + α_i² )/( 2 g(x)⁴ ),
B_3 ≈ −J⁵ × α_i ( g(x) + 1 )/g(x)⁵,
B_4 ≈ −J⁴ × 1/( 4 g(x)⁴ ),
B_5 ≈ −J⁴ × α_i²/g(x)⁴.

Remark. Here, we write A ≈ B if A/B → 1 when J → 0.

Using the assumptions g′′(x) ≤ 0, α_i ≥ ǫ and g(x) ≥ δ, we deduce that there exists ω(ǫ, δ), depending on (ǫ, δ), such that for all θ ≥ ω(ǫ, δ),

[ (d²/dx²)(d²/dα_i²) − (d²/dx dα_i)(d²/dx dα_i)^T ] ∘ [ U(α_i, x) ] ⪰ 0,

i.e. U(α_i, x) is convex and M is a convex set.