ANNALES DE L'I. H. P., SECTION C

J. GAUVIN, R. JANIN

Directional Lipschitzian optimal solutions and directional derivative for the optimal value function in nonlinear mathematical programming

Annales de l'I. H. P., section C, tome S6 (1989), p. 305-324
<http://www.numdam.org/item?id=AIHPC_1989__S6__305_0>

© Gauthier-Villars, 1989, all rights reserved.
DIRECTIONAL LIPSCHITZIAN OPTIMAL SOLUTIONS AND DIRECTIONAL DERIVATIVE FOR THE OPTIMAL VALUE FUNCTION IN NONLINEAR MATHEMATICAL PROGRAMMING

J. GAUVIN
Département de mathématiques appliquées, École Polytechnique, Montreal, Quebec, Canada H3C 3A7

R. JANIN
Département de mathématiques, Université de Poitiers, 86022 Poitiers Cedex, France

This paper gives very accurate conditions for the Lipschitz behaviour of the optimal solutions of a mathematical programming problem with natural perturbations in some fixed direction. This result is then used to obtain the directional derivative for the optimal value function.

Key words: Mathematical programming, Lipschitzian solutions, optimal value function, directional derivative.

This research was supported by the Natural Sciences and Engineering Research Council of Canada under Grant A-9273.

1. INTRODUCTION
We consider in this paper a nonlinear mathematical programming problem with natural perturbations

P(tu):  min f_0(x)
        s.t. f_i(x) ≤ t u_i, i ∈ I,
             f_j(x) = t u_j, j ∈ J,

where t > 0, I, J are finite sets of indices and u ∈ R^{I∪J} is a fixed direction for the perturbations.
We are looking for the minimal assumptions to have the Lipschitz continuity of any local optimal solution x(tu) of program P(tu) near some optimal solution x* of program P(0).
The Lipschitz behaviour of the optimal solutions in parametric optimization has been studied by many authors. We cite the works of Aubin [1], Cornet-Vial [2] and Robinson [6], where this property has been obtained under regularity conditions related somehow to the Mangasarian-Fromovitz regularity condition. This regularity condition restricts the program P(0) to be defined in the interior of the domain of feasible perturbations; i.e., the perturbations v for which the programs P(v) have feasible solutions. Theorem 4.3 in Gauvin-Janin [3] is an attempt to obtain this Lipschitz property under more general regularity conditions. Our purpose in this paper is to give a more refined version of that result. An example is given which shows that we may have obtained the most accurate statement of a Lipschitz directional continuity property for the optimal solutions in mathematical programming. Finally, the result is used to obtain a nice and simple formula for the directional derivative of the optimal value function of the mathematical programming problem. This last result completes and refines Theorem 3.6 in Gauvin-Tolle [4], where this formula was obtained under the assumption, among others, that a Lipschitz continuity property was satisfied for the optimal solutions.
It should be noticed that program P(tu) as formulated contains the case where the feasible solutions are constrained to remain in some convex polyhedron, because such a set can always be defined by linear inequalities or equalities which can be included in the above formulation. Also, a mathematical program with general nonlinear perturbations can be translated into the formulation with linear right-hand side perturbations only, as was once shown by R.T. Rockafellar (see the Introduction in Gauvin-Janin [3]).
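That reduction can be sketched as follows (a sketch in our own notation; the functions g_i, the auxiliary variable w and the index set K are illustrative, not the paper's):

```latex
% Nonlinearly perturbed program (g_i, w, K are our illustrative notation):
%   \min_x f_0(x)  \quad\text{s.t.}\quad  g_i(x,v) \le 0,\; i \in I.
% Treat the parameter v as an extra variable w constrained to equal v:
\min_{x,\,w}\; f_0(x)
\quad\text{s.t.}\quad g_i(x,w)\le 0,\; i\in I,
\qquad w_k - v_k = 0,\; k\in K.
```

The perturbation v now enters only through the linear right-hand side of the added equality constraints.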
2. ASSUMPTIONS AND PRELIMINARIES

For an optimal solution x* of P(0), we denote by Λ(x*) the set of Lagrange multipliers; i.e., the multipliers (λ_0, λ), λ ∈ R^{I∪J}, such that

λ_0 ∇f_0(x*) + Σ_{i∈I∪J} λ_i ∇f_i(x*) = 0,
λ_0 ≥ 0;  λ_i ≥ 0, λ_i f_i(x*) = 0, i ∈ I.

The corresponding set of Kuhn-Tucker multipliers is denoted by

Ω(x*) = { λ : (1, λ) ∈ Λ(x*) },

with its recession cone

Ω^∞(x*) = { λ : (0, λ) ∈ Λ(x*) }.
The first assumption is a very general regularity condition.

H1: The set Ω(x*) is nonempty.

The second assumption is a condition on the choice of the direction of perturbations.

H2: The direction u satisfies

⟨λ, u⟩ > 0 for every λ ∈ Ω^∞(x*), λ ≠ 0.

This assumption implies that the family {∇f_j(x*) | j ∈ J} related with the equality constraints is linearly independent (see (3) of Remark 3.1 in Gauvin-Janin [3]).
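As a small worked illustration of H1 and H2 (a toy instance of ours, not one from the paper), take I = {1,2}, J = ∅ and

```latex
\min\; f_0(x)=x
\quad\text{s.t.}\quad f_1(x)=-x\le t u_1,
\qquad f_2(x)=x^2\le t u_2,
```

with x* = 0 at t = 0. The multiplier system λ_0 ∇f_0(0) + λ_1 ∇f_1(0) + λ_2 ∇f_2(0) = λ_0 − λ_1 = 0 gives Ω(x*) = {(1, λ_2) : λ_2 ≥ 0}, nonempty but unbounded, and Ω^∞(x*) = {(0, λ_2) : λ_2 ≥ 0}. Thus H1 holds, while H2, i.e. ⟨λ, u⟩ = λ_2 u_2 > 0 for every nonzero singular multiplier, holds exactly when u_2 > 0: the direction must point into the interior of the domain of feasible perturbations.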
Both assumptions imply that the set Ω_1(x*,u) of optimal solutions for the linear program

max { −⟨λ, u⟩ : λ ∈ Ω(x*) }

is nonempty and bounded even if Ω(x*) is unbounded (see (4) of Remark 3.1 in Gauvin-Janin [3]). By duality, this implies that the linear program

LP(x*,u):  min ⟨∇f_0(x*), y⟩
           s.t. ⟨∇f_i(x*), y⟩ ≤ u_i, i ∈ I(x*),
                ⟨∇f_j(x*), y⟩ = u_j, j ∈ J,

where I(x*) = { i ∈ I : f_i(x*) = 0 } is the set of active inequality constraints, is feasible and bounded. We denote by Y(x*,u) the set of optimal solutions for that program, by

I(x*,u) = { i ∈ I(x*) : ⟨∇f_i(x*), y⟩ = u_i for some y ∈ Y(x*,u) }

the set of indices corresponding to possible binding inequality constraints for some optimal solution, and by

I*(x*,u) = { i ∈ I(x*,u) : λ_i > 0 for some λ ∈ Ω_1(x*,u) }

the subset of indices corresponding to nonnull optimal multipliers. Finally, we denote by

E = { y : ⟨∇f_i(x*), y⟩ = 0, i ∈ I*(x*,u) ∪ J }

the tangent subspace at x* to the inequality constraints related with this last set of indices, together with the equality constraints. If we let

∇²L(x*,λ) = ∇²f_0(x*) + Σ_{i∈I∪J} λ_i ∇²f_i(x*)

be the Hessian of the Lagrangian

L(x,λ) = f_0(x) + Σ_{i∈I∪J} λ_i f_i(x),

the third and last assumption is a weak second-order sufficient optimality condition related with the above tangent subspace.
H3: For any y ∈ E, y ≠ 0, there exists λ ∈ Ω_1(x*,u) such that

⟨y, ∇²L(x*,λ) y⟩ > 0.

This condition is weak in the sense that it does not need to hold for all λ ∈ Ω_1(x*,u) or all λ ∈ Ω(x*).
By arguments similar to those in Theorem 2.2 in [3], assumption H3 implies that x* is a strict local optimum for the enlarged program

min f_0(x), x ∈ R^n,
s.t. f_i(x) ≤ 0, i ∈ I*(x*,u),
     f_j(x) = 0, j ∈ J.

Since x* is assumed to be an optimum of the original program P(0), we must have that x* is also a strict optimum of P(0). Therefore we have a neighborhood X(x*) of x* where f_0(x*) < f_0(x) for any feasible point x ≠ x* of P(0).
By a local optimal solution of P(tu) near x* we mean any optimal solution x(tu) of the restricted program

P(tu|x*):  min { f_0(x) : x ∈ R(tu) ∩ X(x*) },

where R(tu) is the set of feasible solutions of P(tu).
Under assumptions H1 and H2, the proof of Theorem 3.2 in Gauvin-Janin [3] shows that it is possible to construct a feasible arc x(t) ∈ R(tu), t ∈ [0,t_0[, for some t_0 > 0, such that x(t) → x* as t ↓ 0. Since f_0(x(tu)) ≤ f_0(x(t)), we have, for any cluster point x** of x(tu) as t ↓ 0,

f_0(x**) ≤ lim sup f_0(x(tu)) ≤ lim f_0(x(t)) = f_0(x*).

But since x* is the unique optimum for the restricted program P(0|x*), we must necessarily have x** = x* and consequently

x(tu) → x* as t ↓ 0.
It should be noticed from the above that assumptions H1 and H2 imply that the program P(tu), t ∈ [0,t_0[, is feasible even if v = 0 is at the boundary of the domain of feasible perturbations

dom R = { v : R(v) ≠ ∅ }.

In that case, the condition on the direction u in H2 implies that u must be pointing toward the interior of dom R. Example (3.1) in Gauvin-Janin [3] illustrates that situation.

3. LIPSCHITZ DIRECTIONAL CONTINUITY FOR THE OPTIMAL SOLUTIONS
The next result on the Lipschitz directional continuity for the optimal solutions is a generalization and a refinement of Theorem 4.3 in Gauvin-Janin [3].
Theorem 1

Let x* be an optimal solution of program P(0) with assumptions H1, H2 and H3 satisfied. Then, for any local optimal solution x(tu) of P(tu) near x*, we have

lim sup_{t↓0} ‖x(tu) − x*‖ / t < +∞.

Proof.
As previously noticed, assumption H3 implies the existence of a neighborhood X(x*) where x* is the unique optimum of the restricted program P(0|x*); and by assumptions H1 and H2, we also have

x(tu) → x* as t ↓ 0

for any optimal solution x(tu) of the restricted program P(tu|x*), which is then feasible for some nontrivial interval [0,t_0[.
Now suppose the conclusion is false and take any sequence {t_n}, t_n ↓ 0, such that

‖x(t_n u) − x*‖ / t_n → +∞

and

(x(t_n u) − x*) / ‖x(t_n u) − x*‖ → y

for some cluster point y of that bounded sequence. From Lemma 3.1 and Theorem 3.2 in Gauvin-Janin [3], we have, for n large enough,

f_0(x(t_n u)) ≤ f_0(x*) + δ t_n

for some δ and for any λ ∈ Ω_1(x*,u), where ∇_x L(x*,λ) = 0 and λ_i f_i(x*) = 0. We also have

f_i(x(t_n u)) ≤ t_n u_i, i ∈ I;  f_j(x(t_n u)) = t_n u_j, j ∈ J.

Therefore we have, for some x̄_n = x* + θ(x(t_n u) − x*) and some θ ∈ [0,1],

(1)  δ t_n + t_n ⟨λ, u⟩ ≥ ½ ⟨x(t_n u) − x*, ∇²L(x̄_n, λ)(x(t_n u) − x*)⟩.
Since we have assumed that ‖x(t_n u) − x*‖ / t_n → +∞, we can divide all the above inequalities and equalities by ‖x(t_n u) − x*‖ and take the limits to obtain

⟨∇f_0(x*), y⟩ ≤ 0;  ⟨∇f_i(x*), y⟩ ≤ 0, i ∈ I(x*);  ⟨∇f_j(x*), y⟩ = 0, j ∈ J.

But there exists λ ∈ Ω_1(x*,u) with λ_i > 0, i ∈ I*(x*,u), such that

∇f_0(x*) + Σ_{i∈K} λ_i ∇f_i(x*) = 0,

where K = I*(x*,u) ∪ J. From the above, we then have

0 = ⟨∇f_0(x*), y⟩ + Σ_{i∈K} λ_i ⟨∇f_i(x*), y⟩

with every term nonpositive, which implies that ⟨∇f_i(x*), y⟩ = 0 for every i ∈ I*(x*,u) ∪ J; therefore y ∈ E.
From H3, there exist λ ∈ Ω_1(x*,u) and α > 0 such that

⟨y, ∇²L(x*,λ) y⟩ ≥ α.

For n large enough, we can write

½ ⟨x(t_n u) − x*, ∇²L(x̄_n, λ)(x(t_n u) − x*)⟩ ≥ (α/4) ‖x(t_n u) − x*‖².

For n large enough we also have a term greater than −α/4 in the left-hand side of inequality (1). The two previous inequalities put together in the left-hand side of (1) reduce that inequality to a bound of the form ‖x(t_n u) − x*‖ ≤ C t_n, which is in contradiction with what we have assumed at the beginning. □
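A quick numerical sanity check of the kind of bound asserted by Theorem 1 (a toy instance of ours, not one from the paper): for a perturbed program whose solution is available in closed form, the ratio ‖x(tu) − x*‖/t stays bounded as t ↓ 0.

```python
# Toy program (ours, for illustration):
#   min (x1 - 1)^2 + x2^2   s.t.  x1 <= t*u,   u = 1,
# with x* = (0, 0) at t = 0 and x(tu) = (min(1, t*u), 0) for t > 0.
import math

u = 1.0
x_star = (0.0, 0.0)

def solve(t):
    # Unconstrained minimizer is (1, 0); the constrained minimizer is
    # its projection onto the half-space x1 <= t*u.
    return (min(1.0, t * u), 0.0)

ratios = []
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    ratios.append(math.dist(solve(t), x_star) / t)

# The Lipschitz ratio stays bounded (here ~1 for every t).
print(ratios)
```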
The following example shows that Theorem 1 is perhaps the most accurate statement of a Lipschitz directional continuity property for the optimal solutions in mathematical programming.
Example 1

min f_0 = −x_2
s.t. f_1 = x_2 − x_1² ≤ t u_1,
     f_2 = x_2 + x_1² ≤ t u_2.

For t = 0, the optimum is x*_1 = x*_2 = 0, where I(x*) = {1,2}. The multipliers are given by

λ_0 ∇f_0(x*) + λ_1 ∇f_1(x*) + λ_2 ∇f_2(x*) = (0, −λ_0 + λ_1 + λ_2) = 0,  λ_0, λ_1, λ_2 ≥ 0,

with, in this case, Ω^∞(x*) = {0}; therefore assumption H2 is satisfied for any direction u = (u_1, u_2). The set of optimal multipliers is

Ω(x*) = { (λ_1, λ_2) ≥ 0 : λ_1 + λ_2 = 1 }.

The linear program becomes

min −y_2  s.t.  y_2 ≤ u_1,  y_2 ≤ u_2,

for which we have

Y(x*,u) = { y : y_2 = min(u_1, u_2) }.

The corresponding tangent subspace is

E = { y : y_2 = 0 }.

On this subspace, the Hessian of the Lagrangian
has value

⟨y, ∇²L(x*,λ) y⟩ = 2(λ_2 − λ_1) y_1².

For the case where u_1 − u_2 ≥ 0, we have λ* = (0,1) ∈ Ω_1(x*,u); therefore H3 is satisfied with

⟨y, ∇²L(x*,λ*) y⟩ = 2 y_1² > 0.

In that case, the optimal solution of P(tu) is x(tu) = (0, t u_2), for which the Lipschitz continuity
holds.

For the case where u_1 − u_2 < 0, we only have λ* = (1,0) in Ω_1(x*,u); therefore H3 is not satisfied, since

⟨y, ∇²L(x*,λ*) y⟩ = −2 y_1² < 0.

The optimal solutions of P(tu) are in that case

x(tu) = ( ±√(t(u_2 − u_1)/2), t(u_1 + u_2)/2 ),

and the Lipschitz continuity does not hold.

4. DIRECTIONAL DERIVATIVE FOR THE OPTIMAL VALUE FUNCTION

We now consider the optimal value function of program P(tu):

p(tu) = inf { f_0(x) : x ∈ R(tu) },
where, as previously, R(tu) is the set of feasible solutions. We need also to consider the local optimal value function for the restricted program

p(tu|x*) = inf { f_0(x) : x ∈ R(tu) ∩ X(x*) },

where X(x*) is some neighborhood of an optimal solution x* of P(0). The next result is a more refined and more accurate statement of the result of Theorem 3.6 in Gauvin-Tolle [4].
Theorem 2

Let x* and x(tu) be optimal solutions respectively of programs P(0) and P(tu) with the assumptions:

(1) Ω(x*) ≠ ∅ and the direction u satisfies H2;

(2) lim sup_{t↓0} ‖x(tu) − x*‖ / t < +∞.

Then the optimal value function has a directional derivative given by

p'(0;u) = lim_{t↓0} (p(tu) − p(0))/t = max { −⟨λ, u⟩ : λ ∈ Ω(x*) }.
Proof. When t is small enough, we have

‖x(tu) − x*‖ ≤ kt

for some k. For any λ ∈ Ω(x*), we have

p(tu) − p(0) = f_0(x(tu)) − f_0(x*) ≥ L(x(tu),λ) − t⟨λ,u⟩ − L(x*,λ) = ⟨∇_x L(x_u(t),λ), x(tu) − x*⟩ − t⟨λ,u⟩

for some x_u(t) ∈ [x*, x(tu)]. Since ∇_x L(x*,λ) = 0 and ‖x(tu) − x*‖ ≤ kt, we necessarily have

lim inf_{t↓0} (p(tu) − p(0))/t ≥ max { −⟨λ,u⟩ : λ ∈ Ω(x*) }.

This maximum is attained under assumption (1), as previously noticed.
On the other hand, we have, by Theorem 3.2 in Gollan [5] or Theorem 3.2 in Gauvin-Janin [3],

lim sup_{t↓0} (p(tu) − p(0))/t ≤ max { −⟨λ,u⟩ : λ ∈ Ω(x*) }.

The result follows from both inequalities. □
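The formula of Theorem 2 can be checked numerically on a simple instance (a toy convex program of ours, not from the paper), where both the perturbed optimal value and the Kuhn-Tucker multiplier are available in closed form.

```python
# Toy convex program (ours, for illustration):
#   min x1^2 + x2^2   s.t.  1 - x1 - x2 <= t*u.
# For small t the constraint is active, x1 = x2 = (1 - t*u)/2, and the
# unique Kuhn-Tucker multiplier at t = 0 is lambda = 1, so the formula
#   p'(0;u) = max { -<lambda, u> : lambda in Omega(x*) }
# predicts p'(0;u) = -u.

def p(t, u):
    # Closed-form optimal value of the perturbed program.
    return (1.0 - t * u) ** 2 / 2.0

u = 2.0
lam = 1.0                 # unique Kuhn-Tucker multiplier at t = 0
predicted = -lam * u      # value given by the formula
for t in [1e-3, 1e-5, 1e-7]:
    numeric = (p(t, u) - p(0.0, u)) / t
    # numeric tends to predicted as t -> 0+
    print(t, numeric, predicted)
```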
To finally complete the result of Theorem 3.6 in Gauvin-Tolle [4], we can state the following theorem. The family R(tu), t ∈ [0,t_0[, is said to be uniformly compact if the closure of ∪_{t∈[0,t_0[} R(tu) is compact (see also the inf-boundedness condition in Rockafellar [7]). Let S(0) be the set of optimal solutions for P(0).
Theorem 3

Let S(0) be nonempty and R(tu) be uniformly compact on the nontrivial interval [0,t_0[. If, for any optimum x* ∈ S(0), the assumptions H1, H2 and H3 are satisfied, then the directional derivative of the optimal value function exists and is given by

p'(0;u) = min_{x*∈S(0)} max { −⟨λ, u⟩ : λ ∈ Ω(x*) }.
Proof. Because R(tu) is uniformly compact and S(0) is nonempty, p(tu) is lower semi-continuous at t = 0+ (see Lemma 2.1 in Gauvin-Tolle [4]). By H3, each x* ∈ S(0) is a strict optimum of P(0); therefore there exists a neighborhood X(x*) where x* is the unique optimum of the restricted program P(0|x*). By finite covering, the number of optimum points in S(0) must then be finite. By H1 and H2, as previously noticed, the restricted programs are all feasible for t ∈ [0,t_0[, for some t_0 > 0. Therefore the local optimal value functions are also upper semi-continuous at t = 0+, therefore continuous at t = 0+. For t in some nontrivial interval, we then have

p(tu) = min { p(tu|x*) : x* ∈ S(0) }.

For each x* ∈ S(0), we have by Theorem 1 that any optimal solution x(tu) of the restricted program P(tu|x*) is Lipschitz continuous at t = 0+. We then apply Theorem 2 to each local optimal value function p(tu|x*) to finally obtain

p'(0;u) = min_{x*∈S(0)} max { −⟨λ, u⟩ : λ ∈ Ω(x*) }. □
When the program P(0) is convex, the set Ω(x*) is identical for all x* ∈ S(0); therefore the above formula reduces to

p'(0;u) = max { −⟨λ, u⟩ : λ ∈ Ω(x*) },

which is the classical result of convex programming under the Slater regularity condition (Ω(x*) nonempty and bounded or, equivalently, Ω^∞(x*) = {0}).
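A degenerate toy instance (ours, not from the paper) shows why the maximum over the whole multiplier set matters when the multiplier is not unique:

```python
# Toy convex program (ours, for illustration):
#   min -x   s.t.  x <= t*u1,  x <= t*u2.
# At t = 0, x* = 0 and the Kuhn-Tucker multipliers form the segment
#   Omega(x*) = { (a, 1-a) : 0 <= a <= 1 },
# so  max { -<lambda,u> : lambda in Omega(x*) }  =  -min(u1, u2),
# matching p(tu) = -t*min(u1, u2) differentiated at t = 0+.

def p(t, u1, u2):
    # The optimum is x = t*min(u1, u2), the smaller right-hand side.
    return -t * min(u1, u2)

u1, u2 = 3.0, 1.0
predicted = -min(u1, u2)  # value of the multiplier linear program
for t in [1e-1, 1e-3]:
    numeric = (p(t, u1, u2) - p(0.0, u1, u2)) / t
    print(numeric, predicted)
```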
In the case where the Lipschitz property does not hold for the optimal solutions, we can still have a result on the directional derivative for the optimal value function if the optimal solutions satisfy the Holderian continuity; i.e.,

lim sup_{t↓0} ‖x(tu) − x*‖ / √t < +∞

for any local optimal solution x(tu) of P(tu) near x* (see Corollary 4.1 in Gauvin-Janin [3]). In that case, the formula for the directional derivative for the local optimal value function is given by

(2)  p'(0;u|x*) = min_{y∈D(x*)} max { −⟨λ, u⟩ : λ ∈ Ω(x*,y) },

where

D(x*) = { y : ⟨∇f_i(x*), y⟩ ≤ 0, i ∈ I(x*) ∪ {0}; ⟨∇f_j(x*), y⟩ = 0, j ∈ J }

is the set of critical directions at x* and where

Ω(x*,y) = { λ ∈ Ω(x*) : ⟨y, ∇²L(x*,λ) y⟩ ≥ 0 }

is the subset of multipliers satisfying the second-order necessary optimality condition for that critical direction y.
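The Hölder behaviour invoked here can be seen on a one-dimensional toy program (ours, not the paper's example): the solution moves like √t, so the Lipschitz ratio blows up while the Hölder ratio stays bounded.

```python
# Toy program (ours, for illustration):
#   min -x   s.t.  x^2 <= t,
# whose feasible set is [-sqrt(t), sqrt(t)], so x(t) = sqrt(t), x* = 0.
import math

def solve(t):
    # Minimizing -x picks the right endpoint of the feasible interval.
    return math.sqrt(t)

ts = [1e-2, 1e-4, 1e-6]
lip = [solve(t) / t for t in ts]                 # 1/sqrt(t): unbounded
holder = [solve(t) / math.sqrt(t) for t in ts]   # identically 1: bounded
print(lip, holder)
```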
This nice formula is not so simple and sometimes may be quite difficult to evaluate! Nevertheless the formula can be useful, as illustrated by Example 1, where we have noticed that for a direction u with u_1 − u_2 < 0 the optimal solutions are not Lipschitzian but are Holderian. For that example, the optimal value function has a directional derivative with value

−(u_1 + u_2)/2;

since u_1 − u_2 < 0, the value given by the formula of Theorem 2 is

max { −⟨λ, u⟩ : λ ∈ Ω(x*) } = −u_1,

which disagrees with the real value, since Theorem 2 does not apply for that case. If we refer to the above valid formula (2), we have the set of critical directions at x* = 0 given by

D(x*) = { y : y_2 = 0 },

for which we have

⟨y, ∇²L(x*,λ) y⟩ = 2(λ_2 − λ_1) y_1² ≥ 0  if and only if  λ_2 ≥ λ_1 or y_1 = 0.

Therefore

Ω(x*,y) = { λ ∈ Ω(x*) : λ_2 ≥ λ_1 }  for y_1 ≠ 0,

and, since u_1 − u_2 < 0, the formula gives the value

−(u_1 + u_2)/2,

which is in agreement
with the real value for the directional derivative.

Assumption H3 can be equivalently formulated by

min_{y∈E, ‖y‖=1} max { ⟨y, ∇²L(x*,λ) y⟩ : λ ∈ Ω_1(x*,u) } > 0.

This assumption, needed for the Lipschitzian property in Theorem 1, is contained in the following less restrictive condition, used to obtain the Holderian property in Theorem 4.1 in Gauvin-Janin [3]:

H4: min_{y∈E, ‖y‖=1} max { ⟨y, ∇²L(x*,λ) y⟩ : λ ∈ Ω(x*) } > 0.
We can put together Theorem 3 above and Corollary 4.1 in Gauvin-Janin [3] to obtain the following nice and clear result for the existence and value of the directional derivative of the optimal value function in mathematical programming. This last theorem is related with a similar result in Rockafellar [7].
Theorem 4

Let S(0) be nonempty and R(tu) be uniformly compact in some nontrivial interval [0,t_0[. At any optimum x* ∈ S(0), we assume that H1 and H2 are satisfied with at least one of the following weak second-order sufficient optimality conditions: H3 or H4. Then the directional derivative

p'(0;u) = lim_{t↓0} (p(tu) − p(0))/t

of the optimal value function exists and is given by

p'(0;u) = min_{x*∈S(0)} min_{y∈D(x*)} max { −⟨λ, u⟩ : λ ∈ Ω(x*,y) }.
So far, we don't know any example where the optimal value function has a directional derivative without at least one of the conditions H3 or H4 satisfied.

REFERENCES
[1] Aubin, J.P. (1984). "Lipschitz Behaviour of Solutions to Convex Minimization Problems". Math. Oper. Res. 9, 87-111.

[2] Cornet, B., Vial, J.P. (1983). "Lipschitzian Solutions of Perturbed Nonlinear Programming Problems". SIAM J. Control and Opt. 24, 1123-1137.

[3] Gauvin, J., Janin, R. (1987). "Directional Behaviour of Optimal Solutions in Nonlinear Mathematical Programming". Math. Oper. Res. (accepted for publication).

[4] Gauvin, J., Tolle, J.W. (1977). "Differential Stability in Nonlinear Programming". SIAM J. Control and Optimization 15, 294-311.

[5] Gollan, B. (1984). "On the Marginal Function in Nonlinear Programming". Math. Oper. Res. 9, 208-221.

[6] Robinson, S.M. (1982). "Generalized Equations and their