PROGRAMMING INVOLVING GENERALIZED D-TYPE-I RELATIVE TO BIFUNCTIONS AND RELATED FUNCTIONS

VASILE PREDA, COSTEL BĂLCĂU and CRISTIAN NICULESCU

Optimality conditions are obtained for a nonlinear fractional multiobjective programming problem involving generalized d-type-I relative to bifunctions and related functions. The place of the derivatives is taken by bifunctions having certain properties. Some examples are also given.

AMS 2010 Subject Classification: 90C29, 90C32, 90C56.

Key words: fractional multiobjective programming, optimality conditions, bifunction, convexlike.

1. INTRODUCTION

Optimality conditions for nonlinear single-objective or multiobjective optimization problems involving generalized convex functions have been of much interest, and many contributions have been made to this development (e.g., [7, 8]).

We consider the following multiobjective nonlinear fractional programming problem:

(VFP)   min f(x)/g(x) = (f1(x)/g1(x), . . . , fp(x)/gp(x))

subject to

hj(x) ≦ 0, j = 1, 2, . . . , m,  x ∈ X0,

where X0 ⊆ Rn is a nonempty open set, f = (f1, . . . , fp), g = (g1, . . . , gp) : X0 → Rp, h = (h1, . . . , hm) : X0 → Rm, and gi(x) > 0 for all x ∈ X0 and each i = 1, . . . , p. We denote by X = {x ∈ X0 | hj(x) ≦ 0, j = 1, 2, . . . , m} the feasible set of problem (VFP).
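Purely as an illustration (none of the concrete functions below appear in the paper; they are assumptions chosen for the sketch), the following Python snippet evaluates the vector objective f(x)/g(x) and tests feasibility for a small hypothetical instance of (VFP) with n = 2, p = 2, m = 1.

```python
import numpy as np

# Hypothetical instance of (VFP): two ratio objectives, one constraint, X0 = R^2.
f = [lambda x: x[0]**2 + 1.0, lambda x: (x[0] - x[1])**2 + 2.0]   # numerators f_i
g = [lambda x: 1.0 + x[1]**2,  lambda x: 2.0 + x[0]**2]            # denominators g_i > 0 on X0
h = [lambda x: x[0] + x[1] - 1.0]                                  # constraint h_1(x) <= 0

def feasible(x):
    # x is feasible iff every constraint value is non-positive
    return all(hj(x) <= 0.0 for hj in h)

def objective(x):
    # vector of ratios f_i(x)/g_i(x); well defined because each g_i > 0
    return np.array([fi(x) / gi(x) for fi, gi in zip(f, g)])

x = np.array([0.25, 0.25])
print(feasible(x), objective(x))
```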

In this paper, we obtain optimality conditions for (VFP) involving general classes of functions. The place of the derivatives is taken by bifunctions having certain properties. A final section contains examples and comments. Thus, we extend our work [5] and generalize results obtained in the literature on this topic [4, 6].

2. DEFINITIONS AND PRELIMINARIES

Throughout this paper we use the following conventions for x, y ∈ Rn:

x < y iff xi < yi for any i = 1, . . . , n,
x ≦ y iff xi ≦ yi for any i = 1, . . . , n,
x ≤ y iff x ≦ y and x ≠ y,
x ≮ y is the negation of x < y.

We denote Rn+ = {x ∈ Rn | x ≧ 0}.

Definition 2.1. A point x̄ ∈ X is said to be a weak Pareto solution or weak minimum for (VFP) if f(x)/g(x) ≮ f(x̄)/g(x̄) for all x ∈ X.

Definition 2.2. A point x̄ ∈ X is said to be a local weak Pareto solution or local weak minimum for (VFP) if there is a neighborhood V(x̄) around x̄ such that f(x)/g(x) ≮ f(x̄)/g(x̄) for all x ∈ X ∩ V(x̄).

Let f̂i, ĝi : X0 × Rn → R for i ∈ P = {1, 2, . . . , p}, and ĥj : X0 × Rn → R for j ∈ M = {1, 2, . . . , m}, be bifunctions.

Let ρ1 = (ρ11, . . . , ρ1p)T, ρ2 = (ρ21, . . . , ρ2p)T ∈ Rp, ρ3 = (ρ31, . . . , ρ3m)T ∈ Rm, and d : X0 × X0 → R+.

Definition 2.3. (f, g, h) is said to be of general (ρ1, ρ2, ρ3, d)-type I at x̄ ∈ X0 relative to (f̂i, ĝi, ĥj), i ∈ P, j ∈ M, if there exist functions η : X0 × X0 → Rn and αi, βi, γj : X0 × X0 → R+\{0}, i ∈ P, j ∈ M, such that for all x ∈ X0 we have

fi(x) − fi(x̄) ≧ αi(x, x̄) f̂i(x̄, η(x, x̄)) + ρ1i d(x, x̄), ∀ i ∈ P,
gi(x) − gi(x̄) ≦ βi(x, x̄) ĝi(x̄, η(x, x̄)) − ρ2i d(x, x̄), ∀ i ∈ P,
−hj(x̄) ≧ γj(x, x̄) ĥj(x̄, η(x, x̄)) + ρ3j d(x, x̄), ∀ j ∈ M.
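For intuition, the next sketch numerically checks the three inequalities of Definition 2.3 on a toy one-dimensional instance; every concrete choice (f(x) = x², g ≡ 1, h ≡ −1, f̂(x̄, y) = 2x̄y, ĝ = ĥ ≡ 0, η(x, x̄) = x − x̄, α = β = γ ≡ 1, d(x, x̄) = (x − x̄)², ρ1 = 1, ρ2 = ρ3 = 0, x̄ = 0) is an assumption made for the example, not part of the paper.

```python
import numpy as np

# Hypothetical one-dimensional instance used to check Definition 2.3 at xbar = 0.
f = lambda x: x**2
g = lambda x: 1.0
h = lambda x: -1.0
fhat = lambda xbar, y: 2.0 * xbar * y      # bifunction playing the role of f'(xbar; y)
ghat = lambda xbar, y: 0.0
hhat = lambda xbar, y: 0.0
eta  = lambda x, xbar: x - xbar
d    = lambda x, xbar: (x - xbar) ** 2
alpha = beta = gamma = lambda x, xbar: 1.0
rho1, rho2, rho3 = 1.0, 0.0, 0.0
xbar = 0.0

ok = True
for x in np.linspace(-3.0, 3.0, 121):
    y = eta(x, xbar)
    ok &= f(x) - f(xbar) >= alpha(x, xbar) * fhat(xbar, y) + rho1 * d(x, xbar) - 1e-12
    ok &= g(x) - g(xbar) <= beta(x, xbar) * ghat(xbar, y) - rho2 * d(x, xbar) + 1e-12
    ok &= -h(xbar)       >= gamma(x, xbar) * hhat(xbar, y) + rho3 * d(x, xbar) - 1e-12
print("Definition 2.3 inequalities hold on the sample grid:", ok)
```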

3. SUFFICIENT OPTIMALITY CRITERIA

In this section we obtain some sufficient conditions for a feasible solution x̄ to be a weak minimum for (VFP). The proofs of Theorems 3.1 and 3.2 follow the lines of the proofs of Theorems 1 and 2 from [5], respectively, and are therefore omitted.


Theorem 3.1. Let x̄ ∈ X be such that f(x̄) ≧ 0, and let (f, g, h) be of general (ρ1, ρ2, ρ3, d)-type I at x̄. Also, we assume that there exist λ0 ∈ Rp, v0 ∈ Rm such that λ0Tρ1 + v0Tρ3 ≧ 0, ρ2 ≧ 0, and

∑_{i=1}^{p} λ0i αi(x, x̄) f̂i(x̄, η(x, x̄)) + ∑_{j=1}^{m} v0j γj(x, x̄) ĥj(x̄, η(x, x̄)) ≧ 0, ∀ x ∈ X,

ĝi(x̄, η(x, x̄)) ≦ 0, ∀ x ∈ X, ∀ i ∈ P,

v0Th(x̄) = 0, λ0 ≥ 0, v0 ≧ 0.

Then x̄ is a weak minimum solution for (VFP).

Theorem 3.2. Let x̄ ∈ X, u0i = fi(x̄)/gi(x̄), ∀ i ∈ P, and let (f, g, h) be of general (ρ1, ρ2, ρ3, d)-type I at x̄. Also, we assume that there exist λ0 ∈ Rp, v0 ∈ Rm such that

λ0Tρ1 + ∑_{i=1}^{p} λ0i u0i ρ2i + v0Tρ3 ≧ 0,

∑_{i=1}^{p} λ0i [αi(x, x̄) f̂i(x̄, η(x, x̄)) − u0i βi(x, x̄) ĝi(x̄, η(x, x̄))] + ∑_{j=1}^{m} v0j γj(x, x̄) ĥj(x̄, η(x, x̄)) ≧ 0, ∀ x ∈ X,

v0Th(x̄) = 0, λ0 ≥ 0, u0 ≧ 0, v0 ≧ 0.

Then x̄ is a weak minimum solution for (VFP).

4. NECESSARY OPTIMALITY CONDITIONS

Let ϕ : X0 → Rk, ϕ = (ϕ1, ϕ2, . . . , ϕk), and ϕ̂ : X0 × Rn → Rk, ϕ̂ = (ϕ̂1, ϕ̂2, . . . , ϕ̂k); we consider ϕ̂ to be a bifunction associated to ϕ.

We say that ϕ and ϕ̂ satisfy the hypothesis H1 at x̄ ∈ X if:
(H1) for any y ∈ Rn such that ϕ̂(x̄, y) < 0, there exists a sequence t_r ↘ 0 such that ϕ(x̄ + t_r y) < ϕ(x̄), for any r ∈ N.

We say that ϕ and ϕ̂ satisfy the hypothesis H2 at x̄ ∈ X if:
(H2) for any y ∈ Rn such that ϕ̂(x̄, y) < 0, there exists t0 > 0 such that ϕ(x̄ + ty) < ϕ(x̄), for any t ∈ (0, t0).

Proposition 4.1. If ϕ and ϕ̂ satisfy H2 at x̄, then they also satisfy H1 at x̄. The converse is not true.

Proof. If ϕ and ϕ̂ satisfy H2 at x̄, let y ∈ Rn be such that ϕ̂(x̄, y) < 0. It follows that there exists t0 > 0 such that ϕ(x̄ + ty) < ϕ(x̄), for any t ∈ (0, t0).

Let t_r = t0/(r + 1), for any r ∈ N. Then t_r ∈ (0, t0) for any r ∈ N, and therefore ϕ(x̄ + t_r y) < ϕ(x̄) for any r ∈ N. Since t_r ↘ 0, ϕ and ϕ̂ satisfy H1 at x̄.

To see that the converse is not true, consider the following example.


Let X0 = R, ϕ : R → R,

ϕ(x) = cos(1/x) if x ≠ 0,  ϕ(x) = 0 if x = 0,

and ϕ̂ : R × R → R, ϕ̂(x, y) = x − y. Let x̄ = 0 and y ∈ R such that ϕ̂(x̄, y) < 0, i.e., y > 0.

Let t_r = 1/((2r + 1)πy), for any r ∈ N. It follows that ϕ(x̄ + t_r y) = cos((2r + 1)π) = −1 < 0 = ϕ(x̄), for any r ∈ N, and t_r ↘ 0. Therefore ϕ and ϕ̂ satisfy H1 at x̄.

For any t0 > 0, let r ∈ N be such that r > 1/(2π t0 y) and t = 1/(2rπy). It follows that t ∈ (0, t0) and ϕ(x̄ + ty) = cos(2rπ) = 1 > 0 = ϕ(x̄). Therefore, ϕ and ϕ̂ do not satisfy H2 at x̄.
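A purely numerical illustration of this counterexample (the choice y = 1 and the range of r are assumptions made for the sketch): along t_r = 1/((2r+1)πy) the values of ϕ drop to −1 < ϕ(0) = 0, while arbitrarily close to 0 there are also points t = 1/(2rπy) where ϕ(x̄ + ty) = 1 > 0, so no interval (0, t0) can work for H2.

```python
import math

phi = lambda x: math.cos(1.0 / x) if x != 0.0 else 0.0
xbar, y = 0.0, 1.0            # any y > 0 works, since phihat(0, y) = -y < 0

# H1 holds: along t_r = 1/((2r+1)*pi*y) we get phi(xbar + t_r*y) = cos((2r+1)*pi) = -1 < phi(xbar).
t_h1 = [1.0 / ((2 * r + 1) * math.pi * y) for r in range(1, 6)]
print([round(phi(xbar + t * y), 6) for t in t_h1])    # approximately -1 at every term

# H2 fails: every interval (0, t0) contains points t = 1/(2r*pi*y) with phi(xbar + t*y) = 1 > 0.
t_h2 = [1.0 / (2 * r * math.pi * y) for r in range(1, 6)]
print([round(phi(xbar + t * y), 6) for t in t_h2])    # approximately +1 at every term
```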

Definition 4.1 [2]. A function f : X0 → Rk is convexlike if for any x, y ∈ X0 and 0 ≦ λ ≦ 1, there exists z ∈ X0 such that

f(z) ≦ λf(x) + (1 − λ)f(y).

Remark 4.1. The convex and the preinvex functions are convexlike functions.

Lemma 4.1 [3]. Let S be a nonempty set in Rn and ψ : S → Rk be a convexlike function. Then either ψ(x) < 0 has a solution x ∈ S, or λTψ(x) ≧ 0 for all x ∈ S for some λ ∈ Rk, λ ≥ 0, but both alternatives are never true.

Denote M(x̄) = {j ∈ M | hj(x̄) = 0}.

Lemma 4.2. Let x̄ be a (local) weak minimum of (VFP). We suppose that f(x̄) ≧ 0, there exists lim_{x→x̄} hj(x) < 0 for j ∈ M\M(x̄), and one of the following properties holds:

a) fi, f̂i, −gi, −ĝi, (i ∈ P), hj, ĥj, (j ∈ M(x̄)) satisfy H1 at x̄, and there exists r0 ∈ N large enough such that t^i_{r_0} = t'^i_{r_0} = t''^j_{r_0} for any i ∈ P and j ∈ M(x̄), where (t^i_r)_{r∈N}, (t'^i_r)_{r∈N}, (i ∈ P), (t''^j_r)_{r∈N}, (j ∈ M(x̄)) are the sequences from H1 for fi, f̂i, −gi, −ĝi, (i ∈ P), and hj, ĥj, (j ∈ M(x̄)), respectively;

b) fi, f̂i, −gi, −ĝi, (i ∈ P), hj, ĥj, (j ∈ M(x̄)) satisfy H1 at x̄, and t^i_r = t'^i_r = t''^j_r for any r ∈ N, i ∈ P and j ∈ M(x̄);

c) there exists j0 ∈ M(x̄) such that fi, f̂i, −gi, −ĝi, (i ∈ P), hj, ĥj, (j ∈ M(x̄)\{j0}) satisfy H2 at x̄, and hj0, ĥj0 satisfy H1 at x̄;

d) there exists i0 ∈ P such that fi, f̂i, (i ∈ P), −gi, −ĝi, (i ∈ P\{i0}), hj, ĥj, (j ∈ M(x̄)) satisfy H2 at x̄, and −gi0, −ĝi0 satisfy H1 at x̄;

e) there exists i0 ∈ P such that fi, f̂i, (i ∈ P\{i0}), −gi, −ĝi, (i ∈ P), hj, ĥj, (j ∈ M(x̄)) satisfy H2 at x̄, and fi0, f̂i0 satisfy H1 at x̄;

f) fi, f̂i, −gi, −ĝi, (i ∈ P), and hj, ĥj, (j ∈ M(x̄)) satisfy H2 at x̄;


g) f/g, f̂/ĝ, hj, ĥj, (j ∈ M(x̄)) satisfy H1 at x̄, and there exists r0 ∈ N large enough such that t_{r_0} = t^j_{r_0} for any j ∈ M(x̄), where (t_r)_{r∈N}, (t^j_r)_{r∈N}, (j ∈ M(x̄)) are the sequences from H1 for f/g, f̂/ĝ, and hj, ĥj, (j ∈ M(x̄)), respectively;

h) f/g, f̂/ĝ, hj, ĥj, (j ∈ M(x̄)) satisfy H1 at x̄, and t_r = t^j_r for any r ∈ N and j ∈ M(x̄);

i) there exists j0 ∈ M(x̄) such that f/g, f̂/ĝ, hj, ĥj, (j ∈ M(x̄)\{j0}) satisfy H2 at x̄, and hj0, ĥj0 satisfy H1 at x̄;

j) f/g, f̂/ĝ, hj, ĥj, (j ∈ M(x̄)) satisfy H2 at x̄.

Then the system

(4.1)  f̂i(x̄, y) < 0, i ∈ P,  ĝi(x̄, y) > 0, i ∈ P,  ĥj(x̄, y) < 0, j ∈ M(x̄)

has no solution y ∈ Rn.

Proof. We proceed by contradiction. We assume that (4.1) has a solution y ∈ Rn. If y = 0, we get a contradiction to H1 or H2.

Now, let y ≠ 0. For j ∈ M\M(x̄) we have lim_{x→x̄} hj(x) < 0, therefore there exists t̃ > 0 such that

(4.2)  hj(x̄ + ty) < 0, for any t ∈ (0, t̃).

a) According to H1, we have

(4.3)  fi(x̄ + t^i_r y) < fi(x̄), i ∈ P,

(4.4)  gi(x̄ + t'^i_r y) > gi(x̄), i ∈ P,

and

(4.5)  hj(x̄ + t''^j_r y) < hj(x̄), j ∈ M(x̄),

for any r ∈ N.

There exists r0 ∈ N large enough such that t^i_{r_0} = t'^i_{r_0} = t''^j_{r_0} for any i ∈ P and j ∈ M(x̄). Let t_{r_0} = t^i_{r_0} = t'^i_{r_0} = t''^j_{r_0}. Since t^i_r ↘ 0, t'^i_r ↘ 0, t''^j_r ↘ 0 for any i ∈ P and j ∈ M(x̄), and r0 is large enough, we have t_{r_0} ∈ (0, t̃).

Since g(x) > 0 for all x ∈ X0 and f(x̄) ≧ 0, from (4.3) and (4.4), written for r = r0, we get

(4.6)  f(x̄ + t_{r_0} y)/g(x̄ + t_{r_0} y) < f(x̄)/g(x̄).


Since x̄ ∈ X, we have h(x̄) ≦ 0, and now from (4.5), written for r = r0, we get

(4.7)  hj(x̄ + t_{r_0} y) < 0, for any j ∈ M(x̄).

Now, from (4.2), t_{r_0} ∈ (0, t̃), and (4.7) we obtain that hj(x̄ + t_{r_0} y) < 0 for any j ∈ M; therefore x̄ + t_{r_0} y ∈ X.

This and (4.6) contradict the fact that x̄ is a (local) weak minimum of (VFP).

b) We have b)⇒ a).

c) From H2, for any i ∈ P and j ∈ M(x̄)\{j0} there exist t^i_0, t'^i_0, t''^j_0 > 0 such that

fi(x̄ + ty) < fi(x̄), for any t ∈ (0, t^i_0),
gi(x̄ + ty) > gi(x̄), for any t ∈ (0, t'^i_0), and
hj(x̄ + ty) < hj(x̄), for any t ∈ (0, t''^j_0).

Let t0 = min_{i∈P, j∈M(x̄)\{j0}} {t^i_0, t'^i_0, t''^j_0}. It follows that t0 > 0 and, for any t ∈ (0, t0),

fi(x̄ + ty) < fi(x̄), i ∈ P,
gi(x̄ + ty) > gi(x̄), i ∈ P,
hj(x̄ + ty) < hj(x̄), j ∈ M(x̄)\{j0}.

From H1, there exists a sequence t^{j_0}_r ↘ 0 such that hj0(x̄ + t^{j_0}_r y) < hj0(x̄), for any r ∈ N.

Since t^{j_0}_r ↘ 0 and t0 > 0, there exists r0 ∈ N such that t^{j_0}_r ∈ (0, t0), for any r > r0.

Let t_r = t^{j_0}_{r+r_0}, for any r ∈ N. It follows that t_r ↘ 0 and, for any r ∈ N,

fi(x̄ + t_r y) < fi(x̄), i ∈ P,
gi(x̄ + t_r y) > gi(x̄), i ∈ P,
hj(x̄ + t_r y) < hj(x̄), j ∈ M(x̄).

Now we apply b).

d), e) Similar to c).

f) It follows from Proposition 4.1 and c), d) or e).

g) Since y is a solution of (4.1), we get

f̂(x̄, y)/ĝ(x̄, y) < 0,  ĥj(x̄, y) < 0, j ∈ M(x̄).


According to H1, we have, for any r ∈ N, f(x̄ + t_r y)/g(x̄ + t_r y) < f(x̄)/g(x̄) and hj(x̄ + t^j_r y) < hj(x̄), j ∈ M(x̄).

From here on, the proof is similar to a).

h) We have h)⇒ g).

i) Similar to c), applying h) instead of b).

j) It follows from Proposition 4.1 and i).

Theorem 4.1 (Fritz John type necessary optimality conditions). Let x̄ be a (local) weak minimum of (VFP). We suppose that there exists lim_{x→x̄} hj(x) < 0 for j ∈ M\M(x̄), f(x̄) ≧ 0, ((f̂i(x̄, y))_{i∈P}, (−ĝi(x̄, y))_{i∈P}, (ĥj(x̄, y))_{j∈M(x̄)}) is a convexlike function of y on Rn, and one of properties a)–j) from Lemma 4.2 holds. Then there exist λ0, u0 ∈ Rp, v0 ∈ Rm such that

(4.8)  ∑_{i=1}^{p} λ0i f̂i(x̄, y) − ∑_{i=1}^{p} u0i ĝi(x̄, y) + ∑_{j=1}^{m} v0j ĥj(x̄, y) ≧ 0 for all y ∈ Rn,

(4.9)  v0Th(x̄) = 0,

(4.10)  (λ0, u0, v0) ≥ 0.

Proof. From Lemma 4.2, the system (4.1) has no solution y ∈ Rn. From Lemma 4.1, there exist λ0, u0 ∈ Rp, v0j ∈ R (j ∈ M(x̄)), such that (λ0, u0, (v0j)_{j∈M(x̄)}) ≥ 0 and

(4.11)  ∑_{i=1}^{p} λ0i f̂i(x̄, y) − ∑_{i=1}^{p} u0i ĝi(x̄, y) + ∑_{j∈M(x̄)} v0j ĥj(x̄, y) ≧ 0 for all y ∈ Rn.

We put v0j = 0 for j ∈ M\M(x̄). From (4.11) we get (4.8). Finally, (4.9) and (4.10) follow obviously.

We consider the following multiobjective programming problem:

(VP)   min ϕ(x) = (ϕ1(x), . . . , ϕp(x))

subject to

hj(x) ≦ 0, j = 1, 2, . . . , m,  x ∈ X0.

In the same way as Lemma 4.2, one can prove the following lemma.

Lemma 4.3. Let x̄ be a (local) weak minimum of (VP). We suppose that, for j ∈ M\M(x̄), there exists lim_{x→x̄} hj(x) < 0, and one of the following properties holds:


a) ϕi, ϕ̂i, (i ∈ P), and hj, ĥj, (j ∈ M(x̄)) satisfy H1 at x̄, and there exists r0 ∈ N large enough such that t^i_{r_0} = t'^j_{r_0} for any i ∈ P and j ∈ M(x̄), where (t^i_r)_{r∈N}, (i ∈ P), (t'^j_r)_{r∈N}, (j ∈ M(x̄)) are the sequences from H1 for ϕi, ϕ̂i, (i ∈ P), and hj, ĥj, (j ∈ M(x̄)), respectively;

b) ϕi, ϕ̂i, (i ∈ P), and hj, ĥj, (j ∈ M(x̄)) satisfy H1 at x̄, and t^i_r = t'^j_r for any r ∈ N, i ∈ P and j ∈ M(x̄);

c) there exists j0 ∈ M(x̄) such that ϕi, ϕ̂i, (i ∈ P), hj, ĥj, (j ∈ M(x̄)\{j0}) satisfy H2 at x̄, and hj0, ĥj0 satisfy H1 at x̄;

d) there exists i0 ∈ P such that ϕi, ϕ̂i, (i ∈ P\{i0}), hj, ĥj, (j ∈ M(x̄)) satisfy H2 at x̄, and ϕi0, ϕ̂i0 satisfy H1 at x̄;

e) ϕi, ϕ̂i, (i ∈ P), and hj, ĥj, (j ∈ M(x̄)) satisfy H2 at x̄;

f) ϕ, ϕ̂, hj, ĥj, (j ∈ M(x̄)) satisfy H1 at x̄, and there exists r0 ∈ N large enough such that t_{r_0} = t^j_{r_0} for any j ∈ M(x̄), where (t_r)_{r∈N}, (t^j_r)_{r∈N}, (j ∈ M(x̄)) are the sequences from H1 for ϕ, ϕ̂, and hj, ĥj, (j ∈ M(x̄)), respectively;

g) ϕ, ϕ̂, hj, ĥj, (j ∈ M(x̄)) satisfy H1 at x̄, and t_r = t^j_r for any r ∈ N and j ∈ M(x̄);

h) there exists j0 ∈ M(x̄) such that ϕ, ϕ̂, hj, ĥj, (j ∈ M(x̄)\{j0}) satisfy H2 at x̄, and hj0, ĥj0 satisfy H1 at x̄;

i) ϕ, ϕ̂, hj, ĥj, (j ∈ M(x̄)) satisfy H2 at x̄.

Then the system

ϕ̂i(x̄, y) < 0, i ∈ P,  ĥj(x̄, y) < 0, j ∈ M(x̄)

has no solution y ∈ Rn.

Lemma 4.4. Let x̄ be a (local) weak minimum of (VP). We suppose that there exists lim_{x→x̄} hj(x) < 0 for j ∈ M\M(x̄), ((ϕ̂i(x̄, y))_{i∈P}, (ĥj(x̄, y))_{j∈M(x̄)}) is a convexlike function of y on Rn, and one of properties a)–i) from Lemma 4.3 holds. Then there exist λ0 ∈ Rp, v0j ∈ R, (j ∈ M(x̄)), such that

∑_{i=1}^{p} λ0i ϕ̂i(x̄, y) + ∑_{j∈M(x̄)} v0j ĥj(x̄, y) ≧ 0 for all y ∈ Rn,  (λ0, (v0j)_{j∈M(x̄)}) ≥ 0.

Proof. It follows from Lemma 4.3 and Lemma 4.1.

Now we define the H1 constraint qualification (H1CQ).


Definition 4.2. We say that h satisfies the H1CQ at x̄ if hj, ĥj, (j ∈ M(x̄)) satisfy H1 at x̄, and there exists y ∈ Rn such that ĥj(x̄, y) < 0 for any j ∈ M(x̄).

Lemma 4.5. Let x̄ be a (local) weak minimum of (VP). We suppose that there exists lim_{x→x̄} hj(x) < 0 for j ∈ M\M(x̄), h satisfies the H1CQ at x̄, ((ϕ̂i(x̄, y))_{i∈P}, (ĥj(x̄, y))_{j∈M(x̄)}) is a convexlike function of y on Rn, and one of properties a)–i) from Lemma 4.3 holds. Then there exist λ0 ∈ Rp+, v0 ∈ Rm+ such that

∑_{i=1}^{p} λ0i ϕ̂i(x̄, y) + ∑_{j=1}^{m} v0j ĥj(x̄, y) ≧ 0 for any y ∈ Rn,

v0Th(x̄) = 0, λ0Te = 1, where e = (1, 1, . . . , 1)T ∈ Rp.

Proof. Using Lemma 4.4, there exist λ̃ ∈ Rp and ṽj ∈ R, (j ∈ M(x̄)), such that (λ̃, (ṽj)_{j∈M(x̄)}) ≥ 0 and

(4.12)  ∑_{i=1}^{p} λ̃i ϕ̂i(x̄, y) + ∑_{j∈M(x̄)} ṽj ĥj(x̄, y) ≧ 0 for any y ∈ Rn.

Now we prove that λ̃ ≠ 0. We proceed by contradiction.

If λ̃ = 0, from (4.12) we have

(4.13)  ∑_{j∈M(x̄)} ṽj ĥj(x̄, y) ≧ 0 for any y ∈ Rn.

From (λ̃, (ṽj)_{j∈M(x̄)}) ≥ 0 and λ̃ = 0 it follows that (ṽj)_{j∈M(x̄)} ≥ 0. Using this and the fact that h satisfies the H1CQ at x̄, we get ∑_{j∈M(x̄)} ṽj ĥj(x̄, y) < 0 for some y ∈ Rn, which contradicts (4.13).

Therefore λ̃ ≠ 0, and from (λ̃, (ṽj)_{j∈M(x̄)}) ≥ 0 we get λ̃Te > 0.

Let λ0 = (1/(λ̃Te)) λ̃ and v0j = ṽj/(λ̃Te), (j ∈ M(x̄)). It follows that λ0 ≧ 0, v0j ≧ 0, (j ∈ M(x̄)), λ0Te = 1, and from (4.12) we get

(4.14)  ∑_{i=1}^{p} λ0i ϕ̂i(x̄, y) + ∑_{j∈M(x̄)} v0j ĥj(x̄, y) ≧ 0 for any y ∈ Rn.

We put v0j = 0, (j ∈ M\M(x̄)). It follows that v0 ∈ Rm+, v0Th(x̄) = 0, and from (4.14) we get

∑_{i=1}^{p} λ0i ϕ̂i(x̄, y) + ∑_{j=1}^{m} v0j ĥj(x̄, y) ≧ 0 for any y ∈ Rn.


For each u = (u1, . . . , up)T ∈ Rp, we consider the problem

(VFPu)   min (f1(x) − u1 g1(x), . . . , fp(x) − up gp(x))

subject to

hj(x) ≦ 0, j ∈ M,  x ∈ X0.

Lemma 4.6. If x̄ is a (local) weak minimum for (VFP), then x̄ is a (local) weak minimum for (VFPu0), where u0 = f(x̄)/g(x̄).

Proof. If x̄ is a local weak minimum for (VFP), there is a neighborhood V(x̄) around x̄ such that f(x)/g(x) ≮ u0 for all x ∈ X ∩ V(x̄). Since g(x) > 0 for all x ∈ X ∩ V(x̄), it follows that (f1(x) − u01 g1(x), . . . , fp(x) − u0p gp(x)) ≮ 0 = (f1(x̄) − u01 g1(x̄), . . . , fp(x̄) − u0p gp(x̄)) for all x ∈ X ∩ V(x̄), and thus x̄ is a (local) weak minimum for (VFPu0).

If x̄ is a weak minimum for (VFP), we repeat this proof taking V(x̄) = Rn.
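The proof rests on the elementary equivalence fi(x)/gi(x) < u0i ⟺ fi(x) − u0i gi(x) < 0, valid whenever gi(x) > 0. The following small numerical check of this sign equivalence is only illustrative; the concrete f, g, reference point and sample grid are assumptions made for the sketch.

```python
import numpy as np

# Hypothetical scalar pair with g > 0 everywhere; u0 is the ratio at a reference point xbar.
f = lambda x: x**2 + 1.0
g = lambda x: 2.0 + np.cos(x)            # strictly positive on R
xbar = 0.3
u0 = f(xbar) / g(xbar)

for x in np.linspace(-2.0, 2.0, 9):
    ratio_smaller = f(x) / g(x) < u0
    param_negative = f(x) - u0 * g(x) < 0.0
    assert ratio_smaller == param_negative   # the two signs agree because g(x) > 0
print("sign equivalence verified on the sample points")
```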

Theorem 4.2 (Karush-Kuhn-Tucker type necessary optimality conditions). Let x̄ be a (local) weak minimum of (VFP) and u0 = f(x̄)/g(x̄). We suppose that f(x̄) ≧ 0, there exists lim_{x→x̄} hj(x) < 0 for j ∈ M\M(x̄), h satisfies the H1CQ at x̄, ((f̂i(x̄, y) − u0i ĝi(x̄, y))_{i∈P}, (ĥj(x̄, y))_{j∈M(x̄)}) is a convexlike function of y on Rn, and for ϕi = fi − u0i gi, ϕ̂i = f̂i − u0i ĝi, (i ∈ P), and hj, ĥj, (j ∈ M(x̄)), one of properties a)–i) from Lemma 4.3 holds. Then there exist λ0 ∈ Rp+, v0 ∈ Rm+ such that

∑_{i=1}^{p} λ0i (f̂i(x̄, y) − u0i ĝi(x̄, y)) + ∑_{j=1}^{m} v0j ĥj(x̄, y) ≧ 0 for any y ∈ Rn,

v0Th(x̄) = 0,  λ0Te = 1,  u0 ≧ 0.

Proof. From Lemma 4.6 we have that x̄ is a (local) weak minimum of (VFPu0). Since f(x̄) ≧ 0, x̄ ∈ X ⊆ X0 and g(x) > 0 for any x ∈ X0, it follows u0 ≧ 0. Finally, we apply Lemma 4.5 for (VFPu0).

Remark 4.2. Lemmas 4.2, 4.3, 4.4, 4.5 and Theorems 4.1, 4.2 also hold when we replace the assumption “there exists lim_{x→x̄} hj(x) < 0 for j ∈ M\M(x̄)” with “hj is a continuous function at x̄ for j ∈ M\M(x̄)”.

5. SOME EXAMPLES AND COMMENTS

Now we give some examples.


Example 5.1. If ϕ is a differentiable function at x̄, then we take ϕ̂(x̄, y) = yT∇ϕ(x̄) = lim_{t→0} t^{-1}[ϕ(x̄ + ty) − ϕ(x̄)].

Example 5.2. If ϕ is a semidifferentiable function at x̄, then we take ϕ̂(x̄, y) = ϕ+(x̄, y) = lim_{t→0+} t^{-1}[ϕ(x̄ + ty) − ϕ(x̄)].

Example 5.3. We suppose that there exists D+ϕ(x̄, y), and take ϕ̂(x̄, y) = D+ϕ(x̄, y) = lim sup_{t→0+} t^{-1}[ϕ(x̄ + ty) − ϕ(x̄)].

In Examples 5.1, 5.2, 5.3, ϕ and ϕ̂ satisfy both H1 and H2 at x̄.

Example 5.4. We can also take ϕ̂(x̄, y) = D_+ϕ(x̄, y) = lim inf_{t→0+} t^{-1}[ϕ(x̄ + ty) − ϕ(x̄)].

In this case, ϕ and ϕ̂ satisfy H1 but, in general, not H2 at x̄.
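For intuition about the difference between Examples 5.3 and 5.4, the sketch below (the test function x·cos(1/x), the direction y = 1 and the sampling points are assumptions, not taken from the paper) approximates the upper and lower Dini quotients at x̄ = 0: the lim inf is negative even though the difference quotient keeps returning to +1, which is precisely the situation where H1 can hold while H2 fails.

```python
import math

phi = lambda x: x * math.cos(1.0 / x) if x != 0.0 else 0.0
xbar, y = 0.0, 1.0

# Along t_r = 1/(r*pi) the difference quotient equals cos(r*pi) = (-1)^r,
# so it keeps oscillating between -1 and +1 as t_r decreases to 0.
ts = [1.0 / (r * math.pi) for r in range(10, 200)]
quotients = [(phi(xbar + t * y) - phi(xbar)) / t for t in ts]

print("approximate D^+ phi(0, 1) (lim sup):", max(quotients))   # about +1
print("approximate D_+ phi(0, 1) (lim inf):", min(quotients))   # about -1
```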

Definition 5.1 [1]. A function f : Rn → R is said to be subinvex at x ∈ Rn with respect to a given η : Rn × Rn → Rn if there exists an element ξ ∈ Rn such that for all y ∈ Rn, f(y) − f(x) ≧ ξTη(y, x). This ξ is called an η-subgradient of f at x. The set of η-subgradients at x is called the η-subdifferential of f at x and is denoted by ∂ηf(x). Hence a function is subinvex at x ∈ Rn with respect to η if ∂ηf(x) ≠ ∅. The function f is said to be subinvex if it is subinvex at all x ∈ Rn.

Let f : Rn → R be subinvex with respect to a given η : Rn × Rn → Rn at x ∈ Rn, i.e., ∂ηf(x) ≠ ∅. The function f is said to satisfy the assumption (B) at x if there exists v ∈ ∂ηf(x) such that D+f(x, y) ≦ vTy, ∀ y ∈ Rn.

Lemma 5.1 [1, Lemma 2.1]. Let f : Rn → R be subinvex such that 0 ∉ ∂ηf(x). Then the set F0(v) = {y | vTy < 0} ≠ ∅ for every v ∈ ∂ηf(x).

Now we give an example of ϕ and ϕ̂ for which there exists y ∈ Rn such that ϕ̂(x, y) < 0.

Example 5.5. Let ϕ : Rn → R be subinvex with respect to η : Rn × Rn → Rn at x ∈ Rn, such that ϕ satisfies the assumption (B) at x and 0 ∉ ∂ηϕ(x).

Let ϕ̂(x, y) = D+ϕ(x, y). From the assumption (B) and Lemma 5.1 it follows that there exist v ∈ ∂ηϕ(x) and y ∈ Rn such that ϕ̂(x, y) ≦ vTy < 0.

Remark 5.1. Instead of M(x̄) = {j ∈ M | hj(x̄) = 0}, we may define

M(x̄) = {j ∈ M | lim_{x→x̄} hj(x) does not exist, or lim_{x→x̄} hj(x) ≧ 0},

and Lemmas 4.2, 4.3, 4.4, 4.5 and Theorems 4.1, 4.2 also hold without the assumption “there exists lim_{x→x̄} hj(x) < 0 for j ∈ M\M(x̄)”.


REFERENCES

[1] J. Dutta and V. Vetrivel, Mathematical programming with a class of non-smooth functions. J. Syst. Sci. Compl. 15 (2002), 52–60.

[2] K.H. Elster and R. Nehse, Optimality Conditions for Some Nonconvex Problems. Springer-Verlag, New York, 1980.

[3] M. Hayashi and H. Komiya, Perfect duality for convexlike programs. J. Optim. Theory Appl. 38 (1982), 179–189.

[4] S.K. Mishra, S.Y. Wang and K.K. Lai, Multiple objective fractional programming involving semilocally type I-preinvex and related functions. J. Math. Anal. Appl. 310 (2005), 626–640.

[5] C. Niculescu, Optimality and duality in multiobjective fractional programming involving ρ-semilocally type I-preinvex and related functions. J. Math. Anal. Appl. 335 (2007), 7–19.

[6] V. Preda, Optimality and duality in fractional multiple objective programming involving semilocally preinvex and related functions. J. Math. Anal. Appl. 288 (2003), 365–382.

[7] V. Preda, I.M. Stancu-Minasian and A. Bătătorescu, Optimality and duality in nonlinear programming involving semilocally preinvex and related functions. J. Inf. Optim. Sci. 17(3) (1996), 585–596.

[8] V. Preda and I.M. Stancu-Minasian, Duality in multiple objective programming involving semilocally preinvex and related functions. Glas. Mat. 32(52) (1997), 153–165.

Received 11 June 2010

University of Bucharest
Faculty of Mathematics and Computer Science
14 Academiei Street
Bucharest 70109, Romania

University of Pitești
Faculty of Mathematics and Computer Science
1 Târgul din Vale Street
Pitești 110040, jud. Argeș, Romania
