
Neural networks based on three classes of NCP-functions for solving nonlinear complementarity problems

Jan Harold Alcantara, Jein-Shan Chen

Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

Article info

Article history: Received 5 February 2019; Revised 15 April 2019; Accepted 29 May 2019; Available online 31 May 2019. Communicated by Dr. Q. Wei.

Keywords: NCP-function; Natural residual function; Complementarity problem; Neural network; Stability

Abstract

In this paper, we consider a family of neural networks for solving nonlinear complementarity problems (NCP). The neural networks are constructed from the merit functions based on three classes of NCP-functions: the generalized natural residual function and its two symmetrizations. We first characterize the stationary points of the induced merit functions. Growth behavior of the complementarity functions is also described, as this will play an important role in describing the level sets of the merit functions. In addition, the stability of the steepest descent-based neural network model for the NCP is analyzed. We provide numerical simulations to illustrate the theoretical results, and also compare the proposed neural networks with existing neural networks based on other well-known NCP-functions. Numerical results indicate that the performance of the neural network is better when the parameter p associated with the NCP-function is smaller. The efficiency of the neural networks in solving NCPs is also reported.

© 2019 Elsevier B.V. All rights reserved.

1. Introduction and motivation

Given a function F: IR^n → IR^n, the nonlinear complementarity problem (NCP) is to find a point x ∈ IR^n such that

x \ge 0, \quad F(x) \ge 0, \quad \langle x, F(x) \rangle = 0, \qquad (1)

where ⟨·,·⟩ is the Euclidean inner product and ≥ means the component-wise order on IR^n. Throughout this paper, we assume that F is continuously differentiable, and let F = (F_1, ..., F_n)^T with F_i: IR^n → IR for i = 1, ..., n.
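As a quick aside for readers who want to experiment (this sketch is ours, not part of the paper), the three conditions in (1) are straightforward to verify numerically at a candidate point; the linear map F(x) = Mx + q below is a hypothetical example.

```python
import numpy as np

def is_ncp_solution(x, F, tol=1e-8):
    """Check the NCP conditions (1): x >= 0, F(x) >= 0, <x, F(x)> = 0."""
    Fx = F(x)
    return bool(np.all(x >= -tol) and np.all(Fx >= -tol)
                and abs(np.dot(x, Fx)) <= tol)

# Hypothetical example: F(x) = Mx + q with M positive definite.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q

print(is_ncp_solution(np.array([1/3, 1/3]), F))   # True: x > 0 and F(x) = 0
print(is_ncp_solution(np.array([1.0, 1.0]), F))   # False: <x, F(x)> = 4 != 0
```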

For decades, substantial research efforts have been devoted to the study of nonlinear complementarity problems because of their wide range of applications in many areas such as optimization, operations research, engineering, and economics [8,9,12,48]. Some source problems of NCPs include models of equilibrium problems in the aforementioned fields and complementarity conditions in constrained optimization problems [9,12].

There are many methods for solving the NCP (1). In general, these solution methods may be categorized into two classes, depending on whether or not they make use of the so-called NCP-function (see Definition 2.1). Some techniques that usually exploit NCP-functions include the merit function approach [11,19,26], nonsmooth Newton methods [10,45], smoothing methods [4,31], and the regularization approach [17,37]. On the other hand, the interior-point method [29,30] and the proximal point algorithm [33] are some well-known approaches to solve (1) which do not utilize NCP-functions in general. The excellent monograph of Facchinei and Pang [9] provides a thorough survey and discussion of solution methods for complementarity problems and variational inequalities.

The research is supported by the Ministry of Science and Technology, Taiwan.
* Corresponding author.
E-mail addresses: 80640005s@ntnu.edu.tw (J.H. Alcantara), jschen@math.ntnu.edu.tw (J.-S. Chen).

The above numerical approaches can efficiently solve the NCP; however, it is often desirable in scientific and engineering applications to obtain a real-time solution. One promising approach that can provide real-time solutions is the use of neural networks, which were first introduced in optimization by Hopfield and Tank in the 1980s [13,38]. Neural networks based on circuit implementation exhibit real-time processing. Furthermore, prior research shows that neural networks can be used efficiently in linear and nonlinear programming, variational inequalities and nonlinear complementarity problems [2,7,14,15,20,23,42–44,47,49], as well as in other fields [25,28,34,36,39,40,46,50,51,55].

Motivated by the preceding discussion, we construct a new family of neural networks based on recently discovered discrete-type NCP-functions to solve NCPs. Neural networks based on the Fischer–Burmeister (FB) function [23] and the generalized Fischer–Burmeister function [2] have already been studied. The latter NCP-functions, which have been extensively used in different solution methods, are strongly semismooth functions, which often provide efficient performance [9]. In this paper, we explore the use of smooth NCP-functions as building blocks of the proposed neural networks.


Moreover, the NCP-functions we consider herein have piecewise-defined formulas, as opposed to the FB and generalized FB functions which have simple formulations. In turn, the subsequent analysis is more complicated. Nevertheless, we show that the proposed neural networks may offer promising results too. The analysis and numerical reports in this paper, on the other hand, pave the way for the use of piecewise-defined NCP-functions.

This paper is organized as follows: In Section 2, we revisit equivalent reformulations of the NCP (1) using NCP-functions. We also elaborate on the purpose and limitations of the paper. In Section 3, we review some mathematical preliminaries related to nonlinear mappings and stability analysis. We also summarize some important properties of the three classes of NCP-functions we use in constructing the neural networks. In Section 4, we describe the general properties of the neural networks, which include the characterization of stationary points of the induced merit functions. In Section 5, we look at the growth behavior of the three classes of NCP-functions considered. This result will be used to prove the boundedness of the level sets of the induced merit functions. We also prove some stability properties of the neural networks. In Section 6, we present the results of our numerical simulations. Conclusions and some recommendations for future studies are discussed in Section 7.

Throughout the paper, IR^n denotes the space of n-dimensional real column vectors, IR^{m×n} denotes the space of m×n real matrices, and A^T denotes the transpose of A ∈ IR^{m×n}. For any differentiable function f: IR^n → IR, ∇f(x) means the gradient of f at x. For any differentiable mapping F = (F_1, ..., F_m)^T: IR^n → IR^m, ∇F(x) = [∇F_1(x) · · · ∇F_m(x)] ∈ IR^{n×m} denotes the transposed Jacobian of F at x. We assume that p is an odd integer greater than 1, unless otherwise specified.

2. Overview and contributions of the paper

In this section, we give an overview of this research. We begin by looking at equivalent reformulations of the nonlinear complementarity problem (1) using NCP-functions, which are defined as follows.

Definition 2.1. A function φ: IR × IR → IR is called an NCP-function if it satisfies

\phi(a, b) = 0 \iff a \ge 0, \; b \ge 0, \; ab = 0.

The well-known natural residual function given by

\phi_{NR}(a, b) = a - (a - b)_+ = \min\{a, b\}

is an example of an NCP-function, which is widely used in solving the NCP. Recently, in [3], the discrete-type generalization of φ_NR was proposed, described by

\phi^p_{NR}(a, b) = a^p - (a - b)^p_+, \quad \text{where } p > 1 \text{ is an odd integer}. \qquad (2)

It is shown in [3] that φ^p_NR is twice continuously differentiable.
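A minimal sketch (ours, not from the paper) of φ_NR and its discrete-type generalization (2), with p = 3 as a sample value of the odd parameter, shows that both vanish exactly at complementary pairs:

```python
def phi_nr(a, b):
    """Natural residual function: a - (a - b)_+ = min{a, b}."""
    return a - max(a - b, 0.0)

def phi_nr_p(a, b, p=3):
    """Discrete-type generalization (2): a^p - ((a - b)_+)^p, p > 1 odd."""
    return a**p - max(a - b, 0.0)**p

# Both vanish exactly when a >= 0, b >= 0 and ab = 0:
for a, b in [(0.0, 2.0), (3.0, 0.0), (1.0, 1.0), (-1.0, 2.0)]:
    print((a, b), phi_nr(a, b), phi_nr_p(a, b))
```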

However, its surface is not symmetric, which may result in difficulties in designing and analyzing solution methods [16]. To overcome this, two symmetrizations of φ^p_NR are presented in [1]. A natural symmetrization of φ^p_NR is given by

\phi^p_{S\text{-}NR}(a, b) = \begin{cases} a^p - (a - b)^p & \text{if } a > b, \\ a^p = b^p & \text{if } a = b, \\ b^p - (b - a)^p & \text{if } a < b. \end{cases} \qquad (3)

The above NCP-function is symmetric, but is only differentiable on {(a, b) | a ≠ b or a = b = 0}. It was, however, shown in [16] that φ^p_{S-NR} is semismooth and directionally differentiable. The second symmetrization of φ^p_NR is described by

\psi^p_{S\text{-}NR}(a, b) = \begin{cases} a^p b^p - (a - b)^p b^p & \text{if } a > b, \\ a^p b^p = a^{2p} & \text{if } a = b, \\ a^p b^p - (b - a)^p a^p & \text{if } a < b, \end{cases} \qquad (4)

which possesses both differentiability and symmetry. The functions φ^p_NR, φ^p_{S-NR} and ψ^p_{S-NR} are three classes of the four discrete-type families of NCP-functions which were recently discovered, together with the discrete-type generalization of the Fischer–Burmeister function given by

\phi^p_{D\text{-}FB}(a, b) = \left( \sqrt{a^2 + b^2} \right)^p - (a + b)^p. \qquad (5)

A comprehensive discussion of their properties is presented in [16].
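To make the case structure of the symmetrizations (3) and (4) concrete, here is a direct piecewise implementation (our sketch, again with the sample value p = 3), together with a numerical check of symmetry:

```python
def phi_s_nr(a, b, p=3):
    """Natural symmetrization (3) of phi_NR^p; symmetric but nonsmooth on a = b."""
    if a > b:
        return a**p - (a - b)**p
    elif a == b:
        return a**p                      # equals b**p on the diagonal
    else:
        return b**p - (b - a)**p

def psi_s_nr(a, b, p=3):
    """Second symmetrization (4); both differentiable and symmetric."""
    if a > b:
        return a**p * b**p - (a - b)**p * b**p
    elif a == b:
        return a**p * b**p               # equals a**(2p) on the diagonal
    else:
        return a**p * b**p - (b - a)**p * a**p

print(phi_s_nr(1.0, 2.0) == phi_s_nr(2.0, 1.0))   # True
print(psi_s_nr(1.0, 2.0) == psi_s_nr(2.0, 1.0))   # True
```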

To see how an NCP-function φ can be useful in solving NCP (1), we define Φ: IR^n → IR^n by

\Phi(x) = \begin{bmatrix} \phi(x_1, F_1(x)) \\ \vdots \\ \phi(x_n, F_n(x)) \end{bmatrix}. \qquad (6)

It is easy to see that x solves NCP (1) if and only if Φ(x) = 0 (see also Proposition 4.1(a)). Thus, the NCP is equivalent to the nonlinear system of equations Φ(x) = 0. Meanwhile, if φ is an NCP-function, then ψ: IR × IR → IR_+ given by

\psi(a, b) := \tfrac{1}{2} \, | \phi(a, b) |^2 \qquad (7)

is also an NCP-function. Accordingly, if we define Ψ: IR^n → IR_+ by

\Psi(x) = \sum_{i=1}^{n} \psi(x_i, F_i(x)) = \tfrac{1}{2} \, \| \Phi(x) \|^2, \qquad (8)

then the NCP can be reformulated as the minimization problem min_{x ∈ IR^n} Ψ(x). Hence, Ψ given by (8) is a merit function for the NCP; that is, its global minimizer coincides with the solution of the NCP. Consequently, it is only natural to consider the steepest descent-based neural network

\frac{dx(t)}{dt} = -\rho \, \nabla \Psi(x(t)), \quad x(t_0) = x_0, \qquad (9)

where ρ > 0 is a time-scaling factor.
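For intuition, the dynamics (9) can be simulated with a simple forward-Euler loop. The sketch below is ours, not the paper's implementation: it builds the merit function (8) from φ^p_NR with p = 3 and the hypothetical linear F used earlier, and it substitutes a central-difference gradient for the analytic one (formula (13) below); the step size and iteration count are untuned illustrative values.

```python
import numpy as np

def phi_nr_p(a, b, p=3):
    return a**p - max(a - b, 0.0)**p

M = np.array([[2.0, 1.0], [1.0, 2.0]])          # toy monotone F(x) = Mx + q
q = np.array([-1.0, -1.0])

def Psi(x):
    """Merit function (8) built from phi_NR^p."""
    Fx = M @ x + q
    return 0.5 * sum(phi_nr_p(a, b)**2 for a, b in zip(x, Fx))

def numerical_grad(f, x, eps=1e-6):
    """Central differences, standing in for the analytic gradient."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

def simulate(x0, rho=1.0, h=1e-2, steps=20000):
    """Forward-Euler discretization of dx/dt = -rho * grad Psi(x(t))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - h * rho * numerical_grad(Psi, x)
    return x

print(simulate([2.0, -1.0]))   # should approach the NCP solution (1/3, 1/3)
```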

The above neural network (9) is also motivated by the ones considered in [23] and in [2], where the NCP-functions used are the Fischer–Burmeister (FB) function given by

\phi_{FB}(a, b) = \sqrt{a^2 + b^2} - (a + b), \qquad (10)

and the generalized Fischer–Burmeister function given by

\phi^p_{FB}(a, b) = \| (a, b) \|_p - (a + b), \quad \text{where } p \in (1, +\infty), \qquad (11)

respectively. We aim to compare the neural networks based on the generalized natural residual functions (2), (3) and (4) with the well-studied networks based on the FB functions (10) and (11).

One of the contributions of this paper lies in establishing the theoretical properties of the generalized natural residual functions. These are fundamental in designing NCP-based solution methods, and in this paper, we use the neural network approach. Basic properties of these functions are already presented in [16]. The purpose of this paper is to elaborate some more properties and applications of the newly discovered discrete-type classes of NCP-functions given by (2), (3) and (4). Specifically, we look at the properties of their induced merit functions given by (8). First, it is important for us to determine the correspondence between the solutions of NCP (1) and the stationary points of Ψ. From the above discussion (see also Proposition 4.1(d)), we already know that an NCP solution is a stationary point. On the other hand, we also want to determine which stationary points of Ψ are solutions to the NCP. For certain NCP-functions such as the Mangasarian and Solodov function [19], the FB function [11] and the generalized FB function [5], a stationary point of the merit function was shown to be a solution to the NCP when F is monotone or a P0-function. It should be pointed out that these NCP-functions possess the following nice properties:

(P1) ∇_a ψ(a, b) · ∇_b ψ(a, b) ≥ 0 for all (a, b) ∈ IR²; and
(P2) for all (a, b) ∈ IR², ∇_a ψ(a, b) = 0 ⟺ ∇_b ψ(a, b) = 0 ⟺ φ(a, b) = 0.

However, these properties are not possessed by φ^p_NR, φ^p_{S-NR} and ψ^p_{S-NR}, which leads to some difficulties in the subsequent analysis. Hence, we seek other conditions which will guarantee that a stationary point is an NCP solution. Furthermore, we also want to look at the growth behavior of the functions (2), (3) and (4). This will play a key role in characterizing the level sets of the induced merit functions. It must be noted that since the NCP-functions φ^p_{S-NR} and ψ^p_{S-NR} are piecewise-defined functions, the analyses of their growth behavior and of the properties of their induced merit functions are more difficult, as compared with the commonly used FB functions (10) and (11) which have simple formulations.

Another purpose of this paper is to discuss the stability properties of the neural networks based on φ^p_NR, φ^p_{S-NR} and ψ^p_{S-NR}. We further look into different examples to see the influence of p on the convergence of trajectories of the neural network to the NCP solution. Finally, we compare the numerical performance of these three types of neural networks with two well-studied neural networks based on the FB function [23] and the generalized FB function [2].

We recall that a solution x* is said to be degenerate if {i | x*_i = F_i(x*) = 0} is not empty. Note that if x* is degenerate and φ is differentiable at x*, then ∇Φ(x*) is singular. Consequently, one should not expect locally fast convergence of numerical methods based on smooth NCP-functions if the computed solution is degenerate [9,18]. Because of the differentiability of φ^p_NR, φ^p_{S-NR} and ψ^p_{S-NR} on the feasible region of the NCP, it is also expected that the convergence of the trajectories of the neural network (9) to a degenerate solution could be slow. Hence, in this paper, we will give particular attention to nondegenerate NCPs.

3. Preliminaries

In this section, we review some special nonlinear mappings and some properties of φ^p_NR, φ^p_{S-NR} and ψ^p_{S-NR}, as well as some tools from stability theory in dynamical systems that will be crucial in our analysis. We begin by recalling concepts related to nonlinear mappings.

Definition 3.1. Let F = (F_1, ..., F_n)^T: IR^n → IR^n. Then, the mapping F is said to be

(a) monotone if ⟨x − y, F(x) − F(y)⟩ ≥ 0 for all x, y ∈ IR^n;
(b) strictly monotone if ⟨x − y, F(x) − F(y)⟩ > 0 for all x, y ∈ IR^n and x ≠ y;
(c) strongly monotone with modulus μ > 0 if ⟨x − y, F(x) − F(y)⟩ ≥ μ‖x − y‖² for all x, y ∈ IR^n;
(d) a P0-function if max_{1≤i≤n, x_i≠y_i} (x_i − y_i)(F_i(x) − F_i(y)) ≥ 0 for all x, y ∈ IR^n and x ≠ y;
(e) a P-function if max_{1≤i≤n} (x_i − y_i)(F_i(x) − F_i(y)) > 0 for all x, y ∈ IR^n and x ≠ y;
(f) a uniform P-function with modulus κ > 0 if max_{1≤i≤n} (x_i − y_i)(F_i(x) − F_i(y)) ≥ κ‖x − y‖² for all x, y ∈ IR^n.

From Definition 3.1, the following one-sided implications can be obtained: F is strongly monotone ⇒ F is a uniform P-function ⇒ F is a P0-function.

It is known that F is monotone (resp. strictly monotone) if and only if ∇F(x) is positive semidefinite (resp. positive definite) for all x ∈ IR^n. In addition, F is a P0-function if and only if ∇F(x) is a P0-matrix for all x ∈ IR^n; that is, its principal minors are nonnegative. Further, if ∇F(x) is a P-matrix (that is, its principal minors are positive) for all x ∈ IR^n, then F is a P-function. However, we point out that a P-function does not necessarily have a Jacobian which is a P-matrix.

The following characterization of P-matrices and P0-matrices will be useful in our analysis.

Lemma 3.1. A matrix M ∈ IR^{n×n} is a P-matrix (resp. a P0-matrix) if and only if whenever x_i(Mx)_i ≤ 0 (resp. x_i(Mx)_i < 0) for all i, then x = 0.

Proof. Please see [6].
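For small matrices, the principal-minor characterizations above can be checked by brute force. The following sketch (ours, purely illustrative) enumerates all principal minors; the enumeration is exponential in n.

```python
import numpy as np
from itertools import combinations

def principal_minors(M):
    """Yield det(M[S, S]) over all nonempty index sets S."""
    n = M.shape[0]
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            yield np.linalg.det(M[np.ix_(S, S)])

def is_P_matrix(M, tol=1e-12):
    return all(d > tol for d in principal_minors(M))      # all minors positive

def is_P0_matrix(M, tol=1e-12):
    return all(d > -tol for d in principal_minors(M))     # all minors nonnegative

M = np.array([[2.0, 1.0], [1.0, 2.0]])    # positive definite, hence a P-matrix
print(is_P_matrix(M), is_P0_matrix(M))    # True True
```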

The following two lemmas summarize some properties of φ^p_NR, φ^p_{S-NR} and ψ^p_{S-NR} that will be useful in our subsequent analysis.

Lemma 3.2. Let p > 1 be an odd integer. Then, the following hold.

(a) The function φ^p_NR is twice continuously differentiable. Its gradient is given by

\nabla \phi^p_{NR}(a, b) = p \begin{bmatrix} a^{p-1} - (a - b)^{p-2}(a - b)_+ \\ (a - b)^{p-2}(a - b)_+ \end{bmatrix}.

(b) The function φ^p_{S-NR} is twice continuously differentiable on the set {(a, b) | a ≠ b}. Its gradient is given by

\nabla \phi^p_{S\text{-}NR}(a, b) = \begin{cases} p \, [\, a^{p-1} - (a - b)^{p-1}, \; (a - b)^{p-1} \,]^T & \text{if } a > b, \\ p \, [\, (b - a)^{p-1}, \; b^{p-1} - (b - a)^{p-1} \,]^T & \text{if } a < b. \end{cases}

Further, φ^p_{S-NR} is differentiable at (0, 0) with ∇φ^p_{S-NR}(0, 0) = [0, 0]^T.

(c) The function ψ^p_{S-NR} is twice continuously differentiable. Its gradient is given by

\nabla \psi^p_{S\text{-}NR}(a, b) = \begin{cases} p \, [\, a^{p-1}b^p - (a - b)^{p-1}b^p, \; a^p b^{p-1} - (a - b)^p b^{p-1} + (a - b)^{p-1}b^p \,]^T & \text{if } a > b, \\ p \, [\, a^{p-1}b^p, \; a^p b^{p-1} \,]^T = p \, a^{2p-1} [1, 1]^T & \text{if } a = b, \\ p \, [\, a^{p-1}b^p - (b - a)^p a^{p-1} + (b - a)^{p-1}a^p, \; a^p b^{p-1} - (b - a)^{p-1}a^p \,]^T & \text{if } a < b. \end{cases}

Proof. Please see [3, Proposition 2.2], [1, Propositions 2.2 and 3.2], and [16, Proposition 4.3].
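As a quick sanity check (ours, not part of the paper), the closed-form gradient in part (a) can be compared against central finite differences at a few sample points, again with p = 3:

```python
import numpy as np

def grad_phi_nr_p(a, b, p=3):
    """Gradient of phi_NR^p from Lemma 3.2(a)."""
    c = (a - b)**(p - 2) * max(a - b, 0.0)      # equals ((a - b)_+)^(p-1)
    return np.array([p * (a**(p - 1) - c), p * c])

def fd_grad(f, a, b, eps=1e-6):
    return np.array([(f(a + eps, b) - f(a - eps, b)) / (2 * eps),
                     (f(a, b + eps) - f(a, b - eps)) / (2 * eps)])

phi = lambda a, b, p=3: a**p - max(a - b, 0.0)**p
for a, b in [(1.5, 0.5), (-0.5, 1.0), (0.7, 0.7)]:
    assert np.allclose(grad_phi_nr_p(a, b), fd_grad(phi, a, b), atol=1e-5)
print("Lemma 3.2(a) gradient matches finite differences")
```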

Lemma 3.3. Let p > 1 be a positive odd integer. Then, the following hold.

(a) If φ ∈ {φ^p_NR, φ^p_{S-NR}}, then φ(a, b) > 0 ⟺ a > 0, b > 0. On the other hand, ψ^p_{S-NR}(a, b) ≥ 0 on IR².

(b) For φ^p_NR,

\nabla_a \phi^p_{NR}(a, b) \cdot \nabla_b \phi^p_{NR}(a, b) \;\; \begin{cases} > 0 & \text{on } \{(a, b) \mid a > b > 0 \text{ or } a > b > 2a\}, \\ = 0 & \text{on } \{(a, b) \mid a \le b \text{ or } a > b = 2a \text{ or } a > b = 0\}, \\ < 0 & \text{otherwise}. \end{cases}

Moreover, ∇_a φ^p_{S-NR}(a, b) · ∇_b φ^p_{S-NR}(a, b) > 0 on {(a, b) | a > b > 0} ∪ {(a, b) | b > a > 0}, and ∇_a ψ^p_{S-NR}(a, b) · ∇_b ψ^p_{S-NR}(a, b) > 0 on the first quadrant IR²₊₊.

(c) If φ ∈ {φ^p_NR, φ^p_{S-NR}}, then ∇_a φ(a, b) · ∇_b φ(a, b) = 0 provided that φ(a, b) = 0. On the other hand, ψ^p_{S-NR}(a, b) = 0 ⟺ ∇ψ^p_{S-NR}(a, b) = 0. In particular, we have ∇_a ψ^p_{S-NR}(a, b) · ∇_b ψ^p_{S-NR}(a, b) = 0 provided that ψ^p_{S-NR}(a, b) = 0.

Proof. Please see [16, Propositions 3.4, 4.5, and 5.4].

Next, we recall some materials about first-order ordinary differential equations (ODEs):

\dot{x}(t) = H(x(t)), \quad x(t_0) = x_0 \in \mathbb{R}^n, \qquad (12)

where H: IR^n → IR^n is a mapping. We also introduce three kinds of stability that we will consider later. These materials can be found in ODE textbooks; see [27].

Definition 3.2. A point x* = x(t*) is called an equilibrium point or a steady state of the dynamic system (12) if H(x*) = 0. If there is a neighborhood Ω* ⊆ IR^n of x* such that H(x*) = 0 and H(x) ≠ 0 for all x ∈ Ω*\{x*}, then x* is called an isolated equilibrium point.

Lemma 3.4. Assume that H: IR^n → IR^n is a continuous mapping. Then, for any t_0 ≥ 0 and x_0 ∈ IR^n, there exists a local solution x(t) of (12) with t ∈ [t_0, τ) for some τ > t_0. If, in addition, H is locally Lipschitz continuous at x_0, then the solution is unique; if H is Lipschitz continuous on IR^n, then τ can be extended to +∞.

Definition 3.3 (Stability in the sense of Lyapunov). Let x(t) be a solution of (12). An isolated equilibrium point x* is Lyapunov stable if for any x_0 = x(t_0) and any ε > 0, there exists a δ > 0 such that ‖x(t) − x*‖ < ε for all t ≥ t_0 whenever ‖x(t_0) − x*‖ < δ.

Definition 3.4 (Asymptotic stability). An isolated equilibrium point x* is said to be asymptotically stable if, in addition to being Lyapunov stable, it has the property that x(t) → x* as t → ∞ for all ‖x(t_0) − x*‖ < δ.

Definition 3.5 (Lyapunov function). Let Ω ⊆ IR^n be an open neighborhood of x̄. A continuously differentiable function W: IR^n → IR is said to be a Lyapunov function at the state x̄ over the set Ω for Eq. (12) if

W(\bar{x}) = 0, \qquad W(x) > 0 \quad \forall x \in \Omega \setminus \{\bar{x}\},

\frac{dW(x(t))}{dt} = \nabla W(x(t))^T H(x(t)) \le 0, \quad \forall x \in \Omega.

Lemma 3.5.

(a) An isolated equilibrium point x* is Lyapunov stable if there exists a Lyapunov function over some neighborhood Ω* of x*.

(b) An isolated equilibrium point x* is asymptotically stable if there is a Lyapunov function over some neighborhood Ω* of x* such that dW(x(t))/dt < 0 for all x ∈ Ω*\{x*}.

Definition 3.6 (Exponential stability). An isolated equilibrium point x* is exponentially stable if there exists a δ > 0 such that an arbitrary solution x(t) of (12) with the initial condition x(t_0) = x_0 and ‖x(t_0) − x*‖ < δ is well-defined on [0, +∞) and satisfies

\| x(t) - x^* \| \le c \, e^{-\omega t} \, \| x(t_0) - x^* \|, \quad \forall t \ge t_0,

where c > 0 and ω > 0 are constants independent of the initial point.

The following result will also be helpful in our stability analysis.

Lemma 3.6. Let F be locally Lipschitzian. If all V ∈ ∂F(x) are nonsingular, then there is a neighborhood N(x) of x and a constant C such that for any y ∈ N(x) and any V ∈ ∂F(y), V is nonsingular and ‖V^{−1}‖ ≤ C.

Proof. Please see [32, Proposition 3.1].

4. Neural network model

In this section, we describe the properties of the neural network (9) based on the functions φ^p_NR, φ^p_{S-NR} and ψ^p_{S-NR}. Before this, we first summarize some important properties of Ψ as defined in (8) for general NCP-functions. Proposition 4.1(a) is in fact Lemma 2.2 in [19]. On the other hand, Proposition 4.1(b) and (e) are true for all gradient systems (9).

Proposition 4.1. Let Ψ: IR^n → IR_+ be defined as in (8), with φ being any NCP-function, and let ψ be as in (7). Suppose that F is continuously differentiable. Then,

(a) Ψ(x) ≥ 0 for all x ∈ IR^n. If the NCP (1) has a solution, then x* is a global minimizer of Ψ if and only if x* solves the NCP.

(b) Ψ(x(t)) is a nonincreasing function of t, where x(t) is a solution of (9).

(c) Let x ∈ IR^n, and suppose that φ is differentiable at (x_i, F_i(x)) for each i = 1, ..., n. Then

\nabla \Psi(x) = \nabla_a \psi(x, F(x)) + \nabla F(x) \, \nabla_b \psi(x, F(x)), \qquad (13)

where

\nabla_a \psi(x, F(x)) := \left[ \nabla_a \psi(x_1, F_1(x)), \ldots, \nabla_a \psi(x_n, F_n(x)) \right]^T,
\nabla_b \psi(x, F(x)) := \left[ \nabla_b \psi(x_1, F_1(x)), \ldots, \nabla_b \psi(x_n, F_n(x)) \right]^T.

(d) Let x* be a solution to the NCP such that φ is differentiable at (x*_i, F_i(x*)) for each i = 1, ..., n. Then, x* is a stationary point of Ψ.

(e) Every accumulation point of a solution x(t) of the neural network (9) is an equilibrium point.

Proof. (a) It is clear that Ψ ≥ 0. Notice that Ψ(x) = 0 if and only if Φ(x) = 0, which occurs if and only if φ(x_i, F_i(x)) = 0 for all i. Since φ is an NCP-function, this is equivalent to having x_i ≥ 0, F_i(x) ≥ 0 and x_i F_i(x) = 0. Thus, Ψ(x) = 0 if and only if x ≥ 0, F(x) ≥ 0 and ⟨x, F(x)⟩ = 0. This proves part (a).

(b) The desired result follows from

\frac{d\Psi(x(t))}{dt} = \nabla\Psi(x(t))^T \frac{dx}{dt} = \nabla\Psi(x(t))^T \left( -\rho \, \nabla\Psi(x(t)) \right) = -\rho \left\| \nabla\Psi(x(t)) \right\|^2 \le 0

for all solutions x(t).

(c) The formula is clear from the chain rule.

(d) First, note that from Eq. (7), we have ∇ψ(a, b) = φ(a, b) · ∇φ(a, b). Thus, if x* is a solution to the NCP, it gives ∇ψ(x*_i, F_i(x*)) = 0 for all i = 1, ..., n. Then, it follows from formula (13) in part (c) that ∇Ψ(x*) = 0. That is, x* is a stationary point of Ψ.

(e) Please see page 232 of [41].
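To make formula (13) concrete, the following sketch (ours) assembles ∇Ψ for φ = φ^p_NR via ∇ψ(a, b) = φ(a, b)·∇φ(a, b) from the proof of part (d), using the same hypothetical linear F as in the earlier sketches, and then verifies the descent property of part (b) over one explicit Euler step of (9):

```python
import numpy as np

p = 3
phi = lambda a, b: a**p - max(a - b, 0.0)**p            # phi_NR^p

def grad_phi(a, b):                                     # Lemma 3.2(a)
    c = (a - b)**(p - 2) * max(a - b, 0.0)
    return np.array([p * (a**(p - 1) - c), p * c])

def grad_Psi(x, F, jacF):
    """Formula (13): grad Psi = grad_a psi + jacF^T @ grad_b psi,
    where grad psi(a, b) = phi(a, b) * grad phi(a, b) by (7)."""
    Fx, J = F(x), jacF(x)
    G = np.array([phi(xi, fi) * grad_phi(xi, fi) for xi, fi in zip(x, Fx)])
    return G[:, 0] + J.T @ G[:, 1]

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F, jacF = (lambda x: M @ x + q), (lambda x: M)
Psi = lambda x: 0.5 * sum(phi(xi, fi)**2 for xi, fi in zip(x, F(x)))

x = np.array([2.0, -1.0])
x_next = x - 1e-3 * grad_Psi(x, F, jacF)   # one Euler step of (9) with rho = 1
print(Psi(x_next) < Psi(x))                # True: Psi decreases along (9)
```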

We adopt the neural network (9) with Ψ(x) = (1/2)‖Φ(x)‖², where Φ is given by (6) with φ ∈ {φ^p_NR, φ^p_{S-NR}, ψ^p_{S-NR}}. The function Φ corresponding to φ^p_NR, φ^p_{S-NR} and ψ^p_{S-NR} is denoted, respectively, by Φ^p_NR, Φ^p_{S1-NR} and Φ^p_{S2-NR}. Their corresponding merit functions will be denoted by Ψ^p_NR, Ψ^p_{S1-NR} and Ψ^p_{S2-NR}, respectively. We note that by formula (13) and the differentiability of Ψ ∈ {Ψ^p_NR, Ψ^p_{S1-NR}, Ψ^p_{S2-NR}} (see Proposition 4.2), the neural network (9) can be implemented on hardware as in Fig. 1.

We first establish the existence and uniqueness of the solutions of neural network (9).

Proposition 4.2. Let p > 1 be an odd integer. Then, the following hold.

(a) Ψ^p_NR and Ψ^p_{S2-NR} are both continuously differentiable on IR^n.

(b) Ψ^p_{S1-NR} is continuously differentiable on the open set {x ∈ IR^n | x_i ≠ F_i(x), i = 1, 2, ..., n}.
