Two unconstrained optimization approaches for the Euclidean j-centrum location problem

Shaohua Pan $^{a,*}$, Jein-Shan Chen $^{b}$

$^a$ School of Mathematical Sciences, South China University of Technology, Guangzhou, 510640, China

$^b$ Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

Abstract

Consider the single-facility Euclidean $j$-centrum location problem in $\mathbb{R}^n$. This problem is a generalization of the classical Euclidean 1-median problem and 1-center problem. In this paper, we develop two efficient algorithms, based on unconstrained optimization techniques, that are particularly suitable for problems where $n$ is large. The first algorithm is based on the neural networks smooth approximation for the plus function and reduces the problem to an unconstrained smooth convex minimization problem. The second algorithm is based on the Fischer–Burmeister merit function for the second-order cone complementarity problem and transforms the KKT system of the second-order cone programming reformulation of the problem into an unconstrained smooth minimization problem. Our computational experiments indicate that both methods are extremely efficient for large problems and that the first algorithm is able to solve problems of dimension $n$ up to 10,000 efficiently.

© 2007 Published by Elsevier Inc.

Keywords: The Euclidean $j$-centrum problem; Smoothing function; Merit function; Second-order cone programming

1. Introduction

Given a positive integer $j$ in the interval $[1, m]$, the single-facility Euclidean $j$-centrum problem in $\mathbb{R}^n$ concerns locating a new facility so as to minimize the sum of the $j$ largest weighted Euclidean distances to the $m$ existing facilities. Let $a_i \in \mathbb{R}^n$ represent the position of the $i$th existing facility and $x \in \mathbb{R}^n$ denote the unknown position of the new facility. Let

$$f_i(x) = \omega_i \sqrt{(x_1 - a_{i1})^2 + (x_2 - a_{i2})^2 + \cdots + (x_n - a_{in})^2}, \qquad i = 1, 2, \ldots, m,$$

0096-3003/$ - see front matter © 2007 Published by Elsevier Inc.
doi:10.1016/j.amc.2006.12.014

* Corresponding author.

E-mail addresses: shhpan@scut.edu.cn (S. Pan), jschen@math.ntnu.edu.tw (J.-S. Chen).

www.elsevier.com/locate/amc


be the weighted Euclidean distance between the new facility and the $i$th existing facility, where $\omega_i > 0$ is the associated weight. Then the problem can be formulated as the following minimization problem:

$$\min_{x \in \mathbb{R}^n} \Phi_j(x) := \sum_{l=1}^{j} f_{[l]}(x), \qquad (1)$$

where $f_{[1]}(x), f_{[2]}(x), \ldots, f_{[m]}(x)$ are the functions obtained by sorting $f_1(x), f_2(x), \ldots, f_m(x)$ in nonincreasing order, namely, for any $x \in \mathbb{R}^n$,

$$f_{[1]}(x) \ge f_{[2]}(x) \ge \cdots \ge f_{[j]}(x) \ge \cdots \ge f_{[m]}(x).$$
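To make the objective concrete, $\Phi_j(x)$ can be evaluated with one pass over the weighted distances and a sort. A minimal Python/NumPy sketch on synthetic data of our own choosing (the names `phi_j`, `a`, `weights` are ours, not from the paper):

```python
import numpy as np

def phi_j(x, a, weights, j):
    """Objective of the j-centrum problem: the sum of the j largest
    weighted Euclidean distances f_i(x) = omega_i * ||x - a_i||."""
    f = weights * np.linalg.norm(x - a, axis=1)  # f_i(x), i = 1..m
    return np.sort(f)[-j:].sum()                 # sum of the j largest values

# j = m recovers the 1-median objective, j = 1 the 1-center objective.
rng = np.random.default_rng(0)
m, n = 50, 3
a = rng.random((m, n))          # existing facility positions (illustrative)
weights = np.ones(m)            # unit weights for illustration
x = rng.random(n)
f = weights * np.linalg.norm(x - a, axis=1)
assert np.isclose(phi_j(x, a, weights, m), f.sum())   # j = m: 1-median
assert np.isclose(phi_j(x, a, weights, 1), f.max())   # j = 1: 1-center
```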

There are many different kinds of facility location problems, and various methods have been proposed for them; see [9,10,14] and the related literature on the web site maintained by EWGLA (Euro Working Group on Locational Analysis). The single-facility Euclidean $j$-centrum problem studied here generalizes the classical Euclidean 1-median problem (corresponding to the case $j = m$) and 1-center problem (corresponding to the case $j = 1$) from the viewpoint of the solution concept. To our knowledge, the $j$-centrum concept was first defined by Slater [25] and Andreatta and Mason [3,4] for the discrete single-facility location problem, and was later extended in [23] to general location problems covering discrete and continuous decisions. In addition, Tamir et al. [22,28] recently did some excellent work related to the $j$-centrum location problem, especially the rectilinear $j$-centrum problem.

Current research on Euclidean facility location problems focuses mainly on the 1-median problem, for which many practical and efficient algorithms have been designed since Weiszfeld presented a simple iterative algorithm in 1937. These include the hyperboloid approximation procedure [11], the interior point algorithms [1,29,30], the smoothing Newton methods [19,20], and the merit function approach [7]. However, solution methods for the $j$-centrum location problem are rarely seen in the literature, except in [2,21], where the problem of minimizing the $j$ largest Euclidean norms is only mentioned as a special example of a second-order cone program and consequently can be solved by an interior point method. The main purpose of this paper is to develop two efficient algorithms for the single-facility Euclidean $j$-centrum problem in $\mathbb{R}^n$ that can handle the cases where $n$ is large (say, in the order of thousands).

Note that problem (1) is a nonsmooth convex optimization problem. Due to the nondifferentiability of the objective function, gradient-based algorithms cannot be used to solve the problem directly. To overcome this difficulty, we reduce (1) to a nonsmooth problem involving only the plus function $\max\{0, t\}$, and then utilize the neural network smoothing function [6] to give a convex approximation. Based on the approximation problem, we propose a globally convergent quasi-Newton algorithm. In addition, we reformulate (1) as a standard primal second-order cone programming (SOCP) problem to circumvent its nondifferentiability. This SOCP reformulation is completely different from the ones in [2,21], and in particular we use a merit function approach rather than an interior point method to solve it. More specifically, we convert its Karush–Kuhn–Tucker (KKT) system into an equivalent unconstrained smooth minimization problem by means of the Fischer–Burmeister merit function [8] for second-order cone complementarity problems (SOCCPs), and then solve this smooth optimization problem with a limited-memory BFGS method. In contrast to interior point methods, our unconstrained optimization methods do not require an interior starting point and have the advantage of requiring less work per iteration, thereby lending themselves to large problems.

The rest of this paper proceeds as follows. In Section 2, we derive a smooth approximation to (1). Based on this smooth approximation, we design a globally convergent quasi-Newton algorithm in Section 3. In Section 4, we present the detailed process of reformulating (1) as a standard primal SOCP problem. Based on its KKT system and the Fischer–Burmeister merit function for SOCCPs, we develop another quasi-Newton algorithm in Section 5. In Section 6, we report our preliminary computational results and compare our algorithms with SeDuMi 1.05 (a primal-dual interior point solver for SOCP and semidefinite programming). The results show that our first algorithm is the most effective in terms of CPU time and is able to solve problems of dimension $n$ up to 10,000, whereas our second algorithm is comparable with, and even superior to, SeDuMi for moderate problems (say, in the order of hundreds). Finally, we conclude this paper in Section 7.

In this paper, unless otherwise stated, all vectors are column vectors. We use $I_d$ to denote the $d \times d$ identity matrix and $0_d$ to denote the zero vector in $\mathbb{R}^d$. To represent a large matrix with several small matrices, we use semicolons ";" for column concatenation and commas "," for row concatenation. This notation also applies to vectors. For a convex function $f: \mathbb{R}^n \to \mathbb{R}$, we let $\partial f(x)$ denote the subdifferential of $f$ at $x$. Note that $f$ is minimized at $x$ over $\mathbb{R}^n$ if and only if $0 \in \partial f(x)$. For the calculus rules on subdifferentials of convex functions, please refer to [16] or [24].

2. Smooth approximation

In this section, we reduce problem (1) to a nonsmooth optimization problem involving only the plus function $\max\{0, t\}$, and then give a smooth approximation via the neural networks smoothing function proposed in [6]. We also demonstrate that the smooth approximation can be generated by regularizing problem (1) with a binary entropy function.

First, it is not hard to verify that the function $\Phi_j(x)$ in (1) can be expressed as

$$\Phi_j(x) = \max\left\{ \sum_{i=1}^m \lambda_i f_i(x) \;:\; \sum_{i=1}^m \lambda_i = j,\ 0 \le \lambda_i \le 1,\ i = 1, 2, \ldots, m \right\}, \qquad (2)$$

since, on the one hand, for any feasible point $\lambda = (\lambda_1, \ldots, \lambda_m)^T$ of the maximization problem in (2), there always holds

$$\sum_{i=1}^m \lambda_i f_i(x) = \lambda_{i_1} f_{[1]}(x) + \lambda_{i_2} f_{[2]}(x) + \cdots + \lambda_{i_m} f_{[m]}(x)$$
$$\le \lambda_{i_1} f_{[1]}(x) + \cdots + \lambda_{i_j} f_{[j]}(x) + \lambda_{i_{j+1}} f_{[j]}(x) + \cdots + \lambda_{i_m} f_{[j]}(x)$$
$$\le \lambda_{i_1} f_{[1]}(x) + \cdots + \lambda_{i_j} f_{[j]}(x) + (j - \lambda_{i_1} - \cdots - \lambda_{i_j}) f_{[j]}(x) \le \Phi_j(x);$$

on the other hand, there exists a feasible point $\tilde\lambda = (\tilde\lambda_1, \ldots, \tilde\lambda_m)^T$ such that $\sum_{i=1}^m \tilde\lambda_i f_i(x) = \Phi_j(x)$, where $\tilde\lambda_i = 1$ if $f_i(x)$ belongs to the $j$ largest functions in the collection $\{f_i(x)\}_{i=1}^m$, and $\tilde\lambda_i = 0$ otherwise. We note that the linear programming problem in (2) has the following dual problem:

$$\begin{aligned} \min_{w,\, \eta}\ \ & jw + \sum_{i=1}^m \eta_i \\ \text{s.t.}\ \ & \eta_i \ge f_i(x) - w, \quad i = 1, 2, \ldots, m, \\ & \eta_i \ge 0, \quad i = 1, 2, \ldots, m, \end{aligned} \qquad (3)$$

where $\eta = (\eta_1, \ldots, \eta_m)^T$, and $w$ and $\eta_i$ are the Lagrange multipliers associated with the constraints $\sum_{i=1}^m \lambda_i = j$ and $\lambda_i \le 1$, respectively; moreover, both problems have nonempty feasible sets for any $x \in \mathbb{R}^n$. From the duality theory of linear programming, $\Phi_j(x)$ can then be represented by the dual problem (3). However, each pair of constraints $\eta_i \ge f_i(x) - w$ and $\eta_i \ge 0$ in (3) can be replaced with the single constraint $\eta_i \ge \max\{f_i(x) - w, 0\}$, since the objective function of (3) is increasing componentwise in $\eta$. Therefore,

$$\Phi_j(x) = \min_{w \in \mathbb{R}} \left\{ jw + \sum_{i=1}^m \max\{0, f_i(x) - w\} \right\}. \qquad (4)$$

Thus, problem (1) reduces to a nonsmooth problem involving only the plus function:

$$\min_{w \in \mathbb{R},\, x \in \mathbb{R}^n} \left\{ jw + \sum_{i=1}^m \max\{0, f_i(x) - w\} \right\}. \qquad (5)$$

To the best of our knowledge, the equivalent formulation (5) has not appeared in the literature, though the derivation above is very simple.
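The representation (4) is easy to check numerically: the inner minimum over $w$ is attained at $w = f_{[j]}(x)$, where the objective equals the sum of the $j$ largest $f_i(x)$. A small sketch on synthetic values (the helper `dual_obj` and the data are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random(50) * 10.0         # stand-ins for f_1(x), ..., f_m(x)
j = 7

def dual_obj(w):
    """Objective of the dual representation (4): j*w + sum_i max{0, f_i - w}."""
    return j * w + np.maximum(0.0, f - w).sum()

phi_j = np.sort(f)[-j:].sum()     # sum of the j largest values
w_star = np.sort(f)[-j]           # the j-th largest value, f_[j]

assert np.isclose(dual_obj(w_star), phi_j)
# dual_obj is convex in w and never dips below phi_j, as (4) asserts
grid = np.linspace(f.min() - 1.0, f.max() + 1.0, 10001)
assert min(dual_obj(w) for w in grid) >= phi_j - 1e-9
```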

In [6], Chen and Mangasarian presented a class of smooth approximations to the plus function $\max\{0, t\}$. Among these, the neural networks smooth function and the Chen–Harker–Kanzow–Smale smooth function are the most commonly used. In this paper we use the neural networks smoothing function defined by

$$\varphi(t, \epsilon) = \epsilon \ln[1 + \exp(t/\epsilon)] \qquad (\epsilon > 0). \qquad (6)$$
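Numerically, (6) squeezes between $\max\{0, t\}$ and $\max\{0, t\} + \epsilon \ln 2$, the bound exploited in Lemma 1 below. A sketch (using NumPy's `logaddexp` for a numerically stable evaluation, a choice of ours):

```python
import numpy as np

def smooth_plus(t, eps):
    """Neural-networks smoothing of max{0, t}: eps * ln(1 + exp(t/eps)).
    logaddexp(0, s) = ln(1 + exp(s)) avoids overflow for large |t|/eps."""
    return eps * np.logaddexp(0.0, t / eps)

t = np.linspace(-5.0, 5.0, 1001)
for eps in (1.0, 0.1, 0.01):
    gap = smooth_plus(t, eps) - np.maximum(0.0, t)
    assert np.all(gap >= -1e-12)                    # phi(t,eps) >= max{0,t}
    assert np.all(gap <= eps * np.log(2) + 1e-12)   # <= max{0,t} + eps ln 2
```

The largest gap, $\epsilon \ln 2$, occurs at $t = 0$, which is exactly where the plus function fails to be differentiable.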

By means of $\varphi(t, \epsilon)$, we define the function

$$\Phi(w, x, \epsilon) := jw + \sum_{i=1}^m \varphi(f_i(x, \epsilon) - w, \epsilon), \qquad (7)$$

where

$$f_i(x, \epsilon) = \omega_i \sqrt{\|x - a_i\|^2 + \epsilon^2}, \qquad i = 1, 2, \ldots, m.$$

Then, by Lemma 1 below, we can reformulate problem (1) as the unconstrained smooth convex minimization problem

$$\min_{w \in \mathbb{R},\, x \in \mathbb{R}^n} \Phi(w, x, \epsilon). \qquad (8)$$

Lemma 1. The function $\Phi(w, x, \epsilon)$ defined as in (7) has the following properties:

(i) For any $w \in \mathbb{R}$, $x \in \mathbb{R}^n$, and $\epsilon_1, \epsilon_2$ satisfying $0 < \epsilon_1 < \epsilon_2$, we have

$$\Phi(w, x, \epsilon_1) < \Phi(w, x, \epsilon_2); \qquad (9)$$

(ii) For any $x \in \mathbb{R}^n$ and $\epsilon > 0$,

$$\Phi_j(x) \le \min_{w \in \mathbb{R}} \Phi(w, x, \epsilon) \le \Phi_j(x) + m(\ln 2 + 1)\epsilon; \qquad (10)$$

(iii) For any $\epsilon > 0$, $\Phi(w, x, \epsilon)$ is continuously differentiable and strictly convex.

Proof.

(i) For any $t \in \mathbb{R}$, a simple computation gives

$$\frac{\partial \varphi(t, \epsilon)}{\partial \epsilon} = \ln[1 + \exp(t/\epsilon)] - \frac{\exp(t/\epsilon)}{1 + \exp(t/\epsilon)} \cdot \frac{t}{\epsilon} > 0$$

and

$$\frac{\partial \varphi(t, \epsilon)}{\partial t} = \frac{\exp(t/\epsilon)}{1 + \exp(t/\epsilon)} > 0.$$

The two inequalities imply that for any $w \in \mathbb{R}$, $x \in \mathbb{R}^n$, and $\epsilon_1, \epsilon_2$ satisfying $0 < \epsilon_1 < \epsilon_2$,

$$\Phi(w, x, \epsilon_1) < jw + \sum_{i=1}^m \varphi(f_i(x, \epsilon_2) - w, \epsilon_1) < jw + \sum_{i=1}^m \varphi(f_i(x, \epsilon_2) - w, \epsilon_2) = \Phi(w, x, \epsilon_2).$$

(ii) For any $t \in \mathbb{R}$ and $\epsilon > 0$, it is easy to verify that

$$\max\{0, t\} \le \varphi(t, \epsilon) \le \max\{0, t\} + \epsilon \ln 2.$$

This implies that for any $w \in \mathbb{R}$, $x \in \mathbb{R}^n$, and $\epsilon > 0$,

$$\max\{0, f_i(x, \epsilon) - w\} \le \varphi(f_i(x, \epsilon) - w, \epsilon) \le \max\{0, f_i(x, \epsilon) - w\} + \epsilon \ln 2, \qquad i = 1, \ldots, m,$$

whereas

$$\max\{f_i(x) - w, 0\} \le \max\{f_i(x, \epsilon) - w, 0\} \le \max\{f_i(x) - w, 0\} + \epsilon, \qquad i = 1, 2, \ldots, m.$$

Together, these imply that

$$jw + \sum_{i=1}^m \max\{0, f_i(x) - w\} \le \Phi(w, x, \epsilon) \le jw + \sum_{i=1}^m \max\{0, f_i(x) - w\} + m(\ln 2 + 1)\epsilon.$$

From this inequality and (4), the conclusion immediately follows.

(iii) For any $\epsilon > 0$, $\Phi(w, x, \epsilon)$ is clearly continuously differentiable. We now prove that it is strictly convex. From (7),

$$\nabla \Phi(w, x, \epsilon) = \begin{pmatrix} j - \displaystyle\sum_{i=1}^m \lambda_i(w, x, \epsilon) \\[6pt] \displaystyle\sum_{i=1}^m \lambda_i(w, x, \epsilon)\, \dfrac{\omega_i (x - a_i)}{\sqrt{\|x - a_i\|^2 + \epsilon^2}} \end{pmatrix}, \qquad (11)$$

where

$$\lambda_i(w, x, \epsilon) = \frac{\exp[(f_i(x, \epsilon) - w)/\epsilon]}{1 + \exp[(f_i(x, \epsilon) - w)/\epsilon]}, \qquad i = 1, 2, \ldots, m. \qquad (12)$$

Let

$$\hat\lambda_i(w, x, \epsilon) = \lambda_i(w, x, \epsilon) - \lambda_i^2(w, x, \epsilon), \qquad i = 1, 2, \ldots, m,$$

and

$$Q = \sum_{i=1}^m \left[ \frac{\lambda_i(w, x, \epsilon)\, \omega_i}{\sqrt{\|x - a_i\|^2 + \epsilon^2}} \left( I_n - \frac{(x - a_i)(x - a_i)^T}{\|x - a_i\|^2 + \epsilon^2} \right) + \frac{\hat\lambda_i(w, x, \epsilon)\, \omega_i^2}{\epsilon\, (\|x - a_i\|^2 + \epsilon^2)}\, (x - a_i)(x - a_i)^T \right].$$

Then, it follows from (11) that

$$\nabla^2 \Phi(w, x, \epsilon) = \begin{pmatrix} \displaystyle\sum_{i=1}^m \frac{\hat\lambda_i(w, x, \epsilon)}{\epsilon} & -\displaystyle\sum_{i=1}^m \frac{\hat\lambda_i(w, x, \epsilon)\, \omega_i (x - a_i)^T}{\epsilon \sqrt{\|x - a_i\|^2 + \epsilon^2}} \\[8pt] -\displaystyle\sum_{i=1}^m \frac{\hat\lambda_i(w, x, \epsilon)\, \omega_i (x - a_i)}{\epsilon \sqrt{\|x - a_i\|^2 + \epsilon^2}} & Q \end{pmatrix}.$$

For any $\epsilon > 0$ and $z = (z_0, \bar z) \in \mathbb{R}^{n+1}$ with $z \ne 0$, we have

$$z^T \nabla^2 \Phi(w, x, \epsilon) z = \epsilon^{-1} z_0^2 \sum_{i=1}^m \hat\lambda_i(w, x, \epsilon) - 2\epsilon^{-1} z_0 \sum_{i=1}^m \frac{\hat\lambda_i(w, x, \epsilon)\, \omega_i (x - a_i)^T \bar z}{\sqrt{\|x - a_i\|^2 + \epsilon^2}} + \bar z^T Q \bar z$$

$$= \epsilon^{-1} \sum_{i=1}^m \hat\lambda_i(w, x, \epsilon) \left[ z_0^2 - 2 z_0\, \frac{\omega_i (x - a_i)^T \bar z}{\sqrt{\|x - a_i\|^2 + \epsilon^2}} + \left( \frac{\omega_i (x - a_i)^T \bar z}{\sqrt{\|x - a_i\|^2 + \epsilon^2}} \right)^2 \right] + \sum_{i=1}^m \frac{\lambda_i(w, x, \epsilon)\, \omega_i}{\sqrt{\|x - a_i\|^2 + \epsilon^2}} \left[ \|\bar z\|^2 - \left( \frac{(x - a_i)^T \bar z}{\sqrt{\|x - a_i\|^2 + \epsilon^2}} \right)^2 \right]$$

$$\ge \sum_{i=1}^m \frac{\lambda_i(w, x, \epsilon)\, \omega_i}{\sqrt{\|x - a_i\|^2 + \epsilon^2}} \left[ \|\bar z\|^2 - \left( \frac{(x - a_i)^T \bar z}{\sqrt{\|x - a_i\|^2 + \epsilon^2}} \right)^2 \right] \ge \sum_{i=1}^m \frac{\lambda_i(w, x, \epsilon)\, \omega_i}{\sqrt{\|x - a_i\|^2 + \epsilon^2}} \left[ \|\bar z\|^2 - \|\bar z\|^2\, \frac{\|x - a_i\|^2}{\|x - a_i\|^2 + \epsilon^2} \right] \ge 0,$$

and moreover $z^T \nabla^2 \Phi(w, x, \epsilon) z = 0$ if and only if $z = 0$. The first inequality is due to the nonnegativity of $\hat\lambda_i(w, x, \epsilon)$, the second follows from the Cauchy–Schwarz inequality and the nonnegativity of $\lambda_i(w, x, \epsilon)$, and the last is obvious. This completes the proof of the lemma. $\square$
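Lemma 1(ii) can be checked numerically by minimizing $\Phi(w, x, \epsilon)$ over $w$ for a fixed $x$. A sketch assuming SciPy's scalar minimizer and unit weights, on synthetic data of our own choosing:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
m, n, j, eps = 30, 4, 5, 1e-2
a = rng.random((m, n))                # illustrative facility positions
weights = np.ones(m)                  # unit weights
x = rng.random(n)                     # fixed trial location

f_eps = weights * np.sqrt(((x - a) ** 2).sum(axis=1) + eps ** 2)  # f_i(x, eps)
f = weights * np.linalg.norm(x - a, axis=1)                        # f_i(x)
phi_j = np.sort(f)[-j:].sum()                                      # Phi_j(x)

def Phi(w):
    # Phi(w, x, eps) = j*w + sum_i eps*ln(1 + exp((f_i(x,eps) - w)/eps)), cf. (7)
    return j * w + (eps * np.logaddexp(0.0, (f_eps - w) / eps)).sum()

val = minimize_scalar(Phi).fun        # min over w for this fixed x
# the two-sided bound (10): Phi_j(x) <= min_w Phi <= Phi_j(x) + m(ln 2 + 1) eps
assert phi_j - 1e-9 <= val <= phi_j + m * (np.log(2) + 1) * eps + 1e-4
```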

To close this section, we give a different view of the smooth approximation (8). Let $h(t) = t \ln t + (1 - t)\ln(1 - t)$. Introduce the binary entropy function $\sum_{i=1}^m h(\lambda_i)$ as a regularizing term in the objective of the linear programming problem in (2), and construct the regularized problem

$$\max\left\{ \sum_{i=1}^m \lambda_i f_i(x, \epsilon) - \epsilon \sum_{i=1}^m h(\lambda_i) \;:\; \sum_{i=1}^m \lambda_i = j,\ 0 \le \lambda_i \le 1,\ i = 1, 2, \ldots, m \right\}. \qquad (13)$$

Clearly, (13) is a strictly convex programming problem and satisfies the Slater constraint qualification. Hence, from the duality theory of convex programming, the optimal value of (13) equals that of its dual problem. By a simple computation, (13) has the dual problem $\min_{w \in \mathbb{R}} \Phi(w, x, \epsilon)$.

Compared with (8), this means that the smooth approximation in (8) is actually equivalent to the following binary entropy regularization problem:

$$\min_{x \in \mathbb{R}^n}\ \max\left\{ \sum_{i=1}^m \lambda_i f_i(x, \epsilon) - \epsilon \sum_{i=1}^m h(\lambda_i) \;:\; \sum_{i=1}^m \lambda_i = j,\ 0 \le \lambda_i \le 1,\ i = 1, 2, \ldots, m \right\}.$$

We note that Shi [26] constructed a similar regularization problem to derive a smoothing function for the sum of the $j$ largest components, but he did not provide an explicit expression for his smoothing function, so its value must be determined by numerical computation.

3. Algorithm based on the neural networks smoothing function

In what follows, we present an algorithm for solving problem (1) based on the smooth approximation (8), followed by a global convergence result.

Algorithm 1. Let $\rho \in (0, 1)$, $(\hat w^0, \hat x^0) \in \mathbb{R}^{n+1}$, and $\epsilon_0 > 0$ be given. Set $k := 0$.

For $k = 0, 1, 2, \ldots$, do

1. Use an unconstrained optimization method with $(\hat w^k, \hat x^k)$ as the starting point to solve

$$\min_{w \in \mathbb{R},\, x \in \mathbb{R}^n} \Phi(w, x, \epsilon_k), \qquad (14)$$

and write its minimizer as $(w^k, x^k)$.

2. Set $\epsilon_{k+1} = \rho\, \epsilon_k$ and $(\hat w^{k+1}, \hat x^{k+1}) = (w^k, x^k)$, and then go back to Step 1.

End

Lemma 2. Let $x$ be any point in $\mathbb{R}^n$ and define the index set

$$I(x) := \{ i \in \{1, 2, \ldots, m\} \mid f_i(x) \ge f_{[j]}(x) \}. \qquad (15)$$

Then

$$\partial \Phi_j(x) = \left\{ \sum_{i \in I(x)} q_i\, \partial f_i(x) \;:\; \sum_{i \in I(x)} q_i = j,\ 0 \le q_i \le 1 \text{ for } i \in I(x),\ q_i = 0 \text{ for } i \notin I(x) \right\}. \qquad (16)$$

Proof. Let $\vartheta_j(y) = \sum_{l=1}^j y_{[l]}$, where $y_{[1]}, \ldots, y_{[m]}$ are the numbers $y_1, \ldots, y_m$ sorted in nonincreasing order. Then, it follows from (2) that

$$\vartheta_j(y) = \max\left\{ \sum_{i=1}^m \lambda_i y_i \;:\; \sum_{i=1}^m \lambda_i = j,\ 0 \le \lambda_i \le 1,\ i = 1, 2, \ldots, m \right\} = \delta^*(y \mid C),$$

where $\delta^*(\cdot \mid C)$ denotes the support function of the set $C$ and

$$C = \left\{ \lambda \in \mathbb{R}^m \;:\; \sum_{i=1}^m \lambda_i = j,\ 0 \le \lambda_i \le 1,\ i = 1, 2, \ldots, m \right\}.$$

Therefore, by [24, Corollary 23.5.3],

$$\partial \vartheta_j(y) = \operatorname*{argmax}\left\{ \sum_{i=1}^m \lambda_i y_i \;:\; \sum_{i=1}^m \lambda_i = j,\ 0 \le \lambda_i \le 1,\ i = 1, 2, \ldots, m \right\}.$$

It is easily shown that the set on the right-hand side of the last equality is exactly

$$\left\{ \sum_{l \in J(y)} q_l e_l \;:\; \sum_{l \in J(y)} q_l = j,\ 0 \le q_l \le 1 \text{ for } l \in J(y),\ q_l = 0 \text{ for } l \notin J(y) \right\},$$

where $\{e_1, e_2, \ldots, e_m\}$ denotes the canonical basis of $\mathbb{R}^m$ and $J(y) = \{ l \in \{1, 2, \ldots, m\} \mid y_l \ge y_{[j]} \}$.

Thus, from [16, Theorem 4.3.1], we immediately obtain (16). $\square$

Lemma 3. Let $\{(w^k, x^k)\}$ be the sequence generated by Algorithm 1. Then, any limit point of $\{x^k\}$ is an optimal solution to problem (1).

Proof. Let $(w^*, x^*)$ be a limit point of the sequence $\{(w^k, x^k)\}$. Without loss of generality, we assume that $\{(w^k, x^k)\} \to (w^*, x^*)$ as $k \to +\infty$. We now prove that $x^*$ is an optimal solution to problem (1). First, from (10) and the fact that $(w^k, x^k)$ is a solution of (14), it follows that

$$\Phi_j(x^k) \le \Phi(w^k, x^k, \epsilon_k) \le \Phi_j(x^k) + m(\ln 2 + 1)\epsilon_k. \qquad (17)$$

By the continuity of $f_i(x)$, we thus have

$$\Phi_j(x^*) = \lim_{k \to +\infty} \Phi(w^k, x^k, \epsilon_k) = j w^* + \sum_{i=1}^m \max\{0, f_i(x^*) - w^*\},$$

which can be rewritten as

$$\Phi_j(x^*) = j w^* + \sum_{i \in I(x^*)} \max\{0, f_i(x^*) - w^*\} + \sum_{i \notin I(x^*)} \max\{0, f_i(x^*) - w^*\}.$$

Here, $I(x^*)$ is defined by (15). The last equation implies that

$$f_i(x^*) - w^* \le 0 \quad \text{for all } i \notin I(x^*). \qquad (18)$$

In addition, from the fact that $(w^k, x^k)$ is a solution of (14), we have

$$\nabla \Phi(w^k, x^k, \epsilon_k) = \begin{pmatrix} j - \displaystyle\sum_{i=1}^m \lambda_i^k \\[6pt] \displaystyle\sum_{i=1}^m \lambda_i^k\, \dfrac{\omega_i (x^k - a_i)}{\sqrt{\|x^k - a_i\|^2 + \epsilon_k^2}} \end{pmatrix} = 0, \qquad (19)$$

where $\lambda_i^k = \lambda_i(w^k, x^k, \epsilon_k)$ for $i = 1, 2, \ldots, m$. Combining this with the definition of $\lambda_i(w^k, x^k, \epsilon_k)$ in (12), it is clear that $\sum_{i=1}^m \lambda_i^k = j$ and $0 < \lambda_i^k < 1$ for $i = 1, 2, \ldots, m$. This means that, for every $i = 1, 2, \ldots, m$, the sequence $\{\lambda_i^k\}$ has a convergent subsequence. Without loss of generality, we suppose that

$$\lim_{k \to +\infty} \lambda_i^k = \lambda_i^*, \qquad i = 1, 2, \ldots, m.$$

Then,

$$\sum_{i=1}^m \lambda_i^* = j, \qquad 0 \le \lambda_i^* \le 1 \ \text{ for } i = 1, 2, \ldots, m. \qquad (20)$$

Next we prove that $\lambda_i^* = 0$ for $i \notin I(x^*)$. In view of (18), we consider the following two cases.

Case 1: there exists an index $i_0 \notin I(x^*)$ such that $f_{i_0}(x^*) - w^* = 0$. In this case, there must hold

$$f_{[1]}(x^*) - w^* > 0,\ f_{[2]}(x^*) - w^* > 0,\ \ldots,\ f_{[j]}(x^*) - w^* > 0.$$

By the continuity of $f_i(x)$, we have for all sufficiently large $k$

$$f_{[1]}(x^k, \epsilon_k) - w^k > 0,\ f_{[2]}(x^k, \epsilon_k) - w^k > 0,\ \ldots,\ f_{[j]}(x^k, \epsilon_k) - w^k > 0.$$

This implies that

$$\lim_{k \to +\infty} \frac{\exp[(f_{[l]}(x^k, \epsilon_k) - w^k)/\epsilon_k]}{1 + \exp[(f_{[l]}(x^k, \epsilon_k) - w^k)/\epsilon_k]} = 1, \qquad l = 1, 2, \ldots, j.$$

Thus, from (17) and the definition of $\lambda_i^k$, we have

$$\sum_{i \notin I(x^*)} \lambda_i^* = j - \lim_{k \to +\infty} \sum_{i \in I(x^*)} \frac{\exp[(f_i(x^k, \epsilon_k) - w^k)/\epsilon_k]}{1 + \exp[(f_i(x^k, \epsilon_k) - w^k)/\epsilon_k]} = 0.$$

Consequently, $\lambda_i^* = 0$ for $i \notin I(x^*)$.

Case 2: $f_i(x^*) - w^* < 0$ for all $i \notin I(x^*)$. Then, when $k$ is sufficiently large, $f_i(x^k, \epsilon_k) - w^k < 0$ for all $i \notin I(x^*)$, due to the continuity of $f_i(x)$. Thus, from the definition of $\lambda_i^k$, we readily have $\lambda_i^* = 0$ for $i \notin I(x^*)$.

Thus, combining with Eq. (20), we have

$$\sum_{i \in I(x^*)} \lambda_i^* = j, \quad 0 \le \lambda_i^* \le 1 \ \text{for } i \in I(x^*), \quad \text{and } \lambda_i^* = 0 \ \text{for } i \notin I(x^*). \qquad (21)$$

Note that for $i \in I(x^*)$,

$$v_i^* = \lim_{k \to +\infty} \frac{\omega_i (x^k - a_i)}{\sqrt{\|x^k - a_i\|^2 + \epsilon_k^2}} \in \partial f_i(x^*).$$

Therefore, it follows from Eqs. (19) and (21) that

$$\sum_{i \in I(x^*)} \lambda_i^* v_i^* = \lim_{k \to +\infty} \sum_{i=1}^m \lambda_i^k\, \frac{\omega_i (x^k - a_i)}{\sqrt{\|x^k - a_i\|^2 + \epsilon_k^2}} = 0.$$

Compared with (16), this indicates that $0 \in \partial \Phi_j(x^*)$, and accordingly $x^*$ is an optimal solution to problem (1). This completes the proof of the lemma. $\square$

Theorem 1. Let $\{x^k\}$ be the sequence generated by Algorithm 1. If $x^*$ is the unique optimal solution of problem (1), then $\lim_{k \to +\infty} x^k = x^*$.

Proof. For any $k \ge 1$, by Lemma 1,

$$\Phi(w^1, x^1, \epsilon_1) > \Phi(w^1, x^1, \epsilon_k) \ge \Phi(w^k, x^k, \epsilon_k) \ge \Phi_j(x^k). \qquad (22)$$

Since $\Phi_j(x)$ is coercive, the level set $L = \{x \in \mathbb{R}^n \mid \Phi_j(x) \le \Phi(w^1, x^1, \epsilon_1)\}$ is bounded. From (22), we have $\{x^k\} \subseteq L$, which shows that the sequence $\{x^k\}$ is bounded. Since $x^*$ is the unique solution of problem (1), we have from Lemma 3 that $\lim_{k \to +\infty} x^k = x^*$. $\square$


In practice, we stop Algorithm 1 once some stopping criteria are satisfied. Moreover, we use a first-order, gradient-based unconstrained minimization algorithm to solve problem (14) in Step 1. In what follows, we describe a more practical version of Algorithm 1 that uses a limited-memory BFGS algorithm to solve problem (14) in Step 1, since this algorithm can solve very large unconstrained optimization problems efficiently.

Algorithm 2. Let $\rho \in (0, 1)$, $\tau_1, \tau_2 > 0$, $(\hat w^0, \hat x^0) \in \mathbb{R}^{n+1}$, and $\epsilon_0 > 0$ be given. Set $k := 0$.

For $k = 0, 1, 2, \ldots$, until $\epsilon_k \le \tau_1$, do

1. Use a version of the limited-memory BFGS algorithm with $(\hat w^k, \hat x^k)$ as the starting point to solve (14) approximately, obtaining a point $(w^k, x^k)$ such that $\|\nabla \Phi(w^k, x^k, \epsilon_k)\| \le \tau_2$.

2. Set $\epsilon_{k+1} = \rho\, \epsilon_k$ and $(\hat w^{k+1}, \hat x^{k+1}) = (w^k, x^k)$, and then go back to Step 1.

End
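The loop of Algorithm 2 can be sketched with an off-the-shelf limited-memory BFGS routine in place of the paper's own implementation; the analytic gradient follows (11), and each inner solve is warm-started from the previous minimizer. This is an illustrative sketch assuming SciPy's L-BFGS-B, not the authors' Matlab code:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # numerically stable sigmoid

def obj_and_grad(wx, a, weights, j, eps):
    """Phi(w, x, eps) from (7) and its gradient (11)."""
    w, x = wx[0], wx[1:]
    diff = x - a                                    # shape (m, n)
    root = np.sqrt((diff ** 2).sum(axis=1) + eps ** 2)
    f_eps = weights * root                          # f_i(x, eps)
    val = j * w + (eps * np.logaddexp(0.0, (f_eps - w) / eps)).sum()
    lam = expit((f_eps - w) / eps)                  # lambda_i(w, x, eps), cf. (12)
    gw = j - lam.sum()
    gx = ((lam * weights / root)[:, None] * diff).sum(axis=0)
    return val, np.concatenate(([gw], gx))

def j_centrum(a, weights, j, rho=0.1, eps0=1.0, tau1=1e-6):
    """Smoothing/continuation loop of Algorithm 2 (sketch)."""
    wx = np.zeros(a.shape[1] + 1)                   # start at (w, x) = 0
    eps = eps0
    while eps > tau1:
        res = minimize(obj_and_grad, wx, args=(a, weights, j, eps),
                       jac=True, method="L-BFGS-B")
        wx = res.x                                  # warm start for next eps
        eps *= rho
    return wx[1:], wx[0]

# tiny example of ours: with j = m and unit weights this is the 1-median problem
rng = np.random.default_rng(3)
a = rng.random((20, 2))
x_opt, w_opt = j_centrum(a, np.ones(20), j=20)
```

Each decrease of $\epsilon$ tightens the approximation bound (10), while the warm start keeps the inner L-BFGS solves cheap.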

4. Second-order cone program reformulation

In this section, we reformulate the single-facility $j$-centrum problem (1) as a standard primal second-order cone program. First, from the discussion in the second paragraph of Section 2, we know that problem (1) is equivalent to the optimization problem

$$\begin{aligned} \min_{w,\, \nu,\, \eta}\ \ & jw + \sum_{i=1}^m \eta_i \\ \text{s.t.}\ \ & \omega_i \|\nu - a_i\| \le \eta_i + w, \quad i = 1, 2, \ldots, m, \\ & \eta_i \ge 0, \quad i = 1, 2, \ldots, m, \end{aligned} \qquad (23)$$

where $\nu = (\nu_1, \ldots, \nu_n)^T \in \mathbb{R}^n$. This problem can be rewritten as

$$\begin{aligned} \min\ \ & (j - m)\, w + \sum_{i=1}^m \omega_i t_i \\ \text{s.t.}\ \ & \sqrt{(\nu_1 - a_{11})^2 + (\nu_2 - a_{12})^2 + \cdots + (\nu_n - a_{1n})^2} \le t_1, \\ & \sqrt{(\nu_1 - a_{21})^2 + (\nu_2 - a_{22})^2 + \cdots + (\nu_n - a_{2n})^2} \le t_2, \\ & \qquad\qquad \vdots \\ & \sqrt{(\nu_1 - a_{m1})^2 + (\nu_2 - a_{m2})^2 + \cdots + (\nu_n - a_{mn})^2} \le t_m, \\ & \omega_i t_i - w \ge 0, \quad i = 1, 2, \ldots, m. \end{aligned} \qquad (24)$$

Let

$$\omega_i t_i - w = \tau_i, \quad i = 1, 2, \ldots, m; \qquad \nu_j - a_{ij} = u_{ij}, \quad i = 1, 2, \ldots, m,\ j = 1, 2, \ldots, n. \qquad (25)$$

Then the constraints of problem (24) become

$$\begin{cases} \omega_1 t_1 - \tau_1 = \omega_2 t_2 - \tau_2 = \cdots = \omega_m t_m - \tau_m = w, \\ \tau_i \ge 0, \quad i = 1, 2, \ldots, m, \\ u_{11}^2 + u_{12}^2 + \cdots + u_{1n}^2 \le t_1^2, \quad t_1 \ge 0, \\ u_{21}^2 + u_{22}^2 + \cdots + u_{2n}^2 \le t_2^2, \quad t_2 \ge 0, \\ \qquad \vdots \\ u_{m1}^2 + u_{m2}^2 + \cdots + u_{mn}^2 \le t_m^2, \quad t_m \ge 0, \end{cases}$$

and

$$\begin{cases} u_{11} + a_{11} = u_{21} + a_{21} = \cdots = u_{m1} + a_{m1}, \\ u_{12} + a_{12} = u_{22} + a_{22} = \cdots = u_{m2} + a_{m2}, \\ \qquad \vdots \\ u_{1n} + a_{1n} = u_{2n} + a_{2n} = \cdots = u_{mn} + a_{mn}. \end{cases}$$

Let

$$z_i = (t_i, u_{i1}, u_{i2}, \ldots, u_{in}) \in \mathbb{R}^{n+1}, \qquad c_i = \left( \frac{j}{m}\,\omega_i,\; 0_n,\; 1 - \frac{j}{m} \right) \in \mathbb{R}^{n+2}, \qquad i = 1, 2, \ldots, m, \qquad (26)$$

and define the second-order cone

$$\mathcal{K}^{n+1} = \left\{ (\xi_1, \xi_2) \mid \xi_1 \in \mathbb{R}_+,\ \xi_2 \in \mathbb{R}^n,\ \|\xi_2\| \le \xi_1 \right\}.$$

Then

$$x_i := (z_i, \tau_i) \in \mathcal{K}^{n+1} \times \mathbb{R}_+, \qquad i = 1, 2, \ldots, m,$$

and furthermore, it follows from (25) that

$$c_1^T x_1 + c_2^T x_2 + \cdots + c_m^T x_m = (j - m)\, w + \sum_{i=1}^m \omega_i t_i.$$

Therefore, the problem in (24) takes the form

$$\begin{aligned} \min\ \ & \sum_{i=1}^m c_i^T x_i \\ \text{s.t.}\ \ & (\omega_1 t_1 - \omega_2 t_2) - (\tau_1 - \tau_2) = 0, \\ & (\omega_1 t_1 - \omega_3 t_3) - (\tau_1 - \tau_3) = 0, \\ & \qquad \vdots \\ & (\omega_1 t_1 - \omega_m t_m) - (\tau_1 - \tau_m) = 0, \\ & u_{11} - u_{21} = a_{21} - a_{11},\ \ u_{11} - u_{31} = a_{31} - a_{11},\ \ \ldots,\ \ u_{11} - u_{m1} = a_{m1} - a_{11}, \\ & u_{12} - u_{22} = a_{22} - a_{12},\ \ u_{12} - u_{32} = a_{32} - a_{12},\ \ \ldots,\ \ u_{12} - u_{m2} = a_{m2} - a_{12}, \\ & \qquad \vdots \\ & u_{1n} - u_{2n} = a_{2n} - a_{1n},\ \ u_{1n} - u_{3n} = a_{3n} - a_{1n},\ \ \ldots,\ \ u_{1n} - u_{mn} = a_{mn} - a_{1n}, \\ & (x_1, x_2, \ldots, x_m) \in (\mathcal{K}^{n+1} \times \mathbb{R}_+) \times \cdots \times (\mathcal{K}^{n+1} \times \mathbb{R}_+). \end{aligned} \qquad (27)$$

Let

$$N = m(n+2), \qquad N_1 = (m-1)(n+1), \qquad \mathcal{K} = (\mathcal{K}^{n+1} \times \mathbb{R}_+) \times \cdots \times (\mathcal{K}^{n+1} \times \mathbb{R}_+),$$

$$x = (x_1, \ldots, x_m) \in \mathbb{R}^N, \qquad c = (c_1, \ldots, c_m) \in \mathbb{R}^N, \qquad b = (0_{m-1};\, b_1;\, \ldots;\, b_n) \in \mathbb{R}^{N_1}, \qquad (28)$$

where

$$b_j = (a_{2j} - a_{1j},\, a_{3j} - a_{1j},\, \ldots,\, a_{mj} - a_{1j})^T \in \mathbb{R}^{m-1} \quad \text{for } j = 1, 2, \ldots, n.$$

Then problem (27) can be recast as a standard primal SOCP:

$$\min\ c^T x \qquad \text{s.t.}\ \ Ax = b,\ \ x \in \mathcal{K}, \qquad (29)$$

where $A \in \mathbb{R}^{N_1 \times N}$ is the block matrix displayed in (30), whose block entries are the row vectors $B_{kl}$ and $A_{kl}$ defined below,

in which the block $B_{kl}$ is a row vector in $\mathbb{R}^{n+2}$ defined by

$$B_{kl} = \begin{cases} (\omega_1, 0, \ldots, 0, -1) & \text{if } l = 1, \\ (-\omega_l, 0, \ldots, 0, 1) & \text{if } l = k + 1, \\ 0_{n+2}^T & \text{otherwise}, \end{cases}$$

and the block $A_{kl}$ is a row vector in $\mathbb{R}^{n+2}$ whose $(\lceil k/m \rceil + 2)$th element is $1$ and all others are $0$ if $l = 1$; whose $(\lceil k/m \rceil + 2)$th element is $-1$ and all others are $0$ if $l = k + 1$; and which is the zero vector in $\mathbb{R}^{n+2}$ otherwise. Here, $\lceil k/m \rceil$ denotes the smallest positive integer not less than $k/m$.

It is not difficult to verify that $A$ defined in (30) has full row rank $N_1$. We point out that similar reformulation techniques were also used in [7,17] for facility location problems in $\mathbb{R}^2$, where the resulting SOCP problems are solved by a merit function approach and an interior point method, respectively. In the next section, we develop an algorithm for problem (1) along the same lines as [7].

5. Algorithm based on the SOCP reformulation

In this section, we use the Fischer–Burmeister merit function developed in [8] for SOCCPs to transform the KKT system of the SOCP (29) into an equivalent unconstrained smooth minimization problem, and then, based on this minimization problem, present an algorithm for solving problem (1).

The KKT optimality conditions of (29), which are sufficient but not necessary for optimality, are

$$\langle x, y \rangle = 0, \quad x \in \mathcal{K},\ y \in \mathcal{K}, \qquad Ax = b, \quad y = c - A^T \zeta_d \ \text{ for some } \zeta_d \in \mathbb{R}^{N_1}, \qquad (31)$$

where $y = (y_1, \ldots, y_m)$ and $y_i = (s_i, \varpi_i)$ with $s_i \in \mathbb{R}^{n+1}$ and $\varpi_i \in \mathbb{R}_+$ for $i = 1, 2, \ldots, m$. Choose any $d \in \mathbb{R}^N$ satisfying $Ad = b$. (If no such $d$ exists, then (29) has no feasible solution.) Let $A_N \in \mathbb{R}^{N \times (N - N_1)}$ be any matrix whose columns span the null space of $A$. Then $x$ satisfies $Ax = b$ if and only if $x = d + A_N \zeta_p$ for some $\zeta_p \in \mathbb{R}^{N - N_1}$. Thus, the KKT optimality conditions in (31) can be rewritten as the following SOCCP:

$$\langle x, y \rangle = 0, \quad x \in \mathcal{K},\ y \in \mathcal{K}, \qquad x = F(\zeta), \quad y = G(\zeta), \qquad (32)$$

where

$$\zeta = (\zeta_p, \zeta_d), \qquad F(\zeta) := d + A_N \zeta_p, \qquad G(\zeta) := c - A^T \zeta_d. \qquad (33)$$

Alternatively, since any $\zeta \in \mathbb{R}^N$ can be decomposed into the sum of its orthogonal projections onto the column space of $A^T$ and the null space of $A$, the following form can be used in place of (33):

$$F(\zeta) := d + \left(I - A^T (A A^T)^{-1} A\right)\zeta, \qquad G(\zeta) := c - A^T (A A^T)^{-1} A\,\zeta. \qquad (34)$$

In the sequel, unless otherwise stated, $F(\zeta)$ and $G(\zeta)$ are both defined by (34).

In [8], Chen extended the merit function given by the squared norm of the Fischer–Burmeister function for nonlinear complementarity problems to SOCCPs, and then developed an unconstrained optimization technique for SOCCPs based on this merit function. For any $u = (u_1, u_2),\, v = (v_1, v_2) \in \mathbb{R} \times \mathbb{R}^n$, define their Jordan product associated with $\mathcal{K}^{n+1}$ by

$$u \circ v := (\langle u, v \rangle,\ v_1 u_2 + u_1 v_2).$$

The identity element under this product is $(1, 0, \ldots, 0) \in \mathbb{R}^{n+1}$. We write $u^2$ for $u \circ u$ and $u + v$ for the usual componentwise addition of vectors. It is well known that $u^2 \in \mathcal{K}^{n+1}$ for all $u \in \mathbb{R}^{n+1}$. Moreover, if $u \in \mathcal{K}^{n+1}$, there is a unique vector in $\mathcal{K}^{n+1}$, denoted by $u^{1/2}$, such that $(u^{1/2})^2 = u^{1/2} \circ u^{1/2} = u$. Then,

$$\phi(u, v) := (u^2 + v^2)^{1/2} - u - v \qquad (35)$$

is well defined for all $(u, v) \in \mathbb{R}^{n+1} \times \mathbb{R}^{n+1}$ and maps $\mathbb{R}^{n+1} \times \mathbb{R}^{n+1}$ to $\mathbb{R}^{n+1}$. It was shown in [15] that

$$\phi(u, v) = 0 \iff \langle u, v \rangle = 0,\ \ u \in \mathcal{K}^{n+1},\ \ v \in \mathcal{K}^{n+1}.$$
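For a concrete feel for (35), the Jordan-algebraic square root can be computed from the standard spectral factorization associated with $\mathcal{K}^{n+1}$: for $w = (w_1, w_2)$, one has $w = \mu_1 c_1 + \mu_2 c_2$ with $\mu_{1,2} = w_1 \mp \|w_2\|$ and $c_{1,2} = \frac{1}{2}(1, \mp w_2/\|w_2\|)$, so $w^{1/2} = \sqrt{\mu_1}\, c_1 + \sqrt{\mu_2}\, c_2$. A sketch (helper names are ours):

```python
import numpy as np

def jsq(u):
    """Jordan square u∘u = (||u||^2, 2 u_1 u_2) associated with K^{n+1}."""
    return np.concatenate(([u @ u], 2.0 * u[0] * u[1:]))

def soc_sqrt(w):
    """Square root of w in K^{n+1} via the spectral decomposition
    w = mu1*c1 + mu2*c2 with mu_{1,2} = w_1 -/+ ||w_2||."""
    w1, w2 = w[0], w[1:]
    r = np.linalg.norm(w2)
    mu1, mu2 = w1 - r, w1 + r
    s1, s2 = np.sqrt(max(mu1, 0.0)), np.sqrt(max(mu2, 0.0))
    d = w2 / r if r > 0 else np.zeros_like(w2)  # any unit direction if w2 = 0
    return np.concatenate(([0.5 * (s1 + s2)], 0.5 * (s2 - s1) * d))

def fb_soc(u, v):
    """Fischer–Burmeister function (35): (u^2 + v^2)^{1/2} - u - v."""
    return soc_sqrt(jsq(u) + jsq(v)) - u - v

# phi(u, v) vanishes exactly for a complementary pair on K^3
u = np.array([1.0, 1.0, 0.0])    # on the boundary of K^3
v = np.array([1.0, -1.0, 0.0])   # on the boundary, <u, v> = 0
assert np.allclose(fb_soc(u, v), 0.0)
```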

Note that $\mathcal{K}^1$ coincides with the set of nonnegative reals $\mathbb{R}_+$, and in that case $\phi(u, v)$ in (35) reduces to the Fischer–Burmeister NCP-function [12,13]. Thus,

$$\psi(x, y) := \frac{1}{2} \sum_{i=1}^m \left( \|\phi(z_i, s_i)\|^2 + \phi^2(\tau_i, \varpi_i) \right) \qquad (36)$$

is a merit function for the SOCCP (32), where $z_i, s_i \in \mathbb{R}^{n+1}$ and $\tau_i, \varpi_i \in \mathbb{R}_+$. Consequently, (32) can be rewritten as the equivalent global minimization problem

$$\min_{\zeta \in \mathbb{R}^N} \Psi(\zeta) := \psi(F(\zeta), G(\zeta)). \qquad (37)$$

Moreover, we know from [8] that $\psi$ is smooth and that every stationary point of (37) solves the SOCP (29).

We now establish an algorithm for problem (1) by solving the unconstrained smooth minimization problem (37) with a limited-memory BFGS method.

Algorithm 3. Let $\tau_1, \tau_2, \tau_3 > 0$ and $\zeta^0 \in \mathbb{R}^N$ be given. Generate the steepest descent direction $D^0 = -\nabla \Psi(\zeta^0)$ and seek a suitable stepsize $\alpha_0$. Let

$$\zeta^1 := \zeta^0 + \alpha_0 D^0, \qquad x^1 := F(\zeta^1), \qquad y^1 := G(\zeta^1).$$

For $k = 1, 2, \ldots$, until $\Psi(\zeta^k) \le \tau_1$ and $|(x^k)^T y^k| \le \tau_2$, do

If $(\nabla \Psi(\zeta^k) - \nabla \Psi(\zeta^{k-1}))^T (\zeta^k - \zeta^{k-1}) \le \tau_3\, \|\zeta^k - \zeta^{k-1}\|\, \|\nabla \Psi(\zeta^k) - \nabla \Psi(\zeta^{k-1})\|$, then

$D^k = -\nabla \Psi(\zeta^k)$;

else

$D^k$ is generated by the limited-memory BFGS method.

End

Set $\zeta^{k+1} := \zeta^k + \alpha_k D^k$, $x^{k+1} := F(\zeta^{k+1})$, $y^{k+1} := G(\zeta^{k+1})$, where $\alpha_k$ is a stepsize. Let $k := k + 1$.

End

Remark 1. Suppose that $\zeta^*$ is the final iterate generated by Algorithm 3 and $x^* = F(\zeta^*)$. Then $x^*(2 : n+1) + a_1$ is the optimal solution of problem (1), and the optimal value is $c^T x^*$.

6. Numerical experiments

We implemented Algorithm 2 of Section 3 and Algorithm 3 of Section 5 in our own codes and compared them with SeDuMi 1.05 [27] (a high-quality software package with a Matlab interface for solving SOCP and semidefinite programming problems). Our computer codes are all written in Matlab, including the evaluation of $\psi(x, y)$ and $\nabla \psi(x, y)$ in Algorithm 3. All the numerical experiments were done on a PC with a 2.8 GHz CPU and 512 MB of RAM. To solve the unconstrained minimization problem (14) in Algorithm 2 and to generate the direction $D^k$ and a suitable stepsize in Algorithm 3, we choose a limited-memory BFGS algorithm with Armijo line search and 5 limited-memory vector updates [5], where for the scaling matrix $H_0 = \gamma I_N$ we use the recommended choice $\gamma = p^T q / q^T q$ [18, p. 226], with $p := \zeta - \zeta^{\mathrm{old}}$ and $q := \nabla \Psi(\zeta) - \nabla \Psi(\zeta^{\mathrm{old}})$. This choice was found to work better for our problems than the choice used in [8]. To evaluate $F$ and $G$ in Algorithm 3, we use the LU factorization of $AA^T$. In particular, given such a factorization $LU = AA^T$, we can compute $x = F(\zeta)$ and $y = G(\zeta)$ for each $\zeta$ via two matrix-vector multiplications and two forward/backward solves:

$$Lu = A\zeta, \qquad Uv = u, \qquad w = A^T v, \qquad x = d + \zeta - w, \qquad y = c - w. \qquad (38)$$

For the vector $d$ satisfying $Ad = b$, we compute it as a solution of $\min_d \|Ad - b\|$ using Matlab's linear least-squares solver lsqlin.
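The scheme (38) amounts to factoring $AA^T$ once and reusing the factors for every $\zeta$. A sketch assuming SciPy's LU routines, with the minimum-norm solution of $Ad = b$ standing in for Matlab's lsqlin (all names are ours):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def make_FG(A, b, c):
    """Build F and G of (34), factoring A A^T once as in (38)."""
    lu = lu_factor(A @ A.T)                 # LU = A A^T, done once
    d = A.T @ lu_solve(lu, b)               # minimum-norm solution of A d = b

    def F(zeta):
        w = A.T @ lu_solve(lu, A @ zeta)    # w = A^T (A A^T)^{-1} A zeta
        return d + zeta - w                 # x = d + zeta - w

    def G(zeta):
        w = A.T @ lu_solve(lu, A @ zeta)
        return c - w                        # y = c - w

    return F, G

# sanity check on random data: every F(zeta) satisfies A x = b
rng = np.random.default_rng(4)
A = rng.random((5, 12))                     # full row rank with probability 1
b, c = rng.random(5), rng.random(12)
F, G = make_FG(A, b, c)
zeta = rng.random(12)
assert np.allclose(A @ F(zeta), b)
```

Each evaluation then costs two matrix-vector products and two triangular solves, which is what makes the merit-function iterations cheap.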

Throughout the computational experiments, we use the following parameters in Algorithm 2:

$$\epsilon_0 = 1, \qquad \rho = 0.1, \qquad \tau_1 = 1.0\mathrm{e}{-6}, \qquad \tau_2 = 1.0\mathrm{e}{-3}.$$

For Algorithm 3, we use the following parameters:

$$\tau_1 = 1.0\mathrm{e}{-8}, \qquad \tau_2 = 1.0\mathrm{e}{-5}, \qquad \tau_3 = 1.0\mathrm{e}{-4}.$$

For SeDuMi, we use all the default values except that the parameter eps is set to 1.0e-6. The starting points for Algorithms 2 and 3 are chosen to be $\hat x^0 = 0_n$, $\hat w^0 = 0$, and $\zeta^0 = 0_N$, respectively.

The test problems are generated randomly by the following pseudo-random sequence:

$$\psi_0 = 7, \qquad \psi_{i+1} = (445\,\psi_i + 1) \bmod 4096, \qquad \bar\psi_i = \psi_i / 40.96, \qquad i = 1, 2, \ldots$$

The elements of $a_i$ for $i = 1, 2, \ldots, m$ are successively set to $\bar\psi_1, \bar\psi_2, \ldots$ in the order

$$a_1(1), a_1(2), \ldots, a_1(n),\ a_2(1), a_2(2), \ldots, a_2(n),\ \ldots,\ a_m(1), a_m(2), \ldots, a_m(n),$$

and the weight $\omega_i$ is set to $a_i(1)/10$ if $\operatorname{mod}(i, 10) = 0$, and otherwise $\omega_i$ is set to 1.
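The generator above is deterministic and easy to replicate. A sketch (the helper names are ours):

```python
import numpy as np

def lcg_stream():
    """Pseudo-random stream psi_{i+1} = (445 psi_i + 1) mod 4096, psi_0 = 7,
    each value scaled by 40.96 so that entries fall in [0, 100)."""
    psi = 7
    while True:
        psi = (445 * psi + 1) % 4096
        yield psi / 40.96

def make_instance(m, n):
    """Fill a_1(1..n), a_2(1..n), ..., a_m(1..n) from the stream and set
    the weights: omega_i = a_i(1)/10 when i is a multiple of 10, else 1."""
    gen = lcg_stream()
    a = np.array([[next(gen) for _ in range(n)] for _ in range(m)])
    weights = np.where(np.arange(1, m + 1) % 10 == 0, a[:, 0] / 10.0, 1.0)
    return a, weights

a, weights = make_instance(50, 3)
assert np.isclose(a[0, 0], 3116 / 40.96)   # first value of the stream
```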

The numerical results are summarized in Tables 1–3. In these tables, $n$ and $m$ specify the problem dimensions, Obj. denotes the objective value of (1) at the final iterate, Iter indicates the number of iterations, and Time represents the CPU time in seconds for solving each problem; for Algorithm 3, the time also includes the time to compute the LU decomposition of $AA^T$ and to find the feasible point $d$.


The results in Table 1 show how the iteration numbers of Algorithms 2 and 3 and SeDuMi vary with $j$ for problems of the same dimension ($m = 50$ and $n = 1000$). We can see from this table that the value of $j$ has a remarkable influence on the iteration numbers: they tend to decrease as $j$ approaches $m$ and to increase as $j$ approaches 1, with the iteration count of Algorithm 2 increasing greatly.

The results listed in Tables 1 and 2 show that the algorithms presented in this paper perform very well and obtain good accuracy on all test problems. In particular, Algorithm 2 consistently uses less CPU time than Algorithm 3 and SeDuMi. For moderate problems, where $n$ and $m$ are in the order of hundreds, Algorithm 3 is comparable with SeDuMi; moreover, we can see from Table 2 that the CPU time of Algorithm 3 grows more slowly than that of SeDuMi as the problem dimension increases. In addition, we

Table 1
Numerical results for the problems with different $j$ ($m = 50$, $n = 1000$)

| $j$ | Algorithm 2 Obj. | Iter | Time | Algorithm 3 Obj. | Iter | Time | SeDuMi Obj. | Iter | Time |
|-----|------------------|------|------|------------------|------|------|-------------|------|------|
| 1   | 5.941976e+3 | 195 | 2.65 | 5.941976e+3 | 2118 | 1434  | 5.941978e+4 | 12 | 48.3 |
| 5   | 2.509660e+4 | 24  | 0.28 | 2.509660e+4 | 344  | 229.5 | 2.509660e+4 | 10 | 50.1 |
| 10  | 3.021423e+4 | 258 | 3.39 | 3.021423e+4 | 667  | 445.7 | 3.021423e+4 | 13 | 51.6 |
| 15  | 3.516542e+4 | 364 | 4.40 | 3.516542e+4 | 570  | 396.5 | 3.516542e+4 | 13 | 50.7 |
| 25  | 4.480229e+4 | 392 | 4.79 | 4.480228e+4 | 458  | 317.1 | 4.480229e+4 | 13 | 54.3 |
| 35  | 5.418417e+4 | 425 | 4.90 | 5.418417e+4 | 361  | 253.8 | 5.418417e+4 | 13 | 50.6 |
| 40  | 5.880483e+4 | 414 | 4.90 | 5.880483e+4 | 381  | 270.0 | 5.880484e+4 | 13 | 51.5 |
| 45  | 6.338237e+4 | 240 | 3.46 | 6.338237e+4 | 438  | 326.1 | 6.338238e+4 | 13 | 51.4 |
| 50  | 6.790952e+4 | 31  | 0.33 | 6.790952e+4 | 354  | 274.2 | 6.790952e+4 | 7  | 27.7 |

Table 2
Performance comparison of Algorithms 2, 3 and SeDuMi

Problem           Algorithm 2                  Algorithm 3                   SeDuMi
m     n     j     Obj.          Iter   Time    Obj.          Iter   Time     Obj.          Iter   Time
100   100   10    1.441721e+4   109    1.15    1.441720e+4   993    235.3    1.441721e+4   15     55.7
100   200   10    1.732435e+4   92     1.14    1.732435e+4   1089   453.1    1.732435e+4   16     208.9
100   300   10    2.156258e+4   58     0.81    2.156258e+4   2216   1454     2.156258e+4   14     662.3
100   400   10    1.841160e+4   177    2.51    1.841159e+4   1851   1618     1.841160e+4   14     2555
100   500   10    3.492155e+4   40     0.67    3.492156e+4   704    710.5    *             *      *
100   600   10    3.358388e+4   154    3.51    3.358388e+4   1930   2370     *             *      *
100   700   10    3.435271e+4   45     0.78    3.435271e+4   2135   3064     *             *      *
100   800   10    3.512760e+4   57     1.20    3.512760e+4   2710   4404     *             *      *
100   900   10    3.958261e+4   26     0.53    3.958261e+4   2152   4056     *             *      *
100   1000  10    5.960709e+4   27     0.59    5.960709e+4   258    570.9    *             *      *

Table 3
Performance of Algorithm 2 on very large problems

m      n      j    Obj.            iter   CPU      m      n      j    Obj.            iter   CPU
1000   1000   10   8.32309391e+4   493    173.8    1000   2000   10   1.17181049e+5   568    515.1
1000   4000   10   1.59968897e+5   424    1126     1000   6000   10   2.02300923e+5   512    2158
1000   8000   10   2.17901326e+5   295    1498     1000   10000  10   2.61377885e+5   518    3206
2000   1000   20   1.71789468e+5   368    286.2    2000   2000   20   2.36133844e+5   319    633.2
2000   3000   20   2.94584768e+5   356    1327     2000   4000   20   3.20657339e+5   515    3430
2000   5000   20   3.77768347e+5   479    3508     2000   6000   20   4.08304039e+5   664    5058
3000   1000   30   2.56936690e+5   384    486.7    3000   2000   30   3.55199292e+5   630    2020
3000   3000   30   4.41767230e+5   462    2422     3000   4000   30   4.80986010e+5   404    4214
4000   1000   40   3.42205909e+5   617    1039     4000   3000   40   5.90447512e+5   405    3957


should point out that the evaluations of ψ(x, y) and ∇ψ(x, y), coded in Matlab, greatly increased the CPU time of Algorithm 3, since they form a main part of the algorithm.
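To give a concrete sense of what these evaluations involve, the following sketch computes the Fischer–Burmeister merit value ψ(x, y) = ½‖(x² + y²)^{1/2} − x − y‖² for a single second-order cone, using the spectral decomposition of the Jordan algebra. This is an illustrative NumPy version with function names of our own choosing, not the authors' Matlab code:

```python
import numpy as np

def jordan_square(z):
    # z∘z in the SOC Jordan algebra: (||z||^2, 2 z_1 z_2)
    return np.concatenate(([z @ z], 2.0 * z[0] * z[1:]))

def soc_sqrt(z):
    # Spectral square root of z in K^n (z is assumed to lie in K^n).
    # Eigenvalues: lam_{1,2} = z_1 -/+ ||z_2||; shared eigenvector direction e2.
    w = np.linalg.norm(z[1:])
    lam1, lam2 = z[0] - w, z[0] + w
    e2 = z[1:] / w if w > 0 else np.zeros_like(z[1:])
    s1 = np.sqrt(max(lam1, 0.0))  # clamp guards against rounding below zero
    s2 = np.sqrt(max(lam2, 0.0))
    return np.concatenate(([0.5 * (s1 + s2)], 0.5 * (s2 - s1) * e2))

def psi_fb(x, y):
    # psi(x, y) = 0.5 * ||(x^2 + y^2)^{1/2} - x - y||^2;
    # x^2 + y^2 always lies in K^n, so soc_sqrt is well defined.
    phi = soc_sqrt(jordan_square(x) + jordan_square(y)) - x - y
    return 0.5 * (phi @ phi)
```

For a complementary pair such as x = (1, 1, 0) and y = (1, −1, 0) (both in K^3 with x∘y = 0), psi_fb returns 0, consistent with ψ vanishing exactly at solutions of the SOC complementarity problem.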

The results in Table 3 show that Algorithm 2 can solve very large problems in a reasonable amount of CPU time. However, Algorithm 3 and SeDuMi failed on these problems due to excessive CPU time or memory requirements.

We conclude that Algorithm 2 is better than Algorithm 3 and SeDuMi for very large problems since it consumes less memory. For moderate problems, Algorithm 3 and SeDuMi are comparable.

7. Conclusions

In this paper, we have presented two kinds of unconstrained optimization techniques for the single-facility Euclidean j-centrum location problem, based on a simplified unconstrained formulation and an SOCP reformulation, respectively. The first method is a primal method whereas the second is a primal-dual one. Preliminary numerical experiments show that both methods obtain the desired accuracy on all test problems, and that the first method is extremely efficient for problems where n is large and m is moderate. Although the two methods are developed for the single-facility location problem, they can be extended to the multi-facility Euclidean j-centrum location problem.

Acknowledgement

The first author’s work is partially supported by the Doctoral Starting-up Foundation (B13B6050640) of GuangDong Province. The second author’s work is partially supported by the National Science Council of Taiwan.

References

[1] K.D. Andersen, E. Christiansen, A.R. Conn, M.L. Overton, An efficient primal-dual interior-point method for minimizing a sum of Euclidean norms, SIAM Journal on Optimization 10 (2000) 243–262.

[2] F. Alizadeh, D. Goldfarb, Second-order cone programming, Mathematical Programming, Series B 95 (2003) 3–51.

[3] G. Andreatta, F.M. Mason, k-Eccentricity and absolute k-centrum of a tree, European Journal of Operational Research 19 (1985) 114–117.

[4] G. Andreatta, F.M. Mason, Properties of k-centrum in a network, Networks 15 (1985) 21–25.

[5] R.H. Byrd, P. Lu, J. Nocedal, C. Zhu, A limited memory algorithm for bound constrained optimization, SIAM Journal on Scientific Computing 16 (1995) 1190–1208.

[6] C.H. Chen, O.L. Mangasarian, A class of smoothing functions for nonlinear and mixed complementarity problems, Computational Optimization and Applications 5 (1996) 97–138.

[7] J.-S. Chen, A merit function approach for a facility location problem, 2005.

[8] J.-S. Chen, P. Tseng, An unconstrained smooth minimization reformulation of the second-order cone complementarity problem, Mathematical Programming 104 (2005) 293–327.

[9] Z. Drezner, Facility Location. A Survey of Applications and Methods, Springer-Verlag, Berlin, 1995.

[10] Z. Drezner, H. Hamacher, Facility Location: Applications and Theory, Springer-Verlag, Berlin, 2002.

[11] J.W. Eyster, J.A. White, W.W. Wierwille, On solving multifacility location problems using a hyperboloid approximation procedure, AIIE Transactions 5 (1973) 1–6.

[12] A. Fischer, A special Newton-type optimization method, Optimization 24 (1992) 269–284.

[13] A. Fischer, Solution of the monotone complementarity problem with locally Lipschitzian functions, Mathematical Programming 76 (1997) 513–532.

[14] R.L. Francis, L.F. McGinnis Jr., J.A. White, Facility Layout and Location: An Analytic Approach, Prentice-Hall, Englewood Cliffs, NJ, 1991.

[15] M. Fukushima, Z.Q. Luo, P. Tseng, Smoothing functions for second-order cone complementarity problems, SIAM Journal on Optimization 12 (2002) 436–460.

[16] J.B. Hiriart-Urruty, C. Lemaréchal, Convex Analysis and Minimization Algorithms, Springer-Verlag, Berlin, Heidelberg, 1993.

[17] Y.-J. Kuo, H.D. Mittelmann, Interior point methods for second-order cone programming and OR applications, Computational Optimization and Applications 28 (2004) 255–285.

[18] J. Nocedal, S.J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.

[19] L.Q. Qi, G.L. Zhou, A smoothing Newton method for minimizing a sum of Euclidean norms, SIAM Journal on Optimization 11 (2000) 389–410.

[20] L.Q. Qi, D.F. Sun, G.L. Zhou, A primal-dual algorithm for minimizing a sum of Euclidean norms, Journal of Computational and Applied Mathematics 138 (2002) 34–63.


[21] M.S. Lobo, L. Vandenberghe, S. Boyd, H. Lebret, Applications of second-order cone programming, Linear Algebra and its Applications 284 (1998) 193–228.

[22] W. Ogryczak, A. Tamir, Minimizing the sum of the k largest functions in linear time, Information Processing Letters 85 (2003) 117–122.

[23] W. Ogryczak, M. Zawadzki, Conditional median: a parametric solution concept for location problems, Annals of Operations Research 110 (2002) 167–181.

[24] R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, New Jersey, 1970.

[25] P.J. Slater, Centers to centroids in a graph, Journal of Graph Theory 2 (1978) 209–222.

[26] S.Y. Shi, Smooth convex approximations and its applications. Master thesis, Department of Mathematics, National University of Singapore, 2004.

[27] J.F. Sturm, Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones, Optimization Methods and Software 11 (12) (1999) 625–653.

[28] A. Tamir, Thek-centrum multi-facility location problem, Discrete Applied Mathematics 109 (2001) 293–307.

[29] G.L. Xue, J.B. Rosen, P.M. Pardalos, A polynomial time dual algorithm for the Euclidean multifacility location problem, Operations Research Letters 18 (1996) 201–204.

[30] G.L. Xue, Y. Ye, An efficient algorithm for minimizing a sum of Euclidean norms with applications, SIAM Journal on Optimization 7 (1997) 1017–1036.
