
HAL Id: hal-00909562

https://hal.archives-ouvertes.fr/hal-00909562

Preprint submitted on 26 Nov 2013


Rewriting integer variables into zero-one variables: some guidelines for the integer quadratic multi-knapsack problem

Dominique Quadri, Eric Soutif

To cite this version:

Dominique Quadri, Eric Soutif. Rewriting integer variables into zero-one variables: some guidelines for the integer quadratic multi-knapsack problem. 2007. ⟨hal-00909562⟩

Laboratoire d'Analyse et Modélisation de Systèmes pour l'Aide à la Décision
CNRS UMR 7024

CAHIER DU LAMSADE 264

Juillet 2007


Rewriting integer variables into zero-one variables: some guidelines for the integer quadratic multi-knapsack problem

Dominique Quadri

(LAMSADE, Université Paris Dauphine, Place du Maréchal de Lattre de Tassigny, 75775 Paris Cedex 16, France)

Eric Soutif

(CEDRIC, Conservatoire National des Arts et Métiers, 292 rue Saint Martin, 75141 Paris Cedex 03, France)

Abstract: This paper is concerned with the integer quadratic multidimensional knapsack problem (QMKP) in which the objective function is separable. Our objective is to determine which expansion technique for the integer variables is the most appropriate for solving (QMKP) to optimality with the upper bound method proposed by Quadri et al. (2007), which is, to the best of our knowledge, the most effective upper bound method in the literature for (QMKP). This bound is computed by transforming the initial quadratic problem into an equivalent 0-1 piecewise linear formulation and then establishing the associated surrogate problem. The linearization method consists in applying a direct expansion of the integer variables, initially suggested by Glover (1975), together with a piecewise interpolation of the separable objective function. As the direct expansion increases the size of the problem, other expansion techniques may be used to reduce the number of 0-1 variables and thus ease the solution of the linearized problem. We compare theoretically the use, in the upper bound process, of the direct expansion (I) employed in Quadri et al. (2007) with two other basic expansions, namely: (II) a direct expansion with additional constraints and (III) a binary expansion. We show that expansion (II) provides a bound whose value is equal to the one computed by Quadri et al. (2007). Conversely, we prove the non-applicability of expansion (III) in the upper bound method. More specifically, we show that if (III) is used to rewrite the integer variables into 0-1 variables, then a linear interpolation cannot be applied to transform (QMKP) into an equivalent 0-1 piecewise linear problem.

Keywords: integer quadratic knapsack problem; separable objective function; direct expansion; binary expansion; piecewise interpolation.


Introduction

This paper deals with the integer quadratic multi-knapsack problem (QMKP) in which the objective function is separable. Problems of this structure arise in numerous industrial and economic situations, for instance in production planning [12], reliability allocation [10] and finance [5]. The main application of (QMKP) lies in the portfolio management area, where the investments are independent; see [4] and [5]. Nevertheless, solving (QMKP) efficiently constitutes a starting point for solving the more general and realistic portfolio management problem in which the investments are dependent, i.e. the objective function is non-separable.

The integer quadratic multi-knapsack problem (QMKP) with a separable objective function consists in maximizing a concave separable quadratic integer function subject to m linear capacity constraints. It may be stated mathematically as follows:

$$(QMKP)\quad \begin{cases} \max\ f(x) = \sum_{j=1}^{n} f_j(x_j) = \sum_{j=1}^{n} \left( c_j x_j - d_j x_j^2 \right) \\ \text{s.t.}\ \sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1, \dots, m \\ 0 \le x_j \le u_j, \quad j = 1, \dots, n \\ x_j \ \text{integer}, \quad j = 1, \dots, n \end{cases}$$
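To make the formulation concrete, here is a minimal Python sketch on a small hypothetical instance (the helper names `objective` and `feasible` are ours, not from the paper), evaluating the separable quadratic objective and checking feasibility:

```python
# Sketch: evaluating f(x) = sum_j (c_j x_j - d_j x_j**2) and checking
# the knapsack constraints of (QMKP) on a small hypothetical instance.

def objective(x, c, d):
    return sum(cj * xj - dj * xj ** 2 for cj, dj, xj in zip(c, d, x))

def feasible(x, a, b, u):
    # a: m x n constraint matrix, b: capacities, u: upper bounds on x_j
    in_bounds = all(0 <= xj <= uj for xj, uj in zip(x, u))
    in_caps = all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
                  for row, bi in zip(a, b))
    return in_bounds and in_caps

c, d, u = [10, 7], [2, 1], [3, 4]   # u_j <= ceil(c_j / (2 d_j)) holds here
a, b = [[2, 3]], [12]               # a single capacity constraint (m = 1)
x = [2, 2]
print(objective(x, c, d))           # 20 - 8 + 14 - 4 = 22
print(feasible(x, a, b, u))         # True
```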

where the coefficients $c_j$, $d_j$, $a_{ij}$, $b_i$ are nonnegative. The bounds $u_j$ of the variables $x_j$ are pure integers, with $u_j \le \lceil c_j / (2 d_j) \rceil$. Indeed, the separable objective function is concave, which implies that for each function $f_j$, $x_j^* \le \lceil c_j / (2 d_j) \rceil$, where $x_j^*$ is the optimal solution of the program $\max_{x_j \ge 0} f_j(x_j)$.

The problem (QMKP), which is NP-hard [3], is a generalization of both the integer quadratic knapsack problem [2] and the 0-1 quadratic knapsack problem, in which the objective function is subject to only one constraint [1].

Since (QMKP) is NP-hard, one should not expect to find a polynomial-time algorithm for solving it exactly. Hence, we are usually interested in developing branch-and-bound algorithms. A key step in designing an effective exact solution method for such a maximization problem is to establish a tight upper bound on the optimal value.

Basically, the available upper bound procedures for (QMKP) may be classified as attempting either to solve efficiently the LP-relaxation of (QMKP) (see [2] and [8]) or to find an upper bound of better quality than the LP-relaxation by transforming (QMKP) into a 0-1 linear problem that is easier to solve (see [4] and [9]). To the best of our knowledge, the upper bound method we proposed in a previous work [11] is better than the existing methods (the Djerdjour, Mathur and Salkin algorithm [4], a 0-1 linearization method, a classical LP-relaxation of (QMKP)) from both a qualitative and a computational standpoint. We first use a direct expansion of the integer variables, originally suggested by Glover [7], and apply a piecewise interpolation to the objective function: an equivalent 0-1 linear problem is thus obtained. The second step of the algorithm consists in establishing and solving the surrogate relaxation problem associated with the equivalent linearized formulation.

Nevertheless, the transformed linear formulation involves numerous 0-1 variables because of the direct expansion used (denoted by expansion (I) in the following). Consequently, other expansion techniques may be utilized to reduce the number of 0-1 variables so as to ease the solution of the linearized problem. Let us consider the three basic expansions for rewriting the integer variables of (QMKP) into 0-1 variables:

• Expansion (I): direct expansion

$$\sum_{k=1}^{u_j} y_{jk} = x_j, \quad x_j \in \{0, 1, \dots, u_j\}, \quad y_{jk} \in \{0,1\}, \quad j = 1, \dots, n, \ k = 1, \dots, u_j$$

• Expansion (II): direct expansion with additional constraints

$$\sum_{k=1}^{u_j} k\, y'_{jk} = x_j, \quad \sum_{k=1}^{u_j} y'_{jk} \le 1, \quad x_j \in \{0, 1, \dots, u_j\}, \quad y'_{jk} \in \{0,1\}, \quad j = 1, \dots, n, \ k = 1, \dots, u_j$$

• Expansion (III): binary expansion

$$\sum_{k=1}^{\lceil \log_2(u_j) \rceil} 2^{k-1} z_{jk} = x_j, \quad \sum_{k=1}^{\lceil \log_2(u_j) \rceil} 2^{k-1} z_{jk} \le u_j, \quad x_j \in \{0, 1, \dots, u_j\}, \quad z_{jk} \in \{0,1\}, \quad j = 1, \dots, n, \ k = 1, \dots, \lceil \log_2(u_j) \rceil$$
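The three expansions can be sketched in Python as follows (the function names are ours, and the $2^{k-1}$ weights in the binary expansion are our reading of the formulas above, consistent with the proof of Section 4):

```python
import math

# Sketch of the three expansions for a single variable x with bound u.

def direct_expansion(x, u):
    # (I): x = sum_k y_k, using u 0-1 variables
    return [1] * x + [0] * (u - x)

def direct_expansion_constrained(x, u):
    # (II): x = sum_k k * y_k, with sum_k y_k <= 1
    return [1 if k == x else 0 for k in range(1, u + 1)]

def binary_expansion(x, u):
    # (III): x = sum_k 2**(k-1) * z_k, using ceil(log2(u)) 0-1 variables
    n_bits = max(1, math.ceil(math.log2(u)))
    return [(x >> (k - 1)) & 1 for k in range(1, n_bits + 1)]

x, u = 5, 7
y = direct_expansion(x, u)
y2 = direct_expansion_constrained(x, u)
z = binary_expansion(x, u)
assert sum(y) == x
assert sum(k * v for k, v in enumerate(y2, start=1)) == x and sum(y2) <= 1
assert sum(2 ** (k - 1) * v for k, v in enumerate(z, start=1)) == x
print(len(y), len(y2), len(z))   # 7 7 3: (III) uses exponentially fewer variables
```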

The purpose of this note is to evaluate the impact of the above expansion techniques on the upper bound computation developed in [11]. More specifically, we determine which expansion is the most appropriate for the upper bound method for (QMKP) [11]. We compare theoretically the use of the direct expansion (I) with the direct expansion with additional constraints (II) and with the binary expansion (III). We show that the use of (II) provides a bound whose value is equal to the one computed in [11]. Conversely, we prove the non-applicability of combining (III) with a linear interpolation to transform (QMKP) into an equivalent 0-1 piecewise linear problem.


The paper is organized as follows. The next section summarizes the upper bound method developed in [11] detailing the direct expansion (I) of integer variables and the piecewise interpolation. In Section 3, the direct expansion with additional constraints (II) is applied to (QMKP) so as to compute the upper bound suggested by Quadri et al. [11]. Section 4 is dedicated to the binary expansion (III). We finally conclude in Section 5.

In the remainder of this paper, we adopt the following notations: letting (P) be an integer or a 0-1 program, we denote by $(\overline{P})$ the continuous relaxation of (P). We let $Z[P]$ be the optimal value of the problem (P) and $Z[\overline{P}]$ the optimal value of $(\overline{P})$. Finally, $\lceil x \rceil$ (resp. $\lfloor x \rfloor$) denotes the smallest (resp. largest) integer greater (resp. smaller) than or equal to x.

Section 2. Direct expansion of the integer variables

In this section we summarize the upper bound method for (QMKP) proposed by Quadri et al. [11]. First, an equivalent formulation is obtained by using a direct expansion (I) of the integer variables $x_j$, as originally proposed by Glover [7], and by applying a piecewise interpolation to the initial objective function, as discussed in [4].

The direct expansion of the integer variables $x_j$ consists in replacing each variable $x_j$ by a sum of $u_j$ 0-1 variables $y_{jk}$ such that $\sum_{k=1}^{u_j} y_{jk} = x_j$. Since the objective function f is separable, a linear interpolation can then be applied to each objective function term $f_j$. Consequently, (QMKP) is equivalent to the 0-1 piecewise linear program (MKP):

$$(MKP)\quad \begin{cases} \max\ g(y) = \sum_{j=1}^{n} \sum_{k=1}^{u_j} s_{jk} y_{jk} \\ \text{s.t.}\ \sum_{j=1}^{n} a_{ij} \sum_{k=1}^{u_j} y_{jk} \le b_i, \quad i = 1, \dots, m \\ y_{jk} \in \{0,1\}, \quad j = 1, \dots, n, \ k = 1, \dots, u_j \end{cases}$$

where $\sum_{k=1}^{u_j} y_{jk} = x_j$, $s_{jk} = f_{jk} - f_{j,k-1}$ and $f_{jk} = c_j k - d_j k^2$.
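The interpolation coefficients can be checked numerically; the Python sketch below (hypothetical data, `slopes` is an illustrative name) computes the $s_{jk}$ and verifies that their partial sums reproduce $f_j$ at every integer point:

```python
# Sketch: s_jk = f_jk - f_{j,k-1} with f_jk = c*k - d*k**2; under the
# direct expansion, f_j(x) equals the sum of the first x slopes.

def slopes(c, d, u):
    f = [c * k - d * k ** 2 for k in range(u + 1)]   # f[0] = 0
    return [f[k] - f[k - 1] for k in range(1, u + 1)]

c, d, u = 10, 2, 3
s = slopes(c, d, u)
print(s)                                   # [8, 4, 0]: decreasing, by concavity
for x in range(u + 1):
    assert sum(s[:x]) == c * x - d * x ** 2
```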


In the second step of the algorithm, a surrogate relaxation is applied to the LP- relaxation of (MKP). The resultant formulation (KP, w) is the surrogate relaxation problem of (MKP) and can be written as:

$$(KP, w)\quad \begin{cases} \max\ g(y) = \sum_{j=1}^{n} \sum_{k=1}^{u_j} s_{jk} y_{jk} \\ \text{s.t.}\ \sum_{j=1}^{n} \left( \sum_{i=1}^{m} w_i a_{ij} \right) \sum_{k=1}^{u_j} y_{jk} \le \sum_{i=1}^{m} w_i b_i \\ y_{jk} \in \{0,1\}, \quad j = 1, \dots, n, \ k = 1, \dots, u_j \end{cases}$$

As proved by Glover [7], (KP, w) is a relaxation of (MKP). For any value of $w \ge 0$, the optimal value $Z[KP, w]$ of (KP, w) constitutes an upper bound for $Z[MKP]$. But the value of the bound $Z[KP, w]$ depends on the choice of the surrogate multiplier employed. It is proved in [11] that if w* is chosen as the optimal solution of the dual of $(\overline{MKP})$, then the optimal value of (KP, w*) is a tight upper bound for (QMKP), obtained in a very competitive CPU time.
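Because (KP, w) has a single constraint, its LP-relaxation is a continuous knapsack problem, solvable greedily by profit-to-weight ratio; a minimal sketch on hypothetical data (this is the textbook greedy, not the exact routine of [11]):

```python
# Sketch: solving a continuous (LP-relaxed) single-constraint knapsack
# greedily; at most one variable ends up fractional in the optimum.

def continuous_knapsack(profits, weights, capacity):
    y = [0.0] * len(profits)
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    remaining = capacity
    for i in order:
        take = min(1.0, remaining / weights[i])  # fill best ratios first
        y[i] = take
        remaining -= take * weights[i]
        if remaining <= 0:
            break
    return y, sum(p * v for p, v in zip(profits, y))

y, value = continuous_knapsack([8, 4, 6], [2, 2, 3], 4)
print(y, value)   # [1.0, 1.0, 0.0] 12.0
```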

Nevertheless, the direct expansion used in the above upper bound procedure results in an increase of the size of the linearized problem. Indeed, the number of 0-1 variables is equal to $\sum_{j=1}^{n} u_j$, whereas it is well known that the fewer variables a program contains, the less running time is consumed. The purpose of the next sections is to try to reduce the size of the equivalent linearized problem using other expansion techniques, and to evaluate the impact of such techniques on the computation of the upper bound proposed in [11].
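For intuition on the potential size reduction, a quick comparison in Python of the direct-expansion variable count against a binary expansion, on hypothetical bounds $u_j$:

```python
import math

# Sketch: number of 0-1 variables under the direct expansion (sum of u_j)
# versus a binary expansion (sum of ceil(log2(u_j))), on hypothetical bounds.
u = [3, 8, 15, 100]
direct = sum(u)
binary = sum(math.ceil(math.log2(uj)) for uj in u)
print(direct, binary)   # 126 16
```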

Section 3. Direct expansion of integer variables with additional constraints

In this section we apply to the integer variables of (QMKP) the direct expansion with additional constraints (II). That is, each variable $x_j$ is replaced by the expression $\sum_{k=1}^{u_j} k\, y'_{jk}$, where $y'_{jk} \in \{0,1\}$, $k = 1, \dots, u_j$ and $j = 1, \dots, n$. Since the integer variables are now replaced by 0-1 variables, we transform the resulting 0-1 quadratic program into a 0-1 linear problem as follows: we replace each objective function term $f_j(x_j)$ by $\sum_{k=1}^{u_j} f_{jk} y'_{jk}$, where $f_{jk} = c_j k - d_j k^2$.

The problem (QMKP) is thus equivalent to the following problem (MKP₂):

$$(MKP_2)\quad \begin{cases} \max\ h(y') = \sum_{j=1}^{n} \sum_{k=1}^{u_j} f_{jk} y'_{jk} \\ \text{s.t.}\ \sum_{j=1}^{n} a_{ij} \sum_{k=1}^{u_j} k\, y'_{jk} \le b_i, \quad i = 1, \dots, m \\ \sum_{k=1}^{u_j} y'_{jk} \le 1, \quad j = 1, \dots, n \\ y'_{jk} \in \{0,1\}, \quad j = 1, \dots, n, \ k = 1, \dots, u_j \end{cases}$$

Following the upper bound method developed in [11], we then establish the surrogate problem (KP₂, w) associated with (MKP₂). The problem (KP₂, w) can be written as:

$$(KP_2, w)\quad \begin{cases} \max\ h(y') = \sum_{j=1}^{n} \sum_{k=1}^{u_j} f_{jk} y'_{jk} \\ \text{s.t.}\ \sum_{i=1}^{m} w_i \sum_{j=1}^{n} a_{ij} \sum_{k=1}^{u_j} k\, y'_{jk} \le \sum_{i=1}^{m} w_i b_i \\ \sum_{k=1}^{u_j} y'_{jk} \le 1, \quad j = 1, \dots, n \\ y'_{jk} \in \{0,1\}, \quad j = 1, \dots, n, \ k = 1, \dots, u_j \end{cases}$$

Utilizing the optimal solution ω* of the dual of $(\overline{MKP_2})$ as the surrogate multiplier, the optimal value of the LP-relaxation of (KP₂, ω*) provides an upper bound for (QMKP). The following proposition and its corollary show that this upper bound, computed through the use of expansion (II), is equal to the one computed in [11].

Proposition 3.1 Let w be any real surrogate multiplier, $w \ge 0$. The following result holds:

$$Z[\overline{KP, w}] = Z[\overline{KP_2, w}].$$

Proof 3.1

• We first show that $Z[\overline{KP_2, w}] \le Z[\overline{KP, w}]$.

Let y' be an optimal solution of $(\overline{KP_2, w})$. We derive from y' a solution y feasible for $(\overline{KP, w})$ such that $h(y') \le g(y)$.

Let $j \in \{1, \dots, n\}$. We denote:

- $x_j = \sum_{k=1}^{u_j} k\, y'_{jk}$. Obviously $x_j \in [0, u_j]$;
- $p_j = \lfloor x_j \rfloor$, where $\lfloor x \rfloor$ stands for the largest integer smaller than or equal to the real x; $p_j$ is the integer part of $x_j$. Note that $p_j \le u_j$ since $x_j \le u_j$;
- $\varepsilon_j = x_j - p_j$, the fractional part of $x_j$: $x_j = p_j + \varepsilon_j$. Note that $\varepsilon_j \in [0, 1)$.

We define the solution y as follows: for $j = 1, \dots, n$,

$$y_{jk} = 1 \ \text{for}\ 1 \le k \le p_j, \qquad y_{j, p_j + 1} = \varepsilon_j, \qquad y_{jk} = 0 \ \text{for}\ p_j + 2 \le k \le u_j.$$

We immediately get, for all $j \in \{1, \dots, n\}$: $\sum_{k=1}^{u_j} k\, y'_{jk} = x_j = \sum_{k=1}^{u_j} y_{jk}$. This implies that y is feasible for $(\overline{KP, w})$, since we have:

$$\sum_{i=1}^{m} w_i \sum_{j=1}^{n} a_{ij} \sum_{k=1}^{u_j} y_{jk} = \sum_{i=1}^{m} w_i \sum_{j=1}^{n} a_{ij} x_j = \sum_{i=1}^{m} w_i \sum_{j=1}^{n} a_{ij} \left( \sum_{k=1}^{u_j} k\, y'_{jk} \right) \le \sum_{i=1}^{m} w_i b_i \quad \text{(since } y' \text{ is feasible for } (\overline{KP_2, w})\text{)}.$$

Let us now show that $h(y') \le g(y)$. It suffices to prove that:

$$\forall j \in \{1, \dots, n\}, \quad \sum_{k=1}^{u_j} f_{jk} y'_{jk} \le \sum_{k=1}^{u_j} s_{jk} y_{jk}.$$

Let $j \in \{1, \dots, n\}$. We first note that:

$$\sum_{k=1}^{u_j} s_{jk} y_{jk} = \sum_{k=1}^{p_j} s_{jk} + \varepsilon_j s_{j, p_j + 1} = f_{j, p_j} + \varepsilon_j \left( f_{j, p_j + 1} - f_{j, p_j} \right) = \left( 1 - \varepsilon_j \right) f_{j, p_j} + \varepsilon_j f_{j, p_j + 1}.$$

In the following we denote by $\Lambda_j$ the quantity $(1 - \varepsilon_j) f_{j, p_j} + \varepsilon_j f_{j, p_j + 1}$. We now show that $\Lambda_j$ constitutes an overestimation of $\sum_{k=1}^{u_j} f_{jk} y'_{jk}$ if we suppose, as previously mentioned, that $\sum_{k=1}^{u_j} k\, y'_{jk} = p_j + \varepsilon_j$.

For a given index j, let us consider the following linear problem $(P_j)$ and its dual problem $(D_j)$:

$$(P_j)\quad \begin{cases} \max\ h_j(y') = \sum_{k=1}^{u_j} f_{jk} y'_{jk} \\ \text{s.t.}\ \sum_{k=1}^{u_j} y'_{jk} \le 1 \\ \sum_{k=1}^{u_j} k\, y'_{jk} = p_j + \varepsilon_j \\ y'_{jk} \ge 0, \quad k = 1, \dots, u_j \end{cases} \qquad (D_j)\quad \begin{cases} \min\ \alpha + \left( p_j + \varepsilon_j \right) \beta \\ \text{s.t.}\ \alpha + k \beta \ge f_{jk}, \quad k = 1, \dots, u_j \\ \alpha \ge 0 \end{cases}$$

The solution $\tilde{y}$ defined by $\tilde{y}_{j, p_j} = 1 - \varepsilon_j$, $\tilde{y}_{j, p_j + 1} = \varepsilon_j$ and $\tilde{y}_{jk} = 0$ for $k \ne p_j$ and $k \ne p_j + 1$ is clearly feasible for $(P_j)$, with value $\Lambda_j$. The complementary slackness conditions suggest considering the following solution $(\tilde{\alpha}, \tilde{\beta})$ for $(D_j)$:

$$\tilde{\alpha} = \left( p_j + 1 \right) f_{j, p_j} - p_j f_{j, p_j + 1}, \qquad \tilde{\beta} = f_{j, p_j + 1} - f_{j, p_j}.$$

We now prove that $(\tilde{\alpha}, \tilde{\beta})$ is feasible for $(D_j)$. Since its value in $(D_j)$ is clearly $\Lambda_j$, the feasibility of $(\tilde{\alpha}, \tilde{\beta})$ for $(D_j)$ will imply, by duality in linear programming, that $\Lambda_j = Z[P_j]$. $(\tilde{\alpha}, \tilde{\beta})$ is feasible for $(D_j)$ if and only if $\tilde{\alpha} \ge 0$ and $\forall k \in \{1, \dots, u_j\}$, $\tilde{\alpha} + k \tilde{\beta} \ge f_{jk}$. Noting that $f_{j,0} = 0$ (so that the case $k = 0$ yields exactly $\tilde{\alpha} \ge 0$), we only have to prove the following result:

$$\forall k \in \{0, \dots, u_j\}, \quad \tilde{\alpha} + k \tilde{\beta} \ge f_{jk}.$$

Let us denote $\Pi_j(k) = (p_j + 1 - k) f_{j, p_j} + (k - p_j) f_{j, p_j + 1} - f_{jk}$. We thus have to prove that:

$$\forall k \in \{0, \dots, u_j\}, \quad \Pi_j(k) \ge 0.$$

Using the definition of $f_{jk}$ ($f_{jk} = c_j k - d_j k^2$), we find with some easy algebra:

$$\Pi_j(k) = d_j k^2 - \left( 2 p_j + 1 \right) d_j k + d_j p_j \left( p_j + 1 \right).$$
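This step can be checked numerically; the Python sketch below (hypothetical data) verifies that $\Pi_j(k)$ equals $d_j (k - p_j)(k - p_j - 1)$, the factored form of the polynomial above, which is nonnegative at every integer k:

```python
# Sketch: Pi_j(k) = (p+1-k) f(p) + (k-p) f(p+1) - f(k), with
# f(t) = c*t - d*t**2, factors as d*(k-p)*(k-p-1).

def pi(k, p, c, d):
    f = lambda t: c * t - d * t ** 2
    return (p + 1 - k) * f(p) + (k - p) * f(p + 1) - f(k)

c, d, u, p = 10, 2, 5, 2
for k in range(u + 1):
    assert pi(k, p, c, d) == d * (k - p) * (k - p - 1)
print([pi(k, p, c, d) for k in range(u + 1)])   # [12, 4, 0, 0, 4, 12]
```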

Thus $\Pi_j(k)$ is a degree-2 polynomial in k. Its discriminant is equal to $d_j^2$, so $\Pi_j(k)$ admits two roots: $p_j$ and $p_j + 1$. Since the coefficient of $k^2$ in $\Pi_j(k)$, namely $d_j$, is nonnegative, $\Pi_j(k)$ is nonnegative for k outside the roots, namely for $k \le p_j$ and $k \ge p_j + 1$. Since we only consider integer values of k, we get:

$$\forall k \in \{0, \dots, u_j\}, \quad \Pi_j(k) \ge 0,$$

which implies that $(\tilde{\alpha}, \tilde{\beta})$ is feasible for $(D_j)$ and therefore that $\Lambda_j$ is an overestimation of $\sum_{k=1}^{u_j} f_{jk} y'_{jk}$:

$$\sum_{k=1}^{u_j} f_{jk} y'_{jk} \le \Lambda_j = \sum_{k=1}^{u_j} s_{jk} y_{jk}.$$

This last inequality implies that $h(y') \le g(y)$ and therefore that $Z[\overline{KP_2, w}] \le Z[\overline{KP, w}]$.

• We now show that $Z[\overline{KP, w}] \le Z[\overline{KP_2, w}]$.

The proof is analogous to that of the previous point. Let y be an optimal solution of $(\overline{KP, w})$. We derive from y a solution y' feasible for $(\overline{KP_2, w})$ such that $g(y) = h(y')$.

Since $(\overline{KP, w})$ is a linear continuous knapsack problem, y admits at most one fractional component, $y_{j_0, k_0}$. If such a fractional variable $y_{j_0, k_0}$ exists, we denote $x_{j_0} = \sum_{k=1}^{u_{j_0}} y_{j_0 k}$, $p_{j_0} = \lfloor x_{j_0} \rfloor$ and $\varepsilon_{j_0} = x_{j_0} - p_{j_0}$; $p_{j_0}$ and $\varepsilon_{j_0}$ are respectively the integer and fractional parts of $x_{j_0}$. Since $x_{j_0} \le u_{j_0}$, $p_{j_0} < u_{j_0}$. We begin to define y':

$$y'_{j_0, p_{j_0}} = 1 - \varepsilon_{j_0}, \qquad y'_{j_0, p_{j_0} + 1} = \varepsilon_{j_0}, \qquad y'_{j_0 k} = 0 \ \text{for}\ k \in \{1, \dots, u_{j_0}\} \setminus \{p_{j_0}, p_{j_0} + 1\}.$$

For all $j \in \{1, \dots, n\} \setminus \{j_0\}$ (or for all $j \in \{1, \dots, n\}$ if y has no fractional component, i.e. if y is integer), $y_{jk}$ belongs to $\{0,1\}$ for every k, and therefore $x_j = \sum_{k=1}^{u_j} y_{jk}$ is an integer smaller than or equal to $u_j$. We then complete the definition of y':

$$y'_{j, x_j} = 1 \ \text{if}\ x_j \ge 1, \qquad y'_{jk} = 0 \ \text{for}\ k \in \{1, \dots, u_j\} \setminus \{x_j\}; \qquad \text{if}\ x_j = 0, \ y'_{jk} = 0 \ \forall k \in \{1, \dots, u_j\}.$$

We immediately get:

$$\sum_{k=1}^{u_j} k\, y'_{jk} = x_j = \sum_{k=1}^{u_j} y_{jk} \ \text{for}\ j \ne j_0, \qquad \sum_{k=1}^{u_{j_0}} k\, y'_{j_0 k} = p_{j_0} \left( 1 - \varepsilon_{j_0} \right) + \left( p_{j_0} + 1 \right) \varepsilon_{j_0} = p_{j_0} + \varepsilon_{j_0} = x_{j_0} = \sum_{k=1}^{u_{j_0}} y_{j_0 k},$$

which implies that y' verifies the surrogate constraint of $(\overline{KP_2, w})$ (since y verifies the one of $(\overline{KP, w})$). Since $\sum_{k=1}^{u_j} y'_{jk} \le 1$ for all $j \in \{1, \dots, n\}$, y' is feasible for $(\overline{KP_2, w})$.

Let us now prove that $g(y) = h(y')$.

- For $j = j_0$, we have:

$$\sum_{k=1}^{u_{j_0}} f_{j_0 k} y'_{j_0 k} = f_{j_0, p_{j_0}} y'_{j_0, p_{j_0}} + f_{j_0, p_{j_0} + 1} y'_{j_0, p_{j_0} + 1} = \left( 1 - \varepsilon_{j_0} \right) f_{j_0, p_{j_0}} + \varepsilon_{j_0} f_{j_0, p_{j_0} + 1} = \Lambda_{j_0}.$$

Now:

$$\sum_{k=1}^{u_{j_0}} s_{j_0 k} y_{j_0 k} = \sum_{k=1}^{p_{j_0}} s_{j_0 k} + \varepsilon_{j_0} s_{j_0, p_{j_0} + 1} = \Lambda_{j_0}.$$

This equality holds since $(\overline{KP, w})$ is a linear continuous knapsack problem: for a given index j, all the variables $y_{jk}$, $k \in \{1, \dots, u_j\}$, have the same coefficient in the knapsack constraint (namely $\sum_{i=1}^{m} w_i a_{ij}$), and the coefficients of these variables in the objective function g(y) are in decreasing order (because of the concavity of the function $f_j$): $s_{j1} \ge s_{j2} \ge \dots \ge s_{j u_j}$. Therefore the following result holds:

$$\sum_{k=1}^{u_{j_0}} f_{j_0 k} y'_{j_0 k} = \sum_{k=1}^{u_{j_0}} s_{j_0 k} y_{j_0 k}.$$

- For $j \ne j_0$, we have $\sum_{k=1}^{u_j} f_{jk} y'_{jk} = f_{j, x_j}$ and $\sum_{k=1}^{u_j} s_{jk} y_{jk} = \sum_{k=1}^{x_j} s_{jk} = f_{j, x_j}$. This last equality holds for the same reason as in the previous paragraph (the coefficients of the variables $y_{jk}$ in the unique knapsack constraint are identical, and the coefficients of these variables in the objective function are in decreasing order). This implies that:

$$\sum_{k=1}^{u_j} f_{jk} y'_{jk} = \sum_{k=1}^{u_j} s_{jk} y_{jk}.$$

We thus get $g(y) = h(y')$, which implies the result $Z[\overline{KP, w}] \le Z[\overline{KP_2, w}]$.

†

Corollary 3.2 The upper bound obtained by the surrogate linearization of [11] (using expansion (I)) and the upper bound obtained by the surrogate linearization using expansion (II) are equal: $Z[\overline{KP, w^*}] = Z[\overline{KP_2, \omega^*}]$, where w* and ω* respectively stand for the optimal surrogate multipliers of the LP-relaxations of problems (MKP) and (MKP₂).

Proof 3.2 The result of Proposition 3.1 holds whatever surrogate multiplier w is used, so it remains true if we successively consider w* and ω* as the surrogate multiplier. †

We have thus proved in this section that solving (QMKP) with a linearization technique and a surrogate relaxation may equivalently be done with expansion (I) or with expansion (II) of the integer variables: the two expansions provide the same upper bound. However, expansion (II) involves n additional constraints without improving the quality of the upper bound in comparison with expansion (I).

Section 4. Binary expansion of integer variables

This section is dedicated to the use of a binary expansion (referred to as expansion (III)) of the integer variables in the upper bound procedure developed in [11]. Such an expansion consists in rewriting each integer variable $x_j$ (j = 1, …, n) as:

$$x_j = \sum_{k=1}^{\lceil \log_2(u_j) \rceil} 2^{k-1} z_{jk}, \quad \text{where}\ z_{jk} \in \{0,1\} \quad (1)$$

Using (1) involves only $\sum_{j=1}^{n} \lceil \log_2(u_j) \rceil$ 0-1 variables, instead of $\sum_{j=1}^{n} u_j$ 0-1 variables when a direct expansion is applied.

As previously mentioned, the aim of this study is to compare other expansion techniques within the upper bound process developed in [11]. Since the variables are now binary, the next step of the algorithm of Quadri et al. [11] consists in a piecewise linear interpolation of the objective function, so as to obtain an equivalent 0-1 linear problem with only $\sum_{j=1}^{n} \lceil \log_2(u_j) \rceil$ 0-1 variables.

The following proposition shows the non-applicability of combining expansion (III) with a linear interpolation to transform (QMKP) into an equivalent 0-1 piecewise linear problem using only $\sum_{j=1}^{n} \lceil \log_2(u_j) \rceil$ 0-1 variables.

Proposition 4.1 Consider the integer quadratic multi-knapsack problem (QMKP). If a binary expansion is used to rewrite the integer variables of (QMKP) with $\sum_{j=1}^{n} \lceil \log_2(u_j) \rceil$ 0-1 variables, then an equivalent 0-1 piecewise linear problem cannot be established by applying a linear interpolation.

Proof 4.1 Assume that it is possible to replace each function $f_j(x_j) = c_j x_j - d_j x_j^2$, for all j from 1 to n, by a linear function $g_j(z_j)$ using exactly $\lceil \log_2(u_j) \rceil$ 0-1 variables. We denote this assumption by (H).

If (H) is true, then there should exist coefficients $g_{jk}$ such that:

$$f_j(x_j) = c_j x_j - d_j x_j^2 = \sum_{k=1}^{\lceil \log_2(u_j) \rceil} g_{jk} z_{jk}, \quad x_j \in \{1, \dots, u_j\} \quad (2)$$

with

$$x_j = \sum_{k=1}^{\lceil \log_2(u_j) \rceil} 2^{k-1} z_{jk}, \quad z_{jk} \in \{0,1\}. \quad (3)$$

The assumption (H) must be satisfied for all problem data. Let us set $u_j = 3$, $c_j = 10$ and $d_j = 4$.

Consequently:

- if $x_j = 1$, then (cf. (2) and (3)) $z_{j1} = 1$, $z_{j2} = 0$ and $g_{j1} = 6$ (i);
- if $x_j = 2$, then (cf. (2) and (3)) $z_{j1} = 0$, $z_{j2} = 1$ and $g_{j2} = 4$ (ii);
- if $x_j = 3$, then (cf. (3)) $z_{j1} = z_{j2} = 1$, which implies (cf. (2)) that $3 c_j - 9 d_j = g_{j1} + g_{j2} = -6$ (iii).

The system of equations (i), (ii) and (iii) clearly has no solution, since (i) and (ii) give $g_{j1} + g_{j2} = 10 \ne -6$. Consequently there is a contradiction with (H).

†
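The counterexample can be verified mechanically (a Python sketch with the data of the proof):

```python
# Sketch: with u_j = 3, c_j = 10, d_j = 4 and x = z1 + 2*z2, no pair
# (g1, g2) can satisfy g1*z1 + g2*z2 = f(x) for all x in {1, 2, 3}.
f = lambda x: 10 * x - 4 * x ** 2
g1 = f(1)                # forced by x = 1, i.e. (z1, z2) = (1, 0): g1 = 6
g2 = f(2)                # forced by x = 2, i.e. (z1, z2) = (0, 1): g2 = 4
print(g1 + g2, f(3))     # 10 vs -6: the system (i)-(iii) is infeasible
assert g1 + g2 != f(3)
```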

Proposition 4.1 shows the non-applicability of a binary expansion of the integer variables of (QMKP) for transforming the initial problem into a 0-1 linear program. Consequently, the upper bound method proposed in [11] cannot be applied together with expansion (III).

Section 5. Concluding remarks

In this paper we have theoretically compared the use of three techniques to rewrite integer variables into zero-one variables in an upper bound procedure for (QMKP) developed by Quadri et al. [11], which provides, to the best of our knowledge, a bound closer to the optimum than the existing methods. More specifically, we have compared the direct expansion of the integer variables (I), originally employed in [11], with a direct expansion with additional constraints (II) and with a binary expansion (III). We have proved that (II) provides the same upper bound as the one computed in [11], whereas it involves n additional constraints; we therefore do not expect an improvement of the upper bound computational time. Finally, we have provided a proof of the non-applicability of the binary expansion (III) in the upper bound method.
