BRANCHING ALGORITHM TO REDUCE AND SOLVE THE KNAPSACK PROBLEM IN POLYNOMIAL TIME



HAL Id: hal-02508428

https://hal.archives-ouvertes.fr/hal-02508428

Preprint submitted on 14 Mar 2020


BRANCHING ALGORITHM TO REDUCE AND SOLVE THE KNAPSACK PROBLEM IN POLYNOMIAL TIME

Aboudramane Traore

To cite this version:

Aboudramane Traore. BRANCHING ALGORITHM TO REDUCE AND SOLVE THE KNAPSACK PROBLEM IN POLYNOMIAL TIME. 2020. ⟨hal-02508428⟩


BRANCHING ALGORITHM TO REDUCE AND SOLVE THE KNAPSACK PROBLEM IN POLYNOMIAL TIME

Author: ABOUDRAMANE TRAORE

EMAIL: pytha1999@gmail.com TEL: +223 95507491

Informatics and telecommunications engineering department

École Nationale d'Ingénieurs Abderhamane Baba Touré (National School of Engineers Abderhamane Baba Touré)

410, Av. Van Vollenhoven BP 242 – Tél: (223) 20 22 27 36 – Fax: (223) 20 21 50 38 / Bamako – MALI.


I. ABSTRACT:

This document is written in order to make available to experts and amateurs in theoretical mathematics and computer science my idea on reducing the search intervals of the knapsack problem, a problem cited among the 21 problems of Richard Karp.

The method is a branching algorithm based on the ratio TOTAL WEIGHT / INITIAL WEIGHT, which determines the type of bag and consequently reduces its search interval considerably, according to the value of this ratio. By combining this reduction algorithm with the different existing exact resolution methods, we solve the KP in polynomial time.

With this method, there are only four kinds of bags, which I have named:

• Perfect sack

• Secondary sack

• Medium sack

• Rare sack (personally, this sack should not exist because it is very bad: the objects' weights are not proportional to their profits).

Moreover, all these bags have their own search intervals, which I specify below before giving my prototype of the algorithm and giving my opinion on the P vs NP conjecture.

II. INTRODUCTION:

a) INTRODUCTION OF KNAPSACK PROBLEM:

The knapsack problem or rucksack problem is a problem in combinatorial optimization: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.

The problem often arises in resource allocation where there are financial constraints, and it is studied in fields such as combinatorics, computer science, complexity theory, cryptography, applied mathematics, and daily fantasy sports.

The knapsack problem has been studied for more than a century, with early works dating as far back as 1897.[1] The name "knapsack problem" dates back to the early works of mathematician Tobias Dantzig (1884–1956),[2] and refers to the commonplace problem of packing the most valuable or useful items without overloading the luggage.

Knapsack example (same as DP)

Item   Profit   Weight
1      11       1
2      21       11
3      31       21
4      33       23
5      43       33
6      53       43
7      55       45
8      65       55

• Maximum weight: 110


It has already been demonstrated that the knapsack problem is NP-complete, so I will not repeat that demonstration here.

b) INTRODUCTION OF P VS NP PROBLEM:

If it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This is the essence of the P vs NP question. Typical of the NP problems is the Hamiltonian Path Problem: given N cities to visit, how can one do this without visiting a city twice? If you give me a solution, I can easily check that it is correct, but I cannot so easily find a solution.

c) NOTE:

In addition, one of the ways to settle this question would be to find a polynomial algorithm that solves an NP-complete problem. I will rely on that to give my opinion on P vs NP.

III. THE DIFFERENT SEARCH INTERVALS:

1. If WEIGHT_total / WEIGHT_initial < 7/5, then the good combination is: np = N − 1.

2. If 7/5 < WEIGHT_total / WEIGHT_initial < N/2, then the good combination is within the interval [WEIGHT_total / WEIGHT_initial ; N − WEIGHT_total / WEIGHT_initial].

3. If WEIGHT_total / WEIGHT_initial = N/2 with N/2 ≠ 2, then the good combination is within the interval [N/2 − 1 ; N/2 + 2].

4. If WEIGHT_total / WEIGHT_initial > N/2, then the good combination is within the interval [WEIGHT_total / (2 × WEIGHT_initial) ; N/2 − 1].
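As a minimal sketch of this classification (assuming the ratio has already been rounded to an integer as described in section IV below, and using hypothetical Python names), the four cases can be mapped to a search interval on the order (size) of the combination:

def search_interval(ratio, n):
    """Map the rounded ratio WEIGHT_total / WEIGHT_initial and the number of
    items n to the interval [low, high] of combination sizes (section III).
    Case 1 returns the single size n - 1."""
    if ratio < 7 / 5:                    # case 1: perfect sack
        return (n - 1, n - 1)
    if ratio < n / 2:                    # case 2: secondary sack
        return (ratio, n - ratio)
    if ratio == n / 2 and n / 2 != 2:    # case 3: medium sack
        return (n // 2 - 1, n // 2 + 2)
    return (ratio // 2, n // 2 - 1)      # case 4: rare sack (flooring ratio / 2 is my choice)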


IV. CALCULATION OF THE RATIO WEIGHT_total / WEIGHT_initial:

Even if it seems simple to calculate the ratio total weight / initial weight, sometimes we must round the result to the nearest integer. The rounding threshold is fixed at 0.4, which means:

If { (weight_total / weight_initial)[decimal] − (weight_total / weight_initial)[integer] > 0.4 }

Then { weight_total / weight_initial = (weight_total / weight_initial)[integer] + 1 }

Else { weight_total / weight_initial = (weight_total / weight_initial)[integer] }
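A minimal sketch of this rounding rule in Python (the function name is mine; the paper's pseudocode is language-neutral):

def rounded_ratio(weight_total, weight_initial):
    """Round weight_total / weight_initial as in section IV: round up when the
    fractional part exceeds 0.4, otherwise round down."""
    ratio = weight_total / weight_initial
    integer_part = int(ratio)
    return integer_part + 1 if ratio - integer_part > 0.4 else integer_part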

Note:

For practical cases, I rely on the fact that the weights and the profits of the objects are proportional.

V. THE DIFFERENT BAGS AND THEIR SEARCH INTERVALS:

1) Perfect sack: WEIGHT_total / WEIGHT_initial < 7/5, then the good combination is: np = N − 1.

With a capacity almost equal to the total weight of the objects, this bag has a large storage capacity, which reduces the list of possibilities: the good combination is of order N − 1 among N items, so an exhaustive search over all combinations of this order gives the optimal combination in polynomial time, hence its name of IDEAL BAG.
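Since there are only N combinations of order N − 1 (each obtained by leaving a single item out), this case can be checked directly in O(N) time. A minimal sketch, with hypothetical names and 0-indexed lists of weights and values:

def best_leave_one_out(weights, values, capacity):
    """Perfect sack case: try the N combinations of size N - 1 and keep the
    best one whose total weight fits within the capacity."""
    total_w, total_v = sum(weights), sum(values)
    best = None  # (value, index of the item left out)
    for i in range(len(weights)):
        if total_w - weights[i] <= capacity:
            candidate = total_v - values[i]
            if best is None or candidate > best[0]:
                best = (candidate, i)
    return best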

2) Secondary sack: 7/5 < WEIGHT_total / WEIGHT_initial < N/2, then the good combination is within the interval [WEIGHT_total / WEIGHT_initial ; N − WEIGHT_total / WEIGHT_initial].

 This case is the most frequent, since here the weights of the objects and their values are so close that visibly there are several good answers. All the difficulty we find in solving the knapsack problem is due to these kinds of bags, because of their enormous search interval.

 So, to remedy this problem, I have developed 3 reduction algorithms intended to tighten the limits of the search interval for these types of cases, which respectively have the role of:

• Finding the pivot value

• Searching for the minimum value

• Searching for the maximum value

 Finding the pivot value:

The goal of this algorithm is to find a value used to make comparisons. This method very quickly eliminates the combinations that are far from optimal, by searching directly among the parent combinations and finding a temporary maximum value, which serves as a pivot for the comparison.

Therefore, each time the value of a combination exceeds this maximum value, we replace the maximum value by that value, until the end of the search interval, and we then return the combination achieving the maximum value.

// find the pivot value //
// sort the table in descending order of profits //
// knowing that at least the coefficient of the ratio can be admitted into the bag //
weight_pivot = 0; value_pivot = 0; item_pivot = 0;
For i from 1 to N do
    If weight[i] + weight_pivot <= weight_initial Then
        weight_pivot = weight_pivot + weight[i]; value_pivot = value_pivot + value[i]; item_pivot = item_pivot + 1;
    End if;
End for;
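A runnable Python version of this pivot search, under my reading of the pseudocode (scan the items in decreasing order of profit and keep every item that still fits); the names are hypothetical:

def find_pivot(weights, values, capacity):
    """Greedy pivot search: scan items by decreasing profit, keep each item
    that still fits, and return (pivot value, pivot weight, item count)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    weight_pivot = value_pivot = item_pivot = 0
    for i in order:
        if weights[i] + weight_pivot <= capacity:
            weight_pivot += weights[i]
            value_pivot += values[i]
            item_pivot += 1
    return value_pivot, weight_pivot, item_pivot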


• Knowing the pivot value and the parent combinations already explored, one can search for the other combinations, starting at order item_pivot + 1 if it is lower than N − (WEIGHT_total / WEIGHT_initial), while comparing the values of the different combinations with the pivot value; if it turns out that a value is greater than the pivot value, the pivot value is replaced by that value.

 Minimum value search

Using the combination of minimum weights: by adding the objects that have the smallest weights, in increasing order, while counting the number of items added, we can predict the maximum order of a combination whose weight does not exceed the planned capacity:

// sort the table in increasing order of weights //
value = 0; weight_sum = 0; item_max = 0; j = 1;
While j <= N and weight[j] + weight_sum <= weight_initial do
    weight_sum = weight_sum + weight[j]; value = value + value[j]; item_max = item_max + 1;
    j = j + 1;
End while;

After that, the search interval will be [WEIGHT_total / WEIGHT_initial ; item_max] and not [WEIGHT_total / WEIGHT_initial ; N − WEIGHT_total / WEIGHT_initial], and we will find item_max < N − WEIGHT_total / WEIGHT_initial.
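A runnable sketch of this upper bound (hypothetical names): it simply counts how many of the lightest items fit together within the capacity:

def max_items(weights, values, capacity):
    """Upper bound item_max on the order of any feasible combination: add
    items in increasing order of weight while they still fit."""
    order = sorted(range(len(weights)), key=lambda i: weights[i])
    weight_sum = value = item_max = 0
    for j in order:
        if weights[j] + weight_sum > capacity:
            break  # heavier items cannot fit either
        weight_sum += weights[j]
        value += values[j]
        item_max += 1
    return item_max, value, weight_sum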

 Maximum value search


Using the combination of the largest values: by adding the objects with the largest values, in the order of the sorted table, while respecting the weight constraint, we can find the maximum value of the combinations of a given order, and therefore drop the idea of generating all the combinations of this order, as follows:

// after sorting the table in descending order of profits //
weight_sum = 0; item_min = (WEIGHT_total / WEIGHT_initial); value_max = 0; i = 1;
While i < N and weight[i] + weight[i + 1] <= weight_initial do
    item_min = item_min + 1;
    value_max = value[i] + value_max;
    i = i + 2;
End while;
If value_max <= value (calculated with the pivot method) Then item_min = item_min + 1;

After that, the search interval will be [item_min ; item_max] instead of [WEIGHT_total / WEIGHT_initial ; N − WEIGHT_total / WEIGHT_initial], and we will find item_min > WEIGHT_total / WEIGHT_initial.
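A literal Python transcription of my reading of this third algorithm (hypothetical names; pivot_value is assumed to be the value returned by the pivot search, and item_min starts from the rounded ratio of section IV):

def min_items(weights, values, capacity, ratio, pivot_value):
    """Lower bound item_min on the order of the good combination, following
    the pseudocode above: with items sorted by decreasing profit, advance two
    at a time while each pair of consecutive weights still fits, then adjust
    the bound against the pivot value."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    w = [weights[i] for i in order]
    v = [values[i] for i in order]
    item_min, value_max, i = ratio, 0, 0
    while i + 1 < len(w) and w[i] + w[i + 1] <= capacity:
        item_min += 1
        value_max += v[i]
        i += 2
    if value_max <= pivot_value:
        item_min += 1
    return item_min, value_max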


After all these reductions, the search interval will be [item_min ; item_max] (minimum ; maximum).

After these steps, we are free to apply whatever search method we want on this interval; here, for the purposes of the calculation, I will consider the most time-consuming one (exhaustive search).
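A minimal sketch of this restricted exhaustive search (hypothetical names), enumerating only the combinations whose order lies in the reduced interval:

from itertools import combinations

def best_in_interval(weights, values, capacity, item_min, item_max):
    """Exhaustive search restricted to combination sizes item_min..item_max."""
    best_value, best_combo = 0, ()
    for k in range(item_min, item_max + 1):
        for combo in combinations(range(len(weights)), k):
            w = sum(weights[i] for i in combo)
            if w <= capacity:
                v = sum(values[i] for i in combo)
                if v > best_value:
                    best_value, best_combo = v, combo
    return best_value, best_combo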

3) Medium sack: WEIGHT_total / WEIGHT_initial = N/2 with N/2 ≠ 2, then the good combination is within the interval [N/2 − 1 ; N/2 + 2].

This bag resembles the secondary bag, except that its interval is smaller than the latter's; to find the right combination, we can use the same reduction methods as for the secondary bag.

4) Rare sack: WEIGHT_total / WEIGHT_initial > N/2, then the good combination is within the interval [WEIGHT_total / (2 × WEIGHT_initial) ; N/2 − 1].

These cases are present only when the total weight of the objects is much higher than the capacity of the bag (weight_total ≫ weight_initial).

Personally, this bag should not be considered as a case, but since I have seen an example of it, I treat it as a particular case.

Practical example

Maximum weight W = 110


To find the optimal combination, I will apply my algorithms. Here we notice that the number of variables is N = 8 (the items of the table given in section II).

a) Let us look at the ratio weight_total / weight_initial = 232/110 ≈ 2.11.

2.11 − 2 = 0.11 < 0.4 → weight_total / weight_initial ≈ 2; the bag is of the secondary type, so the optimal combination is an element of the interval [2, 6].
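Applying the rounding rule of section IV to the data of this example (a quick check in Python):

weights = [1, 11, 21, 23, 33, 43, 45, 55]
ratio = sum(weights) / 110            # 232 / 110 = 2.109...
rounded = int(ratio) + 1 if ratio - int(ratio) > 0.4 else int(ratio)
print(rounded)                        # 2 -> secondary sack, interval [2, 8 - 2] = [2, 6]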

b) Looking for the pivot value of the variables with the first algorithm

weight_pivot = 0; value_pivot = 0; item_pivot = 0;
For i from 1 to 8 do
    If weight[i] + weight_pivot <= 110 Then
        weight_pivot = weight_pivot + weight[i]; value_pivot = value_pivot + value[i]; item_pivot = item_pivot + 1;
    End if;
End for;



At the end of this algorithm we find:

{weight_pivot = 100; value_pivot = 120; item_pivot = 2}

c) Find the maximum number of associable variables in this bag according to the minimum weights (algorithm 2)

// sort the table in increasing order of weights //
value = 0; weight_sum = 0; item_max = 0; j = 1;
While j <= 8 and weight[j] + weight_sum <= 110 do
    weight_sum = weight_sum + weight[j]; value = value + value[j]; item_max = item_max + 1;
    j = j + 1;
End while;

At the end of this algorithm we find:

{weight_sum = 89; value = 139; item_max = 5}
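As a quick check of this step (the weights of the example are already in increasing order), a few lines of Python reproduce these numbers:

weights = [1, 11, 21, 23, 33, 43, 45, 55]
values = [11, 21, 31, 33, 43, 53, 55, 65]
weight_sum = value = item_max = 0
for w, v in zip(weights, values):     # already sorted by increasing weight
    if weight_sum + w > 110:
        break
    weight_sum += w
    value += v
    item_max += 1
print(weight_sum, value, item_max)    # 89 139 5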

d) Look for the minimum number of associable variables in this bag according to the profits (algorithm 3)


// sort the table in descending order of profits //
weight_sum = 0; item_min = (weight_total / weight_initial); value_max = 0;
For i from 1 to 8 do
    If weight[i] + weight_sum <= 110 Then
        weight_sum = weight_sum + weight[i]; value_max = value_max + value[i]; item_min = item_min + 1;
    End if;
End for;
If value_max <= value (calculated with the second algorithm) Then item_min = item_min + 1;

At the end of this algorithm we get: {value_max = 129; item_min = 3}

After all this, the search interval is reduced to [3, 5]. When I applied the exhaustive search algorithm to the problem on this interval, I found the optimal combination (value_opt = 159; items {1, 2, 3, 5, 6}; weight_opt = 109).
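This final result can be checked with the restricted exhaustive search sketched earlier or, as below, with a plain brute force over all subsets, which confirms both the optimum and that its order (5) falls inside [3, 5]:

from itertools import combinations

weights = [1, 11, 21, 23, 33, 43, 45, 55]
values = [11, 21, 31, 33, 43, 53, 55, 65]
best = max(
    (sum(values[i] for i in c), c)
    for k in range(len(weights) + 1)
    for c in combinations(range(len(weights)), k)
    if sum(weights[i] for i in c) <= 110
)
print(best)   # (159, (0, 1, 2, 4, 5)) -> items 1, 2, 3, 5, 6 with weight 109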

VI. TIME COMPLEXITY:

This part is the most important, since it allows me to give my opinion on the P vs NP problem.

Here I consider that the search is done with the so-called exhaustive search technique, so the time I quote is the possible time in the worst case.

After my research, I would say that the complexity of the KP is related to the kind of bag we are dealing with, but whatever the case, the time it takes to find the right answer with this branching technique is less than O(N^7).

VII. CONCLUSION:

With a little hindsight, we note that the NP-complete and NP-hard combinatorial problems all have in common the fact that they have several constraints to respect for their resolution. Whatever their complexity, by separating them according to a clear logic we can solve them in polynomial time; for me, instead of wanting to satisfy all these constraints at the same time, it is better to go from the most obvious (affordable) constraint towards the most complicated one. In my opinion, it is only in this way that we can solve these problems in polynomial time, and so, in the end, I would say that P = NP.

References
