Performance Comparisons of Various Strategies


In this section, we apply DLM-General to solve 12 difficult benchmark problems in order to determine the effects of different combinations of strategies on solution time and quality.

Our goal is to select, if one exists, a combination of parameters or strategies that generalizes across these 12 test problems in terms of solution time and quality. The 12 problems are: G1, G2 and G5 from [135, 121], and 2.1, 2.7.5, 5.2, 5.4, 6.2, 6.4, 7.2, 7.3 and 7.4 from [57]. Since these benchmark problems are continuous constrained NLPs, we have created corresponding discrete and mixed-integer versions, constructed in such a way that solutions of the discrete or mixed-integer problems can be compared directly to their continuous counterparts.

We transform a continuous constrained NLP with $n$ variables, $x_1, \dots, x_n$, to a discrete or mixed-integer version as follows. In creating a discrete NLP, we discretize all variables in the original problem, whereas in creating an MINLP, we let variables with odd indices be continuous and those with even indices be discrete. In discretizing continuous variable $x_j$ in the range $[R_{lj}, R_{uj}]$, we force it to take discrete values from the set:

$$
A_j = \begin{cases}
\left\{\, R_{lj} + \dfrac{(R_{uj} - R_{lj})\, i}{s},\ \ i = 0, 1, \dots, s \,\right\} & \text{if } R_{uj} - R_{lj} < 1 \\[2ex]
\left\{\, R_{lj} + \dfrac{i}{s},\ \ i = 0, 1, \dots, \lfloor (R_{uj} - R_{lj})\, s \rfloor \,\right\} & \text{if } R_{uj} - R_{lj} \ge 1
\end{cases} \qquad (4.8)
$$

where $s = 10^7$. For example, if $R_{uj} - R_{lj} = 1$, then $x_j$ will be discretized into a set of $10^7$ discrete points. Obviously, given an MINLP with $n$ dimensions, the discrete subspace created has at least $10^{7\lfloor n/2 \rfloor}$ points, a space so huge that it is impossible for any algorithm to enumerate. Using such a finely discretized space allows us to compare directly the solutions of the original continuous versions and those found by DLM in the transformed discrete and mixed-integer versions.
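To make the transformation concrete, here is a minimal sketch (in Python, with function and variable names of our own choosing rather than anything from DLM-General) that snaps a continuous value to its nearest member of $A_j$ as defined in (4.8); the set itself is never materialized, since it is far too large.

```python
import math

S = 10**7  # discretization granularity s in (4.8)

def nearest_discrete(x, r_lo, r_hi, s=S):
    """Snap continuous x in [r_lo, r_hi] to the nearest point of A_j (4.8).

    If the range is narrower than 1, A_j holds s + 1 evenly spaced points
    across the range; otherwise its points are spaced 1/s apart.
    """
    width = r_hi - r_lo
    if width < 1:
        step, i_max = width / s, s
    else:
        step, i_max = 1.0 / s, math.floor(width * s)
    i = min(max(round((x - r_lo) / step), 0), i_max)  # clamp to valid indices
    return r_lo + i * step

# A range of width 1 is discretized into about 10^7 points, as noted above.
print(nearest_discrete(0.123456789, 0.0, 1.0))  # 0.1234568
```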

Next, we compare various combinations of parameters used in DLM-General. For neighborhood search, we tested three different neighborhood generators governed by Cauchy (denoted by N1), Gaussian (denoted by N2), and uniform (denoted by N3) distributions. For global search, we tested three different tabu-list ($Q$) sizes: 0 (denoted by S1), 6 (denoted by S2) and 10 (denoted by S3). For the relax-and-tighten strategy, we tested three different factors for tightening and relaxing constraints, $v$: $\infty$ (denoted by T1), 1.2 (denoted by T2) and 1.5 (denoted by T3). Note that when S1 is adopted, global search is actually not performed because the size of the tabu list $Q$ is zero. Further, when T1 is employed, relax-and-tighten is not used because $V^0_{\max}/v = V^0_{\max}/\infty = 0$. For simplicity, we use N?-S?-T? to represent a chosen combination of parameters/strategies. As an example, N1-S3-T2 means that DLM-General takes a Cauchy neighborhood generator, uses a tabu list $Q$ of size 10, and sets $v$ to 1.2.
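The 27 combinations themselves are easy to enumerate mechanically; the sketch below is our own illustration, with the concrete values transcribed from the text above, building the full N?-S?-T? grid.

```python
from itertools import product

NEIGHBORHOOD = {"N1": "cauchy", "N2": "gaussian", "N3": "uniform"}
TABU_SIZE    = {"S1": 0, "S2": 6, "S3": 10}
TIGHTEN_V    = {"T1": float("inf"), "T2": 1.2, "T3": 1.5}

combos = {
    f"{n}-{s}-{t}": (NEIGHBORHOOD[n], TABU_SIZE[s], TIGHTEN_V[t])
    for n, s, t in product(NEIGHBORHOOD, TABU_SIZE, TIGHTEN_V)
}
assert len(combos) == 27          # 3 x 3 x 3 parameter grid
print(combos["N1-S3-T2"])         # the reference: ('cauchy', 10, 1.2)
```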

For each combination of parameters, we evaluate the aforementioned 12 problems (mixed-integer versions) from randomly generated starting points until a feasible solution is found.

We call one such run a feasible run: it consists of one or more DLM runs (defined in Section 4.1.6), the last of which finds a feasible solution. We repeated until 100 feasible runs were performed and recorded the CPU times and corresponding solution qualities. For a given combination of parameters, let $t_x(i)$ and $f_x(i)$ be the CPU time and objective of the $i$th feasible run. In order to compare all these combinations of parameters, we normalize by using one set of parameters as a reference. In our experiments, we select N1-S3-T2 as the reference for normalizing all 27 combinations of parameters as follows:

$$
r_t(i) = \begin{cases}
t_x(i)/t_r(i) - 1.0, & \text{if } t_x(i) > t_r(i) \\
1.0 - t_r(i)/t_x(i), & \text{if } t_x(i) \le t_r(i)
\end{cases} \qquad (4.9)
$$

$$
r_f(i) = \frac{f_x(i) - f_r(i)}{|f_r(i)|} \qquad (4.10)
$$

where $t_r(i)$ and $f_r(i)$ are the CPU time and objective-function value of the $i$th feasible run for the reference strategy N1-S3-T2, $i = 1, 2, \dots, 100$. In our normalizations, (4.9) measures the symmetric speedup [205] in order to give equal weights to speedups and slowdowns in CPU times. On the other hand, because objective-function values might be negative, (4.10) measures relative improvements and degradations in objectives. The averages $\bar{r}_t$ and $\bar{r}_f$ of all

the 100 normalized CPU times and solution qualities are computed as follows:

$$
\bar{r}_f = \frac{1}{100} \sum_{i=1}^{100} r_f(i), \qquad
\bar{r}_t = \frac{1}{100} \sum_{i=1}^{100} r_t(i) \qquad (4.11)
$$

A combination of parameters is considered better than the reference N1-S3-T2 if both $\bar{r}_f$ and $\bar{r}_t$ are negative.
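The normalization and the decision rule translate directly into code. The following sketch restates (4.9)-(4.11); the argument lists (paired per-run CPU times and objectives for the candidate $x$ and the reference $r$) are our own framing, not code from the thesis.

```python
def symmetric_speedup(t_x, t_r):
    """Eq. (4.9): weighs speedups (negative) and slowdowns (positive) equally."""
    return t_x / t_r - 1.0 if t_x > t_r else 1.0 - t_r / t_x

def relative_quality(f_x, f_r):
    """Eq. (4.10): relative change in objective; |f_r| handles negative values."""
    return (f_x - f_r) / abs(f_r)

def better_than_reference(times_x, objs_x, times_r, objs_r):
    """Eq. (4.11): average both metrics over the feasible runs; the candidate
    wins only if both averages are negative."""
    n = len(times_x)  # 100 feasible runs in the experiments above
    r_t = sum(symmetric_speedup(tx, tr) for tx, tr in zip(times_x, times_r)) / n
    r_f = sum(relative_quality(fx, fr) for fx, fr in zip(objs_x, objs_r)) / n
    return r_t, r_f, (r_t < 0.0 and r_f < 0.0)
```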

Figures 4.5 through 4.7 show the results of evaluating the 12 difficult mixed-integer constrained NLP benchmarks. The left three diagrams (indexed by a, d and g) of Figures 4.5 through 4.7 lead us to conclude that, without relax-and-tighten, some problems with many equality constraints (7.4, for example) elude solutions even after 100 runs. On the other hand, if $v$ is set too large ($v = 1.5$, for instance), DLM-General cannot find solutions to all 12 benchmark problems, as shown in Figure 4.6. Global search clearly improves solution quality: for example, when compared to the reference, N1-S2-T2 finds much better $CLM_{dn}$ than N1-S1-T2. Moreover, solution times based on a Cauchy distribution outperform those based on the other two distributions. This is expected because the Cauchy distribution has a long, flat tail that enables a search to explore wider regions of the search space more effectively. Overall, we conclude that the strategy/parameter combination N1-S2-T2 gives the best performance; hence, we use it to solve general constrained NLPs in the rest of the experiments.
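The long-tail argument is easy to verify numerically: sampling all three generator distributions shows how much more often a Cauchy generator proposes large steps. The snippet below is only an illustration of that property using standard NumPy distributions, not the actual neighborhood generator of DLM-General.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

samples = {
    "Cauchy (N1)":   rng.standard_cauchy(n),
    "Gaussian (N2)": rng.standard_normal(n),
    "uniform (N3)":  rng.uniform(-1.0, 1.0, n),
}

# Fraction of proposed steps larger than 5 units: only the Cauchy tail keeps
# producing long jumps, which is what lets the search reach distant regions.
for name, steps in samples.items():
    print(f"{name:14s} P(|step| > 5) ~ {np.mean(np.abs(steps) > 5):.6f}")
```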

Finally, Figures 4.8 through 4.10 plot the performance of DLM-General using N1-S2-T2 (Cauchy, tabu-list $Q$ size = 6 and $v = 1.2$) on the 12 benchmark problems in discrete, continuous and mixed-integer forms, respectively. Each point in a graph represents a pair of CPU time and solution quality for one feasible run of DLM-General. As a local-search method in Lagrangian space, DLM-General usually finds different solutions when started from different randomly generated starting points. Within 100 feasible runs, DLM-General is able to find good solutions for most of the problems tested.
