
4. Computational results

4.3. Analysis of Tabu Search algorithmic behavior

This section presents the results of tests performed to better understand the behavior of the Tabu Search meta-heuristic. The analysis of these results should help in assessing the algorithm's characteristics and applicability.

The first analysis concerns the performance of the algorithm in relation to the network characteristics. An analysis of the relative gaps between the best feasible MIP-solver solution and the lower bound for the R instances shows that they peak for the loose-capacity instances (labeled FXX,C1) and decrease as the capacity becomes tighter.

This is somewhat different from the observations made for general network design problems, where the tendency is that the higher the fixed/variable cost ratio, the larger the gap and, thus, the more difficult the instances (the F10,CX labeled instances being the most difficult, independently of the problem size).

Instance Nodes Arcs Com. XpressMP Bound XP / Bound TS TS / XP TS / Bound
C20,230,200,V,L 20 228 200 101.112 86.180 14.77% 102.919 1.76% 16.26%
C20,230,200,F,L 20 230 200 153.534 122.311 20.34% 150.764 -1.84% 18.87%
C20,230,200,V,T 20 229 200 105.840 92.608 12.50% 103.371 -2.39% 10.41%
C20,230,200,F,T 20 228 200 154.026 124.358 19.26% 149.942 -2.72% 17.06%
C20,300,200,V,L 20 294 200 81.184 73.894 8.98% 82.533 1.63% 10.47%
C20,300,200,F,L 20 292 200 131.876 110.533 16.18% 128.757 -2.42% 14.15%
C20,300,200,V,T 20 291 200 78.675 74.583 5.20% 78.571 -0.13% 5.08%
C20,300,200,F,T 20 291 200 127.412 106.628 16.31% 116.338 -9.52% 8.35%
C30,520,100,V,L 30 518 100 55.138 54.160 1.77% 55.981 1.51% 3.25%
C30,520,100,F,L 30 516 100 n/a 92.636 n/a 104.533 n/a 11.38%
C30,520,100,V,T 30 519 100 53.125 52.681 0.84% 54.493 2.51% 3.33%
C30,520,100,F,T 30 517 100 106.761 97.653 8.53% 105.167 -1.52% 7.14%
C30,520,400,V,L 30 520 400 n/a 111.054 n/a 119.735 n/a 7.25%
C30,520,400,F,L 30 520 400 n/a 143.335 n/a 162.360 n/a 11.72%
C30,520,400,V,T 30 516 400 n/a 114.725 n/a 120.421 n/a 4.73%
C30,520,400,F,T 30 518 400 n/a 148.210 n/a 161.978 n/a 8.50%
C30,700,100,V,L 30 680 100 48.849 48.400 0.92% 49.429 1.17% 2.08%
C30,700,100,F,L 30 680 100 65.516 59.483 9.21% 63.889 -2.55% 6.90%
C30,700,100,V,T 30 687 100 47.052 46.260 1.68% 48.202 2.39% 4.03%
C30,700,100,F,T 30 686 100 57.447 55.123 4.05% 58.204 1.30% 5.29%
C30,700,400,V,L 30 685 400 n/a 94.725 n/a 103.932 n/a 8.86%
C30,700,400,F,L 30 679 400 n/a 128.950 n/a 157.043 n/a 17.89%
C30,700,400,V,T 30 678 400 n/a 95.183 n/a 103.085 n/a 7.67%
C30,700,400,F,T 30 683 400 n/a 128.441 n/a 141.917 n/a 9.50%

Table 3. Computational Results for the C-Problem Instances (3600 sec.)
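The gap columns in Tables 3 and 4 appear to be computed relative to the first-named value in the column label; this is an inference from the reported figures, not a formula stated in the text. A worked check against the first row of Table 3:

\[
\mathrm{gap}(A,B) = \frac{A - B}{A}, \qquad
\frac{\mathrm{XP} - \mathrm{Bound}}{\mathrm{XP}} = \frac{101.112 - 86.180}{101.112} \approx 14.77\%, \qquad
\frac{\mathrm{TS} - \mathrm{XP}}{\mathrm{TS}} = \frac{102.919 - 101.112}{102.919} \approx 1.76\%.
\]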

The previous observation seems to indicate that tight-capacity DBCMND problem instances are easier to solve than loose-capacity ones. This could be explained by the impact of the design-balance constraints. When the global capacity of the network is tight, a larger number of arcs must be open to provide the capacity required to route the commodities. For DBCMND problem instances, this forces a high ratio of open to closed arcs in order to satisfy the design-balance constraints. Since many arcs need to be open in the final solution, the LP relaxation is tighter and hence yields better lower bounds than for loose-capacity problem instances.
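For reference, the design-balance constraints invoked in this argument are, in the usual DBCMND formulation (a standard statement of the constraints, not a formula reproduced from this section):

\[
\sum_{j:(i,j)\in A} y_{ij} \;=\; \sum_{j:(j,i)\in A} y_{ji} \qquad \forall\, i \in N,
\]

where $y_{ij} = 1$ if arc $(i,j)$ is selected in the design and $0$ otherwise.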

Instance Nodes Arcs Com. XpressMP Bound XP / Bound TS TS / XP TS / Bound

Table 4. Computational Results for the R-Problem Instances

Turning to the gaps between the TS and the MIP-solver solutions for the same R instances, one notices that the Tabu Search largely outperforms the MIP-solver on high-fixed-cost, loose-capacity instances (F05,C1 and F10,C1) for the two largest instance groups (R17 and R18). Furthermore, the TS algorithm also outperforms the MIP-solver on the largest high fixed/variable cost ratio instances in R18. This indicates that the algorithm is robust with respect to instance attributes and appears particularly useful for addressing network instances where MIP-solvers may struggle.

Data set n/a <[-5%] [-5%]-[-2.5%] [-2.5%]-[0%] [0%]-[2.5%] [2.5%]-[5%] >[5%]
C 9 1 6 1 6 1 0
R 2 5 5 4 19 12 7
C+R 11 6 11 5 25 13 7

Table 5. Comparative Distribution of Relative Gaps TS versus XpressMP

One notices that the performance of the Tabu Search algorithm on loosely capacitated problem instances is better than on very tightly capacitated ones. This behavior could be explained by the choice of redirecting flow on single paths when closing arcs in the exploration phase (Section 3.3). Indeed, when arc capacities are tight, residual capacities are generally small and more than one path may be required for flow redirection. This results in infeasible moves for a number of commodities and a less thorough exploration of the search space. While the performance of the method is very good, this behavior points to the need for further research into more sophisticated, yet still efficient, flow-redirection mechanisms.
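To illustrate why tight residual capacities make single-path redirection fail, the following sketch is a simplified illustration, not the paper's implementation; the residual-capacity data structure, arc labels, and function name are assumptions. It checks whether one alternative path can absorb the flow removed from a closed arc:

# Sketch: single-path flow redirection check when closing an arc.
def can_redirect_on_single_path(flow_to_move, candidate_path, residual_capacity):
    """Return True if every arc on candidate_path can absorb flow_to_move."""
    bottleneck = min(residual_capacity[arc] for arc in candidate_path)
    return bottleneck >= flow_to_move

# Example: with tight capacities the path bottleneck is small, so the move is infeasible
residual_capacity = {("a", "b"): 4.0, ("b", "c"): 3.0}   # hypothetical residual capacities
path = [("a", "b"), ("b", "c")]
print(can_redirect_on_single_path(10.0, path, residual_capacity))  # False: more than one path would be needed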

Considering actual scheduled service network design applications, a rather large number of potential services and departure times are usually considered. Fixed costs tend to be important in these cases, reflecting the asset-related costs associated with operating vehicles and convoys. Variable costs, reflecting the cost of transporting commodities (passengers or freight), are less important. This translates into large networks with high fixed costs and loose capacities (given the many services initially considered). It therefore seems that the Tabu Search meta-heuristic we propose would perform well on DBCMND problem instances arising from transport service network design applications.

Candidate selection, Phase 1:
lf 33.20%   lp 21.70%   lr 31.80%   lv 13.29%

Candidate selection, Phase 2:
Path 1 58.03%   Path 2 12.16%   Path 3 15.26%   Path 4 14.55%

Table 6. Distribution of Candidate Selection for the Tabu Search

Let us now turn to the behavior of the algorithm relative to the candidate lists in the two phases of the Tabu Search. The goal is to investigate whether candidates are chosen from all four sub-lists in the exploration phase and whether all four candidate paths are selected in the feasibility phase. The results may be used to further tune the algorithm and remove any redundant computations.

Table 6 displays the distribution of the best-neighbor selection from the candidate sub-lists in the exploration phase (Phase 1) and that of the selection of the four candidate paths in the feasibility phase (Phase 2). The figures are averages taken over all 78 instances. As can be seen from the table, all four candidate sub-lists and all four path candidates contribute to the selection of the best neighbor solution. Obviously, the actual selection distributions vary with the individual instances. For instance, the penalty sub-list tends not to provide any best neighbor solutions for some instances. Thus, as is often the case, the algorithm could be tuned more finely for the particular setting of a given application. All strategies contribute to the performance of the algorithm when sets of problem instances are considered, however, which is the case for the present experimentation.
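As an illustration of how such a distribution can be collected during the search, a minimal sketch follows; the counter keys and the point at which the tally is updated are assumptions, not taken from the authors' code:

# Sketch: tallying which candidate sub-list supplied the chosen best neighbor.
from collections import Counter

selection_counts = Counter()     # keys: "lf", "lp", "lr", "lv" (Phase 1) or "path1".."path4" (Phase 2)

# Inside the search loop, record the sub-list that produced the selected move:
selection_counts["lf"] += 1      # illustrative call

def distribution(counts):
    """Convert raw counts into the percentage distribution reported in Table 6."""
    total = sum(counts.values())
    return {name: count / total for name, count in counts.items()}

print(distribution(selection_counts))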

The third and last analysis we present in this section targets the interaction between the exploration and the feasibility phases and its impact on the search trajectory, as measured by the total solution cost and by the solution value (total solution cost plus penalty value) of the current solution.

Figure 12 shows the graphs of this evolution for instance C20,200,300,F,L. As can be seen, the initial solution obtained from the r-DBCMND and the rounding heuristic has high cost and penalty values. The algorithm starts with the exploration phase and switches to the feasibility phase after eleven iterations because of the improvement gap of 5% and the iteration range value of 10. The feasibility phase then sets in every ten iterations, which can be seen at the points where the two curves meet (penalty value = 0). An interesting observation is that the best solution is found after 55 iterations of the exploration phase and 5 initiations of the feasibility phase. For the remaining time, the search oscillates above the best solution found. This suggests that the algorithm produces good feasible solutions very quickly for this rather easy instance. For more complex instances, the stopping criterion terminates the search before the improvement in feasible solutions levels off, as illustrated in Figure 13.
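One plausible reading of the switching rule described above (improvement gap lg = 5%, iteration range li = 10) is sketched below; the exact trigger condition is defined in Section 3.3, so the predicate here is an illustrative assumption, not the authors' rule:

# Sketch: switch to the feasibility phase when the best value found within the last
# `improvement_range` exploration iterations improved by less than `improvement_gap`.
def should_run_feasibility_phase(recent_values, improvement_gap=0.05, improvement_range=10):
    """recent_values: current-solution values of past iterations, oldest first."""
    if len(recent_values) < improvement_range:
        return False                              # not enough history yet
    window = recent_values[-improvement_range:]
    relative_improvement = (window[0] - min(window)) / window[0]
    return relative_improvement < improvement_gap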

Figure 13 shows the evolution of the solution values for instance C30,520,400,F,T. The points where the curves drop indicate when the feasibility phase was run. The solution value improves with each feasibility-phase run. The search is stopped on a time limit, however, and further improvements could probably be achieved.

The small number of iterations performed for this instance results from the requirement to solve several CMCF problems in the feasibility phases, which is very computationally demanding. For example, between 500 and 1000 CPU seconds were required per run of the feasibility phase, depending on the level of imbalance. This raises the question of whether an alternative evaluation method should be applied during the feasibility phase to speed up the evaluations, or whether fewer candidate paths should be investigated. Later studies might shed new light on this issue. In the meantime, it should be noted that, although the feasibility phase is computationally demanding, it also proves to be the driving force behind improving the feasible solution values and achieving good feasible solutions.

Figure 12. Evolution of Solution Values for Instance C20,200,300,F,L (total solution value V and total cost value Z versus iteration number)

Figure 13. Evolution of Solution Values for Instance C30,520,400,F,T (total solution value V and total cost value Z versus iteration number)

4.4. Simple multi-search implementation investigation

The outcome of the initial parameter tuning was a parameter configuration that appeared to produce the overall best results for the TS algorithm. Nevertheless, different network characteristics might require different parameter configurations to obtain the best possible solutions. We examine this possibility by solving each instance with the same Tabu Search algorithm but with several parameter configurations and selecting the best result among those obtained by these configurations. We will thus see whether a significant benefit could be obtained and gauge the robustness of the method.

Note that the proposed methodology corresponds to the so-called independent multi-search strategy, much used to design parallel meta-heuristics (e.g., Crainic, 2005; Crainic and Nourredine, 2005; Cung et al., 2002). According to this strategy, the algorithm would be deployed on different CPUs with different parameter configurations. The different runs would take place in parallel, the best solution among those found by the individual searches being selected as the overall best one.
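A minimal sketch of this independent multi-search scheme follows; the run_tabu_search routine and the parallel framework are illustrative placeholders, not the authors' implementation:

# Sketch: independent multi-search — run the same Tabu Search with several
# parameter configurations in parallel and keep the best solution found.
from concurrent.futures import ProcessPoolExecutor

def run_tabu_search(instance, config):
    """Placeholder for the actual Tabu Search; returns (solution_value, solution)."""
    raise NotImplementedError

def multi_search(instance, configs, max_workers=6):
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(run_tabu_search, [instance] * len(configs), configs))
    return min(results, key=lambda pair: pair[0])   # best (lowest-cost) solution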

We selected five additional parameter configurations, shown in Table 7 together with the one used previously. As can be seen, only six of the eight parameters vary across the six configurations. We chose to fix the improvement range and improvement gap because of the significant importance of the feasibility phase in finding good feasible solutions. To allow as many feasibility-phase executions as possible, the improvement range and the improvement gap were kept at their low and high levels, respectively. The remaining six parameters were used at various levels.

Parameter Description Values
lf # of Fixed cost candidates 5 5 10 5 10 5
lp # of Penalty cost candidates 5 5 5 5 10 5
lr # of Residual capacity candidates 15 5 5 10 10 15
lv # of Variable cost candidates 15 5 5 10 10 15
lt Tabu list length 25 25 20 20 10 10
Penalty value scale 0.5 0.5 2 2 2 0.5
lg Improvement gap 5% 5% 5% 5% 5% 5%
li Improvement range 10 10 10 10 10 10
Average score 3.14 3.60 4.01 4.06 3.71 2.41

Table 7. Parameter Settings for the Multi-search Experiment

The computational results from the parallel implementation can be seen in Tables 8 and 9. The three columns labeled “XpressMP”, “TS”, and “Parallel TS” display the computational results from the MIP-solver, the Tabu Search with the initial parameter configuration (previous results), and the best Tabu Search solution over the six parameter configurations, respectively. The figures in column “Parallel improv.” show the relative improvement of the multi-search TS over the Tabu Search run with the initial parameter configuration. The last two columns, “TS avg.” and “St. dev.”, show the average solution values obtained from the six parameter configurations and the standard deviation with respect to these average solution values, respectively. The detailed results for the individual runs with each parameter setting can be found in Pedersen (2006).
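The “Parallel improv.” figures appear to express the change relative to the multi-search (Parallel TS) value; this is an inference from the reported numbers, checked here against the C20,230,200,F,T row of Table 8:

\[
\text{Parallel improv.} = \frac{\text{Parallel TS} - \text{TS}}{\text{Parallel TS}}
= \frac{144.766 - 149.942}{144.766} \approx -3.58\%.
\]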

The figures in Tables 8 and 9 indicate that some improvement can indeed be obtained from an independent multi-search implementation, where various parameter configurations characterize the individual searches. Most improvements are small, however, 59 out of 78 instances displaying improvements of less than 2%. There are a few instances, however, for which significant improvements can be achieved, e.g., the 10.31% improvement for instance R15,F10,C8. This confirms that a multi-search approach may improve the quality of the solutions obtained. It also raises the question of whether the proposed methodology with a single parameter setting is sufficiently robust.

We have thus analyzed the performance of the initial parameter configuration in relation to the other five. The “Average score” row at the bottom of Table 7 displays an aggregated measure of “success” for each parameter configuration and provides the means for performance comparisons. A score from 1 to 6 was given to each configuration for each problem instance, 1 representing the best solution value and 6 the worst. Table 7 displays the averages over the instances.
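A minimal sketch of this scoring scheme is given below; the input structure is an assumption, and the tie-breaking by rank order is not specified in the text:

# Sketch: rank the six configurations on each instance (1 = best) and average the ranks.
def average_scores(values_per_instance):
    """values_per_instance: list of 6-element lists of solution values, one list per instance."""
    n_configs = len(values_per_instance[0])
    totals = [0.0] * n_configs
    for values in values_per_instance:
        order = sorted(range(n_configs), key=lambda k: values[k])   # config indices, best first
        for rank, config_index in enumerate(order, start=1):
            totals[config_index] += rank
    return [total / len(values_per_instance) for total in totals]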

Instance XpressMP TS Parallel TS Parallel improv. TS avg. St. dev.

C20,230,200,V,L 101.112 102.919 101.345 -1.55% 103.076 1.12%

C20,230,200,F,L 153.534 150.764 148.384 -1.60% 151.166 1.22%

C20,230,200,V,T 105.840 103.371 103.371 0.00% 105.559 1.85%

C20,230,200,F,T 154.026 149.942 144.766 -3.58% 147.652 1.39%

C20,300,200,V,L 81.184 82.533 80.269 -2.82% 80.951 1.07%

C20,300,200,F,L 131.876 128.757 126.258 -1.98% 128.297 1.20%

C20,300,200,V,T 78.675 78.571 78.444 -0.16% 79.749 1.27%

C20,300,200,F,T 127.412 116.338 116.338 0.00% 118.591 1.21%

C30,520,100,V,L 55.138 55.981 55.786 -0.35% 56.256 0.57%

C30,520,100,F,L n/a 104.533 101.612 -2.87% 103.967 1.15%

C30,520,100,V,T 53.125 54.493 54.092 -0.74% 54.298 0.27%

C30,520,100,F,T 106.761 105.167 104.702 -0.44% 106.661 1.41%

C30,520,400,V,L n/a 119.735 118.071 -1.41% 119.700 0.97%

C30,520,400,F,L n/a 162.360 160.979 -0.86% 163.332 1.05%

C30,520,400,V,T n/a 120.421 120.421 0.00% 121.310 0.68%

C30,520,400,F,T n/a 161.978 161.978 0.00% 164.852 1.07%

C30,700,100,V,L 48.849 49.429 49.429 0.00% 49.723 0.45%

C30,700,100,F,L 65.516 63.889 63.292 -0.94% 63.635 0.50%

C30,700,100,V,T 47.052 48.202 47.487 -1.51% 47.969 0.52%

C30,700,100,F,T 57.447 58.204 57.187 -1.78% 58.343 1.20%

C30,700,400,V,L n/a 103.932 103.932 0.00% 105.835 1.76%

C30,700,400,F,L n/a 157.043 148.114 -6.03% 161.398 7.26%

C30,700,400,V,T n/a 103.085 103.085 0.00% 103.797 0.69%

C30,700,400,F,T n/a 141.917 138.609 -2.39% 141.743 1.17%

Table 8. Sequential versus Multi-search Performance Comparisons, C problems

A perfectly equal-quality performance would have yielded an average score of 3.5 for each configuration. While such a perfect match cannot be expected, the various configurations perform rather similarly. None stands out as particularly bad, the worst score being 4.06 for configuration 4. Configuration 6, with a score of 2.41, proves to be somewhat better than the first five. It is interesting to note that the only difference between this configuration and the initially selected one is a shorter tabu-list length. This is an indication that the initial set of problem instances used for calibration was probably too small and that we somewhat misjudged the appropriate level for this parameter.

The scores indicate, nevertheless, that the Tabu Search meta-heuristic is robust with respect to the parameter selection. High-quality results are obtained over a range of parameter values for quite a diverse set of problem instances. This claim is further supported by investigating the standard deviations of the values of the best solutions obtained with each of the six parameter configurations. These results are displayed in the last columns of Tables 8 and 9. Only 10 out of the 74 instances display a standard deviation larger than 2%, the highest being 7.26% for instance C30,700,400,F,L. The standard deviation values are thus quite small, particularly considering the very different characteristics of the 78 problem instances, supporting the claim of a robust algorithm with respect to parameter settings and problem-instance characteristics.
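Since the “St. dev.” column is reported in percent, a plausible interpretation (an assumption; the text does not state the exact convention, nor whether a population or sample divisor is used) is the standard deviation of the six configuration values expressed relative to their mean:

\[
\text{St. dev.} = \frac{100\%}{\bar{z}} \sqrt{\frac{1}{6}\sum_{k=1}^{6}\left(z_k - \bar{z}\right)^2},
\qquad \bar{z} = \frac{1}{6}\sum_{k=1}^{6} z_k,
\]

where $z_k$ is the best solution value obtained with configuration $k$ on the given instance.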
