
Background and Related Work

2.2 Combinatorial Optimization

2.2.1 Deterministic Algorithms

This section discusses various exact and approximate deterministic algorithms applicable to integer programming problems.

2.2.1.1 Exact Algorithms

• Linear Programming (LP): A commonly used scheme for solving an ILP (2.2) with linear programming [Papadimitriou and Steiglitz, 1982; Goldfarb and Todd, 1989] allows the variables to assume positive real values and then rounds the resulting solution to the nearest integers. This technique is also known as solving an LP-relaxation of the ILP. The scheme is satisfactory when the variables assume large values and are not very sensitive to integer rounding.

In other cases, and especially when the variables can assume only binary values, this method can lead to solutions very far from optimal. However, in special situations when the matrix A is totally unimodular,1 the optimal solution of the LP-relaxation is integral and is also the optimal solution to the ILP [Gondran and Minoux, 1984]. When the general IP (2.1) has a nonlinear cost function, a strategy is to approximate the nonlinearities using piecewise linear functions, making the IP amenable to solution via linear programming. [Papadimitriou and Steiglitz, 1982] describe various techniques for handling several types of nonlinearities in the cost function and constraint functions via mathematical reformulations so that the IP can be solved using linear programming. Exploitation of the structure of the problem and intimate knowledge of the nature of the cost function are key components of all the schemes that utilize a linear programming solver for an IP. Despite the possibility of various LP reformulations of an IP, exact solutions are generally possible only for small IPs, and even then with a very heavy relative time penalty.

1 All square submatrices extracted from A have determinants equal to 0, 1, or −1, which implies that all entries of A are elements of the set {0, 1, −1}.
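The total-unimodularity condition in the footnote above can be checked directly, if inefficiently, by enumerating every square submatrix. The sketch below (function names are our own, not from the cited references) does exactly that for small illustrative matrices:

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row
    (adequate for the small submatrices checked here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_totally_unimodular(A):
    """Brute-force check: every square submatrix of A must have
    determinant 0, 1, or -1. Exponential in the matrix size, so
    only suitable for small examples."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# An interval matrix (consecutive ones in each row) is totally
# unimodular; a matrix with a 2x2 submatrix of determinant -2 is not.
print(is_totally_unimodular([[1, 1, 0], [0, 1, 1], [0, 0, 1]]))  # True
print(is_totally_unimodular([[1, 1], [1, -1]]))                  # False
```

For a totally unimodular constraint matrix, the LP-relaxation can thus be solved and its (already integral) optimum returned directly.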

• Cutting-Planes: A popular strategy for achieving exact integral solutions to an LP-relaxation of an IP, when the constraint matrix A cannot be cast in a totally unimodular form, is via cutting-planes [Papadimitriou and Steiglitz, 1982; Gondran and Minoux, 1984; Nemhauser and Wolsey, 1989]. A cutting-plane algorithm iteratively adds linear constraints to the feasible hypervolume identified by the LP solver. These hyperplanes iteratively "cut away" portions of the feasible region so that integer feasible solutions are not excluded, and ultimately the elements of the solution set from which the optimal solution is selected are all integral and feasible. However, a cutting-plane algorithm is efficient only in a limited class of IPs in which the constraint matrix A is "almost totally unimodular" (one or two rows of A have integral elements of any value).
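As a minimal, hand-rolled illustration of one cutting-plane iteration (the tiny vertex-enumeration LP solver and problem instance below are our own construction, not from the cited references), consider maximizing x + y subject to 2x + 2y ≤ 3 with x, y ≥ 0 and integer. A Chvátal–Gomory cut is obtained by dividing the constraint by 2 and rounding down the right-hand side:

```python
from fractions import Fraction as F
from itertools import combinations

def lp_max_2d(c, cons):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= r for each
    (a, b, r) in cons, by enumerating intersections of constraint
    boundaries in exact arithmetic. Only for tiny 2-variable LPs."""
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel boundaries: no unique intersection
        x = (r1 * b2 - r2 * b1) / det  # Cramer's rule
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b * y <= r for a, b, r in cons):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best  # (objective, x, y)

# max x + y  s.t.  2x + 2y <= 3,  x >= 0,  y >= 0  (x, y integer)
cons = [(F(2), F(2), F(3)),   # 2x + 2y <= 3
        (F(-1), F(0), F(0)),  # x >= 0
        (F(0), F(-1), F(0))]  # y >= 0
val, x, y = lp_max_2d((F(1), F(1)), cons)
print(val, x, y)  # 3/2 0 3/2 -- the LP optimum is fractional

# Chvatal-Gomory cut: (2x + 2y <= 3) / 2, round down the RHS.
# Every integer feasible point satisfies x + y <= 1.
cons.append((F(1), F(1), F(1)))
val, x, y = lp_max_2d((F(1), F(1)), cons)
print(val, x, y)  # 1 0 1 -- integral, hence optimal for the ILP
```

One cut suffices here; in general the process repeats, adding a new cut each time the LP-relaxation optimum is fractional.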

• Branch-And-Bound: Branch-and-bound [Parker and Rardin, 1988; Nemhauser and Wolsey, 1989] is a classical technique for integer programming. This algorithm is based on the concept of "intelligently" enumerating the feasible points of a combinatorial optimization problem without resorting to brute force. The method does not necessarily enumerate all points, but selects subtrees to search using bounds on possible outcomes. The feasible space is iteratively partitioned to yield subproblems; each subproblem is solved to obtain bounds on its cost function; subproblems whose lower bounds are higher than the smallest known upper bound are eliminated; promising subproblems are further partitioned; and this process of partitioning, bound evaluation, and elimination or consideration is repeated until the best known lower bound shows no further improvement. Branch-and-bound is a recursive strategy based on a tree search, where the nodes of the tree represent subproblems and the branches of a node are visited only if necessary. Branch-and-bound techniques frequently use LP solvers to compute bounds for subproblems and can guarantee optimality of a solution without exhaustively enumerating the feasible space. However, there are practical limitations to the use of this technique. Its application requires that problems be easily decomposable, with minimal coupling between subproblems. Also, in many instances integer feasible solutions are not readily available, making elimination of subproblems cumbersome; in this case the algorithm fails as a result of explosive memory requirements [Lee and Mitchell, 1999]. Nevertheless, branch-and-bound remains a popular computational strategy for integer programming, and the literature contains references to hybrid techniques (which improve on the basic technique) that utilize combinations of linear programming, branch-and-bound, and cutting-planes [Mitchell, 1999a; Mitchell, 1999b].
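The partition-bound-prune cycle can be sketched on the 0/1 knapsack problem, where the LP-relaxation bound is especially cheap to compute (the greedy fractional fill). The instance and function names below are illustrative; item weights are assumed positive:

```python
def knapsack_bb(values, weights, capacity):
    """Branch-and-bound for the 0/1 knapsack problem. The bound on a
    subproblem is its LP-relaxation optimum (fractional knapsack),
    computed greedily after sorting items by value density."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    n = len(v)
    best = [0]  # incumbent: best integer feasible value found so far

    def bound(k, value, room):
        # Fractionally fill the remaining capacity: an upper bound
        # on any completion of this partial solution.
        for i in range(k, n):
            if w[i] <= room:
                room -= w[i]
                value += v[i]
            else:
                return value + v[i] * room / w[i]
        return value

    def branch(k, value, room):
        if k == n:
            best[0] = max(best[0], value)
            return
        if bound(k, value, room) <= best[0]:
            return  # prune: this subtree cannot beat the incumbent
        if w[k] <= room:            # branch: take item k ...
            branch(k + 1, value + v[k], room - w[k])
        branch(k + 1, value, room)  # ... or leave it out

    branch(0, 0, capacity)
    return best[0]

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

The pruning test is exactly the elimination step described above: a subproblem whose upper bound cannot exceed the incumbent is never partitioned further.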

• Dynamic Programming: Dynamic programming [Bellman and Dreyfus, 1962; Papadimitriou and Steiglitz, 1982] is related to branch-and-bound to the extent that an "intelligent" enumeration of the feasible space is performed, but differs from branch-and-bound in its strategy of working backwards from the final decision to the earlier ones. Application of this technique is limited to problems in which locally optimal decisions can be chained sequentially to generate globally optimal decisions, and to those problems that easily admit decomposition into well-defined subproblems. In problems with multiple dimensions, dynamic programming runs into time and space problems due to the need to store an exponentially growing number of decision tables. Branch-and-bound has proved more effective than dynamic programming for many problem types and is thus often preferred.
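For a problem that does decompose cleanly, the decision tables stay small. A standard sketch on the 0/1 knapsack (instance and names are illustrative, not from the cited references; weights are assumed to be positive integers):

```python
def knapsack_dp(values, weights, capacity):
    """Dynamic programming for the 0/1 knapsack: table[r] holds the
    best value achievable with residual capacity r using the items
    considered so far. O(n * capacity) time, O(capacity) space."""
    table = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Sweep capacities downward so each item is used at most once.
        for r in range(capacity, w - 1, -1):
            table[r] = max(table[r], table[r - w] + v)
    return table[capacity]

print(knapsack_dp([60, 100, 120], [10, 20, 30], 50))  # 220
```

Here a single one-dimensional table suffices; the exponential blow-up mentioned above appears when the state must track several resource dimensions at once, so the table grows with the product of their ranges.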

2.2.1.2 Approximate Algorithms

When one is faced with optimization of an integer program with a complicated structure or of a large and practical size, seeking an exact solution may not be computationally feasible. In these practical instances, one is forced to consider approximate algorithms that can generate "high-quality" solutions in a "time-efficient" manner.

• All the exact optimization algorithms discussed in Section 2.2.1.1 can be utilized to generate approximate solutions, and, as discussed earlier, this is often unavoidable for problems of practical size.

• Heuristics: Heuristics [Pearl, 1984] are "tailor-made" algorithms based on rules that exploit the structure of a special case of a given problem, and can often generate optimal solutions in polynomial time for these special cases. Moreover, these schemes can generate good solutions even for general problem instances of practical size. Some examples of popular heuristics are the Lin-Kernighan heuristic [Lin and Kernighan, 1973] for the symmetric traveling salesman problem,2 the Kernighan-Lin heuristic [Kernighan and Lin, 1970] for graph partitioning, and the Christofides heuristic [Papadimitriou and Steiglitz, 1982] for the metric traveling salesman problem.3 Some heuristics are more generally applicable; the "greedy heuristic," for instance, chooses at each stage the best alternative among the feasible alternatives. The greedy heuristic has been applied to numerous combinatorial optimization problems; it works well for certain problems whose structure is especially suitable to the strategy, but very poorly on others [Parker and Rardin, 1988].
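A concrete instance of the greedy heuristic is the nearest-neighbor rule for the traveling salesman problem: always move to the closest unvisited city. The sketch below, with an illustrative 4-city instance of our own, shows the idea (on unfavourable instances the resulting tour can be far from optimal):

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy TSP heuristic: from the current city, always move to
    the nearest unvisited city. Fast, but offers no optimality
    guarantee in general."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[here][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(dist, tour):
    # Sum of edge lengths, including the closing edge back to start.
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

# A small symmetric 4-city instance.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour = nearest_neighbor_tour(dist)
print(tour, tour_length(dist, tour))  # [0, 1, 3, 2] 18
```

Each step is locally optimal; whether the chained local choices yield a good global tour depends entirely on the instance, which is exactly the caveat stated above.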

• Tabu Search: Tabu search [Glover, 1986; Hertz et al., 1997] is gaining application in combinatorial optimization, despite the fact that there is no known proof of its convergence, because it is an efficient optimization scheme for many problems. A Tabu search algorithm works not only by retaining information about the best solution detected, but also by systematically memorizing the itinerary through previous solutions. This memory helps restrict some search transitions in the neighborhood of the current solution, and thus discourages cycling among recently visited solutions. However, restrictions are relaxed when a solution has certain preferred characteristics. The search selects the best solution from a set of feasible solutions subject to these restrictions and relaxations; the lists maintaining restriction and relaxation information are then updated. The search continues until some stopping criterion is met.

Performance of a Tabu search algorithm is influenced significantly by a large number of parameters, and these need to be fine-tuned for each problem domain. Also, maintaining and updating the search memory lists can be quite complicated and cumbersome. These drawbacks limit the elegance of Tabu search as an optimization approach.
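The interplay of memory, restriction, and relaxation (aspiration) can be sketched on a toy problem: maximizing a function over bit vectors with a single-bit-flip neighborhood. The objective below is deliberately constructed so that plain hill-climbing from all-zeros is stuck at a local optimum, while the tabu restriction forces the search across the valley. The function names, tenure, and objective are all our own illustrative choices:

```python
from collections import deque

def tabu_search(f, n_bits, start, n_iters=20, tenure=3):
    """Tabu search over bit vectors with a one-bit-flip neighborhood.
    Recently flipped positions are tabu for `tenure` iterations,
    unless flipping one would beat the best solution found so far
    (the aspiration criterion)."""
    current = list(start)
    best, best_val = list(current), f(current)
    tabu = deque(maxlen=tenure)  # recently flipped bit positions
    for _ in range(n_iters):
        candidates = []
        for i in range(n_bits):
            neighbor = current[:]
            neighbor[i] ^= 1
            val = f(neighbor)
            # A tabu move is admissible only via aspiration.
            if i not in tabu or val > best_val:
                candidates.append((val, i, neighbor))
        if not candidates:
            continue  # every move is tabu this iteration
        val, i, current = max(candidates)  # best admissible move
        tabu.append(i)
        if val > best_val:
            best, best_val = current[:], val
    return best, best_val

# Local optimum at 0000 (value 3); global optimum at 1111 (value 4).
# Every one-bit neighbor of 0000 has value 1, so hill-climbing stalls.
def f(x):
    return sum(x) if sum(x) > 0 else 3

best, best_val = tabu_search(f, 4, [0, 0, 0, 0])
print(best, best_val)  # [1, 1, 1, 1] 4
```

Note that the search accepts worsening moves (value 3 to value 1) because the tabu list forbids returning immediately; that is precisely the anti-cycling mechanism described above, and the `tenure` parameter is one of the many settings that must be tuned per problem.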

2 A traveling salesman problem on n cities where the n × n distance matrix is symmetric.

3 A traveling salesman problem where the intercity distances satisfy the triangle inequality.