

6.4 O-DPOP: Message size vs. Number of Messages

6.4.5 Comparison with search algorithms

In backtrack search algorithms, the control strategy is top-down: starting from the root, the agents perform assignments and inform their children about these assignments. In return, the children determine their best assignments given these decisions, and inform their parents of the utility or bounds on this utility. This top-down exploration of the search space has the disadvantage that the parents make decisions about their values blindly, and need to determine the utility for every one of their values before deciding on the optimal one. This can be a very costly process, especially when domains are large. Additionally, if memory is bounded, many utilities have to be derived over and over again [141, 170]. This, coupled with the asynchrony of these algorithms, causes a large amount of effort to be duplicated unnecessarily [241].

In contrast, O-DPOP uses a bottom-up strategy, similar to that of DPOP. In this setting, higher agents do not assign themselves values, but instead ask their children what values they would prefer. Children answer by proposing values for the parents' variables. These proposals are similar to the COST messages in search algorithms, the difference being that they are sent proactively, and in a context chosen by the lower agents, as opposed to search, where the proposals are chosen by the higher agents. By using the idea of valuation sufficiency, O-DPOP can possibly find the optimal solution without exploring all values of some of the variables, in contrast with search algorithms. This also enables O-DPOP to deal with open problems, i.e. problems with unbounded domains.
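The valuation-sufficiency idea can be sketched in isolation: if each child sends its (value, utility) proposals in decreasing order of utility, the parent can stop as soon as some value known from all children dominates an optimistic bound on every other value. The following minimal single-variable sketch is our own illustration, not O-DPOP's actual message format; stream and bookkeeping names are invented for the example.

```python
def o_dpop_parent(child_streams):
    """Pick the best value for one variable from child proposal streams.

    Each stream yields (value, utility) pairs in DECREASING utility order
    (the best-first assumption).  We stop as soon as a fully-known value
    dominates the optimistic upper bound of every other value -- valuation
    sufficiency -- possibly without ever seeing some values.
    """
    n = len(child_streams)
    received = {}                      # value -> {child index: utility}
    last_seen = [float("inf")] * n     # bound on each child's future proposals
    iters = [iter(s) for s in child_streams]
    while True:
        # ask each child for its next-best proposal
        for i, it in enumerate(iters):
            try:
                value, util = next(it)
            except StopIteration:
                last_seen[i] = float("-inf")   # nothing more can arrive
                continue
            last_seen[i] = util
            received.setdefault(value, {})[i] = util

        # values whose utility is known from every child
        complete = {v: sum(u.values()) for v, u in received.items()
                    if len(u) == n}
        if not complete:
            continue
        best_v, best_u = max(complete.items(), key=lambda kv: kv[1])

        def bound(v):
            # known parts plus each missing child's last (best remaining) value
            u = received[v]
            return sum(u.values()) + sum(last_seen[i]
                                         for i in range(n) if i not in u)

        rivals = [bound(v) for v in received if v != best_v]
        rivals.append(sum(last_seen))  # bound on any value never proposed
        if best_u >= max(rivals):      # sufficiency reached: stop early
            return best_v, best_u
```

With children proposing a:5, b:3, c:1 and b:4, a:4, c:2, this returns ('a', 9) after only two proposals per child; value c is never examined, which is the early-termination effect described above.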

6.4.6 Summary

O-DPOP uses linear-size messages by sending the utility of each tuple separately. Based on the best-first assumption, we use the principle of open optimization [70] to incrementally propagate these messages even before the utilities of all input tuples have been received. This can be exploited to significantly reduce the amount of information that must be propagated. In fact, the optimal solution may be found without even examining all values of the variables, which makes it possible to deal with unbounded domains.

Preliminary experiments on distributed meeting scheduling problems show that O-DPOP gives good results when the problems have low induced width.

As the new algorithm is a variation of DPOP, all the techniques proposed for DPOP, namely self-stabilization [165], approximations and anytime solutions [158], and distributed implementation and incentive-compatibility [171], can be applied to it as well.

Tradeoffs between Memory/Message Size and Solution Quality

In this chapter we discuss possible tradeoffs between solution quality on one hand, and computation/memory/communication requirements on the other. We introduce two algorithms that offer configurable quality/effort tradeoffs.

In Section 7.1, we introduce LS-DPOP(k), a hybrid algorithm that mixes classical local search methods, in which nodes take decisions based only on local information, with full inference methods that guarantee completeness. LS-DPOP operates in the framework from Section 6.2 for detecting difficult subproblems, where normal DPOP cannot be applied. In such subproblems, LS-DPOP executes a local search procedure guided by as much inference as allowed by k.

LS-DPOP(k) can be seen as a large neighborhood search, where exponential neighborhoods are rigorously determined according to problem structure, and polynomial efforts are spent for their complete exploration at each local search step.

The second contribution of this chapter is A-DPOP (Section 7.2), a parameterized approximation scheme based on DPOP, which allows the desired tradeoff between solution quality and computational complexity. A-DPOP adapts the size of the largest message to the desired approximation ratio. Clusters of high width are detected as in Section 6.2 and explored with approximate propagations using the idea of minibuckets [49, 51].
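The minibucket idea behind these approximate propagations can be illustrated on its own. In the sketch below (a toy table encoding of our own, not A-DPOP's message format), the bucket of functions mentioning the eliminated variable is split so that each mini-bucket's output scope stays within a bound, each part is maximized separately, and the sum of the parts upper-bounds the exact message.

```python
from itertools import product

def eliminate(funcs, var, dom):
    """Exactly eliminate `var`: join all tables, then maximize it out.

    A function is a pair (scope, table): a tuple of variable names and a
    dict mapping assignment tuples (in scope order) to utilities.
    """
    out_vars = sorted({v for s, _ in funcs for v in s} - {var})
    table = {}
    for assign in product(dom, repeat=len(out_vars)):
        env = dict(zip(out_vars, assign))
        best = float("-inf")
        for x in dom:
            env[var] = x
            best = max(best, sum(t[tuple(env[v] for v in s)]
                                 for s, t in funcs))
        table[assign] = best
    return tuple(out_vars), table

def minibucket(funcs, var, dom, max_scope):
    """Split the bucket (greedy first-fit) so each mini-bucket's output
    scope has at most `max_scope` variables, then eliminate `var` in each
    mini-bucket separately.  Summing the results over-estimates the exact
    message, because the maximizations are no longer synchronized on x."""
    buckets = []
    for f in funcs:
        placed = False
        for b in buckets:
            merged = {v for s, _ in b + [f] for v in s} - {var}
            if len(merged) <= max_scope:
                b.append(f)
                placed = True
                break
        if not placed:
            buckets.append([f])
    return [eliminate(b, var, dom) for b in buckets]
```

For two binary-valued functions f1(x, y) and f2(x, z), exact elimination of x produces a message over (y, z); with max_scope=1, the mini-buckets produce one table over y and one over z whose sum is an upper bound on the exact message for every (y, z), mirroring how A-DPOP bounds message size at the price of approximation.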

7.1 LS-DPOP: a local search - dynamic programming hybrid

We present a new hybrid algorithm for local search in distributed combinatorial optimization. This method is a mix between classical local search methods in which nodes take decisions based only on local information, and full inference methods that guarantee completeness.

We propose LS-DPOP(k), a hybrid method that combines the advantages of both these approaches.


LS-DPOP(k) is a utility propagation algorithm controlled by a parameter k which specifies the maximal allowable amount of inference. The maximal space requirements are exponential in this parameter. In the dense parts of the problem, where the required amount of inference exceeds this limit, the algorithm executes a local search procedure guided by as much inference as allowed by k. LS-DPOP(k) can be seen as a large neighborhood search, where exponential neighborhoods are rigorously determined according to problem structure, and polynomial efforts are spent for their complete exploration at each local search step.

For difficult optimization problems, local search methods have been developed. These methods start with a random assignment, and then gradually improve it by applying incremental changes. Their advantage is that they require linear memory, and in many cases provide good solutions with a small amount of effort. However, the decisions taken are often myopic in the sense that they take into account only local information, and thus get stuck in local optima rather easily. Large neighborhood search [3] tries to overcome this problem by exploring a much larger set of neighboring states before moving to the next one. Dynamic programming has already been recognized as an efficient way to explore exponential-size neighborhoods with a polynomial effort [67]. Another example of such a hybrid technique is the work of Kask and Dechter [105] (see Section 7.1.5).

For distributed environments, there are distributed local search methods like DSA [109] / DBA [237] for optimization, and DBA for satisfaction [227]. To our knowledge, the concept of large neighborhoods has not been exploited in distributed environments.

We propose a distributed algorithm that combines the advantages of both these approaches. This method is a utility propagation algorithm controlled by a parameter k which specifies the maximal allowable amount of inference. The maximal space requirements are exponential in this parameter. In the dense parts of the problem, where the required amount of inference exceeds this limit, the algorithm executes a local search procedure guided by as much inference as allowed by k. If this parameter is equal to or larger than the induced width of the graph, the algorithm performs full inference and is therefore complete. Larger values of k are conjectured to produce better results.
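A single step of this large-neighborhood scheme can be sketched centrally. In the sketch below, exact re-optimization of the chosen neighborhood is done by brute-force enumeration, standing in for the k-bounded utility propagation that LS-DPOP would run inside a cluster; all names and the constraint encoding are illustrative, not the algorithm's actual data structures.

```python
from itertools import product

def lns_step(assignment, neighborhood, domains, constraints):
    """One large-neighborhood step: exactly re-optimize the variables in
    `neighborhood` while every other variable keeps its current value.

    `constraints` are functions from a full assignment (dict) to a utility;
    the step returns the best assignment found and its total utility."""
    frozen = {v: x for v, x in assignment.items() if v not in neighborhood}
    best, best_util = dict(assignment), float("-inf")
    for combo in product(*(domains[v] for v in neighborhood)):
        cand = dict(frozen)
        cand.update(zip(neighborhood, combo))   # try this joint reassignment
        util = sum(f(cand) for f in constraints)
        if util > best_util:
            best, best_util = cand, util
    return best, best_util
```

For a chain x - y - z with soft equality constraints and the starting assignment x=0, y=1, z=0, re-optimizing the neighborhood {y} conditioned on the frozen values immediately reaches the optimum y=0 with utility 2; in LS-DPOP the neighborhoods are not chosen ad hoc like this but derived from the problem structure and the bound k.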

We show the efficiency of this approach with experimental results from the distributed meeting scheduling domain.

The rest of this chapter is structured as follows: Section 7.1.1 presents the hybrid optimization algorithm. Section 7.1.4 presents an experimental evaluation. Section 7.1.5 presents the relationship between this approach and existing work. Section 7.1.6 concludes.