

3.4 Pseudotrees / Depth-First Search Trees

3.4.2 Heuristics for finding good DFS trees

3.4.2.1 Heuristics for generating low-depth DFS trees for search algorithms

While many algorithms exist for generating shallow DFS trees in the centralized case (e.g. [132, 127, 135]), it is unclear how to implement them in a distributed way, and little work has been done in this area. Chechetka and Sycara introduced in [31] the first distributed algorithm that constructs a pseudotree [78] using a heuristic designed to minimize the depth of the pseudotree. The algorithm works well, but in general it does not produce DFS trees, rather pseudotrees, thus violating our requirement from Section 2.2.2.

In fact, the requirement for DFS trees, as opposed to arbitrary pseudotrees, means that search algorithms can be arbitrarily bad compared to dynamic programming ones. To see this, consider a simple example of a ring constraint network with n agents. Any DFS arrangement of such a network will have depth n, thus making search algorithms run in time exponential in n (runtime is O(d^n)). In contrast, a dynamic programming algorithm like DPOP is only exponential in the induced width of the DFS, which is 2 for a ring, thus offering an exponential speedup (runtime is O(d^2)).
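The ring argument above can be checked directly: starting a DFS from any agent of an n-cycle, every step has exactly one unvisited neighbor, so the DFS tree degenerates into a chain through all n agents. A minimal Python sketch (the ring graph and integer agent IDs are illustrative assumptions):

```python
def dfs_tree_depth(neighbors, root):
    """Depth (in edges) of the DFS tree rooted at `root` for the given graph."""
    depth = {root: 0}

    def visit(agent):
        for nb in neighbors[agent]:
            if nb not in depth:
                depth[nb] = depth[agent] + 1
                visit(nb)

    visit(root)
    return max(depth.values())

# A ring with n agents: agent i is connected to its two cyclic neighbors.
n = 8
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(dfs_tree_depth(ring, 0))  # → 7: the DFS tree is a chain through all n agents
```

Whatever root is chosen, the resulting chain traverses all n agents, while the single back edge keeps the induced width at 2.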

3.4.2.2 Heuristics for generating low-width DFS trees for dynamic programming

The objective of these methods is to produce the DFS arrangement with the lowest induced width. In a centralized setting, the most common heuristics for this problem are the following: the maximum cardinality set [208], the maximum degree [208], and the min-fill heuristic [110]. The min-fill heuristic does not in general produce pseudotree orderings (much less DFS ones), and is difficult to implement in a distributed setting because it would require coordination at each step between all the remaining agents in order to decide which one should be considered next in the elimination ordering. In the following we describe distributed adaptations of the maximum cardinality set and max-degree heuristics.

MCN: maximum connected node A heuristic called the most connected node (MCN) (also known as max-degree) has proved quite effective. MCN was introduced by [208], and subsequently re-explored in [111, 25, 127, 84]. This heuristic works as follows: the agent with the maximum number of neighbors is selected as the root (ties are broken by picking the agent with the lowest ID).

Afterwards, the process proceeds by visiting at each step neighboring agents with the highest number of neighbors (ties are again broken by picking the neighbor with the lowest ID).

Concretely, the process is implemented by changing the DFS algorithm (Algorithm 3) in two places. First, in step 1 each agent broadcasts the number of its neighbors; the agent ranked highest is chosen as the root.

Second, step 5 is implemented by having each agent sort the list of its neighbors, the most connected ones first. The rest proceeds as normal.
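The two changes above can be sketched in a centralized simulation of the MCN-guided DFS. This is an illustrative sketch, not the distributed protocol itself; the graph below and the integer agent IDs are hypothetical:

```python
def mcn_dfs(neighbors):
    """DFS order under the MCN heuristic: root = max-degree agent (ties
    broken by lowest ID); each agent visits its unvisited neighbors
    most-connected-first (step 5 of the modified algorithm)."""
    # Rank agents by (degree descending, ID ascending).
    rank = lambda a: (-len(neighbors[a]), a)
    root = min(neighbors, key=rank)
    visited, order = set(), []

    def visit(agent):
        visited.add(agent)
        order.append(agent)
        # Each agent sorts its neighbor list, most connected ones first.
        for nb in sorted(neighbors[agent], key=rank):
            if nb not in visited:
                visit(nb)

    visit(root)
    return order

# Example graph: agent 0 has degree 3 and becomes the root.
g = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
print(mcn_dfs(g))  # → [0, 1, 2, 3]
```

In the distributed version, the degree broadcast replaces the global `min` over all agents, and each agent sorts only its own neighbor list locally.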

MCS: maximum cardinality set adapted to DFS trees The maximum cardinality set heuristic was introduced by [208], and was subsequently used in many other contexts like [111, 25, 84]. This heuristic is designed to find low-width elimination orders for variable elimination procedures. It works by selecting some agent as the first one to be eliminated, and adding it to the set S of visited agents.

Then, each agent not in S is considered in turn. The one that has the largest number of neighbors already in S is selected to be eliminated next, and is placed in the set S. Ties are broken randomly (or by agent ID). The process is repeated until all agents are in S.
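The centralized MCS procedure just described can be sketched as follows (the example graph is hypothetical, and ties are broken here by lowest agent ID rather than randomly):

```python
def mcs_order(neighbors, first):
    """Maximum cardinality set ordering: start from `first`, then repeatedly
    pick the agent with the most neighbors already in S (ties -> lowest ID)."""
    S = [first]
    while len(S) < len(neighbors):
        remaining = [a for a in neighbors if a not in S]
        # Score = number of neighbors already in S, maximized; tie-break on ID.
        S.append(min(remaining,
                     key=lambda a: (-len(set(neighbors[a]) & set(S)), a)))
    return S

g = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(mcs_order(g, first=0))  # → [0, 1, 2, 3]
```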

MCS as originally described in [208] does not produce a DFS ordering of the agents in the graph. Therefore, we propose in the following a simple adaptation of the DFS generation Algorithm 3 that takes advantage of the MCS heuristic. We replace the DFS message handling code from Algorithm 3 (lines 11-15) with the following process, which is intended to simulate the MCS heuristic:

Whenever an agent Xi receives a DFS message from one of its neighbors, Xi does the following:

• select its neighbors that are neither already visited nor in the context of the DFS message: these are agents not yet visited, i.e. future children/pseudochildren;

• ask each one of them how many of their neighbors are already in the context of the DFS message;

• send the DFS token next to the neighbor which replies with the highest number.
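The token-passing process above can be sketched in a centralized simulation. One simplifying assumption here: the "context of the DFS message" is approximated by the set of all visited agents, and ties are broken by lowest agent ID; the example graph is hypothetical:

```python
def mcs_dfs(neighbors, root):
    """DFS order produced by forwarding the token, at each step, to the
    unvisited neighbor with the most neighbors already in the context."""
    visited = []  # agents that have held the token, in visiting order

    def handle_token(xi):
        visited.append(xi)
        while True:
            # Neighbors not yet visited: future children/pseudochildren.
            candidates = [nb for nb in neighbors[xi] if nb not in visited]
            if not candidates:
                return  # token goes back toward the parent
            # Each candidate "replies" with how many of its neighbors are
            # already in the context; highest count gets the token next.
            nxt = min(candidates,
                      key=lambda a: (-sum(n in visited for n in neighbors[a]), a))
            handle_token(nxt)

    handle_token(root)
    return visited

g = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
print(mcs_dfs(g, root=0))  # → [0, 1, 3, 2]: agent 3 is preferred over 2
```

In the distributed protocol, the "reply" step is an actual message exchange between Xi and its unvisited neighbors, rather than a local computation.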

The DPOP Algorithm


DPOP: A Dynamic Programming Optimization Protocol for DCOP

“Good things come in large packages.”

In this chapter we introduce the DPOP algorithm for DCOP. DPOP is an algorithm based on dynamic programming [19] which performs bucket elimination [49] on a DFS tree in a distributed fashion. DPOP's main advantage is that it requires only a linear number of messages, thus introducing exponentially less network overhead than search algorithms when applied in a distributed setting. Its complexity lies in the size of the UTIL messages, which is exponentially bounded by the induced width of the DFS ordering chosen. DPOP is therefore an excellent choice for solving DCOPs in case the problems have low induced width.

In case the problems have high induced width and DPOP is infeasible, other techniques must be explored. Part III of this thesis (Chapters 6, 7 and 8) discusses techniques that deal with the exponential space problem in different ways, offering different tradeoffs.

For the centralized case, we have reviewed in Section 3.2.1 the BTE algorithm introduced by Kask et al. [107] and Shenoy [190]. BTE is a general algorithm which operates on any variable ordering (which is assumed to be given as input). BTE then creates a pseudotree which corresponds to this ordering, and operates on this pseudotree. The issue in a multiagent setting is that operating on arbitrary pseudotrees (i.e. non-DFS ones) breaks the assumption that only neighbors can communicate directly (see Section 2.2).

Therefore, this chapter introduces DPOP, a special case of BTE that operates on a variable ordering given by a DFS arrangement of the problem graph. This guarantees that the restrictions from Section 2.2 hold.

4.1 DPOP: A Dynamic Programming Optimization Protocol for DCOP

DPOP is a complete algorithm, and has the important advantage that it generates only a linear number of messages. This is important in distributed settings because sending a large number of small messages (like search algorithms do) typically entails large communication overheads.

In the following sections we will present DPOP's three phases in more detail. For a formal description, see Algorithm 4.

Algorithm 4 DPOP: Dynamic Programming Optimization Protocol
DPOP(X, D, R): each agent Xi does:

DPOP phase 1: DFS arrangement - run token passing mechanism as in Algorithm 3
1  At completion, Xi knows P_i, PP_i, C_i, PC_i, Sep_i

DPOP phase 2: UTIL propagation (bottom-up UTIL message propagation)
2  JOIN_i^{P_i} = null
3  forall X_j ∈ C_i do  /* for all children of X_i; if X_i is a leaf, skip this */
4    wait for UTIL_j^i message to arrive from X_j
5    JOIN_i^{P_i} = JOIN_i^{P_i} ⊕ UTIL_j^i  // add to the join the UTIL messages from children as they arrive
6  JOIN_i^{P_i} = JOIN_i^{P_i} ⊕ R_i^{P_i} ⊕ (⊕_{X_j ∈ PP_i} R_i^j)  // also join all relations with parent/pseudoparents
7  UTIL_i^{P_i} = JOIN_i^{P_i} ⊥ X_i  // use projection to eliminate self out of message to parent
8  send UTIL_i^{P_i} message to P_i

DPOP phase 3: VALUE propagation (top-down VALUE message propagation)
9  wait for VALUE_i^{P_i}(Sep_i^*) msg from P_i  // Sep_i^* is the optimal assignment for all vars in Sep_i
10 v_i^* ← argmax_{v_i ∈ d_i} (JOIN_i^{P_i}[Sep_i^*])  // slice JOIN_i^{P_i} corresponding to Sep_i^*; find best v_i
11 forall X_j ∈ C_i do  /* for all children of X_i; if X_i is a leaf, skip this */
12   send VALUE(Sep_i^* ∩ Sep_j ∪ {X_i = v_i^*}) message to X_j
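To make the UTIL/VALUE mechanics of Algorithm 4 concrete, the following is a minimal centralized simulation of both phases on a tree-structured problem (no pseudoparents, so each Sep_i is just the parent). The variables, domains, and pairwise utilities are hypothetical examples, and the recursion stands in for the actual message exchange:

```python
# Hypothetical 3-variable tree: x1 is the root; x2 and x3 are its children.
domains = {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1]}
parent = {"x2": "x1", "x3": "x1"}
children = {"x1": ["x2", "x3"], "x2": [], "x3": []}

# Pairwise utility r(child_value, parent_value) for each tree edge.
util = {
    ("x2", "x1"): lambda a, b: 2 if a != b else 0,
    ("x3", "x1"): lambda a, b: 1 if a == b else 0,
}

best_choice = {}  # (xi, parent_value) -> optimal value of xi (for VALUE phase)

def util_message(xi):
    """UTIL message xi -> parent: maps each parent value to the best utility
    achievable in xi's subtree (lines 2-8 of Algorithm 4)."""
    pi = parent[xi]
    child_msgs = [util_message(xj) for xj in children[xi]]
    msg = {}
    for vp in domains[pi]:
        # Join the relation with the parent and the children's UTIL messages,
        # then project xi out by maximizing over its own domain (line 7).
        scores = {vi: util[(xi, pi)](vi, vp) + sum(m[vi] for m in child_msgs)
                  for vi in domains[xi]}
        best = max(scores, key=scores.get)
        best_choice[(xi, vp)] = best
        msg[vp] = scores[best]
    return msg

def solve():
    root = "x1"
    msgs = [util_message(xj) for xj in children[root]]
    scores = {v: sum(m[v] for m in msgs) for v in domains[root]}
    assignment = {root: max(scores, key=scores.get)}
    # VALUE phase: each agent picks its stored best response to its parent's
    # announced value (lines 9-12 of Algorithm 4).
    stack = list(children[root])
    while stack:
        xj = stack.pop()
        assignment[xj] = best_choice[(xj, assignment[parent[xj]])]
        stack.extend(children[xj])
    return assignment, max(scores.values())

print(solve())
```

Note that only one UTIL message flows up each tree edge and one VALUE message flows down, matching DPOP's linear message count; the size of each UTIL message here is |domain of the parent|, i.e. exponential in the induced width (which is 1 for a tree).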