
5.3 Comparing H-DPOP with search algorithms

Distributed search ([141, 81, 194, 33]) is an alternative approach to inference-based algorithms like DPOP. Algorithms based on sequential search naturally provide pruning based on hard constraints: partial assignments that have led to an inconsistency are not explored further, and the search backtracks.

Further pruning is achieved through more sophisticated consistency techniques, and by using variants of the branch-and-bound principle. The main advantage of search over inference-based algorithms like DPOP ([160]) is that search uses only polynomial space, which makes it suitable for memory-limited platforms. The main drawback is that search algorithms typically require an exponential number of small messages, thus producing high network overheads.
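To make the two pruning sources concrete, the following is a minimal, centralized sketch (ours, not taken from any of the cited algorithms). It assumes hypothetical hard constraints given as (scope, predicate) pairs and soft constraints given as (scope, cost function) pairs:

import math

def branch_and_bound(variables, domains, hard, soft,
                     assignment=None, cost=0.0, best=(math.inf, None)):
    """Depth-first search with hard-constraint pruning and bound-based cuts.

    hard: list of (scope, predicate) pairs -- the predicate must hold.
    soft: list of (scope, cost_fn) pairs  -- cost_fn returns a non-negative cost.
    Returns (best_cost, best_assignment)."""
    assignment = assignment if assignment is not None else {}
    if cost >= best[0]:                        # branch-and-bound cut
        return best
    if len(assignment) == len(variables):      # complete, consistent assignment
        return (cost, dict(assignment))
    var = variables[len(assignment)]
    for value in domains[var]:
        assignment[var] = value
        # hard-constraint pruning: backtrack as soon as a fully assigned
        # hard constraint is violated
        if all(pred(*(assignment[v] for v in scope))
               for scope, pred in hard
               if all(v in assignment for v in scope)):
            # add the cost of soft constraints completed by this assignment
            step = sum(fn(*(assignment[v] for v in scope))
                       for scope, fn in soft
                       if var in scope and all(v in assignment for v in scope))
            best = branch_and_bound(variables, domains, hard, soft,
                                    assignment, cost + step, best)
        del assignment[var]
    return best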

A major improvement to search has been to use a cache at each node to remember past results ([42]). The total size of all caches represents the search space explored by sequential search. Experimentally, we will show that the total search space explored by search with full caching is similar to the space explored by H-DPOP. This comparison provides further testimony to the pruning power of H-DPOP, with the important advantage that H-DPOP uses a linear number of messages, as opposed to an exponential number in search.

In addition, we will show comparisons with a version of search which exploits only hard constraints, without using the branch-and-bound principle. This version of search is closer to our H-DPOP algorithm, as H-DPOP prunes using hard constraints only, without any other bounding. Our experiments show that search using only hard constraints performs similarly to branch-and-bound search, with only a minimal degradation in cache size and message exchanges. This further highlights that the hard constraints are the dominating factor in all these problems, and that H-DPOP, which efficiently exploits them with a linear number of messages, is superior to search.

5.3.1 NCBB: Non-Commitment Branch and Bound Search for DCOP

NCBB ([33]) is a polynomial-space distributed branch and bound search algorithm for distributed optimization. The basic idea of branch and bound is the same as in centralized branch and bound search ([118]). The distributed nature of the search allows different agents to explore non-intersecting parts of the search space concurrently, providing speed and computational resource advantages. It also allows for eager propagation of bound changes from children to parents, providing better pruning. The details of NCBB can be found in [33]; we describe it briefly here.

NCBB works on a DFS tree arrangement of the agents in the constraint graph. The DFS ordering can be constructed in the same way as in Section 3.4.1 or as in [33]. The main advantage of such an ordering over traditional OR-based search is that, given the ancestor assignments, the agents in a given subtree can work independently to minimize their cost. The time complexity of $O(d^n)$ of OR-based search ($d$ is the maximum domain size, $n$ is the number of agents) reduces to $O(b \cdot d^{H+1})$, where $b$ is the branching factor of the DFS tree arrangement, $d$ is the maximum domain size and $H$ is the depth of the DFS traversal of the constraint graph.
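As a rough illustration of this gap (with hypothetical numbers, not taken from our experiments), consider $n = 15$ agents of domain size $d = 2$ arranged in a balanced binary DFS tree, so that $b = 2$ and $H = 3$:

\[ d^n = 2^{15} = 32768 \qquad \text{versus} \qquad b \cdot d^{H+1} = 2 \cdot 2^{4} = 32. \]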

During the initialization of the search in NCBB, all agents compute global upper and lower bounds on the solution cost. Then each agent greedily chooses its value, given the ancestor assignments, so as to minimize its own contribution (excluding its subtree) to the global solution cost. After initialization, the agents start the main search procedure. An agent $X_i$ initiates the search in its subtree only after receiving an explicit SEARCH message from its parent. Before this SEARCH message is sent, all of $X_i$'s ancestors choose their values and announce them to all their descendants.
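A minimal sketch of this greedy initialization step, as we read it (the function and argument names are ours, not from [33]):

def greedy_initial_value(domain, ancestor_values, constraints_with_ancestors):
    """Pick the value that minimizes the cost of this agent's constraints
    with its already announced ancestors; the subtree below is ignored
    at this stage.

    constraints_with_ancestors: list of functions f(my_value, ancestor_values)
    returning a cost."""
    return min(domain,
               key=lambda v: sum(f(v, ancestor_values)
                                 for f in constraints_with_ancestors))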

A distinctive feature of NCBB is that when $X_i$ selects a value to explore for a given subtree, it may choose different values for its different subtrees (the non-commitment part). The advantage of such concurrent search is that it allows for a tighter upper bound when a value is reused in a different subtree, because the already known costs of the completed subtree searches can be taken into account, yielding better pruning. Once the search at the root has finished for all subtrees and for each of the root's values, we have the solution to the optimization problem.
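The bound tightening can be sketched as follows (our reading of the idea, with hypothetical names): the upper bound passed to one remaining subtree is the global bound minus the node's own cost and the exact costs already reported by finished sibling subtrees.

def subtree_upper_bound(global_upper_bound, own_cost, finished_subtree_costs):
    """Bound available to one remaining subtree search."""
    return global_upper_bound - own_cost - sum(finished_subtree_costs)

# Example: a global bound of 20, a local cost of 3 and finished subtrees with
# optimal costs 5 and 4 leave only 20 - 3 - 5 - 4 = 8 for the remaining
# subtree, so more of its search space can be pruned.
print(subtree_upper_bound(20, 3, [5, 4]))   # -> 8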

5.3.1.1 NCBB with caching

The main advantage of search over inference-based algorithms like DPOP ([160]) is that search uses only polynomial space, making it suitable for memory-limited platforms. However, the polynomial space comes at a price: search forgets everything from the past, so it may have to re-explore some parts of the search space. A natural extension of search is to use a variable-sized cache at each agent that stores previous search results, so that when the search revisits previously explored search space, the result can be looked up directly in the cache. Such a scheme greatly improves the performance of search (as shown in [42, 32]) and allows the user to control the space-time tradeoff by varying the cache size (through a user-defined cache factor).
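The basic pattern is a lookup before searching; a minimal sketch (ours, ignoring for now the question of which results to store, discussed below), where search_subtree stands in for the actual distributed subtree search:

def solve_subtree(sep_assignment, cache, search_subtree):
    """sep_assignment: dict var -> value for the separator of this agent."""
    key = tuple(sorted(sep_assignment.items()))   # hashable cache key
    if key in cache:                              # previously explored: reuse
        return cache[key]
    cost = search_subtree(sep_assignment)         # otherwise search ...
    cache[key] = cost                             # ... and remember the result
    return cost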

An advanced version of NCBB ([32]) incorporates such a caching scheme. The maximum cache size at any node $X_i$ is $d^{|Sep_i|}$ (see Section 3.1.2.3). In the original NCBB, the cache stores the solution cost provided by the subtree at $X_i$, indexed by the value assignment of $Sep_i$. A result is entered in the cache only if the subtree can provide a solution within the current bounds at $X_i$; otherwise the result is not cached. In practice, such a scheme keeps the cache smaller at the expense of extra effort (number of messages) spent re-exploring the previously pruned, uncached parts of the search space.

NCBB*: NCBB with a modified caching policy. We have modified the caching in NCBB (the resulting algorithm is called NCBB*, shown in the graphs as NCBB Modified) so that the $\langle cost, Sep_i \rangle$ pair is stored at $X_i$ even if, for the current $Sep_i$ assignment, the subtree cannot provide a solution within the bounds. This saves the extra effort when such a combination is encountered again during the search. NCBB* works on the same DFS tree as H-DPOP, using the MCN heuristic ([208, 127]), to provide comparable results. We have also implemented another version of search, NCBB Hard Constraints, which works only on the hard constraints, without any other bounding.
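A sketch of the difference between the two policies (our reading, with hypothetical names; both caches are keyed by the separator assignment):

NO_SOLUTION = float("inf")   # marker: no solution within the bounds

def store_original(cache, sep_key, cost, upper_bound):
    # Original NCBB policy: cache only results that fall within the bounds.
    if cost < upper_bound:
        cache[sep_key] = cost

def store_modified(cache, sep_key, cost):
    # NCBB* policy: cache every outcome, including NO_SOLUTION, so the same
    # separator assignment never triggers a re-exploration of the subtree.
    cache[sep_key] = cost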

In NCBB* we store the $\langle Sep_i, cost \rangle$ pairs in the cache even if the subtree rooted at agent $X_i$ cannot provide a solution within the current bounds. This modification saves overhead, in terms of messages exchanged, when the same $Sep_i$ assignment is explored again. Figure 5.3(b) shows the number of messages exchanged in NCBB and NCBB* (for the experimental setup see Section 5.4.1.2; here it suffices to know that $p$, the edge inclusion probability plotted on the x axis, is a graph parameter).


Figure 5.3: NCBB vs NCBB*: NQueen graphs

NCBB* always requires a smaller number of messages, and the savings are often quite significant (between 40% and 65% for $p = 0.19$).

In contrast, the cache used by NCBB* is larger than the cache in NCBB, but this increase is more than compensated by the reduced message exchange. It is better to have a slightly bigger local cache than to increase the network overhead by exchanging a larger number of messages; the fact that search with caching outperforms search alone is already a testimony to this trade-off.

Hence, in our opinion, the comparison of H-DPOP with NCBB* is more accurate than H-DPOP vs NCBB, as both H-DPOP and NCBB* traverse a similar search space.

5.3.2 Comparing pruning in search and in H-DPOP

NCBB Hard Constraints and H-DPOP both prune the search space based only on hard constraints, so we would expect the size of the explored search space to be identical in both cases. However, the experimental results show slight variations. In the following, we describe the different pruning strategies employed by search and by H-DPOP, which account for this difference.

Figure 5.4 shows a DFS arrangement of a constraint network. First consider the pruning done by H-DPOP. H-DPOP prunes bottom-up: as the UTIL message travels up from the leaves in the subtree of node $N_3$, it prunes all inconsistent combinations. At any node $N_i$, H-DPOP only explores the part of the search space that is consistent given the assignments of the $Sep_i$ nodes at $N_i$ and the subtree of $N_i$. This is not true for search algorithms. Search prunes assignments from top to bottom: at node $N_1$, the partial solution from the root $N_{root}$ down to $N_1$ is consistent, but there is no guarantee that this consistent partial solution will remain consistent upon further exploration of the subtree at $N_1$.
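The top-down test used by search can be sketched as follows (a minimal sketch, ours): a partial assignment is kept as long as no hard constraint among the already assigned variables is violated, which says nothing about whether it can be extended to a full solution.

def consistent_so_far(partial, hard_constraints):
    """partial: dict var -> value; hard_constraints: list of (scope, predicate)."""
    for scope, pred in hard_constraints:
        if all(v in partial for v in scope):            # fully assigned only
            if not pred(*(partial[v] for v in scope)):
                return False
    return True   # consistent so far, but possibly a dead end deeper down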

Figure 5.4: A DFS tree arrangement to illustrate differences in bottom-up pruning (H-DPOP) vs. top-down pruning (search)

Furthermore, there is no guarantee in either search or H-DPOP that they always explore only globally consistent combinations. H-DPOP explores consistent solutions in the assignment space of the $Sep_i$ nodes and the subtree of any node $N_i$. However, such combinations may become inconsistent as the UTIL messages go up the DFS tree, on two accounts. In Figure 5.4, $N_3$ sends its message to its parent, which combines it with the message of its other child, $N_2$. During this process, the combinations that are present in both siblings' messages are passed up, and the rest are pruned. The other source of pruning is the set of constraints between $N_4$ and its parent and pseudo-parents, which can make some combinations coming from the children of $N_4$ inconsistent.
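The combination step at the parent can be sketched as follows (ours, and purely illustrative of the pruning effect, not of the actual UTIL message format): only assignments on which both children agree on the shared variables survive.

def combine_children(child_a, child_b, shared_vars):
    """child_a, child_b: lists of dicts (var -> value) that each child reports
    as consistent. Returns the merged assignments that agree on shared_vars."""
    merged = []
    for a in child_a:
        for b in child_b:
            if all(a[v] == b[v] for v in shared_vars):
                merged.append({**a, **b})
    return merged

# Example: only the assignments agreeing on x survive the combination.
left  = [{"x": 0, "y": 1}, {"x": 1, "y": 0}]
right = [{"x": 1, "z": 2}]
print(combine_children(left, right, ["x"]))   # -> [{'x': 1, 'y': 0, 'z': 2}]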

In search, any consistent partial solution may become inconsistent as the search expands lower nodes in the DFS tree. This leads to exploration of inconsistent search space in NCBB Hard Constraints.