
Maheswaran et al. [126] show that in some settings, and according to some metrics, (complete) centralization is no worse privacy-wise than some distributed algorithms.

Even though the nodes in a cluster send relations to the cluster root, these relations may very well be the result of aggregations, and not the original relations.

Example 18 In Figure 8.1, $X_{13}$ sends $X_9$ (via $X_{11}$ and $X_{10}$) 3 relations: $r_{13}^{11}$, $r_{13}^{10}$ and $r_{13}^{9}$. Notice that the $r_{13}^{11}$ that is sent to $X_9$ like this is not the real $r_{13}^{11}$, but the result of the aggregation produced by the partial join performed with the UTIL message that $X_{13}$ has received from $X_{14}$. Therefore, inferring true valuations may be impossible even in this scenario.
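To make the aggregation concrete, here is a minimal Python sketch (with made-up utilities and hypothetical function names, not the thesis's actual data) of a DPOP-style partial join followed by a projection. The receiver sees only the blended message, so it cannot separate the original relation from the child's contribution:

```python
# A relation over variables is a dict: assignment tuple -> utility.
# Toy binary domains; in the example, X13 joins its own relation
# r(X13, X11) with the UTIL message received from X14 before sending.

def join_and_project(rel_own, util_child):
    """Join a binary relation r(X_own, X_parent) with a child's UTIL
    message over X_own, then project X_own out by maximizing.
    rel_own: dict {(x_own, x_parent): utility}
    util_child: dict {x_own: utility}  (UTIL message from a child)
    Returns a unary message over X_parent."""
    msg = {}
    for (x_own, x_parent), u in rel_own.items():
        combined = u + util_child[x_own]                          # partial join
        msg[x_parent] = max(msg.get(x_parent, float("-inf")), combined)  # projection
    return msg

# Made-up numbers: the message forwarded upward blends r(X13, X11)
# with the UTIL message from X14; the receiver cannot untangle them.
r_13_11 = {(0, 0): 2, (0, 1): 0, (1, 0): 1, (1, 1): 3}
util_from_14 = {0: 5, 1: 1}
print(join_and_project(r_13_11, util_from_14))  # {0: 7, 1: 5}
```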

8.5 Summary

We have presented an optimal, hybrid algorithm that uses a customizable message size and amount of memory. PC-DPOP allows for a priori, exact predictions about privacy loss, communication, memory and computational requirements on all nodes and links in the network.

The algorithm explores the loose parts of the problem without any centralization (like DPOP); only small, tightly connected clusters are centralized and solved by their respective cluster roots. This means that the privacy concerns associated with a centralized approach can be avoided in most parts of the problem. We will investigate the privacy loss of this approach more thoroughly in future work.

Experimental results show that PC-DPOP is particularly efficient for large optimization problems of low width. The intuition that dense problems tend to require more centralization is confirmed by experiments.

Dynamics


Dynamic Problem Solving with Self-Stabilizing Algorithms

In this chapter we extend the discussion of distributed optimization algorithms to dynamically changing environments, such as dynamic scheduling applications where tasks arrive and are executed continuously, or sensor networks where the vehicles to be tracked move continuously.

These problems can be modeled as dynamic CSPs, and there is a wide body of research on this topic: [213, 52, 20, 217], to name just a few. We refer the interested reader to [212] for an excellent survey of various techniques that can be applied in this setting. However, the vast majority of these techniques operate in a centralized fashion: the dynamic changes in the environment are communicated to a central server, which then re-solves the problem whenever necessary. In the following, we will present distributed algorithms for dynamic constraint reasoning; we focus on a class of algorithms called self-stabilizing.

Self-stabilization in distributed systems is a concept introduced by Dijkstra in [57]. It is the ability of a system to respond to transient failures by eventually reaching a stable state where a legitimacy predicate is satisfied, and maintaining it afterwards. In the context of DCOP, we define the legitimacy predicate as “all variables are assigned their values from the optimal solution of the DCOP”.

Definition 34 (Self-stabilizing DCOP algorithm) A DCOP algorithm is called self-stabilizing if it is able to always converge from any arbitrary initial configuration to a stable state where the legitimacy predicate is satisfied. This stable state corresponds to the optimal solution of the optimization problem, i.e. all variables in the problem are assigned their optimal values for the current problem configuration.
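As an illustration, the legitimacy predicate from Definition 34 can be stated as a simple check. The following Python sketch uses hypothetical data structures, and brute-forces the optimal solution purely for illustration; a real DCOP algorithm computes it distributively:

```python
from itertools import product

def optimal_assignment(variables, domains, relations):
    """Brute-force the optimal solution of a (toy) DCOP.
    variables: list of names; domains: dict name -> list of values;
    relations: list of (scope_tuple, utility_fn)."""
    best, best_util = None, float("-inf")
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        util = sum(fn(*(assignment[v] for v in scope))
                   for scope, fn in relations)
        if util > best_util:
            best, best_util = assignment, util
    return best

def legitimacy_predicate(current, variables, domains, relations):
    """True iff every variable holds its value from the optimal solution
    of the *current* problem configuration (Definition 34)."""
    return current == optimal_assignment(variables, domains, relations)

# Toy usage: two variables, one relation rewarding agreement.
variables = ["x1", "x2"]
domains = {"x1": [0, 1], "x2": [0, 1]}
relations = [(("x1", "x2"), lambda a, b: 1 if a == b else 0)]
print(legitimacy_predicate({"x1": 0, "x2": 0},
                           variables, domains, relations))  # True
```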

Algorithms with this property are very well-suited to cope with error-prone distributed systems like distributed sensor networks, or with dynamic environments like control systems or distributed scheduling, where convergence to stable states is ensured without user intervention. However, ensuring self-stabilization presents two major challenges. First, the algorithm must be able to deal with arbitrary state changes: changes in the problem topology (e.g. new agents coming in, or the network experiencing temporary problems), in the valuations of the agents, or even in their internal data structures (e.g. as a result of temporary power outages). An obvious solution is to simply restart the optimization as soon as any change happens in the problem; a sketch of this policy follows below. Nevertheless, this approach is most likely not practical, as it raises the second challenge: the algorithm's ability to deal with successive changes that occur in a relatively fast sequence. The algorithm must be fast enough in solving the changed problem to keep up with the changes.
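For concreteness, the naive restart policy can be sketched as follows (Python; all names, including problem.apply and solve, are hypothetical). The loop itself is trivial; the difficulty the text points out is that solve must terminate faster than changes arrive:

```python
import queue

def restart_on_change(problem, solve, change_queue: queue.Queue):
    """Naive policy: re-solve from scratch whenever any change arrives.
    If changes arrive faster than solve() terminates, the published
    solution is permanently stale."""
    solution = solve(problem)
    while True:
        change = change_queue.get()          # blocks until a change arrives
        problem = problem.apply(change)      # hypothetical update hook
        # Drain any changes that arrived meanwhile, then restart.
        while not change_queue.empty():
            problem = problem.apply(change_queue.get())
        solution = solve(problem)            # full re-solve from scratch
```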

These problems have so far mostly prevented self-stabilizing algorithms from addressing anything but relatively “low-level” tasks: leader election, spanning tree maintenance (e.g. [40]) and mutual exclusion. We will present in the following two notable exceptions: the earlier work of Collin, Dechter and Katz [39] for distributed self-stabilizing constraint satisfaction, and a self-stabilizing extension of the DPOP algorithm. We also note an attempt at self-stabilizing constraint optimization using a distributed, self-stabilizing version of branch and bound [223]. That approach is not practical, however, since it may create an exponential number of agents, as agents represent processes corresponding to subproblems.

9.1 Self-stabilizing AND/OR search

Collin, Dechter and Katz introduced in [39] the first self-stabilizing distributed constraint satisfaction algorithm. This algorithm also operates on a DFS tree.

In order to guarantee self-stabilization, this algorithm uses a powerful principle: each agent continuously executes two protocols in parallel: a DFS-construction protocol and a search protocol.

The DFS protocol they use (Collin and Dolev [40], also Dolev [59]) is guaranteed to eventually produce a valid DFS tree, provided no more changes happen in the problem structure.

The search protocol executes in parallel with the DFS generation protocol. It operates on the DFS tree that the first protocol produces. While this tree is not yet correctly established, the results are undefined. However, since the DFS protocol is guaranteed to eventually stabilize on a correct DFS, the search process is thus guaranteed to start operating on a correct DFS eventually. As the search process is also guaranteed to produce the correct solution in finite time given a correct DFS tree, it follows that the whole algorithm is self-stabilizing. For more details and a formal proof, see [39].
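One way to picture this layered composition is an agent loop that interleaves the two protocols at every step, with the search layer simply consuming whatever (possibly not yet stable) DFS view the lower layer currently exposes. A minimal Python sketch, with all class and method names hypothetical:

```python
class SelfStabilizingAgent:
    """Sketch of the layered composition: a DFS-construction layer and a
    search layer run in parallel; the search layer always reads the DFS
    layer's *current* view, correct or not."""

    def __init__(self, dfs_protocol, search_protocol):
        self.dfs = dfs_protocol        # eventually stabilizes on a correct DFS
        self.search = search_protocol  # correct once its input DFS is correct

    def step(self):
        # Both layers execute continuously; neither waits for the other.
        self.dfs.step()
        # Results are undefined while the DFS view is still wrong, but
        # once the lower layer stabilizes, the search layer eventually
        # stabilizes on the correct solution as well.
        self.search.step(self.dfs.current_view())
```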

Using this satisfaction algorithm as a basis, one could in principle also extend dAOBB (Algorithm 2) for self-stabilizing optimization. Specifically, we use the same self-stabilizing DFS protocol [40], interleaved with a self-stabilizing version of the search protocol executed in dAOBB. The latter protocol can easily be made self-stabilizing by having all agents cycle continuously through their values (forward search phase) and propagate cost bounds to their ancestors (backward bound propagation).
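A sketch of how such an agent could behave under this scheme (Python; the agent structure, message format, and bound handling are hypothetical simplifications, not the thesis's algorithm):

```python
class SelfStabilizingSearchAgent:
    """Sketch: the agent forever cycles through its domain (forward
    search phase) and reports its best cost bound to its ancestors
    (backward bound propagation). Because each sweep starts its bound
    afresh, stale state left by a perturbation is flushed within one
    full cycle -- the property that makes the scheme self-stabilizing."""

    def __init__(self, domain, local_cost, send_to_ancestors):
        self.domain = list(domain)
        self.index = 0                   # current position in the value cycle
        self.best_bound = float("inf")
        self.local_cost = local_cost     # cost of a value under a given context
        self.send = send_to_ancestors    # message transport, assumed given

    def step(self, context):
        """One tick: evaluate the next value, update the bound, propagate."""
        value = self.domain[self.index]
        cost = self.local_cost(value, context)
        if self.index == 0:
            self.best_bound = cost       # fresh sweep: discard stale bounds
        else:
            self.best_bound = min(self.best_bound, cost)
        self.send({"value": value, "bound": self.best_bound})
        self.index = (self.index + 1) % len(self.domain)

# Toy usage with a quadratic cost around the ancestors' context value.
agent = SelfStabilizingSearchAgent(
    [0, 1, 2],
    local_cost=lambda v, ctx: (v - ctx) ** 2,
    send_to_ancestors=print)
for tick in range(3):
    agent.step(context=1)   # reported bounds: 1, then 0, then 0
```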