
Part I Fundamental Algorithms

6.5 Asynchronous GHS Algorithm

6.5.3 The Algorithm

Algorithm 6.2 shows the operation of AGHS_MST as in [8], following the above description of its operation. The procedure do_test is invoked when a node receives an initiate message, do_report is invoked when a node receives a report message, and do_changeroot is used by a node to change its root.

6.5.3.1 Analysis

Theorem 6.3 AGHS_MST computes an MST using O(m + n log n) messages in O(n log n) time.

Proof Each step of AGHS_MST requires a broadcast and convergecast of messages, after which a step of combining fragments is performed. These steps require O(n) time, as the depth of a fragment is at most n, and O(n) messages, as the number of edges in a fragment is also at most n. Combining the fragments requires similar time and message complexities as the messages traverse the fragment.

However, in the last step, there are at most Θ(m) messages for updating the new core identifier. Since the number of phases is O(log n), the total number of time steps, considering O(n) steps per phase, is O(n log n). The message complexity, counting both the initial and combining steps, is therefore O(m + n log n).

In detail, there will be 4|E| test/reject messages (one pair for each side of every edge), n initiate messages per level, n report messages per level, 2n test/accept messages per level (one pair for each node), and n change-root/connect messages per level (core to MWOE path), the addition of which yields 4m + 5n log n messages; therefore, Msg(AGHS_MST) = O(m + n log n).
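The counting argument above can be checked with a small sketch; the function name and the use of ⌈log₂ n⌉ for the number of levels are illustrative assumptions, not part of the original proof.

```python
import math

def aghs_message_bound(n: int, m: int) -> int:
    """Upper bound on AGHS_MST messages from the counting argument:
    4|E| test/reject messages overall, plus (n initiate + n report
    + 2n test/accept + n change-root/connect) = 5n messages per level,
    over at most ceil(log2 n) levels."""
    levels = math.ceil(math.log2(n))
    return 4 * m + 5 * n * levels

# For a graph with n = 8 nodes and m = 16 edges:
# 4*16 + 5*8*3 = 184 messages at most.
print(aghs_message_bound(8, 16))  # 184
```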

6.6 Chapter Notes

The distributed MST problem is a fundamental problem in distributed computing.

The AGHS_MST algorithm, which has O(n log n) time complexity and O(m + n log n) message complexity, was a fundamental asynchronous distributed MST algorithm that has inspired further research. Chin and Ting [2] improved the time complexity of the GHS algorithm to O(n log log n), Gafni [3] provided a further improvement to O(n log* n), and Awerbuch [1] then provided a running time of O(n), which is optimal.

Considering the diameter d of the graph, Garay, Kutten, and Peleg [4] provided a distributed MST algorithm with running time O(d + n^0.61). Kutten and Peleg [6] further improved the bound to O(d + √n log* n). Khan and Pandurangan [5] viewed the problem from the point of providing an approximate MST algorithm rather than an exact one, and they proposed an O(log n)-approximate distributed MST algorithm with running time O(d + l), where d is the diameter and l is the local shortest path diameter of the graph, which depends on the topology and the edge weights of the graph.

80 6 Minimum Spanning Trees

Algorithm 6.2 AGHS_MST
1: state: (sleep, find, found)
2: int statech[|Γ(i)|]: basic, branch, reject
3: int level, rec
4: neighbors testch, bestch, parent
5: real name, bestwt
6: message types initiate, connect, test, ack, reject, report
{Initialize}
13: connect(L): if L < level then
14:   statech[j] ← branch
15:   send initiate(level, name, state) to j
16: else if statech[j] = basic then process message later
17: else send initiate(level + 1, ω(ij), find) to j
18: initiate(L, F, S): level ← L; name ← F; state ← S; parent ← j
19:   bestch ← ⊥; bestwt ← ∞
20:   for all x ∈ Γ(i): statech[x] = branch ∧ x ≠ j do
21:     send initiate(L, F, S) to x
22:   if state = find then rec ← 0; do_test
23: test(L, F): if (L > level) then process message later
24: else if F = name then
33: reject(): if (statech[j] = basic) then statech[j] ← reject
34:   do_test
35: report(ω): if (j = parent) then
36:   if ω < bestwt then
37:     bestwt ← ω; bestch ← j
38:   rec ← rec + 1; do_report
39: else
40:   if state = find then process message later
41:   else if ω > bestwt then do_changeroot
42:   else if ω = bestwt = ∞ then terminate
43: changeroot(): do_changeroot
44: end while
45: procedure do_test
46:   if ∃j ∈ Γ(i): statech[j] = basic then
47:     testch ← j: statech[j] = basic ∧ ω(ij) minimal
48:     send test(level, name) to testch
49:   else testch ← ⊥; do_report
50:   end if
51: end procedure
52:
53: procedure do_report
54:   if rec = #{j: statech[j] = branch ∧ j ≠ parent} ∧ testch = ⊥ then
55:     state ← found; send report(bestwt) to parent
56:   end if
57: end procedure
58:
59: procedure do_changeroot
60:   if statech[bestch] = branch then
61:     send changeroot to bestch
62:   else send connect(level) to bestch; statech[bestch] ← branch
63:   end if
64: end procedure
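The three local procedures of the listing (lines 45-64) can be sketched as plain Python functions; the dict-based node state and the send_* callables standing in for message transmission are assumptions made for illustration, not the book's implementation.

```python
INF = float("inf")

def do_test(node, send_test, send_report):
    """Probe the minimum-weight basic edge, or report if none is left (lines 46-49)."""
    basic = {j: w for j, (status, w) in node["channels"].items() if status == "basic"}
    if basic:
        node["testch"] = min(basic, key=basic.get)               # line 47: minimal omega(ij)
        send_test(node["testch"], node["level"], node["name"])   # line 48
    else:
        node["testch"] = None                                    # line 49: testch <- bottom
        do_report(node, send_report)

def do_report(node, send_report):
    """Report best weight to the parent once all branch children answered (lines 54-55)."""
    branches = [j for j, (status, _) in node["channels"].items()
                if status == "branch" and j != node["parent"]]
    if node["rec"] == len(branches) and node["testch"] is None:
        node["state"] = "found"
        send_report(node["parent"], node["bestwt"])

def do_changeroot(node, send_changeroot, send_connect):
    """Forward change-root toward the MWOE, or connect over it (lines 60-62)."""
    status, weight = node["channels"][node["bestch"]]
    if status == "branch":
        send_changeroot(node["bestch"])                          # line 61
    else:
        send_connect(node["bestch"], node["level"])              # line 62
        node["channels"][node["bestch"]] = ("branch", weight)

# Example: a node with basic edges of weight 5.0 and 2.0 probes the weight-2.0 edge.
_sent = []
_node = {"channels": {1: ("basic", 5.0), 2: ("basic", 2.0), 3: ("branch", 1.0)},
         "level": 0, "name": 2.0, "testch": None, "rec": 0,
         "parent": 3, "bestwt": INF, "state": "find", "bestch": None}
do_test(_node, lambda j, L, F: _sent.append((j, L, F)), lambda p, w: None)
print(_node["testch"])  # 2
```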

Fig. 6.5 Example graph for Exercise 1

Fig. 6.6 Example graph for Exercise 2

6.6.1 Exercises

1. Find the MST in the graph of Fig. 6.5 using the Kruskal_MST and Dijkstra_MST algorithms.

2. Show the execution of the DistPrim_MST algorithm in the graph of Fig. 6.6, assuming that node a is the root.

3. Show a possible execution of SGHS_MST in the graph of Fig. 6.7.

Fig. 6.7 Example graph for Exercise 3

4. Under which circumstances would an approximate distributed MST algorithm be preferred over an exact distributed algorithm?

References

1. Awerbuch B (1987) Optimal distributed algorithms for minimum weight spanning tree, counting, leader election, and related problems. In: Proc 19th ACM symp on theory of computing, pp 230–240

2. Chin F, Ting H (1985) An almost linear time and O(n log n + e) messages distributed algorithm for minimum-weight spanning trees. In: Proc 26th IEEE symp foundations of computer science, pp 257–266

3. Gafni E (1985) Improvements in the time complexity of two message-optimal election algorithms. In: Proc of the 4th symp on principles of distributed computing, pp 175–185

4. Garay J, Kutten S, Peleg D (1998) A sublinear time distributed algorithm for minimum-weight spanning trees. SIAM J Comput 27:302–316

5. Khan M, Pandurangan G (2008) A fast distributed approximation algorithm for minimum spanning trees. Distrib Comput 20(6):391–402

6. Kutten S, Peleg D (1998) Fast distributed construction of k-dominating sets and applications. J Algorithms 28:40–66

7. Peleg D (2000) Distributed computing: a locality-sensitive approach. SIAM, Philadelphia. ISBN 0-89871-464-8

8. Tel G (2000) Introduction to distributed algorithms, 2nd edn. Cambridge University Press, Cambridge

Chapter 7

Routing

Abstract Routing in a computer network is the process of communicating messages from source nodes to destination nodes along selected paths with the lowest possible costs. This chapter introduces a few sample distributed routing algorithms based on sequential routing algorithms.

7.1 Introduction

For the routing process, we will assume that the edges of G have nonnegative weights and that these represent costs of sending the messages and delays incurred.

The network graph in this case is represented by weighted communication links as G(V, E, w), where w: E → R. Since nodes are not connected to every other node of the weighted graph, messages must be forwarded between the intermediate nodes from the source to the destination. The cost of sending a message from a source to a destination is the sum of the weights of the edges of the path between them.
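The cost model just described can be sketched in a few lines; the graph representation and function name are illustrative assumptions.

```python
def path_cost(weights, path):
    """Cost of a path: the sum of w(u, v) over consecutive node pairs."""
    return sum(weights[(u, v)] for u, v in zip(path, path[1:]))

# Undirected example: store each edge weight under both orientations.
w = {}
for u, v, c in [("a", "b", 2), ("b", "c", 1), ("a", "c", 5)]:
    w[(u, v)] = w[(v, u)] = c

print(path_cost(w, ["a", "b", "c"]))  # 3, cheaper than the direct edge a-c
```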

There is at least one shortest path between every pair of nodes, and the purpose of a routing algorithm is to determine this shortest path. Desirable properties of a routing algorithm are as follows:

Correctness: Every message should be delivered correctly to its destination.

Complexity: The algorithm must have low time, message, and space complexities.

Robustness: The algorithm should update routing tables when topology changes.

Shortest Paths: Messages should be transferred along the minimum-cost paths from the source to the destination.

In this chapter, we will first review three classical sequential routing algorithms due to Dijkstra, Bellman and Ford, and Floyd and Warshall in Sect. 7.2, as these form the basis of the distributed routing algorithms. Then we describe distributed implementations of the Bellman–Ford and Floyd–Warshall algorithms in Sects. 7.3, 7.4, 7.5, and 7.6. We conclude with descriptions of two fundamental routing protocols.
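As a preview of the sequential algorithms reviewed in Sect. 7.2, here is a minimal sketch of Dijkstra's shortest-path algorithm under the nonnegative-weight assumption above; the adjacency-list format and function name are illustrative choices, not the book's notation.

```python
import heapq

def dijkstra(adj, source):
    """Return the minimum routing cost from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, already improved
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w           # relax edge (u, v)
                heapq.heappush(heap, (d + w, v))
    return dist

adj = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(adj, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```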
