
(1)

A churn-resilient replication strategy for peer-to-peer distributed hash-tables

ANR – SHAMAN – Kickoff – 27 January 2009

Sergey Legchenko, Sébastien Monnet, Pierre Sens

(2)

Outline

• Background and definition

• Replication protocol in presence of churn

(3)

Background

Peer-to-peer systems

• distribution

• symmetry (each node is both a client and a server)

• decentralized control

• dynamicity

• self-organization

(4)

Distributed Hash Tables

[Figure: peers arranged on a ring of logical identifiers: 18, 24, 32, 40, 52, 66, 75, 83, 91]

(5)

DHTs

logical address space

[Figure: the identifier ring (18, 24, 32, 40, 52, 66, 75, 83, 91) mapped onto physical peers spread across North America, South America, Europe, Asia and Australia; logical neighbors may be geographically distant]

high latency, low bandwidth between logical neighbors

Overlay network
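The routing rule behind these figures can be made explicit. Below is a minimal sketch, assuming a plain numeric identifier ring; the real system uses Pastry-style 128-bit identifiers and prefix routing, and the helper `find_root` and the 0–99 id space are purely illustrative.

```python
# A minimal sketch (not from the talk) of how a DHT maps a key onto the
# logical address space: the node whose identifier is closest to the key
# on the circular id space becomes the "root" of that key.

def find_root(key: int, node_ids: list[int], id_space: int = 100) -> int:
    """Return the node id closest to `key` on a ring of size `id_space`."""
    def ring_distance(a: int, b: int) -> int:
        d = abs(a - b)
        return min(d, id_space - d)  # distances wrap around the ring
    return min(node_ids, key=lambda n: ring_distance(n, key))

# Node identifiers taken from the ring figure above.
nodes = [18, 24, 32, 40, 52, 66, 75, 83, 91]
print(find_root(37, nodes))  # -> 40, the closest identifier on the ring
```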

(6)

Insertion of blocks in DHT

[Figure: Pastry-style identifier ring with nodes 04F2, 3A79, 5230, 834B, 8909, 8954, 8957, 895D, 8BB2, AC78, C52A, E25A; put(8959, block) is routed to the root of key 8959, which stores the block, and replicas are placed on neighboring nodes such as 895D]

(7)

Insertion of blocks in DHT

[Figure: the same identifier ring; get(8959, block) is routed across the address space to the root of key 8959, which holds the block; its replicas (e.g. on 895D) remain available]

(8)

Leafset-based replication

[Figure: put(k, data block) reaches the root node of key k; the copies forming the replica set are placed on nodes of the root's leaf set]
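For contrast with the enhanced protocol presented later, here is a minimal sketch of the contiguous placement used by standard leafset-based replication (Past-style); the node identifiers and the helper `contiguous_replica_set` are illustrative assumptions, not the authors' code, and ring wrap-around is ignored for brevity.

```python
# Contiguous leafset-based replication (sketch): the block is stored on
# the root and on its r-1 closest leaf-set neighbors, so the replica set
# is a contiguous segment of the identifier space centered on the root.

def contiguous_replica_set(root: int, leaf_set: list[int], r: int) -> list[int]:
    """Pick the r nodes closest to the root (root included)."""
    ordered = sorted([root] + leaf_set, key=lambda n: abs(n - root))
    return ordered[:r]

leaf_set = [15, 40, 50, 70, 85, 100, 110, 120]    # hypothetical leaf set of root 60
print(contiguous_replica_set(60, leaf_set, r=3))  # -> [60, 50, 70]
```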

(9)

Impact of Churn

Churn = « the continuous process of node arrival and departure »

• Join

• Leave

• Crash

(10)

Metrics of Churn

Authors  Systems            Observed session time
SGG02    Gnutella, Napster  50% < 60 minutes
CLL02    Gnutella, Napster  31% < 10 minutes
SW02     FastTrack          50% < 1 minute
BSV03    Overnet            50% < 60 minutes
GDS03    Kazaa              50% < 2.4 minutes

[Rhea 04] Handling Churn in a DHT, USENIX Annual Technical Conference, 2004

(11)

Churn in DHTs

The new replica set for key k must fetch a copy from another replica (a low-bandwidth transfer).

(12)

Churn in DHTs

Data may be lost under high churn.

(13)

Churn in DHTs

When a new node joins, a replica must be moved to the new replica set for key k.

(14)

Churn in DHTs

Data may be unavailable under high churn while the new replica set for key k is formed.

(15)

A churn-resistant protocol

• Main idea: relax the DHT constraints

– Allow a non-contiguous replica set within the leaf set

– Avoid useless data movement

(16)

An enhanced replication protocol

• The root only stores meta-information about the blocks it is responsible for (their replica sets)

• Insertion of a new block: the root randomly chooses r copy holders in the middle third of its leaf set (a sketch follows below)

Leaf set: 15 40 50 60 70 85 100 110 120 (r = 3)

Block ID  Replica set
B1        50, 100, 85
B2        60, 50, 70

[Figure: the copies of B1 and B2 are scattered over the middle third (1/3) of the leaf set]
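As announced above, a minimal sketch of the insertion step, under one possible reading of "r copies in the middle third of the leaf set"; the data structures are assumptions, not the authors' implementation. With this small 9-node example the middle third contains only three candidates, while with the simulated leaf-set size of 24 it would contain eight.

```python
import random

# Enhanced insertion (sketch): the root keeps only a meta-information
# table and randomly picks r copy holders inside the middle third of its
# sorted leaf set.

def choose_replica_set(leaf_set: list[int], r: int) -> list[int]:
    """Randomly choose r nodes from the middle third of the sorted leaf set."""
    third = len(leaf_set) // 3
    middle = sorted(leaf_set)[third:len(leaf_set) - third]  # central 1/3
    return random.sample(middle, min(r, len(middle)))

leaf_set = [15, 40, 50, 60, 70, 85, 100, 110, 120]  # leaf set from the slide
root_meta = {                                        # meta-information held by the root
    "B1": choose_replica_set(leaf_set, r=3),
    "B2": choose_replica_set(leaf_set, r=3),
}
print(root_meta)  # e.g. {'B1': [70, 60, 85], 'B2': [85, 60, 70]}
```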

(17)

Maintenance protocol

• For each data block for which peer p is the root (sketched below):

– Check that the r copies of the replica set are still in the leaf set

– If some copies are missing, randomly choose new ones in the middle third (1/3) of the leaf set

• For each data block for which peer p hosts a copy:

– Check that the known root is still the current root; if not, notify the new root

– The new root updates its state
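A minimal sketch of the root-side maintenance step, with assumed data structures (a dictionary from block id to copy holders); the copy-holder side of the protocol is omitted. This is an illustration, not the authors' code.

```python
import random

# Root-side maintenance (sketch): for every block it is responsible for,
# the root checks that the copy holders are still in its leaf set and
# replaces missing ones with nodes drawn from the middle third.

def maintain(root_meta: dict, leaf_set: list, r: int) -> None:
    third = len(leaf_set) // 3
    middle = sorted(leaf_set)[third:len(leaf_set) - third]
    for block_id, holders in root_meta.items():
        alive = [n for n in holders if n in leaf_set]   # copies still present
        missing = r - len(alive)
        if missing > 0:                                 # re-create lost copies only
            candidates = [n for n in middle if n not in alive]
            alive += random.sample(candidates, min(missing, len(candidates)))
        root_meta[block_id] = alive

leaf_set = [15, 40, 50, 60, 65, 70, 100, 110, 120]      # node 85 has left, 65 has joined
root_meta = {"B1": [50, 100, 85], "B2": [60, 50, 70]}
maintain(root_meta, leaf_set, r=3)
print(root_meta)  # B1 gets one fresh copy (e.g. on 65); B2 is left untouched
```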

(18)

Replication protocol with churn

• Arrival of new node 65

No data is moved.

[Figure: node 65 joins the leaf set 15 40 50 60 70 85 100 110 120; the copies of B1 and B2 stay where they are]

(19)

Replication protocol with churn

• Departure of node 85

The replication degree is maintained with no useless data movement (only one copy of B1 is transferred).

[Figure: node 85 leaves the leaf set; node 65 receives a new copy of B1, while the copies of B2 are untouched]

(20)

Comparison with the Past replication protocol

[Figure: the same scenario under Past's contiguous placement, where the copies of B1 and B2 must follow the leaf-set changes]

4 blocks moved (Past) vs. 1 block (Enhanced Past)

• Arrival of 65: B1 and B2 move

• Departure of 85: B1 and B2 move

(21)

Performance evaluation

• PeerSim simulation

• Fine-grained simulation

• Implementation of Pastry KBR / Past DHT / Enhanced Past (Pasta)

• Comparison of 2 strategies

– Contiguous placement of data blocks = Past

– « Free » random placement (copies may change position) = Pasta

• Preliminary results

(22)

Simulation parameters

• Network:

– 100-peer network (N)

– ADSL: 1 Mbit/s upload and 10 Mbit/s download

– Latency between 80 and 120 ms

• DHT:

– leaf-set size = 24

– inter-maintenance time of 10 minutes at the DHT level (replica set)

– inter-maintenance time of 1 minute at the KBR level (leaf set)

– 10,000 data-blocks (files) of 10,000 KB

– Replication degree r = 3

(23)

Performance evaluation: churn pattern

• Periodically inject joins and leaves (a churn-injection sketch follows at the end of this slide)

• 2 patterns :

– One hour churn

– Continuous churn (snapshot of the system after 5 hours of churn)

• 3 metrics :

– Number of data-blocks exchanged (bandwidth of the maintenance protocol)

– Number of data-blocks lost

– Stabilization time
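As referenced above, a minimal sketch of how such a churn pattern can be injected into a simulated peer set; the helper `inject_churn` and the one-minute churn period are illustrative assumptions, not the PeerSim scenario actually used.

```python
import random

# Churn injection (sketch): at each churn step one random peer leaves
# (leave or crash) and a fresh peer joins, for the requested duration.

def inject_churn(peers: set, duration_s: int, period_s: int, next_id: int) -> set:
    """Periodically replace a random peer by a newly joined one."""
    for _ in range(duration_s // period_s):
        peers.remove(random.choice(sorted(peers)))  # departure
        peers.add(next_id)                          # arrival of a new peer
        next_id += 1
    return peers

peers = set(range(100))  # 100-peer network (N)
peers = inject_churn(peers, duration_s=3600, period_s=60, next_id=100)  # one hour of churn
```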

(24)

One hour of churn: number of data-blocks exchanged

• The number of exchanged blocks is 2 times lower than with the standard solution

(25)

One hour of churn: number of blocks lost

The number of lost blocks is divided by 3

(26)

One hour of churn: recovery times

Recovery time is divided by 5.5

(27)

Continuous churn: blocks lost

The number of lost blocks is divided by 4.5

(28)

Recovery of a single failure

• Past: 4606 seconds to recover from a single failure

• Pasta: 1889 seconds to recover from a single failure

(29)

Maintenance protocol cost

(30)

Conclusions

• Current P2P DHT replication strategies do not support high churn levels

• Observations:

Many transfers exist only to maintain the location constraints

The network is quickly saturated (depends on the network, the amount of stored data and the churn rate)

• Our proposition relaxes location constraints

• Main benefits

Node contents are de-correlated => data transfers are more parallel

A single failure recovery is much faster

Fewer (almost no) useless data transfers

Less network congestion

Better churn support: fewer lost data-blocks

(31)

Future work

• Placement strategy inside the leafset

– Monitor node stability (modification of the leaf-set maintenance protocol) and place copies on the most stable nodes

• Study the impact of leafset size

– Large leaf set (100 nodes)

• Real experimentation

– Modification of FreePastry

– Deployment on a distributed architecture
