Fault tolerance for a data flow model


(1)

Fault tolerance for a data flow model

Improve classical fault tolerance protocols using the application knowledge given by its data flow representation

Xavier Besseron xavier.besseron@imag.fr PhD supervisor: Thierry Gautier

Laboratory of Informatics of Grenoble (LIG) – INRIA MOAIS Project

March 2010

(2)


Outline

1 Context

2 Kaapi’s data flow model

3 Coordinated Checkpoint

4 Global rollback

5 Partial rollback

6 Perspectives

(3)

Outline

1 Context

2 Kaapi’s data flow model

3 Coordinated Checkpoint

4 Global rollback

5 Partial rollback

6 Perspectives

(4)


Grid computing

What are grids?

Clusters are computers connected by a LAN
Grids are clusters connected by a WAN
Heterogeneous (processors, networks, ...)
Dynamic (failures, reservations, ...)

Aladdin – Grid’5000

French experimental grid platform
More than 4800 cores
9 sites in France
1 site in Brazil
1 site in Luxembourg

(5)

Fault tolerance

[Figure: fault probability vs. number of processors (0 to 5000), for 1-day, 5-day and 10-day execution times]

Why fault tolerance?

Fault probability is high on a grid

Split a large computation into shorter, separate computations
Capture the application state and reconfigure it dynamically

(6)


Outline

1 Context

2 Kaapi’s data flow model

3 Coordinated Checkpoint

4 Global rollback

5 Partial rollback

6 Perspectives

(7)

Kaapi’s data flow model

Data flow model

Shared data = object in a global memory
Task = function call, accessing shared data
Access mode = constraint on shared data access (read, write, ...)

Shared<Matrix> A;      // shared data in the global memory
Shared<double> B;
Fork<Task>()(A, B);    // spawn a task that accesses shared data A and B
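To make the access-mode idea concrete, here is a self-contained toy illustration in plain C++ (not the Kaapi API: the Shared/SharedR/SharedW wrappers, the immediate execution and the version counter are simplifications invented for this example). A task declares how it accesses each shared datum, and each write produces a new data version, which is exactly the information the data flow graph records.

// Toy illustration of access modes and data versions (not the Kaapi API).
#include <cstdio>

template <class T> struct Shared  { T value{}; int version = 0; };  // shared data
template <class T> struct SharedR { const Shared<T>* s; };          // read access mode
template <class T> struct SharedW { Shared<T>* s; };                // write access mode

// A task is a function object whose parameters carry the access modes.
struct Update {
    void operator()(SharedR<double> in, SharedW<double> out) const {
        out.s->value = 0.5 * in.s->value;    // toy computation
        out.s->version = in.s->version + 1;  // the write produces a new data version
    }
};

int main() {
    Shared<double> a{1.0, 0}, b;
    // In Kaapi, Fork<Update>()(a, b) would only record the task in the data
    // flow graph; here we execute it immediately for illustration.
    Update{}(SharedR<double>{&a}, SharedW<double>{&b});
    std::printf("b = %g (version %d)\n", b.value, b.version);
}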

Application example: Jacobi3D
Solve a Poisson problem
Domain decomposition parallelization
Jacobi iterative method

(8)


Jacobi3D: Domain decomposition

Example with a 2D domain

[Figure: 2D computation domain made of domain elements and split into subdomains; one subdomain and its neighbor subdomains are highlighted]

Subdomain ←→ Shared data in the data flow graph

(9)

Jacobi3D: Domain decomposition & iterations

[Figure: data flow graph for two Jacobi iterations over subdomains dom[0]..dom[5]; each Update task reads the previous versions of a subdomain and of its neighbors (dom[i].k) and writes the next version (dom[i].k+1), producing data versions 0, 1 and 2 of the domain]

Tasks are deterministic, i.e. same input ⇒ same output
The execution order respects the data flow constraints

(10)


Jacobi3D: Real data flow graph

Data flow graph generated by Kaapi for processor N

User tasks

(11)

Jacobi3D: Real data flow graph

Data flow graph generated by Kaapi for processor N

User data: SubDomain1 and SubDomain2

(12)


Jacobi3D: Real data flow graph

Data flow graph generated by Kaapi for processor N

User data: borders between SubDomain1 and SubDomain2, borders with Proc N−1, and borders with Proc N+1

(13)

Jacobi3D: Real data flow graph

Data flow graph generated by Kaapi for processor N

User data: Errors

(14)


Jacobi3D: Real data flow graph

Data flow graph generated by Kaapi for processor N

Broadcast tasks

Receive tasks

Communication tasks generated by Kaapi

(15)

Outline

1 Context

2 Kaapi’s data flow model

3 Coordinated Checkpoint

4 Global rollback

5 Partial rollback

6 Perspectives

(16)


Coordinated checkpoint

Principle

Take a consistent snapshot of the application:

Coordinate all the processes to ensure a consistent global state
Save the process snapshots on stable storage
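For reference, a minimal sketch of the classical blocking protocol this work builds on (generic steps made compilable with stub functions; not Kaapi's implementation):

// Generic blocking coordinated checkpoint (sketch; the runtime calls are stubs).
void block_new_messages() {}            // stop emitting application messages
void flush_incoming_channels() {}       // drain the messages already in flight
void global_barrier() {}                // coordinate all processes
void save_state_to_stable_storage() {}  // write the local snapshot
void resume_execution() {}

void coordinated_checkpoint() {
    block_new_messages();            // 1. no message may cross the snapshot line
    flush_incoming_channels();       // 2. receive everything that was already sent
    global_barrier();                // 3. every process reaches the same point
    save_state_to_stable_storage();  // 4. the saved states form a consistent global state
    global_barrier();                // 5. wait until every snapshot is safely stored
    resume_execution();              // 6. continue the computation
}

int main() { coordinated_checkpoint(); }

The two issues below correspond roughly to the coordination (steps 1-3) and to the snapshot transfer (step 4).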

Issues

Coordination cost at large scale

Data transfer time for large application states

References

Coordinated checkpoint/rollback protocols: blocking [Tamir84], non-blocking [Chandy85]

Implementations: CoCheck [Stellner96], MPICH-V [Coti06], Charm++ [Zheng04], OpenMPI [Hursey07], ...

(17)

Improving coordination step

Classical coordination step

Save a consistent global snapshot:

requires sending a message on every communication channel

Without knowledge of the communication pattern, this coordination may require message exchanges from all processes to all processes.

⇒ The number of exchanged messages is O(N²) (N = number of processes).

Coordinated Checkpointing in Kaapi

Equivalent to a blocking coordinated checkpoint, but:

Checkpointing a process = saving its data flow graph and the input data
Based on the dynamic reconfiguration mechanism of Kaapi (see next slide)
Reduces the number of messages exchanged during coordination

⇒ The number of exchanged messages is O(kN) (with k ≪ N).

(18)


Dynamic reconfiguration mechanism in Kaapi

Allows a distributed set of objects to be reconfigured safely by ensuring a mutually consistent view of the objects

Find the neighbor processes

The data flow graph gives the future communications
Neighbor processes are the processes that can emit a message to the considered process
Identify the tasks that generate communications
Only flush the channels with the neighbor processes (see the sketch below)

Properties

Ensures consistency and accessibility of the application
k is the average number of neighbor processes
application and scheduling dependent
for the N-Queens application with work-stealing scheduling: k < 2
for the Jacobi3D application with graph partitioning: k ≈ 7
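A minimal sketch of the neighbor detection (hypothetical data structures, not the Kaapi implementation): since the data flow graph already contains the communication tasks, the neighbor set of a process is just the set of senders of its receive tasks.

// Derive the neighbor set of a process from the communication tasks of its
// local data flow graph (illustrative sketch).
#include <cstdio>
#include <set>
#include <vector>

struct CommTask { int from; int to; };  // one broadcast/receive pair in the graph

std::set<int> neighbors(const std::vector<CommTask>& comm_tasks, int me) {
    std::set<int> result;
    for (const CommTask& t : comm_tasks)
        if (t.to == me && t.from != me)
            result.insert(t.from);  // `from` may emit a message to `me`
    return result;
}

int main() {
    // Process 2 of a Jacobi-like ring only exchanges with processes 1 and 3.
    std::vector<CommTask> graph = {{1, 2}, {3, 2}, {2, 1}, {2, 3}};
    for (int p : neighbors(graph, 2))
        std::printf("flush the channel from process %d\n", p);
    // Only these k channels are flushed, hence O(kN) messages overall.
}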

(19)

Mutual consistency protocol in Kaapi

[Figure: mutual consistency protocol timeline for a master process and processes 1-3. The master broadcasts a reconfiguration request; each process i, between acquire_mutual_consistency() and release_mutual_consistency(), flushes the channels from its neighbor set E (here E = {2} for processes 1 and 3, E = {1, 3} for process 2) with PING/PONG/CONT messages, reaching a local and then a global reconfiguration point before the reconfiguration results are returned. The measured coordination time (next slide) covers this exchange.]

(20)


Experimental results: Coordination time

N-Queens application using work-stealing scheduler
No checkpoint, only coordination

[Figure: coordination time (seconds) vs. number of nodes (up to ~1200), comparing optimized coordination and full coordination, on the Orsay, Toulouse, Lille, Bordeaux, Nancy, Lyon, Sophia and Rennes sites]

But for a large application state, the coordination time is small compared to the data transfer time.


(22)


Outline

1 Context

2 Kaapi’s data flow model

3 Coordinated Checkpoint

4 Global rollback

5 Partial rollback

6 Perspectives

(23)

Global rollback

Principle

Checkpointed states are consistent global states
All processes roll back to the last checkpointed state

Good performance after global rollback requires either:

Spare nodes to replace the failed ones
reserve spare nodes that could otherwise be used for another computation
or wait for other nodes to become available, or for the failed nodes to be repaired
or Load balancing algorithms
using over-decomposition, i.e. placing many subdomains per processor (a toy redistribution sketch follows below)

Question: what is the influence of over-decomposition on the execution time, after a failure of f nodes without spare nodes?
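As a toy illustration of the load-balancing option (block mapping invented for the example; Kaapi actually schedules the subdomains with graph partitioning), over-decomposition lets the d subdomains simply be spread over the n − f surviving processors:

// Block redistribution of d subdomains over the surviving processors
// (illustrative sketch, not Kaapi's scheduler).
#include <cstdio>
#include <vector>

std::vector<int> block_mapping(int d, int procs) {
    std::vector<int> owner(d);
    for (int i = 0; i < d; ++i)
        owner[i] = static_cast<int>(static_cast<long long>(i) * procs / d);  // contiguous blocks
    return owner;
}

int main() {
    const int d = 1000, n = 100, f = 10;             // subdomains, nodes, failures
    std::vector<int> owner = block_mapping(d, n - f);
    // With d >> n, each of the n - f survivors just receives a few extra
    // subdomains instead of one node inheriting a whole failed domain.
    std::printf("last subdomain goes to node %d; about %d subdomains per node\n",
                owner.back(), d / (n - f));
}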

(24)


XPs: over-decomposition influence

Experiment 1: influence on the execution time

Execution time as a function of the decomposition d, i.e. the number of subdomains

3D domain, constant size per node: 10^7 double-precision reals
On 1 node: 10^7 reals, i.e. ≈ 76 MB
On 100 nodes: 100 × 10^7 reals, i.e. ≈ 7.6 GB
Nancy cluster of Grid'5000

Experiment 2: influence on the execution time after global recovery

Execution time as a function of the decomposition d and of the number of failed nodes f

3D domain: 100 × 10^7 double-precision reals (≈ 7.6 GB)
Using 100 nodes of the Nancy cluster of Grid'5000
Execution on 100 − f nodes

(25)

XPs: over-decomposition influence

Experiment 1: execution time

[Figure: one-iteration execution time (seconds) vs. number of subdomains per node (up to 100), on 1 node and on 100 nodes]

(26)


XPs: over-decomposition influence

Experiment 2: execution time after global recovery

[Figure: one-iteration execution time (seconds) before failure and after 1, 10, 20 and 50 node failures, for decompositions d of 100, 200, 500, 1000, 2000, 5000 and 10000 subdomains]

(27)

Outline

1 Context

2 Kaapi’s data flow model

3 Coordinated Checkpoint

4 Global rollback

5 Partial rollback

6 Perspectives

(28)


Partial rollback

Principle

Restart failed processes from last checkpoint
Replay communications to the restarted processes

no message logging
re-execute tasks that produced the communications

Two aspects

Find the set of tasks required for restarting: this represents the lost work (see the sketch below)

Schedule the lost work

in order to reduce the overhead induced by the failure
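The first aspect can be sketched as a backward traversal of the data flow graph (illustrative code with hypothetical structures, not Kaapi's implementation): start from the data versions lost with the failed process, collect the tasks that produced them, and stop at versions still present in the memory of non-failed processes (the "in-memory" optimization shown a few slides below).

// Compute the set of tasks to re-execute by walking the graph backwards.
#include <algorithm>
#include <cstdio>
#include <set>
#include <stack>
#include <vector>

struct Task {
    std::vector<int> reads;   // data versions read by the task
    std::vector<int> writes;  // data versions written by the task
};

std::set<int> tasks_to_reexecute(const std::vector<Task>& graph,
                                 const std::set<int>& lost,
                                 const std::set<int>& in_memory) {
    // producer[v] = index of the task that writes data version v (or -1)
    int max_v = 0;
    for (const Task& t : graph)
        for (int v : t.writes) max_v = std::max(max_v, v);
    std::vector<int> producer(max_v + 1, -1);
    for (int i = 0; i < static_cast<int>(graph.size()); ++i)
        for (int v : graph[i].writes) producer[v] = i;

    std::set<int> result;
    std::stack<int> pending;
    for (int v : lost) pending.push(v);
    while (!pending.empty()) {
        int v = pending.top(); pending.pop();
        if (in_memory.count(v)) continue;              // still available: no replay needed
        int t = (v <= max_v) ? producer[v] : -1;
        if (t < 0 || !result.insert(t).second) continue;
        for (int r : graph[t].reads) pending.push(r);  // its inputs may also need re-production
    }
    return result;  // indices of the tasks to re-execute
}

int main() {
    // Toy graph: task 0 writes version 1; task 1 reads version 1 and writes version 2.
    std::vector<Task> g = {{{}, {1}}, {{1}, {2}}};
    for (int t : tasks_to_reexecute(g, /*lost=*/{2}, /*in_memory=*/{1}))
        std::printf("re-execute task %d\n", t);  // only task 1: version 1 is still in memory
}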

(29)

Partial rollback principle: Execution

[Figure: data flow graph spread over three non-failed processes, showing executed and non-executed tasks, send and receive tasks, data versions 1-6, and communication and dependency edges]

(30)


Partial rollback principle: Failure

[Figure: the same data flow graph after one process fails; the tasks already executed by the failed process become tasks to re-execute]

(31)

Partial rollback principle: Lost communications

[Figure: C_lost, the communications whose effects were lost with the failed process]

(32)


Partial rollback principle: Communications to replay

[Figure: C_all, the communications that must be replayed to the restarted process]

(33)

Partial rollback principle: Tasks to re-execute

[Figure: G_failed, the subgraph of the failed process, and G_to-re-execute, the tasks that must be re-executed on the non-failed processes to replay those communications]

(34)


Partial rollback principle: In-memory data

[Figure: data versions still in memory on the non-failed processes stop the backward search, reducing G_to-re-execute to an optimized set]

(35)

Global vs. partial rollback: re-execution of the lost work

Global rollback

[Figure: global rollback timeline on processors P0-P4, with the checkpoint, the failure, the lost time T_lost, the lost work W_lost^global over the whole domain and its re-execution time T_re-execution^global]

Partial rollback

[Figure: partial rollback timeline on processors P0-P4, with the checkpoint, the failure, the lost time T_lost, the lost work W_lost^partial and its re-execution time T_re-execution^partial]

Thanks to over-decomposition, the lost work can be parallelized !

(36)


Partial rollback: Proportion of tasks to re-execute

Jacobi3D executed on 100 nodes

40 × 40 × 1 subdomains, i.e. 16 subdomains per node
Failure of 1 fixed node

[Figure: proportion of tasks to re-execute (%) vs. time between the last checkpoint and the failure (up to 300 iterations); experimental measurements and simulation results]

(37)

Partial rollback: Time to re-execute the lost work

Experimental conditions

100 computation nodes, 10 checkpoint servers (Bordeaux cluster)
Domain size = 76 MB, split into 1000 subdomains
Failure of 1 fixed node
Considering 2 computation grains:
2 ms for a subdomain update
50 ms for a subdomain update

Measured value

Time to re-execute the lost work:

Data redistribution + Computation

(38)


Partial rollback: Time to re-execute the lost work

Time of a subdomain update ≈ 2 ms

[Figure: time to re-execute the lost work (seconds) when the failure occurs 10, 100 or 200 iterations after the last checkpoint, comparing global and partial rollback (bar values: 3.9, 5.1, 5.7, 29.6, 7.1 and 75.1 s); global rollback re-executes 100% of the tasks, partial rollback 6%, 51% and 75% respectively]

⇒ Scheduling should take the previous data placement into consideration

(39)

Partial rollback: Time to re-execute the lost work

Time of a subdomain update ≈ 50 ms

[Figure: time to re-execute the lost work (seconds) when the failure occurs 10, 100 or 200 iterations after the last checkpoint, comparing global and partial rollback (bar values: 9.7, 4.6, 59.9, 32.2, 116.5 and 88.8 s); global rollback re-executes 100% of the tasks, partial rollback 6%, 51% and 75% respectively]

⇒ Scheduling should take the previous data placement into consideration


(41)

Outline

1 Context

2 Kaapi’s data flow model

3 Coordinated Checkpoint

4 Global rollback

5 Partial rollback

6 Perspectives

(42)


Perspectives

Scheduling algorithms for partial recovery

Need to take the data placement and the communication cost into consideration
Minimizing the makespan with communications ⇒ NP-hard

find and try some heuristics

RDMA support in Kaapi

Currently communications in Kaapi are based on active messages

⇒ Data copy on reception

Optimization: use RDMA (Remote Direct Memory Access) for data transfers

Reducing the data transfer cost during the checkpoint and recovery steps

Incremental checkpoint for Kaapi (based on the DFG)
Placing the checkpoint servers near the computation nodes, which requires taking the network topology into consideration

(43)

Other contributions

Dynamic reconfiguration

Allows dynamic changes to the application while ensuring:

Concurrency management
concurrent & cooperative execution ⇒ X-Kaapi
Mutual consistency
consistent view of a distributed set of objects

Software development (mostly Kaapi)
Kaapi (≈ 100 000 lines of code)

Authors: T. Gautier, V. Danjean, S. Jafar [TIC], D. Traoré [KaSTL], L. Pigeon, X. Besseron

My developments & contributions:

Graph partitioning scheduling (≈ 10 000 lines of code)
Fault tolerance support (≈ 10 000 lines of code)

Large scale deployments & multi-grids computations (using TakTuk)

(44)


Thanks for your attention

Questions?

(45)

Outline

7 Coordination time

8 Placement of checkpoint servers

9 Over-decomposition

(46)


Experimental results: Coordination time

N-Queens application using work-stealing scheduler
No checkpoint, only coordination

[Figure: coordination time (seconds) vs. number of nodes (up to ~1200), comparing optimized coordination and full coordination, on the Orsay, Toulouse, Lille, Sophia, Lyon, Rennes and Bordeaux sites]

(47)

Experimental results: Coordination time

N-Queens application using work-stealing scheduler
No checkpoint, only coordination

[Figure: coordination time (seconds) vs. number of nodes (up to ~1200), comparing optimized coordination and full coordination, on the Nancy, Sophia, Lille, Bordeaux, Rennes, Toulouse and Lyon sites]

(48)


Outline

7 Coordination time

8 Placement of checkpoint servers

9 Over-decomposition

(49)

Placement of checkpoint servers

Idea

Reduce the checkpointing time by placing the checkpoint servers near the computation processes

Practically, checkpoint servers can be:

a dedicated node of the cluster

another computation process (buddy-processor of Charm++)

Experimental study

180 nodes of the Orsay cluster of Grid'5000
120 nodes for computation
12, 24 or 60 nodes for checkpoint servers
Application state ≈ 20 GB, i.e. ≈ 169 MB per node
Testing 3 placement methods: ordered, by-switch and random (a toy by-switch placement is sketched below)
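A minimal sketch of the by-switch placement (hypothetical topology description, not a Grid'5000 tool): each computation node sends its checkpoints to a server that sits behind the same first-level switch, so the data never has to cross the second-level switch.

// "By switch" placement of checkpoint servers (illustrative sketch).
#include <cstdio>
#include <map>
#include <vector>

struct Node { int id; int switch_id; bool is_server; };

// Returns: computation node id -> checkpoint server id (same switch when possible).
std::map<int, int> place_by_switch(const std::vector<Node>& nodes) {
    std::map<int, int> server_on_switch;  // first checkpoint server found per switch
    for (const Node& n : nodes)
        if (n.is_server && !server_on_switch.count(n.switch_id))
            server_on_switch[n.switch_id] = n.id;

    std::map<int, int> assignment;
    for (const Node& n : nodes)
        if (!n.is_server && server_on_switch.count(n.switch_id))
            assignment[n.id] = server_on_switch[n.switch_id];
    return assignment;
}

int main() {
    // Toy version of the Orsay layout: two switches, one server behind each.
    std::vector<Node> nodes = {{1, 1, false}, {2, 1, false}, {15, 1, true},
                               {16, 2, false}, {17, 2, false}, {30, 2, true}};
    for (const auto& kv : place_by_switch(nodes))
        std::printf("node %d -> checkpoint server %d\n", kv.first, kv.second);
}

The ordered and random placements replace the same-switch lookup with, respectively, the node order and a random choice of server.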

(50)


Network topology of the Orsay cluster

[Figure: network topology of the Orsay cluster; 180 nodes spread over 12 first-level switches (switch 1 to switch 12, 15 nodes each), each node connected to its switch by a 1 Gb/s Ethernet link and each first-level switch connected to the second-level switch (switch 0) by 3 Gb/s Ethernet links]

(51)

Network topology of the Orsay cluster


Checkpoint servers can be placed by following the node order

(52)


Network topology of the Orsay cluster


Checkpoint servers can be placed by switch

(53)

Network topology of the Orsay cluster


Checkpoint servers can be placed randomly

(54)


Placement of checkpoint servers in the Orsay cluster

120 computation nodes

Application state ≈ 20 GB, i.e. ≈ 169 MB per node

[Figure: checkpointing time (seconds) with 12, 24 and 60 checkpoint servers, for the ordered, random and by-switch placements; extracted bar values: 83.07, 19.71 and 15.56 s (12 servers), 41.50, 16.30 and 7.70 s (24 servers), 22.23, 15.05 and 3.10 s (60 servers)]

⇒ Need to take the network topology into consideration

Could be done automatically (using Network Weather Service for example)


(56)


Outline

7 Coordination time

8 Placement of checkpoint servers

9 Over-decomposition

(57)

Over-decomposition on Jacobi3D

Number of nodes: n; number of subdomains: d
Classical decomposition (MPI): d = n
Over-decomposition: d ≫ n

⇒ Over-decomposition makes the decomposition independent of the number of processors

Example: "Over"-decomposition in 6 subdomains Distribution on 2 processors

Update Update Update

dom[0].0 dom[1].0 dom[2].0

dom[0].1 dom[1].1 dom[2].1

dom[3].0 dom[4].0 dom[5].0

Update Update Update

dom[3].1 dom[4].1 dom[5].1

Processor 1 Processor 2

Distribution on 3 processors

Update Update dom[0].0 dom[1].0

dom[0].1 dom[1].1

dom[4].0 dom[5].0

Update Update

dom[4].1 dom[5].1 dom[2].0 dom[3].0

Update Update

dom[2].1 dom[3].1

Processor 1 Processor 2 Processor 3

(58)


Over-decomposition influence: Modeling

Let T_d^n be the execution time of one iteration for a d-subdomain decomposition on n processors.

Execution time: T_d^n = ⌈d/n⌉ × T_1^1
Optimal time: T_n^n, obtained for d = n
Over-decomposition overhead: T_d^n / T_n^n = ⌈d/n⌉ × n/d ≤ 1 + n/d

After the global recovery and load balancing, with f the number of failed nodes, the over-decomposition overhead becomes:

T_d^(n−f) / T_(n−f)^(n−f) = ⌈d/(n−f)⌉ × (n−f)/d ≤ 1 + (n−f)/d
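A quick numeric check of the model above, using illustrative values of n, f and d chosen arbitrarily for the example:

// Over-decomposition overhead after f failures: ⌈d/(n−f)⌉ × (n−f)/d ≤ 1 + (n−f)/d
#include <cstdio>

int main() {
    const int n = 1000, f = 100, d = 10000;                 // processors, failures, subdomains
    const int survivors = n - f;
    const int per_proc = (d + survivors - 1) / survivors;   // ⌈d/(n-f)⌉
    const double overhead = static_cast<double>(per_proc) * survivors / d;
    const double bound = 1.0 + static_cast<double>(survivors) / d;
    std::printf("overhead = %.3f (bound %.3f)\n", overhead, bound);  // 1.080 <= 1.090
}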

(59)

Over-decomposition influence: Modeling

Simulating execution on 1000 − f processors

[Figure: simulated one-iteration execution time relative to the optimal, before failure and after 1, 10, 100 and 500 failures, for decompositions d of 1000, 10^4, 10^5 and 10^6 subdomains]
