
INF344: Données du Web
Distributed Computing with MapReduce and Beyond

11 May 2015


Outline

MapReduce Introduction

The MapReduce Computing Model
MapReduce Optimization

MapReduce in Hadoop

Toward Easier Programming Interfaces: Pig
Limitations of MapReduce

Alternative Distributed Computation Models



Data analysis at a large scale

Very large data collections (TB to PB) stored on distributed filesystems:

Query logs
Search engine indexes
Sensor data

Need efficient ways for analyzing, reformatting, and processing them. In particular, we want:

Parallelization of computation (benefiting from the processing power of all nodes in a cluster)

Resilience to failure


Centralized computing with distributed data storage

Run the program at client node, get data from the distributed system.

(Figure: the program runs on the client node; input and output data flows are exchanged with the disks and memory of the storage nodes.)

Downsides: large data transfers, and no use of the cluster's computing resources.


Pushing the program near the data

(Figure: the program is shipped next to the data; a process on each storage node runs it against the local disk, and a coordinator on the client node collects the results.)

MapReduce: a programming model (inspired by standard functional programming operators) to facilitate the development and execution of distributed tasks.

Published by Google Labs in 2004 at OSDI [DG04]. Widely used since then; an open-source implementation is available in Hadoop.


MapReduce in Brief

The programmer defines the program logic as two functions:

Map transforms the input into key-value pairs to process
Reduce aggregates the list of values for each key

The MapReduce environment takes charge of the distribution aspects. A complex program can be decomposed as a succession of Map and Reduce tasks.

Higher-level languages (Pig, Hive, etc.) help with writing distributed applications


Outline

MapReduce Introduction

The MapReduce Computing Model
MapReduce Optimization

MapReduce in Hadoop

Toward Easier Programming Interfaces: Pig
Limitations of MapReduce

Alternative Distributed Computation Models


Three operations on key-value pairs

1. User-defined: map: (K, V) → list(K', V')

   function map(uri, document)
     foreach distinct term in document
       output(term, count(term, document))

2. Fixed behavior: shuffle: list(K', V') → list(K', list(V')) regroups all intermediate pairs on the key

3. User-defined: reduce: (K', list(V')) → list(K'', V'')

   function reduce(term, counts)
     output(term, sum(counts))


Job workflow in MapReduce

Important: each pair, at each phase, is processed independently from the other pairs.

(Figure: the input is a list of (key, value) pairs (k1, v1), (k2, v2), ..., (kn, vn); the map operator turns each (k, v) into intermediate pairs (k', v'); the intermediate structure groups all values sharing the same key; the reduce operator is then called on each group, e.g., reduce(k'1, <v'1, v'p, v'q, ...>), and produces output values.)

Network and distribution are transparently managed by the MapReduce environment.
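To make this workflow concrete, here is a minimal single-machine sketch (plain Python, no Hadoop; names such as map_fn, shuffle and reduce_fn are illustrative, not part of any API) that mimics the three phases on the term-count example developed next:

from collections import defaultdict

def map_fn(uri, document):
    # Map: emit one (term, 1) pair per term occurrence
    for term in document.split():
        yield (term, 1)

def shuffle(pairs):
    # Shuffle: regroup all intermediate values sharing the same key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reduce_fn(term, counts):
    # Reduce: aggregate the list of values for each key
    yield (term, sum(counts))

documents = {"u1": "the jaguar is a new world mammal of the felidae family",
             "u2": "for jaguar atari was keen to use a 68k family device"}
intermediate = [p for uri, doc in documents.items() for p in map_fn(uri, doc)]
final = [r for k, vs in shuffle(intermediate) for r in reduce_fn(k, vs)]
print(dict(final))  # e.g. {'the': 2, 'jaguar': 2, 'family': 2, ...}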


Example: term count in MapReduce (input)

URL Document

u1 the jaguar is a new world mammal of the felidae family.

u2 for jaguar, atari was keen to use a 68k family device.

u3 mac os x jaguar is available at a price of us $199 for apple’s new “family pack”.

u4 one such ruling family to incorporate the jaguar into their name is jaguar paw.

u5 it is a big cat.


Example: term count in MapReduce

map output = shuffle input (term, count):
(jaguar, 1), (mammal, 1), (family, 1), (jaguar, 1), (available, 1), (jaguar, 1), (family, 1), (family, 1), (jaguar, 2), ...

shuffle output = reduce input (term, counts):
(jaguar, <1, 1, 1, 2>), (mammal, <1>), (family, <1, 1, 1>), (available, <1>), ...

final output (term, count):
(jaguar, 5), (mammal, 1), (family, 3), (available, 1), ...



Example: simplification of the map

function map(uri, document)
  foreach distinct term in document
    output(term, count(term, document))

can actually be further simplified:

function map(uri, document)
  foreach term in document
    output(term, 1)

since all counts are aggregated.

Might be less efficient, though (we may need a combiner, see further).


A MapReduce cluster

Nodes inside a MapReduce cluster are decomposed as follows:

A jobtracker acts as a master node; MapReduce jobs are submitted to it

Several tasktrackers run the computation itself, i.e., map and reduce tasks

A given tasktracker may run several tasks in parallel

Tasktrackers usually also act as data nodes of a distributed filesystem (e.g., GFS, HDFS)

+ a client node where the application is launched.


Processing a MapReduce job

A MapReduce job takes care of the distribution, synchronization, and failure handling. Specifically:

the input is split into M groups; each group is assigned to a mapper (assignment is based on the data locality principle)

each mapper processes a group and stores the intermediate pairs locally

grouped instances are assigned to reducers thanks to a hash function

(shuffle) intermediate pairs are sorted on their key by the reducer; one obtains grouped instances, submitted to the reduce function

Remark: data locality no longer holds for the reduce phase, since it reads from the mappers.


Assignment to reducer and mappers

Each mapper task processes a fixed amount of data (a split), usually set to the distributed filesystem block size (e.g., 64 MB)

The number of mapper nodes is a function of the number of mapper tasks and the number of available nodes in the cluster: each mapper node can process (in parallel and sequentially) several mapper tasks

Assignment to mappers tries to optimize data locality: the mapper node in charge of a split is, if possible, one that stores a replica of this split (or, if not possible, a node of the same rack)

The number of reducer tasks is set by the user

Assignment to reducers is done through a hashing of the key, usually uniformly at random; no data locality is possible
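The hash-based assignment of keys to reducers boils down to a modulo operation; a minimal sketch of such a partitioning function (plain Python, illustrative; Hadoop's default HashPartitioner performs the analogous computation on the key's hashCode):

def partition(key, num_reducers):
    # All pairs sharing a key end up on the same reducer,
    # regardless of which mapper produced them
    return hash(key) % num_reducers

# With 1,000 reducers, every ('jaguar', 1) pair is routed to the same reducer
print(partition("jaguar", 1000))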


(Figure: distributed execution of a MapReduce job.)


Processing the term count example

Let the input consist of documents, say, one million 100-term documents of approximately 1 KB each.

The split operation distributes these documents in groups of 64 MB: each group consists of about 64,000 documents, so M = ⌈1,000,000/64,000⌉ = 16 splits. (A collection of one billion such documents would give M ≈ 16,000 splits.)

If there are 1,000 mapper nodes for the larger collection, each node processes on average 16 splits.

If there are 1,000 reducers, each reducer ri processes all key-value pairs for terms t such that hash(t) = i (1 ≤ i ≤ 1,000)
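A quick sanity check of the split arithmetic (plain Python; the 64,000 documents-per-split figure is the slide's rounding of 64 MB / 1 KB):

import math

docs_per_split = 64_000          # ~64 MB split / ~1 KB per document
print(math.ceil(1_000_000 / docs_per_split))      # 16 splits for one million documents
print(math.ceil(1_000_000_000 / docs_per_split))  # 15,625 (~16,000) splits for one billion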


Processing the term count example (2)

Assume that hash('call') = hash('mine') = hash('blog') = i = 100. We focus on three mappers mp, mq and mr, and on the part of their output assigned to reducer ri:

1. Gpi = <..., ('mine', 1), ..., ('call', 1), ..., ('mine', 1), ..., ('blog', 1), ...>
2. Gqi = <..., ('call', 1), ..., ('blog', 1), ...>
3. Gri = <..., ('blog', 1), ..., ('mine', 1), ..., ('blog', 1), ...>

ri reads Gpi, Gqi and Gri from the three mappers, sorts their unioned content, and groups the pairs with a common key:

..., ('blog', <1, 1, 1, 1>), ..., ('call', <1, 1>), ..., ('mine', <1, 1, 1>), ...

Our reduce function is then applied by ri to each element of this list. The output is ('blog', 4), ('call', 2) and ('mine', 3).


Failure management

In case of failure: because the tasks are distributed over hundreds or thousands of machines, the chances that a problem occurs somewhere are much larger; starting the job from the beginning is not a valid option.

The master periodically checks the availability and reachability of the tasktrackers (heartbeats) and whether map or reduce jobs make any progress:

1. if a reducer fails, its task is reassigned to another tasktracker; this usually requires restarting mapper tasks as well (to produce the intermediate groups)
2. if a mapper fails, its task is reassigned to another tasktracker
3. if the jobtracker fails, the whole job should be re-initiated


Joins in MapReduce

Two datasets, A and B, that we need to join for a MapReduce task

If one of the datasets is small, it can be sent over fully to each tasktracker and exploited inside the map (and possibly reduce) functions

Otherwise, each dataset should be grouped according to the join key, and the result of the join can be computed in the reduce function

Not very convenient to express in MapReduce. Much easier using Pig.
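To illustrate the second case, the usual pattern is a reduce-side join: map tags each record with its dataset of origin and emits it under the join key, and reduce combines the two groups. A minimal single-machine sketch (plain Python, hypothetical toy data, not a Hadoop API):

from collections import defaultdict

# Hypothetical datasets: A = (user_id, name), B = (user_id, purchase)
A = [(1, "alice"), (2, "bob")]
B = [(1, "book"), (1, "pen"), (2, "lamp")]

def map_a(record):
    key, name = record
    yield (key, ("A", name))          # tag the value with its origin

def map_b(record):
    key, purchase = record
    yield (key, ("B", purchase))

def reduce_join(key, tagged_values):
    names = [v for tag, v in tagged_values if tag == "A"]
    purchases = [v for tag, v in tagged_values if tag == "B"]
    for n in names:                    # cross product of the two groups
        for p in purchases:
            yield (key, n, p)

groups = defaultdict(list)             # stands in for the shuffle phase
for rec in A:
    for k, v in map_a(rec):
        groups[k].append(v)
for rec in B:
    for k, v in map_b(rec):
        groups[k].append(v)
print([t for k, vs in groups.items() for t in reduce_join(k, vs)])
# [(1, 'alice', 'book'), (1, 'alice', 'pen'), (2, 'bob', 'lamp')]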


Using MapReduce for solving a problem

Prefer:

Simple map and reduce functions

Mapper tasks processing large data chunks (at least the size of distributed filesystem blocks)

A given application may have:

A chain of map functions (input processing, filtering, extraction...)
A sequence of several map-reduce jobs
No reduce task when everything can be expressed in the map (zero reducers, or the identity reducer function)

Not the right tool for everything (see further)


Outline

MapReduce Introduction

The MapReduce Computing Model
MapReduce Optimization

MapReduce in Hadoop

Toward Easier Programming Interfaces: Pig
Limitations of MapReduce

Alternative Distributed Computation Models


Combiners

A mapper task can produce a large number of pairs with the same key

They need to be sent over the network to the reducer: costly

It is often possible to combine these pairs into a single key-value pair

Example

(jaguar, 1), (jaguar, 1), (jaguar, 1), (jaguar, 2) → (jaguar, 5)

combiner: list(V') → V' is a function executed (possibly several times) to combine the values for a given key, on a mapper node

There is no guarantee that the combiner is called

Easy case: the combiner is the same as the reduce function.

Possible when the aggregate function α computed by reduce is distributive: α(k1, α(k2, k3)) = α(k1, k2, k3)
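As a rough illustration, a combiner simply runs the aggregation locally on one mapper's output before the shuffle; a minimal sketch (plain Python, illustrative only — in the Hadoop driver shown later, this is what setCombinerClass(TermCountReducer.class) achieves):

from collections import defaultdict

def combine(map_output):
    # Locally pre-aggregate the (term, count) pairs produced by one mapper,
    # so that fewer pairs are shipped over the network to the reducers
    partial = defaultdict(int)
    for term, count in map_output:
        partial[term] += count
    return list(partial.items())

map_output = [("jaguar", 1), ("jaguar", 1), ("jaguar", 1), ("jaguar", 2)]
print(combine(map_output))  # [('jaguar', 5)] -- one pair instead of four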


Compression

Data transfers over the network:

From datanodes to mapper nodes (usually reduced using data locality)

From mappers to reducers

From reducers to datanodes to store the final output

Each of these can benefit from data compression

Tradeoff between the volume of data transferred and (de)compression time

Usually, compressing map outputs using a fast compressor increases efficiency


Optimizing the shuffle operation

Sorting of pairs on each reducer, to compute the groups: costly operation

Sorting is much more efficient in memory than on disk

Increasing the amount of memory available for shuffle operations can greatly increase the performance

...at the downside of less memory available for map and reduce tasks (but usually not much is needed)


Speculative execution

The MapReduce jobtracker tries to detect tasks that take longer than usual (e.g., because of hardware problems)

When detected, such a task is speculatively executed on another tasktracker, without killing the existing task

Eventually, when one of the attempts succeeds, the other one is killed


Outline

MapReduce Introduction

The MapReduce Computing Model
MapReduce Optimization

MapReduce in Hadoop

Toward Easier Programming Interfaces: Pig
Limitations of MapReduce

Alternative Distributed Computation Models


Hadoop

Open-source software, Java-based, managed by the Apache Foundation, for large-scale distributed storage and computing

Originally developed for Apache Nutch (open-source Web search engine), a part of Apache Lucene (text indexing platform)

Open-source implementation of GFS and Google's MapReduce

Yahoo!: a main contributor to the development of Hadoop

Hadoop components:

Hadoop filesystem (HDFS)
MapReduce

Pig (data exploration), Hive (data warehousing): higher-level languages for describing MapReduce applications

HBase: column-oriented distributed DBMS

ZooKeeper: coordination service for distributed applications


Hadoop programming interfaces

Different APIs to write Hadoop programs:

A rich Java API (the main way to write Hadoop programs)

A Streaming API that can be used to write map and reduce functions in any programming language (using standard inputs and outputs)

A C++ API (Hadoop Pipes)

With a higher-level language (e.g., Pig, Hive)

Advanced features are only available in the Java API

Two different Java APIs exist depending on the Hadoop version; we present the "old" one


Java map for the term count example

import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class TermCountMapper extends MapReduceBase
    implements Mapper<Text, Text, Text, IntWritable> {
  public void map(
      Text uri, Text document,
      OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    Pattern p = Pattern.compile("[\\p{L}]+");
    Matcher m = p.matcher(document.toString());
    while (m.find()) {
      String term = m.group();
      output.collect(new Text(term), new IntWritable(1));
    }
  }
}


Java reduce for the term count example

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class TermCountReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {
  public void reduce(
      Text term, Iterator<IntWritable> counts,
      OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    int sum = 0;
    while (counts.hasNext()) {
      sum += counts.next().get();
    }
    output.collect(term, new IntWritable(sum));
  }
}


Java driver for the term count example

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class TermCount {
  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(TermCount.class);
    FileInputFormat.addInputPath(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    // In a real application, we would have a custom input
    // format to fetch URI-document pairs
    conf.setInputFormat(KeyValueTextInputFormat.class);
    conf.setMapperClass(TermCountMapper.class);
    conf.setCombinerClass(TermCountReducer.class);
    conf.setReducerClass(TermCountReducer.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    JobClient.runJob(conf);
  }
}


Testing and executing a Hadoop job

Required environment:

JDK on client

JRE on all Hadoop nodes

Hadoop distribution (HDFS + MapReduce) on client and all Hadoop nodes

SSH servers on each tasktracker, SSH client on jobtracker (used to control the execution of tasktrackers)

An IDE (e.g., Eclipse + plugin) on client

Three different execution modes:

local: one mapper, one reducer, run locally from the same JVM as the client
pseudo-distributed: mappers and reducers are launched on a single machine, but communicate over the network
distributed: over a cluster, for real runs


Debugging MapReduce

Easiest: debugging in local mode

Web interface with status information about the job

Standard output and error channels saved on each node, accessible through the Web interface

Counters can be used to track side information across a MapReduce job (e.g., number of invalid input records)

Remote debugging is possible but complicated to set up (impossible to know in advance where a map or reduce task will be executed)

IsolationRunner allows running part of the MapReduce job in isolation


Task JVM reuse

By default, each map and reduce task (of a given split) is run in a separate JVM

When there is a lot of initialization to be done, or when splits are small, it might be useful to reuse JVMs for subsequent tasks

Of course, this only works for tasks run on the same node


Hadoop in the cloud

Possible to set up one's own Hadoop cluster

But often easier to use clusters in the cloud that support MapReduce:

Amazon EC2, Cloudera, etc.

Not always easy to know the cluster’s configuration (in terms of racks, etc.) when on the cloud, which hurts data locality in MapReduce


Outline

MapReduce

Toward Easier Programming Interfaces: Pig
Basics

Pig operators

From Pig to MapReduce
Limitations of MapReduce

Alternative Distributed Computation Models



Pig Latin

Motivation: define high-level languages that use MapReduce as an underlying data processor.

A Pig Latin statement is an operator that takes a relation as input and produces another relation as output.

Pig Latin statements are generally organized in the following manner:

1. A LOAD statement reads data from the file system as a relation (list of tuples).

2. A series of "transformation" statements process the data.

3. A STORE statement writes output to the file system; or, a DUMP statement displays output to the screen.

Statements are executed as a composition of MapReduce jobs.


Using Pig

Part of Hadoop, downloadable from the Hadoop Web site

Interactive interface (Grunt) and batch mode

Two execution modes:

local: data is read from disk, operations are directly executed, no MapReduce
MapReduce: on top of a MapReduce cluster (pipeline of MapReduce jobs)


Term count in Pig

a = load 'hdfs://input.txt' as (line: chararray);
b = foreach a generate flatten(TOKENIZE(line)) as word;
c = group b by word;
d = foreach c generate COUNT(b), group;
dump d;


Example input data

A flat file, tab-separated, extracted from DBLP.

2005 VLDB J. Model-based approximate querying in sensor networks.

1997 VLDB J. Dictionary-Based Order-Preserving String Compression.

2003 SIGMOD Record Time management for new faculty.

2001 VLDB J. E-Services - Guest editorial.

2003 SIGMOD Record Exposing undergraduate students to database system internals.

1998 VLDB J. Integrating Reliable Memory in Databases.

1996 VLDB J. Query Processing and Optimization in Oracle Rdb.
1996 VLDB J. A Complete Temporal Relational Algebra.

1994 SIGMOD Record Data Modelling in the Large.

2002 SIGMOD Record Data Mining: Concepts and Techniques - Book Review.


Computing average number of publications per year

-- Load records from the file
articles = load 'journal.txt'
    as (year: chararray, journal: chararray, title: chararray);
sr_articles = filter articles by journal == 'SIGMOD Record';
year_groups = group sr_articles by year;
avg_nb = foreach year_groups generate group, COUNT(sr_articles.title);
dump avg_nb;


The data model

The model allows nesting of bags and tuples. Example: the year_group temporary bag.

group: 1990
sr_articles:
{
  (1990, SIGMOD Record, SQL For Networks of Relations.),
  (1990, SIGMOD Record, New Hope on Data Models and Types.)
}

Unlimited nesting, but no references, no constraint of any kind (for parallelization purposes).


Flexible representation

Pig allows the representation of heterogeneous data, in the spirit of semi-structured data models (e.g., XML).

The following is a bag with heterogeneous tuples.

{
  (2005, {'SIGMOD Record', 'VLDB J.'}, {'article1', 'article2'})
  (2003, 'SIGMOD Record', {'article1', 'article2'}, {'author1', 'author2'})
}


Outline

MapReduce

Toward Easier Programming Interfaces: Pig
Basics

Pig operators

From Pig to MapReduce
Limitations of MapReduce

Alternative Distributed Computation Models


Main Pig operators

Operator    Description
foreach     Apply one or several expression(s) to each of the input tuples
filter      Filter the input tuples with some criteria
distinct    Remove duplicates from an input
join        Join of two inputs
group       Regrouping of data
cogroup     Associate two related groups from distinct inputs
cross       Cross product of two inputs
order       Order an input
limit       Keep only a fixed number of elements
union       Union of two inputs (note: no need to agree on the same schema, as in SQL)
split       Split a relation based on a condition


Example dataset

A simple flat file with tab-separated fields.

1995 Foundations of Databases Abiteboul
1995 Foundations of Databases Hull
1995 Foundations of Databases Vianu
2010 Web Data Management Abiteboul
2010 Web Data Management Manolescu
2010 Web Data Management Rigaux
2010 Web Data Management Rousset
2010 Web Data Management Senellart

NB: Pig accepts inputs from user-defined functions, written in Java, which allows extracting data from any source.


The group operator

The “program”:

books = load 'webdam-books.txt'
    as (year: int, title: chararray, author: chararray);
group_auth = group books by title;
authors = foreach group_auth generate group, books.author;
dump authors;

and the result:

(Foundations of Databases,{(Abiteboul),(Hull),(Vianu)})
(Web Data Management,{(Abiteboul),(Manolescu),(Rigaux),(Rousset),(Senellart)})


Unnesting with flatten

Flatten serves to unnest a nested field.

-- Take the 'authors' bag and flatten the nested set
flattened = foreach authors generate group, flatten(author);

Applied to the previous authors bag, one obtains:

(Foundations of Databases,Abiteboul)
(Foundations of Databases,Hull)
(Foundations of Databases,Vianu)
(Web Data Management,Abiteboul)
(Web Data Management,Manolescu)
(Web Data Management,Rigaux)
(Web Data Management,Rousset)
(Web Data Management,Senellart)


The cogroup operator

Allows gathering two data sources in nested fields

Example: a file with publishers:

Foundations of Databases Addison-Wesley USA
Foundations of Databases Vuibert France
Web Data Management Cambridge University Press USA

The program:

publishers = load 'webdam-publishers.txt'
    as (title: chararray, publisher: chararray);
cogrouped = cogroup flattened by group, publishers by title;


The result

For each grouped field value, two nested sets, coming from both sources.

(Foundations of Databases,
  {(Foundations of Databases,Abiteboul), (Foundations of Databases,Hull), (Foundations of Databases,Vianu)},
  {(Foundations of Databases,Addison-Wesley), (Foundations of Databases,Vuibert)}
)

A kind of join? Yes, at least a preliminary step.


Joins

Same as before, but produces a flat output (cross product of the inner nested bags). The nested model is usually more elegant and easier to deal with.

-- Take the 'flattened' bag, join with 'publishers'
joined = join flattened by group, publishers by title;

(Foundations of Databases,Abiteboul,Foundations of Databases,Addison-Wesley)
(Foundations of Databases,Abiteboul,Foundations of Databases,Vuibert)
(Foundations of Databases,Hull,Foundations of Databases,Addison-Wesley)
(Foundations of Databases,Hull,
...


Outline

MapReduce

Toward Easier Programming Interfaces: Pig
Basics

Pig operators

From Pig to MapReduce
Limitations of MapReduce

Alternative Distributed Computation Models


Plans

A Pig program describes a logical data flow

This is implemented with a physical plan, in terms of grouping or nesting operations

This is in turn (for MapReduce execution) implemented using a sequence of map and reduce steps


Physical operators

Local Rearrange: group tuples with the same key, on a local machine
Global Rearrange: group tuples with the same key, globally on a cluster
Package: construct a nested tuple from tuples that have been grouped


(Figure: translation of a simple Pig program.)


A more complex join-group program

-- Load books, but keep only books from Victor Vianu
books = load 'webdam-books.txt'
    as (year: int, title: chararray, author: chararray);
vianu = filter books by author == 'Vianu';
publishers = load 'webdam-publishers.txt'
    as (title: chararray, publisher: chararray);
-- Join on the book title
joined = join vianu by title, publishers by title;
-- Now, group on the author name
grouped = group joined by vianu::author;
-- Finally count the publishers
-- (nb: we should remove duplicates!)
count = foreach grouped generate group, COUNT(joined.publisher);


(Figure: translation of a join-group program.)


Outline

MapReduce

Toward Easier Programming Interfaces: Pig
Limitations of MapReduce

Alternative Distributed Computation Models


MapReduce limitations (1/2)

High latency. Launching a MapReduce job has a high overhead, and reduce functions are only called after all map functions succeed; not suitable for applications needing a quick result.

Batch processing only. MapReduce excels at processing a large collection, not at retrieving individual items from a collection.

Write-once, read-many mode. No real possibility of updating a dataset using MapReduce; it should be regenerated from scratch.

No transactions. No concurrency control at all, completely unsuitable for transactional applications [PPR+09].


MapReduce limitations (2/2)

Relatively low-level. Ongoing efforts for more high-level languages: Scope [CJL+08], Pig [ORS+08, GNC+09], Hive [TSJ+09], Cascading (http://www.cascading.org/)

No structure. Implies lack of indexing, difficult to optimize, etc. [DS87]

Hard to tune. Number of reducers? Compression? Memory available at each node? etc.


Hybrid systems

Best of both worlds?

DBMSs are good at transactions, point queries, structured data

MapReduce is good at scalability, batch processing, key-value data

HadoopDB [ABPA+09]: standard relational DBMS at each node of a cluster, MapReduce allows communication between nodes

Possible to use DBMS inputs natively in Hadoop, but no control over data locality


Job Scheduling

Multiple jobs are concurrently submitted to the MapReduce jobtracker

Fair scheduling required:

each submitted job should have some share of the cluster
prioritization of jobs
long-standing jobs should not block quick jobs
fairness with respect to users

Standard Hadoop scheduler: priority queue

Hadoop Fair Scheduler: ensures cluster resources are shared among users. Preemption (= killing running tasks) is possible in case the sharing becomes unbalanced.


What you should remember on distributed computing

MapReduce is a simple model for batch processing of very large collections.

⇒ good for data analytics; not good for point queries (high latency).

The system brings robustness against failure of a component, and transparent distribution and scalability.

⇒ more expressive languages required (Pig)


Resources

Original description of the MapReduce framework [DG04]

Hadoop distribution and documentation available at http://hadoop.apache.org/

Documentation for Pig is available at http://wiki.apache.org/pig/

Excellent textbook on Hadoop [Whi09]

Online MapReduce exercises with validation

http://cloudcomputing.ruc.edu.cn/login/login.jsp


Outline

MapReduce

Toward Easier Programming Interfaces: Pig
Limitations of MapReduce

Alternative Distributed Computation Models
Apache Hive: SQL Analytics on MapReduce
Apache Storm: Real-Time Computation
Apache Spark: More Complex Workflows

Pregel, Apache Giraph, GraphLab: Think as a Vertex



Apache Hive

Data warehousing on top of Hadoop

SQL-like language to express high-level queries, esp., aggregate and analytics

Queries translated into Map–Reduce jobs (or Spark jobs, see further)

Similar to Pig, but declarative vs. imperative, and geared towards analytics vs. transformation of datasets; the dataflow is more obvious in Pig programs


Term count in Hive

CREATE TABLE doc (text STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\n'
  STORED AS TEXTFILE;

LOAD DATA INPATH 'hdfs://input.txt' INTO TABLE doc;

SELECT word, COUNT(*)
FROM doc LATERAL VIEW EXPLODE(SPLIT(text, ' ')) temp AS word
GROUP BY word;


Outline

MapReduce

Toward Easier Programming Interfaces: Pig
Limitations of MapReduce

Alternative Distributed Computation Models
Apache Hive: SQL Analytics on MapReduce
Apache Storm: Real-Time Computation
Apache Spark: More Complex Workflows

Pregel, Apache Giraph, GraphLab: Think as a Vertex


Apache Storm

Real-time analytics, in contrast to the batch model of MapReduce

Achieves very reasonable latency

Processes data in a streaming fashion

Based on the notion of event producer (spout) and manipulation (bolt)

Written in Clojure (Lisp) + Java, jobs usually written in Java


Term count in Storm

public class TCStorm {
  public static void main(String[] args) throws Exception {
    Config config = new Config();
    config.put("inputFile", args[0]);

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("line-reader-spout", new LineReaderSpout());
    builder.setBolt("word-spitter", new WordSplitterBolt())
           .shuffleGrouping("line-reader-spout");
    builder.setBolt("word-counter", new WordCounterBolt())
           .shuffleGrouping("word-spitter");

    LocalCluster cluster = new LocalCluster();
    cluster.submitTopology("HelloStorm", config, builder.createTopology());
  }
}

This is only the driver; the spouts and the bolts also need to be defined!


Outline

MapReduce

Toward Easier Programming Interfaces: Pig
Limitations of MapReduce

Alternative Distributed Computation Models
Apache Hive: SQL Analytics on MapReduce
Apache Storm: Real-Time Computation
Apache Spark: More Complex Workflows

Pregel, Apache Giraph, GraphLab: Think as a Vertex


Apache Spark

Similar positioning to Pig: a high-level language with a dataflow of operators

Many more operators than Pig

Complex programs can be written in Scala, Java, or Python

Contrary to Pig, jobs are not translated to MapReduce, but executed directly in a distributed fashion

Based on RDDs (Resilient Distributed Datasets), which can be HDFS files, HBase tables, or the result of applying a succession of operators on these

Contrary to MapReduce, local workers have the ability to keep data in memory across a succession of tasks

Extensions allow streaming data processing and processing Hive SQL queries

MLLib: machine learning library on top of Spark


Term count in Spark (Python)

file = spark.textFile("hdfs://input.txt")
counts = file.flatMap(lambda line: line.split(" ")) \
             .map(lambda word: (word, 1)) \
             .reduceByKey(lambda a, b: a + b)
counts.saveAsTextFile("hdfs://output.txt")


Outline

MapReduce

Toward Easier Programming Interfaces: Pig
Limitations of MapReduce

Alternative Distributed Computation Models
Apache Hive: SQL Analytics on MapReduce
Apache Storm: Real-Time Computation
Apache Spark: More Complex Workflows

Pregel, Apache Giraph, GraphLab: Think as a Vertex


Graph Computation Frameworks

Parallel computation on graph-like data (Web graph, social networks, transportation networks, etc.)

Pregel: original system by Google

Apache Giraph: open-source clone of Pregel

GraphLab: similar goals, different architecture

One writes vertex programs: each vertex receives messages and sends messages to its neighbours

Pregel and Giraph are based on the bulk synchronous parallel paradigm: a synchronization barrier once vertex programs have been executed on every vertex

GraphLab uses an asynchronous model
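As a rough illustration of the vertex-centric, bulk synchronous model, here is a minimal single-machine sketch of supersteps computing connected components by propagating the minimum vertex id (plain Python with illustrative names; not the Pregel, Giraph or GraphLab API):

# Undirected toy graph: vertex -> list of neighbours
graph = {1: [2], 2: [1, 3], 3: [2], 4: []}
value = {v: v for v in graph}        # each vertex starts with its own id

# Superstep 0: every vertex sends its id to its neighbours
messages = {v: [] for v in graph}
for v in graph:
    for n in graph[v]:
        messages[n].append(value[v])

changed = True
while changed:                        # each iteration is one superstep
    changed = False
    outbox = {v: [] for v in graph}
    for v in graph:                   # the "vertex program", run on every vertex
        if messages[v] and min(messages[v]) < value[v]:
            value[v] = min(messages[v])
            for n in graph[v]:        # propagate the smaller component id
                outbox[n].append(value[v])
            changed = True
    messages = outbox                 # barrier: messages are delivered at the next superstep

print(value)  # {1: 1, 2: 1, 3: 1, 4: 4}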


References I

[ABPA+09] Azza Abouzeid, Kamil Bajda-Pawlikowski, Daniel J. Abadi, Alexander Rasin, and Avi Silberschatz. HadoopDB: An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads. Proceedings of the VLDB Endowment (PVLDB), 2(1):922–933, 2009.

[CJL+08] Ronnie Chaiken, Bob Jenkins, Per-Åke Larson, Bill Ramsey, Darren Shakib, Simon Weaver, and Jingren Zhou. SCOPE: Easy and Efficient Parallel Processing of Massive Data Sets. Proceedings of the VLDB Endowment (PVLDB), 1(2):1265–1276, 2008.

[DG04] Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified Data Processing on Large Clusters. In Intl. Symp. on Operating System Design and Implementation (OSDI), pages 137–150, 2004.


References II

[DS87] David J. DeWitt and Michael Stonebraker. MapReduce: A Major Step Backwards. The Database Column blog, 2008. http://databasecolumn.vertica.com/database-innovation/mapreduce-a-major-step-backwards/.

[GNC+09] Alan Gates, Olga Natkovich, Shubham Chopra, Pradeep Kamath, Shravan Narayanam, Christopher Olston, Benjamin Reed, Santhosh Srinivasan, and Utkarsh Srivastava. Building a High-Level Dataflow System on top of MapReduce: The Pig Experience. Proceedings of the VLDB Endowment (PVLDB), 2(2):1414–1425, 2009.

[ORS+08] Christopher Olston, Benjamin Reed, Utkarsh Srivastava, Ravi Kumar, and Andrew Tomkins. Pig Latin: A Not-So-Foreign Language for Data Processing. In Proc. ACM Intl. Conf. on the Management of Data (SIGMOD), pages 1099–1110, 2008.


References III

[PPR+09] Andrew Pavlo, Erik Paulson, Alexander Rasin, Daniel J. Abadi, David J. DeWitt, Samuel Madden, and Michael Stonebraker. A Comparison of Approaches to Large-Scale Data Analysis. In Proc. ACM Intl. Conf. on the Management of Data (SIGMOD), pages 165–178, 2009.

[TSJ+09] Ashish Thusoo, Joydeep Sen Sarma, Namit Jain, Zheng Shao, Prasad Chakka, Suresh Anthony, Hao Liu, Pete Wyckoff, and Raghotham Murthy. Hive - A Warehousing Solution Over a Map-Reduce Framework. Proceedings of the VLDB Endowment (PVLDB), 2(2):1626–1629, 2009.

[Whi09] Tom White. Hadoop: The Definitive Guide. O'Reilly, Sebastopol, CA, USA, 2009.


Usage rights license

Public context, with modifications

By downloading or consulting this document, the user accepts the usage license attached to it, as detailed in the following provisions, and undertakes to comply with it in full.

The license grants the user a right of use over the document consulted or downloaded, in whole or in part, under the conditions defined below and to the express exclusion of any commercial use.

The right of use defined by the license authorizes use intended for any audience, which includes:
– the right to reproduce all or part of the document on digital or paper media,
– the right to distribute all or part of the document to the public on paper or digital media, including by making it available to the public on a digital network,
– the right to modify the form or presentation of the document,
– the right to integrate all or part of the document into a composite document and to distribute it within this new document, provided that the author is informed.

The mentions relating to the source of the document and/or its author must be kept in their entirety.

The right of use defined by the license is personal and non-exclusive.

Any use other than those provided for by the license is subject to the prior and express authorization of the author: sitepedago@telecom-paristech.fr
