HAL Id: inria-00441366
https://hal.inria.fr/inria-00441366
Submitted on 15 Dec 2009


Power Management in Grid Computing with Xen

Fabien Hermenier, Nicolas Loriant, Jean-Marc Menaud

To cite this version:
Fabien Hermenier, Nicolas Loriant, Jean-Marc Menaud. Power Management in Grid Computing with Xen. 2006 ISPA Workshop on Xen in High-Performance Clusters and Grid Computing Environments (XHPC'06), Dec 2006, Sorrento, Italy. pp. 407-416. DOI: 10.1007/11942634_43. HAL: inria-00441366.

Fabien Hermenier, Nicolas Loriant, Jean-Marc Menaud

OBASCO project
École des Mines de Nantes - INRIA, LINA
4, rue Alfred Kastler
44307 Nantes Cedex 3, France
{fabien.hermenier, nicolas.loriant, jean-marc.menaud}@emn.fr

Abstract. While chip vendors still stick to Moore's law, and the performance per dollar keeps going up, the performance per watt has been stagnant for the last few years. Moreover, energy prices continue to rise worldwide. This poses a major challenge to organisations running grids, as such architectures require large cooling systems. Indeed, the one-year cost of a cooling system and of the power consumption may exceed the grid's initial investment.

We observe, however, that a grid does not constantly run at peak performance. In this paper, we propose a workload concentration strategy to reduce grid power consumption. Using the Xen virtual machine migration technology, our power management policy can transparently and dynamically dispatch any application of the grid. Our policy concentrates the workload so as to shut down unused nodes, with a negligible impact on performance. We show through evaluations that this policy decreases the overall power consumption of the grid significantly.

1 Introduction

The number of applications requiring gigantic storage and computational capabilities has increased the interest in grid computing over the last few years. At the same time, chip vendors have been producing more and more powerful CPUs at lower prices. Past work has mostly focused on designing efficient and scalable grids. Nevertheless, energy consumption has recently become a major concern for organisations running grids. Indeed, power consumption is an important portion of the total cost of ownership: it determines the size of the cooling system and of the electrical backup power generators. The cost of a cooling system together with the power consumption may sometimes exceed the grid's initial investment. Moreover, intensive power consumption increases the chance of component failure. Thus, power consumption must be a key concern in grid design.

In the meantime, and despite numerous users submitting jobs, a grid rarely has to maintain peak performance constantly. For example, the average load of the grid of the École des Mines de Nantes subatomic research lab is about 70%. Moreover, many applications have specific needs, e.g. middleware or operating systems [...] for a generic power management system adapted to grids.

In this paper, we present our prototype for power management in grid architectures. Our prototype is based on the Xen virtual machine hypervisor, so that our solution is transparent to users. Based on workload probes (CPU load, memory, NIC throughput, etc.), the policy migrates virtual machines to concentrate the workload on fewer nodes of the grid, thus allowing unused nodes to be shut down. We present performance results over different application workloads and show the effectiveness of our algorithm.

The rest of the paper is organised as follows. Section 2 presents existing power management policies for cluster computing. Section 3 reviews virtual machine technology and then Section 4 describes our implementation. Section 5 shows performance results and Section 6 concludes.

2 Energy management

Most existing work on power management has focused on CPUs and laptops. Indeed, battery duration is a critical problem for users, and the CPU is by far the most power-consuming component of a computer. Today, most processors feature DVFS (Dynamic Voltage and Frequency Scaling), even server-designed processors. A DVFS processor has a set of levels (voltage and frequency pairs). A default policy integrated in the BIOS can be overridden by the operating system, which selects the level according to performance and temperature constraints.

The rest of this section describes IVS, CVS and VOVO, three power management policies for single systems or clusters, and the problem of computation migration in grid computing.

2.1 IVS - Independent Voltage Scaling

Independent Voltage Scaling is a node-local policy for power management. In this approach, every node must have a DVFS processor. Each node adjusts its processor voltage and frequency to the lowest possible value such that the impact on performance remains small. IVS is very simple and does not require any specific environment or information about running applications. Elnozahy et al. [1] report power savings of up to 30%, depending on the application. In order to minimize the impact on performance, Chen et al. [2] applied IVS only to nodes not on the critical path of running applications.
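
The paper does not tie IVS to a particular mechanism; purely as an illustration, a node-local IVS loop on Linux could be sketched with the cpufreq sysfs interface, assuming the userspace governor is active and that scaling cpu0 is representative of the node:

    import time

    CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq"

    def busy_fraction(interval=1.0):
        # Estimate CPU utilization from two /proc/stat samples.
        def sample():
            v = [int(x) for x in open("/proc/stat").readline().split()[1:]]
            return v[3] + v[4], sum(v)          # idle + iowait, total
        idle0, total0 = sample()
        time.sleep(interval)
        idle1, total1 = sample()
        return 1.0 - (idle1 - idle0) / float(total1 - total0)

    def ivs_step():
        # Pick the lowest frequency that still covers demand, with 10% slack.
        freqs = sorted(int(f) for f in
                       open(CPUFREQ + "/scaling_available_frequencies").read().split())
        cur = int(open(CPUFREQ + "/scaling_cur_freq").read())
        demand = busy_fraction() * cur * 1.1
        target = next((f for f in freqs if f >= demand), freqs[-1])
        open(CPUFREQ + "/scaling_setspeed", "w").write(str(target))

    while True:
        ivs_step()

This is a sketch of the general technique, not of the tools evaluated in [1, 2].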

2.2 CVS - Coordinated Voltage Scaling

In contrast to IVS, Coordinated Voltage Scaling is a global policy. Each node periodically informs a monitor of its workload. The monitor evaluates the overall [...] nodes adapt their voltage and frequency to this average. This strategy tends to decrease the variance of the frequency distribution across the grid.

CVS is effective when the workload is uniformly distributed among nodes. Elnozahy et al. [1] evaluate the power savings of CVS to be about 25%. Nevertheless, the effectiveness of CVS is questionable when the workload distribution is not uniform: nodes with a bigger workload are slowed down, thus decreasing the overall performance.
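
As a sketch of the coordination step (our own illustration; the monitoring protocol is not specified here), the monitor can map the grid-average utilization onto one shared frequency level that every node then applies locally, e.g. with the sysfs write shown above:

    def cvs_target(reported_loads, freqs, headroom=1.1):
        # reported_loads: {node: utilization in [0, 1]}, as reported
        # periodically by each node; freqs: sorted frequency levels (kHz).
        avg = sum(reported_loads.values()) / len(reported_loads)
        demand = avg * freqs[-1] * headroom     # cycles needed, with slack
        return next((f for f in freqs if f >= demand), freqs[-1])

    # Three nodes averaging 50% load fit on the middle level:
    print(cvs_target({"n1": 0.2, "n2": 0.9, "n3": 0.4},
                     [1000000, 1800000, 2400000]))   # -> 1800000

The example also shows the weakness noted above: node n2 at 90% load is forced down to the average-driven level.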

2.3 VOVO - Vary On, Vary Off

Pinheiro et al. [3] and Chase et al. [4] have proposed Vary On / Vary Off, which dynamically adapts the number of available nodes according to the grid workload.

VOVO applies a workload concentration strategy. Similarly to CVS, a monitor estimates the overall workload, from which VOVO deduces the number of nodes required. When too many nodes are in use, jobs are migrated or balanced to concentrate the workload. Unused nodes are then simply shut down. The monitor may define a performance tolerance to possibly overload some nodes and further increase power savings.
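
The deduction of the required node count is simple capacity arithmetic; a minimal sketch (our illustration), where a tolerance above 1 encodes the performance tolerance mentioned above:

    import math

    def nodes_required(loads, capacity=1.0, tolerance=1.0):
        # loads: per-job (or per-node) CPU demands; tolerance > 1 deliberately
        # overloads the remaining nodes to save more power.
        total = sum(loads)
        return max(1, int(math.ceil(total / (capacity * tolerance))))

    # Four nodes at 30% load each fit on two nodes, or on one
    # if we accept 20% overload:
    print(nodes_required([0.3, 0.3, 0.3, 0.3]))                 # -> 2
    print(nodes_required([0.3, 0.3, 0.3, 0.3], tolerance=1.2))  # -> 1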

VOVO has been found to give power savings of 31% to 86% [1]; indeed, VOVO is fully effective when a large cluster is almost unused. However, VOVO requires a job migration technology, which can be implemented at several levels. At the application level, users have to develop their own load balancer. At the operating system level, a specific OS has to be installed on every machine, and these machines must be identical. While this is rarely an issue in clusters, grids are more heterogeneous.

Designing the migration mechanism at the application [5], middleware, or operating system [6] level is hardly feasible, as energy saving is an issue for organizations running grids, not for users developing applications. In our context, we suppose that grids are built by aggregating primary clusters. Each cluster is managed by a different organization with its own specific applications. Thus, various frameworks and middleware are used to develop grid applications. It is unfeasible to impose one of them on the entire grid, as each is specific to the kind of application developed (computational, data, services) [7].

3 Virtual Machines

The goal of a virtual machine is to provide each of multiple users with the illusion of having an entire computer (a virtual domain), all on a single physical computer (the host). This technology, also known as virtualization, differs from application virtual machines such as Sun Microsystems' Java [...] The hypervisor, or virtual machine monitor, must enforce isolation between multiple virtual domains and must partition resources (CPUs, RAM, HDDs, NICs, etc.) among them. The rest of this section describes virtualization and paravirtualization, the two major ways to implement a hypervisor.

Fig. 1. Architecture of a Xen virtualised system.

In contrast to emulation, virtualization reproduces the host architecture identically for the virtual machines. Hypervisors such as VMware [8] run most instructions of the virtual domains directly on the host processor. The hypervisor must trap sensitive instructions that cannot be executed safely, such as TLB operations. While virtualization restricts the virtual architecture to the host one, it still allows running unmodified OS images with much better performance than emulation.

Paravirtualization [9,10] has been introduced to overcome the performance losses of virtualization due to sensitive operations. It explicitly requires porting the OS to a particular hypervisor. The architecture presented to the virtual machines slightly differs from the host architecture, in order to eliminate sensitive operations cooperatively.

Virtualization thus makes it possible to host many virtual domains on the same machine without constraints. Each domain can use its own operating system, middleware or applications without risk of conflict with the other systems. These advantages make virtualization appreciable in grid computing: [...]

Our solution is a dynamic placement system for virtual domains on a grid virtualized with Xen (one hypervisor on each node). The placement algorithm aims to save energy by concentrating virtual domains on the smallest possible subset of nodes. Nodes not hosting virtual domains can be stopped, in order to reduce the overall power consumption. At present, the distribution criterion for virtual domain placement is based on CPU usage, but it could be extended to take other criteria into account, e.g. network traffic or memory usage. Distribution of virtual domains is made with minimal service interruption using the Xen migration mechanism [12]. The problem of data migration is solved by using a file server that exports the operating systems and data of each virtual domain.

Our solution is transparent for users of virtual domains and does not require any adaptation or specific operating system. The architecture is based on a client/server model. A standard pre-made domain runs on each node. That domain, called Domain 0, runs a tool, the harvesting agent, that monitors CPU, memory and network. This supervisor monitors the resource usage of the node, and how these resources are divided among the node's virtual domains.

Each harvesting agent periodically sends a report to the decision agent, which analyses them to get an overview of the grid and of its virtual domains' resource usage. Then an algorithm determines the actions to be taken by the various nodes (virtual domain migration, boot, shutdown).

4.1 Resource collection

The harvesting agent uses functionalities offered by the xenstat library provided with Xen. This library allows Domain 0 to obtain the statistics generated by the hypervisor for each virtual domain, including information concerning allocated memory, bytes sent and/or received by the virtual network interfaces, or the quantity of time used by the virtual CPU. By retrieving this information periodically, it is possible to know the percentage of CPU consumed by each virtual domain and, by summing them, the global CPU usage of the node (in this case, the result obtained does not consider the load of the hypervisor). A report is made and sent to the decision agent. The harvesting agent is also responsible for notifying the decision agent of changes in a node's state: when a node is stopped or ready to accept virtual domains, its harvesting agent sends a departure or arrival notification, respectively. Thus, the decision agent knows the placement of virtual domains across the grid, and which nodes are available or offline.
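
As a rough illustration (ours, not the authors' code), per-domain CPU percentages can be derived from two samples of the cumulative CPU time that the hypervisor keeps per domain; here we parse the Time(s) column of Xen's xm list instead of calling the xenstat library directly:

    import subprocess
    import time

    def domain_cpu_seconds():
        # Cumulative CPU seconds per domain, from the Time(s) column of
        # `xm list`; xenstat exposes the same counters programmatically.
        out = subprocess.check_output(["xm", "list"]).decode()
        times = {}
        for line in out.splitlines()[1:]:       # skip the header line
            fields = line.split()
            times[fields[0]] = float(fields[-1])
        return times

    def cpu_report(interval=5.0, ncpus=2):
        before = domain_cpu_seconds()
        time.sleep(interval)
        after = domain_cpu_seconds()
        # Fraction of the node's CPU capacity used by each domain; summing
        # the values gives the node load, hypervisor excluded.
        return {dom: (after[dom] - before[dom]) / (interval * ncpus)
                for dom in after if dom in before}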

4.2 The decision agent

This agent runs on its own machine. According to the reports and notification packets sent by nodes, the agent constructs two representations of the grid. A node-based view gives the [...] address, and state (online or offline) of each node. A virtual-domain-based view gives information about each domain's location, its current resource consumption and the level of saturation of its host. The decision agent then uses this information to concentrate virtual domains on the smallest possible subset of nodes while avoiding saturation. The algorithm is based on two variables: a saturation threshold and an underload threshold. Decisions are taken by comparing the current load of each node with these thresholds.

In order to concentrate virtual domains without overloading nodes, the algorithm selects the virtual domain that consumes the highest amount of CPU on the least loaded node, then moves it to a more loaded node. As we suppose a homogeneous environment, the CPU consumption of that domain should remain almost identical on the new node. So, the destination node is chosen such that the sum of the current destination node load and the current domain load is under the overload threshold. The first acceptable node is picked. This operation is repeated in the next iteration if the node is still underloaded and if a more appropriate place is available for its virtual domains.
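
A compact sketch of one such iteration (our reading of the algorithm; the paper publishes no code, and the visiting order behind "the first acceptable node" is our choice of most-loaded-first):

    def concentration_step(nodes, saturation=0.80, underload=0.70):
        # nodes: {node: {domain: cpu_load}}; thresholds are those
        # used in the evaluation (Section 5).
        load = {n: sum(doms.values()) for n, doms in nodes.items()}
        src = min(load, key=load.get)               # least loaded node
        if load[src] >= underload or not nodes[src]:
            return None
        dom = max(nodes[src], key=nodes[src].get)   # its hungriest domain
        for dst in sorted(load, key=load.get, reverse=True):
            # First fit among more loaded nodes that stay under saturation.
            if dst != src and load[dst] + nodes[src][dom] < saturation:
                return (dom, src, dst)              # migration order
        return None

    grid = {"n1": {"d1": 0.1}, "n2": {"d2": 0.5}, "n3": {"d3": 0.3}}
    print(concentration_step(grid))   # -> ('d1', 'n1', 'n2')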

Concentrating virtual domains can be dangerous in a certain way, as virtual domains do not continually do the same job, and thus their CPU consumption may vary. It is therefore possible that, if some domains increase their CPU consumption, the nodes that host them will become overloaded. To limit the performance degradation, we reduce the load in the following way: the domain having the lowest load is moved to a node that can host it. The destination node should have the maximum CPU usage without being overloaded (before and after the migration). If no node is available, a new node from the set of offline nodes is initialised and the domain is migrated onto it.
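
The overload-relief rule can be sketched in the same style (again our illustration): move the lightest domain off the hottest node to the most loaded node that still fits it, or onto a freshly booted node:

    def relieve_overload(nodes, offline, saturation=0.80):
        # nodes: {node: {domain: cpu_load}}; offline: bootable spare nodes.
        load = {n: sum(d.values()) for n, d in nodes.items()}
        hot = max(load, key=load.get)
        if load[hot] < saturation or not nodes[hot]:
            return None
        dom = min(nodes[hot], key=nodes[hot].get)   # lightest domain there
        fits = [n for n in nodes
                if n != hot and load[n] + nodes[hot][dom] < saturation]
        if fits:
            # Best fit: the most loaded node that stays under saturation.
            return (dom, hot, max(fits, key=lambda n: load[n]))
        if offline:
            return (dom, hot, offline[0])           # boot a fresh node
        return None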

When a decision is made by the agent, it sends the corresponding action to the appropriate node. Actions are shell commands that are executed in Domain 0. As we want to migrate virtual domains to save power with a VOVO-like system, three kinds of actions can be sent to a node. The most used action is to migrate a specific virtual machine to a new node. The other actions, specific to the VOVO aspect of our solution, cause a node to be halted or booted.
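
With Xen 3.0's management tools, the migrate action maps onto xm migrate --live; how halted nodes are powered back on is not stated in the paper, so the Wake-on-LAN command below is purely our assumption:

    import subprocess

    def execute(action, *args):
        # Dispatch a decision as a shell command run in Domain 0 (sketch).
        cmds = {
            "migrate":  lambda dom, host: ["xm", "migrate", "--live", dom, host],
            "shutdown": lambda: ["shutdown", "-h", "now"],
            # Assumption: offline nodes are woken over the network,
            # the command being issued from a live node.
            "boot":     lambda mac: ["etherwake", mac],
        }
        subprocess.check_call(cmds[action](*args))

    # e.g. execute("migrate", "Domain-1", "node-2")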

5 Evaluation

The evaluation of our solution aims to show that dynamic placement of virtual domains allows a good energy saving with a tolerable performance degradation. We compare the energy economy and performance degradation of our solution against a traditional IVS solution.

In order to evaluate our solution in a predictable environment, we developed a benchmark that sends load requests to clients. This tool was tuned to make each client consume a certain amount of CPU time. For each experiment we ran the tests three times. First, in a native environment, each application runs on a

[...] migrated between a variable number of active nodes, depending on the CPU resource needs.

Our grid is composed of four Sun V20Z stations. Each machine contains 4 GB of RAM and two Opteron 250 processors, and runs a GNU/Linux distribution with Xen 3.0-testing (based on a 2.6.16 kernel). All applications are compiled in 32-bit mode. A file server contains all the applications and OSes used by the nodes, in order to make administration easier. All machines (4 nodes and 1 file server) are connected through a Gigabit network. The IVS software used for the second test is based on powernowd, and the nodes' overload and underload thresholds chosen to migrate virtual machines were respectively 80% and 70%. In all situations, we measure the effective CPU consumption of each node, the distribution of CPU time between domains, and the Watts consumed by the grid (not including the file server).

Our benchmark evaluates the dynamic behaviour (migration, turning nodes on and off) of our strategy and its benefits in comparison to IVS. We adapted it to create four different scenarios (one per benchmark client). Each scenario creates high-load peaks or low-usage periods at different times. This evaluation simulates a grid supporting a variety of different services. Figures 2(a) and 2(b), and Table 1, show the nodes' CPU utilization and the domain locations over time. First, we observe that with the IVS strategy the CPU average load is almost equal for all four nodes (approximately 60%), whereas our solution tends to concentrate all the work on three nodes. Second, the CPU average load of each node is less variable over time with our solution, due to the policy's decisions.

Table 1. CPU average utilization with dynamic evaluation

         IVS                    Virtualization
node 1   51.16% (δ = 37.73)     82.69% (δ = 20.35)
node 2   65.42% (δ = 36.23)     84.08% (δ = 6.91)
node 3   68.11% (δ = 22.13)      9.52% (δ = 9.58)
node 4   59.59% (δ = 33.39)     65.68% (δ = 28.3)

The four different scenarios are built to cover different situations that illustrate the advantages and drawbacks of domain distribution. A positive effect of concentration appears around 1100 s (Figure 2(b)), where two of the four nodes are deactivated by migrating virtual Domains 1 and 4 to Node 2. These migrations are possible because the two domains consume almost no resources at that moment. An example of virtual domain repartition appears around 1600 s on Node 2, from which virtual Domain 3 was migrated to Node 1 in order to leave CPU time for virtual Domain 2. Thus, domains that need an important amount of resources have a higher priority than small domains. This exclusion of small domains avoids performance loss.

Fig. 2. Global CPU consumption: (a) native execution with IVS; (b) virtual environment with migrations. Each panel plots the CPU usage (%) of Nodes 1-4 over time (in sec.) together with the hosted domains (Domain-1 to Domain-4); panel (b) marks the boot time of Node 4.

A side effect of migration can be seen on Node 4 at about 400 s. To free some resources on Node 1, Node 4 was booted, but during the boot time, the [...] minutes later. As it is impossible to precisely predict the future needs of domains, it is hard to avoid such useless boots. Nevertheless, it would be possible to define a minimum uptime to avoid nodes rebooting constantly.

Table 2. Execution time and average energy consumption

Environment   Execution time   Cumulated vs. Native   Consumption   Energy gain
Native        118:39           -                      651.12 W      -
IVS           119:01           +0.3%                  636.57 W      1.93%
Virtual       139:37           +17.67%                498.13 W      9.97%
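
For the reader checking the last column (our arithmetic; the paper does not spell it out), the energy gain compares average power multiplied by cumulated execution time against the native run, which is why a 23.5% drop in average power yields only a 9.97% gain once the 17.67% longer run is accounted for:

    gain = 1 - (P * T) / (P_native * T_native)
         = 1 - (498.13 W * 139:37) / (651.12 W * 118:39) ≈ 9.97%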

We can see in Table 2 and Figure 3 that the energy saving of our solution is appreciable, but there is a small performance degradation (the cumulated execution time increases by 20 minutes). This is due to the time needed by a node to boot (approximately 3 minutes). To evaluate the time lost due to reboots, we re-ran the experiment in the virtual environment without shutting down or booting nodes. Migration is still effective but boot time is reduced to zero. In this situation, the performance decreases only by 8.53% (with a cumulated execution time increased by 10 minutes).

Fig. 3. Global energy consumption over time (Watts against time in min), for the native execution, the IVS execution, and our prototype.

This evaluation shows that our solution allows important energy savings, especially in a stable environment; in a very dynamic situation, however, the boot time of nodes slightly reduces its benefit. About 50% of the overhead of our solution is due to boot time. Software solutions such as hibernate or suspend-to-RAM may greatly reduce that issue.

6 Conclusion and future work

In this paper we have shown that Xen live migration allows adapting the VOVO power saving strategy to grid computing, taking its complex infrastructure into account. Contrary to traditional VOVO systems, the abstraction layer offered by Xen provides a generic computation migration process based on virtual domain placement. The computation concentration is done by dynamically migrating virtual domains according to their resource needs. This concentration of virtual domains on the minimal subset of nodes allows unused nodes to be shut down, and thus saves energy. By separating the migration and placement concerns at the hypervisor level, our solution is completely transparent to developers and users. Evaluations show that our power management policy, based solely on CPU load, may significantly decrease power consumption in grids with variously loaded machines.

As future work, we intend to extend our power management policy to a multi-criterion system, especially to include network traffic information. Indeed, the network hardware of a grid also consumes a non-negligible amount of electrical power. Our policy could regroup highly communicating virtual domains on a single node in order to reduce network traffic. Utilization of software [...]

References

1. Elnozahy, E.N., Kistler, M., Rajamony, R.: Energy-efficient server clusters. In: Proceedings of the Second Workshop on Power Aware Computing Systems, Cambridge, MA, USA (2002) 179-196
2. Chen, G., Malkowski, K., Kandemir, M., Raghavan, P.: Reducing power with performance constraints for parallel sparse applications. In: IPDPS'05: Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS'05) - Workshop 11, Washington, DC, USA, IEEE Computer Society (2005) 231.1
3. Pinheiro, E., Bianchini, R., Carrera, E., Heath, T.: Dynamic cluster reconfiguration for power and performance. In Benini, L., Kandemir, M., Ramanujam, J., eds.: Compilers and Operating Systems for Low Power. Kluwer Academic Publishers (2002)
4. Chase, J.S., Anderson, D.C., Thakar, P.N., Vahdat, A.M., Doyle, R.P.: Managing energy and server resources in hosting centers. In: SOSP'01: Proceedings of the Eighteenth ACM Symposium on Operating Systems Principles, Banff, Alberta, Canada, ACM Press (2001) 103-116
5. le Mouël, F., André, F., Segarra, M.T.: AeDEn: An adaptive framework for dynamic distribution over mobile environments. Annals of Telecommunications 57 (2002) 1124-1148
6. Gallard, P., Morin, C.: Dynamic streams for efficient communications between migrating processes in a cluster. In: Euro-Par 2003: Parallel Processing. Volume 2790 of Lect. Notes in Comp. Science, Klagenfurt, Austria, Springer-Verlag (2003) 930-937
7. Krauter, K., Buyya, R., Maheswaran, M.: A taxonomy and survey of grid resource management systems for distributed computing. Software Practice and Experience 32 (2002) 135-164
8. Berc, L., Wadia, N., Edelman, S.: VMware ESX Server 2: Performance and scalability evaluation. Technical report, IBM (2004)
9. Whitaker, A., Shaw, M., Gribble, S.D.: Scale and performance in the Denali isolation kernel. In: Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI), Boston, MA, USA (2002) 195-209
10. Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., Warfield, A.: Xen and the art of virtualization. In: Proceedings of the 19th ACM Symposium on Operating Systems Principles, Bolton Landing, NY, USA, ACM Press (2003) 164-177
11. Foster, I., Kesselman, C., Tuecke, S.: The anatomy of the grid: Enabling scalable virtual organizations. Lecture Notes in Computer Science 2150 (2001)
12. Clark, C., Fraser, K., Hand, S., Hansen, J.G., Jul, E., Limpach, C., Pratt, I., Warfield, A.: Live migration of virtual machines. In: Proceedings of the 2nd USENIX Symposium on Networked Systems Design and Implementation

