HAL Id: inria-00307512
https://hal.inria.fr/inria-00307512
Submitted on 30 Jul 2008
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
JEGA: a joint estimation and gossip averaging algorithm for sensor network applications

Nicolas Maréchal, Jean-Benoit Pierrot, Jean-Marie Gorce

To cite this version:
Nicolas Maréchal, Jean-Benoit Pierrot, Jean-Marie Gorce. JEGA: a joint estimation and gossip averaging algorithm for sensor network applications. [Research Report] RR-6597, INRIA. 2008, pp.25. inria-00307512
Rapport de recherche
ISSN 0249-6399  ISRN INRIA/RR--6597--FR+ENG
Thème COM
JEGA: a joint estimation and gossip averaging
algorithm for sensor network applications
Nicolas Maréchal — Jean-Benoît Pierrot — Jean-Marie Gorce
N° 6597
Unité de recherche INRIA Rhône-Alpes
Nicolas Maréchal∗, Jean-Benoît Pierrot†, Jean-Marie Gorce‡

Thème COM, Systèmes communicants, Projet ARES
Research report n° 6597, 25 July 2008, 25 pages
Abstract: Distributed consensus algorithms are widely used in the area of sensor networks. Usually, they are designed to be extremely lightweight at the price of computation time. They rely on simple local interaction rules between neighbor nodes and are often used to perform the computation of spatial statistical parameters (average, variance, regression). In this paper, we consider the case of a parameter estimation from input data streams at each node. An average consensus algorithm is used to perform a spatial regularization of the parameter estimations. A two-step procedure could be used: each node first estimates its own parameter, and then the network applies a spatial regularization step. It is however much more powerful to design a joint estimation/regularization process. Previous work has been done to solve this problem, but under very restrictive hypotheses in terms of communication synchronicity, estimator choice and sampling rates. In this paper, we study a modified gossip averaging algorithm which fulfills the sensor network requirements: simplicity, low memory/CPU usage and asynchronicity. Along the way, we prove that the intuitive mass conservation principle for gossip averaging is stable and asymptotically verified under feedback corrections, even in the presence of heavily corrupted and correlated measures.
Key-words: distributed algorithms, gossip algorithms, epidemic algorithms, averaging, average consensus, estimation, space-time diffusion, sensor networks
∗ CEA-LETI MINATEC, Grenoble, F-38054, France (email: nicolas.marechal@cea.fr)
† CEA-LETI MINATEC, Grenoble, F-38054, France (email: jean-benoit.pierrot@cea.fr)
‡ Université de Lyon, INRIA, INSA-Lyon, CITI, F-69621, France (email: jean-marie.gorce@insa-lyon.fr)
1 Introduction
Sensor networks consist of a great number of small entities, called nodes, equipped with low-cost hardware in order to balance the total network cost. They are commonly used for monitoring physical phenomena over wide areas such as hydrometry, landslides or fires, but also for tracking purposes and in military warfare ([1],[2],[3],[4],[5]). The direct drawbacks of low-cost hardware are numerous: severe energy constraints (battery lifetime), poor CPU and storage abilities, low transmission rates and small communication range. Facing these limitations and objectives, wireless sensor networks (WSN) have to self-organize their exchanges and the manner in which sensor nodes achieve their mission. As in most cases computations in a centralized fashion become intractable and/or ill-adapted, robust distributed algorithms have to be designed. The biggest part of algorithm design for WSN is dedicated to improving performance while preserving energy consumption. For example, several data fusion schemes developed in order to provide a good and compact representation of the observed phenomenon can be made on the basis of a high number of low-quality measures [6] and simple local interactions between neighbor nodes (gossiping). The particular class of distributed consensus algorithms is of great interest: they provide a robust way of homogenizing parameters among network nodes [7]. More specifically, average consensus algorithms seem to be a good choice whenever the stability and the quality of the consensus point is a critical issue [8], and they extend to a wide panel of data fusion tools such as estimators for statistical moments, linear regression and polynomial map fitting ([6],[9]). However, a problem occurs when the data to be averaged are subject to fluctuations. This is often the case when a statistical parameter is estimated from time series of noisy data samples. As the number of available samples increases, the estimation process naturally acts as a temporal regularization scheme and fluctuations are reduced: one would like to adjust the current state of the average consensus in order to account for information with higher precision.
For an additive zero-mean stationary measurement noise process, the quality of this estimation increases with the number of measures and, as a corollary, with time. Nevertheless, gossip averaging algorithms are very slow to converge in comparison with centralized algorithms: gossip-based consensus algorithms converge asymptotically, rarely in finite time. As sensor networks suffer from heavy constraints on their resources, time and energy must be saved; it thus becomes necessary to run the gossip averaging algorithm while the estimation is still in progress, by using correction mechanisms. This double process should be understood as a spatio-temporal regularization scheme: each node individually performs a local regularization of features extracted from sampled data (estimation), while a spatial regularization (averaging) is performed in order to extract a global characteristic. Previous work has been done on this topic, and an alternative version of the algorithm introduced in [9] is described in this article. As explained in this paper, the originality of our work consists mainly in the full asynchronicity that is assumed for both data exchanges and estimation processes, and in the wide range of estimators covered by the hypotheses. Moreover, our algorithm can find many application contexts: as an example, a clock synchronization scheme for wireless sensors makes use of it [10]. This paper is organized as follows: section 2 provides a short overview of gossip-based consensus algorithms, their principles and some known results. In section 3, a solution is proposed, addressing the problem of parallel estimation and averaging while respecting the philosophy and paradigm of sensor networks. Furthermore, its convergence is proved. After these theoretical considerations, simulation results are provided in section 4 in order to give a qualitative study of its behaviour w.r.t. parameter scale. In addition to its apparent meaning, this work proves that the principle of mass conservation is respected and stable through our algorithm, even when measurements are highly corrupted and correlated.
2 Distributed consensus algorithms

2.1 Principles of gossip-based consensus algorithms
Distributed consensus algorithms/protocols aim at making all network nodes agree on a common value or decision in a decentralized fashion. From a signal processing context, this can be understood as a spatial regularization process. When data exchanges consist of local, asynchronous and simple interactions between neighbor nodes, such algorithms are referred to as gossip-based. The particular subclass of gossip-based average consensus algorithms is not limited to the computation of averages, but extends to the extraction of a large variety of aggregated and statistical quantities like sums/products, max/min values, variances, quantiles and rank statistics ([11] and [12]). More direct applications like least-squares regression of model parameters have also been adapted to these algorithms ([6],[9]). All these specificities make gossip-based consensus algorithms good candidates for sensor network applications, where bandwidth, energy consumption and CPU/memory usage endure severe limitations for the sake of node lifetime and size. Despite their suboptimality, pure gossip consensus algorithms can be used as a prelude to more sophisticated algorithms by homogenizing parameters over groups of nodes (for example, the reduction of carrier offset for reducing the time drift of TDMA schemes). The performance analysis of such algorithms relies essentially on diffusion speed statistics and is thus closely related to the performance of flooding/multicasting processes and the mixing time of Markov chains: some asymptotic bounds on convergence time are given in [12] and [11].
2.2 Gossip averaging
In practical applications, consensus algorithms are often used in order to easily homogenize some parameters. However, one should distinguish situations in which the agreed value is critical. For example, agreeing on a meeting point is not as critical as detecting the position of a sniper. Obtaining an average is in general much slower than any uniformization algorithm based on flooding/broadcasting techniques, but it ensures a good quality of consensus.
Gossip-based average consensus algorithms (gossip averaging) have been widely studied in the literature ([13],[14],[11],...). As suggested by their denomination, they aim at computing a global average of local values. This paper focuses on a version based on an asynchronous peer-to-peer communication model as presented in [6]. Such a model frequently occurs in sensor network applications where full synchronicity is neither guaranteed nor easily tractable. Under this assumption, an interaction consists in choosing a pair of neighbor nodes at each iteration and making a local averaging between them. A gossip averaging algorithm is then described by a linear difference equation of the form:

X_{k+1} = W_k X_k    (1)

where X_k is the vector of nodes' values at iteration k (X_0 contains the initial values to be averaged) and W_k is an n-by-n matrix which describes instantaneous pairwise interactions. In fact, X_k can be seen as a state vector, and its components as estimates of the global average of initial values. Following Boyd's notations, W_k can be written:

W_k = I − (e_i − e_j)(e_i − e_j)^T / 2 ≜ W_{ij}    (2)

where e_i = [0 … 0 1 0 … 0]^T is an n-dimensional vector with i-th entry equal to 1 (here, the double index ij means that node i contacts node j). This clearly corresponds to the following rule (time in brackets, node's ID as index):

x_i(k+1) = x_j(k+1) = (x_i(k) + x_j(k)) / 2    (3)

For convenience, the W_k are considered to be i.i.d. random matrices. Each W_{ij} is chosen with probability p_{ij}, i.e. according to the probability that node i initiates an iteration involving node j:

W = E[W_k] = E[W_0] = (1/n) Σ_{i,j} p_{ij} W_{ij}    (4)

where n stands for the number of network nodes. In [14], the following statement is proven:

Proposition 1 ([14]). If W is a doubly-stochastic ergodic matrix, then X_k → θ̄1 as k → ∞, where θ̄ = (1/n) Σ_{i=1}^n x_i(0), and 1 = [1, …, 1]^T ∈ R^n.

This means that convergence to the true average value is ensured if and only if the largest eigenvalue (in modulus) of W is equal to 1, with multiplicity 1. In other words, W is the transition matrix of a Markov chain whose underlying (weighted) transition graph G is strongly connected, i.e. for each pair (i, j) of vertices of G, a path from i to j and a path from j to i exist. In [15], the authors proved that proposition 1 is equivalent to the following proposition.

Proposition 2 ([15]). Let G = (V, E) be the graph such that: the set of vertices V is the set of network nodes; there is an edge between two vertices only if the corresponding nodes interact infinitely many times. Then X_k converges to θ̄1 if G is connected.

Another way to define E with respect to the set E_k of active links (edges) at time instant k is the following:

E = ⋂_{n=0}^∞ ⋃_{k=n}^∞ E_k = lim sup_n E_n
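The pairwise update rule (3) is easy to simulate. Below is a minimal pure-Python sketch of asynchronous gossip averaging; the ring topology, node count and iteration budget are illustrative choices, not taken from the report:

```python
import random

def gossip_average(values, neighbors, iterations, rng):
    """Asynchronous pairwise gossip, update rule (3): at each step a
    random initiator i contacts a random neighbor j and both replace
    their values by the pairwise mean."""
    x = list(values)
    for _ in range(iterations):
        i = rng.randrange(len(x))
        j = rng.choice(neighbors[i])
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Illustrative example: 8 nodes on a ring, initial values 0..7 (true mean 3.5).
n = 8
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
x0 = [float(v) for v in range(n)]
x = gossip_average(x0, neighbors, 5000, random.Random(0))

# Each exchange preserves the sum (mass conservation), so the consensus
# point is the true average of the initial values.
assert abs(sum(x) - sum(x0)) < 1e-9
assert max(x) - min(x) < 1e-3
```

Proposition 2's condition holds in this example because the ring is connected and every link is selected with positive probability at each step.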
2.3 Performance analysis of noiseless gossip averaging

The convergence speed of the standard gossip averaging algorithm is strongly related to the second largest eigenvalue (in modulus), λ_2, of W (see [15]). Given the link probabilities, performance is quantified in terms of ε-averaging time, i.e. the minimal time that guarantees a relative error of order ε with probability at least 1 − ε:

T_ave(ε) = sup_{X_0 ∈ R^n} inf { k : Pr[ ||X_k − θ̄1||_2 / ||X_0||_2 ≥ ε ] ≤ ε }    (5)

Boyd et al. derived upper and lower bounds for T_ave(ε) based on the second largest eigenvalue of the corresponding matrix W:

0.5 log ε^{−1} / log λ_2(W)^{−1} ≤ T_ave(ε) ≤ 3 log ε^{−1} / log λ_2(W)^{−1}    (6)
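The two bounds in (6) differ only by a constant factor of 6; a small arithmetic sketch (the values λ_2 = 0.98 and ε = 0.01 are arbitrary illustrations, not from the report):

```python
import math

def t_ave_bounds(lam2, eps):
    """Bounds (6) on the eps-averaging time from the second largest
    eigenvalue modulus lam2 of W = E[W_k]."""
    num = math.log(1.0 / eps)      # log eps^-1
    denom = math.log(1.0 / lam2)   # log lambda_2(W)^-1
    return 0.5 * num / denom, 3.0 * num / denom

lo, hi = t_ave_bounds(lam2=0.98, eps=0.01)
assert lo < hi and abs(hi / lo - 6.0) < 1e-9

# A lambda_2 closer to 1 (slower mixing) enlarges both bounds.
lo2, hi2 = t_ave_bounds(lam2=0.999, eps=0.01)
assert lo2 > lo and hi2 > hi
```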
The eigenvalue λ_2 is a function of the link probabilities between pairs of neighbor nodes. Thus, one can try to reduce λ_2 by acting on neighbor links, in two ways:
- link creation/deletion according to topology-based heuristics;
- neighbor selection chosen uniformly at random or not: potential heuristics.

Some papers describe solutions for the (distributed) optimization of λ_2 for a given network using semidefinite programming [16]. The choice of algorithms of such complexity is questionable for sensor network applications: their operation should remain reasonably feasible on the chosen architecture, and the performance gains must be important enough to compensate for the time and energy lost during optimization. However, this scheme is of great interest in a topology-stable network.

2.4 Gossip averaging of noisy measures
In some applications, the parameters to be averaged are estimated from sampled streams of data. Recently, a tremendous amount of work has been published to enhance the performance and robustness of consensus algorithms ([17],[18]) and to define protocols for them. However, one of the most important practical problems is the presence of additive measurement noise. When coupling noises corrupt the exchanges, Boyd et al. proved that the consensus point deviates from the true average value and takes the form of a random walk over R^n: the mean squared norm of local deviations increases linearly with time. In [19], a solution is proposed to reduce the rate of deviation, while in [9] the effort is made to provide simultaneous averaging and estimation. Nevertheless, the work done in [9] relies on exchange synchronicity and only focuses on the case of least mean square estimation. The problem is then to find a solution to desynchronize exchanges while preserving convergence in the case of more general estimators. The main difficulty in this study remains the convergence rate of the estimation process. For example, the variance of the optimal estimator for the average of normally distributed values converges inversely proportionally to the number of values. In [9], the proof of second-order convergence is based on the condition that the variance σ_t has a finite l^2(N) norm. In general, the variance of an estimator does not fulfill this requirement.
3 Joint estimation and gossip averaging
Similarly to the initial algorithm presented in [9], we propose to run an asynchronous spatial regularization (averaging) process working jointly with the local estimation of the parameters (temporal regularization). The main difference stands in the full asynchronicity of our scheme, the chosen weights for interactions, and the retroaction process. Moreover, the convergence of our algorithm is proved in the following for a wide range of local estimators under the single assumption that their spatio-temporal covariances decrease to 0 with time, without any assumption on:
- their convergence rate w.r.t. the number of samples;
- data sampling rates.
3.1 Measurement process and estimation

During the measurement process, nodes collect samples related to some data of interest. The goal of the estimation phase is to extract some unknown parameters or characteristics of the original data distribution from samples only. In particular, an estimator of some parameter θ is said to be unbiased if, at any time, the mean of the estimator is θ. In this work, a proper estimation process is considered, i.e. one converging to the expected parameter as the number of samples grows (the variance tends toward 0 with time): this estimation is referred to as temporal regularization. As an example, one could be interested in estimating the mean µ of some real-valued distribution D (of finite variance σ^2). It is well known that, given i.i.d. samples v_k from the distribution, the sample average converges to the true mean µ, i.e. ∀k ∈ N*, v_k ∼ D(µ, σ) implies:

E[(1/n) Σ_{k=1}^n v_k] = µ
V[(1/n) Σ_{k=1}^n v_k] = σ^2 / n → 0 as n → ∞
Cov[(1/m) Σ_{k=1}^m v_k, (1/n) Σ_{l=1}^n v_l] = σ^2 / max(m, n) → 0 as max(m, n) → ∞    (7)
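The sample-average estimator in (7) is typically maintained incrementally on each node; a minimal sketch (the incremental-update form is a standard identity, not spelled out in the report):

```python
class RunningMean:
    """Unbiased sample-mean estimator: after n samples, `mean` equals the
    average of those n samples, updated in O(1) per new sample."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, v):
        self.n += 1
        self.mean += (v - self.mean) / self.n   # incremental mean update
        return self.mean

est = RunningMean()
for v in [2.0, 4.0, 6.0]:
    est.update(v)
assert abs(est.mean - 4.0) < 1e-12   # (2 + 4 + 6) / 3
```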
In real experiments, samples may be correlated and their distribution may change through time. However, some consistency in the measurement and estimation processes is assumed: measurements can be taken from a time-varying distribution, but the parameters to estimate must remain constant through time. For instance, interference in wireless communications is subject to the network activity dynamics, and is usually modelled as a centered random noise. In other words, its variance (power) varies with time, but its mean is constant and equal to 0.

3.2 Description of the algorithm
Let us start with some notations and conventions. For any node i, the current estimation of the parameter θ_i given the samples available at node i up to time k is denoted by θ̂_i^{(k)}. These estimators are grouped in the n-dimensional vector Z_k:

Z_k = [θ̂_1^{(k)}, …, θ̂_n^{(k)}]^T    (8)

As local estimators are assumed unbiased, the vector E[Z_k] is constant through time and is equal to the vector of parameters to be estimated:

Z̄ ≜ E[Z_k] = [θ_1, …, θ_n]^T    (9)

In this paper, it is useful to consider the difference between Z_k and its mean, which is denoted by B_k:

B_k ≜ Z_k − Z̄ = [b_1^{(k)}, …, b_n^{(k)}]^T    (10)

We also define the covariance term C_{ij}^{kl}, which measures the relation between components of Z_k through time:

C_{ij}^{kl} ≜ E[(θ̂_i^{(k)} − θ_i)(θ̂_j^{(l)} − θ_j)] = E[b_i^{(k)} b_j^{(l)}]    (11)

In the remainder of this article, the proof of convergence will rely on the single assumption that these components are asymptotically uncorrelated, i.e.

∀(i, j) ∈ [1, n]^2,  C_{ij}^{kl} → 0 as (k + l) → ∞    (12)
For most sensor network applications, this assumption is not very restrictive, as sufficiently spaced (temporally and/or spatially) samples tend to be uncorrelated too. In many cases, the Stolz-Cesàro theorem and its extensions help in finding sufficient conditions on sample covariances for ensuring assumption (12).

The proposed algorithm is based on the standard gossip averaging algorithm (1), upon which a simple feedback is added to account for updates of the estimated parameters. This modified algorithm takes the form of a nonhomogeneous system of first-order linear difference equations, and is defined by:
X_{k+1} = W_k X_k + Z_{k+1} − Z_k,    Z_0 = X_0 = [0 … 0]^T    (13)
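System (13) can be sketched directly: a gossip exchange averages the state vector X, and each new sample feeds the correction Z_{k+1} − Z_k back into X. A minimal pure-Python sketch, assuming a complete interaction graph, sample-mean estimators and i.i.d. Gaussian measurement noise (all illustrative choices, not the exact setting of the report):

```python
import random

class RunningMean:
    """Per-node sample-mean estimator."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def update(self, v):
        self.n += 1
        self.mean += (v - self.mean) / self.n
        return self.mean

def jega(n, theta, sigma, iters, rng):
    """Joint estimation and gossip averaging, per system (13)."""
    x = [0.0] * n                       # state vector X_k
    z = [0.0] * n                       # estimate vector Z_k
    est = [RunningMean() for _ in range(n)]
    for _ in range(iters):
        # Every node takes one noisy sample; feedback Z_{k+1}-Z_k into X.
        for i in range(n):
            z_old = z[i]
            z[i] = est[i].update(theta[i] + rng.gauss(0.0, sigma))
            x[i] += z[i] - z_old
        # One asynchronous pairwise gossip exchange on X.
        i = rng.randrange(n)
        j = rng.randrange(n - 1)
        j = j if j < i else j + 1       # uniform j != i
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x, z

rng = random.Random(1)
theta = [1.0, 2.0, 3.0, 4.0]            # local parameters, spatial mean 2.5
x, z = jega(4, theta, sigma=0.5, iters=4000, rng=rng)

# Mass conservation: the state sum always equals the sum of the estimates.
assert abs(sum(x) - sum(z)) < 1e-9
```

The invariant sum(x) == sum(z) is exactly the mass-conservation property whose stability under the feedback correction the paper proves.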
The proof of the efficiency of this algorithm relies on the convergence of every component of the state vector X_k to the spatial average of the estimated parameters (θ_i)_{i=1..n}, while those of Z_k should converge to the local parameters:

X_k → ((1/n) Σ_{i=1}^n θ_i) 1 as k → ∞
Z_k → Z̄ = [θ_1, …, θ_n]^T as k → ∞

In system (13), the (random) matrix W_k fulfills the same conditions as in the classic gossip averaging algorithms: it can be constant through time or taken at random, but its mean must be a doubly stochastic (its rows and columns sum to 1) ergodic matrix. The behaviour of our system is very easy to describe qualitatively. The paracontracting matrices (see [20]) W_k homogenize the values of X_k while the components of Z_k are stabilizing. If each z_i(k) is ergodic, Z_{k+1} − Z_k tends to 0 as X_k tends to a vector colinear to [1 … 1]^T. The term Z_{k+1} − Z_k implies a permanent correction of the total weight of X_k. One can then conclude that after an infinite time, the sum of the (identical) components of X_k is equal to the sum of those of Z̄. In the next two parts, a formal proof of this analysis is given.

3.3 First-order moment convergence
The first result asserts that the mean limit of X_k is naturally the average of the estimated parameters, i.e. θ̄ = (1/n) Σ_{i=1}^n θ_i:

Proposition 3.
lim_{k→∞} E[X_k] = θ̄1    (14)

Proof. All along the proof, we take benefit of the recursive form of update equation (13):

E[X_{k+1}] = E[W_k X_k + Z_{k+1} − Z_k]    (15)
           = E[W_k] E[X_k] + E[Z_{k+1}] − E[Z_k]    (16)
           = W E[X_k] + Z̄ − Z̄    (17)

This relation is true except for k = 0: E[X_1] = E[Z_1] = Z̄. It follows:

E[X_{k+1}] = W^k E[Z_1]    (18)
           = W^k Z̄    (19)

The ergodicity of W states that W^k converges to 11^T/n as k grows. Together with relation (19), it yields the expected result:

E[X_k] → (11^T/n) Z̄ = θ̄1 as k → ∞    (20)

3.4 Second-order moment convergence
Now that our algorithm is proved to converge on average towards the constant vector θ̄1, the quality of this convergence must be analyzed. In other words, can the mean distance between consensus and true average (statistically) be made arbitrarily small? A positive answer is demonstrated below. Technical issues stand in the random nature of the matrices appearing in the algorithm, and also in the simple fact that covariances are not assumed to have any finite ℓ_p norm (in any direction). More interestingly, the second-order convergence is useful to state the convergence in probability, as explained in the next section.
Proposition 4.
lim_{k→∞} E[||X_k − θ̄1||^2] = 0    (21)

Proof. The proof of proposition 4 is more fastidious than technical. As the next equation shows, the deviation is directly related to the second-order moment of X_k:

E[||X_k − θ̄1||^2] = E[(X_k − θ̄1)^T (X_k − θ̄1)]    (22)

The right-hand side of equation (22) can be developed as:

E[X_k^T X_k] − 2θ̄ E[1^T X_k] + E[||θ̄1||^2]    (23)

which trivially gives:

E[||X_k − θ̄1||^2] = E[||X_k||^2] − n θ̄^2    (24)

Thus, the goal is now to prove that E[||X_k||^2] converges to n θ̄^2 as k increases. We rewrite the recursive system (13) in a more efficient way. For this, we define Ψ(k, i) and Φ(k, i) in M_n(R) (the set of n-by-n real matrices) by:

Ψ(k, i) ≜ I if i ≥ k;  Ψ(k, i) ≜ W_{k−1} W_{k−2} … W_{i+1} W_i otherwise    (25)
Φ(k, i) ≜ Ψ(k, i) − Ψ(k, i + 1)    (26)

By putting Φ(k, i) and Ψ(k, i) into system (13), one obtains a more efficient formulation for X_k:

X_k = Z_k + Σ_{i=1}^{k−1} Φ(k, i) Z_i,    Z_0 = X_0 = [0 … 0]^T    (27)

This notation helps in proving the convergence of the second-order moment of X_k, where classical upper bounding (see [14]) would fail. As one should expect from expression (27), E[||X_k||^2] can be written with second-order moments of measures:
E[||X_k||^2] = E[||Z_k||^2] + 2 E[Z_k^T Σ_{i=1}^{k−1} Φ(k, i) Z_i] + E[||Σ_{i=1}^{k−1} Φ(k, i) Z_i||^2]    (28)

The limit of the first term is easily derived as:

Proposition 5.
lim_{k→∞} E[||Z_k||^2] = ||Z̄||^2    (29)

Proof.
E[||Z_k||^2] = Σ_{p=1}^n E[(θ̂_p^{(k)})^2]    (30)
             → Σ_{p=1}^n θ_p^2 = ||Z̄||^2 as k → ∞    (31)

In order to make the proof clearer to the reader, the two last terms of equation (28) can be decomposed by separating noisy and deterministic components:
E[Z_k^T Σ_{i=1}^{k−1} Φ(k, i) Z_i] = E[Z̄^T Σ_{i=1}^{k−1} Φ(k, i) Z̄] + E[B_k^T Σ_{i=1}^{k−1} Φ(k, i) B_i] + E[B_k^T Σ_{i=1}^{k−1} Φ(k, i) Z̄] + E[Z̄^T Σ_{i=1}^{k−1} Φ(k, i) B_i]    (32)

E[||Σ_{i=1}^{k−1} Φ(k, i) Z_i||^2] = E[||Σ_{i=1}^{k−1} Φ(k, i) Z̄||^2] + E[||Σ_{i=1}^{k−1} Φ(k, i) B_i||^2] + 2 E[(Σ_{i=1}^{k−1} Φ(k, i) Z̄)^T (Σ_{i=1}^{k−1} Φ(k, i) B_i)]    (33)

As the noise vectors B_i are centered, the cross terms of equations (32) and (33) vanish through linear combinations:

E[Z_k^T Σ_{i=1}^{k−1} Φ(k, i) Z_i] = E[Z̄^T Σ_{i=1}^{k−1} Φ(k, i) Z̄] + E[B_k^T Σ_{i=1}^{k−1} Φ(k, i) B_i]    (34)

E[||Σ_{i=1}^{k−1} Φ(k, i) Z_i||^2] = E[||Σ_{i=1}^{k−1} Φ(k, i) Z̄||^2] + E[||Σ_{i=1}^{k−1} Φ(k, i) B_i||^2]    (35)

The limits of the four terms remaining in (34) and (35) are obtained separately:

Proposition 6.
lim_{k→∞} E[Z̄^T Σ_{i=1}^{k−1} Φ(k, i) Z̄] = n θ̄^2 − ||Z̄||^2    (36)

Proposition 7.
lim_{k→∞} E[||Σ_{i=1}^{k−1} Φ(k, i) Z̄||^2] = ||Z̄||^2 − n θ̄^2    (37)

Proposition 8.
E[B_k^T Σ_{i=1}^{k−1} Φ(k, i) B_i] → 0 as k → +∞

Proposition 9.
lim_{k→+∞} E[||Σ_{i=1}^{k−1} Φ(k, i) B_i||^2] = 0

The proof of these four propositions is the fastidious task we announced for proving proposition 4, and is provided in the appendix. Propositions 6 and 7 are a reformulation of classical results on gossip averaging algorithms ([16]) adapted to the terminology of this article, and do not contain major difficulties. On the contrary, the proof of propositions 8 and 9 is more technical and relies on the fact that the perturbations due to the measurement/estimation noise vanish if the assumption made on covariances in (12) is valid.
Now, thanks to propositions 5 to 9, the limit of E[||X_k||^2] when k goes to +∞ can be derived:

E[X_k^T X_k] → ||Z̄||^2 + 2(n θ̄^2 − ||Z̄||^2) + (||Z̄||^2 − n θ̄^2)    (38)
             = n θ̄^2    (39)

Equation (24) and relation (38) let us finish the proof of proposition 4:

lim_{k→∞} E[||X_k − θ̄1||^2] = 0    (40)

3.5 Convergence in probability
Proposition 4 ensures the convergence in probability of X_k toward the uniform vector θ̄1. This is the probabilistic counterpart of the convergence of sequences on a normed vector space. Proposition 10 states that the probability of having an error greater than a given threshold can be made arbitrarily small, provided the time index is greater than a second threshold depending on the error.

Proposition 10. For any positive real numbers α and δ, there is an integer N such that:

∀k ≥ N,  Pr[||X_k − θ̄1|| ≥ δ] ≤ α    (41)

Proof. If (Ω, B, P) is a probability space, and f is a measurable real-valued function on Ω, Markov's inequality states that for any t ∈ R+:

P({ω ∈ Ω : |f(ω)| ≥ t}) ≤ (1/t) E_P[|f|]    (42)

In particular, for any real random variable X and δ > 0, this is equivalent to:

Pr[|X| ≥ δ] = Pr[X^2 ≥ δ^2] ≤ δ^{−2} E[X^2]    (43)

Applying inequality (43) to ||X_k − θ̄1||, one obtains:

Pr[||X_k − θ̄1|| ≥ δ] = Pr[||X_k − θ̄1||^2 ≥ δ^2]    (44)
                      ≤ δ^{−2} E[||X_k − θ̄1||^2]    (45)

By proposition 4, one can find an integer N such that

∀k ≥ N,  E[||X_k − θ̄1||^2] ≤ α δ^2    (46)

Together, equations (45) and (46) ensure that ∀k ≥ N, one can guarantee that:

Pr[||X_k − θ̄1|| ≥ δ] ≤ α    (47)

Corollary 1. ∀α > 0, δ > 0, ∃N ∈ N such that ∀k ≥ N, the two following inequalities hold:
i) Pr[max_{1≤i≤n} |x_i^{(k)} − θ̄| ≥ δ] ≤ α
ii) ∀i ∈ [1, n], Pr[|x_i^{(k)} − θ̄| ≥ δ] ≤ α

Proof. This is a consequence of the classical norm inequality stating that

∀X ∈ R^n,  ||X||_∞ ≤ ||X||_2    (48)

Convergence in probability is then trivially deduced from this inequality.
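The Markov-to-Chebyshev step (43)-(45) can be illustrated empirically; a small sketch (standard-normal samples and the threshold δ = 2 are illustrative choices, not from the report):

```python
import random

rng = random.Random(3)
samples = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
delta = 2.0

# Chebyshev bound (43): Pr[|X| >= delta] <= E[X^2] / delta^2.
second_moment = sum(v * v for v in samples) / len(samples)
freq = sum(1 for v in samples if abs(v) >= delta) / len(samples)
bound = second_moment / delta ** 2

# For N(0,1) the empirical frequency (~0.046) sits well below the bound (~0.25):
# the bound is loose but always valid, which is all the proof needs.
assert freq <= bound
```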
4 Simulation and analysis
No formal result is yet available on the rate of convergence of JEGA. Some results may be explicitly deduced from the proof, but it seems more interesting to observe directly the behaviour of the quadratic error through time under the influence of the parameters (noise, mean parameters, ...). Our goal is to get intuitive and empirical knowledge of global tendencies. For this purpose, we consider a wireless network modeled as a random unit disk graph G where n nodes are distributed uniformly on a square simulation plane of width d. For the sake of simplicity, each node updates its local estimate at each iteration k (it gets one sample). However, only one pair of nodes will interact. For this purpose, an initiator i is chosen randomly among the set of vertices of G according to a uniform distribution, while the destination j is also chosen uniformly at random in the set of neighbors of i. We then compute an estimate of the average value of a parameter. Measurement noises are i.i.d. zero-mean Gaussian random variables with standard deviation σ_i at node i.
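The random unit-disk graph used for these simulations can be generated as follows; a minimal sketch (the parameter values of the example are chosen dense enough that the graph is usually connected; the connectivity helper is an addition, motivated by proposition 2's requirement of a connected interaction graph):

```python
import random

def unit_disk_graph(n, d, r, rng):
    """Drop n nodes uniformly on a d x d square; connect pairs within range r."""
    pts = [(rng.uniform(0, d), rng.uniform(0, d)) for _ in range(n)]
    nbrs = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
            if dx * dx + dy * dy <= r * r:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return pts, nbrs

def connected(nbrs):
    """Depth-first search from node 0; gossip reaches the true average
    only if the interaction graph is connected (proposition 2)."""
    seen, stack = {0}, [0]
    while stack:
        for j in nbrs[stack.pop()]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == len(nbrs)

rng = random.Random(4)
# Illustrative dense setting (n = 40 as in Table 1; d and r rescaled here).
pts, nbrs = unit_disk_graph(n=40, d=10.0, r=4.0, rng=rng)
assert len(pts) == 40
assert all(i not in nbrs[i] for i in nbrs)   # no self-loops
```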
4.1 Impact of noise variance
In this set of simulations, the entries of the steady-state vector Z̄ were taken once at random, uniformly in the interval [0, 3], and kept constant during the simulations. We analyse the evolution of the mean square deviation for several values of σ in the set {10, 1, 0.1, 0.01}, taken identically for all nodes, and also for noiseless measurements (σ = 0). This last case corresponds to the standard gossip averaging process. We run the algorithm for N_iter iterations and process N_avg simulations. The term MSE denotes the mean squared error norm of the state vector X_k, i.e. E[||X_k − θ̄1||^2].

n       40
d       100
r       2
σ       {0, 0.01, 0.1, 1, 10}
N_iter  1000
N_avg   3000

Table 1: Simulation parameters
In the noiseless case, one recognizes the classical sum-exponential convergence rate of gossip averaging algorithms, dominated for large k as follows:

E[||X_k − θ̄1||^2] ≤ [λ_2(W)]^k E[||X_0 − θ̄1||^2]    (49)

Proof of this inequality can be found in [14]. In the general case, i.e. when σ ≠ 0, the error due to noise cancellation is superposed on the noiseless error. We used standard sample-mean estimators on each node, which are known to converge proportionally to the inverse of the number of samples. This trend is conserved in the global mean squared error (MSE), as seen in figure 2, where we plotted the inverse normalized MSE. Thus, for small σ, the convergence should be seen as experiencing two different states. The first state corresponds to the coarse homogenization of the components of X_k, just as if all estimates to be averaged were constant (classical gossip averaging, see [6] and [16]), while the second corresponds to the slow cancellation of estimation noise, as if spatial averaging were instantaneous. One might think that there are asymptotes towards non-zero values; this false impression is given by the logarithmic scale and the fact that estimator variances decrease proportionally to k^{−1}.

4.2 Impact of message exchange rate
The first set of simulations shows an increase of convergence speed when σ decreases. Thus, one can consider two ways of virtually reducing σ. On the one hand, the message exchange rate can be reduced by some factor, say K. On the other hand, the averaging process flows normally but data samples are buffered and sent in bursts to the estimators: this should avoid the propagation of information of poor quality. In fact, the first proposal induces a latency proportional to K but leads to energy savings for achieving a given error (see figures 3 & 4: fewer exchanges are needed for achieving a given error), while the second scheme does not seem to improve convergence. In fact, simulations show that buffering creates jigsaw oscillations around the error achieved by the standard scheme (see figures 5 & 6).

Figure 1: MSE E[||X_k − θ̄1||^2] vs. iteration number (log scale), for σ ∈ {0, 0.01, 0.1, 1, 10} (σ = 0 is standard gossip).

Figure 2: Inverse normalized MSE E[(1/σ^2)||X_k − θ̄1||^2]^{−1} vs. iteration number, for σ ∈ {0.01, 0.1, 1, 10}.
5 Conclusion

In this paper, we introduced a new distributed algorithm for joint estimation and averaging which generalizes the space-time diffusion scheme presented in [9], and named it JEGA. We proved the convergence of our solution in terms of first and second order moments of the deviation from the true average, and then deduced the convergence in probability of each local averaged estimate. As it is based on peer-to-peer interactions, this algorithm is clearly adapted to sensor network applications. However, we propose to generalize the proofs given here to the case of synchronous interactions characterized by a constant transition matrix: such an approach relies on finding necessary conditions for ergodicity and verifying their consequences on networking models. The ability offered by JEGA of performing estimation and averaging in parallel gives rise to applications in cartography, localization, or synchronization in wireless sensor networks. Despite its simplicity, the main shortcoming of this algorithm is the difficulty of finding a closed expression for its convergence rate. Nevertheless, by means of simulation, our analysis provides a good heuristic for qualitatively predicting the mean behaviour of the deviation through time. In future work, we will focus our attention on a deeper analysis of the convergence rate and on finding better solutions with the help of predictive/polynomial filters ([21]).
Figure 3: MSE E[||X_k − θ̄1||^2] (log scale) vs. delay (iteration number × subsampling factor), for K ∈ {1, 2, 4, 5, 10, 25, 50, 100}.

Figure 4: MSE E[||X_k − θ̄1||^2] (log scale) vs. iteration number, for K ∈ {1, 2, 4, 5, 10}.

Figure 5: MSE E[||X_k − θ̄1||^2] (log scale), for buffer sizes {1, 10, 100}.

Figure 6: MSE E[||X_k − θ̄1||^2] (log scale): zoom of figure 5.

Appendix A: Proofs for propositions in part 3.4

Proof of proposition 6
E[ \bar{Z}^T \sum_{i=1}^{k-1} \Phi(k,i) \, \bar{Z} ] = \bar{Z}^T \, E[ \sum_{i=1}^{k-1} \Phi(k,i) ] \, \bar{Z}    (50)

where the middle sum can easily be simplified:

E[ \sum_{i=1}^{k-1} \Phi(k,i) ] = E[ \sum_{i=1}^{k-1} ( \Psi(k,i) - \Psi(k,i+1) ) ]    (51)
= E[ \Psi(k,1) - \Psi(k,k) ]    (52)
= E[ W_{k-1} W_{k-2} \dots W_1 ] - I    (53)

As the matrices W_k are i.i.d., one can average them separately and obtain:

E[ \sum_{i=1}^{k-1} \Phi(k,i) ] = \bar{W}^{k-1} - I    (54)

This last expression can be reinjected into eq. (50):

E[ \bar{Z}^T \sum_{i=1}^{k-1} \Phi(k,i) \, \bar{Z} ] = \bar{Z}^T ( \bar{W}^{k-1} - I ) \bar{Z}    (55)

By ergodicity of \bar{W}, one has:

\bar{Z}^T ( \bar{W}^{k-1} - I ) \bar{Z} \xrightarrow[k \to \infty]{} \bar{Z}^T ( \frac{1 1^T}{n} - I ) \bar{Z} = n \bar{\theta}^2 - \|\bar{Z}\|^2    (56)
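The simplification in (51)-(53) is a pure telescoping identity, so it holds for every realization of the exchange matrices, not only in expectation. A small self-contained check (our sketch; \Psi(k,i) = W_{k-1} \dots W_i with \Psi(k,k) = I, and \Phi(k,i) = \Psi(k,i) - \Psi(k,i+1), built from pairwise gossip matrices):

```python
import random

# Numerical check (our sketch) of the telescoping identity behind (51)-(53):
# the sum of the Phi(k, i) collapses to Psi(k, 1) - I for every realization
# of the exchange matrices.
def pairwise_w(n, i, j, alpha=0.5):
    """Gossip exchange matrix W = I - alpha (e_i - e_j)(e_i - e_j)^T."""
    w = [[float(r == c) for c in range(n)] for r in range(n)]
    w[i][i] -= alpha
    w[j][j] -= alpha
    w[i][j] += alpha
    w[j][i] += alpha
    return w

def matmul(a, b):
    n = len(a)
    return [[sum(a[r][t] * b[t][c] for t in range(n)) for c in range(n)]
            for r in range(n)]

def telescoping_residual(n=5, k=8, seed=1):
    rng = random.Random(seed)
    ws = []
    for _ in range(k - 1):                        # W_1, ..., W_{k-1}
        i, j = rng.sample(range(n), 2)
        ws.append(pairwise_w(n, i, j))
    eye = [[float(r == c) for c in range(n)] for r in range(n)]

    def psi(i):                                   # Psi(k, i) = W_{k-1} ... W_i
        m = eye
        for w in ws[i - 1:]:
            m = matmul(w, m)
        return m

    total = [[0.0] * n for _ in range(n)]
    for i in range(1, k):                         # sum_{i=1}^{k-1} Phi(k, i)
        a, b = psi(i), psi(i + 1)
        for r in range(n):
            for c in range(n):
                total[r][c] += a[r][c] - b[r][c]
    target = psi(1)                               # should equal Psi(k, 1) - I
    return max(abs(total[r][c] - (target[r][c] - eye[r][c]))
               for r in range(n) for c in range(n))

residual = telescoping_residual()
```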
Proof of proposition 7

E[ \| \sum_{i=1}^{k-1} \Phi(k,i) \bar{Z} \|^2 ] = E[ \| ( \sum_{i=1}^{k-1} \Phi(k,i) ) \bar{Z} \|^2 ]    (57)
= E[ \| ( \Psi(k,1) - I ) \bar{Z} \|^2 ]    (58)
= E[ \bar{Z}^T \Psi(k,1)^T \Psi(k,1) \bar{Z} ]    (59)
- 2 E[ \bar{Z}^T \Psi(k,1) \bar{Z} ] + \|\bar{Z}\|^2    (60)

Equation (59) is equivalent to the squared norm of the state vector in standard gossip averaging algorithms [16], and thus tends naturally toward the norm of the average consensus vector:

E[ \bar{Z}^T \Psi(k,1)^T \Psi(k,1) \bar{Z} ] \xrightarrow[k \to \infty]{} \| \bar{\theta} 1 \|^2 = n \bar{\theta}^2    (61)

As eq. (60) is equivalent to eq. (56), we can compute the limit of eq. (57) for an infinite k:

E[ \| \sum_{i=1}^{k-1} \Phi(k,i) \bar{Z} \|^2 ] \xrightarrow[k \to \infty]{} \|\bar{Z}\|^2 - n \bar{\theta}^2    (62)

Proof of proposition 8
This proof relies on the use of the following statement, which is a special case of the theorems proved in [22]:

Proposition 11 ([22]). Let E be either \mathbb{R} or \mathbb{C}, (u_n)_{n \in \mathbb{N}} a sequence of elements of E such that \lim_{n \to \infty} \|u_n\| = 0, and z \in E with |z| < 1. Then

\lim_{n \to +\infty} \sum_{k=1}^{n} u_k z^{n-k} = 0    (63)

\bar{W} is clearly symmetric and positive semidefinite. It implies that \bar{W} can be diagonalized by an orthogonal matrix. In the following, P_j denotes the eigenvector of \bar{W} associated with the j-th eigenvalue \lambda_j.
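Proposition 11 can be illustrated numerically. In the sketch below (ours; the choices u_k = 1/k and z = 0.9 are arbitrary examples satisfying the hypotheses), the geometric factor suppresses the early terms while the vanishing sequence suppresses the late ones:

```python
# Numerical illustration (our sketch) of proposition 11: for u_k -> 0 and
# |z| < 1, the sum_{k=1}^{n} u_k z^{n-k} vanishes as n grows, even though
# the number of terms increases.
def conv_sum(n, z=0.9):
    # u_k = 1/k is a simple sequence with u_k -> 0
    return sum((1.0 / k) * z ** (n - k) for k in range(1, n + 1))

values = [conv_sum(n) for n in (10, 100, 1000, 5000)]
# the tail dominates: for slowly varying u, conv_sum(n) behaves like
# u_n / (1 - z), which goes to 0 with u_n
```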
"
B
T
k
k−1
X
i=1
Φ(k, i)B
i
#
=
k−1
X
i=1
E
B
T
k
E
[Φ(k, i)] B
i
(64)=
k−1
X
i=1
E
B
k
T
W
k−i
− W
k−i−1
B
i
(65)=
n
X
j=1
k−1
X
i=1
λ
k−i
j
− λ
k−i−1
j
E
B
T
k
P
j
P
T
j
B
i
(66)where equation (65) results from the independan e of estimation noises and ex hangematri es. For
λ
j
= 1
,λ
k−i
j
− λ
k−i−1
j
vanishes. Ontheotherside,theovarian eterm anbeboundedbyanon-negativesequen ehavingazerolimit atinnity:
E[ B_k^T P_j P_j^T B_i ] = \sum_{u,v} ( P_j P_j^T )_{uv} C^{ki}_{uv}    (67)
\leq n^2 \max_{u,v} \max_{k > i} |C^{ki}_{uv}| \triangleq n^2 C_i    (68)

The prerequisite condition on C^{ki}_{uv} ensures that C_i converges to 0. Thus, E[ B_k^T P_j P_j^T B_i ] does likewise for every normalized vector P_j. On the other hand, |\lambda_j| < 1 whenever \lambda_j \neq 1. By splitting the summation over the eigenvalues and developing the multiplication by \lambda_j^{k-i} - \lambda_j^{k-i-1}, equation (66) can easily be expressed as a linear combination of 2(n-1) sums, each of which satisfies proposition 11. This implies that (66) goes to 0 as k grows, independently of the decrease rate of the covariances^8.
Proof of proposition 9

Let us start by rewriting equation (9):

E[ \| \sum_{i=1}^{k-1} \Phi(k,i) B_i \|^2 ] = \sum_{i=1}^{k-1} \sum_{j=1}^{k-1} E[ B_j^T \Phi(k,j)^T \Phi(k,i) B_i ]    (69)

^8 The partial sum (65) must be carefully manipulated in order to avoid divergence at large k.
As we develop the term \Phi(k,j)^T \Phi(k,i), we observe once again that the components of each B_i and B_j that are colinear to 1 (associated with eigenvalue 1) are not transmitted through \Phi(k,i):

\Phi(k,j)^T \Phi(k,i) = \Psi(k,j)^T \Psi(k,i) - \Psi(k,j+1)^T \Psi(k,i) - \Psi(k,j)^T \Psi(k,i+1) + \Psi(k,j+1)^T \Psi(k,i+1)    (70)

For the sake of simplicity, \Xi_k(i,j) denotes E[ \Psi(k,j)^T \Psi(k,i) ]. It is useful to notice that \Xi_k(i,j) can be factorized by externalizing the terms in i:

\Xi_k(i,j) = E[ \Psi(k,j)^T \Psi(k,i) ]    (71)
= E[ \Psi(k,j)^T \Psi(k,j) \Psi(j,i) ]    (72)
= E[ \Psi(k,j)^T \Psi(k,j) ] \bar{W}^{j-i}    (73)
= \Xi_k(j,j) \bar{W}^{j-i}    (74)
Following the approach of [14] helps bounding the spectral radius of \Xi_k(j,j). For any n-dimensional vector X \perp 1, the following inequalities hold:

X^T \Xi_k(j,j) X = X^T E[ \Psi(k,j)^T \Psi(k,j) ] X    (75)
= X^T E[ \Psi(k-1,j)^T W \Psi(k-1,j) ] X    (76)
\leq \lambda_2 \, X^T E[ \Psi(k-1,j)^T \Psi(k-1,j) ] X    (77)
\leq \lambda_2^{k-j} \, \|X\|^2    (78)

Using this inequality in conjunction with the Rayleigh-Ritz characterization theorem, one obtains:

\rho_\Psi(k,j) \triangleq \rho( \Xi_k(j,j) - \frac{1 1^T}{n} )    (79)
= \max_{X \perp 1, \|X\| = 1} \{ X^T E[ \Psi(k,j)^T \Psi(k,j) ] X \}    (80)
\leq \lambda_2^{k-j}    (81)
Recalling that any component of B_i colinear to 1 can be cancelled, the moduli of its coordinates on a basis of eigenvectors of \bar{W} are rescaled by a factor less than or equal to \lambda_2^{j-i} when we apply \bar{W}^{j-i}. This gives rise to the following inequality:

E[ B_j^T V_k V_k^T \bar{W}^{j-i} B_i ] = \sum_p \lambda_p^{j-i} E[ B_j^T V_k V_k^T V_p V_p^T B_i ]    (82)
\leq (n-1) n^2 \lambda_2^{j-i} \max_{u,v} |C^{ij}_{uv}|    (83)

where V_k is any unitary vector of \mathbb{R}^n, and V_p is an eigenvector of \bar{W} (and hence of \bar{W}^{j-i}) associated with the eigenvalue \lambda_p. Let S^k_{ij} be the set of eigenvalues \lambda_\Psi of \Xi_k(j,j), with associated eigenvectors V_{\lambda_\Psi}. Using this notation, E[ B_j^T \Xi_k(i,j) B_i ] can be brought into a useful form for bounding purposes:
E[ B_j^T \Xi_k(i,j) B_i ] = E[ B_j^T \Xi_k(j,j) \bar{W}^{j-i} B_i ]    (84)
\leq \sum_{\lambda_\Psi \in S^k_{ij}, \lambda_\Psi \neq 1} \lambda_\Psi \, E[ B_j^T V_{\lambda_\Psi} V_{\lambda_\Psi}^T \bar{W}^{j-i} B_i ]    (85)
\leq (n-1)^2 \rho_\Psi(k,j) \, n^2 \lambda_2^{j-i} \max_{u,v} |C^{ij}_{uv}|    (86)

Coupling equations (81) and (86), the following majorization holds:

E[ B_j^T \Xi_k(i,j) B_i ] \leq (n-1)^2 n^2 \lambda_2^{k-i} \max_{u,v} |C^{ij}_{uv}|    (87)
Let us define \xi \triangleq \sqrt{\lambda_2}, and remember that i < j: immediately, 0 \leq \lambda_2^{k-i} = \xi^{2k-2i} \leq \xi^{2k-i-j}. This inequality hence implies:

\sum_{i<j} E[ B_j^T \Xi_k(i,j) B_i ] \leq (n-1)^2 n^2 \sum_{i<j} \lambda_2^{k-i} \max_{u,v} |C^{ij}_{uv}|    (88)
\leq (n-1)^2 n^2 \sum_{i<j} \xi^{2k-i-j} \max_{u,v} |C^{ij}_{uv}| \xrightarrow[k \to \infty]{} 0    (89)
The convergence toward 0 of the right-hand side is ensured by proposition 12:

Proposition 12 ([22]). Let E be either \mathbb{R} or \mathbb{C}, (u_{ij})_{(i,j) \in \mathbb{N}^2} \in E^{\mathbb{N}^2} such that \lim_{(i+j) \to \infty} \|u_{ij}\| = 0, and z \in E with |z| < 1. Then

\lim_{n \to +\infty} \sum_{1 \leq i,j \leq n} u_{ij} z^{2n-i-j} = 0    (90)
When i = j, a classical result from [14] states that:

E[ B_i^T \Xi_k(i,i) B_i ] \leq \lambda_2^{k-i} E[ \|B_i\|^2 ]    (91)

Then, summing over i gives the diagonal limit:

0 \leq \sum_{i=1}^{k-1} E[ B_i^T \Xi_k(i,i) B_i ] \leq \sum_{i=1}^{k-1} \lambda_2^{k-i} E[ \|B_i\|^2 ] \xrightarrow[k \to \infty]{} 0    (92)

In the same way, it is easy to bound the terms with \Xi_k(i+1,j), \Xi_k(i,j+1) or \Xi_k(i+1,j+1) in equation (69). We obtain 4 sums in the spirit of the combination of equations (88) and (92) that tend to 0 as k goes to infinity.

Appendix B: Optimality of 1/2 weights

Given a weight \alpha \in [0,1] for exchanges, the transition matrix W_{ij} takes the form:

W_{ij} = I - \alpha ( e_i - e_j )( e_i - e_j )^T    (93)
Under the assumption that the W_k are i.i.d. random matrices drawn from the set of the W_{ij}, performance bounds are derived from the second largest eigenvalue of E[ W_k^T W_k ] (see [15]), i.e.:

\lambda_2^{(\alpha)} \triangleq \rho( E[ W_k^T W_k ] - \frac{1 1^T}{n} )    (94)
= \max_{X \perp 1, \|X\| = 1} X^T E[ W_k^T W_k ] X    (95)

One can try to optimize this criterion according to the parameter \alpha.
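Before optimizing over \alpha, the two algebraic facts used in the sequel can be sanity-checked numerically: W_{ij} is symmetric and doubly stochastic for any \alpha, and W_{ij}^T W_{ij} keeps the same rank-one structure with coefficient 2\alpha(1-\alpha) (the identity of equation (97)). An illustrative pure-Python sketch (ours):

```python
# Sanity checks (our sketch) on the exchange matrix of equation (93):
# W_ij = I - alpha (e_i - e_j)(e_i - e_j)^T is symmetric and doubly
# stochastic, and W_ij^T W_ij = I - 2 alpha (1 - alpha) (e_i - e_j)(e_i - e_j)^T.
def w_matrix(n, i, j, alpha):
    w = [[float(r == c) for c in range(n)] for r in range(n)]
    w[i][i] -= alpha
    w[j][j] -= alpha
    w[i][j] += alpha
    w[j][i] += alpha
    return w

def wtw_residual(n, i, j, alpha):
    w = w_matrix(n, i, j, alpha)
    wtw = [[sum(w[t][r] * w[t][c] for t in range(n)) for c in range(n)]
           for r in range(n)]
    # same rank-one structure, with coefficient 2 alpha (1 - alpha)
    ref = w_matrix(n, i, j, 2.0 * alpha * (1.0 - alpha))
    return max(abs(wtw[r][c] - ref[r][c]) for r in range(n) for c in range(n))

w = w_matrix(4, 0, 2, 0.3)
rows_ok = all(abs(sum(row) - 1.0) < 1e-12 for row in w)
cols_ok = all(abs(sum(w[r][c] for r in range(4)) - 1.0) < 1e-12 for c in range(4))
residual = wtw_residual(4, 0, 2, 0.3)
```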
Proposition 13 answers this problem simply.

Proposition 13. For peer-to-peer exchanges with fixed link probabilities and fixed exchange weight \alpha \in [0,1], \lambda_2^{(\alpha)} is optimal for \alpha = 1/2.

Proof. Let p_{ij} be the probability that, if node i is chosen as initiator, it contacts node j. Then E[ W_k^T W_k ] can easily be rewritten with the same notations as in equation (93):

E[ W_k^T W_k ] = \frac{1}{n} \sum_{i,j} p_{ij} W_{ij}^T W_{ij}    (96)

with W_{ij}^T W_{ij} = I - 2\alpha(1-\alpha) ( e_i - e_j )( e_i - e_j )^T    (97)

Then, writing \lambda_i(M) for the i-th eigenvalue of M in decreasing order, and E_{ij} \triangleq ( e_i - e_j )( e_i - e_j )^T, one has:

\lambda_2^{(\alpha)} = \rho( \frac{1}{n} \sum_{i \sim j} p_{ij} W_{ij}^T W_{ij} - \frac{1 1^T}{n} )    (98)
= \lambda_2( I - \frac{2\alpha(1-\alpha)}{n} \sum_{i \sim j} p_{ij} E_{ij} )    (99)
= 1 - 2\alpha(1-\alpha) \, \lambda_{n-1}( \frac{1}{n} \sum_{i \sim j} p_{ij} E_{ij} )    (100)
= 1 - 2\alpha(1-\alpha) \, \mu    (101)

Now, differentiating \lambda_2^{(\alpha)} with respect to the weight \alpha gives:

\frac{\partial \lambda_2^{(\alpha)}}{\partial \alpha} = ( -2 + 4\alpha ) \mu    (102)

Since E[W_k] is ergodic, \mu cannot be null. Thus the only way for \lambda_2^{(\alpha)} to have a null derivative is \alpha = 1/2. The matrix E[ W_k^T W_k ] is trivially doubly stochastic, and then \lambda_2^{(\alpha)} \leq 1 = \lambda_2^{(0)}. As a consequence, \lambda_2^{(\alpha)} is minimum for \alpha = 1/2.
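Proposition 13 can be verified numerically. The sketch below is ours (the complete graph with uniform contact probabilities p_{ij} = 1/(n-1) and the power iteration are illustrative choices, not taken from the report); it evaluates \lambda_2^{(\alpha)} on a grid and finds the minimum at \alpha = 1/2:

```python
import random

# Numerical check of proposition 13 (our sketch): on a complete graph with
# uniform contact probabilities, build M(alpha) = E[W^T W] - (1/n) 1 1^T
# using the rank-one form of equation (97), and extract its spectral radius
# lambda_2^(alpha) by power iteration.
def lambda2(n, alpha, iters=200):
    c = 2.0 * alpha * (1.0 - alpha) / (n * (n - 1))  # weight of each ordered pair
    m = [[(1.0 if r == s else 0.0) - 1.0 / n for s in range(n)]
         for r in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:                   # subtract c * (e_i - e_j)(e_i - e_j)^T
                m[i][i] -= c
                m[j][j] -= c
                m[i][j] += c
                m[j][i] += c
    rng = random.Random(0)
    x = [rng.random() for _ in range(n)]
    for _ in range(iters):               # m is symmetric positive semidefinite,
        y = [sum(m[r][s] * x[s] for s in range(n)) for r in range(n)]
        norm = max(abs(v) for v in y) or 1.0
        x = [v / norm for v in y]        # so power iteration finds its radius
    y = [sum(m[r][s] * x[s] for s in range(n)) for r in range(n)]
    return max(abs(v) for v in y)

grid = [0.1, 0.25, 0.5, 0.75, 0.9]
vals = [lambda2(6, a) for a in grid]
best = grid[vals.index(min(vals))]
# lambda_2^(alpha) = 1 - 2 alpha (1 - alpha) mu is minimized at alpha = 1/2
```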
References

[1] K. Sohraby, D. Minoli, and T. Znati, Wireless Sensor Networks: Technology, Protocols and Applications. Wiley-Interscience, 2007.

[2] P. Juang, H. Oki, Y. Wang, M. Martonosi, L. Peh, and D. Rubenstein, "Energy-efficient computing for wildlife tracking: Design tradeoffs and early experiences with ZebraNet," in ASPLOS, San Jose, CA, Oct 2002.

[3] A. Mainwaring, D. Culler, J. Polastre, R. Szewczyk, and J. Anderson, "Wireless sensor networks for habitat monitoring," in WSNA '02: Proceedings of the 1st ACM international workshop on Wireless sensor networks and applications. New York, NY, USA: ACM, 2002, pp. 88-97.

[4] A. Terzis, A. Anandarajah, K. Moore, and I.-J. Wang, "Slip surface localization in wireless sensor networks for landslide prediction," in IPSN '06: Proceedings of the fifth international conference on Information processing in sensor networks. New York, NY, USA: ACM, 2006, pp. 109-116.

[5] M. V. Ramesh, R. Raj, J. U. Freeman, S. Kumar, and P. V. Rangan, "Factors and approaches towards energy optimized wireless sensor networks to detect rainfall induced landslides," in ICWN, 2007, pp. 435-438.

[6] L. Xiao, S. Boyd, and S. Lall, "A scheme for robust distributed sensor fusion based on average consensus," in IPSN '05: Proceedings of the 4th international symposium on Information processing in sensor networks. Piscataway, NJ, USA: IEEE Press, 2005, p. 9.

[7] A. Jadbabaie, J. Lin, and A. Morse, "Coordination of groups of mobile autonomous agents using nearest neighbor rules," IEEE Transactions on Automatic Control, vol. 48, no. 6, pp. 988-1001, 2002.

[8] L. Xiao and S. Boyd, "Fast linear iterations for distributed averaging," 2003. [Online]. Available: citeseer.comp.nus.edu.sg/xiao03fast.html

[9] L. Xiao, S. Boyd, and S. Lall, "A space-time diffusion scheme for peer-to-peer least-squares estimation," in IPSN '06: Proceedings of the fifth international conference on Information processing in sensor networks. New York, NY, USA: ACM Press, 2006, pp. 168-176.

[10] N. Maréchal, J.-B. Pierrot, and J.-M. Gorce, "Fine synchronization for wireless sensor networks using gossip averaging algorithms," 2007, [Submitted to IEEE ICC'08].

[11] M. Jelasity, A. Montresor, and O. Babaoglu, "Gossip-based aggregation in large dynamic networks," ACM Trans. Comput. Syst., vol. 23, no. 3, pp. 219-252, 2005.

[12] D. Kempe, A. Dobra, and J. Gehrke, "Gossip-based computation of aggregate information," in FOCS '03, p. 482, 2003.

[13] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems," in Proc. of the IEEE, Jan 2007.

[14] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, "Gossip algorithms: Design, analysis and applications," in Proc. IEEE INFOCOM, 2005, pp. 1653-1664.

[15] ——, "Analysis and optimization of randomized gossip algorithms," in Proc. of 44th IEEE Conference on Decision and Control (CDC 2004), Dec 2004, pp. 5310-5315.

[16] ——, "Randomized gossip algorithms," IEEE/ACM Trans. Netw., vol. 14, no. SI, pp. 2508-2530, 2006.

[17] A. G. Dimakis, A. D. Sarwate, and M. J. Wainwright, "Geographic gossip: Efficient aggregation for sensor networks," 2006.

[18] M. E. Yildiz and A. Scaglione, "Differential nested lattice encoding for consensus problems," in IPSN '07: Proceedings of the 6th international conference on Information processing in sensor networks. New York, NY, USA: ACM Press, 2007, pp. 89-98.

[19] L. Xiao, S. Boyd, and S.-J. Kim, "Distributed average consensus with least-mean-square deviation," J. Parallel Distrib. Comput., vol. 67, no. 1, pp. 33-46, 2007.

[20] W. Beyn and L. Elsner, "Infinite products and paracontracting matrices," Electronic Transactions on Numerical Analysis, vol. 2, pp. 183-193, 1997. [Online]. Available: citeseer.ist.psu.edu/elsner97infinite.html

[21] E. Kokiopoulou and P. Frossard, "Polynomial Filtering for Fast Convergence in Distributed Consensus," IEEE Transactions on Signal Processing, 2008.

[22] N. Maréchal, "On the action of a convolution-like operator on spaces of multidimensional sequences having isotropic limits," 2007, [CEA internal report, available on demand].

[23] R. A. Horn and C. R. Johnson, Matrix Analysis. New York, NY, USA: Cambridge University Press, 1986.

[24] S. N. Elaydi, An Introduction to Difference Equations. Secaucus, NJ, USA: Springer-Verlag New York, Inc., 1996.

[25] C. Godsil and G. Royle, Algebraic Graph Theory, ser. Graduate Texts in Mathematics. Springer-Verlag, 2001.

[26] D. S. Scherber and H. C. Papadopoulos, "Locally constructed algorithms for distributed computations in ad-hoc networks," in IPSN '04: Proceedings of the third international symposium on Information processing in sensor networks. New York, NY, USA: ACM Press, 2004, pp. 11-19.

[27] D. L. Hall and J. Llinas, Handbook of Multisensor Data Fusion. CRC Press, 2001.

[28] D. Estrin, R. Govindan, J. Heidemann, and S. Kumar, "Next century challenges: scalable coordination in sensor networks," in MobiCom '99: Proceedings of the 5th annual ACM/IEEE international conference on Mobile computing and networking, New York, NY, USA, 1999, pp. 263-270.