Naming game:
dynamics on complex networks
A. Barrat, LPT, Université Paris-Sud, France
A. Baronchelli (La Sapienza, Rome, Italy), L. Dall’Asta (LPT, Orsay, France),
V. Loreto (La Sapienza, Rome, Italy)
http://www.th.u-psud.fr/
- Phys. Rev. E 73 (2006) 015102(R)
- Europhys. Lett. 73 (2006) 969
-Preprint (2006)
Naming game
N agents interact, negotiating how to associate a name with a given object
Agents:
- can keep different words in memory
- can communicate with each other
Example of social dynamics or agreement dynamics
Minimal naming game:
dynamical rules
At each time step:
- 2 agents, a speaker and a hearer, are randomly selected
- the speaker communicates a name to the hearer
(if the speaker has nothing in memory, as at the beginning, it invents a name)
-if the hearer already has the name in its memory: success
-else: failure
success => speaker and hearer retain the uttered word as the correct one and delete all other words from their memories
failure => the hearer adds the word uttered by the speaker to its memory
[Figure: example of the minimal rules. Failure: the speaker (memory {ARBATI, ZORGA, GRA}) utters ZORGA to a hearer (memory {REFO, TROG, ZEBU}) that does not know it; the hearer adds it, ending with {REFO, TROG, ZEBU, ZORGA}. Success: the speaker utters ZORGA to a hearer that already knows it; both keep only ZORGA and delete all other words.]
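The minimal rules above are easy to simulate. Here is a sketch in Python (not from the original work; function and variable names are mine), assuming one randomly chosen speaker/hearer pair per time step on a complete graph:

```python
import random

def naming_game(n_agents=50, max_steps=200_000, seed=1):
    """Minimal naming game on a complete graph (a sketch of the rules above)."""
    rng = random.Random(seed)
    memory = [set() for _ in range(n_agents)]
    next_word = 0                          # counter used to invent fresh names

    for step in range(1, max_steps + 1):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not memory[speaker]:            # empty memory (beginning): invent a name
            memory[speaker].add(next_word)
            next_word += 1
        word = rng.choice(sorted(memory[speaker]))
        if word in memory[hearer]:         # success: both keep only the uttered word
            memory[speaker] = {word}
            memory[hearer] = {word}
        else:                              # failure: the hearer stores the new word
            memory[hearer].add(word)
        if all(len(m) == 1 for m in memory) and len(set.union(*memory)) == 1:
            return step                    # global consensus reached
    return None

print(naming_game())
```

Running it, the population always ends up agreeing on a single word; the number of steps needed is the convergence time discussed below.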
Naming game:
other dynamical rules
[Figure: the failure/success example repeated with ordered word inventories (1.ARBATI 2.ZORGA 3.GRA, ...), where the position of a word in an agent's list matters.]
Possibility of giving weights to words, etc.
=> more complicated rules
Simplest case: complete graph

Interactions among individuals create complex networks: a population can be represented as a graph, with agents as nodes and interactions as edges.

Complete graph: a node interacts equally with all the others, the prototype of mean-field behavior.
Naming game:
example of social dynamics
Baronchelli et al. 2005 (physics/0509075)
Complete graph, N = 1024 agents
[Figure: time evolution of the total number of words (= total memory used), of the number of different words, and of the success rate; the curves show the memory peak, the building of correlations, and the convergence.]
Complete graph:
Dependence on system size
● Memory peak: t_max ∝ N^1.5 ; N_w^max ∝ N^1.5
  => average maximum memory per agent ∝ N^0.5: diverges with N
● Convergence time: t_conv ∝ N^1.5

Baronchelli et al. 2005 (physics/0509075)
Another extreme case:
agents on a regular lattice

Baronchelli et al., PRE 73 (2006) 015102(R)

Few neighbors: local consensus is reached very quickly through repeated interactions. Then:
- clusters of agents sharing the same unique word start to grow,
- at the interfaces, series of successful and unsuccessful interactions take place
=> coarsening phenomena (slow!)
[Figure: N = 1000 agents. N_w = total number of words, N_d = number of distinct words, R = success rate, for MF (= complete graph) and for agents on 1d and 2d regular lattices.]
Regular lattice:
Dependence on system size
● Memory peak: t_max ∝ N ; N_w^max ∝ N
  => average maximum memory per agent: finite!
● Convergence by coarsening: power-law decrease of N_w/N towards 1
● Convergence time: t_conv ∝ N^3 => slow process!
  (in d dimensions: t_conv ∝ N^{1+2/d})
Two extreme cases

                   Complete graph   dimension 1
maximum memory     ∝ N^1.5          ∝ N
convergence time   ∝ N^1.5          ∝ N^3
Naming Game on a Small-world
Watts & Strogatz,
Nature 393, 440 (1998)
N = 1000
• Large clustering coefficient
• Short typical path length

N nodes form a regular lattice; with probability p, each edge is rewired randomly
=> shortcuts
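The rewiring procedure can be sketched as follows (a minimal implementation, assuming k even; names are mine, not from Watts & Strogatz):

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Watts-Strogatz small-world construction (sketch).

    Start from a ring where each node is linked to its k nearest
    neighbours (k even), then rewire each link with probability p:
    the rewired links are the shortcuts."""
    rng = random.Random(seed)
    ring = [(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)]
    edges = set()
    for (u, v) in ring:
        if rng.random() < p:               # rewire: keep u, redraw the endpoint
            w = rng.randrange(n)
            while w == u or (u, w) in edges or (w, u) in edges:
                w = rng.randrange(n)
            edges.add((u, w))
        else:
            edges.add((u, v))
    return edges

g = watts_strogatz(n=1000, k=4, p=0.1)
print(len(g))
```

With p = 0 this is the regular ring; for p ≫ 1/N there are O(pNk/2) shortcuts, at typical distance O(1/p) along the ring.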
1D topology + random shortcuts (rewiring probability p)

A (dynamical) crossover is expected:
● at short times, the local 1D topology implies (slow) coarsening
● the distance between two shortcuts is O(1/p), so when a cluster reaches a size of order 1/p the mean-field behavior emerges
Dall'Asta et al., EPL 73 (2006) 969
Naming Game on a small-world
increasing p:
- p = 0: linear chain
- p ≫ 1/N: small-world
Naming Game on a small-world
convergence time: t_conv ∝ N^1.4
maximum memory: ∝ N

                   Complete graph   dimension 1   small-world
maximum memory     ∝ N^1.5          ∝ N           ∝ N
convergence time   ∝ N^1.5          ∝ N^3         ∝ N^1.5
What about other types of networks? Better not to have all-to-all communication, nor too regular a network structure.
1. Usual random graphs: Erdős–Rényi model (1960)
N points, links drawn independently with probability p (p = O(1/N)):
static random graphs with a Poisson degree distribution
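A minimal sketch of the G(N, p) construction (illustrative code, not from the slides; the function name is mine), checking that the mean degree comes out close to (N-1)p:

```python
import random

def erdos_renyi_degrees(n, p, seed=0):
    """Degrees of an Erdős–Rényi G(n, p) graph: each of the n(n-1)/2
    possible links is present independently with probability p."""
    rng = random.Random(seed)
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                degree[i] += 1
                degree[j] += 1
    return degree

deg = erdos_renyi_degrees(n=2000, p=5 / 2000)   # average degree <k> ≈ 5
print(sum(deg) / len(deg))
```

The resulting degrees are narrowly (Poisson-)distributed around <k> = (N-1)p: a homogeneous network.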
Networks:
Homogeneous and heterogeneous

2. Scale-free graphs: Barabási–Albert (BA) model
(1) GROWTH: at every timestep a new node is added, with m edges connected to the nodes already present in the system.
(2) PREFERENTIAL ATTACHMENT: the probability Π that the new node is connected to node i depends on the connectivity k_i of that node: Π(k_i) ∝ k_i
=> P(k) ~ k^-3

A.-L. Barabási, R. Albert, Science 286, 509 (1999)
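The growth plus preferential-attachment rule can be sketched like this (a common implementation trick, not the authors' code; the bootstrap of the initial m-node core is my own simplification):

```python
import random

def barabasi_albert_degrees(n, m, seed=0):
    """Barabási–Albert growth (sketch).  Preferential attachment is
    implemented by sampling from a list in which every node appears
    once per edge endpoint, so a node is drawn with probability ∝ k_i."""
    rng = random.Random(seed)
    degree = [0] * n
    endpoints = list(range(m))             # small initial core to bootstrap
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:             # m distinct targets, drawn ∝ degree
            chosen.add(rng.choice(endpoints))
        for target in chosen:
            degree[new] += 1
            degree[target] += 1
            endpoints.extend((new, target))
    return degree

deg = barabasi_albert_degrees(n=2000, m=3)
print(max(deg))                            # hubs: heavy-tailed degree distribution
```

Unlike the ER case, the maximum degree grows with N (as ~N^0.5): the network is heterogeneous, with hubs.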
Definition of the Naming Game on heterogeneous networks
recall original definition of the model:
select a speaker and a hearer at random among all nodes
=>various interpretations once on a network:
-select first a speaker i and then a hearer among i’s neighbours -select first a hearer i and then a speaker among i’s neighbours
-select a link at random and its 2 extremities at random as hearer and speaker
can be important in heterogeneous networks because:
-a randomly chosen node has typically small degree
-the neighbour of a randomly chosen node has typically large degree
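The three selection rules can be written down directly (a sketch; the strategy names and the star-graph demo are mine):

```python
import random

def pick_pair(adj, strategy, rng):
    """The three ways of choosing (speaker, hearer) on a network.

    adj is an adjacency list; strategies: 'speaker-first',
    'hearer-first', or 'link'."""
    n = len(adj)
    if strategy == "speaker-first":        # random node speaks to a random neighbour
        s = rng.randrange(n)
        h = rng.choice(adj[s])
    elif strategy == "hearer-first":       # random node hears a random neighbour
        h = rng.randrange(n)
        s = rng.choice(adj[h])
    else:                                  # random link, extremities in random order
        links = [(i, j) for i in range(n) for j in adj[i] if i < j]
        s, h = rng.sample(rng.choice(links), 2)
    return s, h

# On a star graph the hub is the typical neighbour, so the rules differ:
adj = [[1, 2, 3, 4], [0], [0], [0], [0]]   # node 0 is the hub
rng = random.Random(0)
for strategy in ("speaker-first", "hearer-first", "link"):
    hub_hears = sum(pick_pair(adj, strategy, rng)[1] == 0 for _ in range(10_000))
    print(strategy, hub_hears / 10_000)
```

On the star, "speaker first" makes the hub the hearer most of the time, while "hearer first" makes it the speaker: exactly the asymmetry that matters once hubs are present.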
NG on heterogeneous networks

Example: agents on a BA network. The different interpretations lead to different behaviours, which shows the importance of understanding the role of the hubs!

- Speaker first: hubs accumulate more words
- Hearer first: hubs have fewer words and "polarize" the system, hence a faster dynamics
NG on homogeneous and heterogeneous networks
- Long reorganization phase with creation of correlations, at almost constant N_w and decreasing N_d
- Similar behaviour for BA and ER networks
NG on complex networks:
dependence on system size
● Memory peak: t_max ∝ N ; N_w^max ∝ N
  => average maximum memory per agent: finite!
● Convergence time: t_conv ∝ N^1.5
Effects of average degree
Larger <k>:
● larger memory,
● faster convergence

Larger clustering:
● smaller memory,
● slower convergence