
Event-Scheduling Algorithms with Kalikow Decomposition for Simulating Potentially Infinite Neuronal Networks


HAL Id: hal-02321497

https://hal.archives-ouvertes.fr/hal-02321497

Submitted on 30 Nov 2020


To cite this version:

Tien Cuong Phi, Alexandre Muzy, Patricia Reynaud-Bouret. Event-Scheduling Algorithms with Kalikow Decomposition for Simulating Potentially Infinite Neuronal Networks. SN Computer Science, Springer, 2020, 1 (1), 10.1007/s42979-019-0039-3. hal-02321497


arXiv:1910.10576v1 [stat.CO] 23 Oct 2019

Event-Scheduling Algorithms with Kalikow Decomposition for Simulating Potentially Infinite Neuronal Networks

T.C. Phi† and A. Muzy‡ and P. Reynaud-Bouret†

Abstract: Event-scheduling algorithms can compute in continuous time the next occurrence of points (as events) of a counting process based on their current conditional intensity. In particular, event-scheduling algorithms can be adapted to perform the simulation of finite neuronal network activity. These algorithms are based on Ogata's thinning strategy [17], which always needs to simulate the whole network to access the behaviour of one particular neuron of the network. On the other hand, for discrete time models, theoretical algorithms based on the Kalikow decomposition can pick influencing neurons at random and perform a perfect simulation (meaning without approximations) of the behaviour of one given neuron embedded in an infinite network, at every time step. These algorithms are currently not computationally tractable in continuous time. To solve this problem, an event-scheduling algorithm with Kalikow decomposition is proposed here for the sequential simulation of point process neuronal models satisfying this decomposition. This new algorithm is applied to infinite neuronal networks whose finite time simulation is a prerequisite to realistic brain modeling.


Keywords and phrases: Kalikow decomposition, Discrete event simulation, Point process, Infinite neuronal networks.

1. Introduction

Point processes in time are stochastic objects that efficiently model event occurrences, with a huge variety of applications: times of death, earthquake occurrences, gene positions on a DNA strand, etc. [1, 22, 20].

Most of the time, point processes are multivariate [7], in the sense that either several processes are considered at the same time, or one process regroups together all the events of the different processes and marks them by their type. A typical example consists in considering either two processes, one counting the wedding events of a given person and one counting the birth dates of that person's children, or only one marked process which regroups all the possible dates of weddings or births and adds one mark per point, here wedding or birth.

Consider now a network of neurons, each of them emitting action potentials (spikes). These spike trains can be modeled by a multivariate point process with a potentially infinite number of marks, each mark representing one given neuron. The main difference between classical models of multivariate point processes and the ones considered in particular for neuronal networks is the size of the network. A human brain consists of about 10^11 neurons, whereas a cockroach already contains about 10^6 neurons. Therefore the simulation of the whole network is either impossible or a very difficult and computationally intensive task, for which particular tricks depending on the shape of the network or of the point processes have to be used [19, 6, 14].


† Université Côte d'Azur, CNRS, LJAD, France. ‡ Université Côte d'Azur, CNRS, I3S, France.


Another point of view, which is the one considered here, is to simulate not the whole network but the events of one particular node or neuron, embedded in and interacting with the whole network. In this sense, one might consider an infinite network. This is the mathematical point of view considered in a series of papers [9, 10, 18], based on the Kalikow decomposition [12] coupled with perfect simulation theoretical algorithms [4, 8]. However, these works are set in discrete time and only provide a way to decide, at each time step, whether the neuron is spiking or not. They cannot operate in continuous time, i.e. they cannot directly predict the next event (or spike). To our knowledge, there exists only one attempt to use such a decomposition in continuous time [11], but the corresponding simulation algorithm is purely theoretical, in the sense that the corresponding conditional Kalikow decomposition should exist given the whole infinite realization of a multivariate Poisson process with an infinite number of marks, a quantity which is impossible to simulate in practice.

The aim of the present work is to present an algorithm which

• can operate in continuous time in the sense that it can predict the occurrence of the next event. In this sense, it is an event-scheduling algorithm;

• can simulate the behavior of one particular neuron embedded in a potentially infinite network without having to simulate the whole network;

• is based on an unconditional Kalikow decomposition and in this sense, can only work for point processes with this decomposition.

In Section 2, we specify the links between event-scheduling algorithms and the classical point process theory. In Section 3, we give the Kalikow decomposition. In Section 4, we present the backward-forward perfect simulation algorithm and prove why it almost surely ends under certain conditions. In Section 5, we provide simulation results and a conclusion is given in Section 6.

2. Event-scheduling simulation of point processes

On the one hand, simulation algorithms for multivariate point processes [17] are quite well known in the statistical community but, as far as we know, remain quite confidential in the simulation (computer scientist) community. On the other hand, event-scheduling simulation first appeared in the mid-1960s [21] and was formalized as discrete event systems in the mid-1970s [23] to interpret very general simulation algorithms scheduling "next events". A basic event-scheduling algorithm "jumps" from one event occurring at a time stamp t ∈ R₀⁺ to a next event occurring at a next time stamp t′ ∈ R₀⁺, with t′ ≥ t. In a discrete event system, the state of the system is considered as changing at times t, t′ and, conversely, unchanging in between [24]. In [14], we have described the main equivalence between the point process simulation algorithms and the discrete event simulation set-up, which led us to a significant improvement in terms of computational time when huge but finite networks are in play. Usual event-scheduling simulation algorithms have been developed considering the components (nodes) of a system independently. Our approach considers new algorithms for activity tracking simulation [5]. The event activity is tracked from active nodes to children (influencees).

Here we just recall the main ingredients that are useful for the sequel.

To define point processes, we need a filtration or history (F_t)_{t≥0}. Most of the time, and this will be the case here, this filtration (or history) is restricted to the internal history of the multivariate process, (F^{int}_t)_{t≥0}, which means that at time t−, i.e. just before time t, we only have access to the events that have already occurred in the past strictly before time t, in the whole network. The conditional intensity φ_i(t | F^{int}_{t−}) of neuron i then gives its firing rate, that is, the frequency of spikes, given the past contained in F^{int}_{t−}. Let us just mention two very famous examples.

If φ_i(t | F^{int}_{t−}) is a deterministic constant, say M, then the spikes of neuron i form a homogeneous Poisson process with intensity M. The occurrences of spikes are completely independent from what occurs elsewhere in the network and from the previous occurrences of spikes of neuron i.
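As an aside, this first example is straightforward to simulate directly: the gaps between successive spikes of a homogeneous Poisson process with intensity M are i.i.d. exponential variables with parameter M. A minimal Python sketch (ours, not part of the paper; the function name is hypothetical):

import random

def homogeneous_poisson(rate, t0, t1, rng=random.Random(0)):
    """Simulate a homogeneous Poisson process with the given rate on [t0, t1]."""
    points, t = [], t0
    while True:
        t += rng.expovariate(rate)  # i.i.d. exponential inter-spike gap with parameter `rate`
        if t > t1:
            return points
        points.append(t)

print(homogeneous_poisson(rate=2.0, t0=0.0, t1=10.0))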

If we denote by I the set of neurons, we can also envision the following form for the conditional intensity:

φ_i(t | F^{int}_{t−}) = ν_i + ∑_{j∈I} w_{ij} ( N^j_{[t−A,t)} ∧ M ).    (2.1)

This is a particular case of generalized Hawkes processes [3]. More precisely, ν_i is the spontaneous rate of neuron i (assumed to be less than the deterministic upper bound M > 1). Then every neuron in the network can excite neuron i: more precisely, one counts the number of spikes that have been produced by neuron j just before t, in a window of length A; this is N^j_{[t−A,t)}. We clip it by the upper bound M and modulate its contribution to the intensity by the positive synaptic weight between neuron i and neuron j, w_{ij}. For instance, if there is only one spike in the whole network just before time t, and if this happens on neuron j, then the intensity for neuron i becomes ν_i + w_{ij}. The sum over all neurons j mimics the synaptic integration that takes place at neuron i. As a renormalization constraint, we assume that sup_{i∈I} ∑_{j∈I} w_{ij} < 1. This ensures in particular that such a process always has a conditional intensity bounded by 2M.
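To fix ideas, here is a small Python sketch (ours, not the authors' code) of how the conditional intensity (2.1) can be evaluated from a finite spike history; the container names (history, nu, w) are assumptions of this illustration, and missing weights default to 0 so that the infinite sum reduces to the finitely many listed terms.

def hawkes_intensity(i, t, history, nu, w, A, M):
    """Evaluate (2.1): phi_i(t) = nu_i + sum_j w_ij * min(N^j_{[t-A,t)}, M).

    history: dict neuron -> list of past spike times; nu: dict neuron -> spontaneous rate;
    w: dict (i, j) -> synaptic weight (0 if absent)."""
    total = nu[i]
    for j, spikes in history.items():
        count = sum(1 for s in spikes if t - A <= s < t)  # N^j_{[t-A, t)}
        total += w.get((i, j), 0.0) * min(count, M)       # clipped, weighted contribution
    return total

# One recent spike on neuron 1 raises neuron 0's rate from 0.5 to 0.5 + 0.3 = 0.8.
print(hawkes_intensity(0, t=1.0, history={1: [0.8], 2: []},
                       nu={0: 0.5}, w={(0, 1): 0.3}, A=1.0, M=5))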

Hence, starting for instance at time t, and given the whole past, one can compute the next event in the network by computing, for each node of the network, the next event in the absence of any other spike occurrence. To do so, remark that in the absence of any other spike occurrence, the quantity φ_i(s | F^{int}_{s−}) for s > t becomes, for instance in the previous example,

φ^{abs}_i(s, t) = ν_i + ∑_{j∈I} w_{ij} ( N^j_{[s−A,t)} ∧ M ),

meaning that we do not count the spikes that may occur after t but before s. This reasoning can be generalized beyond this particular example.

Algorithm 1 Classical point process simulation algorithm
⊲ With [t0, t1] the interval of simulation
1: Initialize the family of points P = ∅
⊲ Each point is a time T with a mark, jT, which is the neuron on which T appears
2: Initialize t ← t0
3: repeat
4:   for each neuron i ∈ I do
5:     Draw independently an exponential variable Ei with parameter 1
6:     Apply the inverse transformation, that is, find Ti such that ∫_t^{Ti} φ^{abs}_i(s, t) ds = Ei
7:   end for
8:   Compute the time T of the next spike of the system after t, and the neuron where the spike occurs, by T ← min_{i∈I} Ti, with jT ← argmin_{i∈I} Ti
9:   if T ≤ t1 then
10:    append T with mark jT to P
11:  end if
12:  t ← T
13: until t > t1

Note that the quantity φ^{abs}_i(s, t) can also be seen as the hazard rate of the next potential point T_i^{(1)} after t. This is a discrete event approach, with the state corresponding to the function φ^{abs}_i(·, t).
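For the example (2.1), φ^{abs}_i(s, t) is piecewise constant in s: it only changes when a past spike leaves the sliding window [s − A, t). The inversion of Steps 5-6 can therefore be done exactly by walking through these breakpoints. A possible sketch (ours, with the same hypothetical containers as above):

def phi_abs(i, s, t, history, nu, w, A, M):
    """phi_i^abs(s, t) = nu_i + sum_j w_ij * min(N^j_{[s-A, t)}, M)."""
    total = nu[i]
    for j, spikes in history.items():
        count = sum(1 for T in spikes if s - A <= T < t)
        total += w.get((i, j), 0.0) * min(count, M)
    return total

def next_point_by_inversion(i, t, history, nu, w, A, M, E):
    """Find T_i > t such that the integral of phi_i^abs(., t) over [t, T_i] equals E (Steps 5-6)."""
    # Breakpoints where the intensity changes: s = T + A when a past spike T leaves [s - A, t).
    breakpoints = sorted({T + A for spikes in history.values()
                          for T in spikes if t - A <= T < t})
    area, s = 0.0, t
    for b in breakpoints + [float("inf")]:
        rate = phi_abs(i, s, t, history, nu, w, A, M)     # constant on [s, b)
        if rate > 0.0 and E <= area + rate * (b - s):
            return s + (E - area) / rate                  # the target lies on this flat piece
        area += rate * (b - s)
        s = b
    return float("inf")                                   # only reached if the residual rate nu_i is 0

# With one spike of neuron 1 at 0.8 and E = 1.0, the next point of neuron 0 falls at about 2.52.
print(next_point_by_inversion(0, t=1.0, history={1: [0.8]}, nu={0: 0.5},
                              w={(0, 1): 0.3}, A=1.0, M=5, E=1.0))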

Ogata [17], inspired by Lewis' algorithm [13], added a thinning (also called rejection) step on top of this procedure, because the integral ∫_t^{T_i^{(1)}} φ^{abs}_i(s, t) ds can be very difficult to compute. To do so (and simplifying a bit), assume that φ_i(t | F^{int}_{t−}) is upper bounded by a deterministic constant M. This means that the point process always has fewer points than a homogeneous Poisson process with intensity M. Therefore Steps 5-6 of Algorithm 1 can be replaced by the generation of an exponential variable E′_i with parameter M and deciding whether we accept or reject the point with probability φ^{abs}_i(t + E′_i, t)/M. There are a lot of variants of this procedure: Ogata's original one actually uses the fact that the minimum of exponential variables is still an exponential variable. Therefore one can propose a next point for the whole system, then accept it for the whole system, and then decide on which neuron of the network the event is actually appearing. More details on the multivariate Ogata's algorithm can be found in [14].

As we see here, Ogata's algorithm is very general but clearly needs to simulate the whole system to simulate only one neuron. Moreover, starting at time t0, it does not go backward and therefore cannot simulate a Hawkes process in the stationary regime. There have been specific algorithms, based on cluster representations, that aim at perfectly simulating particular univariate Hawkes processes [16]. The algorithm that we propose here will also overcome this flaw.

3. Kalikow decomposition

The Kalikow decomposition relies on the concept of neighborhood, denoted by v, which is picked at random and which gives the portion of time and the subset of neurons that we need to look at to move forward. Typically, for a positive constant A, such a v can be:

• {(i, [−A, 0))}, that is, we need the spikes of neuron i in the window [−A, 0);
• {(i, [−2A, 0)), (j, [−2A, −A))}, that is, we need the spikes of neuron i in the window [−2A, 0) and the spikes of neuron j in the window [−2A, −A);
• the empty set ∅, meaning that we do not need to look at anything to pursue.

We also need to define l(v), the total time length of the neighborhood v, whatever the neuron is. For instance, in the first case we find l(v) = A, in the second l(v) = 3A and in the third l(v) = 0.

We are only interested in stationary processes, for which the conditional intensity φ_i(t | F^{int}_{t−}) only depends on the intrinsic distance between the previous points and the time t, and not on the precise value of t per se. In this sense, the rule to compute the intensity may be defined at time 0 only and then shifted by t to obtain the conditional intensity at time t. In the same way, the timeline of a neighborhood v is defined as a subset of the negative half-line (−∞, 0), so that the information contained in the neighborhood is included in F^{int}_{0−}, and v can be shifted (meaning its timeline is shifted) at position t if need be.

We assume that the set of neurons I is countable and that we have a countable set V of possible neighborhoods.

Then, we say that the process admits a Kalikow decomposition with bound M and neighborhood family V if, for any neuron i ∈ I and for all v ∈ V, there exists a non-negative M-bounded quantity φ^v_i, which is F^{int}_{0−}-measurable and whose value only depends on the points appearing in the neighborhood v, and a probability density function λ_i(·) such that

φ_i(0 | F^{int}_{0−}) = λ_i(∅) φ^∅_i + ∑_{v∈V, v≠∅} λ_i(v) φ^v_i    (3.1)

with λ_i(∅) + ∑_{v∈V, v≠∅} λ_i(v) = 1.

Note that, because of the stationarity assumptions, the rule to compute the φ^v_i's can be shifted at time t, which leads to a predictable function, which we call φ^{v_t}_i(t), that only depends on what is inside v_t, the neighborhood v shifted by t. Note also that φ^∅_i, because it depends on what happens in an empty neighborhood, is a pure constant.

The interpretation of (3.1) is tricky and is not as straightforward as in the discrete case (see

[18]). The best way to understand it is to give the theoretical algorithm for simulating the next

event on neuron i after time t (cf. Algorithm 2).

Algorithm 2 Kalikow theoretical simulation algorithm
⊲ With [t0, t1] the interval of simulation for neuron i
1: Initialize the family of points P = ∅
⊲ NB: since we are only interested in points on neuron i, jT = i is a useless mark here
2: Initialize t ← t0
3: repeat
4:   Draw an exponential variable E with parameter M, and compute T = t + E
5:   Pick a random neighborhood according to the distribution λi(·) given in the Kalikow decomposition and shift the neighborhood at time T: this is VT
6:   Draw XT, a Bernoulli variable with parameter φ^{VT}_i(T)/M
7:   if XT = 1 and T ≤ t1 then
8:     append T to P
9:   end if
10:  t ← T
11: until t > t1
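A possible Python transcription of Algorithm 2 (our sketch). The toy decomposition below only looks at the simulated neuron's own recent past, so φ^{V_T} can actually be evaluated; in general V_T involves other neurons whose points are unknown at this stage, which is precisely the issue addressed in the next section.

import random

def kalikow_simulation(t0, t1, M, draw_neighborhood, phi_v, rng=random.Random(2)):
    """Algorithm 2: schedule candidates at rate M, draw a neighborhood v ~ lambda_i,
    then accept the candidate T with probability phi_i^v(T) / M."""
    P, t = [], t0
    while True:
        T = t + rng.expovariate(M)                 # Step 4
        if T > t1:
            return P
        v = draw_neighborhood(rng)                 # Step 5: v ~ lambda_i, shifted at T
        if rng.random() < phi_v(v, T, P) / M:      # Step 6: thinning with the chosen v
            P.append(T)                            # Steps 7-9
        t = T                                      # Step 10

# Toy decomposition: empty neighborhood with probability 0.6 (constant rate 0.5); otherwise
# look at the process' own last time unit and use rate 1 if an accepted point lies there.
draw_v = lambda rng: None if rng.random() < 0.6 else ("self", 1.0)
phi_v = lambda v, T, P: 0.5 if v is None else (1.0 if any(T - v[1] <= s < T for s in P) else 0.0)
print(len(kalikow_simulation(0.0, 100.0, M=2.0, draw_neighborhood=draw_v, phi_v=phi_v)))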


This algorithm is close to Algorithm 1 but adds a neighborhood choice (Step 5) with a thinning step (Steps 6-9).

In Appendix A, we prove that this algorithm indeed provides a point process with an intensity given by (3.1) shifted at time t.

The previous algorithm cannot be put into practice because the computation of φ^{VT}_i depends on the points in VT, which are not known at this stage. That is why the efficient algorithm that we propose in the next section goes backward in time before moving forward.

4. Backward Forward algorithm

Let us now describe the complete Backward Forward algorithm (cf. Algorithm 3). Note that, to do so, the set of points P is not reduced, as in the two previous algorithms, to the set of points that we want to simulate: it contains all the points that need to be simulated to perform the task.

Algorithm 3 Backward Forward Algorithm
⊲ With [t0, t1] the interval of simulation for neuron i ∈ I
1: Initialize the family V of non-empty neighborhoods with {(i, [t0, t1])}
2: Initialize the family of points P = ∅
⊲ Each point is a time T with 3 marks: jT, the neuron on which T appears; VT, the choice of neighborhood; XT, the thinning step (accepted/rejected)
3: Draw E an exponential variable with parameter M
4: Schedule Tnext = t0 + E
5: while Tnext < t1 do
6:   Append to P the point Tnext with 3 marks: jTnext = i, VTnext = n.a. and XTnext = n.a. (n.a. stands for "not assigned yet")
⊲ Backward part
7:   while there are points T in P with VT = n.a. do
8:     for each point T in P with VT = n.a. do
9:       Update VT by drawing VT according to λ_{jT} shifted at time T
10:      if VT ≠ ∅ then
11:        Find the portion of time/neurons in VT which does not intersect the existing non-empty neighborhoods in V
12:        Simulate on it a Poisson process with rate M
13:        Append the simulated points T′, if any, to P with their neuron jT′ and with VT′ = XT′ = n.a.
14:        Append VT to V
15:      end if
16:    end for
17:  end while
⊲ Forward part
18:  Sort the T's in P with XT = n.a. in increasing order
19:  for each of them, starting with the most backward, do
20:    Draw XT as a Bernoulli variable with parameter φ^{VT}_{jT}(T)/M
21:  end for
22:  Draw E′, another exponential variable with parameter M
23:  Tnext ← Tnext + E′
24: end while
25: The desired points are the points in P with marks jT = i and XT = 1 that appear in [t0, t1]
⊲ It is possible that the algorithm generated points before t0; they have to be removed

The backward part of the algorithm picks at random all the points that may influence the thinning step. The fact that this loop ends comes from the following proposition.

Proposition 1. If

sup_{i∈I} ∑_{v∈V} λ_i(v) l(v) M < 1,    (4.1)

then the backward part of Algorithm 3 ends almost surely in finite time.

The proof is postponed to Appendix B. It is based on branching process arguments. Basically, if in Steps 8-16 we produce on average less than one point, either because we picked the empty set for VT or because the simulation of the Poisson process ended up with a small number of points, possibly none, then the loop ends almost surely because there is an extinction of the corresponding branching process.
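This extinction argument can be illustrated numerically. In the simplified situation where a point has no child with probability λ∅ and otherwise spawns a Poisson(lM) number of children (a single neighborhood type of length l, an assumption of this sketch), the total size of the backward tree stays small as soon as (1 − λ∅) l M < 1:

import random

def backward_tree_size(lam_empty, l, M, rng, max_points=10**6):
    """Total size of a branching tree: each node has no child with probability lam_empty,
    otherwise a Poisson(l * M) number of children (points of a rate-M process on length l)."""
    total, frontier = 1, 1
    while frontier and total < max_points:         # max_points is only a safety cap
        children = 0
        for _ in range(frontier):
            if rng.random() >= lam_empty:
                s, k = rng.expovariate(M), 0       # count rate-M exponential gaps fitting in l
                while s <= l:
                    k += 1
                    s += rng.expovariate(M)
                children += k
        total += children
        frontier = children
    return total

rng = random.Random(3)
# Mean offspring = (1 - lam_empty) * l * M = 0.75 * 0.45 * 2 = 0.675 < 1: trees die out quickly.
sizes = [backward_tree_size(lam_empty=0.25, l=0.45, M=2.0, rng=rng) for _ in range(1000)]
print(max(sizes), sum(sizes) / len(sizes))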

In the backward part, one of the most delicate points consists in making sure that we add new points only if we have not visited this portion of time/neurons before (see Steps 11-13). If we did not make this verification, we might not have the same past depending on the neuron we are looking at, and the procedure would not simulate the process we want.
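Concretely, Step 11 amounts to subtracting, neuron by neuron, the union of already-explored time intervals from the new neighborhood before simulating the Poisson process on what remains. A possible helper (our sketch, with intervals encoded as (start, end) pairs):

def uncovered_part(new_interval, covered):
    """Return the pieces of new_interval = (a, b) not covered by any interval in `covered`,
    so that the rate-M Poisson process is only simulated on never-visited time portions."""
    pieces = [new_interval]
    for (c, d) in covered:
        next_pieces = []
        for (x, y) in pieces:
            if d <= x or y <= c:              # no overlap: keep the piece as is
                next_pieces.append((x, y))
            else:                             # overlap: keep what sticks out on each side
                if x < c:
                    next_pieces.append((x, c))
                if d < y:
                    next_pieces.append((d, y))
        pieces = next_pieces
    return pieces

# A new window [-2, 0) on some neuron, of which [-1.5, -0.5) was already explored.
print(uncovered_part((-2.0, 0.0), [(-1.5, -0.5)]))   # [(-2.0, -1.5), (-0.5, 0.0)]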

In the forward part, because the backward part has just stopped, we are first sure to have assessed all the VT's. Since φ^{v_t}_j(t) is F^{int}_{t−}-measurable for all t, φ^{VT}_{jT}(T) only depends on the points of P with mark X = 1 that fall inside VT. The problem in Algorithm 2, phrased differently, is that we do not know the marks XT of the previous points when we have to compute φ^{VT}_{jT}(T). But in the forward part of Algorithm 3, we are sure that the most backward point T for which the thinning has not yet taken place (XT = n.a.) satisfies

• either VT = ∅,
• or VT ≠ ∅, but then either there are no simulated points in the corresponding VT, or the points there come from previous rounds of the loop (Step 5) and therefore their marks have already been assigned.

Therefore, with the Backward Forward algorithm, and in contrast to Algorithm 2, we take the points in an order for which we are sure that we know the previously needed marks.

Figure 1 describes an example to go step by step through Algorithm 3. The backward steps determine all the points that may influence the acceptation/rejection of the point Tnext. Notice that, whereas usual activity tracking algorithms for point processes [14] automatically detect the active children (influencees), activity tracking in the backward steps detects the parents (influencers).


Fig 1: Main flow example for Algorithm 3, with backward steps in green (cf. Algorithm 3, Steps 7-17) and forward steps in purple (cf. Algorithm 3, Steps 18-25). Following the arrow numbers: (1) the next point Tnext = t + E (cf. Algorithm 3, Step 4) is scheduled; (2) the neighborhood VTnext is selected in the first backward step, and a first generation of three points (a, b on neuron k and c on neuron ℓ) is drawn (cf. Algorithm 3, Step 9) thanks to a Poisson process (cf. Algorithm 3, Steps 11-12) and appended to P (cf. Algorithm 3, Step 13); (3) at the second generation, a non-empty neighborhood is found, i.e. Vb ≠ ∅ (cf. Algorithm 3, Steps 9-10), but the Poisson process simulation does not give any point in it (cf. Algorithm 3, Step 12); (4) at the second generation, the neighborhood Va is picked; it is not empty and overlaps the neighborhood of the first generation (cf. Algorithm 3, Steps 9-11): therefore there is no new simulation in the overlap (c is kept and belongs to Vb as well as Va), but there is a new simulation, thanks to a Poisson process, outside of the overlap, leading to a new point d (cf. Algorithm 3, Step 12); (5) at the second generation, for point c, one picks the empty neighborhood, i.e. Vc = ∅ (cf. Algorithm 3, Step 9), and therefore we do not simulate any Poisson process; (6) at the third generation, similarly, no point and no interval are generated, i.e. Vd = ∅ (cf. Algorithm 3, Step 9); this is the end of the backward steps and the beginning of the forward ones; (7) the point d is not selected, acceptation/selection taking place with probability φ^∅_ℓ/M (cf. Algorithm 3, Step 20); (8) the point c is accepted, here again with probability φ^∅_ℓ/M (cf. Algorithm 3, Step 20); (9) the point b is not selected, acceptation taking place, here, with probability φ^{Vb}_k(b)/M (cf. Algorithm 3, Step 20); (10) the point a is selected, acceptation taking place, here, with probability φ^{Va}_k(a)/M (cf. Algorithm 3, Step 20); (11) the neighborhood of neuron i contains two points, one on neuron k and one on neuron ℓ, and one selects Tnext with probability φ^{VTnext}_i(Tnext)/M.


5. Illustration

To illustrate the algorithm in practice, we have simulated a Hawkes process as given in (2.1). Indeed, such a process has a Kalikow decomposition (3.1) with bound M and neighborhood family V constituted of the v's of the form v = {(j, [−A, 0))} for some neuron j in I. To do that, we need the following choices:

λ_i(∅) = 1 − ∑_{j∈I} w_{ij}   and   φ^∅_i = ν_i / λ_i(∅),

and, for v of the form v = {(j, [−A, 0))} for some neuron j in I,

λ_i(v) = w_{ij}   and   φ^v_i = N^j_{[−A,0)} ∧ M.
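This identity can be checked numerically on a toy history: averaging φ^v over the choice v ~ λ_i gives back the Hawkes intensity (2.1) at time 0. A small sketch (ours, with a finite set of weights standing in for the infinite sum over I = Z²):

# Check that lambda_i(empty) * phi_i^empty + sum_v lambda_i(v) * phi_i^v equals (2.1) at t = 0.
nu_i, M, A = 0.3, 2, 1.0
w_i = {(1, 1): 0.2, (0, 1): 0.15, (1, 0): 0.1}        # weights w_{ij} for a few neurons j of Z^2
history = {(1, 1): [-0.4, -0.2], (0, 1): [-2.0]}       # spike times of each neuron before 0

lam_empty = 1 - sum(w_i.values())
phi_empty = nu_i / lam_empty

def phi_v(j):                                          # phi_i^v for v = {(j, [-A, 0))}
    return min(sum(1 for s in history.get(j, []) if -A <= s < 0), M)

mixture = lam_empty * phi_empty + sum(w * phi_v(j) for j, w in w_i.items())
direct = nu_i + sum(w * phi_v(j) for j, w in w_i.items())      # formula (2.1) at t = 0
print(mixture, direct)                                 # both equal 0.3 + 0.2 * 2 = 0.7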

We have taken I = Z² and the w_{ij} proportional to a discretized, centred, symmetric bivariate Gaussian distribution with standard deviation σ. More precisely, once λ_i(∅) = λ_∅ is fixed, picking according to λ_i consists in

• choosing whether V is empty or not, the empty neighborhood having probability λ_∅;
• if V ≠ ∅, choosing V = {(j, [−A, 0))} with j − i = round(W), where W obeys a bivariate N(0, σ²) distribution (see the sketch below).
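A sketch of this neighborhood choice (ours; the function name and the use of Python's random.gauss are assumptions of the illustration):

import random

def draw_neighborhood(i, lam_empty, sigma, A, rng=random.Random(4)):
    """Draw V according to lambda_i: empty with probability lam_empty, otherwise
    V = {(j, [-A, 0))} with j - i = round(W), W a bivariate N(0, sigma^2 I)."""
    if rng.random() < lam_empty:
        return None                                       # the empty neighborhood
    dx, dy = round(rng.gauss(0.0, sigma)), round(rng.gauss(0.0, sigma))
    j = (i[0] + dx, i[1] + dy)
    return (j, (-A, 0.0))                                 # window still to be shifted at time T

print([draw_neighborhood((0, 0), lam_empty=0.25, sigma=1.0, A=0.45) for _ in range(5)])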

In all that follows, the simulation is made for neuron i = (0, 0) with t0 = 0, t1 = 100 (see Algorithm 3). The parameters M, λ_∅ and σ vary. The parameters ν_i = ν and A are fixed accordingly by

ν = 0.9 M λ_∅   and   A = 0.9 M^{−1} (1 − λ_∅)^{−1},

to ensure that φ^∅_i < M and that (4.1) holds, which amounts here to (1 − λ_∅) A M < 1.

In Figure 2(a), with M = 2, σ = 1 and λ_∅ small, we see the overall spread of the algorithm around the neuron to simulate (here (0, 0)). Because we chose a Gaussian variable with small variance for the λ_i's, the spread is weak and the neurons very close to the neuron to simulate are requested many times at Steps 9-11 of the algorithm. This is also where the algorithm spent the largest amount of time simulating Poisson processes. Note also that, roughly, to simulate 80 points we need to simulate 10 times more points globally in the infinite network. Remark also, in Figure 2(b), the avalanche phenomenon typical of Hawkes processes: for instance, the small cluster of black points on the neuron with label 0 (i.e. (0,0)) around time 22 is likely to be due to an excitation coming from the spikes generated (and accepted) on the neuron labeled 8, and to self-excitation. The beauty of the algorithm is that we do not need to have the whole timeline of neuron 8 to trigger neuron 0, but only the small blue pieces: we just request them at random, depending on the Kalikow decomposition.

In Figure 3, we can first observe that when the parameter σ, which governs the range of the λ_i's, increases, the global spread of the algorithm increases. In particular, comparing the top left of Figure 3 to Figure 2, where the only parameter that changes is σ, we see that the algorithm goes much further away and simulates many more points for a roughly equal number of points to generate (and accept) on neuron (0,0). Moreover, we can observe that

• From left to right, by increasing λ_∅, it is more likely to pick an empty neighborhood and, as a consequence, the spread of the algorithm is smaller. Since increasing λ_∅ also increases ν = 0.9 M λ_∅, this increases the total number of points produced on neuron (0,0).

• From top to bottom, by increasing M, there are more points which are simulated in the Poisson processes (Step 12 of Algorithm 3) and there is also a stronger interaction (we do not truncate as much the number of points in φ^v). Therefore the spread becomes larger, and more uniform too, because there are globally more points making requests. Moreover, by having a basic rate M which is 10 times bigger, we have to simulate roughly 10 times more points.

[Figure 2 about here: panel (a), summary of one simulation (79 points accepted at (0,0), 782 points produced in total); panel (b), extract of the time simulation for neurons 0 to 9.]

Fig 2: Simulation for M = 2, σ = 1, λ_∅ = 0.25. For each neuron in Z² that has been requested in Steps 9-11, except the neuron of interest (0, 0), we count the total number of requests, that is, the number of times a VT pointed towards this neuron (Steps 9 and 11), and the total time spent at this precise neuron simulating a homogeneous Poisson process (Step 12). Note that, since the simulation is on [0, 100], the time spent at position (0, 0) is at least 100. On (a), the summary for one simulation, with, below the plot, the number of points accepted at neuron (0, 0) and the total number of points that have been simulated. Also annotated on (a), with labels between 0 and 8, the 9 neurons for which the same simulation in [20, 40] is represented in more detail on (b). More precisely, on (b), the abscissa is time and the neuron labels are in ordinate. A plain dot represents a point accepted by the thinning step (Step 20 of Algorithm 3), and an empty circle a rejected point. The blue pieces of line represent the non-empty neighborhoods that are in V.

[Figure 3 about here: four panels; the annotated counts of points accepted at (0,0) / points produced in total are 86/2001, 154/1204, 490/15901 and 969/15378.]

Fig 3: Simulation for 4 other sets of parameters, all of them with σ = 3. Summaries as explained in Figure 2. On top, M = 2; on the bottom, M = 20. On the left part, λ_∅ = 0.25; on the right part, λ_∅ = 0.5.

6. Conclusion

We derived a new algorithm for simulating the behavior of one neuron embedded in an infinite network. This is possible thanks to the Kalikow decomposition, which allows picking the influencing neurons at random. As seen in the last section, it is computationally tractable in practice to simulate such open systems, in the physical sense. A question that remains open for future work is whether we can prove that such a decomposition exists for a wide variety of processes, as has been shown in discrete time (see [9, 10, 18]).

Acknowledgements

This work was supported by the French government, through the UCAJedi Investissements d'Avenir, and by the Institute for Modeling in Neuroscience and Cognition (NeuroMod) of the Université Côte d'Azur. The authors would like to thank Professor E. Löcherbach, from Paris 1, for great discussions about the Kalikow decomposition and the Backward Forward algorithm.

Appendix A: Link between Algorithm 2 and the Kalikow decomposition

To prove that Algorithm 2 returns the desired process, let us use some additional, more mathematical notation. Note that all the points simulated on neuron i, before being accepted or not, can be seen as coming from a common Poisson process of intensity M, denoted Π^i. For any i ∈ I, we denote the arrival times of Π^i by (T^i_n)_{n∈Z}, with T^i_1 being the first positive time.

As in Step 6 of Algorithm 2, we attach to each point of Π^i a stochastic mark X given by

X^i_n = 1 if T^i_n is accepted in the thinning procedure, and X^i_n = 0 otherwise.    (A.1)

Let us also define V^i_n, the neighborhood choice of T^i_n, picked at random, independently of anything else, according to λ_i and shifted at time T^i_n.

In addition, for any i ∈ I, define N^i = (T^i_n, X^i_n)_{n∈Z}, an E-marked point process with E = {0, 1}. In particular, following the notation of Chapter VIII of [2], for any i ∈ I, let

N^i_t(mark) = ∑_{n∈Z} 1_{X^i_n = mark} 1_{T^i_n ≤ t}   for mark ∈ E,

F^N_{t−} = ⋁_{i∈I} σ(N^i_s(0), N^i_s(1); s < t)   and   F^{N(1)}_{t−} = ⋁_{i∈I} σ(N^i_s(1); s < t).

Moreover, note that (N^i_t(1))_{t∈R} is the counting process associated with the point process P simulated by Algorithm 2. Let us denote by ϕ_i(t) the formula given by (3.1), shifted at time t. Note that, since the φ^v_i's are F^{int}_{0−} = F^{N(1)}_{0−} measurable, ϕ_i(t) is F^{N(1)}_{t−} measurable. We also denote by ϕ^v_i(t) the formula of φ^v_i shifted at time t.

With this notation, we can prove the following.

Proposition 2. The process (N^i_t(1))_{t∈R} admits ϕ_i(t) as an F^{N(1)}_{t−}-predictable intensity.

Proof. Following the technique of Chapter 2 of [2], let us take C_t a non-negative predictable function with respect to (w.r.t.) F^{N(1)}_t, that is, F^{N(1)}_{t−} measurable and therefore F^N_{t−} measurable. We have, for any i ∈ I,

E[ ∫_0^∞ C_t dN^i_t(1) ] = ∑_{n=1}^∞ E[ C_{T^i_n} 1_{X^i_n = 1} ].

Note that, by Theorem T35 in Appendix A1 of [2], any point T should be understood as a stopping time, and that, by Theorem T30 in Appendix A2 of [2],

F^N_{T−} = ⋁_j σ{ T^j_m, X^j_m such that T^j_m < T }.

So

E[ ∫_0^∞ C_t dN^i_t(1) ] = ∑_{n=1}^∞ E[ C_{T^i_n} E( 1_{X^i_n = 1} | F^N_{T^i_n −}, V^i_n ) ] = ∑_{n=1}^∞ E[ C_{T^i_n} ϕ^{V^i_n}_i(T^i_n) / M ].

Let us now integrate with respect to the choice V^i_n, which is independent of anything else:

E[ ∫_0^∞ C_t dN^i_t(1) ] = ∑_{n=1}^∞ E[ C_{T^i_n} ( λ_i(∅) ϕ^∅_i + ∑_{v∈V, v≠∅} λ_i(v) ϕ^v_i(T^i_n) ) / M ] = E[ ∫_0^∞ C_t (ϕ_i(t)/M) dΠ^i(t) ].

Since Π^i is a Poisson process with respect to (F^N_t)_t with intensity M, and since C_t ϕ_i(t)/M is F^N_{t−} measurable, we finally have that

E[ ∫_0^∞ C_t dN^i_t(1) ] = E[ ∫_0^∞ C_t ϕ_i(t) dt ],

which ends the proof.

Appendix B: Proof of Proposition 1

Proof. We do the proof for the backward part, starting with T = Tnext the next point after t0 (Step 4 of Algorithm 3); the proof is similar for the other Tnext's generated at Step 23. We construct a tree with root (i, T). For each point (jT′, T′) in the tree, the points which are simulated in VT′ (Step 12 of Algorithm 3) define the children of (jT′, T′) in the tree. This forms the tree T̃.

Let us now build a tree C̃ with root (i, T) (which includes the previous tree) by mimicking the previous procedure in the backward part, except that we simulate on the whole neighborhood even if it has a part that intersects previous neighborhoods (if they exist) (Steps 11-12 of Algorithm 3). By doing so, we make the number of children at each node independent of anything else.

If the tree C̃ goes extinct, then so does the tree T̃, and the backward part of the algorithm terminates.

But if one only counts the number of children in the tree C̃, we have a marked branching process whose reproduction distribution for the mark i is given by

• no children with probability λ_i(∅);
• a Poissonian number of children with parameter l(v) M if v, the chosen neighborhood, is picked with probability λ_i(v).

This gives that the average number of children issued from a node with mark i is

ζ_i = λ_i(∅) × 0 + ∑_{v∈V, v≠∅} λ_i(v) l(v) M.

If we denote by C̃_k the collection of points in the tree C̃ at generation k, and by K_{T′} the set of points generated independently as a Poisson process of rate M inside V_{T′}, we see recursively that

C̃_{k+1} = ⋃_{T′ ∈ C̃_k} K_{T′},   with   E(|K_{T′}| | T′) = ζ_{j_{T′}}.

Therefore, if we denote the total number of sites in C̃_k by Z^{(k)}, we have

E(Z^{(k+1)} | C̃_k) ≤ Z^{(k)} sup_{i∈I} ζ_i.

One can then conclude by recursion that

E(Z^{(k)}) ≤ (sup_{i∈I} ζ_i)^k,

and the right-hand side tends to 0 as k tends to infinity, since sup_{i∈I} ζ_i < 1 by the sparsity assumption (4.1) on the neighborhoods. Hence the mean number of points in each generation goes to 0 as k tends to infinity. So, by using classical branching techniques as in [15], we conclude that the tree C̃ goes extinct almost surely. This also implies that the backward steps end almost surely.

References

[1] Andersen, P.K., Borgan, O., Gill, R. and Keiding, N., Statistical Models Based on Counting Processes, Springer (1996).
[2] Brémaud, P., Point Processes and Queues: Martingale Dynamics, Springer-Verlag (1981).
[3] Brémaud, P. and Massoulié, L., Stability of nonlinear Hawkes processes, The Annals of Probability, 24, 1563–1588 (1996).
[4] Comets, F., Fernandez, R. and Ferrari, P. A., Processes with long memory: Regenerative construction and perfect simulation, Ann. Appl. Probab., 3, 921–943 (2002).
[5] Muzy, A., Exploiting activity for the modeling and simulation of dynamics and learning processes in hierarchical (neurocognitive) systems, Magazine of Computing in Science & Engineering, 21, 83–93 (2019).
[6] Dassios, A. and Zhao, H., Exact simulation of Hawkes process with exponentially decaying intensity, Electronic Communications in Probability, 18(62) (2013).
[7] Didelez, V., Graphical models for marked point processes based on local independence, J. R. Statist. Soc. B, 70(1), 245–264 (2008).
[8] Fernández, R., Ferrari, P. and Galves, A., Coupling, renewal and perfect simulation of chains of infinite order, Notes of a course in the V Brazilian School of Probability (2001).
[9] Galves, A. and Löcherbach, E., Infinite systems of interacting chains with memory of variable length: a stochastic model for biological neural nets, Journal of Statistical Physics, 151(5), 896–921 (2013).
[10] Galves, A. and Löcherbach, E., Modeling networks of spiking neurons as interacting processes with memory of variable length, Journal de la Société Française de Statistique, 157, 17–32 (2016).
[11] Hodara, P. and Löcherbach, E., Hawkes processes with variable length memory and an infinite number of components, Adv. Appl. Probab., 49, 84–107 (2017).
[12] Kalikow, S., Random Markov processes and uniform martingales, Israel Journal of Mathematics, 71(1), 33–54 (1990).
[13] Lewis, P.A.W. and Shedler, G.S., Simulation of nonhomogeneous Poisson processes, Naval Postgraduate School, Monterey, California (1978).

[14] Mascart, C., Muzy, A. and Reynaud-Bouret, P., Centralized and distributed simulations of point processes using local independence graphs: A computational complexity analysis, submitted.

[15] Méléard, S., Aléatoire : introduction à la théorie et au calcul des probabilités, École Polytechnique (2010).
[16] Møller, J. and Rasmussen, J. G., Perfect simulation of Hawkes processes, Adv. Appl. Probab., 37, 629–646 (2005).
[17] Ogata, Y., On Lewis' simulation method for point processes, IEEE Transactions on Information Theory, 27, 23–31 (1981).
[18] Ost, G. and Reynaud-Bouret, P., Sparse space-time models: concentration inequalities and Lasso, submitted (2018).
[19] Peters, E.A.J.F. and de With, G., Rejection-free Monte Carlo sampling for general potentials, Physical Review E, 85, 026703 (2012).
[20] Reynaud-Bouret, P. and Schbath, S., Adaptive estimation for Hawkes processes; application to genome analysis, Annals of Statistics, 38(5), 2781–2822 (2010).
[21] Tocher, K.D., PLUS/GPS III Specification, United Steel Companies Ltd, Department of Operational Research (1967).
[22] Vere-Jones, D. and Ozaki, T., Some examples of statistical estimation applied to earthquake data, Ann. Inst. Statist. Math., 34(B), 189–207 (1982).
[23] Zeigler, B.P., Theory of Modelling and Simulation, Wiley-Interscience Publication (1976).
[24] Zeigler, B.P., Muzy, A. and Kofman, E., Theory of Modeling and Simulation: Discrete Event & Iterative System Computational Foundations, Academic Press (2018).
