
Linear response for spiking neuronal networks with unbounded memory


(1)

HAL Id: hal-02921842

https://hal.inria.fr/hal-02921842

Submitted on 25 Aug 2020

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Linear response for spiking neuronal networks with unbounded memory

Bruno Cessac, Rodrigo Cofré, Ignacio Ampuero

To cite this version:

Bruno Cessac, Rodrigo Cofré, Ignacio Ampuero. Linear response for spiking neuronal networks with unbounded memory. Dynamics Days Europe, Aug 2020, Nice / Online, France. ⟨hal-02921842⟩

(2)

Context Question Method Results Conclusions

Linear response for spiking neuronal networks with unbounded memory

Bruno Cessac

Biovision Team, INRIA Sophia Antipolis, France. 25-08-2020

with

Rodrigo Cofré, CIMFAV, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso, Chile, and

Ignacio Ampuero, Departamento de Informática, Universidad Técnica Federico Santa María, Valparaíso, Chile

(3)

Spontaneous spiking activity

Sample averaging (non stationary)

Variation in the correlation C_{12,15}(t, t + 2)

Correlation variations are triggered by the stimulus, but they are constrained by the network dynamics.

(10)

Question

How to compute the spatio-temporal response of a neuronal network to a time-dependent stimulus?

Can we compute the time variation of spike observables as a function of the stimulus and the spontaneous dynamics?

Linear response for spiking neuronal networks with unbounded memory, Bruno Cessac, Ignacio Ampuero, Rodrigo Cofré, submitted to J. Math. Neuro., arXiv:1704.05344

(14)

From spiking neuron dynamics to linear response

(15)

An Integrate-and-Fire neural network model with unbounded memory

R. Cofré, B. Cessac, Chaos, Solitons and Fractals, 2013

Spikes

Voltage dynamics is time-continuous; spikes are time-discrete events (time resolution δ > 0):

t_k(l) \in [n\delta, (n + 1)\delta[ \;\Rightarrow\; \omega_k(n) = 1.

Spike state \omega_k(n) \in \{0, 1\}. Spike pattern \omega(n). Spike block \omega_m^n.
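The discretization above is straightforward to state in code. A minimal sketch (the helper `spike_raster` and its arguments are illustrative, not from the paper): a spike time falling in the window [nδ, (n+1)δ[ sets the spike state ω_k(n) to 1.

```python
import numpy as np

def spike_raster(spike_times, n_bins, delta):
    """Discretize continuous spike times into binary spike states:
    omega_k(n) = 1 iff neuron k emits a spike in [n*delta, (n+1)*delta[."""
    omega = np.zeros((len(spike_times), n_bins), dtype=int)
    for k, times in enumerate(spike_times):
        for t in times:
            n = int(t // delta)          # index of the window containing t
            if 0 <= n < n_bins:
                omega[k, n] = 1
    return omega

# Two neurons, time resolution delta = 0.1
omega = spike_raster([[0.05, 0.32], [0.15]], n_bins=4, delta=0.1)
```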

(20)

A conductance-based Integrate and Fire model

M. Rudolph, A. Destexhe, Neural Comput. 2006 (GIF model)

R. Cofré, B. Cessac, Chaos, Solitons and Fractals, 2013

Sub-threshold dynamics:

C_k \frac{dV_k}{dt} = -g_{L,k}(V_k - E_L) - \sum_j g_{kj}(t, \omega)(V_k - E_j) + S_k(t) + \sigma_B \xi_k(t)

Synapses (the figure showed the post-synaptic potential profile for several values of the characteristic time τ):

\alpha_{kj}(t) = \frac{t}{\tau_{kj}} \, e^{-\frac{t}{\tau_{kj}}} \, H(t)

Summing the responses to the successive spikes of the pre-synaptic neuron j, discretized at resolution δ:

g_{kj}(t, \omega) = G_{kj} \sum_{n \geq 0} \alpha_{kj}(t - n\delta) \, \omega_j(n)
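As a sanity check, the conductance formula can be evaluated directly. In this sketch the α-profile is taken as the usual alpha function (t/τ) e^{−t/τ} H(t); the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def alpha(t, tau):
    """Alpha-shaped post-synaptic profile (t/tau)*exp(-t/tau), zero for t < 0."""
    return np.where(t >= 0, (t / tau) * np.exp(-t / tau), 0.0)

def conductance(t, omega_j, G, tau, delta):
    """g_kj(t, omega) = G * sum_{n>=0} alpha(t - n*delta) * omega_j(n):
    the conductance accumulates one alpha profile per pre-synaptic spike."""
    n = np.arange(len(omega_j))
    return G * np.sum(alpha(t - n * delta, tau) * omega_j)

omega_j = np.array([1, 0, 1, 0, 0])   # pre-synaptic spikes at n = 0 and n = 2
g = conductance(t=0.5, omega_j=omega_j, G=1.0, tau=0.1, delta=0.1)
```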

(25)

A conductance-based Integrate and Fire model

Sub-threshold dynamics:

C_k \frac{dV_k}{dt} + g_k(t, \omega) V_k = i_k(t, \omega),

with W_{kj} \stackrel{def}{=} G_{kj} E_j, \quad \alpha_{kj}(t, \omega) = \sum_{n \geq 0} \alpha_{kj}(t - n\delta) \, \omega_j(n),

i_k(t, \omega) = g_{L,k} E_L + \sum_j W_{kj} \alpha_{kj}(t, \omega) + S_k(t) + \sigma_B \xi_k(t).

The equation is linear in V and spike-history dependent. Dynamics integration:

\Gamma_k(t_1, t, \omega) = e^{-\frac{1}{C_k} \int_{t_1 \vee \tau_k(t, \omega)}^{t} g_k(u, \omega) \, du}

V_k(t, \omega) = \underbrace{V_k^{(sp)}(t, \omega) + V_k^{(S)}(t, \omega)}_{V_k^{(det)}(t, \omega)} + V_k^{(noise)}(t, \omega),

V_k^{(sp)}(t, \omega) = V_k^{(syn)}(t, \omega) + V_k^{(L)}(t, \omega),

V_k^{(syn)}(t, \omega) = \frac{1}{C_k} \sum_{j=1}^{N} W_{kj} \int_{\tau_k(t, \omega)}^{t} \Gamma_k(t_1, t, \omega) \, \alpha_{kj}(t_1, \omega) \, dt_1
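The integrating-factor solution can be checked numerically on a toy scalar version of the equation, with constant g and i (spike resets and noise omitted; all values are illustrative assumptions):

```python
import numpy as np

# Toy scalar check of the integrating-factor solution of C dV/dt + g V = i,
# with constant g and i. The flow Gamma(t1, t) = exp(-g (t - t1) / C)
# gives the closed-form solution below.
C, g, i, V0, T = 1.0, 2.0, 0.5, 0.0, 1.0

def exact(t):
    gamma = np.exp(-g * t / C)                  # Gamma(0, t)
    return V0 * gamma + (i / g) * (1.0 - gamma)

# Cross-check against a fine explicit Euler integration
dt = 1e-4
V = V0
for _ in range(int(round(T / dt))):
    V += dt * (i - g * V) / C
```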

(35)

A conductance-based Integrate and Fire model

Variable-length Markov chain with unbounded memory:

P_n\!\left[\, \omega(n) \mid \omega_{-\infty}^{n-1} \,\right] \equiv \Pi\!\left( \omega(n), \, \frac{\theta - V_k^{(det)}(n - 1, \omega)}{\sigma_k(n - 1, \omega)} \right)

(36)

Markov chains and Gibbs distributions

Markov chain: P\left[\, \omega(n) \mid \omega_{n-D}^{n-1} \,\right] > 0 \Rightarrow there exists a unique, invariant, ergodic distribution µ:

\mu\left[\, \omega_m^n \,\right] = \prod_{l=m+D}^{n} P\left[\, \omega(l) \mid \omega_{l-D}^{l-1} \,\right] \, \mu\left[\, \omega_m^{m+D-1} \,\right], \quad \forall m < n \in \mathbb{Z}.

Setting \phi_l\left( \omega_{l-D}^{l} \right) = \log P\left[\, \omega(l) \mid \omega_{l-D}^{l-1} \,\right], this takes the Gibbs form

\mu\left[\, \omega_m^n \mid \omega_m^{m+D-1} \,\right] = \exp \sum_{l=m+D}^{n} \phi_l\left( \omega_{l-D}^{l} \right), \quad \forall m < n \in \mathbb{Z}.
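The equivalence between the product of transition probabilities and the exponential of summed potentials can be verified on a toy D = 1 binary chain (the transition matrix below is an arbitrary illustrative choice):

```python
import numpy as np

# Toy check of the Gibbs form for a D = 1 binary Markov chain:
# mu[block | first symbol] = exp( sum_l phi(omega_{l-1}, omega_l) )
# with phi = log P(omega_l | omega_{l-1}).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])        # P[a, b] = P(next = b | current = a)
phi = np.log(P)                   # the potential is the log transition probability

block = [0, 1, 1, 0]              # a spike block omega_m^n with m = 0, n = 3

prob_product = np.prod([P[block[l - 1], block[l]] for l in range(1, len(block))])
prob_gibbs = np.exp(sum(phi[block[l - 1], block[l]] for l in range(1, len(block))))
```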

(42)

Gibbs distribution

Equilibrium stat. phys. (Max. Entropy Principle). Non-equilibrium stat. phys., Onsager theory. Ergodic theory.

Markov chains - finite memory. Chains with complete connections - infinite memory (Left Interval Specification).

P[\, \omega \,] = \frac{1}{Z} e^{-\beta H\{\omega\}}, \quad H\{\omega\} = \sum_{\alpha} \lambda_{\alpha} X_{\alpha}\{\omega\},

X_α(ω) = product of spike events (Hammersley, Clifford, 1971).

O. Onicescu and G. Mihoc, CRAS Paris, 1935.
R. Fernandez, G. Maillard, A. Le Ny, J.R. Chazottes, ...
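A minimal numerical sketch of such a Gibbs distribution over instantaneous spike patterns, with an Ising-like potential; the fields h and couplings J below are arbitrary illustrative values, not fitted parameters:

```python
import itertools
import numpy as np

# Gibbs distribution over spike patterns of N = 3 neurons with an Ising-like
# potential H(omega) = sum_k h_k omega_k + sum_{k<j} J_kj omega_k omega_j
# (beta = 1). The lambda_alpha / X_alpha here are illustrative choices.
N = 3
h = np.array([0.2, -0.1, 0.3])
J = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, -0.4],
              [0.0, 0.0, 0.0]])   # upper-triangular pairwise couplings

def H(omega):
    omega = np.asarray(omega)
    return h @ omega + omega @ J @ omega

patterns = list(itertools.product([0, 1], repeat=N))
weights = np.array([np.exp(-H(p)) for p in patterns])
Z = weights.sum()                  # partition function
probs = weights / Z                # P[omega] = exp(-H(omega)) / Z
```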

(47)

Response to stimuli

Bruno Cessac, Ignacio Ampuero, Rodrigo Cofré, submitted to J. Math. Neuro., arXiv:1704.05344

\phi(n, \omega) = \log P_n\!\left[\, \omega(n) \mid \omega_{-\infty}^{n-1} \,\right] \equiv \log \Pi\!\left( \omega(n), \, \frac{\theta - V_k^{(sp)}(t, \omega) - V_k^{(S)}(t, \omega)}{\sigma_k(n - 1, \omega)} \right)

GIF model without stimulus ⇒ stationary Gibbs distribution.

Time-dependent stimulus S(t) ⇒ non-stationary Gibbs distribution.

How is the average of an observable f(ω, t) affected by the stimulus?

If S is weak enough: \delta\mu[\, f(t) \,] = [\, \kappa_f \ast S \,](t) (linear response), with

\kappa_{k,f}(t - t_1) = \frac{1}{C_k} \sum_{r=-\infty}^{n=[t]} C^{(sp)}\!\left[\, f(t, \cdot), \; \frac{H_k^{(1)}(r, \cdot)}{\sigma_k(r - 1, \cdot)} \int_{\tau_k(r-1, \cdot)}^{r-1} S_k(t_1) \, \Gamma_k(t_1, r - 1, \cdot) \, dt_1 \,\right].

The kernel is history dependent.
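Numerically, the linear response is just a causal convolution of a kernel with the stimulus. The exponentially decaying kernel below is an illustrative stand-in, not the κ_{k,f} derived above:

```python
import numpy as np

# Minimal numerical sketch of delta_mu[f(t)] = (kappa_f * S)(t).
dt = 0.1
t = np.arange(0, 20, dt)
kappa = np.exp(-t) * dt                       # illustrative causal kernel, discretized
S = np.where((t > 5) & (t < 10), 1.0, 0.0)    # weak step stimulus, on for 5 < t < 10

delta_mu = np.convolve(S, kappa)[:len(t)]     # causal convolution (kappa * S)(t)
# The response is zero before stimulus onset and rises toward a plateau after it.
```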

(58)

An illustrative example

(59)

The simplest example of the gIF model is the discrete-time leaky integrate-and-fire model:

V_k(n+1) = \gamma V_k(n) + \sum_j W_{kj} \, \omega_j(n) + I_0 + S_k(n) + \sigma_B \xi_k(n), \quad \text{if } V_k(n) < \theta.
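This discrete-time model is a few lines of code to simulate. In this sketch the reset-to-0 convention and all parameter values are assumptions chosen for illustration:

```python
import numpy as np

# Minimal simulation of the discrete-time leaky integrate-and-fire network:
# V_k(n+1) = gamma*V_k(n) + sum_j W_kj*omega_j(n) + I0 + sigma_B*xi_k(n)
# below threshold; a neuron that reaches theta spikes and is reset to 0.
rng = np.random.default_rng(0)
N, T = 5, 200
gamma, theta, I0, sigma_B = 0.6, 1.0, 0.4, 0.1
W = rng.normal(0.0, 0.2, size=(N, N))      # synaptic weights (illustrative)

V = np.zeros(N)
omega = np.zeros((T, N), dtype=int)
for n in range(T):
    omega[n] = (V >= theta).astype(int)    # spike state omega_k(n)
    V = np.where(omega[n] == 1,
                 0.0,                      # reset fired neurons
                 gamma * V + W @ omega[n] + I0 + sigma_B * rng.normal(size=N))

rates = omega.mean(axis=0)                 # empirical firing rates
```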

(60)

Linear response

\delta\mu^{(1)}[\, f(n) \,] \sim -\sum_{k=1}^{N} \sum_{m=1}^{D+1} \frac{1}{\nu_k} \sum_{l \geq 0} \gamma^l \, K^{(1)}_{k,m} \, S_k(n - m - l),

K^{(1)}_{k,m} = C^{(sp)}[\, f(m, \cdot), \zeta_k(0, \cdot) \,], \quad \zeta_k(r - 1, \omega) = \frac{H_k^{(1)}(r, \omega)}{\sigma_k(r - 1, \omega)}.

(61)

Firing rates

Computations have been done using the INRIA Pranas software, B. Cessac et al., Frontiers in Neuroinformatics, Vol 11, page 49 (2017).

Figure: Linear response of f(ω, n) = ω_{k_c}(n) for different values of the stimulus amplitude A. From https://arxiv.org/abs/1704.05344

(62)

Pairwise correlations

Figure: Correlation functions corresponding to the firing rate of the neuron k_c = N/2, as a function of the neuron index k (abscissa), for different values of the time delay m. From https://arxiv.org/abs/1704.05344

(63)

Higher order observable

Figure: Linear response of the observable f(n, ω) = ω_{k_c − 2}(n − 3) ω_{k_c}(n).

(64)

Range of validity

Figure: L² distance between the curves δµ^{(1)}[f(n)], δµ^{(HC1)}[f(n)] and the empirical curve, as a function of the stimulus amplitude A. The left panel shows the distance between rate curves; the right panel, the distance between pairwise observables with delay.

(65)

Main conclusions

The response to a time-dependent stimulus is expressed in terms of correlations of the unperturbed dynamics.

The linear response is characterized by a convolution kernel depending on the observable, on the network dynamics, and on the afferent Gibbs distribution.

How to determine the potential (from data)?

The potential is shaped by the dynamics; there is no "thermodynamic principle" to constrain its shape. The Hammersley-Clifford expansion, whose first non-trivial term is the "Ising" term, contains too many terms (Cofre, Cessac, PRE, 2014; arXiv:1405.3621). Which ones are relevant? ⇒ Requires a method to reduce the "dimensionality of the potential".

Higher order terms in the response?

This is similar to a Volterra expansion. Higher order terms were computed by D. Ruelle (Nonlinearity, 11(1):5-18, 1998) for hyperbolic dynamical systems ⇒ a complex expansion, hard to handle from data. Sigmoids are "hard" to approximate with Taylor expansions.

Toward an extension of the concept of receptive field?
