HAL Id: hal-02921842
https://hal.inria.fr/hal-02921842
Submitted on 25 Aug 2020
Linear response for spiking neuronal networks with
unbounded memory
Bruno Cessac, Rodrigo Cofré, Ignacio Ampuero
To cite this version:
Bruno Cessac, Rodrigo Cofré, Ignacio Ampuero. Linear response for spiking neuronal networks with unbounded memory. Dynamics Days Europe, Aug 2020, Nice / Online, France. ⟨hal-02921842⟩
Context Question Method Results Conclusions
Linear response for spiking neuronal networks with
unbounded memory
Bruno Cessac
Biovision Team, INRIA Sophia Antipolis, France. 25-08-2020
with
Rodrigo Cofré, CIMFAV, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso, Chile, and
Ignacio Ampuero, Departamento de Informática, Universidad Técnica Federico Santa María, Valparaíso, Chile
Spontaneous spiking activity
Sample averaging (non stationary)
Variation in the correlation $C_{12,15}(t, t+2)$.
Correlation variations are triggered by the stimulus, but they are constrained by the network dynamics.
Question
How to compute the spatio-temporal response of a neuronal network to a time-dependent stimulus?
Can we compute the time variation of spike observables as a function of the stimulus and the spontaneous dynamics?
Linear response for spiking neuronal networks with unbounded memory,
Bruno Cessac, Ignacio Ampuero, Rodrigo Cofré,
submitted to J. Math. Neurosci., arXiv:1704.05344
From spiking neuron dynamics to linear response
An Integrate and Fire neural network model with unbounded memory
R. Cofré, B. Cessac, Chaos, Solitons and Fractals, 2013

Spikes
Voltage dynamics is time-continuous; spikes are time-discrete events (time resolution $\delta > 0$):
$$t_k(l) \in [\,n\delta, (n+1)\delta\,[ \;\Rightarrow\; \omega_k(n) = 1.$$
Spike state $\omega_k(n) \in \{0, 1\}$. Spike pattern $\omega(n)$. Spike block $\omega_m^n$.
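The binning rule above can be sketched in a few lines of code. This is not from the talk; the neuron count, the resolution $\delta$, and the spike times are illustrative.

```python
# Sketch: binning continuous spike times t_k(l) into binary spike states
# omega_k(n), following t_k(l) in [n*delta, (n+1)*delta[ => omega_k(n) = 1.
import numpy as np

def spike_raster(spike_times, n_neurons, delta, n_bins):
    """spike_times: list of (neuron index k, spike time t) pairs."""
    omega = np.zeros((n_neurons, n_bins), dtype=int)
    for k, t in spike_times:
        n = int(t // delta)          # index of the bin containing t
        if 0 <= n < n_bins:
            omega[k, n] = 1          # omega_k(n) = 1, whatever the multiplicity
    return omega

# Example: two neurons, delta = 0.1
omega = spike_raster([(0, 0.05), (0, 0.31), (1, 0.12)],
                     n_neurons=2, delta=0.1, n_bins=4)
# omega[0] = [1, 0, 0, 1], omega[1] = [0, 1, 0, 0]
```

Note that several spikes of neuron $k$ falling in the same bin give a single spike state, which is why $\delta$ is taken at the time resolution of the dynamics.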
A conductance-based Integrate and Fire model
M. Rudolph, A. Destexhe, Neural Comput. 2006 (GIF model); R. Cofré, B. Cessac, Chaos, Solitons and Fractals, 2013

Sub-threshold dynamics:
$$C_k \frac{dV_k}{dt} = -g_{L,k}(V_k - E_L) - \sum_j g_{kj}(t, \omega)(V_k - E_j) + S_k(t) + \sigma_B \xi_k(t).$$

Synapses: $\alpha_{kj}(t) = \frac{t}{\tau_{kj}}\, e^{-\frac{t}{\tau_{kj}}}\, H(t)$, so that a single presynaptic spike at time $t_j$ gives
$$g_{kj}(t) = g_{kj}(t_j) + G_{kj}\, \alpha_{kj}(t - t_j), \qquad t > t_j,$$
a spike train $\{t_j(n)\}$ gives $g_{kj}(t) = G_{kj} \sum_{n \geq 0} \alpha_{kj}(t - t_j(n))$, and, in terms of the spike states,
$$g_{kj}(t, \omega) = G_{kj} \sum_{n \geq 0} \alpha_{kj}(t - n\delta)\, \omega_j(n).$$

[Figure: post-synaptic potential (PSP) profiles for several spike times.]

The sub-threshold dynamics can be rewritten as
$$C_k \frac{dV_k}{dt} + g_k(t, \omega)\, V_k = i_k(t, \omega), \qquad W_{kj} \stackrel{\text{def}}{=} G_{kj} E_j, \qquad \alpha_{kj}(t, \omega) = \sum_{n \geq 0} \alpha_{kj}(t - n\delta)\, \omega_j(n),$$
$$i_k(t, \omega) = g_{L,k} E_L + \sum_j W_{kj}\, \alpha_{kj}(t, \omega) + S_k(t) + \sigma_B \xi_k(t).$$
This equation is linear in $V$ and spike history-dependent.

Dynamics integration:
$$\Gamma_k(t_1, t, \omega) = e^{-\frac{1}{C_k} \int_{t_1 \vee \tau_k(t, \omega)}^{t} g_k(u, \omega)\, du},$$
$$V_k(t, \omega) = \underbrace{V_k^{(sp)}(t, \omega) + V_k^{(S)}(t, \omega)}_{V_k^{(det)}(t, \omega)} + V_k^{(noise)}(t, \omega),$$
$$V_k^{(sp)}(t, \omega) = V_k^{(syn)}(t, \omega) + V_k^{(L)}(t, \omega), \qquad V_k^{(syn)}(t, \omega) = \frac{1}{C_k} \sum_{j=1}^{N} W_{kj} \int_{\tau_k(t, \omega)}^{t} \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega)\, dt_1.$$
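The spike-history dependence of the conductance can be made concrete with a short numerical sketch of $g_{kj}(t, \omega) = G_{kj} \sum_{n \geq 0} \alpha_{kj}(t - n\delta)\, \omega_j(n)$, with the alpha kernel $\alpha(t) = \frac{t}{\tau}\, e^{-t/\tau}\, H(t)$. Parameter values are illustrative, not taken from the talk.

```python
# Sketch of the spike-history dependent conductance
#   g_kj(t, omega) = G_kj * sum_{n>=0} alpha_kj(t - n*delta) * omega_j(n).
import numpy as np

def alpha(t, tau):
    """Alpha kernel (t/tau) exp(-t/tau) H(t); H is the Heaviside step."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, (t / tau) * np.exp(-t / tau), 0.0)

def conductance(t, omega_j, G_kj, tau_kj, delta):
    n = np.arange(len(omega_j))          # discrete spike times n*delta
    return G_kj * np.sum(alpha(t - n * delta, tau_kj) * omega_j)

omega_j = np.array([1, 0, 0, 1, 0])      # spike history of presynaptic neuron j
g = conductance(t=0.45, omega_j=omega_j, G_kj=1.0, tau_kj=0.02, delta=0.1)
```

Each past spike of neuron $j$ contributes one alpha pulse, so the conductance at time $t$ depends on the whole spike history, which is the source of the unbounded memory in the title.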
Variable length Markov chain with unbounded memory:
$$P_n\!\left[\,\omega(n) \mid \omega_{-\infty}^{n-1}\,\right] \equiv \Pi\!\left(\omega(n),\ \frac{\theta - V_k^{(det)}(n-1, \omega)}{\sigma_k(n-1, \omega)}\right).$$
Markov chains and Gibbs distributions
Markov chain: $P\!\left[\,\omega(n) \mid \omega_{n-D}^{n-1}\,\right] > 0 \;\Rightarrow\; \exists$ a unique, invariant, ergodic distribution $\mu$:
$$\mu\!\left[\,\omega_m^n\,\right] = \prod_{l=m+D}^{n} P\!\left[\,\omega(l) \mid \omega_{l-D}^{l-1}\,\right] \mu\!\left[\,\omega_m^{m+D-1}\,\right], \qquad \forall\, m < n \in \mathbb{Z}.$$
Setting $\phi\!\left(\omega_{l-D}^{l}\right) = \log P\!\left[\,\omega(l) \mid \omega_{l-D}^{l-1}\,\right]$, this takes the Gibbs form
$$\mu\!\left[\,\omega_m^n \mid \omega_m^{m+D-1}\,\right] = \exp \sum_{l=m+D}^{n} \phi\!\left(\omega_{l-D}^{l}\right), \qquad \forall\, m < n \in \mathbb{Z}.$$
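The identity above can be checked numerically on a toy chain. This sketch (not from the talk) takes a single binary unit with memory depth $D = 1$ and made-up transition probabilities: positivity gives a unique invariant distribution $\mu$, and a block probability factorizes through $\exp \sum_l \phi$.

```python
# Sketch: for a positive transition kernel P[omega(n) | omega(n-1)], the block
# probability satisfies mu[block] = exp(sum_l phi) * mu[initial symbol],
# with phi = log P[omega(l) | omega(l-1)]. Transition values are made up.
import numpy as np

P = np.array([[0.9, 0.1],     # P[omega(n)=j | omega(n-1)=i] > 0
              [0.6, 0.4]])

# Unique invariant distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.argmin(np.abs(w - 1))])
mu /= mu.sum()

block = [0, 1, 1, 0]          # a spike block omega_m^n
phi = [np.log(P[a, b]) for a, b in zip(block[:-1], block[1:])]
p_block = np.exp(np.sum(phi)) * mu[block[0]]   # Gibbs form of mu[omega_m^n]
```

For this kernel $\mu = (6/7,\ 1/7)$, and `p_block` coincides with the direct product of transition probabilities, illustrating that the "potential" $\phi$ is nothing but the log of the transition probability.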
Gibbs distribution
Equilibrium stat. phys. (Max. Entropy Principle):
$$P[\,\omega\,] = \frac{1}{Z}\, e^{-\beta H(\omega)}, \qquad H(\omega) = \sum_\alpha \lambda_\alpha X_\alpha(\omega),$$
where each $X_\alpha(\omega)$ is a product of spike events (Hammersley, Clifford, 1971).
Non-equilibrium stat. phys.: Onsager theory.
Ergodic theory.
Markov chains: finite memory. Chains with complete connections: infinite memory (Left Interval Specification).
O. Onicescu and G. Mihoc, CRAS Paris, 1935; R. Fernandez, G. Maillard, A. Le Ny, J.R. Chazottes, ...
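The maximum-entropy form $P[\omega] = \frac{1}{Z} e^{-\beta H(\omega)}$ with monomial observables can be enumerated exactly for a tiny network. This is an illustrative sketch, not the potential of the talk; the couplings $\lambda_\alpha$ are made up, and the monomials are two rates plus one pairwise "Ising" term.

```python
# Sketch of the Gibbs form P[omega] = (1/Z) e^{-beta H(omega)} with
# H(omega) = sum_alpha lambda_alpha X_alpha(omega), each X_alpha a product
# of spike events.
import itertools
import numpy as np

lam = {(0,): 0.5, (1,): -0.2, (0, 1): 1.0}   # monomials X_alpha and weights

def H(omega):
    return sum(l * np.prod([omega[i] for i in idx]) for idx, l in lam.items())

beta = 1.0
patterns = list(itertools.product([0, 1], repeat=2))
Z = sum(np.exp(-beta * H(w)) for w in patterns)      # partition function
P = {w: np.exp(-beta * H(w)) / Z for w in patterns}  # normalized Gibbs weights
```

Already with spatial monomials the number of terms grows combinatorially, which foreshadows the dimensionality problem raised in the conclusions.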
Response to stimuli
Bruno Cessac, Ignacio Ampuero, Rodrigo Cofré, submitted to J. Math. Neurosci., arXiv:1704.05344

$$\phi(n, \omega) = \log P_n\!\left[\,\omega(n) \mid \omega_{-\infty}^{n-1}\,\right] \equiv \log \Pi\!\left(\omega(n),\ \frac{\theta - V_k^{(sp)}(t, \omega) - V_k^{(S)}(t, \omega)}{\sigma_k(n-1, \omega)}\right).$$

GIF model without stimulus ⇒ stationary Gibbs distribution.
Time-dependent stimulus $S(t)$ ⇒ non-stationary Gibbs distribution.
How is the average of an observable $f(\omega, t)$ affected by the stimulus?
If $S$ is weak enough (linear response):
$$\delta\mu\left[\,f(t)\,\right] = \left[\,\kappa_f * S\,\right](t),$$
$$\kappa_{k,f}(t - t_1) = \frac{1}{C_k} \sum_{r=-\infty}^{n=[\,t\,]} C^{(sp)}\!\left[\, f(t, \cdot),\ \frac{H_k^{(1)}(r, \cdot)}{\sigma_k(r-1, \cdot)} \int_{\tau_k(r-1, \cdot)}^{r-1} S_k(t_1)\, \Gamma_k(t_1, r-1, \cdot)\, dt_1 \,\right].$$
The kernel is history dependent.
An illustrative example
The simplest example of the gIF model is the discrete-time leaky Integrate and Fire model:
$$V_k(n+1) = \gamma\, V_k(n) + \sum_j W_{kj}\, \omega_j(n) + I_0 + S_k(n) + \sigma_B\, \xi_k(n), \qquad \text{if } V_k(n) < \theta.$$
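The discrete-time leaky Integrate and Fire dynamics above is easy to simulate. The sketch below is a minimal version under stated assumptions: the reset value (here 0), the parameter values ($\gamma$, $\theta$, $I_0$, $\sigma_B$), the random weights, and the common sinusoidal stimulus are all illustrative, not the settings used in the talk.

```python
# Minimal simulation sketch of the discrete-time leaky Integrate and Fire
# network: V_k(n+1) = gamma V_k(n) + sum_j W_kj omega_j(n) + I0 + S_k(n)
# + sigma_B xi_k(n) below threshold theta, with reset to 0 after a spike.
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 200
gamma, theta, I0, sigma_B = 0.6, 1.0, 0.5, 0.1
W = rng.normal(0.0, 0.2, size=(N, N))                # synaptic weights W_kj
S = 0.2 * np.sin(2 * np.pi * np.arange(T) / 50)      # common stimulus S_k(n)

V = np.zeros(N)
omega = np.zeros((N, T), dtype=int)                  # spike raster omega_k(n)
for n in range(T - 1):
    xi = rng.standard_normal(N)                      # white noise xi_k(n)
    V = gamma * V + W @ omega[:, n] + I0 + S[n] + sigma_B * xi
    omega[:, n + 1] = V >= theta                     # spike when crossing theta
    V[V >= theta] = 0.0                              # reset after firing
```

Averaging such rasters over many noise realizations gives the empirical $\delta\mu[f(n)]$ curves against which the linear response prediction is compared.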
Linear response
$$\delta\mu^{(1)}\left[\,f(n)\,\right] \sim -\sum_{k=1}^{N} \sum_{m=1}^{D+1} \frac{1}{\nu_k} \sum_{l \geq 0} \gamma^{l}\, K^{(1)}_{k,m}\, S_k(n - m - l),$$
$$K^{(1)}_{k,m} = C^{(sp)}\left[\, f(m, \cdot),\ \zeta_k(0, \cdot)\,\right], \qquad \zeta_k(r-1, \omega) = \frac{H_k^{(1)}(r, \omega)}{\sigma_k(r-1, \omega)}.$$
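Once the kernel $K^{(1)}_{k,m}$ has been estimated from spontaneous correlations, evaluating the first-order response is a discrete convolution. The sketch below is illustrative: the kernel, stimulus, and rates $\nu_k$ are random placeholders, and the geometric sum over $l$ is truncated once $\gamma^l$ is negligible.

```python
# Numerical sketch of the first-order response
#   delta_mu(n) ~ - sum_k sum_{m=1}^{D+1} (1/nu_k) sum_{l>=0}
#                   gamma^l K[k, m] S_k(n - m - l).
import numpy as np

def linear_response(K, S, nu, gamma, n, L=100):
    N, D1 = K.shape                        # K[k, m-1] = K^{(1)}_{k,m}
    out = 0.0
    for k in range(N):
        for m in range(1, D1 + 1):
            for l in range(L):             # truncated sum over l >= 0
                idx = n - m - l
                if idx >= 0:
                    out -= gamma**l * K[k, m - 1] * S[k, idx] / nu[k]
    return out

rng = np.random.default_rng(1)
N, D1, T = 3, 4, 60
K = rng.normal(size=(N, D1))               # placeholder kernel (spont. correlations)
S = rng.normal(size=(N, T))                # placeholder stimulus
nu = np.full(N, 10.0)                      # placeholder spontaneous rates nu_k
dmu = linear_response(K, S, nu, gamma=0.6, n=50)
```

By construction the output is linear in the stimulus: doubling $S$ doubles $\delta\mu^{(1)}$, which is precisely the regime whose range of validity is tested below.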
Firing rates
Computations have been done using the INRIA Pranas software, B. Cessac et al., Frontiers in Neuroinformatics, Vol 11, page 49 (2017).
Figure: Linear response of $f(\omega, n) = \omega_{k_c}(n)$ for different values of the stimulus amplitude $A$. From https://arxiv.org/abs/1704.05344
Pairwise correlations
Figure: Correlation functions corresponding to the firing rate of the neuron $k_c = N/2$, as a function of the neuron index $k$ (abscissa), for different values of the time delay $m$. From https://arxiv.org/abs/1704.05344
Higher order observable
Figure: Linear response of the observable $f(n, \omega) = \omega_{k_c - 2}(n-3)\, \omega_{k_c}(n)$.
Range of validity
Figure: $L^2$ distance between the curves $\delta\mu^{(1)}[\,f(n)\,]$, $\delta\mu^{(HC1)}[\,f(n)\,]$ and the empirical curve, as a function of the stimulus amplitude $A$. The left panel shows the distance between rate curves; the right panel, the distance between pairwise observables with delay.
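The comparison behind this figure can be sketched with synthetic curves. Everything here is illustrative: a saturating "empirical" response stands in for the simulated one, and the linear prediction is its small-amplitude approximation.

```python
# Sketch of the L2 comparison: distance between a predicted response curve
# and an empirical one, as a function of the stimulus amplitude A.
import numpy as np

def l2_distance(predicted, empirical):
    return np.sqrt(np.sum((np.asarray(predicted) - np.asarray(empirical)) ** 2))

amplitudes = [0.1, 0.5, 1.0, 2.0]
n = np.arange(100)
distances = []
for A in amplitudes:
    empirical = np.tanh(A * np.sin(2 * np.pi * n / 25))   # saturating "true" response
    predicted = A * np.sin(2 * np.pi * n / 25)            # linear prediction
    distances.append(l2_distance(predicted, empirical))
```

As expected for a linear approximation of a sigmoidal response, the distance grows with the stimulus amplitude $A$, delimiting the range of validity of the linear response.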
Main conclusions
The response to a time-dependent stimulus is expressed in terms of correlations of the unperturbed dynamics.
The linear response is characterized by a convolution kernel depending on the observable, on the network dynamics, and on the afferent Gibbs distribution.
How to determine the potential (from data)?
  The potential is shaped by the dynamics. There is no "thermodynamic principle" to constrain its shape.
  The Hammersley-Clifford expansion, whose first non-trivial term is the "Ising" term, contains too many terms (Cofré, Cessac, PRE, 2014; arXiv:1405.3621).
  Which ones are relevant? ⇒ Requires a method to reduce the "dimensionality of the potential".
Higher order terms in the response?
  Similar to a Volterra expansion.
  Higher order terms were computed by D. Ruelle (Nonlinearity, 11(1):5-18, 1998) for hyperbolic dynamical systems ⇒ a complex expansion, hard to handle from data.
  Sigmoids are "hard" to approximate with Taylor expansions.
Toward an extension of the concept of receptive field?