

HAL Id: hal-00861405

https://hal.inria.fr/hal-00861405

Submitted on 12 Sep 2013

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Space-time correlations in spike trains and the neural code

Bruno Cessac, Rodrigo Cofré

To cite this version:

Bruno Cessac, Rodrigo Cofré. Space-time correlations in spike trains and the neural code. MATHEMATICS AND NEUROSCIENCE: A DIALOGUE, Sep 2013, Utrecht, Netherlands. ⟨hal-00861405⟩


Space-time correlations in spike trains and the neural code

Bruno Cessac, Rodrigo Cofré

NeuroMathComp Team, INRIA Sophia Antipolis, France.

03-09-13


Multi-Electrode Array


Raster plot


Statistical decoding

Stimulus S → spike response R. Try to compute P[R | S], then P[S | R].
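As a toy illustration of this decoding step, here is a minimal sketch (assuming discretized stimuli and responses, with a made-up conditional table) of how P[S | R] follows from P[R | S] by Bayes' rule:

```python
import numpy as np

# Hypothetical toy setting: 3 discrete stimuli, 4 possible responses.
# P_R_given_S[s, r] = P[R = r | S = s], e.g. estimated from repeated trials.
P_R_given_S = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.10, 0.60, 0.20, 0.10],
    [0.05, 0.15, 0.30, 0.50],
])
P_S = np.array([0.5, 0.3, 0.2])  # prior over stimuli

def decode(r):
    """Bayesian decoding: P[S | R = r] ∝ P[R = r | S] P[S]."""
    posterior = P_R_given_S[:, r] * P_S
    return posterior / posterior.sum()

print(decode(2))  # posterior over the 3 stimuli after observing response r = 2
```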


Ex: Moving bar

O. Marre, D. Amodei, N. Deshmukh, K. Sadeghi, F. Soo, T. E. Holy, M. J. Berry, "Mapping a Complete Neural Population in the Retina", J Neurosci. 32(43), 2012.


To fit

Extract a probabilistic model of P[R | S], P[S | R] from data.

⇓

To predict

Apply the probabilistic model to predict the behaviour of test samples.

⇓

To explain

How does a neural network "encode" a stimulus?

To predict is not to explain.

Johannes Kepler (1571-1630), Isaac Newton (1642-1727), Albert Einstein (1879-1955), Bernhard Riemann (1826-1866)

Mathematics can bring to neuroscience not only techniques for solving problems, but also new concepts and questions.

Mathematics can also help to propose laws (in the same sense as in Physics), predictive and explanatory, governing the behaviour of the brain.

The map is not the territory. (A. Korzybski) A model is a representation of reality.

Mathematics must be fed and controlled by experiments. A theorem is not a sufficient justification.


This spike train has been generated by a hidden dynamics / stochastic process.

Can we infer this process from analysis of the spike train?

Spike events

Spike state: ω_k(n) ∈ {0, 1}.

Spike pattern: ω(n) = (ω_k(n))_{k=1}^N, e.g. the column (1, 0, 0, 1) for N = 4.

Spike block: ω_m^n = { ω(m) ω(m+1) ... ω(n) }, e.g. for N = 4 neurons and 5 time steps:

1 1 0 1 0
0 1 0 1 0
0 0 0 1 0
0 1 0 1 1

Raster plot: ω ≝ ω_0^T.
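As a concrete illustration (a minimal sketch, with the example block above hard-coded), these objects map naturally onto a binary array:

```python
import numpy as np

# A raster as a binary array: N = 4 neurons (rows), 5 time bins (columns);
# entry [k, n] is the spike state ω_k(n) ∈ {0, 1}. Values are the slide's example block.
raster = np.array([
    [1, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 0, 1, 1],
])

pattern = raster[:, 3]       # spike pattern ω(3): one column of the raster
block = raster[:, 1:4]       # spike block ω_1^3 = { ω(1) ω(2) ω(3) }
print(pattern, block.shape)  # [1 1 1 1] (4, 3)
```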

Main idea

Construct: P_n[ ω(n) | ω_{n−D}^{n−1} ].
1. Assume that spike statistics are generated by a (Markov) process.

2. Assume a parametric form for the transition probabilities of this process (e.g. Linear-Nonlinear, Generalized Linear Model, ...).

3. Fit the parameters (maximum likelihood, Kullback-Leibler divergence minimization, learning methods, ...).

4. Generate sample probabilities and compare to data: does the model fit and predict correctly (confidence plots, Kullback-Leibler divergence, correlations, ...)?

5. Handle correctly the finite size sampling of data (standard statistical ...). A toy sketch of steps 3-4 follows below.
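Here is a deliberately minimal sketch of steps 3-4, assuming the crudest possible parametric model (a single Bernoulli rate) and synthetic data:

```python
import numpy as np

# Steps 3-4 in miniature: fit a single Bernoulli rate p by maximum likelihood
# on training data, then score held-out data with the Kullback-Leibler
# divergence between empirical and model statistics.
rng = np.random.default_rng(0)
spikes = rng.random(10_000) < 0.2            # synthetic one-neuron spike train
train, test = spikes[:5_000], spikes[5_000:]

p_hat = train.mean()                         # ML estimate of the Bernoulli rate
q = test.mean()                              # empirical rate on held-out data
kl = q * np.log(q / p_hat) + (1 - q) * np.log((1 - q) / (1 - p_hat))
print(f"p_hat = {p_hat:.3f}, held-out KL = {kl:.2e} nats")
```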


Example 1: The Generalized-Linear Model (GLM)

Paradigms of rates and receptive fields.


The Generalized-Linear Model (GLM)

λ_k(t | H_t) → P_n[ ω_k(n) = 1 | H_{n−1} ] ≈ λ_k(n | H_{n−1}) Δt = p_k(n)

Assuming conditional independence:

P_n[ ω(n) | ω_{n−D}^{n−1} ] = ∏_{k=1}^{N} p_k(n)^{ω_k(n)} (1 − p_k(n))^{1 − ω_k(n)}
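A small sketch of this factorized likelihood, assuming the probabilities p_k(n) have already been computed from the history by some filter (the constant-rate stand-in below is purely illustrative):

```python
import numpy as np

def glm_log_likelihood(raster, p):
    """Log-likelihood of a raster under the conditionally independent Bernoulli
    factorization ∏_k p_k(n)^{ω_k(n)} (1 − p_k(n))^{1 − ω_k(n)}.
    raster: (N, T) binary array; p: (N, T) probabilities p_k(n), assumed already
    computed from the history H_{n−1} by some filter (GLM, LN, ...)."""
    eps = 1e-12  # numerical guard against log(0)
    return np.sum(raster * np.log(p + eps) + (1 - raster) * np.log(1 - p + eps))

# Toy usage with a constant-rate stand-in for a fitted model:
rng = np.random.default_rng(1)
raster = (rng.random((3, 1000)) < 0.1).astype(int)
print(glm_log_likelihood(raster, np.full((3, 1000), 0.1)))
```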


GLM Experimental Validation

"Modeling the impact of common noise inputs on the network activity of retinal ganglion cells". M.Vidne, Y. Ahmadian, J. Shlens, J. Pillow, J.Kulkarn, A. Litke , E. J. Chichilnisky E.Simoncelli, L. Paninski. Journal of Computational Neuroscience( 2011)


Example 2: Handling correlations with the Maximum Entropy Principle

Measuring the statistics of characteristic spike events: single spikes, pairs, triplets, ..., what else?

E. Schneidman, M.J. Berry, R. Segev, and W. Bialek. "Weak pairwise correlations imply strongly correlated network states in a neural population". Nature, 440(7087):1007-1012, 2006.


Are pairwise correlations significant, although weak?

1. Compute the pairwise correlations.

2. Find the probability distribution which maximizes the statistical entropy and reproduces the observed pairwise correlations ⇒ Gibbs distribution, as sketched below.
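A minimal sketch of this maximum entropy fit, assuming a small population so that all 2^N states can be enumerated exactly (the target moments below are made up; real ones would come from data):

```python
import itertools
import numpy as np

def fit_pairwise_maxent(mu, C, n_iter=5000, lr=0.2):
    """Fit the Gibbs distribution P(ω) ∝ exp(Σ_k h_k ω_k + Σ_{k<j} J_kj ω_k ω_j)
    matching target rates mu[k] = <ω_k> and pairwise moments C[k, j] = <ω_k ω_j>.
    Exact enumeration of all 2^N patterns: only feasible for small N."""
    N = len(mu)
    states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
    h, J = np.zeros(N), np.zeros((N, N))
    for _ in range(n_iter):
        E = states @ h + np.einsum('si,ij,sj->s', states, np.triu(J, 1), states)
        p = np.exp(E - E.max())
        p /= p.sum()
        m = p @ states                          # model rates <ω_k>
        c = states.T @ (states * p[:, None])    # model moments <ω_k ω_j>
        h += lr * (mu - m)                      # likelihood gradient ascent
        J += lr * np.triu(C - c, 1)
    return h, J

mu = np.array([0.20, 0.30, 0.25])               # hypothetical target rates
C = np.array([[0.20, 0.08, 0.06],               # hypothetical pairwise moments
              [0.08, 0.30, 0.09],
              [0.06, 0.09, 0.25]])
h, J = fit_pairwise_maxent(mu, C)
print(h.round(2), J.round(2))
```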



Extensions:

Ganmor-Schneidman-Segev, 2012: taking into account instantaneous triplets and quadruplets;

Marre et al, 2009: one-step-memory pairwise Markov process;

Vasquez et al, 2012: general forms of events can be taken into account, from the general theory of Gibbs distributions and the Perron-Frobenius theorem;

Nasser et al, 2013: Monte Carlo approach to spatio-temporal Gibbs sampling.


A statistical model makes assumptions

GLM:

1. Assumption of conditional independence;
2. Questionable interpretation of parameters.

MaxEnt:

1. Assumption of stationarity;
2. Questionable interpretation of parameters;
3. Which events to choose?
4. Exponential complexity;
5. Overfitting?


What could be the hidden process ?

R. Cofré, B. Cessac: "Dynamics and spike trains statistics in conductance-based Integrate-and-Fire neural networks with chemical and electric synapses", Chaos, Solitons and Fractals, 2013.

The sub-threshold variation of the membrane potential of neuron k at time t is given by:

C_k dV_k/dt = −g_{L,k} (V_k − E_L) − Σ_j g_{kj}(t, ω) (V_k − E_j) − Σ_j ḡ_{kj} (V_k − V_j) + I_k(t),

where C_k is the membrane capacity of neuron k and I_k(t) = i_k^{(ext)}(t) + σ_B ξ_k(t). The chemical synaptic conductances evolve as:

g_{kj}(t) = g_{kj}(t_j^r(ω)) + G_{kj} α_{kj}(t − t_j^r(ω)), for t > t_j^r(ω), with α_{kj}(t) = (t / τ_{kj}) e^{−t / τ_{kj}} H(t).
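A minimal Euler–Maruyama sketch of these sub-threshold dynamics, for a single neuron with one chemical synapse, no gap junctions, and entirely hypothetical constants:

```python
import numpy as np

# Euler–Maruyama integration of the sub-threshold equation for one neuron with
# one chemical synapse and no gap junctions. All constants are hypothetical.
rng = np.random.default_rng(2)
dt, T = 1e-4, 0.5                  # time step and duration (s)
C, gL, EL, Ej = 1e-9, 10e-9, -70e-3, 0.0
G, tau, t_spike = 5e-9, 5e-3, 0.1  # synaptic gain, time constant, presynaptic spike time
sigma_B, i_ext = 20e-12, 50e-12    # noise amplitude and external current

def alpha(t):                      # α(t) = (t/τ) e^{−t/τ} H(t)
    return (t / tau) * np.exp(-t / tau) if t > 0 else 0.0

V = EL
for n in range(int(T / dt)):
    g_syn = G * alpha(n * dt - t_spike)
    dV = (-gL * (V - EL) - g_syn * (V - Ej) + i_ext) / C
    V += dV * dt + (sigma_B / C) * np.sqrt(dt) * rng.standard_normal()
print(f"V(T) = {V * 1e3:.1f} mV")
```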


Mathematical answers

In this example, the hidden process is non-Markovian: it has an infinite memory, although it can be well approximated by a Markov process. Without gap junctions, the transition probabilities can be explicitly computed. Their form is similar to the GLM (conditional independence and interpretation of parameters).

With gap junctions, the conditional independence breaks down. The explicit form of the transition probabilities has not (yet) been computed.

The statistics of spikes is described by a Gibbs distribution (even in the non-stationary case). In the stationary case, it obeys a Maximum Entropy Principle.

Is there any relation between GLM-like models and MaxEnt?

Can we hear the shape of a Maximum Entropy potential?

Two distinct potentials H^(1), H^(2) of range R = D + 1 correspond to the same Gibbs distribution (are "equivalent") if and only if there exists a range-D function f such that (Chazottes-Keller, 2009):

H^(2)(ω_0^D) = H^(1)(ω_0^D) − f(ω_0^{D−1}) + f(ω_1^D) + Δ,   (1)

where Δ = P(H^(2)) − P(H^(1)).

Summing over periodic orbits, we get rid of the function f:

∑_{n=1}^{R} φ(σ^n ω) = ∑_{n=1}^{R} H*(σ^n ω) − R P(H*),   (2)

for a periodic orbit ω of period R (σ is the shift).
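A toy numeric check of why the f-terms drop out: over one period of a periodic orbit, the coboundary −f(ω_0^{D−1}) + f(ω_1^D) telescopes to zero. For simplicity the sketch uses a binary (one-neuron) alphabet and an arbitrary random f:

```python
import numpy as np

# Toy check: along one period of a periodic orbit, the coboundary
# −f(ω_0^{D−1}) + f(ω_1^D) telescopes to zero, so equivalent potentials
# give identical sums over periodic orbits. f is a random range-D function.
rng = np.random.default_rng(3)
D, R = 3, 7
f = rng.standard_normal(2 ** D)   # arbitrary function of a D-block
omega = rng.integers(0, 2, R)     # one period of a binary orbit

def block(start):                 # D-block starting at `start`, read mod R
    return int("".join(str(omega[(start + i) % R]) for i in range(D)), 2)

total = sum(f[block(n + 1)] - f[block(n)] for n in range(R))
print(abs(total) < 1e-12)         # True: the f-terms cancel
```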

Conclusion

Given a set of transition probabilities P[ ω(D) | ω_0^{D−1} ] > 0, there is a unique (up to a constant) MaxEnt potential, written as a linear combination of constraints (averages of spike events) with a minimal number of terms. This potential can be explicitly (and algorithmically) computed.


Combinatorial Explosion of constraints

A GLM-like model typically has O(N²) parameters, where N is the number of neurons.

The equivalent MaxEnt potential has generically 2^{NR} − 2^{N(R−1)} parameters, which are non-linear and redundant functions of the GLM parameters.

⇒ Intractable determination of parameters; stimulus-dependent parameters; overfitting.
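For a sense of scale (the values of N and R below are arbitrary):

```python
# Parameter counts: GLM-like O(N^2) vs equivalent MaxEnt potential 2^{NR} − 2^{N(R−1)}.
N, R = 10, 2                 # arbitrary small example
glm = N * N
maxent = 2 ** (N * R) - 2 ** (N * (R - 1))
print(glm, maxent)           # 100 vs 1047552
```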

BUT


Remark

The MaxEnt approach might be useful if there is some hidden law of nature / symmetry which cancels most of the terms of its expansion.


Finite size effects

Having a nice mathematical model for spike statistics will be really efficient if one can control / predict finite size effects:

Fluctuations (central limit theorem; infinitely divisible distributions);
Errors on parameter estimations;
Convergence rate (large deviations; concentration inequalities);
Statistical tests (Neyman-Pearson, ...).
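A toy illustration of the first item, on synthetic Bernoulli data: the CLT predicts fluctuations of the empirical firing rate of order √(p(1−p)/T):

```python
import numpy as np

# Finite-size illustration: the empirical rate of a Bernoulli(p) spike train
# fluctuates around p with standard deviation ≈ sqrt(p(1-p)/T), per the CLT.
rng = np.random.default_rng(4)
p, T = 0.1, 2000
rates = [(rng.random(T) < p).mean() for _ in range(5000)]
print(np.std(rates), np.sqrt(p * (1 - p) / T))  # ≈ equal
```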


Paradigm changes?


Johannes Kepler

Receptive field

The receptive field of a sensory neuron is a region of space in which the presence of a stimulus will alter the firing of that neuron. (Wikipedia)

http://thebrain.mcgill.ca/flash/d/d_02/d_02_cl/d_02_cl_vis/d_02_cl_vis.html
http://www.luc.edu/faculty/asutter/RecField.html
http://theses.ulaval.ca/archimede/fichiers/24200/ch01.html
