
HAL Id: hal-00784453

https://hal.inria.fr/hal-00784453

Submitted on 4 Feb 2013

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Back-engineering of spiking neural networks parameters

Horacio Rostro-Gonzalez, Bruno Cessac, Juan Vasquez, Thierry Viéville

To cite this version:

Horacio Rostro-Gonzalez, Bruno Cessac, Juan Vasquez, Thierry Viéville. Back-engineering of spiking

neural networks parameters. BMC Neuroscience, BioMed Central, 2009, 10 (Suppl 1), pp.P289.

⟨hal-00784453⟩


BMC Neuroscience

Open Access

Poster presentation

Back-engineering of spiking neural networks parameters

Horacio Rostro-Gonzalez*1, Bruno Cessac1,2, Juan Carlos Vasquez1 and Thierry Viéville3

Address: 1NEUROMATHCOMP, INRIA Sophia-Antipolis Méditerranée, France, 2LJAD, University of Nice, Sophia, France and 3CORTEX, INRIA-LORIA, France

Email: Horacio Rostro-Gonzalez* - hrostro@sophia.inria.fr
* Corresponding author

Introduction

We consider the deterministic evolution of a time-discretized spiking network of neurons with delayed connection weights, using a generalized integrate-and-fire (gIF) neuron model with synapses [1]. The purpose is to study a class of algorithmic methods able to calculate the proper parameters (weights and delayed weights) allowing the reproduction of a spike train produced by an unknown neural network.
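Such a time-discretized network with delayed weights can be sketched as follows. The update rule, variable names, and parameter values below are illustrative assumptions (a minimal leak-and-reset dynamics), not the exact gIF equations of [1]:

```python
import numpy as np

def simulate_gif(W, I_ext, gamma=0.9, theta=1.0, T=50):
    """One plausible discretization of an IF network with delayed
    connection weights.  W[i, j, d] is the weight from neuron j to
    neuron i at delay d+1 steps; gamma is the leak factor and theta
    the firing threshold.  (Names and the exact update rule are
    illustrative assumptions.)"""
    N, _, D = W.shape
    V = np.zeros(N)                      # membrane potentials
    Z = np.zeros((T, N), dtype=int)      # spike raster, Z[t, i] in {0, 1}
    for t in range(T):
        # A spike at t-1 resets the potential before the leaky update.
        reset = 1 - Z[t - 1] if t > 0 else np.ones(N)
        # Synaptic input: weighted sum of past spikes over all delays.
        syn = sum(W[:, :, d] @ Z[t - 1 - d] for d in range(min(D, t)))
        V = gamma * V * reset + syn + I_ext
        Z[t] = (V >= theta).astype(int)
    return Z

rng = np.random.default_rng(0)
Z = simulate_gif(rng.normal(0.0, 0.5, size=(4, 4, 3)), I_ext=0.25)
```

The raster Z produced by such an "unknown" network is the observation from which the parameters are to be recovered.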

Methods

The problem is known to be NP-hard when delays are to be calculated. We propose here a reformulation, now expressed as a Linear-Programming (LP) problem, thus allowing an efficient resolution. Clearly, this does not change the worst-case complexity of the problem, whereas the practical complexity is now dramatically reduced at the implementation level. More precisely, we make explicit the fact that the back-engineering of a spike train (i.e., finding a set of parameters, given a set of initial conditions) is a Linear (L) problem if the membrane potentials are observed, and an LP problem if only spike times are observed, for a gIF model. Numerical robustness is discussed. We also explain how it is the use of a generalized IF neuron model, instead of a leaky IF model, that allows this algorithm to be derived. Furthermore, we point out that the L or LP adjustment mechanism is distributed and has the same architecture as a "Hebbian" rule. A step further, this paradigm generalizes easily to the design of input-output spike train transformations.
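A minimal sketch of the spikes-only case: given the raster (and hence the reset times), the membrane potential is linear in the unknown weights, so each time step yields one linear inequality, and maximizing a threshold margin mu gives an LP. Everything below (the toy dynamics, the margin variable, the weight bound B, and scipy's solver standing in for GLPK) is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import linprog

N, D, T = 3, 2, 30           # neurons, delays, time steps (toy sizes)
gamma, theta = 0.9, 1.0      # leak factor, firing threshold

def simulate(W, I):
    """Toy discrete-time IF dynamics with delayed weights (assumed model)."""
    V = np.zeros(N)
    Z = np.zeros((T, N), dtype=int)
    for t in range(T):
        reset = 1 - Z[t - 1] if t > 0 else np.ones(N)
        syn = sum(W[:, :, d] @ Z[t - 1 - d] for d in range(min(D, t)))
        V = gamma * V * reset + syn + I
        Z[t] = (V >= theta).astype(int)
    return Z

def fit_neuron(i, Z, B=5.0):
    """Recover neuron i's incoming weights and constant input from the
    raster alone: maximize the margin mu subject to V_i(t) >= theta + mu
    at spike times and V_i(t) <= theta - mu otherwise.  Given the raster,
    V_i(t) is linear in the unknowns, so this is an LP."""
    A_ub, b_ub = [], []
    for t in range(T):
        past = np.nonzero(Z[:t, i])[0]
        tau = past[-1] if past.size else -1        # last reset before t
        a = np.zeros(N * D + 1)                    # [weights, constant input]
        for s in range(tau + 1, t + 1):            # unroll leak since reset
            g = gamma ** (t - s)
            a[-1] += g
            for d in range(D):
                if s - 1 - d >= 0:
                    a[np.arange(N) * D + d] += g * Z[s - 1 - d]
        if Z[t, i]:                                # spike: -a.x + mu <= -theta
            A_ub.append(np.append(-a, 1.0)); b_ub.append(-theta)
        else:                                      # silent: a.x + mu <= theta
            A_ub.append(np.append(a, 1.0)); b_ub.append(theta)
    c = np.zeros(N * D + 2); c[-1] = -1.0          # maximize mu
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(-B, B)] * (N * D + 1) + [(0.0, 1.0)])
    return res.x[:N * D].reshape(N, D), res.x[N * D], res.x[-1]

rng = np.random.default_rng(0)
W_true = rng.normal(0.0, 0.5, size=(N, N, D))
Z = simulate(W_true, 0.25)                         # "unknown" network's raster

fits = [fit_neuron(i, Z) for i in range(N)]        # one independent LP per neuron
W_hat = np.stack([w for w, _, _ in fits])
I_hat = np.array([I for _, I, _ in fits])
```

Note that each neuron's LP is independent of the others, which is the distributed, "Hebbian"-like structure mentioned above; any parameters achieving mu > 0 reproduce the raster exactly, even if they differ from the original ones.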

Results

Numerical implementations are proposed in order to verify that it is always possible to simulate an expected spike train. The results obtained show that this is true, except for singular cases. In a first experiment, we consider the linear problem and use the singular value decomposition (SVD) to obtain a solution, allowing a better understanding of the geometry of the problem. When the aim is to find the proper parameters from the observation of spikes only, we consider the related LP problem, and the numerical solutions are derived thanks to the well-established improved simplex method as implemented in the GLPK library. Several variants and generalizations are carefully discussed, showing the versatility of the method.
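The first experiment (potentials observed) can be sketched as follows: each time step gives one linear equation per neuron in the unknown weights, and the minimum-norm solution can be read off an SVD, whose near-zero singular values expose the unconstrained directions of the problem's geometry. The toy dynamics, names, and constants here are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

N, D, T = 3, 2, 40
gamma, theta, I_ext = 0.9, 1.0, 0.25    # leak, threshold, known input

def simulate(W):
    """Toy discrete-time IF dynamics; records raster and potentials."""
    V, Z, Vh = np.zeros(N), np.zeros((T, N), dtype=int), np.zeros((T, N))
    for t in range(T):
        reset = 1 - Z[t - 1] if t > 0 else np.ones(N)
        syn = sum(W[:, :, d] @ Z[t - 1 - d] for d in range(min(D, t)))
        V = gamma * V * reset + syn + I_ext
        Vh[t] = V
        Z[t] = (V >= theta).astype(int)
    return Z, Vh

rng = np.random.default_rng(1)
W_true = rng.normal(0.0, 0.5, size=(N, N, D))
Z, Vh = simulate(W_true)

# Observed potentials turn each step t >= 1 into a linear equation:
#   V_i(t) - gamma * V_i(t-1) * (1 - Z[t-1, i]) - I_ext
#       = sum_{j, d} W[i, j, d] * Z[t-1-d, j]
A = np.zeros((T - 1, N * D))
for t in range(1, T):
    for d in range(D):
        if t - 1 - d >= 0:
            A[t - 1, np.arange(N) * D + d] = Z[t - 1 - d]
rhs = Vh[1:] - gamma * Vh[:-1] * (1 - Z[:-1]) - I_ext   # one column per neuron

# Minimum-norm solution via SVD; small singular values reveal the
# directions in weight space that the observed raster does not constrain.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
S_inv = np.where(S > 1e-10 * S.max(), 1.0 / S, 0.0)
W_hat = np.stack([(Vt.T @ (S_inv * (U.T @ rhs[:, i]))).reshape(N, D)
                  for i in range(N)])
Z_hat, _ = simulate(W_hat)
```

Because the system is consistent, the recovered weights reproduce the observed synaptic inputs exactly, and hence the raster, even when the SVD shows the weights themselves are not uniquely determined.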

Discussion

Learning the parameters of a neural network model is a complex issue. In a biological context, this learning mechanism is mainly related to synaptic weight plasticity and, as far as spiking neural networks are concerned, to STDP [2]. In the present study, the point of view is quite different, since we consider supervised learning in order to implement the previous capabilities. To what extent we can "back-engineer" the neural network parameters in order to constrain the neural network activity is the key question addressed here.

Acknowledgements

Partially supported by the ANR MAPS & the MACCAC ARC projects.

from Eighteenth Annual Computational Neuroscience Meeting: CNS*2009, Berlin, Germany. 18–23 July 2009
Published: 13 July 2009

BMC Neuroscience 2009, 10(Suppl 1):P289 doi:10.1186/1471-2202-10-S1-P289

Eighteenth Annual Computational Neuroscience Meeting: CNS*2009. Editor: Don H Johnson. Meeting abstracts – a single PDF containing all abstracts in this Supplement is available at http://www.biomedcentral.com/content/files/pdf/1471-2202-10-S1-full.pdf

This abstract is available from: http://www.biomedcentral.com/1471-2202/10/S1/P289
© 2009 Rostro-Gonzalez et al; licensee BioMed Central Ltd.



References

1. Cessac B, Viéville T: On dynamics of integrate-and-fire neural networks with adaptive conductances. Front Comput Neurosci 2008, 2:2. doi:10.3389/neuro.10.002.2008.

2. Bohte SM, Mozer MC: Reducing the variability of neural responses: A computational theory of spike-timing-dependent plasticity. Neural Computation 2007, 19:371-403.

3. Baudot P: Nature is the code: high temporal precision and low
