

HAL Id: hal-00842309

https://hal.inria.fr/hal-00842309

Submitted on 8 Jul 2013



POSTER PRESENTATION

Open Access

Beyond dynamical mean-field theory of neural networks

Massimiliano Muratori*, Bruno Cessac

NeuroMathComp team (INRIA, UNSA LJAD), Sophia Antipolis, France

From: Twenty Second Annual Computational Neuroscience Meeting: CNS*2013

Paris, France. 13-18 July 2013

We consider a set of N firing-rate neurons with discrete-time dynamics and a leak term γ. The nonlinearity of the sigmoid is controlled by a gain parameter g, and each neuron has a firing threshold θ, Gaussian distributed (thresholds are uncorrelated). The network is fully connected, with correlated Gaussian random synaptic weights of mean zero and covariance matrix C/N. When the synaptic weights are uncorrelated, the dynamic mean-field theory developed in [1-3] allows us to draw the bifurcation diagram of the model in the thermodynamic limit (N tending to infinity): in particular, there is a sharp transition from a fixed point to chaos, characterized by the maximal Lyapunov exponent, which is known analytically in the thermodynamic limit. The bifurcation diagram is drawn in Figure 1A. However, mean-field theory is exact only in the thermodynamic limit and when the synaptic weights are uncorrelated. What deviations from mean-field theory are observed when one departs from these hypotheses?

We first studied the finite-size dynamics. For finite N, the maximal Lyapunov exponent has a plateau at 0, corresponding to a transition to chaos by quasi-periodicity, where the dynamics is at the edge of chaos (Figure 1B). This plateau disappears in the thermodynamic limit. Thus, mean-field theory neglects an important finite-size effect, since neuronal dynamics at the edge of chaos has strong implications for the learning performance of the network [4]. We also studied the effect of a weak correlation (of amplitude ε) between synaptic weights on the dynamics. Even when ε is small, one detects an important deviation in the maximal Lyapunov exponent (Figure 1C).

Figure 1. (A) Bifurcation map: 1, one stable fixed point; 2, two stable fixed points; 3, one fixed point and one strange attractor; 4, one strange attractor. (B) Finite-N and mean-field maximal Lyapunov exponent (θ = 0.1, γ = 0). (C) Finite-N maximal Lyapunov exponent with weak correlation (ε = 0.01) and mean-field maximal Lyapunov exponent without correlation (ε = 0).

© 2013 Muratori and Cessac; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
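The abstract does not spell out the update rule, so the sketch below assumes a standard discrete-time rate model in the family of [1,2], x(t+1) = γx(t) + J·S(x(t)) − θ with sigmoid S(u) = (1 + tanh(gu))/2, and estimates the maximal Lyapunov exponent by propagating a renormalized tangent vector through the Jacobian γI + J·diag(S′(x)). The correlated-weight construction (correlation of amplitude ε between reciprocal pairs J_ij, J_ji) is one simple Gaussian ensemble with non-trivial covariance, not necessarily the covariance matrix C the authors use; all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# --- illustrative parameters (the abstract fixes only the model family) ---
N = 500                                  # network size
gamma = 0.0                              # leak term (Figure 1B uses gamma = 0)
g = 4.0                                  # sigmoid gain (hypothetical value)
theta = rng.normal(0.1, 0.1, size=N)     # Gaussian, uncorrelated thresholds


def correlated_weights(N, eps):
    """Gaussian weights, mean 0, variance ~1/N, with correlation eps between
    reciprocal pairs (J_ij, J_ji): one simple correlated ensemble, not
    necessarily the covariance C of the abstract. (The diagonal variance
    differs slightly; this is negligible at large N.)"""
    A = rng.standard_normal((N, N))
    sym = (A + A.T) / np.sqrt(2.0)       # symmetric part
    asym = (A - A.T) / np.sqrt(2.0)      # antisymmetric part
    J = (np.sqrt(1 + eps) * sym + np.sqrt(1 - eps) * asym) / np.sqrt(2.0)
    return J / np.sqrt(N)


def S(u):
    return 0.5 * (1.0 + np.tanh(g * u))  # sigmoid transfer function


def dS(u):
    return 0.5 * g * (1.0 - np.tanh(g * u) ** 2)  # its derivative


def max_lyapunov(J, n_transient=1000, n_steps=5000):
    """Estimate the maximal Lyapunov exponent of
    x(t+1) = gamma * x(t) + J @ S(x(t)) - theta
    by renormalized tangent-vector propagation."""
    x = rng.uniform(-1.0, 1.0, size=N)
    for _ in range(n_transient):         # discard the transient
        x = gamma * x + J @ S(x) - theta
    v = rng.standard_normal(N)
    v /= np.linalg.norm(v)
    acc = 0.0
    for _ in range(n_steps):
        v = gamma * v + J @ (dS(x) * v)  # Jacobian gamma*I + J diag(S'(x)) applied to v
        x = gamma * x + J @ S(x) - theta
        growth = np.linalg.norm(v)
        acc += np.log(growth)
        v /= growth
    return acc / n_steps


for eps in (0.0, 0.01):                  # uncorrelated vs. weakly correlated weights
    print(f"eps = {eps}: lambda_max ~ {max_lyapunov(correlated_weights(N, eps)):.3f}")

Scanning g and N with such a sketch is one way to probe, numerically, the finite-size plateau of the maximal Lyapunov exponent near 0 and the ε-induced deviation that the abstract reports.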

Acknowledgements

This work was supported by INRIA, ERC-NERVI (number 227747), KEOPS (ANR-CONICYT), European Union project FP7-269921 (BrainScales), Renvision (grant agreement No. 600847), and Mathemacs (FP7-ICT-2011.9.7).

Published: 8 July 2013

References

1. Cessac B, Doyon B, Quoy M, Samuelides M: Mean-field equations, bifurcation map and route to chaos in discrete time neural networks. Physica D 1994, 74:24-44.

2. Cessac B: Increase in Complexity in Random Neural Networks. J Phys I France 1995, 5:409-432.

3. Moynot O, Samuelides M: Large deviations and mean-field theory for asymmetric random recurrent neural networks. Probability Theory and Related Fields 2002, 123:41-75.

4. Legenstein R, Maass W: Edge of Chaos and Prediction of Computational Performance for Neural Circuit Models. Neural Networks 2007, 20:323-334.

doi:10.1186/1471-2202-14-S1-P60

Cite this article as: Muratori and Cessac: Beyond dynamical mean-field theory of neural networks. BMC Neuroscience 2013 14(Suppl 1):P60.

