ELECTROPHYSIOLOGICAL MONITORING OF BRAIN MODELS
In-vitro models are used extensively to study the molecular mechanisms that regulate the transmission and processing of information in the brain through synaptic communication. Neurochips have been developed to allow the interrogation of the electrophysiological activity of cells or tissues under various conditions. Multi-electrode arrays (MEAs) enable stimulation and monitoring of neuronal networks taken from brain tissue or created when isolated brain cells reconnect in culture.
Integrate-and-fire model [101,102].
One of the contemporary challenges in neuroscience is to understand how our brain processes information from the external world. For example, our retina receives the light coming from a visual scene and efficiently converts it into trains of impulses (action potentials) sent to the brain via the optic nerve. The visual cortex is then able to decode this flow of information in a fast and efficient way. How does a neuronal network, like the retina, adapt its internal dynamics to stimuli, while providing a response that can be successfully deciphered by another neuronal network? Even if this question is far from being resolved, there exist successful methods and strategies providing partial answers. To some extent, as developed in this paper, this question can be addressed from the point of view of non-equilibrium statistical physics and linear response theory. Although neuronal networks are outside the classical scope of non-equilibrium statistical physics - interactions (synapses) are not symmetric, equilibrium evolution is not time-reversible, there is no known conserved quantity, no Lyapunov function - an extended notion of Gibbs distribution can be proposed, directly constructed from the dynamics, from which the linear response can be derived explicitly, including its dependence on network parameters.
2. Linear Response, Gibbs Distributions and Probabilistic Chains with Unbounded Memory
Neuronal networks can be considered either as dynamical systems (when the dynamics is known) or as spike-generating processes characterized by transition probabilities computed from spike train observations. In the first case, it is natural to seek a linear response from the dynamics itself, using approximations (e.g., mean-field [36]). In the second case, one has to define a probability distribution on the spike trains in order to investigate the effect of a perturbation. In this section, we show how these two approaches are related, making a link between the classical statistical physics approach of linear response, dynamical systems and ergodic theory, and neuronal networks. We then introduce the general formalism of chains with unbounded memory, allowing the handling of non-equilibrium linear response for spiking neuronal networks. All of the material in this section is known in different domains (statistical physics, ergodic theory, stochastic processes, neuronal networks) and is presented here for a better understanding of the next sections.
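The linear response just mentioned can be written schematically. In the notation assumed here (a first-order perturbation δφ of the potential φ, an observable f, and the unperturbed Gibbs measure μ), the shift in the average of f is a sum of time-lagged correlations computed under μ:

```latex
\delta\mu\left[f\right](t) \;\simeq\; \sum_{r \le t} C_{\mu}\!\left[\,f(\cdot,t)\,,\,\delta\phi(\cdot,r)\,\right],
```

where C_μ[·,·] denotes the correlation under the unperturbed measure. This is the fluctuation-dissipation form of the response; the sum over all past times r reflects the (possibly unbounded) memory of the underlying chain.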
A study from 2014 proposed a novel application of the matrix χ concerning the identification of insensitive regions in the parameter space of pairwise maximum entropy models, where the global network statistics is only slightly altered. This study considers only a simplified version of χ (restricted to spatial monomials). Regions of high sensitivity are also identified. This work is done considering a purely spatial pairwise MaxEnt model. The authors argue that this form of degeneracy endows neuronal networks with the flexibility to continuously remodel and explore large regions of parameter space without compromising stability and function. Using the tools exposed in this chapter we can extend this analysis to the spatio-temporal case. Indeed, we are able to compute the matrix χ for spatio-temporal observables. From this matrix the Fisher information matrix can be computed, and the same analysis can reveal interesting mechanisms taking place when time is taken into account. As the correlation matrix χ can also be obtained from neural network models, a more ambitious project is to link these two approaches.
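For a purely spatial pairwise MaxEnt (Ising-like) model, the spatial version of χ discussed above is simply the covariance matrix of the monomials, which for an exponential family coincides with the Fisher information matrix; insensitive parameter directions are then eigenvectors of χ with near-zero eigenvalues. A minimal sketch on a toy network (all parameter values are illustrative, not taken from the cited study):

```python
import itertools
import numpy as np

# Toy pairwise MaxEnt (Ising-like) model on n binary neurons.
# Fields h and couplings J are random illustrative values.
n = 3
rng = np.random.default_rng(0)
h = rng.normal(0.0, 0.5, n)
J = np.triu(rng.normal(0.0, 0.3, (n, n)), 1)

states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

def energy(s):
    return h @ s + s @ J @ s

p = np.exp([energy(s) for s in states])
p /= p.sum()                      # Gibbs probabilities of the 2^n states

# Sufficient statistics (spatial monomials): s_i and s_i s_j for i < j
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
M = np.array([np.concatenate([s, [s[i] * s[j] for i, j in pairs]])
              for s in states])

mean = p @ M
# chi = covariance matrix of the monomials; for an exponential family
# this coincides with the Fisher information matrix of the model
chi = (M - mean).T @ (p[:, None] * (M - mean))

# Near-zero eigenvalues of chi span the insensitive parameter directions
eigvals, eigvecs = np.linalg.eigh(chi)
```

The spatio-temporal extension evoked in the text would enlarge the set of monomials with time-shifted products of spike variables; the covariance computation is formally identical.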
if the initial data is concentrated enough near the firing potential, denoted by V_F in the sequel.
This also remains true when the discrete nature of interactions is kept. The intuitive explanation is that each firing neuron induces a discharge of the others; this increases the activity and consequently the discharge rate of the full network. Finally, synchronous states, where the firing rate does not tend asymptotically to a constant in time but the network produces spontaneous activity, have also been observed in several neuronal network models: systems of coupled nonlinear oscillators, inhibitory NNLIF with synaptic integration, excitatory-inhibitory coupled NNLIF, Fokker-Planck equations for uncoupled neurons [14, 15], kinetic models [19, 6] and elapsed-time models.
Dynamical properties of fMRI functional connectivity in neuronal networks mediating consciousness
Raphaël Liégeois 1, Mohamed Bahri 2, Mattia Zorzi 1,3, Steven Laureys 2 and Rodolphe Sepulchre 1,3
1 Department of Electrical Engineering and Computer Science, University of Liège, Belgium 2 Coma Science Group, Cyclotron Research Centre, University of Liège, Belgium 3 Department of Engineering, Trumpington Street, University of Cambridge, United Kingdom
monolayers have been shown to support cultures of hippocampal neurons – arguably one of the most environmentally demanding cell types. In this case nanodiamond monolayers promote the attachment and formation of functional neuronal networks even in the absence of the otherwise prerequisite procedure of laminating culture surfaces with adhesion-promoting extracellular matrix (ECM) proteins (e.g. laminin) prior to cell seeding. Moreover, DND monolayers show a remarkable ability to support neuronal networks, in stark contrast to the similar material of nanocrystalline diamond (NCD) thin films. Considering the difference in the ability of DNDs and NCD to support neuronal cultures, yet the seemingly similar surface properties of these materials, the
C. Spontaneous seizure: From interictal to ictal activity
After SE, animals experience a latent period during which complex network reorganizations take place. During this period, although neuronal networks exhibit interictal-like activity [38], there are no spontaneous seizures. The latter occur during the chronic phase, a few days or weeks after SE. They are difficult to predict; the brain appears to operate "normally" before an abrupt change happens, characterized by 2- to 10-fold larger amplitude oscillations, which is the seizure. Our model reproduces the most important features of such transitions, i.e. an abrupt fast firing discharge pattern at seizure onset, and a decrease of spike-wave frequency towards the end of the seizure. We predict that interictal spikes and spike-wave discharges are generated from synchronized activity of inhibitory neurons, and are affected by synaptic coupling strengths within and between the two populations of neurons. Fig 6 displays a simulation of about a minute of activity in which a seizure takes place, together with its experimental counterpart. The model produces the different states of seizure evolution without any change of parameters; the states include pre-ictal population spikes, abrupt transitions to tonic firing, and seizure offset. Hysteresis effects have been predicted in the Epileptor [11] and are preserved in the coupled neuronal population dynamics relayed by the slow permittivity variable.
Fig 5. Population activities at different stages of status epilepticus in simulated and experimental traces. I, II, III, and IV correspond to the areas of the parameter space spanned in Fig 3. P1 (excitatory) and P2 (inhibitory) are neural populations' raster plots, with activation threshold at 0 mV. Black points are action potentials, of which the firing rate and synchronization properties change according to the different stages of SE. The mean activity is calculated as the sum of the average of P1 and P2 neuron activity, with 80% and 20% contribution respectively.
All experimental traces are recorded from the same rat and shown here before and after chemically-induced SE.
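The qualitative ingredients described above (an excitatory population P1, an inhibitory population P2, and a slow permittivity-like variable modulating excitability) can be caricatured in a rate model. This is a generic sketch with made-up parameters and a made-up sigmoid gain, not the paper's actual model:

```python
import numpy as np

def two_population_rates(t_end=100.0, dt=0.01, w_ee=1.6, w_ei=1.0,
                         w_ie=1.5, w_ii=0.5, tau=1.0, tau_s=50.0, seed=0):
    """Toy rate model: excitatory (e) and inhibitory (i) populations
    coupled to a slow activity-dependent variable s that lowers the
    excitatory drive on a much longer timescale (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    e, i, s = 0.1, 0.1, 0.0
    E, I, S = np.empty(n), np.empty(n), np.empty(n)
    f = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))  # sigmoid gain
    for k in range(n):
        # fast E-I dynamics, with the slow variable s subtracting drive
        e += dt / tau * (-e + f(w_ee * e - w_ei * i - s)) \
             + 0.01 * np.sqrt(dt) * rng.standard_normal()
        i += dt / tau * (-i + f(w_ie * e - w_ii * i))
        s += dt / tau_s * (-s + e)     # slow adaptation of excitability
        E[k], I[k], S[k] = e, i, s
    return E, I, S

E, I, S = two_population_rates()
```

Because the slow variable evolves on a timescale tau_s much longer than tau, transitions between low- and high-activity states can occur without any parameter change, which is the mechanism the text appeals to.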
The course of our developments led us to cast aside the assumption of full connectivity or exchangeability between neurons. Incidentally, this work therefore shows that the notion of exchangeability, widely used in large stochastic particle systems, can be significantly weakened in favor of statistically equivalent, and more structured, global exchangeability properties such as translation invariance. This opens the way to developing these ideas towards architectures invariant under the action of specific groups of transformations. This constitutes an active line of research that we are currently developing. This method also has a number of possible implications in neuroscience and in complex systems more generally, and may help in understanding the dynamics of large neural networks. Enriching this model by considering different populations in the applications section is a straightforward extension of the manuscript, and analyzing those results would allow going even deeper into the analysis of neuronal networks and of their macroscopic synchronization as an effect of random pairwise delays and synaptic weights. Considering different kinds of architectures is also a possible path to follow and could bring new relationships with specific cortical functions. A deep question is whether one can obtain information on the microscopic configurations related to the macroscopic regimes observed. This motivates developing the analysis of the presence of structured activity (localized bumps, traveling waves, traveling pulses) and its probability of appearance as a function of disorder, noise and the parameters of the system. This is an exciting question well worth investigating. One limitation of the qualitative analysis provided here is that the moment reduction is rigorously exact only in very specific models where solutions are Gaussian. Such models do not reproduce the excitability properties of the cells. Extending this analysis to excitable systems, i.e.
analyzing equation (2.3.2) with nonlinear dynamics and nonlinear interactions, is a deep and challenging mathematical question in the domain of stochastic processes and functional analysis.
Abstract: Event-scheduling algorithms can compute in continuous time the next occurrence of the points (events) of a counting process based on their current conditional intensity. In particular, event-scheduling algorithms can be adapted to simulate the activity of finite neuronal networks. These algorithms are based on Ogata's thinning strategy [17], which always needs to simulate the whole network to access the behaviour of one particular neuron of the network. On the other hand, for discrete-time models, theoretical algorithms based on the Kalikow decomposition can pick influencing neurons at random and perform a perfect simulation (meaning without approximations) of the behaviour of one given neuron embedded in an infinite network, at every time step. These algorithms are currently not computationally tractable in continuous time. To solve this problem, an event-scheduling algorithm with Kalikow decomposition is proposed here for the sequential simulation of point-process neuronal models satisfying this decomposition. This new algorithm is applied to infinite neuronal networks whose finite-time simulation is a prerequisite to realistic brain modeling.
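Ogata's thinning strategy mentioned in the abstract works by drawing candidate points from a dominating homogeneous Poisson process of rate Λ and accepting each candidate with probability λ(t)/Λ, where λ is the conditional intensity. A minimal sketch for a single bounded intensity (the toy intensity and all parameters are illustrative, not the paper's model):

```python
import math
import random

def ogata_thinning(intensity, upper_bound, t_end, seed=0):
    """Simulate event times on [0, t_end] for a point process whose
    conditional intensity (t, history) -> intensity(t, history) is
    dominated by the constant upper_bound."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        # candidate point from the dominating Poisson process of rate Lambda
        t += rng.expovariate(upper_bound)
        if t > t_end:
            return events
        # accept the candidate with probability lambda(t) / Lambda
        if rng.random() * upper_bound <= intensity(t, events):
            events.append(t)

# Toy bounded intensity: baseline mu plus excitation from the most recent
# event only, so that lambda(t) <= mu + alpha gives a valid global bound.
def toy_intensity(t, history, mu=1.0, alpha=0.5, beta=2.0):
    if not history:
        return mu
    return mu + alpha * math.exp(-beta * (t - history[-1]))

events = ogata_thinning(toy_intensity, upper_bound=1.5, t_end=50.0)
```

For a network, λ would be the intensity of one neuron conditioned on the whole network history, which is exactly why plain thinning forces the simulation of every neuron; the Kalikow decomposition replaces this global history by a randomly chosen finite neighbourhood.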
We are developing planar patch-clamp array technology in an attempt to combine key benefits of both conventional patch-clamp and MEAs on a chip [10,101]. The concept enables simultaneous high-resolution patch-clamp interrogation of individual cultured neurons at multiple sites in communicating neuronal networks, where individual neurons are probed through apertures that connect to dedicated subterranean microfluidic channels. Neurons are first aligned to these apertures by stamped chemical adhesion or guidance cues, and can subsequently form synaptic connections
Despite these alterations in their passive membrane properties, irradiated neurons were still capable of firing action potentials similar to those of homologous neurons in non-irradiated rats. This is consistent with the persistence of high-frequency, current-evoked, neuronal discharges in cortical slices from irradiated rats (Zhou et al., 2009). However, in our study, the responsiveness of irradiated cells to a given stimulus was substantially attenuated, indicating a global decrease in their intrinsic excitability that was mainly due to the membrane hyperpolarization. The spontaneous activity of irradiated S1Cx cells is also considerably dampened in-between and during the RPOs. In particular, the depolarizing background synaptic activities were diminished in amplitude and frequency, a finding in accordance with the reduced rate of excitatory synaptic events found in irradiated neocortical networks (Xiang et al., 2006). Here, the decline in the excitatory synaptic drive of irradiated S1Cx was likely due to a partial loss of local excitatory neurons and to an alteration of axonal myelination affecting the propagation of synaptic activities. These structural changes together with the membrane hyperpolarization and the shorter membrane time constant were likely responsible for the low spontaneous firing and the lack of paroxysmal depolarizing shifts. Consequently, this may have precluded the synchronization among cortical cells and the generation of fully developed SWDs.
I.2. SIMPLIFIED SINGLE NEURON MODELS
variable onset potential, cannot be described by the single-compartment HH model. One hypothesis that was suggested was that these behaviours can be reproduced by cooperative activation of sodium channels in the cell membrane, as opposed to the independent channel opening in the HH model [Naundorf et al., 2006] [Volgushev et al., 2008]. However, there is no experimental evidence to support this hypothesis. Another hypothesis proposes a multi-compartmental HH model where the action potential backpropagates to the soma after being initiated in the axon. During its backpropagation to the soma, the action potential is sharpened by the active conductances in the axon, thus resulting in the kink in the action potential observed in the soma [Yu et al., 2008]. However, there is evidence indicating that it is not just the action potential in the soma that is sharp but also the initiation of the action potential. To explain this, Brette hypothesized that the kink comes from the specific morphology of the neuron [Brette, 2013]. In particular, he showed using biophysical modelling that the kink can be reproduced by placing the sodium channels in the thin axon. If the distance to the soma exceeds a critical value, then these channels open abruptly as a function of the somatic voltage. Another criticism of the HH model comes from studies in hippocampal mossy fiber neurons. It was shown that the energy demand per action potential was only 1.3 times the theoretical minimum [Alle et al., 2009], as opposed to the 4 times the theoretical minimum predicted by the HH model [Hodgkin, 1975]. The ionic conductance parameters of Na+ and K+ channels in non-myelinated axons are such that the action potentials minimize the metabolic demands. Moreover, the analysis of the underlying dynamics is complicated in the HH model. It is quite a complex model and hence not well suited for analytical derivations.
For this reason, simple models were explored for modeling the neuron dynamics. These models try to capture the essential features of the neuronal dynamics. For instance, in neocortical slices of rats, when the membrane potential of a neuron reaches about -55 to -50 mV, it typically fires an action potential as shown in Figure I.5. During the action potential, the voltage increases rapidly in a stereotypical trajectory and then hyperpolarizes rapidly to a value below the baseline level before coming back to the baseline level (refractoriness). The mechanism behind the action potential is captured by the dynamics of the Na+ and the K+ channels. To simplify the model and accelerate numerical simulations, we can add an action potential artificially each time the neuron reaches a threshold. This is the basis of the integrate-and-fire models described next.
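The integrate-and-fire idea just described - integrate the membrane equation, and replace the stereotyped action potential by a threshold-and-reset rule - can be sketched as follows (parameter values are illustrative, loosely matching the roughly -50 mV threshold mentioned above):

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau_m=10.0, v_rest=-70.0, v_reset=-75.0,
                 v_thresh=-50.0, t_ref=2.0):
    """Euler integration of a leaky integrate-and-fire neuron.
    Voltages in mV, times in ms; I is the input drive per time step."""
    v = v_rest
    refractory = 0.0
    spikes, trace = [], []
    for k, i_ext in enumerate(I):
        t = k * dt
        if refractory > 0.0:
            refractory -= dt            # absolute refractory period
        else:
            v += dt / tau_m * (-(v - v_rest) + i_ext)
            if v >= v_thresh:           # threshold crossing: record a spike
                spikes.append(t)
                v = v_reset             # artificial reset replaces the AP shape
                refractory = t_ref
        trace.append(v)
    return np.array(trace), spikes

# 500 ms of constant drive strong enough to reach threshold repeatedly
trace, spikes = simulate_lif(np.full(5000, 25.0))
```

The reset below v_rest followed by a brief refractory period mimics the post-spike hyperpolarization described above, without modelling the Na+/K+ channel dynamics.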
An Ornstein Uhlenbeck process with jumps as a neuronal model
Abstract: Chapter 5 studies the jump-diffusion model in which the diffusion is given by an Ornstein-Uhlenbeck process. Similarly to the previous chapter, we analyze here the effects of jump processes whose temporal distribution follows an exponential or an inverse Gaussian distribution. In both cases we observe multimodal distributions. In a restricted range of the parameter space we observe the presence of a new phenomenon, described here for the first time. It is a resonant-like phenomenon due to the composition of the diffusive process and of the jump processes corresponding to excitatory and inhibitory afferents. This observation suggests that for certain intensities of the afferent jump processes ("background noise") a neuron can participate in several cell assemblies.
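A jump-diffusion of the kind studied in this chapter can be sketched with an Euler-Maruyama scheme: an Ornstein-Uhlenbeck diffusion driven by excitatory and inhibitory Poisson jump trains (Poisson arrivals correspond to the exponential inter-jump case; all parameter values here are illustrative, not those of the chapter):

```python
import numpy as np

def ou_with_jumps(t_end=200.0, dt=0.01, theta=1.0, mu=0.0, sigma=0.5,
                  rate_exc=2.0, rate_inh=1.0, a_exc=0.4, a_inh=-0.6, seed=1):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process with
    mean-reversion rate theta, driven by excitatory jumps of size a_exc
    (rate rate_exc) and inhibitory jumps of size a_inh (rate rate_inh)."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    v = np.empty(n)
    v[0] = mu
    for k in range(1, n):
        drift = theta * (mu - v[k - 1]) * dt
        diffusion = sigma * np.sqrt(dt) * rng.standard_normal()
        # Poisson numbers of excitatory / inhibitory jumps in this step
        jumps = (a_exc * rng.poisson(rate_exc * dt)
                 + a_inh * rng.poisson(rate_inh * dt))
        v[k] = v[k - 1] + drift + diffusion + jumps
    return v

path = ou_with_jumps()
```

A histogram of such a path is where the multimodal distributions discussed in the chapter would appear; the inverse Gaussian inter-jump case would replace the per-step Poisson counts by explicitly sampled inter-jump times.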
Our application of this methodology provides a description of spatially-structured networks with local oscillatory dynamics in the SSO regime, based on E-I recurrent interactions. Our first general finding is that, as previously obtained in the Wilson-Cowan rate model framework [21, 22, 27], the details of long-range connectivity matter. Based on observations in several neural areas [15–17, 61, 62], we have confined ourselves to considering long-range excitatory connections. Long-range excitatory connections targeting only distant excitatory neurons result in more complex dynamical properties than long-range excitatory connections that have the same connection specificity as local ones. Experimental investigations of the targeting specificity of long-range excitatory connections appear rather scarce at present (but see [60, 63]). Results such as ours and previous ones [21, 22, 27] will hopefully provide an incentive to gather further data on this question, for which experimental tools are now available.
Furthermore, we have shown that the stimulation of single driver LC cells was not only able to alter the collective activity but also to deeply modify the role of neurons in the network, such that some neurons can be promoted to the role of driver hubs or driver hubs can even lose their role (see Fig 6(b) and 6(c)). At variance with purely excitatory networks [12], the synchronized dynamics of the present network, composed of excitatory and inhibitory neurons, is less vulnerable to targeted attacks on the hubs [46, 47], as demonstrated by the fact that different firing sequences of hub neurons can lead to population burst ignitions (see Fig 3(e)) and that hubs can be easily substituted in their role by driver LC cells when properly stimulated. The robustness of the synchronized dynamics is confirmed by the fact that the presence of channel noise, up to quite large noise strengths, does not substantially modify the composition of either the functional clique or the LC drivers (for more details on the analysis see S4 Text and S11 Fig).