
HAL Id: hal-00784425

https://hal.inria.fr/hal-00784425

Submitted on 4 Feb 2013



Stability of the stationary solutions of neural field equations with propagation delays

Romain Veltz, Olivier Faugeras

To cite this version:

Romain Veltz, Olivier Faugeras. Stability of the stationary solutions of neural field equations with propagation delays. Journal of Mathematical Neuroscience, BioMed Central, 2011, 1 (1), pp.1-28.

⟨10.1186/2190-8567-1-1⟩. ⟨hal-00784425⟩


DOI 10.1186/2190-8567-1-1

RESEARCH Open Access

Stability of the stationary solutions of neural field equations with propagation delays

Romain Veltz · Olivier Faugeras

Received: 22 October 2010 / Accepted: 3 May 2011 / Published online: 3 May 2011

© 2011 Veltz, Faugeras; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License.

R. Veltz
IMAGINE/LIGM, Université Paris Est, Paris, France

R. Veltz (corresponding author) · O. Faugeras
NeuroMathComp team, INRIA, CNRS, ENS Paris, Paris, France
e-mail: romain.veltz@sophia.inria.fr

Abstract In this paper, we consider neural field equations with space-dependent delays. Neural fields are continuous assemblies of mesoscopic models arising when modeling macroscopic parts of the brain. They are modeled by nonlinear integro-differential equations. We rigorously prove, for the first time to our knowledge, sufficient conditions for the stability of their stationary solutions. We use two methods: 1) the computation of the eigenvalues of the linear operator defined by the linearized equations and 2) the formulation of the problem as a fixed point problem. The first method involves tools of functional analysis and yields a new estimate of the semigroup of the previous linear operator using the eigenvalues of its infinitesimal generator. It yields a sufficient condition for stability which is independent of the characteristics of the delays. The second method allows us to find new sufficient conditions for the stability of stationary solutions which depend upon the values of the delays. These conditions are very easy to evaluate numerically. We illustrate the conservativeness of the bounds with a comparison with numerical simulation.

1 Introduction

Neural field equations first appeared as a spatially continuous extension of Hopfield networks with the seminal works of Wilson and Cowan, and Amari [1,2]. These networks describe the mean activity of neural populations by nonlinear integral equations and play an important role in the modeling of various cortical areas including the visual cortex. They have been modified to take into account several relevant biological mechanisms like spike-frequency adaptation [3,4], the tuning properties of some populations [5] or the spatial organization of the populations of neurons [6]. In this work we focus on the role of the delays coming from the finite velocity of signals in axons and dendrites or from the time of synaptic transmission [7,8]. It turns out that delayed neural field equations feature some interesting mathematical difficulties. The main question we address in the sequel is the following: once the stationary states of a non-delayed neural field equation are well understood, what changes, if any, are caused by the introduction of propagation delays? We think this question is important since non-delayed neural field equations are by now pretty well understood, at least in terms of their stationary solutions, but the same is not true for their delayed versions, which in many cases are better models, closer to experimental findings. A lot of work has been done concerning the role of delays in wave propagation or in the linear stability of stationary states, but except in [9] the method used reduces to the computation of the eigenvalues (which we call characteristic values) of the linearized equation in some analytically convenient cases (see [10]). Some results are known in the case of a finite number of neurons [11,12] and in the case of a small number of distinct delays [13,14]: the dynamical portrait is highly intricate even in the case of two neurons with delayed connections.

The purpose of this article is to propose a solid mathematical framework to characterize the dynamical properties of neural field systems with propagation delays, and to show that it allows us to find sufficient delay-dependent bounds for the linear stability of the stationary states. This is a step toward answering the question of how much delay can be introduced in a neural field model without destabilization. As a consequence one can infer in some cases, without much extra work, the changes caused by the finite propagation times of signals from the analysis of a neural field model without propagation delays. This framework also allows us to prove a linear stability principle to study the bifurcations of the solutions when varying the nonlinear gain and the propagation times.

The paper is organized as follows: in Section 2 we describe our model of delayed neural field, state our assumptions and prove that the resulting equations are well-posed and enjoy a unique bounded solution for all times. In Section 3 we give two different methods for expressing the linear stability of stationary cortical states, that is, of the time-independent solutions of these equations. The first one, Section 3.1, is computationally intensive but accurate. The second one, Section 3.2, is much lighter in terms of computation but unfortunately leads to somewhat coarse approximations. Readers not interested in the theoretical and analytical developments can go directly to the summary of this section. We illustrate these abstract results in Section 4 by applying them to a detailed study of a simple but illuminating example.

2 The model

We consider the following neural field equations defined over an open bounded piece of cortex and/or feature space $\Omega\subset\mathbb{R}^d$. They describe the dynamics of the mean membrane potential of each of $p$ neural populations.

\[
\begin{cases}
\left(\dfrac{d}{dt}+l_i\right) V_i(t,r) = \displaystyle\sum_{j=1}^{p}\int_{\Omega} J_{ij}(r,\bar r)\, S\bigl[\sigma_j\bigl(V_j(t-\tau_{ij}(r,\bar r),\bar r)-h_j\bigr)\bigr]\, d\bar r + I_i^{\mathrm{ext}}(r,t), & t\ge 0,\ 1\le i\le p,\\[6pt]
V_i(t,r)=\phi_i(t,r), & t\in[-T,0].
\end{cases}
\tag{1}
\]

We give an interpretation of the various parameters and functions that appear in (1).

$\Omega$ is a finite piece of cortex and/or feature space and is represented as an open bounded set of $\mathbb{R}^d$. The vectors $r$ and $\bar r$ represent points in $\Omega$.

The function $S:\mathbb{R}\to(0,1)$ is the normalized sigmoid function:
\[ S(z)=\frac{1}{1+e^{-z}}. \tag{2} \]
It describes the relation between the firing rate $\nu_i$ of population $i$ as a function of the membrane potential, for example $V_i$: $\nu_i=S[\sigma_i(V_i-h_i)]$. We note $V$ the $p$-dimensional vector $(V_1,\dots,V_p)$.

The $p$ functions $\phi_i$, $i=1,\dots,p$, represent the initial conditions, see below. We note $\phi$ the $p$-dimensional vector $(\phi_1,\dots,\phi_p)$.

The $p$ functions $I_i^{\mathrm{ext}}$, $i=1,\dots,p$, represent external currents from other cortical areas. We note $I^{\mathrm{ext}}$ the $p$-dimensional vector $(I_1^{\mathrm{ext}},\dots,I_p^{\mathrm{ext}})$.

The $p\times p$ matrix of functions $J=\{J_{ij}\}_{i,j=1,\dots,p}$ represents the connectivity between populations $i$ and $j$, see below.

The $p$ real values $h_i$, $i=1,\dots,p$, determine the threshold of activity for each population, that is, the value of the membrane potential corresponding to 50% of the maximal activity.

The $p$ real positive values $\sigma_i$, $i=1,\dots,p$, determine the slopes of the sigmoids at the origin.

Finally, the $p$ real positive values $l_i$, $i=1,\dots,p$, determine the speed at which each membrane potential decreases exponentially toward its rest value.

We also introduce the function $S:\mathbb{R}^p\to\mathbb{R}^p$, defined by $S(x)=[S(\sigma_1(x_1-h_1)),\dots,S(\sigma_p(x_p-h_p))]$, and the diagonal $p\times p$ matrix $L_0=\mathrm{diag}(l_1,\dots,l_p)$.
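As a concrete illustration (not taken from the paper), the following minimal Python sketch implements the sigmoid (2), the vector-valued nonlinearity $S$ and the matrix $L_0$ for a toy set of populations; all numerical values are arbitrary.

import numpy as np

# Illustrative parameters for p = 2 populations (arbitrary values, not from the paper).
p = 2
sigma = np.array([1.0, 2.0])    # slopes sigma_i of the sigmoids
h = np.array([0.0, 0.5])        # thresholds h_i
l = np.array([1.0, 0.8])        # decay rates l_i

def S_scalar(z):
    """Normalized sigmoid S(z) = 1 / (1 + exp(-z)), equation (2)."""
    return 1.0 / (1.0 + np.exp(-z))

def S_vec(x):
    """Vector-valued nonlinearity: S(x)_i = S(sigma_i * (x_i - h_i))."""
    return S_scalar(sigma * (x - h))

L0 = np.diag(l)                 # diagonal matrix L0 = diag(l_1, ..., l_p)

V = np.array([0.2, -0.1])       # an arbitrary membrane-potential vector
print(S_vec(V), L0 @ V)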

A difference with other studies is the intrinsic dynamics of the populations, given by the linear response of chemical synapses. In [9,15], $(\frac{d}{dt}+l_i)$ is replaced by $(\frac{d}{dt}+l_i)^2$ in order to use the alpha function synaptic response. We use $(\frac{d}{dt}+l_i)$ for simplicity, although our analysis applies to more general intrinsic dynamics, see Proposition 3.10 in Section 3.1.3.

For the sake of generality, the propagation delays are not assumed to be identical for all populations; hence they are described by a matrix $\tau(r,\bar r)$ whose element $\tau_{ij}(r,\bar r)$ is the propagation delay between population $j$ at $\bar r$ and population $i$ at $r$. The reason for this assumption is that it is still unclear from physiology whether propagation delays are independent of the populations. We assume for technical reasons that $\tau$ is continuous, that is, $\tau\in C^0(\bar\Omega^2,\mathbb{R}_+^{p\times p})$. Moreover, biological data indicate that $\tau$ is not a symmetric function (that is, $\tau_{ij}(r,\bar r)\neq\tau_{ji}(\bar r,r)$), thus no assumption is made about this symmetry unless otherwise stated.


In order to compute the right-hand side of (1), we need to know the voltage $V$ on some interval $[-T,0]$. The value of $T$ is obtained by considering the maximal delay:
\[ \tau_m = \max_{i,j,\,(r,\bar r)\in\bar\Omega\times\bar\Omega} \tau_{ij}(r,\bar r). \]
Hence we choose $T=\tau_m$.

2.1 The propagation-delay function

What are the possible choices for the propagation-delay function $\tau(r,\bar r)$? There are few papers dealing with this subject. Our analysis is built upon [16]. The authors of this paper study, inter alia, the relationship between the path length along axons from the soma to the synaptic buttons versus the Euclidean distance to the soma. They observe a linear relationship with a slope close to one. If we neglect the dendritic arbor, this means that if a neuron located at $r$ is connected to another neuron located at $\bar r$, the path length of this connection is very close to $\|r-\bar r\|$; in other words, axons are straight lines. According to this, we will choose in the following:

\[ \tau(r,\bar r) = c\,\|r-\bar r\|_2, \]
where $c$ is the inverse of the propagation speed.
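As an illustration (not from the paper), the Python sketch below builds the delay matrix $\tau(r,\bar r)=c\|r-\bar r\|_2$ on a regular discretization of a one-dimensional domain and computes the maximal delay $\tau_m$ that sizes the history interval $[-\tau_m,0]$; the grid size and the value of $c$ are arbitrary choices.

import numpy as np

# Arbitrary illustrative choices: Omega = (0, 1), N grid points, inverse speed c.
N = 100
c = 0.5                          # inverse propagation speed
r = np.linspace(0.0, 1.0, N)     # discretized positions in Omega

# tau[i, j] = c * |r_i - r_j|: propagation delay between positions r_i and r_j.
tau = c * np.abs(r[:, None] - r[None, :])

tau_m = tau.max()                # maximal delay, so T = tau_m
print(f"tau_m = {tau_m:.3f}")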

2.2 Mathematical framework

A convenient functional setting for the non-delayed neural field equations (see [17–19]) is to use the space $\mathcal{F}=L^2(\Omega,\mathbb{R}^p)$, which is a Hilbert space endowed with the usual inner product:
\[ \langle V,U\rangle_{\mathcal{F}} = \sum_{i=1}^{p}\int_\Omega V_i(r)\,U_i(r)\,dr. \]

To give a meaning to (1), we define the history space $\mathcal{C}=C^0([-\tau_m,0],\mathcal{F})$ with $\|\phi\|_{\mathcal{C}}=\sup_{t\in[-\tau_m,0]}\|\phi(t)\|_{\mathcal{F}}$, which is the Banach phase space associated with equation (3) below. Using the notation $V_t(\theta)=V(t+\theta)$, $\theta\in[-\tau_m,0]$, we write (1) as:
\[
\begin{cases}
\dot V(t) = -L_0 V(t) + L_1 S(V_t) + I^{\mathrm{ext}}(t),\\
V_0 = \phi \in \mathcal{C},
\end{cases}
\tag{3}
\]

where
\[ L_1:\mathcal{C}\longrightarrow\mathcal{F},\qquad \phi \mapsto \int_\Omega J(\cdot,\bar r)\,\phi\bigl(\bar r,-\tau(\cdot,\bar r)\bigr)\,d\bar r \]
is the linear continuous operator satisfying (the notation $|\cdot|$ is defined in Definition A.2 of Appendix A) $|L_1|\le\|J\|_{L^2(\Omega^2,\mathbb{R}^{p\times p})}$. Notice that most of the papers on this subject assume $\Omega$ infinite, hence requiring $\tau_m=\infty$. This raises difficult mathematical questions which we do not have to worry about, unlike [9,15,20–24].

We first recall the following proposition whose proof appears in [25].


Proposition 2.1 If the following assumptions are satisfied:
1. $J\in L^2(\Omega^2,\mathbb{R}^{p\times p})$,
2. the external current $I^{\mathrm{ext}}\in C^0(\mathbb{R},\mathcal{F})$,
3. $\tau\in C^0(\bar\Omega^2,\mathbb{R}_+^{p\times p})$, $\sup_{\bar\Omega^2}\tau\le\tau_m$,
then for any $\phi\in\mathcal{C}$, there exists a unique solution $V\in C^1([0,\infty),\mathcal{F})\cap C^0([-\tau_m,\infty),\mathcal{F})$ to (3).

Notice that this result gives existence on $\mathbb{R}_+$: finite-time explosion is impossible for this delayed differential equation. Nevertheless, a particular solution could grow indefinitely; we now prove that this cannot happen.
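Proposition 2.1 guarantees a unique global solution. A crude way to look at such a solution numerically is a fixed-step explicit Euler scheme with a discrete history buffer, as in the Python sketch below (not part of the paper). It treats a scalar population ($p=1$) on a one-dimensional domain with a distance-dependent delay; the kernel, gain, input and step sizes are arbitrary illustrative choices and no accuracy claim is made.

import numpy as np

# Illustrative scalar model (p = 1) on Omega = (0, 1); all parameters are arbitrary.
N, dt, n_steps = 80, 0.01, 500
l0, sigma, h, c, Iext = 1.0, 2.0, 0.0, 0.5, 0.1
r = np.linspace(0.0, 1.0, N)
dr = r[1] - r[0]
J = 2.0 * np.exp(-5.0 * np.abs(r[:, None] - r[None, :]))  # connectivity kernel J(r, rbar)
tau = c * np.abs(r[:, None] - r[None, :])                 # delays tau(r, rbar)
lag = np.rint(tau / dt).astype(int)                       # delays in (rounded) time steps
n_hist = lag.max() + 1                                    # history length, about tau_m / dt

S = lambda z: 1.0 / (1.0 + np.exp(-z))                    # sigmoid (2)

# V[k] approximates V(t_k, .); the first n_hist rows hold the initial history phi = 0.
V = np.zeros((n_hist + n_steps, N))
cols = np.arange(N)                                       # column index j used below

for k in range(n_hist - 1, n_hist + n_steps - 1):
    Vd = V[k - lag, cols]                                 # Vd[i, j] = V(t_k - tau_ij, r_j)
    integral = (J * S(sigma * (Vd - h))).sum(axis=1) * dr # sum_j J_ij S(sigma(V_j - h)) dr
    V[k + 1] = V[k] + dt * (-l0 * V[k] + integral + Iext) # explicit Euler step of (3)

print("max |V| at final time:", np.abs(V[-1]).max())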

2.3 Boundedness of solutions

A valid model of neural networks should only feature bounded membrane potentials. We find a bounded attracting set in the spirit of our previous work with non-delayed neural mass equations. The proof is almost the same as in [19], but some care has to be taken because of the delays.

Theorem 2.2 All the trajectories of equation (3) are ultimately bounded by the same constant $R$ (see the proof) if $I\equiv\max_{t\in\mathbb{R}_+}\|I^{\mathrm{ext}}(t)\|_{\mathcal{F}}<\infty$.

Proof Let us define $f:\mathbb{R}\times\mathcal{C}\to\mathbb{R}$ as
\[ f(t,V_t) \stackrel{\mathrm{def}}{=} \bigl\langle -L_0 V_t(0)+L_1 S(V_t)+I^{\mathrm{ext}}(t),\,V(t)\bigr\rangle_{\mathcal{F}} = \frac{1}{2}\,\frac{d\|V\|^2_{\mathcal{F}}}{dt}. \]
We note $l=\min_{i=1,\dots,p} l_i$ and from Lemma B.2 (see Appendix B.1):
\[ f(t,V_t) \le -l\|V(t)\|^2_{\mathcal{F}} + \bigl(\sqrt{p|\Omega|}\,|J|_{\mathcal{F}} + I\bigr)\|V(t)\|_{\mathcal{F}}. \]
Thus, if $\|V(t)\|_{\mathcal{F}} \ge 2\,\frac{\sqrt{p|\Omega|}\,|J|_{\mathcal{F}} + I}{l} \stackrel{\mathrm{def}}{=} R$, then $f(t,V_t) \le -\frac{lR^2}{2} \stackrel{\mathrm{def}}{=} -\delta < 0$.

Let us show that the open ball $B_R$ of $\mathcal{F}$ of center 0 and radius $R$ is stable under the dynamics of equation (3). We know that $V(t)$ is defined for all $t\ge 0$ and that $f<0$ on $\partial B_R$, the boundary of $B_R$. We consider three cases for the initial condition $V_0$.

If $\|V_0\|_{\mathcal{C}}<R$, set $T=\sup\{t \mid \forall s\in[0,t],\ V(s)\in B_R\}$. Suppose that $T\in\mathbb{R}$; then $V(T)$ is defined and belongs to $\bar B_R$, the closure of $B_R$, because $\bar B_R$ is closed, in effect to $\partial B_R$. We also have $\frac{d}{dt}\|V\|^2_{\mathcal{F}}\big|_{t=T}=f(T,V_T)\le-\delta<0$ because $V(T)\in\partial B_R$. Thus we deduce that for $\varepsilon>0$ small enough, $V(T+\varepsilon)\in B_R$, which contradicts the definition of $T$. Thus $T\notin\mathbb{R}$ and $B_R$ is stable.

Because $f<0$ on $\partial B_R$, $V(0)\in\partial B_R$ implies that $\forall t>0$, $V(t)\in B_R$.

Finally, we consider the case $V(0)\notin\bar B_R$. Suppose that $\forall t>0$, $V(t)\notin\bar B_R$; then $\forall t>0$, $\frac{d}{dt}\|V\|^2_{\mathcal{F}}\le-2\delta$, thus $\|V(t)\|_{\mathcal{F}}$ is monotonically decreasing and reaches the value of $R$ in finite time, when $V(t)$ reaches $\partial B_R$. This contradicts our assumption. Thus $\exists T>0$ such that $V(T)\in\bar B_R$.
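As a sanity check (not in the paper), the radius $R$ of this absorbing ball can be evaluated for a discretized kernel. The sketch below reuses the toy kernel of the simulation sketch above, reads $|J|_{\mathcal{F}}$ as the $L^2(\Omega^2)$ norm of the kernel (one possible interpretation of the notation, since Lemma B.2 is not reproduced here) and uses arbitrary values for $I$ and $l$.

import numpy as np

# Same illustrative 1-D kernel as in the simulation sketch; arbitrary parameters.
N, p, l, I_max = 80, 1, 1.0, 0.1       # I_max: assumed bound on ||I_ext(t)||_F
omega = 1.0                            # |Omega| for Omega = (0, 1)
r = np.linspace(0.0, 1.0, N)
dr = r[1] - r[0]
J = 2.0 * np.exp(-5.0 * np.abs(r[:, None] - r[None, :]))

# Discretized L^2(Omega^2) norm of the kernel, used here in place of |J|_F.
J_norm = np.sqrt((J ** 2).sum() * dr * dr)

R = 2.0 * (np.sqrt(p * omega) * J_norm + I_max) / l
print(f"radius of the absorbing ball: R = {R:.3f}")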


3 Stability results

When studying a dynamical system, a good starting point is to look for invariant sets. Theorem 2.2 provides such an invariant set, but it is a very large one, not sufficient to convey a good understanding of the system. Other invariant sets (included in the previous one) are the stationary points. Notice that delayed and non-delayed equations share exactly the same stationary solutions, also called persistent states. We can therefore make good use of the harvest of results that are available about these persistent states, which we note $V^f$. Note that in most papers dealing with persistent states, the authors compute one of them and are satisfied with the study of the local dynamics around this particular stationary solution. Very few authors (we are aware only of [19,26]) address the problem of the computation of the whole set of persistent states. Despite these efforts, they have not yet been able to get a complete grasp of the global dynamics. To summarize, in order to understand the impact of the propagation delays on the solutions of the neural field equations, it is necessary to know all their stationary solutions and the dynamics in the region where these stationary solutions lie. Unfortunately such knowledge is currently not available. Hence we must be content with studying the local dynamics around each persistent state (computed, for example, with the tools of [19]) with and without propagation delays. This is already, we think, a significant step toward understanding delayed neural field equations.

From now on we note $V^f$ a persistent state of (3) and study its stability.

We can identify at least three ways to do this:
1. to derive a Lyapunov functional,
2. to use a fixed point approach,
3. to determine the spectrum of the infinitesimal generator associated to the linearized equation.

Previous results concerning stability bounds in delayed neural mass equations are 'absolute' results that do not involve the delays: they provide a sufficient condition, independent of the delays, for the stability of the fixed point (see [15,20–22]). The bound they find is similar to our second bound in Proposition 3.13. They 'proved' it by showing that if the condition was satisfied, the eigenvalues of the infinitesimal generator of the semigroup of the linearized equation had negative real parts. This is not sufficient, because a more complete analysis of the spectrum (for example, of its essential part) is necessary, as shown below, in order to prove that the semigroup is exponentially bounded. In our case we prove this assertion in the case of a bounded cortex (see Section 3.1). To our knowledge it is still unknown whether this is true in the case of an infinite cortex.

These authors also provide a delay-dependent sufficient condition to guarantee that no oscillatory instabilities can appear, that is, they give a condition that forbids the existence of solutions of the form $e^{i(k\cdot r+\omega t)}$. However, this result does not give any information regarding the stability of the stationary solution.

We use the second method cited above, the fixed point method, to prove a more general result which takes into account the delay terms. We also use the second and the third methods above, in particular the spectral method, to prove the delay-independent bound from [15,20–22]. We then evaluate the conservativeness of these two sufficient conditions. Note that the delay-independent bound has been correctly derived in [25] using the first method, the Lyapunov method. It might be of interest to explore its potential to derive a delay-dependent bound.

We write the linearized version of (3) as follows. We choose a persistent state $V^f$ and perform the change of variable $U=V-V^f$. The linearized equation reads
\[
\begin{cases}
\dfrac{d}{dt}U(t) = -L_0 U(t) + \tilde L_1 U_t \equiv L U_t,\\
U_0 = \phi \in \mathcal{C},
\end{cases}
\tag{4}
\]

where the linear operator $\tilde L_1$ is given by
\[ \tilde L_1:\mathcal{C}\longrightarrow\mathcal{F},\qquad \phi \mapsto \int_\Omega J(\cdot,\bar r)\,DS\bigl(V^f(\bar r)\bigr)\,\phi\bigl(\bar r,-\tau(\cdot,\bar r)\bigr)\,d\bar r. \]
It is also convenient to define the following operator:
\[ \tilde J \stackrel{\mathrm{def}}{=} J\cdot DS\bigl(V^f\bigr):\ \mathcal{F}\longrightarrow\mathcal{F},\qquad U \mapsto \int_\Omega J(\cdot,\bar r)\,DS\bigl(V^f(\bar r)\bigr)\,U(\bar r)\,d\bar r. \]

3.1 Principle of linear stability analysis via characteristic values

We derive the stability of the persistent state $V^f$ (see [19]) for equation (1), or equivalently (3), using the spectral properties of the infinitesimal generator. We prove that if the eigenvalues of the infinitesimal generator of the right-hand side of (4) are in the left half of the complex plane, the stationary state $U=0$ is asymptotically stable for equation (4). This result is difficult to prove because the spectrum (the main definitions for the spectrum of a linear operator are recalled in Appendix A) of the infinitesimal generator neither reduces to the point spectrum (set of eigenvalues of finite multiplicity) nor is contained in a cone of the complex plane $\mathbb{C}$ (such an operator is said to be sectorial). The 'principle of linear stability' is the fact that the linear stability of $U$ is inherited by the state $V^f$ for the nonlinear equations (1) or (3). This result is stated in Corollaries 3.7 and 3.8.

Following [27–31], we note $(T(t))_{t\ge 0}$ the strongly continuous semigroup of (4) on $\mathcal{C}$ (see Definition A.3 in Appendix A) and $A$ its infinitesimal generator. By definition, if $U$ is the solution of (4) we have $U_t=T(t)\phi$. In order to prove the linear stability, we need to find a condition on the spectrum $\sigma(A)$ of $A$ which ensures that $T(t)\to 0$ as $t\to\infty$.

Such a 'principle' of linear stability was derived in [29,30]. Their assumptions implied that $\sigma(A)$ was a pure point spectrum (it contained only eigenvalues), with the effect of simplifying the study of the linear stability because, in this case, one can link estimates of the semigroup $T$ to the spectrum of $A$. This is not the case here (see Proposition 3.4).

When the spectrum of the infinitesimal generator does not only contain eigenvalues, we can use the result in [27, Chapter 4, Theorem 3.10 and Corollary 3.12] for eventually norm continuous semigroups (see Definition A.4 in Appendix A), which links the growth bound of the semigroup to the spectrum of $A$:
\[ \inf\bigl\{ w\in\mathbb{R} : \exists M_w\ge 1 \text{ such that } \|T(t)\|\le M_w e^{wt},\ \forall t\ge 0 \bigr\} = \sup \operatorname{Re}\sigma(A). \tag{5} \]
Thus, $U=0$ is uniformly exponentially stable for (4) if and only if
\[ \sup \operatorname{Re}\sigma(A) < 0. \]

We prove in Lemma 3.6 (see below) that $(T(t))_{t\ge 0}$ is eventually norm continuous. Let us start by computing the spectrum of $A$.

3.1.1 Computation of the spectrum of A

In this section we use $L_1$ for $\tilde L_1$ for simplicity.

Definition 3.1 We define $L_\lambda\in\mathcal{L}(\mathcal{F})$ for $\lambda\in\mathbb{C}$ by:
\[ L_\lambda U \equiv L\bigl(e^{\lambda\theta}U\bigr) \equiv -L_0 U + L_1\bigl(e^{\lambda\theta}U\bigr) = -L_0 U + J(\lambda)U,\qquad \theta\mapsto e^{\lambda\theta}U \in \mathcal{C}, \]
where $J(\lambda)$ is the compact (it is a Hilbert-Schmidt operator, see [32, Chapter X.2]) operator
\[ J(\lambda):\ U\mapsto \int_\Omega J(\cdot,\bar r)\,DS\bigl(V^f(\bar r)\bigr)\,e^{-\lambda\tau(\cdot,\bar r)}\,U(\bar r)\,d\bar r. \]

We now apply results from the theory of delay equations in Banach spaces (see [27,28,31]) which give the expression of the infinitesimal generator, $A\phi=\dot\phi$, as well as its domain of definition
\[ \mathrm{Dom}(A) = \bigl\{ \phi\in\mathcal{C} \mid \dot\phi\in\mathcal{C} \text{ and } \dot\phi(0) = -L_0\phi(0) + L_1\phi \bigr\}. \]

The spectrum $\sigma(A)$ consists of those $\lambda\in\mathbb{C}$ such that the operator $\Delta(\lambda)$ of $\mathcal{L}(\mathcal{F})$ defined by $\Delta(\lambda)=\lambda\,\mathrm{Id}+L_0-J(\lambda)$ is non-invertible. We use the following definition:

Definition 3.2 (Characteristic values (CV)) The characteristic values of $A$ are the $\lambda$s such that $\Delta(\lambda)$ has a kernel which is not reduced to $\{0\}$, that is, $\Delta(\lambda)$ is not injective.

It is easy to see that the CV are the eigenvalues of $A$.
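In practice the CV can be located numerically by discretizing $\Delta(\lambda)$ on a spatial grid and looking for the $\lambda$ at which the resulting matrix becomes (nearly) singular. The Python sketch below is not from the paper and is far cruder than the TraceDDE approach mentioned later: it simply scans a rectangle of the complex plane and reports points where the smallest singular value of the discretized $\Delta(\lambda)$ falls below a tolerance; the kernel, the linearization slope $DS(V^f)$ (taken constant) and the delay function are arbitrary illustrative choices.

import numpy as np

# Arbitrary illustrative scalar model (p = 1) on Omega = (0, 1).
N, l0, c, slope = 60, 1.0, 0.5, 1.2                        # slope stands in for DS(V^f)
r = np.linspace(0.0, 1.0, N)
dr = r[1] - r[0]
J = 1.5 * np.cos(2.0 * np.pi * (r[:, None] - r[None, :]))  # connectivity kernel
tau = c * np.abs(r[:, None] - r[None, :])                  # space-dependent delays

def Delta(lam):
    """Discretization of Delta(lambda) = lambda*Id + L0 - J(lambda)."""
    J_lam = J * slope * np.exp(-lam * tau) * dr             # quadrature of J(lambda)
    return (lam + l0) * np.eye(N) - J_lam

# Scan a rectangle of the complex plane for near-singular Delta(lambda).
hits = []
for x in np.linspace(-3.0, 1.0, 41):
    for y in np.linspace(0.0, 10.0, 51):
        lam = x + 1j * y
        smin = np.linalg.svd(Delta(lam), compute_uv=False)[-1]
        if smin < 1e-2:                                     # crude singularity test
            hits.append(lam)
print("candidate characteristic values:", hits[:10])

A root-finding step (for example, Newton's method applied to the smallest singular value) would refine these candidates; dedicated packages such as TraceDDE [36] are far more efficient.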

There are various ways to compute the spectrum of an operator in infinite dimensions. They are related to how the spectrum is partitioned (for example, continuous spectrum, point spectrum...). In the case of operators which are compact perturbations of the identity, such as Fredholm operators, which is the case here, there is no continuous spectrum. Hence the most convenient way for us is to compute the point spectrum and the essential spectrum (see Appendix A). This is what we achieve next.

Remark 1 In finite dimension (that is, $\dim\mathcal{F}<\infty$), the spectrum of $A$ consists only of CV. We show that this is not the case here.


Notice that most papers dealing with delayed neural field equations only compute the CV and numerically assess the linear stability (see [9,24,33]).

We now show that we can link the spectral properties of $A$ to the spectral properties of $L_\lambda$. This is important since the latter operator is easier to handle because it acts on a Hilbert space. We start with the following lemma (see [34] for similar results in a different setting).

Lemma 3.3 $\lambda\in\sigma_{\mathrm{ess}}(A) \Leftrightarrow \lambda\in\sigma_{\mathrm{ess}}(L_\lambda)$.

Proof Let us define the following operator: for $\lambda\in\mathbb{C}$, we define $T_\lambda\in\mathcal{L}(\mathcal{C},\mathcal{F})$ by $T_\lambda(\phi)=\phi(0)+L\bigl(\int_0^{\cdot}e^{\lambda(\cdot-s)}\phi(s)\,ds\bigr)$, $\phi\in\mathcal{C}$. From [28, Lemma 34], $T_\lambda$ is surjective and it is easy to check that $\phi\in R(\lambda\,\mathrm{Id}-A)$ iff $T_\lambda(\phi)\in R(\lambda\,\mathrm{Id}-L_\lambda)$, see [28, Lemma 35]. Moreover $R(\lambda\,\mathrm{Id}-A)$ is closed in $\mathcal{C}$ iff $R(\lambda\,\mathrm{Id}-L_\lambda)$ is closed in $\mathcal{F}$, see [28, Lemma 36].

Let us now prove the lemma. We already know that $R(\lambda\,\mathrm{Id}-A)$ is closed in $\mathcal{C}$ iff $R(\lambda\,\mathrm{Id}-L_\lambda)$ is closed in $\mathcal{F}$. Also, we have $N(\lambda\,\mathrm{Id}-A)=\{\theta\mapsto e^{\theta\lambda}U,\ U\in N(\lambda\,\mathrm{Id}-L_\lambda)\}$, hence $\dim N(\lambda\,\mathrm{Id}-A)<\infty$ iff $\dim N(\lambda\,\mathrm{Id}-L_\lambda)<\infty$. It remains to check that $\operatorname{codim} R(\lambda\,\mathrm{Id}-A)<\infty$ iff $\operatorname{codim} R(\lambda\,\mathrm{Id}-L_\lambda)<\infty$.

Suppose that $\operatorname{codim} R(\lambda\,\mathrm{Id}-A)<\infty$. There exist $\phi_1,\dots,\phi_N\in\mathcal{C}$ such that $\mathcal{C}=\operatorname{Span}(\phi_i)+R(\lambda\,\mathrm{Id}-A)$. Consider $U_i\equiv T_\lambda(\phi_i)\in\mathcal{F}$. Because $T_\lambda$ is surjective, for all $U\in\mathcal{F}$ there exists $\psi\in\mathcal{C}$ satisfying $U=T_\lambda(\psi)$. We write $\psi=\sum_{i=1}^{N}x_i\phi_i+f$, $f\in R(\lambda\,\mathrm{Id}-A)$. Then $U=\sum_{i=1}^{N}x_i U_i+T_\lambda(f)$ where $T_\lambda(f)\in R(\lambda\,\mathrm{Id}-L_\lambda)$, that is, $\operatorname{codim} R(\lambda\,\mathrm{Id}-L_\lambda)<\infty$.

Suppose that $\operatorname{codim} R(\lambda\,\mathrm{Id}-L_\lambda)<\infty$. There exist $U_1,\dots,U_N\in\mathcal{F}$ such that $\mathcal{F}=\operatorname{Span}(U_i)+R(\lambda\,\mathrm{Id}-L_\lambda)$. As $T_\lambda$ is surjective, for all $i=1,\dots,N$ there exists $\phi_i\in\mathcal{C}$ such that $U_i=T_\lambda(\phi_i)$. Now consider $\psi\in\mathcal{C}$; $T_\lambda(\psi)$ can be written $T_\lambda(\psi)=\sum_{i=1}^{N}x_i U_i+\tilde U$ where $\tilde U\in R(\lambda\,\mathrm{Id}-L_\lambda)$. But $\psi-\sum_{i=1}^{N}x_i\phi_i\in R(\lambda\,\mathrm{Id}-A)$ because $T_\lambda(\psi-\sum_{i=1}^{N}x_i\phi_i)=\tilde U\in R(\lambda\,\mathrm{Id}-L_\lambda)$. It follows that $\operatorname{codim} R(\lambda\,\mathrm{Id}-A)<\infty$.

Lemma 3.3 is the key to obtaining $\sigma(A)$. Note that it is true regardless of the form of $L$ and could be applied to other types of delays in neural field equations. We now prove the important following proposition.

Proposition 3.4 $A$ satisfies the following properties:
1. $\sigma_{\mathrm{ess}}(A)=\sigma(-L_0)$.
2. $\sigma(A)$ is at most countable.
3. $\sigma(A)=\sigma(-L_0)\cup\mathrm{CV}$.
4. For $\lambda\in\sigma(A)\setminus\sigma(-L_0)$, the generalized eigenspace $\bigcup_k N\bigl((\lambda I-A)^k\bigr)$ is finite dimensional and $\exists k\in\mathbb{N}$, $\mathcal{C}=N\bigl((\lambda I-A)^k\bigr)\oplus R\bigl((\lambda I-A)^k\bigr)$.

Proof
1. $\lambda\in\sigma_{\mathrm{ess}}(A)\Leftrightarrow\lambda\in\sigma_{\mathrm{ess}}(L_\lambda)=\sigma_{\mathrm{ess}}(-L_0+J(\lambda))$. We apply [35, Theorem IV.5.26]. It shows that the essential spectrum does not change under compact perturbation. As $J(\lambda)\in\mathcal{L}(\mathcal{F})$ is compact, we find $\sigma_{\mathrm{ess}}(-L_0+J(\lambda))=\sigma_{\mathrm{ess}}(-L_0)$.
Let us show that $\sigma_{\mathrm{ess}}(-L_0)=\sigma(-L_0)$. The assertion '$\subset$' is trivial. Now if $\lambda\in\sigma(-L_0)$, for example $\lambda=-l_1$, then $\lambda\,\mathrm{Id}+L_0=\mathrm{diag}(0,\,-l_1+l_2,\dots)$. Then $R(\lambda\,\mathrm{Id}+L_0)$ is closed but $L^2(\Omega,\mathbb{R})\times\{0\}\times\cdots\times\{0\}\subset N(\lambda\,\mathrm{Id}+L_0)$. Hence $\dim N(\lambda\,\mathrm{Id}+L_0)=\infty$. Also $R(\lambda\,\mathrm{Id}+L_0)=\{0\}\times L^2(\Omega,\mathbb{R}^{p-1})$, hence $\operatorname{codim} R(\lambda\,\mathrm{Id}+L_0)=\infty$. Hence, according to Definition A.7, $\lambda\in\sigma_{\mathrm{ess}}(-L_0)$.
2. We apply [35, Theorem IV.5.33], stating (in its first part) that if $\sigma_{\mathrm{ess}}(A)$ is at most countable, so is $\sigma(A)$.
3. We apply again [35, Theorem IV.5.33], stating that if $\sigma_{\mathrm{ess}}(A)$ is at most countable, any point in $\sigma(A)\setminus\sigma_{\mathrm{ess}}(A)$ is an isolated eigenvalue with finite multiplicity.
4. Because $\sigma_{\mathrm{ess}}(A)\subset\sigma_{\mathrm{ess,Arino}}(A)$, we can apply [28, Theorem 2], which precisely states this property.

As an example, Figure 1 shows the first 200 eigenvalues computed for a very simple one-dimensional model. We notice that they accumulate at $\lambda=-1$, which is the essential spectrum. These eigenvalues have been computed using TraceDDE [36], a very efficient method for computing the CVs.

Last but not least, we can prove that the CVs are almost all, that is, except for possibly a finite number of them, located on the left part of the complex plane. This indicates that the unstable manifold is always finite dimensional for the models we are considering here.

Corollary 3.5 $\operatorname{Card}\bigl(\sigma(A)\cap\{\lambda\in\mathbb{C},\ \operatorname{Re}\lambda>-l\}\bigr)<\infty$, where $l=\min_i l_i$.

Proof If $\lambda=\rho+i\omega\in\sigma(A)$ and $\rho>-l$, then $\lambda$ is a CV, that is, $N\bigl(\mathrm{Id}-(\lambda\,\mathrm{Id}+L_0)^{-1}J(\lambda)\bigr)\neq\{0\}$, stating that $1\in\sigma_P\bigl((\lambda\,\mathrm{Id}+L_0)^{-1}J(\lambda)\bigr)$ ($\sigma_P$ denotes the point spectrum). But
\[ \bigl|(\lambda\,\mathrm{Id}+L_0)^{-1}J(\lambda)\bigr|_{\mathcal{F}} \le \bigl|(\lambda\,\mathrm{Id}+L_0)^{-1}\bigr|_{\mathcal{F}}\cdot\bigl|J(\lambda)\bigr|_{\mathcal{F}} \le \frac{1}{\sqrt{\omega^2+(\rho+l)^2}}\;\bigl|J(\lambda)\bigr|_{\mathcal{F}} \le \frac{1}{2} \]
for $|\lambda|$ big enough, since $|J(\lambda)|_{\mathcal{F}}$ is bounded.

Fig. 1 Plot of the first 200 eigenvalues of $A$ in the scalar case ($p=1$, $d=1$) with $L_0=\mathrm{Id}$ and $J(x)=-1+1.5\cos(2x)$. The delay function $\tau(x)$ is the $\pi$-periodic saw-like function shown in Figure 2. Notice that the eigenvalues accumulate at $\lambda=-1$.
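The accumulation of the CV at the essential spectrum can also be seen on a simplified variant of this example. If the delay is taken constant, $\tau_{ij}\equiv\tau$ (an assumption made here purely for illustration; the figure uses the space-dependent saw-like delay), then for each eigenvalue $\mu$ of the compact operator $\tilde J$ the CV solve $(\lambda+1)e^{\lambda\tau}=\mu$ and are given explicitly by branches of the Lambert W function. The Python sketch below (not from the paper) uses this closed form with the kernel $J(x)=-1+1.5\cos(2x)$ of Figure 1, discretized on a grid, and with a unit linearization slope $DS(V^f)\equiv 1$, another simplifying assumption; as the eigenvalues $\mu_n$ of the compact operator tend to 0, the corresponding principal-branch CV approach $\lambda=-1$.

import numpy as np
from scipy.special import lambertw

# Scalar case of Figure 1: L0 = Id, J(x) = -1 + 1.5*cos(2x) on a pi-periodic domain.
# The constant delay tau and the unit slope DS(V^f) = 1 are simplifying assumptions.
N, tau = 200, 1.0
x = np.linspace(-np.pi / 2, np.pi / 2, N, endpoint=False)
dx = x[1] - x[0]
K = (-1.0 + 1.5 * np.cos(2.0 * (x[:, None] - x[None, :]))) * dx  # discretized J~

mu = np.linalg.eigvals(K)                    # eigenvalues of the compact operator J~
cv = []
for m in mu:
    branches = range(-3, 4) if abs(m) > 1e-9 else [0]
    for k in branches:
        # (lambda + 1) exp(lambda*tau) = mu  =>  lambda = -1 + W_k(mu*tau*e^tau)/tau
        cv.append(-1.0 + lambertw(m * tau * np.exp(tau), k) / tau)
cv = np.array(cv)
print(f"{np.sum(np.abs(cv + 1.0) < 1e-2)} of {cv.size} computed CV lie within 0.01 of -1")

This is consistent with Proposition 3.4: the CV cluster at $\sigma(-L_0)=\{-1\}$ because $J(\lambda)$ is compact, while only finitely many of them lie to the right of $-l$ (Corollary 3.5).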
