Dissipative stochastic evolution equations driven by general Gaussian and non-Gaussian noise


HAL Id: hal-00434216

https://hal.archives-ouvertes.fr/hal-00434216

Preprint submitted on 20 Nov 2009

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Dissipative stochastic evolution equations driven by general Gaussian and non-Gaussian noise

Stefano Bonaccorsi, Ciprian Tudor

To cite this version:

Stefano Bonaccorsi, Ciprian Tudor. Dissipative stochastic evolution equations driven by general Gaussian and non-Gaussian noise. 2009. ⟨hal-00434216⟩


DISSIPATIVE STOCHASTIC EVOLUTION EQUATIONS DRIVEN BY GENERAL GAUSSIAN AND NON-GAUSSIAN NOISE

S. BONACCORSI,

Università di Trento, Dipartimento di Matematica, Via Sommarive 14, 38050 Povo (Trento), Italia

C. A. TUDOR,

∗∗

Laboratoire Paul Painlevé, Université de Lille 1, F-59655 Villeneuve d'Ascq, France.

Abstract

We study a class of stochastic evolution equations with a dissipative forcing nonlinearity and additive noise. The noise is assumed to satisfy rather general assumptions about the form of the covariance function; our framework covers examples of Gaussian processes, like fractional and bifractional Brownian motion, and also non-Gaussian examples like the Hermite process. We give an application of our results to the study of the stochastic version of a common model of potential spread in a dendritic tree. Our investigation is especially motivated by the possibility of introducing long-range dependence in time into the stochastic perturbation.

Keywords: stochastic evolution equation, dissipative nonlinearity, general Gaussian and non-Gaussian noise, neuronal networks

2000 Mathematics Subject Classification: Primary 60H15

Secondary 60G18, 92C20

1. Introduction

In recent years, existence, uniqueness and further properties of solutions to stochastic equations in Hilbert spaces under dissipativity assumptions have been widely discussed in the literature, since such equations play an important role in stochastic models of population biology, physics and mathematical finance (among others); compare the monograph [5] for a thorough discussion.

In this paper, using semigroup methods, we discuss existence and uniqueness of mild solutions to a class of stochastic evolution equations driven by a stochastic process $X$ which is not necessarily Gaussian. In the Hilbert space $X$ we consider the equation
\[ du(t) = \left[ A u(t) + F(u(t)) \right] dt + dX(t), \qquad u(0) = u_0, \tag{1.1} \]
where $A$ and $F$ satisfy a dissipativity condition on $X$ and $X$ is a general $X$-valued process satisfying a specific condition on its covariance operator.

Problems of the form of Equation (1.1) arise in the modeling of certain problems in neurobiology. In particular, in Section 5 we shall analyze a model of diffusion for

Email address: stefano.bonaccorsi@unitn.it

∗∗

Email address: tudor@math.univ-lille1.fr; Associate member: Samos, Centre d’Economie de La Sorbonne, Université de Paris 1 Panthéon-Sorbonne, rue de Tolbiac, 75013 Paris, France.


electric activity in a neuronal network, recently introduced in [4], driven by a stochastic term that is not white in time and space. Notice further that, motivated by this model, we are concerned with assumptions on the drift term which are not covered by those in [5].

Assumption 1.1. The operator $A : D(A) \subset X \to X$ is associated with a form $(a, V)$ that is densely defined, coercive and continuous; by the standard theory of Dirichlet forms, compare [19], the operator $A$ generates a strongly continuous, analytic semigroup $(S(t))_{t \ge 0}$ on the Hilbert space $X$ that is uniformly exponentially stable: there exist $M \ge 1$ and $\omega > 0$ such that $\|S(t)\|_{L(X)} \le M e^{-\omega t}$ for all $t \ge 0$.

In the application of Section 5, the operator $A$ is not self-adjoint, as the corresponding form $a$ is not symmetric; also, since $V$ is not compactly embedded in $X$, it is easily seen that the semigroup generated by $A$ is not compact, hence it is not Hilbert-Schmidt.

Assumption 1.2. F is an m-dissipative mapping with V ⊂ D(F ) and F : V → X is continuous with polynomial growth.

Let us introduce the class of noises that we are concerned with. We define the mean of an $X$-valued process $(X_t)_{t \in [0,T]}$ by $m_X : [0,T] \to X$, $m_X(t) = E X_t$, and the covariance $C_X : [0,T]^2 \to L_1(X)$ by
\[ \langle C_X(t,s) u, v \rangle_X = E\big[ \langle X_t - m_X(t), v \rangle_X \, \langle X_s - m_X(s), u \rangle_X \big] \]
for every $s, t \in [0,T]$ and every $u, v \in X$.

Let $Q$ be a nuclear self-adjoint operator on $X$ (that is, $Q \in L_1(X)$ and $Q = Q^* > 0$). It is well known that $Q$ admits a sequence $(\lambda_j)_{j \ge 1}$ of eigenvalues such that $0 < \lambda_j \downarrow 0$ and $\sum_{j \ge 1} \lambda_j < \infty$. Moreover, the eigenvectors $(e_j)_{j \ge 1}$ of $Q$ form an orthonormal basis of $X$.

Let $(x(t))_{t \in [0,T]}$ be a centered, square integrable, one-dimensional process with a given covariance $R$. We define its infinite-dimensional counterpart by
\[ X_t = \sum_{j \ge 1} \sqrt{\lambda_j} \, x_j(t) \, e_j, \qquad t \in [0,T], \]
where the $x_j$ are independent copies of $x$. It is immediate that the above series converges in $L^2(\Omega; X)$ for every fixed $t \in [0,T]$ and
\[ E \|X_t\|_X^2 = (\mathrm{Tr}\, Q) \, R(t,t). \]

Remark 1.1. The process $X$ is an $X$-valued centered process with covariance $R(t,s) Q$.
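As a quick numerical sanity check of the identity $E\|X_t\|_X^2 = (\mathrm{Tr}\,Q)\,R(t,t)$ (an added illustration, not part of the original argument), the following Python sketch simulates a truncated version of $X_t$ at a fixed time with fractional-Brownian-motion coordinates; the eigenvalues $\lambda_j = j^{-2}$ and the truncation level $J$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, t, J, N = 0.7, 0.8, 50, 200_000       # Hurst index, time, truncation, samples
lam = 1.0 / np.arange(1, J + 1) ** 2      # illustrative eigenvalues of Q
R_tt = t ** (2 * H)                       # fBm variance R(t, t) = t^{2H}
# coordinates sqrt(lam_j) * x_j(t), with x_j(t) ~ N(0, R(t, t)) independent
samples = rng.normal(size=(N, J)) * np.sqrt(lam * R_tt)
emp = np.mean(np.sum(samples ** 2, axis=1))   # empirical E ||X_t||^2
assert np.isclose(emp, lam.sum() * R_tt, rtol=0.02)
```

The empirical second moment matches $(\sum_{j \le J} \lambda_j)\,R(t,t)$, the truncated trace formula.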

Assumption 1.3. The covariance of the process $X$ satisfies the following condition:
\[ (s,t) \mapsto \frac{\partial^2 R}{\partial s \, \partial t} \in L^1([0,T]^2). \tag{1.2} \]

We will treat several examples of stochastic processes that satisfy (1.2). The first two examples are Gaussian processes (fractional and bifractional Brownian motion), while the third example is non-Gaussian (the Hermite process).


Example 1.2. The process $X$ is a fractional Brownian motion (fBm) with Hurst parameter $H > \tfrac12$. We recall that its covariance equals, for every $s, t \in [0,T]$,
\[ R(s,t) = \tfrac12 \left( s^{2H} + t^{2H} - |s-t|^{2H} \right). \]
In this case $\frac{\partial^2 R}{\partial s \, \partial t} = 2H(2H-1)\,|t-s|^{2H-2}$ in the sense of distributions. Since $R$ vanishes on the axes, we have for every $s, t \in [0,T]$
\[ R(s,t) = \int_0^t ds_1 \int_0^s ds_2 \, \frac{\partial^2 R}{\partial s_1 \, \partial s_2}. \]
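A small numerical illustration (an addition, not from the original text): with the covariance above, the increment variance $E(X_t - X_s)^2 = R(t,t) - 2R(s,t) + R(s,s)$ reduces to $|t-s|^{2H}$, and $R$ indeed vanishes on the axes. The sketch below checks both facts in Python.

```python
import numpy as np

H = 0.7  # any H in (1/2, 1)

def R(s, t):
    """Covariance of fractional Brownian motion with Hurst index H."""
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(s - t) ** (2 * H))

s, t = 0.3, 0.8
incr_var = R(t, t) - 2 * R(s, t) + R(s, s)   # E (X_t - X_s)^2
assert np.isclose(incr_var, abs(t - s) ** (2 * H))
assert R(0.0, t) == 0.0                       # R vanishes on the axes
```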

Example 1.3. $X$ is a bifractional Brownian motion with $H \in (0,1)$, $K \in (0,1]$ and $2HK > 1$. Recall that the bifractional Brownian motion $(B_t^{H,K})_{t \in [0,T]}$ is a centered Gaussian process, starting from zero, with covariance
\[ R^{H,K}(t,s) := R(t,s) = \frac{1}{2^K} \left( \left( t^{2H} + s^{2H} \right)^K - |t-s|^{2HK} \right) \tag{1.3} \]
with $H \in (0,1)$ and $K \in (0,1]$. Note that, if $K = 1$, then $B^{H,1}$ is a fractional Brownian motion with Hurst parameter $H \in (0,1)$.

Example 1.4. A non-Gaussian example: the Hermite process. The driving process is now a Hermite process with self-similarity order $H \in (\tfrac12, 1)$. This process appears as a limit in the so-called Non-Central Limit Theorem (see [7] or [25]).

We will denote by $(Z_t^{(q,H)})_{t \in [0,1]}$ the Hermite process with self-similarity parameter $H \in (1/2, 1)$; here $q \ge 1$ is an integer. The Hermite process can be defined in two ways: as a multiple integral with respect to the standard Wiener process $(W_t)_{t \in [0,1]}$, or as a multiple integral with respect to a fractional Brownian motion with suitable Hurst parameter. We adopt the first approach throughout the paper: compare Definition 2.2 below.

In Section 3 we treat the stochastic convolution process
\[ W_A(t) = \int_0^t S(t-s) \, dX_s. \tag{1.4} \]
It is the weak solution of the linear stochastic evolution equation $dY(t) = A Y(t) \, dt + dX(t)$. Our aim is to prove that it is a well-defined $X$-valued, mean square continuous, $\mathcal F_t$-adapted process. We strengthen condition (1.2) by imposing the following.

Assumption 1.4. Let $X$ be given in the form
\[ X_t = \sum_{j \ge 1} \sqrt{\lambda_j} \, x_j(t) \, e_j, \]
where $\lambda_j$, $e_j$ and $x_j(t)$ have been defined above. Suppose that the covariance $R$ of the process $(X_t)_{t \in [0,T]}$ satisfies the following condition:
\[ \left| \frac{\partial^2 R}{\partial s \, \partial t}(s,t) \right| \le c_1 \, |t-s|^{2H-2} + g(s,t) \]
for every $s, t \in [0,T]$, where $|g(s,t)| \le c_2 (st)^\beta$ with $\beta \in (-1,0)$, $H \in (\tfrac12, 1)$, and $c_1, c_2$ strictly positive constants.


Remark 1.5. The fractional Brownian motion and the Hermite process satisfy Assumption 1.4 with $g$ identically zero. In the case of the bifractional Brownian motion the second derivative of the covariance can be split into two parts. Indeed,
\[ g(u,v) = c_1 |u-v|^{2HK-2} + c_2 \left( u^{2H} + v^{2H} \right)^{K-2} (uv)^{2H-1} := g_1(u,v) + g_2(u,v). \]
The part containing $g_1$ can be treated similarly to the case of the fractional Brownian motion. For the second term, note that
\[ u^{2H} + v^{2H} \ge 2 (uv)^H \quad \text{and} \quad \left( u^{2H} + v^{2H} \right)^{K-2} \le 2^{K-2} (uv)^{H(K-2)}. \]
So
\[ |g_2(u,v)| \le \mathrm{cst.} \, (uv)^{HK-1}. \]
In conclusion, Assumption 1.4 is satisfied with $\beta = HK - 1 \in (-1, 0)$.
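The key elementary bound in the remark, $(u^{2H}+v^{2H})^{K-2} \le 2^{K-2}(uv)^{H(K-2)}$, follows from the AM-GM inequality $u^{2H}+v^{2H} \ge 2(uv)^H$, because $K-2 < 0$ reverses the inequality under the power. As an illustrative check (an addition, not from the paper), the following Python snippet verifies it on random points:

```python
import numpy as np

rng = np.random.default_rng(1)
H, K = 0.8, 0.8                       # any H in (0,1), K in (0,1]; here 2HK > 1
u = rng.uniform(0.01, 1.0, 1000)
v = rng.uniform(0.01, 1.0, 1000)
# AM-GM: u^{2H} + v^{2H} >= 2 (uv)^H, and x -> x^{K-2} is decreasing (K-2 < 0)
lhs = (u ** (2 * H) + v ** (2 * H)) ** (K - 2)
rhs = 2.0 ** (K - 2) * (u * v) ** (H * (K - 2))
assert np.all(lhs <= rhs * (1 + 1e-12))
```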

Our first main result is the following theorem concerning the regularity of the stochastic convolution process under Assumptions 1.1 and 1.4.

Theorem 1.5. In the above framework, fix $\alpha \in (0, H)$ and let $W_A$ be given by (1.4). Then $W_A$ exists in $L^2([0,T] \times \Omega; X)$ and it is $\mathcal F_t$-adapted. For every $\gamma < \alpha$ and $\varepsilon < \alpha - \gamma$ it holds that
\[ W_A \in C^{\alpha - \gamma - \varepsilon} \left( [0,T]; D((-A)^\gamma) \right); \]
in particular, for any fixed $t \in [0,T]$ the random variable $W_A(t)$ belongs to $D((-A)^\gamma)$.

Now we consider the solution of the stochastic evolution equation (1.1). We consider generalized mild solutions in the sense of [5, Section 5.5]: an $X$-valued continuous and adapted process $u = \{u_t, \ t \ge 0\}$ is a mild solution of (1.1) if it satisfies, $\mathbb P$-a.s., the integral equation

\[ u(t) = S(t) u_0 + \int_0^t S(t-s) F(u(s)) \, ds + W_A(t). \tag{1.5} \]

Theorem 1.6. In our setting, let $u_0 \in D(F)$ (resp. $u_0 \in X$). Then there exists a unique mild (resp. generalized) solution
\[ u \in L^2_{\mathcal F}(\Omega; C([0,T]; X)) \cap L^2_{\mathcal F}(\Omega; L^2([0,T]; V)) \]
to equation (1.1), which depends continuously on the initial condition:
\[ E \, |u(t; u_0) - u(t; u_1)|_X^2 \le C \, |u_0 - u_1|_X^2. \tag{1.6} \]

Remark 1.6. Even for a Wiener perturbation, this result is not contained in the existing literature, since we do not assume any dissipativity or generation property of $A$ on $V$; compare [5, Hypotheses 5.4 and 5.6].

With this result at hand, we can solve the model of a complete neuronal network recently proposed in [4]. It is well known that any single neuron can be schematized as a collection of a dendritic tree that ends in a soma at one end of an axon, hence as a tree in the precise sense defined within the mathematical field of graph theory. By introducing stochastic terms we can model the chaotic fluctuations of synaptic activity and post-synaptic elaboration of the electric potential. There is sufficient experimental evidence that, in order to capture the actual behaviour of neurobiological tissues, infinite-dimensional, stochastic, nonlinear reaction-diffusion models are needed.

Previous models used simplified versions of the neuronal network or concentrated on single parts of the cell: compare [1] for a thorough analysis of the FitzHugh-Nagumo system on a neuronal axon, or [2] for the analysis of the (passive) electric propagation in a dendritic tree in the subthreshold regime. Our model, instead, is based on the deterministic description of the whole neuronal network recently introduced in [4]; we therefore avoid sacrificing the biological realism of the neuronal model and, further, we add a manifold of different possible stochastic perturbations that can be chosen as a model for the environmental influence on the system. Notice that already in [2] a fractional Brownian motion with Hurst parameter $H > \tfrac12$ was chosen in order to model the (apparently chaotic) perturbation acting on a neuronal network. This choice is not a first in neuroscience, since several considerations show that real inputs may exhibit long-range dependence and self-similarity: see for instance the contributions in [22, Part II].

2. Wiener Integrals with respect to Hilbert-valued Gaussian and non-Gaussian processes with covariance structure measure

In this section we discuss the construction of a stochastic integral with respect to the process $(X_t)_{t \in [0,T]}$, which is not necessarily Gaussian.

Since our non-Gaussian examples will be given by stochastic processes that can be expressed as multiple Wiener-Itô integrals, we briefly recall the basic facts related to their construction and their basic properties.

2.1. Multiple stochastic integrals

Let (W t ) t ∈ [0,T ] be a classical Wiener process on a standard Wiener space (Ω, F , P ).

If $f \in L^2([0,T]^n)$ with $n \ge 1$ an integer, we introduce the multiple Wiener-Itô integral of $f$ with respect to $W$; the basic reference is the monograph [18]. Let $f \in S_m$ be an elementary function with $m$ variables that can be written as
\[ f = \sum_{i_1, \dots, i_m} c_{i_1, \dots, i_m} \, \mathbf 1_{A_{i_1} \times \dots \times A_{i_m}}, \]
where the coefficients satisfy $c_{i_1, \dots, i_m} = 0$ if two indices $i_k$ and $i_l$ are equal, and the sets $A_i \in \mathcal B([0,T])$ are disjoint. For such a step function $f$ we define
\[ I_m(f) = \sum_{i_1, \dots, i_m} c_{i_1, \dots, i_m} \, W(A_{i_1}) \cdots W(A_{i_m}), \]
where we put $W([a,b]) = W_b - W_a$. It can be seen that the application $I_m$ constructed above from $S_m$ to $L^2(\Omega)$ is an isometry on $S_m$, i.e.
\[ E \, [I_n(f) I_m(g)] = n! \, \langle f, g \rangle_{L^2([0,T]^n)} \quad \text{if } m = n \tag{2.1} \]
and
\[ E \, [I_n(f) I_m(g)] = 0 \quad \text{if } m \ne n. \]


Since the set $S_n$ is dense in $L^2([0,T]^n)$ for every $n \ge 1$, the mapping $I_n$ can be extended to an isometry from $L^2([0,T]^n)$ to $L^2(\Omega)$, and the above properties (2.1) hold true for this extension.

We recall the following hypercontractivity property for the $L^p$ norm of a multiple stochastic integral (see [13, Theorem 4.1]):
\[ E \, |I_m(f)|^{2m} \le c_m \left( E \, I_m(f)^2 \right)^m, \tag{2.2} \]
where $c_m$ is an explicit positive constant and $f \in L^2([0,T]^m)$.
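For intuition (this example is an addition, not from the paper): for the simplest elementary function $f = \mathbf 1_{A \times B}$ with $A, B$ disjoint intervals, the multiple integral is just the product $I_2(f) = W(A) W(B)$ of two independent Gaussian increments, and the isometry (2.1) reduces to $E[I_2(f)^2] = |A|\,|B|$ (which equals $2! \, \|\tilde f\|^2$ with $\tilde f$ the symmetrization of $f$). A Monte Carlo sketch in Python:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400_000
lenA, lenB = 0.3, 0.5                       # lengths of disjoint intervals A, B
W_A = rng.normal(0.0, np.sqrt(lenA), N)     # W(A) ~ N(0, |A|)
W_B = rng.normal(0.0, np.sqrt(lenB), N)     # independent of W(A): A, B disjoint
I2 = W_A * W_B                              # I_2(1_{A x B})
assert abs(I2.mean()) < 0.01                                  # centered
assert np.isclose((I2 ** 2).mean(), lenA * lenB, rtol=0.05)   # isometry (2.1)
```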

2.2. Wiener integrals: the one-dimensional case

The idea of defining Wiener integrals with respect to a centered Gaussian (or non-Gaussian) process $(X_t)_{t \in [0,T]}$ is natural and standard. Denote by $R(t,s) = E(X_t X_s)$ the covariance of the process $X$, and let $\mathcal E$ be the set of step functions on $[0,T]$ defined as

\[ f = \sum_{i=0}^{n-1} c_i \, \mathbf 1_{[t_i, t_{i+1}]}, \tag{2.3} \]

where $\pi : 0 = t_0 < t_1 < \dots < t_n = T$ denotes a partition of $[0,T]$ and the $c_i$ are real numbers. For such an $f$ it is standard to define
\[ I(f) = \sum_{i=0}^{n-1} c_i \left( X_{t_{i+1}} - X_{t_i} \right). \]
It holds that
\[ E \, I(f)^2 = \sum_{i,j=0}^{n-1} c_i c_j \, E\left[ \left( X_{t_{i+1}} - X_{t_i} \right) \left( X_{t_{j+1}} - X_{t_j} \right) \right] = \sum_{i,j=0}^{n-1} c_i c_j \left( R(t_{i+1}, t_{j+1}) - R(t_{i+1}, t_j) - R(t_i, t_{j+1}) + R(t_i, t_j) \right). \]
The next step is to extend, by density, the application $I : \mathcal E \to L^2(\Omega)$ to a bigger space, using the fact that it is an isometry. This construction has been carried out in [10]; we describe here the main ideas. In particular, we shall see that the construction depends on the covariance structure of the process $X$: the covariance of $X$ should define a measure on the Borel sets of $[0,T]^2$. The function $R$ naturally defines a finitely additive measure $\mu$ on the algebra of finite disjoint rectangles included in $[0,T]^2$ by

\[ \mu(A) = R(b,d) + R(a,c) - R(a,d) - R(b,c) \quad \text{if } A = [a,b) \times [c,d). \]
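To see why $\mu$ is the natural object, note that $\mu([a,b) \times [c,d)) = E[(X_b - X_a)(X_d - X_c)]$ by expanding the product. The following Python sketch (an illustration added here, not from the paper) checks this for fractional Brownian motion by sampling the four values jointly via a Cholesky factorization of their covariance matrix:

```python
import numpy as np

H = 0.7
R = lambda s, t: 0.5 * (s**(2*H) + t**(2*H) - abs(s - t)**(2*H))  # fBm covariance
pts = [0.2, 0.5, 0.6, 0.9]                # the four times a, b, c, d
C = np.array([[R(s, t) for t in pts] for s in pts])
L = np.linalg.cholesky(C)                 # C is positive definite at distinct times
rng = np.random.default_rng(3)
X = rng.normal(size=(400_000, 4)) @ L.T   # rows: samples of (X_a, X_b, X_c, X_d)
emp = np.mean((X[:, 1] - X[:, 0]) * (X[:, 3] - X[:, 2]))  # E[(X_b-X_a)(X_d-X_c)]
mu = R(0.5, 0.9) + R(0.2, 0.6) - R(0.2, 0.9) - R(0.5, 0.6)  # mu([a,b) x [c,d))
assert np.isclose(emp, mu, atol=0.01)
```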

In order to extend the Wiener integral to more general processes, we assume that the covariance of the process X satisfies the following condition:

\[ (s,t) \mapsto \frac{\partial^2 R}{\partial s \, \partial t} \in L^1([0,T]^2) \tag{2.4} \]
(compare with Assumption 1.3). This is a particular case of the situation considered in [10], where the integrator is assumed to have a covariance structure measure, in the sense that the covariance $R$ defines a measure on $[0,T]^2$.


We have already seen some examples of stochastic processes that satisfy (1.2): the fractional Brownian motion with Hurst index bigger than $\tfrac12$, the bifractional Brownian motion with $2HK > 1$ and the Hermite process, for instance.

The next step is to extend the definition of the Wiener integral to a bigger class of integrands. We introduce $|\mathcal H|$, the set of measurable functions $f : [0,T] \to \mathbb R$ such that
\[ \int_0^T \int_0^T |f(u) f(v)| \left| \frac{\partial^2 R}{\partial u \, \partial v}(u,v) \right| du \, dv < \infty. \tag{2.5} \]

On the set $|\mathcal H|$ we define the inner product
\[ \langle f, h \rangle_{\mathcal H} = \int_0^T \int_0^T f(u) h(v) \, \frac{\partial^2 R}{\partial u \, \partial v}(u,v) \, du \, dv \tag{2.6} \]
and its associated seminorm
\[ \|f\|_{\mathcal H}^2 = \int_0^T \int_0^T f(u) f(v) \, \frac{\partial^2 R}{\partial u \, \partial v}(u,v) \, du \, dv. \]
We also define
\[ \|f\|_{|\mathcal H|}^2 = \int_0^T \int_0^T |f(u) f(v)| \left| \frac{\partial^2 R}{\partial u \, \partial v}(u,v) \right| du \, dv. \tag{2.7} \]
It holds that $\mathcal E \subset |\mathcal H|$ and, for every $f \in \mathcal E$,
\[ E \, I(f)^2 = \|f\|_{\mathcal H}^2. \tag{2.8} \]

The following result can be found in [10].

Proposition 2.1. The set $\mathcal E$ is dense in $|\mathcal H|$ with respect to $\|\cdot\|_{|\mathcal H|}$ and, in particular, with respect to the seminorm $\|\cdot\|_{\mathcal H}$. The linear application $\Phi : \mathcal E \to L^2(\Omega)$ defined by $\varphi \mapsto I(\varphi)$ can be continuously extended to $|\mathcal H|$ equipped with the $\|\cdot\|_{\mathcal H}$-norm. Moreover, identity (2.8) still holds for any $\varphi \in |\mathcal H|$.

We will set $\int_0^T \varphi \, dX = \Phi(\varphi)$ and call it the Wiener integral of $\varphi$ with respect to $X$.

We remark below that if the integrator process is a process in the nth Wiener chaos then the Wiener integral with respect to X is again an element of the nth Wiener chaos.

Remark 2.1. Suppose that the process $X$ can be written as $X_t = I_k(L_t(\cdot))$ with $k \ge 1$ and $L_t \in L^2([0,T]^k)$ for every $t \in [0,T]$. Then for every $\varphi \in |\mathcal H|$ the Wiener integral $\int_0^T \varphi \, dX$ is also in the $k$th Wiener chaos. Indeed, for simple functions of the form (2.3) this is obvious, and we then use the fact that the $k$th Wiener chaos is stable with respect to $L^2$ convergence; that is, a sequence of random variables in the $k$th Wiener chaos that converges in $L^2$ has as limit a random variable in the $k$th Wiener chaos.


Remark 2.2. Assumption 1.4 implies condition (1.2). In particular, a process $X$ whose covariance satisfies Assumption 1.4 has a covariance structure measure, and the Wiener integral $\int_0^T \varphi \, dX$ exists for every $\varphi \in |\mathcal H|$. Indeed,
\[ \int_0^T \int_0^T \left| \frac{\partial^2 R}{\partial s \, \partial t}(s,t) \right| ds \, dt \le c_1 \int_0^T \int_0^T |s-t|^{2H-2} \, ds \, dt + c_2 \int_0^T \int_0^T (st)^\beta \, ds \, dt \le c \left( T^{2H} + T^{2(\beta+1)} \right). \]
Let us now discuss some examples. We first refer to Gaussian processes (fractional and bifractional Brownian motion).
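Both integrals in the last display have closed forms: $\int_0^T\!\int_0^T |s-t|^{2H-2} \, ds \, dt = T^{2H}/(H(2H-1))$ and $\int_0^T\!\int_0^T (st)^\beta \, ds \, dt = (T^{\beta+1}/(\beta+1))^2$, both finite precisely because $H > 1/2$ and $\beta > -1$. The Monte Carlo sketch below (an added illustration, not part of the paper) confirms the first closed form despite the integrable singularity on the diagonal:

```python
import numpy as np

rng = np.random.default_rng(4)
H, T, N = 0.8, 1.0, 1_000_000
# Monte Carlo estimate of the double integral of |s - t|^{2H-2} over [0,T]^2
s = rng.uniform(0.0, T, N)
t = rng.uniform(0.0, T, N)
estimate = T**2 * np.mean(np.abs(s - t) ** (2 * H - 2))
exact = T ** (2 * H) / (H * (2 * H - 1))
assert np.isclose(estimate, exact, rtol=0.02)
```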

Example 2.3. The case of the fractional Brownian motion with $H > \tfrac12$. In this case $|\mathcal H|$ is the space of measurable functions $f : [0,T] \to \mathbb R$ such that
\[ \int_0^T \int_0^T |f(u) f(v)| \, |u-v|^{2H-2} \, du \, dv < \infty. \]
On the other hand, for this integrator one can consider bigger classes of Wiener integrands. The natural space for the definition of the Wiener integral with respect to a Hermite process is the space $\mathcal H$, which is the closure of $\mathcal E$ with respect to the scalar product
\[ \langle \mathbf 1_{[0,t]}, \mathbf 1_{[0,s]} \rangle_{\mathcal H} = R(t,s). \]
We recall that $\mathcal H$ can be expressed using fractional integrals and that it may contain distributions. Recall also that
\[ L^2([0,T]) \subset L^{1/H}([0,T]) \subset |\mathcal H| \subset \mathcal H. \]
The Wiener integral with respect to Hermite processes can also be written as a Wiener integral with respect to the standard Brownian motion through a transfer operator (see e.g. [18]).

Example 2.4. The bifractional Brownian motion with $2HK > 1$. Recall that the bifractional Brownian motion $(B_t^{H,K})_{t \in [0,T]}$ is a centered Gaussian process, starting from zero, with covariance
\[ R^{H,K}(t,s) := R(t,s) = \frac{1}{2^K} \left( \left( t^{2H} + s^{2H} \right)^K - |t-s|^{2HK} \right) \tag{2.9} \]
with $H \in (0,1)$ and $K \in (0,1]$.

We can write the covariance function as
\[ R(s_1, s_2) = R_1(s_1, s_2) + R_2(s_1, s_2), \]
where
\[ R_1(s_1, s_2) = \frac{1}{2^K} \left( \left( s_1^{2H} + s_2^{2H} \right)^K - \left( s_1^{2HK} + s_2^{2HK} \right) \right) \]
and
\[ R_2(s_1, s_2) = \frac{1}{2^K} \left( s_1^{2HK} + s_2^{2HK} - |s_2 - s_1|^{2HK} \right). \]


We therefore have
\[ \frac{\partial^2 R_1}{\partial s_1 \, \partial s_2} = \frac{4 H^2 K (K-1)}{2^K} \left( s_1^{2H} + s_2^{2H} \right)^{K-2} s_1^{2H-1} s_2^{2H-1}. \]
Since $R_1$ is of class $C^2((0,T]^2)$ and $\frac{\partial^2 R_1}{\partial s_1 \partial s_2}$ is always negative, $R_1$ is the distribution function of a negative, absolutely continuous, finite measure having $\frac{\partial^2 R_1}{\partial s_1 \partial s_2}$ as density.

Concerning the term $R_2$, we suppose $2HK > 1$. The part denoted by $R_2$ is (up to a constant) also the covariance function of a fractional Brownian motion of index $HK$, and
\[ \frac{\partial^2 R_2}{\partial s_1 \, \partial s_2} = \frac{2HK(2HK-1)}{2^K} \, |s_1 - s_2|^{2HK-2}, \]
which of course belongs to $L^1([0,T]^2)$.

We also recall that the bifractional Brownian motion is a self-similar process with self-similarity index $HK$; it does not have stationary increments, it is not Markovian, and it is not a semimartingale for $2HK > 1$.

A significant subspace included in $|\mathcal H|$ is the set $L^2([0,T])$; if $K = 1$ and $H = \tfrac12$ there is even equality, since $X$ is then a classical Brownian motion (see [10]).
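As a numerical cross-check of the decomposition above (added here for illustration; the derivative of the fBm-like part $R_2$ carries the constant $2^{-K}$ inherited from (2.9)), a central finite difference of the mixed partial of $R_2$ can be compared with the closed form $2^{-K} \, 2HK(2HK-1)\,|s_1-s_2|^{2HK-2}$:

```python
import numpy as np

H, K = 0.8, 0.8                  # 2HK = 1.28 > 1
c = 2.0 ** (-K)

def R2(a, b):
    """fBm-like part of the bifractional covariance (index HK, factor 2^{-K})."""
    return c * (a ** (2*H*K) + b ** (2*H*K) - abs(a - b) ** (2*H*K))

a, b, h = 0.4, 0.7, 1e-4
# central finite difference for the mixed second derivative
mixed = (R2(a+h, b+h) - R2(a+h, b-h) - R2(a-h, b+h) + R2(a-h, b-h)) / (4*h*h)
exact = c * 2*H*K * (2*H*K - 1) * abs(a - b) ** (2*H*K - 2)
assert np.isclose(mixed, exact, rtol=1e-4)
```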

Let us now give a non-Gaussian example.

Example 2.5. The Hermite process $Z^{(q,H)} := Z$ of order $q$ with self-similarity order $H$.

The fractional Brownian motion $(B_t^H)_{t \in [0,1]}$ with Hurst parameter $H \in (0,1)$ can be written as
\[ B_t^H = \int_0^t K^H(t,s) \, dW_s, \qquad t \in [0,1], \]
where $(W_t, \ t \in [0,T])$ is a standard Wiener process and the kernel $K^H(t,s)$ has the expression
\[ K^H(t,s) = c_H \, s^{1/2 - H} \int_s^t (u-s)^{H - 3/2} \, u^{H - 1/2} \, du, \qquad t > s, \]
with
\[ c_H = \left( \frac{H(2H-1)}{\beta(2 - 2H, \, H - 1/2)} \right)^{1/2}, \]
where $\beta(\cdot,\cdot)$ is the Beta function. For $t > s$, the kernel's derivative is
\[ \frac{\partial K^H}{\partial t}(t,s) = c_H \left( \frac{s}{t} \right)^{1/2 - H} (t-s)^{H - 3/2}. \]
Fortunately we will not need to use these expressions explicitly, since they appear below only in integrals whose values are known.

We will denote by $(Z_t^{(q,H)})_{t \in [0,1]}$ the Hermite process with self-similarity parameter $H \in (1/2, 1)$; here $q \ge 1$ is an integer. Let us state the formal definition of this process.

Definition 2.2. The Hermite process $(Z_t^{(q,H)})_{t \in [0,1]}$ of order $q \ge 1$ and with self-similarity parameter $H \in (\tfrac12, 1)$ is given by
\[ Z_t^{(q,H)} = d(H) \int_0^t \dots \int_0^t dW_{y_1} \dots dW_{y_q} \left( \int_{y_1 \vee \dots \vee y_q}^t \partial_1 K^{H'}(u, y_1) \cdots \partial_1 K^{H'}(u, y_q) \, du \right), \qquad t \in [0,1], \tag{2.10} \]
where $K^{H'}$ is the usual kernel of the fractional Brownian motion and
\[ H' = 1 + \frac{H - 1}{q} \iff (2H' - 2) q = 2H - 2. \tag{2.11} \]


Of fundamental importance is the fact that the covariance of $Z^{(q,H)}$ is identical to that of fBm, namely
\[ E\left[ Z_s^{(q,H)} Z_t^{(q,H)} \right] = \frac12 \left( t^{2H} + s^{2H} - |t-s|^{2H} \right). \]
The constant $d(H)$ is chosen so that the variance equals $1$. We stress that $Z^{(q,H)}$ is far from Gaussian for $q > 1$, since it is formed of multiple Wiener integrals of order $q$ (see also [26]).

The basic properties of the Hermite process are listed below:

• the Hermite process $Z^{(q,H)}$ is $H$-self-similar and has stationary increments;

• the mean square of the increment is given by
\[ E\left( Z_t^{(q,H)} - Z_s^{(q,H)} \right)^2 = |t-s|^{2H}; \tag{2.12} \]
as a consequence, it follows with little extra effort from Kolmogorov's continuity criterion that $Z^{(q,H)}$ has Hölder-continuous paths of any exponent $\delta < H$;

• it exhibits long-range dependence in the sense that
\[ \sum_{n \ge 1} E\left[ Z_1^{(q,H)} \left( Z_{n+1}^{(q,H)} - Z_n^{(q,H)} \right) \right] = \infty. \]
In fact, the summand in this series is of order $n^{2H-2}$. This property is identical to that of fBm, since the processes share the same covariance structure, and the property is well known for fBm with $H > 1/2$;

• for $q = 1$, $Z^{(1,H)}$ is standard fBm with Hurst parameter $H$, while for $q \ge 2$ the Hermite process is not Gaussian. In the case $q = 2$ this stochastic process is known as the Rosenblatt process.
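Because the covariance is that of fBm, the correlation of unit increments has the closed form $\rho(n) = \tfrac12\big((n+1)^{2H} - 2n^{2H} + (n-1)^{2H}\big)$, which behaves like $H(2H-1)\, n^{2H-2}$ for large $n$, so the series above diverges for $H > 1/2$. A quick Python check of this asymptotic (an added illustration, not from the paper):

```python
import numpy as np

H = 0.75

def rho(n):
    """E[Z_1 (Z_{n+1} - Z_n)], computed from the fBm covariance."""
    return 0.5 * ((n + 1) ** (2*H) - 2 * n ** (2*H) + (n - 1) ** (2*H))

n = 10_000
assert np.isclose(rho(n), H * (2*H - 1) * n ** (2*H - 2), rtol=1e-3)
# partial sums grow without bound (summand ~ n^{2H-2}, with 2H-2 = -0.5 here)
assert sum(rho(k) for k in range(1, 200_000)) > 100.0
```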

In this case the class of integrands $\mathcal H$ is the same as in the case of the fractional Brownian motion. Note also that, by Remark 2.1, the Wiener integral with respect to the Hermite process, $\int_0^T \varphi \, dZ$, is an element of the $q$th Wiener chaos. Moreover, it has been proven in [12] that for every $f \in |\mathcal H|$ we have
\[ \int_0^T f(u) \, dZ(u) = \int_0^T \dots \int_0^T I(f)(y_1, \dots, y_q) \, dB(y_1) \cdots dB(y_q), \]
where $(B_t)_{t \in [0,T]}$ is a Wiener process and $I$ denotes the transfer operator
\[ I(f)(y_1, \dots, y_q) = \int_{y_1 \vee \dots \vee y_q}^T f(u) \, \partial_1 K^{H'}(u, y_1) \cdots \partial_1 K^{H'}(u, y_q) \, du, \]
where $H'$ is defined by (2.11).


2.3. The infinite-dimensional case

Let
\[ X_t = \sum_{j \ge 1} \sqrt{\lambda_j} \, x_j(t) \, e_j, \qquad t \in [0,T], \]
be an $X$-valued centered process with covariance $R(t,s) Q$.

Let $G : [0,T] \to L(X)$ and let $(e_j)_{j \ge 1}$ be a complete orthonormal system in $X$. Assume that for every $j \ge 1$ the function $G(\cdot) e_j$ belongs to the space $|\mathcal H|$. Then we define the Wiener integral of $G$ with respect to $X$ by
\[ \int_0^T G \, dX = \sum_{j \ge 1} \sqrt{\lambda_j} \int_0^T G(s) e_j \, dx_j(s), \]
where the Wiener integral with respect to $dx_j$ has been defined above in Section 2.2.

Remark 2.6. The above integral is well defined as an element of $L^2(\Omega; X)$ and we have the bound
\[ E \left\| \int_0^T G \, dX \right\|^2 \le \mathrm{Tr}(Q) \int_0^T \int_0^T \|G(u)\|_{L(X)} \, \|G(v)\|_{L(X)} \left| \frac{\partial^2 R}{\partial u \, \partial v}(u,v) \right| du \, dv \le \mathrm{Tr}(Q) \, \big\| \|G(\cdot)\|_{L(X)} \big\|_{|\mathcal H|}^2. \]

3. The stochastic convolution process

There exists a well-established theory of stochastic evolution equations in infinite-dimensional spaces, see Da Prato and Zabczyk [5], which we shall apply in order to show that Eq. (1.5) admits a unique solution. Let us recall from Assumption 1.1 that $A$ is the infinitesimal generator of a strongly continuous semigroup $(S(t))_{t \ge 0}$ on $X$ that is exponentially stable.

In this setting, we are concerned with the stochastic convolution process
\[ W_A(t) = \int_0^t S(t-s) \, dX_s, \]
already introduced in (1.4). It is the weak solution of the linear stochastic evolution equation $dY(t) = A Y(t) \, dt + dX(t)$. Our aim is to prove that it is a well-defined, mean square continuous, $\mathcal F_t$-adapted process. We first establish the following.

Proposition 3.1. Assume that the covariance function $R$ satisfies (1.2). Then, for every $t \in [0,T]$, the stochastic convolution given by (1.4) exists in $L^2([0,T]; X)$ and it is $\mathcal F_t$-adapted.

Proof. Using the exponential stability of the semigroup $S(t)$ (see Assumption 1.1), we have
\[ E \left\| \int_0^t S(t-s) \, dX(s) \right\|_X^2 \le \mathrm{Tr}(Q) \int_0^t \int_0^t \|S(t-u)\|_{L(X)} \, \|S(t-v)\|_{L(X)} \left| \frac{\partial^2 R}{\partial u \, \partial v}(u,v) \right| du \, dv \le M^2 \, \mathrm{Tr}(Q) \int_0^T \int_0^T e^{-\omega(t-u)} \, e^{-\omega(t-v)} \left| \frac{\partial^2 R}{\partial u \, \partial v}(u,v) \right| du \, dv < \infty. \]
The fact that $W_A$ is adapted is obvious.

The next step is to study the regularity (temporal and spatial) of the stochastic convolution process. This will lead to a study of an infinite sum of random variables with independent but not necessarily Gaussian summands (they are elements in a fixed order Wiener chaos). Let us recall the following result from [11, Theorem 3.5.1, page 76 and Theorem 2.2.1, page 32].

Proposition 3.2. Let $X$ be a Hilbert space.

a) Let $p > 4$ and let $X_1, \dots, X_n$ be zero mean, independent $X$-valued random variables. Then
\[ \left( E \left\| \sum_{i=1}^n X_i \right\|_X^p \right)^{1/p} \le c_p \left[ \left( E \left\| \sum_{i=1}^n X_i \right\|_X^2 \right)^{1/2} + \Big( E\|X_n\|_X^p \vee E\|X_{n-1}\|_X^p \vee \dots \vee E\|X_1\|_X^p \Big)^{1/p} \right]. \]

b) Let $p > 0$ and let $X_1, X_2, \dots$ be a sequence of independent $X$-valued random variables. If the series $\sum_{i \ge 1} X_i$ converges almost surely to a random variable $S$ and, for some $t > 0$,
\[ \sum_{i \ge 1} E \, \|X_i\|_X^p \, \mathbf 1_{(\|X_i\|_X > t)} < \infty, \tag{3.1} \]
then $E\|S\|_X^p < \infty$ and $E\|S_n - S\|_X^p \xrightarrow{n \to \infty} 0$, where $S_n = \sum_{i=1}^n X_i$.

Remark 3.1. It is not difficult to see that point a) above implies that
\[ \left( E \left\| \sum_{i=1}^n X_i \right\|_X^p \right)^{1/p} \le c_p \left[ \left( E \left\| \sum_{i=1}^n X_i \right\|_X^2 \right)^{1/2} + \left( \sum_{i=1}^n E\|X_i\|_X^p \right)^{1/p} \right]. \tag{3.2} \]

The following lemma is the main tool to obtain the regularity of the stochastic convolution process $W_A$.


Lemma 3.3. Let $(X_t)_{t \in [0,T]}$ be a stochastic process whose covariance $R$ satisfies Assumption 1.4. Denote, for every $\alpha \in (0,1)$,
\[ Y_\alpha(t) = \int_0^t (t-u)^{-\alpha} \, S(t-u) \, dX_u, \qquad t \in [0,T]. \]
Then for every $\alpha \in (0, H)$, $Y_\alpha$ belongs to $L^p([0,T]; X)$.

Proof. Suppose first that $X$ is Gaussian. Then, in order to show that $Y_\alpha$ is in $L^p([0,T]; X)$, it suffices to prove that it is in $L^2([0,T]; X)$. We have
\[ E\|Y_\alpha(t)\|_X^2 = E \left\| \sum_{j \ge 1} \sqrt{\lambda_j} \int_0^t (t-u)^{-\alpha} S(t-u) e_j \, dx_j \right\|_X^2 \le C \sum_{j \ge 1} \lambda_j \int_0^t \int_0^t (t-u)^{-\alpha} (t-v)^{-\alpha} \, \|S(t-u) e_j\|_X \, \|S(t-v) e_j\|_X \left| \frac{\partial^2 R}{\partial u \, \partial v}(u,v) \right| du \, dv, \]
and hence, by Assumption 1.4,
\[ E\|Y_\alpha(t)\|_X^2 \le C (\mathrm{Tr}\,Q) \int_0^t \int_0^t (t-u)^{-\alpha} (t-v)^{-\alpha} e^{-\omega_1(t-u)} e^{-\omega_1(t-v)} \, |u-v|^{2H-2} \, du \, dv + C (\mathrm{Tr}\,Q) \int_0^t \int_0^t (t-u)^{-\alpha} (t-v)^{-\alpha} e^{-\omega_1(t-u)} e^{-\omega_1(t-v)} \, (uv)^\beta \, du \, dv := I_1 + I_2. \]
Concerning the term $I_1$, we can write
\[ I_1 \le 2 C (\mathrm{Tr}\,Q) \int_0^t \int_0^u (t-u)^{-\alpha} (t-v)^{-\alpha} \, |u-v|^{2H-2} \, dv \, du \le C (\mathrm{Tr}\,Q) \int_0^t u^{-\alpha} \, u^{2H-1} \int_0^1 z^{-\alpha} (1-z)^{2H-2} \, dz \, du = C (\mathrm{Tr}\,Q) \int_0^t u^{-\alpha} \, u^{2H-1} \, du, \]
where we used the change of variable $v = uz$. The last quantity is clearly finite for $\alpha < H$. Concerning $I_2$, we have
\[ I_2 \le \left( \int_0^t (t-u)^{-\alpha} \, e^{-\omega_1(t-u)} \, u^\beta \, du \right)^2, \]
and this is always bounded by a constant (depending only on $T$) using the hypotheses imposed on $\alpha$ and $\beta$. We thus obtain the bound
\[ E\|Y_\alpha(t)\|_X^2 \le C = C_T \]
for every $t \in [0,T]$.


Let us now assume that $X$ is not Gaussian and belongs to the $k$th Wiener chaos with $k \ge 2$. The process $X$ can be written as
\[ X_t = \sum_{j \ge 1} \sqrt{\lambda_j} \, e_j \, x_j(t), \]
where $x_j$ is an element of the $k$th Wiener chaos with respect to the Wiener process $w_j$ and $(w_j)_{j \ge 1}$ are independent real one-dimensional Wiener processes. Then
\[ Y_\alpha(t) = \sum_{j \ge 1} \sqrt{\lambda_j} \int_0^t (t-u)^{-\alpha} S(t-u) e_j \, dx_j \]
is also an element of the $k$th Wiener chaos (in the sense that every summand is in the $k$th Wiener chaos with respect to $w_j$). Note also that, using the above computations from the Gaussian case, we obtain
\[ E\|Y_\alpha(t)\|_X^2 \le C = C_T \]
for every $t \in [0,T]$. Denote
\[ S_{n, Y_\alpha}(t) = \sum_{j=1}^n \sqrt{\lambda_j} \, e_j \int_0^t (t-u)^{-\alpha} S(t-u) \, dx_j := \sum_{j=1}^n \sqrt{\lambda_j} \, A_j. \]
By Proposition 3.2, point a), and Remark 3.1 we have
\[ \left( E\|S_{n, Y_\alpha}\|^p \right)^{1/p} \le c_p \left[ \left( E \left\| \sum_{j=1}^n \sqrt{\lambda_j} A_j \right\|_X^2 \right)^{1/2} + \left( E\|\sqrt{\lambda_n} A_n\|_X^p + \dots + E\|\sqrt{\lambda_1} A_1\|_X^p \right)^{1/p} \right]. \]
Using the hypercontractivity property of multiple stochastic integrals (2.2), we get, for every $i = 1, \dots, n$,
\[ E\|\sqrt{\lambda_i} A_i\|_X^p = \lambda_i^{p/2} \, E \left\| \int_0^t (t-u)^{-\alpha} S(t-u) \, dx_i \right\|^p \le c_p \, \lambda_i^{p/2} \left( E \left\| \int_0^t (t-u)^{-\alpha} S(t-u) \, dx_i \right\|^2 \right)^{p/2} \le c_{p,T} \, \lambda_i^{p/2}. \]
As a consequence, since $0 < \lambda_i \downarrow 0$,
\[ E\|S_{n, Y_\alpha}(t)\|_X^p \le c_{p,T} \left[ \left( \sum_{i=1}^n \lambda_i \right)^{1/2} + \left( \sum_{i=1}^n \lambda_i^{p/2} \right)^{1/p} \right] \le c_{p,T}. \tag{3.3} \]
Now, since for every $t$ the sequence $S_{n, Y_\alpha}$ converges in $L^2(\Omega; X)$ as $n \to \infty$, we can find a subsequence which converges almost surely; this subsequence will again be denoted by $S_{n, Y_\alpha}$. By Proposition 3.2, point b), since
\[ \sum_{i \ge 1} E \, \|\sqrt{\lambda_i} A_i\|_X^p \, \mathbf 1_{(\|\sqrt{\lambda_i} A_i\| > t)} \le \sum_{i \ge 1} \lambda_i^{p/2} \, E\|A_i\|_X^p \le c_{p,T}, \]


we obtain that, for every $t$, the random variable $Y_\alpha(t)$ belongs to $L^p(\Omega; X)$ and $E\|S_{n, Y_\alpha}(t) - Y_\alpha(t)\|_X^p \to 0$ as $n \to \infty$. Letting now $n \to \infty$ in (3.3) we obtain
\[ E\|Y_\alpha(t)\|_X^p \le c_{p,T}, \]
and this finishes the proof.

Proposition 3.4. Suppose that $X$ satisfies Assumption 1.4 and fix $\alpha \in (0, H)$. Let $W_A$ be given by (1.4). Then for every $\gamma < \alpha$ and $\varepsilon < \alpha - \gamma$ it holds that
\[ W_A \in C^{\alpha - \gamma - \varepsilon} \left( [0,T]; D((-A)^\gamma) \right). \]
In particular, for any fixed $t \in [0,T]$ the random variable $W_A(t)$ belongs to $D((-A)^\gamma)$.

Proof. For $\alpha, \gamma \in (0,1)$, $p > 1$ and $\psi \in L^p([0,T]; X)$ we define
\[ R_{\alpha,\gamma} \psi(t) = \frac{\sin(\alpha\pi)}{\pi} \int_0^t (t-u)^{\alpha-1} \, (-A)^\gamma S(t-u) \, \psi(u) \, du. \]
Then, if $\alpha > \gamma + \tfrac1p$, it holds that
\[ R_{\alpha,\gamma} \in L\left( L^p([0,T]; X); \ C^{\alpha - \gamma - \frac1p}\left( [0,T]; D((-A)^\gamma) \right) \right). \]
It is standard to see that
\[ (-A)^\gamma W_A(t) = (R_{\alpha,\gamma} Y_\alpha)(t), \]
where $Y_\alpha(t) = \int_0^t (t-u)^{-\alpha} S(t-u) \, dX(u)$. Since by the above lemma $Y_\alpha \in L^p([0,T]; X)$, the conclusion follows.
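The factorization method used above rests on the scalar identity $\int_u^t (t-s)^{\alpha-1} (s-u)^{-\alpha} \, ds = B(\alpha, 1-\alpha) = \pi / \sin(\pi\alpha)$ for $u < t$, which is why the prefactor $\sin(\alpha\pi)/\pi$ recovers $W_A$ from $Y_\alpha$. A one-line Python check of this identity via the Gamma function (an added aside, using Euler's reflection formula):

```python
import math

alpha = 0.3  # any alpha in (0, 1)
# B(alpha, 1 - alpha) = Gamma(alpha) * Gamma(1 - alpha) = pi / sin(pi * alpha)
beta_val = math.gamma(alpha) * math.gamma(1.0 - alpha)
assert math.isclose(beta_val, math.pi / math.sin(math.pi * alpha), rel_tol=1e-12)
```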

Next we examine further properties of the stochastic convolution process (1.4). We are concerned with the $L^p$ norm of its supremum and with its regularity with respect to the time variable. In the Gaussian case the proofs basically follow the standard ideas of [6, Chapter 5], while in the non-Gaussian case the results are new and involve an analysis of the $L^p$ moments of multiple Wiener-Itô integrals.

Lemma 3.5. Assume that $(X_t)_{t \in [0,T]}$ satisfies Assumption 1.4 and let $W_A$ be given by (1.4). For any $p > \tfrac1H$, we have
\[ E \sup_{t \in [0,T]} \|W_A(t)\|_X^p \le C. \]

Proof. Note that for t ∈ [0,T] and 0 < α < H,
\[
W_A(t) = \frac{\sin \pi\alpha}{\pi} \int_0^t S(t-s)\,(t-s)^{\alpha-1}\, Z_\alpha(s)\, ds, \qquad Z_\alpha(s) = \int_0^s S(s-u)\,(s-u)^{-\alpha}\, dX_u.
\]
By Hölder's inequality with p > 1/α > 1/H,
\[
\mathbf{E} \sup_{t \in [0,T]} \|W_A(t)\|_X^p \le C\, \mathbf{E} \int_0^T \|Z_\alpha(s)\|_X^p\, ds.
\]
Now, since 0 < α < H,
\[
\mathbf{E}\,\|Z_\alpha(s)\|_X^2 \le C \int_0^s \int_0^s e^{-\omega(s-u)} e^{-\omega(s-v)} (s-u)^{-\alpha} (s-v)^{-\alpha} \left|\frac{\partial^2 R}{\partial u\, \partial v}(u,v)\right| du\, dv < C,
\]
and by using the computations in the proof of Lemma 3.3 and the hypercontractivity property of multiple stochastic integrals (2.2) we get E‖Z_α(s)‖_X^p ≤ C.

Let us now state our result concerning the regularity of W_A with respect to the time variable.

Proposition 3.6. Fix α ∈ (0, H ∧ (β + 1)). Then the process W_A(·) has α-Hölder continuous paths.

Proof. We will use Kolmogorov's continuity criterion for Hilbert-space valued stochastic processes (see [6, Theorem 3.3]). To this end, we need to evaluate the increment W_A(t) − W_A(s). We can write
\[
W_A(t) - W_A(s) = \sum_{j \ge 1} \sqrt{\lambda_j} \int_s^t S(t-u)\, e_j\, dx_j(u) + \sum_{j \ge 1} \sqrt{\lambda_j} \int_0^s \big(S(t-u) - S(s-u)\big)\, e_j\, dx_j(u) =: I_1 + I_2.
\]
Concerning the first term, we get from Assumption 1.4
\[
\mathbf{E}\,\|I_1\|^2 \le (\mathrm{Tr}\, Q)\, c_1 \int_s^t\!\!\int_s^t e^{-\omega_1(t-u)} e^{-\omega_1(t-v)}\, |u-v|^{2H-2}\, du\, dv + (\mathrm{Tr}\, Q)\, c_2 \int_s^t\!\!\int_s^t e^{-\omega_1(t-u)} e^{-\omega_1(t-v)}\, (uv)^{\beta}\, du\, dv
\]
\[
\le C \left( \int_s^t\!\!\int_s^t |u-v|^{2H-2}\, du\, dv + \int_s^t\!\!\int_s^t (uv)^{\beta}\, du\, dv \right) \le C \big( |t-s|^{2H} + |t-s|^{2(\beta+1)} \big).
\]
Following the proof of [6, Theorem 5.1.3], we obtain E‖I_2‖² ≤ C|t−s|^{2γ} for any γ ∈ (0,1). As a consequence,
\[
\mathbf{E}\,\|W_A(t) - W_A(s)\|_X^2 \le C \big( |t-s|^{2H} + |t-s|^{2(\beta+1)} + |t-s|^{2\gamma} \big),
\]
and by (2.2) we will have (as in the proof of Lemma 3.3), choosing γ ≥ H ∧ (β + 1) so that the last term is absorbed, that for every s close to t
\[
\mathbf{E}\,\|W_A(t) - W_A(s)\|_X^p \le C_p \big( |t-s|^{pH} + |t-s|^{p(\beta+1)} \big),
\]
and this bound implies the existence of an α-Hölder continuous version of W_A.

Remark 3.2. In the case of the fractional Brownian motion and of the Rosenblatt process the order of Hölder continuity is H. For the bifractional Brownian motion, since β + 1 = HK, the stochastic convolution is HK-Hölder continuous.


4. Existence and uniqueness of the solution

Let us first introduce the spaces where the solution will live.

Definition 4.1. Let L²_F(Ω; C([0,T]; X)) denote the Banach space of all F_t-measurable, pathwise continuous processes taking values in X, endowed with the norm
\[
\|X\|_{L^2_{\mathcal F}(\Omega; C([0,T];X))} = \left( \mathbf{E} \sup_{t \in [0,T]} \|X(t)\|_X^2 \right)^{1/2},
\]
while L²_F(Ω; L²([0,T]; V)) denotes the Banach space of all mappings X: [0,T] → V such that X(t) is F_t-measurable, endowed with the norm
\[
\|X\|_{L^2_{\mathcal F}(\Omega; L^2([0,T];V))} = \left( \mathbf{E} \int_0^T \|X(t)\|_V^2\, dt \right)^{1/2}.
\]

We are concerned with Eq. (1.1), which we solve in mild form: a process u ∈ L²_F(Ω; C([0,T]; X)) ∩ L²_F(Ω; L²([0,T]; V)) is a solution to Eq. (1.1) if it satisfies P-a.s. the integral equation
\[
u(t) = S(t)u_0 + \int_0^t S(t-\sigma)\, F(u(\sigma))\, d\sigma + W_A(t), \qquad t \in [0,T].
\]
The strategy of the proof is classical, compare [5, Theorem 5.5.8]: we consider the difference (u(t) − W_A(t))_{t∈[0,T]} and prove that it satisfies the mild equation and belongs to the relevant spaces.

4.1. Existence of the solution for deterministic equations

Let us consider the following evolution equation
\[
\frac{d}{dt} y(t) = A y(t) + F(z(t) + y(t)), \qquad y(0) = u_0, \tag{4.1}
\]
where A and F satisfy the dissipativity conditions on X stated in Assumptions 1.1 and 1.2, and z is a trajectory of the stochastic convolution process, which satisfies the regularity conditions stated in Theorem 1.5,
\[
z \in C^{\alpha - \gamma - \varepsilon}\big([0,T];\, D((-A)^\gamma)\big).
\]
The construction in this section is based on the techniques of [5, Section 5.5]; notice however that we are concerned with a different kind of stochastic convolution and we do not impose any dissipativity of the operators A and F on the space V.

Remark 4.1. The key point in the following construction is the observation that V = D((−A)^{1/2}), compare Remark 5.3. Further, in this case we impose the bound 1/2 < H. Therefore, we can and do assume that
\[
z \in C^{H - \frac12 - \varepsilon}\big([0,T];\, D((-A)^{1/2})\big) \quad \text{for arbitrary } \varepsilon > 0.
\]
Now, notice that the assumption on F implies that F: V → X is continuous; hence the process (F(z(t)))_{t∈[0,T]} is continuous and satisfies sup_{t∈[0,T]} ‖F(z(t))‖_X < +∞.


Let us introduce the Yosida approximations F_α of F. It is known that the F_α are Lipschitz continuous, dissipative mappings such that, for all u ∈ V, F_α(u) → F(u) in X as α → 0.
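For intuition, in a scalar setting the Yosida approximation of a dissipative map h can be computed explicitly through the resolvent J_α = (I − αh)^{−1}: one has F_α(x) = h(J_α(x)). A minimal numerical sketch follows; the cubic h(u) = −u³ and all numerical choices are illustrative stand-ins, not objects from the paper.

```python
def h(u):
    # a scalar dissipative (decreasing) nonlinearity, e.g. h(u) = -u^3
    return -u**3

def resolvent(x, alpha, tol=1e-12):
    # J_alpha(x): the unique solution y of y - alpha*h(y) = x, found by
    # bisection, since g(y) = y + alpha*y^3 - x is strictly increasing
    lo, hi = -1.0 - abs(x), 1.0 + abs(x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + alpha * mid**3 - x > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def yosida(x, alpha):
    # F_alpha(x) = h(J_alpha(x))
    return h(resolvent(x, alpha))

# F_alpha(x) -> h(x) as alpha -> 0
errors = [abs(yosida(1.5, a) - h(1.5)) for a in (0.1, 0.01, 0.001)]
assert errors[0] > errors[1] > errors[2]
```

Since g(y) = y − αh(y) is strictly increasing for a decreasing h, the resolvent is well defined and bisection converges; the assertion illustrates the pointwise convergence F_α(x) → h(x) stated above.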

In this part, we are concerned with the following approximation of Eq. (4.1):
\[
\frac{d}{dt} y_\alpha(t) = A y_\alpha(t) + F_\alpha(z(t) + y_\alpha(t)), \qquad y_\alpha(0) = u_0. \tag{4.2}
\]

Lemma 4.2. Let u_0 ∈ X. Then, for any α > 0 there exists a unique mild solution y_α(t, u_0) to Eq. (4.2) such that
\[
y_\alpha \in C([0,T]; X) \cap L^2([0,T]; V).
\]

Proof. Since the F_α are Lipschitz continuous, the existence of the solution to (4.2) is standard. It remains to prove an estimate that is uniform in α.

By the assumptions on A there exists ω > 0 such that ⟨Au, u⟩ ≤ −ω‖u‖_V², compare also Remark 5.4; using the dissipativity of F we have
\[
\frac12 \|y_\alpha(t)\|_X^2 = \frac12 \|u_0\|_X^2 + \int_0^t \langle A y_\alpha(s), y_\alpha(s) \rangle_X\, ds + \int_0^t \langle F_\alpha(z(s) + y_\alpha(s)), y_\alpha(s) \rangle_X\, ds
\]
\[
\le \frac12 \|u_0\|_X^2 - \omega \int_0^t \|y_\alpha(s)\|_V^2\, ds + \int_0^t \langle F_\alpha(z(s)), y_\alpha(s) \rangle_X\, ds
\]
\[
\le \frac12 \|u_0\|_X^2 - \omega \int_0^t \|y_\alpha(s)\|_V^2\, ds + T \sup_{t \in [0,T]} \|F(z(t))\|_X^2 + \int_0^t \|y_\alpha(s)\|_X^2\, ds,
\]
which implies, by an application of Gronwall's lemma, that
\[
\sup_{t \in [0,T]} \left( \frac12 \|y_\alpha(t)\|_X^2 + \omega \int_0^t \|y_\alpha(s)\|_V^2\, ds \right) \le C(T, u_0, z). \tag{4.3}
\]
Notice that the constant on the right-hand side is independent of α.
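The mechanism of the estimate (a dissipative drift absorbing the nonlinearity, followed by Gronwall's lemma) can be watched in a scalar toy version of (4.2). Everything below (the cubic nonlinearity, the path z, the parameter values) is a hypothetical stand-in for illustration only.

```python
import math

# Scalar explicit-Euler illustration of the uniform energy estimate (4.3):
#   y' = -omega*y + h(z(t) + y),  y(0) = y0,
# with a decreasing (dissipative) nonlinearity h; z(t) = sin(3t) plays the
# role of the (Hoelder continuous) stochastic convolution path.
def h(u):
    return -u**3

omega, T, n, y0 = 1.0, 5.0, 5000, 0.5
dt = T / n
y, sup_energy = y0, 0.5 * y0 * y0
for k in range(n):
    z = math.sin(3 * k * dt)
    y = y + dt * (-omega * y + h(z + y))
    sup_energy = max(sup_energy, 0.5 * y * y)

assert sup_energy < 5.0   # (1/2)|y(t)|^2 stays bounded on [0, T]
```

The bound obtained numerically depends only on the data (T, y0, z), mirroring the fact that the constant in (4.3) is independent of α.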

Lemma 4.3. For every α > 0 and u_0, u_1 ∈ X, it holds
\[
\sup_{t \in [0,T]} \|y_\alpha^{u_0}(t) - y_\alpha^{u_1}(t)\|_X^2 \le C \|u_0 - u_1\|_X^2. \tag{4.4}
\]

Proof. Let us consider the difference y_α^{u_0}(t) − y_α^{u_1}(t). We have
\[
\frac{d}{dt}\big[y_\alpha^{u_0}(t) - y_\alpha^{u_1}(t)\big] = A\big[y_\alpha^{u_0}(t) - y_\alpha^{u_1}(t)\big] + \big[F_\alpha(z(t) + y_\alpha^{u_0}(t)) - F_\alpha(z(t) + y_\alpha^{u_1}(t))\big],
\]
hence
\[
\|y_\alpha^{u_0}(t) - y_\alpha^{u_1}(t)\|_X^2 = \|u_0 - u_1\|_X^2 + 2 \int_0^t \big\langle A(y_\alpha^{u_0}(s) - y_\alpha^{u_1}(s)),\, y_\alpha^{u_0}(s) - y_\alpha^{u_1}(s) \big\rangle\, ds
\]
\[
+ 2 \int_0^t \big\langle F_\alpha(z(s) + y_\alpha^{u_0}(s)) - F_\alpha(z(s) + y_\alpha^{u_1}(s)),\, y_\alpha^{u_0}(s) - y_\alpha^{u_1}(s) \big\rangle\, ds,
\]
and therefore
\[
\|y_\alpha^{u_0}(t) - y_\alpha^{u_1}(t)\|_X^2 \le \|u_0 - u_1\|_X^2 - 2\omega \int_0^t \|y_\alpha^{u_0}(s) - y_\alpha^{u_1}(s)\|_X^2\, ds.
\]
Applying Gronwall's lemma we obtain
\[
\|y_\alpha^{u_0}(t) - y_\alpha^{u_1}(t)\|_X^2 \le e^{-2\omega t} \|u_0 - u_1\|_X^2. \tag{4.5}
\]

Lemma 4.4. The sequence (y_α)_{α>0} is a Cauchy sequence in C([0,T]; X) ∩ L²([0,T]; V).

Proof. Let α, β > 0. Then we compute
\[
\frac{d}{dt}\big[y_\alpha(t) - y_\beta(t)\big] = A\big[y_\alpha(t) - y_\beta(t)\big] + \big[F_\alpha(z(t) + y_\alpha(t)) - F_\beta(z(t) + y_\beta(t))\big].
\]
Now, let us recall that
\[
\langle F_\alpha(x) - F_\beta(y),\, x - y \rangle_X \le (\alpha + \beta)\big(|F_\alpha(x)|^2 + |F_\beta(y)|^2\big)
\]
for all x, y ∈ V and α, β > 0 (compare [5, Proposition 5.5.4]); it follows that
\[
\frac12 \|y_\alpha(t) - y_\beta(t)\|_X^2 + \omega \int_0^t \|y_\alpha(s) - y_\beta(s)\|_V^2\, ds \le (\alpha + \beta) \int_0^t \big( \|F_\alpha(z(s) + y_\alpha(s))\|_X^2 + \|F_\beta(z(s) + y_\beta(s))\|_X^2 \big)\, ds. \tag{4.6}
\]
Since F: V → X is continuous, it follows that for some constant L > 0
\[
\|F_\alpha(z(s) + y_\alpha(s))\|_X^2 \le \|F(z(s) + y_\alpha(s))\|_X^2 \le L \|z(s) + y_\alpha(s)\|_V^2 \le 2L \big( \|z(s)\|_V^2 + \|y_\alpha(s)\|_V^2 \big);
\]
hence, by using estimate (4.3),
\[
\int_0^T \big( \|F_\alpha(z(s) + y_\alpha(s))\|_X^2 + \|F_\beta(z(s) + y_\beta(s))\|_X^2 \big)\, ds \le 2LT \|z\|_{C([0,T];V)}^2 + C(T, u_0, z, \omega, L)
\]
is bounded by a constant that does not depend on α and β. If we put the above estimate in (4.6) we obtain
\[
\frac12 \sup_{t \in [0,T]} \|y_\alpha(t) - y_\beta(t)\|_X^2 + \omega \int_0^T \|y_\alpha(s) - y_\beta(s)\|_V^2\, ds \le C(\alpha + \beta),
\]
which easily implies the claim.
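The Cauchy property can also be observed numerically in a scalar toy setting: solving y′ = −ωy + F_α(z + y) for decreasing values of α, the uniform distances between successive solutions shrink. All ingredients below are illustrative stand-ins (the Yosida approximation of the cubic is computed through its resolvent, found by bisection); this is a sketch, not the paper's construction.

```python
import math

def h(u):                      # toy dissipative nonlinearity
    return -u**3

def resolvent(x, alpha, tol=1e-10):
    # J_alpha(x): solution of y - alpha*h(y) = x (strictly increasing in y)
    lo, hi = -1.0 - abs(x), 1.0 + abs(x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + alpha * mid**3 - x > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def solve(alpha, omega=1.0, T=2.0, n=1000, y0=0.5):
    # explicit Euler for y' = -omega*y + h(J_alpha(z(t) + y))
    dt = T / n
    y, path = y0, []
    for k in range(n):
        z = math.sin(3 * k * dt)
        y = y + dt * (-omega * y + h(resolvent(z + y, alpha)))
        path.append(y)
    return path

p1, p2, p3 = solve(0.2), solve(0.05), solve(0.0125)
d12 = max(abs(a - b) for a, b in zip(p1, p2))
d23 = max(abs(a - b) for a, b in zip(p2, p3))
assert d23 < d12   # successive approximations get closer as alpha decreases
```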

Theorem 4.5. For any z ∈ C([0,T]; V) there exists a unique solution (y(t))_{t∈[0,T]} to Eq. (4.1),
\[
y \in C([0,T]; X) \cap L^2([0,T]; V),
\]
and it depends continuously on the initial condition u_0 ∈ X.


Proof. Since (y_α) is a Cauchy sequence in C([0,T]; X) ∩ L²([0,T]; V), it converges to a unique function y in the same space; it remains to show that (y(t))_{t∈[0,T]} actually solves (4.1). The continuous dependence on the initial condition follows from the same property proved for the approximating functions y_α, since the estimate in (4.4) does not depend on α and is preserved in the limit.

By the claimed convergence of y_α, since (J_α) is a sequence of continuous mappings that converges to the identity, it holds that J_α(y_α(s)) → y(s) ∈ V a.s. on [0,T]. Therefore, by the continuity of F, it follows that

F α (z(s) + y α (s)) → F (z(s) + y(s)) ∈ X a.s. on [0, T ].

Now we use Vitali's theorem (the uniform integrability convergence theorem, compare [24, Theorem 9.1.6]) to conclude that
\[
\int_0^t S(t-s)\, F_\alpha(z(s) + y_\alpha(s))\, ds \longrightarrow \int_0^t S(t-s)\, F(z(s) + y(s))\, ds.
\]

5. A network model for a neuronal cell

In this paper we aim to investigate a mathematical model of a complete neuron which is subject to stochastic perturbations; for a complete introduction to the biological motivations, see [9]. Our model is based on the deterministic one for the whole neuronal network that has been recently introduced in [4]; we shall borrow from this paper the basic analytical framework for the well-posedness of the problem.

We treat the neuron as a simple graph with different kinds of (stochastic) evolutions on the edges and a dynamic Kirchhoff-type condition on the central node (the soma).

This approach is made possible by the recent development of techniques for network evolution equations; hence, as opposed to most papers in the literature, which concentrate on some part of the neuron, be it the dendritic network, the soma or the axon, we take into account the complete cell.

In this paper, we schematize a neuron as a network by considering

• a FitzHugh-Nagumo (nonlinear) system on the axon, coupled with

• a (linear) Rall model for the dendritical tree, complemented with

• a Kirchhoff-type rule in the soma.

It is commonly accepted that dendrites conduct electricity in a passive way. The well-known Rall model [20, 21] simplifies the analysis of this part by considering a single, concentrated "equivalent cylinder" (of finite length ℓ_d) that schematizes a dendritic tree; Rall showed that a linear cable equation fits experimental data on dendritic trees quite well, provided that it is complemented by a suitable dynamical condition imposed at the end of the interval corresponding to the soma. Further efforts have been devoted to models for signal propagation along the axon. Shortly after the publication of Hodgkin and Huxley's model for the diffusion of electric potential in the squid giant axon, a more analytically tractable model was proposed by FitzHugh and Nagumo; the model is able to capture the main mathematical properties of excitation and propagation using

◦ a voltage-like variable with a cubic nonlinearity that allows regenerative self-excitation via a positive feedback, and

◦ a recovery variable with a linear dynamics that provides a slower negative feedback.

In our model the axon has length ℓ, i.e. the space variable x in the above equations ranges in an interval (0, ℓ), where the soma (the cell body) is identified with the point 0.

There is ample evidence in the literature that realistic neurobiological models should incorporate stochastic terms to model real inputs. It is classical to model the random perturbation with a Wiener process, compare [23], as it arises from a central limit theorem applied to a sequence of independent random variables.

However, there is considerable interest in the literature in applying different kinds of noise: we mention long-range dependent processes and self-similar processes, as their features better model real inputs; see the contributions in [22, Part II]. Further, they can be justified theoretically, as they arise in the so-called Non Central Limit Theorem; see for instance [7, 25].

The fractional Brownian motion is of course the most studied process in the class of Hermite processes, due to its significant importance in modelling. It is not only self-similar but also exhibits long-range dependence (the behaviour of the process at time t depends on the whole history up to time t), has stationary increments, and has continuous trajectories.
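These properties can be read off directly from the covariance of B^H, R(s,t) = ½(s^{2H} + t^{2H} − |t−s|^{2H}). The following quick check (parameter and grid values are arbitrary choices of ours) verifies the stationary-increment identity Var(B_t − B_s) = |t−s|^{2H} and the positivity of the increment correlations for H > 1/2, whose slow decay of order n^{2H−2} is precisely the long-range dependence:

```python
# Covariance of fractional Brownian motion B^H:
#   R(s, t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})
def R(s, t, H):
    return 0.5 * (s**(2*H) + t**(2*H) - abs(t - s)**(2*H))

H = 0.7  # H > 1/2: the long-range dependent regime

# Stationary increments: Var(B_t - B_s) = R(t,t) - 2 R(s,t) + R(s,s) = |t-s|^{2H}
for s, t in [(0.2, 0.9), (0.5, 1.5), (1.0, 3.0)]:
    var_inc = R(t, t, H) - 2 * R(s, t, H) + R(s, s, H)
    assert abs(var_inc - abs(t - s)**(2*H)) < 1e-12

# Long-range dependence: the unit-increment covariances
#   rho(n) = Cov(B_1 - B_0, B_{n+1} - B_n)
# stay positive for H > 1/2 (they decay like n^{2H-2}, so their sum diverges).
def rho(n, H):
    return 0.5 * ((n + 1)**(2*H) - 2 * n**(2*H) + (n - 1)**(2*H))

assert all(rho(n, H) > 0 for n in range(1, 100))
```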

5.1. The abstract formulation

In the following, since we allow variable coefficients in the diffusion operator, we can let each edge of the neuronal network be described by the interval [0,1]. The general form of the equation we are concerned with can be written as a system in the space X = (L²(0,1))² × R × L²(0,1) for the unknowns (u, u_d, d, v):
\[
\begin{aligned}
\frac{\partial}{\partial t} u(t,x) &= \frac{\partial}{\partial x}\Big( c(x) \frac{\partial}{\partial x} u(t,x) \Big) - p(x)u(t,x) - v(t,x) + \theta(u(t,x)) + \frac{\partial}{\partial t}\zeta_u(t,x) \\
\frac{\partial}{\partial t} u_d(t,x) &= \frac{\partial}{\partial x}\Big( c_d(x) \frac{\partial}{\partial x} u_d(t,x) \Big) - p_d(x)u_d(t,x) + \frac{\partial}{\partial t}\zeta_d(t,x) \\
\frac{d}{dt} d(t) &= -\gamma d(t) - c(0) \frac{\partial}{\partial x} u(t,0) - c_d(1) \frac{\partial}{\partial x} u_d(t,1) \\
\frac{\partial}{\partial t} v(t,x) &= u(t,x) - \epsilon v(t,x) + \frac{\partial}{\partial t}\zeta_v(t,x)
\end{aligned} \tag{5.1}
\]
under the following continuity, boundary and initial conditions
\[
\begin{aligned}
& d(t) = u(t,0) = u_d(t,1), && t \ge 0, \\
& \frac{\partial}{\partial x} u(t,1) = 0, \quad \frac{\partial}{\partial x} u_d(t,0) = 0, && t \ge 0, \\
& u(0,x) = u_0(x), \quad v(0,x) = v_0(x), \quad u_d(0,x) = u_{d;0}(x).
\end{aligned} \tag{5.2}
\]

Throughout the paper we shall assume that the coefficients in (5.1) satisfy the following conditions.

Assumption 5.1. The function θ: R → R satisfies the following dissipativity condition: there exists λ ≥ 0 such that, for h(u) = −λu + θ(u), it holds
\[
[h(u) - h(v)](u - v) \le 0 \quad \forall\, u, v \in \mathbf{R}; \qquad |h(u)| \le c\big(1 + |u|^{2\rho+1}\big), \ \rho \in \mathbf{N}. \tag{5.3}
\]

• c, c_d, p, p_d ∈ C¹([0,1]) are positive functions such that, for some C > 0,
\[
C \le c(x),\, c_d(x) \le \frac1C, \qquad C \le p(x) - \lambda,\, p_d(x) \le \frac1C;
\]

• γ > 0, ǫ > 0 are given constants.

Remark 5.1. The function θ: R → R in the classical model of FitzHugh is given by θ(u) = u(1 − u)(u − ξ) for some ξ ∈ (0,1); it satisfies (5.3) with λ = (ξ² − ξ + 1)/3. Other examples of nonlinear terms are known in the literature; see for instance [8] and the references therein.
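As a sanity check on the constant λ = (ξ² − ξ + 1)/3: it is the maximum of θ′ over R (attained at u = (1 + ξ)/3), so h(u) = −λu + θ(u) is non-increasing, which is exactly the first condition in (5.3). A small numerical verification (the grid and tolerance are arbitrary choices of ours):

```python
# Check that h(u) = -lam*u + theta(u), theta(u) = u(1-u)(u-xi), is non-increasing
# on a grid when lam = (xi^2 - xi + 1)/3, i.e. [h(u) - h(v)](u - v) <= 0.
def theta(u, xi):
    return u * (1 - u) * (u - xi)

def dissipative(xi, lam=None, n=20001, lo=-10.0, hi=10.0, tol=1e-9):
    if lam is None:
        lam = (xi * xi - xi + 1) / 3.0    # the constant from Remark 5.1
    us = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    hs = [-lam * u + theta(u, xi) for u in us]
    # h must be non-increasing along the grid (up to floating-point noise)
    return all(h2 <= h1 + tol for h1, h2 in zip(hs, hs[1:]))

assert all(dissipative(xi) for xi in (0.1, 0.25, 0.5, 0.9))
assert not dissipative(0.5, lam=0.0)   # without the shift, theta alone fails
```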

Our aim is to write equation (5.1), endowed with the conditions in (5.2), in an abstract form in the Hilbert space X = (L 2 (0, 1)) 2 × R ×L 2 (0, 1). We also introduce the Banach space Y = (C([0, 1])) 2 × R × L 2 (0, 1) that is continuously (but not compactly) embedded in X . In this section we establish the basic framework that we need in order to solve the abstract problem. To this aim we need to prove that the linear part of the system defines a linear, unbounded operator A that generates on X an analytic semigroup. We shall also study the dissipativity of A and of the nonlinear term F (see (5.6)).

On the domain
\[
D(\mathbf{A}) := \Big\{ \mathbf{v} := (u, u_d, d, v) \in (H^2(0,1))^2 \times \mathbf{R} \times L^2(0,1) \ \text{s.th.}\ u(0) = u_d(1) = d,\ u'(1) = 0,\ u_d'(0) = 0,\ c(0)u'(0) + c_d(1)u_d'(1) = 0 \Big\} \tag{5.4}
\]
we define the operator A by setting
\[
\mathbf{A}\mathbf{v} := \begin{pmatrix} (cu')' - pu + \lambda u - v \\ (c_d u_d')' - p_d u_d \\ -\gamma d - \big(c(0)u'(0) - c_d(1)u_d'(1)\big) \\ u - \epsilon v \end{pmatrix}. \tag{5.5}
\]
In order to treat the nonlinearity in our system, we introduce the Nemitsky operator Θ on L²(0,1) such that Θ(u)(x) = h(u(x)) for all u ∈ C([0,1]) ⊂ L²(0,1). Then we define F on X by setting
\[
\mathbf{F}(\mathbf{v}) = (\Theta(u), 0, 0, 0) \quad \text{on the domain} \quad D(\mathbf{F}) = \big\{ (u, u_d, d, v) \in \mathcal{X} : u \in C([0,1]) \big\}. \tag{5.6}
\]
Remark 5.2. In the above setting, the function F satisfies the conditions in Assumption 1.2.

Finally, setting B(t) = (ζ_u(t), ζ_d(t), 0, ζ_v(t)), we obtain that the initial value problem associated with (5.1)–(5.2) can be equivalently formulated as an abstract stochastic Cauchy problem
\[
\begin{cases} d\mathbf{v}(t) = \big[\mathbf{A}\mathbf{v}(t) + \mathbf{F}(\mathbf{v}(t))\big]\, dt + dB(t), & t \ge 0, \\ \mathbf{v}(0) = \mathbf{v}_0, \end{cases} \tag{5.7}
\]
where the initial value is given by v_0 := (u_0, u_{d;0}, u_0(0), v_0) ∈ X.

In the next section we shall prove that the leading operator A in Eq. (5.7) satisfies the condition in Assumption 1.1. According to Theorem 1.6, this implies that there exists a unique solution to problem (5.7) whenever the noise (B(t))_{t≥0} is a fractional Brownian motion with Hurst parameter H > 1/2, or a bifractional Brownian motion with H > 1/2 and K ≥ 1/(2H), or a Hermite process with self-similarity order H > 1/2, or, more generally, a process that satisfies Assumption 1.4.

Theorem 5.2. The proposed model for a neuronal cell, endowed with a stochastic input that satisfies the conditions in Assumption 1.4, has a unique solution on the time interval [0,T], for arbitrary T > 0, which belongs to
\[
L^2_{\mathcal F}(\Omega; C([0,T]; \mathcal{X})) \cap L^2_{\mathcal F}(\Omega; L^2([0,T]; V))
\]
and depends continuously on the initial condition.

5.2. The well-posedness of the linear system

As stated above, we can refer to some results in the existing literature in order to prove well-posedness and further qualitative properties of our system: the main references here are [4, 16, 15].

Our first remark is that, neglecting the recovery variable v, the (linear part of the) system for the unknowns (u, u_d, d) is a diffusion equation on a network with dynamical boundary conditions:
\[
\begin{aligned}
\frac{\partial}{\partial t} u(t,x) &= \frac{\partial}{\partial x}\Big( c(x) \frac{\partial}{\partial x} u(t,x) \Big) - p(x)u(t,x) + \lambda u(t,x) \\
\frac{\partial}{\partial t} u_d(t,x) &= \frac{\partial}{\partial x}\Big( c_d(x) \frac{\partial}{\partial x} u_d(t,x) \Big) - p_d(x)u_d(t,x) \\
\frac{d}{dt} d(t) &= -\gamma d(t) - c(0) \frac{\partial}{\partial x} u(t,0) - c_d(1) \frac{\partial}{\partial x} u_d(t,1)
\end{aligned} \tag{5.8}
\]
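To make the structure of (5.8) concrete, the following method-of-lines sketch discretizes a constant-coefficient version of the system. All coefficient values (c = c_d = p = p_d = γ = 1, λ = 0), and the sign convention chosen for the two boundary fluxes (both edges drain the node), are our own illustrative assumptions; the paper's orientation conventions may differ.

```python
# Method-of-lines sketch of a constant-coefficient version of system (5.8).
# u lives on (0,1) with u(t,0) = d(t) and u_x(t,1) = 0;
# u_d lives on (0,1) with u_d(t,1) = d(t) and (u_d)_x(t,0) = 0;
# the node value d(t) evolves through the dynamic Kirchhoff condition.
n = 50
dx = 1.0 / n
dt = 0.2 * dx * dx          # explicit scheme: dt < dx^2 / 2 for stability
u = [0.0] * (n + 1)         # axon values; u[0] is the node end
ud = [0.0] * (n + 1)        # dendrite values; ud[n] is the node end
d = 1.0                     # initial node potential

for _ in range(2000):
    u[0], ud[n] = d, d      # continuity at the node
    new_u, new_ud = u[:], ud[:]
    for i in range(1, n):
        new_u[i] = u[i] + dt * ((u[i-1] - 2*u[i] + u[i+1]) / dx**2 - u[i])
        new_ud[i] = ud[i] + dt * ((ud[i-1] - 2*ud[i] + ud[i+1]) / dx**2 - ud[i])
    new_u[n] = new_u[n-1]   # Neumann condition u_x(t,1) = 0
    new_ud[0] = new_ud[1]   # Neumann condition (u_d)_x(t,0) = 0
    # dynamic node condition (sign convention: both fluxes drain the node):
    # d' = -d + u_x(t,0) - (u_d)_x(t,1)
    d = d + dt * (-d + (u[1] - u[0]) / dx - (ud[n] - ud[n-1]) / dx)
    u, ud = new_u, new_ud

assert 0.0 < d < 1.0        # the dissipative system relaxes from d(0) = 1
```

With dt < dx²/2 the explicit scheme is positivity preserving, so the node potential remains in (0,1) while it relaxes toward equilibrium, consistent with the dissipativity of the continuous system.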

Such systems are already present in the literature. Let us define X = (L²(0,1))² × R and introduce the operator
\[
A \begin{pmatrix} u \\ u_d \\ d \end{pmatrix} = \begin{pmatrix} (cu')' - pu + \lambda u \\ (c_d u_d')' - p_d u_d \\ -\gamma d - \big(c(0)u'(0) - c_d(1)u_d'(1)\big) \end{pmatrix}
\]
with coupled domain
\[
D(A) = \big\{ (u, u_d, d) \in (H^2(0,1))^2 \times \mathbf{R} : u(0) = u_d(1) = d \big\}.
\]

Then, by quoting for instance the papers [16, 15], we can state the following result.

Proposition 5.3. The operator (A, D(A)) is self-adjoint and dissipative and it has compact resolvent; by the spectral theorem, it generates a strongly continuous, analytic and compact semigroup (S(t)) t ≥ 0 on the Hilbert space X .

The next step is to introduce the operator A on the space X = X × L²(0,1). We can think of A as a matrix operator in the form
\[
\mathbf{A} = \begin{pmatrix} A & -P_1 \\ P_1 & -\epsilon \end{pmatrix}.
\]
