
HAL Id: hal-00408248

https://hal.archives-ouvertes.fr/hal-00408248

Preprint submitted on 29 Jul 2009


Self-interacting diffusions IV: Rate of convergence

Michel Benaïm, Olivier Raimond

To cite this version:
Michel Benaïm, Olivier Raimond. Self-interacting diffusions IV: Rate of convergence. 2009. ⟨hal-00408248⟩

Self-interacting diffusions IV: Rate of convergence

Michel Benaïm
Université de Neuchâtel, Suisse

Olivier Raimond
Université Paris Ouest Nanterre La Défense, France

July 30, 2009

Abstract

Self-interacting diffusions are processes living on a compact Riemannian manifold, defined by a stochastic differential equation with a drift term depending on the past empirical measure $\mu_t$ of the process. The asymptotics of $\mu_t$ is governed by a deterministic dynamical system and, under certain conditions, $(\mu_t)$ converges almost surely towards a deterministic measure $\mu^*$ (see Benaïm, Ledoux, Raimond (2002) and Benaïm, Raimond (2005)). We are interested here in the rate of convergence of $\mu_t$ towards $\mu^*$. A central limit theorem is proved. In particular, it shows that the stronger the repelling interaction, the faster the convergence.

We acknowledge financial support from the Swiss National Science Foundation Grant 200021-103625/1


1 Introduction

Self-interacting diffusions

Let $M$ be a smooth compact Riemannian manifold and $V : M \times M \to \mathbb{R}$ a sufficiently smooth mapping¹. For every finite Borel measure $\mu$, let $V\mu : M \to \mathbb{R}$ be the smooth function defined by
\[
V\mu(x) = \int_M V(x,y)\,\mu(dy).
\]

Let $(e_\alpha)$ be a finite family of vector fields on $M$ such that
\[
\sum_\alpha e_\alpha(e_\alpha f)(x) = \Delta f(x),
\]
where $\Delta$ is the Laplace operator on $M$ and $e_\alpha(f)$ stands for the Lie derivative of $f$ along $e_\alpha$. Let $(B^\alpha)$ be a family of independent Brownian motions.

A self-interacting diffusion on $M$ associated to $V$ can be defined as the solution to the stochastic differential equation (SDE)
\[
dX_t = \sum_\alpha e_\alpha(X_t) \circ dB^\alpha_t - \nabla\big(V\mu_t\big)(X_t)\,dt,
\]
where
\[
\mu_t = \frac{1}{t}\int_0^t \delta_{X_s}\,ds
\]
is the empirical occupation measure of $(X_t)$.
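To fix ideas, here is a minimal Euler-scheme sketch of such a process on the circle $S^1 = \mathbb{R}/2\pi\mathbb{Z}$ with the illustrative kernel $V(x,y) = \beta\cos(x-y)$. This example and its discretization are additions of this edit (they are not taken from the paper) and only assume the SDE and the definition of $\mu_t$ above.

\begin{verbatim}
import numpy as np

# Minimal Euler sketch (illustration only, not from the paper) of a
# self-interacting diffusion on the circle S^1 = R/(2*pi*Z) with the
# interaction kernel V(x, y) = beta*cos(x - y).  The drift at time t is
# -(d/dx)(V mu_t)(X_t), where mu_t is the empirical measure of the past.
# Since cos(x - y) = cos(x)cos(y) + sin(x)sin(y), the drift only needs
# the running averages of cos(X_s) and sin(X_s) along the trajectory.

rng = np.random.default_rng(0)
beta, h, n_steps = 2.0, 1e-3, 100_000   # beta > 0: self-repelling case
X = 0.0
sum_cos = sum_sin = 0.0
traj = np.empty(n_steps)

for k in range(n_steps):
    sum_cos += np.cos(X)
    sum_sin += np.sin(X)
    mean_cos, mean_sin = sum_cos / (k + 1), sum_sin / (k + 1)
    # (V mu_t)'(x) = beta*(-sin(x)*mean_cos + cos(x)*mean_sin)
    drift = -beta * (-np.sin(X) * mean_cos + np.cos(X) * mean_sin)
    X = (X + drift * h + np.sqrt(h) * rng.standard_normal()) % (2 * np.pi)
    traj[k] = X

# For beta > 0 the empirical occupation measure of traj should flatten
# towards the uniform measure lambda; the paper quantifies the rate.
\end{verbatim}

With $\beta > 0$ the drift pushes $X_t$ away from the regions it has already visited; Section 3.1.2 makes precise in which sense a larger $\beta$ speeds up the convergence of $\mu_t$.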

In the absence of drift (i.e. $V = 0$), $(X_t)$ is just a Brownian motion on $M$, but in general it defines a non-Markovian process whose behavior at time $t$ depends on its past trajectory through $\mu_t$. This type of process was introduced in Benaïm, Ledoux and Raimond (2002) (henceforth referred to as [3]) and further analyzed in a series of papers by Benaïm and Raimond (2003, 2005, 2007) (henceforth referred to as [4], [5] and [6]). We refer the reader to these papers for more details, and especially to [3] for a detailed construction of the process and its elementary properties. For a general overview of processes with reinforcement we refer the reader to the recent survey paper by Pemantle (2007) ([15]).

¹ The mapping $V_x : M \to \mathbb{R}$ defined by $V_x(y) = V(x,y)$ is $C^2$ and its derivatives are continuous in $(x,y)$.

Notation and Background

Standing Notation. We let $M(M)$ denote the space of finite Borel measures on $M$, and $P(M) \subset M(M)$ the space of probability measures. If $I$ is a metric space (typically, $I = M$, $\mathbb{R}_+ \times M$ or $[0,T] \times M$) we let $C(I)$ denote the space of real-valued continuous functions on $I$, equipped with the topology of uniform convergence on compact sets. When $I$ is compact and $f \in C(I)$ we let $\|f\| = \sup_{x \in I} |f(x)|$. The normalized Riemannian measure on $M$ will be denoted by $\lambda$.

Let $\mu \in P(M)$ and $f : M \to \mathbb{R}$ a nonnegative or $\mu$-integrable Borel function. We write $\mu f$ for $\int f\,d\mu$, and $f\mu$ for the measure defined by $f\mu(A) = \int_A f\,d\mu$. We let $L^2(\mu)$ denote the space of such functions for which $\mu|f|^2 < \infty$, equipped with the inner product $\langle f, g\rangle_\mu = \mu(fg)$ and the norm $\|f\|_\mu = \sqrt{\mu f^2}$. We simply write $L^2$ for $L^2(\lambda)$.

Of fundamental importance in the analysis of the asymptotics of $(\mu_t)$ is the mapping $\Pi : M(M) \to P(M)$ defined by
\[
\Pi(\mu) = \xi(V\mu)\lambda \tag{1}
\]
where $\xi : C(M) \to C(M)$ is the function defined by
\[
\xi(f)(x) = \frac{e^{-f(x)}}{\int_M e^{-f(y)}\,\lambda(dy)}. \tag{2}
\]
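For orientation (a remark added in this edit, not in the original text): $\mu \in P(M)$ is a fixed point of $\Pi$ exactly when it is the Gibbs measure associated with its own interaction potential,
\[
\mu = \Pi(\mu) \iff \frac{d\mu}{d\lambda}(x) = \frac{e^{-V\mu(x)}}{\int_M e^{-V\mu(y)}\,\lambda(dy)},
\]
so that, for instance, $V = 0$ gives $\mathrm{Fix}(\Pi) = \{\lambda\}$.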

In [3], it is shown that the asymptotics of $\mu_t$ can be precisely related to the long-term behavior of a certain semiflow on $P(M)$ induced by the ordinary differential equation (ODE) on $M(M)$:
\[
\dot{\mu} = -\mu + \Pi(\mu). \tag{3}
\]
Depending on the nature of $V$, the dynamics of (3) can be either convergent or nonconvergent, leading to similar behaviors for $\{\mu_t\}$ (see [3]). When $V$ is symmetric, (3) happens to be a quasigradient and the following convergence result holds.

Theorem 1.1 ([5]) Assume that $V$ is symmetric, i.e. $V(x,y) = V(y,x)$. Then the limit set of $\{\mu_t\}$ (for the topology of weak* convergence) is almost surely a compact connected subset of
\[
\mathrm{Fix}(\Pi) = \{\mu \in P(M) : \mu = \Pi(\mu)\}.
\]
In particular, if $\mathrm{Fix}(\Pi)$ is finite then $(\mu_t)$ converges almost surely toward a fixed point of $\Pi$. This holds for a generic function $V$ (see [5]).

Sufficient conditions ensuring that $\mathrm{Fix}(\Pi)$ has cardinality one are as follows:

Theorem 1.2 ([5], [6]) Assume that $V$ is symmetric and that one of the two following conditions holds:

(i) Up to an additive constant, $V$ is a Mercer kernel, that is,
\[
V(x,y) = K(x,y) + C \quad\text{and}\quad \int K(x,y)\,f(x)f(y)\,\lambda(dx)\lambda(dy) \ge 0 \ \text{ for all } f \in L^2.
\]

(ii) For all $x \in M$, $y \in M$, $u \in T_xM$, $v \in T_yM$,
\[
\mathrm{Ric}_x(u,u) + \mathrm{Ric}_y(v,v) + \mathrm{Hess}_{x,y}V\big((u,v),(u,v)\big) \ge K\big(\|u\|^2 + \|v\|^2\big),
\]
where $K$ is some positive constant. Here $\mathrm{Ric}_x$ stands for the Ricci tensor at $x$ and $\mathrm{Hess}_{x,y}$ is the Hessian of $V$ at $(x,y)$.

Then $\mathrm{Fix}(\Pi)$ reduces to a singleton $\{\mu^*\}$ and $\mu_t \to \mu^*$ with probability one.
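As a concrete instance of condition (i) (an illustration added in this edit, not taken from the paper): on the circle $M = S^1$ with its normalized measure, the kernel $V(x,y) = \beta\cos(x-y)$ with $\beta > 0$ is a Mercer kernel, since
\[
\int\!\!\int \cos(x-y)\,f(x)f(y)\,\lambda(dx)\lambda(dy) = \Big(\int f\cos\,d\lambda\Big)^2 + \Big(\int f\sin\,d\lambda\Big)^2 \ge 0,
\]
so Theorem 1.2 applies; here the unique fixed point is $\lambda$ itself (because $V\lambda = 0$).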

As observed in [6], condition (i) in Theorem 1.2 seems well suited to describe self-repelling diffusions. On the other hand, it is not clearly related to the geometry of $M$. Condition (ii) has a more geometrical flavor and is robust to smooth perturbations (of $M$ and $V$). It can be seen as a Bakry–Émery type condition for self-interacting diffusions.

In [5], it is also proved that every stable (for the ODE (3)) fixed point of $\Pi$ has a positive probability of being a limit point of $\mu_t$, and that an unstable fixed point cannot be a limit point of $\mu_t$.

Organisation of the paper

Let $\mu^* \in \mathrm{Fix}(\Pi)$. We will assume the following:

Hypothesis 1.3 $\mu_t$ converges a.s. towards $\mu^*$.

Sufficient conditions are given by Theorem 1.2.

In this paper we intend to study the rate of this convergence. Let
\[
\Delta_t = e^{t/2}\,(\mu_{e^t} - \mu^*).
\]

It will be shown that, under some conditions to be specified later, for all $g = (g_1, \ldots, g_n) \in C(M)^n$ the process
\[
\big[\Delta_s g_1, \ldots, \Delta_s g_n,\; V\Delta_s\big]_{s \ge t}
\]
converges in law, as $t \to \infty$, toward a certain stationary Ornstein-Uhlenbeck process $(Z^g, Z)$ on $\mathbb{R}^n \times C(M)$. This process is defined in Section 2. The main result is stated in Section 3, and some examples are developed; it is in particular observed that a strong repelling interaction gives a faster convergence. Section 4 is devoted to the proofs. The appendix, Section 5, contains general material on random variables and Ornstein-Uhlenbeck processes on $C(M)$.

In the following, $K$ (respectively $C$) denotes a positive constant (respectively a positive random constant). These constants may change from line to line.

2 The Ornstein-Uhlenbeck process $(Z^g, Z)$

Throughout this section we let $\mu \in P(M)$. For $x \in M$ we let $V_x : M \to \mathbb{R}$ be defined by $V_x(y) = V(x,y)$.

2.1 The operator $G_\mu$

Let $g \in C(M)$ and let $G_{\mu,g} : \mathbb{R} \times C(M) \to \mathbb{R}$ be the linear operator defined by
\[
G_{\mu,g}(u, f) = u/2 + \mathrm{Cov}_\mu(g, f), \tag{4}
\]
where $\mathrm{Cov}_\mu$ is the covariance on $L^2(\mu)$, that is, the bilinear form acting on $L^2 \times L^2$ defined by
\[
\mathrm{Cov}_\mu(f, g) = \mu(fg) - (\mu f)(\mu g).
\]
We define the linear operator $G_\mu : C(M) \to C(M)$ by
\[
G_\mu f(x) = G_{\mu,V_x}(f(x), f) = f(x)/2 + \mathrm{Cov}_\mu(V_x, f). \tag{5}
\]
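For a concrete feel for $G_\mu$ (an illustration added in this edit, anticipating the setting of Section 3.1.2): if $\mu = \lambda$, $V$ is symmetric with $V\lambda = 0$ and $V = \sum_{\alpha \ge 1} \lambda_\alpha\, e_\alpha \otimes e_\alpha$ in $L^2(\lambda)$, then for every eigenfunction $e_\beta$, $\beta \ge 1$,
\[
G_\lambda e_\beta(x) = \tfrac{1}{2}e_\beta(x) + \mathrm{Cov}_\lambda(V_x, e_\beta) = \big(\tfrac{1}{2} + \lambda_\beta\big)\,e_\beta(x),
\]
so $G_\lambda$ acts diagonally with eigenvalues $1/2 + \lambda_\alpha$, and Hypothesis 2.1 below (with $\hat{\lambda} = \lambda$) essentially amounts to a uniform lower bound $1/2 + \lambda_\alpha \ge \kappa > 0$.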

It is easily seen that $\|G_\mu f\| \le (2\|V\| + 1/2)\,\|f\|$. In particular, $G_\mu$ is a bounded operator. Let $\{e^{-tG_\mu}\}$ denote the semigroup acting on $C(M)$ with generator $-G_\mu$. From now on we will assume the following:

Hypothesis 2.1 There exist $\kappa > 0$ and $\hat{\lambda} \in P(M)$ such that $\mu \ll \hat{\lambda}$ with $\big\|\tfrac{d\mu}{d\hat{\lambda}}\big\| < \infty$; $\lambda$ and $\hat{\lambda}$ are equivalent measures with $\big\|\tfrac{d\lambda}{d\hat{\lambda}}\big\| < \infty$ and $\big\|\tfrac{d\hat{\lambda}}{d\lambda}\big\| < \infty$; and for all $f \in L^2(\hat{\lambda})$,
\[
\langle G_\mu f, f\rangle_{\hat{\lambda}} \ge \kappa\,\|f\|^2_{\hat{\lambda}}.
\]

Let
\[
\lambda(-G_\mu) = \lim_{t \to \infty} \frac{\log\big(\|e^{-tG_\mu}\|\big)}{t}.
\]
This limit exists by subadditivity. Then

Lemma 2.2 Hypothesis 2.1 implies that $\lambda(-G_\mu) \le -\kappa < 0$.

Proof: For all $f \in L^2(\hat{\lambda})$,
\[
\frac{d}{dt}\,\|e^{-tG_\mu}f\|^2_{\hat{\lambda}} = -2\,\langle G_\mu e^{-tG_\mu}f,\, e^{-tG_\mu}f\rangle_{\hat{\lambda}} \le -2\kappa\,\|e^{-tG_\mu}f\|^2_{\hat{\lambda}}.
\]
This implies that $\|e^{-tG_\mu}f\|_{\hat{\lambda}} \le e^{-\kappa t}\,\|f\|_{\hat{\lambda}}$.

Denote by $g_t$ the solution of the differential equation
\[
\frac{dg_t}{dt}(x) = -\mathrm{Cov}_\mu(V_x, g_t)
\]
with $g_0 = f$, where $f \in C(M)$. Note that $e^{-tG_\mu}f = e^{-t/2}g_t$. It is straightforward to check (using the fact that $\big\|\tfrac{d\mu}{d\hat{\lambda}}\big\| < \infty$) that
\[
\frac{d}{dt}\,\|g_t\|_{\hat{\lambda}} \le K\,\|g_t\|_{\hat{\lambda}},
\]
with $K$ a constant depending only on $V$ and $\mu$. Thus $\sup_{t \in [0,1]} \|g_t\|_{\hat{\lambda}} \le K\,\|f\|_{\hat{\lambda}}$. Now, since for all $x \in M$ and $t \in [0,1]$
\[
\Big|\frac{d}{dt}\,g_t(x)\Big| \le K\,\|g_t\|_{\hat{\lambda}} \le K\,\|f\|_{\hat{\lambda}},
\]
we have $\|g_1\| \le K\,\|f\|_{\hat{\lambda}}$. This implies that $\|e^{-G_\mu}f\| \le K\,\|f\|_{\hat{\lambda}}$. Now for all $t > 1$ and $f \in C(M)$,
\[
\|e^{-tG_\mu}f\| = \|e^{-G_\mu}e^{-(t-1)G_\mu}f\| \le K\,\|e^{-(t-1)G_\mu}f\|_{\hat{\lambda}} \le K e^{-\kappa(t-1)}\,\|f\|_{\hat{\lambda}} \le K e^{-\kappa t}\,\|f\|.
\]
This implies that $\|e^{-tG_\mu}\| \le K e^{-\kappa t}$, which proves the lemma. QED

The adjoint of $G_\mu$ is the operator $G_\mu^*$ on $M(M)$ defined by the relation
\[
m(G_\mu f) = (G_\mu^* m)f
\]
for all $m \in M(M)$ and $f \in C(M)$. It is not hard to verify that
\[
G_\mu^* m = \frac{1}{2}\,m + (Vm)\mu - \big(\mu(Vm)\big)\mu. \tag{6}
\]

2.2 The generator $A_\mu$ and its inverse $Q_\mu$

Let $H^2$ be the Sobolev space of real-valued functions on $M$, associated with the norm $\|f\|^2_H = \|f\|^2_\lambda + \|\nabla f\|^2_\lambda$. Since $\Pi(\mu)$ and $\lambda$ are equivalent measures with continuous Radon-Nikodym derivative, $L^2(\Pi(\mu)) = L^2(\lambda) := L^2$. We denote by $K_\mu$ the projection operator, acting on $L^2(\Pi(\mu))$, defined by
\[
K_\mu f = f - \Pi(\mu)f.
\]

We denote by $A_\mu$ the operator acting on $H^2$ defined by
\[
A_\mu f = \frac{1}{2}\,\Delta f - \langle \nabla V\mu, \nabla f\rangle.
\]
Note that for $f$ and $g$ in $L^2$,
\[
\langle A_\mu f, g\rangle_{\Pi(\mu)} = -\frac{1}{2}\int \langle \nabla f, \nabla g\rangle(x)\,\Pi(\mu)(dx),
\]
where $\langle\cdot,\cdot\rangle$ denotes the Riemannian inner product on $M$.

For all $f \in C(M)$ there exists $Q_\mu f \in H^2$ such that $\Pi(\mu)(Q_\mu f) = 0$ and
\[
f - \Pi(\mu)f = K_\mu f = -A_\mu Q_\mu f. \tag{7}
\]
Note that if $P^\mu_t$ denotes the semigroup with generator $A_\mu$, then
\[
Q_\mu f = \int_0^\infty P^\mu_t K_\mu f\,dt.
\]
Since there exists $p^\mu_t(\cdot,\cdot)$ such that
\[
P^\mu_t f(x) = \int_M p^\mu_t(x,y)\,f(y)\,\Pi(\mu)(dy),
\]
we have
\[
Q_\mu f(x) = \int_M q_\mu(x,y)\,f(y)\,\Pi(\mu)(dy) \quad\text{where}\quad q_\mu(x,y) = \int_0^\infty \big(p^\mu_t(x,y) - 1\big)\,dt.
\]

Then, as shown in [3], $Q_\mu f$ is $C^1$ and there exists a constant $K$ such that for all $f \in C(M)$ and $\mu \in P(M)$,
\[
\|Q_\mu f\| \le K\,\|f\|, \tag{8}
\]
\[
\|\nabla Q_\mu f\| \le K\,\|f\|. \tag{9}
\]
Finally, note that for $f$ and $g$ in $L^2$,
\[
\int \langle \nabla Q_\mu f, \nabla Q_\mu g\rangle(x)\,\Pi(\mu)(dx) = -2\,\langle A_\mu Q_\mu f, Q_\mu g\rangle_{\Pi(\mu)} = 2\,\langle f, Q_\mu g\rangle_{\Pi(\mu)}. \tag{10}
\]

2.3 The covariance $C_\mu$

We let $\widehat{C}_\mu$ denote the continuous bilinear form $\widehat{C}_\mu : C(M) \times C(M) \to \mathbb{R}$ defined by
\[
\widehat{C}_\mu(f, g) = 2\,\langle f, Q_\mu g\rangle_{\Pi(\mu)}.
\]
This form is symmetric (see its expression given by (10)). Note also that, for some constant $K$ depending on $\mu$,
\[
|\widehat{C}_\mu(f, g)| \le K\,\|f\| \times \|g\|.
\]
We let $C_\mu$ denote the mapping $C_\mu : M \times M \to \mathbb{R}$ defined by
\[
C_\mu(x, y) = \widehat{C}_\mu(V_x, V_y).
\]
Then $C_\mu$ is a covariance function (or a Mercer kernel), i.e. it is continuous, symmetric, and $\sum_{i,j} \lambda_i \lambda_j\, C_\mu(x_i, x_j) \ge 0$ for every finite family $(\lambda_i)$ in $\mathbb{R}$ and $(x_i)$ in $M$.

2.4 The process Z

We now define an Ornstein-Uhlenbeck process on $C(M)$ of covariance $C_\mu$ and drift $-G_\mu$. This heavily relies on the general construction given in the appendix.

A Brownian motion on $C(M)$ with covariance $C_\mu$ is a $C(M)$-valued stochastic process $W = \{W_t\}_{t \ge 0}$ such that

(i) $W_0 = 0$;

(ii) $t \mapsto W_t$ is continuous;

(iii) for every finite subset $S \subset \mathbb{R}_+ \times M$, $\{W_t(x)\}_{(t,x) \in S}$ is a centered Gaussian random vector;

(iv) $E[W_s(x)W_t(y)] = (s \wedge t)\,C_\mu(x,y)$.

Lemma 2.3 There exists a Brownian motion on $C(M)$ with covariance $C_\mu$.

Proof: Let
\[
d_{C_\mu}(x,y) := \sqrt{C_\mu(x,x) - 2C_\mu(x,y) + C_\mu(y,y)} = \|\nabla Q_\mu(V_x - V_y)\|_{\Pi(\mu)} \le K\,\|V_x - V_y\|,
\]
where the last inequality follows from (9). Then $d_{C_\mu}(x,y) \le K\,d(x,y)$ and the result follows from Proposition 5.8 and Remark 5.7 in the appendix. QED

We say that a $C(M)$-valued process $Z$ is an Ornstein-Uhlenbeck process of covariance $C_\mu$ and drift $-G_\mu$ if
\[
Z_t = Z_0 - \int_0^t G_\mu Z_s\,ds + W_t \tag{11}
\]
where

(i) $W$ is a $C(M)$-valued Brownian motion of covariance $C_\mu$;

(ii) $Z_0$ is a $C(M)$-valued random variable;

(iii) $W$ and $Z_0$ are independent.

Note that we can think of $Z$ as a solution to the linear SDE
\[
dZ_t = dW_t - G_\mu Z_t\,dt.
\]
It follows from Section 5.3 in the appendix that such a process exists and defines a Markov process. Furthermore:

Proposition 2.4 Under Hypothesis 2.1,

(i) $(Z_t)$ converges in law toward a $C(M)$-valued random variable $Z_\infty$;

(ii) $Z_\infty$ is Gaussian, in the sense that for every finite set $S \subset M$, $\{Z_\infty(x)\}_{x \in S}$ is a centered Gaussian random vector;

(iii) let $\pi_\mu$ denote the law of $Z_\infty$; then $\pi_\mu$ is characterized by its variance
\[
\mathrm{Var}(\pi_\mu) : M(M) \to \mathbb{R}, \qquad m \mapsto E\big((mZ_\infty)^2\big),
\]
and for all $m \in M(M)$,
\[
\mathrm{Var}(\pi_\mu)(m) = \int_0^\infty \int_{M \times M} C_\mu(x,y)\,m_t(dx)\,m_t(dy)\,dt = \int_0^\infty \widehat{C}_\mu(Vm_t, Vm_t)\,dt,
\]
where $m_t = e^{-tG_\mu^*}m$.

Proof: This follows from Proposition 5.16 in the appendix. Example 5.18 shows that assertion (iii) of this proposition is satisfied. QED

2.5 The process $Z^g$

For $g = (g_1, \ldots, g_n) \in C(M)^n$, let $\tilde{M} = \{1, \ldots, n\} \cup M$ be the disjoint union of $\{1, \ldots, n\}$ and $M$, and let $C^g_\mu : \tilde{M} \times \tilde{M} \to \mathbb{R}$ be the function defined by
\[
C^g_\mu(x,y) =
\begin{cases}
\widehat{C}_\mu(g_x, g_y) & \text{for } x, y \in \{1, \ldots, n\},\\
C_\mu(x,y) & \text{for } x, y \in M,\\
\widehat{C}_\mu(V_x, g_y) & \text{for } x \in M,\ y \in \{1, \ldots, n\}.
\end{cases}
\]
Then $C^g_\mu$ is a Mercer kernel (see Section 5.2).

A Brownian motion on $\mathbb{R}^n \times C(M)$ with covariance $C^g_\mu$ is an $\mathbb{R}^n \times C(M)$-valued stochastic process $(W^g, W) = \{(W^{g_1}_t, \ldots, W^{g_n}_t, W_t)\}_{t \ge 0}$ such that:

(i) $W = \{W_t\}_{t \ge 0}$ is a $C(M)$-valued Brownian motion with covariance $C_\mu$;

(ii) for every finite subset $S \subset \mathbb{R}_+ \times M$, $\{W^g_t, W_t(x)\}_{(t,x) \in S}$ is a centered Gaussian random vector;

(iii) $E(W^{g_i}_s W^{g_j}_t) = (s \wedge t)\,\widehat{C}_\mu(g_i, g_j)$ and $E(W_s(x)\,W^{g_i}_t) = (s \wedge t)\,\widehat{C}_\mu(V_x, g_i)$.

Lemma 2.5 There exists a Brownian motion on $\mathbb{R}^n \times C(M)$ with covariance $C^g_\mu$.

Proof: Let $\tilde{d}$ be the distance on $\tilde{M}$ defined by
\[
\tilde{d}(x,y) =
\begin{cases}
\mathbf{1}_{x \ne y} & \text{for } x, y \in \{1, \ldots, n\},\\
d(x,y) & \text{for } x, y \in M,\\
d(x, x_0) + 1 & \text{for } x \in M,\ y \in \{1, \ldots, n\},
\end{cases}
\]
where $x_0$ is some arbitrary point in $M$. This makes $\tilde{M}$ a compact metric space, and it is easy to show that the function $C^g_\mu$ verifies Hypothesis 5.6 (use the proof of Lemma 2.3). The result follows by application of Proposition 5.8. QED

Now let $Z^g_t = (Z^{g_1}_t, \ldots, Z^{g_n}_t) \in \mathbb{R}^n$ denote the solution to the SDE
\[
dZ^{g_i}_t = dW^{g_i}_t - \big(Z^{g_i}_t/2 + \mathrm{Cov}_\mu(Z_t, g_i)\big)\,dt, \qquad i = 1, \ldots, n, \tag{12}
\]
where $(W^g, W)$ is as above and $Z = (Z_t)$ is given by (11).

The following result generalizes Proposition 2.4.

Proposition 2.6 Under Hypothesis 2.1,

(i) the process $(Z^g_t, Z_t)$ converges in law toward a centered $\mathbb{R}^n \times C(M)$-valued Gaussian random variable $(Z^g_\infty, Z_\infty)$;

(ii) let $\pi_{g,\mu}$ denote the law of $(Z^g_\infty, Z_\infty)$; then $\pi_{g,\mu}$ is characterized by its variance
\[
\mathrm{Var}(\pi_{g,\mu}) : \mathbb{R}^n \times M(M) \to \mathbb{R}, \qquad (u, m) \mapsto E\big((mZ_\infty + \langle u, Z^g_\infty\rangle)^2\big);
\]
and for all $u \in \mathbb{R}^n$, $m \in M(M)$,
\[
\mathrm{Var}(\pi_{g,\mu})(u, m) = \int_0^\infty \widehat{C}_\mu(f_t, f_t)\,dt
\quad\text{with}\quad
f_t = e^{-t/2}\sum_i u_i g_i + V m_t,
\]
and where $m_t$ is defined by
\[
m_t f = m\big(e^{-tG_\mu}f\big) - \sum_{i=1}^n u_i \int_0^t e^{-s/2}\,\mathrm{Cov}_\mu\big(g_i,\, e^{-(t-s)G_\mu}f\big)\,ds. \tag{13}
\]

Proof: Let $G^g_\mu : \mathbb{R}^n \times C(M) \to \mathbb{R}^n \times C(M)$ be the operator defined by
\[
G^g_\mu = \begin{pmatrix} I/2 & A^g_\mu \\ 0 & G_\mu \end{pmatrix} \tag{14}
\]
where $A^g_\mu : C(M) \to \mathbb{R}^n$ is the linear map defined by
\[
A^g_\mu(f) = \big(\mathrm{Cov}_\mu(f, g_1), \ldots, \mathrm{Cov}_\mu(f, g_n)\big).
\]
Then $(Z^g, Z)$ is a $C(\tilde{M})$-valued Ornstein-Uhlenbeck process of covariance $C^g_\mu$ and drift $-G^g_\mu$. It is not hard to verify that, under Hypothesis 2.1, the assumptions of Proposition 5.16 hold, so that $(Z^g_t, Z_t)$ converges in law toward a centered $\mathbb{R}^n \times C(M)$-valued Gaussian random variable $(Z^g_\infty, Z_\infty)$ with variance
\[
\mathrm{Var}(\pi_{g,\mu})(u, m) = \int_0^\infty \widehat{C}_\mu(f_t, f_t)\,dt
\]
with $f_t = \sum_i u_t(i)\,g_i + V m_t$ and where $(u_t, m_t) = e^{-t(G^g_\mu)^*}(u, m)$. Now
\[
(G^g_\mu)^* = \begin{pmatrix} I/2 & 0 \\ (A^g_\mu)^* & (G_\mu)^* \end{pmatrix}
\]
and $(A^g_\mu)^* u = \sum_i u_i\,(g_i - \mu g_i)\mu$. Thus $u_t = e^{-t/2}u$ and
\[
\frac{dm_t}{dt} = -(A^g_\mu)^* u_t - (G_\mu)^* m_t.
\]
Thus $m_t$ is the solution, with $m_0 = m$, of
\[
\frac{dm_t}{dt} = -e^{-t/2}\Big(\sum_i u_i\,(g_i - \mu g_i)\Big)\mu - G^*_\mu m_t. \tag{15}
\]
Note that (15) is equivalent to
\[
\frac{d}{dt}(m_t f) = -e^{-t/2}\,\mathrm{Cov}_\mu\Big(\sum_i u_i g_i,\, f\Big) - m_t(G_\mu f)
\]
for all $f \in C(M)$, with $m_0 = m$. From this we deduce that
\[
m_t = e^{-tG^*_\mu} m_0 - \int_0^t e^{-s/2}\, e^{-(t-s)G^*_\mu}\Big(\sum_i u_i\,(g_i - \mu g_i)\Big)\mu\; ds,
\]
which implies the formula for $m_t$ given by (13). QED

For further reference we call $(Z^g, Z)$ an Ornstein-Uhlenbeck process of covariance $C^g_\mu$ and drift $-G^g_\mu$. It is called stationary when its initial distribution is $\pi_{g,\mu}$.

3 A central limit theorem for $\mu_t$

We state here the main results of this article. We assume that $\mu^* \in \mathrm{Fix}(\Pi)$ satisfies Hypotheses 1.3 and 2.1. Set $\Delta_t = e^{t/2}(\mu_{e^t} - \mu^*)$, $D_t = V\Delta_t$ and $D_{t+\cdot} = \{D_{t+s} : s \ge 0\}$. Then

Theorem 3.1 $D_{t+\cdot}$ converges in law, as $t \to \infty$, towards a stationary Ornstein-Uhlenbeck process of covariance $C_{\mu^*}$ and drift $-G_{\mu^*}$.

For $g = (g_1, \ldots, g_n) \in C(M)^n$, we set $D^g_t = (\Delta_t g, D_t)$ and $D^g_{t+\cdot} = \{D^g_{t+s} : s \ge 0\}$. Then

Theorem 3.2 $(D^g_{t+s})_{s \ge 0}$ converges in law towards a stationary Ornstein-Uhlenbeck process of covariance $C^g_{\mu^*}$ and drift $-G^g_{\mu^*}$.

Let $\widehat{C} : C(M) \times C(M) \to \mathbb{R}$ be the symmetric bilinear form defined by
\[
\widehat{C}(f,g) = \int_0^\infty \widehat{C}_{\mu^*}(f_t, g_t)\,dt, \tag{16}
\]
with ($g_t$ being defined by the same formula, with $g$ in place of $f$)
\[
f_t(x) = e^{-t/2}f(x) - \int_0^t e^{-s/2}\,\mathrm{Cov}_{\mu^*}\big(f,\, e^{-(t-s)G_{\mu^*}}V_x\big)\,ds. \tag{17}
\]

Corollary 3.3 $\Delta_t g$ converges in law towards a centered Gaussian variable $Z^g = (Z^{g_1}, \ldots, Z^{g_n})$ with covariance
\[
E[Z^{g_i} Z^{g_j}] = \widehat{C}(g_i, g_j).
\]
Proof: This follows from Theorem 3.2 and the computation of $\mathrm{Var}(\pi_{g,\mu^*})(u, 0)$. QED

3.1 Examples

3.1.1 Diffusions

Suppose $V(x,y) = V(x)$, so that $(X_t)$ is just a standard diffusion on $M$ with invariant measure
\[
\mu^* = \frac{e^{-V}}{\lambda(e^{-V})}\,\lambda.
\]

Let $f \in C(M)$. Then $f_t$ defined by (17) is equal to $e^{-t/2}f$ (using $e^{-tG_{\mu^*}}\mathbf{1} = e^{-t/2}\mathbf{1}$). Thus
\[
\widehat{C}(f,g) = 2\,\mu^*(f\,Q_{\mu^*}g). \tag{18}
\]
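To spell out the computation behind (18) (details added in this edit): since $V_x$ is here a constant function of $y$, $\mathrm{Cov}_{\mu^*}\big(f, e^{-(t-s)G_{\mu^*}}V_x\big) = 0$, so (17) reduces to $f_t = e^{-t/2}f$, and
\[
\widehat{C}(f,g) = \int_0^\infty \widehat{C}_{\mu^*}\big(e^{-t/2}f,\, e^{-t/2}g\big)\,dt = \widehat{C}_{\mu^*}(f,g) = 2\,\langle f, Q_{\mu^*}g\rangle_{\Pi(\mu^*)} = 2\,\mu^*(f\,Q_{\mu^*}g),
\]
using $\Pi(\mu^*) = \mu^*$.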

Corollary 3.3 then gives:

Theorem 3.4 For all $g \in C(M)^n$, $\Delta_t g$ converges in law toward a centered Gaussian variable $(Z^{g_1}, \ldots, Z^{g_n})$, with covariance given by
\[
E(Z^{g_i} Z^{g_j}) = 2\,\mu^*(g_i\, Q_{\mu^*} g_j).
\]
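For instance (an illustration added in this edit, not from the paper): if $g$ is an eigenfunction of the generator $A_{\mu^*}$ with $A_{\mu^*}g = -\kappa g$, $\kappa > 0$, and $\mu^*(g) = 0$, then $Q_{\mu^*}g = g/\kappa$ and
\[
E\big((Z^{g})^2\big) = \frac{2\,\mu^*(g^2)}{\kappa},
\]
the familiar $2\langle g, (-A_{\mu^*})^{-1}g\rangle_{\mu^*}$ asymptotic variance appearing in central limit theorems for additive functionals of ergodic diffusions.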

Remark 3.5 This central limit theorem for Brownian motions on compact manifolds has already been considered by Baxter and Brosamler in [1] and [2]; and by Bhattacharya in [7] for ergodic diffusions.

3.1.2 The case $\mu^* = \lambda$ and $V$ symmetric

Suppose here that $\mu^* = \lambda$ and that $V$ is symmetric. We assume that $V\lambda = 0$; this is without loss of generality, since $\Pi(\lambda) = \lambda$ implies that $V\lambda$ is a constant function.

Since $V$ (viewed as an operator on $L^2(\lambda)$) is compact and symmetric, there exist an orthonormal basis $(e_\alpha)_{\alpha \ge 0}$ of $L^2(\lambda)$ and a sequence of reals $(\lambda_\alpha)_{\alpha \ge 0}$ such that $e_0$ is a constant function and
\[
V = \sum_{\alpha \ge 1} \lambda_\alpha\, e_\alpha \otimes e_\alpha.
\]
Assume that $1/2 + \lambda_\alpha > 0$ for all $\alpha$. Then Hypothesis 2.1 holds with $\hat{\lambda} = \lambda$, and the convergence of $\mu_t$ towards $\lambda$ holds with positive probability (see [6]).

Let $f \in C(M)$ and let $f_t$ be defined by (17). Denoting $f_\alpha = \langle f, e_\alpha\rangle_\lambda$ and $f^\alpha_t = \langle f_t, e_\alpha\rangle_\lambda$, we have $f^0_t = e^{-t/2}f_0$ and, for $\alpha \ge 1$,
\[
f^\alpha_t = e^{-t/2}f_\alpha - \lambda_\alpha\, e^{-(1/2+\lambda_\alpha)t}\,\frac{e^{\lambda_\alpha t} - 1}{\lambda_\alpha}\, f_\alpha = e^{-(1/2+\lambda_\alpha)t} f_\alpha.
\]
Using the fact that $\widehat{C}_\lambda(f,g) = 2\,\lambda(f\,Q_\lambda g)$, this implies that
\[
\widehat{C}(f,g) = 2 \sum_{\alpha \ge 1}\sum_{\beta \ge 1} \frac{1}{1 + \lambda_\alpha + \lambda_\beta}\,\langle f, e_\alpha\rangle_\lambda\,\langle g, e_\beta\rangle_\lambda\,\lambda(e_\alpha\, Q_\lambda e_\beta).
\]
This, together with Corollary 3.3, proves

Theorem 3.6 Assume Hypothesis 1.3 and that $1/2 + \lambda_\alpha > 0$ for all $\alpha$. Then for all $g \in C(M)^n$, $\Delta_t g$ converges in law toward a centered Gaussian variable $(Z^{g_1}, \ldots, Z^{g_n})$, with covariance given by
\[
E(Z^{g_i} Z^{g_j}) = \widehat{C}(g_i, g_j).
\]
In particular,
\[
E(Z^{e_\alpha} Z^{e_\beta}) = \frac{2}{1 + \lambda_\alpha + \lambda_\beta}\,\lambda(e_\alpha\, Q_\lambda e_\beta).
\]

Note that when all the $\lambda_\alpha$ are positive, which corresponds to what is called a self-repelling interaction in [6], the rate of convergence of $\mu_t$ towards $\lambda$ is faster than when there is no interaction; and the stronger the interaction (that is, the larger the $\lambda_\alpha$'s), the faster the convergence.
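To make this concrete (a worked example added in this edit, not taken from the paper): on the circle $M = S^1$ of circumference $2\pi$ with $V(x,y) = \beta\cos(x-y)$, $\beta > 0$, one may take $e_1 = \sqrt{2}\cos$, $e_2 = \sqrt{2}\sin$ and $\lambda_1 = \lambda_2 = \beta/2$; these are eigenfunctions of $\tfrac12\Delta$ with eigenvalue $-\tfrac12$, so $Q_\lambda e_\alpha = 2e_\alpha$ and
\[
E\big((Z^{e_\alpha})^2\big) = \frac{2}{1 + \beta}\,\lambda(e_\alpha\, Q_\lambda e_\alpha) = \frac{4}{1+\beta},
\]
which decreases as the repelling strength $\beta$ increases; $\beta = 0$ recovers the variance $4$ of the plain Brownian motion case (Theorem 3.4 with $V = 0$).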

4 Proof of the main results

We assume Hypothesis 1.3 and that $\mu^*$ satisfies Hypothesis 2.1. It is possible to choose $\kappa$ in Hypothesis 2.1 such that $\kappa < 1/2$; in the following, $\kappa$ will denote such a constant (such a $\kappa$ exists as soon as Hypothesis 2.1 holds). Note that we then have $\lambda(-G_{\mu^*}) < -\kappa$.

4.1 A lemma satisfied by $Q_\mu$

We denote by $\mathcal{X}(M)$ the space of continuous vector fields on $M$, and equip the spaces $P(M)$ and $\mathcal{X}(M)$ respectively with the weak convergence topology and with the uniform convergence topology.

Lemma 4.1 For all $f \in C(M)$, the mapping $\mu \mapsto \nabla Q_\mu f$ is a continuous mapping from $P(M)$ into $\mathcal{X}(M)$.

Proof: Let $\mu$ and $\nu$ be in $M(M)$, and $f \in C(M)$. Set $g = Q_\mu f$. Then $f = -A_\mu g + \Pi(\mu)f$ and
\[
\|\nabla Q_\mu f - \nabla Q_\nu f\| = \|-\nabla Q_\mu A_\mu g + \nabla Q_\nu A_\mu g\| = \|\nabla g + \nabla Q_\nu A_\mu g\|
\le \|\nabla(g + Q_\nu A_\nu g)\| + \|\nabla Q_\nu (A_\mu - A_\nu)g\|.
\]
Since $\nabla(g + Q_\nu A_\nu g) = 0$ and $(A_\mu - A_\nu)g = -\langle \nabla V(\mu-\nu), \nabla g\rangle$, we get
\[
\|\nabla Q_\mu f - \nabla Q_\nu f\| \le K\,\big\|\langle \nabla V(\mu-\nu), \nabla g\rangle\big\|. \tag{19}
\]
Using the fact that $(x,y) \mapsto \nabla V_x(y)$ is uniformly continuous, the right-hand side of (19) converges towards $0$ when $d(\mu,\nu)$ converges towards $0$, $d$ being a distance compatible with the weak convergence. QED

4.2 The process $\Delta$

Set $h_t = V\mu_t$ and $h^* = V\mu^*$. Recall that $\Delta_t = e^{t/2}(\mu_{e^t} - \mu^*)$ and $D_t = V\Delta_t$. Note that $D_t(x) = \Delta_t V_x$.

To simplify the notation, we set $K_s = K_{\mu_s}$, $Q_s = Q_{\mu_s}$ and $A_s = A_{\mu_s}$. Let $(M^f_t)_{t \ge 1}$ be the martingale defined by
\[
M^f_t = \sum_\alpha \int_1^t e_\alpha(Q_s f)(X_s)\,dB^\alpha_s.
\]
The quadratic covariation of $M^f$ and $M^g$ (with $f$ and $g$ in $C(M)$) is given by
\[
\langle M^f, M^g\rangle_t = \int_1^t \langle \nabla Q_s f, \nabla Q_s g\rangle(X_s)\,ds.
\]
Then for all $t \ge 1$ (with $\dot{Q}_t = \frac{d}{dt}Q_t$),
\[
Q_t f(X_t) - Q_1 f(X_1) = M^f_t + \int_1^t \dot{Q}_s f(X_s)\,ds - \int_1^t K_s f(X_s)\,ds.
\]
Thus
\begin{align*}
\mu_t f &= \frac{1}{t}\int_1^t K_s f(X_s)\,ds + \frac{1}{t}\int_1^t \Pi(\mu_s)f\,ds + \frac{1}{t}\int_0^1 f(X_s)\,ds\\
&= -\frac{1}{t}\Big(Q_t f(X_t) - Q_1 f(X_1) - \int_1^t \dot{Q}_s f(X_s)\,ds\Big) + \frac{M^f_t}{t} + \frac{1}{t}\int_1^t \langle \xi(h_s), f\rangle_\lambda\,ds + \frac{1}{t}\int_0^1 f(X_s)\,ds.
\end{align*}

Note that $(D_t)$ is a continuous process taking its values in $C(M)$ and that $D_t = e^{t/2}(h_{e^t} - h^*)$. For $f \in C(M)$ (using the fact that $\mu^* f = \langle \xi(h^*), f\rangle_\lambda$),
\[
\Delta_t f = \sum_{i=1}^5 \Delta^i_t f \tag{20}
\]

with
\begin{align*}
\Delta^1_t f &= e^{-t/2}\Big(-Q_{e^t}f(X_{e^t}) + Q_1 f(X_1) + \int_1^{e^t} \dot{Q}_s f(X_s)\,ds\Big),\\
\Delta^2_t f &= e^{-t/2} M^f_{e^t},\\
\Delta^3_t f &= e^{-t/2}\int_1^{e^t} \big\langle \xi(h_s) - \xi(h^*) - D\xi(h^*)(h_s - h^*),\, f\big\rangle_\lambda\,ds,\\
\Delta^4_t f &= e^{-t/2}\int_1^{e^t} \big\langle D\xi(h^*)(h_s - h^*),\, f\big\rangle_\lambda\,ds,\\
\Delta^5_t f &= e^{-t/2}\Big(\int_0^1 f(X_s)\,ds - \mu^* f\Big).
\end{align*}
Then $D_t = \sum_{i=1}^5 D^i_t$, where $D^i_t = V\Delta^i_t$. Finally, note that
\[
\langle D\xi(h^*)(h - h^*), f\rangle_\lambda = -\mathrm{Cov}_{\mu^*}(h - h^*, f). \tag{21}
\]

4.3 First estimates

We recall some estimates from [3]: there exists a constant $K$ such that for all $f \in C(M)$ and $t > 0$,
\[
\|Q_t f\| \le K\,\|f\|, \qquad \|\nabla Q_t f\| \le K\,\|f\|, \qquad \|\dot{Q}_t f\| \le \frac{K}{t}\,\|f\|.
\]
These estimates imply in particular that
\[
\langle M^f - M^g\rangle_t \le K\,\|f - g\|^2 \times t
\]
and that

Lemma 4.2 There exists a constant $K$ depending on $\|V\|$ such that for all $t \ge 1$ and all $f \in C(M)$,
\[
|\Delta^1_t f| + |\Delta^5_t f| \le K\,(1 + t)\,e^{-t/2}\,\|f\|, \tag{22}
\]
which implies that $\big((\Delta^1 + \Delta^5)_{t+s}\big)_{s \ge 0}$ and $\big((D^1 + D^5)_{t+s}\big)_{s \ge 0}$ both converge towards $0$, as $t \to \infty$ (respectively in $M(M)$ and in $C(\mathbb{R}_+ \times M)$).

We also have

Lemma 4.3 There exists a constant $K$ such that for all $t \ge 0$ and all $f \in C(M)$,
\begin{align*}
E\big[(\Delta^2_t f)^2\big] &\le K\,\|f\|^2,\\
|\Delta^3_t f| &\le K\,\|f\|_\lambda \times e^{-t/2}\int_0^t \|D_s\|^2_\lambda\,ds,\\
|\Delta^4_t f| &\le K\,\|f\|_\lambda \times e^{-t/2}\int_0^t e^{s/2}\,\|D_s\|_\lambda\,ds.
\end{align*}

Proof: The first estimate follows from
\[
E\big[(\Delta^2_t f)^2\big] = e^{-t}\,E\big[(M^f_{e^t})^2\big] = e^{-t}\,E\big[\langle M^f\rangle_{e^t}\big] \le e^{-t}\int_1^{e^t} \|\nabla Q_s f\|^2\,ds \le K\,\|f\|^2.
\]
The second estimate follows from the fact that
\[
\|\xi(h) - \xi(h^*) - D\xi(h^*)(h - h^*)\|_\lambda = O\big(\|h - h^*\|^2_\lambda\big).
\]
The last estimate follows easily after having remarked that
\[
|\langle D\xi(h^*)(h_s - h^*), f\rangle| = |\mathrm{Cov}_{\mu^*}(h_s - h^*, f)| \le K\,\|f\|_\lambda \times \|h_s - h^*\|_\lambda \le K\,\|f\|_\lambda \times s^{-1/2}\,\|D_{\log(s)}\|_\lambda.
\]
This proves the lemma. QED

4.4 The processes $\bar{\Delta}$ and $\bar{D}$

Set $\bar{\Delta} = \Delta^2 + \Delta^3 + \Delta^4$ and $\bar{D} = D^2 + D^3 + D^4$. For $g \in C(M)$, set
\[
\epsilon^g_t = e^{t/2}\,\big\langle \xi(h_{e^t}) - \xi(h^*) - D\xi(h^*)(h_{e^t} - h^*),\, g\big\rangle_\lambda.
\]
Then
\[
d\bar{\Delta}_t g = -\frac{\bar{\Delta}_t g}{2}\,dt + dN^g_t + \epsilon^g_t\,dt + \langle D\xi(h^*)(D_t), g\rangle_\lambda\,dt,
\]

where for all $g \in C(M)$, $N^g$ is a martingale. Moreover, for $f$ and $g$ in $C(M)$,
\[
\langle N^f, N^g\rangle_t = \int_0^t \big\langle \nabla Q_{e^s}f(X_{e^s}),\, \nabla Q_{e^s}g(X_{e^s})\big\rangle\,ds.
\]
Then, for all $x$,
\[
d\bar{D}_t(x) = -\frac{\bar{D}_t(x)}{2}\,dt + dM_t(x) + \epsilon_t(x)\,dt + \langle D\xi(h^*)(D_t), V_x\rangle_\lambda\,dt,
\]
where $M$ is the martingale in $C(M)$ defined by $M(x) = N^{V_x}$ and $\epsilon_t(x) = \epsilon^{V_x}_t$. We also have
\[
G_{\mu^*}(\bar{D})_t(x) = \frac{\bar{D}_t(x)}{2} - \langle D\xi(h^*)(\bar{D}_t), V_x\rangle_\lambda.
\]
Denoting $L_{\mu^*} = L_{-G_{\mu^*}}$ (defined by equation (32) in the appendix), this implies that
\begin{align*}
dL_{\mu^*}(\bar{D})_t(x) &= d\bar{D}_t(x) + G_{\mu^*}(\bar{D})_t(x)\,dt\\
&= dM_t(x) + \big\langle D\xi(h^*)\big((D^1 + D^5)_t\big), V_x\big\rangle_\lambda\,dt + \epsilon_t(x)\,dt.
\end{align*}
Thus
\[
L_{\mu^*}(\bar{D})_t(x) = M_t(x) + \int_0^t \tilde{\epsilon}_s(x)\,ds
\]
with $\tilde{\epsilon}_s(x) = \tilde{\epsilon}_s V_x$, where for all $f \in C(M)$,
\[
\tilde{\epsilon}_s f = \epsilon^f_s + \big\langle D\xi(h^*)\big((D^1 + D^5)_s\big), f\big\rangle_\lambda.
\]
Using Lemma 5.10,
\[
\bar{D}_t = L^{-1}_{\mu^*}(M)_t + \int_0^t e^{-(t-s)G_{\mu^*}}\,\tilde{\epsilon}_s\,ds. \tag{23}
\]
For $g = (g_1, \ldots, g_n) \in C(M)^n$, we denote $\bar{\Delta}_t g = (\bar{\Delta}_t g_1, \ldots, \bar{\Delta}_t g_n)$, $N^g = (N^{g_1}, \ldots, N^{g_n})$ and $\tilde{\epsilon}_t g = (\tilde{\epsilon}_t g_1, \ldots, \tilde{\epsilon}_t g_n)$. Then, denoting $L^g_{\mu^*} = L_{-G^g_{\mu^*}}$ (with $G^g_{\mu^*}$ defined by (14)), we have
\[
L^g_{\mu^*}(\bar{\Delta} g, \bar{D})_t = (N^g_t, M_t) + \int_0^t (\tilde{\epsilon}_s g, \tilde{\epsilon}_s)\,ds,
\]

so that (using Lemma 5.10 and integrating by parts)
\[
(\bar{\Delta}_t g, \bar{D}_t) = (L^g_{\mu^*})^{-1}(N^g, M)_t + \int_0^t e^{-(t-s)G^g_{\mu^*}}\,(\tilde{\epsilon}_s g, \tilde{\epsilon}_s)\,ds. \tag{24}
\]
Moreover,
\[
(L^g_{\mu^*})^{-1}(N^g, M)_t = \big(\widehat{N}^{g_1}_t, \ldots, \widehat{N}^{g_n}_t,\, L^{-1}_{\mu^*}(M)_t\big),
\]
where
\[
\widehat{N}^{g_i}_t = N^{g_i}_t - \int_0^t \Big(\frac{\widehat{N}^{g_i}_s}{2} + \mathrm{Cov}_{\mu^*}\big(L^{-1}_{\mu^*}(M)_s,\, g_i\big)\Big)\,ds.
\]

4.5 Estimation of $\tilde{\epsilon}_t$

4.5.1 Estimation of $\|L^{-1}_{\mu^*}(M)_t\|_\lambda$

Lemma 4.4 (i) For all $\alpha \ge 2$, there exists a constant $K_\alpha$ such that for all $t \ge 0$,
\[
E\big[\|L^{-1}_{\mu^*}(M)_t\|^\alpha_\lambda\big]^{1/\alpha} \le K_\alpha.
\]
(ii) Almost surely, there exists $C$ with $E[C] < \infty$ such that for all $t \ge 0$,
\[
\|L^{-1}_{\mu^*}(M)_t\|_\lambda \le C\,(1 + t).
\]

Proof: Since $\|L^{-1}_{\mu^*}(M)_t\|_\lambda \le K\,\|L^{-1}_{\mu^*}(M)_t\|_{\hat{\lambda}}$, we estimate $\|L^{-1}_{\mu^*}(M)_t\|_{\hat{\lambda}}$. We have
\[
dL^{-1}_{\mu^*}(M)_t = dM_t - G_{\mu^*}L^{-1}_{\mu^*}(M)_t\,dt.
\]
Let $N$ be the martingale defined by
\[
N_t = \int_0^t \Big\langle \frac{L^{-1}_{\mu^*}(M)_s}{\|L^{-1}_{\mu^*}(M)_s\|_{\hat{\lambda}}},\, dM_s\Big\rangle_{\hat{\lambda}}.
\]
We have $\langle N\rangle_t \le Kt$ for some constant $K$. Then
\[
d\,\|L^{-1}_{\mu^*}(M)_t\|^2_{\hat{\lambda}} = 2\,\|L^{-1}_{\mu^*}(M)_t\|_{\hat{\lambda}}\,dN_t - 2\,\big\langle L^{-1}_{\mu^*}(M)_t,\, G_{\mu^*}L^{-1}_{\mu^*}(M)_t\big\rangle_{\hat{\lambda}}\,dt + d\Big(\int \langle M(x)\rangle_t\,\hat{\lambda}(dx)\Big).
\]
Note that there exists a constant $K$ such that
\[
\frac{d}{dt}\Big(\int \langle M(x)\rangle_t\,\hat{\lambda}(dx)\Big) \le K
\]
