HAL Id: hal-02167101

https://hal.archives-ouvertes.fr/hal-02167101v3

Submitted on 19 Jun 2020


Quasi-Stationary Distributions and Resilience: What to get from a sample?

Jean-René Chazottes, Pierre Collet, Sylvie Méléard, Servet Martínez

To cite this version:

Jean-René Chazottes, Pierre Collet, Sylvie Méléard, Servet Martínez. Quasi-Stationary Distributions and Resilience: What to get from a sample?. Journal de l'École polytechnique - Mathématiques, École polytechnique, 2020. hal-02167101v3


Quasi-Stationary Distributions and Resilience: What to get from a sample?

J.-R. Chazottes∗1, P. Collet†1, S. Méléard‡2, and S. Martínez§3

1 CPHT, CNRS, Ecole Polytechnique, Institut Polytechnique de Paris, F-91128 Palaiseau, France

2 CMAP, CNRS, Ecole Polytechnique, Institut Polytechnique de Paris, F-91128 Palaiseau, France

3 DIM-CMM, Universidad de Chile, UMI 2807 UChile-CNRS, Beauchef 851, Santiago, Chile

June 9, 2020

Abstract

We study a class of multi-species birth-and-death processes going almost surely to extinction and admitting a unique quasi-stationary distribution (qsd for short). When rescaled by K and in the limit K → +∞, the realizations of such processes get close, in any fixed finite-time window, to the trajectories of a dynamical system whose vector field is defined by the birth and death rates. Assuming this dynamical system has a unique attracting fixed point, we analyzed the behavior of these processes for finite K and finite times, “interpolating” between the two limiting regimes just mentioned. In the present work, we are mainly interested in the following question: Observing a realization of the process, can we determine the so-called engineering resilience? To answer this question, we establish two relations which intermingle the resilience, which is a macroscopic quantity defined for the dynamical system, and the fluctuations of the process, which are microscopic quantities. Analogous relations are well known in nonequilibrium statistical mechanics. To exploit these relations, we need to introduce several estimators which we control for times between log K (time scale to converge to the qsd) and exp(K) (time scale of mean time to extinction).

Keywords: birth-and-death process, dynamical system, engineering resilience, quasi- stationary distribution, fluctuation-dissipation relation, empirical estimators.

∗ Email: chazottes@cpht.polytechnique.fr

† Email: collet@cpht.polytechnique.fr

‡ Email: sylvie.meleard@polytechnique.edu

§ Email: smartine@dim.uchile.cl


Contents

1 Introduction and main results
 1.1 Context and setting
 1.2 Main results
 1.3 A “fluctuation-dissipation” approach
 1.4 Standing assumptions
 1.5 A numerical example
 1.6 Organization of the paper
2 Time evolution of moments of the process and moments of the QSD
 2.1 Time evolution of moments starting from anywhere
 2.2 Moment estimates for the qsd
3 Controlling time averages of the estimators
4 Fluctuation and correlation relations
 4.1 Proof of Theorem 1.1
 4.2 Proof of Theorem 1.2
5 Variance estimates for the estimators
A Proof of the two variance estimates
 A.1 Starting from anywhere: proof of Proposition 5.1
 A.2 Starting from the qsd: proof of Corollary 5.2
B Counting the number of births
C Gaussian limit for the rescaled qsd


Titre en français :

Distributions quasi-stationnaires et résilience : que peut-on obtenir des données ?

Résumé en français :

Nous étudions une classe de processus de naissance et mort avec plusieurs espèces dans la situation où l'extinction est certaine et la distribution quasi-stationnaire est unique. Si on fixe un intervalle de temps fini et qu'on normalise les réalisations d'un tel processus par un paramètre d'échelle K, elles deviennent arbitrairement proches, dans la limite K → +∞, des trajectoires d'un certain système dynamique dont le champ de vecteurs est défini à partir des taux de naissance et mort. Quand le système dynamique admet un seul point fixe attractif, nous avons précédemment analysé le comportement du processus pour des valeurs de K finies et pour des temps finis, c'est-à-dire le comportement intermédiaire entre les deux comportements limites évoqués ci-dessus. La question principale qui nous intéresse dans le présent article est la suivante : si on observe une réalisation du processus, pouvons-nous estimer la résilience au sens de l'ingénieur (engineering resilience) ? Pour répondre à cette question, nous démontrons deux relations entremêlant la résilience, qui est une quantité macroscopique définie pour le système dynamique sous-jacent, et les fluctuations du processus, qui sont elles des quantités microscopiques. De tels genres de relations sont bien connues en mécanique statistique hors d'équilibre. Afin d'exploiter ces relations nous introduisons plusieurs estimateurs empiriques que nous parvenons à contrôler pour des temps entre log K, qui est l'échelle de temps pour observer la convergence vers la distribution quasi-stationnaire, et exp(K), qui est l'échelle du temps moyen d'extinction.

Mots clés :

processus de naissance et mort, systèmes dynamiques, résilience, distribution quasi-stationnaire, relation de fluctuation-dissipation, estimateurs empiriques.


1 Introduction and main results

1.1 Context and setting

The ability of an ecosystem to return to its reference state after a perturbation (a stress) is given by its resilience, a concept pioneered by Holling. Resilience has several faces and multiple definitions [5]. In the traditional theoretical setting of dynamical systems, that is, differential equations, one of them is the so-called engineering resilience. It is concerned with what happens in the vicinity of a fixed point (equilibrium state) of the system, and is given by minus the real part of the dominant eigenvalue of the Jacobian matrix evaluated at the fixed point. It can also be defined as the reciprocal of the characteristic return time to the fixed point after a (small) perturbation. In this paper, we are interested in how to determine the engineering resilience from the data.

But which data? The drawback of the notion of engineering resilience is that we do not observe population densities governed by differential equations. Instead, we count individuals which are subject to stochastic fluctuations. Can we nevertheless infer the resilience? The subject of this paper is to show that this is possible in the framework of birth-and-death processes which are, in a sense made precise below, close to the solutions of a corresponding differential equation, at certain time and population size scales.

Let us now describe our framework. We consider a population made of d species interacting with one another. Suppose that the state of the process at time t, which we denote by N^K(t) = (N^K_1(t), ..., N^K_d(t)), is n = (n_1, ..., n_i, ..., n_d) ∈ Z^d_+, where n_i is the number of individuals of the ith species. Then the rate at which the population increases (respectively decreases) by one individual of the jth species is K B_j(n/K) (respectively K D_j(n/K)), where K is a scaling parameter. Under the assumptions we will make, the process goes extinct, i.e., 0 is an absorbing state, with probability one.
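For intuition, such a process can be simulated exactly with a standard Gillespie scheme: in state n, wait an exponential time with parameter equal to the total jump rate, then perform one of the 2d possible jumps with probability proportional to its rate. A minimal sketch follows; the logistic rate functions at the bottom are illustrative placeholders, not the rates studied in this paper.

```python
import random

def gillespie(B, D, n0, K, t_max, rng):
    """Simulate the birth-and-death process with jump rates K*B_j(n/K) (birth of
    one individual of species j) and K*D_j(n/K) (death of one individual), up to
    time t_max. Returns the (time, state) pairs visited; stops upon extinction."""
    t, n = 0.0, list(n0)
    d = len(n0)
    path = [(t, tuple(n))]
    while t < t_max:
        x = [ni / K for ni in n]
        rates = [K * r for r in B(x) + D(x)]  # first d entries: births; last d: deaths
        total = sum(rates)
        if total == 0.0:                      # state 0 is absorbing: extinction
            break
        t += rng.expovariate(total)           # exponential holding time
        u, acc, j = rng.random() * total, 0.0, 0
        while acc + rates[j] < u:             # choose a jump with prob. rate/total
            acc += rates[j]
            j += 1
        if j < d:
            n[j] += 1                         # birth of species j
        else:
            n[j - d] -= 1                     # death of species j - d
        path.append((t, tuple(n)))
    return path

# Illustrative one-species logistic rates: B(x) = x, D(x) = x^2, so x* = 1.
rng = random.Random(1)
path = gillespie(lambda x: [x[0]], lambda x: [x[0] ** 2], [50], K=100, t_max=5.0, rng=rng)
```

Started at n = 50 with K = 100, the realization drifts toward the deterministic equilibrium K·x* = 100 and fluctuates around it, as the law of large numbers below predicts.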

There are two limiting regimes for the behavior of this process. The first one is to fix K and let t tend to infinity, which leads inevitably to extinction. The second one consists in fixing a time horizon and letting K tend to +∞, after having rescaled the process by K. In this limit, the behavior of the rescaled process is governed by a certain differential equation. More precisely, given any 0 < t_H < +∞ and any ε > 0 and x_0 ∈ R^d_+, we have

lim_{K→+∞} P_{⌊Kx_0⌋}( sup_{0 ≤ t ≤ t_H} dist( N^K(t)/K , x(t) ) > ε ) = 0

where dist(·,·) is the Euclidean distance in R^d_+, and x(t) is the solution of the differential equation in R^d_+

dx/dt = B(x) − D(x)

with initial condition x_0. We refer to [4, Chapter 11] for a proof. We use the notations x = (x_1, ..., x_d), B(x) = (B_1(x), ..., B_d(x)), and so on and so forth. We will make further assumptions (see Subsection 1.4) on the birth and death rates to be in the following situation. The vector field

X = B − D

has a unique attracting fixed point x∗ (lying in the interior of R^d_+). We denote by M its differential evaluated at x∗, namely

M = DX(x∗).


We then define the (engineering) resilience as

ρ = − sup{ Re(z) : z ∈ Sp(M) }

where Sp(M) denotes the spectrum (set of eigenvalues) of the matrix M. Under our assumptions, we have ρ > 0.

We can now formulate more precisely the goal of this paper. Given a finite-length realization of the process (N^K(t), t ≤ T), with large but finite K, we want to build an estimator for ρ. To this end, we need a good understanding of the behavior of the Markov process (N^K(t)) in an intermediate regime between the two limiting regimes described above. This was done in a previous work of ours [3], which can be roughly summarized as follows. All states n ≠ 0 are transient and 0 is absorbing, hence the only stationary distribution is the Dirac measure sitting at 0. The mean time to extinction behaves like exp(Θ(K)). (We recall Bachmann-Landau notations below.) If we start in the vicinity of the state n∗ = ⌊Kx∗⌋, that is, if the initial state has its coordinates of size of order K, then either the process wanders around n∗ or it gets absorbed at 0. More precisely, there is a unique quasi-stationary distribution (qsd, for short) ν_K which describes the statistics of the process conditioned to be non-extinct before time t. Without this conditioning, the law of the process at time t is well approximated by a mixture of the Dirac measure at 0 and the qsd ν_K, for times t ∈ [c log K, exp(Θ(K))], where c > 0, in the sense that the total variation distance between them is exponentially small in K, provided that K is large enough. We will rely on these results, which will be recalled precisely later on. We will also need to prove further properties.

1.2 Main results

To estimate the engineering resilience ρ, we will establish a matrix relation involving M. Let μ^K = (μ^K_1, ..., μ^K_d) be the vector of species sizes averaged with respect to ν_K, that is,

μ^K_p = ∫ n_p dν_K(n),  p = 1, ..., d. (1.1)

For each τ ≥ 0, define the matrix

Σ^K_{p,q}(τ) = E_{ν_K}[ ( N^K_p(τ) − μ^K_p )( N^K_q(0) − μ^K_q ) ],  p, q ∈ {1, ..., d}.

In Section 4.1, we will prove the following result.

Theorem 1.1. For all τ ≥ 0 we have

Σ^K(τ) = e^{τM} Σ^K(0) + O(√K). (1.2)

Some comments are in order. If τ is equal to, say, 1/K, then the estimate becomes useless. More generally, if τ is too small then e^{τM} is too close to the identity matrix. Moreover, we will show later on that μ^K and Σ^K(τ) are of order K. Hence the estimate becomes irrelevant if τ becomes proportional to log K. Indeed, without knowing the constant of proportionality, e^{τM} Σ^K(0) can be of the same order as the error term.

Before proceeding further, we recall the following classical Bachmann-Landau notations.

Notations. Given a ∈ R, the symbol O(K^a) stands for any real-valued function f(K) for which there exist C > 0 and K_0 > 0 such that, for any K > K_0, |f(K)| ≤ CK^a. Note in particular that O(1) will always mean a strictly positive constant that we don't want to specify. Sometimes we will also use the symbol Θ(K^a), which stands for any real-valued function f(K) such that there exist C_1, C_2 > 0 and K_0 > 0 such that, for any K > K_0, C_1 K^a ≤ f(K) ≤ C_2 K^a. One can naturally generalize Θ(K^a) to vector-valued functions. For instance, for n ∈ R^d_+ we write n = Θ(K^a) if n_i = Θ(K^a) for i = 1, ..., d.

Relation (1.2) allows us to determine M. Indeed, we have

e^{τM} = Σ^K(τ) Σ^K(0)^{−1} + O(1/√K). (1.3)

This formula suggests that in order to estimate M, we need estimators for Σ^K(0) and Σ^K(τ). Given a finite-length realization (N^K(t), 0 ≤ t ≤ T) up to some time T > 0, we define estimators for μ^K_p and Σ^K_{p,q}(τ), for 0 ≤ τ < T, p, q ∈ {1, ..., d}, K ∈ N, by

S^μ_p(T, K) = (1/T) ∫_0^T N^K_p(s) ds (1.4)

and

S^C_{p,q}(T, τ, K) = (1/(T−τ)) ∫_0^{T−τ} ( N^K_p(s+τ) − S^μ_p(T, K) )( N^K_q(s) − S^μ_q(T, K) ) ds. (1.5)

Under suitable conditions on n, K and T, S^μ(T, K) well approximates μ^K. More precisely, we will prove an estimate of the following form (see Theorem 3.4 for a precise statement)

| E_n[ S^μ_p(T, K) ] − μ^K_p | ≤ C (K + ‖n‖_1) ( (1 + log K)/T + e^{−c(‖n‖_1 ∧ K)} + T e^{−c′K} ) (1.6)

for every n ∈ Z^d_+, p = 1, ..., d, where c, c′, C are positive constants. We use the notation ‖n‖_1 = Σ_{i=1}^d n_i. Let us comment on this bound. Roughly speaking, it can only be useful if T is much smaller than exp(O(1)K) if n is, say, of order K. For instance, suppose that, for K large enough, we want the bound to be Θ(K^{−a}), for some a > 0. One can check that this is possible if n = Θ(K) and T = Θ(K^{a+1} log K). (Note in particular that, in this situation, we have a consistent estimator when K → +∞.) However, when T becomes exp(O(1)K) or larger, we know that E_n[ S^μ_p(T, K) ] ≈ 0, because with high probability, at this time scale the process is absorbed at 0. This is the manifestation of the fact that the only stationary distribution is the Dirac measure at 0. Consistently, our bound becomes very bad in that regime.

An estimate of the same kind holds for S^C(T, τ, K), which well approximates Σ^K(τ) in the appropriate ranges.

Remark 1.1. It is possible to use discrete time instead of continuous time in the above averages. Indeed, the key results (in particular Proposition 3.3) are obtained for discrete times.
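Following this remark, the time averages (1.4)-(1.5) can be computed from a discretely sampled trajectory. A sketch of the two estimators on a list of d-dimensional observations; the tiny alternating sample at the bottom is only there to exercise the formulas.

```python
def s_mu(sample):
    """Discrete-time version of the empirical mean (1.4); sample is a list of d-vectors."""
    d, T = len(sample[0]), len(sample)
    return [sum(n[p] for n in sample) / T for p in range(d)]

def s_cov(sample, lag):
    """Discrete-time version of the empirical lag covariance (1.5)."""
    d = len(sample[0])
    mu = s_mu(sample)
    m = len(sample) - lag
    return [[sum((sample[s + lag][p] - mu[p]) * (sample[s][q] - mu[q])
                 for s in range(m)) / m
             for q in range(d)]
            for p in range(d)]

sample = [(1.0,), (3.0,), (1.0,), (3.0,)]
mu = s_mu(sample)        # [2.0]
c0 = s_cov(sample, 0)    # [[1.0]]  (empirical variance)
c1 = s_cov(sample, 1)    # [[-1.0]] (the sample alternates, hence the negative sign)
```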


We can now define the empirical matrix M_emp(T, τ, K) by

e^{τ M_emp(T, τ, K)} = S^C(T, τ, K) S^C(T, 0, K)^{−1}.

We will see later on that, in appropriate regimes, S^C(T, 0, K) is near Σ^K(0) and S^C(T, τ, K) is near Σ^K(τ) (see Propositions 5.4 and 5.6). The matrix Σ^K(0) is invertible, being the covariance matrix of a non-constant vector, and is Θ(K) (see Proposition 2.9). Then (1.2) implies that Σ^K(τ) is invertible, and the same holds for S^C(T, τ, K). These remarks imply that the matrix M_emp is well defined.

We define the empirical resilience by

ρ_emp(T, τ, K) = − sup{ Re(z) : z ∈ Sp(M_emp(T, τ, K)) }.

Our main result (Theorem 5.7) is then the following.

Theorem. For τ = Θ(1), n = Θ(K) (initial state), 0 < T ≪ exp(Θ(1)K), and K large enough, we have

| ρ_emp(T, τ, K) − ρ | ≤ O(1) ( K²/√T + 1/√K )

with a probability larger than 1 − 1/K. In particular, if T ≥ K^5, we have

| ρ_emp(T, τ, K) − ρ | ≤ O(1)/√K.

Several comments are in order. The dependence on the initial state n is somewhat hidden and involved in the fact that the estimates hold “with a probability larger than 1 − 1/K”. Indeed, the estimate of this probability results from the Chebyshev inequality and variance estimates in which the process is started in n. What the symbol ≪ precisely means is not mathematically defined. It means that we need to consider T “much smaller than something exponentially big in K”. Indeed, since we do not control explicitly the various constants appearing in exponential terms in K, we have to consider T which varies on a scale smaller than exp(Θ(1)K), for instance exp(Θ(1)√K). The reader is invited to step through the proof of Theorem 5.7 for the more precise, but cumbersome, bound we obtain.

1.3 A “fluctuation-dissipation” approach

The above estimator for the engineering resilience, based on (1.3), is valid for any d. In the case d = 1 (only one species), we have another, simpler estimator based on a “fluctuation-dissipation relation”. This relation is in fact true for any d and of independent interest. Let D^K be the d × d diagonal matrix given by

D^K_{p,p} = K B_p(x∗) = K D_p(x∗).

We have the following result. We write Σ^K instead of Σ^K(0), and the transpose of a matrix M is denoted by M^⊤.

Theorem 1.2. We have

M Σ^K + Σ^K M^⊤ + 2 D^K = O(√K). (1.7)


This relation is proved in Section 4.2. For background on fluctuation-dissipation relations in statistical physics, we refer to [7, Sections 2-3]. Note that the matrix Σ^K is symmetric, but in general the matrix M is not (see [3]). Note also that each term in the left-hand side of (1.7) is of order K, as we will see below.

If Σ^K and D^K are known, the matrix M is not uniquely determined, except for d = 1 (see for example [8]). For d = 1, (1.7) easily gives the resilience, since it becomes a scalar equation:

ρ = K( B(x∗) + D(x∗) ) / (2Σ^K) + O(1/√K).

Remark 1.2. The quantity K(B(x∗) + D(x∗)) = 2KB(x∗) is the average total jump rate Kν_K( B(n/K) + D(n/K) ) up to O(1) terms. This follows from a Taylor expansion of B(n/K) + D(n/K) around x∗, Theorem 2.6 and Proposition 2.7.

In the case d = 1, an estimator for D^K is

S^D(T, K) = (1/T) × (number of births up to time T). (1.8)

In Section 5, we establish a bound for

| E_n[ S^D(T, K) ] − KB(x∗) |

which depends on T, K and ‖n‖_1, and is small in the relevant regimes. The estimator we use for Σ^K is

S^Σ(T, K) = (1/T) ∫_0^T ( N^K(s) − S^μ(T, K) )² ds. (1.9)

Again, we can control how well this estimator approximates Σ^K. This provides another estimator for ρ, with a controlled error.

1.4 Standing assumptions

The two (regular) vector fields B(x) and D(x) are given in R d + . We assume that their components have second partial derivatives which are polynomially bounded.

Obviously, we suppose that B_j(x) ≥ 0 and D_j(x) ≥ 0 for all j = 1, ..., d and x ∈ R^d_+. A dynamical system in R^d_+ is defined by the vector field X(x) = B(x) − D(x), namely

dx/dt = B(x) − D(x) = X(x).

For x ∈ R^d_+, we use the following standard norms:

‖x‖_1 = Σ_{j=1}^d x_j,  ‖x‖_2 = ( Σ_{j=1}^d x_j² )^{1/2}.

We now state our hypotheses.

H.1 The vector fields B and D vanish only at 0.

H.2 There exists x∗ belonging to the interior of R^d_+ (fixed point of X) such that

B(x∗) − D(x∗) = X(x∗) = 0.

H.3 Attracting fixed point: there exist β > 0 and R > 0 such that ‖x∗‖_2 < R, and for all x ∈ R^d_+ with ‖x‖_2 < R,

⟨X(x), x − x∗⟩ ≤ −β ‖x‖_2 ‖x − x∗‖_2². (1.10)

H.4 The fixed point 0 of the vector field X is repelling (locally unstable). Moreover, on the boundary of R^d_+, the vector field X points toward the interior (except at 0).

H.5 Define

B̂(y) = sup_{‖x‖_1 = y} Σ_{j=1}^d B_j(x),  D̂(y) = inf_{‖x‖_1 = y} Σ_{j=1}^d D_j(x)

and for y > 0, let

F(y) = B̂(y)/D̂(y).

We assume that there exists 0 < L < R such that sup_{y > L} F(y) < 1/2 and lim_{y→+∞} F(y) = 0.

H.6 There exists y_0 > 0 such that ∫_{y_0}^{+∞} D̂(y)^{−1} dy < +∞ and y ↦ D̂(y) is increasing on [y_0, +∞).

H.7 There exists ξ > 0 such that

inf_{x ∈ R^d_+} inf_{1 ≤ j ≤ d} D_j(x) / sup_{1 ≤ ℓ ≤ d} x_ℓ > ξ. (H7)

H.8 Finally, we assume that

inf_{1 ≤ j ≤ d} ∂_{x_j} B_j(0) > 0. (H8)

(By ∂_{x_j} we mean the partial derivative with respect to x_j.)

Assumptions H.5 and H.6 ensure that the time for “coming down from infinity” for the dynamical system is finite. Together with H.3, this also implies that x∗ is a globally attracting stable fixed point. More comments on these assumptions can be found in [3].

1.5 A numerical example

We consider the two-dimensional vector fields

B(x_1, x_2) = ( a x_1 + b x_2 , e x_1 + f x_2 )

and

D(x_1, x_2) = ( x_1 (c x_1 + d x_2) , x_2 (g x_1 + h x_2) )

where all the coefficients are positive. This is a model of competition between two species of Lotka-Volterra type. We have taken

a = 0.4569, b = 0.2959, e = 0.5920, f = 0.6449
c = 0.9263, d = 0.9157, g = 0.9971, h = 0.2905.


Assumptions H.1 and H.4 are easily verified numerically. Assumptions H.5 and H.6 are satisfied because B̂(y) ≤ (a + b + e + f)y and D̂(y) ≥ (c ∧ h)y²/4. Concerning H.2, we checked numerically that there is a unique fixed point inside the positive quadrant, namely x∗ = (0.3567, 1.4855). It remains to check H.3, namely that

−β = sup{ R(x) : x ∈ R²_+ } < 0, where R(x) = ⟨X(x), x − x∗⟩ / ( ‖x‖_2 ‖x − x∗‖_2² ).

We first checked that the numerator N(x) = ⟨X(x), x − x∗⟩ is negative and vanishes only at 0 and x∗. It is easy to check that N(x) < 0 for ‖x‖_2 large enough. We have verified numerically that the only solutions of the equations ∂_{x_1}N = ∂_{x_2}N = 0 in the closed positive quadrant are x∗ and z = (0.1739, 0.4361), with N(z) = −0.2852, thus this is a negative local minimum. This implies that N(x) < 0 in the closed positive quadrant, except at 0 and x∗ where it vanishes. This implies that R ≤ 0 in the closed positive quadrant. It is easy to check that

lim sup_{‖x‖_2 → +∞} R(x) ≤ −(c ∧ h)/2.

This implies that R < 0 except perhaps at 0 and x∗. Near 0 we have, by Taylor expansion,

R(x) = − ⟨DX(0)x, x∗⟩ / ( ‖x‖_2 ‖x∗‖_2² ) (1 + O(‖x‖_2)) = − ⟨x, DB(0)^⊤ x∗⟩ / ( ‖x‖_2 ‖x∗‖_2² ) (1 + O(‖x‖_2))

and, since the vector DB(0)^⊤ x∗ has positive components, there exists ϱ > 0 such that for all x ∈ R²_+

⟨x, DB(0)^⊤ x∗⟩ ≥ ϱ ‖x‖_2.

If y = x − x∗ is small, we have by Taylor expansion (since X(x∗) = 0)

R(x) = ⟨My, y⟩ / ( ‖x∗‖_2 ‖y‖_2² ) (1 + O(‖y‖_2)) = ⟨y, (1/2)(M^⊤ + M)y⟩ / ( ‖x∗‖_2 ‖y‖_2² ) (1 + O(‖y‖_2)).

One can check numerically that the two real eigenvalues of the symmetric matrix M^⊤ + M are strictly negative, the largest being numerically equal to −0.786. This completes the verification of hypothesis H.3.

Illustrating standard experiments on populations of cells or bacteria, we have chosen K = 10^5 and simulated a unique realization of the process with T = 100, which contains about 5·10^7 jumps (cell divisions or deaths). The resilience computed from the vector field is numerically equal to 0.547. We have computed ρ_emp(100, 1, 10^5). The relative error, that is |ρ_emp(100, 1, 10^5) − ρ|/ρ, is equal to 0.022.

Note that the situation we are interested in is completely different from the standard statistical approach, where one can repeat the experiments.
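The deterministic quantities in this example are easy to reproduce: a Newton iteration locates the interior fixed point, and the resilience is minus the largest eigenvalue of the Jacobian there. A sketch, using plain 2 × 2 linear algebra and no libraries:

```python
import math

a, b, e, f = 0.4569, 0.2959, 0.5920, 0.6449
c, d, g, h = 0.9263, 0.9157, 0.9971, 0.2905

def X(x1, x2):
    """Vector field X = B - D of the two-species competition model."""
    return (a * x1 + b * x2 - x1 * (c * x1 + d * x2),
            e * x1 + f * x2 - x2 * (g * x1 + h * x2))

def jac(x1, x2):
    """Differential DX(x)."""
    return [[a - 2 * c * x1 - d * x2, b - d * x1],
            [e - g * x2, f - g * x1 - 2 * h * x2]]

# Newton iteration for the interior fixed point x*.
x1, x2 = 0.5, 1.0
for _ in range(50):
    F1, F2 = X(x1, x2)
    J = jac(x1, x2)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    x1 -= (J[1][1] * F1 - J[0][1] * F2) / det
    x2 -= (J[0][0] * F2 - J[1][0] * F1) / det

# Resilience: minus the largest eigenvalue of M = DX(x*) (both are real here).
M = jac(x1, x2)
tr = M[0][0] + M[1][1]
dt = M[0][0] * M[1][1] - M[0][1] * M[1][0]
rho = -(tr + math.sqrt(tr * tr - 4 * dt)) / 2
```

This recovers x∗ ≈ (0.3567, 1.4855) and ρ ≈ 0.547, matching the values quoted above.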


1.6 Organization of the paper

In Section 2, we will study the time evolution of the moments of the process and we will prove moment estimates for the qsd. In Section 3, we will obtain control on the large time behavior of averages for the process. In Section 4, we will prove the relations (1.2) and (1.7). In Section 5, we will apply these relations to obtain approximate expressions of the engineering resilience in terms of the covariance matrices for the qsd. From the results of Section 3, we will deduce variance bounds for the estimators (1.4), (1.5) and (1.8), starting either in the qsd or from an initial condition of order K.

2 Time evolution of moments of the process and moments of the QSD

2.1 Time evolution of moments starting from anywhere

The generator L_K of the birth-and-death process N^K = (N^K(t), t ≥ 0) is defined by

L_K f(n) = K Σ_{ℓ=1}^d B_ℓ(n/K) [ f(n + e^{(ℓ)}) − f(n) ] + K Σ_{ℓ=1}^d D_ℓ(n/K) [ f(n − e^{(ℓ)}) − f(n) ] (2.1)

where e^{(ℓ)} = (0, ..., 0, 1, 0, ..., 0), the 1 being at the ℓth position, and f : Z^d_+ → R is a function with bounded support. We denote by (S^K_t, t ≥ 0) the semigroup of the process N^K acting on bounded functions, that is, for f : Z^d_+ → R, we have

S^K_t f(n) = E[ f(N^K(t)) | N^K(0) = n ] = E_n[ f(N^K(t)) ].

For A > 1, let

T_A = inf{ t > 0 : ‖N^K(t)‖_1 > A }. (2.2)

Notice that we will use either ‖·‖_1 or ‖·‖_2. They are of course equivalent, but one can be more convenient than the other, depending on the context. We have the following result.
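The generator (2.1) is straightforward to evaluate numerically. As a sanity check, applied to a coordinate function f(n) = n_p it returns K(B_p − D_p)(n/K) = K X_p(n/K), the drift of the limiting vector field, consistent with the law of large numbers of Subsection 1.1. A sketch with illustrative logistic rates (d = 1):

```python
def generator(B, D, K, f, n):
    """Evaluate (L_K f)(n) for the generator (2.1); n is a d-tuple of integers."""
    d = len(n)
    x = [ni / K for ni in n]
    out = 0.0
    for l in range(d):
        up = tuple(ni + (1 if i == l else 0) for i, ni in enumerate(n))
        down = tuple(ni - (1 if i == l else 0) for i, ni in enumerate(n))
        out += K * B(x)[l] * (f(up) - f(n)) + K * D(x)[l] * (f(down) - f(n))
    return out

# d = 1, logistic rates B(x) = x, D(x) = x^2; at n = 50, K = 100:
# (L_K id)(n) = K * (0.5 - 0.25) = 25, the macroscopic drift K * X(n/K).
drift = generator(lambda x: [x[0]], lambda x: [x[0] ** 2], 100, lambda m: m[0], (50,))
```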

Theorem 2.1. There exists a constant C_{(2.1)} > 0 such that for K large enough, the operator S^K_1 extends to exponentially bounded functions and

sup_{n ∈ Z^d_+} S^K_1 e^{‖·‖_1}(n) ≤ e^{C_{(2.1)} K}.

Proof. Introduce the function G_K defined on [y_0, +∞) by

G_K(y) = ∫_y^{+∞} dz/D̂(z) + 1/( K D̂(y) ).

Assumption H.6 implies that G_K is well defined and decreasing on [y_0, +∞). We can define its inverse function on (0, s_0] for s_0 > 0 small enough (independent of K). Take 0 < η ≤ s_0 ∧ (1 − e^{−1})/4. Then there is a unique positive function y_K defined by

y_K(s) = G_K^{−1}(ηs),  s ∈ (0, 1]. (2.3)

Note that y_K(s) ≥ y_0 and lim_{s↓0} y_K(s) = +∞. Let

φ_K(s) = e^{−K y_K(s)} / ( K D̂(y_K(s)) ).

Note that lim_{s↓0} φ_K(s) = 0.

Using the Lipschitz continuity of D̂ (and then its differentiability almost everywhere) and (2.3), we obtain

φ̇_K(s) = (dφ_K/ds)(s) = −( e^{−K y_K(s)} / D̂(y_K(s)) + e^{−K y_K(s)} D̂′(y_K(s)) / ( K D̂(y_K(s))² ) ) (dy_K/ds)(s) = η e^{−K y_K(s)}.

(s) . We now consider the function

f K (t, n) = ϕ K (t) e knk

1

to which we apply Itˆ o’s formula at time t ∧ T A . We get

E n

h ϕ K t ∧ T A

e kN

K

(t∧T A )k

1

i

= E n

"

Z t∧T A

0

t f K + L K f K

(s, N K (s)) ds

# . We have

∂_t f_K(t, n) + L_K f_K(t, n) = φ̇_K(t) e^{‖n‖_1} + K φ_K(t) e^{‖n‖_1} ( (e − 1) Σ_{ℓ=1}^d B_ℓ(n/K) + (e^{−1} − 1) Σ_{ℓ=1}^d D_ℓ(n/K) ).

Note that

∂_t f_K(t, n) + L_K f_K(t, n) ≤ e^{‖n‖_1} ( φ̇_K(t) + K φ_K(t) [ (e − 1) B̂(‖n‖_1/K) − (1 − e^{−1}) D̂(‖n‖_1/K) ] )
≤ e^{‖n‖_1} ( φ̇_K(t) − K φ_K(t) (1 − e^{−1}) D̂(‖n‖_1/K) [ 1 − e F(‖n‖_1/K) ] ).

It follows from H.5 that there exists a number ζ > y_0 such that if y > ζ, then F(y) < (2e)^{−1}. If ‖n‖_1 < ζK we get

∂_t f_K(t, n) + L_K f_K(t, n) ≤ O(1) e^{ζK} ( φ̇_K(t) + K φ_K(t) ).

For ‖n‖_1 ≥ K(ζ ∨ y_K(t)) we have

∂_t f_K(t, n) + L_K f_K(t, n) ≤ 0

since φ̇_K(t) = ηK D̂(y_K(t)) φ_K(t) and D̂(‖n‖_1/K) ≥ D̂(y_K(t)). Finally, for ζK ≤ ‖n‖_1 < K y_K(t) we get

∂_t f_K(t, n) + L_K f_K(t, n) ≤ e^{K y_K(t)} φ̇_K(t) = η.

We deduce that

E_n[ φ_K(1 ∧ T_A) e^{‖N^K(1 ∧ T_A)‖_1} ] ≤ O(1) e^{ζK}.

The result follows by letting A tend to infinity and by monotonicity.


We deduce moment estimates for the process which are uniform in the starting state, and in time, for times larger than 1.

Corollary 2.2. For all t ≥ 1, the semigroup (S^K_t) maps functions of polynomially bounded modulus into bounded functions. In particular, for all q ∈ N, we have

sup_{t ≥ 1} sup_{n ∈ Z^d_+} E_n[ ‖N^K(t)‖_1^q ] ≤ q^q e^{−q} K^q e^{C_{(2.1)}}. (2.4)

Proof. We have

E_n[ ‖N^K(1)‖_1^q ] = K^q E_n[ ( ‖N^K(1)‖_1^q / K^q ) e^{−‖N^K(1)‖_1/K} e^{‖N^K(1)‖_1/K} ] ≤ K^q q^q e^{−q} E_n[ e^{‖N^K(1)‖_1/K} ]

since for all x ≥ 0, x^q e^{−x} ≤ q^q e^{−q}. Inequality (2.4) follows from Hölder's inequality and Theorem 2.1. Let us now consider t > 1. From the Markov property and by using the previous inequality, we deduce that

E_n[ ‖N^K(t)‖_1^q ] = E_n[ E_{N^K(t−1)}[ ‖N^K(1)‖_1^q ] ] ≤ q^q e^{−q} K^q e^{C_{(2.1)}}.

The proof is finished.

For times t less than 1, the moment estimates depend on the initial state.

Proposition 2.3. For each integer q, there exists a constant c_q > 0 such that for all K > 1, t ≥ 0 and n ∈ Z^d_+,

E_n[ ‖N^K(t)‖_2^q ] ≤ c_q ( K^q + ‖n‖_2^q 1_{t<1} ).

Proof. We only have to study the case t < 1, the other case being covered by (2.4). We prove the result for q even, namely q = 2q′. The result for q odd follows from the Cauchy-Schwarz inequality. Letting

f_{q′}(n) = ‖n‖_2^{2q′}

we have

L_K f_{q′}(n) = K Σ_{ℓ=1}^d B_ℓ(n/K) [ ( ‖n‖_2² + 2n_ℓ + 1 )^{q′} − ‖n‖_2^{2q′} ] + K Σ_{ℓ=1}^d D_ℓ(n/K) [ ( ‖n‖_2² − 2n_ℓ + 1 )^{q′} − ‖n‖_2^{2q′} ].

Using H.5 and the equivalence of the norms, we see that there exists a constant c_{q′} > 0 such that if ‖n‖_2 > c_{q′} K then

L_K f_{q′}(n) < 0.

Moreover, we can take c_{q′} large enough such that for all n

L_K f_{q′}(n) ≤ c_{q′} K^{2q′}.


Applying Itô's formula to f_{q′} we get, as in the proof of Theorem 2.1,

E_n[ ‖N^K(t ∧ T_A)‖_2^{2q′} ] ≤ ‖n‖_2^{2q′} + E_n[ ∫_0^{t ∧ T_A} c_{q′} K^{2q′} ds ] ≤ ‖n‖_2^{2q′} + t c_{q′} K^{2q′}.

(Recall that T_A is defined in (2.2).) The result follows by letting A tend to infinity.

2.2 Moment estimates for the qsd

Let us first recall (cf. [3]) that, under the assumptions of Section 1.4, there exists a unique qsd ν_K with support Z^d_+ \ {0}. Further, starting from the qsd, the extinction time is distributed according to an exponential law with parameter λ_0(K) satisfying (Theorem 3.2 in [3])

e^{−d_1 K} ≤ λ_0(K) ≤ e^{−d_2 K} (2.5)

where d_1 > d_2 > 0 are constants independent of K. Recall also that for all t > 0,

P_{ν_K}( N^K(t) ∈ · , T_0 > t ) = e^{−λ_0(K) t} ν_K(·) (2.6)

where

T_0 = inf{ t > 0 : N^K(t) = 0 }.

Finally, for all f in the domain of the generator L_K,

ν_K(L_K f) = −λ_0(K) ν_K(f) (2.7)

with the notation

ν_K(f) = ∫ f(n) dν_K(n).

We use several notations from [3] that we now recall. Let n∗ = ⌊Kx∗⌋. For x ∈ R^d_+ and r > 0, B(x, r) is the ball of center x and radius r. We consider the sets

∆ = B( n∗, ρ_{(4.2)} √K ),  D = B( n∗, min_j n∗_j / 2 ) ∩ Z^d_+ (2.8)

where ρ_{(4.2)} > 0 is a constant defined in [3, Corollary 4.2]. Note that since n∗ is of order K, we have ∆ ⊂ D for K large enough. The first entrance time in ∆ (resp. D) will be denoted by T_∆ (resp. T_D).

We first prove that the support of the qsd is, for large K, almost included in D. (This will be important to control moments later on.)

Proposition 2.4. There exists a constant c_{(2.4)} > 0 such that for all K large enough

ν_K(D^c) ≤ e^{−c_{(2.4)} K}.

Proof. We first recall two results from [3]. From Lemma 5.1 in [3], there exist γ > 0 and δ ∈ (0, 1) such that for all K large enough

sup_{n ∈ ∆^c \ {0}} P_n( T_∆ > γ log K, T_0 > T_∆ ) ≤ δ. (2.9)

By Sublemma 5.8 in [3], there exist two constants C > 0 and c > 0 such that for all K large enough, and for all t > 0,

sup_{n ∈ ∆} P_n( T_{D^c} < t ) ≤ C(1 + t) e^{−cK}. (2.10)

Now, for q ∈ N \ {0} define

t_q = qγ log K.

We will first estimate sup_n P_n( N^K(t_q) ∈ D^c, T_0 > t_q ). Note that N^K(t_q) ∈ D^c implies T_{D^c} ≤ t_q. We distinguish two cases according to whether n ∈ ∆ or n ∈ ∆^c \ {0}.

Let n ∈ ∆. It follows from (2.10) that

P_n( N^K(t_q) ∈ D^c ) ≤ C(1 + t_q) e^{−cK}.

Now let n ∈ ∆^c \ {0}. We have

P_n( N^K(t_q) ∈ D^c \ {0} ) = P_n( N^K(t_q) ∈ D^c \ {0}, T_∆ ≤ t_q ) + P_n( N^K(t_q) ∈ D^c \ {0}, T_∆ > t_q ).

Using the strong Markov property at time T_∆ and (2.10) we obtain

P_n( N^K(t_q) ∈ D^c \ {0}, T_∆ ≤ t_q ) = E_n[ 1_{T_∆ ≤ t_q} P_{N^K(T_∆)}( N^K(t_q − T_∆) ∈ D^c \ {0} ) ] ≤ C(1 + t_q) e^{−cK}.

We bound the second term recursively in q:

P_n( T_∆ > t_q, T_0 > T_∆ ) = E_n[ 1_{T_∆ > t_{q−1}} 1_{T_0 > T_∆} P_{N^K(t_{q−1})}( T_∆ > t_1, T_0 > T_∆ ) ] ≤ δ sup_{n ∈ ∆^c \ {0}} P_n( T_∆ > t_{q−1}, T_0 > T_∆ )

where we used the strong Markov property at time t_{q−1} and (2.9). This implies

sup_{n ∈ ∆^c \ {0}} P_n( N^K(t_q) ∈ D^c \ {0}, T_∆ > t_q ) ≤ sup_{n ∈ ∆^c \ {0}} P_n( T_∆ > t_q, T_0 > T_∆ ) ≤ δ^q.

Therefore

sup_{n ≠ 0} P_n( N^K(t_q) ∈ D^c \ {0} ) ≤ C(1 + t_q) e^{−cK} + δ^q.

Taking q = ⌊K⌋ we conclude that there exists a constant c′ > 0 such that for K large enough

sup_{n ≠ 0} P_n( N^K(t_{⌊K⌋}) ∈ D^c \ {0} ) ≤ e^{−c′K}.

This implies

P_{ν_K}( N^K(t_{⌊K⌋}) ∈ D^c, T_0 > t_{⌊K⌋} ) ≤ e^{−c′K}

but by (2.6)

P_{ν_K}( N^K(t_{⌊K⌋}) ∈ D^c, T_0 > t_{⌊K⌋} ) = e^{−λ_0(K) t_{⌊K⌋}} ν_K(D^c)

and the result follows from (2.5).


Corollary 2.5. For each q ∈ N, there exists C_q > 0 such that for all K large enough

∫_{D^c} ‖n‖_1^q dν_K(n) ≤ C_q K^q e^{−c_{(2.4)} K}

and

∫ ‖n‖_1^q dν_K(n) ≤ C_q K^q.

Proof. It follows at once from (2.6) (at time 1) and Theorem 2.1 that

$$\int e^{\|n\|_1}\,\mathrm{d}\nu_K(n)\le e^{\lambda_0(K)}\,e^{C_{(2.1)}K}\le 2\,e^{C_{(2.1)}K} \tag{2.11}$$
for $K$ large enough. We have
$$\int_{D^c}\|n\|_1^q\,\mathrm{d}\nu_K(n)=K^q\int_{D^c}\Big(\frac{\|n\|_1}{K}\Big)^q e^{-\frac{\|n\|_1}{K}}\,e^{\frac{\|n\|_1}{K}}\,\mathrm{d}\nu_K(n)\le K^qq^qe^{-q}\int e^{\frac{\|n\|_1}{K}}\,\mathbf{1}_{D^c}(n)\,\mathrm{d}\nu_K(n).$$
We use Hölder's inequality to get
$$\int_{D^c}\|n\|_1^q\,\mathrm{d}\nu_K(n)\le K^qq^qe^{-q}\left(\int e^{\|n\|_1}\,\mathrm{d}\nu_K(n)\right)^{\frac1K}\left(\int\mathbf{1}_{D^c}(n)\,\mathrm{d}\nu_K(n)\right)^{1-\frac1K}.$$
The first result follows from (2.11) and Proposition 2.4. The second estimate follows from the first one, and the bound $\sup_{n\in D}\|n\|_1\le O(1)K$.

We now estimate centered moments.

Theorem 2.6. For each $q\in\mathbb{Z}_+$, there exists $C_q>0$ such that for all $K$ large enough
$$\int\|n-Kx^*\|_2^{2q}\,\mathrm{d}\nu_K(n)\le C_qK^q.$$

Proof. The proof consists in a recursion over $q$. The bound is trivial for $q=0$. For $q\in\mathbb{N}$ define the function
$$f_q(n)=\|n-Kx^*\|_2^{2q}\,\mathbf{1}_{D_1}(n)$$
where
$$D_1=B\Big(Kx^*,\ \frac{2K}{3}\min_jx^*_j\Big)\cap\mathbb{Z}^d_+.$$
Recall that $e^{(j)}$ is the vector with $1$ at the $j$th coordinate and $0$ elsewhere. From the trivial identity
$$\|n-Kx^*\pm e^{(j)}\|_2^2=\|n-Kx^*\|_2^2\pm2(n_j-Kx^*_j)+1 \tag{2.12}$$
it follows that
$$\Big|\|n-Kx^*\pm e^{(j)}\|_2^{2q}-\|n-Kx^*\|_2^{2q}\mp2q(n_j-Kx^*_j)\,\|n-Kx^*\|_2^{2q-2}\Big|\le 3^q2^q\big(1+\|n-Kx^*\|_2^{2q-2}\big).$$

Indeed, applying the trinomial expansion to (2.12), we obtain
$$\Big|\|n-Kx^*\pm e^{(j)}\|_2^{2q}-\|n-Kx^*\|_2^{2q}\mp2q(n_j-Kx^*_j)\,\|n-Kx^*\|_2^{2q-2}\Big|\le q!\sum_{\substack{p_1\le q-2\\ p_1+p_2+p_3=q}}\frac{\|n-Kx^*\|_2^{2p_1}\big(2\|n-Kx^*\|_2\big)^{p_2}}{p_1!\,p_2!\,p_3!}+q\,\|n-Kx^*\|_2^{2q-2}.$$
Observe that if $p_1\le q-2$ and $p_1+p_2+p_3=q$, then $2p_1+p_2=p_1+q-p_3\le 2q-2-p_3\le 2q-2$, since $p_3\ge0$. This implies that
$$\|n-Kx^*\|_2^{2p_1}\big(2\|n-Kx^*\|_2\big)^{p_2}\le 2^q\big(1+\|n-Kx^*\|_2^{2q-2}\big).$$
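Both the identity (2.12) and the bound just derived can be tested in exact integer arithmetic. The sketch below (our illustration, not part of the proof) draws random integer vectors playing the role of $n-Kx^*$ and checks the two relations for small $q$:

```python
import random

random.seed(0)

for _ in range(1000):
    d = random.randint(1, 6)
    q = random.randint(1, 5)
    v = [random.randint(-5, 5) for _ in range(d)]  # plays the role of n - Kx*
    j = random.randrange(d)
    A = sum(c * c for c in v)  # squared euclidean norm ||v||_2^2

    for sign in (+1, -1):
        # identity (2.12): ||v +- e^(j)||^2 = ||v||^2 +- 2 v_j + 1
        A_shift = sum((c + sign * (i == j)) ** 2 for i, c in enumerate(v))
        assert A_shift == A + sign * 2 * v[j] + 1

        # |  ||v +- e^(j)||^(2q) - ||v||^(2q) -+ 2q v_j ||v||^(2q-2)  |
        #                                <= 3^q 2^q (1 + ||v||^(2q-2))
        lhs = abs(A_shift ** q - A ** q - sign * 2 * q * v[j] * A ** (q - 1))
        assert lhs <= 3 ** q * 2 ** q * (1 + A ** (q - 1))

print("identity (2.12) and the trinomial bound hold on all samples")
```

Since Python integers are exact, no floating-point tolerance is needed here.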

It follows that
$$L_Kf_q(n)=2qK\sum_{j=1}^dX_j\Big(\frac nK\Big)(n_j-Kx^*_j)\,\|n-Kx^*\|_2^{2q-2}\,\mathbf{1}_{D_1}(n)+R_q(n) \tag{2.13}$$
where
$$|R_q(n)|\le O(1)\Big(K6^q\big(1+\|n-Kx^*\|_2^{2q-2}\big)\,\mathbf{1}_{D_1}(n)+qK^{2q+1}\,\mathbf{1}_{D^c}(n)\Big). \tag{2.14}$$
To get this bound, we used the fact that
$$\sup_{j=1,\dots,d}\big|\mathbf{1}_{D_1}(n\pm e^{(j)})-\mathbf{1}_{D_1}(n)\big|\le\mathbf{1}_{D^c}(n).$$

Using (1.10) we get
$$K\sum_{j=1}^dX_j\Big(\frac nK\Big)(n_j-Kx^*_j)\,\|n-Kx^*\|_2^{2q-2}\,\mathbf{1}_{D_1}(n)\le-\beta_0\,\|n-Kx^*\|_2^{2q}\,\mathbf{1}_{D_1}(n)=-\beta_0f_q(n) \tag{2.15}$$
where
$$\beta_0=\frac{\beta}{3}\min_jx^*_j.$$

Integrating equation (2.13) with respect to $\nu_K$ and using (2.7), (2.14), (2.15) and Proposition 2.4, we obtain
$$\big(2q\beta_0-\lambda_0(K)\big)\,\nu_K(f_q)\le O(1)\,K6^q\big(1+\nu_K(f_{q-1})\big)+6^qK^{2q+1}e^{-c_{(2.4)}K}.$$
Observing that $\nu_K(f_0)\le1$, it follows by recursion over $q$ that, for each integer $q$, there exists $C'_q>0$ such that, for all $K$ large enough, $\nu_K(f_q)\le C'_qK^q$. Finally we have
$$\int\|n-Kx^*\|_2^{2q}\,\mathrm{d}\nu_K(n)=\nu_K(f_q)+\int\|n-Kx^*\|_2^{2q}\,\mathbf{1}_{D_1^c}(n)\,\mathrm{d}\nu_K(n)\le\nu_K(f_q)+\int\|n-Kx^*\|_2^{2q}\,\mathbf{1}_{D^c}(n)\,\mathrm{d}\nu_K(n)$$
since $D\subset D_1$. The result follows using the previous estimate and Corollary 2.5.

The next result gives a more precise estimate for the average of $n$ (instead of an error of order $\sqrt K$).

Proposition 2.7. We have
$$\mu_K-Kx^*=O(1)$$
where $\mu_K$ is defined in (1.1). Moreover, since $\|\bar n-Kx^*\|_2=O(1)$ (here $\bar n$ denotes the mean of $\nu_K$), we have
$$\mu_K-\bar n=O(1). \tag{2.16}$$


Proof. Define the functions
$$g_j(n)=\langle n-Kx^*,e^{(j)}\rangle,\qquad 1\le j\le d.$$
By Taylor expansion and the polynomial bounds on $B$ and $D$ we get
$$L_Kg_j(n)=K\big(B_j(n/K)-D_j(n/K)\big)=\sum_{m=1}^d\big(\partial_mB_j(x^*)-\partial_mD_j(x^*)\big)\,g_m(n)\,\mathbf{1}_D(n)+O(1)\,\frac{\|n-Kx^*\|_2^2}{K}\,\mathbf{1}_D(n)+O(1)\big(K^p+\|n\|_2^p\big)\,\mathbf{1}_{D^c}(n)$$
for some positive integer $p$ independent of $K$. Using the Cauchy–Schwarz inequality, identity (2.7), Corollary 2.5 and Proposition 2.4 we get
$$\int\big(1+\|n\|_2^p\big)\,\mathbf{1}_{D^c}(n)\,\mathrm{d}\nu_K(n)=o(1).$$
From Proposition 2.4, Theorem 2.6 and (2.5) we get
$$\sum_{m=1}^d\big(\partial_mB_j(x^*)-\partial_mD_j(x^*)\big)\,\nu_K(g_m)=O(1).$$
The result follows from the invertibility of the $d\times d$ matrix $\big(\partial_mB_j(x^*)-\partial_mD_j(x^*)\big)$, which follows from H.3. The other inequalities follow immediately.

Corollary 2.8. For all $K>0$, we have
$$\|\Sigma_K\|\le\int\|n-\mu_K\|_2^2\,\mathrm{d}\nu_K(n)=\int\|n-Kx^*\|_2^2\,\mathrm{d}\nu_K(n)+O(1)\le O(1)K.$$
Proof. Combine Proposition 2.7 and Theorem 2.6.

We now show that $\Sigma_K$ is indeed of order $K$.

Proposition 2.9. There exist two strictly positive constants $c_{(2.9)}$ and $c'_{(2.9)}$ such that for all $K$ large enough, the matrix $\Sigma_K$ satisfies
$$\Sigma_K\ge c_{(2.9)}K\,\mathrm{Id}$$
for the order among positive definite matrices, $\mathrm{Id}$ being the identity matrix, and, in particular,
$$\int\|n-\mu_K\|_2^2\,\mathrm{d}\nu_K(n)\ge c'_{(2.9)}K.$$

Proof. We denote by $\widetilde\Sigma_K$ the positive definite matrix
$$\widetilde\Sigma_K^{p,q}=\int\big(n_p-\bar n_p\big)\big(n_q-\bar n_q\big)\,\mathrm{d}\nu_K(n).$$
By (2.16) we have
$$\big\|\widetilde\Sigma_K-\Sigma_K\big\|_2=O(1). \tag{2.17}$$


Let $v$ be a unit vector in $\mathbb{R}^d$. We have
$$\langle v,\widetilde\Sigma_Kv\rangle=\int\langle v,n-\bar n\rangle^2\,\mathrm{d}\nu_K(n)\ge\int_\Delta\langle v,n-\bar n\rangle^2\,\mathrm{d}\nu_K(n).$$
From Lemma 5.3 in [3] there exists a constant $c>0$ such that for all $K$ large enough and all $n\in\Delta$,
$$\nu_K(\{n\})\ge c\,U_\Delta(\{n\})$$
where $U_\Delta$ is the uniform distribution on $\Delta$. Therefore
$$\langle v,\widetilde\Sigma_Kv\rangle\ge c\int_\Delta\langle v,n-\bar n\rangle^2\,\mathrm{d}U_\Delta(n)$$
and we get
$$\langle v,\widetilde\Sigma_Kv\rangle\ge c_{(2.9)}K\,\|v\|_2^2.$$
The result follows.

3 Controlling time averages of the estimators

For $T>0$, we define the time average of a function $f:\mathbb{Z}^d_+\to\mathbb{R}$ by
$$S_f(T,K)=\frac1T\int_0^Tf\big(N^K(s)\big)\,\mathrm{d}s. \tag{3.1}$$
The goal of this section is to obtain a control of $|S_f(T,K)-\nu_K(f)|$ for a suitable class of functions.
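To fix ideas, the time average (3.1) is what one actually computes along a simulated or observed trajectory. The sketch below is our illustration only: the one-dimensional logistic rates $B(x)=2x$, $D(x)=x(1+x)$, with fixed point $x^*=1$, are an assumption made for the example. It simulates the process with the Gillespie algorithm and evaluates $S_f(T,K)$ for $f(n)=n$:

```python
import random

random.seed(1)

def time_average_f(K, T, f):
    """Gillespie simulation of a 1-d logistic birth-death chain with birth
    rate K*B(n/K) = 2n and death rate K*D(n/K) = n(1 + n/K), started at
    n = K; returns S_f(T, K) = (1/T) * integral_0^T f(N^K(s)) ds."""
    n, t, acc = K, 0.0, 0.0
    while t < T and n > 0:
        birth = 2.0 * n
        death = n * (1.0 + n / K)
        total = birth + death
        dt = random.expovariate(total)       # waiting time to the next jump
        acc += f(n) * (min(t + dt, T) - t)   # piecewise-constant integral
        t += dt
        n += 1 if random.random() < birth / total else -1
    if t < T:                                # absorbed at 0 before time T
        acc += f(0) * (T - t)
    return acc / T

K = 200
avg = time_average_f(K, T=200.0, f=lambda n: n)
print(f"S_f(T,K) = {avg:.1f} (K x* = {K})")
```

For $\log K\ll T\ll e^{\Theta(1)K}$ extinction is very unlikely and the average concentrates around $Kx^*$, up to fluctuations of order $\sqrt K$, in line with Theorem 3.4 below.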

We recall the following result from [3, Theorem 3.1].

Theorem 3.1 ([3]). There exist $a>0$ and $K_0>1$ such that, for all $t\ge0$ and all $K\ge K_0$, we have
$$\sup_{n\in\mathbb{Z}^d_+\setminus\{0\}}\Big\|\mathbb{P}_n\big(N^K(t)\in\cdot\,,\ t<T_0\big)-\mathbb{P}_n(t<T_0)\,\nu_K(\cdot)\Big\|_{\mathrm{TV}}\le 2\,e^{-\frac{at}{\log K}}. \tag{3.2}$$
It is also proved in [3] that, for a time much larger than $\log K$ and much smaller than the extinction time (which is of order $\exp(\Theta(1)K)$), the law of the process at time $t$ is close to the qsd. The accuracy of the approximation depends on the initial condition. This suggests studying the distance between the law of the process at time $t$ and the qsd as a function of the initial condition, $K$ and $t$. This will result from (3.2) if $\mathbb{P}_n\big(T_0\le t\big)$ can be estimated. In fact we prove a more general result.

Lemma 3.2. For $\gamma\ge0$, define
$$\tau_\gamma=\inf\big\{t\ge0:\|N^K(t)\|_1\le\gamma K\big\}.$$
There exist $\delta>0$, $\alpha>0$ and $C>0$ such that for all $n\in\mathbb{Z}^d_+$, $K\ge1$, $0\le\gamma\le1\wedge\frac{\alpha}{\|x^*\|_1}$ and $t\ge0$, we have
$$\mathbb{P}_n\big(\tau_\gamma\le t\big)\le C\left(\exp\Big(-\delta\Big(\Big(\zeta\,\frac{\|n\|_1}{K}\wedge\alpha\Big)-\gamma\|x^*\|_1\Big)K\Big)+t\exp\big(-\delta\big(\alpha-\gamma\|x^*\|_1\big)K\big)\right) \tag{3.3}$$
where
$$\zeta=\min_{1\le j\le d}x^*_j>0. \tag{3.4}$$


Taking $\gamma=0$ in (3.3), we get
$$\mathbb{P}_n\big(T_0\le t\big)\le C\left(\exp\Big(-\delta\Big(\zeta\,\frac{\|n\|_1}{K}\wedge\alpha\Big)K\Big)+t\exp(-\alpha\delta K)\right). \tag{3.5}$$
Proof. It follows from H.1 and H.3 (using Taylor's expansion of $X(x)$ near $0$) that there exists $\alpha_0\in(0,R)$ (where $R$ was introduced in Assumption H.3) such that for all $x\in\mathbb{R}^d_+$ satisfying $\|x\|_2\le\alpha_0$ we have
$$\langle X(x),x^*\rangle\ge\beta\|x\|_2\|x^*\|_2^2-2\beta\|x\|_2\langle x,x^*\rangle+\beta\|x\|_2^3+\langle X(x),x\rangle\ge\beta\|x\|_2\|x^*\|_2^2+O(1)\|x\|_2^2\ge\frac{\beta\|x^*\|_2^2}{2}\,\|x\|_2. \tag{3.6}$$

For $\alpha\in(0,\alpha_0]$ and $\delta>0$ to be chosen later on, we define
$$\psi(n)=e^{-\delta(\langle n,x^*\rangle\wedge\alpha K)}.$$
It is easy to verify that if $\langle n,x^*\rangle>\alpha K+\|x^*\|_2$ we have
$$L_K\psi(n)=0.$$
If $\alpha K-\|x^*\|_2\le\langle n,x^*\rangle\le\alpha K+\|x^*\|_2$ we have
$$\big|L_K\psi(n)\big|\le O(K)\,e^{-\alpha\delta K}.$$

For $\langle n,x^*\rangle\le\alpha K-\|x^*\|_2$, we have $\|n\|_1\le\langle n,x^*\rangle/\zeta\le\alpha K/\zeta$, where $\zeta$ is defined in (3.4), and
$$L_K\psi(n)=K\,g\Big(\delta,\frac nK\Big)\,e^{-\delta\langle n,x^*\rangle}$$
where the function $g$ is defined by
$$g(s,x)=\sum_{j=1}^dB_j(x)\big(e^{-sx^*_j}-1\big)+\sum_{j=1}^dD_j(x)\big(e^{sx^*_j}-1\big).$$

We have
$$g(s,x)=-s\sum_{j=1}^d\big(B_j(x)-D_j(x)\big)x^*_j+\sum_{j=1}^dB_j(x)\big(e^{-sx^*_j}-1+sx^*_j\big)+\sum_{j=1}^dD_j(x)\big(e^{sx^*_j}-1-sx^*_j\big).$$
From the differentiability of the vector fields $B$ and $D$ and using (3.6), it follows that there exists a constant $\Gamma>0$ such that, for all $0\le s\le1$ and $\|x\|_2<\alpha_0$, we have
$$g(s,x)=-s\,\langle X(x),x^*\rangle+O(1)\,s^2\|x\|_2\le-s\,\frac{\beta\|x^*\|_2^2}{2}\,\|x\|_2+\Gamma s^2\|x\|_2.$$
Therefore we can choose $\delta>0$ and $0<\alpha<\alpha_0$ such that
$$\sup_{\|x\|_2\le\alpha}g(\delta,x)\le0.$$
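To make this sign condition concrete, take the one-dimensional logistic rates $B(x)=2x$, $D(x)=x(1+x)$ with $x^*=1$ (an illustrative choice of ours, not the general setting); then $g(s,x)=2x(e^{-s}-1)+x(1+x)(e^{s}-1)$, and one can check numerically that, for instance, $\delta=0.1$ and $\alpha=0.5$ work:

```python
import math

def g(s, x):
    # g(s, x) = B(x)(e^{-s x*} - 1) + D(x)(e^{s x*} - 1) with the
    # illustrative one-dimensional logistic rates B(x) = 2x, D(x) = x(1+x)
    # and fixed point x* = 1.
    return 2 * x * (math.exp(-s) - 1) + x * (1 + x) * (math.exp(s) - 1)

delta, alpha = 0.1, 0.5
worst = max(g(delta, i * alpha / 1000) for i in range(1001))
print(f"sup of g(delta, .) on [0, alpha] = {worst}")  # 0.0, attained only at x = 0
```

The supremum is attained at $x=0$, where $g$ vanishes because $B(0)=D(0)=0$; for $0<x\le\alpha$ the drift term $-s\langle X(x),x^*\rangle$ dominates the $O(s^2)$ remainder and $g(\delta,x)<0$.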


Therefore, for all $n$, we have
$$L_K\psi(n)\le O(1)\,K\,e^{-\alpha\delta K}.$$
For $\tilde\gamma>0$ (independent of $K$), we define
$$\tilde\tau_{\tilde\gamma}=\inf\big\{t\ge0:\langle N^K(t),x^*\rangle\le\tilde\gamma K\big\}.$$
We apply Itô's formula to $\psi$ to get
$$\mathbb{E}_n\Big[\psi\big(N^K(t\wedge\tilde\tau_{\tilde\gamma})\big)\Big]=\psi(n)+\mathbb{E}_n\left[\int_0^{t\wedge\tilde\tau_{\tilde\gamma}}L_K\psi\big(N^K(s)\big)\,\mathrm{d}s\right].$$
We have
$$\tilde\gamma K-\zeta\le\langle N^K(\tilde\tau_{\tilde\gamma}),x^*\rangle\le\tilde\gamma K$$
hence
$$\psi\big(N^K(\tilde\tau_{\tilde\gamma})\big)\ge e^{-\delta(\tilde\gamma\wedge\alpha)K}\,e^{-\delta\zeta}.$$
Then
$$\mathbb{E}_n\Big[\psi\big(N^K(t\wedge\tilde\tau_{\tilde\gamma})\big)\Big]\ge\mathbb{P}_n\big(\tilde\tau_{\tilde\gamma}\le t\big)\,e^{-\delta(\tilde\gamma\wedge\alpha)K}\,e^{-\delta\zeta}.$$
Therefore
$$\mathbb{P}_n\big(\tilde\tau_{\tilde\gamma}\le t\big)\,e^{-\delta(\tilde\gamma\wedge\alpha)K}\,e^{-\delta\zeta}\le e^{-\delta(\langle n,x^*\rangle\wedge\alpha K)}+t\,O(1)\,K\,e^{-\alpha\delta K}.$$
To conclude, observe that
$$\mathbb{P}_n\big(\tau_\gamma\le t\big)\le\mathbb{P}_n\big(\tilde\tau_{\tilde\gamma}\le t\big)\qquad\text{for }\tilde\gamma=\gamma\|x^*\|_1$$
because for all $n\in\mathbb{Z}^d_+\setminus\{0\}$,
$$0<\zeta\|n\|_1\le\langle n,x^*\rangle\le\|n\|_1\sup_{j=1,\dots,d}x^*_j\le\|n\|_1\,\|x^*\|_1$$
and $\|N^K(\tau_\gamma)\|_1\le\gamma K$.

We have the following result.

Proposition 3.3. For all bounded functions $h:\mathbb{Z}^d_+\to\mathbb{R}$, $t\ge0$, $n\in\mathbb{Z}^d_+$, and $K>K_0$, we have
$$\Big|\mathbb{E}_n\big[h\big(N^K(t)\big)\big]-\nu_K(h)\Big|\le O(1)\,\|h\|_\infty\left(e^{-\delta\left(\zeta\frac{\|n\|_1}{K}\wedge\alpha\right)K}+t\,e^{-\alpha\delta K}+e^{-\frac{at}{\log K}}\right)$$
where $\alpha$, $\delta$ and $\zeta$ are defined in Lemma 3.2, and $a$ and $K_0$ are defined in Theorem 3.1.

Proof. From the bound (3.2) we get
$$\Big|\mathbb{E}_n\Big[h\big(N^K(t)\big)\,\mathbf{1}_{\{T_0>t\}}\Big]-\mathbb{P}_n(t<T_0)\,\nu_K(h)\Big|\le O(1)\,\|h\|_\infty\,e^{-\frac{at}{\log K}}.$$
This implies
$$\Big|\mathbb{E}_n\big[h\big(N^K(t)\big)\big]-\nu_K(h)\Big|\le\mathbb{E}_n\Big[\big|h\big(N^K(t)\big)\big|\,\mathbf{1}_{\{T_0\le t\}}\Big]+\mathbb{P}_n(t\ge T_0)\,\big|\nu_K(h)\big|+O(1)\,\|h\|_\infty\,e^{-\frac{at}{\log K}}$$
$$\le O(1)\,\|h\|_\infty\left(\mathbb{P}_n(t\ge T_0)+e^{-\frac{at}{\log K}}\right)\le O(1)\,\|h\|_\infty\left(e^{-\delta\left(\zeta\frac{\|n\|_1}{K}\wedge\alpha\right)K}+t\,e^{-\alpha\delta K}+e^{-\frac{at}{\log K}}\right)$$
using (3.5).


We now extend Proposition 3.3 to more general functions. For $q\in\mathbb{Z}_+$, we define the Banach space $F_{K,q}$ by
$$F_{K,q}=\left\{f:\mathbb{Z}^d_+\to\mathbb{R}\ :\ \|f\|_{K,q}:=\sup_{n\ne0}\frac{|f(n)|}{K^q+\|n\|_2^q}<+\infty\right\}. \tag{3.7}$$

We have the following result for time averages of functions in $F_{K,q}$.

Theorem 3.4. For all $K>K_0$, $f\in F_{K,q}$, $T>0$, and $n\in\mathbb{Z}^d_+$, we have
$$\Big|\mathbb{E}_n\big[S_f(T,K)\big]-\nu_K(f)\Big|\le O(1)\,\|f\|_{K,q}\,\big(K^q+\|n\|_2^q\big)\times\left(\frac1T+e^{-\delta\left(\zeta\frac{\|n\|_1}{K}\wedge\alpha\right)K}+T\,e^{-\alpha\delta K}+\frac{\log K}{aT}+\big(1-e^{-\lambda_0(K)}\big)^{\frac12}\right)$$
where $\alpha$, $\delta$ and $\zeta$ are defined in Lemma 3.2, and $\lambda_0(K)$ is defined in (2.5).

Remark 3.1. One can check that if one slightly modifies the definition of the time average (3.1) by integrating from $1$ to $T+1$, then one can remove the term $\|n\|_2^q$ from the previous estimate.

Proof. For $f\in F_{K,q}$, Corollary 2.5 gives
$$\big|\nu_K(f)\big|\le O(1)\,K^q\,\|f\|_{K,q}.$$
By Proposition 2.3 we have
$$\frac1T\left|\mathbb{E}_n\left[\int_0^{1\wedge T}f\big(N^K(s)\big)\,\mathrm{d}s\right]\right|\le O(1)\,\|f\|_{K,q}\,\frac{K^q+\|n\|_2^q}{T}.$$
Hence for $T\le1$ we get
$$\Big|\mathbb{E}_n\big[S_f(T,K)\big]-\nu_K(f)\Big|\le O(1)\,\|f\|_{K,q}\,\big(K^q+\|n\|_2^q\big)\left(\frac1T+1\right).$$
For $T>1$, we have by the Markov property that

. For T > 1, we have by the Markov property that

1 T E n

"

Z T

1

f N K (s) ds

#

= 1 T

Z T

1

E n

E N

K

(s−1)

f N K (1) ds

= 1 T

Z T −1

0 E n

g N K (s) ds where we set

g(m) = E m

f N K (1)

. (3.8)

By Corollary 2.2, the function $g$ is bounded and
$$\|g\|_\infty\le O(1)\,\|f\|_{K,q}\,K^q. \tag{3.9}$$
Applying Proposition 3.3 to $g$ thus gives
$$\Big|\mathbb{E}_n\big[g\big(N^K(s)\big)\big]-\nu_K(g)\Big|\le O(1)\,\|f\|_{K,q}\,K^q\left(e^{-\delta\left(\zeta\frac{\|n\|_1}{K}\wedge\alpha\right)K}+s\,e^{-\alpha\delta K}+e^{-\frac{as}{\log K}}\right).$$


Integrating over $s\in[0,T-1]$ yields
$$\left|\frac1T\int_0^{T-1}\mathbb{E}_n\big[g\big(N^K(s)\big)\big]\,\mathrm{d}s-\frac{T-1}{T}\,\nu_K(g)\right|\le O(1)\,\|f\|_{K,q}\,\frac{K^q}{T}\left((T-1)\,e^{-\delta\left(\zeta\frac{\|n\|_1}{K}\wedge\alpha\right)K}+(T-1)^2\,e^{-\alpha\delta K}+\frac{\log K}{a}\right).$$
Using Lemma 3.5 (stated and proved right after this proof), we finally obtain
$$\Big|\mathbb{E}_n\big[S_f(T,K)\big]-\nu_K(f)\Big|\le O(1)\,\|f\|_{K,q}\,\frac{K^q+\|n\|_2^q}{T}+O(1)\,\|f\|_{K,q}\,\frac{K^q}{T}\left((T-1)\,e^{-\delta\left(\zeta\frac{\|n\|_1}{K}\wedge\alpha\right)K}+(T-1)^2\,e^{-\alpha\delta K}+\frac{\log K}{a}\right)$$
$$+\,O(1)\,\|f\|_{K,q}\,K^q\,\big(1-e^{-\lambda_0(K)}\big)^{\frac12}+\frac1T\,\big|\nu_K(g)\big|+\big|\nu_K(f)\big|\,\mathbf{1}_{\{T\le1\}}$$
$$\le O(1)\,\|f\|_{K,q}\,\big(K^q+\|n\|_2^q\big)\left(\frac1T\Big(2+\frac{\log K}{a}\Big)+e^{-\delta\left(\zeta\frac{\|n\|_1}{K}\wedge\alpha\right)K}+T\,e^{-\delta\alpha K}+\big(1-e^{-\lambda_0(K)}\big)^{\frac12}+\mathbf{1}_{\{T\le1\}}\right).$$
This finishes the proof of the theorem.

We used the following lemma in the previous proof.

Lemma 3.5. For $f\in F_{K,q}$ and $g$ defined in (3.8) we have
$$\big|\nu_K(g)-\nu_K(f)\big|\le O(1)\,K^q\,\|f\|_{K,q}\,\big(1-e^{-\lambda_0(K)}\big)^{\frac12}.$$

Proof. We write
$$\nu_K(g)=\mathbb{E}_{\nu_K}\Big[f\big(N^K(1)\big)\,\mathbf{1}_{\{T_0>1\}}\Big]+\mathbb{E}_{\nu_K}\Big[f\big(N^K(1)\big)\,\mathbf{1}_{\{T_0\le1\}}\Big].$$
Since $\nu_K$ is a qsd, it follows by the Cauchy–Schwarz inequality that
$$\big|\nu_K(g)-\nu_K(f)\big|\le\big(1-e^{-\lambda_0(K)}\big)\,\big|\nu_K(f)\big|+\mathbb{E}_{\nu_K}\Big[f^2\big(N^K(1)\big)\Big]^{\frac12}\,\mathbb{E}_{\nu_K}\Big[\mathbf{1}_{\{T_0\le1\}}\Big]^{\frac12}\le O(1)\,K^q\,\|f\|_{K,q}\,\big(1-e^{-\lambda_0(K)}\big)^{\frac12}$$
where we used Corollaries 2.2 and 2.5 and the fact that under $\nu_K$ the law of $T_0$ is exponential with parameter $\lambda_0(K)$. The lemma is proved.
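The fact that $T_0$ is exponentially distributed under the qsd can be illustrated on a small finite example. The sketch below is our illustration only (the truncated logistic chain on $\{0,\dots,M\}$ and its rates are arbitrary choices): it computes $\nu$ as the normalized left Perron eigenvector of the sub-generator $Q$ on the transient states, so that $\nu Q=-\lambda_0\nu$, whence $\mathbb{P}_\nu(T_0>t)=\nu e^{tQ}\mathbf 1=e^{-\lambda_0t}$:

```python
import numpy as np

# Truncated logistic birth-death chain on {0, 1, ..., M}, absorbing at 0:
# birth rate 2n (cut at M), death rate n(1 + n/K); illustrative rates only.
M, K = 25, 8.0
n = np.arange(1, M + 1)
birth = 2.0 * n
birth[-1] = 0.0
death = n * (1.0 + n / K)

# Sub-generator Q on the transient states {1, ..., M}.  The first row has
# row sum -death[0] < 0; that deficit is the killing rate into state 0.
Q = np.diag(-(birth + death)) + np.diag(birth[:-1], 1) + np.diag(death[1:], -1)

# The qsd nu is the normalized nonnegative left eigenvector of Q associated
# with the top eigenvalue -lambda0 (Perron-Frobenius).
eigvals, eigvecs = np.linalg.eig(Q.T)
top = np.argmax(eigvals.real)
lam0 = -eigvals[top].real
nu = np.abs(eigvecs[:, top].real)
nu /= nu.sum()

# nu Q = -lambda0 nu, hence nu e^{tQ} = e^{-lambda0 t} nu and the extinction
# time T0 is exponential under nu:  P_nu(T0 > t) = e^{-lambda0 t}.
assert np.max(np.abs(nu @ Q + lam0 * nu)) < 1e-8
# Equivalently, lambda0 is the killing flux of the qsd through state 1:
assert abs(lam0 - nu[0] * death[0]) < 1e-8
print(f"lambda0 = {lam0:.4e}")
```

As expected from the $e^{-\Theta(1)K}$ behaviour of the mean extinction time, $\lambda_0$ is very small even for this modest $K$.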

4 Fluctuation and correlation relations

4.1 Proof of Theorem 1.1

Let
$$\widetilde\Sigma_K^{i,j}(t)=\mathbb{E}_{\nu_K}\Big[\big(N_i^K(t)-\bar n_i\big)\big(N_j^K(0)-\bar n_j\big)\Big],\qquad i,j=1,\dots,d.$$
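In practice $\widetilde\Sigma_K^{i,j}(t)$ can be estimated from a single long trajectory by an empirical lag covariance. A minimal sketch in the one-dimensional logistic case (our illustrative rates $B(x)=2x$, $D(x)=x(1+x)$; the sampling grid and trajectory length are arbitrary choices):

```python
import random

random.seed(2)

def sample_path(K, T, dt):
    """Gillespie simulation of the logistic chain (birth rate 2n, death
    rate n(1 + n/K)), started at n = K, sampled on the grid {0, dt, 2dt, ...}."""
    n, t, t_next, out = K, 0.0, 0.0, []
    while t_next <= T:
        if n == 0:                      # absorbed (astronomically rare here)
            out.append(0)
            t_next += dt
            continue
        birth, death = 2.0 * n, n * (1.0 + n / K)
        step = random.expovariate(birth + death)
        while t_next <= T and t_next < t + step:
            out.append(n)               # value held on [t, t + step)
            t_next += dt
        t += step
        n += 1 if random.random() < birth / (birth + death) else -1
    return out

def lag_cov(xs, lag):
    """Empirical estimator of E[(N(t) - mean)(N(0) - mean)] at t = lag*dt."""
    m = sum(xs) / len(xs)
    return sum((a - m) * (b - m) for a, b in zip(xs, xs[lag:])) / (len(xs) - lag)

xs = sample_path(K=50, T=1000.0, dt=0.5)
c0 = lag_cov(xs, 0)    # empirical variance, of order K
c10 = lag_cov(xs, 10)  # lag t = 5: correlations have essentially decayed
print(f"c(0) = {c0:.1f}, c(5.0) = {c10:.1f}")
```

The lag-0 value estimates the static covariance of Proposition 2.9 (of order $K$), while the decay in $t$ carries the relaxation rate of the linearized dynamics, which is the link to the engineering resilience discussed in the introduction.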
