HAL Id: hal-01071400

https://hal.archives-ouvertes.fr/hal-01071400v2

Preprint submitted on 24 Jan 2015

To cite this version:

S. Herrmann, D. Landon. Statistics of transitions for Markov chains with periodic forcing. 2014. ⟨hal-01071400v2⟩


Statistics of transitions for Markov chains with periodic forcing

S. Herrmann and D. Landon
Institut de Mathématiques de Bourgogne
UMR CNRS 5584, Université de Bourgogne,
B.P. 47 870, 21078 Dijon Cedex, France

Abstract

The influence of a time-periodic forcing on stochastic processes is essentially revealed by the large time behaviour of their paths. The statistics of transitions in a simple Markov chain model make it possible to quantify this influence. In particular, a functional Central Limit Theorem can be proven for the number of transitions between two states chosen in the whole finite state space of the Markov chain. An application to stochastic resonance is presented.

Key words and phrases: Markov chain, Floquet multipliers, central limit theorem, large time asymptotic, stochastic resonance.

2000 AMS subject classifications: primary 60J27; secondary: 60F05, 34C25

Introduction

The description of natural phenomena sometimes requires the introduction of stochastic models with periodic forcing. The simplest model used to interpret, for instance, the abrupt changes between cold and warm ages in paleoclimatic data is a one-dimensional diffusion process with time-periodic drift [6]. This periodic forcing is directly related to the variation of the solar constant (Milankovitch cycles). In the neuroscience framework, such periodically forced models are also of prime importance: the firing of a single neuron stimulated by a periodic input signal can be represented by the first passage time of a periodically driven Ornstein-Uhlenbeck process [19] or of other extended models [14]. Moreover, let us note that seasonal autoregressive moving average models have been introduced in order to analyse and forecast statistical time series with periodic forcing. Recently, the time dependence of the volatility in financial time series has led to the development of periodic autoregressive conditional heteroscedastic models. Whereas several statistical models permit one to deal with such time series, the influence of periodic forcing on time-continuous stochastic processes has been the subject of only a few mathematical studies.

Let us note a nice reference in the physics literature dealing with this research subject [13].

Therefore we propose to study a simple Markov chain model evolving in a time-periodic environment (already introduced in the stochastic resonance context [12] and [9]) and, in particular, to focus our attention on its large time asymptotic behaviour. Since the dynamics of the Markov chain is not time-homogeneous, the classical convergence towards the invariant measure and the related convergence rate cannot be used.

Description of the model. Let us consider a time-continuous irreducible Markov chain evolving in the state space $\mathcal{S} = \{s_1, s_2, \ldots, s_d\}$ with $d \ge 2$. The transition rate from state $s_i$ to state $s_j$ is denoted by $\varphi^0_{i,j}$, for $i \neq j$. We assume that $\varphi^0_{i,j} \ge c$ for some positive constant $c$ and for any $i \neq j$. We perturb this initial process by a periodic forcing of period $T$: the transition rates $\varphi^0_{i,j}$ are increased using additional non-negative periodic functions $\varphi^p_{i,j}$. The Markov chain obtained in this way is denoted by $(X_t)_{t \ge 0}$ and its infinitesimal generator is given by

$$Q_t = \begin{pmatrix} -\varphi_{1,1}(t) & \varphi_{2,1}(t) & \ldots & \varphi_{d,1}(t) \\ \varphi_{1,2}(t) & -\varphi_{2,2}(t) & & \varphi_{d,2}(t) \\ \vdots & & \ddots & \vdots \\ \varphi_{1,d}(t) & \varphi_{2,d}(t) & \ldots & -\varphi_{d,d}(t) \end{pmatrix}. \qquad (0.1)$$

Here $\varphi_{i,j} = \varphi^0_{i,j} + \varphi^p_{i,j}$ are $T$-periodic functions representing the transition rate from state $s_i$ to $s_j$. In particular, the transition rates satisfy

$$\varphi_{i,j}(t) \ge c > 0 \quad \text{for any } (i,j) \in \mathcal{S}^2. \qquad \mathrm{(H)}$$

We also assume that the $\varphi_{i,j}$ are càdlàg functions.

In order to describe precisely the paths of the chain $(X_t)$, we define transition statistics: $N_t^{i,j}$ corresponds to the number of switchings from state $s_i$ to $s_j$ up to time $t$. For notational convenience, we focus our attention on $N_t := N_t^{1,2}$. Obviously, knowing the processes $(N_t^{i,j})$ for any $1 \le i,j \le d$ is equivalent to knowing the behaviour of $(X_t)$.
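Since the rates are time-dependent but bounded on $[0,T]$, the chain can be simulated by thinning a Poisson process whose intensity dominates all rates, and $N_t^{1,2}$ is obtained by counting accepted jumps out of $s_1$. The sketch below does this for a two-state chain; the period, the two rate functions and the bound PHI_MAX are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed rates, not from the paper): simulate a 2-state chain
# with T-periodic transition rates by thinning, and count N_t^{1,2}.
import numpy as np

rng = np.random.default_rng(0)
T = 1.0                                                    # forcing period
phi12 = lambda t: 1.0 + 0.8 * np.sin(2 * np.pi * t / T)    # rate s1 -> s2
phi21 = lambda t: 1.5 + 0.5 * np.cos(2 * np.pi * t / T)    # rate s2 -> s1
PHI_MAX = 3.0                                              # upper bound on both rates

def count_transitions(t_end, x0=0):
    """Number of s1 -> s2 switchings up to t_end, and the final state."""
    t, x, n12 = 0.0, x0, 0
    while True:
        t += rng.exponential(1.0 / PHI_MAX)        # candidate jump time (dominating clock)
        if t >= t_end:
            return n12, x
        rate = phi12(t) if x == 0 else phi21(t)    # exit rate of the current state
        if rng.random() < rate / PHI_MAX:          # accept the candidate: the chain jumps
            if x == 0:
                n12 += 1
            x = 1 - x

n12, _ = count_transitions(50 * T)
print("N over 50 periods:", n12)
```

Thinning is valid here because the càdlàg $T$-periodic rates are bounded on $[0,T]$, so a single dominating clock of intensity PHI_MAX can propose all candidate jump times.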

Main result. Let us first note that, in the higher dimensional space $[0,T] \times \mathcal{S}$, we can define a Markov process $(t \bmod T, X_t)_{t\ge0}$ which is time-homogeneous and admits a unique invariant measure $\mu = (\mu_i(t))_{1\le i\le d,\ t\in[0,T[}$. The main results can then be stated. The periodic forcing implies that the distribution of the Markov chain $(X_t)$ converges as time elapses towards the unique invariant measure $\mu$ (the sense of this convergence is made precise in Section 1). Moreover, the first moments of the statistics $N_t$ satisfy

$$\lim_{t\to\infty} \frac{1}{t}\,\mathbb{E}[N_t] = \frac{1}{T} \int_0^T \varphi_{1,2}(s)\,\mu_1(s)\,ds,$$

and there exists a constant $\kappa_\varphi > 0$ such that $\lim_{t\to\infty} \mathrm{Var}(N_t)/t = \kappa_\varphi$. The explicit value of the constant $\kappa_\varphi$ is emphasized in Section 2.1. Using these two moment asymptotics, we can prove a Central Limit Theorem: the number of transitions between two given states during $n$ periods is asymptotically Gaussian distributed; the process

$$\left( \frac{N_{ntT} - \mathbb{E}[N_{ntT}]}{\sqrt{\mathrm{Var}(N_{ntT})}} \right)_{0 \le t \le 1}$$

converges in distribution towards the standard Brownian motion as $n$ tends to infinity (Theorem 2.6).
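A quick Monte Carlo look at the one-dimensional Gaussian limit, reusing the thinning simulator sketched above (rates, period and sample sizes are again illustrative assumptions): many independent copies of $N_{nT}$ are drawn, standardised empirically, and their distribution is compared with the standard normal one.

```python
# Sketch of the Gaussian limit for N_{nT} (assumed rates, empirical standardisation).
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
T, n_periods, n_paths = 1.0, 100, 4000
phi12 = lambda t: 1.0 + 0.8 * np.sin(2 * np.pi * t / T)
phi21 = lambda t: 1.5 + 0.5 * np.cos(2 * np.pi * t / T)
PHI_MAX = 3.0

def count_transitions(t_end):
    t, x, n12 = 0.0, 0, 0
    while True:
        t += rng.exponential(1.0 / PHI_MAX)        # thinning, as in the previous sketch
        if t >= t_end:
            return n12
        rate = phi12(t) if x == 0 else phi21(t)
        if rng.random() < rate / PHI_MAX:
            if x == 0:
                n12 += 1
            x = 1 - x

samples = np.array([count_transitions(n_periods * T) for _ in range(n_paths)])
z = (samples - samples.mean()) / samples.std()     # empirical standardisation
for q in (-1.0, 0.0, 1.0):
    gaussian = 0.5 * (1.0 + erf(q / sqrt(2.0)))
    print(f"P(Z <= {q:+.0f})  empirical: {np.mean(z <= q):.3f}   normal: {gaussian:.3f}")
```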


Application. The explicit expression of the mean number of transitions between two states before time $t$ permits us to deal with particular optimization problems appearing in the stochastic resonance framework (see, for instance, [8]). Let us reduce the study to a 2-state space $\mathcal{S} = \{s_1, s_2\}$ and to the corresponding Markov chain whose transition rates are $\varphi_{1,2}$, respectively $\varphi_{2,1}$, the exit rate of the state $s_1$, resp. $s_2$. Let us consider a family of periodic forcings all having the same period $T$ and being parametrized by a variable $\varepsilon$; it is then possible to choose in this family the perturbation which has the most influence on the stochastic process, simply by minimizing the following quality measure:

$$M(\varepsilon) := \left| \int_0^T \varphi_{1,2}(s)\,\mu_1(s)\,ds - 1 \right|.$$

Indeed this expression intuitively means that the asymptotic number of transitions from state $s_1$ to state $s_2$ per period is close to 1. In Section 3 we shall compare this quality measure (already introduced in [21]) to other measures usually used in the physics literature [12].
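A minimal numerical sketch of this tuning procedure, under assumed forms for the base rates and the forcing family (none of these choices come from the paper): for each amplitude $\varepsilon$ the two-state equation (1.1) is relaxed to its periodic regime, the integral $\int_0^T \varphi_{1,2}\,\mu_1$ is accumulated over one period, and $M(\varepsilon)$ is reported.

```python
# Sketch: evaluate M(eps) = |int_0^T phi12(s) mu1(s) ds - 1| for an assumed
# family of periodic forcings parametrised by an amplitude eps.
import numpy as np

T = 1.0

def rates(eps):
    phi12 = lambda t: 1.0 + eps * (1.0 + np.sin(2 * np.pi * t / T))
    phi21 = lambda t: 1.0 + eps * (1.0 + np.cos(2 * np.pi * t / T))
    return phi12, phi21

def quality(eps, n_steps=5000, n_burn_periods=40):
    phi12, phi21 = rates(eps)
    dt = T / n_steps
    mu1 = 0.5                                    # arbitrary start, relaxes to the PSPM
    for k in range(n_burn_periods * n_steps):    # burn-in towards the periodic regime
        t = (k * dt) % T
        mu1 += dt * (-phi12(t) * mu1 + phi21(t) * (1.0 - mu1))
    integral = 0.0                               # one more period: accumulate phi12 * mu1
    for k in range(n_steps):
        t = k * dt
        integral += dt * phi12(t) * mu1
        mu1 += dt * (-phi12(t) * mu1 + phi21(t) * (1.0 - mu1))
    return abs(integral - 1.0)

for eps in (0.0, 0.25, 0.5, 1.0, 2.0):
    print(f"eps = {eps:4.2f}   M(eps) = {quality(eps):.4f}")
```

The amplitude minimising the printed values is the preferred forcing within this assumed family, which mimics the tuning procedure discussed above.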

1 Periodic stationary measure for Markov chains

Before focusing our attention on the path behaviour of the Markov chain, we describe, in this preliminary section, the fixed-time distribution of the random process and, in particular, analyse the existence of a so-called periodic stationary probability measure (PSPM); we shall make this terminology precise in the following.

The marginal law of the Markov chain $(X_t)_{t\ge0}$ starting from the initial distribution $\nu_0$ and evolving in the state space $\mathcal{S} = \{s_1, \ldots, s_d\}$ is given by

$$\nu_i(t) = \mathbb{P}_{\nu_0}(X_t = s_i), \qquad 1 \le i \le d.$$

This probability measure $\nu = (\nu_1, \ldots, \nu_d)^*$ (the symbol $*$ stands for the transpose) constitutes a solution to the following ODE:

$$\dot{\nu}(t) = Q_t\,\nu(t) \quad \text{and} \quad \nu(0) = \nu_0, \qquad (1.1)$$

where the generator $Q_t$ is defined in (0.1). Let us just note that

$$\mathbb{P}(X_{t+h} = s_j \mid X_t = s_i) = \varphi_{i,j}(t)\,h + o(h) \quad \text{for } i \neq j.$$

Moreover the following relation holds:

$$\varphi_{i,i} = \sum_{j=1,\, j\neq i}^{d} \varphi_{i,j}, \qquad \forall\, 1 \le i \le d. \qquad (1.2)$$

Floquet's theory, dealing with linear differential equations with periodic coefficients, can thus be applied. In particular we shall prove that $\nu(t)$ converges exponentially fast towards a periodic solution of (1.1), the convergence rate being related to the Floquet multipliers (see Section 2.4 in [4]).

Definition 1.1. Any $T$-periodic solution $\nu(t) = (\nu_1(t), \ldots, \nu_d(t))$ of (1.1) is called a periodic stationary probability measure (PSPM) iff $\nu_i(t) \ge 0$ for all $i \in \{1, \ldots, d\}$ and $\sum_{i=1}^d \nu_i(t) = 1$, both for all $t \ge 0$.
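As a simple sanity check of this definition (a degenerate example, not taken from the paper), consider two states with constant rates $\varphi_{1,2}, \varphi_{2,1} > 0$, which are trivially $T$-periodic for every $T > 0$. The constant vector

$$\nu(t) \equiv \left( \frac{\varphi_{2,1}}{\varphi_{1,2}+\varphi_{2,1}},\ \frac{\varphi_{1,2}}{\varphi_{1,2}+\varphi_{2,1}} \right)$$

is then a PSPM: its entries are non-negative, they sum to one, and the right-hand side of (1.1) vanishes, so it is a (constant, hence $T$-periodic) solution. The genuinely periodic two-state situation is treated in Proposition 1.3 below.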


The following statement points out the long time asymptotics of the Markov chain.

Theorem 1.2. The system (1.1) has a unique stationary probability measure $\mu(t)$ which is $T$-periodic. For any initial condition $\nu_0$, the probability distribution $\nu(t) := \mathbb{P}_{X_t}$ converges in the large time limit towards $\mu(t)$. More precisely the rate of convergence is given by

$$\lim_{t\to\infty} \frac{1}{t}\,\log \|\nu(t) - \mu(t)\| \le \mathrm{Re}(\lambda_2) < 0, \qquad (1.3)$$

where $\lambda_2$ is the second Floquet exponent associated to (1.1) and (0.1); $\|\cdot\|$ stands for the Euclidean norm in $\mathbb{R}^d$.

Proof. Step 1. Existence of the periodic invariant measure. We consider the distribution of the Markov chain $(X_t)$ starting from state $s_i$; we obtain obviously a probability measure which is a solution of the following ODE:

$$\dot{\nu}^i(t) = Q_t\,\nu^i(t), \qquad \nu^i(0) = (\delta_{ij})_{j\in\{1,\ldots,d\}}, \qquad (1.4)$$

where $\delta_{ij}$ stands for the Kronecker symbol. We deduce that the principal matrix solution of (1.1) is given by

$$M(t) = \begin{pmatrix} \nu^1_1(t) & \nu^2_1(t) & \ldots & \nu^d_1(t) \\ \nu^1_2(t) & \nu^2_2(t) & & \nu^d_2(t) \\ \vdots & & \ddots & \vdots \\ \nu^1_d(t) & \nu^2_d(t) & \ldots & \nu^d_d(t) \end{pmatrix},$$

since $M(0) = \mathrm{Id}_d$, the identity matrix in $\mathbb{R}^d$. The monodromy matrix $M(T)$ is therefore stochastic and strictly positive: $\nu^i_j(T) > 0$ since $Q_t$ satisfies (H). By the Perron-Frobenius theorem (see Chapter 8 in [15]), the largest eigenvalue is simple and equal to 1. Moreover, the associated eigenvector $u = (u_1, \ldots, u_d)$ is strictly positive and so we define a probability measure using a normalisation procedure: $u / \sum_{i=1}^d u_i$. Consequently there exists a unique periodic invariant probability measure $\mu(t)$ defined by

$$\mu(t) = \frac{M(t)\,u}{\sum_{i=1}^d u_i}, \qquad t \ge 0.$$

Floquet's theory ensures that $\mu(t)$ is $T$-periodic.

Step 2. Convergence. By the Perron-Frobenius theorem, the eigenvalues of the monodromy matrix $M(T)$, also called Floquet multipliers, are $\{r_1, r_2, \ldots, r_s\}$, $s \le d$, with $1 = r_1 > |r_2| \ge |r_3| \ge \ldots \ge |r_s|$, and their associated multiplicities $n_1, \ldots, n_s$ satisfy $n_1 = 1$ and $\sum_{k=1}^s n_k = d$. Let us decompose the space as follows: $\mathbb{R}^d = \mathbb{R}\mu(0) \oplus V$, where $\mu(0)$ is the periodic invariant measure at time $t = 0$ and $V$ is a stable subspace for the linear operator $M(T)$. Since the first eigenvalue $r_1$ is simple, the spectral radius of $M(T)$ restricted to the subspace $V$ satisfies $\rho(M(T)|_V) = |r_2| < 1$. So for any probability distribution $\nu_0$, we get $\nu_0 = \alpha\mu(0) + v$ with $\alpha \in \mathbb{R}$ and $v \in V$. Hence

$$\|M(T)^n \nu_0 - \alpha\mu(0)\| = \big\| (M(T)|_V)^n\, v \big\| \le \big\| (M(T)|_V)^n \big\| \cdot \|v\|.$$


Using Gelfand's formula (see, for instance, [20], p. 70) we obtain the asymptotic result

$$\lim_{n\to\infty} \frac{1}{n}\,\log \|M(T)^n \nu_0 - \alpha\mu(0)\| \le \log(|r_2|) < 0. \qquad (1.5)$$

In particular, since $M(T)^n \nu_0$ is a probability measure, we deduce that $\alpha = 1$. Let us just note that the Floquet multiplier $r_2$ satisfies $r_2 = e^{\lambda_2 T}$, where $\lambda_2$ is the associated Floquet exponent, defined modulo $2i\pi/T$. Consequently

$$\log(|r_2|) = T\,\mathrm{Re}(\lambda_2).$$

Let us now consider any time $t$; $\nu(t)$ is then a probability measure satisfying $\nu(t) = M(t)\nu_0$. We define $r(t) \in [0, T[$ by $r(t) = t - \lfloor t/T \rfloor T$ and obtain

$$\|\nu(t) - \mu(t)\| = \|M(t)\nu_0 - M(t)\mu_0\| = \big\|M(r(t))\,M(\lfloor t/T\rfloor T)\,\nu_0 - M(r(t))\,\mu_0\big\| \le \|M(r(t))\| \cdot \big\| M(T)^{\lfloor t/T\rfloor}\nu_0 - \mu_0 \big\|.$$

By (1.5), and since $s \mapsto M(s)$ is continuous on $[0,T]$ so that $\|M(r(t))\|$ is bounded uniformly in $t$, we obtain the announced statement (1.3).
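The proof suggests a direct numerical recipe: integrate the matrix ODE $\dot M = Q_t M$, $M(0)=\mathrm{Id}$, over one period, then read the Floquet multipliers and the PSPM at time 0 off the monodromy matrix $M(T)$. The sketch below does this for an assumed 3-state generator (the rate functions are illustrative, not from the paper).

```python
# Sketch: monodromy matrix, Floquet multipliers and mu(0) for an assumed 3-state chain.
import numpy as np

d, T = 3, 1.0

def Q(t):
    # phi[i, j] is the (assumed) rate s_{i+1} -> s_{j+1}; each entry has its own phase
    phi = 1.0 + 0.5 * np.sin(2 * np.pi * t / T + np.arange(d * d).reshape(d, d))
    np.fill_diagonal(phi, 0.0)
    Qt = phi.T.copy()                          # generator as in (0.1): Q[j, i] = phi[i, j]
    np.fill_diagonal(Qt, -phi.sum(axis=1))     # diagonal: -phi_{i,i} = -sum_{j != i} phi_{i,j}
    return Qt

# integrate M' = Q_t M, M(0) = Id over one period with a Runge-Kutta 4 scheme
n_steps, h, M = 4000, T / 4000, np.eye(d)
for k in range(n_steps):
    t = k * h
    k1 = Q(t) @ M
    k2 = Q(t + h / 2) @ (M + h / 2 * k1)
    k3 = Q(t + h / 2) @ (M + h / 2 * k2)
    k4 = Q(t + h) @ (M + h * k3)
    M = M + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

vals, vecs = np.linalg.eig(M)                  # M is now the monodromy matrix M(T)
order = np.argsort(-np.abs(vals))
r = vals[order]                                # Floquet multipliers, r_1 = 1 first
mu0 = np.real(vecs[:, order[0]])
mu0 = mu0 / mu0.sum()                          # periodic stationary measure at time 0
print("Floquet multipliers :", np.round(r, 4))
print("mu(0)               :", np.round(mu0, 4))
print("Re(lambda_2)        :", round(float(np.log(np.abs(r[1])) / T), 4))
```

With hypothesis (H) in force, the largest multiplier printed should be numerically equal to 1 and the others strictly smaller in modulus, as in the theorem.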

The particular 2-dimensional case

In this section, we focus our attention on the particular 2-dimensional case. As explained in Theorem 1.2, the distribution of the Markov chain $\nu(t) := \mathbb{P}_{X_t}$, starting from the initial distribution $\nu_0$ and evolving in the state space $\mathcal{S} = \{s_1, s_2\}$, converges exponentially fast to the unique PSPM $\mu$. In dimension 2, we can compute explicitly the probability measure $\nu(t)$ and the convergence rate by applying Floquet's theory. This theory deals with linear differential equations with periodic coefficients (see Section 2.4 in [4]). The following statement points out the long time asymptotics of the Markov chain.

Proposition 1.3. In the large time limit, the probability distribution $\nu$ converges towards the unique PSPM $\mu$ defined by $\mu(t) = (\mu_1(t),\ 1-\mu_1(t))$ and

$$\mu_1(t) = \mu_1(0)\, e^{-\int_0^t (\varphi_{1,2}+\varphi_{2,1})(s)\,ds} + \int_0^t \varphi_{2,1}(s)\, e^{-\int_s^t (\varphi_{1,2}+\varphi_{2,1})(u)\,du}\,ds, \qquad (1.6)$$

where

$$\mu_1(0) = \frac{I(\varphi_{2,1})}{I(\varphi_{1,2}+\varphi_{2,1})} \quad \text{and} \quad I(f) = \int_0^T f(t)\, e^{-\int_t^T (\varphi_{1,2}+\varphi_{2,1})(u)\,du}\,dt. \qquad (1.7)$$

More precisely, if $\nu(0) \neq \mu(0)$ then

$$\lim_{t\to\infty} \frac{1}{t}\,\log \|\nu(t) - \mu(t)\| = \lambda_2, \qquad (1.8)$$

where $\lambda_2$ stands for the second Floquet exponent:

$$\lambda_2 = -\frac{1}{T} \int_0^T (\varphi_{1,2}+\varphi_{2,1})(t)\,dt. \qquad (1.9)$$
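A short numerical sketch of these closed forms, for one assumed pair of periodic rates (an illustration, not the paper's example): it evaluates $I(f)$, $\mu_1(0)$ and $\lambda_2$ from (1.7) and (1.9), rebuilds $\mu_1(t)$ from (1.6) on a grid, and checks the periodicity $\mu_1(T) = \mu_1(0)$.

```python
# Sketch: evaluate (1.6), (1.7) and (1.9) numerically for assumed rates.
import numpy as np

T = 1.0
phi12 = lambda t: 1.0 + 0.8 * np.sin(2 * np.pi * t / T)
phi21 = lambda t: 1.5 + 0.5 * np.cos(2 * np.pi * t / T)

ts = np.linspace(0.0, T, 4001)
total = phi12(ts) + phi21(ts)
# S[k] = int_0^{ts[k]} (phi12 + phi21)(u) du  (trapezoidal primitive)
S = np.concatenate(([0.0], np.cumsum((total[1:] + total[:-1]) / 2 * np.diff(ts))))

def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

def I(values):               # I(f) = int_0^T f(t) exp(-int_t^T (phi12 + phi21)) dt
    return trap(values * np.exp(-(S[-1] - S)), ts)

mu1_0 = I(phi21(ts)) / I(total)      # (1.7)
lam2 = -S[-1] / T                    # (1.9)

mu1 = np.empty_like(ts)              # (1.6) on the grid
mu1[0] = mu1_0
for k in range(1, len(ts)):
    weights = np.exp(-(S[k] - S[:k + 1]))          # exp(-int_s^t (phi12 + phi21))
    mu1[k] = mu1_0 * np.exp(-S[k]) + trap(phi21(ts[:k + 1]) * weights, ts[:k + 1])

print("mu1(0) =", round(mu1_0, 6), "  mu1(T) =", round(float(mu1[-1]), 6))
print("lambda_2 =", round(lam2, 6))
```

The printed $\mu_1(0)$ and $\mu_1(T)$ should agree up to discretisation error, which is exactly the periodicity condition used in the proof below.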


Remark 1.4. It is possible to transform $(X_t)_{t\ge0}$ into a time-homogeneous Markov process just by increasing the space dimension. By this procedure $(\mu(t))_{0\le t<T}$ becomes the invariant probability measure of $(t \bmod T, X_t)_{t\ge0}$.

Proof. 1. First we study the existence of a unique PSPM. Let $\mu(t)$ be a probability measure; thus $\mu_1(t) + \mu_2(t) = 1$. If $\mu$ satisfies (1.1), then we obtain, by substitution, the differential equation:

$$\dot{\mu}_1(t) = -\varphi_{1,2}(t)\,\mu_1(t) + \varphi_{2,1}(t)\,\big(1 - \mu_1(t)\big).$$

This equation can be solved using the variation of parameters. The procedure yields (1.6). The periodicity of the solution requires $\mu_1(T) = \mu_1(0)$ and leads to (1.7).

2. The system (1.1) admits two Floquet multipliers $\rho_1$ and $\rho_2$. Since there exists a periodic solution, one of the multipliers (let us say $\rho_1$) is equal to 1, and we can compute the other one using the relation between the product $\rho_1\rho_2$ and the trace of $Q_t$:

$$\rho_1\rho_2 = \exp\left( \int_0^T \mathrm{tr}(Q_t)\,dt \right).$$

The explicit expression of the trace leads to (1.9). Let us just note that we can associate with the Floquet multipliers $\rho_1$ and $\rho_2$ the so-called Floquet exponents $\lambda_1$ and $\lambda_2$, defined (not uniquely) by

$$\rho_1 = e^{\lambda_1 T} \quad \text{and} \quad \rho_2 = e^{\lambda_2 T}.$$

3. Since the Floquet multipliers are different, each multiplier is associated with a particular solution of (1.1). The multiplier $\rho_1 = 1$ (i.e. $\lambda_1 = 0$) corresponds to the PSPM since $\mu(t+T) = \rho_1\mu(t)$ for all $t \in \mathbb{R}_+$. For the Floquet exponent $\lambda_2$, we consider $\zeta(t)$ the solution of (1.1) with initial condition $\zeta(0)^* = (-1, 1)$. Combining both equations of (1.1), we obtain

$$\begin{cases} \zeta_1(t) + \zeta_2(t) = 0, \\ \zeta_1(t) - \zeta_2(t) = -2\,\exp\left( -\int_0^t (\varphi_{1,2}+\varphi_{2,1})(s)\,ds \right). \end{cases} \qquad (1.10)$$

We deduce

$$\zeta(t)^* = \left( -\exp\left( -\int_0^t (\varphi_{1,2}+\varphi_{2,1})(s)\,ds \right),\ \exp\left( -\int_0^t (\varphi_{1,2}+\varphi_{2,1})(s)\,ds \right) \right)$$

and we can easily check that $\zeta(t+T) = \zeta(t)\,e^{\lambda_2 T}$.

The solution of (1.1) with any initial condition is therefore a linear combination of $\zeta$ and $\mu$, the solutions associated with the Floquet multipliers. Writing $\nu(0)$ in the basis $(\mu(0), \zeta(0))$ yields $\nu(t) = \alpha\mu(t) + \beta\zeta(t)$, with $\alpha = \nu_1(0) + \nu_2(0)$ ($\alpha$ is equal to 1 in the particular probability measure case) and

$$\beta = \frac{\nu_2(0) - \nu_1(0)}{2} + \alpha\,\frac{I(\varphi_{2,1}) - I(\varphi_{1,2})}{2\,I(\varphi_{1,2}+\varphi_{2,1})}.$$

Then, if the initial condition is a probability measure, we obtain (1.8) since

$$\|\nu(t) - \mu(t)\| = \|\beta\zeta(t)\| = \sqrt{2}\,|\beta|\, e^{-\int_0^t (\varphi_{1,2}+\varphi_{2,1})(s)\,ds}.$$


2 Statistics of the number of transitions

In this section, we aim to describe the number of transitions $N_t^{i,j}$, up to time $t$, between two given states $s_i$ and $s_j$. This information is of prime interest since computing it for a given path is very simple [17]. Recent studies emphasize how to get the probability distribution of this counting process, even in some more general situations: Markov renewal processes, which include in particular the time-homogeneous Markov chains [3].

Moreover, counting the transitions permits us to get information about the transition rates of the Markov chain. In the particular time-homogeneous case, the numbers of transitions during some large time interval are used for estimation purposes (for continuous-time Markov chains see, for instance, [1], and for time-discrete Markov chains [2]).

In general, the large time behaviour of $N_t^{i,j}$ is directly related to the ergodic theorem, the law of large numbers and finally the Central Limit Theorem (for precise hypotheses concerning these limit theorems, see [16]). Let us just discuss a particular situation: the study of a time-discrete Markov chain $(X_n)_{n\ge0}$ with values in the state space $\mathcal{S} = \{s_1, \ldots, s_d\}$ and with transition probabilities $\pi$. Let us denote by $\mu$ its invariant probability measure. In order to describe the number of transitions, we introduce a new Markov chain by defining $Z_n := (X_{n-1}, X_n)$ for $n \ge 1$, valued in the state space $\mathcal{S}^2$. Its invariant measure is therefore $\tilde\mu$ defined by

$$\tilde\mu(x, y) := \pi(x, y)\,\mu(x), \qquad (x, y) \in \mathcal{S}^2.$$

In this particular situation, the number of transitions of the chain $(X_n)$ is given by

$$N_n^{1,2} = \sum_{k=1}^n \mathbf{1}_{\{X_{k-1}=s_1,\ X_k=s_2\}} = \sum_{k=1}^n \mathbf{1}_{(s_1,s_2)}(Z_k).$$

In other words, it corresponds to the number of visits of the state $(s_1, s_2)$ by the chain $(Z_n)_{n\ge1}$. Consequently, under suitable conditions, the ergodic theorem can be applied:

$$\lim_{n\to\infty} \frac{N_n^{1,2}}{n} = \tilde\mu(s_1, s_2) \quad \text{almost surely.}$$

The Central Limit Theorem specifies the rate of convergence.
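A compact numerical illustration of this discrete-time picture (the $3\times3$ transition matrix below is an arbitrary assumption): count the $s_1 \to s_2$ transitions along one long trajectory and compare $N_n^{1,2}/n$ with $\tilde\mu(s_1, s_2) = \pi(s_1, s_2)\,\mu(s_1)$.

```python
# Sketch: ergodic limit of N_n^{1,2}/n for an assumed homogeneous discrete chain.
import numpy as np

rng = np.random.default_rng(2)
pi = np.array([[0.2, 0.5, 0.3],
               [0.4, 0.1, 0.5],
               [0.3, 0.3, 0.4]])                 # rows sum to one

# invariant measure mu: normalised left eigenvector of pi for the eigenvalue 1
vals, vecs = np.linalg.eig(pi.T)
mu = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
mu /= mu.sum()

n, x, n12 = 200_000, 0, 0
for _ in range(n):
    y = rng.choice(3, p=pi[x])                   # one step of the chain
    if x == 0 and y == 1:                        # a transition s1 -> s2
        n12 += 1
    x = y

print("N_n/n        :", n12 / n)
print("mu~(s1, s2)  :", pi[0, 1] * mu[0])
```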

However, these arguments cannot be applied directly to the periodically forced Markov chain model associated with the infinitesimal generator (0.1), due essentially to two facts:

• the Markov chain $(X_t)_{t\ge0}$ is time-inhomogeneous;

• the Markov chain is a time-continuous stochastic process.

One way to overcome these difficulties is to combine a discrete time-splitting $(t_n)_{n\ge0}$ on the one hand and an increase of the space dimension on the other hand, so that $(t_n \bmod T,\ X_{t_{n-1}},\ X_{t_n})$ becomes homogeneous. This procedure seems to be complicated and we choose to present a quite different approach, based on a time-splitting and on a functional Central Limit Theorem for weakly dependent random variables introduced by Herrndorf [11]. This result requires the study of the asymptotic behaviour of the first moments of $N_t^{i,j}$ and of a mixing property of the Markov chain.

Let us also mention that usually the Central Limit Theorem and the associated large deviations could be proven using asymptotic properties of the Laplace transform of $N_t^{i,j}$. Of course such information is not sufficient for a functional CLT. An overview of the conditions can be found in [5].

2.1 Long time asymptotics for the average and the variance

The general d-dimensional case

Let us focus our attention on the two first moments of $N_t$, the number of transitions between two given states, let us say $s_1$ and $s_2$. For a homogeneous continuous-time Markov chain, the average and the variance of $N_t$ grow linearly if the process starts with the stationary distribution. What happens if the Markov chain is not homogeneous and, in particular, if the transition probabilities depend periodically on time?

Let us introduce different mathematical quantities which play a crucial role in the asymptotic result.

• Let us denote by $M(t)$ the fundamental solution of (1.1), that is:

$$\dot{M}(t) = Q_t\,M(t), \qquad M(0) = \mathrm{Id}. \qquad (2.1)$$

• $\Xi(T)$ represents the Jordan canonical form of $M(T)$; $P$ is the basis matrix of this canonical form: $\Xi(T) = P^{-1}M(T)P$. Moreover we denote, for any $t \ge 0$,

$$\Xi(t) = P^{-1}M(t)P. \qquad (2.2)$$

• Three additional notations: the vector $e_1 = (1, 0, \ldots, 0) \in \mathbb{R}^d$ and the matrices $(\check{\mathrm{Id}}_1)_{i,j} = \mathbf{1}_{\{i=j\ge2\}}$ for $1 \le i, j \le d$ and $(B_t)_{i,j} = \varphi_{1,2}(t)\,\delta_{i,2}\,\delta_{j,1}$.

Theorem 2.1. Asymptotics of the two first moments.

The number of transitions from state $s_1$ to state $s_2$, denoted by $N_t$, satisfies the following asymptotic properties.

1. First moment. For any initial distribution $\mathbb{P}_{X_0}$, we observe in the large time limit

$$m_t := \mathbb{E}[N_t] \sim \int_0^t \varphi_{1,2}(s)\,\mu_1(s)\,ds,$$

where $\mu_1$ is the first coordinate of the periodic stationary measure associated with the Markov chain $(X_t)_{t\ge0}$. In particular,

$$\lim_{t\to\infty} \frac{1}{t}\,\mathbb{E}[N_t] = \frac{1}{T} \int_0^T \varphi_{1,2}(s)\,\mu_1(s)\,ds. \qquad (2.3)$$

2. Second moment. Let us denote $R_\nu(t) := \mathrm{Var}(N_t) - \mathbb{E}[N_t]$ for the initial distribution of the Markov chain $\mathbb{P}_{X_0} = \nu$. Then

$$R_{\mu(0)}(T) = 2\int_0^T \varphi_{1,2}(s)\; e_1\,P\,\Xi(s)\,\check{\mathrm{Id}}_1\, C(s)\,ds, \qquad (2.4)$$


where $\mu$ is the PSPM and

$$C(t) = \int_0^t \Xi(s)^{-1} P^{-1} B_s\, \mu(s)\,ds. \qquad (2.5)$$

Moreover the following limit holds:

$$\lim_{t\to\infty} \frac{1}{t}\,R_\nu(t) = \frac{2}{T}\left\{ e_1 \left( \int_0^T \varphi_{1,2}(s)\, P\,\Xi(s)\,ds \right) \Xi(T)\,\check{\mathrm{Id}}_1 \left( \mathrm{Id} - \Xi(T)\,\check{\mathrm{Id}}_1 \right)^{-1} C(T) \right\} + \frac{1}{T}\, R_{\mu(0)}(T). \qquad (2.6)$$

The quantity $R_\nu(t)$ has been studied since its expression is actually more concise than the explicit expression of the variance. Moreover, it is well known that the variance of a Poisson distribution equals its average. Therefore $R$ vanishes for this particular probability distribution, which in fact plays an important role for counting processes in homogeneous environments.
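A Monte Carlo sanity check of the first-moment limit (2.3) and of the linear growth of $\mathrm{Var}(N_t)$, still for the assumed two-state rates used in the earlier sketches: the empirical $\mathbb{E}[N_t]/t$ and $\mathrm{Var}(N_t)/t$ over many simulated paths are compared with $\frac{1}{T}\int_0^T \varphi_{1,2}\,\mu_1$, the latter being obtained by relaxing (1.1) to its periodic regime.

```python
# Sketch: compare Monte Carlo estimates of E[N_t]/t and Var(N_t)/t with the
# deterministic limit in (2.3), for assumed two-state rates.
import numpy as np

rng = np.random.default_rng(3)
T = 1.0
phi12 = lambda t: 1.0 + 0.8 * np.sin(2 * np.pi * t / T)
phi21 = lambda t: 1.5 + 0.5 * np.cos(2 * np.pi * t / T)
PHI_MAX = 3.0

def count_transitions(t_end):
    t, x, n12 = 0.0, 0, 0
    while True:
        t += rng.exponential(1.0 / PHI_MAX)            # thinning, as before
        if t >= t_end:
            return n12
        rate = phi12(t) if x == 0 else phi21(t)
        if rng.random() < rate / PHI_MAX:
            if x == 0:
                n12 += 1
            x = 1 - x

t_end, n_paths = 200 * T, 2000
samples = np.array([count_transitions(t_end) for _ in range(n_paths)])
print("E[N_t]/t   (Monte Carlo):", round(float(samples.mean()) / t_end, 4))
print("Var(N_t)/t (Monte Carlo):", round(float(samples.var()) / t_end, 4))

# right-hand side of (2.3): relax mu1 to the periodic regime, then average phi12 * mu1
n_steps, dt, mu1 = 5000, T / 5000, 0.5
for k in range(40 * n_steps):                          # burn-in over 40 periods
    s = (k * dt) % T
    mu1 += dt * (-phi12(s) * mu1 + phi21(s) * (1.0 - mu1))
limit = 0.0
for k in range(n_steps):                               # one period of phi12 * mu1
    s = k * dt
    limit += dt * phi12(s) * mu1
    mu1 += dt * (-phi12(s) * mu1 + phi21(s) * (1.0 - mu1))
print("limit in (2.3)          :", round(limit / T, 4))
```

The empirical $\mathrm{Var}(N_t)/t$ gives a rough numerical view of the constant $\kappa_\varphi$ from the Introduction, although its exact evaluation requires the matrices $\Xi$ and $P$ introduced above.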

Remark 2.2. 1. The limit (2.6) does not depend on the initial distribution of $X_0$. This property is related to the ergodic behaviour of the Markov chain developed in Theorem 1.2.

2. If the fundamental solution of (2.1) at time $T$ is diagonalizable, that is, $r_1 = 1 > |r_2| > \ldots > |r_d|$ where the $r_i$ are the Floquet multipliers of (1.1), then (2.6) takes a simpler form due to the following expression:

$$\left( \Xi(T)\,\check{\mathrm{Id}}_1\,\big(\mathrm{Id} - \Xi(T)\,\check{\mathrm{Id}}_1\big)^{-1} \right)_{i,j} = \frac{r_i}{1 - r_i}\,\mathbf{1}_{\{i=j\ge2\}}, \qquad 1 \le i, j \le d.$$

3. If the transition probabilities are constant functions such that $Q_t$ defined in (0.1) satisfies

$$\varphi_{i,j} = \varphi\,\mathbf{1}_{\{i\neq j\}} + (d-1)\,\varphi\,\mathbf{1}_{\{i=j\}},$$

for some constant $\varphi > 0$, then Theorem 1.2 can be applied for any $T > 0$ and straightforward computations lead to:

$$R_{\mu(0)}(t) = \frac{2}{d^4}\left( 1 - e^{-d\varphi t} - d\varphi t \right).$$

Hence

$$\lim_{t\to\infty} \frac{1}{t}\,R_{\mu(0)}(t) = -\frac{2\varphi}{d^3}.$$

Even in this simple homogeneous situation, $N_t$ is not asymptotically Poisson distributed. Indeed the Poisson distribution would satisfy $R(t) = 0$.

Proof. Step 1. Averaged number of transitions. Let us first decompose the averaged number of transitions as follows:

$$m_t = \sum_{k=1}^{d} m_t^k \quad \text{with} \quad m_t^k = \mathbb{E}\big[N_t\,\mathbf{1}_{\{X_t = s_k\}}\big].$$


We set $M_t := (m_t^1, \ldots, m_t^d)^*$. For $h > 0$ small enough, we get

$$\begin{aligned} m^2_{t+h} &= \mathbb{E}\big[N_{t+h}\,\mathbf{1}_{\{X_{t+h}=s_2\}}\big] = \sum_{1\le i\le d} \mathbb{E}\big[\big(N_t + (N_{t+h} - N_t)\big)\,\mathbf{1}_{\{X_t=s_i,\ X_{t+h}=s_2\}}\big] \\ &= \sum_{1\le i\le d} \mathbb{E}\big[N_t\,\mathbf{1}_{\{X_t=s_i\}}\big]\,\mathbb{P}(X_{t+h}=s_2 \mid X_t=s_i) + \sum_{1\le i\le d} \mathbb{E}\big[(N_{t+h} - N_t)\,\mathbf{1}_{\{X_t=s_i,\ X_{t+h}=s_2\}}\big] \\ &= m^2_t\,\big(1 - \varphi_{2,2}(t)\,h\big) + \sum_{i=1,\, i\neq 2}^{d} m^i_t\,\varphi_{i,2}(t)\,h + \nu_1(t)\,\varphi_{1,2}(t)\,h + o(h), \end{aligned}$$

where $\nu_i(t) = \mathbb{P}(X_t = s_i)$. By similar computations, we obtain the result for $h < 0$ close to the origin. Moreover, for $k \neq 2$:

$$m^k_{t+h} = \mathbb{E}\big[N_{t+h}\,\mathbf{1}_{\{X_{t+h}=s_k\}}\big] = m^k_t\,\big(1 - \varphi_{k,k}(t)\,h\big) + \sum_{i=1,\, i\neq k}^{d} m^i_t\,\varphi_{i,k}(t)\,h + o(h).$$

Finally we observe that $M_t$ satisfies the ODE:

$$\dot{M}_t = Q_t\,M_t + B_t\,\nu_t, \qquad M_0 = 0, \qquad (2.7)$$

where $(B_t)_{i,j} = \varphi_{1,2}(t)\,\delta_{i,2}\,\delta_{j,1}$. Let $M(t)$ be the fundamental solution of (2.1). Since $Q_t$ satisfies (H), $M(T)$ is an irreducible, positive and stochastic matrix. Indeed, let us just explain why $\mathbb{P}_{s_i}(X_T = s_j) > 0$ for any $i$ and $j$: let us assume that this inequality does not hold. Then for $h$ small enough there exists a state $s_l$ such that

$$\mathbb{P}_{s_i}(X_{T-h} = s_l) > 0, \qquad (2.8)$$

and

$$\mathbb{P}(X_T = s_j \mid X_{T-h} = s_l) = \varphi_{l,j}(T-h)\,h + o(h) \qquad (2.9)$$

if $l \neq j$, otherwise:

$$\mathbb{P}(X_T = s_j \mid X_{T-h} = s_j) = 1 - \varphi_{j,j}(T-h)\,h + o(h). \qquad (2.10)$$

By (H), the combination of (2.8), (2.9) and (2.10) leads to the announced property $\mathbb{P}_{s_i}(X_T = s_j) > 0$, as a product of two positive quantities. Therefore the Perron-Frobenius theorem (see Chapter 8 in [15]) applied to the matrix $M(T)$ implies:

• the eigenvalues $r_1, r_2, \ldots, r_s$, $s \le d$, of the matrix $M(T)$ have associated multiplicities satisfying $n_1 = 1$, $\sum_{k=1}^s n_k = d$ and $r_1 = 1 > |r_2| \ge \ldots \ge |r_s|$;

• the eigenvector associated with the first eigenvalue corresponds to the periodic stationary probability measure $\mu(0)$.

We therefore denote by $\mathcal{B} = (\xi_1^0, \ldots, \xi_d^0)$ the basis of the canonical Jordan form of the matrix $M(T)$ and by $P$ the basis matrix of $\mathcal{B}$, $P^{-1}M(T)P$ being then the Jordan form. In particular $\xi_1^0 = \mu(0)$. We define $\xi_k(t) = M(t)\,\xi_k^0$, $1 \le k \le d$, and


observe two different cases. First case: $\xi_k^0$ is an eigenvector of $M(T)$ associated to the eigenvalue $r_j$, which implies that

$$\xi_k(t+T) = M(t+T)\,\xi_k^0 = M(t)\,M(T)\,\xi_k^0 = r_j\,M(t)\,\xi_k^0 = r_j\,\xi_k(t) \qquad (2.11)$$

and consequently $\xi_k$ is a Floquet solution associated to the Floquet multiplier $r_j$.

Second case: $\xi_k^0$ is not an eigenvector of $M(T)$ and belongs to the Jordan block associated to the eigenvalue $r_j$; then

$$\xi_k(t+T) = M(t)\,M(T)\,\xi_k^0 = r_j\,M(t)\,\xi_k^0 + M(t)\,\xi_{k-1}^0 = r_j\,\xi_k(t) + \xi_{k-1}(t). \qquad (2.12)$$

Furthermore we denote by $\Xi(t)$ the matrix defined by (2.2): the coefficient $\Xi_{i,j}(t)$ represents the $i$-th coordinate of the solution $\xi_j(t)$ in the basis $\mathcal{B}$, for $1 \le i, j \le d$. Let us note that, since $\xi_1$ is a probability measure, $(1, \ldots, 1)\,\xi_1 = 1$. Moreover, combining (1.1) and (1.2) leads to the following property: $(1, \ldots, 1)\,P\,\Xi(t)$ is a constant function. If $\xi_k^0$ is an eigenvector of $M(T)$ associated to the eigenvalue $r_j$, then $\xi_k(T) = r_j\,\xi_k(0)$ with $|r_j| < 1$. In particular, since $(1, \ldots, 1)\,\xi_k(t)$ is constant, in the canonical basis $(1, \ldots, 1)\,\xi_k(t) = 0$. If $\xi_k^0$ is not an eigenvector but belongs to the Jordan block associated to the eigenvalue $r_j$, then (2.12) leads to

$$(1, \ldots, 1)\,\xi_k(T) = r_j\,(1, \ldots, 1)\,\xi_k(T) + (1, \ldots, 1)\,\xi_{k-1}(T).$$

If $(1, \ldots, 1)\,\xi_{k-1}(T) = 0$, then the property $|r_j| < 1$ leads to $(1, \ldots, 1)\,\xi_k(T) = 0$. So, step by step, we prove that

$$(1, \ldots, 1)\,P\,\Xi(t) = (1, 0, \ldots, 0), \qquad \forall\, t \ge 0. \qquad (2.13)$$

Let us now solve the homogeneous part of the equation (2.7): there exists a vector $C = (C_1, \ldots, C_d)^*$ such that

$$M_t = P\,\Xi(t)\,C.$$

By the method of parameter variation, we obtain the system:

$$P\,\Xi(t)\,\dot{C}(t) = B_t\,\nu(t) = \big(0,\ \varphi_{1,2}(t)\,\nu_1(t),\ 0, \ldots, 0\big)^*. \qquad (2.14)$$

The initial condition $M_0 = 0$ leads to $C(0) = 0$. By multiplying (2.14) on the left side by the vector $(1, \ldots, 1)$ we obtain $\dot{C}_1(t) = \varphi_{1,2}(t)\,\nu_1(t)$. Hence

$$C_1(t) = \int_0^t \varphi_{1,2}(s)\,\nu_1(s)\,ds. \qquad (2.15)$$

We obtain therefore an explicit solution of (2.7) and deduce that

$$\mathbb{E}[N_t] = (1, \ldots, 1)\,M_t = C_1(t) = \int_0^t \varphi_{1,2}(s)\,\nu_1(s)\,ds \ \sim\ \int_0^t \varphi_{1,2}(s)\,\big(P\,\Xi(s)\big)_{1,1}\,ds,$$

as $t$ becomes large. The equivalence presented in the previous equation is due to the ergodic property of the periodically driven Markov chain (Theorem 1.2).

More precisely, for any sufficiently small constant $\epsilon > 0$ (smaller than $|\mathrm{Re}(\lambda_2)|$) there exists a constant $C > 0$ such that:

$$\left| \mathbb{E}[N_t] - \int_0^t \varphi_{1,2}(s)\,\big(P\,\Xi(s)\big)_{1,1}\,ds \right| \le \int_0^t \varphi_{1,2}(s)\,\big|\nu_1(s) - \big(P\,\Xi(s)\big)_{1,1}\big|\,ds \le C \int_0^t \varphi_{1,2}(s)\,e^{(\mathrm{Re}(\lambda_2)+\epsilon)s}\,ds. \qquad (2.16)$$
