CHARACTERIZATION OF THE MARGINAL DISTRIBUTIONS OF MARKOV PROCESSES USED IN DYNAMIC RELIABILITY

CHRISTIANE COCOZZA-THIVENT, ROBERT EYMARD, SOPHIE MERCIER, AND MICHEL ROUSSIGNOL

Received 3 May 2004; Revised 15 February 2005; Accepted 22 February 2005

In dynamic reliability, the evolution of a system is described by a piecewise deterministic Markov process $(I_t, X_t)_{t \ge 0}$ with state space $E \times \mathbb{R}^d$, where $E$ is finite. The main result of the present paper is the characterization of the marginal distribution of the Markov process $(I_t, X_t)_{t \ge 0}$ at time $t$, as the unique solution of a set of explicit integro-differential equations, which can be seen as a weak form of the Chapman-Kolmogorov equation.

Uniqueness is the difficult part of the result.

Copyright © 2006 Christiane Cocozza-Thivent et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

In dynamic reliability, a system is considered which evolves in time according to a two-component Markov process $(I_t, X_t)_{t \ge 0}$: the symbol $I_t$ corresponds to some physical state of the system and has a finite state space $E$, whereas $X_t$ represents some "environmental" condition (temperature, pressure, ...) and takes its values in $\mathbb{R}^d$. The transition rate between two different states $i$ and $j$ of $E$ may vary with the current environmental condition $x$ in $\mathbb{R}^d$. The transition rate from state $i$ to state $j$ then appears as a function from $\mathbb{R}^d$ to $\mathbb{R}_+$, and will be denoted by $a(i,j,\cdot)$. The evolution of the environmental component $X_t$ is described by a set of differential equations, which depend on the physical state of the item. More precisely, the dynamic of $X_t$ is such that, given $I_t(\omega) = i$ for all $t \in [a,b]$, we have

$$\frac{d}{dt} X_t(\omega) = v\bigl(i, X_t(\omega)\bigr), \tag{1.1}$$

for all $t \in [a,b]$, where $v$ is a mapping from $E \times \mathbb{R}^d$ to $\mathbb{R}^d$. As a consequence, the trajectory $(X_t)_{t \ge 0}$ is deterministic between any two consecutive jumps of $(I_t)_{t \ge 0}$, and $(X_t)_{t \ge 0}$ is the solution of the ordinary differential equation $dy/dt = v(i,y)$.


Such processes are called piecewise deterministic Markov processes; see Davis [7, 8].

In dynamic reliability, the process $(X_t)_{t \ge 0}$ is usually assumed to be continuous, so that no jump is allowed for the environmental component (see [12], e.g., and other papers by P. E. Labeau). In the present paper, this assumption of continuity is relaxed and the process $(X_t)_{t \ge 0}$ may jump simultaneously with the process $(I_t)_{t \ge 0}$. More precisely, when the process $I_t$ jumps from state $i$ to state $j$, we assume that the process $(X_t)$ belongs to $(y, y + dy)$ with probability $\mu(i,j,x)(dy)$ at the jump time, given that $X_t = x$ just before the jump (where $(i,j,x) \in E^2 \times \mathbb{R}^d$). In the case $\mu(i,j,x)(dy) = \delta_x(dy)$ (for all $i,j,x$), the process $(X_t)_{t \ge 0}$ is continuous and our model meets the usual one. Our model however includes many other interesting cases, such as interacting semi-Markov processes or non-homogeneous Markov processes.
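To fix ideas, here is a purely illustrative instance of the three ingredients $a$, $v$, and $\mu$ (our example, not taken from the paper): a two-state component, $E = \{1, 2\}$ with 1 = nominal and 2 = failed, whose environmental variable $X_t \in \mathbb{R}$ is a temperature, with hypothetical parameters $\alpha, \beta, \lambda_0, \nu_0 > 0$, target temperatures $\theta_1, \theta_2$, and reset level $x_0$:
$$v(1,x) = \alpha(\theta_1 - x), \quad v(2,x) = \alpha(\theta_2 - x), \quad a(1,2,x) = \lambda_0\bigl(1 + \tanh(\beta x)\bigr), \quad a(2,1,x) = \nu_0,$$
$$\mu(1,2,x)(dy) = \delta_x(dy), \qquad \mu(2,1,x)(dy) = \delta_{x_0}(dy).$$
The temperature relaxes towards a state-dependent level, the failure rate grows (boundedly) with temperature, a failure leaves the temperature unchanged, and a repair resets it to $x_0$, which is exactly a simultaneous jump of $X_t$ in the above sense. One checks readily that these choices satisfy the assumptions (H) stated in Section 2.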

The marginal distribution of the process $(I_t, X_t)_{t \ge 0}$ at time $t$ is usually a complicated characteristic which is not analytically reachable (see, e.g., [12]). The aim of the present paper is to characterize such a marginal distribution as the unique measure solution of a set of explicit integro-differential equations. Such equations can actually be seen as a weak form of the Chapman-Kolmogorov (CK) equations associated to the Markov process $(I_t, X_t)_{t \ge 0}$. In a second paper (see [3]), we use such a characterization to propose a numerical algorithm, which is shown to converge weakly towards the right marginal distribution in a third paper [4].

Piecewise deterministic Markov processes (PDMP) have already been studied by Davis [8] (other references therein). However, the main part of the present work, namely, existence and uniqueness of solutions to the CK equation, is not treated by Davis, who is mainly interested in functionals of the process which do not encompass marginal distributions. PDMP have also been studied by Costa, Dufour, and Raymundo in [5, 6, 9] and by Borovkov and Novikov in [2], with different objectives from ours. A chapter in [13] is also devoted to PDMP.

The paper is organized as follows: notations and assumptions are given in Section 2. In Section 3, the process $(I_t,X_t)_{t \ge 0}$ is constructed, using Markov renewal theory. We derive that $(I_t,X_t)_{t \ge 0}$ is a Markov process and give an explicit expression for its transition probabilities and for its full generator on regular functions. In Section 4, we first give a weak form of the CK equations with regular test functions. The existence of a solution to such equations is ensured by the construction of Section 3. Then we show the uniqueness of the solution by extending the set of test functions to a more general set of functions. This is the main part of the paper. Concluding remarks are given in Section 5.

2. Notations and assumptions

We use the following notations.

(1) Symbol $E$ stands for the finite state space of the studied item.

(2) For $d \in \mathbb{N}$, symbols $\mathcal{P}(\mathbb{R}^d)$ and $\mathcal{P}(E \times \mathbb{R}^d)$, respectively, represent the set of probability measures on $\mathbb{R}^d$ and $E \times \mathbb{R}^d$.

(3) Symbols $C_b(F,\mathbb{R})$ and $C^1(F,\mathbb{R})$, respectively, stand for the set of bounded continuous functions and the set of continuously differentiable functions from a Banach space $F$ to $\mathbb{R}$ (or $\mathbb{R}_+$). In the following, we use $F = \mathbb{R}^d$, $F = \mathbb{R}^d \times \mathbb{R}$, $F = \mathbb{R}^d \times [0,T]$, and $C_b^1(F,\mathbb{R}) = C_b(F,\mathbb{R}) \cap C^1(F,\mathbb{R})$. Symbol $C_c^1(F,\mathbb{R})$ stands for the set of continuously differentiable functions with a compact support.

(4) For any set $F$, symbols $F^E$ and $F^{E \times E}$, respectively, represent the set of functions
$$f : E \longrightarrow F, \quad i \longmapsto f(i), \qquad\qquad f : E \times E \longrightarrow F, \quad (i,j) \longmapsto f(i,j). \tag{2.1}$$
For instance, $C_b(\mathbb{R}^d,\mathbb{R})^E$ stands for the set of functions $f : E \to C_b(\mathbb{R}^d,\mathbb{R})$, namely, such that $f(i)$ is a bounded continuous function from $\mathbb{R}^d$ to $\mathbb{R}$, for all $i \in E$.

For the sake of simplicity, we write $f(i,x)$ or $f(i,j,x)$ instead of $(f(i))(x)$ or $(f(i,j))(x)$, for $i,j \in E$ and $x$ in $\mathbb{R}^d$ or $\mathbb{R}^d \times \mathbb{R}$.

With these notations, we adopt the following assumptions, which will be referred to as assumptions (H).

(1) The transition rate $a$ is assumed to be such that $a \in C_b(\mathbb{R}^d,\mathbb{R}_+)^{E \times E}$, namely, nonnegative, continuous, and bounded by some real number $A > 0$.

(2) The velocity $v$ which appears in (1.1) is assumed to be such that:
(i) $v \in C(\mathbb{R}^d,\mathbb{R}^d)^E$,
(ii) the function $v(i,\cdot)$ is locally Lipschitz continuous (for all $i \in E$),
(iii) the function $\operatorname{div} v(i,\cdot)$ is almost everywhere bounded by some real value $D > 0$ (for all $i \in E$),
(iv) the function $v(i,\cdot)$ is sublinear (for all $i \in E$): there are some $V_1 > 0$ and $V_2 > 0$ such that
$$\forall i \in E, \ \forall x \in \mathbb{R}^d, \quad \bigl\| v(i,x) \bigr\| \le V_1 \|x\| + V_2. \tag{2.2}$$
(3) The distribution $\{\mu(i,j,x)\}$ which controls the jumps of $(X_t)_{t \ge 0}$ at the jump times of $(I_t)_{t \ge 0}$ is such that, for $i,j \in E$ and $\psi \in C_b(\mathbb{R}^d,\mathbb{R})$, the function $x \mapsto \int \psi(y)\,\mu(i,j,x)(dy)$, $x \in \mathbb{R}^d$, is continuous (where $\mu : E \times E \times \mathbb{R}^d \to \mathcal{P}(\mathbb{R}^d)$).

These assumptions guarantee the existence and uniqueness of the solution to the differential equations fulfilled by the environmental component. They also provide us with some classical properties for the solution, which are summed up in the following lemma.

Lemma 2.1. There exists one and only one function $g \in C(\mathbb{R}^d \times \mathbb{R},\mathbb{R}^d)^E$ of class $C^1$ with respect to time $t$ such that for any $i \in E$,
$$\frac{\partial g}{\partial t}(i,x,t) = v\bigl(i, g(i,x,t)\bigr), \quad (x,t) \in \mathbb{R}^d \times \mathbb{R}, \tag{2.3}$$
with the initial condition
$$g(i,x,0) = x, \quad x \in \mathbb{R}^d. \tag{2.4}$$


Moreover, this unique solution $g$ fulfils the following properties.
(1) The function $x \mapsto g(i,x,t)$ is locally Lipschitz continuous with respect to $x$ on $\mathbb{R}^d$, for all $t > 0$ and $i \in E$.
(2) For all $x \in \mathbb{R}^d$, $(s,t) \in \mathbb{R} \times \mathbb{R}$, and $i \in E$, one has
$$g\bigl(i, g(i,x,s), t\bigr) = g(i,x,s+t). \tag{2.5}$$
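As a side note (ours, not part of the paper), once a velocity field is fixed the flow $g$ is easy to approximate numerically. The minimal sketch below uses a made-up relaxation field $v$ (all names and parameters are hypothetical) and illustrates the semigroup property (2.5).

```python
# Minimal sketch (not from the paper): numerical approximation of the flow
# g(i, x, t) of Lemma 2.1 for an assumed two-state relaxation velocity field.
import numpy as np
from scipy.integrate import solve_ivp

ALPHA = 0.5                      # hypothetical relaxation speed
THETA = {1: 20.0, 2: 80.0}       # hypothetical state-dependent target levels

def v(i, x):
    """Velocity field v(i, x): relaxation of x towards THETA[i]."""
    return ALPHA * (THETA[i] - x)

def flow(i, x0, t):
    """Approximate g(i, x0, t) by integrating dx/dt = v(i, x), x(0) = x0."""
    sol = solve_ivp(lambda s, x: v(i, x), (0.0, t), np.atleast_1d(x0), rtol=1e-8)
    return float(sol.y[0, -1])

# Semigroup property (2.5): g(i, g(i, x, s), t) == g(i, x, s + t)
x = 30.0
print(flow(1, flow(1, x, 2.0), 3.0), flow(1, x, 5.0))
```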

3. Probabilistic construction of the process

In this section, we first construct the process $(I_t, X_t)_{t \ge 0}$ in a similar way as in Davis [8]. We then derive explicit expressions for the transition probabilities of the process $(I_t, X_t)_{t \ge 0}$ and for its full generator on the class of regular functions.

3.1. Construction of the process. In order to construct the process $(I_t,X_t)_{t \ge 0}$ with values in $E \times \mathbb{R}^d$, we first define a Markov renewal process $(T_n, (I_{T_n}, X_{T_n}))_{n \ge 0}$ by its conditional distributions. Let us denote by $Q(I_{T_n}, X_{T_n}, j, dy, du)$ the conditional distribution of $(I_{T_{n+1}}, X_{T_{n+1}}, T_{n+1} - T_n)$, given $(T_k, (I_{T_k}, X_{T_k}))_{k \le n}$ (or equivalently given $(I_{T_n}, X_{T_n})$). Setting $\lambda(i,\cdot) = \sum_{j \in E} a(i,j,\cdot)$ for $i \in E$, we define $Q$ in the following way:
$$\begin{aligned}
&\mathbb{E}\Bigl[ h\bigl(X_{T_{n+1}}, T_{n+1} - T_n\bigr)\,\mathbf{1}_{\{I_{T_{n+1}} = j\}} \mid \bigl(T_k, I_{T_k}, X_{T_k}\bigr)_{k \le n} \Bigr] \\
&\quad = \int_0^{\infty} \exp\Bigl( -\int_0^u \lambda\bigl(I_{T_n}, g(I_{T_n}, X_{T_n}, v)\bigr)\,dv \Bigr)\, a\bigl(I_{T_n}, j, g(I_{T_n}, X_{T_n}, u)\bigr) \\
&\qquad\qquad \times \Bigl( \int h(y,u)\,\mu\bigl(I_{T_n}, j, g(I_{T_n}, X_{T_n}, u)\bigr)(dy) \Bigr)\, du \\
&\quad = \int_0^{\infty} \int h(y,u)\, Q\bigl(I_{T_n}, X_{T_n}, j, dy, du\bigr).
\end{aligned} \tag{3.1}$$
Here, $h$ is any positive measurable function on $\mathbb{R}^d \times \mathbb{R}_+$ and $j \in E$.

The process $(I_t, X_t)_{t \ge 0}$ is now constructed as
$$\text{if } T_n \le t < T_{n+1}, \text{ then } I_t = I_{T_n}, \quad X_t = g\bigl(I_{T_n}, X_{T_n}, t - T_n\bigr). \tag{3.2}$$
As noted in Section 1, we can see that, given $(I_{T_n}, X_{T_n})_{n \ge 0}$, the trajectory of $(X_t)_{t \ge 0}$ is purely deterministic on $[T_n, T_{n+1})$ for $n = 0, 1, 2, \ldots$, with $T_0 = 0$.
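For readers who want to experiment, the above construction translates directly into a simulation routine. The sketch below is ours (it is not the numerical algorithm of [3]); it uses the same kind of hypothetical two-state model as in the introduction, samples the sojourn times by thinning with an upper bound on the jump intensity, and integrates the flow with a crude explicit Euler scheme.

```python
# Minimal simulation sketch (ours, not the paper's scheme): one trajectory of the
# PDMP (I_t, X_t) following the Markov renewal construction of Section 3.1.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: 1 = nominal, 2 = failed; X_t is a temperature.
ALPHA, THETA = 0.5, {1: 20.0, 2: 80.0}
LAM0, BETA, NU0, X_RESET = 0.05, 0.02, 0.2, 20.0

def v(i, x):                                  # velocity field v(i, x)
    return ALPHA * (THETA[i] - x)

def a(i, j, x):                               # bounded transition rates a(i, j, x)
    if (i, j) == (1, 2):
        return LAM0 * (1.0 + np.tanh(BETA * x))   # failure rate grows with temperature
    if (i, j) == (2, 1):
        return NU0                                # constant repair rate
    return 0.0

def sample_mu(i, j, x):                       # post-jump kernel mu(i, j, x)(dy)
    return X_RESET if (i, j) == (2, 1) else x     # repair resets X, failure keeps it

def lam(i, x):                                # total jump intensity lambda(i, x)
    return a(i, 1, x) + a(i, 2, x)

A_MAX = 2.0 * LAM0 + NU0                      # crude upper bound on lam(i, x)

def simulate(i, x, horizon, dt=1e-3):
    """Return the piecewise deterministic path [(t, I_t, X_t), ...] up to `horizon`."""
    t, path = 0.0, [(0.0, i, x)]
    while t < horizon:
        tau = rng.exponential(1.0 / A_MAX)    # candidate jump time (thinning)
        s = 0.0
        while s < tau and t + s < horizon:    # Euler integration of dX/dt = v(i, X)
            x += v(i, x) * dt
            s += dt
        t += s
        if t >= horizon:
            break
        if rng.random() < lam(i, x) / A_MAX:  # accepted jump of I_t
            j = int(rng.choice([1, 2], p=[a(i, 1, x) / lam(i, x), a(i, 2, x) / lam(i, x)]))
            x = sample_mu(i, j, x)            # simultaneous jump of X_t
            i = j
        path.append((t, i, x))
    return path

print(simulate(1, 20.0, horizon=50.0)[-3:])
```

Averaging many such trajectories gives a Monte Carlo estimate of the marginal distribution $\rho_t$ characterized in Section 4.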

3.2. Transition probabilities. From the previous construction, we derive an explicit expression for the transition probabilities of $(I_t,X_t)_{t \ge 0}$. This is based on calculations of conditional expectations, using several properties of the Markov renewal process $(T_n, (I_{T_n}, X_{T_n}))_{n \ge 0}$, such as
$$\mathbb{E}\Bigl[ h\bigl(T_{n+1} - T_n\bigr) \mid \bigl(T_k, I_{T_k}, X_{T_k}\bigr)_{k \le n} \Bigr] = \int_0^{\infty} h(u)\, \exp\Bigl( -\int_0^u \lambda\bigl(I_{T_n}, g(I_{T_n}, X_{T_n}, v)\bigr)\,dv \Bigr)\, \lambda\bigl(I_{T_n}, g(I_{T_n}, X_{T_n}, u)\bigr)\,du. \tag{3.3}$$


It is convenient to introduce the following notation:
$$c_i(t,x) = \exp\Bigl( -\int_0^t \lambda\bigl(i, g(i,x,v)\bigr)\,dv \Bigr). \tag{3.4}$$

Proposition 3.1. The process $(I_t,X_t)_{t \ge 0}$ is a homogeneous Markov process with values in $E \times \mathbb{R}^d$. Its transition probabilities $P_t(i,x,j,dy)$ are given by
$$\begin{aligned}
\int h(y)\, P_t(i,x,j,dy)
&= \mathbf{1}_{\{i=j\}}\, c_i(t,x)\, h\bigl(g(i,x,t)\bigr) \\
&\quad + \int_0^t c_i(s,x)\, a\bigl(i,j,g(i,x,s)\bigr) \Bigl( \int c_j(t-s,y)\, h\bigl(g(j,y,t-s)\bigr)\, \mu\bigl(i,j,g(i,x,s)\bigr)(dy) \Bigr) ds \\
&\quad + \sum_{p \ge 2}\ \sum_{j_1,\ldots,j_{p-1}} \int \cdots \int_{\{s_1 \ge 0, \ldots, s_p \ge 0,\ s_1+\cdots+s_p \le t\}} ds_1 \cdots ds_p \\
&\qquad \times c_i\bigl(s_1,x\bigr)\, a\bigl(i,j_1,g(i,x,s_1)\bigr)\, \mu\bigl(i,j_1,g(i,x,s_1)\bigr)\bigl(dy_1\bigr) \\
&\qquad \times \prod_{l=1}^{p-2} c_{j_l}\bigl(s_{l+1},y_l\bigr)\, a\bigl(j_l,j_{l+1},g(j_l,y_l,s_{l+1})\bigr)\, \mu\bigl(j_l,j_{l+1},g(j_l,y_l,s_{l+1})\bigr)\bigl(dy_{l+1}\bigr) \\
&\qquad \times c_{j_{p-1}}\bigl(s_p,y_{p-1}\bigr)\, a\bigl(j_{p-1},j,g(j_{p-1},y_{p-1},s_p)\bigr)\, \mu\bigl(j_{p-1},j,g(j_{p-1},y_{p-1},s_p)\bigr)(dy) \\
&\qquad \times c_j\bigl(t - (s_1+\cdots+s_p),y\bigr)\, h\bigl(g(j,y,t - (s_1+\cdots+s_p))\bigr),
\end{aligned} \tag{3.5}$$
where $h$ is any bounded measurable function.

Remark 3.2. These transition probabilities are the sum of different terms. The first term corresponds to the case of no jump between 0 and $t$, and consequently to a deterministic trajectory on $[0,t]$. The second term corresponds to the case of one single jump, and the other terms to at least two jumps.

As usual, the transition probabilities $P_t$ can be seen as operators on the set of measurable bounded functions defined on $E \times \mathbb{R}^d$ in the following way:
$$\forall (i,x) \in E \times \mathbb{R}^d, \quad P_t h(i,x) = \mathbb{E}\bigl[ h(I_t, X_t) \mid I_0 = i, X_0 = x \bigr] = \sum_{j \in E} \int h(j,y)\, P_t(i,x,j,dy). \tag{3.6}$$
The following two corollaries can be easily checked thanks to the expression of $P_t$ given in Proposition 3.1.


Corollary 3.3. Let $h \in C_b(\mathbb{R}^d,\mathbb{R})^E$; then for all $t \in \mathbb{R}_+$, $P_t h \in C_b(\mathbb{R}^d,\mathbb{R})^E$.

Corollary 3.4. For $h \in C_b(\mathbb{R}^d,\mathbb{R})^E$ and $(i_0,x_0) \in E \times \mathbb{R}^d$, the map $t \mapsto P_t h(i_0,x_0)$, $t \in \mathbb{R}_+$, is a continuous function.

3.3. Generator. We now exhibit sufficient conditions for a function $h$ to be in the domain of the full generator $\mathcal{L}$ of the Markov process $(I_t,X_t)_{t \ge 0}$, and we give the expression of $\mathcal{L}h$ for such an $h$. Alternative sufficient conditions may be found in [13, Theorem 11.2.2] or [8]. In all the following, we will use the set $D_0$ such that
$$D_0 = \bigl\{ h : h \in C_b^1(\mathbb{R}^d,\mathbb{R})^E, \ v(i,\cdot)\cdot\nabla h(i,\cdot) \in C_b(\mathbb{R}^d,\mathbb{R}), \ \forall i \in E \bigr\}. \tag{3.7}$$
Here, $\nabla$ represents the gradient with respect to the second variable (in $\mathbb{R}^d$) and
$$v(i,\cdot)\cdot\nabla h(i,\cdot) = \sum_{k=1}^{d} \frac{\partial h}{\partial x_k}(i,\cdot)\, v^{(k)}(i,\cdot) \tag{3.8}$$
with $v(i,\cdot) = \bigl(v^{(1)}(i,\cdot), v^{(2)}(i,\cdot), \ldots, v^{(d)}(i,\cdot)\bigr)$.

Proposition 3.5. For $h \in D_0$, let us define $\mathcal{L}h$ in the following way:
$$\begin{aligned}
\mathcal{L}h(i,x) &= \sum_{j} a(i,j,x) \int h(j,y)\,\mu(i,j,x)(dy) - h(i,x)\,\lambda(i,x) + \sum_{k=1}^{d} \frac{\partial h}{\partial x_k}(i,x)\, v^{(k)}(i,x) \\
&= \sum_{j} a(i,j,x) \int \bigl( h(j,y) - h(i,x) \bigr)\,\mu(i,j,x)(dy) + v(i,x)\cdot\nabla h(i,x).
\end{aligned} \tag{3.9}$$
Then, under assumptions (H), $h \in D_0$ belongs to the domain of the full generator $\mathcal{L}$ of the Markov process $(I_t,X_t)_{t \ge 0}$. Equivalently, $h(I_t,X_t) - h(I_0,X_0) - \int_0^t \mathcal{L}h(I_s,X_s)\,ds$ is a martingale with respect to the natural filtration of the process $(I_t,X_t)_{t \ge 0}$.

Proof. Assumptions (H) imply that $\mathcal{L}$ is a linear operator from $D_0$ to the set of bounded measurable functions. For $(i,x) \in E \times \mathbb{R}^d$ and $h \in D_0$, let us consider
$$h_t(i,x) = \frac{P_t h(i,x) - h(i,x)}{t} = \frac{\sum_{j \in E} \int h(j,y)\, P_t(i,x,j,dy) - h(i,x)}{t}. \tag{3.10}$$
Due to classical arguments, it is sufficient to prove that $\lim_{t \to 0^+} h_t(i,x) = \mathcal{L}h(i,x)$, and that $h_t(i,x)$ is uniformly bounded for $(i,x) \in E \times \mathbb{R}^d$ and $t$ in a neighborhood of 0.

Based on Proposition 3.1, we have
$$P_t h(i,x) - h(i,x) = A_1 + A_2 + A_3, \tag{3.11}$$


where
$$\begin{aligned}
A_1 &= c_i(t,x)\, h\bigl(i,g(i,x,t)\bigr) - h(i,x), \\
A_2 &= \sum_j \int_0^t c_i(s,x)\, a\bigl(i,j,g(i,x,s)\bigr) \Bigl( \int c_j(t-s,y)\, h\bigl(j,g(j,y,t-s)\bigr)\, \mu\bigl(i,j,g(i,x,s)\bigr)(dy) \Bigr) ds, \\
A_3 &= \sum_j \sum_{p \ge 2}\ \sum_{j_1,\ldots,j_{p-1}} \int \cdots \int_{\{s_1 \ge 0, \ldots, s_p \ge 0,\ s_1+\cdots+s_p \le t\}} ds_1 \cdots ds_p \\
&\qquad \times c_i\bigl(s_1,x\bigr)\, a\bigl(i,j_1,g(i,x,s_1)\bigr)\, \mu\bigl(i,j_1,g(i,x,s_1)\bigr)\bigl(dy_1\bigr) \\
&\qquad \times \prod_{l=1}^{p-2} c_{j_l}\bigl(s_{l+1},y_l\bigr)\, a\bigl(j_l,j_{l+1},g(j_l,y_l,s_{l+1})\bigr)\, \mu\bigl(j_l,j_{l+1},g(j_l,y_l,s_{l+1})\bigr)\bigl(dy_{l+1}\bigr) \\
&\qquad \times c_{j_{p-1}}\bigl(s_p,y_{p-1}\bigr)\, a\bigl(j_{p-1},j,g(j_{p-1},y_{p-1},s_p)\bigr)\, \mu\bigl(j_{p-1},j,g(j_{p-1},y_{p-1},s_p)\bigr)(dy) \\
&\qquad \times c_j\bigl(t - (s_1+\cdots+s_p),y\bigr)\, h\bigl(j,g(j,y,t - (s_1+\cdots+s_p))\bigr).
\end{aligned} \tag{3.12}$$
Denoting by $M$ and $\Lambda$ respective bounds for $h$ and $\lambda$, it is easy to check that $A_3$ is bounded by $M \sum_{p \ge 2} \Lambda^p t^p / p!$. Thus, $A_3$ divided by $t$ is uniformly bounded for $(i,x,t) \in E \times \mathbb{R}^d \times V$, where $V$ is a neighborhood of 0, and tends to 0 as $t$ tends to 0.

Under assumptions (H), $A_2$ divided by $t$ is bounded and tends to $\sum_j a(i,j,x) \int h(j,y)\,\mu(i,j,x)(dy)$ as $t$ tends to 0.

Under assumptions (H), $A_1$ divided by $t$ is bounded and
$$\frac{A_1}{t} = \frac{h\bigl(i,g(i,x,t)\bigr) - h(i,x)}{t} - h\bigl(i,g(i,x,t)\bigr)\,\frac{1 - c_i(t,x)}{t}. \tag{3.13}$$
When $t$ tends to 0, this quantity tends to
$$\sum_{k=1}^{d} \frac{\partial h}{\partial x_k}(i,x)\, v^{(k)}(i,x) - h(i,x)\,\lambda(i,x). \tag{3.14}$$

This completes the proof.
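For a quick feel of formula (3.9), consider the illustrative two-state model sketched in the introduction (our example, not the paper's) and take $h(i,x) = x$ for both states, truncated outside a large interval so that $h \in D_0$. Where the truncation is inactive, (3.9) gives
$$\mathcal{L}h(1,x) = a(1,2,x)\,(x - x) + v(1,x) = \alpha(\theta_1 - x), \qquad \mathcal{L}h(2,x) = \nu_0\,(x_0 - x) + \alpha(\theta_2 - x),$$
so the post-jump kernel $\mu$ contributes only in the states from which the environmental variable actually jumps (here, on repair).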

4. The marginal distributions of $(I_t,X_t)$ as unique solutions of integro-differential equations

From the previous section, we have an explicit expression for the full generator of $(I_t,X_t)_{t \ge 0}$ on regular functions (elements of $D_0$). Now, we derive a weak form of the Chapman-Kolmogorov (CK) equations with regular test functions in Section 4.1. Existence of a solution for such CK equations is clearly ensured by the construction of the process $(I_t,X_t)_{t \ge 0}$ of the previous section. The aim of this section then is to prove the uniqueness of the solution. Actually, the set $D_0$ of test functions for the CK equations (as provided by the previous section) soon appears not to be rich enough to show such uniqueness. We then extend $D_0$ to a set of regular time-dependent functions in Section 4.2 and to less regular time-dependent functions in Section 4.3. The final conclusion is made in Section 4.4.

In this section, symbol $\rho_0$ stands for the initial distribution of $(I_t,X_t)_{t \ge 0}$ (with $\rho_0 \in \mathcal{P}(E \times \mathbb{R}^d)$).

4.1. The Chapman-Kolmogorov equations. We give here the expression of the CK equations for $h$ in $D_0$, which is a direct consequence of the expression for the full generator given in Proposition 3.5.

Proposition 4.1 (Chapman-Kolmogorov equations). Let $\rho_t(i,dx)$ be the marginal distribution of $(I_t,X_t)_{t \ge 0}$ at time $t$, namely, the measure on $E \times \mathbb{R}^d$ such that
$$\rho_t(i,dx) = \sum_{i_0 \in E} \int_{\mathbb{R}^d} P_t\bigl(i_0, x_0, i, dx\bigr)\, \rho_0\bigl(i_0, dx_0\bigr), \quad t \ge 0. \tag{4.1}$$
Then, for all $t \in \mathbb{R}_+$ and $h \in D_0$,
$$\begin{aligned}
&\sum_{i \in E} \int_{\mathbb{R}^d} h(i,x)\,\rho_t(i,dx) - \sum_{i \in E} \int_{\mathbb{R}^d} h(i,x)\,\rho_0(i,dx) \\
&\quad = \int_0^t \sum_{i \in E} \int_{\mathbb{R}^d} \Bigl[ \sum_{j \in E} a(i,j,x) \Bigl( \int_{\mathbb{R}^d} h(j,y)\,\mu(i,j,x)(dy) - h(i,x) \Bigr) + v(i,x)\cdot\nabla h(i,x) \Bigr] \rho_s(i,dx)\,ds.
\end{aligned} \tag{4.2}$$
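A quick sanity check (ours, not part of the paper): taking $h(i,x) = \mathbf{1}_{\{i = i_0\}}$, which belongs to $D_0$ since it is constant in $x$, equation (4.2) reduces to
$$\mathbb{P}\bigl(I_t = i_0\bigr) - \mathbb{P}\bigl(I_0 = i_0\bigr) = \int_0^t \Bigl( \sum_{i \in E} \int_{\mathbb{R}^d} a(i,i_0,x)\,\rho_s(i,dx) - \int_{\mathbb{R}^d} \lambda(i_0,x)\,\rho_s(i_0,dx) \Bigr) ds,$$
that is, the Kolmogorov forward equation for the discrete component averaged over the environmental variable; when $a$ does not depend on $x$, it becomes the familiar ODE system $\frac{d}{dt}\,p_t(i_0) = \sum_{i} p_t(i)\,a(i,i_0) - p_t(i_0)\,\lambda(i_0)$ of a finite Markov jump process.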

Such a proposition suggests the following definition.

Definition 4.2 (measure solution to the CK equations). Let $P : \mathbb{R}_+ \to \mathcal{P}(E \times \mathbb{R}^d)$ be such that for all $t \in \mathbb{R}_+$ and $h \in D_0$,
$$\begin{aligned}
&\sum_{i \in E} \int h(i,x)\,P_t(i,dx) - \sum_{i \in E} \int h(i,x)\,\rho_0(i,dx) \\
&\quad = \int_0^t \sum_{i \in E} \int \Bigl[ \sum_{j \in E} a(i,j,x) \Bigl( \int h(j,y)\,\mu(i,j,x)(dy) - h(i,x) \Bigr) + v(i,x)\cdot\nabla h(i,x) \Bigr] P_s(i,dx)\,ds.
\end{aligned} \tag{4.3}$$
Then, $P$ is called a measure solution of the Chapman-Kolmogorov equations.

Remark 4.3. If $P$ is a measure solution of the CK equations, then $P_0 = \rho_0$. Also, for all $h \in C_b(\mathbb{R}^d,\mathbb{R})^E$, one may check that $\sum_{i \in E} \int h(i,x)\,P_t(i,dx)$ is continuous in $t$.


Due to Proposition 4.1, we already know that a measure solution of the CK equations exists (namely, $P_t(i,dx) = \rho_t(i,dx)$), so our problem now reduces to showing the uniqueness of such a solution. If we assume that there are two measure solutions of the CK equations, we have to show that their difference, say $\tilde{P}_s(i,dx)$, is zero, or equivalently that
$$\sum_{i \in E} \int_{\mathbb{R}^d} h(i,x)\,\tilde{P}_s(i,dx) = 0, \tag{4.4}$$
for $h$ belonging to a sufficiently large set of functions, for almost all $s > 0$. Alternatively, we may also prove that
$$\sum_{i \in E} \int_{\mathbb{R}^d \times \mathbb{R}_+} \varphi(i,x,s)\,\tilde{P}_s(i,dx)\,ds = 0, \tag{4.5}$$
for $\varphi$ belonging to a sufficiently large set of functions. The form of (4.3) suggests that the second form is more appropriate to our problem. This leads us to extend (4.3) to some time-dependent functions $\varphi$. Also, such functions will be taken with a compact support, so that the term $\sum_{i \in E} \int \varphi(i,x,t)\,\tilde{P}_t(i,dx)$ in (4.3) will vanish as $t \to \infty$.

4.2. Characterization of a measure solution of the CK equations with time-dependent functions. The extension of (4.3) to time-dependent functions might be derived from the following equivalence (see, e.g., [11]):

$$\begin{aligned}
&\forall h \in D_0, \quad h(I_t,X_t) - h(I_0,X_0) - \int_0^t \mathcal{L}h(I_s,X_s)\,ds \ \text{is a martingale} \\
&\Longleftrightarrow\ \forall \varphi \in \tilde{D}_0, \quad \varphi(I_t,X_t,t) - \varphi(I_0,X_0,0) - \int_0^t \Bigl( \frac{\partial\varphi}{\partial u} + \mathcal{L}\varphi \Bigr)(I_u,X_u,u)\,du \ \text{is a martingale},
\end{aligned} \tag{4.6}$$
where the new set $\tilde{D}_0$ consists of the real continuous functions $\varphi \in C_b^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$ such that for each $t \ge 0$, the function $\varphi_t : (i,x) \mapsto \varphi(i,x,t)$ belongs to $D_0$. For $\varphi \in \tilde{D}_0$, $\mathcal{L}\varphi$ is then defined by $\mathcal{L}\varphi(i,x,t) = \mathcal{L}\varphi_t(i,x)$.

However, it is better to follow here another approach, more in the spirit of the future developments, so that we now extend the CK equations directly from (4.3).

In the following, we use the symbol $\partial_t$ to denote the derivative with respect to the variable $t \in \mathbb{R}_+$, whereas we recall that $\nabla$ stands for the gradient with respect to the variable $x \in \mathbb{R}^d$.


Proposition 4.4. Let $P : \mathbb{R}_+ \to \mathcal{P}(E \times \mathbb{R}^d)$. The function $P$ is a measure solution of the Chapman-Kolmogorov equations in the sense of Definition 4.2 if and only if
$$\begin{aligned}
&\int_0^t \sum_{i \in E} \int \Bigl[ \sum_{j \in E} a(i,j,x) \Bigl( \int \varphi(j,y,s)\,\mu(i,j,x)(dy) - \varphi(i,x,s) \Bigr) + \partial_t\varphi(i,x,s) + v(i,x)\cdot\nabla\varphi(i,x,s) \Bigr] P_s(i,dx)\,ds \\
&\quad + \sum_{i \in E} \Bigl( \int \varphi(i,x,0)\,\rho_0(i,dx) - \int \varphi(i,x,t)\,P_t(i,dx) \Bigr) = 0,
\end{aligned} \tag{4.7}$$
for all $t \in \mathbb{R}_+$ and $\varphi \in C_c^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$.

Proof. We only have to prove that (4.3) implies (4.7); the converse implication (4.7) $\Rightarrow$ (4.3) is clear.

For $\varphi \in C_c^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$ and $T \in \mathbb{R}_+$, we first apply formula (4.3) to the function $(i,x) \mapsto \partial_t\varphi(i,x,t)$; then we integrate this formula with respect to $t \in (0,T)$ and perform several integrations by parts. Using formula (4.3) once more for the function $\varphi(\cdot,\cdot,T)$ and $t = T$, we arrive at the desired result, namely (4.7) at time $T$.

Taking $t \to \infty$ in (4.7), we now get the following result.

Corollary 4.5. If $P : \mathbb{R}_+ \to \mathcal{P}(E \times \mathbb{R}^d)$ is a measure solution of the CK equations and $\varphi \in C_c^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$, then the measure $m(i,dx,ds) = P_s(i,dx)\,ds$ satisfies $m(\cdot,\mathbb{R}^d \times [0,T]) < \infty$ for all $T \in \mathbb{R}_+$, and
$$\begin{aligned}
0 &= \int_{\mathbb{R}^d \times \mathbb{R}_+} \sum_{i \in E} \sum_{j \in E} a(i,j,x) \Bigl( \int \varphi(j,y,s)\,\mu(i,j,x)(dy) - \varphi(i,x,s) \Bigr)\, m(i,dx,ds) \\
&\quad + \int_{\mathbb{R}^d \times \mathbb{R}_+} \sum_{i \in E} \bigl( \partial_t\varphi(i,x,s) + v(i,x)\cdot\nabla\varphi(i,x,s) \bigr)\, m(i,dx,ds) + \sum_{i \in E} \int \varphi(i,x,0)\,\rho_0(i,dx).
\end{aligned} \tag{4.8}$$

In order to prove the uniqueness of such an $m$, we assume that $\tilde{m}$ stands for the difference between two solutions. We already know that
$$\sum_{i \in E} \int_{\mathbb{R}^d \times \mathbb{R}_+} \psi(i,x,s)\,\tilde{m}(i,dx,ds) = 0 \tag{4.9}$$
with
$$\psi(i,x,s) = \sum_{j \in E} a(i,j,x) \Bigl( \int \varphi(j,y,s)\,\mu(i,j,x)(dy) - \varphi(i,x,s) \Bigr) + \partial_t\varphi(i,x,s) + v(i,x)\cdot\nabla\varphi(i,x,s) \tag{4.10}$$
and $\varphi \in C_c^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$.

To show that $\tilde{m} \equiv 0$, we still have to extend the CK equations to less regular functions $\varphi$ in order to prove that (4.9) holds for $\psi$ belonging to a sufficiently large set of functions, such as $C_c^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$. This is done in the following section.


4.3. Extension of the set of time-dependent test functions for the CK equations. We first introduce the following definitions.

Definition 4.6. For $\varphi \in C_b(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$, let us define $\tilde{\varphi} \in C_b(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$ by
$$\tilde{\varphi}(i,x,t) = \varphi\bigl(i, g(i,x,t), t\bigr). \tag{4.11}$$
We denote by $C_b(d,E,v)$ the set of functions $\varphi \in C_b(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$ such that the function $\tilde{\varphi}$ is continuously differentiable in $t$ with derivative $\partial_t\tilde{\varphi} \in C_b(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$.

Remark 4.7. Note that, thanks to Lemma 2.1, the functions $\varphi$ and the functions $\tilde{\varphi}$ are in one-to-one correspondence because
$$\varphi(i,x,t) = \tilde{\varphi}\bigl(i, g(i,x,-t), t\bigr). \tag{4.12}$$
If $\tilde{\varphi}$ is continuously differentiable in $t$ and such that its derivative $\partial_t\tilde{\varphi} \in C_b(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$, then the function $\varphi$ defined by (4.12) belongs to $C_b(d,E,v)$. Also, note that if $\varphi \in C_b^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$, the function $\partial_t\tilde{\varphi}$ belongs to $C_b(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$ and is such that
$$\partial_t\tilde{\varphi}(i,y,s) = \partial_t\varphi\bigl(i,g(i,y,s),s\bigr) + v\bigl(i,g(i,y,s)\bigr)\cdot\nabla\varphi\bigl(i,g(i,y,s),s\bigr). \tag{4.13}$$
This yields
$$\partial_t\tilde{\varphi}\bigl(i,g(i,x,-s),s\bigr) = \partial_t\varphi(i,x,s) + v(i,x)\cdot\nabla\varphi(i,x,s), \quad (i,x,s) \in E \times \mathbb{R}^d \times \mathbb{R}_+, \tag{4.14}$$
which is the usual expression of the derivative along the velocity field $v(i,\cdot)$.

This last expression leads to the following notation: for all $\varphi \in C_b(d,E,v)$, we denote by $\partial_{t,v}\varphi \in C_b(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$ the function defined by
$$\partial_{t,v}\varphi(i,x,s) = \partial_t\tilde{\varphi}\bigl(i,g(i,x,-s),s\bigr). \tag{4.15}$$
(Recall that $\partial_t\tilde{\varphi}$ is the partial derivative of $\tilde{\varphi}$ with respect to its third argument.)

Remark 4.8. Suppose that $\varphi \in C_b^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$ with $\partial_t\varphi(i,\cdot,\cdot) + v(i,\cdot)\cdot\nabla\varphi(i,\cdot,\cdot) \in C_b(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})$ for all $i \in E$. Then $\varphi \in C_b(d,E,v)$. In particular, any function $\varphi \in C_c^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$ belongs to $C_b(d,E,v)$. Similarly, any function $\varphi \in C_b^1(\mathbb{R}^d,\mathbb{R})^E$ which is constant in time and such that $v(i,\cdot)\cdot\nabla\varphi(i,\cdot) \in C_b(\mathbb{R}^d,\mathbb{R})$ belongs to $C_b(d,E,v)$.

We will also need the following result.

Lemma 4.9. If $\tilde{\varphi} \in C_c^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$, then $\varphi \in C_c^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$.

Proof. Let $\tilde{\varphi} \in C_c^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$. The only point to show is that the function $\varphi$ has a compact support: as $\tilde{\varphi}$ has a compact support, there are some constants $M > 0$, $T > 0$ such that $\tilde{\varphi}(i,x,t) = 0$ if $t > T$ or $\|x\| > M$, for all $i \in E$. Now, let $(i,x,t)$ be such that $\varphi(i,x,t) \ne 0$.


We derive that $0 \le t \le T$ and $\|g(i,x,-t)\| \le M$. Then,
$$\begin{aligned}
\|x\| &\le \bigl\| x - g(i,x,-t) \bigr\| + \bigl\| g(i,x,-t) \bigr\| \\
&\le \bigl\| g\bigl(i,g(i,x,-t),t\bigr) - g\bigl(i,g(i,x,-t),0\bigr) \bigr\| + M \\
&\le \int_0^t \bigl\| v\bigl(i, g\bigl(i,g(i,x,-t),s\bigr)\bigr) \bigr\|\, ds + M \\
&\le V_1 \int_0^t \bigl\| g\bigl(i,g(i,x,-t),s\bigr) \bigr\|\, ds + V_2 t + M.
\end{aligned} \tag{4.16}$$

Moreover, for $0 \le s \le t \le T$, we have
$$\begin{aligned}
\bigl\| g\bigl(i,g(i,x,-t),s\bigr) \bigr\| &\le \bigl\| g(i,x,-t) \bigr\| + \bigl\| g\bigl(i,g(i,x,-t),s\bigr) - g\bigl(i,g(i,x,-t),0\bigr) \bigr\| \\
&\le M + V_1 \int_0^s \bigl\| g\bigl(i,g(i,x,-t),u\bigr) \bigr\|\, du + V_2 s \quad \text{(as previously)} \\
&\le M + V_2 T + V_1 \int_0^s \bigl\| g\bigl(i,g(i,x,-t),u\bigr) \bigr\|\, du.
\end{aligned} \tag{4.17}$$

Using Gronwall's lemma, we derive
$$\bigl\| g\bigl(i,g(i,x,-t),s\bigr) \bigr\| \le \bigl(M + V_2 T\bigr) e^{V_1 s} \le \bigl(M + V_2 T\bigr) e^{V_1 T}. \tag{4.18}$$
Inequality (4.16) now implies that
$$\|x\| \le V_1 T \bigl(M + V_2 T\bigr) e^{V_1 T} + V_2 T + M, \tag{4.19}$$
and hence $\varphi$ has a compact support. This completes the proof.

We are now ready to state the main result of this section.

Proposition 4.10. Let $m$ be a Radon measure such that $m(\cdot,\mathbb{R}^d \times [0,T]) < \infty$ for all $T \in \mathbb{R}_+$ and such that (4.8) is fulfilled for all $\varphi \in C_c^1(\mathbb{R}^d \times \mathbb{R}_+,\mathbb{R})^E$. Then,

$$\begin{aligned}
0 &= \int_{\mathbb{R}^d \times \mathbb{R}_+} \sum_{i \in E} \sum_{j \in E} a(i,j,x) \Bigl( \int \varphi(j,y,s)\,\mu(i,j,x)(dy) - \varphi(i,x,s) \Bigr)\, m(i,dx,ds) \\
&\quad + \int_{\mathbb{R}^d \times \mathbb{R}_+} \sum_{i \in E} \partial_{t,v}\varphi(i,x,s)\, m(i,dx,ds) + \sum_{i \in E} \int \varphi(i,x,0)\,\rho_0(i,dx),
\end{aligned} \tag{4.20}$$
for all functions $\varphi \in C_b(d,E,v)$ such that $\varphi(\cdot,\cdot,t) = 0$ for all $t > T$, for some $T > 0$.

Proof. We first prove the result in the case where $\varphi \in C_b(d,E,v)$ is Lipschitz continuous with a compact support.

We need classical mollifying functions. First, let $C_c^\infty(\mathbb{R})$ (resp., $C_c^\infty(\mathbb{R}^d)$) be the set of real-valued infinitely differentiable functions with a compact support in $\mathbb{R}$ (resp., $\mathbb{R}^d$).

Then, let $\bar{\rho}_1 \in C_c^\infty(\mathbb{R})$ be such that $\bar{\rho}_1(s) = 0$ for all $s \in (-\infty,-1] \cup [0,+\infty)$, $\bar{\rho}_1(s) \ge 0$ for all $s \in [-1,0]$, and $\int_{\mathbb{R}} \bar{\rho}_1(s)\,ds = 1$. For all $n \in \mathbb{N}$, we define the function $\bar{\rho}_n \in C_c^\infty(\mathbb{R})$ by $\bar{\rho}_n(s) = n\,\bar{\rho}_1(ns)$, for all $s \in \mathbb{R}$. We also introduce a function $\rho_1 \in C_c^\infty(\mathbb{R}^d)$ such that $\rho_1(x) = 0$ for all $x \in \mathbb{R}^d$ with $|x| \ge 1$, $\rho_1(x) \ge 0$ for all $x \in \mathbb{R}^d$, and $\int_{\mathbb{R}^d} \rho_1(x)\,dx = 1$. For all $n \in \mathbb{N}$, we then define the function $\rho_n \in C_c^\infty(\mathbb{R}^d)$ by $\rho_n(x) = n^d \rho_1(nx)$, for $x \in \mathbb{R}^d$.
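For readers who want a concrete instance (ours; the paper leaves the choice free), the standard bump function does the job:
$$\rho_1(x) = C_d\, \exp\Bigl(\frac{-1}{1 - |x|^2}\Bigr)\,\mathbf{1}_{\{|x| < 1\}}, \qquad \bar{\rho}_1(s) = c\, \exp\Bigl(\frac{-1}{1 - (2s+1)^2}\Bigr)\,\mathbf{1}_{\{-1 < s < 0\}},$$
where $C_d$ and $c$ are normalizing constants; both functions are infinitely differentiable, nonnegative, supported in the required sets, and integrate to 1.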
