
$$
\begin{aligned}
\int_{[x_0,b]} P_t\big(z,[x_0,b]\big)\,P_s(x_0,dz)
&= \Big[(1-\Theta(x_0))\cdot\mathrm{Prob.}\{Y_j\ge t\} + \Theta(x_0)\cdot\mathrm{Prob.}\big\{S_{T^{++}(t,x_0,\omega)}\in[x_0,b]\big\}\Big]\cdot\mathrm{Prob.}\{Y_j\ge s\}\\
&\qquad + \Theta(x_0)\cdot\int_{]x_0,b]}\mathrm{Prob.}\big\{S_{T^{++}(t,z,\omega)}\in[x_0,b]\big\}\cdot\frac{d}{dz}\,\mathrm{Prob.}\big\{S_{T^{++}(s,x_0,\omega)}\in[x_0,z]\big\}\,dz\\
&= (1-\Theta(x_0))\cdot\mathrm{Prob.}\{Y_j\ge t+s\} + \Theta(x_0)\cdot\mathrm{Prob.}\big\{S_{T^{++}(t,x_0,\omega)}\in[x_0,b]\big\}\cdot\mathrm{Prob.}\big\{S_{T^{++}(s,x_0,\omega)}=x_0\big\}\\
&\qquad + \Theta(x_0)\cdot\int_{]x_0,b]}\mathrm{Prob.}\big\{S_{T^{++}(t,z,\omega)}\in[x_0,b]\big\}\cdot\frac{d}{dz}\,\mathrm{Prob.}\big\{S_{T^{++}(s,x_0,\omega)}\in[x_0,z]\big\}\,dz\\
&= (1-\Theta(x_0))\cdot\mathrm{Prob.}\big\{S_{T(t+s,x_0,\omega)}=x_0\big\} + \Theta(x_0)\cdot\mathrm{Prob.}\big\{S_{T^{++}(t+s,x_0,\omega)}\in[x_0,b]\big\}\\
&= P_{t+s}\big(x_0,[x_0,b]\big).
\end{aligned}
$$

5 Characterization of Markov semigroups compatible with the ODE

The goal of the present section is to describe the most general Markov semigroup whose sample paths are all solutions to the ODE (1.1). Our main result shows that all of these Markov semigroups are of the form considered in Proposition 4.1. Namely, they are all obtained starting with a deterministic semigroup and performing two modifications: (i) adding a countable number of random waiting times at points $y_j\in f^{-1}(0)$, each with a Poisson distribution, and (ii) assigning the probabilities of moving upwards or downwards at isolated points from where both an increasing and a decreasing solution can initiate.
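As an illustration of this structure (a minimal hypothetical sketch, not the construction of Proposition 4.1 itself), the following snippet simulates one sample path for $f(x)=\mathrm{sign}(x)\sqrt{|x|}$: the path waits at the equilibrium $x=0$ for an exponential time $Y$, then moves upward with probability $\Theta$ along $x(t)=((t-Y)/2)^2$, or downward along its mirror image. The rate `lam` and probability `theta` are made-up parameters.

```python
import random

# Hypothetical sketch: one sample path of a Markov semigroup compatible with
# x' = f(x), f(x) = sign(x)*sqrt(|x|).  The path waits at the equilibrium
# x = 0 for an Exp(lam) time Y, then moves upward with probability theta
# along x(t) = ((t-Y)/2)**2, or downward along the mirror solution.
def sample_path(theta=0.7, lam=2.0, seed=0):
    rng = random.Random(seed)
    Y = rng.expovariate(lam)       # random waiting time at the zero of f
    up = rng.random() < theta      # coin flip at the isolated branching point
    def X(t):
        if t <= Y:
            return 0.0             # still waiting at x = 0
        s = (t - Y) / 2.0
        return s * s if up else -(s * s)
    return X

X = sample_path()
print([round(X(t), 3) for t in (0.0, 0.5, 1.0, 2.0)])
```

Both branches are genuine Carathéodory solutions of $\dot x=\mathrm{sign}(x)\sqrt{|x|}$ starting from $0$, which is why a deterministic flow alone cannot describe this dynamics.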

Theorem 5.1 Let $f$ be a function satisfying (A1)-(A2). The following statements are equivalent.

(I) The random variables $X(t,x_0,\omega)$ yield a Markov process whose sample paths are solutions to the ODE (1.1)-(1.2).

(II) There exist: (i) a positive, atomless Borel measure $\mu$ supported on $f^{-1}(0)$, (ii) a countable set $S\subseteq f^{-1}(0)$ of stationary points, (iii) a countable set $\mathcal S=\{y_j : j\ge 1\}\subseteq f^{-1}(0)$ and corresponding numbers $\lambda_j>0$ determining the Poisson waiting times, and (iv) a map $\Theta:\Omega\to[0,1]$, such that the transition kernels $P_t(x_0,A)=\mathrm{Prob.}\{X(t,x_0,\omega)\in A\}$ coincide with the corresponding ones constructed at (4.3), (4.6), (4.9), (4.11).

Proof. The implication (II) $\Longrightarrow$ (I) was proved in Proposition 4.1. Here we need to prove (I) $\Longrightarrow$ (II).

1. We begin by defining two subsets of $f^{-1}(0)$:
$$\Omega^0_X \,\doteq\, \big\{x_0\,;\ \mathrm{Prob.}\{X(1,x_0,\omega)=x_0\}=1\big\},\tag{5.1}$$
$$\mathcal S \,\doteq\, \big\{x_0\,;\ 0<\mathrm{Prob.}\{X(1,x_0,\omega)=x_0\}<1\big\}.\tag{5.2}$$

Here $\Omega^0_X$ is the set of points where the motion stops forever, while $\mathcal S$ contains the points where the motion stops for a random time, then starts again. By the Markov property and the fact that all solutions of (1.1) are monotone, $x_0\in\Omega^0_X$ implies $X(t,x_0,\omega)=x_0$ for all $t\ge 0$ and a.e. $\omega$. We claim that for each $x_0\in\mathcal S$, there holds

$$\mathrm{Prob.}\{X(t,x_0,\omega)=x_0\} \,=\, e^{-\lambda t} \qquad\text{for all } t\ge 0,\tag{5.3}$$
with $\lambda \doteq -\log\big(\mathrm{Prob.}\{X(1,x_0,\omega)=x_0\}\big)<\infty$. Indeed, for $t\ge 0$, the map
$$t\,\mapsto\, g(t) \,=\, \log\big(\mathrm{Prob.}\{X(t,x_0,\omega)=x_0\}\big)\,\in\,]-\infty,0]$$
is nonincreasing. By the Markov property, it satisfies
$$g(0)=0, \qquad g(t+s)=g(t)+g(s) \quad\text{for all } t,s>0.$$
Therefore, $g(q)=-\lambda q$ for some $\lambda>0$ and all positive rational numbers $q\in\mathbb Q^+$. Since $g$ is monotone, we conclude that $g$ is continuous on $[0,+\infty[$ and hence
$$\mathrm{Prob.}\{X(t,x_0,\omega)=x_0\} \,=\, e^{g(t)} \,=\, e^{-\lambda t} \qquad\text{for all } t\in[0,+\infty[.$$
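The exponential law in Step 1 can be checked by a quick Monte Carlo experiment (an assumed toy setup, not part of the proof): if the motion restarts after an exponential waiting time of rate `lam`, then $p(t)=\mathrm{Prob.}\{X(t,x_0,\omega)=x_0\}$ matches $e^{-\lambda t}$ and $g(t)=\log p(t)$ is additive.

```python
import math
import random

# Monte Carlo sketch of Step 1 (assumed setup): the motion stays at x0 until
# an exponential waiting time of rate lam elapses.  Then
# p(t) = Prob.{X(t,x0,w) = x0} = exp(-lam*t), and g(t) = log p(t) satisfies
# g(t+s) = g(t) + g(s), as claimed in (5.3).
rng = random.Random(1)
lam, n = 1.5, 200_000
waits = [rng.expovariate(lam) for _ in range(n)]

def p_still_at_x0(t):
    # empirical fraction of paths that have not yet left x0 at time t
    return sum(w >= t for w in waits) / n

print(round(p_still_at_x0(0.5), 3), round(math.exp(-lam * 0.5), 3))
```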

2. Next, we need to identify the maximal intervals where the random dynamics is increasing or decreasing.

Definition 5.1 We say that an interval $J=[a,b[$ or $J=]a,b[$ (open to the right) is a domain of increasing random dynamics if for every $x_0,\hat x\in J$ with $x_0<\hat x$, one has
$$\lim_{t\to\infty}\,\mathrm{Prob.}\big\{X(t,x_0,\omega)>\hat x\big\} \,=\, 1.\tag{5.4}$$
Similarly, we say that an interval $J=]a,b]$ or $J=]a,b[$ (open to the left) is a domain of decreasing random dynamics if, for every $x_0,\hat x\in J$ with $\hat x<x_0$, one has
$$\lim_{t\to\infty}\,\mathrm{Prob.}\big\{X(t,x_0,\omega)<\hat x\big\} \,=\, 1.\tag{5.5}$$

We observe that, if two intervals of increasing random dynamics have nonempty intersection, then their union is still a domain of increasing random dynamics. We can thus define a countable number of maximal open intervals $J_k^+=]a_k,b_k[$, $k\in\mathcal J^+$, where the random dynamics is increasing, and a countable number of maximal open intervals $J_k^-=]c_k,d_k[$, $k\in\mathcal J^-$, where the random dynamics is decreasing. Recalling (5.1), we shall consider the countable set
$$S \,=\, \Big(\big\{a_k,b_k : k\in\mathcal J^+\big\} \cup \big\{c_k,d_k : k\in\mathcal J^-\big\}\Big)\cap\Omega^0_X.\tag{5.6}$$
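The merging of overlapping domains into maximal ones is, in effect, an interval-union computation. A small illustrative sketch (the endpoints are made up; open intervals that merely touch at an endpoint do not merge):

```python
# Illustrative sketch: merging overlapping open intervals of increasing
# dynamics into the maximal domains J_k^+ = ]a_k, b_k[.  Sorting by left
# endpoint and sweeping once produces the maximal family.
def maximal_domains(intervals):
    merged = []
    for a, b in sorted(intervals):
        if merged and a < merged[-1][1]:      # overlaps the previous domain
            merged[-1] = (merged[-1][0], max(merged[-1][1], b))
        else:
            merged.append((a, b))
    return merged

print(maximal_domains([(0, 2), (1, 3), (5, 6), (2.5, 2.8)]))
```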

Next, we consider the set $\Omega$ of isolated points from which both a decreasing and an increasing trajectory can initiate. For $z_k\in\Omega\setminus S$, we define the probability of moving upwards as
$$\Theta(z_k) \,\doteq\, \mathrm{Prob.}\big\{X(t,z_k,\omega)>z_k \ \text{for some } t>0\big\}.\tag{5.7}$$
Of course, starting from $z_k$, the probability of moving downwards is thus $1-\Theta(z_k)$.

3. Consider a maximal domain of increasing random dynamics, say J = ]a, b[ or J = [a, b[.

By definition, for any $a<x<y<b$ we have that
$$\mathrm{Prob.}\big\{X(\tau,x,\omega)<y\big\} \,\doteq\, \delta \,<\, 1\tag{5.8}$$
for some $\tau>0$. This implies
$$\mathrm{Prob.}\big\{X(k\tau,x,\omega)<y\big\} \,\le\, \delta^k \qquad\text{for all } k\ge 1.\tag{5.9}$$
Call $T=T_{xy}>0$ the random time needed for a solution starting at $x$ to reach $y$. By (5.9) it follows that
$$\mathrm{Prob.}\{T>k\tau\} \,\le\, \delta^k.\tag{5.10}$$
By (5.10), the probability distribution of $T=T_{xy}$ has moments of all orders. In particular, its mean and its variance are finite.

As a consequence, for every $x\le z<z'\le y$ the random variable $T_{zz'}$ has moments of all orders. Its expected value and its variance satisfy
$$\mathbb E[T_{zz'}] \,\le\, \mathbb E[T_{xy}], \qquad \mathrm{Var.}(T_{zz'}) \,\le\, \mathrm{Var.}(T_{xy}).\tag{5.11}$$
Now consider the points $y_j\in[x,y]$ where a random waiting time $Y_j$ occurs. The Markov property implies that $Y_j$ has a Poisson distribution as in (4.1), with expected value $\mathbb E[Y_j]=\frac{1}{\lambda_j}$. From (5.11), it follows that for every finite subset $S_1$ of $\mathcal S\cap[x,y]$
$$\sum_{y_j\in S_1}\frac{1}{\lambda_j} \,=\, \sum_{y_j\in S_1}\mathbb E[Y_j] \,\le\, \mathbb E[T_{xy}] \,<\, +\infty.$$
In particular,
$$\#\big\{y_j\in\mathcal S\cap[x,y] \,:\, \lambda_j\le n\big\} \,\le\, n\cdot\mathbb E[T_{xy}] \qquad\text{for all } n\ge 1,$$
and this shows that the set $\mathcal S$ of points where a random waiting time occurs is at most countable. Thus, we can write $\mathcal S\cap J=\{y_1,y_2,\ldots\}$ and
$$\sum_{z\le y_j<z'}\frac{1}{\lambda_j} \,=\, \sum_{z\le y_j<z'}\mathbb E[Y_j] \,\le\, \mathbb E[T_{zz'}].\tag{5.12}$$
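Estimate (5.12) reflects the fact that crossing an interval costs the deterministic travel time plus the independent waiting times met along the way. A Monte Carlo sketch with made-up data (the travel time `T_det` and rates `lams` are assumptions, not quantities from the proof):

```python
import random

# Monte Carlo sketch of (5.12) (hypothetical data): the crossing time of
# [x, y] is a deterministic travel time T_det plus independent exponential
# waiting times Y_j of rate lam_j, so E[T_xy] = T_det + sum_j 1/lam_j; in
# particular sum_j 1/lam_j <= E[T_xy] < +infinity.
rng = random.Random(2)
T_det = 1.0                      # assumed deterministic travel time
lams = [0.5, 2.0, 4.0]           # assumed rates lam_j
n = 100_000
mean_T = sum(T_det + sum(rng.expovariate(l) for l in lams)
             for _ in range(n)) / n
print(round(mean_T, 2), T_det + sum(1.0 / l for l in lams))
```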

4. To help the reader, we provide here an outline of the remaining steps of the proof. Let $J=]a,b[$ or $J=[a,b[$ be a maximal domain of increasing random dynamics and $\{y_1,y_2,\ldots\}=J\cap\mathcal S$ be the set of points where the trajectories stop for a random time $Y_j$, then start again. We can then define a new Markov semigroup by removing these waiting times. More precisely, for each $n\ge 1$, we define a Markov semigroup $S^{(n)}$ whose trajectories $t\mapsto X^{(n)}(t,x_0,\omega)$ satisfy
$$X^{(n)}\big(\tau_n(t,x_0,\omega),\,x_0,\,\omega\big) \,=\, X(t,x_0,\omega),$$
where
$$\tau_n(t,x_0,\omega) \,\doteq\, t-\mathrm{meas}\big\{s\in[0,t]\,;\ X(s,x_0,\omega)\in\{y_1,\ldots,y_n\}\big\}.$$
In turn, given $S^{(n)}$, we can recover the original semigroup $S$ by inserting back the waiting times at the points $y_1,\ldots,y_n$.

Now consider the limit semigroup $\widetilde S=\lim_{n\to\infty}S^{(n)}$. Since we removed all the random waiting times, restricted to the interval $J$ the trajectories of $\widetilde S$ are strictly increasing with probability one. This will imply that $\widetilde S$ is a deterministic semigroup, hence it admits the representation proved in Theorem 3.1. Namely, there exists a positive atomless measure $\mu$ on $J\cap f^{-1}(0)$ such that the trajectories of $\widetilde S$ are determined by the formula (3.10). In turn, the original Markov semigroup $S$ can be recovered from $\widetilde S$ by adding back the random waiting times $Y_j$ at the points $y_j$. This will provide the desired characterization of $S$.
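The time change $\tau_n$ removing a waiting time can be sketched concretely on a toy path with a single pause (the waiting point `Y1` and pause length `WAIT` are made-up numbers):

```python
# Sketch of the time change in Step 4 (toy example): a unit-speed path from
# x0 = 0 pauses at y1 = 1.0 for WAIT time units.  Removing the time spent
# at y1 via tau(t) = t - meas{s in [0,t] : X(s) = y1} recovers the
# deterministic, never-waiting path X1.
Y1, WAIT = 1.0, 0.8      # made-up waiting point and pause length

def X(t):
    # original path: flow, pause at Y1, flow again
    if t <= Y1:
        return t
    if t <= Y1 + WAIT:
        return Y1
    return t - WAIT

def tau(t):
    # time elapsed minus time spent sitting at Y1
    return t - max(0.0, min(t - Y1, WAIT))

def X1(s):
    # path with the waiting time removed: X1(tau(t)) = X(t)
    return s

for t in (0.5, 1.3, 2.5):
    assert abs(X1(tau(t)) - X(t)) < 1e-12
print("X1(tau(t)) == X(t) on the sampled times")
```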

5. We now work out the details. Consider the first point $y_1$. We claim that, starting with the Markov semigroup $S$ and removing the random waiting time $Y_1(\omega)$ at $y_1$, we obtain another Markov semigroup $S^{(1)}$.

Indeed, given an initial point $x_0\in J$, let $t\mapsto X(t,x_0,\omega)$ be a sample path of the original process, and define the new path $X^{(1)}(\cdot,x_0,\omega)$ by setting
$$X^{(1)}(t,x_0,\omega) \,=\, \begin{cases} X(t,x_0,\omega) & \text{if } X(t,x_0,\omega)\in\,]a,y_1[\ \text{ or } x_0>y_1,\\[1mm] X(t+Y_1(\omega),x_0,\omega) & \text{if } X(t,x_0,\omega)\in[y_1,b[\ \text{ and } x_0\le y_1.\end{cases}$$
We claim that the transition probability kernels
$$P^{(1)}_\tau(x_0,A) \,\doteq\, \mathrm{Prob.}\big\{X^{(1)}(\tau,x_0,\omega)\in A\big\}$$
satisfy the Chapman-Kolmogorov equation. Indeed, for any Borel set $B\subset\,]a,b[\,$, the following holds.

If $x_0>y_1$, then
$$P^{(1)}_t(x_0,B) \,=\, P_t(x_0,B).$$
Otherwise, if $x_0\le y_1$, setting $B_1=\,]a,y_1[\,\cap B$ and $B_2=[y_1,b[\,\cap B$, one obtains
$$\begin{aligned}
P^{(1)}_t(x_0,B) &\,=\, \mathrm{Prob.}\{X(t,x_0,\omega)\in B_1\} + \mathrm{Prob.}\big\{X(t+Y_1(\omega),x_0,\omega)\in B_2\big\}\\
&\,=\, P_t(x_0,B_1) + \int_0^\infty \mathrm{Prob.}\big\{X(t+\tau,x_0,\omega)\in B_2\big\}\,\lambda_1 e^{-\lambda_1\tau}\,d\tau\\
&\,=\, P_t(x_0,B_1) + \int_0^\infty P_{t+\tau}(x_0,B_2)\cdot\lambda_1 e^{-\lambda_1\tau}\,d\tau\,.
\end{aligned}$$

Therefore, for any $s,t>0$ and any Borel set $A\subseteq\,]a,b[\,$, if $x_0>y_1$ then we trivially have
$$\int P^{(1)}_t(z,A)\,P^{(1)}_s(x_0,dz) \,=\, \int P_t(z,A)\,P_s(x_0,dz) \,=\, P_{t+s}(x_0,A) \,=\, P^{(1)}_{t+s}(x_0,A).$$
Otherwise, if $x_0\le y_1$, then we consider the sets $A_1=A\,\cap\,]a,y_1[\,$, $A_2=A\cap[y_1,b[\,$, and compute
$$\begin{aligned}
\int P^{(1)}_t(z,A)\,P^{(1)}_s(x_0,dz)
&\,=\, \int P_t(z,A_1)\,P_s(x_0,dz) + \int P_t(z,A_2)\cdot\int_0^\infty P_{s+\tau}(x_0,dz)\cdot\lambda_1 e^{-\lambda_1\tau}\,d\tau\\
&\,=\, P_{t+s}(x_0,A_1) + \int_0^\infty\Big(\int P_t(z,A_2)\cdot P_{s+\tau}(x_0,dz)\Big)\cdot\lambda_1 e^{-\lambda_1\tau}\,d\tau\\
&\,=\, P_{t+s}(x_0,A_1) + \int_0^\infty P_{t+s+\tau}(x_0,A_2)\cdot\lambda_1 e^{-\lambda_1\tau}\,d\tau \,=\, P^{(1)}_{t+s}(x_0,A).
\end{aligned}$$
Hence, the process $S^{(1)}$ with sample paths $t\mapsto X^{(1)}(t,x_0,\omega)$ is a Markov semigroup.
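The role of memorylessness in this computation can be checked numerically. In the following hedged sketch (a toy model with unit-speed motion and a single waiting point; all parameters are made up), running the process for $t+s$ in one shot, or for $s$ and then restarting from the observed state with a fresh exponential pause, yields the same law:

```python
import random

# Monte Carlo sketch of Step 5 (toy model, parameters made up): a unit-speed
# motion from x0 = 0 pauses at y1 = 1.0 for an Exp(LAM) time, then resumes.
# Because the exponential pause is memoryless, restarting from the state
# observed at time s (with a fresh pause draw) reproduces the law of the
# one-shot run over [0, t+s] -- the Chapman-Kolmogorov identity.
rng = random.Random(3)
LAM, Y1 = 2.0, 1.0

def run(x, horizon):
    if x < Y1:                      # deterministic stretch before the pause
        step = min(horizon, Y1 - x)
        x, horizon = x + step, horizon - step
    if horizon > 0 and x == Y1:     # fresh Exp(LAM) pause: memorylessness
        w = rng.expovariate(LAM)
        if w >= horizon:
            return Y1
        horizon -= w
    return x + horizon              # deterministic stretch after the pause

n, s, t = 100_000, 1.2, 0.6
one_shot = sum(run(0.0, s + t) for _ in range(n)) / n
two_step = sum(run(run(0.0, s), t) for _ in range(n)) / n
print(round(one_shot, 3), round(two_step, 3))
```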

6. By induction, for each $n\ge 1$, we consider the process $S^{(n)}$ obtained from $S^{(n-1)}$ by removing the Poisson waiting time at the point $y_n\in J\cap\mathcal S$. By the previous argument, every $S^{(n)}$ is a Markov semigroup. Conversely, one can recover the original Markov semigroup $S$ starting with $S^{(n)}$ and adding the Poisson waiting times $Y_1,\ldots,Y_n$ at the points $y_1,\ldots,y_n$. If the set $J\cap\mathcal S$ contains $N$ points, the induction argument is concluded in $N$ steps. Otherwise, since the sequence of trajectories $t\mapsto X^{(n)}(t,x_0,\omega)$ is monotone increasing and bounded for $t$ in bounded sets, for every $\omega$ there exists the limit $X^{(n)}(t,x_0,\omega)\to\widetilde X(t,x_0,\omega)$, uniformly for $t$ in bounded sets. By part (iii) of Theorem 2.1, each limit trajectory $t\mapsto\widetilde X(t,x_0,\omega)$ is still a solution of the ODE (1.1).

In the next step we will prove that this limit process is deterministic. Namely,
$$\mathrm{Var.}\big(\widetilde X(t,x_0,\omega)\big) \,=\, 0,\tag{5.13}$$
for every $x_0\in J$ and $t>0$.

7. Given $x_0<\hat x$ in $J$, let us define
$$T_n^{x_0\hat x}(\omega) \,\doteq\, \inf\big\{t>0\,;\ X^{(n)}(t,x_0,\omega)\ge\hat x\big\},\qquad \widetilde T^{x_0\hat x}(\omega) \,\doteq\, \inf\big\{t>0\,;\ \widetilde X(t,x_0,\omega)\ge\hat x\big\},$$
the random times needed to reach $\hat x$, for a trajectory starting at $x_0$, having removed respectively the first $n$ waiting times at $y_1,\ldots,y_n$, or all waiting times at the points $y_j\in J\cap\mathcal S$.

Let $\varepsilon_0>0$ be such that $\mathbb E\big[\widetilde T^{x_0\hat x}\big]>\varepsilon_0$. By (5.12) we can choose $n$ large enough so that
$$\sum_{j>n,\ x_0\le y_j<\hat x}\frac{1}{\lambda_j} \,<\, \varepsilon_0.\tag{5.14}$$

We now consider the expected values of these random times. By construction, the map $y\mapsto\mathbb E\big[T_n^{x_0 y}\big]$ is nondecreasing, and has upward jumps in the amount $\frac{1}{\lambda_j}$ at each point $y_j$, with $j>n$. By (5.14), we can introduce a partition
$$x_0 \,<\, x_1 \,<\, \cdots \,<\, x_\nu \,=\, \hat x$$
such that $\cdots$ Observing that $\Phi$ is a non-increasing function which satisfies $\Phi(4jK/\nu)\le\mathrm{Prob.}\{\cdots\}$, we estimate the variance$^1$
$$\mathrm{Var.}\big(\widetilde T^{x_{i-1}x_i}\big) \,\le\, \cdots$$
Since $\varepsilon_0>0$ can be chosen arbitrarily small, we conclude that the variance of $\widetilde T^{x_0\hat x}$ is zero. Hence, the limit process is deterministic.

$^1$Indeed, differentiating the identity $\cdots$

8. We now define
$$\widetilde S_t\,x_0 \,\doteq\, \lim_{n\to\infty}\,\mathbb E\big[X^{(n)}(t,x_0,\omega)\big].\tag{5.19}$$
The existence of the limit follows trivially by monotonicity. Since the variance of the random variables $X^{(n)}$ goes to zero, again by (iii) in Theorem 2.1 it follows that each trajectory $t\mapsto\widetilde S_t\,x_0$ is a Carathéodory solution to (1.1).

We claim that $\widetilde S$ is a deterministic semigroup. Toward this goal, we first observe that the maps
$$t\,\mapsto\,\widetilde S_t\,x_0, \qquad x_0\,\mapsto\,\widetilde S_t\,x_0$$
are both Lipschitz continuous and strictly increasing (as long as $\widetilde S_t\,x_0<b$). To prove the semigroup identity
$$\widetilde S_s\big(\widetilde S_t\,x_0\big) \,=\, \widetilde S_{t+s}\,x_0,\tag{5.20}$$
set $x_1=\widetilde S_t\,x_0$ and let $\varepsilon>0$ be given. Choose $\delta>0$ such that
$$\widetilde S_s(x_1-\delta) \,>\, \widetilde S_s\,x_1-\varepsilon, \qquad \widetilde S_s(x_1+\delta) \,<\, \widetilde S_s\,x_1+\varepsilon.\tag{5.21}$$
We now have

$$\lim_{n\to\infty}\,\mathrm{Prob.}\big\{X^{(n)}(t,x_0,\omega)\in[x_1-\delta,\,x_1+\delta]\big\} \,=\, 1.$$
By (5.21) and a comparison argument, this implies
$$\lim_{n\to\infty}\,\mathrm{Prob.}\big\{X^{(n)}(t+s,x_0,\omega)\in[\widetilde S_s\,x_1-\varepsilon,\ \widetilde S_s\,x_1+\varepsilon]\big\} \,=\, 1.$$
Since $\varepsilon>0$ was arbitrary, this proves the semigroup property (5.20).

As a consequence, by Theorem 3.1, there exists a positive atomless measure $\mu$ supported on $J\cap f^{-1}(0)$ such that the trajectories of the semigroup are characterized by the formula (3.10).

Of course, the above construction of a measure $\mu$ can be repeated on every maximal interval $J$ of increasing or decreasing dynamics.

9. As in Proposition 4.1, given the sets $S,\mathcal S\subseteq f^{-1}(0)$, the positive, atomless measure $\mu$ on $f^{-1}(0)$, and the maps $\Lambda:\mathcal S\to\mathbb R^+$, $\Theta:\Omega\to[0,1]$, we construct a unique Markov semigroup $\widehat S$, with transition probabilities given at (4.3), (4.6), (4.9), (4.11). Recalling Definition 3.1, let $I_i^+$, $i\in\mathcal I^+$, and $I_i^-$, $i\in\mathcal I^-$, be respectively the domains of increase and the domains of decrease corresponding to $\mu$ and $S$. For any $k\in\mathcal J^+$ and $x_0\in J_k^+$ with $J_k^+=]a_k,b_k[$ or $J_k^+=[a_k,b_k[$, the map $[0,T_{b_k}[\,\ni t\mapsto\widetilde S_t\,x_0$ is strictly increasing and $\lim_{t\to T_{b_k}}\widetilde S_t\,x_0=b_k$ for some $T_{b_k}>0$. Thus, $J_k^+$ is a strictly increasing domain associated to $\widetilde S$, and this implies that
$$J_k^+ \,\subseteq\, I_i^+ \qquad\text{for some } i\in\mathcal I^+.$$

Similarly, for every $h\in\mathcal J^-$ it holds that $J_h^-\subseteq I_j^-$ for some $j\in\mathcal I^-$. Thus,
$$\bigcup_{k\in\mathcal J^+} J_k^+ \,\subseteq\, \bigcup_{i\in\mathcal I^+} I_i^+ \qquad\text{and}\qquad \bigcup_{h\in\mathcal J^-} J_h^- \,\subseteq\, \bigcup_{i\in\mathcal I^-} I_i^-.$$

In the remainder of the proof, we will show that the given Markov semigroup $S$ coincides with the semigroup $\widehat S$ constructed in Proposition 4.1. In other words, for every $x_0\in\mathbb R$, $t>0$ and any Borel set $A\subset\mathbb R$, the transition probabilities coincide:
$$P_t(x_0,A) \,=\, \widehat P_t(x_0,A) \,\doteq\, \mathrm{Prob.}\big\{\widehat S_t\,x_0\in A\big\}.\tag{5.22}$$

Four cases must be considered:

(i) $x_0\notin\mathcal S$ $\cdots$ By the previous construction in Proposition 4.1, it holds that $\widehat P_t\big(x_0,[x_0,\widetilde S_s(x_0)]\big)$ $\cdots$ Indeed, assume by contradiction that there exists $\hat x>x_0$ such that $\mathrm{Prob.}\{X(\hat t,x_0,\omega)\le\hat x\}=\delta<1$ for some $\hat t>0$. By the Markov property of $X$, we have
$$\mathrm{Prob.}\big\{X(k\hat t,x_0,\omega)\le\hat x\big\} \,\le\, \delta^k \qquad\text{for all } k\in\mathbb Z^+.$$
In particular, the monotone increasing property of the maps $t\mapsto X(t,x_0,\omega)$ implies that $\lim_{t\to\infty}\mathrm{Prob.}\{X(t,x_0,\omega)\le\hat x\}=0$. Hence, $x_0$ is in a domain of increasing random dynamics, and this yields a contradiction.

On the other hand, since $x_0\notin\mathcal S$, $\bigcup_{k\in\mathcal J^+}J_k^+\subseteq\bigcup_i I_i^+$ and $\bigcup_{h\in\mathcal J^-}J_h^-\subseteq\bigcup_{i\in\mathcal I^-}I_i^-$, one has that $x_0\in\big\{a_k,b_k : k\in\mathcal J^+\big\}\cup\big\{c_k,d_k : k\in\mathcal J^-\big\}$. Thus, (5.25) and (4.3) imply that $x_0\in S$ and
$$\widehat P_t\big(x_0,\{x_0\}\big) \,=\, 1 \,=\, P_t\big(x_0,\{x_0\}\big).$$

Similarly, one can prove (5.22) for $x_0$ in an interval of decreasing dynamics.

Finally, consider $x_0\in\big(I_i^+\cap I_{i'}^-\big)\setminus S$, for some $i\in\mathcal I^+$ and $i'\in\mathcal I^-$. It is sufficient to verify (5.22) for $A=[x_0,b]$, with $b\in I_i^+$. The case $A=[c,x_0]$, $c\in I_{i'}^-$, is entirely similar. If $x_0\notin\mathcal S$ then
$$\mathrm{Prob.}\{X(1,x_0,\omega)=x_0\} \,=\, 0 \qquad\text{and}\qquad \mathrm{Prob.}\{X(1,x_0,\omega)>x_0\} \,=\, \Theta(x_0),$$
and this implies
$$\begin{aligned}
\mathrm{Prob.}\{X(t,x_0,\omega)\in[x_0,b]\} &\,=\, \mathrm{Prob.}\big\{X(t,x_0,\omega)\in[x_0,b] \ \text{and}\ X(1,x_0,\omega)>x_0\big\}\\
&\,=\, \mathrm{Prob.}\{X(1,x_0,\omega)>x_0\}\cdot\mathrm{Prob.}\big\{X(t,x_0,\omega)\in[x_0,b] \,\big|\, X(1,x_0,\omega)>x_0\big\}\\
&\,=\, \Theta(x_0)\cdot\mathrm{Prob.}\big\{\widetilde S_{T^{++}(t,x_0,\omega)}\in[x_0,b]\big\} \,=\, \widehat P_t\big(x_0,[x_0,b]\big).
\end{aligned}$$

Otherwise, if $x_0=y_j\in\mathcal S$, then
$$\begin{aligned}
\mathrm{Prob.}\big\{X(t,x_0,\omega)\in[x_0,b]\big\}
&\,=\, \mathrm{Prob.}\big\{X(t,x_0,\omega)\in[x_0,b] \ \text{and}\ X(s,x_0,\omega)>x_0 \ \text{for some } s>0\big\}\\
&\qquad +\,\mathrm{Prob.}\big\{X(t,x_0,\omega)\in[x_0,b] \ \text{and}\ X(s,x_0,\omega)<x_0 \ \text{for some } s>0\big\}\\
&\,=\, \Theta(x_0)\cdot\mathrm{Prob.}\big\{\widetilde S_{T^{++}(t,x_0,\omega)}\in[x_0,b]\big\}\\
&\qquad +\,(1-\Theta(x_0))\cdot\mathrm{Prob.}\big\{X(t,x_0,\omega)=x_0 \,\big|\, X(s,x_0,\omega)<x_0 \ \text{for some } s>0\big\}\\
&\,=\, \Theta(x_0)\cdot\mathrm{Prob.}\big\{\widetilde S_{T^{++}(t,x_0,\omega)}\in[x_0,b]\big\} \,+\, (1-\Theta(x_0))\cdot\mathrm{Prob.}\big\{Y_j(\omega)\ge t\big\}\\
&\,=\, \widehat P_t\big(x_0,[x_0,b]\big).
\end{aligned}$$

This establishes the identity (5.22) for all initial points x0 ∈ R, completing the proof of the theorem.

