

HAL Id: hal-01352670

https://hal.inria.fr/hal-01352670

Preprint submitted on 8 Aug 2016

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Doubly probabilistic representation for the stochastic porous media type equation.

Viorel Barbu, Michael Röckner, Francesco Russo

To cite this version:

Viorel Barbu, Michael Röckner, Francesco Russo. Doubly probabilistic representation for the stochastic porous media type equation: Stochastic porous media with multiplicative noise. 2016. hal-01352670

Stochastic porous media with multiplicative noise.

Doubly probabilistic representation for the stochastic porous media type equation.

Viorel Barbu (1), Michael Röckner (2) and Francesco Russo (3)

August 3rd 2016

Summary: The purpose of the present paper consists in proposing and discussing a doubly probabilistic representation for a stochastic porous media equation in the whole space R, perturbed by a multiplicative colored noise. For almost all random realizations ω, one associates a stochastic differential equation in law with random coefficients, driven by an independent Brownian motion.

Key words: stochastic partial differential equations; infinite volume; singular porous media type equation; doubly probabilistic representation; multiplicative noise; singular random Fokker-Planck type equation; filtering.

2000 AMS-classification: 35R60; 60H15; 60H30; 60H10; 60G46; 35C99; 58J65; 82C31.

(1) Viorel Barbu, University Al.I. Cuza, Ro–6600 Iasi, Romania.
(2) Michael Röckner, Fakultät für Mathematik, Universität Bielefeld, D–33615 Bielefeld, Germany.
(3) Francesco Russo, ENSTA ParisTech, Université Paris-Saclay, Unité de Mathématiques appliquées, 828, boulevard des Maréchaux, F-91120 Palaiseau, France.

1 Introduction

We consider a function ψ : R → R and real functions e_0, . . . , e_N on R, for some strictly positive integer N. In the whole paper, the following assumption will be in force.

Assumption 1.1.
• ψ : R → R is a continuous function such that its restriction to R_+ is monotone increasing.
• |ψ(u)| ≤ const |u|, u ≥ 0; in particular, ψ(0) = 0. Moreover we also suppose that lim_{u→0} ψ(u)/u exists.
• Let e_i ∈ C_b^2(R), 0 ≤ i ≤ N.

Let T > 0 and (Ω, F, P) be a fixed probability space. A generic element of Ω will be denoted by ω. (F_t, t ∈ [0, T]) will stand for a filtration fulfilling the usual conditions, and we suppose F = F_T. Let µ(t, ξ), t ∈ [0, T], ξ ∈ R, be a random field of the type

µ(t, ξ) = Σ_{i=1}^N e_i(ξ) W_t^i + e_0(ξ) t,  t ∈ [0, T], ξ ∈ R,

where W^i, 1 ≤ i ≤ N, are independent continuous (F_t)-Brownian motions on (Ω, F, P), which are fixed from now on until the end of the paper. For technical reasons we will sometimes set W_t^0 ≡ t. We focus on a stochastic

partial differential equation of the following type:

∂_t X(t, ξ) = (1/2) ∂²_{ξξ} ψ(X(t, ξ)) + X(t, ξ) ∂_t µ(t, ξ),
X(0, dξ) = x_0(dξ),    (1.1)

which holds in the sense of Definition 2.9, where x_0 is a given probability measure on R. The stochastic multiplication above is of Itô type. We look for a solution of (1.1) with time evolution in L¹(R). Since ψ restricted to R_+ is non-negative, Assumption 1.1 implies ψ(u) = Φ²(u) u, u ≥ 0, Φ : R_+ → R being a non-negative continuous function which is bounded on R_+.
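For intuition, the random field µ can be sampled directly from its definition: draw N independent Brownian paths and combine them with the coefficient functions e_i. A minimal sketch, in which the grid and the particular bounded coefficients are illustrative assumptions, not choices made in the paper:

```python
import math
import random

def brownian_path(n_steps, dt, rng):
    """Cumulative sum of Gaussian increments: W_{t_0}, ..., W_{t_n}."""
    w, path = 0.0, [0.0]
    for _ in range(n_steps):
        w += rng.gauss(0.0, math.sqrt(dt))
        path.append(w)
    return path

def mu_field(e, n_steps=100, dt=0.01, xi_grid=None, seed=0):
    """Sample mu(t, xi) = sum_{i=1}^N e_i(xi) W^i_t + e_0(xi) t on a grid;
    e = [e_0, e_1, ..., e_N] is a list of real functions."""
    rng = random.Random(seed)
    if xi_grid is None:
        xi_grid = [0.1 * k for k in range(-20, 21)]
    n = len(e) - 1
    paths = [brownian_path(n_steps, dt, rng) for _ in range(n)]
    field = []
    for k in range(n_steps + 1):
        t = k * dt
        field.append([sum(e[i + 1](x) * paths[i][k] for i in range(n))
                      + e[0](x) * t for x in xi_grid])
    return field

# Illustrative bounded smooth coefficients (N = 2), not from the paper.
e = [lambda x: 0.5, lambda x: math.exp(-x * x), lambda x: math.sin(x)]
F = mu_field(e)
```

Each row of `F` is a spatial slice of one realization ω ↦ µ(t, ·, ω); at t = 0 the field vanishes identically, as the definition requires.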

Remark 1.2. 1. In the sequel we will consider, without further comment, extensions of ψ (and Φ) to the real line which fulfill the first two items of Assumption 1.1 for u ∈ R instead of u ≥ 0.

2. The restriction on u ↦ Φ(u) introduced in Assumption 1.1 to be continuous is not always necessary, but here we assume this for simplicity.

When ψ(u) = |u|^{m−1} u, m > 1, and µ ≡ 0, (1.1) is nothing else but the classical porous media equation. When ψ is a general increasing function (and µ ≡ 0), there are several contributions to the analytical study of (1.1), starting from [12] for existence, [15] for uniqueness in the case of bounded solutions and [13] for continuous dependence on the coefficients. Those are the classical references when the space variable varies on the real line. For equations in a bounded domain and Dirichlet boundary conditions, for simplicity, we only refer to monographs, e.g. [28, 26, 1, 2].

As far as the stochastic porous media equation is concerned, most of the work on existence and uniqueness concerns the case of a bounded domain, see for instance [4, 5, 3]. In the infinite volume case, i.e. when the underlying domain is R^d, well-posedness was fully analyzed in [22] when ψ is polynomially bounded (including the fast diffusion case) and the space dimension is d ≥ 3. [8] established existence and uniqueness for any dimension d ≥ 1, and the authors obtained estimates for finite time extinction. To the best of our knowledge, besides [22] and [8], the present paper seems to be the only work concerning a stochastic porous media type equation in infinite volume.

We provide a probabilistic representation of solutions to (1.1), extending the results of [14, 6] which treated the deterministic case µ ≡ 0. In the deterministic case, it seems that the first author who considered a probabilistic representation (of the type studied in this paper) for the solutions of a non-linear deterministic PDE was McKean [19], particularly in relation with the so-called propagation of chaos. In his case, however, the coefficients were smooth. From then on the literature steadily grew and nowadays there is a vast amount of contributions to the subject, see the reference list of [14, 6]. A probabilistic representation when ψ(u) = |u|^{m−1} u, m > 1, was provided for instance in [11], in the case of the classical porous media equation. When m < 1, i.e. in the case of the fast diffusion equation, [9] provides a probabilistic representation of the so-called Barenblatt solution, i.e. the solution whose initial condition is concentrated at zero.

[14, 6] discussed the probabilistic representation when µ = 0 in the so-called non-degenerate and degenerate case respectively (see Definition 6.1), where ψ also may have jumps.

In the case µ = 0, the equation (1.1) models a non-linear phenomenon macroscopically. Let us denote by u : [0, T] × R → R the solution of that equation. The idea of the probabilistic representation is to find a process (Y_t, t ∈ [0, T]) whose law at time t has u(t, ·) for its density. In this case the equation (1.1) is conservative, in the sense that the integral (mass) of the solution is conserved along the time. The process Y turns out to be the weak solution of the non-linear stochastic differential equation

Y_t = Y_0 + ∫_0^t Φ(u(s, Y_s)) dB_s,
Law(Y_t) = u(t, ·), t ≥ 0,    (1.2)

where B is a classical Brownian motion. The behavior of Y is the microscopic counterpart of the phenomenon described by (1.1), describing the evolution of a single particle, whose law behaves according to (1.1).
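In the deterministic case, the representation (1.2) suggests a simple interacting-particle scheme: estimate u from the particle cloud, then move each particle by an Euler step. A minimal sketch, under the illustrative assumptions Φ(u) = u (i.e. ψ(u) = u³), a Gaussian initial law x_0 and a Gaussian kernel density estimate; none of these choices are from the paper:

```python
import math
import random

def simulate_mckean(n_particles=200, n_steps=20, dt=0.002,
                    phi=lambda u: u, bandwidth=0.25, seed=1):
    """Euler scheme for the McKean SDE Y_t = Y_0 + int_0^t Phi(u(s, Y_s)) dB_s,
    mu = 0: the density u(s, .) is replaced by a Gaussian-kernel estimate
    built from the particle cloud itself (O(n^2) per step, kept small).
    Returns the final particle positions."""
    rng = random.Random(seed)
    ys = [rng.gauss(0.0, 0.3) for _ in range(n_particles)]  # Y_0 ~ x_0
    norm = 1.0 / (n_particles * bandwidth * math.sqrt(2 * math.pi))
    for _ in range(n_steps):
        dens = [norm * sum(math.exp(-0.5 * ((y - z) / bandwidth) ** 2)
                           for z in ys) for y in ys]
        ys = [y + phi(u) * rng.gauss(0.0, math.sqrt(dt))
              for y, u in zip(ys, dens)]
    return ys
```

Mass conservation is automatic here (the particle count is fixed), mirroring the conservative character of (1.1) when µ ≡ 0; since Y is a martingale, the empirical mean stays close to that of x_0.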

The idea of this paper is to consider the case when µ ≠ 0. This includes the case when µ is non-vanishing but deterministic; this happens when only e_0 is non-zero and e_i ≡ 0, 1 ≤ i ≤ N. In this case our technique gives a sort of forward Feynman-Kac formula for a non-linear PDE. One of the main interests of this paper is that it provides a (forward) probabilistic representation for a non-conservative (random) PDE.

We introduce a doubly stochastic representation, by which one can represent the solution of (1.1) as the weighted law with respect to the random field µ (or simply the µ-weighted law) of a solution to a non-linear SDE. Intuitively, it describes the microscopic aspect of the SPDE (1.1) for almost all quenched ω. The terminology strongly refers to the case where the probability space (Ω, F, P) on which the SPDE is defined remains fixed.

We represent a solution X to (1.1) making use of another independent source of randomness described by another probability space based on some set Ω_1. The analog of the process Y, obtained when µ is zero in [6, 14], is a doubly stochastic process, still denoted by Y, defined on (Ω_1 × Ω, Q), for which X constitutes the so-called family of µ-marginal weighted laws of Y, see Definition 2.4. Y is the solution of a doubly stochastic non-linear diffusion problem, see Definition 3.1. It will be a (doubly) stochastic process (ω_1, ω) ↦ Y(ω_1, ω) solution of

Y_t = Y_0 + ∫_0^t Φ(X(s, Y_s, ω)) dB_s,    (1.3)

where B(·, ω) is a Brownian motion on Ω_1 for almost any fixed ω ∈ Ω. The solution of (1.3) is understood in the following sense: fixing a realization ω ∈ Ω, Y(·, ω) is a weak solution to the first line of (1.2) with u(t, ξ) = X(t, ξ, ω). Moreover X(t, ξ, ω) is the µ-marginal weighted law of Y_t(·, ω).

The paper includes the following main achievements.

1. If we replace in (1.3) a(s, ξ, ω) = Φ(X(s, ξ, ω)) and a is bounded and non-degenerate, we show existence and uniqueness of the solution, strongly in ω, weakly in ω_1 ∈ Ω_1, see Proposition 4.1. We also show the existence of law densities.

2. Theorem 3.3 states that the µ-marginal weighted laws X of a solution Y of a doubly stochastic non-linear diffusion problem constitute a solution of the stochastic porous media equation (1.1).

3. Conversely, given a solution X of (1.1), under suitable conditions, there is a solution Y of the doubly stochastic non-linear diffusion. This is discussed in Theorem 6.3 and in Theorem 7.1, distinguishing respectively the cases when ψ is non-degenerate and degenerate, see Definition 6.1.

4. When ψ is non-degenerate, then the doubly stochastic non-linear diffusion problem also admits uniqueness, see Theorem 6.3.

5. Section 3.2 illustrates a filtering interpretation for a solution of SPDE (1.1). Indeed, the µ-marginal weighted laws X of a solution Y of a doubly stochastic non-linear diffusion problem (1.3) can be seen as conditional densities of Y_t, t ∈ [0, T], with respect to some probability measure.

6. Uniqueness of the stochastic Fokker-Planck equation obtained replacing Φ² by a function a(t, ω, ξ) in (1.1), see Theorem 5.1.

7. Existence of a density to the solution of (1.3), see Proposition 4.4.

2 Preliminaries

2.1 Basic notations

First we introduce some basic recurrent notations. M(R) denotes the space of finite real measures.

We recall that S(R) is the space of Schwartz fast decreasing test functions, and S′(R) is its dual, i.e. the space of Schwartz tempered distributions. On S′(R), the map (I − ∆)^{s/2}, s ∈ R, is well-defined. For s ∈ R, H^s(R) denotes the classical Sobolev space consisting of all f ∈ S′(R) such that (I − ∆)^{s/2} f ∈ L²(R). We introduce the norm

‖f‖_{H^s} := ‖(I − ∆)^{s/2} f‖_{L²},

where ‖ · ‖_{L^p} is the classical L^p(R)-norm for 1 ≤ p ≤ ∞. In the sequel we will often simply denote H^{−1}(R) by H^{−1} and L²(R) by L². Furthermore, W^{r,p} denotes the classical Sobolev space of order r ∈ N in L^p(R) for 1 ≤ p ≤ ∞.

Definition 2.1. Given a function e belonging to L¹_loc(R) ∩ S′(R), we say that it is an H^{−1}-multiplier if the map ϕ ↦ ϕe is continuous from S(R) to H^{−1}, with respect to the H^{−1}-topology on S(R).

In the following lines we give some other sufficient conditions for a function e to be an H^{−1}-multiplier.

Lemma 2.2. Let e : R → R. If e ∈ W^{1,∞} (for instance if e ∈ W^{2,1}), then e is an H^{−1}(R)-multiplier. In particular the functions e_i, 0 ≤ i ≤ N, of Assumption 1.1 are H^{−1}(R)-multipliers.

Proof. By duality arguments, we observe that it is enough to show the existence of a constant C(e) such that

‖eg‖_{H¹} ≤ C(e) ‖g‖_{H¹}, ∀ g ∈ S(R).    (2.1)

(2.1) follows by product derivation rules, with for instance C(e) = √2 (‖e‖²_∞ + ‖e′‖²_∞)^{1/2}.
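The bound (2.1) can be sanity-checked numerically on a grid, replacing the H¹-norm by its finite-difference analogue. This is an illustrative discretisation, not part of the proof; the particular functions e and g below are assumptions for the check:

```python
import math

def h1_norm(vals, dx):
    """Discrete H^1(R) norm: (sum (f^2 + (f')^2) dx)^(1/2), forward differences."""
    d = [(vals[k + 1] - vals[k]) / dx for k in range(len(vals) - 1)]
    return math.sqrt(sum(v * v for v in vals) * dx + sum(v * v for v in d) * dx)

dx = 0.01
xs = [-5.0 + k * dx for k in range(1001)]
e_vals = [math.exp(-x * x) for x in xs]                      # e in W^{1,infty}
g_vals = [math.sin(3 * x) * math.exp(-x * x / 4) for x in xs]  # g in S(R), truncated
eg_vals = [a * b for a, b in zip(e_vals, g_vals)]

# C(e) = sqrt(2) (||e||_inf^2 + ||e'||_inf^2)^(1/2), sup norms on the grid
sup_e = max(abs(v) for v in e_vals)
sup_de = max(abs((e_vals[k + 1] - e_vals[k]) / dx) for k in range(len(xs) - 1))
C = math.sqrt(2.0) * math.sqrt(sup_e ** 2 + sup_de ** 2)

lhs, rhs = h1_norm(eg_vals, dx), C * h1_norm(g_vals, dx)
```

On this grid `lhs` stays below `rhs` with a comfortable margin, consistent with the product-rule estimate ‖eg‖²_{H¹} ≤ 2(‖e‖²_∞ + ‖e′‖²_∞) ‖g‖²_{H¹}.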

With respect to the random field µ, we introduce a notation for the Itô type stochastic integral below. Let Z = (Z(s, ξ), s ∈ [0, T], ξ ∈ R) be a random field on (Ω, F, (F_t), P) such that

∫_0^T ( ∫_R |Z(s, ξ)| dξ )² ds < ∞ a.s.

and which is an L¹(R)-valued (F_s)-progressively measurable process. Then the stochastic integral

∫_{[0,t]×R} Z(s, ξ) µ(ds, ξ) dξ := Σ_{i=0}^N ∫_0^t ( ∫_R Z(s, ξ) e_i(ξ) dξ ) dW_s^i

is well-defined.

More generally, if s ↦ Z(s, ·) is a measurable map [0, T] × Ω → M(R), where M(R) is the space of signed finite measures, such that ∫_0^T ‖Z(s, ·)‖²_var ds < ∞, then the stochastic integral

∫_{[0,t]×R} Z(s, dξ) µ(ds, ξ) := Σ_{i=0}^N ∫_0^t ( ∫_R e_i(ξ) Z(s, dξ) ) dW_s^i

is well-defined.
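On a time grid this definition becomes a finite double sum over Wiener increments. A minimal discretisation sketch; the grid sizes, the integrand Z and the coefficients are illustrative choices, not from the paper:

```python
import math
import random

def ito_integral_against_mu(Z, e, w_incs, dt, xi_grid, dxi):
    """Discretize int_{[0,t] x R} Z(s,xi) mu(ds,xi) dxi as
    sum_i sum_k ( int_R Z(t_k, xi) e_i(xi) dxi ) * (W^i_{k+1} - W^i_k),
    with the convention W^0_t = t (so its increments are dt)."""
    n_steps = len(w_incs[0])
    total = 0.0
    for k in range(n_steps):
        t = k * dt
        for i, ei in enumerate(e):
            inner = sum(Z(t, x) * ei(x) for x in xi_grid) * dxi  # space integral
            dW = dt if i == 0 else w_incs[i - 1][k]  # e_0 integrates against dt
            total += inner * dW
    return total

dt, n_steps, dxi = 0.01, 50, 0.05
xi_grid = [-3.0 + j * dxi for j in range(121)]
rng = random.Random(5)
w_incs = [[rng.gauss(0.0, math.sqrt(dt)) for _ in range(n_steps)]]  # one W^1
e = [lambda x: 1.0, lambda x: math.exp(-x * x)]      # e_0, e_1 (illustrative)
Z = lambda t, x: math.exp(-x * x / 2.0) / math.sqrt(2 * math.pi)
val = ito_integral_against_mu(Z, e, w_incs, dt, xi_grid, dxi)
```

The space integral is evaluated first at each time step, then multiplied by the corresponding Wiener increment, exactly mirroring the order of integration in the definition.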

We now specify better the filtration (F_t)_{t∈[0,T]} of the introduction. We will consider a fixed filtered probability space (Ω, F, P, (F_t)_{t∈[0,T]}), where (F_t)_{t∈[0,T]} is the canonical filtration of a standard Brownian motion (W^1, . . . , W^N), enlarged with the σ-field generated by x_0. We also suppose that F_0 contains the P-null sets and F = F_T.

Let (Ω_1, H) be a measurable space. In the sequel we will also consider another filtered probability space (Ω_0, G, Q, (G_t)_{t∈[0,T]}), where Ω_0 = Ω_1 × Ω and G = H ⊗ F. Clearly any random element Z on (Ω, F) will be implicitly extended to (Ω_0, G).

Here we fix some conventions concerning measurability. Any topological space E is naturally equipped with its Borel σ-algebra B(E). For instance B(R) (resp. B([0, T])) denotes the Borel σ-algebra of R (resp. [0, T]).

Given any probability space (˜Ω, ˜F, ˜P), the σ-field ˜F will always be omitted. When we say that a map T : ˜Ω × E → R is measurable, we implicitly suppose that the corresponding σ-algebras are ˜F ⊗ B(E) and B(R).

All the processes on any generic measurable space (Ω_2, F_2) will be considered to be measurable with respect to both variables (t, ω). In particular any process on Ω_1 × Ω is supposed to be measurable with respect to ([0, T] × Ω_1 × Ω, B([0, T]) ⊗ H ⊗ F).

A function (A, ω) ↦ Q(A, ω) from H × Ω → R_+ is called a random kernel (resp. random probability kernel) if for each ω ∈ Ω, Q(·, ω) is a finite positive (resp. probability) measure, and for each A ∈ H, ω ↦ Q(A, ω) is F-measurable. The finite measure Q(·, ω) will also be denoted by Q^ω. To that random kernel we can associate a specific finite measure (resp. probability), denoted by Q, on (Ω_0, G), setting

Q(A × F) = ∫_F Q(A, ω) P(dω) = ∫_F Q^ω(A) P(dω), for A ∈ H, F ∈ F.

The probability Q from above will be supposed here and below to be associated with a random probability kernel.

Definition 2.3. If there is a measurable space (Ω_1, H) and a random kernel Q as before, then the probability space (Ω_0, G, Q) will be called a suitable enlarged probability space (of (Ω, F, P)).

As said above, any random variable on (Ω, F) will be considered as a random variable on Ω_0 = Ω_1 × Ω. Then, obviously, W^1, . . . , W^N are independent Brownian motions also on (Ω_0, G, Q).

Given a local martingale M on any filtered probability space, the process Z := E(M) denotes its Doléans exponential, which is a local martingale. In particular it is the unique solution of dZ_t = Z_{t−} dM_t, Z_0 = 1. When M is continuous we have Z_t = e^{M_t − (1/2)⟨M⟩_t}.
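For continuous M, the closed form can be verified directly with Itô's formula applied to f(x, v) = e^{x − v/2}:

```latex
% Check that Z_t = e^{M_t - \frac{1}{2}\langle M\rangle_t} solves
% dZ_t = Z_t\, dM_t,\ Z_0 = 1, for a continuous local martingale M.
\begin{aligned}
Z_t &= f\bigl(M_t, \langle M\rangle_t\bigr), \qquad f(x,v) = e^{x - v/2},\\
dZ_t &= \partial_x f\, dM_t + \partial_v f\, d\langle M\rangle_t
        + \tfrac{1}{2}\,\partial_{xx}^2 f\, d\langle M\rangle_t\\
     &= Z_t\, dM_t - \tfrac{1}{2} Z_t\, d\langle M\rangle_t
        + \tfrac{1}{2} Z_t\, d\langle M\rangle_t
      = Z_t\, dM_t .
\end{aligned}
```

The second-order Itô term exactly cancels the ⟨M⟩ contribution of the exponent, which is why the exponential correction −½⟨M⟩_t appears.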

2.2 The concept of marginal weighted laws

Let us consider a suitably enlarged probability space as in Definition 2.3.

Definition 2.4. Let Y : Ω_1 × Ω × [0, T] → R be a measurable process, progressively measurable on (Ω_0, G, Q, (G_t)), where (G_t) is some filtration on (Ω_0, G, Q) such that W^1, . . . , W^N are (G_t)-Brownian motions. We will use the stochastic integral notation

∫_0^t µ(ds, Y_s) = Σ_{i=0}^N ∫_0^t e_i(Y_s) dW_s^i,  t ∈ [0, T].    (2.2)

As we shall see below in Proposition 2.6, for every t ∈ [0, T],

E^Q [ E_t ( ∫_0^· µ(ds, Y_s) ) ] < ∞.    (2.3)

To Y, we will associate its family of µ-marginal weighted laws (or simply family of µ-weighted laws), i.e. the family of random kernels (t ∈ [0, T])

Γ_t = (Γ_t^Y(A, ω), A ∈ B(R), ω ∈ Ω),

defined by

ϕ ↦ E^{Q^ω} [ ϕ(Y_t(·, ω)) E_t ( ∫_0^· µ(ds, Y_s)(·, ω) ) ] = ∫_R ϕ(r) Γ_t^Y(dr, ω),    (2.4)

where ϕ is a generic bounded real Borel function. We will also say that, for fixed t ∈ [0, T], Γ_t is the µ-marginal weighted law of Y_t.

Remark 2.5. i) If Ω is a singleton {ω_0} and e_i = 0, 1 ≤ i ≤ N, the µ-marginal weighted laws coincide with the weighted laws

ϕ ↦ E^Q [ ϕ(Y_t) exp ( ∫_0^t e_0(Y_s) ds ) ],

with Q = Q^{ω_0}. In particular, if µ ≡ 0 then the µ-marginal weighted laws are the classical laws.

ii) By (2.3), for any t ∈ [0, T], for P almost all ω ∈ Ω,

E^{Q^ω} [ E_t ( ∫_0^· µ(ds, Y_s)(·, ω) ) ] < ∞.

iii) The function (t, ω) ↦ Γ_t(A, ω) is measurable, for any A ∈ B(R), because Y is a measurable process.

iv) In the case e_0 = 0, the situation is the following. For each fixed ω ∈ Ω, (2.4) is a (random) non-negative measure which is not a probability. However, the expectation of its total mass is indeed 1.

Proposition 2.6. Consider the situation of Definition 2.4. Then we have the following.

i) The process M_t := E_t ( Σ_{i=1}^N ∫_0^· e_i(Y_s) dW_s^i ) is a martingale. We emphasize that the sum starts indeed at i = 1.

ii) The quantity (2.3) is bounded by exp(T ‖e_0‖_∞).

iii) E^Q(M_t²) ≤ exp(3T Σ_{i=1}^N ‖e_i‖²_∞), t ∈ [0, T]. Consequently M is a uniformly integrable martingale.

iv) For P-a.e. ω ∈ Ω, sup_{0≤t≤T} ‖Γ_t(·, ω)‖_var < ∞, where we remind that ‖ · ‖_var stands for the total variation.

Remark 2.7. Proposition 2.6 ii) yields in particular that Y always admits µ-marginal weighted laws.

Proof. i) The result follows since the Novikov condition

E^Q exp ( (1/2) Σ_{i=1}^N ∫_0^T e_i(Y_s)² ds ) < ∞

is verified, because the functions e_i, i = 1, . . . , N, are bounded.

ii) This follows because E^Q(M_t) = 1, ∀ t ∈ [0, T].

iii) M_t² is equal to N_t exp ( 3 Σ_{i=1}^N ∫_0^t (e_i)²(Y_s) ds ), where N is a positive martingale with N_0 = 1.

iv) For t ∈ [0, T],

sup_{t≤T} ‖Γ_t(·, ω)‖_var = sup_{t≤T} E^{Q^ω} [ M_t exp ( ∫_0^t e_0(Y_s) ds ) ] ≤ exp(T ‖e_0‖_∞) sup_{t≤T} E^{Q^ω}(M_t).

Taking the expectation with respect to P, this implies

E^P [ sup_{t≤T} ‖Γ_t^Y(·, ω)‖_var ] ≤ exp(T ‖e_0‖_∞) E^P [ sup_{t≤T} E^{Q^ω}(M_t) ] ≤ exp(T ‖e_0‖_∞) E^P [ E^{Q^ω} ( sup_{t≤T} M_t ) ].

By the Burkholder-Davis-Gundy (BDG) inequality this is bounded by

3 exp(T ‖e_0‖_∞) E^Q ( ⟨M⟩_T^{1/2} ) ≤ 3 exp(T ‖e_0‖_∞) E^Q [ ( ∫_0^T ds Σ_{i=1}^N M_s² e_i(Y_s)² )^{1/2} ] ≤ C(e, N, T) ( E^Q ∫_0^T ds M_s² )^{1/2},

where the last inequality is due to Jensen's inequality; C(e, N, T) is a constant depending on N, T and e_i, i = 0, . . . , N. By Fubini's Theorem and item iii), we have

E^Q ( ∫_0^T ds M_s² ) ≤ T exp ( 3T Σ_{i=1}^N ‖e_i‖²_∞ ).

The lemma below gives a characterization of the µ-weighted laws of a process Y living on an enlarged probability space.

Lemma 2.8. Let Y (resp. ˜Y) be a process on a suitable enlarged probability space (Ω_0, G, Q) (resp. (˜Ω_0, ˜G, ˜Q)). Set W = (W^1, . . . , W^N). Suppose that the law of (Y, W) under Q and the law of (˜Y, W) under ˜Q are the same. Then the µ-marginal weighted laws of Y under Q coincide a.s. with the µ-marginal weighted laws of ˜Y under ˜Q.

Proof. Let 0 ≤ t ≤ T. Using the assumption, we deduce that for any bounded continuous function f : R → R and every F ∈ F_t, we have

E^Q [ 1_F f(Y_t) E_t ( Σ_{i=0}^N ∫_0^· e_i(Y_s) dW_s^i ) ] = E^{˜Q} [ 1_F f(˜Y_t) E_t ( Σ_{i=0}^N ∫_0^· e_i(˜Y_s) dW_s^i ) ].    (2.5)

To show this, using classical regularization properties of the Itô integral, see e.g. Theorem 2 in [25], and uniform integrability arguments, we first observe that

E_t ( Σ_{i=0}^N ∫_0^· e_i(Y_s) dW_s^i )

is the limit in L²(Ω_0, Q) of

E_t ( Σ_{i=0}^N ∫_0^· e_i(Y_s) (W^i_{s+ε} − W^i_s)/ε ds ).

A similar approximation property arises replacing Y with ˜Y and Q with ˜Q. Then (2.5) easily follows.

To conclude, it will be enough to show the existence of a countable family (f_j)_{j∈N} of bounded continuous real functions for which, for P almost all ω ∈ Ω, for any j ∈ N, we have R_j = ˜R_j, where

R_j(ω) = E^{Q^ω} [ f_j(Y_t(·, ω)) E_t ( Σ_{i=0}^N ∫_0^· e_i(Y_s(·, ω)) dW_s^i ) ],

˜R_j(ω) = E^{˜Q^ω} [ f_j(˜Y_t(·, ω)) E_t ( Σ_{i=0}^N ∫_0^· e_i(˜Y_s(·, ω)) dW_s^i ) ].

This will follow since, applying (2.5), for any F ∈ F_t, we have E^P(1_F R_j) = E^P(1_F ˜R_j).

2.3 SPDE, weak-strong existence of SDEs

In this section we introduce the basic concepts related to the stochastic porous media equation and the related non-linear diffusion.

Definition 2.9. A random field X = (X(t, ξ, ω), t ∈ [0, T], ξ ∈ R, ω ∈ Ω) is said to be a solution to (1.1) if P-a.s. we have the following.

1. X ∈ C([0, T]; S′(R)) ∩ L²([0, T]; L¹_loc(R)).

2. X is an S′(R)-valued (F_t)-progressively measurable process.

3. For any test function ϕ ∈ S(R) with compact support and t ∈ ]0, T], a.s. we have

∫_R X(t, ξ) ϕ(ξ) dξ = ∫_R x_0(dξ) ϕ(ξ) + (1/2) ∫_0^t ds ∫_R ψ(X(s, ξ, ·)) ϕ′′(ξ) dξ + ∫_{[0,t]×R} X(s, ξ) ϕ(ξ) µ(ds, ξ) dξ.

In Definition 3.1, we will present the concept of doubly stochastic non-linear diffusion, which is a McKean type equation with a supplementary source of randomness. Before this, as a first step, we will introduce a particular case, the simple doubly stochastic differential equation (DSDE). Let γ : [0, T] × R × Ω → R be an (F_t)-progressively measurable random field and x_0 be a probability on B(R).

Definition 2.10. a) We say that (DSDE)(γ, x_0) admits weak-strong existence if there is a suitable enlarged probability space (Ω_0, G, Q), i.e. a measurable space (Ω_1, H), a probability kernel (Q(·, ω), ω ∈ Ω) on H × Ω, and two Q-a.s. continuous processes Y, B on (Ω_0, G), where Ω_0 = Ω_1 × Ω, G = H ⊗ F, such that the following holds.

1) For almost all ω, Y(·, ω) is a (weak) solution to

Y_t(·, ω) = Y_0 + ∫_0^t γ(s, Y_s(·, ω), ω) dB_s(·, ω),
Law(Y_0) = x_0,    (2.7)

with respect to Q^ω, where B(·, ω) is a Q^ω-Brownian motion for almost all ω.

2) We denote by (Y_t) the canonical filtration associated with (Y_s, 0 ≤ s ≤ t) and set G_t = Y_t ∨ ({∅, Ω_1} ⊗ F_t). W^1, . . . , W^N are (G_t)-martingales under Q.

3) For every 0 ≤ s ≤ T, for every bounded continuous A : C([0, s]) → R, the r.v. ω ↦ E^{Q^ω}(A(Y_r(·, ω), r ∈ [0, s])) is F_s-measurable.

b) We say that (DSDE)(γ, x_0) admits weak-strong uniqueness if the following holds. Consider a measurable space (Ω_1, H) (resp. (˜Ω_1, ˜H)), a probability kernel (Q(·, ω), ω ∈ Ω) (resp. (˜Q(·, ω), ω ∈ Ω)), with processes (Y, B) (resp. (˜Y, ˜B)) such that (2.7) holds (resp. (2.7) holds with (Ω_0, G, Q) replaced with (˜Ω_0, ˜G, ˜Q), ˜Q being associated with (˜Q(·, ω))). Moreover we suppose that item 2) is verified for Y and ˜Y. Then (Y, W^1, . . . , W^N) and (˜Y, W^1, . . . , W^N) have the same law.

c) A process Y fulfilling items 1) and 2) under a) will be called a weak-strong solution of (DSDE)(γ, x_0).

Remark 2.11. Let Y be a weak-strong solution of (DSDE)(γ, x_0) with corresponding B.

a) Since for almost all ω ∈ Ω, B(·, ω) is a Brownian motion under Q^ω, it is clear that B is a Brownian motion under Q, which is independent of F_T, i.e. independent of W^1, . . . , W^N. Indeed, let A : C([0, T]) → R be a continuous bounded functional, and denote by W the Wiener measure on C([0, T]). Let F be a bounded F_T-measurable r.v. Since for each ω, B(·, ω) is a Wiener process with respect to Q^ω, we get

E^Q(F A(B)) = ∫_Ω F E^{Q^ω}(A(B(·, ω))) dP(ω) = ∫_Ω F(ω) dP(ω) ∫_{Ω_1} A(ω_1) dW(ω_1) = ∫_{Ω_0} F(ω) dQ(ω_0) ∫_{Ω_0} A(ω_1) dQ(ω_0).

This shows that (W^1, . . . , W^N) and B are independent. Taking F = 1_Ω in the previous expression, the equality between the left-hand side and the third term shows that B is a Brownian motion under Q.

b) Since for any 1 ≤ i, j ≤ N,

[W^i, W^j]_t = δ_{ij} t, [W^i, B] = 0, [B, B]_t = t,

Lévy's characterization theorem implies that (W^1, . . . , W^N, B) is a Q-Brownian motion.

c) An equivalent formulation of 1) in item a) of Definition 2.10 is the following. For P-a.e. ω ∈ Ω, Y(·, ω) solves the Q^ω-martingale problem with respect to the (random) PDE operator

L_t f(ξ) = (1/2) γ²(t, ξ, ω) f′′(ξ),

and initial distribution x_0. Indeed, we remark that the construction can be performed on the canonical space Ω_1 = C([0, T]; R).

Proposition 2.12. Let Y be a process as in Definition 2.10 a). We have the following.

1. Y is a (G_t)-martingale on the product space (Ω_0, G, Q).

2. [Y, W^i] = 0, ∀ 1 ≤ i ≤ N.

Proof. Let 0 ≤ s < t ≤ T, F_s ∈ F_s and G : C([0, s]) → R be continuous and bounded. We will prove below that, for 1 ≤ i ≤ N + 1, setting W_t^{N+1} = 1 for all t ≥ 0,

E^Q ( Y_t W_t^i G(Y_r, r ≤ s) 1_{F_s} ) = E^Q ( Y_s W_s^i 1_{F_s} G(Y_r, r ≤ s) ).    (2.8)

Then (2.8) with i = N + 1 shows item 1. Considering (2.8) with 1 ≤ i ≤ N shows that Y W^i is a (G_t)-martingale, which shows item 2. Therefore, it remains to show (2.8).

The left-hand side of that equality gives

∫_Ω dP(ω) W_t^i(ω) 1_{F_s}(ω) E^{Q^ω} ( Y_t(·, ω) G(Y_r(·, ω), r ≤ s) ) = ∫_Ω dP(ω) 1_{F_s}(ω) W_t^i(ω) E^{Q^ω} ( Y_s(·, ω) G(Y_r(·, ω), r ≤ s) ),

because Y(·, ω) is a Q^ω-martingale for P-almost all ω. To obtain the right-hand side of (2.8) it is enough to remember that the W^i are (G_t)-martingales and that item a) 3) in Definition 2.10 holds. This concludes the proof of Proposition 2.12.

Remark 2.13. Lemma 2.8 shows that, whenever weak-strong uniqueness holds, the µ-weighted marginal laws of any weak solution Y are uniquely determined.

3 The concept of doubly probabilistic representation

3.1 The doubly stochastic non-linear diffusion

We come back to the notations and conventions of the introduction and of Section 2. Let x_0 be a probability on R. The doubly probabilistic representation is based on the following idea. Let Y : Ω_0 × [0, T] → R be a measurable process, where Ω_0 = Ω_1 × Ω is the usual enlarged probability space as introduced in Definition 2.3. Let Q be a probability inherited from a random kernel Q^ω as before Definition 2.3. Let (G_t) be some filtration on (Ω_0, G) such that W^1, . . . , W^N are (G_t)-Brownian motions.

Suppose that

Y_t = Y_0 + ∫_0^t Φ(X(s, Y_s)) dB_s,
µ-Weighted Law(Y_t) = X(t, ξ) dξ, t ∈ ]0, T],
µ-Weighted Law(Y_0) = x_0(dξ),    (3.1)

where B is a Q-standard Brownian motion with respect to (G_t). Then X solves the SPDE (1.1). This will be the object of Theorem 3.3. Vice versa, if X is a solution of (1.1), then there is a process Y solving (3.1), see Theorem 7.1.

Definition 3.1. 1) We say that the doubly stochastic non-linear diffusion (DSNLD) driven by Φ (on the space (Ω, F, P), with initial condition x_0, related to the random field µ; shortly (DSNLD)(Φ, µ, x_0)) admits weak existence if there is a measurable random field X : [0, T] × R × Ω → R with the following properties.

a) The problem (DSDE)(γ, x_0) with γ(t, ξ, ω) = Φ(X(t, ξ, ω)) admits weak-strong existence.

b) X = X(t, ξ, ·) dξ, t ∈ ]0, T], is the family of µ-marginal weighted laws of Y, where Y is the solution of (2.7) in Definition 2.10. In other words, X constitutes the densities of those µ-marginal weighted laws.

2) A couple (Y, X), such that Y is a (weak-strong) solution to the (DSDE)(γ, x_0), is called a weak solution to the (DSNLD)(Φ, µ, x_0). Y is also called a doubly stochastic representation of the random field X.

3) Suppose that, for any two measurable random fields X^i : [0, T] × R × Ω → R, i = 1, 2, on (Ω, F, P, (F_t)), and any Y^i on extended probability spaces (Ω_0^i, Q^i), i = 1, 2, such that (Y^i, X^i) is a weak-strong solution of (DSDE)(Φ(X^i), x_0), i = 1, 2, we always have that (Y^1, W^1, . . . , W^N) and (Y^2, W^1, . . . , W^N) have the same law. Then we say that the (DSNLD)(Φ, µ, x_0) admits weak uniqueness.

Remark 3.2. If (DSNLD)(Φ, µ, x_0) admits weak uniqueness, then the µ-marginal weighted laws of Y are uniquely determined, P-a.s., see Lemma 2.8.

Theorem 3.3. Let (Y, X) be a solution of (DSNLD)(Φ, µ, x_0). Then X is a solution to the SPDE (1.1).

Remark 3.4. 1. Let t ∈ [0, T] and let ϕ : R → R be Borel and bounded. Then

∫_R ϕ(ξ) X(t, ξ, ω) dξ = E^{Q^ω} [ ϕ(Y_t(ω)) E_t ( ∫_0^· µ(ds, Y_s(ω)) ) ].

So

∫_R X(t, ξ, ω) dξ = E^{Q^ω} [ E_t ( ∫_0^· µ(ds, Y_s(ω)) ) ].

Even though for a.e. ω ∈ Ω the previous expression is not necessarily a probability measure, of course

ν^ω : ϕ ↦ ( ∫_R ϕ(ξ) X(t, ξ, ω) dξ ) / ( ∫_R X(t, ξ, ω) dξ )

is one. It can be expressed as

ν^ω(A) = E^{Q^ω} ( 1_A(Y_t) E_t(M(·, ω)) ) / E^{Q^ω} ( E_t(M(·, ω)) ),

where M_t(·, ω) = ∫_0^t µ(ds, Y_s(·, ω)), t ∈ [0, T], is defined in (2.2).

2. Consider the particular case e_0 = 0, e_1 = c, c being some constant. In this case, the µ-marginal weighted laws are given by

A ↦ E^{Q^ω} ( 1_A(Y_t) E_t(cW^1) ) = E_t(cW^1) E^{Q^ω} ( 1_A(Y_t) ) = E_t(cW^1) ν^ω(t, A),

and ν^ω(t, ·) is the law of Y_t(·, ω) under Q^ω.

Proof. Let B denote the Brownian motion associated to Y as a solution to (DSDE)(γ, x_0), mentioned in item a) 1) of Definition 3.1. For t ∈ [0, T], we set

Z_t = E_t ( ∫_0^· µ(ds, Y_s) ), M_t = Z_t exp ( − ∫_0^t e_0(Y_s) ds ).

1. Proof of Definition 2.9 1. By Proposition 2.6, (M_t, t ∈ [0, T]) is a uniformly integrable martingale. Consequently t ↦ Z_t is continuous in L¹(Ω_0, Q). On the other hand, the process Y is continuous. This implies that for P-a.e. ω ∈ Ω, X ∈ C([0, T]; M(R)), where M(R) is equipped with the weak topology. This implies that X ∈ C([0, T]; S′(R)). Furthermore, for P-a.e. ω ∈ Ω and t ∈ ]0, T], X(t, ·, ω) ∈ L¹(R) and ∫_R X(t, ξ, ω) dξ = ‖Γ(t, ·, ω)‖_var. By Proposition 2.6 iv), it follows that P-a.s. X(·, ·, ω) ∈ L^∞([0, T]; L¹(R)) ⊂ L²([0, T]; L¹_loc(R)).

2. Definition 2.9 2. follows from Remark 3.4 1) and Definition 2.10 a) 3).

3. Proof of Definition 2.9 3. Let ϕ ∈ S(R) with compact support. Taking into account Proposition 2.12, we apply Itô's formula to get

ϕ(Y_t) Z_t = ϕ(Y_0) + ∫_0^t ϕ′(Y_s) Z_s dY_s + ∫_0^t ϕ(Y_s) Z_s ( µ(ds, Y_s) − (1/2) Σ_{i=1}^N (e_i(Y_s))² ds ) + (1/2) ∫_0^t ϕ′′(Y_s) Φ²(X(s, Y_s)) Z_s ds + (1/2) ∫_0^t ϕ(Y_s) Z_s ( Σ_{i=1}^N (e_i(Y_s))² ) ds.

Indeed, we remark that ∫_0^t ϕ′(Y_s) d[Z, Y]_s = 0, because [Z, Y]_t = Σ_{i=1}^N ∫_0^t e_i(Y_s) Z_s d[W^i, Y]_s = 0; in fact [W^i, Y] = 0 by Proposition 2.12. So

ϕ(Y_t) Z_t = ϕ(Y_0) + ∫_0^t ϕ′(Y_s) Z_s Φ(X(s, Y_s)) dB_s + ∫_0^t ϕ(Y_s) Z_s µ(ds, Y_s) + (1/2) ∫_0^t ϕ′′(Y_s) Φ²(X(s, Y_s)) Z_s ds.

Taking the expectation with respect to Q^ω we get, dP-a.s.,

∫_R dξ ϕ(ξ) X(t, ξ) = ∫_R ϕ(ξ) x_0(dξ) + Σ_{i=0}^N ∫_0^t dW_s^i ( ∫_R dξ ϕ(ξ) e_i(ξ) X(s, ξ) ) + (1/2) ∫_0^t ds ∫_R dξ ϕ′′(ξ) Φ²(X(s, ξ)) X(s, ξ),

which implies the result. Indeed, in the previous equality, we have used Lemma 3.5 below.

Lemma 3.5. Let 1 ≤ i ≤ N. For P-a.e. ω ∈ Ω, we have

E^{Q^ω} [ ∫_0^t ϕ(Y_s) Z_s e_i(Y_s) dW_s^i ] (·, ω) = ∫_0^t dW_s^i(ω) ∫_R ϕ(ξ) e_i(ξ) X(s, ξ, ω) dξ.

Proof. Since the Brownian motions W^i are not random under Q^ω, it is possible to justify the permutation of the stochastic integral with respect to W^i and the expectation E^{Q^ω} by a Fubini argument, approximating the stochastic integrals via Lebesgue integrals, see e.g. Theorem 2 of [25]. A complete proof is given in [7].

3.2 Filtering interpretation

Item 1. of Remark 3.4 has an interpretation in the framework of filtering theory, see e.g. [20] for a comprehensive introduction to that subject.

Suppose e_0 = 0. Let ˆQ be a probability on some probability space (Ω_0, G_T), and consider the non-linear diffusion problem (1.2) as a basic dynamical phenomenon. We suppose now that there are N observations Y^1, . . . , Y^N related to the process Y, generating a filtration (F_t). We suppose in particular that dY_t^i = dW_t^i + e_i(Y_t) dt, 1 ≤ i ≤ N, where W^1, . . . , W^N are (F_t)-Brownian motions. Consider the following dynamical system of non-linear diffusion type:

Y_t = Y_0 + ∫_0^t Φ(X(s, Y_s)) dB_s,
dY_t^i = dW_t^i + e_i(Y_t) dt, 1 ≤ i ≤ N,
X(t, ·) : conditional law of Y_t under F_t.

The third equality of (3.1) means, under ˆQ, that we have

∫_R ϕ(ξ) X(t, ξ) dξ = E(ϕ(Y_t) | F_t).    (3.2)

We remark that, under the new probability Q, defined via d ˆQ = E_T ( ∫_0^· µ(ds, Y_s) ) dQ, Y^1, . . . , Y^N are standard (F_t)-independent Brownian motions. Then (3.2) becomes

∫_R ϕ(ξ) X(t, ξ) dξ = E^{ˆQ}(ϕ(Y_t) | F_t) = E^Q ( ϕ(Y_t) E_t(∫_0^· µ(ds, Y_s)) | F_t ) / E^Q ( E_t(∫_0^· µ(ds, Y_s)) | F_t ).

Consequently, by Theorem 3.3, X will be the solution of the SPDE (1.1), with x_0 being the law of Y_0; so (1.1) constitutes the Zakai type equation associated with our filtering problem.

4 The densities of the µ-marginal weighted laws

This section constitutes an important step towards the doubly probabilistic representation of a solution to (1.1) when ψ is non-degenerate. Let x_0 be a fixed probability on R. We recall that a process Y (on a suitable enlarged probability space (Ω_0, G, Q)) which is a weak solution to the (DSNLD)(Φ, µ, x_0) is in particular a weak-strong solution of a (DSDE)(γ, x_0), where γ : [0,T] × R × Ω → R is some suitable progressively measurable random field on (Ω, F, P). The aim of this section is twofold.

A) To show that, whenever γ is a.s. bounded and non-degenerate, (DSDE)(γ, x_0) admits weak-strong existence and uniqueness.

B) To show that the µ-marginal laws of the solution to (DSDE)(γ, x_0) admit a density, P-a.s.

A) We start by discussing well-posedness.

Proposition 4.1. We suppose the existence of random variables A_1, A_2 such that
\[
0 < A_1(\omega) \le \gamma(t,\xi,\omega) \le A_2(\omega), \quad \forall (t,\xi) \in [0,T] \times \mathbb R,\ dP\text{-a.s.}
\]
Then (DSDE)(γ, x_0) admits weak-strong existence and uniqueness.

Proof. Uniqueness. This is the easy part. Let Y and Ỹ be two solutions. Then, for ω outside a P-null set N_0, Y(·, ω) and Ỹ(·, ω) are solutions of the same one-dimensional classical SDE with measurable, bounded and non-degenerate (i.e. bounded below by a strictly positive constant) coefficient. By Exercise 7.3.3 of [27], the law of Y(·, ω) equals the law of Ỹ(·, ω). Then obviously the law of Y equals the law of Ỹ.

Existence. This point is more delicate. In fact, one needs to solve the random SDE for P-almost all ω, but in such a way that the solution produces bimeasurable processes Y and B.

First we regularize the coefficient γ. Let φ be a mollifier with compact support; we set φ_n(x) = nφ(nx), x ∈ R, n ∈ N. We define the random fields γ_n : [0,T] × R × Ω → R by
\[
\gamma_n(t, x, \omega) := \int_{\mathbb R} \gamma(t, x - y, \omega)\, \varphi_n(y)\, dy.
\]

Let (Ω̃_1, H̃_1, P̃) be a probability space on which we can construct a random variable Y_0 distributed according to x_0 and an independent Brownian motion B. In this way, on (Ω̃_1 × Ω, H̃_1 ⊗ F, P̃ ⊗ P) we dispose of a random variable Y_0 and a Brownian motion independent of {∅, Ω̃_1} ⊗ F. By usual fixed point techniques, it is possible to exhibit a (strong) solution of (DSDE)(γ_n, x_0) on the above-mentioned product probability space. We can show that there is a unique solution Y = Y^n of Y_t = Y_0 + \int_0^t \gamma_n(s, Y_s, \cdot)\, dB_s. In fact, the maps
\[
\Gamma_n : Z \mapsto Y_0 + \int_0^\cdot \gamma_n(s, Z_s, \omega)\, dB_s,
\]
where Γ_n : L^2(Ω̃_1 × Ω; P̃ ⊗ P) → L^2(Ω̃_1 × Ω; P̃ ⊗ P), are Lipschitz; by the usual Picard fixed point argument one shows the existence of a unique solution Z = Z^n in L^2(Ω̃_1 × Ω; P̃ ⊗ P). We observe that, by the usual regularization arguments for the Itô integral as in Lemma 3.5, for P-a.e. ω ∈ Ω, Y(·, ω) solves the equation
\[
Y_t(\cdot,\omega) = Y_0 + \int_0^t \gamma_n(s, Y_s(\cdot,\omega), \omega)\, dB_s \tag{4.1}
\]

on (Ω̃_1, H̃_1, P̃). We consider now the measurable space Ω_0 = Ω_1 × Ω, where Ω_1 = C([0,T], R), equipped with the product σ-field G = B(Ω_1) ⊗ F. On that measurable space we introduce the probability measures Q^n, with Q^n(dω_1, dω) = Q^n(dω_1, ω) P(dω), where Q^n(·, ω) is the law of Y^n(·, ω) for almost all fixed ω. We set Y_t(ω_1, ω) = ω_1(t), where ω_1 ∈ C([0,T]; R). We denote by (Y_t, t ∈ [0,T]) (resp. (Y^1_t)) the canonical filtration associated with Y on Ω_0 (resp. Ω_1). The next step will be the following.

Lemma 4.2. For dP-almost all ω, Q^n(·, ω) converges weakly to Q(·, ω), where, under Q(·, ω), Y(·, ω) solves the SDE
\[
Y_t(\cdot,\omega) = Y_0 + \int_0^t \gamma(s, Y_s(\cdot,\omega), \omega)\, dB_s(\cdot,\omega),
\]
B(·, ω) being an (Y^1_t)-Brownian motion on Ω_1.

This shows the validity of 1) of Definition 2.10 a).

Remark 4.3. 1) Since Q^n(·, ω) converges weakly to Q(·, ω) for dP-almost all ω, the limit (up to an obvious modification) is a measurable random kernel.

2) This also implies that Y^n(·, ω) converges stably to Q(·, ω). For details about stable convergence the reader may consult [17, Section VIII.5.c] and the recent monograph [16].

The considerations above allow us to complete the proof of Proposition 4.1. By Lemma 4.2, Q^ω = Q(·, ω) is a random kernel, being a limit of random kernels. Let us consider the associated probability measure on the suitably enlarged probability space (Ω_0, G, Q). We observe that Y on (Ω_0, G) is obviously measurable, because it is the canonical process Y(ω_1, ω) = ω_1. Setting
\[
B_t(\cdot,\omega) = \int_0^t \frac{dY_s(\cdot,\omega)}{\gamma(s, Y_s(\cdot,\omega), \omega)},
\]
we get [B]_t(·, ω) = t under Q(·, ω); so, by the Lévy characterization theorem, it is a Brownian motion. Moreover, B is bimeasurable.

Let G = A(Y_r(·, ω), r ∈ [0, s]), where A is a bounded functional C([0, s]) → R. We first observe that the r.v. ω ↦ E^{Q^ω}(G) is F_s-measurable. This happens because Y is, under Q^ω, a martingale with quadratic variation
\[
\int_0^t \gamma^2(s, Y_s(\cdot,\omega), \omega)\, ds, \quad 0 \le t \le T,
\]
i.e. with (random) coefficient which is (F_t)-progressively measurable. This shows item 3) of Definition 2.10 a). The last point to check is that W^1, ..., W^N are (G_t)-martingales, where G_t = Y_t ∨ ({∅, Ω_1} ⊗ F_t), 0 ≤ t ≤ T, i.e. item 2) of Definition 2.10.

We now justify this. Consider 0 ≤ s ≤ t ≤ T. Taking into account monotone class arguments, given F ∈ F_s, G ∈ Y^1_s, 1 ≤ i ≤ N, it is enough to prove that
\[
E^{Q}(F\, G\, W^i_t) = E^{Q}(F\, G\, W^i_s). \tag{4.2}
\]
Using the fact that W^i is an (F_t)-martingale and that E^{Q^ω}(G) is F_s-measurable by item a) 3) of Definition 2.10 (established above), the left-hand side of (4.2) gives
\[
E^{P}\big(F\, W^i_t\, E^{Q^\omega}(G)\big) = E^{P}\big(F\, W^i_s\, E^{Q^\omega}(G)\big),
\]
which constitutes the right-hand side of (4.2). This concludes the proof of the proposition.
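For intuition about the object just constructed: once ω is frozen, (DSDE)(γ, x_0) reduces to a classical one-dimensional SDE with a bounded non-degenerate coefficient, which can be approximated by a standard Euler–Maruyama scheme. The sketch below is illustrative only and not part of the proof; the coefficient `gamma` is a hypothetical stand-in for γ(t, ξ, ω) satisfying 0 < A_1 ≤ γ ≤ A_2.

```python
import numpy as np

def euler_maruyama(gamma, y0, T, n_steps, rng):
    """One Euler-Maruyama path of Y_t = Y_0 + \\int_0^t gamma(s, Y_s) dB_s,
    for a frozen realization omega (gamma is then a deterministic, bounded,
    non-degenerate coefficient)."""
    dt = T / n_steps
    y = np.empty(n_steps + 1)
    y[0] = y0
    for k in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))           # Brownian increment
        y[k + 1] = y[k] + gamma(k * dt, y[k]) * dB  # no drift: Y is a local martingale
    return y

rng = np.random.default_rng(0)
# hypothetical coefficient with 0.5 <= gamma <= 1.5
path = euler_maruyama(lambda t, x: 1.0 + 0.5 * np.tanh(x), 0.0, 1.0, 1000, rng)
```

Simulating many such paths (for many fixed ω) gives a Monte Carlo picture of the random kernel Q(·, ω).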

Proposition 4.4. We suppose the existence of random variables A_1, A_2 such that
\[
0 < A_1(\omega) \le \gamma(t,\xi,\omega) \le A_2(\omega), \quad \forall (t,\xi) \in [0,T] \times \mathbb R,\ \text{a.s.}
\]
Let Y be a weak-strong solution to (DSDE)(γ, x_0), and denote by (ν_t(dy, ·), t ∈ [0,T]) the µ-marginal weighted laws of the process Y.

1. There is a measurable function q : [0,T] × R × Ω → R_+ such that, dt dP-a.e., ν_t(dy, ·) = q_t(y, ·) dy. In other words, the µ-marginal weighted laws admit densities.

2. ∫_{[0,T]×R} q_t^2(y, ·) dt dy < ∞, dP-a.s.

3. q is an L^2(R)-valued progressively measurable process.

Proof. By 3) of Definition 2.10, the µ-marginal laws constitute an S′(R)-valued progressively measurable process. Consequently, 3. holds if 1. and 2. hold. Let
\[
B_t(\cdot,\omega) := \int_0^t \frac{dY_s(\cdot,\omega)}{\gamma(s, Y_s(\cdot,\omega), \omega)}.
\]
We denote again Q^ω := Q(·, ω) according to Definition 2.10, ω ∈ Ω.

Let ω ∈ Ω be fixed and let ϕ : [0,T] × R → R be a continuous function with compact support. We need to evaluate
\[
E^{Q^\omega}\left( \int_0^T \varphi(s, Y_s)\, Z_s\, ds \right), \tag{4.3}
\]
where Z_s = M_s \exp\left(\int_0^s e_0(Y_r)\, dr\right) and M_s = \mathcal E_s\left( \sum_{i=1}^N \int_0^\cdot e_i(Y_r)\, dW^i_r \right). M_s is smaller than or equal to \exp\left( \sum_{j=1}^N \int_0^s e_j(Y_r)\, dW^j_r \right), which equals
\[
\exp\left( \sum_{j=1}^N \left( W^j_s e_j(Y_s) - \int_0^s W^j_r (e_j)'(Y_r)\, dY_r \right) - \frac12 \int_0^s \left( \sum_{j=1}^N W^j_r (e_j)''(Y_r)\, \gamma^2(r, Y_r, \cdot) \right) dr \right), \tag{4.4}
\]
taking into account the fact that [Y, W^j] = 0 for every 1 ≤ j ≤ N, by Proposition 2.12. Denoting \|g\|_\infty := \sup_{t \in [0,T]} |g(t)| for a function g : [0,T] → R, (4.4) is smaller than or equal to
\[
\exp\left( \sum_{j=1}^N \|W^j\|_\infty \left( \|e_j\|_\infty + \frac{T}{2} \|(e_j)''\|_\infty A_2^2 \right) \right) \exp\left( - \int_0^s \left( \sum_{j=1}^N W^j_r (e_j)'(Y_r)\, \gamma(r, Y_r, \cdot) \right) dB_r \right).
\]
So (4.3) is bounded by
\[
\varrho(\omega)\, E^{Q^\omega}\left( \int_0^T |\varphi|(s, Y_s(\cdot,\omega))\, R_s(\cdot,\omega)\, ds \right), \tag{4.5}
\]

where
\[
\varrho(\omega) = \exp\left( T \|e_0\|_\infty + \sum_{i=1}^N \|W^i\|_\infty \|e_i\|_\infty + \frac{T A_2^2(\omega)}{2} \sum_{i=1}^N \left( \|W^i\|_\infty^2 \|(e_i)'\|_\infty^2 + \|W^i\|_\infty \|(e_i)''\|_\infty \right) \right)
\]
and R is the Q^ω-exponential martingale
\[
R_t(\cdot,\omega) = \exp\left( - \int_0^t \delta(r,\cdot,\omega)\, dB_r - \frac12 \int_0^t \delta^2(r,\cdot,\omega)\, dr \right),
\]
where \delta(r,\cdot,\omega) = \sum_{j=1}^N W^j_r (e_j)'(Y_r(\cdot,\omega))\, \gamma(r, Y_r(\cdot,\omega), \omega). So there is a random (depending on ω ∈ Ω) constant
\[
\varrho_1(\omega) := \mathrm{const}\left( T, \|W^j\|_\infty, \|e_j\|_\infty, \|(e_j)'\|_\infty, \|(e_j)''\|_\infty, 1 \le j \le N, A_2(\omega) \right), \tag{4.6}
\]
such that (4.5) is smaller than
\[
\varrho_1(\omega)\, E^{Q^\omega}\left( \int_0^T |\varphi(s, Y_s(\cdot,\omega))|\, ds\, R_T(\cdot,\omega) \right), \tag{4.7}
\]

where we recall that R(·, ω) is a Q^ω-martingale. By the Girsanov theorem, \tilde B_t(\cdot,\omega) = B_t(\cdot,\omega) + \int_0^t \delta(r,\cdot,\omega)\, dr is a \tilde Q^\omega-Brownian motion, where d\tilde Q^\omega = R_T(\cdot,\omega)\, dQ^\omega. At this point, the expectation in (4.7) gives
\[
E^{\tilde Q^\omega}\left( \int_0^T |\varphi|(s, Y_s(\cdot,\omega))\, ds \right), \tag{4.8}
\]
where
\[
Y_t(\cdot,\omega) = Y_0 + \int_0^t \gamma(s, Y_s(\cdot,\omega), \omega)\, d\tilde B_s - \int_0^t \gamma(s, Y_s(\cdot,\omega), \omega)\, \delta(s,\cdot,\omega)\, ds.
\]

For fixed ω ∈ Ω, δ is bounded by a random constant ̺_2(ω) of the type (4.6). Moreover, we keep in mind the non-degeneracy assumption on γ. By Exercise 7.3.3 of [27], (4.8) is bounded by ̺_3(ω) \|\varphi\|_{L^2([0,T]\times\mathbb R)}, where ̺_3(ω) again depends on the same quantities as in (4.6) and on Φ. So, for dP-almost all ω, the map
\[
\varphi \mapsto E^{Q^\omega}\left( \int_0^T \varphi(s, Y_s(\cdot,\omega))\, Z_s(\cdot,\omega)\, ds \right)
\]
extends to L^2([0,T] × R). Using Riesz' representation theorem, it is not difficult to show the existence of an L^2([0,T] × R) function (s, y) ↦ q_s(y, ω) which indeed constitutes the density of the family of µ-marginal weighted laws.

5 On the uniqueness of a Fokker-Planck type SPDE

The next result is an extension of Theorem 3.8 of [14] to the stochastic case. It is of independent interest, since it concerns a Fokker-Planck SPDE with possibly degenerate measurable coefficients.

Theorem 5.1. Let z_0 be a distribution in S′(R). Let z^1, z^2 be two measurable random fields belonging ω-a.s. to C([0,T], S′(R)) and such that z^1, z^2 : ]0,T] × Ω → M(R). Let a : [0,T] × R × Ω → R_+ be a bounded measurable random field such that, for any t ∈ [0,T], a(t, ·) is B([0,t]) ⊗ B(R) ⊗ F_t-measurable. We suppose moreover the following.

i) z^1 − z^2 ∈ L^2([0,T] × R) a.s.

ii) t ↦ (z^1 − z^2)(t, ·) is an (F_t)-progressively measurable S′(R)-valued process.

iii) ∫_0^T ||z^i(s, ·)||^2_{var} ds < ∞ a.s.

iv) z^1, z^2 are solutions to
\[
\begin{cases}
\partial_t z(t,\xi) = \partial^2_{\xi\xi}\big((a z)(t,\xi)\big) + z(t,\xi)\, \mu(dt,\xi), \\
z(0,\cdot) = z_0.
\end{cases} \tag{5.1}
\]

Then z^1 ≡ z^2.

Remark 5.2. By a solution of equation (5.1) we mean, as expected, the following: for every ϕ ∈ S(R) and every t ∈ [0,T],
\[
\int_{\mathbb R} \varphi(\xi)\, z(t, d\xi) = \langle z_0, \varphi \rangle + \int_0^t ds \int_{\mathbb R} a(s,\xi)\, \varphi''(\xi)\, z(s, d\xi) + \sum_{j=0}^N \int_0^t dW^j_s \int_{\mathbb R} \varphi(\xi)\, e_j(\xi)\, z(s, d\xi).
\]

Proof of Theorem 5.1. The proof makes use of arguments similar to those of Theorem 3.8 of [14] or Theorem 3.1 of [10], in a randomized form. The full proof is given in [24], Theorem 4.2; see also [7].

6 The non-degenerate case

We are now able to discuss the doubly probabilistic representation of a solution to (1.1) when ψ is non-degenerate, provided that this solution fulfills some properties.

Definition 6.1. • We will say that equation (1.1) (or ψ) is non-degenerate if on each compact there is a constant c_0 > 0 such that Φ ≥ c_0.

• We will say that equation (1.1) (or ψ) is degenerate if lim_{u→0+} Φ(u) = 0.

One typical example of a degenerate ψ is the case of ψ being strictly increasing after some zero. This notion was introduced in [6] and means the following: there is u_c ≥ 0 such that ψ|_{[0, u_c]} ≡ 0 and ψ is strictly increasing on ]u_c, +∞[.

Remark 6.2. 1. ψ is non-degenerate if and only if lim_{u→0+} Φ(u) > 0.

2. Of course, if ψ is strictly increasing after some zero with u_c > 0, then ψ is degenerate.

3. If ψ is degenerate, then ψ_κ(u) = (Φ^2(u) + κ)u is, for every κ > 0, non-degenerate.

As announced, the theorem below also holds when ψ is multi-valued.

Theorem 6.3. We suppose the following assumptions.

1. x_0 is a real probability measure.

2. ψ is non-degenerate.

3. There is only one random field X : [0,T] × R × Ω → R solution of (1.1) (see Definition 2.9) such that
\[
\int_{[0,T] \times \mathbb R} X^2(s,\xi)\, ds\, d\xi < \infty \quad \text{a.s.} \tag{6.1}
\]

Then there is a unique weak solution to the (DSNLD)(Φ, µ, x_0).

Remark 6.4. 1. An easy adaptation of Theorem 3.4 of [8] (taking into account e_0), when ψ is Lipschitz and e_0, ..., e_N belong to H^1, allows to show that there is a solution to (1.1) such that
\[
E\left( \int_{[0,T] \times \mathbb R} X^2(s,\xi)\, ds\, d\xi \right) < \infty.
\]
This holds even if x_0 belongs to H^{-1}(R). According to Theorem B.1, that solution is unique. In particular, item 3. of the statement of Theorem 6.3 holds.

2. Theorem 6.3 constitutes the converse of Theorem 3.3 when ψ is non-degenerate.

3. The theorem also holds if ψ is multi-valued. For implementing this, we need to adapt the techniques of [14].

4. As a side-effect of the proof of the weak-strong existence Proposition 4.1, the space (Ω_0, G, Q) can be chosen as Ω_0 = Ω_1 × Ω, Ω_1 = C([0,T]; R) × R, G = B(Ω_1) ⊗ F, Q(H × F) = ∫_{Ω_1 × Ω} ...

Proof. 1) We set γ(t, ξ, ω) = Φ(X(t, ξ, ω)). According to Proposition 4.1, there is a weak-strong solution Y to (DSDE)(γ, x_0). By Proposition 4.4, ω-a.s. the µ-marginal weighted laws of Y admit densities (q_t(ξ, ω), t ∈ ]0,T], ξ ∈ R, ω ∈ Ω) such that, dP-a.s., ∫_{[0,T]×R} ds dξ q_s^2(ξ, ·) < ∞.

2) Setting
\[
\nu_t(d\xi,\omega) = \begin{cases} q_t(\xi,\omega)\, d\xi, & t \in\, ]0,T], \\ x_0, & t = 0, \end{cases}
\]
ν is a solution to (5.1) with ν_0 = x_0 and a(t, ξ, ω) = Φ^2(X(t, ξ, ω)). This can be shown by applying Itô's formula, similarly as in the proof of Theorem 3.3.

3) On the other hand, X is obviously also a solution of (5.1), which in particular verifies (6.1). Consequently, z^1 = ν and z^2 = X verify items i), ii), iii) of Theorem 5.1. So Theorem 5.1 implies that ν ≡ X; this shows that Y provides a solution to (DSNLD)(Φ, µ, x_0).

4) Concerning uniqueness, let Y^1, Y^2 be two solutions to the (DSNLD) related to (Φ, µ, x_0). The corresponding random fields X^1, X^2 constitute the µ-marginal laws of Y^1, Y^2, respectively. Now Y^i, i = 1, 2, is a weak-strong solution of (DSDE)(γ_i, x_0) with γ_i(t, ξ, ω) = Φ(X^i(t, ξ, ω)), so by Proposition 4.4, X^i, i = 1, 2, fulfills (6.1). By Theorem 3.3, X^1 and X^2 are solutions to (1.1). By assumption 3. of the statement, X^1 = X^2. The conclusion follows by Proposition 4.1, which guarantees the uniqueness of the weak-strong solution of (DSDE)(γ, x_0) with γ = γ_1 = γ_2.

Remark 6.5. One side-effect of Theorem 6.3 is the following. Suppose ψ to be non-degenerate, and let X : [0,T] × R × Ω → R be a solution such that ∫_{[0,T]×R} X^2(s,ξ) ds dξ < ∞, dP-a.s. Then, for dP-almost all ω, we have the following.

i) X(t, ·, ω) ≥ 0 a.e., for every t ∈ [0,T].

ii) E( ∫_R X(t,ξ) dξ ) = 1 for every t ∈ [0,T], if e_0 = 0.

Remark 6.6. If (1.1) has a solution, not necessarily unique, then existence still holds for the (DSNLD) with respect to (Φ, µ, x_0).

7 The degenerate case

The idea consists in proceeding similarly to [6], which treated the case µ = 0 and where x_0 is absolutely continuous with bounded density. ψ will be assumed to be strictly increasing after some zero u_c ≥ 0, see Definition 6.1. We recall that if ψ is degenerate, then necessarily Φ(0) := lim_{x→0} Φ(x) = 0.

The theorem below concerns existence; we do not know of any uniqueness result in the degenerate case.

Theorem 7.1. We suppose the following.

1. The functions e_i, 1 ≤ i ≤ N, belong to H^1(R).

2. ψ : R → R is non-decreasing, Lipschitz and strictly increasing after some zero.

3. x_0 belongs to L^1(R) ∩ L^2(R).

Then there is a weak solution to the (DSNLD)(Φ, µ, x_0).

Remark 7.2. If u_c > 0, then ψ is necessarily degenerate, and moreover Φ restricted to [0, u_c] vanishes.
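The non-degenerate approximation used in the proof below, Φ_κ(u) = √(Φ²(u) + κ), can be illustrated directly: whatever the (possibly degenerate) Φ, the regularized coefficient is bounded below by √κ > 0, and ψ_κ(u) = Φ_κ²(u)·u = (Φ²(u) + κ)u, as in Remark 6.2.3. The toy Φ used here, vanishing on [0, u_c] with u_c = 1, is a hypothetical example, not one taken from the paper.

```python
import math

def make_phi_kappa(phi, kappa):
    """Non-degenerate regularization: Phi_kappa(u) = sqrt(Phi(u)^2 + kappa).
    Phi_kappa is bounded below by sqrt(kappa) > 0 whatever Phi is."""
    return lambda u: math.sqrt(phi(u) ** 2 + kappa)

# toy degenerate Phi, vanishing on [0, u_c] with u_c = 1 (hypothetical example)
phi = lambda u: max(u - 1.0, 0.0)
phi_kappa = make_phi_kappa(phi, 1e-2)
```

On the degeneracy region [0, 1] the original Φ vanishes while Φ_κ equals √κ = 0.1, so the approximating SDE never loses ellipticity.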

Proof (of Theorem 7.1).

1) We proceed by an approximation rendering Φ non-degenerate. Let κ > 0. We define Φ_κ : R → R_+ by Φ_κ(u) = \sqrt{\Phi^2(u) + \kappa}, and ψ_κ(u) = Φ_κ^2(u)·u. Let X^κ be the solution to (1.1) with ψ_κ instead of ψ. According to Theorem 6.3 and Remark 6.4 4., setting
\[
\tilde\Omega_1 = C([0,T], \mathbb R) \times \mathbb R, \quad Y(\omega_1, \omega) = \omega_1, \tag{7.1}
\]
and H the Borel σ-algebra of Ω̃_1, there are families of probability kernels Q^κ on H × Ω and measurable processes B^κ on Ω̃_0 = Ω̃_1 × Ω such that:

i) B^κ(·, ω) is a Q^κ(·, ω)-Brownian motion;

ii) Y is a (weak) solution, on (Ω̃_1, Q^κ(·, ω)), of
\[
Y_t = Y_0 + \int_0^t \Phi_\kappa(X^\kappa(s, Y_s, \omega))\, dB^\kappa_s(\cdot,\omega), \quad t \in [0,T];
\]

iii) Y_0 is distributed according to x_0 = X^κ(0, ·);

iv) the µ-marginal weighted laws of Y under Q^κ are (X^κ(t, ·)).

In agreement with Definition 3.1 and Definition 2.10, we need to show the existence of a suitable measurable space (Ω_1, H), a probability kernel Q on H × Ω, and a process B on Ω_0 := Ω_1 × Ω such that the following holds.

ii) Y is a (weak) solution, on (Ω_1, Q(·, ω)), of
\[
Y_t = Y_0 + \int_0^t \Phi(X(s, Y_s, \omega))\, dB_s(\cdot,\omega), \quad t \in [0,T],
\]
i.e. item 1) of Definition 2.10; moreover, items 2), 3) of the same Definition are fulfilled.

iii) Y_0 is distributed according to x_0.

iv) For every t ∈ ]0,T] and ϕ ∈ C_b(R), denoting Q^ω = Q(·, ω), we have
\[
\int_{\mathbb R} X(t,\xi)\, \varphi(\xi)\, d\xi = E^{Q^\omega}\left( \varphi(Y_t)\, \mathcal E_t\left( \int_0^\cdot \mu(ds, Y_s) \right) \right).
\]

2) We now show that X^κ approaches X in a suitable sense as κ → 0, where X is the solution to (1.1). This is the object of Lemma 7.3 below.

Lemma 7.3. Under the assumptions of Theorem 7.1 and according to Remark B.2, let X (resp. X^κ) be a solution of (1.1) verifying (2.1) with ψ(u) = uΦ^2(u) (resp. ψ_κ(u) = u(Φ^2(u) + κ)), for u > 0. We have the following.

a) lim_{κ→0} sup_{t∈[0,T]} E\big( \|X^κ(t,·) − X(t,·)\|^2_{H^{-1}} \big) = 0;

b) lim_{κ→0} E\big( \int_0^T dt\, \|ψ(X^κ(t,·)) − ψ(X(t,·))\|^2_{L^2} \big) = 0;

c) lim_{κ→0} κ\, E\big( \int_{[0,T]×R} dt\, dξ\, (X^κ(t,ξ) − X(t,ξ))^2 \big) = 0.

Remark 7.4. 1) Of course, a) implies
\[
\lim_{\kappa \to 0} E\left( \int_0^T dt\, \|X^\kappa(t,\cdot) - X(t,\cdot)\|^2_{H^{-1}} \right) = 0.
\]

2) In particular, Lemma 7.3 b) implies that for each sequence (κ_n) → 0 there is a subsequence, still denoted by the same notation, such that
\[
\int_{[0,T] \times \mathbb R} \big( \psi(X^{\kappa_n}(t,\xi)) - \psi(X(t,\xi)) \big)^2\, dt\, d\xi \underset{n \to \infty}{\longrightarrow} 0 \quad \text{a.s.}
\]

3) For every t ∈ [0,T], X(t, ·) ≥ 0, dξ ⊗ dP-a.e. Indeed, for this it will be enough to show that, a.s.,
\[
\int_{\mathbb R} d\xi\, \varphi(\xi)\, X(t,\xi) \ge 0 \quad \text{for every non-negative } \varphi \in \mathcal S(\mathbb R), \tag{7.2}
\]
for every t ∈ [0,T]. Since X ∈ C([0,T]; S′(R)), it will be enough to show (7.2) for almost all t ∈ [0,T]. This holds true since item 1) of this Remark 7.4 implies the existence of a sequence (κ_n) such that
\[
\int_0^T dt\, \|X^{\kappa_n}(t,\cdot) - X(t,\cdot)\|^2_{H^{-1}} \underset{n \to \infty}{\longrightarrow} 0 \quad \text{a.s.}
\]

4) Since ψ is strictly increasing after u_c, for P-almost all ω and almost all (t, ξ) ∈ [0,T] × R there is a sequence (κ_n) such that
\[
\big( X^{\kappa_n}(t,\xi) - X(t,\xi) \big)\, 1_{\{X(t,\xi) > u_c\}} \underset{n \to \infty}{\longrightarrow} 0.
\]
This follows from item 2) of Remark 7.4. Since Φ^2(u) = 0 for 0 ≤ u ≤ u_c and X is a.e. non-negative, this implies that, dt dξ dP-a.e., we have
\[
\Phi^2(X(t,\xi)) \big( X^{\kappa_n}(t,\xi) - X(t,\xi) \big) \underset{n \to \infty}{\longrightarrow} 0. \tag{7.3}
\]

Proof (of Lemma 7.3). By Remark B.2 3. we can write, dP-a.s., the following H^{-1}(R)-valued equality:
\[
(X^\kappa - X)(t,\cdot) = \int_0^t ds\, \big( \psi_\kappa(X^\kappa(s,\cdot)) - \psi(X(s,\cdot)) \big)'' + \sum_{i=0}^N \int_0^t (X^\kappa(s,\cdot) - X(s,\cdot))\, e_i\, dW^i_s.
\]
So
\[
(I-\Delta)^{-1}(X^\kappa - X)(t,\cdot) = - \int_0^t ds\, \big( \psi_\kappa(X^\kappa(s,\cdot)) - \psi(X(s,\cdot)) \big) + \int_0^t ds\, (I-\Delta)^{-1} \big( \psi_\kappa(X^\kappa(s,\cdot)) - \psi(X(s,\cdot)) \big) + \sum_{i=0}^N \int_0^t (I-\Delta)^{-1} \big( e_i (X^\kappa(s,\cdot) - X(s,\cdot)) \big)\, dW^i_s.
\]
After regularization and an application of Itô calculus with values in H^{-1}, we will be able to estimate g_\kappa(t) := \|(X^\kappa - X)(t,\cdot)\|^2_{H^{-1}}. Taking advantage of the form of ψ_κ − ψ, we obtain
\[
g_\kappa(t) = \sum_{i=1}^N \int_0^t \| e_i (X^\kappa - X)(s,\cdot) \|^2_{H^{-1}}\, ds - 2 \int_0^t \langle (X^\kappa - X)(s,\cdot),\, \psi_\kappa(X^\kappa(s,\cdot)) - \psi(X(s,\cdot)) \rangle_{L^2}\, ds + 2 \int_0^t ds\, \langle (X^\kappa - X)(s,\cdot),\, (I-\Delta)^{-1} \big( \psi_\kappa(X^\kappa(s,\cdot)) - \psi(X(s,\cdot)) \big) \rangle_{L^2} + 2 \int_0^t ds\, \langle (X^\kappa - X)(s,\cdot),\, (I-\Delta)^{-1} \big( e_0 (X^\kappa - X)(s,\cdot) \big) \rangle_{L^2} + M^\kappa_t, \tag{7.4}
\]
where M^κ is the local martingale
\[
M^\kappa_t = 2 \sum_{i=1}^N \int_0^t \langle (I-\Delta)^{-1}(X^\kappa - X)(s,\cdot),\, (X^\kappa - X)(s,\cdot)\, e_i \rangle_{L^2}\, dW^i_s.
\]
Indeed, M^κ is a well-defined local martingale because, taking into account (B.1) and Remark B.2, by classical arguments we can prove that
\[
\sum_{i=1}^N \int_0^t \big| \langle (I-\Delta)^{-1}(X^\kappa - X)(s,\cdot),\, (X^\kappa - X)(s,\cdot)\, e_i \rangle_{L^2} \big|^2\, ds < \infty \quad \text{a.s.}
\]
Since ψ_κ(u) − ψ(u) = κu, (7.4) gives
\[
g_\kappa(t) + 2 \int_0^t \langle (X^\kappa - X)(s,\cdot),\, \psi(X^\kappa(s,\cdot)) - \psi(X(s,\cdot)) \rangle_{L^2}\, ds + 2\kappa \int_0^t \langle (X^\kappa - X)(s,\cdot),\, (X^\kappa - X)(s,\cdot) \rangle_{L^2}\, ds \le - 2\kappa \int_0^t ds\, \langle (X^\kappa - X)(s,\cdot),\, X(s,\cdot) \rangle_{L^2} + \sum_{i=1}^N \int_0^t \| e_i (X^\kappa - X)(s,\cdot) \|^2_{H^{-1}}\, ds + 2 \int_0^t ds\, \langle (I-\Delta)^{-1}(X^\kappa - X)(s,\cdot),\, \psi(X^\kappa(s,\cdot)) - \psi(X(s,\cdot)) \rangle_{L^2} + 2\kappa \int_0^t ds\, \langle (I-\Delta)^{-1}(X^\kappa - X)(s,\cdot),\, (X^\kappa - X)(s,\cdot) \rangle_{L^2} + 2\kappa \int_0^t ds\, \langle (I-\Delta)^{-1}(X^\kappa - X)(s,\cdot),\, X(s,\cdot) \rangle_{L^2} + 2 \int_0^t ds\, \langle (I-\Delta)^{-1}(X^\kappa - X)(s,\cdot),\, e_0 (X^\kappa - X)(s,\cdot) \rangle_{L^2} + M^\kappa_t.
\]

We use Cauchy-Schwarz and the inequality 2\sqrt\kappa b\, \sqrt\kappa c \le \kappa b^2 + \kappa c^2, first with b = \|X^\kappa(s,\cdot) - X(s,\cdot)\|_{L^2}, c = \|X(s,\cdot)\|_{L^2}, and then with b = \|X^\kappa(s,\cdot) - X(s,\cdot)\|_{H^{-2}}, c = \|X(s,\cdot)\|_{L^2}. We also take into account the H^{-1}-multiplier property of e_i, 0 ≤ i ≤ N. Consequently, there is a constant C(e), depending on (e_i, 0 ≤ i ≤ N), such that
\[
g_\kappa(t) + 2 \int_0^t \langle (X^\kappa - X)(s,\cdot),\, \psi(X^\kappa(s,\cdot)) - \psi(X(s,\cdot)) \rangle_{L^2}\, ds + 2\kappa \int_0^t \|X^\kappa(s,\cdot) - X(s,\cdot)\|^2_{L^2}\, ds \le \kappa \int_0^t \|(X^\kappa - X)(s,\cdot)\|^2_{L^2}\, ds + \kappa \int_0^t ds\, \|X(s,\cdot)\|^2_{L^2} + C(e) \int_0^t ds\, \|X^\kappa(s,\cdot) - X(s,\cdot)\|^2_{H^{-1}} + 2 \int_0^t \|(X^\kappa - X)(s,\cdot)\|_{H^{-2}}\, \|\psi(X^\kappa(s,\cdot)) - \psi(X(s,\cdot))\|_{L^2}\, ds + 2\kappa \int_0^t ds\, g_\kappa(s) + \kappa \int_0^t ds\, \|(X^\kappa - X)(s,\cdot)\|^2_{H^{-2}} + \kappa \int_0^t ds\, \|X(s,\cdot)\|^2_{L^2} + M^\kappa_t. \tag{7.5}
\]

Since ψ is Lipschitz, it follows that (ψ(r) − ψ(r_1))(r − r_1) \ge \alpha\, (\psi(r) - \psi(r_1))^2 for any r, r_1 ≥ 0 and some α > 0. Consequently, the inequality 2bc \le \alpha b^2 + \frac{c^2}{\alpha}, b, c ∈ R, and the fact that \|\cdot\|_{H^{-2}} \le \|\cdot\|_{H^{-1}} give
\[
2 \int_0^t ds\, \|(X^\kappa - X)(s,\cdot)\|_{H^{-2}}\, \|\psi(X^\kappa(s,\cdot)) - \psi(X(s,\cdot))\|_{L^2} \le \int_0^t ds\, \alpha\, g_\kappa(s) + \int_0^t ds\, \langle \psi(X^\kappa(s,\cdot)) - \psi(X(s,\cdot)),\, X^\kappa(s,\cdot) - X(s,\cdot) \rangle_{L^2}.
\]
So (7.5) yields
\[
g_\kappa(t) + \int_0^t \langle X^\kappa(s,\cdot) - X(s,\cdot),\, \psi(X^\kappa(s,\cdot)) - \psi(X(s,\cdot)) \rangle_{L^2}\, ds + \kappa \int_0^t \|X^\kappa(s,\cdot) - X(s,\cdot)\|^2_{L^2}\, ds \le 2\kappa \int_0^t ds\, \|X(s,\cdot)\|^2_{L^2} + M^\kappa_t + (C(e) + \alpha + 3\kappa) \int_0^t g_\kappa(s)\, ds. \tag{7.6}
\]

Taking the expectation we get
\[
E(g_\kappa(t)) \le (C(e) + \alpha + 3\kappa) \int_0^t E(g_\kappa(s))\, ds + 2\kappa \int_0^t E\big( \|X(s,\cdot)\|^2_{L^2} \big)\, ds,
\]
for every t ∈ [0,T]. By Gronwall's lemma we get
\[
E(g_\kappa(t)) \le 2\kappa\, E\left( \int_0^T ds\, \|X(s,\cdot)\|^2_{L^2} \right) e^{(C(e)+\alpha+3\kappa)T}, \quad \forall t \in [0,T]. \tag{7.7}
\]
Taking the supremum over t and letting κ → 0, item a) of Lemma 7.3 is now established.

We go on with item b). Since ψ is Lipschitz, (7.6) implies that, for t ∈ [0,T],

\[
\int_0^t ds\, \|\psi(X^\kappa(s,\cdot)) - \psi(X(s,\cdot))\|^2_{L^2} \le \frac{1}{\alpha} \int_0^t ds\, \langle \psi(X^\kappa(s,\cdot)) - \psi(X(s,\cdot)),\, X^\kappa(s,\cdot) - X(s,\cdot) \rangle_{L^2} \le \frac{2\kappa}{\alpha} \int_0^t ds\, \|X(s,\cdot)\|^2_{L^2} + C(e,\alpha) \int_0^t g_\kappa(s)\, ds + M^\kappa_t,
\]
where C(e, α) is a constant depending on e_i, 0 ≤ i ≤ N, and α. Taking the expectation at t = T, we get
\[
E\left( \int_0^T ds\, \|\psi(X^\kappa(s,\cdot)) - \psi(X(s,\cdot))\|^2_{L^2} \right) \le \frac{2\kappa}{\alpha}\, E\left( \int_0^T ds\, \|X(s,\cdot)\|^2_{L^2} \right) + C(e,\alpha) \int_0^T E(g_\kappa(s))\, ds.
\]
Letting κ → 0, (2.1) and (7.7) provide the conclusion of item b) of Lemma 7.3.

c) Coming back to (7.6) with t = T, we have
\[
\kappa \int_0^T ds\, \|X^\kappa(s,\cdot) - X(s,\cdot)\|^2_{L^2} \le 2\kappa \int_0^T ds\, \|X(s,\cdot)\|^2_{L^2} + M^\kappa_T + (C(e) + \alpha + 3\kappa) \int_0^T ds\, g_\kappa(s).
\]
Taking the expectation, we have
\[
\kappa\, E\left( \int_0^T ds\, \|X^\kappa(s,\cdot) - X(s,\cdot)\|^2_{L^2} \right) \le 2\kappa\, E\left( \int_0^T ds\, \|X(s,\cdot)\|^2_{L^2} \right) + (C(e) + \alpha + 3\kappa)\, E\left( \int_0^T g_\kappa(s)\, ds \right).
\]
Using item a) and the fact that E( ∫_{[0,T]×R} X^2(s,ξ) ds dξ ) < ∞, the result follows. Lemma 7.3 is now completely established.

We now need another intermediate lemma, concerning the paths of a solution to (1.1).

Lemma 7.5. For almost all ω ∈ Ω and almost all t ∈ [0,T],

1) ξ ↦ ψ(X(t, ξ, ω)) belongs to H^1(R);

2) ξ ↦ Φ(X(t, ξ, ω)) is continuous.

Proof. Item 1) is established in [8]; see Definition 3.2 and Theorem 3.4 there. Item 1) implies that ξ ↦ ψ(X(t, ξ, ω)) is continuous; see also Remark B.2 1. By the same arguments as in Proposition 4.22 of [6], we can deduce item 2).

3) We go on with the proof of Theorem 7.1. We keep in mind items i), ii), iii), iv) stated at the beginning of step 1) of the proof. Since Φ is bounded, for P-almost all ω, using the Burkholder-Davis-Gundy inequality one obtains
\[
E^{Q^\kappa(\cdot,\omega)}\big( (Y_t - Y_s)^4 \big) \le \mathrm{const}\, (t-s)^2, \tag{7.8}
\]
where const does not depend on ω. On the other hand, under each Q^κ(·, ω), Y_0 is distributed according to x_0.
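The fourth-moment bound (7.8) can be checked numerically in the simplest case Φ ≡ 1, where Y − Y_0 is a Brownian motion and E(Y_t − Y_s)^4 = 3(t − s)^2 exactly. The Monte Carlo sketch below is an illustration of this bound only, not of the actual process of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
t_minus_s = 0.5
# for Brownian motion, Y_t - Y_s ~ N(0, t - s), so E (Y_t - Y_s)^4 = 3 (t - s)^2
increments = rng.normal(0.0, np.sqrt(t_minus_s), size=500_000)
fourth_moment = np.mean(increments ** 4)
# this realizes the Kolmogorov-type bound const * (t - s)^{1 + beta} with beta = 1
```

The estimated moment should be close to 3(t − s)^2 = 0.75, with Monte Carlo error of order 10^{-2}.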

At this point we need a version of the Kolmogorov-Čentsov theorem adapted to stable convergence. Let Ω̃_0 = Ω̃_1 × Ω as at the beginning of the proof of Theorem 7.1. We recall that Ω̃_1 = C([0,T]) × R, Y(ω_1, ω) = ω_1, and H is the Borel σ-field on Ω̃_1.

Lemma 7.6. Let Q^κ(·, ω) be a sequence of random kernels on H × Ω. Let us denote again by Q^κ the probabilities on (Ω̃_0, H ⊗ F) given by Q^κ(dω_1, ω) P(dω). Suppose the following.

• The sequence of marginal laws of the probabilities Q^κ at zero is tight.

• There are α, β > 0 such that
\[
E^{Q^\kappa(\cdot,\omega)} |Y_t - Y_s|^\alpha \le C(\omega)(t-s)^{1+\beta}, \quad 0 \le s \le t \le T,
\]
for some positive P-integrable random constant C.

Then there is a random kernel Q^∞ on H × Ω and a subsequence (κ_n) such that, for every bounded continuous functional G : Ω̃_1 → R and every bounded F-measurable r.v. F : Ω → R, we have
\[
\int_\Omega F(\omega)\, dP(\omega) \int_{\tilde\Omega_1} G(Y(\omega_1))\, Q^{\kappa_n}(d\omega_1, \omega) \underset{n \to \infty}{\longrightarrow} \int_\Omega F(\omega)\, dP(\omega) \int_{\tilde\Omega_1} G(Y(\omega_1))\, Q^\infty(d\omega_1, \omega). \tag{7.9}
\]

Proof. Taking the expectation with respect to P, we obtain
\[
E^{Q^\kappa} |Y_t - Y_s|^\alpha \le C_0 (t-s)^{1+\beta}, \quad 0 \le s \le t \le T,
\]
where C_0 is the expectation of C. First, by usual arguments such as the Chebyshev inequality, one can show the following:
\[
\lim_{\lambda \to \infty} \sup_\kappa Q^\kappa \big\{ (\omega_1, \omega) \,\big|\, |(W^1, \ldots, W^N)(\omega)(0)| > \lambda;\ |\omega_1(0)| > \lambda \big\} = 0,
\]
\[
\lim_{\delta \to 0} \sup_\kappa Q^\kappa \big\{ (\omega_1, \omega) \,\big|\, m\big((W^1, \ldots, W^N, \omega_1); \delta\big) > \varepsilon \big\} = 0, \quad \forall \varepsilon > 0,
\]
where m denotes the modulus of continuity. By Theorem 4.10 of [18], the sequence of probabilities Q^κ, κ > 0, on Ω̃_1 × Ω is tight. Let Q^{κ_n} be a subsequence converging weakly to a probability Q^∞ on H ⊗ F. Since F is separable and C([0,T])^N, which is the state space of the process W, is a Polish space equipped with its Borel σ-algebra, according to [23] it is possible to disintegrate Q^∞: there is a random kernel Q^∞(·, ω) such that, for every bounded continuous functional G : Ω̃_1 → R and every bounded continuous F̃ : C([0,T])^N → R, (7.9) holds with F = F̃(W), where W = (W^1, ..., W^N). Since bounded continuous functionals F̃ are dense in L^2(C([0,T])^N) equipped with the Wiener measure, (7.9) also holds for any bounded F-measurable r.v. F, with Q^∞(dω_1, dω) = Q^∞(dω_1, ω) P(dω).

By (7.8), we can apply Lemma 7.6 with α = 4, β = 1; we consider the corresponding Q^{κ_n}(·, ω) and the limiting random kernel Q(·, ω) := Q^∞(·, ω). We also define the probability Q := Q^∞ on Ω̃_0 = Ω̃_1 × Ω according to the conventions introduced before Definition 2.3. In the sequel we denote again Q^κ(dω_1, dω) := Q^κ(dω_1, ω) dP(ω), and also Q^{ω,κ} := Q^κ(·, ω), Q^ω := Q(·, ω).

From Lemma 7.6 we derive the following.

Corollary 7.7. Let F : Ω̃_1 × Ω → R be a bounded random element such that, for almost all ω ∈ Ω, F(·, ω) ∈ C(Ω̃_1). Then
\[
\int_\Omega dP(\omega) \int_{\tilde\Omega_1} \big( dQ^{\omega,\kappa_n}(\omega_1) - dQ^\omega(\omega_1) \big)\, F(Y, \omega)
\]
converges to zero.

Proof. See Appendix A.

We need here a technical lemma.

Lemma 7.8. Let t ∈ [0,T], p ∈ R.

1. There is C(p) > 0 such that
\[
E^{Q^\kappa}\left( \mathcal E_t\left( \int_0^\cdot \mu(ds, Y_s) \right)^p \right) \le C(p), \quad \forall \kappa > 0.
\]

2. For almost all ω ∈ Ω and every p ∈ R, there is a random constant C(p, ω) such that
\[
E^{Q^{\omega,\kappa}}\left( \mathcal E_t\left( \int_0^\cdot \mu(ds, Y_s) \right)^p \right) \le C(p, \omega), \quad \forall \kappa > 0. \tag{7.10}
\]

Proof. Without restriction of generality, we can of course suppose e_0 = 0.

1. We can write
\[
\mathcal E_t\left( \int_0^\cdot \mu(ds, Y_s) \right)^p = \mathcal E_t\left( p \int_0^\cdot \mu(ds, Y_s) \right) \exp\left( \frac{p^2 - p}{2} \sum_{i=1}^N \int_0^t e_i(Y_s)^2\, ds \right) \le \mathcal E_t\left( p \int_0^\cdot \mu(ds, Y_s) \right) \exp\left( T\, \frac{p^2 - p}{2} \sum_{i=1}^N \|e_i\|_\infty^2 \right).
\]
Since p \int_0^t \mu(ds, Y_s) defines a (G_t)-Q^κ-martingale, the result follows.

2. Let ω ∈ Ω outside a P-null set. The integrand of the expectation in (7.10) equals \exp(J_1 + J_2), where
\[
J_1 := p \sum_{i=1}^N \left( W^i_t e_i(Y_t) - \frac12 \int_0^t e_i(Y_s)^2\, ds - \frac12 \int_0^t W^i_s (e_i)''(Y_s)\, \Phi_\kappa^2(X^\kappa(s, Y_s, \omega))\, ds \right)
\]
and J_2 := -p \sum_{i=1}^N \int_0^t W^i_s (e_i)'(Y_s)\, dY_s. For each ω, \exp(J_1) is bounded, so it remains to prove the existence of a random constant C(p, ω) such that, for every 1 ≤ i ≤ N,
\[
E^{Q^{\omega,\kappa}}\left( \exp\left( -p \int_0^t W^i_s (e_i)'(Y_s)\, dY_s \right) \right) \le C(p, \omega). \tag{7.11}
\]
Since -p \int_0^t W^i_s (e_i)'(Y_s)\, dY_s is a Q^{ω,κ}-martingale,
\[
\mathcal E^\kappa_t := \exp\left( -p \int_0^t W^i_s (e_i)'(Y_s)\, dY_s - \frac{p^2}{2} \int_0^t (W^i_s)^2 \big((e_i)'\big)^2(Y_s)\, \Phi_\kappa^2(X^\kappa(s, Y_s, \omega))\, ds \right)
\]
is a Q^{ω,κ}-exponential martingale, and the left-hand side of (7.11) is bounded by
\[
E^{Q^{\omega,\kappa}}\left( \mathcal E^\kappa_t \exp\left( \frac{p^2}{2} \int_0^t (W^i_s)^2 \big((e_i)'\big)^2(Y_s)\, \Phi_\kappa^2(X^\kappa(s, Y_s, \omega))\, ds \right) \right) \le C(p, \cdot) := \exp\left( \frac{p^2}{2} \|(e_i)'\|_\infty^2 \big( \|\Phi\|_\infty^2 + 1 \big) \int_0^T (W^i_s)^2\, ds \right).
\]
This concludes the proof of Lemma 7.8.

Lemma 7.9. We fix ω ∈ Ω outside some P-null set. Let ϕ : [0,T] × R → R be continuous with compact support. The random variables
\[
E^{Q^{\omega,\kappa}}\left( \int_0^T \big| \Phi_\kappa(X^\kappa(r, Y_r, \omega)) - \Phi(X(r, Y_r, \omega)) \big|\, \varphi(r, Y_r)\, dr \right)
\]
converge to zero a.s. and in L^p(Ω, P) for every p ≥ 1, as κ → 0.

Proof. Let ω ∈ Ω. Since ϕ has compact support, by Cauchy-Schwarz with respect to the measure ϕ(r, Y_r) dr on [0,T], it is enough to prove that
\[
E^{Q^{\omega,\kappa}}\left( \int_0^T \big( \Phi_\kappa(X^\kappa(r, Y_r, \omega)) - \Phi(X(r, Y_r, \omega)) \big)^2\, \varphi(r, Y_r)\, dr \right) \tag{7.12}
\]
converges to zero. Since Φ is bounded, it is enough to prove the convergence to zero for almost all ω ∈ Ω. In order not to overload the notation, in this proof we will omit the argument ω of Y. By Fubini's theorem, the left-hand side of (7.12) equals
\[
\int_0^T dr\, E^{Q^{\omega,\kappa}}\left( \big( \Phi_\kappa(X^\kappa(r, Y_r, \omega)) - \Phi(X(r, Y_r, \omega)) \big)^2\, \varphi(r, Y_r) \right).
\]
Using also the Lebesgue dominated convergence theorem, given a sequence (κ_n) with κ_n → 0, it is enough to find a subsequence (κ_{n_ℓ}) such that, for all r ∈ [0,T] outside a possible Lebesgue null set,
\[
E^{Q^{\omega,\kappa_{n_\ell}}}\left( \big( \Phi_{\kappa_{n_\ell}}(X^{\kappa_{n_\ell}}(r, Y_r, \omega)) - \Phi(X(r, Y_r, \omega)) \big)^2\, \varphi(r, Y_r) \right) \underset{\ell \to \infty}{\longrightarrow} 0.
\]
We set Z_r(\omega_1, \omega) = \mathcal E_r\left( \int_0^\cdot \mu(\omega)(ds, Y_s(\omega_1)) \right). We will substitute from now on (n_ℓ)
