Convergence of the Finite Volume Method for stochastic
scalar balance laws with additive noise
Sylvain Dotti∗ February 11, 2020
Abstract
I prove the convergence of the explicit-in-time Finite Volume method with monotone fluxes for the approximation of scalar first-order conservation laws with locally Lipschitz continuous flux and additive noise. I use the standard CFL condition on a set of probability close to one, obtained through a martingale exponential inequality. Then, with the help of stopping times on that set, I apply theorems of convergence for approximate kinetic solutions of conservation laws with stochastic forcing. I conclude with the convergence in probability of the Finite Volume scheme towards the unique solution of the continuous-time equation.
Keywords: Finite Volume Method, Stochastic scalar conservation law, Kinetic formulation
MSC Number: 65M08 (35L60 35L65 35R60 60H15 65M12)
CONFLICT OF INTEREST
No conflict of interest exists. I wish to confirm that there are no known conflicts of inter-est associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
1 Introduction
Stochastic first-order scalar conservation law. Let (Ω, F , P, (Ft), (βk(t))) be a
stochastic basis and let T > 0. Consider the first-order scalar conservation law with stochastic forcing
du(x, t) + div(A(u(x, t))) dt = Φ dW(t), x ∈ T^N, t ∈ (0, T). (1.1)
Equation (1.1) is periodic in the space variable: x ∈ T^N, where T^N is the N-dimensional
torus. The flux function A in (1.1) is supposed to be of class C^2: A ∈ C^2(R; R^N). I assume that A and its derivatives have at most polynomial growth. Without loss of generality, I will also assume that A(0) = 0. The right-hand side of (1.1) is a stochastic increment in infinite dimension. It is defined as follows (see [1] for the general theory): W(t) is a cylindrical Wiener process, W(t) = Σ_{k≥1} β_k(t) e_k, where the coefficients β_k are independent Brownian motions and (e_k)_{k≥1} is a complete orthonormal system of a Hilbert space H. L2(H, R) denotes the set of Hilbert-Schmidt operators from the Hilbert space H to the space R. I assume
Φ ∈ L2(H, R). (1.2)
∗CEMOI, Université de la Réunion, 15 Avenue René Cassin, CS 92003, 97744 Saint-Denis Cedex 9, La Réunion, France
The Cauchy Problem. Let us quote the main results:
In [2], E, Khanin, Mazin, Sinai proved uniqueness and existence of the solution of the stochastic Burgers equation with additive noise carried by a cylindrical Wiener process. They used a periodic solution in space dimension one, x ∈ T, in order to prove the existence of an invariant measure.
In [3], Kim proved uniqueness and existence of the solution for a more general non-linear flux, with the space variable x ∈ R and a real Brownian motion.
In [4], Feng and Nualart proved uniqueness of a solution in space dimension N ∈ N∗, while existence was proved only in space dimension one. They used for the first time a multiplicative noise.
In [5], Bauzet, Vallet and Wittbold proved uniqueness and existence of the solution for a non-linear flux, with the space variable x ∈ R^N and a multiplicative noise driven by a real Brownian motion.
In [6], Debussche and Vovelle proved uniqueness and existence of the solution for a non-linear flux and a multiplicative noise driven by a cylindrical Wiener process. Their solution is periodic in space: x ∈ TN.
All the previous solutions are entropy solutions as defined by Kruzkov in [7]. While similar, slight differences in assumptions or formulations always exist. For example, [5] follows the measure-valued formulation of DiPerna [8], while [6] follows the kinetic formulation of Lions, Perthame, Tadmor [9].
They all first proved uniqueness, then existence via the approximation given by the stochastic parabolic equation.
In [10], we followed the works of [6] by using a kinetic formulation and a multiplicative noise driven by a cylindrical Wiener process, and by defining a periodic solution for the space variable x ∈ T^N. We proved uniqueness of the solution and gave a general framework for the convergence of approximate solutions towards the unique solution.
Approximations by the Finite Volume Method. Let us quote the book of Eymard, Gallouët, Herbin [11] for the general theory and the lecture notes of Vovelle [12] for a quick introduction to the theory, both in the deterministic case.
For the approximation of hyperbolic scalar conservation laws with a stochastic source term by the Finite Volume scheme, few results exist:
• in space dimension 1, with strongly monotone fluxes, [13] proved the convergence of a semi-discrete Finite Volume approximation towards the solution defined in [4],
• in space dimension N ≥ 1, [14] proved the convergence of a fully discrete flux-splitting Finite Volume approximation towards the solution defined in [5],
• in space dimension N ≥ 1, with monotone fluxes, [15] proved the convergence of a fully discrete Finite Volume approximation towards the solution defined in [5],
• in space dimension N ≥ 1, with monotone fluxes, [16] proved the convergence of a fully discrete Finite Volume approximation towards the solution defined in [10], in the case of compactly supported multiplicative noise.
Some comments about my assumptions. In [16], we used a stronger constraint on Φ, namely Φ compactly supported. The aim was to proceed as if A were globally Lipschitz continuous, in order to have a CFL condition for the Finite Volume scheme. In [14] and [15], Bauzet, Charrier and Gallouët used a Lipschitz continuous flux with the same aim. Here, I still make an assumption stronger than in the continuous case (see [10]): I take an additive noise with coefficient Φ ∈ L2(H, R) and I keep the flux locally Lipschitz continuous. To have the CFL condition of the Finite Volume scheme, I will work on a subset of Ω where the approximate solutions remain in a compact subset of R. The probability of the complement of that subset can be made as small as wished. In this way, I can work as if A were globally Lipschitz continuous, at least on a subset of Ω.
I hope that this result and this method (truncation on the probabilistic space Ω) will help to prove the convergence of the Finite Volume method in the case of multiplicative noise with bounded multiplier using comparisons of diffusion processes.
Kinetic formulation. To prove the convergence of the Finite Volume method with monotone fluxes, I will use the companion paper [10] and a kinetic formulation of the Finite Volume scheme. The subject of [10] is the convergence of approximations to (1.1) in the context of the kinetic formulation of scalar conservation laws. Such kinetic formulations have been developed in [9, 17, 18, 19, 20]. In [18], a kinetic formulation of Finite Volume E-schemes is given (and applied in particular to obtain sharp CFL criteria). The use of the kinetic formulation in [16] already improved the result of [15] by removing a constraint of the scheme: their time step had to be negligible compared to the space step to obtain convergence. Here I modify the sequence of generalized approximate kinetic solutions in the following way: I keep the sequence of [16] on a subspace of Ω whose probability tends to one as a parameter λ tends to infinity, multiplying by the indicator function of a time interval ended by a stopping time. In that way, verifying the assumptions of convergence given by Theorem 2.2, taken from [10], I get the convergence in probability of the approximate solution given by the Finite Volume scheme.
Plan of the paper. The plan of the paper is the following one. In the preliminary Section 2, following [10], I give the definition of a solution of (1.1), give a more general definition of the so-called generalized solution, namely the generalized solution up to a stopping time, and give the corresponding result of convergence of approximate solutions. In Section 3, I describe the kind of approximation to (1.1) by the Finite Volume method that I consider here.
In Section 4 I establish the kinetic formulation of the scheme. I find a subspace of Ω on which the numerical values given by the scheme are bounded, in order to define the proper CFL condition of the scheme. This subspace is found by the use of an exponential martingale inequality.
In Section 5, I define the sequence (f_δ) of generalized approximate solutions up to a stopping time, the associated sequence of Young measures (ν_δ) and the sequence of kinetic measures (m_δ). I prove the tightness of (ν_δ) and (m_δ).
In Section 6, by the use of a numerical kinetic equation, I prove that (f_δ) verifies an approximate kinetic time-continuous equation.
In Section 7, I conclude with the convergence in probability of the approximate solution given by the Finite Volume scheme towards the unique solution of (1.1).
2 Generalized solutions, approximate solutions
The object of this section is to re-write several results concerning the solutions of the Cauchy Problem associated to (1.1) and their approximations proved in [10] with slight modifications in order to be used for stopping times.
2.1 Solutions
Definition 2.1 (Random measure). Let M_b^+(T^N × [0, T] × R) be the set of bounded non-negative Borel measures. If m is a map from Ω to M_b^+(T^N × [0, T] × R) such that, for each continuous and bounded function φ on T^N × [0, T] × R, ⟨m, φ⟩ is a random variable, then I say that m is a random measure on T^N × [0, T] × R.
Definition 2.2 (Solution). Let u0 ∈ L^∞(T^N). An L^1(T^N)-valued stochastic process (u(t))_{t∈[0,T]} is said to be a solution to (1.1) with initial datum u0 if u and 1_{u>ξ} have the following properties:
1. u ∈ L^1_P(T^N × [0, T] × Ω),
2. for all ϕ ∈ C_c^∞(T^N × R), almost surely, t ↦ ∫_{T^N×R} ϕ(x, ξ) 1_{u(x,t)>ξ} dx dξ is càdlàg,
3. for all p ∈ [1, +∞), there exists C_p ≥ 0 such that
E( sup_{t∈[0,T]} ‖u(t)‖^p_{L^p(T^N)} ) ≤ C_p, (2.1)
4. there exists a random measure m with finite first moment (2.5) such that, for all ϕ ∈ C_c^∞(T^N × R), for all t ∈ [0, T], almost surely,
∫_{T^N×R} ϕ(x, ξ) 1_{u(x,t)>ξ} dx dξ = ∫_{T^N×R} ϕ(x, ξ) 1_{u0(x)>ξ} dx dξ
+ ∫_0^t ∫_{T^N×R} a(ξ)·∇ϕ(x, ξ) 1_{u(x,s)>ξ} dx dξ ds
+ ∫_0^t ∫_{T^N} ϕ(x, u(x, s)) dx Φ dW(s)
+ (1/2) ∫_0^t ∫_{T^N} ∂_ξϕ(x, u(x, s)) ‖Φ‖²_{L2(H,R)} dx ds − m(∂_ξϕ)([0, t]), (2.2)
where a(ξ) := A′(ξ).
In item 1, the index P in u ∈ L^1_P(T^N × [0, T] × Ω) means that u is predictable; see [10, Section 2.1.1]. The function denoted 1_{u>ξ} is given more precisely by (x, t, ξ) ↦ 1_{u(x,t)>ξ}.
In item 4, in place of ’for all t ∈ [0, T ]’ I could have written ’for all predictable stopping time τ ’, and in place of t in (2.2), I could have written τ .
2.2 Generalized solutions up to a stopping time
Definition 2.3 (Young measure). Let (X, A, λ) be a finite measure space. Let P_1(R) denote the set of probability measures on R. I say that a map ν : X → P_1(R) is a Young measure on X if, for all φ ∈ C_b(R), the map z ↦ ν_z(φ) from X to R is measurable. I say that a Young measure ν vanishes at infinity if, for every p ≥ 1,
∫_X ∫_R |ξ|^p dν_z(ξ) dλ(z) < +∞. (2.3)
Definition 2.4 (Kinetic function). Let (X, A, λ) be a finite measure space. A measurable function f : X × R → [0, 1] is said to be a kinetic function if there exists a Young measure ν on X that vanishes at infinity such that, for λ-a.e. z ∈ X, for all ξ ∈ R,
f(z, ξ) = ν_z((ξ, +∞)).
I say that f is an equilibrium if there exists a measurable function u : X → R such that f(z, ξ) = 1_{u(z)>ξ} a.e., or, equivalently, ν_z(dξ) = δ_{u(z)}(dξ) for a.e. z ∈ X.
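To fix ideas, the equilibrium case can be checked numerically: for f(z, ξ) = 1_{u(z)>ξ}, integrating f − 1_{0>ξ} in ξ recovers u(z). A minimal sketch (the grid and the values of u are illustrative choices of mine, not from the paper):

```python
import numpy as np

# Kinetic variable grid: an illustrative truncation of R.
xi = np.linspace(-10.0, 10.0, 200001)
dxi = xi[1] - xi[0]

def chi(u, xi):
    # chi(u, xi) = 1_{u > xi} - 1_{0 > xi}: the equilibrium kinetic density.
    return (u > xi).astype(float) - (0.0 > xi).astype(float)

# For any real u, integrating chi(u, .) over xi recovers u.
for u in (-3.2, 0.0, 1.7):
    recovered = chi(u, xi).sum() * dxi
    assert abs(recovered - u) < 1e-3
```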
Definition 2.5 (Generalized solution up to a stopping time). Let f0: TN × R → [0, 1]
be a kinetic function, let τ : Ω → [0, T ] be a predictable stopping time.
An L∞(TN× R; [0, 1])-valued process (f(t))t∈[0,T ] is said to be a generalized solution up
to the stopping time τ of (1.1) with initial datum f0 if f(t) and ν_t := −∂_ξ f(t) have the following properties:
1. almost surely, for all t ∈ [0, τ(ω)], f(t, ω) is a kinetic function and, for all R > 0, 1_{[0,τ]}(t) f(x, t, ξ, ω) ∈ L^1_P(T^N × (0, T) × (−R, R) × Ω),
2. for all ϕ ∈ C_c^∞(T^N × R), a.s., the map t ∈ [0, T] ↦ ∫_{T^N×R} ϕ(x, ξ) f(x, t, ξ, ω) dx dξ is càdlàg,
3. for all p ∈ [1, +∞), there exists C_p ≥ 0 such that
E( sup_{t∈[0,T]} ∫_{T^N} ∫_R |ξ|^p dν_{x,t}(ξ) dx ) ≤ C_p, (2.4)
4. there exists a random measure m verifying
E m(T^N × [0, T] × R) < +∞, (2.5)
such that, for all ϕ ∈ C_c^∞(T^N × R), almost surely, for all t ∈ [0, τ],
∫_{T^N×R} ϕ(x, ξ) f(x, t, ξ) dx dξ = ∫_{T^N×R} ϕ(x, ξ) f0(x, ξ) dx dξ
+ ∫_0^t ∫_{T^N×R} a(ξ)·∇ϕ(x, ξ) f(x, s, ξ) dx dξ ds
+ ∫_0^t ∫_{T^N} ∫_R ϕ(x, ξ) dν_{x,s}(ξ) dx Φ dW(s)
+ (1/2) ∫_0^t ∫_{T^N} ∫_R ∂_ξϕ(x, ξ) ‖Φ‖²_{L2(H,R)} dν_{x,s}(ξ) dx ds − m(∂_ξϕ)([0, t]). (2.6)
The following statement is Theorem 3.2 in [10]. Note that I use the following terminology: what I call a solution is a stochastic process u as in Definition 2.2; what I call a generalized solution is a stochastic process f which verifies Definition 2.5 for the constant stopping time τ_T : ω ↦ T.
Theorem 2.1 (Uniqueness, Reduction). Let u0 ∈ L∞(TN). Assume (1.2), then I have
the following results:
1. there is at most one solution u with initial datum u0 to (1.1).
2. if f is a generalized solution to (1.1) with initial datum f0 at equilibrium, f0 = 1_{u0>ξ}, then there exists a solution u to (1.1) with initial datum u0 such that f(x, t, ξ) = 1_{u(x,t)>ξ} a.s., for a.e. (x, t, ξ),
3. if u1, u2 are two solutions to (1.1) associated to the initial data u1,0, u2,0 ∈ L^∞(T^N) respectively, then for all t ∈ [0, T]:
E‖(u1(t) − u2(t))^+‖_{L^1(T^N)} ≤ E‖(u1,0 − u2,0)^+‖_{L^1(T^N)}. (2.7)
This implies the L^1-contraction property and the comparison principle for solutions.
2.3 Approximate solutions up to a stopping time
In [10], we gave a general method and sufficient conditions for the convergence of sequences of approximations of (1.1). Here I give the corresponding definition and result for generalized solutions up to a stopping time.
Definition 2.6 (Approximate generalized solutions up to a stopping time). For each n ∈ N, let f0^n : T^N × R → [0, 1] be a kinetic function. Let τ : Ω → [0, T] be a predictable stopping time. Let ((f^n(t))_{t∈[0,T]})_{n∈N} be a sequence of L^∞(T^N × R; [0, 1])-valued processes. Assume that the functions f^n(t) and the associated Young measures ν_t^n = −∂_ξ f^n(t) satisfy items 1, 2, 3 of Definition 2.5 and Equation (2.6) up to an error term, i.e.: for each ϕ ∈ C_c^∞(T^N × R) and n ∈ N, there exists an adapted process ε_n(t, ϕ), with t ↦ ε_n(t, ϕ) almost surely continuous on the intervals [0, τ]. Furthermore, assume that the sequence (ε_n(t, ϕ))_{n∈N} satisfies
lim_{n→+∞} sup_{t∈[0,τ]} |ε_n(t, ϕ)| = 0 in probability, (2.8)
and that there exist random measures m^n with finite first moment (2.5), such that, for all n, for all ϕ ∈ C_c^∞(T^N × R), almost surely, for all t ∈ [0, τ],
∫_{T^N×R} ϕ(x, ξ) f^n(x, t, ξ) dx dξ = ∫_{T^N×R} ϕ(x, ξ) f0^n(x, ξ) dx dξ + ε_n(t, ϕ)
+ ∫_0^t ∫_{T^N×R} a(ξ)·∇ϕ(x, ξ) f^n(x, s, ξ) dx dξ ds
+ ∫_0^t ∫_{T^N} ∫_R ϕ(x, ξ) dν^n_{x,s}(ξ) dx Φ dW(s)
+ (1/2) ∫_0^t ∫_{T^N} ∫_R ∂_ξϕ(x, ξ) ‖Φ‖²_{L2(H,R)} dν^n_{x,s}(ξ) dx ds − m^n(∂_ξϕ)([0, t]). (2.9)
Then I say that (fn) is a sequence of approximate generalized solutions up to the stopping time τ to (1.1) with initial datum f0n.
Consider a sequence (f^n) of approximate solutions to (1.1) satisfying the following (minimal) bounds:
1. there exists C_p ≥ 0 independent of n such that ν^n := −∂_ξ f^n satisfies
E[ sup_{t∈[0,T]} ∫_{T^N} ∫_R |ξ|^p dν^n_{x,t}(ξ) dx ] ≤ C_p, (2.10)
2. the measures E m^n satisfy the bound
sup_n E m^n(T^N × [0, T] × R) < +∞, (2.11)
and the following tightness condition: if B_R^c = {ξ ∈ R, |ξ| ≥ R}, then
lim_{R→+∞} sup_n E m^n(T^N × [0, T] × B_R^c) = 0. (2.12)
In [10], we gave the proof of the following convergence result, see Theorem 4.15. Note that the existence (and uniqueness) of the solution to (1.1) is proved in [6]; another way to prove it is also given in Section 5 of [10].
Theorem 2.2 (Convergence). Suppose that there exists a sequence (f^n) of approximate generalized solutions up to the predictable stopping time τ : Ω → [0, T] to (1.1) with initial datum f0^n, satisfying (2.10), (2.11) and the tightness condition (2.12), and such that (f0^n) converges to the equilibrium function 1_{u0>ξ} in L^∞(T^N × R)-weak-*, where u0 ∈ L^∞(T^N). If u denotes the unique solution u ∈ L^1_P(T^N × [0, T] × Ω) to (1.1) with initial datum u0, then the sequence
u^n(x, t) = ∫_R ξ 1_{[0,τ]}(t) dν^n_{x,t}(ξ) = ∫_R 1_{[0,τ]}(t) ( f^n(x, t, ξ) − 1_{0>ξ} ) dξ
converges towards 1_{[0,τ]}(t) u(x, t) in the following sense: for all p ∈ [1, ∞),
E ∫_0^T 1_{[0,τ]}(t) ∫_{T^N} |u^n(x, t) − u(x, t)|^p dx dt →_{n→+∞} 0.
In the next section, I define the numerical approximation to (1.1) by the Finite Volume method. To prove the convergence of the method, I will show that the hypotheses of Theorem 2.2 are satisfied. This will be done in sections 5 and 6.
3 The finite volume scheme
Mesh. A mesh of T^N is a family T_# of disjoint connected open subsets K ⊂ (0, 1)^N which form a partition of (0, 1)^N up to a negligible set. I denote by T the mesh
{K + l ; l ∈ Z^N, K ∈ T_#}
deduced on R^N. For all distinct K, L ∈ T, I assume that K̄ ∩ L̄ is contained in a hyperplane; the interface between K and L is denoted K|L := K̄ ∩ L̄. The set of neighbours of K is
N(K) = {L ∈ T ; L ≠ K, K|L ≠ ∅}.
I also use the notation
∂K = ∪_{L∈N(K)} K|L.
In general, there should be no confusion between ∂K and the topological boundary K̄ \ K.
[Figure: four neighbouring cells K, L, M, N; the interface between K and L is K|L.]
I also denote by |K| the N-dimensional Lebesgue measure of K and by |∂K| (respectively |K|L|) the (N−1)-dimensional Hausdorff measure of ∂K (respectively of K|L); the (N−1)-dimensional Hausdorff measure is normalized to coincide with the (N−1)-dimensional Lebesgue measure on hyperplanes.
Scheme. Let (A_{K→L})_{K∈T, L∈N(K)} be a family of monotone, locally Lipschitz continuous numerical fluxes, consistent with A. I assume that each function A_{K→L} : R² → R satisfies the following properties.
• Monotonicity: A_{K→L}(v, w) ≤ A_{K→L}(v′, w) for all v, v′, w ∈ R with v ≤ v′, and A_{K→L}(v, w) ≥ A_{K→L}(v, w′) for all v, w, w′ ∈ R with w ≤ w′.
• Local Lipschitz regularity: for all M ∈ R^+, there exists L_A^M ≥ 0 such that
|A_{K→L}(v, w) − A_{K→L}(v′, w′)| ≤ |K|L| L_A^M (|v − v′| + |w − w′|), (3.1)
for all v, v′, w, w′ ∈ [−M, M] ⊂ R. Without loss of generality, I will assume that, for all M ∈ R^+, Lip(A, [−M, M]) ≤ L_A^M.
• Consistency:
A_{K→L}(v, v) = ∫_{K|L} A(v)·n_{K,L} dH^{N−1}(x) = |K|L| A(v)·n_{K,L}, (3.2)
for all v ∈ R, where n_{K,L} is the outward unit normal to K on K|L.
• Conservative symmetry:
A_{K→L}(v, w) = −A_{L→K}(w, v), (3.3)
for all K, L ∈ T, v, w ∈ R.
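As a concrete instance (my illustrative choice; the paper only assumes the abstract properties above), the Rusanov, or local Lax-Friedrichs, numerical flux for the 1-D Burgers flux A(u) = u²/2, with |K|L| = 1 and n_{K,L} = +1, is monotone and consistent provided its viscosity parameter dominates |A′| on the range of the data. The sketch below spot-checks both properties on random data:

```python
import numpy as np

def A(u):
    # 1-D Burgers flux, a standard example of a C^2, locally Lipschitz flux.
    return 0.5 * u * u

def A_KL(v, w, c=2.0):
    # Rusanov (local Lax-Friedrichs) numerical flux with |K|L| = 1, n_{K,L} = +1.
    # c must dominate |A'| on the range of the data to ensure monotonicity.
    return 0.5 * (A(v) + A(w)) - 0.5 * c * (w - v)

rng = np.random.default_rng(0)
vals = rng.uniform(-1.5, 1.5, size=(100, 2))   # c = 2.0 > sup|A'| = 1.5 here
eps = 1e-3
for v, w in vals:
    # Consistency: A_KL(v, v) = A(v).
    assert abs(A_KL(v, v) - A(v)) < 1e-12
    # Nondecreasing in the first argument, nonincreasing in the second.
    assert A_KL(v + eps, w) >= A_KL(v, w) - 1e-12
    assert A_KL(v, w + eps) <= A_KL(v, w) + 1e-12
```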
Let 0 = t_0 < ... < t_n < t_{n+1} < ... < t_{N_T} = T be a partition of the time interval [0, T], with N_T ∈ N∗. Given two discrete times t_n and t_{n+1}, I define ∆t_n = t_{n+1} − t_n. Given v_K^n, the approximation of the value of the solution u of (1.1) in the cell K at time t_n, I compute v_K^{n+1}, the approximation of the value of u in K at the next time step t_{n+1}, by the formula
|K| (v_K^{n+1} − v_K^n) + ∆t_n Σ_{L∈N(K)} A_{K→L}(v_K^n, v_L^n) = |K| ∫_{t_n}^{t_{n+1}} Φ dW(s). (3.4)
The initialization is given by the formula
v_K^0 = (1/|K|) ∫_K u0(x) dx, K ∈ T. (3.5)
In (3.4), ∆t_n A_{K→L}(v_K^n, v_L^n) is the numerical flux at the interface K|L over the time interval [t_n, t_{n+1}].
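For orientation, here is a minimal 1-D periodic realization of (3.4)-(3.5). All parameters are illustrative choices of mine: the flux is the Burgers flux, the numerical flux is the Rusanov flux, and the additive noise is a single real Brownian motion, i.e. the increment ∫_{t_n}^{t_{n+1}} Φ dW(s) is simulated as σ ∆β_n, the same in every cell:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (mine, not from the paper).
N_cells, T, sigma = 64, 0.5, 0.1       # sigma plays the role of ||Phi||_{L2(H,R)}
h = 1.0 / N_cells
x = (np.arange(N_cells) + 0.5) * h
v = np.where(np.abs(x - 0.5) < 0.25, 1.0, 0.0)   # cell averages of u0, formula (3.5)

A = lambda u: 0.5 * u * u              # Burgers flux
lam = 1.0                              # truncation level, playing the role of lambda
L_A = np.max(np.abs(v)) + lam          # Lipschitz constant of A on the truncated range
n_steps = int(np.ceil(2.0 * L_A * T / (0.9 * h)))
dt = T / n_steps                       # uniform steps satisfying a CFL-type restriction

for n in range(n_steps):
    vr = np.roll(v, -1)                # right neighbour; np.roll encodes the torus
    # Rusanov flux at the interface between cell i and cell i+1.
    F = 0.5 * (A(v) + A(vr)) - 0.5 * L_A * (vr - v)
    dW = np.sqrt(dt) * rng.standard_normal()     # increment of the real Brownian motion
    # Scheme (3.4), divided by |K| = h; the noise increment is the same in every cell.
    v = v - (dt / h) * (F - np.roll(F, 1)) + sigma * dW

assert np.all(np.isfinite(v)) and v.shape == (N_cells,)
```

Note how the conservative flux difference F − np.roll(F, 1) mirrors the sum over neighbours in (3.4), and how the spatially constant noise only shifts the cell averages.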
4 The kinetic formulation of the finite volume scheme
4.1 Discussion on the kinetic formulation of the finite volume scheme
The kinetic formulation of the Finite Volume method has been introduced by Makridakis and Perthame in [18] for deterministic scalar conservation laws. The principle is to replace the entropy inequality by an equality, by adding a kinetic entropy defect measure, which is a non-negative measure. An artificial variable ξ ∈ R, called the kinetic variable, is also added; it comes from the constant ξ ∈ R in the Kruzkov entropies u ∈ R ↦ |u − ξ|. I do not create a probabilistic kinetic formulation; I simply split the evolution in time of the Finite Volume scheme into two artificial steps, as was already done in [16]: one deterministic step, in order to use the kinetic formulation (and the results) of [18], and one probabilistic step. The time step t_{n+1} of the discrete evolution equation (3.4) is then reached via the two following steps:
|K| (v_K^{n+1/2} − v_K^n) + ∆t_n Σ_{L∈N(K)} A_{K→L}(v_K^n, v_L^n) = 0, (4.1)
for K ∈ T_# and n ∈ N, where v_K^{n+1/2} is the result of the artificial deterministic evolution.
And then,
v_K^{n+1} − v_K^{n+1/2} = ∫_{t_n}^{t_{n+1}} Φ dW(s). (4.2)
The first step corresponds to the kinetic formulation
|K| (1_{v_K^{n+1/2}>ξ} − 1_{v_K^n>ξ}) + ∆t_n Σ_{L∈N(K)} a_{K→L}(ξ, v_K^n, v_L^n) = |K| ∆t_n ∂_ξ m_K^n(ξ), (4.3)
with
m_K^n(ξ) ≥ 0, for all ξ ∈ R. (4.4)
Note that, for N = 1, Makridakis and Perthame use the following equation:
|K| (1_{v_K^{n+1/2}>ξ} − 1_{0>ξ} + 1_{0>ξ} − 1_{v_K^n>ξ}) + ∆t_n Σ_{L∈N(K)} ( a_{K→L}(ξ, v_K^n, v_L^n) + a(ξ) 1_{0>ξ} ) = |K| ∆t_n ∂_ξ m_K^n(ξ),
with the function χ : (u, ξ) ∈ R² ↦ χ(u, ξ) = 1_{u>ξ} − 1_{0>ξ}. Thus, their consistency conditions in dimension N ∈ N∗ are now written
∫_R ( a_{K→L}(ξ, v, w) − |K|L| a(ξ)·n_{K,L} 1_{0>ξ} ) dξ = A_{K→L}(v, w), (4.5)
a_{K→L}(ξ, v, v) = |K|L| a(ξ)·n_{K,L} 1_{v>ξ}, (4.6)
for all v, w ∈ R and almost all ξ ∈ R. Before proving the existence of the kinetic formulation (4.3)-(4.4)-(4.5)-(4.6) (see Proposition 4.2), let us first deduce from (4.3) the kinetic formulation of the whole scheme (3.4). This is the equation
|K| (1_{v_K^{n+1}>ξ} − 1_{v_K^n>ξ}) + ∆t_n Σ_{L∈N(K)} a_{K→L}(ξ, v_K^n, v_L^n) = |K| ∆t_n ∂_ξ m_K^n(ξ) + |K| [ 1_{v_K^{n+1}>ξ} − 1_{v_K^{n+1/2}>ξ} ]. (4.7)
4.2 Exponential martingale inequality
Theorem 4.1. Let Φ ∈ L2(H, R) and λ ∈ (0, +∞). Then
P( sup_{t∈[0,T]} |∫_0^t Φ dW(s)| ≥ λ ) ≤ 2 exp( −λ² / (2 ‖Φ‖²_{L2(H,R)} T) ), (4.8)
and for all λ ∈ R^+,
τ_λ : ω ∈ Ω ↦ inf{ t ∈ [0, T] : |∫_0^t Φ dW(s)| ≥ λ } (4.9)
is a predictable stopping time (with the convention inf(∅) = T).
Proof of Theorem 4.1. The first part of the theorem is in fact the Bernstein inequality given by Revuz and Yor (cf. [21], page 153, Exercise 3.16).
To obtain the second part of the theorem, it suffices to notice that
t 7→ Z t 0 ΦdW (s)
is a progressively measurable process (continuous and adapted to the filtration (Ft)t∈[0,T ])
and to apply the Debut theorem which is for example found in [22]. Such a hitting time is also predictable.
This is the martingale exponential inequality that I will use in the sequel. I will denote
Ω_λ := { ω ∈ Ω : sup_{t∈[0,T]} |∫_0^t Φ dW(s)| < λ }
and will work on this subset of Ω, which is thus of probability greater than
1 − 2 exp( −λ² / (2 ‖Φ‖²_{L2(H,R)} T) )
for any λ ∈ R^+.
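Since Φ ∈ L2(H, R), the real-valued process ∫_0^t Φ dW(s) is a Brownian motion with variance ‖Φ‖²_{L2(H,R)} t, so the bound (4.8) can be checked by simulation. A crude Monte Carlo sketch (discretization and sample sizes are my choices):

```python
import numpy as np

rng = np.random.default_rng(2)

T, phi_norm, lam = 1.0, 1.0, 2.0   # ||Phi||_{L2(H,R)} and the threshold lambda
n_steps, n_paths = 1000, 20000

# int_0^t Phi dW(s) is a real Brownian motion with variance phi_norm^2 * t.
dW = phi_norm * np.sqrt(T / n_steps) * rng.standard_normal((n_paths, n_steps))
M = np.cumsum(dW, axis=1)
sup_abs = np.max(np.abs(M), axis=1)

empirical = np.mean(sup_abs >= lam)
bound = 2.0 * np.exp(-lam**2 / (2.0 * phi_norm**2 * T))
# The Bernstein-type bound (4.8) should dominate the empirical frequency.
assert empirical <= bound
```

Here the empirical exceedance frequency (around 0.09 for these parameters) sits well below the bound 2e^{-2} ≈ 0.27, as expected.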
4.3 The CFL condition and a good truncation of Ω to make the discrete kinetic entropy defect measure non-negative
Proposition 4.2 (Kinetic formulation of the Finite Volume method). Set
a_{K→L}(ξ, v, w) = |K|L| a(ξ)·n_{K,L} 1_{ξ<v∧w} + ∂₂A_{K→L}(v, ξ) 1_{v≤ξ≤w} + ∂₁A_{K→L}(ξ, w) 1_{w≤ξ≤v}; (4.10)
then the consistency conditions (4.5)-(4.6) are satisfied. Defining
m_K^n(ξ) = −(1/∆t_n) [ (v_K^{n+1/2} − ξ)^+ − (v_K^n − ξ)^+ ] − (1/|K|) Σ_{L∈N(K)} ∫_ξ^{+∞} a_{K→L}(ζ, v_K^n, v_L^n) dζ, (4.11)
the equation (4.3) is satisfied. Adding the following CFL condition
∆t_n (|∂K| / |K|) L_A^{‖u0‖∞+λ} ≤ 1, ∀K ∈ T, ∀n ∈ {0, ..., N_T − 1}, (4.12)
the equation (4.4) is satisfied for all ω ∈ Ω̂_λ, the measurable subset of Ω defined as
Ω̂_λ := { ω ∈ Ω : v_K^n(ω) ∈ [−‖u0‖_{L^∞(T^N)} − λ, ‖u0‖_{L^∞(T^N)} + λ], ∀n ∈ {0, ..., N_T − 1}, ∀K ∈ T }. (4.13)
Remark 4.1 (Support of m_K^n). For all ω ∈ Ω, ξ ↦ m_K^n(ξ) is compactly supported in the convex envelope of v_K^n, {v_L^n ; L ∈ N(K)} and v_K^{n+1/2}.
Proof of Remark 4.1. If ξ > sup( v_K^n, {v_L^n : L ∈ N(K)}, v_K^{n+1/2} ), then
m_K^n(ξ) = −(1/∆t_n) [ (v_K^{n+1/2} − ξ)^+ − (v_K^n − ξ)^+ ] − (1/|K|) Σ_{L∈N(K)} ∫_ξ^{+∞} a_{K→L}(ζ, v_K^n, v_L^n) dζ = 0 − 0 = 0.
If ξ < inf( v_K^n, {v_L^n : L ∈ N(K)}, v_K^{n+1/2} ), then
m_K^n(ξ) = −(1/∆t_n) ( v_K^{n+1/2} − v_K^n ) − (1/|K|) Σ_{L∈N(K)} ∫_ξ^{+∞} a_{K→L}(ζ, v_K^n, v_L^n) dζ
= (1/|K|) Σ_{L∈N(K)} A_{K→L}(v_K^n, v_L^n) − (1/|K|) Σ_{L∈N(K)} ( A_{K→L}(v_K^n, v_L^n) − |K|L| A(ξ)·n_{K,L} )
= (1/|K|) Σ_{L∈N(K)} |K|L| A(ξ)·n_{K,L} = (1/|K|) Σ_{L∈N(K)} ∫_{K|L} A(ξ)·n_{K,L} dH^{N−1}(x) = (1/|K|) ∫_K div(A(ξ)) dx = 0.
Remark 4.2 (CFL condition in practice). Once λ ∈ R^+ is fixed, the constant L_A^{‖u0‖∞+λ} is fixed for the whole Finite Volume scheme, that is, for all ω ∈ Ω, even if I use it to prove the convergence in probability only for ω ∈ Ω_λ, which is included in Ω̂_λ.
Proof of Proposition 4.2. Let us fix ω ∈ Ω_λ and verify the consistency conditions (4.5) and (4.6):
∫_R ( a_{K→L}(ξ, v, w) − |K|L| a(ξ)·n_{K,L} 1_{0>ξ} ) dξ
= ∫_R ( |K|L| a(ξ)·n_{K,L} 1_{ξ<v∧w} − |K|L| a(ξ)·n_{K,L} 1_{0>ξ} + ∂₂A_{K→L}(v, ξ) 1_{v≤ξ≤w} + ∂₁A_{K→L}(ξ, w) 1_{w≤ξ≤v} ) dξ
= −∫_{−∞}^0 |K|L| a(ξ)·n_{K,L} 1_{ξ≥v∧w} dξ + ∫_0^{+∞} |K|L| a(ξ)·n_{K,L} 1_{ξ<v∧w} dξ + ∫_R ( ∂₂A_{K→L}(v, ξ) 1_{v≤ξ≤w} + ∂₁A_{K→L}(ξ, w) 1_{w≤ξ≤v} ) dξ
= ( −|K|L| A(0)·n_{K,L} + |K|L| A(v∧w)·n_{K,L} ) + 1_{v≤w} ( A_{K→L}(v, w) − A_{K→L}(v, v) ) + 1_{v≥w} ( A_{K→L}(v, w) − A_{K→L}(w, w) )
= −|K|L| A(0)·n_{K,L} + A_{K→L}(v, w) = A_{K→L}(v, w),
and
a_{K→L}(ξ, v, v) = ∂₂A_{K→L}(v, ξ) 1_{{v}}(ξ) + ∂₁A_{K→L}(ξ, v) 1_{{v}}(ξ) + |K|L| a(ξ)·n_{K,L} 1_{ξ<v} = |K|L| a(ξ)·n_{K,L} 1_{ξ<v} a.e. in ξ.
Equation (4.3) is the direct weak derivative of (4.11).
To finish the proof of Proposition 4.2, let us write
v_K^{n+1/2} = v_K^n − (∆t_n/|K|) Σ_{L∈N(K)} A_{K→L}(v_K^n, v_L^n) =: H( v_K^n, {v_L^n}_{L∈N(K)} ).
H is a nondecreasing function of each variable v_L^n because A_{K→L}(v_K^n, v_L^n) is nonincreasing in its second variable. If ω ∈ Ω̂_λ, then H( ·, {v_L^n}_{L∈N(K)} ) is also nondecreasing, by the CFL condition (4.12). Indeed, let b > a:
H( b, {v_L^n}_{L∈N(K)} ) − H( a, {v_L^n}_{L∈N(K)} ) = b − a + (∆t_n/|K|) Σ_{L∈N(K)} ( A_{K→L}(a, v_L^n) − A_{K→L}(b, v_L^n) )
≥ (b − a) ( 1 − (∆t_n/|K|) L_A^{‖u0‖∞+λ} Σ_{L∈N(K)} |K|L| ) ≥ 0.
To prove (4.4), let us write the inequality m_K^n(ξ) ≥ 0 as
(1/∆t_n) [ η_ξ(v_K^{n+1/2}) − η_ξ(v_K^n) ] + (1/|K|) Σ_{L∈N(K)} ∫_ξ^{+∞} a_{K→L}(ζ, v_K^n, v_L^n) dζ ≤ 0, (4.14)
with the entropies η_ξ(v) := (v − ξ)^+. The family of numerical entropy fluxes
Φ_{K→L}(ξ, v, w) := ∫_ξ^{+∞} a_{K→L}(ζ, v, w) dζ, ξ, v, w ∈ R, (4.15)
associated to the entropies η_ξ is then defined. Note that
Φ_{K→L}(ξ, v_K^n, v_L^n) = ∫_R η_ξ′(ζ) a_{K→L}(ζ, v_K^n, v_L^n) dζ = A_{K→L}(v_K^n, v_L^n) − A_{K→L}(v_K^n ∧ ξ, v_L^n ∧ ξ). (4.16)
Using the fact that η_ξ(v) = v − v ∧ ξ, I can see that (4.14) is equivalent to
−v_K^{n+1/2} ∧ ξ + v_K^n ∧ ξ − (∆t_n/|K|) Σ_{L∈N(K)} A_{K→L}(v_K^n ∧ ξ, v_L^n ∧ ξ) ≤ 0.
The fact that H is nondecreasing in each of its variables implies that
v_K^{n+1/2} = H( v_K^n, {v_L^n}_{L∈N(K)} ) ≥ H( v_K^n ∧ ξ, {v_L^n ∧ ξ}_{L∈N(K)} )
and
ξ = ξ − (∆t_n/|K|) Σ_{L∈N(K)} A_{K→L}(ξ, ξ) = H( ξ, (ξ)_{L∈N(K)} ) ≥ H( v_K^n ∧ ξ, {v_L^n ∧ ξ}_{L∈N(K)} ),
thus
v_K^{n+1/2} ∧ ξ ≥ H( v_K^n ∧ ξ, {v_L^n ∧ ξ}_{L∈N(K)} ),
that is (4.14).

Proposition 4.3. ∀λ ∈ R^+, Ω_λ ⊂ Ω̂_λ.
Proof of Proposition 4.3. Using the notations of paragraph 4.2, I have
Ω_λ = { ω ∈ Ω : sup_{t∈[0,T]} |∫_0^t Φ dW(s)| < λ } ⊂ Ω̂_λ
because, at fixed ω ∈ Ω_λ, for all K ∈ T_#: if, for all K, v_K^n(ω) ∈ [−‖u0‖∞ − |∫_0^{t_n} Φ dW(s)|, ‖u0‖∞ + |∫_0^{t_n} Φ dW(s)|] with |∫_0^{t_n} Φ dW(s)| < λ, then, for all K, v_K^{n+1/2}(ω) ∈ [−‖u0‖∞ − |∫_0^{t_n} Φ dW(s)|, ‖u0‖∞ + |∫_0^{t_n} Φ dW(s)|], and, for all K, v_K^{n+1}(ω) ∈ [−‖u0‖∞ − |∫_0^{t_{n+1}} Φ dW(s)|, ‖u0‖∞ + |∫_0^{t_{n+1}} Φ dW(s)|], with |∫_0^{t_{n+1}} Φ dW(s)| < λ. The induction is established.
5 Definition and properties of the approximate solution and the approximate generalized solution up to a stopping time
The Finite Volume scheme will give an approximate solution to the solution of (1.1) which is piecewise constant for each ω ∈ Ω. The choice of the sequence of approximate generalized solutions up to a stopping time of (1.1) is crucial: it will allow me to use the convergence Theorem 2.2, which was proved in [10].
5.1 Notations and definitions
5.1.1 Details on the mesh
For a fixed final time T > 0, I denote by d_T the set of admissible space steps and time steps, defined as follows: if h > 0 and (∆t) = (∆t_0, ..., ∆t_{N_T−1}), N_T ∈ N∗, then I say that δ := (h, (∆t)) ∈ d_T if
1/h ∈ N\{0; 1}, t_{N_T} := Σ_{n=0}^{N_T−1} ∆t_n = T, h + sup_{0≤n<N_T} ∆t_n ≤ 1. (5.1)
I say that δ → 0 if
|δ| := h + sup_{0≤n<N_T} ∆t_n → 0. (5.2)
For a given mesh parameter δ = (h, (∆t)) ∈ dT, I assume that a mesh T is given, with
the following properties:
diam(K) ≤ h, (5.3)
α_N h^N ≤ |K|, (5.4)
|∂K| ≤ (1/α_N) h^{N−1}, (5.5)
for all K ∈ T, where
diam(K) = max_{x,y∈K̄} |x − y|
is the diameter of K and α_N is a given positive absolute constant depending on the dimension N only. Note the following consequence of (5.4)-(5.5):
h |∂K| ≤ (1/α_N²) |K|, (5.6)
for all K ∈ T.
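In dimension N = 1, for instance, the uniform mesh with cells of length h satisfies (5.3)-(5.6) with the (illustrative) choice α_1 = 1/2, counting the two endpoints of a cell with the 0-dimensional Hausdorff measure:

```python
# Uniform 1-D mesh of the torus T^1 = [0, 1): cells K_i = [i*h, (i+1)*h).
n = 8
h = 1.0 / n            # 1/h in N \ {0, 1}, as required for delta in d_T
alpha_1 = 0.5          # a valid absolute constant in dimension N = 1

diam_K = h             # diameter of each cell
vol_K = h              # 1-dimensional Lebesgue measure |K|
bnd_K = 2.0            # 0-dimensional (counting) measure of the two endpoints

assert diam_K <= h                              # (5.3)
assert alpha_1 * h**1 <= vol_K                  # (5.4)
assert bnd_K <= (1.0 / alpha_1) * h**0          # (5.5)
assert h * bnd_K <= (1.0 / alpha_1**2) * vol_K  # (5.6), consequence of (5.4)-(5.5)
```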
5.1.2 Definitions linked with the approximate solution
I can now define the approximate solution v_δ given by the Finite Volume method, for almost every x ∈ T^N and all t ∈ [0, T], by
v_δ(x, t) = v_K^n, ∀x ∈ K, t_n ≤ t < t_{n+1}, and v_δ(x, T) = v_K^{N_T−1}, ∀x ∈ K. (5.7)
I will also need the intermediary discrete function
v♭_δ(x, t_{n+1}) = v_K^{n+1/2}, ∀x ∈ K, (5.8)
defined for all n ∈ {0, ..., N_T − 1}.
Let us now introduce another approximation of the solution: the function v♯_δ defined, ∀x ∈ K, ∀K ∈ T, ∀t ∈ [0, T), by
v♯_δ(x, t) = v♯_K(t) = Σ_{n∈{0,...,N_T−1}} 1_{[t_n,t_{n+1})}(t) ( v_K^{n+1/2} + ∫_{t_n}^t Φ dW(s) ) (5.9)
and, ∀x ∈ K, ∀K ∈ T, for t = T, by
v♯_δ(x, T) = v♯_K(T) = v_K^{N_T−1/2} + ∫_{t_{N_T−1}}^T Φ dW(s).
It has to be compared with the intermediate approximation v̄_{T,∆t} of the paper [15], which is continuous in time and defined, ∀t ∈ [t_n, t_{n+1}), ∀x ∈ K, by:
v̄_{T,∆t}(x, t) = v_K^n − (1/|K|) ∫_{t_n}^t Σ_{L∈N(K)} A_{K→L}(v_K^n, v_L^n) ds + ∫_{t_n}^t Φ dW(s).
My aim is to use Theorem 2.2 and to define a sequence (f_δ) of approximate generalized solutions up to a stopping time associated with the scheme (3.4). The facts that the v_K^n are bounded on the subset Ω_λ, that the probability of Ω_λ tends to one as λ tends to +∞, and that the approximations are considered in probability allow me to define the function f_δ^λ(x, t, ξ, ω), ∀ξ ∈ R, ∀x ∈ T^N, ∀n ∈ {0, ..., N_T − 1}, ∀t ∈ [t_n, t_{n+1}), as
f_δ^λ(x, t, ξ, ω) = 1_{[0,τ_λ]}(t_n) ( ((t − t_n)/∆t_n) 1_{v♯_δ(x,t)>ξ} + ((t_{n+1} − t)/∆t_n) 1_{v_δ(x,t)>ξ} ) (5.10)
and
f_δ^λ(x, T, ξ, ω) = 1_{[0,τ_λ]}(t_{N_T−1}) 1_{v♯_δ(x,T)>ξ},
which makes, almost surely,
F_δ^λ : t ∈ [0, T] ↦ ∫_{T^N×R} f_δ^λ(x, t, ξ) ϕ(x, ξ) dx dξ
càdlàg for all ϕ ∈ C_c^∞(T^N × R). In fact, at fixed ω ∈ Ω, it has only one point of discontinuity, namely at the first t_n for which 1_{[0,τ_λ]}(t_n) = 0. This is very important for two reasons: the F_δ^λ are almost surely continuous for all t ∈ [0, τ_λ], and it is necessary to obtain (2.9) from the discrete kinetic equation (6.1) in Section 6.
To f_δ^λ I can associate the Young measures ν^{δ,λ}_{x,t,ω}(dξ) := −∂_ξ f_δ^λ(x, t, ξ, ω), defined ∀n ∈ {0, ..., N_T − 1}, ∀t ∈ [t_n, t_{n+1}), ∀x ∈ T^N, by:
ν^{δ,λ}_{x,t,ω}(dξ) = 1_{[0,τ_λ(ω)]}(t_n) ( ((t − t_n)/∆t_n) δ_{v♯_δ(x,t,ω)}(dξ) + ((t_{n+1} − t)/∆t_n) δ_{v_δ(x,t,ω)}(dξ) ) (5.11)
and
ν^{δ,λ}_{x,T,ω}(dξ) = 1_{[0,τ_λ(ω)]}(t_{N_T−1}) δ_{v♯_δ(x,T,ω)}(dξ).
I also denote by m_δ^λ the discrete random measure given by
dm_δ^λ(x, t, ξ, ω) = Σ_{n=0}^{N_T−2} Σ_{K∈T_#} 1_{K×[t_n,t_{n+1})}(x, t) 1_{[0,τ_λ(ω)]}(t_n) m_K^n(ξ, ω) dx dt dξ
+ Σ_{K∈T_#} 1_{K×[t_{N_T−1},T]}(x, t) 1_{[0,τ_λ(ω)]}(t_{N_T−1}) m_K^{N_T−1}(ξ, ω) dx dt dξ. (5.12)
Remark 5.1. The CFL condition (4.12) implies that the inequality
1_{[0,τ_λ(ω)]}(t_n) m_K^n(ξ, ω) ≥ 0
holds almost surely, ∀n ∈ {0, ..., N_T − 1}, ∀K ∈ T, ∀ξ ∈ R.
5.2 First properties of the sequence (f_δ^λ)_δ
Let us prove now that the sequence (f_δ^λ)_δ satisfies the first item of the definition of a sequence of approximate generalized solutions up to the stopping time τ_λ of (1.1).
5.2.1 First point
At fixed t ∈ [0, T], ω ∈ Ω:
• f_δ^λ(x, t, ξ) ∈ [0, 1],
• x ↦ v_δ(x, t) and x ↦ v♯_δ(x, t) are constant on each K ∈ T. The K are open subsets of (0, 1)^N, thus v_δ(·, t), v♯_δ(·, t) ∈ L^∞(T^N), hence f_δ^λ(·, t, ·) ∈ L^∞(T^N × R; [0, 1]),
• at fixed ω ∈ Ω and fixed t ∈ [0, τ_λ(ω)], the ν^{δ,λ}_{x,t,ω}(dξ) = −∂_ξ f_δ^λ(x, t, ξ) are Young measures which vanish at infinity thanks to Proposition 5.2, i.e. thanks to the proposition which gives the tightness of the associated Young measures. Thus, almost surely, ∀t ∈ [0, τ_λ(ω)], the f_δ^λ(·, t, ·) are kinetic functions.
5.2.2 Second point
To clarify the measurability, let us write the parameter ω ∈ (Ω, F), often omitted. Let R > 0,
1_{[0,τ_λ]} f_δ^λ : (t, ω) ∈ [0, T] × Ω ↦ 1_{[0,τ_λ(ω)]}(t) f_δ^λ(·, t, ·, ω) ∈ L^1( [0, 1)^N × (−R, R); R ).
Due to the separability of L^1([0, 1)^N × (−R, R); R), to show that 1_{[0,τ_λ]} f_δ^λ is a predictable process, it is sufficient to prove that, ∀g ∈ L^∞([0, 1)^N × (−R, R); R), the process
(t, ω) ∈ [0, T] × Ω ↦ ∫_{[0,1)^N×(−R,R)} 1_{[0,τ_λ(ω)]}(t) f_δ^λ(x, t, ξ, ω) g(x, ξ) dx dξ ∈ R
is predictable. By summing measurable functions, it is sufficient, for such a g, to prove that, ∀K ∈ T_#, the process
L_K : (t, ω) ∈ [0, T] × Ω ↦ ∫_{K×(−R,R)} 1_{[0,τ_λ(ω)]}(t) f_δ^λ(x, t, ξ, ω) g(x, ξ) dx dξ ∈ R
is predictable. This process is in fact left-continuous and adapted to the filtration (F_t)_{t∈[0,T]}, therefore it is predictable.
By periodic extension, the process
1_{[0,τ_λ]} f_δ^λ : (t, ω) ∈ [0, T] × Ω ↦ 1_{[0,τ_λ(ω)]}(t) f_δ^λ(·, t, ·, ω) ∈ L^1(T^N × (−R, R); R)
is also predictable. The fact that 1_{[0,τ_λ]} f_δ^λ ∈ L^1([0, T] × Ω; L^1(T^N × (−R, R); R)) is a consequence of the fact that 1_{[0,τ_λ]} f_δ^λ is bounded on a finite measure space.
5.2.3 Initial condition satisfied by the sequence fδλδ Proposition 5.1. ∀ϕ ∈ L1 TN× R, Z TN×R fδλ(x, 0, ξ) − 1u0(x)>ξϕ (x, ξ) dxdξ →|δ|→00. Proof of Proposition 5.1
Thanks to the assumption (5.4): hNαN ≤ |K|, and denoting by T#,h the mesh of (0, 1)N associated to the value h = 1/n for n ∈ N∗, the family
$$\bigcup_{h:\,1/h\in\mathbb{N}^*}\ \bigcup_{K\in\mathcal{T}_{\#,h}}K$$
can be called a substantial family (in the sense of Rudin, p. 146, [23]) because for all x ∈ K,
$$K\subset B(x,h)\quad\text{and}\quad|B(x,h)|=h^N|B(x,1)|\le\frac{|B(x,1)|}{\alpha_N}\,|K|.$$
Therefore I can apply the Lebesgue differentiation theorem: for almost all x ∈ K,
$$\lim_{h\to0}\Big|\frac{1}{|K|}\int_K u_0(y)\,dy-u_0(x)\Big|=0.$$
More precisely (see W. Rudin, p. 151, [23]): for all ε > 0, there exists δ > 0 such that, for all K with diam(K) < δ and for almost all x ∈ K,
$$\frac{1}{|K|}\int_K|u_0(y)-u_0(x)|\,dy<\varepsilon.$$
Thus, thanks to the assumption (5.3): ∀K ∈ T#,h, diam(K) < h, I have for h small enough
$$\sum_{K\in\mathcal{T}_{\#,h}}\int_K\Big|\frac{1}{|K|}\int_K u_0(y)\,dy-u_0(x)\Big|\,dx<\varepsilon,$$
and thus, for all ϕ ∈ Cc∞(TN × R),
\begin{align*}
\Big|\int_{\mathbb{T}^N\times\mathbb{R}}\big(f^\lambda_\delta(x,0,\xi)-1_{u_0(x)>\xi}\big)\varphi(x,\xi)\,dx\,d\xi\Big|
&=\Big|\int_{\mathbb{T}^N\times\mathbb{R}}\big(1_{v_\delta(x,0)>\xi}-1_{u_0(x)>\xi}\big)\varphi(x,\xi)\,dx\,d\xi\Big|\\
&\le\|\varphi\|_\infty\int_{\mathbb{T}^N}|v_\delta(x,0)-u_0(x)|\,dx\\
&=\|\varphi\|_\infty\sum_{K\in\mathcal{T}_{\#,h}}\int_K\Big|\frac{1}{|K|}\int_K u_0(y)\,dy-u_0(x)\Big|\,dx<\|\varphi\|_\infty\,\varepsilon
\end{align*}
if h is small enough (which is the case when |δ| tends to 0). By density of Cc∞(TN × R) in L1(TN × R), I then obtain that for all ϕ ∈ L1(TN × R),
$$\int_{\mathbb{T}^N\times\mathbb{R}}\big(f^\lambda_\delta(x,0,\xi)-1_{u_0(x)>\xi}\big)\varphi(x,\xi)\,dx\,d\xi\xrightarrow[|\delta|\to0]{}0.$$
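To make Proposition 5.1 concrete, here is a small numerical sketch. The 1-D uniform mesh and the smooth choice of u0 are assumptions made for the illustration only; the paper works on TN with a general mesh T#,h. It shows the L1 distance between u0 and its piecewise-constant cell-average projection shrinking as the mesh is refined, which is exactly the mechanism of the proof above.

```python
import numpy as np

# Sketch: the projection of u0 on piecewise-constant functions by cell averages
# converges to u0 in L^1 as the mesh size h = 1/n_cells tends to 0.

def l1_projection_error(u0, n_cells, n_quad=50):
    """L1 distance between u0 and its cell-average projection on n_cells cells."""
    h = 1.0 / n_cells
    err = 0.0
    for k in range(n_cells):
        # midpoint quadrature points inside cell k = [k*h, (k+1)*h)
        x = (k + (np.arange(n_quad) + 0.5) / n_quad) * h
        avg = u0(x).mean()                       # cell average of u0
        err += np.abs(u0(x) - avg).mean() * h    # L1 error contributed by the cell
    return err

u0 = lambda x: np.sin(2 * np.pi * x)
e_coarse = l1_projection_error(u0, 16)
e_fine = l1_projection_error(u0, 32)
print(e_coarse, e_fine)   # the error shrinks as h is halved
```

For a Lipschitz u0 the error is of order h, consistent with the Lebesgue differentiation argument of the proof.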
5.3 Tightness of the sequence of Young measures (νδ,λ)δ
Proposition 5.2 (Tightness of (νδ,λ)). Let u0 ∈ L∞(TN), T > 0 and δ ∈ dT. Assume that (3.1) and (4.12) are satisfied. Let (vδ(t)) be the numerical unknown defined by (3.4)-(3.5)-(5.7) and let νδ,λ be defined by (5.11). Let p ∈ [1, +∞). Then
$$\mathbb{E}\Big(\sup_{t\in[0,T]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}|\xi|^p\,d\nu^{\delta,\lambda}_{x,t}(\xi)\,dx\Big)\le2\big(\|u_0\|_{L^\infty(\mathbb{T}^N)}+\lambda\big)^p+2^{2p-1}\lambda^p\qquad(5.13)$$
for λ > 0 large enough.
Proof of Proposition 5.2. Let us find a bound for
\begin{align*}
&\mathbb{E}\Big(\sup_{t\in[0,T]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}|\xi|^p\,\nu^{\delta,\lambda}_{x,t}(d\xi)\,dx\Big)
=\mathbb{E}\Big(\max_{n\in\{0,\dots,N_T-1\}}\ \sup_{t\in[t_n,t_{n+1})}\int_{\mathbb{T}^N}\int_{\mathbb{R}}|\xi|^p\,\nu^{\delta,\lambda}_{x,t}(d\xi)\,dx\Big)\\
&\quad=\mathbb{E}\Big(\max_{n}\ \sup_{t\in[t_n,t_{n+1})}1_{[0,\tau_\lambda]}(t_n)\int_{\mathbb{T}^N}\Big(\frac{t-t_n}{\Delta t_n}\,\big|v^\sharp_\delta(x,t)\big|^p+\frac{t_{n+1}-t}{\Delta t_n}\,\big|v_\delta(x,t)\big|^p\Big)\,dx\Big)\\
&\quad\le\mathbb{E}\Big(\max_{n}\ \sup_{t}\int_{\mathbb{T}^N}2^{p-1}\Big(\big(\|u_0\|_{L^\infty(\mathbb{T}^N)}+\lambda\big)^p+\big(\|u_0\|_{L^\infty(\mathbb{T}^N)}+\lambda\big)^p\Big)\,dx\Big)\\
&\qquad+\mathbb{E}\Big(\max_{n}\ \sup_{t}1_{[0,\tau_\lambda]}(t_n)\int_{\mathbb{T}^N}2^{p-1}\Big|\int_{t_n}^{t}\Phi\,dW(s)\Big|^p\,dx\Big)\\
&\quad\le2\big(\|u_0\|_{L^\infty(\mathbb{T}^N)}+\lambda\big)^p+\max\Big\{2^{2p-1}\lambda^p;\ \max_{n}\ \mathbb{E}\Big(\sup_{t\in[t_n,t_{n+1})}1_{[0,\tau_\lambda]}(t_n)\int_{\mathbb{T}^N}2^{p-1}\Big|\int_{t_n}^{t}\Phi\,dW(s)\Big|^p\,dx\Big)\Big\}\\
&\quad\le2\big(\|u_0\|_{L^\infty(\mathbb{T}^N)}+\lambda\big)^p+\max\Big\{2^{2p-1}\lambda^p;\ 2^{p-1}C_p\,\mathbb{E}\Big[\Big(\int_{t_n}^{t_{n+1}}\|\Phi\|^2_{L_2(H,\mathbb{R})}\,ds\Big)^{p/2}\Big]\Big\}\\
&\quad\le2\big(\|u_0\|_{L^\infty(\mathbb{T}^N)}+\lambda\big)^p+\max\Big\{2^{2p-1}\lambda^p;\ 2^{p-1}C_p\,\|\Phi\|^p_{L_2(H,\mathbb{R})}\Big\},
\end{align*}
where Cp is a universal constant of the Burkholder-Davis-Gundy inequality, and because
$$1_{[0,\tau_\lambda]}(t_n)\,|v^n_K|\le\|u_0\|_{L^\infty(\mathbb{T}^N)}+\lambda,\qquad 1_{[0,\tau_\lambda]}(t_n)\,\big|v^{n+1/2}_K\big|\le\|u_0\|_{L^\infty(\mathbb{T}^N)}+\lambda$$
for all n ∈ {0, ..., NT − 1} and all K ∈ T#, while for n such that t ≤ tn+1 ≤ τλ,
$$1_{[0,\tau_\lambda]}(t_n)\Big|\int_{t_n}^{t}\Phi\,dW(s)\Big|\le1_{[0,\tau_\lambda]}(t_n)\Big|\int_{0}^{t}\Phi\,dW(s)\Big|+1_{[0,\tau_\lambda]}(t_n)\Big|\int_{0}^{t_n}\Phi\,dW(s)\Big|\le2\lambda.$$
Remark 5.2. This means that condition (2.10) is satisfied by the sequence (fδλ)δ.
5.4 Tightness of the sequence of kinetic measures (mδλ)δ
Lemma 5.3. The kinetic measures mδλ defined on TN × [0, T ] × R by
\begin{align*}
\omega\in\Omega\mapsto m^\lambda_\delta(dx,dt,d\xi)(\omega)=\Bigg(&\sum_{n=0}^{N_T-2}\sum_{K\in\mathcal{T}_\#}1_{K\times[t_n,t_{n+1})}(x,t)\,1_{[0,\tau_\lambda(\omega)]}(t_n)\,m^n_K(\xi)\\
&+\sum_{K\in\mathcal{T}_\#}1_{K\times[t_{N_T-1},T]}(x,t)\,1_{[0,\tau_\lambda(\omega)]}(t_{N_T-1})\,m^{N_T-1}_K(\xi)\Bigg)\,dx\,dt\,d\xi
\end{align*}
are random measures.
Remark 5.3. Usually the parameter ω is omitted, but to talk about random measures, I have to write it.
Proof of Lemma 5.3. At fixed ω ∈ Ω, the bounded measures mδλ(dx, dt, dξ)(ω) are absolutely continuous with respect to the Lebesgue measure dxdtdξ because the mnK(.) are continuous and compactly supported. Their non-negativity is due to Remark 5.1. Let ψ ∈ Cb(TN × [0, T ] × R); then
$$\omega\in\Omega\mapsto m^\lambda_\delta(\psi)(\omega)=\sum_{K\in\mathcal{T}_\#}\sum_{n=0}^{N_T-1}\int_{\mathbb{R}}\int_K\int_{t_n}^{t_{n+1}}\psi(x,t,\xi)\,1_{[0,\tau_\lambda(\omega)]}(t_n)\,m^n_K(\xi,\omega)\,dt\,dx\,d\xi.$$
To show that this is a random variable, it suffices to show that
$$\omega\in\Omega\mapsto\int_{\mathbb{R}}\int_K\int_{t_n}^{t_{n+1}}\psi(x,t,\xi)\,1_{[0,\tau_\lambda(\omega)]}(t_n)\,m^n_K(\xi,\omega)\,dt\,dx\,d\xi$$
is a random variable for each n and each K. The F -measurability of vnK and v^{n+1/2}_K is proved by induction on n, which implies the B(R) ⊗ F -measurability of mnK(ξ). The F -measurability of 1[0,τλ(ω)](tn) is due to the fact that τλ is a stopping time. As the ξ 7→ mnK(ξ) are bounded and compactly supported, the ψ(x, t, ξ)1[0,τλ(ω)](tn)mnK(ξ) are integrable on TN × [0, T ] × R × Ω, and the Fubini theorem allows me to conclude.

Lemma 5.4. For all δ ∈ dT and all λ ∈ R+,
$$\sum_{n=0}^{N_T-1}\mathbb{E}\Big[1_{[0,\tau_\lambda]}(t_n)\Big(\|v_\delta(t_{n+1})\|^2_{L^2(\mathbb{T}^N)}-\|v^\flat_\delta(t_{n+1})\|^2_{L^2(\mathbb{T}^N)}\Big)\Big]\le T\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\qquad(5.14)$$
$$\sum_{n=0}^{N_T-1}\mathbb{E}\Big[1_{[0,\tau_\lambda]}(t_n)\Big(\|v_\delta(t_{n+1})\|^4_{L^4(\mathbb{T}^N)}-\|v^\flat_\delta(t_{n+1})\|^4_{L^4(\mathbb{T}^N)}\Big)\Big]\le12\,T\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\Big(\big(\|u_0\|_\infty+\lambda\big)^2+\|\Phi\|^2_{L_2(H,\mathbb{R})}\Big)\qquad(5.15)$$
Proof of Lemma 5.4. The numerical scheme gives the equality
$$v^{n+1}_K=v^{n+\frac12}_K+\int_{t_n}^{t_{n+1}}\Phi\,dW(s)$$
for all n ∈ {0, · · · , NT − 1}. Thus, I apply Itô's formula to the function s ∈ R 7→ s² and to the stochastic process
$$t\mapsto v^\sharp_K(t)=v^{n+\frac12}_K+\int_{t_n}^{t}\Phi\,dW(s)$$
defined on [tn, tn+1) to obtain (passing to the limit as t tends to tn+1)
$$\big(v^{n+1}_K\big)^2=\big(v^{n+1/2}_K\big)^2+2\int_{t_n}^{t_{n+1}}\Big(v^{n+\frac12}_K+\int_{t_n}^{t}\Phi\,dW(s)\Big)\,\Phi\,dW(t)+\int_{t_n}^{t_{n+1}}\|\Phi\|^2_{L_2(H,\mathbb{R})}\,dt.$$
Multiplying by 1[0,τλ(ω)](tn), which is Ftn-measurable so that the stochastic integral term has zero expectation, and taking the expectation, I obtain
$$\mathbb{E}\Big[1_{[0,\tau_\lambda(\omega)]}(t_n)\Big(\big(v^{n+1}_K\big)^2-\big(v^{n+1/2}_K\big)^2\Big)\Big]\le\Delta t_n\,\|\Phi\|^2_{L_2(H,\mathbb{R})}.$$
Multiplying by |K|, summing over the K ∈ T#, and summing over the n ∈ {0, · · · , NT − 1}, I obtain (5.14).
If now I apply Itô's formula to the function s ∈ R 7→ s⁴ and to the stochastic process
$$t\mapsto v^\sharp_K(t)=v^{n+\frac12}_K+\int_{t_n}^{t}\Phi\,dW(s)$$
defined on [tn, tn+1), I obtain (passing to the limit as t tends to tn+1):
$$\big(v^{n+1}_K\big)^4=\big(v^{n+1/2}_K\big)^4+4\int_{t_n}^{t_{n+1}}\Big(v^{n+\frac12}_K+\int_{t_n}^{t}\Phi\,dW(s)\Big)^3\,\Phi\,dW(t)+6\int_{t_n}^{t_{n+1}}\Big(v^{n+\frac12}_K+\int_{t_n}^{t}\Phi\,dW(s)\Big)^2\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,dt.$$
Multiplying by 1[0,τλ](tn) and taking the expectation, I obtain
\begin{align*}
\mathbb{E}\Big[1_{[0,\tau_\lambda]}(t_n)\Big(\big(v^{n+1}_K\big)^4-\big(v^{n+1/2}_K\big)^4\Big)\Big]
&=6\int_{t_n}^{t_{n+1}}\mathbb{E}\Big[1_{[0,\tau_\lambda]}(t_n)\Big(v^{n+\frac12}_K+\int_{t_n}^{t}\Phi\,dW(s)\Big)^2\Big]\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,dt\\
&\le12\int_{t_n}^{t_{n+1}}\Big(\mathbb{E}\Big[1_{[0,\tau_\lambda]}(t_n)\big(v^{n+\frac12}_K\big)^2\Big]+\Delta t_n\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\Big)\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,dt\\
&\le12\,\Delta t_n\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\Big(\big(\|u_0\|_\infty+\lambda\big)^2+\Delta t_n\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\Big).
\end{align*}
Multiplying by |K|, summing over the K ∈ T#, and summing over the n ∈ {0, · · · , NT − 1}, I obtain (5.15).
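The mechanism behind (5.14) can be seen on a toy computation. This is not the paper's setting: the noise is scalar and the Wiener increment is replaced by a symmetric two-point variable with the same variance, so that the expectation is a finite, exact sum. The point is that the martingale term of Itô's formula averages to zero and only the quadratic-variation term ∆tn‖Φ‖² survives.

```python
import math

# Toy check of the identity behind (5.14):
#   E[(v^{n+1})^2] - (v^{n+1/2})^2 = ||Phi||^2 * dt,
# with the Wiener increment replaced by xi = +/- sqrt(dt), each with
# probability 1/2, so that E[xi] = 0 and E[xi^2] = dt exactly.

phi = 0.7          # plays the role of ||Phi||_{L_2(H,R)}
dt = 0.01
v_half = 1.3       # plays the role of v_K^{n+1/2}

increments = [math.sqrt(dt), -math.sqrt(dt)]
second_moment = sum((v_half + phi * xi) ** 2 for xi in increments) / 2

ito_correction = second_moment - v_half ** 2
print(ito_correction, phi ** 2 * dt)   # both are approximately 0.0049
```

The cross term 2·v_half·φ·ξ cancels in the average, exactly like the stochastic integral term cancels under the expectation in the proof.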
Note that the following proposition gives a stronger result than the assumptions (2.11) and (2.12) needed to apply Theorem 2.2. Indeed,
$$\mathbb{E}\int_{\mathbb{T}^N\times[0,T]\times\mathbb{R}}(1+|\xi|^2)\,dm^\lambda_\delta(x,t,\xi)\le\mathrm{Cste}$$
implies
$$\mathbb{E}\int_{\mathbb{T}^N\times[0,T]\times\mathbb{R}}1\,dm^\lambda_\delta(x,t,\xi)+R^2\,\mathbb{E}\int_{\mathbb{T}^N\times[0,T]\times B^c_R}1\,dm^\lambda_\delta(x,t,\xi)\le\mathrm{Cste}.$$
The first term on the left-hand side of the latter inequality gives (2.11), the second gives (2.12).
Proposition 5.5 (Tightness of (mδλ)δ). Let u0 ∈ L∞(TN), T > 0 and δ ∈ dT. Let (vδ(t)) be the numerical unknown defined by (3.4)-(3.5)-(5.7) and let mδλ be defined by (5.12). Then, I have
$$\mathbb{E}\int_{\mathbb{T}^N\times[0,T]\times\mathbb{R}}(1+|\xi|^2)\,dm^\lambda_\delta(x,t,\xi)\le\mathrm{Cste},\qquad(5.16)$$
where Cste is a constant depending only on kΦkL2(H,R), T , ku0kL∞(TN) and the parameter λ.
Proof of Proposition 5.5. Let p = 2 or p = 4. I start from (4.3), multiply by (ξp)′ = pξp−1 and integrate over R to obtain
$$\int_{\mathbb{R}}(\xi^p)'\big(1_{v^{n+\frac12}_K>\xi}-1_{v^n_K>\xi}\big)\,d\xi+\int_{\mathbb{R}}(\xi^p)'\,\frac{\Delta t_n}{|K|}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,d\xi=\Delta t_n\int_{\mathbb{R}}(\xi^p)'\,\partial_\xi m^n_K(\xi)\,d\xi,$$
that is,
$$\big|v^{n+\frac12}_K\big|^p-\big|v^n_K\big|^p+\int_{\mathbb{R}}(\xi^p)'\,\frac{\Delta t_n}{|K|}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,d\xi=-\Delta t_n\int_{\mathbb{R}}p(p-1)\,\xi^{p-2}\,m^n_K(\xi)\,d\xi,$$
because ξ 7→ mnK(ξ) is compactly supported, with support included in conv{v^{n+1/2}_K, vnK, vnL : L ∈ N (K)}. Thus
$$\sum_{K\in\mathcal{T}_\#}|K|\Big(\big|v^{n+\frac12}_K\big|^p-\big|v^n_K\big|^p\Big)+\sum_{K\in\mathcal{T}_\#}|K|\int_{\mathbb{R}}(\xi^p)'\,\frac{\Delta t_n}{|K|}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,d\xi=-\sum_{K\in\mathcal{T}_\#}\Delta t_n|K|\int_{\mathbb{R}}p(p-1)\,\xi^{p-2}\,m^n_K(\xi)\,d\xi,$$
which can be written
$$\sum_{K\in\mathcal{T}_\#}|K|\,\big|v^{n+\frac12}_K\big|^p+\sum_{K\in\mathcal{T}_\#}\Delta t_n|K|\int_{\mathbb{R}}p(p-1)\,\xi^{p-2}\,m^n_K(\xi)\,d\xi=\sum_{K\in\mathcal{T}_\#}|K|\,\big|v^n_K\big|^p,$$
because, AK→L being locally Lipschitz continuous, for almost all ξ ∈ R,
$$\partial_2A_{L\to K}(v^n_L,\xi)=\lim_{s\to\xi}\frac{A_{L\to K}(v^n_L,s)-A_{L\to K}(v^n_L,\xi)}{s-\xi}=\lim_{s\to\xi}\frac{-A_{K\to L}(s,v^n_L)+A_{K\to L}(\xi,v^n_L)}{s-\xi}=-\partial_1A_{K\to L}(\xi,v^n_L),$$
$$\partial_1A_{L\to K}(\xi,v^n_K)=-\partial_2A_{K\to L}(v^n_K,\xi),$$
and
\begin{align*}
\sum_{K\in\mathcal{T}_\#}|K|\,\frac{\Delta t_n}{|K|}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)
&=\frac12\sum_{K\in\mathcal{T}_\#}|K|\,\frac{\Delta t_n}{|K|}\sum_{L\in\mathcal{N}(K)}\big(a_{K\to L}(\xi,v^n_K,v^n_L)+a_{L\to K}(\xi,v^n_L,v^n_K)\big)\\
&=\frac12\sum_{K\in\mathcal{T}_\#}|K|\,\frac{\Delta t_n}{|K|}\sum_{L\in\mathcal{N}(K)}\Big(|K|L|\,a(\xi)\cdot n_{K,L}\,1_{\xi<(v^n_K\wedge v^n_L)}+\partial_2A_{K\to L}(v^n_K,\xi)\,1_{v^n_K\le\xi\le v^n_L}\\
&\qquad+\partial_1A_{K\to L}(\xi,v^n_L)\,1_{v^n_L\le\xi\le v^n_K}+|K|L|\,a(\xi)\cdot n_{L,K}\,1_{\xi<(v^n_K\wedge v^n_L)}\\
&\qquad+\partial_2A_{L\to K}(v^n_L,\xi)\,1_{v^n_L\le\xi\le v^n_K}+\partial_1A_{L\to K}(\xi,v^n_K)\,1_{v^n_K\le\xi\le v^n_L}\Big)=0.
\end{align*}
Finally,
$$\|v^\flat_\delta(t_{n+1})\|^p_{L^p(\mathbb{T}^N)}+\sum_{K\in\mathcal{T}_\#}\Delta t_n|K|\int_{\mathbb{R}}p(p-1)\,\xi^{p-2}\,m^n_K(\xi)\,d\xi=\|v_\delta(t_n)\|^p_{L^p(\mathbb{T}^N)}.\qquad(5.17)$$
Multiplying by 1[0,τλ](tn) and summing over n ∈ {0, . . . , NT − 1}, I obtain
$$p(p-1)\int_{\mathbb{T}^N\times[0,T]\times\mathbb{R}}\xi^{p-2}\,m^\lambda_\delta(dx,dt,d\xi)(\omega)\le\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\Big(\|v_\delta(t_n)\|^p_{L^p(\mathbb{T}^N)}-\|v^\flat_\delta(t_{n+1})\|^p_{L^p(\mathbb{T}^N)}\Big).\qquad(5.18)$$
Taking the expectation and using Lemma 5.4, I obtain for p = 2
\begin{align*}
2\,\mathbb{E}\int_{\mathbb{T}^N\times[0,T]\times\mathbb{R}}1\,m^\lambda_\delta(dx,dt,d\xi)
&\le\mathbb{E}\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\Big(\|v_\delta(t_n)\|^2_{L^2(\mathbb{T}^N)}-\|v^\flat_\delta(t_{n+1})\|^2_{L^2(\mathbb{T}^N)}\Big)\\
&\le\mathbb{E}\sum_{n=0}^{N_T-1}\Big(1_{[0,\tau_\lambda]}(t_n)\,\|v_\delta(t_n)\|^2_{L^2(\mathbb{T}^N)}-1_{[0,\tau_\lambda]}(t_{n+1})\,\|v_\delta(t_{n+1})\|^2_{L^2(\mathbb{T}^N)}\Big)\\
&\quad+\mathbb{E}\sum_{n=0}^{N_T-1}\Big(1_{[0,\tau_\lambda]}(t_n)\,\|v_\delta(t_{n+1})\|^2_{L^2(\mathbb{T}^N)}-1_{[0,\tau_\lambda]}(t_n)\,\|v^\flat_\delta(t_{n+1})\|^2_{L^2(\mathbb{T}^N)}\Big)\\
&\le\mathbb{E}\|v_\delta(0)\|^2_{L^2(\mathbb{T}^N)}+T\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\le\|u_0\|^2_{L^\infty(\mathbb{T}^N)}+T\,\|\Phi\|^2_{L_2(H,\mathbb{R})}.\qquad(5.19)
\end{align*}
For p = 4, I also take the expectation in (5.18), and using Lemma 5.4 I obtain:
\begin{align*}
12\,\mathbb{E}\int_{\mathbb{T}^N\times[0,T]\times\mathbb{R}}\xi^2\,m^\lambda_\delta(dx,dt,d\xi)
&\le\mathbb{E}\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\Big(\|v_\delta(t_n)\|^4_{L^4(\mathbb{T}^N)}-\|v^\flat_\delta(t_{n+1})\|^4_{L^4(\mathbb{T}^N)}\Big)\\
&\le\mathbb{E}\sum_{n=0}^{N_T-1}\Big(1_{[0,\tau_\lambda]}(t_n)\,\|v_\delta(t_n)\|^4_{L^4(\mathbb{T}^N)}-1_{[0,\tau_\lambda]}(t_{n+1})\,\|v_\delta(t_{n+1})\|^4_{L^4(\mathbb{T}^N)}\Big)\\
&\quad+\mathbb{E}\sum_{n=0}^{N_T-1}\Big(1_{[0,\tau_\lambda]}(t_n)\,\|v_\delta(t_{n+1})\|^4_{L^4(\mathbb{T}^N)}-1_{[0,\tau_\lambda]}(t_n)\,\|v^\flat_\delta(t_{n+1})\|^4_{L^4(\mathbb{T}^N)}\Big)\\
&\le\mathbb{E}\|v_\delta(0)\|^4_{L^4(\mathbb{T}^N)}+12\,T\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\Big(\big(\|u_0\|_\infty+\lambda\big)^2+\|\Phi\|^2_{L_2(H,\mathbb{R})}\Big)\\
&\le\|u_0\|^4_{L^\infty(\mathbb{T}^N)}+12\,T\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\Big(\big(\|u_0\|_\infty+\lambda\big)^2+\|\Phi\|^2_{L_2(H,\mathbb{R})}\Big).
\end{align*}
Remark 5.4. This means that conditions (2.11) and (2.12) are satisfied by the sequence (mδλ)δ.
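The sign structure of (5.17), namely that the kinetic dissipation measure can only remove Lp mass, has a simple deterministic counterpart. The following sketch uses assumptions made only for the illustration: a 1-D linear flux A(u) = u and a uniform periodic grid, instead of the paper's general monotone fluxes. A monotone scheme under CFL writes the update as a convex combination of neighbouring values, so no Lp norm can increase.

```python
import numpy as np

# Sketch: one deterministic step of the upwind scheme for u_t + u_x = 0,
#   u_j^{n+1} = (1 - cfl) u_j^n + cfl u_{j-1}^n,
# is a convex combination for cfl in [0, 1]; by Jensen's inequality every
# discrete L^p norm is non-increasing, mirroring the dissipation in (5.17).

n, cfl = 64, 0.5
x = np.arange(n) / n
u = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)

def upwind_step(u, cfl):
    return (1 - cfl) * u + cfl * np.roll(u, 1)

def lp(u, p):
    return (np.abs(u) ** p).mean() ** (1 / p)

l2_before, l4_before = lp(u, 2), lp(u, 4)
for _ in range(10):
    u = upwind_step(u, cfl)
l2_after, l4_after = lp(u, 2), lp(u, 4)
print(l2_before, l2_after, l4_before, l4_after)
```

In the stochastic scheme of the paper the same Lp decrease happens at the deterministic half-step, and the noise then adds the controlled Itô contribution of Lemma 5.4.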
6 Approximate kinetic equation
In this section, I will prove that (fδλ)δ, defined in (5.10), and (mδλ)δ, defined in (5.12),
verify the approximate kinetic equation (2.9) up to the stopping time τλ defined in (4.9).
For that, I need the following discrete kinetic equation, valid on each interval [tn, tn+1).
Proposition 6.1 (Discrete kinetic equation). For all t ∈ [tn, tn+1), n ∈ {0, ..., NT − 1}, x ∈ K, K ∈ T , ψ ∈ Cc∞(R):
\begin{align*}
\int_{\mathbb{R}}f^\lambda_\delta(x,t,\xi)\,\psi(\xi)\,d\xi-\int_{\mathbb{R}}f^\lambda_\delta(x,t_n,\xi)\,\psi(\xi)\,d\xi
=&-1_{[0,\tau_\lambda]}(t_n)\int_{t_n}^{t}\Bigg(\frac{1}{|K|}\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,\psi(\xi)\,d\xi+\int_{\mathbb{R}}\partial_\xi\psi(\xi)\,m^n_K(\xi)\,d\xi\Bigg)\,ds\\
&+1_{[0,\tau_\lambda]}(t_n)\,\frac{t-t_n}{\Delta t_n}\Bigg(\int_{t_n}^{t}\psi\big(v^\sharp_K(s)\big)\,\Phi\,dW(s)+\frac12\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\int_{t_n}^{t}\partial_\xi\psi\big(v^\sharp_K(s)\big)\,ds\Bigg)\qquad(6.1)
\end{align*}
with
$$v^\sharp_K(t):=v^{n+\frac12}_K+\int_{t_n}^{t}\Phi\,dW(s),\qquad\forall t\in[t_n,t_{n+1}).$$
Proof of Proposition 6.1. For t ∈ [tn, tn+1), n ∈ {0, ..., NT − 1}, x ∈ K, K ∈ T , I have:
\begin{align*}
&\int_{\mathbb{R}}f^\lambda_\delta(x,t,\xi)\,\psi(\xi)\,d\xi-\int_{\mathbb{R}}f^\lambda_\delta(x,t_n,\xi)\,\psi(\xi)\,d\xi\\
&\quad=\int_{\mathbb{R}}1_{[0,\tau_\lambda]}(t_n)\,\frac{t-t_n}{\Delta t_n}\,1_{v^\sharp_K(t)>\xi}\,\psi(\xi)\,d\xi-\int_{\mathbb{R}}1_{[0,\tau_\lambda]}(t_n)\,\frac{t-t_n}{\Delta t_n}\,1_{v^n_K>\xi}\,\psi(\xi)\,d\xi\\
&\quad=\int_{\mathbb{R}}1_{[0,\tau_\lambda]}(t_n)\,\frac{t-t_n}{\Delta t_n}\Big(1_{v^\sharp_\delta(x,t)>\xi}-1_{v^{n+\frac12}_K>\xi}\Big)\psi(\xi)\,d\xi+\int_{\mathbb{R}}1_{[0,\tau_\lambda]}(t_n)\,\frac{t-t_n}{\Delta t_n}\Big(1_{v^{n+\frac12}_K>\xi}-1_{v^n_K>\xi}\Big)\psi(\xi)\,d\xi\\
&\quad=1_{[0,\tau_\lambda]}(t_n)\,\frac{t-t_n}{\Delta t_n}\Bigg(\int_{0}^{v^\sharp_\delta(x,t)}\psi(\xi)\,d\xi-\int_{0}^{v^{n+\frac12}_K}\psi(\xi)\,d\xi\Bigg)+\int_{\mathbb{R}}1_{[0,\tau_\lambda]}(t_n)\,(t-t_n)\Bigg(-\frac{1}{|K|}\sum_{L}a_{K\to L}(\xi,v^n_K,v^n_L)+\partial_\xi m^n_K(\xi)\Bigg)\psi(\xi)\,d\xi\\
&\quad=1_{[0,\tau_\lambda]}(t_n)\,\frac{t-t_n}{\Delta t_n}\Bigg(\int_{t_n}^{t}\psi\Big(v^{n+\frac12}_K+\int_{t_n}^{s}\Phi\,dW(r)\Big)\,\Phi\,dW(s)+\frac12\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\int_{t_n}^{t}\psi'\Big(v^{n+\frac12}_K+\int_{t_n}^{s}\Phi\,dW(r)\Big)\,ds\Bigg)\\
&\qquad-1_{[0,\tau_\lambda]}(t_n)\Bigg(\int_{t_n}^{t}\int_{\mathbb{R}}\frac{1}{|K|}\sum_{L}a_{K\to L}(\xi,v^n_K,v^n_L)\,\psi(\xi)\,d\xi\,ds+\int_{t_n}^{t}\int_{\mathbb{R}}m^n_K(\xi)\,\psi'(\xi)\,d\xi\,ds\Bigg)
\end{align*}
thanks to Itô's formula applied to the process v♯K(t) and to the function
$$r\in\mathbb{R}\mapsto\int_0^r\psi(\xi)\,d\xi.$$
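The step from indicator differences to Itô's formula rests on the elementary identity ∫R (1b>ξ − 1a>ξ)ψ(ξ) dξ = ∫ab ψ(ξ) dξ, which is what lets the antiderivative r 7→ ∫0r ψ enter the computation. A quick numerical check (illustrative only; the test function and the values of a, b are arbitrary choices):

```python
import numpy as np

# Check: int_R (1_{b > xi} - 1_{a > xi}) psi(xi) dxi = int_a^b psi(xi) dxi.

psi = lambda xi: np.exp(-xi ** 2)        # any smooth integrable test function
a, b = -0.4, 1.1

xi = np.linspace(-6.0, 6.0, 200001)      # fine grid for the left-hand side
d_xi = xi[1] - xi[0]
lhs = (((b > xi).astype(float) - (a > xi).astype(float)) * psi(xi)).sum() * d_xi

# right-hand side by midpoint quadrature on [a, b]
s_mid = a + (np.arange(20000) + 0.5) * (b - a) / 20000
rhs = psi(s_mid).mean() * (b - a)
print(lhs, rhs)
```

The same identity, applied with b = vδ♯(x, t) and a = vK^{n+1/2}, produces the two antiderivative terms in the chain of equalities above.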
6.1 Beginning of the proof leading to the approximate kinetic equation up to the stopping time τλ
Let us apply the discrete kinetic equation to any fixed x ∈ K, to ξ 7→ ϕ(x, ξ) ∈ Cc∞(R), and to n ∈ {0, ..., NT − 1}, t ∈ [tn, tn+1) such that t ≤ τλ:
\begin{align*}
\int_{\mathbb{R}}f^\lambda_\delta(x,t,\xi)\,\varphi(x,\xi)\,d\xi-\int_{\mathbb{R}}f^\lambda_\delta(x,t_n,\xi)\,\varphi(x,\xi)\,d\xi
=&-1_{[0,\tau_\lambda]}(t_n)\int_{t_n}^{t}\Bigg(\frac{1}{|K|}\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,\varphi(x,\xi)\,d\xi+\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,m^n_K(\xi)\,d\xi\Bigg)\,ds\\
&+1_{[0,\tau_\lambda]}(t_n)\,\frac{t-t_n}{\Delta t_n}\Bigg(\int_{t_n}^{t}\varphi\big(x,v^\sharp_K(s)\big)\,\Phi\,dW(s)+\frac12\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\int_{t_n}^{t}\partial_\xi\varphi\big(x,v^\sharp_K(s)\big)\,ds\Bigg).
\end{align*}
I integrate over each K ∈ T#, then I sum over the K ∈ T#:
\begin{align*}
&\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\lambda_\delta(x,t,\xi)\,\varphi(x,\xi)\,d\xi\,dx-\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\lambda_\delta(x,t_n,\xi)\,\varphi(x,\xi)\,d\xi\,dx\\
&\quad=-1_{[0,\tau_\lambda]}(t_n)\sum_{K\in\mathcal{T}_\#}\int_K\frac{1}{|K|}\int_{t_n}^{t}\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,\varphi(x,\xi)\,d\xi\,ds\,dx\\
&\qquad-1_{[0,\tau_\lambda]}(t_n)\sum_{K\in\mathcal{T}_\#}\int_K\int_{t_n}^{t}\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,m^n_K(\xi)\,d\xi\,ds\,dx\\
&\qquad+1_{[0,\tau_\lambda]}(t_n)\,\frac{t-t_n}{\Delta t_n}\sum_{K\in\mathcal{T}_\#}\int_K\int_{t_n}^{t}\varphi\big(x,v^\sharp_K(s)\big)\,\Phi\,dW(s)\,dx\\
&\qquad+1_{[0,\tau_\lambda]}(t_n)\,\frac12\,\frac{t-t_n}{\Delta t_n}\sum_{K\in\mathcal{T}_\#}\int_K\int_{t_n}^{t}\partial_\xi\varphi\big(x,v^\sharp_K(s)\big)\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,ds\,dx.
\end{align*}
I can interchange ∫K and ∫tn..t in the first, second and fourth terms of the right-hand side of the previous equality thanks to the Fubini theorem, and in the third term thanks to the stochastic Fubini theorem, using the fact that
$$\int_K\mathbb{E}\int_{t_n}^{t}\big\|\varphi\big(x,v^\sharp_K(s,\omega)\big)\,\Phi\big\|^2_{L_2(H,\mathbb{R})}\,ds\,dx\le\|\varphi\|^2_\infty\int_K\mathbb{E}\int_{t_n}^{t}\|\Phi\|^2_{L_2(H,\mathbb{R})}\,ds\,dx<+\infty.$$
I obtain
\begin{align*}
&\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\lambda_\delta(x,t,\xi)\,\varphi(x,\xi)\,d\xi\,dx-\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\lambda_\delta(x,t_n,\xi)\,\varphi(x,\xi)\,d\xi\,dx\\
&\quad=-1_{[0,\tau_\lambda]}(t_n)\sum_{K\in\mathcal{T}_\#}\int_{t_n}^{t}\int_K\frac{1}{|K|}\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,\varphi(x,\xi)\,d\xi\,dx\,ds-\int_{t_n}^{t}\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,m^\lambda_\delta(dx,ds,d\xi)\\
&\qquad+\sum_{K\in\mathcal{T}_\#}\int_{t_n}^{t}\int_K\int_{\mathbb{R}}\varphi(x,\xi)\,\mu^\delta_{x,s,t}(d\xi)\,dx\,\Phi\,dW(s)+\frac12\sum_{K\in\mathcal{T}_\#}\int_{t_n}^{t}\int_K\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,\mu^\delta_{x,s,t}(d\xi)\,dx\,ds,
\end{align*}
where µδx,s,t is the Borel measure on R defined for all x ∈ TN, t ∈ [0, T ], s ∈ [0, t) by
$$\mu^\delta_{x,s,t}(d\xi)=\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\,1_{[t_n,\,t_n\vee(t\wedge t_{n+1}))}(s)\,\frac{t_{n+1}\wedge t-t_n\wedge t}{\Delta t_n}\,\delta_{v^\sharp_\delta(x,s)}(d\xi).\qquad(6.2)$$
I then sum over the n from 0 to the integer l such that t ∈ [tl, tl+1) or t ∈ [tNT−1, T ], to obtain (using the previous formula at the limit t ↗ tn+1 for n < l, and at t ∈ [tl, tl+1) or t ∈ [tNT−1, T ] for the last interval)
\begin{align*}
&\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\lambda_\delta(x,t,\xi)\,\varphi(x,\xi)\,d\xi\,dx-\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\lambda_\delta(x,0,\xi)\,\varphi(x,\xi)\,d\xi\,dx\\
&\quad=-\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\sum_{K\in\mathcal{T}_\#}\int_{t\wedge t_n}^{t\wedge t_{n+1}}\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,\frac{1}{|K|}\int_K\varphi(x,\xi)\,dx\,d\xi\,ds\\
&\qquad-\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,m^\lambda_\delta(dx,ds,d\xi)+\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\varphi(x,\xi)\,\mu^\delta_{x,s,t}(d\xi)\,dx\,\Phi\,dW(s)\\
&\qquad+\frac12\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,\mu^\delta_{x,s,t}(d\xi)\,dx\,ds.
\end{align*}
This equation is true for all t ∈ [0, τλ]. It remains to compare three terms of the right-hand side of this equality to the three corresponding terms of the kinetic equation (2.6) up to the stopping time τλ for t ∈ [0, τλ], and to show that their differences are error terms which vanish as |δ| → 0.
6.2 Comparison of the fluxes
For readability, in the sequel of the paper I will denote by $L^\lambda_A$ the constant $L^{\|u_0\|_\infty+\lambda}_A$.
Proposition 6.2 (Space consistency). Let u0 ∈ L∞(TN), T > 0 and δ ∈ dT. Under the CFL condition (4.12), for all ϕ ∈ Cc∞(TN × R) supported in TN × Λ, with Λ a compact subset of R, I have
\begin{align*}
\int_{\mathbb{T}^N}\int_0^t\int_{\mathbb{R}}f^\lambda_\delta(x,s,\xi)\,a(\xi)\cdot\nabla_x\varphi(x,\xi)\,dx\,ds\,d\xi
=&-\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\sum_{K\in\mathcal{T}_\#}\int_{t\wedge t_n}^{t\wedge t_{n+1}}\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}a^n_{K\to L}(\xi)\,\frac{1}{|K|}\int_K\varphi(x,\xi)\,dx\,d\xi\,ds\\
&+\varepsilon^\delta_{\mathrm{space},0}(t,\varphi)+\varepsilon^\delta_{\mathrm{space},1}(t,\varphi),\qquad(6.3)
\end{align*}
almost surely, for all t ∈ [0, τλ], with the estimates
\begin{align*}
\mathbb{E}\Big(\sup_{t\in[0,\tau_\lambda]}\big|\varepsilon^\delta_{\mathrm{space},0}(t,\varphi)\big|\Big)\le\sup_{\xi\in\Lambda}|a(\xi)|\times\|\,|\nabla\varphi|\,\|_\infty\Bigg(&T\,\|\Phi\|_{L_2(H,\mathbb{R})}\sqrt{\sup_{0\le n<N_T}\Delta t_n}\\
&+\sup_{n\in\{0,\dots,N_T-1\}}\Delta t_n\;T\Big(L^\lambda_A+\sup_{\xi\in[-\|u_0\|_\infty-\lambda;\,\|u_0\|_\infty+\lambda]}|A(\xi)|\Big)\Bigg),\qquad(6.4)
\end{align*}
and
$$\mathbb{E}\Big(\sup_{t\in[0,\tau_\lambda]}\big|\varepsilon^\delta_{\mathrm{space},1}(t,\varphi)\big|\Big)\le h\,\sup_{x\in\mathbb{T}^N,\,\xi\in\mathbb{R}}|\nabla_x\varphi(x,\xi)|\;4L^\lambda_A\,T\,(\|u_0\|_\infty+\lambda).\qquad(6.5)$$
Proof of Proposition 6.2.
The aim is to show that, for t ∈ [0, τλ],
$$-\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\sum_{K\in\mathcal{T}_\#}\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,\frac{1}{|K|}\int_K\varphi(x,\xi)\,dx\,d\xi\,ds$$
is close to
$$\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\lambda_\delta(x,s,\xi)\,a(\xi)\cdot\nabla_x\varphi(x,\xi)\,d\xi\,dx\,ds.$$
Let us start by showing that
$$\sum_{n=0}^{N_T-1}\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{T}^N}\int_{\mathbb{R}}1_{[0,\tau_\lambda]}(t_n)\,1_{v_\delta(x,s)>\xi}\,a(\xi)\cdot\nabla_x\varphi(x,\xi)\,d\xi\,dx\,ds$$
is close to
$$-\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\sum_{K\in\mathcal{T}_\#}\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,\frac{1}{|K|}\int_K\varphi(x,\xi)\,dx\,d\xi\,ds.$$
For that, I use the following equalities:
\begin{align*}
&\sum_{n=0}^{N_T-1}\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{T}^N}\int_{\mathbb{R}}1_{[0,\tau_\lambda]}(t_n)\,1_{v_\delta(x,s)>\xi}\,a(\xi)\cdot\nabla_x\varphi(x,\xi)\,d\xi\,dx\,ds\\
&\quad=\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{K\in\mathcal{T}_\#}1_{v^n_K>\xi}\,a(\xi)\cdot\int_K\nabla_x\varphi(x,\xi)\,dx\,d\xi\,ds\\
&\quad=\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{K\in\mathcal{T}_\#}1_{v^n_K>\xi}\,a(\xi)\cdot\int_{\partial K}\varphi(x,\xi)\,n_K\,d\mathcal{H}^{N-1}(x)\,d\xi\,ds\\
&\quad=\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{K\in\mathcal{T}_\#}1_{v^n_K>\xi}\sum_{L\in\mathcal{N}(K)}|K|L|\,a(\xi)\cdot\frac{1}{|K|L|}\int_{K|L}\varphi(x,\xi)\,n_{K,L}\,d\mathcal{H}^{N-1}(x)\,d\xi\,ds\\
&\quad=\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{K\in\mathcal{T}_\#}1_{v^n_K>\xi}\sum_{L\in\mathcal{N}(K)}|K|L|\,a(\xi)\cdot\varphi_{K|L}(\xi)\,n_{K,L}\,d\xi\,ds,
\end{align*}
where ϕK|L(ξ) stands for
$$\frac{1}{|K|L|}\int_{K|L}\varphi(x,\xi)\,d\mathcal{H}^{N-1}(x).$$
Then I notice that
\begin{align*}
&\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\sum_{K\in\mathcal{T}_\#}\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,\varphi_{K|L}(\xi)\,d\xi\,ds\\
&\quad=\frac12\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\sum_{K\in\mathcal{T}_\#}\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}\big(a_{K\to L}(\xi,v^n_K,v^n_L)\,\varphi_{K|L}(\xi)+a_{L\to K}(\xi,v^n_L,v^n_K)\,\varphi_{L|K}(\xi)\big)\,d\xi\,ds=0
\end{align*}
and that
\begin{align*}
&\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{K\in\mathcal{T}_\#}1_{v^n_K>\xi}\sum_{L\in\mathcal{N}(K)}|K|L|\,a(\xi)\cdot\varphi_K(\xi)\,n_{K,L}\,d\xi\,ds\\
&\quad=\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{K\in\mathcal{T}_\#}1_{v^n_K>\xi}\,\varphi_K(\xi)\sum_{L\in\mathcal{N}(K)}\int_{K|L}a(\xi)\cdot n_{K,L}\,d\mathcal{H}^{N-1}(y)\,d\xi\,ds\\
&\quad=\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{K\in\mathcal{T}_\#}1_{v^n_K>\xi}\,\varphi_K(\xi)\int_{\partial K}a(\xi)\cdot n_K\,d\mathcal{H}^{N-1}(y)\,d\xi\,ds\\
&\quad=\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{K\in\mathcal{T}_\#}1_{v^n_K>\xi}\,\varphi_K(\xi)\int_K\mathrm{div}_y\big(a(\xi)\big)\,dy\,d\xi\,ds=0,
\end{align*}
where ϕK(ξ) stands for
$$\frac{1}{|K|}\int_K\varphi(x,\xi)\,dx,$$
in order to compare
$$\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{K\in\mathcal{T}_\#}1_{v^n_K>\xi}\sum_{L\in\mathcal{N}(K)}|K|L|\,a(\xi)\cdot\big(\varphi_{K|L}(\xi)-\varphi_K(\xi)\big)\,n_{K,L}\,d\xi\,ds$$
and
$$\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\sum_{K\in\mathcal{T}_\#}\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,\big(\varphi_{K|L}(\xi)-\varphi_K(\xi)\big)\,d\xi\,ds.$$
Now, let us compare aK→L(ξ, vnK, vnL) and 1vnK>ξ |K|L| a(ξ)·nK,L for n such that tn ≤ τλ. Since ϕ ∈ Cc∞(TN × R), the maps ξ 7→ ϕK|L(ξ) and ξ 7→ ϕK(ξ) are also Cc∞(R). So let us continue with γ ∈ Cc∞(R). If vnK ≤ vnL,
\begin{align*}
\Big|\int_{\mathbb{R}}\gamma(\xi)\Big(1_{v^n_K>\xi}\,|K|L|\,a(\xi)\cdot n_{K,L}-a_{K\to L}(\xi,v^n_K,v^n_L)\Big)\,d\xi\Big|
&=\Big|\int_{\mathbb{R}}\gamma(\xi)\Big(-\partial_2A_{K\to L}(v^n_K,\xi)\,1_{v^n_K\le\xi\le v^n_L}\Big)\,d\xi\Big|\\
&\le\int_{\mathbb{R}}|\gamma(\xi)|\times|K|L|\,L^\lambda_A\,1_{v^n_K\le\xi\le v^n_L}\,d\xi,
\end{align*}
because
$$a_{K\to L}(\xi,v^n_K,v^n_L)=|K|L|\,a(\xi)\cdot n_{K,L}\,1_{v^n_K\wedge v^n_L>\xi}+\partial_2A_{K\to L}(v^n_K,\xi)\,1_{v^n_K\le\xi\le v^n_L}+\partial_1A_{K\to L}(\xi,v^n_L)\,1_{v^n_L\le\xi\le v^n_K}.$$
If vnK ≥ vnL,
\begin{align*}
\Big|\int_{\mathbb{R}}\gamma(\xi)\Big(|K|L|\,a(\xi)\cdot n_{K,L}\,1_{v^n_K>\xi}-a_{K\to L}(\xi,v^n_K,v^n_L)\Big)\,d\xi\Big|
&=\Big|\int_{\mathbb{R}}\gamma(\xi)\Big(|K|L|\,a(\xi)\cdot n_{K,L}\big(1_{v^n_K>\xi}-1_{v^n_L>\xi}\big)-\partial_1A_{K\to L}(\xi,v^n_L)\,1_{v^n_L\le\xi\le v^n_K}\Big)\,d\xi\Big|\\
&\le\int_{\mathbb{R}}|\gamma(\xi)|\,1_{v^n_L\le\xi\le v^n_K}\,|K|L|\,2L^\lambda_A\,d\xi.
\end{align*}
Moreover, for all ξ ∈ R, I have
\begin{align*}
\big|\varphi_{K|L}(\xi)-\varphi_K(\xi)\big|
&=\Big|\frac{1}{|K|L|}\int_{K|L}\varphi(x,\xi)\,d\mathcal{H}^{N-1}(x)-\frac{1}{|K|}\int_K\varphi(y,\xi)\,dy\Big|\\
&=\Big|\frac{1}{|K|}\int_K\frac{1}{|K|L|}\int_{K|L}\varphi(x,\xi)\,d\mathcal{H}^{N-1}(x)\,dy-\frac{1}{|K|L|}\int_{K|L}\frac{1}{|K|}\int_K\varphi(y,\xi)\,dy\,d\mathcal{H}^{N-1}(x)\Big|\\
&\le\frac{1}{|K|}\int_K\frac{1}{|K|L|}\int_{K|L}|\varphi(x,\xi)-\varphi(y,\xi)|\,d\mathcal{H}^{N-1}(x)\,dy\le\sup_{x\in\mathbb{T}^N,\,\xi\in\mathbb{R}}|\nabla_x\varphi(x,\xi)|\,h.
\end{align*}
Thus I find
\begin{align*}
&\mathbb{E}\Bigg(\sup_{t\in[0,\tau_\lambda]}\Bigg|\sum_{n=0}^{N_T-1}\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{T}^N}1_{[0,\tau_\lambda]}(t_n)\int_{\mathbb{R}}1_{v_\delta(x,s)>\xi}\,a(\xi)\cdot\nabla_x\varphi(x,\xi)\,d\xi\,dx\,ds\\
&\qquad\qquad-\Bigg(-\sum_{n=0}^{N_T-1}\sum_{K\in\mathcal{T}_\#}\int_{t_n\wedge t}^{t_{n+1}\wedge t}1_{[0,\tau_\lambda]}(t_n)\int_{\mathbb{R}}\sum_{L\in\mathcal{N}(K)}a_{K\to L}(\xi,v^n_K,v^n_L)\,\varphi_K(\xi)\,d\xi\,ds\Bigg)\Bigg|\Bigg)\\
&\quad\le h\,\sup_{x\in\mathbb{T}^N,\,\xi\in\mathbb{R}}|\nabla_x\varphi(x,\xi)|\;4L^\lambda_A\,T\,(\|u_0\|_\infty+\lambda).
\end{align*}
Then, to show that
$$\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\lambda_\delta(x,s,\xi)\,a(\xi)\cdot\nabla_x\varphi(x,\xi)\,d\xi\,dx\,ds$$
is close to
$$\sum_{n=0}^{N_T-1}\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{T}^N}\int_{\mathbb{R}}1_{[0,\tau_\lambda]}(t_n)\,1_{v_\delta(x,s)>\xi}\,a(\xi)\cdot\nabla_x\varphi(x,\xi)\,d\xi\,dx\,ds,$$
let us compute, for t ∈ [0, τλ],
\begin{align*}
&\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\lambda_\delta(x,s,\xi)\,a(\xi)\cdot\nabla_x\varphi(x,\xi)\,d\xi\,dx\,ds-\sum_{n=0}^{N_T-1}\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{T}^N}\int_{\mathbb{R}}1_{[0,\tau_\lambda]}(t_n)\,1_{v_\delta(x,s)>\xi}\,a(\xi)\cdot\nabla_x\varphi(x,\xi)\,d\xi\,dx\,ds\\
&\quad=\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{T}^N}\int_{\mathbb{R}}\frac{s-t_n}{\Delta t_n}\Big(1_{v^\sharp_\delta(x,s)>\xi}-1_{v_\delta(x,s)>\xi}\Big)\,a(\xi)\cdot\nabla_x\varphi(x,\xi)\,d\xi\,dx\,ds,\qquad(6.6)
\end{align*}
which gives
\begin{align*}
&\mathbb{E}\Bigg(\sup_{t\in[0,\tau_\lambda]}\Bigg|\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{T}^N}\int_{\mathbb{R}}\frac{s-t_n}{\Delta t_n}\Big(1_{v^\sharp_\delta(x,s)>\xi}-1_{v_\delta(x,s)>\xi}\Big)\,a(\xi)\cdot\nabla_x\varphi(x,\xi)\,d\xi\,dx\,ds\Bigg|\Bigg)\\
&\quad\le\sup_{\xi\in\Lambda}|a(\xi)|\times\|\,|\nabla\varphi|\,\|_\infty\times\mathbb{E}\Bigg(\sum_{n=0}^{N_T-1}1_{[0,\tau_\lambda]}(t_n)\int_{t_n\wedge t}^{t_{n+1}\wedge t}\int_{\mathbb{T}^N}\Big(\big|v^\sharp_\delta(x,s)-v^\flat_\delta(x,t_{n+1})\big|+\big|v^\flat_\delta(x,t_{n+1})-v_\delta(x,s)\big|\Big)\,dx\,ds\Bigg)\\
&\quad\le\sup_{\xi\in\Lambda}|a(\xi)|\times\|\,|\nabla\varphi|\,\|_\infty\Bigg(\sum_{n=0}^{N_T-1}\int_{t_n}^{t_{n+1}}\Big(\mathbb{E}\Big|\int_{t_n}^{s}\Phi\,dW(r)\Big|^2\Big)^{1/2}ds+\Big(\sup_{n\in\{0,\dots,N_T-1\}}\Delta t_n\Big)\sum_{n=0}^{N_T-1}\sum_{K\in\mathcal{T}_\#}|K|\,\mathbb{E}\Big[1_{[0,\tau_\lambda]}(t_n)\big|v^{n+1/2}_K-v^n_K\big|\Big]\Bigg).
\end{align*}
With
$$\sum_{n=0}^{N_T-1}\int_{t_n}^{t_{n+1}}\Big(\mathbb{E}\Big|\int_{t_n}^{s}\Phi\,dW(r)\Big|^2\Big)^{1/2}ds\le T\,\|\Phi\|_{L_2(H,\mathbb{R})}\sqrt{\sup_{n\in\{0,\dots,N_T-1\}}\Delta t_n},$$
it remains to give a bound to
\begin{align*}
\sum_{n=0}^{N_T-1}\sum_{K\in\mathcal{T}_\#}|K|\,\mathbb{E}\Big[1_{[0,\tau_\lambda]}(t_n)\big|v^{n+1/2}_K-v^n_K\big|\Big]
&\le\sum_{n=0}^{N_T-1}\sum_{K\in\mathcal{T}_\#}|K|\,\mathbb{E}\Bigg[1_{[0,\tau_\lambda]}(t_n)\,\frac{\Delta t_n}{|K|}\Big|\sum_{L\in\mathcal{N}(K)}\big(A_{K\to L}(v^n_K,v^n_L)-A_{K\to L}(v^n_K,v^n_K)\big)+|K|L|\,A(v^n_K)\cdot n_{K,L}\Big|\Bigg]\\
&\le\sum_{n=0}^{N_T-1}\sum_{K\in\mathcal{T}_\#}|K|\,\frac{\Delta t_n}{|K|}\sum_{L\in\mathcal{N}(K)}\Big(|K|L|\,L^\lambda_A+|K|L|\sup_{\xi\in[-\|u_0\|_\infty-\lambda;\,\|u_0\|_\infty+\lambda]}|A(\xi)|\Big)\\
&\le T\Big(L^\lambda_A+\sup_{\xi\in[-\|u_0\|_\infty-\lambda;\,\|u_0\|_\infty+\lambda]}|A(\xi)|\Big).\qquad(6.7)
\end{align*}
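The two properties of the numerical flux used throughout this comparison, consistency with A and monotonicity, can be sketched on a concrete example. As an assumption for the illustration only, the abstract flux AK→L is replaced by a 1-D Lax-Friedrichs flux for Burgers, A(u) = u²/2; this is not the paper's flux.

```python
import numpy as np

# Lax-Friedrichs two-point flux F(a, b) for A(u) = u^2/2 on the range [-1, 1]:
#   consistency: F(u, u) = A(u),
#   monotonicity: F non-decreasing in a, non-increasing in b,
# the latter holding because lam >= max |A'| on the range.

A = lambda u: 0.5 * u ** 2
lam = 1.0                                 # >= max |A'| = 1 on [-1, 1]

def F(a, b):
    return 0.5 * (A(a) + A(b)) - 0.5 * lam * (b - a)

us = np.linspace(-1.0, 1.0, 21)
consistency_gap = max(abs(F(u, u) - A(u)) for u in us)

# sampled one-sided difference quotients for the monotonicity signs
eps = 1e-6
dF_da = min((F(u + eps, w) - F(u, w)) / eps for u in us[:-1] for w in us)
dF_db = max((F(u, w + eps) - F(u, w)) / eps for u in us for w in us[:-1])
print(consistency_gap, dF_da, dF_db)
```

Consistency is what makes the sum of fluxes over ∂K vanish for a constant state, and monotonicity is what produces the non-negative kinetic measures mnK used above.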
6.3 Comparison of stochastic integrals
Proposition 6.3. Let u0 ∈ L∞(TN), T > 0 and δ ∈ dT. Let µδ be the Borel measure defined by (6.2). Assume that (1.2), (3.1) and (4.12) are satisfied. Then, for all ϕ ∈ Cc∞(TN × R),
$$\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\varphi(x,\xi)\,\mu^\delta_{x,s,t}(d\xi)\,dx\,\Phi\,dW(s)=\int_0^t1_{[0,\tau_\lambda]}(s)\int_{\mathbb{T}^N}\int_{\mathbb{R}}\varphi(x,\xi)\,\nu^{\delta,\lambda}_{x,s}(d\xi)\,dx\,\Phi\,dW(s)+\varepsilon^\delta_{W,2}(t,\varphi),\qquad(6.8)$$
almost surely, for all t ∈ [0, τλ], with the following estimate:
\begin{align*}
\mathbb{E}\Big(\sup_{t\in[0,\tau_\lambda]}\big|\varepsilon^\delta_{W,2}(t,\varphi)\big|^2\Big)\le\;&8\,\|\Phi\|^4_{L_2(H,\mathbb{R})}\|\partial_\xi\varphi\|^2_\infty\,T\sup_{n\in\{0,\dots,N_T-1\}}\Delta t_n\\
&+8\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\|\partial_\xi\varphi\|^2_\infty\,T\,\frac{\sup_{n\in\{0,\dots,N_T-1\}}\Delta t_n}{\alpha_N\,h^{N-1}}\Big(\big(L^\lambda_A\big)^2+\sup_{\xi\in[-\|u_0\|_\infty-\lambda;\,\|u_0\|_\infty+\lambda]}|A(\xi)|^2\Big).\qquad(6.9)
\end{align*}
Proof of Proposition 6.3.
Let t ∈ [0, τλ] and
$$\varepsilon^\delta_{W,2}(t,\varphi):=\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\varphi(x,\xi)\,d\mu^\delta_{x,s,t}(\xi)\,dx\,\Phi\,dW(s)-\int_0^t1_{[0,\tau_\lambda]}(s)\int_{\mathbb{T}^N}\int_{\mathbb{R}}\varphi(x,\xi)\,d\nu^{\delta,\lambda}_{x,s}(\xi)\,dx\,\Phi\,dW(s).$$
I notice that n ∈ {0, ..., NT − 1} 7→ εδW,2(tn, ϕ) is an Ftn-martingale. If t ∈ [tl, tl+1), or t ∈ [tl, T ] when l = NT − 1, I use the following decomposition:
$$\varepsilon^\delta_{W,2}(t,\varphi)=\varepsilon^\delta_{W,2}(t_l,\varphi)+\int_{t_l}^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\varphi(x,\xi)\,d\mu^\delta_{x,s,t}(\xi)\,dx\,\Phi\,dW(s)-\int_{t_l}^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\varphi(x,\xi)\,d\nu^{\delta,\lambda}_{x,s}(\xi)\,dx\,\Phi\,dW(s).$$
Then, with a maximal inequality (see [21], page 53), I obtain
\begin{align*}
\mathbb{E}\Big(\sup_{l\in\{0,\dots,N_T-1\}}\big|\varepsilon^\delta_{W,2}(t_l,\varphi)\big|^2\Big)
&\le4\,\mathbb{E}\big|\varepsilon^\delta_{W,2}(t_{N_T-1},\varphi)\big|^2\\
&=4\,\mathbb{E}\int_0^{t_{N_T-1}}\Big\|\int_{\mathbb{T}^N}\Big(\int_{\mathbb{R}}\varphi(x,\xi)\,\mu^\delta_{(x,s,t_{N_T-1})}(d\xi)-\int_{\mathbb{R}}\varphi(x,\xi)\,\nu^{\delta,\lambda}_{x,s}(d\xi)\Big)\,dx\;\Phi\Big\|^2_{L_2(H,\mathbb{R})}\,ds\\
&\le4\,\mathbb{E}\sum_{n=0}^{N_T-2}1_{[0,\tau_\lambda]}(t_n)\int_{t_n}^{t_{n+1}}\int_{\mathbb{T}^N}\Big|\frac{t_{n+1}-s}{\Delta t_n}\,\varphi\big(x,v^\sharp_\delta(x,s)\big)-\frac{t_{n+1}-s}{\Delta t_n}\,\varphi\big(x,v_\delta(x,s)\big)\Big|^2\,dx\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,ds\\
&\le4\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\|\partial_\xi\varphi\|^2_\infty\,\mathbb{E}\sum_{n=0}^{N_T-2}1_{[0,\tau_\lambda]}(t_n)\int_{t_n}^{t_{n+1}}\int_{\mathbb{T}^N}\big|v^\sharp_\delta(x,s)-v_\delta(x,s)\big|^2\,dx\,ds\\
&\le4\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\|\partial_\xi\varphi\|^2_\infty\sum_{n=0}^{N_T-2}\int_{t_n}^{t_{n+1}}\sum_{K\in\mathcal{T}_\#}|K|\Bigg(\mathbb{E}\Big|\int_{t_n}^{s}\Phi\,dW(r)\Big|^2+\mathbb{E}\Big[1_{[0,\tau_\lambda]}(t_n)\big|v^{n+1/2}_K-v^n_K\big|^2\Big]\Bigg)\,ds,
\end{align*}
thanks to an independence argument. Then, using (6.7) and Itô's isometry,
\begin{align*}
\mathbb{E}\Big(\sup_{l\in\{0,\dots,N_T-1\}}\big|\varepsilon^\delta_{W,2}(t_l,\varphi)\big|^2\Big)\le\;&4\,\|\Phi\|^4_{L_2(H,\mathbb{R})}\|\partial_\xi\varphi\|^2_\infty\,T\sup_{n\in\{0,\dots,N_T-1\}}\Delta t_n\\
&+4\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\|\partial_\xi\varphi\|^2_\infty\,T\,\frac{\sup_{n\in\{0,\dots,N_T-1\}}\Delta t_n}{\alpha_N\,h^{N-1}}\Big(\big(L^\lambda_A\big)^2+\sup_{\xi\in[-\|u_0\|_\infty-\lambda;\,\|u_0\|_\infty+\lambda]}|A(\xi)|^2\Big).
\end{align*}
Let l ∈ {0, ..., NT − 1}; with almost the same computations, it remains to give a bound to
\begin{align*}
\mathbb{E}\Big(\sup_{t\in[t_l,t_{l+1})}\big|\varepsilon^\delta_{W,2}(t,\varphi)-\varepsilon^\delta_{W,2}(t_l,\varphi)\big|^2\Big)
&=\mathbb{E}\Bigg(\sup_{t\in[t_l,t_{l+1})}\Big|\int_{t_l}^t\Big(\int_{\mathbb{T}^N}\int_{\mathbb{R}}\varphi(x,\xi)\,\mu^\delta_{x,s,t}(d\xi)\,dx-\int_{\mathbb{T}^N}\int_{\mathbb{R}}\varphi(x,\xi)\,\nu^{\delta,\lambda}_{x,s}(d\xi)\,dx\Big)\,\Phi\,dW(s)\Big|^2\Bigg)\\
&\le4\,\mathbb{E}\int_{t_l}^{t_{l+1}}\Big\|\Big(\int_{\mathbb{T}^N}\int_{\mathbb{R}}\varphi(x,\xi)\,\mu^\delta_{x,s,t}(d\xi)\,dx-\int_{\mathbb{T}^N}\int_{\mathbb{R}}\varphi(x,\xi)\,\nu^{\delta,\lambda}_{x,s}(d\xi)\,dx\Big)\,\Phi\Big\|^2_{L_2(H,\mathbb{R})}\,ds\\
&\le4\,\mathbb{E}\,1_{[0,\tau_\lambda]}(t_l)\int_{t_l}^{t_{l+1}}\int_{\mathbb{T}^N}\Big|\int_{\mathbb{R}}\varphi(x,\xi)\,\mu^\delta_{x,s,t}(d\xi)-\int_{\mathbb{R}}\varphi(x,\xi)\,\nu^{\delta,\lambda}_{x,s}(d\xi)\Big|^2\,dx\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,ds\\
&\le4\,\|\Phi\|^4_{L_2(H,\mathbb{R})}\|\partial_\xi\varphi\|^2_\infty\,T\sup_{n}\Delta t_n+4\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\|\partial_\xi\varphi\|^2_\infty\,T\,\frac{\sup_{n}\Delta t_n}{\alpha_N\,h^{N-1}}\Big(\big(L^\lambda_A\big)^2+\sup_{\xi\in[-\|u_0\|_\infty-\lambda;\,\|u_0\|_\infty+\lambda]}|A(\xi)|^2\Big).
\end{align*}
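The maximal inequality invoked above ([21], p. 53), E supl |M(tl)|² ≤ 4 E|M(tNT−1)|² for a martingale M, can be checked exhaustively on a toy martingale: a ±1 random walk standing in for the stochastic integral. All 2ⁿ equally likely paths are enumerated, so the expectations below are exact sums, not Monte Carlo estimates.

```python
import itertools

# Exhaustive check of Doob's L^2 maximal inequality
#   E[ max_{k <= n} |M_k|^2 ] <= 4 E[ |M_n|^2 ]
# for the simple random walk martingale M_k = s_1 + ... + s_k, s_i = +/-1.

n = 10
lhs = rhs = 0.0
for steps in itertools.product((-1, 1), repeat=n):
    m, running_max = 0, 0
    for s in steps:
        m += s
        running_max = max(running_max, m * m)
    lhs += running_max
    rhs += m * m
lhs /= 2 ** n          # E[max_k M_k^2]
rhs /= 2 ** n          # E[M_n^2], equal to n for this walk
print(lhs, 4 * rhs)
```

The constant 4 is exactly the one appearing in the chain of estimates for εδW,2 above.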
6.4 Comparison of Itˆo terms
Proposition 6.4. Let u0 ∈ L∞(TN), T > 0 and δ ∈ dT. Assume that (1.2), (3.1) and (4.12) are satisfied. Then, for all ϕ ∈ Cc∞(TN × R),
$$\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,d\mu^\delta_{x,s,t}(\xi)\,dx\,ds=\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,d\nu^{\delta,\lambda}_{x,s}(\xi)\,dx\,ds+\varepsilon^\delta_{W,3}(t,\varphi),\qquad(6.10)$$
almost surely, for all t ∈ [0, τλ], with the following estimate:
\begin{align*}
\mathbb{E}\Big(\sup_{t\in[0,\tau_\lambda]}\big|\varepsilon^\delta_{W,3}(t,\varphi)\big|\Big)\le\;&2\,\|\Phi\|^3_{L_2(H,\mathbb{R})}\big\|\partial^2_\xi\varphi\big\|_\infty\,T\sup_{n\in\{0,\dots,N_T-1\}}\sqrt{\Delta t_n}\\
&+2\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\big\|\partial^2_\xi\varphi\big\|_\infty\,T\sup_{n\in\{0,\dots,N_T-1\}}\Delta t_n\Big(L^\lambda_A+\sup_{\xi\in[-\|u_0\|_\infty-\lambda;\,\|u_0\|_\infty+\lambda]}|A(\xi)|\Big).\qquad(6.11)
\end{align*}
Proof of Proposition 6.4. Let us define, for t ∈ [0, τλ],
\begin{align*}
\varepsilon^\delta_{W,3}(t,\varphi):=&\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,\mu^\delta_{x,s,t}(d\xi)\,dx\,ds-\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,\nu^{\delta,\lambda}_{x,s}(d\xi)\,dx\,ds\\
=&\;\varepsilon^\delta_{W,3}(t_l,\varphi)+\int_{t_l}^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,\mu^\delta_{x,s,t}(d\xi)\,dx\,ds-\int_{t_l}^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,\nu^{\delta,\lambda}_{x,s}(d\xi)\,dx\,ds,
\end{align*}
with l ∈ {0, ..., NT − 1} such that t ∈ [tl, tl+1), or t ∈ [tNT−1, T ] if l = NT − 1. With almost the same computation as in the proof of Proposition 6.3, I obtain:
\begin{align*}
\mathbb{E}\Big(\sup_{l\in\{0,\dots,N_T-1\}}\big|\varepsilon^\delta_{W,3}(t_l,\varphi)\big|\Big)
&\le\mathbb{E}\sum_{n=0}^{N_T-2}1_{[0,\tau_\lambda]}(t_n)\int_{t_n}^{t_{n+1}}\int_{\mathbb{T}^N}\Big|\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\mu^\delta_{(x,s,t_{N_T-1})}(d\xi)-\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\nu^{\delta,\lambda}_{x,s}(d\xi)\Big|\,dx\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,ds\\
&\le\|\Phi\|^2_{L_2(H,\mathbb{R})}\big\|\partial^2_\xi\varphi\big\|_\infty\,\mathbb{E}\sum_{n=0}^{N_T-2}1_{[0,\tau_\lambda]}(t_n)\int_{t_n}^{t_{n+1}}\int_{\mathbb{T}^N}\big|v^\sharp_\delta(x,s)-v_\delta(x,s)\big|\,dx\,ds\\
&\le\|\Phi\|^2_{L_2(H,\mathbb{R})}\big\|\partial^2_\xi\varphi\big\|_\infty\sum_{n=0}^{N_T-2}\int_{t_n}^{t_{n+1}}\sum_{K\in\mathcal{T}_\#}|K|\Bigg(\mathbb{E}\Big|\int_{t_n}^{s}\Phi\,dW(r)\Big|+\mathbb{E}\Big[1_{[0,\tau_\lambda]}(t_n)\big|v^{n+1/2}_K-v^n_K\big|\Big]\Bigg)\,ds.
\end{align*}
Then, using (6.7) and Itô's isometry,
\begin{align*}
\mathbb{E}\Big(\sup_{l\in\{0,\dots,N_T-1\}}\big|\varepsilon^\delta_{W,3}(t_l,\varphi)\big|\Big)\le\;&\|\Phi\|^3_{L_2(H,\mathbb{R})}\big\|\partial^2_\xi\varphi\big\|_\infty\,T\sup_{n\in\{0,\dots,N_T-1\}}\sqrt{\Delta t_n}\\
&+\|\Phi\|^2_{L_2(H,\mathbb{R})}\big\|\partial^2_\xi\varphi\big\|_\infty\,T\sup_{n\in\{0,\dots,N_T-1\}}\Delta t_n\Big(L^\lambda_A+\sup_{\xi\in[-\|u_0\|_\infty-\lambda;\,\|u_0\|_\infty+\lambda]}|A(\xi)|\Big).
\end{align*}
Let l ∈ {0, ..., NT − 1}; with almost the same computations, it remains to give a bound to
\begin{align*}
\mathbb{E}\Big(\sup_{t\in[t_l,t_{l+1})}\big|\varepsilon^\delta_{W,3}(t,\varphi)-\varepsilon^\delta_{W,3}(t_l,\varphi)\big|\Big)
&\le\mathbb{E}\,1_{[0,\tau_\lambda]}(t_l)\int_{t_l}^{t_{l+1}}\int_{\mathbb{T}^N}\Big|\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\mu^\delta_{x,s,t}(d\xi)-\int_{\mathbb{R}}\partial_\xi\varphi(x,\xi)\,\nu^{\delta,\lambda}_{x,s}(d\xi)\Big|\,dx\,\|\Phi\|^2_{L_2(H,\mathbb{R})}\,ds\\
&\le\|\Phi\|^3_{L_2(H,\mathbb{R})}\big\|\partial^2_\xi\varphi\big\|_\infty\,T\sup_{n}\sqrt{\Delta t_n}+\|\Phi\|^2_{L_2(H,\mathbb{R})}\big\|\partial^2_\xi\varphi\big\|_\infty\,T\sup_{n}\Delta t_n\Big(L^\lambda_A+\sup_{\xi\in[-\|u_0\|_\infty-\lambda;\,\|u_0\|_\infty+\lambda]}|A(\xi)|\Big).
\end{align*}
7 Convergence of the approximation given by the Finite Volume scheme towards the unique solution of the time-continuous equation
Theorem 7.1. Let u0 ∈ L∞(TN), T > 0. Assume that the assumptions (1.2), (3.1),
(5.4), (5.5) are satisfied. Let u be the solution to (1.1) with initial datum u0 and let vδ
be the solution to the Finite Volume scheme (3.4)-(3.5)-(5.7). Then for all p ∈ [1, +∞):
$$\int_0^T\|v_\delta(\cdot,t)-u(\cdot,t)\|^p_{L^p(\mathbb{T}^N)}\,dt\xrightarrow[\delta\to0]{}0\quad\text{in probability}.$$
Proof of Theorem 7.1. I can now apply Theorem 2.2 to (fδλ)δ and the stopping time τλ to obtain that (zδλ)δ, defined for t ∈ [tn, tn+1), n ∈ {0, ..., NT − 2}, and for t ∈ [tNT−1, T ], by
$$z^\lambda_\delta(x,t):=\int_{\mathbb{R}}\xi\,d\nu^{\delta,\lambda}_{x,t}(\xi)=1_{[0,\tau_\lambda]}(t_n)\Big(\frac{t-t_n}{\Delta t_n}\,v^\sharp_\delta(x,t)+\frac{t_{n+1}-t}{\Delta t_n}\,v_\delta(x,t)\Big),$$
converges towards the unique solution u of (1.1) as δ → 0 in the following sense: at fixed λ > 0, for all p ∈ [1, +∞),
$$\mathbb{E}\int_0^T1_{[0,\tau_\lambda]}(t)\int_{\mathbb{T}^N}\big|z^\lambda_\delta(x,t)-u(x,t)\big|^p\,dx\,dt\xrightarrow[\delta\to0]{}0.\qquad(7.1)$$
It is straightforward to see that
$$\mathbb{E}\int_0^T\int_{\mathbb{T}^N}\big|1_{[0,\tau_\lambda]}(t)\,u(x,t)-u(x,t)\big|^p\,dx\,dt\xrightarrow[\lambda\to+\infty]{}0,\qquad(7.2)$$