
$L^1$-error estimates for numerical approximations of Hamilton-Jacobi-Bellman equations in dimension 1

Olivier Bokanowski, Nicolas Forcadel, Hasnaa Zidani

Abstract. The goal of this paper is to study some numerical approximations of particular Hamilton-Jacobi-Bellman equations in dimension 1 with possibly discontinuous initial data. We investigate two anti-diffusive numerical schemes; the first one is based on the Ultra-Bee scheme and the second one on the Fast Marching Method. We prove the convergence and derive $L^1$-error estimates for both schemes. We also provide numerical examples to validate their accuracy for both smooth and discontinuous solutions.

Keywords: Hamilton-Jacobi-Bellman equations, Fast Marching Method, Ultra-Bee scheme, $L^1$-error estimate, anti-diffusive scheme.

Mathematics Subject Classification: 49L99, 65M15.

1 Introduction

This paper discusses two explicit numerical approximations of the following one-space-dimensional Hamilton-Jacobi-Bellman (HJB) equation

$$ \begin{aligned} \vartheta_t + \max\big( f^+(x)\,\vartheta_x,\ f^-(x)\,\vartheta_x \big) &= 0 \quad \text{in } \mathbb{R}\times(0,T),\\ \vartheta(\cdot,0) &= v_0 \quad \text{in } \mathbb{R}, \end{aligned} \tag{1.1} $$

where $v_0 \in L^\infty(\mathbb{R})$. In particular, $v_0$ can be discontinuous.

In optimal control theory, the solution $\vartheta$ of equation (1.1) corresponds to the value function of an optimization problem [3, 2]. It often happens that this function, as well as the "final" cost $v_0$, is discontinuous (for instance for target or Rendez-Vous problems). The discontinuities of $\vartheta$ may represent, for example, the interface between the domain of admissible trajectories and the domain of prohibited trajectories, so it is very important to localize the discontinuities. This is the reason why, in the discontinuous case, the classical monotone schemes for HJB equations ([8, 11, 1, 13]) are no longer suitable. Indeed, if we attempt to use these schemes, we observe an increasing numerical diffusion around the discontinuities, which is due to the fact that monotone schemes rely at some level on finite differences and/or interpolation techniques.

In this work, we investigate two different schemes to solve (1.1) for discontinuous initial data.

The first one is a Fast Marching Method (FMM) type scheme. This method, introduced by Sethian [14], is a very efficient scheme to solve numerically the eikonal equation for a given positive velocity $c(x)$.

This scheme has been improved by Carlini et al. in [7, 12], where the case of a sign-changing velocity is considered.

Lab. JLL, Universités Paris 6 & 7, Chevaleret, 75013 Paris. Email: boka@math.jussieu.fr

Projet Commands, INRIA Saclay & ENSTA, 32 Bd Victor, 75739 Paris Cedex 15. Email: Nicolas.Forcadel@ensta.fr

Projet Commands, INRIA Saclay & ENSTA, 32 Bd Victor, 75739 Paris Cedex 15. Email: Hasnaa.Zidani@ensta.fr


Recall that the FMM was built [14, 12] to deal with eikonal equations with an initial condition taking values in $\{0,1\}$; this scheme allows one to concentrate the numerical effort in a narrow band around the interface separating the zone of 0-values from the zone of 1-values.

Here we consider the case where the initial condition $v_0$ is any bounded lower semi-continuous (l.s.c.) function. We first define a level-set approximation $w_0$ of $v_0$ of the following form: given $p \ge 1$ and given $(h_k)_{k=1,\dots,p} \subset \mathbb{R}^+$, we set
$$ w_0(x) = \sum_{k=1}^{p} h_k\, w_{0,k}(x), \qquad \text{with } w_{0,k}(x) := \begin{cases} 1 & \text{if } v_0(x) > \sum_{i=1}^{k} h_i,\\ 0 & \text{otherwise.} \end{cases} $$

Now for each level $k = 1,\dots,p$, the function $w_{0,k}$ takes values only in $\{0,1\}$. Therefore, we propose an algorithm based on the FMM which evolves each level-set function $w_{0,k}$, for $k = 1,\dots,p$. Hence, we obtain for every $k$ an approximation $\vartheta^\rho_k$ (with $\rho := (\Delta x, (h_k)_k)$) of the solution of (1.1) associated with the initial condition $w_{0,k}$. Thanks to a comparison principle, we prove that an approximation of the solution $\vartheta$ of (1.1) is obtained by:

$$ \vartheta^\rho(t,x) = \sum_{k=1}^{p} h_k\, \vartheta^\rho_k(t,x). $$

We derive an $L^1$-error estimate (for finite time $t$) of order $\Delta x$, the mesh step size.
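For instance (an illustrative choice of levels, not taken from the paper): if $0 \le v_0 \le 2.5$ and we take $p = 3$ with $h_1 = h_2 = h_3 = 1$, the thresholds are $1$, $2$ and $3$; at a point where $v_0(x) = 1.6$ we get $w_{0,1}(x) = 1$ and $w_{0,2}(x) = w_{0,3}(x) = 0$, so $w_0(x) = 1$, and in general $0 \le v_0(x) - w_0(x) \le h = 1$ pointwise, which is consistent with the $(\beta-\alpha)h$ term appearing in Lemma 2.2 below.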

Let us mention that Y. Brenier [6] has used a similar level decomposition in the case of conservation laws.

The second scheme is a modified version of the Ultra-Bee scheme for HJB equations proposed in [5], whose convergence was proved in [4]. Let us mention that this scheme was first studied in [9, 10] for linear advection equations with constant velocity. In that case, it is proved that the scheme is exact whenever the initial function takes values in $\{0,1\}$ and the discontinuities are separated by $3\Delta x$ ($\Delta x$ being the mesh step size). The scheme keeps nice anti-diffusive properties when we deal with advection or HJB equations with sign-changing velocities and an initial condition taking only the values 0 and 1. A generalisation of the Ultra-Bee scheme is also proposed in [4] for HJB equations with an l.s.c. bounded initial condition $v_0$. This generalisation uses additional truncation and prediction steps when two discontinuities get close (closer than $3\Delta x$).

In this paper, we use the level-set decomposition of $v_0$ (as explained above). This leads back to HJB equations of the form (1.1) with initial conditions $w_{0,k}$ which take only the values 0 and 1.

The evolution of each level-set function can be accurately approximated by the Ultra-Bee scheme, and the resulting approximation of the solution $\vartheta$ of (1.1) is very satisfactory. The Ultra-Bee scheme combined with the level-set decomposition has almost the same $L^1$-error bound as the Ultra-Bee scheme studied in [4], but numerically the method proposed in this paper seems to give more accurate results (see Section 5 for a numerical comparison).

The aim of future work is to use the level-set decomposition idea in order to treat two-dimensional (or higher-dimensional) HJB evolution equations of the form $u_t + \max_\alpha (f_\alpha(x)\cdot\nabla u) = 0$ by related schemes.

The paper is organized as follows. In Section 2 we present our main results: a scheme based on the Fast Marching Method (FMM), an Ultra-Bee scheme (UB), and the main convergence results for both schemes in the form of $L^1$-error bounds. Section 3 is devoted to some preliminary results. Section 4 deals with the convergence proof for the FMM. Numerical simulations are presented in Section 5. Some technical proofs are postponed to the Appendix.


2 Main results

In this section, we present the convergence results for the FMM scheme and for the Ultra-Bee scheme.

The principal assumptions on the dynamics are the following:

(H1) $f^+$ and $f^-$ are $L$-Lipschitz continuous.

(H2) $\exists\, \varepsilon > 0$ such that $\forall x \in \mathbb{R}$, $f^-(x) + \varepsilon \le f^+(x)$.

Remark 2.1. This last assumption will allow us to compare the velocities $f^+$ and $f^-$ at different nearby points. It can be replaced by

(H2') $f^-(x) \le f^+(x)$, $\forall x \in \mathbb{R}$, and $f^+$ and $f^-$ are non-decreasing functions on $\mathbb{R}$,

or by

(H2'') $f^-(x) \le 0 \le f^+(x)$, $\forall x \in \mathbb{R}$.

On the initial condition $v_0$, we assume that

(H3) $v_0 \in L^\infty(\mathbb{R})$, $v_0$ is lower semi-continuous, and has a finite number of extrema, in the following sense: there exist real numbers $A_1, \dots, A_{q+1}$ and $B_1, \dots, B_q$ with
$$ A_1 = -\infty \le B_1 < A_2 < \dots < B_q \le A_{q+1} = +\infty $$
(with possibly $B_1 = -\infty$ or $B_q = +\infty$), such that $v_0$ is non-decreasing ($\nearrow$) on each $[A_i, B_i[$, non-increasing ($\searrow$) on each $]B_i, A_{i+1}]$, and $v_0(B_i) = \min(v_0(B_i^-), v_0(B_i^+))$.
($A_i$ are local minima of $v_0$, and $B_i$ are local maxima of $v_0$.)

We also assume that $v_0 \ge 0$ and is compactly supported: $\exists\, \alpha, \beta \ge 0$,
$$ \mathrm{supp}(v_0) \subset [\alpha, \beta]. \tag{2.2} $$

We finally assume that $v_0$ is locally Lipschitz continuous in a neighbourhood of its local minima:

(H4) $\exists\, \delta_0 > 0$, $\forall i = 2, \dots, q$, $v_0$ is Lipschitz continuous on $[A_i - \delta_0, A_i + \delta_0]$.

Let us recall the definition of the total variation of a real-valued function.

Definition 2.1. Let $w$ be a real-valued function. The total variation of $w$ is defined by:
$$ TV(w) := \sup\Big\{ \sum_{j=1,\dots,k} |w(y_{j+1}) - w(y_j)| \ ;\ k \in \mathbb{N}, \ (y_j)_{1\le j\le k+1} \text{ a non-decreasing sequence} \Big\}. $$


2.1 Level-set decomposition

Let us consider steps $(h_k)_{k=1,\dots,p}$ such that $h_k > 0$ and $\sum_{k=1}^{p} h_k > \|v_0\|_\infty$, and let
$$ \bar h_k := \sum_{i=1}^{k} h_i \ \text{ for } k \ge 1, \qquad \text{and} \qquad h := \sup_{1\le k\le p} h_k. \tag{2.3} $$

Let $\Delta x > 0$ be the step size of a spatial grid, and let $x_j := j\,\Delta x$, $j \in \mathbb{Z}$, denote a uniform mesh. Consider also
$$ x_{j+\frac12} := \big(j + \tfrac12\big)\Delta x, \qquad \text{and} \qquad I_j := \,\big]x_{j-\frac12},\, x_{j+\frac12}\big[. $$

We define $w_0$, a level-set decomposition of $v_0$, and the function $w_{0,k}$ of level $k$, by:
$$ w_{0,k}(x_i) := \mathbf{1}_{\bar h_k < v_0(x_i)} = \begin{cases} 1 & \text{if } \bar h_k < v_0(x_i),\\ 0 & \text{otherwise,} \end{cases} \tag{2.4a} $$
$$ w_{0,k}(x) = w_{0,k}(x_i) \quad \text{for } x \in I_i. \tag{2.4b} $$
We also set
$$ w_0(x) := \sum_{k=1,\dots,p} h_k\, w_{0,k}(x), \qquad x \in \mathbb{R}. \tag{2.4c} $$

Remark 2.2. Definition (2.4) clearly implies that $w_k(t,x) \in \{0,1\}$. Moreover, if $1 \le k_1 \le k_2 \le p$, then $w_{0,k_2}(x) \le w_{0,k_1}(x)$ for all $x \in \mathbb{R}$, and from the comparison principle [3] we obtain $w_{k_2}(t,x) \le w_{k_1}(t,x)$ for all $t \ge 0$ and $x \in \mathbb{R}$.
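A minimal sketch of the decomposition (2.3)-(2.4) on a uniform grid (our own illustration; the function name and the sample $v_0$ are assumptions, not taken from the paper):

```python
import numpy as np

def level_set_decomposition(v0_vals, h_steps):
    """Build the level functions w_{0,k} of (2.4a) and the reconstruction w_0 of (2.4c)
    from grid samples v0_vals = (v_0(x_i))_i and positive steps h_steps = (h_k)_k
    with sum(h_steps) > max(v0_vals)."""
    v0_vals = np.asarray(v0_vals, dtype=float)
    hbar = np.cumsum(h_steps)                        # cumulative thresholds \bar h_k of (2.3)
    w0k = np.array([(v0_vals > hb).astype(float) for hb in hbar])   # (2.4a), one row per level
    w0 = np.tensordot(h_steps, w0k, axes=1)          # (2.4c): w_0 = sum_k h_k w_{0,k}
    return w0k, w0

# usage: decompose a bump with values up to 2, using three unit levels
x = np.linspace(-2, 2, 401)
v0 = np.maximum(0.0, 2.0 - x**2)
w0k, w0 = level_set_decomposition(v0, h_steps=[1.0, 1.0, 1.0])
print(np.max(v0 - w0) <= 1.0)   # pointwise error at the nodes is at most h = max h_k
```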

Now, the idea is to propose two algorithms to compute numerically an approximation $\vartheta^\rho_k$ (where $\rho = (\Delta x, h)$ represents the discretization) of the solution $w_k$ of (1.1) with initial data $w_{0,k}$. These two schemes are based on the Fast Marching Method and on the Ultra-Bee method.

Once the numerical solutions $\vartheta^\rho_k$ have been computed, a natural approximation of the solution $\vartheta$ of (1.1) is simply given by $\vartheta^\rho = \sum_{k=1}^{p} h_k\, \vartheta^\rho_k$ (see Proposition 3.3). We now describe in detail the two algorithms as well as the convergence results we obtain.

First, we give an error estimate between $v_0$ and its projection $w_0$. The proof is left to the reader.

Lemma 2.2. (Error at initial time) We have the following estimate:
$$ \|w_0 - v_0\|_{L^1(\mathbb{R})} \le (\beta - \alpha)\,h + TV(v_0)\,\Delta x. \tag{2.5} $$
Next, we compare the evolution of the viscosity solutions of (1.1) associated with the initial data $v_0$ and $w_0$.

Proposition 2.3. Assume (H1)-(H2)-(H3) and $\Delta x \le \delta_0$. Let $w_k$ (resp. $w$) be the viscosity solution of (1.1) with initial data $w_{0,k}$ (resp. $w_0$). Then

(i) $\displaystyle w(t,x) = \sum_{k=1,\dots,p} h_k\, w_k(t,x), \qquad \forall t > 0,\ x \in \mathbb{R},$

(ii) $\displaystyle \|w(t,\cdot) - \vartheta(t,\cdot)\|_{L^1(\mathbb{R})} \le e^{Lt}(\beta - \alpha + M_0 t)\,h + \big( e^{Lt}\, TV(v_0) + M_1 \big)\,\Delta x,$

where
$$ M_0 = M_0(v_0, f) := \sum_{i=2,\dots,q} \big( |f^+(A_i)| + |f^-(A_i)| \big) \qquad \text{and} \qquad M_1 = \max_{j\,:\,\exists i,\ A_i \in \bar I_j} \|v_0\|_{L^\infty(I_j)} $$
are constants.

Remark 2.3. In this proposition, only $f^-(x) \le f^+(x)$ is actually needed, not the full assumption (H2).

2.2 Fast Marching Method

The idea is to evolve each level set $w_{0,k}$ using an adaptation of the Generalized Fast Marching Method introduced in [7] (see also [12]).

2.2.1 Notations and algorithm

Let $\Delta x > 0$ be the mesh step size of a uniform grid. For $k = 1,\dots,p$ and $i \in \mathbb{Z}$, we consider:
$$ \theta^{0,k}_i := 2\,w_{0,k}(x_i) - 1 = \begin{cases} 1 & \text{if } v_0(x_i) > \bar h_k,\\ -1 & \text{if } v_0(x_i) \le \bar h_k \end{cases} \tag{2.6} $$
(the introduction of $\theta$ is just useful to formulate the algorithm in a simple way, and in particular to have some symmetry properties of the algorithm).

As in [7], we also define approximated piecewise-constant velocity functions $\hat f^+$ and $\hat f^-$: for $x \in \,]x_{j-\frac12}, x_{j+\frac12}[$ and $j \in \mathbb{Z}$,
$$ \hat f^\alpha(x) := \begin{cases} 0 & \text{if } \exists\, i \in \{j\pm1\} \ \text{s.t.} \ f^\alpha(x_i) f^\alpha(x_j) \le 0 \ \text{and} \ |f^\alpha(x_j)| \le |f^\alpha(x_i)|,\\ f^\alpha(x_j) & \text{otherwise.} \end{cases} \tag{2.7} $$

This "regularization" allows us to introduce a numerical band of zero velocity separating the region where the velocity is positive from the one where it is negative. This separation is needed to avoid duplication of the front (see [7]).
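A minimal sketch of (2.7) on a finite grid (our own illustration; the treatment of the first and last nodes is an assumption, since the paper works on all of $\mathbb{Z}$):

```python
import numpy as np

def regularized_velocity(f_vals):
    """Piecewise-constant approximated velocity \\hat f^alpha of (2.7):
    f_vals[j] = f^alpha(x_j); the returned array gives the constant value of
    \\hat f^alpha on the cell ]x_{j-1/2}, x_{j+1/2}[."""
    f_vals = np.asarray(f_vals, dtype=float)
    f_hat = f_vals.copy()
    for j in range(len(f_vals)):
        for i in (j - 1, j + 1):
            if 0 <= i < len(f_vals):
                # zero the value when a neighbour has opposite sign and larger modulus
                if f_vals[i] * f_vals[j] <= 0 and abs(f_vals[j]) <= abs(f_vals[i]):
                    f_hat[j] = 0.0
    return f_hat
```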

Let us remark that, from (1.1), when $\theta^{0,k}_i = -1 = -\theta^{0,k}_{i+1}$, the discontinuity evolves with the velocity $f^+$ (to the right if $f^+ > 0$ and to the left if $f^+ < 0$), while when $\theta^{0,k}_i = 1 = -\theta^{0,k}_{i+1}$, the discontinuity evolves with the velocity $f^-$ (see Figure 1).

We now define, for each control $\alpha \in \{-,+\}$ and each $i \in \mathbb{Z}$, the stencil of grid points used to compute the value at the point $x_i$:

$$ \mathcal{U}^{n,k}_\alpha(i) := \begin{cases} \{i+1\} & \text{if } \theta^{n,k}_i = -\theta^{n,k}_{i+1} = -\alpha 1 \ \text{and} \ \hat f^\alpha(x_i) < 0,\\ \{i-1\} & \text{if } \theta^{n,k}_{i-1} = -\theta^{n,k}_i = -\alpha 1 \ \text{and} \ \hat f^\alpha(x_i) > 0,\\ \emptyset & \text{otherwise,} \end{cases} $$
and
$$ \mathcal{U}^{n,k}_\alpha := \bigcup_i \mathcal{U}^{n,k}_\alpha(i). $$

The sets $\mathcal{U}^{n,k}_\alpha$ will play the role of the frozen points of the classical Fast Marching Method.

We point out that the set $\mathcal{U}^{n,k}_\alpha(i)$ is either empty or a singleton. We also define the Narrow Bands by:
$$ \mathrm{NB}^{n,k}_\alpha := \big\{ i,\ \mathcal{U}^{n,k}_\alpha(i) \neq \emptyset \big\}, \qquad \mathrm{NB}^{n,k} := \mathrm{NB}^{n,k}_+ \cup \mathrm{NB}^{n,k}_-, \qquad \mathrm{NB}^n := \bigcup_{k=1,\dots,p} \mathrm{NB}^{n,k}. $$


Figure 1: Representation of the useful points and of the Narrow Band for the control $\alpha = +$. Left: $f^+(x_i) > 0$ and the discontinuity moves to the right (here $i-1 \in \mathcal{U}_+(i)$ and $i \in \mathrm{NB}_+$). Right: $f^+(x_i) < 0$ and the discontinuity moves to the left (here $i+1 \in \mathcal{U}_+(i)$ and $i \in \mathrm{NB}_+$).

We now describe our FMM for the Hamilton-Jacobi-Bellman equation (1.1). It is an adaptation of the method proposed by Carlini et al. [7, 12]. In order to track the evolution correctly, we introduce a discrete function $\tau^{n,k}_{i,\alpha} \in \mathbb{R}^+$, defined only for the points $i \in \mathcal{U}^{n,k}_\alpha$, which represents the approximated physical time of the front propagation at the node $i$, for the level set $k$, the control $\alpha$ and the $n$-th iteration of the algorithm (FMM).

The idea of the algorithm is then very simple. For each point $i$ of the Narrow Band NB, we compute a tentative value $\tilde\tau_i$ of the arrival time of the front, using the times of the useful points. We then find the minimum of the $\tilde\tau_i$, accept the nodes that realize the minimum (i.e. we change the value of $\theta$ there), and iterate the process. Let us now give the algorithm in detail.

Initialization: for $n = 0$, initialize the field $\theta^{0,k}$ as in (2.6), and set
$$ \tau^{0,k}_{i,\alpha} := \begin{cases} 0 & \text{if } i \in \mathcal{U}^{0,k}_\alpha,\\ +\infty & \text{otherwise,} \end{cases} $$
for $k = 1,\dots,p$ and $\alpha = \pm$.

Loop: for $n \ge 1$,

1. Compute $\tilde\tau^{n-1,k}$ on $\mathrm{NB}^{n-1,k}$ as follows: for $\alpha = \pm$, define
$$ \tilde\tau^{n-1,k}_{i,\alpha} := \begin{cases} \tau^{n-1,k}_{\imath,\alpha} + \dfrac{\Delta x}{|\hat f^\alpha(x_i)|} & \text{if } i \in \mathrm{NB}^{n-1,k}_\alpha,\\[6pt] +\infty & \text{otherwise,} \end{cases} $$
where $\imath \in \mathcal{U}^{n-1,k}_\alpha(i)$, and set
$$ \tilde\tau^{n-1,k}_i := \min_{\alpha\in\{+,-\}} \tilde\tau^{n-1,k}_{i,\alpha}. $$

2. Set $t_n := \min\big\{ \tilde\tau^{n-1,k}_i,\ i \in \mathrm{NB}^{n-1,k},\ k \in \{1,\dots,p\} \big\}$.

3. Define the new accepted points:
$$ \mathrm{NA}^{n,k} = \big\{ i \in \mathrm{NB}^{n-1,k},\ \tilde\tau^{n-1,k}_i = t_n \big\}. $$

4. Update the values of $\theta^{n,k}$:
$$ \theta^{n,k}_i = \begin{cases} -\theta^{n-1,k}_i & \text{if } i \in \mathrm{NA}^{n,k},\\ \theta^{n-1,k}_i & \text{otherwise.} \end{cases} $$

5. Reinitialize $\tau^{n,k}_{i,\alpha}$ on $\mathcal{U}^{n,k}_\alpha$:
$$ \tau^{n,k}_{i,\alpha} = \begin{cases} \min(t_n,\, \tau^{n-1,k}_{i,\alpha}) & \text{if } i \in \mathcal{U}^{n,k}_\alpha,\\ +\infty & \text{otherwise.} \end{cases} $$

6. If $t_n \ge T$ then stop. Else, set $n := n+1$.
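The following is a minimal sketch of this loop for a single level set on a bounded grid (our own illustration: the function names, the restriction to a finite grid, and the early stop when the narrow band becomes empty are assumptions not made in the paper, which works on all of $\mathbb{Z}$; the regularized velocities can be computed with the `regularized_velocity` sketch above).

```python
import numpy as np

def fmm_one_level(theta0, fhat_plus, fhat_minus, dx, T):
    """Generalized FMM loop of Section 2.2.1 for one level set (p = 1).
    theta0: array of +/-1 values (2 w_{0,k}(x_j) - 1); fhat_plus/fhat_minus:
    regularized velocities \\hat f^{+/-}(x_j); returns theta once t_n >= T."""
    N = len(theta0)
    theta = np.array(theta0, dtype=int)
    fh = {'+': np.asarray(fhat_plus, float), '-': np.asarray(fhat_minus, float)}
    sgn = {'+': 1, '-': -1}

    def useful(i, a):
        # U_alpha(i): the unique useful neighbour of node i, or None if empty
        s = sgn[a]
        if i + 1 < N and theta[i] == -s and theta[i + 1] == s and fh[a][i] < 0:
            return i + 1
        if i - 1 >= 0 and theta[i - 1] == -s and theta[i] == s and fh[a][i] > 0:
            return i - 1
        return None

    def useful_set(a):
        return {useful(i, a) for i in range(N)} - {None}

    # Initialization: tau = 0 on the useful points, +infinity elsewhere
    tau = {a: np.array([0.0 if i in useful_set(a) else np.inf for i in range(N)])
           for a in ('+', '-')}

    t_n = 0.0
    while t_n < T:
        # Step 1: tentative arrival times on the narrow band
        tilde = {}
        for i in range(N):
            best = np.inf
            for a in ('+', '-'):
                j = useful(i, a)
                if j is not None:                    # i.e. i belongs to NB_alpha
                    best = min(best, tau[a][j] + dx / abs(fh[a][i]))
            if best < np.inf:
                tilde[i] = best
        if not tilde:                                # the front has disappeared
            break
        # Steps 2-3: minimal tentative time and newly accepted nodes
        t_n = min(tilde.values())
        accepted = [i for i, v in tilde.items() if v == t_n]
        # Step 4: flip theta on the accepted nodes
        for i in accepted:
            theta[i] = -theta[i]
        # Step 5: reinitialize tau on the new useful sets
        for a in ('+', '-'):
            new_u = useful_set(a)
            tau[a] = np.array([min(t_n, tau[a][i]) if i in new_u else np.inf
                               for i in range(N)])
        # Step 6: the while condition stops the loop once t_n >= T
    return theta
```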

Remark 2.4. Let us remark that in our algorithm, the minimum time $t_n$ is taken over all the level sets. This in particular allows us to obtain a comparison principle between the level sets (see Corollary 2.5).

2.2.2 Main results for the FMM scheme

We extend the function $\theta^{n,k}$ in the following way:
$$ \theta^{\rho,k}(t,x) := \theta^{n,k}_i \quad \text{if } x \in [x_{i-\frac12},\, x_{i+\frac12} + \Delta x) \text{ and } t \in [t_n, t_{n+1}), \tag{2.8} $$
where $\rho$ denotes $(\Delta x, h)$. Hence, we define a function $\vartheta^\rho$ by:
$$ \vartheta^\rho(t,x) := \sum_{k=1}^{p} \frac{\theta^{\rho,k}(t,x) + 1}{2}\, h_k. \tag{2.9} $$

First we check that $\vartheta^\rho$ indeed defines a numerical approximation of the solution $\vartheta$ of (1.1). This claim is a consequence of a comparison principle for the numerical level-set functions $(\theta^{\rho,k})$.

Theorem 2.4. (Discrete comparison principle)
Let $1 \le k_1 < k_2 \le p$. For all $n \in \mathbb{N}$ and all $i \in \mathbb{Z}$, we have either
$$ \theta^{n,k_1}_i > \theta^{n,k_2}_i, $$
or
$$ \theta^{n,k_1}_i = \theta^{n,k_2}_i =: \sigma_i = \pm 1, \quad \text{and if } i \in \mathcal{U}^{n,k_1}_\alpha \cap \mathcal{U}^{n,k_2}_\alpha, \text{ then } \begin{cases} \tau^{n,k_1}_{i,\alpha} \le \tau^{n,k_2}_{i,\alpha} & \text{if } \sigma_i = +1,\\ \tau^{n,k_1}_{i,\alpha} \ge \tau^{n,k_2}_{i,\alpha} & \text{if } \sigma_i = -1. \end{cases} $$

The proof of this theorem is technical and is given in Appendix B. A first straightforward consequence of this discrete comparison principle can be formulated as follows:

Corollary 2.5. (Comparison principle for the level-set functions) Let $1 \le k_1 < k_2 \le p$. Then
$$ \theta^{\rho,k_2} \le \theta^{\rho,k_1}. $$

Now, we can state the main result of this section. (The proof will be given in Section 4.)

Theorem 2.6. (Convergence of the FMM scheme)
Assume (H1)-(H4), and let $\rho = (\Delta x, h)$ with $\Delta x < \min(\frac{\varepsilon}{L}, \delta_0)$ and $h$ as in (2.3). The numerical solution $\vartheta^\rho$ given by the FMM scheme, defined as in (2.9), converges to the viscosity solution $\vartheta$ of (1.1), and for $t \ge 0$ the following error estimate holds:
$$ \|\vartheta^\rho(t,\cdot) - \vartheta(t,\cdot)\|_{L^1(\mathbb{R})} \le \Big( \big( \tfrac52\, e^{Lt} + 3Lt\,e^{Lt} \big)\, TV(v_0) + M_1 \Big)\,\Delta x + e^{Lt}(\beta - \alpha + 2M_0 t)\,h, \tag{2.10} $$
where
$$ M_0 = M_0(v_0, f) := \sum_{i=2,\dots,q} \big( |f^+(A_i)| + |f^-(A_i)| \big) \qquad \text{and} \qquad M_1 = \max_{j\,:\,\exists i,\ A_i \in \bar I_j} \|v_0\|_{L^\infty(I_j)} $$
are constants.

Remark 2.5. Furthermore, if $h$ is chosen to be of the order of $\Delta x$ (for instance taking $h_k \equiv h := \Delta x$ for all $k$), we deduce a global $L^1$-error estimate of order $\Delta x$.

Remark 2.6. In the level-set decomposition, we can choose $(h_j)_j$ such that $v_0(A_i) = w_0(A_i)$ for $i = 2,\dots,q$. In this case, assumption (H4) is not needed (see the proof of Proposition 2.3 in the appendix).

Remark 2.7. When the velocities $f^+$ and $f^-$ depend on time, it is possible to adapt the algorithm as in [12] and to obtain the comparison principle and the convergence result (in the same way as in [7, 12]). Nevertheless, in this case we are not able to prove the $L^1$-error estimate.

2.3 Ultra-Bee scheme

2.3.1 Algorithm (UB)

Let $\Delta t > 0$ be a constant time step, and $t_n := n\Delta t$ for $n \ge 0$. Let us notice that in the FMM approach, each iteration takes into account the evolution of all the level-set functions $w_k$. On the contrary, the UB scheme is performed starting from each $w_{0,k}$ independently of the others.

This scheme aims to compute, for every $k = 1,\dots,p$, a numerical approximation of the averages $w^{n,k}_j := \frac{1}{\Delta x}\int_{I_j} w_k(t_n,x)\,dx$, for $j \in \mathbb{Z}$. Since the function $w_k(t_n,\cdot)$ takes only values in $\{0,1\}$, these averages $w^{n,k}_j$ carry the information on the localization of the discontinuities. The UB scheme gives an accurate approximation of $(w^{n,k}_j)_j$ as long as the discontinuities are separated by more than $2\Delta x$. Otherwise, when two discontinuities are sufficiently close, a truncation step is performed in order to avoid numerical diffusion around these discontinuities. The scheme takes the following form:

$$ \frac{V^{n+1,1}_j - V^n_j}{\Delta t} + \max_{\alpha=\pm}\left( f^\alpha(x_j)\, \frac{V^{n,L}_{j+\frac12} - V^{n,R}_{j-\frac12}}{\Delta x} \right) = 0, \tag{2.11a} $$
$$ V^{n+1}_j = [\mathrm{Trunc}(V^{n+1,1})]_j, \tag{2.11b} $$
with the initialization:
$$ V^0_j := \frac{1}{\Delta x} \int_{I_j} w_{0,k}(x)\,dx. \tag{2.12} $$
Here $V^{n,L}_{j+\frac12}$ and $V^{n,R}_{j+\frac12}$ are numerical fluxes that will be defined below, while Trunc denotes a truncation operator that will also be made precise below.

We first set, for $j \in \mathbb{Z}$ and $\alpha = \pm$,
$$ \nu_{j,\alpha} := \frac{\Delta t}{\Delta x}\, f^\alpha(x_j), $$
the "local CFL" number. We assume that
$$ |\nu_{j,\alpha}| \le 1, \qquad \forall j \in \mathbb{Z} \text{ and } \alpha = \pm. \tag{2.13} $$
We also consider

if $\nu_{j,\alpha} > 0$:
$$ \begin{aligned} b^+_{j,\alpha} &:= \max(V^n_j, V^n_{j-1}) + \frac{1}{\nu_{j,\alpha}}\big( V^n_j - \max(V^n_j, V^n_{j-1}) \big),\\ B^+_{j,\alpha} &:= \min(V^n_j, V^n_{j-1}) + \frac{1}{\nu_{j,\alpha}}\big( V^n_j - \min(V^n_j, V^n_{j-1}) \big), \end{aligned} \tag{2.14a} $$
if $\nu_{j,\alpha} < 0$:
$$ \begin{aligned} b^-_{j,\alpha} &:= \max(V^n_j, V^n_{j+1}) + \frac{1}{|\nu_{j,\alpha}|}\big( V^n_j - \max(V^n_j, V^n_{j+1}) \big),\\ B^-_{j,\alpha} &:= \min(V^n_j, V^n_{j+1}) + \frac{1}{|\nu_{j,\alpha}|}\big( V^n_j - \min(V^n_j, V^n_{j+1}) \big). \end{aligned} \tag{2.14b} $$

Under condition (2.13), these numbers satisfy $b^+_{j,\alpha} \le B^+_{j,\alpha}$ and $b^-_{j,\alpha} \le B^-_{j,\alpha}$, and they correspond to flux limiters that ensure stability properties.

Now, we define the UB scheme as follows (see [5, 4]).

UB Algorithm: For each level-set function $w_{0,k}$, $k = 1,\dots,p$, we consider the following evolution algorithm.

Initialization: We compute the initial averages $(V^0_j)_{j\in\mathbb{Z}}$ as in (2.12).

Loop: For $n \ge 0$, we compute $V^{n+1} = (V^{n+1}_j)_{j\in\mathbb{Z}}$ by:

A) Evolution. Define the "fluxes" $V^n_{j+\frac12}$, for $\alpha \in \{-,+\}$, as follows:

• If $\nu_{j,\alpha} \ge 0$, set
$$ V^{n,L}_{j+\frac12} := \begin{cases} \min\big(\max(V^n_{j+1},\, b^+_{j,\alpha}),\, B^+_{j,\alpha}\big) & \text{if } \nu_{j,\alpha} > 0,\\ V^n_{j+1} & \text{if } \nu_{j,\alpha} = 0 \text{ and } V^n_j \neq V^n_{j-1},\\ V^n_j & \text{if } \nu_{j,\alpha} = 0 \text{ and } V^n_j = V^n_{j-1}. \end{cases} $$

• If $\nu_{j,\alpha} \le 0$, set
$$ V^{n,R}_{j-\frac12} := \begin{cases} \min\big(\max(V^n_{j-1},\, b^-_{j,\alpha}),\, B^-_{j,\alpha}\big) & \text{if } \nu_{j,\alpha} < 0,\\ V^n_{j-1} & \text{if } \nu_{j,\alpha} = 0 \text{ and } V^n_j \neq V^n_{j+1},\\ V^n_j & \text{if } \nu_{j,\alpha} = 0 \text{ and } V^n_j = V^n_{j+1}, \end{cases} $$
(where $b^+_{j,\alpha}$, $b^-_{j,\alpha}$, $B^+_{j,\alpha}$ and $B^-_{j,\alpha}$ are defined by (2.14a)-(2.14b)).

• If $\nu_{j,\alpha} \ge 0$ and $\nu_{j+1,\alpha} > 0$, set $V^{n,R}_{j+\frac12} := V^{n,L}_{j+\frac12}$.

• If $\nu_{j+1,\alpha} \le 0$ and $\nu_{j,\alpha} < 0$, set $V^{n,L}_{j+\frac12} := V^{n,R}_{j+\frac12}$.

• If $\nu_{j,\alpha} < 0$ and $\nu_{j+1,\alpha} > 0$, then set
$$ V^{n,R}_{j+\frac12} := \begin{cases} V^n_{j+1} & \text{if } V^n_{j+1} = V^n_{j+2},\\ V^n_j & \text{otherwise,} \end{cases} \qquad V^{n,L}_{j+\frac12} := \begin{cases} V^n_j & \text{if } V^n_j = V^n_{j-1},\\ V^n_{j+1} & \text{otherwise.} \end{cases} \tag{2.15} $$


Set
$$ V^{n+1,1}_j := \min_{\alpha=\pm}\Big( V^n_j - \nu_{j,\alpha}\big( V^{n,L}_{j+\frac12} - V^{n,R}_{j-\frac12} \big) \Big). $$

B) Truncation: Set $V^{n+1} := \mathrm{Trunc}(V^{n+1,1})$ as follows.

• For all indexes $j$ such that
$$ \Big( \max(V^{n+1,1}_{j-1},\, V^{n+1,1}_{j+1}) < 1 \ \text{ and } \ V^{n+1,1}_j = 1 \Big) \quad \text{or} \quad \Big( 0 < \max(V^{n+1,1}_j,\, V^{n+1,1}_{j+1}) < 1 \ \text{ and } \ V^{n+1,1}_{j-1} = V^{n+1,1}_{j+2} = 0 \Big), $$
set $V^{n+1}_{j-1} = V^{n+1}_j = V^{n+1}_{j+1} := 0$.

• Otherwise set $V^{n+1}_j := V^{n+1,1}_j$.
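A condensed sketch of one UB step for a single level function is given below (our own illustration, under simplifying assumptions: the grid is finite and the discontinuities are supposed to stay a few cells away from its ends, and the truncation test compares floating-point values exactly; none of these choices come from the paper).

```python
import numpy as np

def ultrabee_step(V, f_plus, f_minus, dx, dt):
    """One UB step (2.11a)-(2.11b) for a single level function w_{0,k}.
    V: cell averages (V_j^n)_j; f_plus, f_minus: velocities f^{+/-}(x_j).
    Assumes the CFL condition (2.13); boundary cells are returned unchanged."""
    V = np.asarray(V, dtype=float)
    N = len(V)

    def limiters(nu, j):
        # b_{j,alpha}, B_{j,alpha} of (2.14a)-(2.14b); only called when nu != 0
        nb = j - 1 if nu > 0 else j + 1
        hi, lo = max(V[j], V[nb]), min(V[j], V[nb])
        return hi + (V[j] - hi) / abs(nu), lo + (V[j] - lo) / abs(nu)

    candidates = []
    for f in (f_plus, f_minus):                      # the two controls alpha = +, -
        nu = dt / dx * np.asarray(f, dtype=float)
        L = np.full(N, np.nan)                       # V^{n,L}_{j+1/2}, indexed by j
        R = np.full(N, np.nan)                       # V^{n,R}_{j+1/2}, indexed by j
        for j in range(2, N - 3):
            if nu[j] > 0:                            # first bullet of (2.15)
                b, B = limiters(nu[j], j)
                L[j] = min(max(V[j + 1], b), B)
            elif nu[j] == 0:
                L[j] = V[j + 1] if V[j] != V[j - 1] else V[j]
            if nu[j + 1] < 0:                        # second bullet, written at cell j+1
                b, B = limiters(nu[j + 1], j + 1)
                R[j] = min(max(V[j], b), B)
            elif nu[j + 1] == 0:
                R[j] = V[j] if V[j + 1] != V[j + 2] else V[j + 1]
            if nu[j] >= 0 and nu[j + 1] > 0:         # coupling rules of (2.15)
                R[j] = L[j]
            if nu[j + 1] <= 0 and nu[j] < 0:
                L[j] = R[j]
            if nu[j] < 0 and nu[j + 1] > 0:
                R[j] = V[j + 1] if V[j + 1] == V[j + 2] else V[j]
                L[j] = V[j] if V[j] == V[j - 1] else V[j + 1]
        upd = V.copy()                               # advection update of each cell
        for j in range(3, N - 3):
            upd[j] = V[j] - nu[j] * (L[j] - R[j - 1])
        candidates.append(upd)
    Vnew1 = np.minimum(candidates[0], candidates[1])  # min over alpha, cf. (2.11a)

    Vout = Vnew1.copy()                              # truncation step (2.11b)
    for j in range(2, N - 3):
        c1 = max(Vnew1[j - 1], Vnew1[j + 1]) < 1 and Vnew1[j] == 1
        c2 = (0 < max(Vnew1[j], Vnew1[j + 1]) < 1
              and Vnew1[j - 1] == 0 and Vnew1[j + 2] == 0)
        if c1 or c2:
            Vout[j - 1] = Vout[j] = Vout[j + 1] = 0.0
    return Vout
```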

For every $k = 1,\dots,p$, we associate to the scheme values $(V^n_j)_j$ the l.s.c. step function $\vartheta^\rho_k$ defined for every $t \ge 0$ and $x \in \mathbb{R}$ by
$$ \vartheta^\rho_k(t,x) := \begin{cases} V^n_j & \text{if } x \in \,]x_{j-\frac12}, x_{j+\frac12}[,\ t \in [t_n, t_{n+1}[,\\ \min(V^n_j, V^n_{j+1}) & \text{if } x = x_{j+\frac12},\ t \in [t_n, t_{n+1}[. \end{cases} \tag{2.16} $$
The UB scheme approximation of the solution $\vartheta$ of (1.1) is finally given by
$$ \vartheta^\rho(t,x) := \sum_{k=1}^{p} h_k\, \vartheta^\rho_k(t,x). \tag{2.17} $$

Remark 2.8. A general version of the Ultra-Bee scheme is given in [4] for any l.s.c. initial condition in $L^1_{loc}(\mathbb{R})$. Here, the algorithm (and especially the truncation step) is specialized to the case of an initial condition taking values only in $\{0,1\}$. Also, the algorithm of [4] contains a prediction step that is unnecessary in our context.

Several stability and convergence properties of this scheme can be found in [5, 4].

2.3.2 Convergence result for the Ultra-Bee scheme

For every $k = 1,\dots,p$, the function $\vartheta^\rho_k$ (given by (2.16)) is a step function of level $k$. We have the following $L^1$-error estimate between the solution $\vartheta$ of (1.1) and its numerical approximation $\vartheta^\rho$ given by the Ultra-Bee scheme. (The proof is postponed to the end of Section 3.)

Theorem 2.7. (Error estimate)
Assume (H1)-(H2)-(H3), and assume that $\Delta x \le \min(\frac{\varepsilon}{2L}, \delta_0)$. Let $\vartheta^\rho$ be defined as in (2.17) by the algorithm UB. Then for all $t_n \ge 0$, we have the following estimate:
$$ \|\vartheta^\rho(t_n,\cdot) - \vartheta(t_n,\cdot)\|_{L^1(\mathbb{R})} \le \big( (1 + e^{Lt_n} + Lt_n e^{Lt_n})\, TV(v_0) + M_1 \big)\,\Delta x + e^{Lt_n}(\beta - \alpha + 2M_0 t_n)\,h, $$
where $M_0$ and $M_1$ are the same constants as in Theorem 2.6.


Proof of Theorem 2.7: From [4, Theorem 4], for every $k = 1,\dots,p$ we have
$$ \|\vartheta^\rho_k(t_n,\cdot) - w_k(t_n,\cdot)\|_{L^1(\mathbb{R})} \le (Lt_n e^{Lt_n} + 1)\, TV(w_{0,k})\,\Delta x. $$
Multiplying by $h_k$ and summing for $k = 1,\dots,p$, we obtain
$$ \|\vartheta^\rho(t_n,\cdot) - w(t_n,\cdot)\|_{L^1(\mathbb{R})} \le (Lt_n e^{Lt_n} + 1)\, TV(v_0)\,\Delta x, $$
where we have used that
$$ \sum_{k=1}^{p} h_k\, TV(w_{0,k}) = TV(w_0) \le TV(v_0) $$
(see for instance [4, Lemma B.3]). Together with Proposition 2.3, we get the desired estimate.

3 Preliminary results

From now on, we consider an initial condition $v_0$ satisfying (H3), $\vartheta$ the solution of (1.1) with initial condition $v_0$, and the level-set decomposition $w_0$ of $v_0$ ($w_0$ defined as in (2.4)).

Mesh approximation. First, we define exact and approximated characteristics that will be useful throughout the paper. Since the dynamics $f^-$ and $f^+$ are Lipschitz continuous, for any $a \in \mathbb{R}$ we can define the characteristics $X_{a,+}$ and $X_{a,-}$ as the solutions of the following Cauchy problems:
$$ \begin{cases} \dot X_{a,+}(t) = f^+(X_{a,+}(t)),\\ X_{a,+}(0) = a, \end{cases} \qquad \text{and} \qquad \begin{cases} \dot X_{a,-}(t) = f^-(X_{a,-}(t)),\\ X_{a,-}(0) = a. \end{cases} \tag{3.18} $$

In general, the differential equation
$$ \dot\chi_a(t) = \hat f^+(\chi_a(t)) \ \text{ a.e. } t \ge 0, \qquad \chi_a(0) = a, \tag{3.19} $$
may have more than one absolutely continuous solution. The non-uniqueness comes from the behaviour at the boundary points $x_{j+\frac12}$ when the velocity vanishes (or changes sign). Throughout this paper, we shall denote by $X^S_{a,+}$ the function defined by:
$$ X^S_{a,+} \ \text{is an absolutely continuous solution of (3.19), and if } \exists\, \bar t \ge 0,\ \exists\, j \in \mathbb{Z} \ \text{s.t.} \ \Big( X^S_{a,+}(\bar t) = x_{j+\frac12} \ \text{and} \ f^+(x_j) f^+(x_{j+1}) \le 0 \Big), \ \text{then } X^S_{a,+}(t) = x_{j+\frac12} \ \ \forall t \ge \bar t. \tag{3.20} $$
Such a solution is unique (see [4, Appendix A]). We construct $X^S_{a,-}$ in a similar way.

By using the same arguments as in [4, Lemma 1 and Lemma 9], we get:

Lemma 3.1. Assume that (H1) and (H2) hold, and let $a, b \in \mathbb{R}$. The following assertions hold:

(i) Let $s \le t$ and assume that $X^S_{b,-}(\theta) \ge X^S_{a,+}(\theta)$ for every $\theta \in [s,t]$. Then
$$ |X^S_{b,-}(t) - X^S_{a,+}(t)| + 2\Delta x \le e^{L(t-s)}\big( |X^S_{b,-}(s) - X^S_{a,+}(s)| + 2\Delta x \big). $$

(ii) If $a \ge b + \Delta x$ and $\Delta x \le \frac{\varepsilon}{L}$, then $X^S_{a,+}(t) \ge X^S_{b,-}(t) + \Delta x$ for every $t \ge 0$.

We also have the following representation of the solutions $w_k$, $k = 1,\dots,p$.


Lemma 3.2. (Lemma 2 of [4])
Assume that (H1)-(H2) hold. Then the unique viscosity solution of (1.1) with initial condition $w_{0,k}$ is given by:
$$ w_k(t,x) = \min_{y \in [X_{x,+}(-t),\, X_{x,-}(-t)]} w_{0,k}(y), \qquad \forall t > 0,\ x \in \mathbb{R}. \tag{3.21} $$
We also consider the function $w^S_k$, defined in an analogous way as in (3.21) but with the approximated characteristics $X^S_{x,+}$, $X^S_{x,-}$ instead of $X_{x,+}$ and $X_{x,-}$:
$$ w^S_k(t,x) := \min_{y \in [X^S_{x,+}(-t),\, X^S_{x,-}(-t)]} w_{0,k}(y), \qquad \forall t > 0,\ x \in \mathbb{R}. \tag{3.22} $$
The approximate function $w^S_k$ will play an important role throughout the paper: the two schemes studied here provide approximations of the function $w^S_k$.
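For intuition, the representation formula (3.21) can be evaluated numerically by integrating the characteristics backward in time. The sketch below is our own illustration: the ODE solver, its tolerances and the sampling used for the minimum are arbitrary choices, and the sampling only approximates the exact minimum.

```python
import numpy as np
from scipy.integrate import solve_ivp

def w_k_via_characteristics(t, x, f_plus, f_minus, w0k, n_samples=400):
    """Approximate w_k(t, x) using (3.21), for t > 0.
    f_plus, f_minus, w0k are callables R -> R, with w0k taking values in {0, 1}."""
    def backward_endpoint(f):
        # X_{x,.}(-t): integrate dX/ds = f(X) from s = 0 down to s = -t
        sol = solve_ivp(lambda s, X: [f(X[0])], (0.0, -t), [x],
                        rtol=1e-8, atol=1e-10)
        return sol.y[0, -1]
    lo = backward_endpoint(f_plus)     # X_{x,+}(-t)
    hi = backward_endpoint(f_minus)    # X_{x,-}(-t)
    ys = np.linspace(min(lo, hi), max(lo, hi), n_samples)
    return min(w0k(y) for y in ys)     # minimum of w_{0,k} over the interval
```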

By using the same arguments as in [4, Proposition 1], we have the following $L^1$-error estimate:
$$ \|w^S_k(t,\cdot) - w_k(t,\cdot)\|_{L^1(\mathbb{R})} \le 3Lt\,e^{Lt}\, TV(w_{0,k})\,\Delta x. \tag{3.23} $$
The aim now is to approximate $w^S_k$ by a discrete scheme.

Proposition 3.3. (Reconstruction of the global approximation)
Assume (H1)-(H4). Let $\rho := (\Delta x, h)$ with $\Delta x \le \delta_0$ and $h$ as in (2.3). Assume that we have constructed, for every $k = 1,\dots,p$, an approximation $\vartheta^\rho_k$ of $w^S_k$ such that
$$ \|\vartheta^\rho_k(t,\cdot) - w^S_k(t,\cdot)\|_{L^1(\mathbb{R})} \le C_t\, TV(w_{0,k})\,\Delta x, \tag{3.24} $$
for some constant $C_t \ge 0$. Define a global approximation by
$$ \vartheta^\rho := \sum_{k=1,\dots,p} h_k\, \vartheta^\rho_k. $$
Then we have the estimate
$$ \|\vartheta^\rho(t,\cdot) - \vartheta(t,\cdot)\|_{L^1(\mathbb{R})} \le \big( (C_t + e^{Lt} + 3Lt\,e^{Lt})\, TV(v_0) + M_1 \big)\,\Delta x + e^{Lt}(\beta - \alpha + 2M_0 t)\,h, \tag{3.25} $$
where $M_0$ and $M_1$ are defined as in Theorem 2.6.

Proof: Set $w^S(t,\cdot) := \sum_{k=1,\dots,p} h_k\, w^S_k(t,\cdot)$. Multiplying the estimate (3.24) by $h_k$ and summing for $k = 1,\dots,p$, we obtain
$$ \|\vartheta^\rho(t,\cdot) - w^S(t,\cdot)\|_{L^1(\mathbb{R})} \le C_t \sum_{k=1}^{p} h_k\, TV(w_{0,k})\,\Delta x \le C_t\, TV(v_0)\,\Delta x, $$
where we have used that
$$ \sum_{k=1}^{p} h_k\, TV(w_{0,k}) = TV(w_0) \le TV(v_0) $$
(see for instance [4, Lemma B.3]). On the other hand, by using (3.23), we obtain
$$ \|w^S(t,\cdot) - w(t,\cdot)\|_{L^1(\mathbb{R})} \le 3Lt\,e^{Lt}\, TV(v_0)\,\Delta x. $$
We conclude the proof by combining Proposition 2.3 with the previous bounds.


4 Fast Marching Method

4.1 General remarks on the algorithm

Proposition 4.1. The following properties hold:

(i) For all $i \in \mathcal{U}^{n,k}_\alpha$, we have $\tau^{n,k}_{i,\alpha} \le t_n$.

(ii) If $i \in \mathrm{NA}^{n,k}$, then
$$ \tau^{n,k}_{i,\alpha} = \begin{cases} t_n & \text{if } i \in \mathcal{U}^{n,k}_\alpha,\\ +\infty & \text{otherwise.} \end{cases} $$

(iii) If $i \in \mathcal{U}^{n-1,k}_\alpha \cap \mathcal{U}^{n,k}_\alpha$, then $\tau^{n,k}_{i,\alpha} = \tau^{n-1,k}_{i,\alpha}$.

(iv) If $i \in \mathcal{U}^{n,k}_\alpha \setminus \mathcal{U}^{n-1,k}_\alpha$, then $\tau^{n,k}_{i,\alpha} = t_n$.

Proof. (i) This is a straightforward consequence of Step 5 of the algorithm.

(ii) By Step 5 of the algorithm, we just have to prove that if $i \in \mathrm{NA}^n \cap \mathcal{U}^{n,k}_\alpha$, then $\min(\tau^{n-1,k}_{i,\alpha}, t_n) = t_n$. Let us consider the case where $\alpha = +$ and $i \in \mathcal{U}^{n,k}_+(i+1)$ (the other cases being similar). Thus $\hat f^+(x_{i+1}) > 0$, $\theta^{n,k}_i = -1$ and $\theta^{n,k}_{i+1} = 1$. Also, since $i \in \mathrm{NA}^n$, we have:
$$ \theta^{n-1,k}_i = 1, \quad \text{and} \quad i \in \mathrm{NB}^{n-1,k}. \tag{4.26} $$
Assume that $\min(\tau^{n-1,k}_{i,+}, t_n) = \tau^{n-1,k}_{i,+}$. This implies in particular that $\tau^{n-1,k}_{i,+} < \infty$, and so $i \in \mathcal{U}^{n-1,k}_+$. We claim that $i \in \mathrm{NB}^{n-1,k}_-$ and $i+1 \in \mathrm{NA}^n$.
Indeed, (4.26) implies that $i \in \mathcal{U}^{n-1,k}_+(i-1)$ and $\hat f^+(x_{i-1}) < 0$, which gives that $i \notin \mathrm{NB}^{n-1,k}_+$. Since $i \in \mathrm{NB}^{n-1,k}$, we get that $i \in \mathrm{NB}^{n-1,k}_-$. Moreover, the fact that $\theta^{n-1,k}_i = 1$ and $i \in \mathrm{NB}^{n-1,k}_-$ implies that $i+1 \in \mathcal{U}^{n-1,k}_-(i)$. It yields that
$$ \hat f^-(x_i) < 0 \quad \text{and} \quad \theta^{n-1,k}_{i+1} = -1. $$
Since $\theta^{n-1,k}_{i+1} = -1 = -\theta^{n,k}_{i+1}$, we deduce that $i+1 \in \mathrm{NA}^n$.
On the other hand, the fact that $\theta^{n-1,k}_{i+1} = -1$ and $\hat f^-(x_i) < 0$ implies that $i+1 \notin \mathrm{NB}^{n-1,k}_-$. But $i+1 \in \mathrm{NA}^n$, and so $i+1 \in \mathrm{NB}^{n-1,k}_+$. This implies that $\hat f^+(x_{i+1}) < 0$, which leads to a contradiction.

(iii) If $i \in \mathcal{U}^{n-1,k}_\alpha \cap \mathcal{U}^{n,k}_\alpha$, then by Step 5 of the algorithm we have $\tau^{n,k}_{i,\alpha} = \min(\tau^{n-1,k}_{i,\alpha}, t_n) = \tau^{n-1,k}_{i,\alpha}$.

(iv) If $i \in \mathcal{U}^{n,k}_\alpha \setminus \mathcal{U}^{n-1,k}_\alpha$, then by Step 5 of the algorithm we have
$$ \tau^{n,k}_{i,\alpha} = \min(\tau^{n-1,k}_{i,\alpha}, t_n) = t_n, $$
where we have used that $\tau^{n-1,k}_{i,\alpha} = +\infty$ since $i \notin \mathcal{U}^{n-1,k}_\alpha$.


4.2 Proof of Theorem 2.6

In view of Proposition 3.3 we mainly have to deal with the case p= 1 (one level set).

For simplicity of notation we write $\theta^n \equiv \theta^{n,1}$. Let $(a_i)_{i=1,\dots,d},\ (b_i)_{i=1,\dots,d} \subset (\mathbb{Z}+\frac12)\Delta x$ be such that
$$ a_1 < b_1 < a_2 < b_2 < \dots < a_d < b_d. $$
We consider an initial datum of the form
$$ w_0(x) := \sum_{i=1}^{d} \mathbf{1}_{]a_i, b_i[}(x). \tag{4.27} $$

We have the following convergence result for one level set:

Proposition 4.2. (Convergence of the FMM scheme for one level)
Assume (H1)-(H2) and $p = 1$. Let $\rho = \Delta x \le \frac{\varepsilon}{L}$. We have the following error estimate between the numerical solution $\vartheta^\rho$ given by the FMM scheme, defined as in (2.9), and the approximate solution $w^S$ defined in (3.22):
$$ \|\vartheta^\rho(t,\cdot) - w^S(t,\cdot)\|_{L^1(\mathbb{R})} \le \tfrac32\, e^{Lt}\, TV(v_0)\,\Delta x. \tag{4.28} $$
The proof of this proposition is given in the following subsection.

Proof of Theorem 2.6

Theorem 2.6 is a straightforward consequence of Propositions 3.3 and 4.2.

4.3 Proof of Proposition 4.2

Before giving the proof of Proposition 4.2, we need some definitions and preliminary results.

Notations. We define the numerical discontinuities in the following way. Let $a \in (\mathbb{Z}+\frac12)\Delta x$ and $i^0_a \in \mathbb{Z}$ be such that $a = x_{i^0_a - \frac12}$. Assume that $\theta^0_{i^0_a} = \alpha 1 = -\theta^0_{i^0_a - 1}$ for some $\alpha \in \{+,-\}$. In particular, the numerical discontinuity starting from $a$ will move with the velocity $\hat f^\alpha$. We denote by $(t^{n,\Delta}_a)_n$ the sequence of times at which the numerical discontinuity starting from $a$ moves, and by $X^{S,\Delta}_{a,\alpha}(t^{n,\Delta}_a)$ the position of the discontinuity at time $t^{n,\Delta}_a$. More precisely, we define:
$$ t^{0,\Delta}_a = 0 \quad \text{and} \quad X^{S,\Delta}_{a,\alpha}(t^{0,\Delta}_a) = a = x_{i^0_a - \frac12}, $$
and for $n \ge 1$,
$$ \begin{cases} t^{n,\Delta}_a = \inf\big\{ t_m > t^{n-1,\Delta}_a \ \text{ s.t. } \ i^{n-1}_a \in \mathrm{NA}^m \ \text{ or } \ i^{n-1}_a - 1 \in \mathrm{NA}^m \big\},\\[4pt] i^n_a = \begin{cases} i^{n-1}_a + 1 & \text{if } \hat f^\alpha(x_{i^{n-1}_a}) > 0 \text{ and } i^{n-1}_a \in \mathrm{NA}^m,\\ i^{n-1}_a - 1 & \text{if } \hat f^\alpha(x_{i^{n-1}_a} - \Delta x) < 0 \text{ and } i^{n-1}_a - 1 \in \mathrm{NA}^m,\\ i^{n-1}_a & \text{otherwise,} \end{cases}\\[4pt] X^{S,\Delta}_{a,\alpha}(t^{n,\Delta}_a) = x_{i^n_a - \frac12}, \end{cases} \tag{4.29} $$
where $m$ denotes the index such that $t_m = t^{n,\Delta}_a$.


We now define the extinction time $T_a$ of the numerical discontinuity starting from $a$ by
$$ T_a = \inf\big\{ t^{n,\Delta}_a,\ \ \theta^m_{i^n_a} = \theta^m_{i^n_a - 1} \ \text{ for } m \text{ such that } t_m = t^{n,\Delta}_a \big\}. $$

Remark 4.1. In the definition of $i^n_a$ in (4.29), we have $i^n_a = i^{n-1}_a$ only if the discontinuity $X^{S,\Delta}_{a,\alpha}$ does not move and disappears.

Lemma 4.3. (Profile of the discontinuity) Let $i^0_a \in \mathbb{Z}$ be such that $\theta^0_{i^0_a} = \alpha 1 = -\theta^0_{i^0_a - 1}$ with $\alpha \in \{+,-\}$. Let $n \in \mathbb{N}$ be such that $t^{n,\Delta}_a < T_a$. Then, for all $m \in \mathbb{N}$ such that $t^{n,\Delta}_a \le t_m < t^{n+1,\Delta}_a$, we have
$$ \theta^m_{i^n_a} = -\theta^m_{i^n_a - 1} = \alpha 1. $$

This lemma states that the numerical discontinuity starting from $a$ and evolving with the velocity $f^\alpha$ is located at a discontinuity of $\theta^m$ and always keeps the same profile.

Proof. Let us assume that $\alpha = +$ (the case $\alpha = -$ being similar). By induction, assume that
$$ \theta^l_{i^{n-1}_a} = -\theta^l_{i^{n-1}_a - 1} = \alpha 1 \quad \text{for all } l \in \mathbb{N} \text{ such that } t^{n-1,\Delta}_a \le t_l < t^{n,\Delta}_a. $$
Let $m$ be such that $t_m = t^{n,\Delta}_a$. We have $i^{n-1}_a \in \mathrm{NA}^m$ or $i^{n-1}_a - 1 \in \mathrm{NA}^m$.

Step 1: $i^n_a \neq i^{n-1}_a$.

By contradiction, assume that $i^n_a = i^{n-1}_a$. Let us assume that $i^{n-1}_a \in \mathrm{NA}^m$ (the other case being similar). This implies in particular that $\hat f^+(x_{i^{n-1}_a}) \le 0$. Moreover, we have
$$ \theta^{m-1}_{i^{n-1}_a} = 1 = -\theta^{m-1}_{i^{n-1}_a - 1}. $$
This implies that $i^{n-1}_a \notin \mathrm{NB}^{m-1}_+$. But $i^{n-1}_a \in \mathrm{NA}^m$ and so $i^{n-1}_a \in \mathrm{NB}^{m-1}_-$, which implies that
$$ \hat f^-(x_{i^{n-1}_a}) < 0. \tag{4.30} $$
Since $i^{n-1}_a \in \mathrm{NA}^m$ and $i^n_a = i^{n-1}_a$, we also deduce that $\theta^m_{i^n_a} = -\theta^{m-1}_{i^{n-1}_a} = -1$. But $t^{n,\Delta}_a < T_a$, and so $\theta^m_{i^{n-1}_a - 1} = \theta^m_{i^n_a - 1} = 1$. This implies that $i^{n-1}_a - 1 \in \mathrm{NA}^m$. Since $i^n_a = i^{n-1}_a$, we then deduce that $\hat f^+(x_{i^{n-1}_a - 1}) \ge 0$, and so $i^{n-1}_a - 1 \notin \mathrm{NB}^{m-1}_+$. This implies that $i^{n-1}_a - 1 \in \mathrm{NB}^{m-1}_-$ and so $\hat f^-(x_{i^{n-1}_a - 1}) > 0$. This contradicts (4.30).

Step 2: $\theta^m_{i^n_a} = -\theta^m_{i^n_a - 1} = 1$.

Let us assume that $i^n_a = i^{n-1}_a + 1$ (the case $i^n_a = i^{n-1}_a - 1$ being similar). This implies that $i^{n-1}_a \in \mathrm{NA}^m$. But $\theta^{m-1}_{i^{n-1}_a} = 1$, and so $\theta^m_{i^n_a - 1} = \theta^m_{i^{n-1}_a} = -1$. Moreover, since $t^{n,\Delta}_a < T_a$, we deduce that $\theta^m_{i^n_a} = -\theta^m_{i^n_a - 1} = 1$.

Step 3: conclusion.

By definition of $t^{n+1,\Delta}_a$, for all $l$ such that $t^{n,\Delta}_a \le t_l < t^{n+1,\Delta}_a$, we have $i^n_a \notin \mathrm{NA}^l$ and $i^n_a - 1 \notin \mathrm{NA}^l$, and so
$$ \theta^l_{i^n_a} = \theta^m_{i^n_a} \quad \text{and} \quad \theta^l_{i^n_a - 1} = \theta^m_{i^n_a - 1}. $$
By Step 2, we deduce the result.
