
HAL Id: hal-01548133

https://hal.archives-ouvertes.fr/hal-01548133v3

Submitted on 29 Sep 2020


Distributed under a Creative Commons Attribution 4.0 International License.

Hamilton-Jacobi equations for optimal control on networks with entry or exit costs

Manh Khang Dao

To cite this version:

Manh Khang Dao. Hamilton-Jacobi equations for optimal control on networks with entry or exit costs. ESAIM: Control, Optimisation and Calculus of Variations, EDP Sciences, 2019, 25, pp.15.

10.1051/cocv/2018003. hal-01548133v3


https://doi.org/10.1051/cocv/2018003

www.esaim-cocv.org

HAMILTON-JACOBI EQUATIONS FOR OPTIMAL CONTROL ON NETWORKS WITH ENTRY OR EXIT COSTS

Manh Khang Dao*

Abstract. We consider an optimal control on networks in the spirit of the works of Achdou et al. [NoDEA Nonlinear Differ. Equ. Appl. 20 (2013) 413–445] and Imbert et al. [ESAIM: COCV 19 (2013) 129–166]. The main new feature is that there are entry (or exit) costs at the edges of the network leading to a possible discontinuous value function. We characterize the value function as the unique viscosity solution of a new Hamilton-Jacobi system. The uniqueness is a consequence of a comparison principle for which we give two different proofs, one with arguments from the theory of optimal control inspired by Achdou et al. [ESAIM: COCV 21 (2015) 876–899] and one based on partial differential equations techniques inspired by a recent work of Lions and Souganidis [Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. 27 (2016) 535–545].

Mathematics Subject Classification. 34H05, 35F21, 49L25, 49J15, 49L20, 93C30

Received June 27, 2017. Accepted January 8, 2018.

Keywords and phrases: Optimal control, networks, Hamilton-Jacobi equation, viscosity solutions, uniqueness, switching cost.

IRMAR, Université de Rennes 1, 35000 Rennes, France.

* Corresponding author: manh-khang.dao@univ-rennes1.fr

© The authors. Published by EDP Sciences, SMAI 2019. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

A network (or a graph) is a set of items, referred to as vertices or nodes, which are connected by edges (see Fig. 1 for example). Recently, several research projects have been devoted to dynamical systems and differential equations on networks, in general or more particularly in connection with problems of data transmission or traffic management (see for example Garavello and Piccoli [14] and Engel et al. [12]).

An optimal control problem is an optimization problem where an agent tries to minimize a cost which depends on the solution of a controlled ordinary differential equation (ODE). The ODE is controlled in the sense that it depends on a function called the control. The goal is to find the best control in order to minimize the given cost.

In many situations, the optimal value of the problem as a function of the initial state (and possibly of the initial time when the horizon of the problem is finite) is a viscosity solution of a Hamilton-Jacobi-Bellman partial differential equation (HJB equation). Under appropriate conditions, the HJB equation has a unique viscosity solution, which therefore characterizes the value function. Moreover, the optimal control may be recovered from the solution of the HJB equation, at least if the latter is smooth enough.

The first articles about optimal control problems in which the set of admissible states is a network (therefore the state variable is a continuous one) appeared in 2012: in [2], Achdou et al. derived the HJB equation associated to an infinite horizon optimal control on a network and proposed a suitable notion of viscosity solution. Obviously, the main difficulties arise at the vertices, where the network does not have a regular differential structure. As a result, admissible test-functions whose restriction to each edge is C^1 are used.

Independently and at the same time, Imbert et al. [17] proposed an equivalent notion of viscosity solution for studying a Hamilton-Jacobi approach to junction problems and traffic flows. Both [2] and [17] contain first results on comparison principles which were improved later. It is also worth mentioning the work by Schieborn and Camilli [22], in which the authors focus on eikonal equations on networks and on a less general notion of viscosity solution. In the particular case of eikonal equations, Camilli and Marchi established in [10] the equivalence between the definitions given in [2, 17, 22].

Since 2012, several proofs of comparison principles for HJB equations on networks, giving uniqueness of the solution, have been proposed.

1. In [3], Achdou et al. give a proof of a comparison principle for a stationary HJB equation arising from an optimal control with infinite horizon (therefore the Hamiltonian is convex) by mixing arguments from the theory of optimal control and PDE techniques. Such a proof was much inspired by works of Barles et al. [6, 7] on regional optimal control problems in R^d (with discontinuous dynamics and costs).

2. A different and more general proof, using only arguments from the theory of PDEs was obtained by Imbert and Monneau in [16]. The proof works for quasi-convex Hamiltonians, and for stationary and time-dependent HJB equations. It relies on the construction of suitable vertex test functions.

3. A very simple and elegant proof, working for non convex Hamiltonians, has been very recently given by Lions and Souganidis [19, 20].

The goal of this paper is to consider an optimal control problem on a network in which there are entry (or exit) costs at each edge of the network and to study the related HJB equations. The effect of the entry/exit costs is to make the value function of the problem discontinuous. Discontinuous solutions of Hamilton-Jacobi equations have been studied by various authors, see for example Barles [4], Frankowska and Mazzola [13], and in particular Graber et al. [15] for different HJB equations on networks with discontinuous solutions.

To simplify the problem, we will first study the case of a junction, i.e., a network of the form G = ∪_{i=1}^N Γ_i with N edges Γ_i (where Γ_i is the closed half-line R_+ e_i) and only one vertex O, where {O} = ∩_{i=1}^N Γ_i. Later, we will generalize our analysis to networks with an arbitrary number of vertices. In the case of the junction described above, our assumptions about the dynamics and the running costs are similar to those made in [3], except that additional costs c_i for entering the edge Γ_i at O, or d_i for exiting Γ_i at O, are added in the cost functional.

Accordingly, the value function is continuous on G \ {O}, but is in general discontinuous at the vertex O. Hence, instead of considering the value function v, we split it into the collection (v_i)_{1≤i≤N}, where v_i is a continuous function defined on the edge Γ_i. More precisely,

    v_i(x) = v(x)                 if x ∈ Γ_i \ {O},
    v_i(x) = lim_{δ→0+} v(δ e_i)  if x = O.

Our approach is therefore reminiscent of optimal switching problems (impulse control): in the present case the switches can only occur at the vertex O. Note that our assumptions will ensure that v|_{Γ_i\{O}} is Lipschitz continuous near O and that lim_{δ→0+} v(δ e_i) does exist. In the case of entry costs, for example, our first main result will be to find the relation between v(O), v_i(O) and v_j(O) + c_j for i, j = 1, ..., N. This will show that the functions (v_i)_{1≤i≤N} are (suitably defined) viscosity solutions of the following system:

    λ u_i(x) + H_i(x, du_i/dx_i (x)) = 0                                                              if x ∈ Γ_i \ {O},
    λ u_i(O) + max{ −λ min_{j≠i} {u_j(O) + c_j}, H_i^+(O, du_i/dx_i (O)), H_O^T } = 0                 if x = O.        (1.1)


Figure 1. The network G (N = 5).

Here H_i is the Hamiltonian corresponding to the edge Γ_i. At the vertex O, the definition of the Hamiltonian has to be particular, in order to consider all the possibilities when x is close to O. More specifically, if x is close to O and belongs to Γ_i, then:

• The term min_{j≠i} {u_j(O) + c_j} accounts for situations in which the trajectory enters Γ_{i_0}, where u_{i_0}(O) + c_{i_0} = min_{j≠i} {u_j(O) + c_j}.

• The term H_i^+(O, du_i/dx_i (O)) accounts for situations in which the trajectory does not leave Γ_i.

• The term H_O^T accounts for situations in which the trajectory stays at O.

The most important part of the paper will be devoted to two different proofs of a comparison principle leading to the well-posedness of (1.1): the first one uses arguments from optimal control theory coming from Barles et al. [6, 7] and Achdou et al. [3]; the second one is inspired by Lions and Souganidis [19] and uses arguments from the theory of PDEs.

The paper is organized as follows: Section 2 deals with the optimal control problems with entry and exit costs: we give a simple example in which the value function is discontinuous at the vertex O, and also prove results on the structure of the value function near O. In Section 3, the new system (1.1) is defined and a suitable notion of viscosity solution is proposed. In Section 4, we prove that our value functions are viscosity solutions of the above-mentioned system. In Section 5, some properties of viscosity sub- and super-solutions are given and used to obtain the comparison principle. Finally, optimal control problems with entry costs which may be zero and the related HJB equations are considered in Section 6.


2. Optimal control problem on junction with entry/exit costs

2.1. The geometry

We consider the model case of the junction in R^d with N semi-infinite straight edges, N > 1. The edges are denoted by (Γ_i)_{i=1,...,N}, where Γ_i is the closed half-line R_+ e_i. The vectors e_i are pairwise distinct unit vectors in R^d. The half-lines Γ_i are glued at the vertex O to form the junction G:

    G = ∪_{i=1}^N Γ_i.

The geodetic distance d(x, y) between two points x, y of G is

    d(x, y) = |x − y|     if x, y belong to the same edge Γ_i,
    d(x, y) = |x| + |y|   if x, y belong to different edges Γ_i and Γ_j.
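As a concrete illustration, the geodetic distance can be computed from the edge index and the distance to O of each point. The short Python sketch below does this, representing a point of G by a pair (edge index, |x|); the representation and the function name are illustrative choices, not notation from the paper.

```python
# A point of the junction G is represented as (edge_index, r) with r = |x| >= 0;
# the vertex O is any pair with r == 0, and it belongs to every edge.

def geodesic_distance(p, q):
    """Geodetic distance on G: |x - y| on a common edge, |x| + |y| across different edges."""
    (i, r), (j, s) = p, q
    if i == j or r == 0.0 or s == 0.0:
        return abs(r - s)
    return r + s

print(geodesic_distance((1, 1.0), (3, 2.5)))  # different edges: 3.5
print(geodesic_distance((2, 1.0), (2, 2.5)))  # same edge: 1.5
```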

2.2. The optimal control problem

We consider infinite horizon optimal control problems which have different dynamics and running costs on each edge. For i = 1, ..., N:

• the set of controls on Γ_i is denoted by A_i;
• the system is driven by a dynamics f_i;
• there is a running cost ℓ_i.

Our main assumptions, referred to as [H] hereafter, are as follows:

[H0] (Control sets) Let A be a metric space (one can take A = R^d). For i = 1, ..., N, A_i is a nonempty compact subset of A and the sets A_i are disjoint.

[H1] (Dynamics) For i = 1, ..., N, the function f_i : Γ_i × A_i → R is continuous and bounded by M. Moreover, there exists L > 0 such that

    |f_i(x, a) − f_i(y, a)| ≤ L |x − y|   for all x, y ∈ Γ_i, a ∈ A_i.

Hereafter, we will use the notation F_i(x) for the set {f_i(x, a) e_i : a ∈ A_i}.

[H2] (Running costs) For i = 1, ..., N, the function ℓ_i : Γ_i × A_i → R is a continuous function bounded by M > 0. There exists a modulus of continuity ω such that

    |ℓ_i(x, a) − ℓ_i(y, a)| ≤ ω(|x − y|)   for all x, y ∈ Γ_i, a ∈ A_i.

[H3] (Convexity of dynamics and costs) For x ∈ Γ_i, the following set

    FL_i(x) = {(f_i(x, a) e_i, ℓ_i(x, a)) : a ∈ A_i}

is non-empty, closed and convex.

[H4] (Strong controllability) There exists a real number δ > 0 such that

    [−δ e_i, δ e_i] ⊂ F_i(O) = {f_i(O, a) e_i : a ∈ A_i}.
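A crude numerical reading of [H4]: on a sampled control set, strong controllability at O holds (up to the grid resolution, and using the convexity in [H3] to fill the interval) as soon as the sampled speeds f_i(O, a) reach both −δ and +δ. The sketch below checks this for the dynamics f_i(O, a) = a on A_i = [−1, 1] used in Example 2.6 later in this section; the grid and the helper name are assumptions of the sketch.

```python
import numpy as np

def reaches_both_speeds(f_at_O, control_grid, delta):
    """Sampled check of [H4]: do the speeds {f_i(O, a) : a in A_i} reach -delta and +delta?
    Together with the convexity of FL_i(O) from [H3], this gives [-delta e_i, delta e_i] in F_i(O)."""
    speeds = np.array([f_at_O(a) for a in control_grid])
    return speeds.min() <= -delta and speeds.max() >= delta

grid = np.linspace(-1.0, 1.0, 201)
print(reaches_both_speeds(lambda a: a, grid, delta=0.5))      # True: [H4] holds with delta = 0.5
print(reaches_both_speeds(lambda a: a**2, grid, delta=0.5))   # False: only nonnegative speeds at O
```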

Remark 2.1. The assumption that the sets A_i are disjoint is not restrictive. Indeed, if the sets A_i are not disjoint, then we define Ã_i = A_i × {i} and f̃_i(x, ã) = f_i(x, a), ℓ̃_i(x, ã) = ℓ_i(x, a) with ã = (a, i), a ∈ A_i. The assumption [H3] is made to avoid the use of relaxed controls. With assumption [H4], one gets that the Hamiltonian which will appear later is coercive for x close to O. Moreover, [H4] is an important assumption to prove Lemmas 2.7 and 5.3.

Let

    M = {(x, a) : x ∈ G, a ∈ A_i if x ∈ Γ_i \ {O}, and a ∈ ∪_{i=1}^N A_i if x = O}.

Then M is closed. We also define the function f on M by: for all (x, a) ∈ M,

    f(x, a) = f_i(x, a) e_i   if x ∈ Γ_i \ {O} and a ∈ A_i,
    f(x, a) = f_i(O, a) e_i   if x = O and a ∈ A_i.

The function f is continuous on M since the sets A_i are disjoint.

Definition 2.2 (The speed set and the admissible control set). The set F̃(x), which contains all the “possible speeds” at x, is defined by

    F̃(x) = F_i(x)             if x ∈ Γ_i \ {O},
    F̃(x) = ∪_{i=1}^N F_i(O)   if x = O.

For x ∈ G, the set of admissible trajectories starting from x is

    Y_x = { y_x ∈ Lip(R_+; G) :  ẏ_x(t) ∈ F̃(y_x(t)) for a.e. t > 0,  y_x(0) = x }.

According to Theorem 1.2 from [3], a solution y_x can be associated with several control laws. We introduce the set of admissible controlled trajectories starting from x:

    T_x = { (y_x, α) ∈ L_loc(R_+; M) :  y_x ∈ Lip(R_+; G)  and  y_x(t) = x + ∫_0^t f(y_x(s), α(s)) ds }.
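For numerical experiments, an element of T_x restricted to a single edge can be approximated by an explicit Euler scheme, as in the sketch below (the state is the scalar distance to O along the edge, and it is clipped at O so that the trajectory stays in Γ_i). The particular dynamics, control law, horizon and step size are illustrative assumptions.

```python
import numpy as np

def euler_trajectory(r0, f_i, alpha, T=5.0, dt=1e-3):
    """Explicit Euler approximation of y_x(t) = x + [int_0^t f_i(y_x(s), alpha(s)) ds] e_i on one edge.
    r0 = |x| >= 0 is the initial distance to O; the state is clipped at r = 0 (the vertex O)."""
    n = int(round(T / dt))
    t = np.linspace(0.0, T, n + 1)
    r = np.empty(n + 1)
    r[0] = r0
    for k in range(n):
        r[k + 1] = max(0.0, r[k] + dt * f_i(r[k], alpha(t[k])))
    return t, r

# With f_i(x, a) = a and the constant control a = -1, the state reaches O at time r0 and stays there.
t, r = euler_trajectory(1.0, lambda x, a: a, lambda s: -1.0)
print(round(r[500], 3), r[2000])  # ~0.5 at t = 0.5, then 0.0 after t = 1
```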

Notice that, if (y_x, α) ∈ T_x then y_x ∈ Y_x. Hereafter, we will denote y_x by y_{x,α} if (y_x, α) ∈ T_x. For any y_{x,α}, we can define the closed set T_O = {t ∈ R_+ : y_{x,α}(t) = O} and the open subsets T_i of R_+ = [0, +∞) by T_i = {t ∈ R_+ : y_{x,α}(t) ∈ Γ_i \ {O}}. The set T_i is a countable union of disjoint open intervals:

    T_i = ∪_{k∈K_i⊂N} T_{ik} = [0, η_{i0}) ∪ ∪_{k∈K_i⊂N*} (t_{ik}, η_{ik})   if x ∈ Γ_i \ {O},
    T_i = ∪_{k∈K_i⊂N} T_{ik} = ∪_{k∈K_i⊂N*} (t_{ik}, η_{ik})                 if x ∉ Γ_i \ {O},

where K_i = {1, ..., n} if the trajectory y_{x,α} enters Γ_i n times and K_i = N if the trajectory y_{x,α} enters Γ_i infinitely many times.

Remark 2.3. From the above definition, one can see that t_{ik} is an entry time in Γ_i \ {O} and η_{ik} is an exit time from Γ_i \ {O}. Hence

    y_{x,α}(t_{ik}) = y_{x,α}(η_{ik}) = O.

Let C = {c_1, c_2, ..., c_N} be a set of entry costs and D = {d_1, d_2, ..., d_N} be a set of exit costs. We underline that, except in Section 6, entry and exit costs are positive.


In the sequel, we define two different cost functionals (the first one corresponds to the case when there is a cost for entering the edges and the second one corresponds to the case when there is a cost for exiting the edges):

Definition 2.4 (The cost functionals and value functions with entry/exit costs). The costs associated to a trajectory (y_{x,α}, α) ∈ T_x are defined by

    J(x; (y_{x,α}, α)) = ∫_0^{+∞} ℓ(y_{x,α}(t), α(t)) e^{−λt} dt + Σ_{i=1}^N Σ_{k∈K_i} c_i e^{−λ t_{ik}}   (cost functional with entry cost),

and

    Ĵ(x; (y_{x,α}, α)) = ∫_0^{+∞} ℓ(y_{x,α}(t), α(t)) e^{−λt} dt + Σ_{i=1}^N Σ_{k∈K_i} d_i e^{−λ η_{ik}}   (cost functional with exit cost),

where the running cost ℓ : M → R is

    ℓ(x, a) = ℓ_i(x, a)   if x ∈ Γ_i \ {O} and a ∈ A_i,
    ℓ(x, a) = ℓ_i(O, a)   if x = O and a ∈ A_i.

Hereafter, to simplify the notation, we will use J(x, α) and Ĵ(x, α) instead of J(x; (y_{x,α}, α)) and Ĵ(x; (y_{x,α}, α)), respectively.
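Numerically, the entry-cost functional J can be approximated by a quadrature of the discounted running cost plus the discounted entry costs, as sketched below. The inputs (a callable t ↦ ℓ(y_{x,α}(t), α(t)) and a dictionary of entry times per edge) are assumptions about how a discretized trajectory would be stored, and the truncation of the time integral at a finite horizon is a numerical approximation.

```python
import numpy as np

def entry_cost_functional(running_cost, entry_times, entry_costs, lam, T=50.0, n=200_000):
    """J(x, alpha) ~ int_0^T running_cost(t) e^{-lam t} dt + sum_i sum_k c_i e^{-lam t_ik}.
    With a running cost bounded by M, truncating at T leaves an error of at most M e^{-lam T} / lam."""
    t = np.linspace(0.0, T, n)
    integral = np.trapz(running_cost(t) * np.exp(-lam * t), t)
    switching = sum(entry_costs[i] * np.exp(-lam * tik)
                    for i, times in entry_times.items() for tik in times)
    return integral + switching

# A trajectory that pays running cost 1 until it enters edge 2 at time 1 (entry cost 0.5), then cost 0.
J = entry_cost_functional(lambda t: (t <= 1.0).astype(float), {2: [1.0]}, {2: 0.5}, lam=1.0)
print(J, (1 - np.exp(-1.0)) + 0.5 * np.exp(-1.0))  # both ~0.816
```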

The value functions of the infinite horizon optimal control problem are defined by

    v(x) = inf_{(y_{x,α},α)∈T_x} J(x; (y_{x,α}, α))   (value function with entry cost), and

    v̂(x) = inf_{(y_{x,α},α)∈T_x} Ĵ(x; (y_{x,α}, α))   (value function with exit cost).

Remark 2.5. By the definition of the value function, we are mainly interested in control laws α such that J(x, α) < +∞. In such a case, if |K_i| = +∞, then we can order {t_{ik}, η_{ik} : k ∈ N} such that

    t_{i1} < η_{i1} < t_{i2} < η_{i2} < ··· < t_{ik} < η_{ik} < ···,   and   lim_{k→∞} t_{ik} = lim_{k→∞} η_{ik} = +∞.

Indeed, assuming that lim_{k→∞} t_{ik} = t < +∞, then

    J(x, α) ≥ −M/λ + Σ_{k=1}^{+∞} c_i e^{−λ t_{ik}} = −M/λ + c_i Σ_{k=1}^{+∞} e^{−λ t_{ik}} = +∞,

in contradiction with J(x, α) < +∞. This means that the state cannot switch edges infinitely many times in finite time, otherwise the cost functional is obviously infinite.


The following example shows that the value function with entry costs is possibly discontinuous (the same holds for the value function with exit costs).

Example 2.6. Consider the network G = Γ_1 ∪ Γ_2 where Γ_1 = R_+ e_1 = (−∞, 0] and Γ_2 = R_+ e_2 = [0, +∞). The control sets are A_i = [−1, 1] × {i} with i ∈ {1, 2}. Set

    (f(x, a), ℓ(x, a)) = (f_i(x, (a_i, i)) e_i, ℓ_i(x, (a_i, i)))   if x ∈ Γ_i \ {O} and a = (a_i, i) ∈ A_i,
    (f(x, a), ℓ(x, a)) = (f_i(O, (a_i, i)) e_i, ℓ_i(O, (a_i, i)))   if x = O and a = (a_i, i) ∈ A_i,

where f_i(x, (a_i, i)) = a_i and ℓ_1 ≡ 1, ℓ_2(x, (a_2, 2)) = 1 − a_2. For x ∈ Γ_2 \ {O}, we have v(x) = v_2(x) = 0, the optimal strategy consisting in choosing α(t) ≡ (1, 2). For x ∈ Γ_1, one can check that

    v(x) = min{ 1/λ, (1 − e^{−λ|x|})/λ + c_2 e^{−λ|x|} }.

More precisely, for all x ∈ Γ_1, we have

    v(x) = 1/λ                                  if c_2 ≥ 1/λ, with the optimal control α(t) ≡ (−1, 1),
    v(x) = (1 − e^{−λ|x|})/λ + c_2 e^{−λ|x|}    if c_2 < 1/λ, with the optimal control α(t) = (1, 1) if t ≤ |x| and α(t) = (1, 2) if t ≥ |x|.

Summarizing, we have the two following cases:

1. If c_2 ≥ 1/λ, then

    v(x) = 0     if x ∈ Γ_2 \ {O},
    v(x) = 1/λ   if x ∈ Γ_1.

The graph of the value function with entry cost c_2 ≥ 1/λ = 1 is plotted in Figure 2a.

2. If c_2 < 1/λ, then

    v(x) = 0                                    if x ∈ Γ_2 \ {O},
    v(x) = (1 − e^{−λ|x|})/λ + c_2 e^{−λ|x|}    if x ∈ Γ_1.

The graph of the value function with entry cost c_2 = 1/2 < 1 = 1/λ is plotted in Figure 2b.
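A quick numerical check of the formula for v on Γ_1: the sketch below evaluates, by quadrature, the discounted costs of the two candidate strategies described above (stay on Γ_1 forever with running cost 1, or drive to O at unit speed, pay c_2 and then follow the zero-cost control on Γ_2) and verifies that the piecewise formula of Example 2.6 coincides with the better of the two. It is only a consistency check against these two strategies, not an optimization over all controls; λ, c_2 and the test points are arbitrary choices.

```python
import numpy as np

lam = 1.0

def cost_stay():
    # alpha(t) = (-1, 1): remain on Gamma_1 forever with running cost 1.
    return 1.0 / lam

def cost_switch(x_abs, c2):
    # alpha(t) = (1, 1) until reaching O at time |x|, pay c2, then (1, 2) on Gamma_2 (running cost 0),
    # evaluated by numerical quadrature of the discounted running cost.
    t = np.linspace(0.0, x_abs, 10_000)
    return np.trapz(np.exp(-lam * t), t) + c2 * np.exp(-lam * x_abs)

def v_example(x_abs, c2):
    # Piecewise formula stated in Example 2.6.
    if c2 >= 1.0 / lam:
        return 1.0 / lam
    return (1.0 - np.exp(-lam * x_abs)) / lam + c2 * np.exp(-lam * x_abs)

for c2 in (0.5, 2.0):               # one case with c2 < 1/lam, one with c2 >= 1/lam
    for x_abs in (0.2, 1.0, 3.0):   # points of Gamma_1
        best_of_two = min(cost_stay(), cost_switch(x_abs, c2))
        assert abs(best_of_two - v_example(x_abs, c2)) < 1e-6
print("Example 2.6: the piecewise formula equals the best of the two candidate strategies")
```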

Figure 2. An example of value function with entry cost.

Lemma 2.7. Under assumptions [H1] and [H4], there exist two positive numbers r_0 and C such that for all x_1, x_2 ∈ B(O, r_0) ∩ G, there exist (y_{x_1}, α_{x_1,x_2}) ∈ T_{x_1} and τ_{x_1,x_2} ≤ C d(x_1, x_2) such that y_{x_1}(τ_{x_1,x_2}) = x_2.

Proof of Lemma 2.7. This proof is classical. It is sufficient to consider the case when x_1 and x_2 belong to the same edge Γ_i, since in the other cases we will use O as a connecting point between x_1 and x_2. According to Assumption [H4], there exists a ∈ A_i such that f_i(O, a) = δ. Additionally, by the Lipschitz continuity of f_i,

    |f_i(O, a) − f_i(x, a)| ≤ L |x|,

hence, if we choose r_0 := δ/(2L) > 0, then f_i(x, a) ≥ δ/2 for all x ∈ B(O, r_0) ∩ Γ_i. Let x_1, x_2 be in B(O, r_0) ∩ Γ_i with |x_1| < |x_2|: there exist a control law α and τ_{x_1,x_2} > 0 such that α(t) = a if 0 ≤ t ≤ τ_{x_1,x_2} and y_{x_1}(τ_{x_1,x_2}) = x_2. Moreover, since the velocity f_i(y_{x_1}(t), α(t)) is always greater than δ/2 when t ≤ τ_{x_1,x_2}, then

    τ_{x_1,x_2} ≤ (2/δ) d(x_1, x_2).

If |x_1| > |x_2|, the proof is achieved by replacing a ∈ A_i by a ∈ A_i such that f_i(O, a) = −δ and applying the same argument as above.

2.3. Some properties of the value function at the vertex

Lemma 2.8. Under assumption [H], v|_{Γ_i\{O}} and v̂|_{Γ_i\{O}} are continuous for any i = 1, ..., N. Moreover, there exists ε > 0 such that v|_{Γ_i\{O}} and v̂|_{Γ_i\{O}} are Lipschitz continuous in (Γ_i \ {O}) ∩ B(O, ε). Hence, it is possible to extend v|_{Γ_i\{O}} and v̂|_{Γ_i\{O}} at O into Lipschitz continuous functions in Γ_i ∩ B(O, ε). Hereafter, v_i and v̂_i denote these extensions.

Proof of Lemma 2.8. The proof of continuity inside the edge is classical, using [H4]; see [1] for more details. The proof of Lipschitz continuity is a consequence of Lemma 2.7. Indeed, for x, z belonging to Γ_i ∩ B(O, ε), by Lemma 2.7 and the definition of the value function, we have

    v(x) − v(z) = v_i(x) − v_i(z) ≤ ∫_0^{τ_{x,z}} ℓ_i(y_{x,α_{x,z}}(t), α_{x,z}(t)) e^{−λt} dt + v_i(z) (e^{−λτ_{x,z}} − 1).

Since ℓ_i is bounded by M (by [H2]), v_i is bounded in Γ_i ∩ B(O, ε) and |e^{−λτ_{x,z}} − 1| is bounded by λ τ_{x,z}, there exists a constant C̄ such that

    v_i(x) − v_i(z) ≤ C̄ τ_{x,z} ≤ C̄ C |x − z|.

The last inequality follows from Lemma 2.7. The inequality v_i(z) − v_i(x) ≤ C̄ C |x − z| is obtained in a similar way. The proof is done.


Let us define the tangential Hamiltonian H_O^T at the vertex O by

    H_O^T = max_{i=1,...,N} max_{a_i∈A_i^O} {−ℓ_i(O, a_i)} = − min_{i=1,...,N} min_{a_i∈A_i^O} {ℓ_i(O, a_i)},   (2.1)

where A_i^O = {a_i ∈ A_i : f_i(O, a_i) = 0}. The relationship between the values v(O), v_i(O) and H_O^T will be given in the next theorem. Hereafter, the proofs of the results will be supplied only for the value function with entry costs v; the proofs concerning the value function with exit costs v̂ are totally similar.

Theorem 2.9. Under assumption [H], the value functions v and v̂ satisfy

    v(O) = min{ min_{i=1,...,N} {v_i(O) + c_i}, −H_O^T/λ },

and

    v̂(O) = min{ min_{i=1,...,N} {v̂_i(O)}, −H_O^T/λ }.

Remark 2.10. Theorem 2.9 gives us the characterization of the value function at vertex O.
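For the data of Example 2.6, the quantities entering Theorem 2.9 are explicit and the identity at O can be checked numerically, as sketched below. The values v_1(O) = min(1/λ, c_2) and v_2(O) = 0 come from letting x → O in the example; taking v(O) = min(1/λ, c_2) (the better of staying at O, entering Γ_2, or entering Γ_1) is a hand derivation for this particular example and an assumption of the sketch, not a formula stated in the paper.

```python
import numpy as np

lam = 1.0

def check_theorem_2_9(c1, c2):
    # Closed forms for the data of Example 2.6 (entry costs c1, c2 > 0).
    v1_O = min(1.0 / lam, c2)   # limit of v on Gamma_1 \ {O} as x -> O
    v2_O = 0.0                  # v = 0 on Gamma_2 \ {O}
    H_OT = -1.0                 # -min_i min_{f_i(O,a)=0} l_i(O, a) = -min(1, 1)
    rhs = min(min(v1_O + c1, v2_O + c2), -H_OT / lam)
    v_O = min(1.0 / lam, c2)    # best of: stay at O (1/lam), enter Gamma_2 (c2), enter Gamma_1 (> c1 + v1_O)
    return np.isclose(v_O, rhs)

print(all(check_theorem_2_9(c1, c2) for c1 in (0.1, 0.7, 2.0) for c2 in (0.3, 1.0, 2.5)))  # True
```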

The proof of Theorem 2.9 makes use of Lemmas 2.11 and 2.12.

Lemma 2.11 (Value functions v and v̂ at O). Under assumption [H],

    max_{i=1,...,N} {v_i(O)} ≤ v(O) ≤ min_{i=1,...,N} {v_i(O) + c_i},

and

    max_{i=1,...,N} {v̂_i(O) − d_i} ≤ v̂(O) ≤ min_{i=1,...,N} {v̂_i(O)}.

Proof of Lemma 2.11. We divide the proof into two parts.

Prove that max_{i=1,...,N} {v_i(O)} ≤ v(O). First, we fix i ∈ {1, ..., N} and any control law ᾱ such that (y_{O,ᾱ}, ᾱ) ∈ T_O. Let x ∈ Γ_i \ {O} be such that |x| is small. From Lemma 2.7, there exists a control law α_{x,O} connecting x and O, and we consider

    α(s) = α_{x,O}(s)       if s ≤ τ_{x,O},
    α(s) = ᾱ(s − τ_{x,O})   if s > τ_{x,O}.

It means that the trajectory goes from x to O with the control law α_{x,O} and then proceeds with the control law ᾱ. Therefore

    v(x) = v_i(x) ≤ J(x, α) = ∫_0^{τ_{x,O}} ℓ_i(y_{x,α}(s), α(s)) e^{−λs} ds + e^{−λτ_{x,O}} J(O, ᾱ).

Since ᾱ is chosen arbitrarily and ℓ_i is bounded by M, we get

    v_i(x) ≤ M τ_{x,O} + e^{−λτ_{x,O}} v(O).

Let x tend to O; then τ_{x,O} tends to 0 by Lemma 2.7. Therefore, v_i(O) ≤ v(O). Since the above inequality holds for i = 1, ..., N, we obtain

    max_{i=1,...,N} {v_i(O)} ≤ v(O).

Prove that v(O) ≤ min_{i=1,...,N} {v_i(O) + c_i}. For i = 1, ..., N, we claim that v(O) ≤ v_i(O) + c_i. Consider x ∈ Γ_i \ {O} with |x| small enough and any control law ᾱ_x such that (y_{x,ᾱ_x}, ᾱ_x) ∈ T_x. From Lemma 2.7, there exists a control law α_{O,x} connecting O and x, and we consider

    α(s) = α_{O,x}(s)         if s ≤ τ_{O,x},
    α(s) = ᾱ_x(s − τ_{O,x})   if s > τ_{O,x}.

It means that the trajectory goes from O to x using the control law α_{O,x} and then proceeds with the control law ᾱ_x. Therefore

    v(O) ≤ J(O, α) = c_i + ∫_0^{τ_{O,x}} ℓ_i(y_{O,α}(s), α(s)) e^{−λs} ds + e^{−λτ_{O,x}} J(x, ᾱ_x).

Since ᾱ_x is chosen arbitrarily and ℓ_i is bounded by M, we get

    v(O) ≤ c_i + M τ_{O,x} + e^{−λτ_{O,x}} v_i(x).

Let x tend to O; then τ_{O,x} tends to 0 by Lemma 2.7, and therefore v(O) ≤ c_i + v_i(O). Since the above inequality holds for i = 1, ..., N, we obtain

    v(O) ≤ min_{i=1,...,N} {v_i(O) + c_i}.

Lemma 2.12. The value functions v and v̂ satisfy

    v(O), v̂(O) ≤ −H_O^T/λ,   (2.2)

where H_O^T is defined in (2.1).

Proof of Lemma 2.12. From (2.1), there exist j ∈ {1, ..., N} and a_j ∈ A_j^O such that

    H_O^T = − min_{i=1,...,N} min_{a_i∈A_i^O} {ℓ_i(O, a_i)} = −ℓ_j(O, a_j).

Let the control law α be defined by α(s) ≡ a_j for all s; then

    v(O) ≤ J(O, α) = ∫_0^{+∞} ℓ_j(O, a_j) e^{−λs} ds = ℓ_j(O, a_j)/λ = −H_O^T/λ.

We are ready to prove Theorem 2.9.


Proof of Theorem 2.9. According to Lemmas 2.11 and 2.12,

    v(O) ≤ min{ min_{i=1,...,N} {v_i(O) + c_i}, −H_O^T/λ }.

Assume that

    v(O) < min_{i=1,...,N} {v_i(O) + c_i};   (2.3)

it is then sufficient to prove that v(O) = −H_O^T/λ (in the opposite case, v(O) = min_{i=1,...,N} {v_i(O) + c_i} ≤ −H_O^T/λ by Lemmas 2.11 and 2.12, and the conclusion follows directly). By (2.3), there exists a sequence {ε_n}_{n∈N} such that ε_n → 0 and

    v(O) + ε_n < min_{i=1,...,N} {v_i(O) + c_i}   for all n ∈ N.

On the other hand, for each n there exists an ε_n-optimal control α_n, i.e., v(O) + ε_n > J(O, α_n). Let us define the first time that the trajectory y_{O,α_n} leaves O:

    t_n := inf ∪_{i=1,...,N} T_i^n,

where T_i^n is the set of times t for which y_{O,α_n}(t) belongs to Γ_i \ {O}. Notice that t_n is possibly +∞, in which case y_{O,α_n}(s) = O for all s ∈ [0, +∞). Extracting a subsequence if necessary, we may assume that t_n tends to t̄ ∈ [0, +∞] when ε_n tends to 0.

If there exists a subsequence of {t_n}_{n∈N} (which is still denoted {t_n}_{n∈N}) such that t_n = +∞ for all n ∈ N, then for a.e. s ∈ [0, +∞)

    f(y_{O,α_n}(s), α_n(s)) = f(O, α_n(s)) = 0,
    ℓ(y_{O,α_n}(s), α_n(s)) = ℓ(O, α_n(s)).

In this case, α_n(s) ∈ ∪_{i=1}^N A_i^O for a.e. s ∈ [0, +∞). Therefore, for a.e. s ∈ [0, +∞),

    ℓ(y_{O,α_n}(s), α_n(s)) = ℓ(O, α_n(s)) ≥ −H_O^T,

and

    v(O) + ε_n > J(O, α_n) = ∫_0^{+∞} ℓ(O, α_n(s)) e^{−λs} ds ≥ ∫_0^{+∞} −H_O^T e^{−λs} ds = −H_O^T/λ.

By letting n tend to ∞, we get v(O) ≥ −H_O^T/λ. On the other hand, since v(O) ≤ −H_O^T/λ by Lemma 2.12, this implies that v(O) = −H_O^T/λ.


Let us now assume that 0 ≤ t_n < +∞ for all n large enough. Then, for a fixed n and for any positive δ ≤ δ_n with δ_n small enough, y_{O,α_n}(s) still belongs to some Γ_{i(n)} \ {O} for all s ∈ (t_n, t_n + δ]. We have

    v(O) + ε_n > J(O, α_n)
               = ∫_0^{t_n} ℓ(y_{O,α_n}(s), α_n(s)) e^{−λs} ds + c_{i(n)} e^{−λ t_n} + ∫_{t_n}^{t_n+δ} ℓ_{i(n)}(y_{O,α_n}(s), α_n(s)) e^{−λs} ds + e^{−λ(t_n+δ)} J(y_{O,α_n}(t_n + δ), α_n(· + t_n + δ))
               ≥ ∫_0^{t_n} ℓ(y_{O,α_n}(s), α_n(s)) e^{−λs} ds + c_{i(n)} e^{−λ t_n} + ∫_{t_n}^{t_n+δ} ℓ_{i(n)}(y_{O,α_n}(s), α_n(s)) e^{−λs} ds + e^{−λ(t_n+δ)} v(y_{O,α_n}(t_n + δ))
               = ∫_0^{t_n} ℓ(y_{O,α_n}(s), α_n(s)) e^{−λs} ds + c_{i(n)} e^{−λ t_n} + ∫_{t_n}^{t_n+δ} ℓ_{i(n)}(y_{O,α_n}(s), α_n(s)) e^{−λs} ds + e^{−λ(t_n+δ)} v_{i(n)}(y_{O,α_n}(t_n + δ)).

By letting δ tend to 0,

    v(O) + ε_n ≥ ∫_0^{t_n} ℓ(y_{O,α_n}(s), α_n(s)) e^{−λs} ds + c_{i(n)} e^{−λ t_n} + e^{−λ t_n} v_{i(n)}(O).

Note that y_{O,α_n}(s) = O for all s ∈ [0, t_n], i.e., f(O, α_n(s)) = 0 for a.e. s ∈ [0, t_n). Hence

    v(O) + ε_n ≥ ∫_0^{t_n} ℓ(O, α_n(s)) e^{−λs} ds + c_{i(n)} e^{−λ t_n} + e^{−λ t_n} v_{i(n)}(O)
               ≥ ∫_0^{t_n} −H_O^T e^{−λs} ds + c_{i(n)} e^{−λ t_n} + e^{−λ t_n} v_{i(n)}(O)
               = ((1 − e^{−λ t_n})/λ) (−H_O^T) + c_{i(n)} e^{−λ t_n} + e^{−λ t_n} v_{i(n)}(O).

Choose a subsequence {ε_{n_k}}_{k∈N} of {ε_n}_{n∈N} such that, for some i_0 ∈ {1, ..., N}, c_{i(n_k)} = c_{i_0} for all k. By letting k tend to ∞ and recalling that lim_{k→∞} t_{n_k} = t̄, we have three possible cases:

1. If t̄ = +∞, then v(O) ≥ −H_O^T/λ. By Lemma 2.12, we obtain v(O) = −H_O^T/λ.

2. If t̄ = 0, then v(O) ≥ c_{i_0} + v_{i_0}(O). By (2.3), we obtain a contradiction.

3. If t̄ ∈ (0, +∞), then v(O) ≥ ((1 − e^{−λt̄})/λ) (−H_O^T) + [c_{i_0} + v_{i_0}(O)] e^{−λt̄}. By (2.3), c_{i_0} + v_{i_0}(O) > v(O), so

    v(O) > ((1 − e^{−λt̄})/λ) (−H_O^T) + v(O) e^{−λt̄}.

This yields v(O) > −H_O^T/λ, and we finally obtain a contradiction with Lemma 2.12.


3. The Hamilton-Jacobi systems. Viscosity solutions

3.1. Test-functions

Definition 3.1. A function ϕ : Γ_1 × ··· × Γ_N → R^N is an admissible test-function if there exist (ϕ_i)_{i=1,...,N}, ϕ_i ∈ C^1(Γ_i), such that ϕ(x_1, ..., x_N) = (ϕ_1(x_1), ..., ϕ_N(x_N)). The set of admissible test-functions is denoted by R(G).

3.2. Definition of viscosity solution

Definition 3.2 (Hamiltonian). We define the Hamiltonian H_i : Γ_i × R → R by

    H_i(x, p) = max_{a∈A_i} {−p f_i(x, a) − ℓ_i(x, a)}

and the Hamiltonian H_i^+(O, ·) : R → R by

    H_i^+(O, p) = max_{a∈A_i^+} {−p f_i(O, a) − ℓ_i(O, a)},

where A_i^+ = {a_i ∈ A_i : f_i(O, a_i) ≥ 0}. Recall that the tangential Hamiltonian at O, H_O^T, has been defined in (2.1).
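These Hamiltonians can be evaluated numerically by brute-force maximization over a sampled control set; the sketch below does so for the two edges of Example 2.6, for which a short hand computation gives H_1(x, p) = |p| − 1, H_2(x, p) = |1 − p| − 1 and H_O^T = −1. The grid over A_i is a discretization assumption.

```python
import numpy as np

A = np.linspace(-1.0, 1.0, 2001)        # sampled control set for A_i = [-1, 1] (Example 2.6)
f = lambda x, a: a                      # f_i(x, a) = a on both edges
l1 = lambda x, a: 1.0 + 0.0 * a         # l_1 = 1
l2 = lambda x, a: 1.0 - a               # l_2(x, a) = 1 - a

def H(x, p, l):
    """H_i(x, p) = max_{a in A_i} { -p f_i(x, a) - l_i(x, a) }."""
    return np.max(-p * f(x, A) - l(x, A))

def H_plus(p, l):
    """H_i^+(O, p): same maximization restricted to controls with f_i(O, a) >= 0."""
    Ap = A[f(0.0, A) >= 0.0]
    return np.max(-p * f(0.0, Ap) - l(0.0, Ap))

def H_OT(costs):
    """Tangential Hamiltonian: - min over edges of min_{f_i(O, a) = 0} l_i(O, a)."""
    A0 = A[np.isclose(f(0.0, A), 0.0)]
    return -min(np.min(l(0.0, A0)) for l in costs)

for p in (-0.7, 0.0, 1.3):
    print(H(0.5, p, l1), abs(p) - 1.0)          # H_1(x, p) vs |p| - 1
    print(H(0.5, p, l2), abs(1.0 - p) - 1.0)    # H_2(x, p) vs |1 - p| - 1
print(H_plus(0.4, l1), H_OT((l1, l2)))          # H_1^+(O, 0.4) = -1 and H_O^T = -1
```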

We now introduce the Hamilton-Jacobi system for the case with entry costs:

    λ u_i(x) + H_i(x, du_i/dx_i (x)) = 0                                                              if x ∈ Γ_i \ {O},
    λ u_i(O) + max{ −λ min_{j≠i} {u_j(O) + c_j}, H_i^+(O, du_i/dx_i (O)), H_O^T } = 0                 if x = O,        (3.1)

for all i = 1, ..., N, and the Hamilton-Jacobi system with exit costs:

    λ û_i(x) + H_i(x, dû_i/dx_i (x)) = 0                                                              if x ∈ Γ_i \ {O},
    λ û_i(O) + max{ −λ min_{j≠i} {û_j(O) + d_i}, H_i^+(O, dû_i/dx_i (O)), H_O^T − λ d_i } = 0         if x = O,        (3.2)

for all i = 1, ..., N, together with their viscosity solutions.
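Before turning to the precise notion of viscosity solution, here is a numerical sanity check of (3.1) on Example 2.6 (λ = 1, entry costs c_1 = 0.3 and c_2 = 0.5 < 1/λ): with v_1(x) = (1 − e^{−λ|x|})/λ + c_2 e^{−λ|x|} and v_2 ≡ 0, the sketch below verifies that λ v_i + H_i vanishes inside each edge and that the vertex condition holds at O, using one-sided finite differences for the derivatives and the closed-form Hamiltonians computed above. The specific constants and the finite-difference step are assumptions of the sketch.

```python
import numpy as np

lam, c1, c2, h = 1.0, 0.3, 0.5, 1e-6          # entry costs with c2 < 1/lam; h = finite-difference step

v1 = lambda s: (1.0 - np.exp(-lam * s)) / lam + c2 * np.exp(-lam * s)   # s = |x| on Gamma_1
v2 = lambda s: 0.0 * s                                                   # v = 0 on Gamma_2

# Closed-form Hamiltonians for the data of Example 2.6 (cf. the sketch after Definition 3.2).
H1 = lambda p: abs(p) - 1.0
H2 = lambda p: abs(1.0 - p) - 1.0
H1p = lambda p: max(0.0, -p) - 1.0            # H_1^+(O, p)
H2p = lambda p: max(0.0, 1.0 - p) - 1.0       # H_2^+(O, p)
H_OT = -1.0

d = lambda v, s: (v(s + h) - v(s)) / h        # one-sided finite difference, valid down to s = 0

# Equations inside the edges: lam * v_i + H_i(dv_i/dx_i) = 0.
for s in (0.3, 1.0, 2.7):
    assert abs(lam * v1(s) + H1(d(v1, s))) < 1e-5
    assert abs(lam * v2(s) + H2(d(v2, s))) < 1e-5

# Vertex conditions of (3.1) at O for i = 1, 2.
res1 = lam * v1(0.0) + max(-lam * (v2(0.0) + c2), H1p(d(v1, 0.0)), H_OT)
res2 = lam * v2(0.0) + max(-lam * (v1(0.0) + c1), H2p(d(v2, 0.0)), H_OT)
print(res1, res2)                              # both ~0
```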

Definition 3.3 (Viscosity solution with entry costs).

• A function u := (u_1, ..., u_N), where u_i ∈ USC(Γ_i; R) for all i = 1, ..., N, is called a viscosity sub-solution of (3.1) if for any (ϕ_1, ..., ϕ_N) ∈ R(G), any i = 1, ..., N and any x_i ∈ Γ_i such that u_i − ϕ_i has a local maximum point on Γ_i at x_i, then

    λ u_i(x_i) + H_i(x_i, dϕ_i/dx_i (x_i)) ≤ 0                                                             if x_i ∈ Γ_i \ {O},
    λ u_i(O) + max{ −λ min_{j≠i} {u_j(O) + c_j}, H_i^+(O, dϕ_i/dx_i (O)), H_O^T } ≤ 0                      if x_i = O.

• A function u := (u_1, ..., u_N), where u_i ∈ LSC(Γ_i; R) for all i = 1, ..., N, is called a viscosity super-solution of (3.1) if for any (ϕ_1, ..., ϕ_N) ∈ R(G), any i = 1, ..., N and any x_i ∈ Γ_i such that u_i − ϕ_i has a local minimum point on Γ_i at x_i, then

    λ u_i(x_i) + H_i(x_i, dϕ_i/dx_i (x_i)) ≥ 0                                                             if x_i ∈ Γ_i \ {O},
    λ u_i(O) + max{ −λ min_{j≠i} {u_j(O) + c_j}, H_i^+(O, dϕ_i/dx_i (O)), H_O^T } ≥ 0                      if x_i = O.

• A function u := (u_1, ..., u_N), where u_i ∈ C(Γ_i; R) for all i = 1, ..., N, is called a viscosity solution of (3.1) if it is both a viscosity sub-solution and a viscosity super-solution of (3.1).

Definition 3.4 (Viscosity solution with exit costs).

• A function û := (û_1, ..., û_N), where û_i ∈ USC(Γ_i; R) for all i = 1, ..., N, is called a viscosity sub-solution of (3.2) if for any (ψ_1, ..., ψ_N) ∈ R(G), any i = 1, ..., N and any y_i ∈ Γ_i such that û_i − ψ_i has a local maximum point on Γ_i at y_i, then

    λ û_i(y_i) + H_i(y_i, dψ_i/dx_i (y_i)) ≤ 0                                                                     if y_i ∈ Γ_i \ {O},
    λ û_i(O) + max{ −λ min_{j≠i} {û_j(O)} − λ d_i, H_i^+(O, dψ_i/dx_i (O)), H_O^T − λ d_i } ≤ 0                    if y_i = O.

• A function û := (û_1, ..., û_N), where û_i ∈ LSC(Γ_i; R) for all i = 1, ..., N, is called a viscosity super-solution of (3.2) if for any (ψ_1, ..., ψ_N) ∈ R(G), any i = 1, ..., N and any y_i ∈ Γ_i such that û_i − ψ_i has a local minimum point on Γ_i at y_i, then

    λ û_i(y_i) + H_i(y_i, dψ_i/dx_i (y_i)) ≥ 0                                                                     if y_i ∈ Γ_i \ {O},
    λ û_i(O) + max{ −λ min_{j≠i} {û_j(O)} − λ d_i, H_i^+(O, dψ_i/dx_i (O)), H_O^T − λ d_i } ≥ 0                    if y_i = O.

• A function û := (û_1, ..., û_N), where û_i ∈ C(Γ_i; R) for all i = 1, ..., N, is called a viscosity solution of (3.2) if it is both a viscosity sub-solution and a viscosity super-solution of (3.2).

Remark 3.5. This notion of viscosity solution is consistent with the one of [3]. As can be seen in Section 6, when all the switching costs are zero, our definition and the one of [3] coincide.

4. Connections between the value functions and the Hamilton-Jacobi systems

Let v be the value function of the optimal control problem with entry costs and v̂ be the value function of the optimal control problem with exit costs. Recall that v_i, v̂_i : Γ_i → R are defined in Lemma 2.8 by

    v_i(x) = v(x)  if x ∈ Γ_i \ {O},    v_i(O) = lim_{Γ_i\{O} ∋ x → O} v(x),

and

    v̂_i(x) = v̂(x)  if x ∈ Γ_i \ {O},    v̂_i(O) = lim_{Γ_i\{O} ∋ x → O} v̂(x).

We wish to prove that v := (v_1, v_2, ..., v_N) and v̂ := (v̂_1, ..., v̂_N) are respectively viscosity solutions of (3.1) and (3.2). In fact, since G \ {O} is a finite union of open intervals in which the classical theory can be applied, we obtain that v_i and v̂_i are viscosity solutions of

    λ u(x) + H_i(x, Du(x)) = 0   in Γ_i \ {O}.


Therefore, we can restrict ourselves to proving the following theorem.

Theorem 4.1. For i = 1, ..., N, the function v_i satisfies

    λ v_i(O) + max{ −λ min_{j≠i} {v_j(O) + c_j}, H_i^+(O, dv_i/dx_i (O)), H_O^T } = 0

in the viscosity sense. The function v̂_i satisfies

    λ v̂_i(O) + max{ −λ min_{j≠i} {v̂_j(O) + d_i}, H_i^+(O, dv̂_i/dx_i (O)), H_O^T − λ d_i } = 0

in the viscosity sense.

The proof of Theorem 4.1 follows from Lemmas 4.2 and 4.5. We focus on v_i since the proof for v̂_i is similar.

Lemma 4.2. For i = 1, ..., N, the function v_i is a viscosity sub-solution of (3.1) at O.

Proof of Lemma 4.2. From Theorem 2.9,

    λ v_i(O) + max{ −λ min_{j≠i} {v_j(O) + c_j}, H_O^T } ≤ 0.

It is thus sufficient to prove that

    λ v_i(O) + H_i^+(O, dv_i/dx_i (O)) ≤ 0

in the viscosity sense. Let a_i ∈ A_i be such that f_i(O, a_i) > 0. Setting α(t) ≡ a_i, then (y_{x,α}, α) ∈ T_x for all x ∈ Γ_i. Moreover, for all x ∈ Γ_i \ {O}, y_{x,α}(t) ∈ Γ_i \ {O} (the trajectory cannot approach O since the speed pushes it away from O for y_{x,α} ∈ Γ_i ∩ B(O, r)). Note that it is not sufficient to choose a_i ∈ A_i such that f_i(O, a_i) = 0, since this can lead to f_i(x, a_i) < 0 for all x ∈ Γ_i \ {O}. Next, for τ > 0 fixed and any x ∈ Γ_i, if we choose

    α_x(t) = a_i        if 0 ≤ t ≤ τ,
    α_x(t) = â(t − τ)   if t ≥ τ,        (4.1)

then y_{x,α_x}(t) ∈ Γ_i \ {O} for all t ∈ [0, τ]. It yields

    v_i(x) ≤ J(x, α_x) = ∫_0^τ ℓ_i(y_{x,α_x}(s), a_i) e^{−λs} ds + e^{−λτ} J(y_{x,α_x}(τ), â).

Since this holds for any â (α_x is arbitrary for t > τ), we deduce that

    v_i(x) ≤ ∫_0^τ ℓ_i(y_{x,α_x}(s), a_i) e^{−λs} ds + e^{−λτ} v_i(y_{x,α_x}(τ)).   (4.2)

Since f_i(·, a) is Lipschitz continuous by [H1], we also have, for all t ∈ [0, τ],

    |y_{x,α_x}(t) − y_{O,α_O}(t)| = | x + ∫_0^t f_i(y_{x,α_x}(s), a_i) e_i ds − ∫_0^t f_i(y_{O,α_O}(s), a_i) e_i ds | ≤ |x| + L ∫_0^t |y_{x,α_x}(s) − y_{O,α_O}(s)| ds,

where α_O satisfies (4.1) with x = O. According to Grönwall's inequality,

    |y_{x,α_x}(t) − y_{O,α_O}(t)| ≤ |x| e^{Lt}   for t ∈ [0, τ],

yielding that y_{x,α_x}(t) tends to y_{O,α_O}(t) when x tends to O. Hence, from (4.2), by letting x → O, we obtain

    v_i(O) ≤ ∫_0^τ ℓ_i(y_{O,α_O}(s), a_i) e^{−λs} ds + e^{−λτ} v_i(y_{O,α_O}(τ)).

Let ϕ be a function in C^1(Γ_i) such that 0 = v_i(O) − ϕ(O) = max_{Γ_i} (v_i − ϕ). This yields

    (ϕ(O) − ϕ(y_{O,α_O}(τ)))/τ ≤ (1/τ) ∫_0^τ ℓ_i(y_{O,α_O}(s), a_i) e^{−λs} ds + ((e^{−λτ} − 1)/τ) v_i(y_{O,α_O}(τ)).

By letting τ tend to 0, we obtain that

    −f_i(O, a_i) dϕ/dx_i (O) ≤ ℓ_i(O, a_i) − λ v_i(O).

Hence,

    λ v_i(O) + sup_{a∈A_i : f_i(O,a)>0} { −f_i(O, a) dv_i/dx_i (O) − ℓ_i(O, a) } ≤ 0

in the viscosity sense. Finally, from Corollary A.2 in Appendix A, we have

    sup_{a∈A_i : f_i(O,a)>0} { −f_i(O, a) dϕ_i/dx_i (O) − ℓ_i(O, a) } = max_{a∈A_i : f_i(O,a)≥0} { −f_i(O, a) dϕ_i/dx_i (O) − ℓ_i(O, a) }.

The proof is complete.

Lemma 4.3. If

    v_i(O) < min{ min_{j≠i} {v_j(O) + c_j}, −H_O^T/λ },   (4.3)

then there exist τ̄ > 0, r > 0 and ε_0 > 0 such that for any x ∈ (Γ_i \ {O}) ∩ B(O, r), any ε < ε_0 and any ε-optimal control law α_{ε,x} for x,

    y_{x,α_{ε,x}}(s) ∈ Γ_i \ {O}   for all s ∈ [0, τ̄].

Remark 4.4. Roughly speaking, this lemma takes care of the case λ v_i + H_i^+(O, dv_i/dx_i (O)) ≤ 0, i.e., the situation in which the trajectory does not leave Γ_i; see the introduction.

Proof of Lemma 4.3. Suppose by contradiction that there exist sequences {ε_n}, {τ_n} ⊂ R_+ and {x_n} ⊂ Γ_i \ {O} such that ε_n ↘ 0, x_n → O, τ_n ↘ 0, and control laws α_n such that α_n is an ε_n-optimal control law and y_{x_n,α_n}(τ_n) = O. This implies that

    v_i(x_n) + ε_n > J(x_n, α_n) = ∫_0^{τ_n} ℓ(y_{x_n,α_n}(s), α_n(s)) e^{−λs} ds + e^{−λτ_n} J(O, α_n(· + τ_n)).   (4.4)

Since ℓ is bounded by M by [H2], then

    v_i(x_n) + ε_n ≥ −τ_n M + e^{−λτ_n} v(O).

By letting n tend to ∞, we obtain

    v_i(O) ≥ v(O).   (4.5)

From (4.3), it follows that

    min{ min_{j≠i} {v_j(O) + c_j}, −H_O^T/λ } > v(O).

However, v(O) = min{ min_j {v_j(O) + c_j}, −H_O^T/λ } by Theorem 2.9. Therefore, v(O) = v_i(O) + c_i > v_i(O), which is a contradiction with (4.5).

Lemma 4.5. The function v_i is a viscosity super-solution of (3.1) at O.

Proof of Lemma 4.5. We adapt the proof of Oudet [21] and start by assuming that

    v_i(O) < min{ min_{j≠i} {v_j(O) + c_j}, −H_O^T/λ }.

We need to prove that

    λ v_i(O) + H_i^+(O, dv_i/dx_i (O)) ≥ 0

in the viscosity sense. Let ϕ ∈ C^1(Γ_i) be such that

    0 = v_i(O) − ϕ(O) ≤ v_i(x) − ϕ(x)   for all x ∈ Γ_i,   (4.6)

and let {x_ε} ⊂ Γ_i \ {O} be any sequence such that x_ε tends to O when ε tends to 0. From the dynamic programming principle and Lemma 4.3, there exists τ̄ such that for any ε > 0, there exists (y_ε, α_ε) := (y_{x_ε,α_ε}, α_ε) ∈ T_{x_ε} such that y_ε(τ) ∈ Γ_i \ {O} for any τ ∈ [0, τ̄] and

    v_i(x_ε) + ε ≥ ∫_0^τ ℓ_i(y_ε(s), α_ε(s)) e^{−λs} ds + e^{−λτ} v_i(y_ε(τ)).

Then, according to (4.6),

    v_i(x_ε) − v_i(O) + ε ≥ ∫_0^τ ℓ_i(y_ε(s), α_ε(s)) e^{−λs} ds + e^{−λτ} [ϕ(y_ε(τ)) − ϕ(O)] − v_i(O) (1 − e^{−λτ}).   (4.7)

Next,

    ∫_0^τ ℓ_i(y_ε(s), α_ε(s)) e^{−λs} ds = ∫_0^τ ℓ_i(y_ε(s), α_ε(s)) ds + o(τ),
    [ϕ(y_ε(τ)) − ϕ(O)] e^{−λτ} = ϕ(y_ε(τ)) − ϕ(O) + τ o_ε(1) + o(τ),

and

    v_i(x_ε) − v_i(O) = o_ε(1),    v_i(O) (1 − e^{−λτ}) = o(τ) + τ λ v_i(O),

where the notation o_ε(1) is used for a quantity which is independent of τ and tends to 0 as ε tends to 0. For k ∈ N*, the notation o(τ^k) is used for a quantity that is independent of ε and such that o(τ^k)/τ^k → 0 as τ → 0. Finally, O(τ^k) stands for a quantity independent of ε such that O(τ^k)/τ^k remains bounded as τ → 0. From (4.7), we obtain that

    τ λ v_i(O) ≥ ∫_0^τ ℓ_i(y_ε(s), α_ε(s)) ds + ϕ(y_ε(τ)) − ϕ(O) + τ o_ε(1) + o(τ) + o_ε(1).   (4.8)

Since y_ε(τ) ∈ Γ_i for all ε, one has

    ϕ(y_ε(τ)) − ϕ(x_ε) = ∫_0^τ dϕ/dx_i (y_ε(s)) ẏ_ε(s) ds = ∫_0^τ dϕ/dx_i (y_ε(s)) f_i(y_ε(s), α_ε(s)) ds.

Hence, from (4.8),

    τ λ v_i(O) − ∫_0^τ [ ℓ_i(y_ε(s), α_ε(s)) + dϕ/dx_i (y_ε(s)) f_i(y_ε(s), α_ε(s)) ] ds ≥ τ o_ε(1) + o(τ) + o_ε(1).   (4.9)

Moreover, ϕ(x_ε) − ϕ(O) = o_ε(1) and dϕ/dx_i (y_ε(s)) = dϕ/dx_i (O) + o_ε(1) + O(s). Thus

    λ v_i(O) − (1/τ) ∫_0^τ [ ℓ_i(y_ε(s), α_ε(s)) + dϕ/dx_i (O) f_i(y_ε(s), α_ε(s)) ] ds ≥ o_ε(1) + o(τ)/τ + o_ε(1)/τ.   (4.10)

Let ε_n → 0 as n → ∞ and τ_m → 0 as m → ∞ be such that

    (a_{mn}, b_{mn}) := ( (1/τ_m) ∫_0^{τ_m} f_i(y_{ε_n}(s), α_{ε_n}(s)) e_i ds, (1/τ_m) ∫_0^{τ_m} ℓ_i(y_{ε_n}(s), α_{ε_n}(s)) ds ) −→ (a, b) ∈ R e_i × R

as n, m → ∞. By [H1] and [H2],

    f_i(y_{ε_n}(s), α_{ε_n}(s)) e_i = f_i(O, α_{ε_n}(s)) e_i + o_n(1) + o_m(1),
    ℓ_i(y_{ε_n}(s), α_{ε_n}(s)) = ℓ_i(O, α_{ε_n}(s)) + o_n(1) + o_m(1).

It follows that

    (a_{mn}, b_{mn}) = ( (1/τ_m) ∫_0^{τ_m} f_i(O, α_{ε_n}(s)) e_i ds, (1/τ_m) ∫_0^{τ_m} ℓ_i(O, α_{ε_n}(s)) ds ) + o_n(1) + o_m(1) ∈ FL_i(O) + o_n(1) + o_m(1),

since FL_i(O) is closed and convex. Sending n, m → ∞, we obtain (a, b) ∈ FL_i(O), so there exists ā ∈ A_i such that

    lim_{m,n→∞} ( (1/τ_m) ∫_0^{τ_m} f_i(y_{ε_n}(s), α_{ε_n}(s)) e_i ds, (1/τ_m) ∫_0^{τ_m} ℓ_i(y_{ε_n}(s), α_{ε_n}(s)) ds ) = (f_i(O, ā) e_i, ℓ_i(O, ā)).   (4.11)

On the other hand, from Lemma 4.3, y_{ε_n}(s) ∈ Γ_i \ {O} for all s ∈ [0, τ_m]. This yields

    y_{ε_n}(τ_m) = [ ∫_0^{τ_m} f_i(y_{ε_n}(s), α_{ε_n}(s)) ds ] e_i + x_{ε_n}.

Since |y_{ε_n}(τ_m)| > 0, then

    (1/τ_m) ∫_0^{τ_m} f_i(y_{ε_n}(s), α_{ε_n}(s)) ds ≥ − |x_{ε_n}|/τ_m.

Letting ε_n tend to 0, then τ_m tend to 0, one gets f_i(O, ā) ≥ 0, so ā ∈ A_i^+. Hence, from (4.10) and (4.11), replacing ε by ε_n and τ by τ_m, letting ε_n tend to 0 and then τ_m tend to 0, we finally obtain

    λ v_i(O) + max_{a∈A_i^+} { −f_i(O, a) dϕ/dx_i (O) − ℓ_i(O, a) } ≥ λ v_i(O) + ( −f_i(O, ā) dϕ/dx_i (O) − ℓ_i(O, ā) ) ≥ 0.

5. Comparison principle and uniqueness

Inspired by [6, 7], we begin by proving some properties of viscosity sub- and super-solutions of (3.1). The following three lemmas are reminiscent of Lemma 3.4, Theorem 3.1 and Lemma 3.5 in [3].

Lemma 5.1. Let w = (w_1, ..., w_N) be a viscosity super-solution of (3.1). Let x ∈ Γ_i \ {O} and assume that

    w_i(O) < min{ min_{j≠i} {w_j(O) + c_j}, −H_O^T/λ }.   (5.1)

Then for all t > 0,

    w_i(x) ≥ inf_{α_i(·), θ_i} ( ∫_0^{t∧θ_i} ℓ_i(y_x^i(s), α_i(s)) e^{−λs} ds + w_i(y_x^i(t∧θ_i)) e^{−λ(t∧θ_i)} ),

where α_i ∈ L^∞(0, ∞; A_i), y_x^i is the solution of y_x^i(t) = x + [∫_0^t f_i(y_x^i(s), α_i(s)) ds] e_i, and θ_i satisfies y_x^i(θ_i) = O and lies between the exit time of y_x^i from Γ_i \ {O} and the exit time of y_x^i from Γ_i.

Proof of Lemma 5.1. According to (5.1), the function w_i is a viscosity super-solution of the following problem in Γ_i:

    λ w_i(x) + H_i(x, dw_i/dx_i (x)) = 0       if x ∈ Γ_i \ {O},
    λ w_i(O) + H_i^+(O, dw_i/dx_i (O)) = 0     if x = O.        (5.2)
