
Stochastic control on networks




HAL Id: tel-03222676

https://tel.archives-ouvertes.fr/tel-03222676

Submitted on 10 May 2021

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Stochastic control on networks

Wassim Wahbi

To cite this version:

Wassim Wahbi. Stochastic control on networks. Statistics [math.ST]. Université Paris sciences et lettres, 2018. English. NNT: 2018PSLED072. tel-03222676.


Contents

1 General Introduction 2

2 Quasi linear parabolic PDE in a junction with non linear Neumann vertex condition 8

2.1 Introduction . . . 8

2.2 Main results . . . 11

2.2.1 Notations and preliminary results . . . 11

2.2.2 Assumptions and main results . . . 14

2.3 The elliptic problem . . . 20

2.4 The parabolic problem . . . 23

2.4.1 Estimates on the discretized scheme . . . 23

2.4.2 Proof of Theorem 2.2.2. . . 34

2.4.3 On the existence for unbounded junction . . . 39

Appendices 45

A Functional spaces 46

B The Elliptic problem 48

3 Diffusion on a junction 51

3.1 Introduction . . . 51

3.2 Introductory material . . . 53

3.2.1 Notations and preliminary results . . . 53


3.3.1 Diffusion with jumps at the vertex . . . 57

3.3.2 Weak convergence . . . 59

3.3.3 Identification of the limit law . . . 61

3.4 Itô’s formula and local time estimate at the junction . . . 79

3.4.1 Itô’s formula. . . 79

3.4.2 Local time estimate at the junction . . . 84

4 Stochastic optimal control at the junction 90

4.1 Introduction . . . 90

4.2 The set of generalized action, and the martingale problem . . . 92

4.2.1 The set of generalized actions . . . 92

4.2.2 Weak martingale formulation of the problem of control . . . 98

4.3 Compactness of the admissible rule . . . 101

4.4 Dynamic Programming Principle . . . 117

Appendices 126


Acknowledgements

Throughout this thesis I have had the great fortune of being supervised by internationally renowned researchers. I therefore extend my sincere thanks to the people who contributed to my professional and personal development during the doctoral stage. I first wish to thank Pierre Cardaliaguet: from my second year of master's studies until the end of my doctorate, he supervised me and was the thesis advisor that many doctoral students would have wished for. Beyond his already well-recognized intellectual abilities, I thank him for his patience and for believing in me, especially in the most difficult moments. I also thank him for guiding my participation in various seminars (HJnet, L'Aquila, ...), which allowed me to meet new researchers and to broaden my knowledge of mathematics. I also thank Idris Kharroubi for his investment in my work and for the time he granted me with great generosity. Very deep gratitude goes as well to Bruno Bouchard, director of my master's program, who supported and encouraged me to undertake this thesis. I naturally also thank the members of Ceremade and of the MIDO department, from the administrative staff to the IT staff and the laboratory director.

I also thank all those who played a part in my scientific and mathematical journey, from my earliest years until today. It goes without saying that I also salute, thank, and remain grateful to the members of my family, who supported and encouraged me from beginning to end.


Chapter 1

General Introduction

In this thesis we are interested in controlled diffusions on a network structure and in the associated partial differential equations. We address three basic problems: the existence of a diffusion, the optimal control of this diffusion (dynamic programming principle), and the well-posedness of the associated Hamilton-Jacobi equation. For simplicity of notation, we will focus on a network consisting of a single junction, as the multi-junction setting can be treated with similar tools. In this framework the diffusion satisfies the following controlled reflected stochastic differential equation

$$dx(t) = \sigma_{i(t)}(t, x(t), \beta_{i(t)}(t))\,dW(t) + b_{i(t)}(t, x(t), \beta_{i(t)}(t))\,dt + dl(t), \quad 0 \le t \le T, \tag{1.1}$$

where $l(\cdot)$ is a nondecreasing process starting from 0 and satisfying

$$\int_0^T \mathbf{1}_{\{x(s)>0\}}\,dl(s) = 0,$$

and $W$ is a standard one dimensional Brownian motion. The corresponding Itô formula is given by

$$\begin{aligned} df_{i(t)}(t,x(t)) ={}& \partial_x f_{i(t)}(t,x(t))\,\sigma_{i(t)}(t,x(t),\beta_{i(t)}(t))\,dW(t) \\ &+ \Big( \partial_t f_{i(t)}(t,x(t)) + \tfrac{1}{2}\,\partial_{x,x} f_{i(t)}(t,x(t))\,\sigma^2_{i(t)}(t,x(t),\beta_{i(t)}(t)) \\ &\qquad + \partial_x f_{i(t)}(t,x(t))\,b_{i(t)}(t,x(t),\beta_{i(t)}(t)) \Big)\,dt \\ &+ \sum_{i=1}^{I} \partial_x f_i(t,0)\,\alpha_i(t)\,dl(t), \quad 0 \le t \le T, \end{aligned} \tag{1.2}$$

for f regular enough.
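The reflected dynamics (1.1) can be tried out with a naive Euler scheme. The sketch below is illustrative only: the per-edge coefficients `sigma`, `b`, the vertex probabilities `alpha`, and the function name `simulate_junction_sde` are hypothetical choices (constant coefficients, no control β), not objects defined in the thesis.

```python
import numpy as np

# Illustrative Euler scheme for the reflected SDE (1.1) on a 3-edge junction.
def simulate_junction_sde(T=1.0, n=20_000, alpha=(0.5, 0.3, 0.2),
                          sigma=(1.0, 0.8, 1.2), b=(0.0, 0.1, -0.1),
                          x0=1.0, i0=0, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    x, i, l = x0, i0, 0.0           # position, current edge, reflection process l
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        x += sigma[i] * dw + b[i] * dt
        if x < 0.0:                  # reflection at the vertex:
            l += -x                  # l increases only while x(t) = 0, mimicking
            x = 0.0                  # the condition  int_0^T 1_{x(s)>0} dl(s) = 0
            i = int(rng.choice(len(alpha), p=alpha))  # draw the next edge
    return x, i, l

x_T, i_T, l_T = simulate_junction_sde()
```

In this discrete picture the increments of `l` accumulate exactly the mass pushed back from the negative half-line, which is the role the reflection term plays in (1.1).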


More precisely:

$$J = \big\{ X = (x,i) \,:\, x \in \mathbb{R}_+ \text{ and } i \in \{1,\dots,I\} \big\},$$

where all the points $(0,i)$, $i = 1,\dots,I$, are identified to the vertex denoted by $0$. We can then write

$$J = \bigcup_{i=1}^{I} J_i, \qquad \text{with } J_i := \mathbb{R}_+ \times \{i\} \text{ and } J_i \cap J_j = \{0\} \text{ for } i \ne j.$$

There will be two types of controls. The first are the $\beta_i$ appearing on each edge $J_i^*$ in (1.1); they are classical from the mathematical point of view of control problems. The second are the terms $\alpha_i$ appearing in (1.2), in front of the reflection term $l$, which we will call in the sequel the controls at the junction point; they can be interpreted as the probabilities of moving to another edge as soon as the junction point is reached by the process.

The optimal control consists in minimizing the cost

EPh I X i=1 Z T t 1n x(u),i(u)  ∈J∗ i

ohi(u, x(u), βi(u))du + Z T

t

h0(u, α1(u) . . . αI(u))dl(u) + g(XT)

i ,

with cost hi on each edge Ji∗, cost h0 at the junction point, and the terminal condition g.

The value function $v$ associated with this problem satisfies (at least formally) the following backward Hamilton-Jacobi equation on the junction, with nonlinear Kirchhoff condition at the vertex

$$\begin{cases} \partial_t u_i(t,x) + H_i(t,x,u_i(t,x),\partial_x u_i(t,x),\partial_{x,x} u_i(t,x)) = 0, & \text{if } (t,x) \in (0,T)\times(0,+\infty), \\ F(t,u(t,0),\partial_x u(t,0)) = 0, & \text{if } t \in (0,T), \\ u(T,\cdot) = g(\cdot), \end{cases} \tag{1.3}$$


where

$$H_i(t,x,u,p,S) = \inf_{k_i \in K_i} \Big\{ \tfrac{1}{2}\sigma_i^2(t,x,k_i)\,S + b_i(t,x,k_i)\,p + h_i(t,x,k_i) \Big\},$$

$$F(t,u,p) = \inf_{\alpha=(\alpha_1,\dots,\alpha_I) \in A_0} \Big\{ \sum_{i=1}^{I} \alpha_i p_i + h_0(t,\alpha_1,\dots,\alpha_I) \Big\}.$$
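Since both $H_i$ and $F$ are infima of expressions that are affine in the state derivatives, they can be approximated by brute force over finite control grids. The sketch below is a hypothetical discretisation: the grid `K`, the simplex grid `A0`, and all coefficient functions are made up for illustration.

```python
import numpy as np

# Brute-force evaluation of the Hamiltonian H and the vertex operator F.
def hamiltonian(S, p, K, sigma, b, h):
    # H(t,x,u,p,S) = inf_{k in K} { 0.5*sigma(k)^2 * S + b(k)*p + h(k) }
    return min(0.5 * sigma(k) ** 2 * S + b(k) * p + h(k) for k in K)

def vertex_operator(p_vec, A0, h0):
    # F(t,u,p) = inf_{alpha in A0} { sum_i alpha_i * p_i + h0(alpha) }
    return min(float(np.dot(a, p_vec)) + h0(a) for a in A0)

K = np.linspace(0.0, 1.0, 3)                 # toy control grid {0, 0.5, 1}
H_val = hamiltonian(S=2.0, p=0.0, K=K,
                    sigma=lambda k: k, b=lambda k: 0.0,
                    h=lambda k: (k - 1.0) ** 2)
A0 = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]    # coarse grid of the 2-simplex
F_val = vertex_operator((1.0, 2.0), A0, h0=lambda a: 0.0)
```

With these toy coefficients the integrand of `H_val` is $k^2 + (k-1)^2$, minimised at $k = 0.5$, and `F_val` routes all mass to the edge with the smallest derivative.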

Let us recall that the concept of ramified spaces and the analysis of (linear) partial differential equations on these spaces were first introduced by Nikol'skii [34] and Lumer [31, 32]. They are naturally associated with stochastic processes as in (1.1) living on graphs. These processes were introduced in the seminal papers [15] and [16], where they appear as the singular perturbation of Hamiltonian systems. Since then there has been a large literature on the subject: see for instance the survey paper [27] on similar (multi-dimensional) stochastic systems and their interpretations.

On the PDE side, there have been several works on linear and quasilinear parabolic equations of the form (1.3), with more general Hamiltonians on the edges. For linear equations, von Below [38] shows that, under natural smoothness and compatibility conditions, linear boundary value problems on a network with a linear Kirchhoff condition are well-posed and enjoy a strong maximum principle. In [40] he also studies the classical global solvability for a class of semilinear parabolic equations on ramified networks, where a dynamical node condition is prescribed. Still in the linear setting, another approach, yielding similar existence results, was developed by Fijavz, Mugnolo and Sikolya in [12]: the idea is to use semi-group theory as well as variational methods to understand how the spectrum of the operator is related to the structure of the network.

Equations of the form (1.3) can also be analyzed in terms of viscosity solutions. The first results on viscosity solutions for Hamilton-Jacobi equations on networks were obtained by Schieborn in [36] for the Eikonal equations, and were later discussed in many contributions on first order problems [8, 20, 28], elliptic equations [29] and second order problems with vanishing diffusion at the vertex [30]. Let us finally quote the very recent paper [2], which discusses the well-posedness of stationary Hamilton-Jacobi equations on a network and builds solutions to ergodic mean field game systems on this network. The same authors will, in a forthcoming work, treat the finite horizon MFGs in the nonstationary case. Still in the theory of MFGs on networks in the nonstationary case, we refer to the


following recent thesis [9], where the author studies an MFG PDE system on a junction, with linear Kirchhoff condition at the junction point, and builds weak solutions of the system in Sobolev spaces, with Lipschitz Hamiltonians.

The main reason for studying equation (1.3) is the optimal control of a diffusion living on the junction. Control problems on stratified domains or networks have already been well-studied in the literature, most often for first order problems, and we refer for instance to [3], [4], [5], [17], [1], [33]. On the other hand, for stochastic control problems with reflection and controllability at the boundary, we refer to [11], where the author studies optimal reflection with some applications to financial markets.

Let us recall the main difficulties and motivations that we have faced in this thesis.

First of all, to study a control problem with control at the junction point, which involves the behavior of the process $l(\cdot)$ given in (1.2), we cannot consider second order terms vanishing at the junction, since the quadratic variation of $l(\cdot)$ is centrally related to the ellipticity assumption. From a PDE point of view, we will study the quasi linear case since, in the literature, the viscosity framework has seldom been considered, and a comparison theorem for problem (1.3) with non vanishing viscosity at the junction point is still an open problem. Moreover, in the quasi linear setting, classical solutions and their uniqueness have been proved in [40], but with a dynamical Neumann condition at the junction point, which cannot be used for our problem given the generator appearing in (1.2). The same author proves the existence and uniqueness of a solution when the Neumann boundary condition is non dynamical, but with viscosity vanishing at the junction point. The proof uses classical fixed point arguments, and this method is not well adapted to our problem. This is why we will consider an elliptic scheme, the non degenerate elliptic problem being well understood in the literature.

The second main difficulty in studying a control problem at the junction is to obtain the existence of a diffusion with measurable, time dependent coefficients. This is the main motivation of Chapter 3, where we propose another proof of the existence of a diffusion, since the semi-group techniques used in [15] are not adapted to this context. The main problem we face is that we are not able to allow measurable coefficients $\alpha_i$ at the junction point, and this is the key point to formulate a verification


theorem in the theory of stochastic control. However, in the last Chapter 4, we prove the dynamic programming principle, adapting classical compactification arguments to our problem, which is a good starting point. We obtain several technical results that will be used in future work to improve this theory.

Our contributions to the topic are the following:

• Well-posedness of the Hamilton-Jacobi equation (1.3), in the time dependent and uniformly parabolic setting.

In Chapter 2, we prove the existence of a classical solution to the quasi linear equation (1.3). Our main assumptions are that the equation is uniformly parabolic with smooth coefficients and that the term F = F(u, p) at the junction is either decreasing with respect to u or increasing with respect to p. Note that, in contrast with most previous works, we assume that the diffusion is not degenerate (in particular at the junction) and of evolution type. We also prove a comparison principle, from which we derive the uniqueness of the solution.

The main idea of the proof is to use a time discretization, exploiting at each step the solvability of the associated elliptic problem:

$$\begin{cases} -\sigma_i(x,\partial_x u_i(x))\,\partial_{x,x} u_i(x) + H_i(x,u_i(x),\partial_x u_i(x)) = 0, \\ F(u(0),\partial_x u(0)) = 0. \end{cases} \tag{1.4}$$

• Construction of a solution to the diffusion (1.1).

Before discussing the optimal control problem (1.3), we show how to build a stochastic process of the form (1.1). Let us recall that in [16], this process is built by semi-group techniques.

The aim of Chapter 3 is to provide a different and more intuitive method for the construction of the diffusion (1.1). We explain this construction in the time-independent framework, where the coefficients of the diffusion at the junction $\alpha = (\alpha_i)$ are fixed.

Our idea is to build the process as the limit, as the small parameter $\delta$ tends to 0, of processes which jump to position $\delta$ on a branch as soon as they touch the junction point 0. The branch $i$ is chosen randomly (and independently of the process) with probability $\alpha_i$. We also describe the process $l$ as the limit of the quadratic variation


of $X$ over the times spent in the neighborhood of 0.
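The jump construction sketched above can be mimicked numerically. In the toy script below, a Brownian particle that touches the vertex is restarted at distance `delta` on an edge drawn with probability $\alpha_i$; all numerical parameters and the function name `delta_jump_process` are illustrative assumptions, not quantities from the thesis.

```python
import numpy as np

# Toy version of the delta-jump approximation of the diffusion on a junction.
def delta_jump_process(delta=0.05, T=2.0, dt=1e-4, alpha=(0.4, 0.6), seed=1):
    rng = np.random.default_rng(seed)
    x, edge = delta, 0
    entries = np.zeros(len(alpha))   # number of entries into each edge from 0
    for _ in range(int(T / dt)):
        x += rng.normal(0.0, np.sqrt(dt))
        if x <= 0.0:                 # the particle touched the junction point
            edge = int(rng.choice(len(alpha), p=alpha))
            entries[edge] += 1
            x = delta                # restart at distance delta on that edge
    return x, edge, entries

x_T, edge_T, entries = delta_jump_process()
```

As $\delta \to 0$ one expects the empirical entry frequencies `entries / entries.sum()` to approach $(\alpha_i)$, illustrating the interpretation of the $\alpha_i$ as routing probabilities at the vertex.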

• Dynamic programming principle for the optimal control problem (1.3).

In Chapter 4, we study stochastic control problems at the junction, with control at the junction point. This kind of problem has not been studied before and our main result is a dynamic programming principle. For this we follow the classical strategy of proof introduced in [25]. We use a weak formulation of the problem, the main novelty (as well as the main difficulty) being to treat the reflection at the junction point, which is responsible for the process $l$ in (1.1). Our first main step is to prove the compactness of the class of admissible controls. Then we show stability properties of the set of controls by conditioning and concatenation at stopping times, from which we derive the dynamic programming principle.

Besides this introduction, the thesis is organized in three chapters: in the first one we analyze the Hamilton-Jacobi equation with a Neumann boundary condition at the junction; in the second one we build a solution to the reflected process; the last one is dedicated to optimal control problems on the junction.

Different topics can be treated in future work, such as the viscosity theory for equations of type (1.3), the verification theorem for the stochastic problem, and applications to MFGs.


Chapter 2

Quasi linear parabolic PDE in a junction with non linear Neumann vertex condition

This chapter is based on a paper written under the supervision of P. Cardaliaguet and submitted for publication: "I. Wahbi. Quasi linear parabolic PDE in a junction, with non linear Neumann boundary condition. arXiv:1807.04032, 2018," [41].

2.1 Introduction

In this Chapter, we study non degenerate quasi linear parabolic partial differential equations on a junction, satisfying a non linear Neumann boundary condition at the junction point $x = 0$:

$$\begin{cases} \partial_t u_i(t,x) - \sigma_i(x,\partial_x u_i(t,x))\,\partial^2_{x,x} u_i(t,x) + H_i(x,u_i(t,x),\partial_x u_i(t,x)) = 0, \\ \qquad \text{for all } x > 0 \text{ and all } i \in \{1,\dots,I\}, \\ F(u(t,0),\partial_x u(t,0)) = 0. \end{cases} \tag{2.1}$$

The well-known Kirchhoff law corresponds to the case where F is linear in ∂xu and

independent of u.


Ramified spaces and the analysis of partial differential equations on these spaces have attracted a lot of attention in the last 30 years. As explained in [34], the main motivations are applications in physics, chemistry, and biology (for instance small transverse vibrations in a grid of strings, vibrations of a grid of beams, drainage systems, electrical equations with Kirchhoff law, the wave equation, the heat equation, ...). Linear diffusions of the form (2.1), with a Kirchhoff law, are also naturally associated with stochastic processes living on graphs. These processes were introduced in the seminal papers [15] and [16]. Another motivation for studying (2.1) is the analysis of associated stochastic optimal control problems with a control at the junction. The results of this Chapter will allow us in a future work to characterize the value function of such problems.

There have been several works on linear and quasilinear parabolic equations of the form (2.1). For linear equations, von Below [38] shows that, under natural smoothness and compatibility conditions, linear boundary value problems on a network with a linear Kirchhoff condition at the vertex point are well-posed. The proof consists mainly in showing that the initial boundary value problem on a junction is equivalent to a well-posed initial boundary value problem for a parabolic system, where the boundary conditions are such that the classical results on linear parabolic equations [26] can be applied. The same author investigates in [39] the strong maximum principle for semi linear parabolic operators with Kirchhoff condition, while in [40] he studies the classical global solvability for a class of semilinear parabolic equations on ramified networks, where a dynamical node condition is prescribed: namely, the Neumann condition at the junction point $x = 0$ in (2.1) is replaced by the dynamic one

∂tu(t, 0) + F (t, u(t, 0), ∂xu(t, 0)) = 0.

In this way the application of the classical estimates for domains established in [26] becomes possible. The author then establishes the classical solvability in the class $C^{1+\alpha,2+\alpha}$, with the aid of the Leray–Schauder principle and the maximum principle of [39]. Let us note that this kind of proof fails for equation (2.1), because in this case one cannot expect a uniform bound for the term $|\partial_t u(t,0)|$ (the proof of Lemma 3.1 of [26] VI.3 fails). Still

in the linear setting, another approach, yielding similar existence results, was developed by Fijavz, Mugnolo and Sikolya in [12]: the idea is to use semi-group theory as well as


variational methods to understand how the spectrum of the operator is related to the structure of the network.

Equations of the form (2.1) can also be analyzed in terms of viscosity solutions. The first results on viscosity solutions for Hamilton-Jacobi equations on networks were obtained by Schieborn in [36] for the Eikonal equations, and were later discussed in many contributions on first order problems [8, 20, 28], elliptic equations [29] and second order problems with vanishing diffusion at the vertex [30].

In contrast, second order Hamilton-Jacobi equations with a non vanishing viscosity at the boundary have seldom been studied in the literature, and our aim is to show the well-posedness of classical solutions for (2.1) in suitable Hölder spaces: see Theorem 2.2.2 for the existence and Theorem 2.2.4 for the comparison, and thus the uniqueness. Our main assumptions are that the equation is uniformly parabolic with smooth coefficients and that the term F = F(u, p) at the junction is either decreasing with respect to u or increasing with respect to p.

The main idea of the proof is to use a time discretization, exploiting at each step the solvability in $C^{2+\alpha}$ of the elliptic problem

$$\begin{cases} -\sigma_i(x,\partial_x u_i(x))\,\partial^2_{x,x} u_i(x) + H_i(x,u_i(x),\partial_x u_i(x)) = 0, \\ F(u(0),\partial_x u(0)) = 0. \end{cases} \tag{2.2}$$
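As a toy illustration of this elliptic step, consider the simplest linear one-edge instance $-\sigma u'' + u = 1$ on $(0,a)$, with $u'(0) = 0$ (a degenerate one-edge version of the vertex condition) and $u(a) = 0$, whose exact solution is $u(x) = 1 - \cosh(x/\sqrt{\sigma})/\cosh(a/\sqrt{\sigma})$. The finite-difference sketch below is a hypothetical discretization, not the scheme of the thesis:

```python
import numpy as np

# Solve -sigma*u'' + u = 1 on (0, a), u'(0) = 0, u(a) = 0, by finite differences.
def solve_edge(sigma=0.5, a=1.0, n=400):
    h = a / n
    x = np.linspace(0.0, a, n + 1)
    A = np.zeros((n + 1, n + 1))
    rhs = np.ones(n + 1)
    for j in range(1, n):                      # interior second-order stencil
        A[j, j - 1] = A[j, j + 1] = -sigma / h**2
        A[j, j] = 2.0 * sigma / h**2 + 1.0
    # Neumann at x = 0 via a mirror ghost point (u_{-1} = u_1, second order):
    A[0, 0] = 2.0 * sigma / h**2 + 1.0
    A[0, 1] = -2.0 * sigma / h**2
    A[n, n] = 1.0                              # Dirichlet at x = a
    rhs[n] = 0.0
    return x, np.linalg.solve(A, rhs)

x, u = solve_edge()
```

The scheme is second-order accurate, so on this grid the numerical solution agrees with the closed-form one to well below $10^{-3}$.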

The Chapter is organized as follows. In Section 2.2, we introduce the notations and state our main results. In Section 2.3, we review the main results on existence and uniqueness for the elliptic problem (2.2). Finally, Section 2.4 is dedicated to the proof of our main results.


2.2 Main results

In this section we state our main result, Theorem 2.2.2, on the solvability of the parabolic problem with Neumann boundary condition at the vertex, on a bounded junction:

$$\begin{cases} \partial_t u_i(t,x) - \sigma_i(x,\partial_x u_i(t,x))\,\partial_{x,x} u_i(t,x) + H_i(x,u_i(t,x),\partial_x u_i(t,x)) = 0, & \text{if } (t,x) \in (0,T)\times(0,a_i),\ \forall i \in \{1,\dots,I\}, \\ F(u(t,0),\partial_x u(t,0)) = 0, & \text{if } t \in [0,T), \\ u_i(t,a_i) = \phi_i(t), & \text{if } t \in [0,T],\ \forall i \in \{1,\dots,I\}, \\ u_i(0,x) = g_i(x), & \text{if } x \in [0,a_i]. \end{cases} \tag{2.3}$$

There will be two typical assumptions for F = F (u, p): either F is decreasing with respect to u or F is increasing with respect to p (Kirchhoff conditions).

2.2.1 Notations and preliminary results

Let us start by introducing the main notation used in this Chapter, as well as an interpolation result.

Let $I \in \mathbb{N}^*$ be the number of edges, and $a = (a_1,\dots,a_I) \in (0,\infty)^I$ the lengths of the edges.

The bounded junction is defined by

$$J^a = \big\{ X = (x,i) \,:\, x \in [0,a_i] \text{ and } i \in \{1,\dots,I\} \big\},$$

where all the points $(0,i)$, $i = 1,\dots,I$, are identified to the vertex denoted by $0$. We can then write

$$J^a = \bigcup_{i=1}^{I} J_i^{a_i}, \qquad \text{with } J_i^{a_i} := [0,a_i]\times\{i\} \text{ and } J_i^{a_i} \cap J_j^{a_j} = \{0\}.$$

For $T > 0$, the time-space domain $J^a_T$ is defined by $J^a_T = (0,T) \times J^a$.


The interior of $J^a_T$ minus the junction point $0$ is denoted by $\mathring{J}^a_T$, and is defined by

$$\mathring{J}^a_T = (0,T) \times \Big( \bigcup_{i=1}^{I} \mathring{J}_i^{a_i} \Big).$$

For the functional spaces that will be used in the sequel, we use the notations of Chapter 1.1 of [26]. For the convenience of the reader, we recall these notations in Appendix A.

In addition we introduce the parabolic Hölder space on the junction $\big( C^{\frac{l}{2},l}(J^a_T), \|\cdot\|_{C^{\frac{l}{2},l}(J^a_T)} \big)$ and the space $C^{\frac{l}{2},l}_b(\mathring{J}^a_T)$, defined by (where $l > 0$; see Appendix A for more details)

$$C^{\frac{l}{2},l}(J^a_T) := \Big\{ f : J^a_T \to \mathbb{R},\ (t,(x,i)) \mapsto f_i(t,x) \,:\, \forall (i,j) \in \{1,\dots,I\}^2,\ \forall t \in (0,T),\ f_i(t,0) = f_j(t,0) = f(t,0);\ \forall i \in \{1,\dots,I\},\ (t,x) \mapsto f_i(t,x) \in C^{\frac{l}{2},l}([0,T]\times[0,a_i]) \Big\},$$

$$C^{\frac{l}{2},l}_b(\mathring{J}^a_T) := \Big\{ f : J^a_T \to \mathbb{R},\ (t,(x,i)) \mapsto f_i(t,x) \,:\, \forall i \in \{1,\dots,I\},\ (t,x) \mapsto f_i(t,x) \in C^{\frac{l}{2},l}_b((0,T)\times(0,a_i)) \Big\},$$

with

$$\|u\|_{C^{\frac{l}{2},l}(J^a_T)} = \sum_{1 \le i \le I} \|u_i\|_{C^{\frac{l}{2},l}([0,T]\times[0,a_i])}.$$

We will use the same notations, when the domain does not depend on time, namely T = 0, ΩT = Ω, removing the dependence on the time variable.

We continue with the definition of nondecreasing maps $F : \mathbb{R}^I \to \mathbb{R}$.

For $x = (x_1,\dots,x_I)$, $y = (y_1,\dots,y_I) \in \mathbb{R}^I$, we say that

$$x \le y, \quad \text{if } \forall i \in \{1,\dots,I\},\ x_i \le y_i,$$

and

$$x < y, \quad \text{if } \forall i \in \{1,\dots,I\},\ x_i < y_i.$$


We say that $F \in C(\mathbb{R}^I, \mathbb{R})$ is nondecreasing if

$$\forall (x,y) \in \mathbb{R}^I \times \mathbb{R}^I, \quad \text{if } x \le y, \text{ then } F(x) \le F(y),$$

and increasing if

$$\forall (x,y) \in \mathbb{R}^I \times \mathbb{R}^I, \quad \text{if } x < y, \text{ then } F(x) < F(y).$$

Next we recall an interpolation inequality, which will be useful in the sequel.

Lemma 2.2.1. Suppose that $u \in C^{0,1}([0,T]\times[0,R])$ satisfies a Hölder condition in $t$ on $[0,T]\times[0,R]$, with exponent $\alpha \in (0,1]$ and constant $\nu_1$, and has a derivative $\partial_x u$ which, for any $t \in [0,T]$, is Hölder continuous in the variable $x$, with exponent $\gamma \in (0,1]$ and constant $\nu_2$. Then the derivative $\partial_x u$ satisfies on $[0,T]\times[0,R]$ a Hölder condition in $t$, with exponent $\frac{\alpha\gamma}{1+\gamma}$ and a constant depending only on $\nu_1, \nu_2, \gamma$. More precisely,

$$\forall (t,s) \in [0,T]^2,\ |t-s| \le 1,\ \forall x \in [0,R], \quad |\partial_x u(t,x) - \partial_x u(s,x)| \le \bigg( 2\nu_2 \Big( \frac{\nu_1}{\gamma\nu_2} \Big)^{\frac{\gamma}{1+\gamma}} + 2\nu_1 \Big( \frac{\gamma\nu_2}{\nu_1} \Big)^{\frac{1}{1+\gamma}} \bigg) |t-s|^{\frac{\alpha\gamma}{1+\gamma}}.$$

This is a special case of Lemma II.3.1 in [26] (see also [34]). The main difference is that we are able to get global Hölder regularity on $[0,T]\times[0,R]$ for $\partial_x u$ in its first variable. Let us recall that this kind of result fails in higher dimensions.

Proof. Let $(t,s) \in [0,T]^2$, with $|t-s| \le 1$, and $x \in [0,R]$. Suppose first that $x \in [0,\frac{R}{2}]$. Let $y \in [0,R]$, with $y \ne x$. We write

$$\partial_x u(t,x) - \partial_x u(s,x) = \frac{1}{y-x} \int_x^y \big( \partial_x u(t,x) - \partial_x u(t,z) \big) + \big( \partial_x u(t,z) - \partial_x u(s,z) \big) + \big( \partial_x u(s,z) - \partial_x u(s,x) \big)\, dz.$$

Using the Hölder condition in time satisfied by $u$, we have

$$\bigg| \frac{1}{y-x} \int_x^y \big( \partial_x u(t,z) - \partial_x u(s,z) \big)\, dz \bigg| \le \frac{2\nu_1 |t-s|^{\alpha}}{|y-x|}.$$

On the other hand, using the Hölder regularity in space satisfied by $\partial_x u$, we have

$$\bigg| \frac{1}{y-x} \int_x^y \big( \partial_x u(t,x) - \partial_x u(t,z) \big) + \big( \partial_x u(s,z) - \partial_x u(s,x) \big)\, dz \bigg| \le 2\nu_2 |y-x|^{\gamma}.$$

It follows that

$$|\partial_x u(t,x) - \partial_x u(s,x)| \le 2\nu_2 |y-x|^{\gamma} + \frac{2\nu_1 |t-s|^{\alpha}}{|y-x|}.$$

Assuming that $|t-s| \le \big( (\frac{3R}{2})^{1+\gamma} \frac{\gamma\nu_2}{\nu_1} \big)^{\frac{1}{\alpha}} \wedge 1$, and minimizing the right hand side over $y \in [0,R]$, $y > x$, we get that the infimum is reached at

$$y^* = x + \Big( \frac{\nu_1 |t-s|^{\alpha}}{\gamma\nu_2} \Big)^{\frac{1}{1+\gamma}},$$

and then

$$|\partial_x u(t,x) - \partial_x u(s,x)| \le C(\nu_1,\nu_2,\gamma)\, |t-s|^{\frac{\alpha\gamma}{1+\gamma}},$$

where the constant $C(\nu_1,\nu_2,\gamma)$ depends only on the data $(\nu_1,\nu_2,\gamma)$, and is given by

$$C(\nu_1,\nu_2,\gamma) = 2\nu_2 \Big( \frac{\nu_1}{\gamma\nu_2} \Big)^{\frac{\gamma}{1+\gamma}} + 2\nu_1 \Big( \frac{\gamma\nu_2}{\nu_1} \Big)^{\frac{1}{1+\gamma}}.$$

For the cases $y < x$ and $x \in [\frac{R}{2}, R]$, we argue similarly, which completes the proof. □

2.2.2 Assumptions and main results

We state in this subsection the central Theorem of this Chapter, namely the solvability and uniqueness of (2.3) in the class $C^{\frac{\alpha}{2},1+\alpha}(J^a_T) \cap C^{1+\frac{\alpha}{2},2+\alpha}_b(\mathring{J}^a_T)$. In the rest of this Chapter we fix $\alpha \in (0,1)$.

Let us state the assumptions under which we will work.

We introduce the following data:

$$F \in C^0(\mathbb{R}\times\mathbb{R}^I, \mathbb{R}), \qquad g \in C^1(J^a) \cap C^2_b(\mathring{J}^a),$$

and for each $i \in \{1,\dots,I\}$,

$$\sigma_i \in C^1([0,a_i]\times\mathbb{R}, \mathbb{R}), \qquad H_i \in C^1([0,a_i]\times\mathbb{R}^2, \mathbb{R}), \qquad \phi_i \in C^1([0,T], \mathbb{R}).$$

We suppose furthermore that the data satisfy the following assumption (P):

(i) Assumption on F: either

a) F is decreasing with respect to its first variable,
b) F is nondecreasing with respect to its second variable,
c) $\exists (b,B) \in \mathbb{R}\times\mathbb{R}^I$, $F(b,B) = 0$,

or F satisfies the Kirchhoff condition

a) F is nonincreasing with respect to its first variable,
b) F is increasing with respect to its second variable,
c) $\exists (b,B) \in \mathbb{R}\times\mathbb{R}^I$, $F(b,B) = 0$.

We suppose moreover that there exists a parameter $m \in \mathbb{R}$, $m \ge 2$, such that:

(ii) The (uniform) ellipticity condition on the $(\sigma_i)_{i\in\{1,\dots,I\}}$: there exist strictly positive constants $\underline{\nu}, \overline{\nu}$ such that

$$\forall i \in \{1,\dots,I\},\ \forall (x,p) \in [0,a_i]\times\mathbb{R}, \qquad \underline{\nu}(1+|p|)^{m-2} \le \sigma_i(x,p) \le \overline{\nu}(1+|p|)^{m-2}.$$

(iii) The growth of the $(H_i)_{i\in\{1,\dots,I\}}$ with respect to $p$ exceeds the growth of the $\sigma_i$ with respect to $p$ by at most a factor $(1+|p|)^2$: there exists a continuous nondecreasing function $\mu$ such that

$$\forall i \in \{1,\dots,I\},\ \forall (x,u,p) \in [0,a_i]\times\mathbb{R}^2, \qquad |H_i(x,u,p)| \le \mu(|u|)(1+|p|)^m.$$

(iv) We impose the following restrictions on the growth with respect to $p$ of the derivatives of the coefficients $(\sigma_i, H_i)_{i\in\{1,\dots,I\}}$: for all $i \in \{1,\dots,I\}$,

a) $|\partial_p \sigma_i|_{[0,a_i]\times\mathbb{R}^2}\,(1+|p|)^2 + |\partial_p H_i|_{[0,a_i]\times\mathbb{R}^2} \le \gamma(|u|)(1+|p|)^{m-1}$,

b) $|\partial_x \sigma_i|_{[0,a_i]\times\mathbb{R}^2}\,(1+|p|)^2 + |\partial_x H_i|_{[0,a_i]\times\mathbb{R}^2} \le \big( \varepsilon(|u|) + P(|u|,|p|) \big)(1+|p|)^{m+1}$,

c) $\forall (x,u,p) \in [0,a_i]\times\mathbb{R}^2$, $-C_H \le \partial_u H_i(x,u,p) \le \big( \varepsilon(|u|) + P(|u|,|p|) \big)(1+|p|)^m$,

where $\gamma$ and $\varepsilon$ are continuous nonnegative increasing functions, $P$ is a continuous function, increasing with respect to its first variable, which tends to 0 as $p \to +\infty$, uniformly with respect to its first variable on $[0,u_1]$ for any $u_1 \in \mathbb{R}$, and $C_H$ is a strictly positive constant. We assume that $(\gamma, \varepsilon, P, C_H)$ are independent of $i \in \{1,\dots,I\}$.

(v) Compatibility conditions for $g$ and $(\phi_i)_{i\in\{1,\dots,I\}}$:

$$F(g(0), \partial_x g(0)) = 0; \qquad \forall i \in \{1,\dots,I\},\ g_i(a_i) = \phi_i(0).$$

Theorem 2.2.2. Assume (P). Then system (2.3) is uniquely solvable in the class $C^{\frac{\alpha}{2},1+\alpha}(J^a_T) \cap C^{1+\frac{\alpha}{2},2+\alpha}_b(\mathring{J}^a_T)$. There exist constants $(M_1, M_2, M_3)$, depending only on the data introduced in assumption (P),

$$M_1 = M_1\Big( \max_{i\in\{1,\dots,I\}} \Big\{ \sup_{x\in(0,a_i)} \big| -\sigma_i(x,\partial_x g_i(x))\,\partial^2_x g_i(x) + H_i(x,g_i(x),\partial_x g_i(x)) \big| + |\partial_t \phi_i|_{(0,T)} \Big\},\ \max_{i\in\{1,\dots,I\}} |g_i|_{(0,a_i)},\ C_H \Big),$$

$$M_2 = M_2\Big( \underline{\nu},\ \overline{\nu},\ \mu(M_1),\ \gamma(M_1),\ \varepsilon(M_1),\ \sup_{|p|\ge 0} P(M_1,|p|),\ |\partial_x g_i|_{(0,a_i)},\ M_1 \Big),$$

$$M_3 = M_3\Big( M_1,\ \underline{\nu}(1+|p|)^{m-2},\ \mu(|u|)(1+|p|)^m,\ |u| \le M_1,\ |p| \le M_2 \Big),$$

such that $\|u\|_{C(J^a_T)} \le M_1$, $\|\partial_x u\|_{C(J^a_T)} \le M_2$ and $\|\partial_{x,x} u\|_{C(\mathring{J}^a_T)} \le M_3$.

Moreover, there exists a constant $M(\alpha)$, depending on $(\alpha, M_1, M_2, M_3)$, such that

$$\|u\|_{C^{\frac{\alpha}{2},1+\alpha}(J^a_T)} \le M(\alpha).$$

We continue this Section by giving the definitions of super and sub solutions, and by stating a comparison Theorem for our problem.

Definition 2.2.3. We say that $u \in C^{0,1}(J^a_T) \cap C^{1,2}(\mathring{J}^a_T)$ is a super solution (resp. sub solution) of

$$\begin{cases} \partial_t u_i(t,x) - \sigma_i(x,\partial_x u_i(t,x))\,\partial_{x,x} u_i(t,x) + H_i(x,u_i(t,x),\partial_x u_i(t,x)) = 0, & \text{if } (t,x) \in (0,T)\times(0,a_i), \\ F(u(t,0),\partial_x u(t,0)) = 0, & \text{if } t \in (0,T), \end{cases} \tag{2.4}$$

if

$$\begin{cases} \partial_t u_i(t,x) - \sigma_i(x,\partial_x u_i(t,x))\,\partial_{x,x} u_i(t,x) + H_i(x,u_i(t,x),\partial_x u_i(t,x)) \ge 0 \ (\text{resp. } \le 0), & \forall (t,x) \in (0,T)\times(0,a_i), \\ F(u(t,0),\partial_x u(t,0)) \le 0 \ (\text{resp. } \ge 0), & \forall t \in (0,T). \end{cases}$$

Theorem 2.2.4 (Parabolic comparison). Assume (P). Let $u \in C^{0,1}(J^a_T) \cap C^{1,2}_b(\mathring{J}^a_T)$ be a super solution (resp. $v \in C^{0,1}(J^a_T) \cap C^{1,2}_b(\mathring{J}^a_T)$ a sub solution) of (2.4), satisfying for all $i \in \{1,\dots,I\}$: $u_i(t,a_i) \ge v_i(t,a_i)$ for all $t \in [0,T]$, and $u_i(0,x) \ge v_i(0,x)$ for all $x \in [0,a_i]$.

Then for each $(t,(x,i)) \in J^a_T$: $u_i(t,x) \ge v_i(t,x)$.

Proof. We start by showing that for each $0 \le s < T$, for all $(t,(x,i)) \in J^a_s$, $u_i(t,x) \ge v_i(t,x)$.

Let $\lambda > 0$. Suppose that $\lambda > C_1 + C_2$, where the expressions of the constants $(C_1, C_2)$ are given in the sequel (see (2.5) and (2.6)). We argue by contradiction, assuming that

$$\sup_{(t,(x,i)) \in J^a_s} \exp(-\lambda t + x)\big( v_i(t,x) - u_i(t,x) \big) > 0.$$


The supremum is then reached at a point $(t_0,(x_0,j_0)) \in (0,s]\times J$, with $0 \le x_0 < a_{j_0}$.

Suppose first that $x_0 > 0$. The optimality conditions imply that

$$\exp(-\lambda t_0 + x_0)\Big( -\lambda\big(v_{j_0}(t_0,x_0) - u_{j_0}(t_0,x_0)\big) + \partial_t v_{j_0}(t_0,x_0) - \partial_t u_{j_0}(t_0,x_0) \Big) \ge 0,$$

$$\exp(-\lambda t_0 + x_0)\Big( v_{j_0}(t_0,x_0) - u_{j_0}(t_0,x_0) + \partial_x v_{j_0}(t_0,x_0) - \partial_x u_{j_0}(t_0,x_0) \Big) = 0,$$

$$\exp(-\lambda t_0 + x_0)\Big( v_{j_0}(t_0,x_0) - u_{j_0}(t_0,x_0) + 2\big(\partial_x v_{j_0}(t_0,x_0) - \partial_x u_{j_0}(t_0,x_0)\big) + \partial_{x,x} v_{j_0}(t_0,x_0) - \partial_{x,x} u_{j_0}(t_0,x_0) \Big)$$
$$= \exp(-\lambda t_0 + x_0)\Big( -\big(v_{j_0}(t_0,x_0) - u_{j_0}(t_0,x_0)\big) + \partial_{x,x} v_{j_0}(t_0,x_0) - \partial_{x,x} u_{j_0}(t_0,x_0) \Big) \le 0.$$

Using assumptions (P) (iv) a), (iv) c) and the optimality conditions above, we have

$$H_{j_0}(x_0, u_{j_0}(t_0,x_0), \partial_x u_{j_0}(t_0,x_0)) - H_{j_0}(x_0, v_{j_0}(t_0,x_0), \partial_x v_{j_0}(t_0,x_0))$$
$$\le \big( v_{j_0}(t_0,x_0) - u_{j_0}(t_0,x_0) \big)\Big( C_H + \gamma\big(|\partial_x v_{j_0}(t_0,x_0)|\big)\big( 1 + |\partial_x u_{j_0}(t_0,x_0)| \vee |\partial_x v_{j_0}(t_0,x_0)| \big)^{m-1} \Big) \le C_1 \big( v_{j_0}(t_0,x_0) - u_{j_0}(t_0,x_0) \big),$$

where

$$C_1 := \max_{i\in\{1,\dots,I\}} \Big\{ \sup_{(t,x)\in[0,T]\times[0,a_i]} \Big\{ C_H + \gamma\big(|\partial_x v_i(t,x)|\big)\big( 1 + |\partial_x u_i(t,x)| \vee |\partial_x v_i(t,x)| \big)^{m-1} \Big\} \Big\}. \tag{2.5}$$

On the other hand, using assumptions (P) (ii), (iv) a), (iv) c) and the optimality conditions, we have

$$\sigma_{j_0}(x_0,\partial_x v_{j_0}(t_0,x_0))\,\partial_{x,x} v_{j_0}(t_0,x_0) - \sigma_{j_0}(x_0,\partial_x u_{j_0}(t_0,x_0))\,\partial_{x,x} u_{j_0}(t_0,x_0)$$
$$\le \big( v_{j_0}(t_0,x_0) - u_{j_0}(t_0,x_0) \big)\Big( \overline{\nu}\big(1 + |\partial_x v_{j_0}(t_0,x_0)|\big)^{m-2} + \big|\partial_{x,x} u_{j_0}(t_0,x_0)\big|\,\gamma\big(|\partial_x u_{j_0}(t_0,x_0)|\big)\big( 1 + |\partial_x u_{j_0}(t_0,x_0)| \vee |\partial_x v_{j_0}(t_0,x_0)| \big)^{m-1} \Big) \le C_2 \big( v_{j_0}(t_0,x_0) - u_{j_0}(t_0,x_0) \big),$$

where

$$C_2 := \max_{i\in\{1,\dots,I\}} \Big\{ \sup_{(t,x)\in[0,T]\times[0,a_i]} \Big\{ \overline{\nu}\big(1+|\partial_x v_i(t,x)|\big)^{m-2} + \big|\partial_{x,x} u_i(t,x)\big|\,\gamma\big(|\partial_x u_i(t,x)|\big)\big( 1 + |\partial_x u_i(t,x)| + |\partial_x v_i(t,x)| \big)^{m-1} \Big\} \Big\}. \tag{2.6}$$

Using now the fact that $v$ is a sub solution while $u$ is a super solution, we get

$$0 \le \partial_t u_{j_0}(t_0,x_0) - \sigma_{j_0}(x_0,\partial_x u_{j_0}(t_0,x_0))\,\partial_{x,x} u_{j_0}(t_0,x_0) + H_{j_0}(x_0,u_{j_0}(t_0,x_0),\partial_x u_{j_0}(t_0,x_0))$$
$$\quad - \partial_t v_{j_0}(t_0,x_0) + \sigma_{j_0}(x_0,\partial_x v_{j_0}(t_0,x_0))\,\partial_{x,x} v_{j_0}(t_0,x_0) - H_{j_0}(x_0,v_{j_0}(t_0,x_0),\partial_x v_{j_0}(t_0,x_0))$$
$$\le -\big( \lambda - (C_1 + C_2) \big)\big( v_{j_0}(t_0,x_0) - u_{j_0}(t_0,x_0) \big) < 0,$$

which is a contradiction. Therefore the supremum is reached at $(t_0, 0)$, with $t_0 \in (0,s]$.

We apply a first order Taylor expansion in space, in the neighborhood of the junction point $0$. Since for all $(i,j) \in \{1,\dots,I\}^2$, $u_i(t_0,0) = u_j(t_0,0) = u(t_0,0)$ and $v_i(t_0,0) = v_j(t_0,0) = v(t_0,0)$, we get from

$$\forall (i,j) \in \{1,\dots,I\}^2,\ \forall h \in \big(0, \min_{i\in\{1,\dots,I\}} a_i\big], \qquad v_j(t_0,0) - u_j(t_0,0) \ge \exp(h)\big( v_i(t_0,h) - u_i(t_0,h) \big),$$

that

$$v_j(t_0,0) - u_j(t_0,0) \ge v_i(t_0,0) - u_i(t_0,0) + h\big( v_i(t_0,0) - u_i(t_0,0) + \partial_x v_i(t_0,0) - \partial_x u_i(t_0,0) \big) + h\,\varepsilon_i(h),$$

where $\lim_{h\to 0} \varepsilon_i(h) = 0$ for all $i \in \{1,\dots,I\}$. We then get

$$\forall i \in \{1,\dots,I\}, \qquad \partial_x v_i(t_0,0) \le \partial_x u_i(t_0,0) - \big( v_i(t_0,0) - u_i(t_0,0) \big) < \partial_x u_i(t_0,0).$$


Using the growth assumptions on $F$ (assumption (P)(i)) and the fact that $v$ is a sub solution while $u$ is a super solution, we get

$$0 \le F(v(t_0,0), \partial_x v(t_0,0)) < F(u(t_0,0), \partial_x u(t_0,0)) \le 0,$$

and then a contradiction.

We then deduce that for all $0 \le s < T$ and all $(t,(x,i)) \in [0,s]\times J^a$,

$$\exp(-\lambda t + x)\big( v_i(t,x) - u_i(t,x) \big) \le 0.$$

Using the continuity of $u$ and $v$, we finally deduce that for all $(t,(x,i)) \in [0,T]\times J^a$, $v_i(t,x) \le u_i(t,x)$. □

2.3 The elliptic problem

As explained in the Introduction (Section 2.1), the construction of a solution for our parabolic problem (2.3) relies on a time discretization and on the solvability of the associated elliptic problem. We review in this section the well-posedness of the elliptic problem

$$\begin{cases} -\sigma_i(x,\partial_x u_i(x))\,\partial_{x,x} u_i(x) + H_i(x,u_i(x),\partial_x u_i(x)) = 0, \\ F(u(0),\partial_x u(0)) = 0, \end{cases} \tag{2.7}$$

which is formulated for regular maps (x, i) ↦ ui(x), continuous at the junction point, namely for each i ≠ j ∈ {1 . . . I}, ui(0) = uj(0) = u(0), that satisfy on each edge

−σi(x, ∂xui(x))∂x,xui(x) + Hi(x, ui(x), ∂xui(x)) = 0,

and the following non linear Neumann boundary condition at the vertex

F (u(0), ∂xu(0)) = 0.

We introduce the following data, for i ∈ {1 . . . I}:

F ∈ C(R × R^I, R),
σi ∈ C1([0, ai] × R, R),
Hi ∈ C1([0, ai] × R², R),
φi ∈ R,

satisfying the following assumptions

Assumption (E)

(i) Assumption on F: either
a) F is decreasing with respect to its first variable,
b) F is nondecreasing with respect to its second variable,
c) ∃(b, B) ∈ R × R^I such that F (b, B) = 0,

or F satisfies the Kirchhoff condition:
a) F is nonincreasing with respect to its first variable,
b) F is increasing with respect to its second variable,
c) ∃(b, B) ∈ R × R^I such that F (b, B) = 0.

(ii) The ellipticity condition on the σi

∃c > 0, ∀i ∈ {1 . . . I}, ∀(x, p) ∈ [0, ai] × R, σi(x, p) ≥ c.

(iii) For the Hamiltonians Hi, we suppose

∃CH > 0, ∀i ∈ {1 . . . I}, ∀(x, u, v, p) ∈ (0, ai) × R3,


For each i ∈ {1 . . . I}, we define the following differential operators (δi, δ̄i)i∈{1...I}, acting on C1([0, ai] × R², R), for f = f (x, u, p), by

δi := ∂u + (1/p)∂x,    δ̄i := p ∂p.

(iv) We impose the following restrictions on the growth with respect to p of the coefficients (σi, Hi)i∈{1...I} = (σi(x, p), Hi(x, u, p))i∈{1...I}, namely for all i ∈ {1 . . . I}

δiσi = o(σi),
δ̄iσi = O(σi),
Hi = O(σi p²),
δiHi ≤ o(σi p²),
δ̄iHi ≤ O(σi p²),

where the limits above are understood as p → +∞, uniformly in x, for bounded u.

The main result of this section is the following Theorem, on the solvability and uniqueness of the elliptic problem at the junction, with non linear Neumann condition at the junction point.
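As a sanity check of these growth conditions, consider the following model pair (our illustration, not taken from [29]), which satisfies (ii)-(iv) for any m ≥ 2:

```latex
\sigma_i(x,p) = (1+|p|)^{m-2}, \qquad H_i(x,u,p) = -\,u\,(1+|p|)^{m}.
```

Indeed δiσi = 0 = o(σi); δ̄iσi = (m−2)|p|(1+|p|)^{m−3} = O(σi); |Hi| ≤ |u|(1+|p|)^m = O(σi p²) for bounded u; δiHi = −(1+|p|)^m ≤ 0 = o(σi p²); and δ̄iHi = −u m |p|(1+|p|)^{m−1} ≤ m|u|(1+|p|)^m = O(σi p²).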

Theorem 2.3.1. Assume (E). The following elliptic problem at the junction, with Neumann boundary condition at the vertex,

−σi(x, ∂xui(x))∂x,xui(x) + Hi(x, ui(x), ∂xui(x)) = 0, if x ∈ (0, ai),
F (u(0), ∂xu(0)) = 0,
∀i ∈ {1 . . . I}, ui(ai) = φi,    (2.8)

is uniquely solvable in the class C^{2+α}(Ja).

Theorem 2.3.1 is stated without proof in [29]. For the convenience of the reader, we sketch its proof in the Appendix B.

The uniqueness of the solution of (2.8) is a consequence of the elliptic comparison Theorem for smooth solutions of the Neumann problem, stated in this Section, whose proof uses the same arguments as the proof of the parabolic comparison Theorem 2.2.4. We complete this section by recalling the definition of super and sub solutions for the elliptic problem (2.8), and the corresponding elliptic comparison Theorem.

Definition 2.3.2. Let u ∈ C2(Ja). We say that u is a super solution (resp. sub solution) of

−σi(x, ∂xfi(x))∂x,xfi(x) + Hi(x, fi(x), ∂xfi(x)) = 0, if x ∈ (0, ai),
F (f (0), ∂xf (0)) = 0,    (2.9)

if

−σi(x, ∂xui(x))∂x,xui(x) + Hi(x, ui(x), ∂xui(x)) ≥ 0 (resp. ≤ 0), if x ∈ (0, ai),
F (u(0), ∂xu(0)) ≤ 0 (resp. ≥ 0).

Theorem 2.3.3. (Elliptic comparison Theorem, see for instance Theorem 2.1 of [29].) Assume (E). Let u ∈ C2(Ja) (resp. v ∈ C2(Ja)) be a super solution (resp. a sub solution) of (2.9), satisfying for all i ∈ {1 . . . I}, ui(ai) ≥ vi(ai). Then for each (x, i) ∈ Ja:

ui(x) ≥ vi(x).

2.4 The parabolic problem

In this Section, we prove Theorem 2.2.2. The construction of the solution is based on the results obtained in Section 2.3 for the elliptic problem, and is done by considering a sequence un∈ C2(Ja), solving on a time grid an elliptic scheme defined by induction. We

will prove that the solution un converges to the required solution.

2.4.1 Estimates on the discretized scheme

Let n ∈ N∗. We consider the time grid (t^n_k = kT/n)_{0≤k≤n} of [0, T ], and the sequence (uk)0≤k≤n of C^{2+α}(Ja) defined recursively by u0 = g,

and, for 1 ≤ k ≤ n, uk is the unique solution of the following elliptic problem

n(ui,k(x) − ui,k−1(x)) − σi(x, ∂xui,k(x))∂x,xui,k(x) + Hi(x, ui,k(x), ∂xui,k(x)) = 0, if x ∈ (0, ai),
F (uk(0), ∂xuk(0)) = 0,
∀i ∈ {1 . . . I}, ui,k(ai) = φi(t^n_k).    (2.10)

The solvability of the elliptic scheme (2.10) can be proved by induction, using the same arguments as for Theorem 2.3.1. The next step consists in obtaining uniform estimates of (uk)0≤k≤n. We start first by getting uniform bounds for n|uk− uk−1|(0,ai) using the comparison Theorem 2.3.3.
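To illustrate how one step of the scheme (2.10) behaves, here is a minimal numerical sketch (ours, not from the text): a single edge [0, a] with the model coefficients σi ≡ 1 and Hi(x, u, p) = u, and Dirichlet data at both ends standing in for the Kirchhoff vertex condition, so each step reduces to a linear tridiagonal solve.

```python
# Minimal sketch of one implicit step of (2.10) on one edge, under the
# assumptions stated above (sigma = 1, H(x, u, p) = u, Dirichlet ends).
import numpy as np

def implicit_step(u_prev, n, a, left, right):
    """Solve n(u - u_prev) - u'' + u = 0 on (0, a), u(0) = left, u(a) = right."""
    J = len(u_prev)
    h = a / (J - 1)
    A = np.zeros((J, J))
    b = np.zeros(J)
    A[0, 0] = 1.0; b[0] = left          # Dirichlet condition at x = 0
    A[-1, -1] = 1.0; b[-1] = right      # Dirichlet condition at x = a
    for j in range(1, J - 1):
        # n*u_j - (u_{j-1} - 2 u_j + u_{j+1})/h^2 + u_j = n*u_prev_j
        A[j, j - 1] = -1.0 / h**2
        A[j, j] = n + 2.0 / h**2 + 1.0
        A[j, j + 1] = -1.0 / h**2
        b[j] = n * u_prev[j]
    return np.linalg.solve(A, b)

a, J, n = 1.0, 51, 100
x = np.linspace(0.0, a, J)
g = np.sin(np.pi * x)                   # initial datum u_0 = g
u1 = implicit_step(g, n, a, 0.0, 0.0)   # first iterate u_1
```

For the genuinely nonlinear coefficients of (2.10) one would wrap this linear solve in a Newton or Picard iteration; the comparison structure of the scheme is already visible here, since 0 ≤ u1 ≤ g.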

Lemma 2.4.1. Assume (P). There exists a constant C > 0, independent of n, depending only on the data,

C = C( max_{i∈{1...I}} { sup_{x∈(0,ai)} |−σi(x, ∂xgi(x))∂x,xgi(x) + Hi(x, gi(x), ∂xgi(x))| + |∂tφi|(0,T) }, CH ),

such that

sup_{n≥0} max_{k∈{1...n}} max_{i∈{1...I}} { n|ui,k − ui,k−1|(0,ai) } ≤ C,

and then

sup_{n≥0} max_{k∈{0...n}} max_{i∈{1...I}} { |ui,k|(0,ai) } ≤ C + max_{i∈{1...I}} { |gi|(0,ai) }.

Proof. Let n > ⌊CH⌋, where CH is defined in assumption (P) (iv) c), and let k ∈ {1 . . . n}. We define the following sequence

M0 = max_{i∈{1...I}} { sup_{x∈(0,ai)} |−σi(x, ∂xgi(x))∂x,xgi(x) + Hi(x, gi(x), ∂xgi(x))| + |∂tφi|(0,T) },
Mk,n = n/(n − CH) Mk−1,n, k ∈ {1 . . . n}.

We claim that for each k ∈ {1 . . . n}

max_{i∈{1...I}} { n|ui,k − ui,k−1|(0,ai) } ≤ Mk,n.

We first show that the map defined on the junction by

h : Ja → R, (x, i) ↦ M1,n/n + gi(x),

is a super solution of (2.10) for k = 1. For this we will use the Elliptic Comparison Theorem 2.3.3.

Using the compatibility conditions satisfied by g, namely assumption (P) (v), and the growth assumptions on F, assumption (P) (i), we get for the boundary conditions

F (h(0), ∂xh(0)) = F (M1,n/n + g(0), ∂xg(0)) ≤ F (g(0), ∂xg(0)) = 0,
h(ai) = M1,n/n + gi(ai) ≥ M0/n + gi(ai) ≥ φi(t^n_1).

For all i ∈ {1 . . . I} and x ∈ (0, ai), we get, using assumption (P) (iii),

n(hi(x) − gi(x)) − σi(x, ∂xhi(x))∂x,xhi(x) + Hi(x, hi(x), ∂xhi(x))
= M1,n − σi(x, ∂xgi(x))∂x,xgi(x) + Hi(x, M1,n/n + gi(x), ∂xgi(x))
≥ M1,n − σi(x, ∂xgi(x))∂x,xgi(x) + Hi(x, gi(x), ∂xgi(x)) − M1,nCH/n ≥ 0.

It follows from the comparison Theorem 2.3.3 that for all i ∈ {1 . . . I} and x ∈ [0, ai],

u1,i(x) ≤ M1,n/n + gi(x).

Using the same arguments, we show that

h : Ja → R, (x, i) ↦ −M1,n/n + gi(x),

is a sub solution of (2.10) for k = 1, and we then get

max_{i∈{1...I}} { sup_{x∈(0,ai)} n|u1,i(x) − gi(x)| } ≤ M1,n.

Assume now that the claim holds at rank k − 1. We show that the map

h : Ja → R, (x, i) ↦ Mk,n/n + ui,k−1(x),

is a super solution of (2.10). For the boundary conditions, using assumption (P) (i), we get

F (h(0), ∂xh(0)) = F (Mk,n/n + uk−1(0), ∂xuk−1(0)) ≤ F (uk−1(0), ∂xuk−1(0)) ≤ 0,
h(ai) = Mk,n/n + ui,k−1(ai) ≥ M0/n + ui,k−1(ai) ≥ φi(t^n_k).

For all i ∈ {1 . . . I} and x ∈ (0, ai),

n(hi(x) − ui,k−1(x)) − σi(x, ∂xhi(x))∂x,xhi(x) + Hi(x, hi(x), ∂xhi(x))
= Mk,n − σi(x, ∂xui,k−1(x))∂x,xui,k−1(x) + Hi(x, Mk,n/n + ui,k−1(x), ∂xui,k−1(x))
≥ Mk,n − σi(x, ∂xui,k−1(x))∂x,xui,k−1(x) + Hi(x, ui,k−1(x), ∂xui,k−1(x)) − CHMk,n/n.

Since we have, for all x ∈ (0, ai),

−σi(x, ∂xui,k−1(x))∂x,xui,k−1(x) + Hi(x, ui,k−1(x), ∂xui,k−1(x)) = −n(ui,k−1(x) − ui,k−2(x)),

using the induction assumption we get

n(hi(x) − ui,k−1(x)) − σi(x, ∂xhi(x))∂x,xhi(x) + Hi(x, hi(x), ∂xhi(x))
≥ Mk,n − n(ui,k−1(x) − ui,k−2(x)) − CHMk,n/n
≥ Mk,n(n − CH)/n − Mk−1,n ≥ 0.

It follows from the comparison Theorem 2.3.3 that for all (x, i) ∈ Ja,

ui,k(x) ≤ Mk,n/n + ui,k−1(x).

Using the same arguments, we show that

h : Ja → R, (x, i) ↦ −Mk,n/n + ui,k−1(x),

is a sub solution of (2.10), and we get

max_{i∈{1...I}} { n|ui,k − ui,k−1|(0,ai) } ≤ Mk,n.

We finally obtain, using that for all k ∈ {1 . . . n}

Mk,n ≤ Mn,n,  Mk,n = ( n/(n − CH) )^k M0,

and

Mn,n → M := exp(CH) max_{i∈{1...I}} { sup_{x∈(0,ai)} |−σi(x, ∂xgi(x))∂x,xgi(x) + Hi(x, gi(x), ∂xgi(x))| + |∂tφi|(0,T) }  as n → +∞,

that

sup_{n≥0} max_{k∈{1...n}} max_{i∈{1...I}} { n|ui,k − ui,k−1|(0,ai) } ≤ C,
sup_{n≥0} max_{k∈{0...n}} max_{i∈{1...I}} { |ui,k|(0,ai) } ≤ C + max_{i∈{1...I}} { |gi|(0,ai) }.

That completes the proof. □
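The convergence Mn,n → exp(CH) M0 used above can be checked numerically; a quick sketch (ours, with arbitrary illustrative values for CH and M0):

```python
# The recursion M_{k,n} = n/(n - C_H) * M_{k-1,n} gives
# M_{n,n} = (n/(n - C_H))**n * M_0, which decreases to exp(C_H) * M_0.
import math

def M_nn(n, C_H, M0):
    return (n / (n - C_H)) ** n * M0

C_H, M0 = 1.5, 2.0                       # illustrative values
values = [M_nn(n, C_H, M0) for n in (10, 100, 10000)]
limit = math.exp(C_H) * M0               # the limit exp(C_H) * M_0
```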

The next step consists in obtaining uniform estimates for |∂xuk|(0,ai), in terms of n|uk − uk−1|(0,ai) and the quantities (ν, ν̄, µ, γ, ε, P ) introduced in assumption (P) (ii), (iii) and (iv). More precisely, we use arguments similar to the proof of Theorem 14.1 of [19], with a classical argument of upper and lower barrier functions at the boundary. The growth assumptions (P) (ii) and (iii) are used in a key way to get a uniform bound on the gradient at the boundary. Finally, to conclude, we appeal to a gradient maximum principle, using the growth assumption (P) (iv), adapting Theorem 15.2 of [19] to our elliptic scheme. The basic idea, which goes back as far as Bernstein's work, involves differentiating each quasi linear equation on each edge of (2.10) with respect to x. Thereafter, the maximum principle is applied to the resulting equation for the function |∂xui,k|².
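In its simplest model case (σ ≡ 1, Hamiltonian H = H(p), no scheme term; a schematic of ours, not the full computation of [19]), the Bernstein device reads:

```latex
\partial_{x,x} u = H(\partial_x u), \qquad v := (\partial_x u)^2,
\qquad
\partial_{x,x} v = 2(\partial_{x,x} u)^2 + H'(\partial_x u)\,\partial_x v
\;\ge\; H'(\partial_x u)\,\partial_x v,
```

so v satisfies a linear differential inequality to which the maximum principle applies, and sup v is attained on the boundary; this is exactly where a boundary gradient bound is needed.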

Lemma 2.4.2. Assume (P). There exists a constant C > 0, independent of n, depending only on the data

ν, ν̄, µ(|u|), γ(|u|), ε(|u|), sup_{|p|≥0} P (|u|, |p|), |∂xgi|(0,ai),
|u| ≤ sup_{n≥0} max_{k∈{0...n}} max_{i∈{1...I}} { |ui,k|(0,ai) },
sup_{n≥0} max_{k∈{1...n}} max_{i∈{1...I}} { n|ui,k − ui,k−1|(0,ai) },

such that

sup_{n≥0} max_{k∈{0...n}} max_{i∈{1...I}} { |∂xui,k|(0,ai) } ≤ C.

Proof. Step 1: We claim that, for each k ∈ {1 . . . n}, max_{i∈{1...I}} { |∂xui,k|∂(0,ai) } is bounded by the data, uniformly in n.

It follows from Lemma 2.4.1 that there exists M > 0 such that

sup_{n≥0} max_{k∈{0...n}} max_{i∈{1...I}} { |ui,k|(0,ai) + n|ui,k − ui,k−1|(0,ai) } ≤ M.

We fix i ∈ {1 . . . I}. We apply a barrier method, consisting in building two functions w+i,k, w−i,k satisfying, in a neighborhood of 0, say [0, κ] with κ ≤ ai,

Qi(x, w+i,k(x), ∂xw+i,k(x), ∂x,xw+i,k(x)) ≥ 0, ∀x ∈ [0, κ],  w+i,k(0) = ui,k(0),  w+i,k(κ) ≥ M,
Qi(x, w−i,k(x), ∂xw−i,k(x), ∂x,xw−i,k(x)) ≤ 0, ∀x ∈ [0, κ],  w−i,k(0) = ui,k(0),  w−i,k(κ) ≤ −M,

where we recall that for each (x, u, p, S) ∈ [0, ai] × R³,

Qi(x, u, p, S) = n(u − ui,k−1(x)) − σi(x, p)S + Hi(x, u, p).

For n > ⌊CH⌋, where CH is defined in assumption (P) (iv) c), it then follows from the comparison principle that

w−i,k(x) ≤ ui,k(x) ≤ w+i,k(x), ∀x ∈ [0, κ],

and then, since the three functions coincide at 0,

∂xw−i,k(0) ≤ ∂xui,k(0) ≤ ∂xw+i,k(0).

We look for w+i,k defined on [0, κ] of the form

w+i,0 = gi(x),
w+i,k : x ↦ ui,k(0) + (1/β) ln(1 + θx),

where the constants (β, θ, κ) will be chosen in the sequel, independent of k. Remark first that for all x ∈ [0, κ], ∂x,xw+i,k(x) = −β ∂xw+i,k(x)², and w+i,k(0) = ui,k(0). Let us choose (θ, κ) such that

∀k ∈ {1 . . . n}, 0 < κ ≤ min_{i∈{1...I}} ai,  w+i,k(κ) ≥ M,  ∂xw+i,k(κ) ≥ β.    (2.11)

We choose for instance

θ = ( β² + 1/min_{i∈{1...I}} ai ) exp(2βM ),  κ = (1/θ)( exp(2βM ) − 1 ).    (2.12)
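These choices can be checked directly; a quick numerical sketch (ours, with arbitrary admissible values, and assuming the expressions for θ and κ displayed in (2.12)):

```python
# Check of (2.11) for the barrier w(x) = u0 + log(1 + theta*x)/beta with the
# choice (2.12): w(kappa) = u0 + 2M, w'(kappa) >= beta and kappa <= min_a.
import math

beta, M, min_a, u0 = 2.0, 1.0, 0.5, -0.3     # arbitrary values with |u0| <= M
theta = (beta**2 + 1.0 / min_a) * math.exp(2 * beta * M)
kappa = (math.exp(2 * beta * M) - 1.0) / theta

w_kappa = u0 + math.log(1.0 + theta * kappa) / beta     # barrier value at kappa
dw_kappa = theta / (beta * (1.0 + theta * kappa))       # barrier slope at kappa
```

Since 1 + θκ = exp(2βM), one gets w(κ) = u0 + 2M ≥ M, ∂xw(κ) = (β² + 1/min ai)/β ≥ β, and κ ≤ min ai, which is (2.11).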

The constant β will be chosen in order to get

β ≥ sup_{k∈{1...n}} sup_{x∈[0,κ]} ( µ(w+i,k(x))(1 + ∂xw+i,k(x))^m + M ) / ( ν(1 + ∂xw+i,k(x))^{m−2} ∂xw+i,k(x)² ),    (2.13)

where (µ(·), ν, m) are defined in assumptions (P) (ii) and (iii). Since we have

∀x ∈ [0, κ], w+i,k(x) ≤ w+i,k(κ) = 2M,
β ≤ ∂xw+i,k(κ) ≤ ∂xw+i,k(x) ≤ ∂xw+i,k(0),

we can then choose β large enough to get (2.13), for instance

β ≥ (µ(2M )/ν)(1 + 1/β²) + M/(νβ²).

It is easy to show by induction that w+i,k is an upper barrier of ui,k in the neighborhood [0, κ]. More precisely, since w+i,0 = ui,0, and for all k ∈ {1 . . . n}

w+i,k(0) = ui,k(0),  w+i,k(κ) ≥ ui,k(κ),
w+i,k(x) = w+i,k−1(x) + ui,k(0) − ui,k−1(0) ≥ w+i,k−1(x) − M/n,

we get, using the induction assumption, assumptions (P) (ii) and (iii), and (2.13), that for all x ∈ (0, κ)

n(w+i,k(x) − ui,k−1(x)) − σi(x, ∂xw+i,k(x))∂x,xw+i,k(x) + Hi(x, w+i,k(x), ∂xw+i,k(x))
≥ −M + βσi(x, ∂xw+i,k(x))∂xw+i,k(x)² + Hi(x, w+i,k(x), ∂xw+i,k(x))
≥ −M + βν(1 + ∂xw+i,k(x))^{m−2}∂xw+i,k(x)² − µ(w+i,k(x))(1 + ∂xw+i,k(x))^m ≥ 0.

We therefore obtain

sup_{n≥0} max_{k∈{0...n}} max_{i∈{1...I}} ∂xui,k(0) ≤ θ/β ∨ ∂xgi(0).

With the same arguments we can show that

w−i,0 = gi(x),
w−i,k : x ↦ ui,k(0) − (1/β) ln(1 + θx),

is a lower barrier in the neighborhood of 0. Using the same method, we can show that ∂xui,k(ai) is uniformly bounded by the same upper bounds, which completes the proof of Step 1.

Step 2: For the convenience of the reader, we do not detail all the computations of this Step, since they can be found in the proof of Theorem 15.2 of [19]. It follows from Lemma 2.4.1 that there exists M > 0 such that

sup_{n≥0} max_{k∈{0...n}} max_{i∈{1...I}} { |ui,k|(0,ai) } ≤ M.

We set furthermore

∀(x, u, p) ∈ [0, ai] × R²,  H^n_{i,k}(x, u, p) = n(u − ui,k−1(x)) + Hi(x, u, p).

Let u be a solution of the elliptic equation, for x ∈ (0, ai),

σi(x, ∂xu(x))∂x,xu(x) − H^n_{i,k}(x, u(x), ∂xu(x)) = 0.

We then have the equalities

δiH^n_{i,k}(x, u, p) = δiHi(x, u, p) + n(p − ∂xui,k−1(x))/p,  δ̄iH^n_{i,k}(x, u, p) = δ̄iHi(x, u, p),    (2.14)

where we recall that the operators δi and δ̄i are defined in assumption (E) (iii). We follow

the proof of Theorem 15.2 in [19]. Let ū be such that u = ψ(ū), where ψ ∈ C³([m̄, M̄]) is strictly increasing, with ψ(m̄) = −M and ψ(M̄) = M. In the sequel we will set v = (∂xu)² and v̄ = (∂xū)². To simplify the notations, we will omit the variables (x, u(x), ∂xu(x)) in the functions σi and H^n_{i,k}, and the variable ū for ψ. We assume first that the solution u is of class C³, and we follow exactly the computations that lead to equation (15.25) of [19] to get the following inequality

σi ∂x,xv̄ + Bi ∂xv̄ + G^n_{i,k} ≥ 0,    (2.15)

where Bi and G^n_{i,k} have the same expressions as in (15.26) of [19] (with σi = σ*i, ci = 0). We choose (r = 0, s = 0), since we will see in the sequel, (2.17), that condition (15.32) of [19] holds under assumption (P). We have more precisely

Bi = ψ′ ∂pσi ∂x,xu − ∂pHi + ω ∂p(σip²),
G^n_{i,k} = ω′/ψ′ + κiω² + βiω + θ^n_{i,k},  ω = ψ″/ψ′² ∈ C¹([m̄, M̄]),
κi = (1/(σip²)) ( δi(σip²) + (p²/(4σi)) |(δi + 1)σi|² ),
βi = (1/(σip²)) ( δi(σip²) − δiHi + (p²/(2σi)) ((δi + 1)σi)(δiσi) ),
θ^n_{i,k} = (1/(σip²)) ( (p²/(4σi)) |δiσi|² − δiH^n_{i,k} ) = θi − (1/(σip²)) ( n(p − ∂xui,k−1)/p ),
θi = (1/(σip²)) ( (p²/(4σi)) |δiσi|² − δiHi ).

We set in the sequel

Gi = ω′/ψ′ + κiω² + βiω + θi,

in order to get

G^n_{i,k} = Gi − (1/(σip²)) ( n(p − ∂xui,k−1)/p ).    (2.16)

More precisely, we see from (2.14) that all the coefficients (Bi, κi, βi, θi) can be chosen independent of n and ui,k−1. The main argument then, to get a bound on ∂xu, is to apply

a maximum principle for v̄ in (2.15), and this will be done as soon as we ensure G^n_{i,k} ≤ 0 for |∂xu| ≥ L^n_k. Hence, we have

sup_{x∈ΩL} v̄ = sup_{x∈∂ΩL} v̄,  ΩL = { x ∈ [0, ai], |∂xu(x)| ≥ L^n_k }.

On the other hand, using assumptions (P) (ii), (iii) and (iv), it is easy to check that there exist constants (a, b, c), depending only on the data (ν, ν̄, µ(M ), γ(M ), ε(M ), sup_{|p|≥0} P (M, |p|)), such that

sup_{x∈[0,ai], |u|≤M} lim sup_{|p|→+∞} κi(x, u, p) ≤ a,
sup_{x∈[0,ai], |u|≤M} lim sup_{|p|→+∞} βi(x, u, p) ≤ b,
sup_{x∈[0,ai], |u|≤M} lim sup_{|p|→+∞} θi(x, u, p) ≤ c,

where

a = (γ(M ) + ν̄)/ν + 1/2 + γ(M )²/ν²,
b = ( ε(M ) + sup_{|p|≥0} P (M, |p|) + γ(M ) )/ν + ( ε(M ) + sup_{|p|≥0} P (M, |p|) )(ν̄ + γ(M ))/ν²,
c = ( ε(M ) + sup_{|p|≥0} P (M, |p|) )²/(4ν²) + 2( ε(M ) + sup_{|p|≥0} P (M, |p|) )/ν.

Therefore, fixing ε > 0 and using (2.16), we can find L = L(a, b, c) such that for all p ≥ L(a, b, c)

Gi ≤ ω′/ψ′ + aω² + bω + c + ε.

Following [19], we can then choose ψ(·) = ψ(a, b, c)(·) such that we ensure

Gi ≤ 0, if |∂xu(x)| ≥ L(a, b, c).

We then see from the expression of θ^n_{i,k} that we get

G^n_{i,k} ≤ 0, if |∂xu(x)| ≥ L(a, b, c) ∨ |∂xui,k−1(x)|.

Therefore, applying the maximum principle to v̄ in (2.15), and using the relations u = ψ(ū), v = (∂xu)², we finally get

|∂xu|(0,ai) ≤ max( ( max ψ′(a, b, c)(·)/min ψ′(a, b, c)(·) ) |∂xu|∂(0,ai), L(a, b, c), |∂xui,k−1|(0,ai) ).

This upper bound still holds if u ∈ C²([0, ai]) (cf. (15.30) and (15.31) of the proof of Theorem 15.2 in [19]). Finally, applying the upper bound above to the solution uk, we get by induction that

sup_{n≥0} max_{k∈{0...n}} max_{i∈{1...I}} { |∂xui,k|(0,ai) } ≤ max( ( max ψ′(a, b, c)(·)/min ψ′(a, b, c)(·) ) |∂xui,k|∂(0,ai), L(a, b, c), |∂xgi|(0,ai) ).

This completes the proof. □

The following Proposition follows from Lemmas 2.4.1 and 2.4.2, assumptions (P) (ii) and (iii), and the relation, valid for all x ∈ [0, ai],

|∂x,xui,k(x)| ≤ ( |n(ui,k(x) − ui,k−1(x))| + |Hi(x, ui,k(x), ∂xui,k(x))| ) / σi(x, ∂xui,k(x))
≤ ( |n(ui,k(x) − ui,k−1(x))| + µ(|ui,k(x)|)(1 + |∂xui,k(x)|)^m ) / ( ν(1 + |∂xui,k(x)|)^{m−2} ).

Proposition 2.4.3. There exist constants (M1, M2, M3), depending only on the data introduced in assumption (P),

M1 = M1( max_{i∈{1...I}} { sup_{x∈(0,ai)} |−σi(x, ∂xgi(x))∂x,xgi(x) + Hi(x, gi(x), ∂xgi(x))| + |∂tφi|(0,T) }, max_{i∈{1...I}} |gi|(0,ai), CH ),
M2 = M2( ν, ν̄, µ(M1), γ(M1), ε(M1), sup_{|p|≥0} P (M1, |p|), |∂xgi|(0,ai), M1 ),
M3 = M3( M1, ν(1 + |p|)^{m−2}, µ(|u|)(1 + |p|)^m, |u| ≤ M1, |p| ≤ M2 ),

such that

sup_{n≥0} max_{k∈{0...n}} max_{i∈{1...I}} { |ui,k|(0,ai) } ≤ M1,
sup_{n≥0} max_{k∈{0...n}} max_{i∈{1...I}} { |∂xui,k|(0,ai) } ≤ M2,
sup_{n≥0} max_{k∈{1...n}} max_{i∈{1...I}} { |n(ui,k − ui,k−1)|(0,ai) } ≤ M1,
sup_{n≥0} max_{k∈{0...n}} max_{i∈{1...I}} { |∂x,xui,k|(0,ai) } ≤ M3.

Unfortunately, we are unable to give an upper bound on the modulus of continuity of ∂x,xui,k in C^α([0, ai]) independent of n. However, we are able to formulate a limit solution in the weak sense. From the regularity of the coefficients, using the tools introduced in Section 2.2.1 (Lemma 2.2.1), we get interior regularity, and a smooth limit solution.

2.4.2 Proof of Theorem 2.2.2

Proof. The uniqueness is a result of the comparison Theorem 2.2.4. To simplify the notations, we set, for each i ∈ {1 . . . I} and each (x, u, q, p, S) ∈ [0, ai] × R⁴,

Qi(x, u, q, p, S) = q − σi(x, p)S + Hi(x, u, p).

Let n ≥ 0. Consider the subdivision (t^n_k = kT/n)_{0≤k≤n} of [0, T ], and (uk)0≤k≤n the solution of (2.10). From Proposition 2.4.3, there exists a constant M > 0, independent of n, such that

sup_{n≥0} max_{k∈{1...n}} max_{i∈{1...I}} { |ui,k|(0,ai) + |n(ui,k − ui,k−1)|(0,ai) + |∂xui,k|(0,ai) + |∂x,xui,k|(0,ai) } ≤ M.    (2.17)

We define the following sequence (vn)n≥0 in C^{0,2}(J^a_T), piecewise differentiable with respect to its first variable, by

∀i ∈ {1 . . . I}, vi,n(0, x) = gi(x) if x ∈ [0, ai],
vi,n(t, x) = ui,k(x) + n(t − t^n_k)( ui,k+1(x) − ui,k(x) ) if (t, x) ∈ [t^n_k, t^n_{k+1}) × [0, ai].
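The interpolation above is a standard piecewise-linear-in-time reconstruction; a small sketch of it on one edge (ours; note that with t^n_k = kT/n the interpolation weight is n(t − t^n_k)/T, which matches the display above when T = 1):

```python
# Piecewise-linear-in-time reconstruction v_n from the elliptic iterates u_k.
import numpy as np

def v_n(t, U, T):
    """U has shape (n+1, J): U[k] is the iterate u_k on a spatial grid."""
    n = U.shape[0] - 1
    k = min(int(t * n / T), n - 1)        # index with t in [t_k, t_{k+1})
    t_k = k * T / n
    lam = (t - t_k) * n / T               # interpolation weight in [0, 1]
    return U[k] + lam * (U[k + 1] - U[k])

T, n, J = 1.0, 4, 3
U = np.arange((n + 1) * J, dtype=float).reshape(n + 1, J)   # toy iterates
```

By construction v_n(t^n_k, ·) = u_k, so v_n inherits the uniform bounds (2.17).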

We then deduce from (2.17) that there exists a constant M1, independent of n and depending only on the data of the system, such that for all i ∈ {1 . . . I}

|vi,n|^α_{[0,T]×[0,ai]} + |∂xvi,n|^α_{x,[0,T]×[0,ai]} ≤ M1.

Using Lemma 2.2.1, we deduce that there exists a constant M2(α) > 0, independent of n, such that for all i ∈ {1 . . . I} we have the following global Hölder condition

|∂xvi,n|^{α/2}_{t,[0,T]×[0,ai]} + |∂xvi,n|^α_{x,[0,T]×[0,ai]} ≤ M2(α).

We then deduce from Ascoli's Theorem that, up to a subsequence, (vi,n)n≥0 converges in C^{0,1}([0, T ] × [0, ai]) to vi, and then vi ∈ C^{α/2,1+α}([0, T ] × [0, ai]).

Since vn satisfies the following continuity condition at the junction point

∀(i, j) ∈ {1 . . . I}², ∀n ≥ 0, ∀t ∈ [0, T ],  vi,n(t, 0) = vj,n(t, 0) = vn(t, 0),

we then deduce v ∈ C^{α/2,1+α}(J^a_T).

We now focus on the regularity of v in J̊^a_T: we will prove that v ∈ C^{1+α/2,2+α}(J̊^a_T), and that it satisfies on each edge

Qi(x, vi(t, x), ∂tvi(t, x), ∂xvi(t, x), ∂x,xvi(t, x)) = 0, if (t, x) ∈ (0, T ) × (0, ai).

Using once again (2.17), there exists a constant M3, independent of n, such that for each i ∈ {1 . . . I}

‖∂tvi,n‖_{L²((0,T)×(0,ai))} ≤ M3,  ‖∂x,xvi,n‖_{L²((0,T)×(0,ai))} ≤ M3.

Hence we get, up to a subsequence, that

∂tvi,n ⇀ ∂tvi,  ∂x,xvi,n ⇀ ∂x,xvi,

weakly in L²((0, T ) × (0, ai)).

The continuity of the coefficients (σi, Hi)i∈{1...I}, Lebesgue's Theorem, and the linearity of Qi in the variables ∂t and ∂x,x allow us to get, for each i ∈ {1 . . . I} and up to a subsequence np,

∫_0^T ∫_0^{ai} Qi(x, vi,np(t, x), ∂tvi,np(t, x), ∂xvi,np(t, x), ∂x,xvi,np(t, x)) ψ(t, x) dxdt
→ ∫_0^T ∫_0^{ai} Qi(x, vi(t, x), ∂tvi(t, x), ∂xvi(t, x), ∂x,xvi(t, x)) ψ(t, x) dxdt  as p → +∞,

∀ψ ∈ C^∞_c((0, T ) × (0, ai)).

We now prove that for any ψ ∈ C^∞_c((0, T ) × (0, ai))

∫_0^T ∫_0^{ai} Qi(x, vi,np(t, x), ∂tvi,np(t, x), ∂xvi,np(t, x), ∂x,xvi,np(t, x)) ψ(t, x) dxdt → 0  as p → +∞.

Using that (uk)0≤k≤n is the solution of (2.10), we get for any ψ ∈ C^∞_c((0, T ) × (0, ai))

∫_0^T ∫_0^{ai} Qi(x, vi,n(t, x), ∂tvi,n(t, x), ∂xvi,n(t, x), ∂x,xvi,n(t, x)) ψ(t, x) dxdt
= Σ_{k=0}^{n−1} ∫_{t^n_k}^{t^n_{k+1}} ∫_0^{ai} ( σi(x, ∂xui,k+1(x))∂x,xui,k+1(x) − σi(x, ∂xvi,n(t, x))∂x,xvi,n(t, x) + Hi(x, vi,n(t, x), ∂xvi,n(t, x)) − Hi(x, ui,k+1(x), ∂xui,k+1(x)) ) ψ(t, x) dxdt.    (2.18)

Using assumption (P), more precisely the Lipschitz continuity of the Hamiltonians Hi, and the Hölder equicontinuity in time of (vi,n, ∂xvi,n), there exists a constant M4(α), independent of n, such that for each i ∈ {1 . . . I} and each (t, x) ∈ [t^n_k, t^n_{k+1}] × [0, ai]

|Hi(x, ui,k+1(x), ∂xui,k+1(x)) − Hi(x, vi,n(t, x), ∂xvi,n(t, x))| ≤ M4(α)(t − t^n_k)^{α/2},

and therefore for any ψ ∈ C^∞_c((0, T ) × (0, ai))

Σ_{k=0}^{n−1} ∫_{t^n_k}^{t^n_{k+1}} ∫_0^{ai} ( Hi(x, ui,k+1(x), ∂xui,k+1(x)) − Hi(x, vi,n(t, x), ∂xvi,n(t, x)) ) ψ(t, x) dxdt ≤ ai M4(α)|ψ|(0,T)×(0,ai) n^{−α/2} → 0  as n → +∞.

For the last term in (2.18), we write for each i ∈ {1 . . . I} and each (t, x) ∈ (t^n_k, t^n_{k+1}) × (0, ai)

σi(x, ∂xui,k+1(x))∂x,xui,k+1(x) − σi(x, ∂xvi,n(t, x))∂x,xvi,n(t, x)
= ( σi(x, ∂xui,k+1(x)) − σi(x, ∂xvi,n(t, x)) ) ∂x,xui,k(x)    (2.19)
+ ( σi(x, ∂xui,k+1(x)) − n(t − t^n_k)σi(x, ∂xvi,n(t, x)) )( ∂x,xui,k+1(x) − ∂x,xui,k(x) ).    (2.20)

Using again the Hölder equicontinuity in time of (vi,n, ∂xvi,n), as well as the uniform bound on |∂x,xui,k|[0,ai] from (2.17), we can show for (2.19) that, for any ψ ∈ C^∞_c((0, T ) × (0, ai)),

Σ_{k=0}^{n−1} ∫_{t^n_k}^{t^n_{k+1}} ∫_0^{ai} ( σi(x, ∂xui,k+1(x)) − σi(x, ∂xvi,n(t, x)) ) ∂x,xui,k(x) ψ(t, x) dxdt → 0  as n → +∞.

Finally, from assumption (P), for all i ∈ {1 . . . I}, σi is differentiable with respect to all its variables, and integrating by parts we get for (2.20)

Σ_{k=0}^{n−1} ∫_{t^n_k}^{t^n_{k+1}} ∫_0^{ai} ( σi(x, ∂xui,k+1(x)) − n(t − t^n_k)σi(x, ∂xvi,n(t, x)) )( ∂x,xui,k+1(x) − ∂x,xui,k(x) ) ψ(t, x) dxdt
= −Σ_{k=0}^{n−1} ∫_{t^n_k}^{t^n_{k+1}} ∫_0^{ai} ( ∂x( σi(x, ∂xui,k+1(x))ψ(t, x) ) − n(t − t^n_k)∂x( σi(x, ∂xvi,n(t, x))ψ(t, x) ) )( ∂xui,k+1(x) − ∂xui,k(x) ) dxdt → 0  as n → +∞.

We conclude that for any ψ ∈ C^∞_c((0, T ) × (0, ai))

∫_0^T ∫_0^{ai} Qi(x, vi(t, x), ∂tvi(t, x), ∂xvi(t, x), ∂x,xvi(t, x)) ψ(t, x) dxdt = 0.

It is then possible to consider the last equation as a linear one, with coefficients σ̃i(t, x) = σi(x, ∂xvi(t, x)) and H̃i(t, x) = Hi(x, vi(t, x), ∂xvi(t, x)) belonging to the class C^α((0, T ) × (0, ai)); using Theorem III.12.2 of [26], we finally get that for all i ∈ {1 . . . I}, vi ∈ C^{1+α/2,2+α}((0, T ) × (0, ai)), which means that v ∈ C^{1+α/2,2+α}(J̊^a_T).

We deduce that vi satisfies on each edge

Qi(x, vi(t, x), ∂tvi(t, x), ∂xvi(t, x), ∂x,xvi(t, x)) = 0, if (t, x) ∈ (0, T ) × (0, ai).

From the estimates (2.17), we know that ∂tvi,n and ∂x,xvi,n are bounded uniformly in n.

We finally deduce that v ∈ C^{1+α/2,2+α}_b(J̊^a_T).

We conclude by proving that v satisfies the non linear Neumann boundary condition at the vertex. For this, let t ∈ (0, T ); we have, up to a subsequence np,

F (vnp(t, 0), ∂xvnp(t, 0)) → F (v(t, 0), ∂xv(t, 0))  as p → +∞.

On the other hand, using that F (uk(0), ∂xuk(0)) = 0, we know from the continuity of F (assumption (P)) and the Hölder equicontinuity in time of t ↦ vn(t, 0) and t ↦ ∂xvn(t, 0), that there exists a constant M5(α), independent of n, such that if t ∈ [t^n_k, t^n_{k+1})

|F (vn(t, 0), ∂xvn(t, 0))| = |F (vn(t, 0), ∂xvn(t, 0)) − F (uk(0), ∂xuk(0))|
≤ sup{ |F (u, x) − F (v, y)|, |u − v| + ‖x − y‖_{R^I} ≤ M5(α)n^{−α/2} } → 0  as n → +∞.

Therefore, we conclude once more from the continuity of F (assumption (P)) and the compatibility condition (assumption (P) (v)), that for each t ∈ [0, T )

F (v(t, 0), ∂xv(t, 0)) = 0.

On the other hand, it is easy to get

∀i ∈ {1 . . . I}, ∀x ∈ [0, ai], vi(0, x) = gi(x),  ∀t ∈ [0, T ], vi(t, ai) = φi(t).

Finally, the expressions of the upper bounds of the solution given in Theorem 2.2.2 are a consequence of Proposition 2.4.3 and Lemma 2.2.1, which completes the proof. □


2.4.3 On the existence for an unbounded junction

We give in this subsection a result on the existence and uniqueness of the solution of the parabolic problem (2.3), in an unbounded junction J defined, for I ∈ N∗ edges, by

J = { X = (x, i), x ∈ R+ and i ∈ {1, . . . , I} }.

In the sequel, C^{0,1}(JT) ∩ C^{1,2}(J̊T) is the class of functions with regularity C^{0,1}([0, T ] × [0, +∞)) ∩ C^{1,2}((0, T ) × (0, +∞)) on each edge, and L^∞(JT) is the set of measurable real bounded maps defined on JT.

We introduce the following data:

F ∈ C⁰(R × R^I, R),  g ∈ C¹_b(J ) ∩ C²_b(J̊ ),

and, for each i ∈ {1 . . . I},

σi ∈ C¹(R+ × R, R),  Hi ∈ C¹(R+ × R², R),  φi ∈ C¹([0, T ], R).

We suppose furthermore that the data satisfy the following assumption.

Assumption (P∞)

(i) Assumption on F: either
a) F is decreasing with respect to its first variable,
b) F is nondecreasing with respect to its second variable,
c) ∃(b, B) ∈ R × R^I, F (b, B) = 0,

or the Kirchhoff condition:
a) F is nonincreasing with respect to its first variable,
b) F is increasing with respect to its second variable,
c) ∃(b, B) ∈ R × R^I, F (b, B) = 0.

We suppose moreover that there exists a parameter m ∈ R, m ≥ 2, such that:

(ii) The (uniform) ellipticity condition on the (σi)i∈{1...I}: there exist strictly positive constants ν, ν̄ such that

∀i ∈ {1 . . . I}, ∀(x, p) ∈ R+ × R,  ν(1 + |p|)^{m−2} ≤ σi(x, p) ≤ ν̄(1 + |p|)^{m−2}.

(iii) The growth of the (Hi)i∈{1...I} with respect to p exceeds the growth of the σi with respect to p by no more than two, namely there exists an increasing real continuous function µ such that

∀i ∈ {1 . . . I}, ∀(x, u, p) ∈ R+ × R²,  |Hi(x, u, p)| ≤ µ(|u|)(1 + |p|)^m.

(iv) We impose the following restrictions on the growth with respect to p of the derivatives of the coefficients (σi, Hi)i∈{1...I}: for all i ∈ {1 . . . I},

a) |∂pσi|_{R+×R²}(1 + |p|)² + |∂pHi|_{R+×R²} ≤ γ(|u|)(1 + |p|)^{m−1},
b) |∂xσi|_{R+×R²}(1 + |p|)² + |∂xHi|_{R+×R²} ≤ ( ε(|u|) + P (|u|, |p|) )(1 + |p|)^{m+1},
c) ∀(x, u, p) ∈ R+ × R²,  −CH ≤ ∂uHi(x, u, p) ≤ ( ε(|u|) + P (|u|, |p|) )(1 + |p|)^m,

where γ and ε are continuous nonnegative increasing functions, P is a continuous function, increasing with respect to its first variable, which tends to 0 as p → +∞, uniformly with respect to its first variable on [0, u1] for each u1 ∈ R, and CH > 0 is a strictly positive real number. We assume that (γ, ε, P, CH) are independent of i ∈ {1 . . . I}.

(v) A compatibility condition for g.


We state here a comparison Theorem for the problem (2.3), in an unbounded junction.

Theorem 2.4.4. Assume (P∞). Let u ∈ C^{0,1}(JT) ∩ C^{1,2}(J̊T) ∩ L^∞(JT) (resp. v ∈ C^{0,1}(JT) ∩ C^{1,2}(J̊T) ∩ L^∞(JT)) be a super solution (resp. a sub solution) of (2.4) (where ai = +∞), satisfying for all i ∈ {1 . . . I} and all x ∈ [0, +∞), ui(0, x) ≥ vi(0, x). Then for each (t, (x, i)) ∈ JT: ui(t, x) ≥ vi(t, x).

Proof. Let s ∈ [0, T ), K = (K . . . K) > (1, . . . 1) in R^I, and λ = λ(K) > 0, to be chosen in the sequel. We argue as in the proof of Theorem 2.2.4, assuming

sup_{(t,(x,i))∈J^K_s} exp(−λt − (x − 1)²/2)( vi(t, x) − ui(t, x) ) > 0.

Using the boundary conditions satisfied by u and v, the above supremum is reached at a point (t0, (x0, j0)) ∈ (0, s] × J , with 0 ≤ x0 ≤ K.

If x0 ∈ [0, K), the optimality conditions are given, for x0 ≠ 0, by

−λ( vj0(t0, x0) − uj0(t0, x0) ) + ∂tvj0(t0, x0) − ∂tuj0(t0, x0) ≥ 0,
−(x0 − 1)( vj0(t0, x0) − uj0(t0, x0) ) + ∂xvj0(t0, x0) − ∂xuj0(t0, x0) = 0,
((x0 − 1)² − 1)( vj0(t0, x0) − uj0(t0, x0) ) − 2(x0 − 1)²( vj0(t0, x0) − uj0(t0, x0) ) + ∂x,xvj0(t0, x0) − ∂x,xuj0(t0, x0) ≤ 0,

and, if x0 = 0,

∀i ∈ {1, . . . I}, ∂xvi(t0, 0) ≤ ∂xui(t0, 0) − ( vi(t0, 0) − ui(t0, 0) ) < ∂xui(t0, 0).

If x0 = 0, we obtain a contradiction exactly as in the proof of Theorem 2.2.4. On the other hand, if x0 ∈ (0, K), using assumptions (P) (iv) a), (iv) c) and the optimality conditions, we can choose λ(K) of the form λ(K) = C(1 + K²) (see (2.5) and (2.6)), where C > 0 is a constant independent of K, to get again a contradiction. We deduce that, if

sup_{(t,(x,i))∈J^K_s} exp(−λ(K)t − (x − 1)²/2)( vi(t, x) − ui(t, x) ) > 0,

then for all (t, (x, i)) ∈ [0, T ] × J^K,

exp(−λ(K)t − (x − 1)²/2)( vi(t, x) − ui(t, x) ) ≤ exp(−λ(K)t − (K − 1)²/2)( vi(t, K) − ui(t, K) ).

Hence for all (t, (x, i)) ∈ [0, T ] × J^K,

exp(−(x − 1)²/2)( vi(t, x) − ui(t, x) ) ≤ exp(−(K − 1)²/2)( vi(t, K) − ui(t, K) ).

On the other hand, if

sup_{(t,(x,i))∈J^K_s} exp(−λ(K)t − (x − 1)²/2)( vi(t, x) − ui(t, x) ) ≤ 0,

then for all (t, (x, i)) ∈ [0, T ] × J^K,

exp(−λ(K)t − (x − 1)²/2)( vi(t, x) − ui(t, x) ) ≤ 0,

so

exp(−(x − 1)²/2)( vi(t, x) − ui(t, x) ) ≤ 0.

Finally we have, for all (t, (x, i)) ∈ [0, T ] × J^K,

max( 0, exp(−(x − 1)²/2)( vi(t, x) − ui(t, x) ) ) ≤ exp(−(K − 1)²/2)( ‖u‖_{L^∞(JT)} + ‖v‖_{L^∞(JT)} ).

Sending K → ∞ and using the boundedness of u and v, we deduce the inequality v ≤ u

in [0, T ] × J . 

Theorem 2.4.5. Assume (P∞). The following parabolic problem with Neumann boundary condition at the vertex

∂tui(t, x) − σi(x, ∂xui(t, x))∂x,xui(t, x) + Hi(x, ui(t, x), ∂xui(t, x)) = 0, if (t, x) ∈ (0, T ) × (0, +∞),
F (u(t, 0), ∂xu(t, 0)) = 0, if t ∈ [0, T ),
∀i ∈ {1 . . . I}, ui(0, x) = gi(x), if x ∈ [0, +∞),    (2.21)

is uniquely solvable in the class C^{α/2,1+α}(JT) ∩ C^{1+α/2,2+α}(J̊T). There exist constants (M1, M2, M3), depending only on the data introduced in assumption (P∞),

M1 = M1( max_{i∈{1...I}} { sup_{x∈(0,+∞)} |−σi(x, ∂xgi(x))∂x,xgi(x) + Hi(x, gi(x), ∂xgi(x))| }, max_{i∈{1...I}} |gi|(0,+∞), CH ),
M2 = M2( ν, ν̄, µ(M1), γ(M1), ε(M1), sup_{|p|≥0} P (M1, |p|), |∂xgi|(0,+∞), M1 ),
M3 = M3( M1, ν(1 + |p|)^{m−2}, µ(|u|)(1 + |p|)^m, |u| ≤ M1, |p| ≤ M2 ),

such that

‖u‖_{C(JT)} ≤ M1,  ‖∂xu‖_{C(JT)} ≤ M2,  ‖∂tu‖_{C(JT)} ≤ M1,  ‖∂x,xu‖_{C(JT)} ≤ M3.

Moreover, there exists a constant M (α), depending on (α, M1, M2, M3), such that for any a ∈ (0, +∞)^I

‖u‖_{C^{α/2,1+α}(J^a_T)} ≤ M (α).

Proof. Assume (P∞) and let a = (a, . . . , a) ∈ (0, +∞)^I. Applying Theorem 2.2.2, we can define u^a ∈ C^{0,1}(J^a_T) ∩ C^{1,2}(J̊^a_T) as the unique solution of

∂tui(t, x) − σi(x, ∂xui(t, x))∂x,xui(t, x) + Hi(x, ui(t, x), ∂xui(t, x)) = 0, if (t, x) ∈ (0, T ) × (0, a),
F (u(t, 0), ∂xu(t, 0)) = 0, if t ∈ [0, T ),
∀i ∈ {1 . . . I}, ui(t, a) = gi(a), if t ∈ [0, T ],
∀i ∈ {1 . . . I}, ui(0, x) = gi(x), if x ∈ [0, a].    (2.22)

Using assumption (P∞) and Theorem 2.2.2, we get that there exists a constant C > 0, independent of a, such that

sup_{a≥0} ‖u^a‖_{C^{1,2}(J^a_T)} ≤ C.

We are going to send a to +∞ in (2.22).


Following the same argument as for the proof of Theorem 2.2.2, we get that, up to a subsequence, u^a converges locally uniformly to some map u which solves (2.21). On the other hand, the uniqueness of u is a direct consequence of the comparison Theorem 2.4.4, since u ∈ L^∞(JT). Finally, the expression of the upper bounds of the derivatives of u given in
