
A variational method for a class of parabolic PDEs

Alessio Figalli, Wilfrid Gangbo and Türkay Yolcu

Abstract. In this manuscript we extend De Giorgi's interpolation method to a class of parabolic equations which are not gradient flows but possess an entropy functional and an underlying Lagrangian. The new feature of this study is that not only may the Lagrangian depend on the spatial variables, but it does not induce a metric. Assuming the initial condition to be a density function, not necessarily smooth, but solely of bounded first moments and finite "entropy", we use a variational scheme to discretize the equation in time and construct approximate solutions. Then De Giorgi's interpolation method reveals itself to be a powerful tool for proving convergence of our algorithm. Finally we show uniqueness and stability in $L^1$ of our solutions.

Mathematics Subject Classification (2010): 35K59 (primary); 49J40, 82C40, 47J25 (secondary).

1. Introduction

In the theory of existence of solutions of ordinary differential equations on a metric space, curves of maximal slope and minimizing movements play an important role.

The minimizing movements are in general obtained via a discrete scheme. They have the advantage of providing an approximate solution of the differential equation by discretizing in time while not requiring the initial condition to be smooth.

Then a clever interpolation method introduced by De Giorgi [6, 7] ensures compactness for the family of approximate solutions. Many recent works [3, 14] have used minimizing movement methods as a powerful tool for proving existence of solutions for some classes of partial differential equations (PDEs). So far, most of these studies concern PDEs which can be interpreted as gradient flows of an entropy functional with respect to a metric on the space of probability measures. This paper extends the minimizing movements and De Giorgi's interpolation method to include PDEs which are not gradient flows, but possess an entropy functional and an underlying Lagrangian which may depend on the spatial variables.

In the current manuscript $X\subset\mathbb R^d$ is an open set whose boundary is of zero measure. We denote by $\mathcal P_1^{ac}(X)$ the set of Borel probability densities on $X$ of bounded first moments, endowed with the 1-Wasserstein distance $W_1$ (cf. Subsection 2.2). We consider distributional solutions of a class of PDEs of the form

\[
\partial_t\rho_t+\mathrm{div}(\rho_t V_t)=0 \quad\text{in } \mathcal D'((0,T)\times\mathbb R^d) \qquad (1.1)
\]
(this implicitly means that we have imposed a Neumann boundary condition), with
\[
\rho_t V_t:=\rho_t\,\nabla_pH\big(x,-\rho_t^{-1}\nabla[P(\rho_t)]\big) \quad\text{on } (0,T)\times X
\]
and
\[
t\mapsto\rho_t\in AC_1(0,T;\mathcal P_1^{ac}(X))\subset C([0,T];\mathcal P_1^{ac}(X)).
\]

By abuse of notation, $\rho_t$ will denote at the same time the solution at time $t$ and the function $(t,x)\mapsto\rho_t(x)$ defined over $(0,T)\times X$. (It will be clear from the context which one we are referring to.) We recall that the unknown $\rho_t$ is nonnegative, and can be interpreted as the density of a fluid, whose pressure is $P(\rho_t)$. Here, the data $H$, $U$ and $P$ satisfy specific properties, which are stated in Subsection 2.1.

We only consider solutions such that $\nabla[P(\rho_t)]\in L^1((0,T)\times X)$ and $\nabla[P(\rho_t)]\,dx$ is absolutely continuous with respect to $\rho_t$. If $\rho_t$ satisfies additional conditions, on which we will comment shortly, then $t\mapsto\mathcal U(\rho_t):=\int_X U(\rho_t)\,dx$ is absolutely continuous, monotone nonincreasing, and
\[
\frac{d}{dt}\,\mathcal U(\rho_t)=\int_X\langle\nabla[P(\rho_t)],V_t\rangle\,dx. \qquad (1.2)
\]
The space to which the curve $t\mapsto\rho_t$ belongs ensures that $\rho_t$ converges to $\rho_0$ in $\mathcal P_1^{ac}(X)$ as $t\to 0$.

Solutions of our equation can be viewed as curves of maximal slope on a metric space contained in $\mathcal P_1(X)$. They include the so-called minimizing movements (cf. [3] for a precise definition) obtained by many authors in the case where the Lagrangian does not depend on the spatial variables (e.g. [13] when $H(p)=|p|^2/2$, [1, 3] when $H(x,p)\equiv H(p)$). These studies have been very recently extended to a special class of Lagrangians depending on the spatial variables, for which the Hamiltonian assumes the form $H(x,p)=\langle A(x)p,p\rangle$ [14]. In their pioneering work, Alt and Luckhaus [2] consider differential equations similar to (1.1), under assumptions that are not readily comparable to ours. Their method of proof is very different from the ones used in the above cited references and is based on a Galerkin-type approximation method.

Let us describe the strategy of the proof of our results. The first step is the existence part. Let $L(x,\cdot)$ be the Legendre transform of $H(x,\cdot)$, to which we refer as a Lagrangian. For a time step $h>0$, let $c_h(x,y)$, the cost for moving a unit mass from a point $x$ to a point $y$, be the minimal action $\min\int_0^hL(\sigma,\dot\sigma)\,dt$. Here, the minimum is performed over the set of all paths $\sigma$ (not necessarily contained in $X$) such that $\sigma(0)=x$ and $\sigma(h)=y$. The cost $c_h$ provides a way of defining the minimal total work $\mathcal C_h(\rho_0,\rho)$ (cf. (2.8)) for moving a mass of distribution $\rho_0$ to another mass of distribution $\rho$ in time $h$. For measures which are absolutely continuous, the recent papers [4, 8, 9] give uniqueness of a minimizer in (2.8), which is concentrated on the graph of a function $T_h:\mathbb R^d\to\mathbb R^d$. Furthermore, $\mathcal C_h$ provides a natural way of interpolating between these measures: there exists a unique density $\bar\rho_s$ such that $\mathcal C_h(\rho_0,\rho_h)=\mathcal C_s(\rho_0,\bar\rho_s)+\mathcal C_{h-s}(\bar\rho_s,\rho_h)$ for $s\in(0,h)$.

Assume for a moment that $X$ is bounded. For a given initial condition $\rho_0\in\mathcal P_1^{ac}(X)$ such that $\mathcal U(\rho_0)<+\infty$ we inductively construct $\{\rho^h_{nh}\}_n$ in the following way: $\rho^h_{(n+1)h}$ is the unique minimizer of $\mathcal C_h(\rho^h_{nh},\rho)+\mathcal U(\rho)$ over $\mathcal P_1^{ac}(X)$. We refer to this minimization problem as a primal problem. Under the additional condition that $L(x,v)>L(x,0)\equiv 0$ for all $x,v\in\mathbb R^d$ such that $v\neq 0$, one has $c_h(x,x)<c_h(x,y)$ for $x\neq y$. As a consequence, under that condition the following maximum principle holds: if $\rho_0\le M$ then $\rho^h_{nh}\le M$ for all $n\ge 0$.
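To fix ideas, here is a minimal numerical sketch of one step of this scheme in the special case $L(x,v)=|v|^2/2$ (so that $c_h(x,y)=|x-y|^2/(2h)$) and $U(s)=s\ln s$, on a uniform one-dimensional grid. The discretization, the function name `jko_like_step` and the use of the convex-optimization package `cvxpy` are illustrative assumptions of ours, not the construction used in the proofs below.

```python
# Toy sketch of one implicit step:  rho_{(n+1)h} = argmin_rho  C_h(rho_nh, rho) + U(rho),
# in the model case c_h(x, y) = |x - y|^2 / (2h),  U(s) = s log s,  on a 1-D grid.
import numpy as np
import cvxpy as cp

def jko_like_step(rho0, x, h):
    """Minimize  sum_ij c_ij g_ij + sum_j U(m_j/dx) dx  over couplings g >= 0
    whose first marginal is the mass vector of rho0."""
    dx = x[1] - x[0]
    m0 = rho0 * dx                                   # masses of the current density
    C = (x[:, None] - x[None, :]) ** 2 / (2.0 * h)   # c_h(x_i, x_j) = |x_i - x_j|^2 / (2h)

    g = cp.Variable((x.size, x.size), nonneg=True)   # discrete transport plan
    m = cp.sum(g, axis=0)                            # masses of the unknown density
    transport = cp.sum(cp.multiply(C, g))
    # internal energy: sum_j (m_j/dx) log(m_j/dx) dx = -sum_j entr(m_j) - log(dx) * sum_j m_j
    internal = -cp.sum(cp.entr(m)) - np.log(dx) * cp.sum(m)
    cp.Problem(cp.Minimize(transport + internal),
               [cp.sum(g, axis=1) == m0]).solve()
    return m.value / dx                              # the new density rho_{(n+1)h}

if __name__ == "__main__":
    x = np.linspace(-2.0, 2.0, 81)
    dx = x[1] - x[0]
    rho = np.exp(-8.0 * (x - 0.5) ** 2)
    rho /= rho.sum() * dx                            # normalize to a probability density
    for _ in range(5):                               # a few steps of size h
        rho = jko_like_step(rho, x, h=0.05)
    print("total mass after 5 steps:", rho.sum() * dx)
```

In this quadratic case the step above is the classical entropy/Wasserstein minimizing-movement step for the heat equation; the construction of Section 3 replaces $|x-y|^2/(2h)$ by the Lagrangian cost $c_h$ of (2.7), which may depend on $x$ and need not come from a metric.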

We then study a problem, dual to the primal one, which provides us with a characterization and some important regularity properties of the minimizer $\rho^h_{(n+1)h}$. These properties would have been harder to obtain by studying only the primal problem. Having determined $\{\rho^h_{nh}\}_{n\in\mathbb N}$, we consider two interpolating paths. The first one is the path $t\mapsto\bar\rho^h_t$ such that
\[
\mathcal C_h\big(\rho^h_{nh},\rho^h_{(n+1)h}\big)=\mathcal C_s\big(\rho^h_{nh},\bar\rho^h_{nh+s}\big)+\mathcal C_{h-s}\big(\bar\rho^h_{nh+s},\rho^h_{(n+1)h}\big),\qquad 0<s<h.
\]
The second path $t\mapsto\rho^h_t$ is defined by
\[
\rho^h_{nh+s}:=\operatorname*{arg\,min}\big\{\mathcal C_s(\rho^h_{nh},\rho)+\mathcal U(\rho)\big\},\qquad 0<s<h.
\]

This interpolation was introduced by De Giorgi in the study of curves of maximal slope when $\sqrt{\mathcal C_s}$ defines a metric. The path $\{\bar\rho^h_t\}$ satisfies equation (3.42), which is a discrete analogue of the differential equation (1.1). Then we write a discrete energy inequality in terms of both paths $\{\bar\rho^h_t\}$ and $\{\rho^h_t\}$, and we prove that, up to a subsequence, both paths converge (in a sense to be made precise) to the same path $\rho_t$. Furthermore, $\rho_t$ satisfies the energy inequality
\[
\mathcal U(\rho_0)-\mathcal U(\rho_T)\;\ge\;\int_0^T\!dt\int_X\Big[L(x,V_t)+H\big(x,-\rho_t^{-1}\nabla[P(\rho_t)]\big)\Big]\rho_t\,dx, \qquad (1.3)
\]
which, thanks to the assumptions on $H$ (cf. Subsection 2.1), implies for instance that $\nabla[P(\rho_t)]\in L^1((0,T)\times X)$. The above inequality corresponds to what can be considered as one half of the chain rule:
\[
\frac{d}{dt}\,\mathcal U(\rho_t)\;\le\;\int_X\langle V_t,\nabla[P(\rho_t)]\rangle\,dx.
\]
Here $V_t$ is a velocity associated to the path $t\mapsto\rho_t$, in the sense that equation (1.1) holds without yet the knowledge that $\rho_tV_t=\rho_t\nabla_pH\big(x,-\rho_t^{-1}\nabla[P(\rho_t)]\big)$. The current state of the art allows us to establish the reverse inequality, yielding the whole chain rule, only if we know that
\[
\int_0^T\!dt\int_X|V_t|^{\alpha}\rho_t\,dx,\quad \int_0^T\!dt\int_X\big|\rho_t^{-1}\nabla[P(\rho_t)]\big|^{\alpha'}\rho_t\,dx\;<\;+\infty \qquad (1.4)
\]
for some $\alpha\in(1,+\infty)$, $\alpha'=\alpha/(\alpha-1)$. In that case, we can conclude that
\[
\rho_tV_t=\rho_t\nabla_pH\big(x,-\rho_t^{-1}\nabla[P(\rho_t)]\big)
\quad\text{and}\quad
\frac{d}{dt}\,\mathcal U(\rho_t)=\int_X\langle V_t,\nabla[P(\rho_t)]\rangle\,dx.
\]
In light of the energy inequality (3.43), a sufficient condition to have the inequality (1.4) is that $L(x,v)\sim|v|^{\alpha}$. This is what we later impose in this work.
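Under the sign conventions above, the passage from (1.3) to the one-sided chain rule is simply the Fenchel–Young inequality for the pair $(L(x,\cdot),H(x,\cdot))$, applied with $q_t:=-\rho_t^{-1}\nabla[P(\rho_t)]$:
\[
L(x,V_t)+H(x,q_t)\;\ge\;\langle V_t,q_t\rangle
\quad\Longrightarrow\quad
\int_0^T\!\!\int_X\big[L(x,V_t)+H(x,q_t)\big]\rho_t\,dx\,dt\;\ge\;-\int_0^T\!\!\int_X\langle V_t,\nabla[P(\rho_t)]\rangle\,dx\,dt,
\]
so (1.3) yields $\mathcal U(\rho_T)-\mathcal U(\rho_0)\le\int_0^T\!\int_X\langle V_t,\nabla[P(\rho_t)]\rangle\,dx\,dt$, with equality in the Fenchel–Young inequality exactly when $V_t=\nabla_pH(x,q_t)$; this is why identifying the velocity requires the reverse inequality.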

Suppose now that $X$ may be unbounded. As pointed out in Remark 3.18, by a simple scaling argument we can solve equation (1.1) for general nonnegative densities, not necessarily of unit mass. Lemma 4.1 shows that if we impose the bound (4.1) on the negative part of $U$, then $\mathcal U(\rho)$ is well-defined for $\rho\in\mathcal P_1^{ac}(X)$. We assume that the initial condition $\rho_0\in\mathcal P_1^{ac}(X)$ and that $\int_X|U(\rho_0)|\,dx$ is finite, and we start our approximation argument by replacing $X$ by $X_m:=X\cap B_m(0)$ and $\rho_0$ by $\rho_0^m:=\rho_0\,\mathbf 1_{B_m(0)}$. Here, $B_m(0)$ is the open ball of radius $m$, centered at the origin.

The previous argument provides us with a solution of equation (1.1), starting at $\rho_0^m$, for which we show that
\[
\max_{t\in[0,T]}\Big\{\int_{X_m}|x|\,\rho^m_t\,dx+\int_{X_m}|U(\rho^m_t)|\,dx\Big\}
\]
is bounded by a constant independent of $m$. Using the fact that for each $m$, $\rho^m$ satisfies the energy inequality (1.3), we obtain that a subsequence of $\{\rho^m\}$ converges to a solution of equation (1.1) starting at $\rho_0$. Moreover, as we will see, our approximation argument also allows us to relax the regularity assumptions on the Hamiltonian $H$. This shows a remarkable feature of the existence scheme described before, as it allows one to construct solutions of a highly nonlinear PDE such as (1.1) by approximating at the same time the initial datum and the Hamiltonian (and the same strategy could also be applied to relax the assumptions on $U$, cf. Section 4). This completes the existence part.

In order to prove uniqueness of solutions of equation (1.1) we make several additional assumptions on $P$ and $H$. First of all, we assume that $L(x,v)>L(x,0)$ for all $x,v\in\mathbb R^d$ such that $v\neq 0$, to ensure that the maximum principle holds.

Next, let $Q$ denote the inverse of $P$ and set $u(t,\cdot):=P(\rho_t)$. Then equation (1.1) is equivalent to
\[
\partial_tQ(u)=\mathrm{div}\,a(x,Q(u),\nabla u) \quad\text{in } \mathcal D'((0,T)\times X), \qquad (1.5)
\]
which is a quasilinear elliptic-parabolic equation. Here $a$ is given by equation (5.2). The study in [15] addresses contraction properties of solutions of equation (1.5) even when $\partial_tQ(u)$ is not a bounded measure but merely a distribution, as in our case. Our vector field $a$ does not necessarily satisfy the assumptions in [15]. (Indeed, one can check that it drastically violates the strict monotonicity condition of [15] for large $Q(u)$.) For this reason, we only study uniqueness of solutions with bounded initial conditions, even if, for this class of solutions, $a$ is still not strictly monotone in the sense of [2] or [15].

The strategy consists first in showing that there exists a Hamiltonian $\bar H\equiv\bar H(x,\rho,m)$ (cf. equation (5.3)) such that, for each $x$, $a(x,\rho,m)$ is contained in the subdifferential of $\bar H(x,\cdot,\cdot)$ at $(\rho,m)$. Then, assuming $\bar H(x,\cdot,\cdot)$ convex and $Q$ Lipschitz, we establish a contraction property for bounded solutions of (1.1). As a by-product we conclude uniqueness of bounded solutions.

The paper is structured as follows: in Section 2 we start with some preliminaries and set up the general framework for our study. The proof of the existence of solutions is then split into two cases. Section 3 is concerned with the case where $X$ is bounded, and we prove existence of solutions of equation (1.1) by applying the discrete algorithm described before. In Section 4 we relax the assumption that $X$ is bounded: under the hypotheses that $\rho_0\in\mathcal P_1^{ac}(X)$ and $\int_X|U(\rho_0)|\,dx$ is finite, we construct by approximation a solution of equation (1.1) as described above. Section 5 is concerned with uniqueness and stability in $L^1$ of bounded solutions of equation (1.1) when $Q$ is Lipschitz. To achieve that goal, we impose the stronger condition (5.5) on the Hamiltonian $H$. We avoid repeating known facts as much as possible, while trying to provide all the necessary details for a complete proof.

ACKNOWLEDGEMENTS. The collaboration on this manuscript started during Spring 2008 while the three authors were visiting IPAM–UCLA, whose financial support and hospitality are gratefully acknowledged. WG gratefully acknowledges the support provided by NSF grants DMS-03-54729 and DMS-06-00791. TY gratefully acknowledges RA support provided by NSF grants DMS-03-54729 and DMS-06-00791. It is a pleasure to express our gratitude to G. Savaré for fruitful discussions.

2. Preliminaries, notation and definitions

2.1. Main assumptions

We fix a convex superlinear function $\theta:[0,+\infty)\to[0,+\infty)$ such that $\theta(0)=0$. The main examples we have in mind are functions $\theta$ which are positive combinations of functions like $t\mapsto t^{\alpha}$ with $\alpha>1$ (for functions like $t\mapsto t(\ln t)_+$ or $e^t$, cf. Remark 3.19). We consider a function $L:\mathbb R^d\times\mathbb R^d\to\mathbb R$ which we call the Lagrangian. We assume that:

(L1) $L\in C^2(\mathbb R^d\times\mathbb R^d)$, and $L(x,0)=0$ for all $x\in\mathbb R^d$.

(L2) The matrix $\nabla_{vv}L(x,v)$ is strictly positive definite for all $x,v\in\mathbb R^d$.

(L3) There exist constants $A,C>0$ such that
\[
C\,\theta(|v|)+A\;\ge\;L(x,v)\;\ge\;\theta(|v|)-A \qquad \forall\,x,v\in\mathbb R^d.
\]

Let us remark that the condition $L(x,0)=0$ is not restrictive, as we can always replace $L$ by $L-L(x,0)$, and this would not affect the study of the problem we are going to consider. We also note that (L1), (L2) and (L3) ensure that $L$ is a so-called Tonelli Lagrangian (cf. for instance [8, Appendix B]). To prove a maximum principle for the solutions of (1.1), we will also need the assumption:

(L4) $L(x,v)\ge L(x,0)$ for all $x,v\in\mathbb R^d$.

The global Legendre transform $\mathcal L:\mathbb R^d\times\mathbb R^d\to\mathbb R^d\times\mathbb R^d$ of $L$ is defined by
\[
\mathcal L(x,v):=\big(x,\nabla_vL(x,v)\big).
\]
We denote by $\Phi_L:[0,+\infty)\times\mathbb R^d\times\mathbb R^d\to\mathbb R^d\times\mathbb R^d$ the Lagrangian flow defined by
\[
\begin{cases}
\dfrac{d}{dt}\Big[\nabla_vL\big(\Phi_L(t,x,v)\big)\Big]=\nabla_xL\big(\Phi_L(t,x,v)\big),\\[4pt]
\Phi_L(0,x,v)=(x,v).
\end{cases} \qquad (2.1)
\]
Furthermore, we denote by $\Phi_L^1:[0,+\infty)\times\mathbb R^d\times\mathbb R^d\to\mathbb R^d$ the first component of the flow: $\Phi_L^1:=\pi_1\circ\Phi_L$, $\pi_1(x,v):=x$.

The Legendre transform of $L$, called the Hamiltonian of $L$, is defined by
\[
H(x,p):=\sup_{v\in\mathbb R^d}\big\{\langle v,p\rangle-L(x,v)\big\}.
\]
Moreover we define the Legendre transform of $\theta$ as
\[
\theta^*(s):=\sup_{t\ge 0}\big\{st-\theta(t)\big\},\qquad s\in\mathbb R.
\]
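A standard model case, in line with the growth condition $L(x,v)\sim|v|^{\alpha}$ mentioned in the introduction (here $\alpha'=\alpha/(\alpha-1)$ and $s_+:=\max\{s,0\}$):
\[
L(x,v)=\frac{|v|^{\alpha}}{\alpha},\ \alpha>1
\ \Longrightarrow\
H(x,p)=\sup_{v\in\mathbb R^d}\Big\{\langle v,p\rangle-\frac{|v|^{\alpha}}{\alpha}\Big\}=\frac{|p|^{\alpha'}}{\alpha'},
\qquad
\theta(t)=\frac{t^{\alpha}}{\alpha}\ \Longrightarrow\ \theta^*(s)=\frac{(s_+)^{\alpha'}}{\alpha'}.
\]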

It is well-known that $L$ satisfies (L1), (L2) and (L3) if and only if $H$ satisfies the following conditions:

(H1) $H\in C^2(\mathbb R^d\times\mathbb R^d)$, and $H(x,p)\ge 0$ for all $x,p\in\mathbb R^d$.

(H2) The matrix $\nabla_{pp}H(x,p)$ is strictly positive definite for all $x,p\in\mathbb R^d$.

(H3) $\theta^*:\mathbb R\to[0,+\infty)$ is convex, superlinear at $+\infty$, and we have
\[
-A+C\,\theta^*\Big(\frac{|p|}{C}\Big)\;\le\;H(x,p)\;\le\;\theta^*(|p|)+A \qquad \forall\,x,p\in\mathbb R^d.
\]

Moreover (L4) is equivalent to:

(H4) $\nabla_pH(x,0)=0$ for all $x\in\mathbb R^d$.

We also introduce some weaker conditions on $L$, which combined with (L3) make it a weak Tonelli Lagrangian:

(L1w) $L\in C^1(\mathbb R^d\times\mathbb R^d)$, and $L(x,0)=0$ for all $x\in\mathbb R^d$.
(L2w) For each $x\in\mathbb R^d$, $L(x,\cdot)$ is strictly convex.

Under (L1w), (L2w) and (L3), the global Legendre transform is a homeomorphism, and the Hamiltonian associated to $L$ satisfies (H3) and

(H1w) $H\in C^1(\mathbb R^d\times\mathbb R^d)$, and $H(x,p)\ge 0$ for all $x,p\in\mathbb R^d$.
(H2w) For each $x\in\mathbb R^d$, $H(x,\cdot)$ is strictly convex.

(Cf. for instance [8, Appendix B].) In this paper we will mainly work assuming (L1), (L2) and (L3), except in Section 4 where we relax the assumptions on $L$ (and correspondingly those on $H$) to (L1w), (L2w) and (L3).

Let $U:[0,+\infty)\to\mathbb R$ be a given function such that
\[
U\in C^2((0,+\infty))\cap C([0,+\infty)),\qquad U''>0, \qquad (2.2)
\]
and
\[
U(0)=0,\qquad \lim_{t\to+\infty}\frac{U(t)}{t}=+\infty. \qquad (2.3)
\]
We set $U(t)=+\infty$ for $t\in(-\infty,0)$, so that $U$ remains convex and lower semicontinuous on the whole of $\mathbb R$. We denote by $U^*$ the Legendre transform of $U$:
\[
U^*(s):=\sup_{t\in\mathbb R}\{st-U(t)\}=\sup_{t\ge 0}\{st-U(t)\}. \qquad (2.4)
\]
When $\rho$ is a Borel probability density on $\mathbb R^d$ such that $U^-(\rho)\in L^1(\mathbb R^d)$ we define the internal energy
\[
\mathcal U(\rho):=\int_{\mathbb R^d}U(\rho)\,dx.
\]
If $\rho$ represents the density of a fluid, one interprets $P(\rho)$ as a pressure, where
\[
P(s):=U'(s)s-U(s). \qquad (2.5)
\]
Note that $P'(s)=sU''(s)$, so that $P$ is increasing on $[0,+\infty)$.
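Two standard examples of the pair $(U,P)$; under the conventions of the introduction they turn (1.1) with $H(x,p)=|p|^2/2$ into the heat and the porous medium equations, respectively (a consistency check, not used in the sequel):
\[
U(s)=s\ln s\ \Rightarrow\ P(s)=sU'(s)-U(s)=s,\qquad \rho_tV_t=-\nabla[P(\rho_t)]=-\nabla\rho_t,\qquad \partial_t\rho_t=\Delta\rho_t;
\]
\[
U(s)=\frac{s^{m}}{m-1},\ m>1\ \Rightarrow\ P(s)=s^{m},\qquad \partial_t\rho_t=\Delta\big(\rho_t^{\,m}\big).
\]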


2.2. Notation and definitions

If $\rho$ is a probability density and $\alpha>0$, we write
\[
M_{\alpha}(\rho):=\int_{\mathbb R^d}|x|^{\alpha}\rho(x)\,dx
\]
for its moment of order $\alpha$. If $X\subset\mathbb R^d$ is a Borel set, we denote by $\mathcal P^{ac}(X)$ the set of all Borel probability densities on $X$. If $\rho\in\mathcal P^{ac}(X)$, we tacitly identify it with its extension defined to be $0$ outside $X$. We denote by $\mathcal P(X)$ the set of Borel probability measures $\mu$ on $\mathbb R^d$ that are concentrated on $X$: $\mu(X)=1$. Finally, we denote by $\mathcal P_{\alpha}^{ac}(X)\subset\mathcal P^{ac}(X)$ the set of probability densities $\rho$ on $X$ such that $M_{\alpha}(\rho)$ is finite. When $\alpha\ge 1$, this is a metric space when endowed with the Wasserstein distance $W_{\alpha}$ (cf. equation (2.10) below). We denote by $\mathcal L^d$ the $d$-dimensional Lebesgue measure.

Let $u,v:X\subset\mathbb R^d\to\mathbb R\cup\{\pm\infty\}$. We denote by $u\oplus v$ the function $(x,y)\mapsto u(x)+v(y)$, wherever it is well-defined. The set of points $x$ such that $u(x)\in\mathbb R$ is called the domain of $u$ and denoted by $\mathrm{dom}\,u$. We denote by $\partial^-u(x)$ the subdifferential of $u$ at $x$. Similarly, we denote by $\partial^+u(x)$ the superdifferential of $u$ at $x$. The set of points where $u$ is differentiable is called the domain of $\nabla u$ and is denoted by $\mathrm{dom}\,\nabla u$.

Let $u:\mathbb R^d\to\mathbb R\cup\{+\infty\}$. Its Legendre transform is $u^*:\mathbb R^d\to\mathbb R\cup\{+\infty\}$, defined by
\[
u^*(y)=\sup_{x\in\mathbb R^d}\big\{\langle x,y\rangle-u(x)\big\}.
\]
In case $u:X\subset\mathbb R^d\to\mathbb R\cup\{+\infty\}$, its Legendre transform is defined by identifying $u$ with its extension which takes the value $+\infty$ outside $X$.

Finally, for $f:(a,b)\to\mathbb R$, we set
\[
\frac{d^+f}{dt}\Big|_{t=c}:=\limsup_{h\to 0^+}\frac{f(c+h)-f(c)}{h},\qquad
\frac{d^-f}{dt}\Big|_{t=c}:=\liminf_{h\to 0^+}\frac{f(c+h)-f(c)}{h}.
\]

Definition 2.1 (c-transform). Let $c:\mathbb R^d\times\mathbb R^d\to\mathbb R\cup\{+\infty\}$, let $X\subset\mathbb R^d$ and let $u,v:X\to\mathbb R\cup\{-\infty\}$. The first $c$-transform of $u$, $u^c:X\to\mathbb R\cup\{-\infty\}$, and the second $c$-transform of $v$, $v_c:X\to\mathbb R\cup\{-\infty\}$, are respectively defined by
\[
u^c(y):=\inf_{x\in X}\big\{c(x,y)-u(x)\big\},\qquad
v_c(x):=\inf_{y\in X}\big\{c(x,y)-v(y)\big\}. \qquad (2.6)
\]

Definition 2.2 (c-concavity). We say that $u:X\to\mathbb R\cup\{-\infty\}$ is first $c$-concave if there exists $v:X\to\mathbb R\cup\{-\infty\}$ such that $u=v_c$. Similarly, $v:X\to\mathbb R\cup\{-\infty\}$ is second $c$-concave if there exists $u:X\to\mathbb R\cup\{-\infty\}$ such that $v=u^c$.

For simplicity we will omit the words "first" and "second" when referring to $c$-transform and $c$-concavity.

For $h>0$, we define the action $\mathcal A_h(\sigma)$ of an absolutely continuous curve $\sigma:[0,h]\to\mathbb R^d$ as
\[
\mathcal A_h(\sigma):=\int_0^hL\big(\sigma(\tau),\dot\sigma(\tau)\big)\,d\tau
\]
and the cost function
\[
c_h(x,y):=\inf\Big\{\mathcal A_h(\sigma)\ :\ \sigma\in W^{1,1}(0,h;\mathbb R^d),\ \sigma(0)=x,\ \sigma(h)=y\Big\}. \qquad (2.7)
\]
For $\mu_0,\mu_1\in\mathcal P(\mathbb R^d)$, let $\Gamma(\mu_0,\mu_1)$ be the set of probability measures on $\mathbb R^d\times\mathbb R^d$ which have $\mu_0$ and $\mu_1$ as marginals. Set
\[
\mathcal C_h(\mu_0,\mu_1):=\inf\Big\{\int_{\mathbb R^d\times\mathbb R^d}c_h(x,y)\,d\gamma(x,y)\ :\ \gamma\in\Gamma(\mu_0,\mu_1)\Big\} \qquad (2.8)
\]
and
\[
\mathcal W_{\theta,h}(\mu_0,\mu_1):=h\,\inf\Big\{\int_{\mathbb R^d\times\mathbb R^d}\theta\Big(\frac{|y-x|}{h}\Big)\,d\gamma(x,y)\ :\ \gamma\in\Gamma(\mu_0,\mu_1)\Big\}. \qquad (2.9)
\]

Remark 2.3. By Remark 2.11, $c_h$ is continuous. In particular, there always exists a minimizer for (2.8) (this is trivial if $\mathcal C_h$ is identically $+\infty$ on $\Gamma(\rho_0,\rho_1)$). We denote the set of its minimizers by $\Gamma_h(\rho_0,\rho_1)$. Similarly, there is a minimizer for (2.9), and we denote the set of its minimizers by $\Gamma_{\theta,h}(\rho_0,\rho_1)$.

We also recall the definition of the $\alpha$-Wasserstein distance, $\alpha\ge 1$:
\[
W_{\alpha}(\mu_0,\mu_1):=\inf\Big\{\int_{\mathbb R^d\times\mathbb R^d}|y-x|^{\alpha}\,d\gamma(x,y)\ :\ \gamma\in\Gamma(\mu_0,\mu_1)\Big\}^{1/\alpha}. \qquad (2.10)
\]
It is well-known (cf. for instance [3]) that $W_{\alpha}$ metrizes the weak topology of measures on bounded subsets of $\mathbb R^d$. Although we define $W_{\alpha}$ here for all $\alpha\ge 1$, only $W_1$ will be used except after Subsection 3.5.

The following fact can be checked easily:
\[
\mathcal C_h(\mu_0,\mu_2)\;\le\;\mathcal C_{h-t}(\mu_0,\mu_1)+\mathcal C_t(\mu_1,\mu_2) \qquad (2.11)
\]
for all $t\in[0,h]$ and $\mu_0,\mu_1,\mu_2\in\mathcal P(\mathbb R^d)$.

2.3. Properties of enthalpy and pressure functionals

In this subsection, we assume that (2.2) and (2.3) hold.

Lemma 2.4. The following properties hold:

(i) $U':[0,+\infty)\to\mathbb R$ is strictly increasing, and so invertible. Its inverse is of class $C^1$ and $\lim_{t\to+\infty}U'(t)=+\infty$.
(ii) $U^*\in C^1(\mathbb R)$ is nonnegative, and $(U^*)'(s)\ge 0$ for all $s\in\mathbb R$.
(iii) $\lim_{s\to+\infty}(U^*)'(s)=+\infty$.
(iv) $\lim_{s\to+\infty}\dfrac{U^*(s)}{s}=+\infty$.
(v) $P:[0,+\infty)\to[0,+\infty)$ is strictly increasing, bijective, $\lim_{t\to+\infty}P(t)=+\infty$, and its inverse $Q:[0,+\infty)\to[0,+\infty)$ satisfies $\lim_{s\to+\infty}Q(s)=+\infty$.

Proof. (i) Since $U$ is convex and $U(0)=0$, we have $U'(t)\ge U(t)/t$. This, together with $U''>0$ and the superlinearity of $U$, easily implies the result.

(ii) $U^*\ge 0$ follows from $U(0)=0$. The remaining part is a consequence of $(U^*)'(U'(t))=t$ for $t>0$, together with $U^*(s)=0$ (and so $(U^*)'(s)=0$) for $s\le U'(0^+)$.

(iii) Follows from (i) and the identity $(U^*)'(U'(t))=t$ for $t>0$.

(iv) Since $U^*$ is convex and nonnegative we have $U^*(s)\ge\frac s2\,(U^*)'\big(\frac s2\big)$, so that the result follows from (iii).

(v) Observe that $P(t)=U^*(U'(t))\ge 0$ by (ii). Since $U'$ is monotone nondecreasing, for $t<1$ we have $P(t)\le tU'(1)-U(t)$. We conclude that $\lim_{t\to 0^+}P(t)=0$. The remaining statements follow.
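For instance, with $U(s)=s\ln s$ all the quantities in Lemma 2.4 are explicit, and (i)–(v) can be checked directly:
\[
U'(s)=\ln s+1,\qquad U^*(s)=\sup_{t\ge 0}\{st-t\ln t\}=e^{\,s-1},\qquad (U^*)'(s)=e^{\,s-1},\qquad P(s)=s,\qquad Q(s)=s.
\]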

Remark 2.5. Let $X\subset\mathbb R^d$ be a bounded set, and let $\rho\in\mathcal P^{ac}(X)$ be a probability density. Recall that we extend $\rho$ outside $X$ by setting its value to be identically $0$. If $R>0$ is such that $X\subset B_R(0)$, we have $\int_{\mathbb R^d}\theta(|x|)\rho(x)\,dx\le\theta(R)$. Moreover, since by convexity $U(t)\ge U(1)+U'(1)(t-1)\equiv at+b$ for $t\ge 0$, $\int_{\mathbb R^d}U^-(\rho)\,dx$ is bounded on $\mathcal P^{ac}(X)$ by $|a|+|b|\,\mathcal L^d(X)$. Hence $\mathcal U(\rho)$ is always well-defined on $\mathcal P^{ac}(X)$, and is finite if and only if $U^+(\rho)\in L^1(X)$.

The following lemma is a standard result of the calculus of variations, cf. for instance [5] (for a more general result on unbounded domains, cf. Section 4):

Lemma 2.6. Let $X\subset\mathbb R^d$ and suppose $\{\rho_n\}_{n\in\mathbb N}\subset\mathcal P^{ac}(X)$ converges weakly to $\rho$ in $L^1(X)$. Assume that either $X$ is bounded, or $X$ is unbounded and $U\ge 0$. Then
\[
\liminf_{n\to\infty}\mathcal U(\rho_n)\;\ge\;\mathcal U(\rho).
\]

2.4. Properties of $H$ and the cost functions

Lemma 2.7. The following properties hold for $0<\bar h<h$ and $x,y\in\mathbb R^d$:

(i) $c_h(x,x)\le 0$.
(ii) $c_h(x,y)\le c_{\bar h}(x,y)$.
(iii) $\displaystyle C\,h\,\theta\Big(\frac{|x-y|}{h}\Big)+Ah\;\ge\;c_h(x,y)\;\ge\;h\,\theta\Big(\frac{|x-y|}{h}\Big)-Ah\;\ge\;-Ah.$

Proof. (i) Set $\sigma(t)\equiv x$ for $t\in[0,h]$ and recall that $L(x,0)=0$ to get $c_h(x,x)\le\mathcal A_h(\sigma)=0$.

(ii) Given $\sigma\in W^{1,1}(0,\bar h;\mathbb R^d)$ satisfying $\sigma(0)=x$ and $\sigma(\bar h)=y$, we can associate an extension to $(\bar h,h]$, which we still denote $\sigma$, such that $\sigma(t)=y$ for $t\in(\bar h,h]$. We have $\sigma\in W^{1,1}(0,h;\mathbb R^d)$, $\sigma(0)=x$ and $\sigma(h)=y$. Hence,
\[
c_h(x,y)\le\mathcal A_h(\sigma)=\mathcal A_{\bar h}(\sigma)+\int_{\bar h}^{h}L(y,0)\,dt=\mathcal A_{\bar h}(\sigma).
\]
Since $\sigma\in W^{1,1}(0,\bar h;\mathbb R^d)$ is arbitrary, this concludes the proof of (ii).

(iii) The first inequality is obtained using (L3) and $c_h(x,y)\le\mathcal A_h(\sigma)$ with $\sigma(t)=(1-t/h)x+(t/h)y$, while the second one follows from Jensen's inequality.

The next proposition can readily be derived from the standard theory of Hamiltonian systems (cf. e.g. [8, Appendix B]):

Proposition 2.8. Under the assumptions (L1), (L2) and (L3), (2.7) admits a minimizer $\sigma_{x,y}$ for any $x,y\in\mathbb R^d$. We have that $\sigma_{x,y}\in C^2([0,h])$ and satisfies the Euler–Lagrange equation
\[
\big(\sigma_{x,y}(\tau),\dot\sigma_{x,y}(\tau)\big)=\Phi_L\big(\tau,x,\dot\sigma_{x,y}(0)\big) \qquad \forall\,\tau\in[0,h], \qquad (2.12)
\]
where $\Phi_L$ is the Lagrangian flow defined in equation (2.1). Moreover, for any $r>0$ and $S\subset(0,+\infty)$ a compact set, there exists a constant $k_S(r)$, depending on $S$ and $r$ only, such that $\|\sigma_{x,y}\|_{C^2([0,h])}\le k_S(r)$ if $|x|,|y|\le r$ and $h\in S$.

Remark 2.9. Let $\sigma$ be a minimizer of the problem (2.7), and set
\[
p(\tau):=\nabla_vL\big(\sigma(\tau),\dot\sigma(\tau)\big).
\]

(a) The Euler–Lagrange equation (2.12) implies that $\sigma$ and $p$ are of class $C^1$ and satisfy the system of ordinary differential equations
\[
\dot\sigma(\tau)=\nabla_pH\big(\sigma(\tau),p(\tau)\big),\qquad
\dot p(\tau)=-\nabla_xH\big(\sigma(\tau),p(\tau)\big). \qquad (2.13)
\]
(b) The Hamiltonian is constant along the integral curve $(\sigma(\tau),p(\tau))$, i.e. $H(\sigma(\tau),p(\tau))=H(\sigma(0),p(0))$ for $\tau\in[0,h]$.

The following lemma is standard (cf. for instance [8, Appendix B]):

Lemma 2.10. Under the assumptions in Proposition 2.8, let $\sigma$ be a minimizer of (2.7), and define $p_i:=\nabla_vL(\sigma(i),\dot\sigma(i))$ for $i=0,h$. For $r,m>0$ there exists a constant $\ell_h(r,m)$, depending on $h,r,m$ only, such that if $x,y\in B_r(0)$ and $w\in B_m(0)$, then:

(a) $c_h(x+w,y)\le c_h(x,y)-\langle p_0,w\rangle+\tfrac12\,\ell_h(r,m)|w|^2$;
(b) $c_h(x,y+w)\le c_h(x,y)+\langle p_h,w\rangle+\tfrac12\,\ell_h(r,m)|w|^2$.

Remark 2.11. This lemma says that $-p_0\in\partial^+c_h(\cdot,y)(x)$, and for $y\in B_r(0)$ the restriction of $c_h(\cdot,y)$ to $B_r(0)$ is $\ell_h(r,m)$-concave. Similarly, $p_h\in\partial^+c_h(x,\cdot)(y)$, and for $x\in B_r(0)$ the restriction of $c_h(x,\cdot)$ to $B_r(0)$ is $\ell_h(r,m)$-concave.

Lemma 2.12. Suppose (L1), (L2) and (L3) hold. Let $a,b,r\in(0,+\infty)$ be such that $a<b$ and set $S=[a,b]$. Then there exists a constant $\tilde k_S(r)$, depending on $S$ and $r$ only, such that
\[
|c_h(x,y)-c_{\bar h}(x,y)|\;\le\;\tilde k_S(r)\,|h-\bar h|
\]
for all $h,\bar h\in S$ and all $x,y\in\mathbb R^d$ satisfying $|x|,|y|\le r$.

Proof. Let $k_S(r)$ be the constant appearing in Proposition 2.8 and let
\[
E_1:=\sup_{x,v}\big\{|L(x,v)|\ :\ |x|,|v|\le k_S(r)\big\},\qquad
E_2:=\sup_{x,v}\Big\{|\nabla_vL(x,v)|\ :\ |x|\le k_S(r),\ |v|\le k_S(r)\tfrac ba\Big\}.
\]
Fix $h,\bar h\in S$ such that $\bar h<h$. For $x,y\in\mathbb R^d$ such that $|x|,|y|\le r$ we denote by $\sigma$ a minimizer of (2.7). Define $\bar\sigma(t)=\sigma(th/\bar h)$ for $t\in[0,\bar h]$. Then $\bar\sigma\in C^2([0,\bar h])$, $\bar\sigma(0)=x$ and $\bar\sigma(\bar h)=y$. Then
\[
c_{\bar h}(x,y)\le\int_0^{\bar h}L\big(\bar\sigma,\dot{\bar\sigma}\big)\,dt
=\frac{\bar h}{h}\int_0^{h}L\Big(\sigma,\frac{h}{\bar h}\dot\sigma\Big)\,ds
=\frac{\bar h}{h}\,c_h(x,y)+\frac{\bar h}{h}\int_0^{h}\Big[L\Big(\sigma,\frac{h}{\bar h}\dot\sigma\Big)-L(\sigma,\dot\sigma)\Big]\,ds.
\]
This implies
\[
c_{\bar h}(x,y)\le\frac{\bar h}{h}\,c_h(x,y)+\frac{\bar h}{h}\,h\,E_2\Big(\frac{h}{\bar h}-1\Big)k_S(r)
=\frac{\bar h}{h}\,c_h(x,y)+(h-\bar h)E_2\,k_S(r),
\]
and so
\[
c_{\bar h}(x,y)-c_h(x,y)\le\frac{\bar h-h}{h}\,c_h(x,y)+(h-\bar h)E_2\,k_S(r)
\le|h-\bar h|\big(E_1+E_2\,k_S(r)\big), \qquad (2.14)
\]
where we used the trivial bound $c_h(x,y)\le E_1h$. Since by Lemma 2.7(ii) $c_h(x,y)\le c_{\bar h}(x,y)$, (2.14) proves the lemma.

2.5. Total works and their properties

In this subsection we assume that (2.2) and (2.3) hold.

Lemma 2.13. The following properties hold:

(i) For any $\mu\in\mathcal P(\mathbb R^d)$ we have $\mathcal C_h(\mu,\mu)\le 0$. In particular, for any $\mu,\bar\mu\in\mathcal P(\mathbb R^d)$, $\mathcal C_{\bar h}(\mu,\bar\mu)\le\mathcal C_h(\mu,\bar\mu)$ if $h<\bar h$.

(ii) For any $h>0$ and $\mu,\bar\mu\in\mathcal P(\mathbb R^d)$,
\[
-Ah\;\le\;-Ah+\mathcal W_{\theta,h}(\mu,\bar\mu)\;\le\;\mathcal C_h(\mu,\bar\mu)\;\le\;C\,\mathcal W_{\theta,h}(\mu,\bar\mu)+Ah.
\]

(iii) For any $K>0$ there exists a constant $C(K)>0$ such that
\[
W_1(\mu,\bar\mu)\;\le\;\frac1K\,\mathcal W_{\theta,h}(\mu,\bar\mu)+\frac{C(K)}{K}\,h \qquad \forall\,h>0,\ \mu,\bar\mu\in\mathcal P(\mathbb R^d). \qquad (2.15)
\]

Proof. (i) The first part follows from $c_h(x,x)\le 0$, while the second statement is a consequence of the first one and of $\mathcal C_{\bar h}(\mu,\bar\mu)\le\mathcal C_h(\mu,\bar\mu)+\mathcal C_{\bar h-h}(\bar\mu,\bar\mu)$.

(ii) It follows directly from Lemma 2.7(iii).

(iii) Thanks to the superlinearity of $\theta$, for any $K>0$ there exists a constant $C(K)>0$ such that
\[
\theta(s)\;\ge\;Ks-C(K) \qquad \forall\,s\ge 0. \qquad (2.16)
\]
Fix now $\gamma\in\Gamma_{\theta,h}(\mu,\bar\mu)$. Then
\[
W_1(\mu,\bar\mu)\le\int_{\mathbb R^d\times\mathbb R^d}|x-y|\,d\gamma(x,y)
=\frac hK\int_{\mathbb R^d\times\mathbb R^d}\Big[\frac{K|x-y|}{h}-C(K)\Big]\,d\gamma(x,y)+\frac{C(K)}{K}\,h
\le\frac hK\int_{\mathbb R^d\times\mathbb R^d}\theta\Big(\frac{|x-y|}{h}\Big)\,d\gamma(x,y)+\frac{C(K)}{K}\,h
=\frac1K\,\mathcal W_{\theta,h}(\mu,\bar\mu)+\frac{C(K)}{K}\,h.
\]

Lemma 2.14. Let $h>0$. Suppose that $\{\rho_n\}_{n\in\mathbb N}$ converges weakly to $\rho$ in $L^1(\mathbb R^d)$ and that $\{M_1(\rho_n)\}_{n\in\mathbb N}$ is bounded. Then $M_1(\rho)$ is finite, and we have
\[
\liminf_{n\to\infty}\mathcal C_h(\bar\rho,\rho_n)\;\ge\;\mathcal C_h(\bar\rho,\rho) \qquad \forall\,\bar\rho\in\mathcal P_1^{ac}(X).
\]

Proof. The fact that $M_1(\rho)$ is finite follows from the weak lower semicontinuity in $L^1(\mathbb R^d)$ of $M_1$. Let now $\gamma_n\in\Gamma_h(\bar\rho,\rho_n)$. Since $\{M_1(\rho_n)\}_{n\in\mathbb N}$ is bounded we have
\[
\sup_{n\in\mathbb N}\int_{\mathbb R^d\times\mathbb R^d}(|x|+|y|)\,\gamma_n(dx,dy)<+\infty. \qquad (2.17)
\]
As $|x|+|y|$ is coercive, equation (2.17) implies that $\{\gamma_n\}_{n\in\mathbb N}$ admits a cluster point $\gamma$ for the topology of narrow convergence. Furthermore, it is easy to see that $\gamma\in\Gamma(\bar\rho,\rho)$ and so, since $c_h$ is continuous and bounded below, we get
\[
\liminf_{n\to\infty}\mathcal C_h(\bar\rho,\rho_n)=\liminf_{n\to\infty}\int_{\mathbb R^d\times\mathbb R^d}c_h(x,y)\,d\gamma_n(x,y)\;\ge\;\int_{\mathbb R^d\times\mathbb R^d}c_h(x,y)\,d\gamma(x,y)\;\ge\;\mathcal C_h(\bar\rho,\rho).
\]

3. Existence of solutions in a bounded domain

Throughout this section we assume that (2.2) and (2.3) hold. We recall that $L$ satisfies (L1), (L2) and (L3). We also assume that $X\subset\mathbb R^d$ is an open bounded set whose boundary $\partial X$ is of zero Lebesgue measure, and we denote by $\overline X$ its closure.

The goal is to prove existence of distributional solutions to equation (1.1) by using an approximation by discretization in time. More precisely, in Subsection 3.1 we construct approximate solutions at discrete times $\{h,2h,3h,\dots\}$ by an implicit Euler scheme, which involves the minimization of a suitable functional. Then in Subsection 3.2 we explicitly characterize the minimizer by introducing a dual problem. We then study the properties of an augmented action functional, which allows us to prove a priori bounds on De Giorgi's variational and geodesic interpolations (cf. Subsection 3.4). Finally, using these bounds we can take the limit as $h\to 0$ and prove existence of distributional solutions to equation (1.1) when $\theta$ behaves at infinity like $t^{\alpha}$, $\alpha>1$.

3.1. The discrete variational problem

We fix a time step $h>0$ and for simplicity of notation we set $c=c_h$. We fix $\rho_0\in\mathcal P^{ac}(X)$, and we consider the variational problem
\[
\inf_{\rho\in\mathcal P^{ac}(X)}\ \mathcal C_h(\rho_0,\rho)+\mathcal U(\rho). \qquad (3.1)
\]

Lemma 3.1. There exists a unique minimizer $\rho$ of problem (3.1). Suppose in addition that (L4) holds. If $M\in(0,+\infty)$ and $\rho_0\le M$, then $\rho\le M$. In other words, the maximum principle holds.

Proof. Existence of a minimizer $\rho$ follows by classical methods in the calculus of variations, thanks to the lower semicontinuity of the functional $\rho\mapsto\mathcal C_h(\rho_0,\rho)+\mathcal U(\rho)$ in the weak topology of measures and to the superlinearity of $U$ (which implies that any limit point of a minimizing sequence still belongs to $\mathcal P^{ac}(X)$).

To prove uniqueness, let $\rho_1$ and $\rho_2$ be two minimizers, and take $\gamma_1\in\Gamma_h(\rho_0,\rho_1)$, $\gamma_2\in\Gamma_h(\rho_0,\rho_2)$ (cf. Remark 2.3). Then $\frac{\gamma_1+\gamma_2}{2}\in\Gamma\big(\rho_0,\frac{\rho_1+\rho_2}{2}\big)$, so that
\[
\mathcal C_h\Big(\rho_0,\frac{\rho_1+\rho_2}{2}\Big)\le\int_{X\times X}c(x,y)\,d\Big(\frac{\gamma_1+\gamma_2}{2}\Big)=\frac{\mathcal C_h(\rho_0,\rho_1)+\mathcal C_h(\rho_0,\rho_2)}{2}.
\]
Moreover, by strict convexity of $U$,
\[
\mathcal U\Big(\frac{\rho_1+\rho_2}{2}\Big)\le\frac{\mathcal U(\rho_1)+\mathcal U(\rho_2)}{2},
\]
with equality if and only if $\rho_1=\rho_2$. This implies uniqueness.

Thanks to (L1) and (L4) one easily gets that $c_h(x,x)<c_h(x,y)$ for all $x,y\in X$, $x\neq y$. Thanks to this fact, the proof of the maximum principle is a folklore argument which can be found in [18].

3.2. Characterization of minimizers via a dual problem

The aim of this paragraph is to completely characterize the minimizer $\rho$ provided by Lemma 3.1. We are going to identify a problem, dual to problem (3.1), and to use it to achieve that goal.

We define $E\equiv E_c$ to be the set of pairs $(u,v)\in C(X)\times C(X)$ such that $u(x)+v(y)\le c(x,y)$ for all $x,y\in X$, and we write $u\oplus v\le c$. We consider the functional
\[
J(u,v):=\int_Xu\,\rho_0\,dx-\int_XU^*(-v)\,dx.
\]
To alleviate the notation, we have omitted to display the $\rho_0$ dependence in $J$.

We recall some well-known results:

Lemma 3.2. Let $u\in C_b(X)$. Then $(u^c)_c\ge u$, $(u_c)^c\ge u$, $((u^c)_c)^c=u^c$, and $((u_c)^c)_c=u_c$. Moreover:

(i) If $u=v_c$ for some $v\in C(X)$, then:

(a) There exists a constant $A=A(c,X)$, independent of $u$, such that $u$ is $A$-Lipschitz and $A$-semiconcave.
(b) If $\bar x\in X$ is a point of differentiability of $u$, $\bar y\in X$, and $u(\bar x)+v(\bar y)=c(\bar x,\bar y)$, then $\bar x$ is a point of differentiability of $c(\cdot,\bar y)$ and $\nabla u(\bar x)=\nabla_xc(\bar x,\bar y)$. Furthermore $\bar y=\Phi_L^1\big(h,\bar x,\nabla_pH(\bar x,-\nabla u(\bar x))\big)$, and in particular $\bar y$ is uniquely determined.

(ii) If $v=u^c$ for some $u\in C(X)$, then:

(a) There exists a constant $A=A(c,X)$, independent of $v$, such that $v$ is $A$-Lipschitz and $A$-semiconcave.
(b) If $\bar x\in X$, $\bar y\in X$ is a point of differentiability of $v$, and $u(\bar x)+v(\bar y)=c(\bar x,\bar y)$, then $\bar y$ is a point of differentiability of $c(\bar x,\cdot)$ and $\nabla v(\bar y)=\nabla_yc(\bar x,\bar y)$. Furthermore, $\bar x=\Phi_L^1\big(-h,\bar y,\nabla_pH(\bar y,\nabla v(\bar y))\big)$, and in particular $\bar x$ is uniquely determined.

In particular, if $K\subset\mathbb R$ is bounded, the set $\{v_c:v\in C(X),\ v_c(X)\cap K\neq\emptyset\}$ is compact in $C(X)$, and weak-* compact in $W^{1,\infty}(X)$.

Proof. Despite the fact that the assertions made in the lemma are now part of the folklore of the Monge–Kantorovich theory, we sketch the main steps of the proof. The first part is classical, and can be found in [12, 16, 17].

Regarding (i)-(a), we observe that by Remark 2.11 the functions $c(\cdot,y)$ are uniformly semiconcave for $y\in X$, so that $u$ is semiconcave as the infimum of uniformly semiconcave functions (cf. for instance [8, Appendix A]). In particular $u$ is Lipschitz, with a Lipschitz constant bounded by $\|\nabla_xc\|_{L^\infty(X\times X)}$.

To prove (i)-(b), we note that $\partial^-u(\bar x)\subset\partial^-c(\cdot,\bar y)(\bar x)$. Since by Remark 2.11 $\partial^+c(\cdot,\bar y)(\bar x)$ is nonempty, we conclude that $c(\cdot,\bar y)$ is differentiable at $\bar x$ if $u$ is. Hence
\[
\nabla u(\bar x)=\nabla_xc(\bar x,\bar y)=-\nabla_vL\big(\sigma(0),\dot\sigma(0)\big),
\]
where $\sigma:[0,h]\to X$ is (the unique curve) such that $c(\bar x,\bar y)=\int_0^hL(\sigma,\dot\sigma)\,dt$ (cf. [8, Section 4 and Appendix B]). This together with equation (2.12) implies
\[
\bar y=\Phi_L^1\big(h,\bar x,\nabla_pH(\bar x,-\nabla u(\bar x))\big). \qquad (3.2)
\]
The proof of (ii) is analogous.

Remark 3.3. By Lemma 3.2, if $u=v_c$ for some $v\in C_b(X)$ we can uniquely define, $\mathcal L^d$-a.e., a map $T:\mathrm{dom}\,\nabla u\to X$ such that $u(x)+v(Tx)=c(x,Tx)$. This map is continuous on $\mathrm{dom}\,\nabla u$, and since $\nabla u$ can be extended to a Borel map on $X$ we conclude that $T$ can be extended to a Borel map on $X$, too. Moreover we have $\nabla u(x)=\nabla_xc(x,Tx)$ $\mathcal L^d$-a.e., and $T$ is the unique optimal map pushing any density $\rho\in\mathcal P^{ac}(X)$ forward to $\bar\mu:=T_\#(\rho\,\mathcal L^d)\in\mathcal P(X)$ (cf. for instance [12, 16, 17]).

Lemma 3.4. If $(u,v)\in E$ and $\rho\in\mathcal P^{ac}(X)$, then $J(u,v)\le\mathcal C_h(\rho_0,\rho)+\mathcal U(\rho)$.

Proof. Let $\gamma\in\Gamma(\rho_0,\rho)$. Since $U(\rho(y))+U^*(-v(y))\ge-\rho(y)v(y)$ and $(u,v)\in E$, integrating the inequality we get
\[
\int_X\big[U(\rho(y))+U^*(-v(y))\big]\,dy\;\ge\;-\int_X\rho(y)v(y)\,dy\;\ge\;-\int_{X\times X}c(x,y)\,d\gamma(x,y)+\int_X\rho_0(x)u(x)\,dx. \qquad (3.3)
\]
Rearranging the expressions in equation (3.3) and optimizing over $\Gamma(\rho_0,\rho)$ we obtain the result.

Lemma 3.5. There exists $(u,v)\in E$ maximizing $J(u,v)$ over $E$ and satisfying $u^c=v$ and $v_c=u$. Furthermore:

(i) $u$ and $v$ are Lipschitz, with a Lipschitz constant bounded by $\|\nabla c\|_{L^\infty(X\times X)}$.
(ii) $\rho_v:=(U^*)'(-v)$ is a probability density on $X$, and the optimal map $T$ associated to $u$ (cf. Remark 3.3) pushes $\rho_0\mathcal L^d$ forward to $\rho_v\mathcal L^d$.

Proof. Note that if $u^c=v$ and $v_c=u$, then (i) is a direct consequence of Lemma 3.2.

Before proving the first statement of the lemma, let us show that it implies (ii). Let $\varphi\in C(X)$ and set
\[
v_\varepsilon:=v+\varepsilon\varphi,\qquad u_\varepsilon:=(v_\varepsilon)_c.
\]
Remark 3.3 says that for $\mathcal L^d$-a.e. $x\in X$ the equation $u(x)+v(y)=c(x,y)$ admits a unique solution $y=Tx$. As done in [10] (cf. also [11]) we have that
\[
\|u_\varepsilon-u\|_\infty\le\varepsilon\|\varphi\|_\infty,\qquad
\lim_{\varepsilon\to 0}\frac{u_\varepsilon(x)-u(x)}{\varepsilon}=-\varphi(Tx)
\]
for $\mathcal L^d$-a.e. $x\in X$. Hence by the Lebesgue dominated convergence theorem
\[
\lim_{\varepsilon\to 0}\int_X\frac{u_\varepsilon(x)-u(x)}{\varepsilon}\,\rho_0(x)\,dx=-\int_X\varphi(Tx)\,\rho_0(x)\,dx. \qquad (3.4)
\]
Since $(u,v)$ maximizes $J$ over $E$, by equation (3.4) we obtain
\[
0=\lim_{\varepsilon\to 0}\frac{J(u_\varepsilon,v_\varepsilon)-J(u,v)}{\varepsilon}
=-\int_X\varphi(Tx)\,\rho_0(x)\,dx+\int_X(U^*)'(-v(x))\,\varphi(x)\,dx.
\]
Therefore
\[
\int_X\varphi(Tx)\,\rho_0(x)\,dx=\int_X(U^*)'(-v(x))\,\varphi(x)\,dx. \qquad (3.5)
\]
Choosing $\varphi\equiv 1$ in equation (3.5) and recalling that $(U^*)'\ge 0$ (cf. Lemma 2.4(ii)) we discover that $\rho_v:=(U^*)'(-v)$ is a probability density on $X$. Moreover, equation (3.5) means that $T$ pushes $\rho_0\mathcal L^d$ forward to $\rho_v\mathcal L^d$. This proves (ii).

We eventually proceed with the proof of the first statement. Observe that the functional $J$ is continuous on $E$, which is a closed subset of $C(X)\times C(X)$. Thus it suffices to show the existence of a compact set $E_0\subset E$ such that $E_0\subset\{(u,v):u^c=v,\ v_c=u\}$ and $\sup_EJ=\sup_{E_0}J$.

If $(u,v)\in E$ then $u\le v_c$, and so $J(u,v)\le J(v_c,v)$. But as pointed out in Lemma 3.2, $v\le(v_c)^c$, and since by Lemma 2.4(ii) $U^*\in C^1(\mathbb R)$ is monotone nondecreasing, we have $J(u,v)\le J(v_c,v)\le J(v_c,(v_c)^c)$. Set $\bar u=v_c$ and $\bar v=(v_c)^c$. Observe that by Lemma 3.2 $\bar u=\bar v_c$ and $\bar v=\bar u^c$.

As $U^*\in C^1(\mathbb R)$ and $(U^*)'\ge 0$, the functional $\lambda\mapsto e(\lambda):=\int_XU^*(-\bar v(x)+\lambda)\,dx$ is differentiable and
\[
e'(\lambda)=\int_X(U^*)'(-\bar v(x)+\lambda)\,dx\ge 0.
\]
Since by Lemma 2.4(iv) $U^*$ grows superlinearly at infinity, so does $e(\lambda)$. Hence
\[
\lim_{\lambda\to+\infty}J(\bar u+\lambda,\bar v-\lambda)=\lim_{\lambda\to+\infty}\Big[\int_X\bar u\,\rho_0\,dx+\lambda-e(\lambda)\Big]=-\infty. \qquad (3.6)
\]
Moreover, as $U^*\ge 0$ (cf. Lemma 2.4(ii)),
\[
\lim_{\lambda\to-\infty}J(\bar u+\lambda,\bar v-\lambda)\le\lim_{\lambda\to-\infty}\Big[\int_X\bar u\,\rho_0\,dx+\lambda\Big]=-\infty. \qquad (3.7)
\]
Since $\lambda\mapsto J(\bar u+\lambda,\bar v-\lambda)$ is differentiable, (3.6) and (3.7) imply that $J(\bar u+\lambda,\bar v-\lambda)$ achieves its maximum at a certain value $\bar\lambda$ which satisfies $1=e'(\bar\lambda)$. Therefore we have
\[
(\tilde u,\tilde v):=(\bar u+\bar\lambda,\bar v-\bar\lambda)\in E,\qquad
J(\bar u,\bar v)\le J(\tilde u,\tilde v),\qquad\text{and}\qquad
\int_X(U^*)'(-\tilde v)\,dx=1.
\]
