Calculus of Variations with Convexity Constraint

G. Carlier ∗ 11th June 2002

Abstract

This article studies calculus of variations problems under a convexity constraint. The main motivations are Newton's least resistance problem and mathematical economics. First a compactness result is proved which enables us to prove existence of solutions for nonconvex Lagrangians or functionals with linear growth. Then a first order necessary condition is established by a penalization method. Finally, several examples of applications are derived from the latter.

1 Introduction

The aim of this article is to study variational problems of the following form:
$$\inf_{u \in \mathcal{C} \cap X} J(u) := \int_\Omega L(x, u(x), \nabla u(x))\,dx \qquad (1)$$
where $\Omega$ is an open bounded convex subset of $\mathbb{R}^N$,
$$\mathcal{C} := \{u \in W^{1,\infty}_{\rm loc}(\Omega),\ u \text{ convex}\}$$
and $X$ is a closed convex subset of some Sobolev space.

A famous example of such a problem is Isaac Newton's body of minimal resistance problem (see Brock, Ferone and Kawohl [3] and Buttazzo, Ferone and Kawohl [4]). In that problem, $L(x,u,v) = \frac{1}{1+\|v\|^2}$ and $X = \{u : m \le u \le M\}$. In [10], Lachand-Robert and Peletier have studied the case $L(x,u,v) = f(v)$ with $f$ convex or concave, $X = \{u : u_1 \le u \le u_2\}$ with $u_1$ and $u_2$ convex and $u_1 = u_2$ on $\partial\Omega$, and they have found explicit solutions in this case as well as in some extensions. Problems of the form (1) are also of importance in economics, in the theory of incentives (cf. Choné and Rochet [5]).

Section 2 deals with existence issues. After establishing a compactness property, existence results are proved for a wide class of functionals.

Section 3 focuses on necessary first order conditions and the Euler-Lagrange equation of (1). In [11], P.-L. Lions has proved representation theorems for the polar cone of the cone of convex functions in various functional spaces ($L^2$, $H^1$, $H^1_0$) by means of PDE arguments. We propose to prove that result in any Sobolev space with different arguments. In 3.1, using a distributional characterization of convex functions (see Dudley [7]), another proof of the representation theorem of Lions [11] is given. In 3.2, a $C^1$ convex nonnegative penalization functional of the convexity constraint is introduced. Another proof of the representation theorem is given using this penalization functional. These results make it possible to write the Euler-Lagrange equation of (1) (Section 3.3) and to prove strong convergence results for the solutions of the penalized problems (Section 3.4).

Finally, in Section 4, regularity results are proved in dimension one and in the radial case. We end this article by comparing the convex envelope with the solution of a special (projection) variational problem under a convexity constraint.

2 Existence results

In all the following, $\Omega$ is an open bounded convex subset of $\mathbb{R}^N$.

Notations

Let us recall some basic facts about convex functions and define some notations.

For $x \in \mathbb{R}^N$, $x_i$, $i = 1, \dots, N$, denotes the $i$-th component of $x$ in the canonical basis of $\mathbb{R}^N$. For any Borel subset $B$ of $\Omega$, $|B|$ denotes the Lebesgue measure of $B$. If $\omega$ is some open subset of $\Omega$, the notation $\omega \subset\subset \Omega$ means that the closure $\overline{\omega}$ of $\omega$ is included in $\Omega$.

Let $u$ be a convex function in $\Omega$ such that $u < +\infty$ in $\Omega$; then a well-known convex analysis result (cf. for instance Rockafellar [14]) implies that $u \in W^{1,\infty}_{\rm loc}(\Omega)$. More precisely, there exists some measurable set $A \subset \Omega$ such that $\dim_{\mathcal H}(\Omega \setminus A) \le N - 1$ (where $\dim_{\mathcal H}(\Omega \setminus A)$ denotes the Hausdorff dimension of $\Omega \setminus A$) and $u$ is Fréchet differentiable in $A$. Moreover, if $Du(x)$ denotes the Fréchet derivative of $u$ at $x \in A$, then the map $Du$ is continuous in $A$. On the other hand, since $u \in W^{1,\infty}_{\rm loc}(\Omega)$, one may define $\nabla u$ as the distributional derivative of $u$; hence $\nabla u \in L^\infty_{\rm loc}(\Omega)$ and it is easy to prove that $\nabla u = Du$ almost everywhere. Therefore, in what follows, for any convex $u$, the notation $\nabla u$ will be used both for the distributional gradient and for the ordinary Fréchet derivative of $u$ at any point of the set $A$ where it is defined. Finally, for $i = 1, \dots, N$, $D_i u$ denotes the $i$-th partial derivative of $u$.

2.1 Compactness

Proposition 1 Let $(u_n)$ be a sequence of convex functions in $\Omega$. Assume that there exists $p \in [1, +\infty]$ such that $u_n \in W^{1,p}(\Omega)$ for all $n$ and $(u_n)$ is bounded in $W^{1,p}(\Omega)$. Then there exist $u \in W^{1,p}(\Omega)$, convex, a measurable subset $A$ of $\Omega$ and a subsequence again labeled $(u_n)$ such that:

1) $(u_n)$ converges to $u$ uniformly on compact subsets of $\Omega$,

2) $(\nabla u_n)$ converges to $\nabla u$ pointwise on $A$ and $\dim_{\mathcal H}(\Omega \setminus A) \le N - 1$.

Proof. Let us first assume $p \in (1, +\infty)$. Using the Rellich-Kondrachov Theorem (cf. for instance Brézis [2]) and the reflexivity of $W^{1,p}(\Omega)$, up to a subsequence, it may be assumed that there exists $u \in W^{1,p}(\Omega)$ such that:

• $(u_n)$ converges a.e. and strongly in $L^p(\Omega)$ to $u$,

• $(\nabla u_n)$ converges to $\nabla u$ weakly in $L^p(\Omega)^N$.

Of course, $(u_n)$ converges to $u$ in $\mathcal{D}'(\Omega)$, and using a theorem of Dudley [7], we have:

• $u$ is convex,

• $(u_n)$ (up to a subsequence) converges to $u$ uniformly on compact subsets of $\Omega$.

Moreover, since $u_n$ (respectively $u$) is convex, there exists some measurable $A_n \subset \Omega$ with $\dim_{\mathcal H}(\Omega \setminus A_n) \le N - 1$ (respectively $A \subset \Omega$ with $\dim_{\mathcal H}(\Omega \setminus A) \le N - 1$) such that $u_n$ (respectively $u$) is Fréchet differentiable in $A_n$ (respectively $A$). Define then:
$$A := \bigcap_n A_n \cap A;$$
obviously $\dim_{\mathcal H}(\Omega \setminus A) \le N - 1$.


Let $x \in A$ and let us show that $\lim_n \nabla u_n(x) = \nabla u(x)$.

Step 1: $(\nabla u_n(x))$ is bounded.

If not, there would exist a subsequence again denoted $(\nabla u_n(x))$ such that:

• $\lim_n D_i u_n(x) = +\infty$ for all $i \in I_1$,

• $\lim_n D_i u_n(x) = -\infty$ for all $i \in I_2$,

• $(D_i u_n(x))$ is bounded for all $i \in \{1, \dots, N\} \setminus (I_1 \cup I_2)$,

with $I_1 \cup I_2$ nonempty. There would then exist some open subset $V$ of $\mathbb{R}^N$ and some $\varepsilon > 0$ such that $x + V \subset \Omega$ and, for all $y \in V$, $i \in I_1 \Rightarrow y_i > \varepsilon$ and $i \in I_2 \Rightarrow y_i < -\varepsilon$. Moreover, for all $y \in V$ and all $n$:
$$u_n(x + y) \ge u_n(x) + \langle \nabla u_n(x), y \rangle \to +\infty \quad \text{as } n \to +\infty;$$
integrating the latter yields:
$$\lim_n \int_V u_n(x + y)\,dy = +\infty,$$
which contradicts the fact that $(u_n)$ is bounded in $L^1$.

Step 2: $(\nabla u_n(x))$ converges to $\nabla u(x)$.

From Step 1, it is enough to show that $\nabla u(x)$ is the only cluster point of $(\nabla u_n(x))$.

Suppose on the contrary that there exist a subsequence, say again $(\nabla u_n(x))$, and $\beta \in \mathbb{R}^N$, $\beta \neq \nabla u(x)$, such that $(\nabla u_n(x))$ converges to $\beta$ as $n \to +\infty$. Let $y_1 \in \mathbb{R}^N$ and $\alpha > 0$ be such that:
$$\langle \nabla u(x), y_1 \rangle \ge \langle \beta, y_1 \rangle + \alpha.$$
Since $\nabla u$ is continuous in $A$, there exist two open sets $U$ and $V$ of $\mathbb{R}^N$, $0 \in U$, $y_1 \in V$, such that:
$$\langle \nabla u(x - h), y \rangle \ge \langle \beta, y \rangle + \frac{\alpha}{2}, \quad \text{for all } (h, y) \in (U \cap (x - A)) \times V. \qquad (2)$$
Since $A$ is dense in $\Omega$, there exist $y \in V$ and $t > 0$ such that $ty \in U \cap (x - A)$. Let us define $h_0 = ty$; using (2), we have:
$$\langle \nabla u(x - h_0), h_0 \rangle \ge \langle \beta, h_0 \rangle + \frac{\alpha t}{2}. \qquad (3)$$
There exists then an open subset $W$ of $\mathbb{R}^N$ such that $x - W \subset \Omega$ and:
$$\text{for all } h \in W \cap (x - A), \quad \langle \nabla u(x - h), h \rangle \ge \langle \beta, h \rangle + \frac{\alpha t}{4}. \qquad (4)$$
Note also that, since $A$ has full Lebesgue measure in $\Omega$, (4) is satisfied for almost every $h \in W$.

On the other hand, since $\nabla u_n$ is Minty-monotone, for all $h \in W \cap (x - A)$ and for all $n$, we have:
$$\langle \nabla u_n(x), h \rangle \ge \langle \nabla u_n(x - h), h \rangle \qquad (5)$$
so that:
$$\int_W \langle \nabla u_n(x), h \rangle\,dh \ge \int_W \langle \nabla u_n(x - h), h \rangle\,dh;$$
letting $n$ tend to $+\infty$ in the previous inequality and using (4), we obtain:
$$\int_W \langle \beta, h \rangle\,dh \ge \int_W \langle \nabla u(x - h), h \rangle\,dh \ge \int_W \langle \beta, h \rangle\,dh + |W|\,\frac{\alpha t}{4},$$
which yields a contradiction.

We have then proved the result for $p \in (1, +\infty)$. The case $p = +\infty$ follows immediately.

Step 3: the case $p = 1$.

Suppose now that $p = 1$. In this case, by the Sobolev inequalities and the theorem of Dudley, it still may be assumed that $(u_n)$ converges to some convex function $u \in L^1(\Omega) \cap W^{1,\infty}_{\rm loc}(\Omega)$ uniformly on compact subsets of $\Omega$. Let the set $A$ be defined as previously and let $\omega$ be some open convex subset of $\Omega$ such that $\omega \subset\subset \Omega$.

Let us first show that $(\nabla u_n)$ is bounded in $L^\infty(\omega)$. Let $\varepsilon \in (0, d(\omega, \partial\Omega))$ and $\omega_\varepsilon := \omega + \varepsilon B(0,1)$; since $(u_n)$ converges uniformly to $u$ in $\omega_\varepsilon$, there exists $M$ such that $|u_n(x)| \le M$ for all $x \in \omega_\varepsilon$ and all $n$. Let us show:
$$\sup_n \|\nabla u_n\|_{L^\infty(\omega)} \le \frac{2M}{\varepsilon}. \qquad (6)$$
Let $x \in A \cap \omega$ and $v \in B(0,1) \setminus \{0\}$ and define $p_v(x) := (x + \mathbb{R}_+ v) \cap \partial\omega_\varepsilon$. There exists $k \ge \varepsilon$ such that $p_v(x) = x + kv$ and similarly $p_{-v}(x) = x - k'v$ with $k' \ge \varepsilon$. By convexity, we have:
$$u_n(p_v(x)) \ge u_n(x) + k \langle \nabla u_n(x), v \rangle \qquad (7)$$
and
$$u_n(p_{-v}(x)) \ge u_n(x) - k' \langle \nabla u_n(x), v \rangle \qquad (8)$$
so that $|\langle \nabla u_n(x), v \rangle| \le \frac{2M}{\varepsilon}$, and since $x$ and $v$ are arbitrary we obtain (6).

For every open convex $\omega \subset\subset \Omega$, $(\nabla u_n)$ is then bounded in $L^\infty(\omega)$, and then the previous case ($p > 1$) applies. There exists therefore a subsequence $(\nabla u_{\phi_\omega(n)})$ converging to $\nabla u$ pointwise in $A \cap \omega$.

To end the proof, let us take an exhaustive sequence $\omega_k \subset\subset \Omega$, $k \ge 1$, for instance:
$$\omega_k := \left\{x \in \Omega : d(x, \partial\Omega) > \frac{\delta_0}{2^k}\right\}$$
where $\delta_0 > 0$ is such that $\omega_1 \neq \emptyset$. Then it is enough to make a suitable diagonal extraction. We then have the existence of a subsequence, say again $(\nabla u_n)$, converging pointwise to $\nabla u$ in $A$. We obtain $\nabla u \in L^1(\Omega)^N$ by Fatou's Lemma, so that $u \in W^{1,1}(\Omega)$, which ends the proof.

Corollary 1 Let $(u_n)$ be a sequence of convex functions in $\Omega$ such that for every convex $\omega$ with $\omega \subset\subset \Omega$ the following holds:
$$\sup_n \|u_n\|_{W^{1,1}(\omega)} < +\infty;$$
then there exist $u$ convex in $\Omega$ (in particular $u \in W^{1,\infty}_{\rm loc}(\Omega)$), a measurable subset $A$ of $\Omega$ and a subsequence again labeled $(u_n)$ such that:

1) $(u_n)$ converges to $u$ uniformly on compact subsets of $\Omega$,

2) $(\nabla u_n)$ converges to $\nabla u$ pointwise on $A$ and $\dim_{\mathcal H}(\Omega \setminus A) \le N - 1$.

Proof. Let us define for all $k \in \mathbb{N}^*$:
$$\omega_k := \left\{x \in \Omega : d(x, \partial\Omega) > \frac{\delta_0}{2^k}\right\}$$
where $\delta_0 > 0$ is such that $\omega_1 \neq \emptyset$.

Using Proposition 1, we may find a subsequence $(u_{\phi_1(n)})$ of $(u_n)$, a convex function $u_1 \in W^{1,1}(\omega_2)$ and a measurable set $A_1 \subset \omega_2$ such that:

• $\dim_{\mathcal H}(\omega_2 \setminus A_1) \le N - 1$,

• $(u_{\phi_1(n)})$ converges uniformly to $u_1$ in $\omega_1$,

• $(\nabla u_{\phi_1(n)})$ converges pointwise to $\nabla u_1$ in $A_1$.

We may then construct inductively a sequence of subsequences $(u_{\phi_k(n)})$, of convex functions $u_k \in W^{1,1}(\omega_{k+1})$, and of measurable sets $A_k \subset \omega_{k+1}$ such that for all $k \ge 1$:

• $(u_{\phi_{k+1}(n)})$ is a subsequence of $(u_{\phi_k(n)})$,

• $A_k \subset A_{k+1}$,

• $u_{k+1} = u_k$ in $\omega_{k+1}$,

• $(u_{\phi_{k+1}(n)})$ converges uniformly to $u_{k+1}$ in $\omega_{k+1}$,

• $(\nabla u_{\phi_{k+1}(n)})$ converges pointwise to $\nabla u_{k+1}$ in $A_{k+1}$,

• $\dim_{\mathcal H}(\omega_{k+2} \setminus A_{k+1}) \le N - 1$.

Finally we define
$$A := \bigcup_{n \ge 1} A_n, \qquad u_{\psi(n)} := u_{\phi_n(n)} \qquad \text{and} \qquad u(x) := u_k(x) \ \text{ for all } x \in \omega_k.$$
It can be easily checked that $(u_{\psi(n)})$, $u$ and $A$ satisfy the statement of the Corollary.

2.2 Existence of minimizers

Consider the variational problem:
$$\inf_{u \in \mathcal{C}} J(u) := \int_\Omega L_1(x, u(x), \nabla u(x))\,dx + \int_\Omega L_2(x, u(x), \nabla u(x))\,d\mu(x) \qquad (9)$$
where $\mathcal{C}$ denotes the set of convex functions in $\Omega$. Let us assume that:


1. $L_1$ is a normal integrand from $\Omega \times \mathbb{R} \times \mathbb{R}^N$ to $\mathbb{R}$, i.e., for a.e. $x \in \Omega$, $L_1(x, \cdot, \cdot)$ is lower semicontinuous in $\mathbb{R} \times \mathbb{R}^N$, and there exists a Borel mapping $\tilde L_1$ from $\Omega \times \mathbb{R} \times \mathbb{R}^N$ to $\mathbb{R}$ such that $L_1(x, \cdot, \cdot) = \tilde L_1(x, \cdot, \cdot)$ for a.e. $x \in \Omega$.

2. $\mu$ is a nonnegative Radon measure such that $\mu(B) = 0$ for all $B$ such that $\dim_{\mathcal H}(B) \le N - 1$, and for $\mu$-a.e. $x \in \Omega$, $L_2(x, \cdot, \cdot)$ is lower semicontinuous in $\mathbb{R} \times \mathbb{R}^N$, and there exists a Borel mapping $\tilde L_2$ from $\Omega \times \mathbb{R} \times \mathbb{R}^N$ to $\mathbb{R}$ such that $L_2(x, \cdot, \cdot) = \tilde L_2(x, \cdot, \cdot)$ for $\mu$-a.e. $x \in \Omega$.

3. There exist $G_1 \in C(\overline{\Omega})$, $G_1 > 0$ in $\Omega$, $\Psi_1 \in L^1(\Omega)$, $\Psi_2 \in L^1(\Omega, d\mu)$ such that:
$$L_1(x, u, v) \ge G_1(x)(|u| + \|v\|) + \Psi_1(x)$$
for a.e. $x \in \Omega$ and for all $(u, v) \in \mathbb{R} \times \mathbb{R}^N$, and $L_2(x, u, v) \ge \Psi_2(x)$ for $\mu$-a.e. $x \in \Omega$ and for all $(u, v) \in \mathbb{R} \times \mathbb{R}^N$.

4. There exists $u_0$ convex such that $J(u_0) < +\infty$.

Then the following existence result holds:

Proposition 2 Problem (9) admits at least one solution.

Proof. Let $(u_n)$ be a minimizing sequence of (9). Using assumption 3, $(u_n)$ satisfies the assumption of Corollary 1 and then there exist $u$ convex in $\Omega$ (in particular $u \in W^{1,\infty}_{\rm loc}(\Omega)$), a measurable subset $A$ of $\Omega$ and a subsequence again denoted $(u_n)$ such that $(u_n)$ converges to $u$ uniformly on compact subsets of $\Omega$, $(\nabla u_n)$ converges to $\nabla u$ pointwise in $A$ and $\dim_{\mathcal H}(\Omega \setminus A) \le N - 1$. In particular $(\nabla u_n)$ converges to $\nabla u$ $\mu$-a.e.

Fatou's Lemma then yields:
$$\liminf_n \int_\Omega L_1(x, u_n(x), \nabla u_n(x))\,dx \ge \int_\Omega \liminf_n L_1(x, u_n(x), \nabla u_n(x))\,dx; \qquad (10)$$
similarly, we have:
$$\liminf_n \int_\Omega L_2(x, u_n, \nabla u_n)\,d\mu \ge \int_\Omega \liminf_n L_2(x, u_n, \nabla u_n)\,d\mu \qquad (11)$$
and, by hypotheses 1 and 2:
$$\liminf_n L_1(x, u_n(x), \nabla u_n(x)) \ge L_1(x, u(x), \nabla u(x)) \quad \text{a.e.},$$
$$\liminf_n L_2(x, u_n(x), \nabla u_n(x)) \ge L_2(x, u(x), \nabla u(x)) \quad \mu\text{-a.e.},$$
so that:
$$J(u) \le \inf_{v \in \mathcal{C}} J(v) < +\infty;$$
$u$ is then a solution of (9).

Remark. The previous proposition enables us to prove existence in a large class of cases. No convexity of the Lagrangian is required; even the case of Lagrangians that degenerate on the boundary ($G_1 = 0$ on $\partial\Omega$) is allowed. This is due to the fact that gradient terms of minimizing sequences do not oscillate in this case.

3 First order condition

3.1 Direct method and representation of the polar cone

Let $\Omega$ be an open, convex, bounded subset of $\mathbb{R}^N$ with $C^1$ boundary and $p \in (1, +\infty)$. Let us start with some notations:
$$K^p := \mathcal{C} \cap W^{1,p}(\Omega), \qquad K^p_0 := \mathcal{C} \cap W^{1,p}_0(\Omega),$$
$$(K^p)^+ := \{L \in (W^{1,p}(\Omega))' : \langle L, u \rangle \ge 0, \ \forall u \in K^p\},$$
$$(K^p_0)^+ := \{L \in W^{-1,p'}(\Omega) : \langle L, u \rangle \ge 0, \ \forall u \in K^p_0\},$$
where $1/p + 1/p' = 1$, and let $S_N$ (respectively $S^+_N$) denote the set of symmetric (respectively positive semidefinite symmetric) $N \times N$ matrices. In what follows, ${\rm co}_X(A)$ denotes the closed convex hull of the set $A$ in the Banach space $X$.

The aim of this section is to give a representation of the cones $(K^p)^+$ and $(K^p_0)^+$ (positive polar cones of $K^p$ and $K^p_0$). In Section 3.2, we will use a penalization method; in this section, we propose a direct proof that is close to that of P.-L. Lions [11] but does not require PDE arguments.

This method is based on the following result (cf. Dudley [7]):

Theorem 1 Let $T \in \mathcal{D}'(\Omega)$; the two following properties are equivalent:

1) $T$ is a convex function.

2) $D^2 T \ge 0$ in $\mathcal{D}'(\Omega, S_N)$, which means that for all $\alpha \in \mathbb{R}^N$ and all $g \in \mathcal{D}_+(\Omega)$:
$$\langle T, \phi_{g,\alpha} \rangle := \sum_{1 \le i,j \le N} \alpha_i \alpha_j \langle D_{ij} T, g \rangle = \sum_{1 \le i,j \le N} \alpha_i \alpha_j \left\langle T, \frac{\partial^2 g}{\partial x_i \partial x_j} \right\rangle \ge 0.$$
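To make the sign condition of Theorem 1 concrete, here is a small numerical illustration (not part of the original argument): it discretizes $\langle T, \phi_{g,\alpha}\rangle$ on a grid for a smooth $T$ and a nonnegative bump $g$, and checks the sign for a convex and a nonconvex choice of $T$. The bump function, the grid and the sample functions are arbitrary choices made for this sketch.

```python
# Hedged sketch: numerical check of the sign condition in Theorem 1 (Dudley's
# criterion) on a grid over Omega = (-1,1)^2.  Grid size, the bump g and the
# test distributions T are illustrative choices, not taken from the paper.
import numpy as np

n = 201
xs = np.linspace(-1.0, 1.0, n)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Nonnegative test function g in D_+(Omega): a smooth bump supported in r < 0.8.
R2 = 0.8 ** 2
r2 = X ** 2 + Y ** 2
g = np.zeros_like(X)
mask = r2 < R2
g[mask] = np.exp(-1.0 / (R2 - r2[mask]))

def pairing(T, alpha):
    """Approximate sum_{i,j} alpha_i alpha_j * int T * d2g/(dx_i dx_j) dx."""
    gx, gy = np.gradient(g, dx, dx)
    gxx, gxy = np.gradient(gx, dx, dx)
    gyx, gyy = np.gradient(gy, dx, dx)
    hess_g = [[gxx, 0.5 * (gxy + gyx)], [0.5 * (gxy + gyx), gyy]]
    total = 0.0
    for i in range(2):
        for j in range(2):
            total += alpha[i] * alpha[j] * np.sum(T * hess_g[i][j]) * dx * dx
    return total

T_convex = X ** 2 + Y ** 2   # convex: pairing should be >= 0 for every alpha
T_saddle = X ** 2 - Y ** 2   # not convex: pairing is negative for alpha = (0,1)

for alpha in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    print(alpha, pairing(T_convex, alpha), pairing(T_saddle, alpha))
```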

As an easy consequence of Theorem 1, we have:

Corollary 2 The following representations hold:

1. $(K^p)^+ = {\rm co}_{(W^{1,p}(\Omega))'}\{\phi_{g,\alpha},\ g \in \mathcal{D}_+(\Omega),\ \alpha \in \mathbb{R}^N\}$,

2. $(K^p_0)^+ = {\rm co}_{W^{-1,p'}(\Omega)}\{\phi_{g,\alpha},\ g \in \mathcal{D}_+(\Omega),\ \alpha \in \mathbb{R}^N\}$.

Proof. 1. Define the convex cone:
$$B := {\rm co}_{(W^{1,p}(\Omega))'}\{\phi_{g,\alpha},\ g \in \mathcal{D}_+(\Omega),\ \alpha \in \mathbb{R}^N\}.$$
Define also $B^+ := \{u \in W^{1,p}(\Omega) : \langle L, u \rangle \ge 0, \ \forall L \in B\}$. First, it is clear from Theorem 1 that $B \subset (K^p)^+$. It is also clear that $B^+ \subset K^p$ since $u \in B^+ \Rightarrow D^2 u \ge 0$ in $\mathcal{D}'(\Omega, S_N) \Rightarrow u \in K^p$ by Theorem 1. Finally we have $B \subset (K^p)^+ \subset B^{++} = B$ since $B$ is a closed convex cone; then $B = (K^p)^+$. The proof of 2. is similar and therefore omitted.

To make the previous proposition tractable and derive a representation result, we first need a technical lemma. Let us first recall the definition of the Moreau-Yosida approximations of a convex function:

Definition 1 Let $v \in K^p$ and define for all $n$ and $x \in \mathbb{R}^N$:
$$v_n(x) := \inf_{y \in \Omega} \left[ v(y) + \frac{n}{2}\|y - x\|^2 \right].$$
The functions $(v_n)$ are called the Moreau-Yosida approximations of $v$.
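As an aside, the Moreau-Yosida approximation is easy to visualize numerically. The following sketch (illustrative only; the sample function and grid are arbitrary choices, not from the paper) computes $v_n$ by brute force on a one-dimensional grid and checks the elementary properties $\inf v \le v_n \le v$ and $v_n \to v$ used below.

```python
# Hedged sketch: brute-force Moreau-Yosida approximation of a convex function
# on a 1D grid.  The sample v and the grid are illustrative choices.
import numpy as np

ys = np.linspace(-1.0, 1.0, 400)        # discretization of Omega = (-1, 1)
v = np.abs(ys)                          # a convex (nonsmooth) sample function

def moreau_yosida(v_vals, grid, n):
    """v_n(x) = inf_y [ v(y) + (n/2) * |y - x|^2 ], computed pointwise."""
    out = np.empty_like(v_vals)
    for k, x in enumerate(grid):
        out[k] = np.min(v_vals + 0.5 * n * (grid - x) ** 2)
    return out

for n in (1, 10, 100):
    vn = moreau_yosida(v, ys, n)
    assert np.all(vn <= v + 1e-12) and np.all(vn >= v.min() - 1e-12)
    print(n, float(np.max(np.abs(vn - v))))   # sup-distance shrinks as n grows
```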

The following approximation result then holds:

Lemma 1 1) Let $v \in K^p$ and let $(v_n)$ denote the Moreau-Yosida approximations of $v$. Then $(v_n)$ converges to $v$ in $W^{1,p}(\Omega)$ and there exists a sequence $(w_n)$ of convex $C^\infty(\overline{\Omega})$ functions converging to $v$ in $W^{1,p}(\Omega)$.

2) Under the further smoothness assumption on $\Omega$ that there exists some $C^2$ strictly convex function $\Phi$ in $\mathbb{R}^N$ such that $\Omega = \{\Phi < 0\}$, if $v \in K^p_0$ then there exists a sequence of convex functions $(v_n)$ in $C^2(\overline{\Omega})$, with $v_n|_{\partial\Omega} = 0$, converging to $v$ in $W^{1,p}(\Omega)$.

Proof. 1) By well-known convex analysis techniques (see for instance Aubin and Ekeland [1]), for all $n$, $v_n$ is convex, $C^1$ in $\mathbb{R}^N$ and satisfies:
$$\inf v \le v_n \le v \ \text{ in } \Omega, \qquad \|\nabla v_n\| \le \|\nabla v\| \ \text{ a.e. in } \Omega, \qquad \lim_n v_n = v \ \text{ pointwise}, \qquad \lim_n \nabla v_n = \nabla v \ \text{ a.e.}$$
Lebesgue's Dominated Convergence Theorem then shows that $(v_n)$ converges to $v$ in $W^{1,p}(\Omega)$. Finally, define a sequence of mollifiers $(\rho_n)$, $\rho_n \in C^\infty_c(\mathbb{R}^N)$, $\rho_n \ge 0$, ${\rm supp}(\rho_n) \subset B(0, \frac{1}{n})$, $\int_{\mathbb{R}^N} \rho_n = 1$, and define $w_n := \rho_n \star v_n$. The sequence $(w_n)$ clearly satisfies the desired result.

2) A proof can be found in P.-L. Lions [11].

We are now able to give a representation theorem for the cones $(K^p)^+$ and $(K^p_0)^+$:

Theorem 2 1) Let $L \in (W^{1,p}(\Omega))'$; then $L \in (K^p)^+$ if and only if there exists a matrix $(\mu_{ij})_{1 \le i,j \le N}$ of bounded Radon measures in $\Omega$, symmetric and nonnegative in the sense of symmetric matrices, such that:
$$\forall v \in C^2(\overline{\Omega}), \quad \langle L, v \rangle = \sum_{1 \le i,j \le N} \int_\Omega \frac{\partial^2 v}{\partial x_i \partial x_j}\,d\mu_{ij},$$
which can be expressed as $L = \sum_{1 \le i,j \le N} D_{ij}\mu_{ij}$ in a weak form.

2) Let $L \in W^{-1,p'}(\Omega)$; if $\Omega$ satisfies the same strict convexity and $C^2$ regularity assumptions as in the previous Lemma, then $L \in (K^p_0)^+$ if and only if there exists a matrix $(\mu_{ij})_{1 \le i,j \le N}$ of bounded Radon measures in $\Omega$, symmetric and nonnegative in the sense of symmetric matrices, such that:
$$\forall v \in C^2(\overline{\Omega}) \text{ with } v|_{\partial\Omega} = 0, \quad \langle L, v \rangle = \sum_{1 \le i,j \le N} \int_\Omega \frac{\partial^2 v}{\partial x_i \partial x_j}\,d\mu_{ij}.$$

Remark. Note the difference between the previous representation result, where test functions are $C^2$, and the characterization of the polar cone of $C^0$ convex functions in terms of balayage operators (cf. Meyer [12]). See also Choné and Rochet [5], who have adopted this point of view.


Proof. 1) Let $L \in (K^p)^+$. We know from Corollary 2 that for every $n$ there exist an integer $N_n$, coefficients $\lambda^n_k \ge 0$ with $\sum_{k=1}^{N_n} \lambda^n_k = 1$, vectors $\alpha^n_k \in \mathbb{R}^N$ and functions $g^n_k \in \mathcal{D}_+(\Omega)$, $k = 1, \dots, N_n$, such that:
$$\lim_n L_n := \sum_{k=1}^{N_n} \lambda^n_k \phi_{g^n_k, \alpha^n_k} = L \quad \text{in } (W^{1,p}(\Omega))'.$$
Let $v \in C^2(\overline{\Omega})$; we have:
$$\langle L_n, v \rangle = \sum_{k=1}^{N_n} \lambda^n_k \sum_{1 \le i,j \le N} \int_\Omega \alpha^n_{k,i} \alpha^n_{k,j}\, v(x)\, \frac{\partial^2 g^n_k}{\partial x_i \partial x_j}(x)\,dx = \sum_{k=1}^{N_n} \lambda^n_k \sum_{1 \le i,j \le N} \int_\Omega \alpha^n_{k,i} \alpha^n_{k,j}\, g^n_k(x)\, \frac{\partial^2 v}{\partial x_i \partial x_j}(x)\,dx = \sum_{1 \le i,j \le N} \int_\Omega \psi^n_{ij}(x)\, \frac{\partial^2 v}{\partial x_i \partial x_j}(x)\,dx \qquad (12)$$
with
$$\psi^n_{ij}(x) := \sum_{k=1}^{N_n} \lambda^n_k\, \alpha^n_{k,i} \alpha^n_{k,j}\, g^n_k(x),$$
so that clearly $\psi^n = (\psi^n_{ij}) \in C^\infty_c(\Omega, S^+_N)$.

On the other hand:
$$\lim_n \left\langle L_n, \tfrac{1}{2}\|\cdot\|^2 \right\rangle = \lim_n \int_\Omega {\rm Tr}(\psi^n(x))\,dx = \left\langle L, \tfrac{1}{2}\|\cdot\|^2 \right\rangle,$$
so that $(\psi^n)$ is bounded in $L^1(\Omega, S^+_N)$. Using the Banach-Alaoglu Theorem, up to a subsequence we may then assume that $(\psi^n)$ converges vaguely to some bounded matrix-valued Radon measure, say $\mu = (\mu_{ij})$; of course $\mu$ is $S^+_N$-valued and passing to the limit in (12) yields:
$$\langle L, v \rangle = \sum_{1 \le i,j \le N} \int_\Omega \frac{\partial^2 v}{\partial x_i \partial x_j}\,d\mu_{ij}.$$

Conversely, suppose $L \in (W^{1,p}(\Omega))'$ satisfies:
$$\text{for all } v \in C^2(\overline{\Omega}), \quad \langle L, v \rangle = \sum_{1 \le i,j \le N} \int_\Omega \frac{\partial^2 v}{\partial x_i \partial x_j}\,d\mu_{ij}$$
with $\mu_{ij}$ as above. Let $v \in K^p$; using Lemma 1, there exists a sequence $(v_n)$ of convex $C^2(\overline{\Omega})$ functions converging to $v$ in $W^{1,p}(\Omega)$. Since
$$\lim_n \langle L, v_n \rangle = \langle L, v \rangle \quad \text{and} \quad \langle L, v_n \rangle \ge 0 \text{ for all } n,$$
we obtain that $L \in (K^p)^+$.

2) The proof is similar, except for the $L^1$ bound on the sequence $(\psi^n)$, which can be obtained by replacing $\frac{1}{2}\|\cdot\|^2$ by $\Phi$.

3.2 Penalization functional and representation theorem

Let us define the linear continuous map:
$$\varphi : \begin{cases} W^{1,p}(\Omega) \to L^p(\Omega \times \Omega), \\ \varphi(u)(x,y) := \langle \nabla u(x) - \nabla u(y),\, x - y \rangle, \quad \forall (x,y) \in \Omega \times \Omega. \end{cases}$$

Proposition 3 Let $u$ be in $W^{1,p}(\Omega)$; then $u \in K^p$ if and only if $\varphi(u) \ge 0$ almost everywhere in $\Omega \times \Omega$.

Proof. If $u \in C^1(\Omega)$, $\varphi(u) \ge 0$ means that $\nabla u$ is monotone, which implies that $u$ is convex. Now suppose $u \in W^{1,p}(\Omega)$ and $\varphi(u) \ge 0$ almost everywhere. Let $K$ be some convex compact subset of $\Omega$ with $d(K, \partial\Omega) = 2\delta > 0$. Consider then a sequence of mollifiers $(\rho_n)$, $\rho_n \in C^\infty_c(\mathbb{R}^N)$, $\rho_n \ge 0$, ${\rm supp}(\rho_n) \subset B(0, \frac{1}{n})$, $\int_{\mathbb{R}^N} \rho_n = 1$, and define:
$$u_n(x) := (\rho_n \star u)(x), \quad \text{for } x \in K \text{ and } n \ge \frac{1}{\delta}.$$
Let $(x, y) \in K^2$; we have:
$$\varphi(u_n)(x, y) = \int_{B(0, \frac{1}{n})} \rho_n(z)\, \varphi(u)(x - z, y - z)\,dz \ge 0;$$
since $u_n$ is smooth, the latter implies that $u_n$ is convex. Since $(u_n)$ converges to $u$ in $L^p(K)$ and the cone of convex functions is closed, this implies that $u$ is convex in $K$; $K$ being arbitrary, we obtain the desired result.
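For intuition, the characterization of Proposition 3 (and the penalization functional $j$ defined next) can be checked on a grid. The sketch below is illustrative only; the sample functions and the grid are arbitrary choices. It evaluates a discrete version of $\varphi(u)(x,y)$ in dimension one and verifies that it is nonnegative exactly for the convex sample.

```python
# Hedged sketch: discrete check of the monotonicity quantity
#   phi(u)(x, y) = < u'(x) - u'(y), x - y >
# in one dimension, for a convex and a nonconvex sample function.
# Sample functions and grid are illustrative choices.
import numpy as np

xs = np.linspace(-1.0, 1.0, 200)

def phi_matrix(u_vals):
    """phi(u)(x_i, x_j) = (u'(x_i) - u'(x_j)) * (x_i - x_j) on the grid."""
    du = np.gradient(u_vals, xs)                 # finite-difference derivative
    return (du[:, None] - du[None, :]) * (xs[:, None] - xs[None, :])

def penalization(u_vals, p=2):
    """Discrete analogue of j(u) = (1/p) * double integral of (phi(u)^-)^p."""
    neg = np.minimum(phi_matrix(u_vals), 0.0)
    h = xs[1] - xs[0]
    return (1.0 / p) * np.sum((-neg) ** p) * h * h

u_convex = xs ** 2
u_wavy = np.sin(3 * np.pi * xs)

print(phi_matrix(u_convex).min())     # >= 0 (up to rounding): u is convex
print(phi_matrix(u_wavy).min())       # strictly negative: u is not convex
print(penalization(u_convex), penalization(u_wavy))   # ~0 vs. positive
```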


Define for all $t \in \mathbb{R}$, $F(t) := \frac{1}{p}(t^-)^p$ with $t^- := \max(0, -t)$ (let us recall that we have assumed $p \in (1, \infty)$ in all of Section 3). The penalization functional $j$ is then defined as follows:
$$j(u) := \int_\Omega \int_\Omega F(\varphi(u)(x,y))\,dx\,dy, \quad \text{for all } u \in W^{1,p}(\Omega). \qquad (13)$$
Proposition 3 then expresses that $u \in K^p$ if and only if $j(u) = 0$. Note also that $j$ is convex and $C^1$ over $W^{1,p}(\Omega)$:
$$\langle j'(u), h \rangle = -\int_{\{\varphi(u) \le 0\}} (-\varphi(u))^{p-1}\, \varphi(h)\,dx\,dy \qquad (14)$$
for all $(u, h) \in W^{1,p}(\Omega) \times W^{1,p}(\Omega)$. Using the convexity of $j$ we obtain:

Proposition 4 The following representations hold:

1. $K^p = (j'(W^{1,p}))^- = \{v \in W^{1,p} : \langle j'(u), v \rangle \le 0, \ \forall u \in W^{1,p}\}$,

2. $K^p_0 = (j'(W^{1,p}_0))^- = \{v \in W^{1,p}_0 : \langle j'(u), v \rangle \le 0, \ \forall u \in W^{1,p}_0\}$,

3. $(K^p)^+ = -\,{\rm co}_{(W^{1,p})'}(j'(W^{1,p}))$,

4. $(K^p_0)^+ = -\,{\rm co}_{W^{-1,p'}}(j'(W^{1,p}_0))$.

Proof. Let us prove 1. Let $u \in K^p$; it is easy to check that for all $h \in W^{1,p}(\Omega)$, $j(u + h) \le j(h)$ since $\varphi(u + h) \ge \varphi(h)$ a.e.; then $j(h) \ge j(u + h) \ge j(h) + \langle j'(h), u \rangle$, so that:
$$\langle j'(h), u \rangle \le 0.$$
Conversely, assume $u \in (j'(W^{1,p}(\Omega)))^-$; then we have:
$$j(0) = 0 \ge j(u) + \langle j'(u), -u \rangle \ge j(u),$$
so that $j(u) = 0$, hence $u \in K^p$ by Proposition 3. The proof of 2. is similar. Finally, 3. and 4. follow from 1. and 2. and the Hahn-Banach or Bipolar Theorem.


As a consequence, we can give an alternate proof of Theorem 2 (representation of the cones $(K^p)^+$ and $(K^p_0)^+$). We only consider the case of $(K^p)^+$, the case of $(K^p_0)^+$ being similar.

Let $L \in (K^p)^+$; we know from Proposition 4 that for all $n$ there exist an integer $N_n$, coefficients $\lambda^n_k \ge 0$ with $\sum_{k=1}^{N_n} \lambda^n_k = 1$, and functions $u^n_k \in W^{1,p}(\Omega)$, $k = 1, \dots, N_n$, such that:
$$L_n := -\sum_{k=1}^{N_n} \lambda^n_k\, j'(u^n_k) \quad \text{converges to } L \text{ in } (W^{1,p}(\Omega))'.$$
Taking a test function $v \in C^2(\overline{\Omega})$ yields:
$$\langle L_n, v \rangle = -\int_\Omega \int_\Omega \sum_{k=1}^{N_n} \lambda^n_k F'(\varphi(u^n_k)(x,y))\, \varphi(v)(x,y)\,dx\,dy.$$
Define then:
$$\beta_n(x,y) := -\sum_{k=1}^{N_n} \lambda^n_k F'(\varphi(u^n_k)(x,y)),$$
so that $\beta_n \in L^{p'}(\Omega \times \Omega)$ and $\beta_n \ge 0$ a.e.

Note also that:
$$\varphi(v)(x,y) = \int_0^1 D^2 v(y + t(x - y))(x - y,\, x - y)\,dt,$$
so that:
$$\langle L_n, v \rangle = \int_{\Omega^2} \beta_n(x,y) \int_0^1 D^2 v(y + t(x - y))(x - y,\, x - y)\,dt\,dx\,dy = \int_0^1 \int_{\Phi(t)(\Omega \times \Omega)} \beta_n(z + (1 - t)\alpha,\, z - t\alpha)\, D^2 v(z)(\alpha, \alpha)\,dz\,d\alpha\,dt$$
where $\Phi(t)(x,y) = (y + t(x - y),\, x - y)$. Then, using the convexity of $\Omega$:
$$\langle L_n, v \rangle = \sum_{1 \le i,j \le N} \int_\Omega \frac{\partial^2 v}{\partial x_i \partial x_j}(z) \int_{\gamma(z)} \beta_n(z + (1 - t)\alpha,\, z - t\alpha)\, \alpha_i \alpha_j\,d\alpha\,dt\,dz$$
where $\gamma(z) := \{(\alpha, t) \in \mathbb{R}^N \times [0,1] \text{ such that } (z, \alpha) \in \Phi(t)(\Omega \times \Omega)\}$. Define for all $n$, $i$, $j$ and $z$:
$$\psi^n_{ij}(z) := \int_{\gamma(z)} \beta_n(z + (1 - t)\alpha,\, z - t\alpha)\, \alpha_i \alpha_j\,d\alpha\,dt;$$
it is clear that $\psi^n = (\psi^n_{ij}) \in L^1(\Omega, S^+_N)$.

Moreover, $(\psi^n)$ is bounded in $L^1(\Omega, S^+_N)$ since, once again,
$$\left\langle L_n, \tfrac{1}{2}\|\cdot\|^2 \right\rangle = \int_\Omega {\rm Tr}(\psi^n(x))\,dx \to \left\langle L, \tfrac{1}{2}\|\cdot\|^2 \right\rangle,$$
and the proof can be ended exactly as in the proof of Theorem 2 in Section 3.1.

3.3 Euler-Lagrange equation

Let us turn back to our variational problem in the simple case of a homogeneous Dirichlet condition:
$$\inf_{u \in K^p_0} J(u) := \int_\Omega L(x, u(x), \nabla u(x))\,dx \qquad (15)$$
where $L : (x, u, \xi) \mapsto L(x, u, \xi)$ is $C^1$ over $\Omega \times \mathbb{R} \times \mathbb{R}^N$ and $\frac{\partial L}{\partial u}$, $\frac{\partial L}{\partial \xi}$ satisfy suitable growth conditions with respect to $N$ and $p$ (cf. for instance Dacorogna [6]), so that for all $u \in W^{1,p}_0(\Omega)$ and all $h \in W^{1,p}_0(\Omega)$:
$$\langle J'(u), h \rangle = \int_\Omega \left[ \frac{\partial L}{\partial u}(x, u(x), \nabla u(x))\, h(x) + \frac{\partial L}{\partial \xi}(x, u(x), \nabla u(x)) \cdot \nabla h(x) \right] dx.$$
Then if $u$ is a solution of (15), $u$ satisfies the variational inequalities:
$$\text{for all } h \in W^{1,p}_0(\Omega) \text{ such that } u + h \in K^p_0, \quad \langle J'(u), h \rangle \ge 0, \qquad (16)$$
which is equivalent to:
$$J'(u) \in (K^p_0)^+ \qquad (17)$$
and
$$\langle J'(u), u \rangle = 0. \qquad (18)$$
Using Theorem 2, (17) is also equivalent to:
$$-{\rm div}\left(\frac{\partial L}{\partial \xi}(x, u, \nabla u)\right) + \frac{\partial L}{\partial u}(x, u, \nabla u) = \sum_{1 \le i,j \le N} D_{ij}\mu_{ij} \quad \text{in } W^{-1,p'}(\Omega) \qquad (19)$$
and
$$\left\langle \sum_{1 \le i,j \le N} D_{ij}\mu_{ij},\, u \right\rangle = 0 \qquad (20)$$
for some $S^+_N$-valued bounded Radon measure $\mu$; (19) is the Euler-Lagrange equation of (15). Note that in (19), $\mu_{ij}$ is unknown, not necessarily unique as noted by P.-L. Lions [11], and can be interpreted in some weak sense as a multiplier associated with the convexity constraint of (15).


3.4 Using the penalization functional on a model problem

Consider the model problem:
$$\inf_{u \in K^2_0} J(u) := \frac{1}{2} \int_\Omega \sum_{1 \le i,j \le N} a_{ij} \frac{\partial u}{\partial x_i} \frac{\partial u}{\partial x_j} - \langle f, u \rangle \qquad (21)$$
where $a_{ij} = a_{ji} \in L^\infty(\Omega)$ satisfies the ellipticity condition:
$$\exists \nu > 0 \text{ such that: } \sum_{1 \le i,j \le N} a_{ij}(x)\, \xi_i \xi_j \ge \nu \|\xi\|^2 \quad \text{a.e. } x \in \Omega, \ \forall \xi \in \mathbb{R}^N,$$
and $f \in H^{-1}(\Omega)$. Problem (21) clearly admits a unique solution $u$ that is characterized by:
$$(S) \qquad \begin{cases} \displaystyle -\sum_{1 \le i,j \le N} \frac{\partial}{\partial x_j}\left(a_{ij} \frac{\partial u}{\partial x_i}\right) = f + L & \text{in } H^{-1}(\Omega), \\ \langle L, u \rangle = 0, \\ u|_{\partial\Omega} = 0, \end{cases}$$
where $L = \sum_{1 \le i,j \le N} D_{ij}\mu_{ij}$ in $H^{-1}(\Omega)$ for some $S^+_N$-valued bounded Radon measure $\mu$.

Consider now the penalized problems:
$$(P_\varepsilon) \qquad \begin{cases} \inf J_\varepsilon(u) := J(u) + \frac{1}{\varepsilon}\, j(u), \\ u \in H^1_0(\Omega), \end{cases}$$
where $\varepsilon > 0$ and $j$ is the penalization functional:
$$j(u) := \frac{1}{2} \int_\Omega \int_\Omega \left[ \langle \nabla u(x) - \nabla u(y),\, x - y \rangle^- \right]^2 dx\,dy.$$
Let $u_\varepsilon$ denote the unique solution of $(P_\varepsilon)$. Then $u_\varepsilon$ is the solution of:
$$(S_\varepsilon) \qquad \begin{cases} \displaystyle -\sum_{1 \le i,j \le N} \frac{\partial}{\partial x_j}\left(a_{ij} \frac{\partial u_\varepsilon}{\partial x_i}\right) = f - \frac{1}{\varepsilon}\, j'(u_\varepsilon), \\ u_\varepsilon|_{\partial\Omega} = 0. \end{cases}$$

Proposition 5 We have, as $\varepsilon > 0$ goes to $0$:

1. $\lim u_\varepsilon = u$ in $H^1_0(\Omega)$,

2. $\lim -\frac{1}{\varepsilon}\, j'(u_\varepsilon) = L$ in $H^{-1}(\Omega)$,

3. $\lim \frac{1}{\varepsilon} \langle j'(u_\varepsilon), u_\varepsilon \rangle = 0$.

Proof. 1. follows from classical results on variational inequalities (cf. for instance Glowinski-Lions-Trémolières [9]); 2. is obtained by passing to the limit in $(S_\varepsilon)$, and 3. follows from 1., 2. and $\langle L, u \rangle = 0$.
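The penalization scheme of this section is straightforward to try numerically. The following sketch is illustrative only: it works in dimension one, with $a_{ij} = \delta_{ij}$, a hand-picked right-hand side $f$, a fixed grid and a fixed $\varepsilon$, none of which come from the paper. It minimizes a discretized $J_\varepsilon = J + \frac{1}{\varepsilon} j$ over grid functions vanishing at the endpoints with a generic quasi-Newton routine.

```python
# Hedged sketch: the penalized problem (P_eps) of Section 3.4 on Omega = (0,1),
# with a_ij = delta_ij, J(u) = 1/2 int |u'|^2 - int f u and the penalization j
# of Section 3.2 (p = 2).  The grid, f and epsilon are illustrative choices.
import numpy as np
from scipy.optimize import minimize

n = 30                                        # interior grid points
h = 1.0 / (n + 1)
t = np.linspace(h, 1.0 - h, n)
mid = 0.5 * (np.r_[0.0, t] + np.r_[t, 1.0])   # midpoints of the n+1 cells
f = 30.0 * np.sign(t - 0.5)                   # makes the unconstrained solution nonconvex

def J_eps(u_int, eps=1e-2):
    u = np.r_[0.0, u_int, 0.0]                # homogeneous Dirichlet condition
    s = np.diff(u) / h                        # slopes u' on each cell
    J = 0.5 * np.sum(s ** 2) * h - np.sum(f * u_int) * h
    # discrete analogue of j(u) = 1/2 * double integral of (phi(u)^-)^2
    phi = (s[:, None] - s[None, :]) * (mid[:, None] - mid[None, :])
    j = 0.5 * np.sum(np.minimum(phi, 0.0) ** 2) * h * h
    return J + j / eps

res = minimize(J_eps, np.zeros(n), method="L-BFGS-B")
u_eps = np.r_[0.0, res.x, 0.0]
slopes = np.diff(u_eps) / h
print("smallest slope increment:", float(np.min(np.diff(slopes))))  # ~ 0 for small eps
```

As Proposition 5 suggests, tightening $\varepsilon$ drives the slope increments toward nonnegativity, i.e. the penalized minimizer approaches a convex function.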

4 Applications

4.1 The one-dimensional case

We consider the following problem:
$$\inf_{x \in W^{1,p}(a,b) \cap \mathcal{C}} J(x) := \int_a^b L(t, x(t), \dot x(t))\,dt. \qquad (22)$$
Assume that:

(H1) $L$ is of class $C^1$ over $(a,b) \times \mathbb{R}^2$,

(H2) there exist $\beta > 0$, $\alpha \in L^{p'}((a,b))$ and $\gamma \in L^1((a,b))$ such that for all $(t, x, v) \in [a,b] \times \mathbb{R}^2$:
$$\left| \frac{\partial L}{\partial v}(t, x, v) \right| \le \alpha(t) + \beta(1 + |v|^{p-1}), \qquad \left| \frac{\partial L}{\partial x}(t, x, v) \right| \le \gamma(t) + \beta(1 + |v|^p),$$
where we assume $\infty > p > 1$ and $\frac{1}{p} + \frac{1}{p'} = 1$,

(H3) $L$ is strictly convex with respect to $v$.

Then the following regularity result holds:

Theorem 3 Under assumptions (H1), (H2) and (H3), any solution of (22) is of class $C^1$ in $(a,b)$.

Proof. Let $x$ be a solution of (22); then it is a solution of the Euler-Lagrange equation:
$$-\frac{d}{dt}\left( \frac{\partial L}{\partial v}(\cdot, x(\cdot), \dot x(\cdot)) \right) + \frac{\partial L}{\partial x}(\cdot, x(\cdot), \dot x(\cdot)) = \ddot\mu \quad \text{in } W^{-1,p'}((a,b)) \qquad (23)$$
for some nonnegative bounded Radon measure $\mu$, and:
$$\langle \ddot\mu, x \rangle = 0. \qquad (24)$$
Rewrite (23) as:
$$\frac{d}{dt}\left( \dot\mu + \frac{\partial L}{\partial v}(\cdot, x(\cdot), \dot x(\cdot)) \right) = \frac{\partial L}{\partial x}(\cdot, x(\cdot), \dot x(\cdot)); \qquad (25)$$
since the right-hand side of (25) is in $L^1((a,b))$, $\dot\mu + \frac{\partial L}{\partial v}(\cdot, x(\cdot), \dot x(\cdot))$ is absolutely continuous and (25) can be integrated:
$$\dot\mu(t) + \frac{\partial L}{\partial v}(t, x(t), \dot x(t)) = \gamma + \int_a^t \frac{\partial L}{\partial x}(s, x(s), \dot x(s))\,ds, \quad \gamma \text{ a constant}. \qquad (26)$$
On the other hand,
$$\frac{\partial L}{\partial v}(\cdot, x(\cdot), \dot x(\cdot)) \in L^{p'}((a,b)); \qquad (27)$$
then (26) and (27) imply that $\mu \in W^{1,p'}$. With no loss of generality, $x$ being convex, it may be assumed that $\dot x$ is right-continuous, which, with (26), shows that $\dot\mu$ is also right-continuous. Since $\mu$ is continuous and $\dot x$ is $BV_{\rm loc}$, we may define the nonnegative measure $\mu\,d\dot x$ that has density $\mu$ with respect to the variation (Stieltjes) measure $d\dot x$ associated to $\dot x$. Let us show then that (24) implies $\mu\,d\dot x = 0$.

Let $\alpha$ and $\beta$ be two points of $(a,b)$ with $\alpha < \beta$ such that $x$ is differentiable at both $\alpha$ and $\beta$, and let $(x_n)$ be a sequence of convex $C^\infty([a,b])$ functions converging to $x$ in $W^{1,p}((a,b))$ and $(\mu_n)$ a sequence of nonnegative $C^\infty([a,b])$ functions converging to $\mu$ in $W^{1,p'}((a,b))$.

We know from the proof of Proposition 1 that $(\dot x_n(\alpha))$ (respectively $(\dot x_n(\beta))$) converges to $\dot x(\alpha)$ (respectively $\dot x(\beta)$). Moreover:
$$\int_\alpha^\beta \mu_n\,d\dot x_n - \int_\alpha^\beta \mu\,d\dot x = \int_\alpha^\beta \mu_n\,d(\dot x_n - \dot x) + \int_\alpha^\beta (\mu_n - \mu)\,d\dot x. \qquad (28)$$
First:
$$\limsup \left| \int_\alpha^\beta (\mu_n - \mu)\,d\dot x \right| \le \limsup \|\mu_n - \mu\|_{L^\infty}\, (\dot x(\beta) - \dot x(\alpha)) = 0 \qquad (29)$$
and:
$$\int_\alpha^\beta \mu_n\,d(\dot x_n - \dot x) = -\int_\alpha^\beta (\dot x_n - \dot x)\,\dot\mu_n + \mu_n(\beta)(\dot x_n(\beta) - \dot x(\beta)) - \mu_n(\alpha)(\dot x_n(\alpha) - \dot x(\alpha)),$$
so that:
$$\left| \int_\alpha^\beta \mu_n\,d(\dot x_n - \dot x) \right| \le \|\dot x_n - \dot x\|_{L^p} \|\dot\mu_n\|_{L^{p'}} + \|\mu_n\|_{L^\infty}\left( |\dot x_n(\beta) - \dot x(\beta)| + |\dot x_n(\alpha) - \dot x(\alpha)| \right);$$
we then get:
$$\lim_n \int_\alpha^\beta \mu_n\,d(\dot x_n - \dot x) = 0; \qquad (30)$$
finally, using (29) and (30), we obtain:
$$\lim_n \int_\alpha^\beta \mu_n\,d\dot x_n = \int_\alpha^\beta \mu\,d\dot x.$$
On the other hand:
$$0 = \langle \ddot\mu, x \rangle = \lim_n \int_a^b \ddot x_n\,\mu \ge \lim_n \int_\alpha^\beta \ddot x_n\,\mu = \lim_n \int_\alpha^\beta \ddot x_n\,\mu_n = \lim_n \int_\alpha^\beta \mu_n\,d\dot x_n = \int_\alpha^\beta \mu\,d\dot x;$$
then we get that $\int_\alpha^\beta \mu\,d\dot x = 0$ for almost every $\alpha$, $\beta$ and, since $\mu\,d\dot x \ge 0$, this implies
$$\mu\,d\dot x = 0. \qquad (31)$$
Now suppose $\dot x$ is discontinuous at $t \in (a,b)$; $t$ is an atom of $d\dot x$, $\dot x(t) = \dot x(t^+) > \dot x(t^-)$, and (31) then implies:
$$\mu(t) = 0 = \inf_{(a,b)} \mu \ \Rightarrow\ \dot\mu(t^-) \le 0 \le \dot\mu(t^+),$$
which, with (26), yields:
$$\frac{\partial L}{\partial v}(t, x(t), \dot x(t^-)) \ge \frac{\partial L}{\partial v}(t, x(t), \dot x(t^+)),$$
which is a contradiction, since $\dot x(t^+) > \dot x(t^-)$ and $\frac{\partial L}{\partial v}(t, x(t), \cdot)$ is increasing.


4.2 The radial case

In this section $N = 2$ and $\Omega$ is the open unit ball of $\mathbb{R}^2$. Consider then the following problem:
$$\inf_{u \in H^1_0 \cap \mathcal{C}} J(u) := \int_\Omega \left[ \frac{1}{2}\|\nabla u\|^2 + g u \right]. \qquad (32)$$
It is also assumed that $g \in L^2(\Omega)$ and $g$ is radial:
$$g(x,y) = f(r) = f(\|(x,y)\|); \qquad r^{1/2} f \in L^2(0,1).$$
First, it is clear that the solution of (32) is radial since it is unique and $J$ is rotationally invariant. Using polar coordinates, (32) is equivalent to:
$$(\mathcal{P}) \qquad \begin{cases} \inf I(v) := \displaystyle\int_0^1 \left[ \frac{1}{2}\, r \dot v(r)^2 + r f(r) v(r) \right] dr, \\ v \text{ convex}, \\ v(1) = 0, \\ \dot v \ge 0, \end{cases}$$
in the sense that the solution $u$ of (32) is given by:
$$u(x,y) = v(\|(x,y)\|) \quad \text{for all } (x,y) \in \Omega,$$
where $v$ denotes the solution of $(\mathcal{P})$. It can be shown as in the proof of Theorem 3 that $v \in C^1((0,1))$. To show that $u \in C^1(\Omega)$, it is therefore enough to show that $\dot v(0) = 0$, since a singularity may only occur at $r = 0$.

Proposition 6 The solution of (32) is $C^1(\Omega)$.

Proof. First note that for all $v$ for which $I(v)$ is well defined, one has (integrating by parts and using $v(1) = 0$):
$$I(v) = \int_0^1 \left[ \frac{1}{2}\, r \dot v(r)^2 - G(r) \dot v(r) \right] dr \quad \text{where} \quad G(r) := \int_0^r s f(s)\,ds.$$
Hence, $x := \dot v$ is the solution of the following problem:
$$(\mathcal{P}') \qquad \begin{cases} \inf F(x) := \displaystyle\int_0^1 \left[ \frac{1}{2}\, r x(r)^2 - G(r) x(r) \right] dr, \\ x \text{ nondecreasing}, \ x \ge 0. \end{cases}$$

First order necessary conditions for problems such as $(\mathcal{P}')$ are well known (see [13]) and can be written as follows: if $x$ is a solution of $(\mathcal{P}')$, then, defining
$$\Lambda(r) := \int_0^r [s x(s) - G(s)]\,ds, \quad r \in [0,1],$$
the following conditions hold:

1. $\Lambda(\cdot) \le \Lambda(1)$,

2. $(\Lambda(\cdot) - \Lambda(1))\,dx = 0$,

3. $\Lambda(1)\, x(0) = 0$,

where $dx$ denotes the variation measure associated with the nondecreasing continuous function $x$.

Distinguish then two cases.

First case: $\Lambda(1) \neq 0$. In this case, condition 3. ensures $x(0) = 0$.

Second case: $\Lambda(1) = 0$. Condition 1. then implies $\Lambda \le 0$. Since $x$ is nondecreasing, we get:
$$\Lambda(r) \ge \int_0^r s\left[ x(0) - \frac{G(s)}{s} \right] ds; \qquad (33)$$
on the other hand:
$$\left| \frac{G(s)}{s} \right| \le \frac{1}{s}\left( \int_0^s t f(t)^2\,dt \right)^{1/2} \left( \int_0^s t\,dt \right)^{1/2}, \qquad (34)$$
so that $G(s) = o(s)$ for small $s > 0$. Now, if $x(0) > 0$, (33) and (34) would imply $\Lambda(r) > 0$ for small $r$, which contradicts 1.

Finally, we have proved $x(0) = 0$; since $x(0) = \dot v(0)$, the desired result follows.

Remark. The previous result obviously extends to radial problems in dimensions higher than 2.
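A quick numerical experiment also confirms the behaviour proved above. The sketch below is illustrative only: the choice of $f$, the grid, and the reparametrization of the monotonicity constraint as a cumulative sum of nonnegative increments are assumptions of the sketch, not part of the paper. It discretizes $(\mathcal{P}')$, minimizes over nondecreasing nonnegative $x$, and observes that $x(0) = \dot v(0)$ is driven to $0$.

```python
# Hedged sketch: the reduced radial problem (P') of Section 4.2,
#   inf F(x) = int_0^1 [ (1/2) r x(r)^2 - G(r) x(r) ] dr,  x nondecreasing, x >= 0,
# on a grid.  Monotonicity is enforced by writing x as a cumulative sum of
# nonnegative increments; f and the grid are illustrative choices.
import numpy as np
from scipy.optimize import minimize

m = 200
r = np.linspace(1.0 / m, 1.0, m)
dr = r[1] - r[0]
f = np.where(r < 0.4, 8.0, -1.0)             # an arbitrary radial profile
G = np.cumsum(r * f) * dr                    # G(r) = int_0^r s f(s) ds

def F_and_grad(d):
    x = np.cumsum(d)                         # d >= 0  =>  x nondecreasing and x >= 0
    val = np.sum(0.5 * r * x ** 2 - G * x) * dr
    gx = (r * x - G) * dr                    # dF/dx_i
    gd = np.cumsum(gx[::-1])[::-1]           # dF/dd_k = sum_{i >= k} dF/dx_i
    return val, gd

res = minimize(F_and_grad, np.full(m, 0.1), jac=True,
               method="L-BFGS-B", bounds=[(0.0, None)] * m)
x = np.cumsum(res.x)
print("x(0) ~", float(x[0]))                 # small; tends to 0 as the grid is refined
print("flat ('ironed') fraction:", float(np.mean(res.x < 1e-6)))
```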


4.3 A projection problem

Consider the following projection problem:
$$\inf_{u \in H^1_0(0,1) \cap \mathcal{C}} J(u) := \int_0^1 \left[ \frac{1}{2} |\dot u(t)|^2 + f(t) u(t) \right] dt \qquad (35)$$
where $f \in L^2$. Let $u_0$ be the element of $H^1_0(0,1)$ at which the minimum of $J$ over $H^1_0(0,1)$ is achieved, i.e., the solution of the following boundary value problem:
$$\ddot u = f, \quad u(0) = u(1) = 0, \qquad (36)$$
and define $v_0$ as the convex envelope of $u_0$ (see for instance [8]); then the following holds:

Proposition 7 $v_0$ is the solution of (35).

Proof. Using the results of Section 3, we know that $v$ is a solution of (35) if and only if $v$ is convex and there exists a nonnegative Radon measure $\mu$ such that $\ddot\mu \in H^{-1}(0,1)$ and:
$$\int_0^1 (\dot v \dot h + f h) = \int_0^1 \ddot h\,d\mu \quad \text{for all } h \in C^2([0,1]) \cap H^1_0, \qquad \langle \ddot\mu, v \rangle = 0.$$
It is easy to show that $v_0$ solves such a system with $\mu = u_0 - v_0$, since for a.e. $t$ either $(u_0 - v_0)(t) = 0$ or $\ddot v_0(t) = 0$, and since $\mu(0) = \mu(1) = 0$.

Remark. The previous result, though straightforward, has an interesting geometrical interpretation. Problem (35) consists indeed in finding the projection of $u_0$ on the cone of convex $H^1_0$ functions for the norm $u \mapsto \|\dot u\|_2$. The previous result then expresses that in dimension one the projection of a given $H^2 \cap H^1_0$ function on $K^2_0$ is exactly its convex envelope. In higher dimension, this is obviously no longer true. Nevertheless, the previous proof still holds if $u_0$ is smooth and its convex envelope is affine in the set where it differs from $u_0$.
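Proposition 7 is also easy to test numerically. The sketch below is illustrative only; the right-hand side $f$, the grid, and the lower-hull routine are assumptions of the example. It solves the two-point problem (36) on a grid, computes the convex envelope of $u_0$ as the lower convex hull of its graph, compares $J$ against a simple family of convex competitors, and checks the contact/affine structure used in the proof.

```python
# Hedged sketch: Proposition 7 in practice.  Solve u'' = f with u(0)=u(1)=0,
# take the convex envelope v0 of u0 (lower convex hull of the graph), and
# inspect it.  f and the discretization are illustrative choices.
import numpy as np

n = 400
t = np.linspace(0.0, 1.0, n + 1)
h = t[1] - t[0]
f = 40.0 * np.cos(3 * np.pi * t)            # chosen so that u0 is not convex

# Solve the discrete two-point problem u'' = f, u(0) = u(1) = 0.
A = (np.diag(-2.0 * np.ones(n - 1)) + np.diag(np.ones(n - 2), 1)
     + np.diag(np.ones(n - 2), -1)) / h ** 2
u0 = np.r_[0.0, np.linalg.solve(A, f[1:-1]), 0.0]

def convex_envelope(ts, vals):
    """Lower convex hull of the points (ts[i], vals[i]), then linear interpolation."""
    hull = [0]
    for i in range(1, len(ts)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop the middle point if the slopes do not strictly increase
            if (vals[i1] - vals[i0]) * (ts[i] - ts[i1]) >= (vals[i] - vals[i1]) * (ts[i1] - ts[i0]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(ts, ts[hull], vals[hull])

v0 = convex_envelope(t, u0)

def J(u):
    return np.sum(0.5 * np.diff(u) ** 2 / h + 0.5 * (f[:-1] * u[:-1] + f[1:] * u[1:]) * h)

# v0 should beat any convex competitor in H^1_0; try the family w_a(t) = a*(t^2 - t).
competitors = [a * (t ** 2 - t) for a in np.linspace(0.0, 60.0, 13)]
print("J(v0) =", float(J(v0)), "  best competitor:", float(min(J(w) for w in competitors)))
print("contact or affine a.e.:",
      bool(np.all((np.abs(u0 - v0)[1:-1] < 1e-8) | (np.abs(np.diff(v0, 2)) < 1e-8 * h))))
```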

Acknowledgements


References

[1] J.-P. Aubin and I. Ekeland. Applied Nonlinear Analysis. Wiley Interscience, 1984.

[2] H. Brézis. Analyse Fonctionnelle: Théorie et Applications. Masson, Paris, 1984.

[3] B. Brock, V. Ferone, and B. Kawohl. A symmetry problem in the calculus of variations. Calc. Var. Partial Differential Equations, 1996.

[4] G. Buttazzo, V. Ferone, and B. Kawohl. Minimum problems over sets of concave functions and related questions. Math. Nachrichten, 173:71–89, 1995.

[5] P. Choné and J.-C. Rochet. Ironing, sweeping and multidimensional screening. Econometrica, 1998.

[6] B. Dacorogna. Direct Methods in the Calculus of Variations. Springer-Verlag, 1989.

[7] R.M. Dudley. On second derivatives of convex functions. Mathematica Scandinavica, 41:159–174, 1977.

[8] I. Ekeland and R. Temam. Analyse Convexe et Problèmes Variationnels. Dunod-Gauthier-Villars, 1974.

[9] R. Glowinski, J.-L. Lions, and R. Trémolières. Analyse Numérique des Inéquations Variationnelles. Dunod, 1976.

[10] T. Lachand-Robert and M.A. Peletier. Minimisation de fonctionnelles dans un ensemble de fonctions convexes. C.R. Acad. Sci. Paris, 325:851–855, 1997.

[11] P.-L. Lions. Identification du cône dual des fonctions convexes et applications. C.R. Acad. Sci. Paris, 1998.

[12] P.-A. Meyer. Probability and Potentials. Blaisdell Publishing Co., Waltham, Mass., 1966.

[13] J.-C. Rochet. Sur quelques problèmes de calcul des variations de l'économie mathématique. Annals of the CEREMADE, Birkhäuser, 1989.
