
Ann. I. H. Poincaré – AN 32 (2015) 763–783

www.elsevier.com/locate/anihpc

Combination and mean width rearrangements of solutions to elliptic equations in convex sets

Paolo Salani

DiMaI Dipartimento di Matematica e Informatica “U. Dini”, Università di Firenze, viale Morgagni 67/A, 50134 Firenze, Italy

Received 4 November 2013; received in revised form 6 April 2014; accepted 7 April 2014

Available online 16 April 2014

Abstract

We introduce a method to compare solutions of different equations in different domains. As a consequence, we define a new kind of rearrangement which applies to solutions of fully nonlinear equations $F(x, u, Du, D^2u) = 0$, not necessarily in divergence form, in convex domains, and we obtain Talenti-type results for this kind of rearrangement.

©2014 Elsevier Masson SAS. All rights reserved.

Keywords: Rearrangements; Elliptic equations; Infimal convolution; Power concave envelope; Minkowski addition of convex sets

1. Introduction

Rearrangements are among the most powerful tools in analysis. Roughly speaking, they manipulate the shape of an object while preserving some of its relevant geometric properties. Typically, a rearrangement of a function is performed by acting separately on each of its level sets. Probably the most famous one is the radially symmetric decreasing rearrangement, or Schwarz symmetrization: the Schwarz symmetrand of a continuous function $w \geq 0$ is the function $w^\star$ whose superlevel sets are concentric balls (usually centered at the origin) with the same measure as the corresponding superlevel sets of $w$. Notice that $w^\star$, by definition, is equidistributed with $w$. When applied to the study of solutions of partial differential equations with a divergence structure, this usually leads to a comparison between the solution in a generic domain and the solution of (a possibly "rearranged" version of) the same equation in a ball with the same measure as the original domain. An archetypal result of this type is the following (see [39]):

let $u^\star$ be the Schwarz symmetrand of the solution $u$ of

$$\Delta u + f(x) = 0 \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega, \tag{1}$$

and let $v$ be the solution of

$$\Delta v + f^\star(x) = 0 \ \text{ in } \Omega^\star, \qquad v = 0 \ \text{ on } \partial\Omega^\star,$$

E-mail address: paolo.salani@unifi.it.

http://dx.doi.org/10.1016/j.anihpc.2014.04.001

0294-1449/©2014 Elsevier Masson SAS. All rights reserved.


where $\Omega^\star$ is the ball (centered at the origin) with the same measure as $\Omega$, $f$ is a non-negative function and $f^\star$ is the Schwarz symmetrand of $f$. Then, under suitable summability assumptions on $f$, it holds

$$u^\star \leq v \quad \text{in } \Omega^\star, \tag{2}$$

whence

$$\|u\|_{L^p(\Omega)} \leq \|v\|_{L^p(\Omega^\star)} \tag{3}$$

for every $p > 0$, including $p = +\infty$.
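As a purely illustrative aside (not taken from the paper), the following Python sketch shows a discrete analogue of the Schwarz symmetrand defined above: the sampled values of $w$ are redistributed over grid cells ordered by their distance from the origin, so that the result is radially decreasing and, by construction, equidistributed with $w$. All names and numerical choices here are ours.

```python
import numpy as np

def schwarz_symmetrand(w, dx):
    """Discrete Schwarz symmetrand of a nonnegative function sampled on a
    square grid of cell size dx centered at the origin: the values of w are
    sorted in decreasing order and reassigned to cells ordered by distance
    from the origin, so every superlevel set of the result is (up to
    discretization) a ball with the same measure as the corresponding
    superlevel set of w."""
    n = w.shape[0]
    coords = (np.arange(n) - (n - 1) / 2.0) * dx          # cell centers
    X, Y = np.meshgrid(coords, coords, indexing="ij")
    dist = np.hypot(X, Y).ravel()

    order_by_distance = np.argsort(dist)                  # cells from the center outwards
    values_decreasing = np.sort(w.ravel())[::-1]          # values from the largest down

    w_star = np.empty(n * n)
    w_star[order_by_distance] = values_decreasing
    return w_star.reshape(n, n)

if __name__ == "__main__":
    # an asymmetric bump sampled on [-1,1]^2
    n = 201
    dx = 2.0 / (n - 1)
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    w = np.maximum(0.0, 1.0 - (X - 0.3) ** 2 - 2.0 * Y ** 2)

    ws = schwarz_symmetrand(w, dx)
    t = 0.4
    # equidistribution: the superlevel sets {w > t} and {w* > t} have the same measure
    print(np.sum(w > t) * dx ** 2, np.sum(ws > t) * dx ** 2)
```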

Actually Talenti's comparison principle (2)–(3) applies to more general situations, and the Laplace operator in (1) can be substituted by operators like

$$\operatorname{div}\bigl(a_{ij}(x)\,u_{x_j}\bigr) + c(x)\,u,$$

or even more general ones (see for instance [2–4,39–41]), but always in divergence form.

Here we introduce a new kind of rearrangement, which allows us to obtain comparison results similar to (2)–(3) for very general equations, not necessarily in divergence form, between a classical solution in a convex domain $\Omega$ and the solution in the ball $\Omega^\star$ with the same mean width as $\Omega$. Recall that the mean width $w(\Omega)$ of $\Omega$ is defined as follows:

$$w(\Omega) = \frac{1}{n\,\omega_n}\int_{S^{n-1}} \bigl[h(\Omega,\xi) + h(\Omega,-\xi)\bigr]\,d\xi = \frac{2}{n\,\omega_n}\int_{S^{n-1}} h(\Omega,\xi)\,d\xi,$$

where $h(\Omega,\cdot)$ is the support function of $\Omega$ (so that $w(\Omega,\xi) = w(\Omega,-\xi) = h(\Omega,\xi) + h(\Omega,-\xi)$ is the width of $\Omega$ in direction $\xi$ or $-\xi$) and $\omega_n$ is the measure of the unit ball in $\mathbb{R}^n$. When $\Omega$ is a ball, $w(\Omega)$ simply coincides with its diameter; in the plane $w(\Omega)$ coincides with the perimeter of $\Omega$, up to a factor $\pi^{-1}$. See Section 2 for more details, notation and definitions.
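The definition of the mean width can be made concrete with a short numerical sketch (ours, not the paper's): in the plane, $n = 2$ and $\omega_2 = \pi$, so $w(\Omega)$ reduces to the perimeter divided by $\pi$ by Cauchy's formula, which the quadrature below reproduces for the unit square.

```python
import numpy as np

def support_function(vertices, directions):
    """h(Omega, xi) = max_{y in Omega} <xi, y> for a convex polygon; over a
    polytope the maximum is attained at a vertex."""
    return np.max(directions @ vertices.T, axis=1)

def mean_width_2d(vertices, m=100000):
    """w(Omega) = 2/(n*omega_n) * integral over S^{n-1} of h(Omega, xi) dxi,
    here with n = 2, omega_2 = pi, approximated by uniform quadrature on S^1."""
    theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    xi = np.column_stack([np.cos(theta), np.sin(theta)])
    h = support_function(vertices, xi)
    n, omega_n = 2, np.pi
    return 2.0 / (n * omega_n) * h.mean() * 2.0 * np.pi   # mean(h) * |S^1|

if __name__ == "__main__":
    # unit square: perimeter 4, so the mean width should be 4/pi
    square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    print(mean_width_2d(square), 4.0 / np.pi)
```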

Precisely, we will deal with problems of the following type:

$$\begin{cases} F\bigl(x, u, Du, D^2u\bigr) = 0 & \text{in } \Omega,\\ u = 0 & \text{on } \partial\Omega,\\ u > 0 & \text{in } \Omega, \end{cases} \tag{4}$$

where $F(x, t, \xi, A)$ is a continuous proper elliptic operator acting on $\mathbb{R}^n\times\mathbb{R}\times\mathbb{R}^n\times\mathcal{S}^n$ and $\Omega$ is an open bounded convex subset of $\mathbb{R}^n$. Here $Du$ and $D^2u$ are the gradient and the Hessian matrix of the function $u$, respectively, and $\mathcal{S}^n$ is the set of the $n\times n$ real symmetric matrices.

We will see how, given a solution $u$ of problem (4) and a parameter $p > 0$, it is possible to associate to $u$ a symmetrand $u^\star_p$ which is defined in a ball $\Omega^\star$ having the same mean width as $\Omega$. Under suitable assumptions on the operator $F$ (see Theorem 6.6) we obtain a pointwise comparison analogous to (2) between $u^\star_p$ and the solution $v$ in $\Omega^\star$, that is

$$u^\star_p \leq v \quad \text{in } \Omega^\star, \tag{5}$$

where $v$ is the solution of

$$\begin{cases} F\bigl(x, v, Dv, D^2v\bigr) = 0 & \text{in } \Omega^\star,\\ v = 0 & \text{on } \partial\Omega^\star,\\ v > 0 & \text{in } \Omega^\star. \end{cases} \tag{6}$$

Then from (5) we get

$$\|u\|_{L^q(\Omega)} \leq \|v\|_{L^q(\Omega^\star)} \quad \text{for every } q \in (0,+\infty]. \tag{7}$$

The precise definition of $u^\star_p$ is actually quite involved and it will be given in Section 6. Here we just say that $u^\star_p$ is not equidistributed with $u$, in contrast with Schwarz symmetrization; indeed the measure of the superlevel sets of $u^\star_p$ is greater than the measure of the corresponding superlevel sets of $u$.


The results of this paper are based on the refinement of a technique developed in [5,14,19] (and inspired by [1]) to study concavity properties of solutions of elliptic and parabolic equations in convex rings and in convex domains. It is shown here that this refinement permits comparison of solutions of different equations in different domains, and this is in fact the main result of the paper; see Theorem 4.1. More explicitly, consider two convex sets $\Omega_0$ and $\Omega_1$ and a real number $\mu \in (0,1)$, and denote by $\Omega_\mu$ the Minkowski convex combination (with coefficient $\mu$) of $\Omega_0$ and $\Omega_1$, that is,

$$\Omega_\mu = (1-\mu)\Omega_0 + \mu\Omega_1 = \bigl\{(1-\mu)x_0 + \mu x_1 : x_0 \in \Omega_0,\ x_1 \in \Omega_1\bigr\}.$$

Correspondingly, let $u_0$, $u_1$ and $u_\mu$ be the solutions of

$$(P_i)\qquad \begin{cases} F_i\bigl(x, u_i, Du_i, D^2u_i\bigr) = 0 & \text{in } \Omega_i,\\ u_i = 0 & \text{on } \partial\Omega_i,\\ u_i > 0 & \text{in } \Omega_i, \end{cases} \qquad i = 0, 1, \mu.$$

Roughly speaking (the precise statement will be given in Section 4), Theorem 4.1 states that, under suitable assumptions on the operators $F_0$, $F_1$ and $F_\mu$, it is possible to compare $u_\mu$ with a suitable convolution of $u_0$ and $u_1$. Such a result has its own interest and several interesting consequences, among which there is the rearrangement technique sketched above.

The paper is organized as follows. In Section 2 we introduce notation and recall some useful notions and known results. Section 3 is dedicated to the so-called $(p,\mu)$-convolution of non-negative functions. In Section 4 we state Theorem 4.1, the main theorem of the paper, and in Section 5 we prove it. Section 6 is devoted to rearrangements: it contains the definition of $u^\star_p$ and Theorem 6.6. In Section 7 some examples and applications are presented.

2. Notation and preliminaries

For $A \subseteq \mathbb{R}^n$, we denote by $\overline{A}$, $\partial A$ and $|A|$ its closure, its boundary and its measure.

Let $n \geq 2$, $x \in \mathbb{R}^n$ and $r > 0$: $B(x, r)$ is the Euclidean ball of radius $r$ centered at $x$, i.e.

$$B(x, r) = \bigl\{z \in \mathbb{R}^n : |z - x| < r\bigr\}.$$

In particular we set $B = B(0,1)$, $S^{n-1} = \partial B$ and $\omega_n = |B|$.

We denote by $\mathcal{S}^n$ the space of $n\times n$ real symmetric matrices and by $\mathcal{S}^n_+$ and $\mathcal{S}^n_{++}$ the cones of nonnegative and positive definite symmetric matrices. If $A, B \in \mathcal{S}^n$, by $A \geq 0$ ($> 0$) we mean that $A \in \mathcal{S}^n_+$ ($\mathcal{S}^n_{++}$), and $A \geq B$ means $A - B \geq 0$.

$SO(n)$ is the special orthogonal group of $\mathbb{R}^n$, that is the space of rotations in $\mathbb{R}^n$, i.e. $n\times n$ orthogonal matrices with determinant 1.

With the symbol $\otimes$ we denote the direct product between vectors in $\mathbb{R}^n$, that is, for $x = (x_1, \dots, x_n)$ and $y = (y_1, \dots, y_n)$, $x\otimes y$ is the $n\times n$ matrix with entries $(x_i y_j)$ for $i, j = 1, \dots, n$.

2.1. Viscosity solutions

We will make use of basic viscosity techniques; here we recall only a few notions and we refer to the User's Guide [13] and to the books [9,24] for more details.

The continuous operator $F : \mathbb{R}^n\times\mathbb{R}\times\mathbb{R}^n\times\mathcal{S}^n \to \mathbb{R}$ is called proper if

$$F(x, r, \xi, A) \leq F(x, s, \xi, A) \quad \text{whenever } r \geq s.$$

Let $\Gamma$ be a convex cone in $\mathcal{S}^n$, with vertex at the origin and containing the cone of nonnegative definite symmetric matrices $\mathcal{S}^n_+$. We say that $F$ is degenerate elliptic in $\Gamma$ if

$$F(x, u, \xi, A) \leq F(x, u, \xi, B) \quad \text{whenever } A \leq B,\ A, B \in \Gamma.$$

We set $\Gamma_F = \bigcup \Gamma$, where the union is extended to every cone $\Gamma$ such that $F$ is degenerate elliptic in $\Gamma$. When we say that $F$ is degenerate elliptic, we mean that $F$ is degenerate elliptic in $\Gamma_F \neq \emptyset$. A function $u \in C^2(\Omega)$ is called admissible for $F$ in $\Omega$ if $D^2u(x) \in \Gamma_F$ for every $x \in \Omega$. In general, unless otherwise specified, we will consider for simplicity only operators such that $\Gamma_F = \mathcal{S}^n$ throughout (then every regular function is admissible).


Given two functions $u$ and $\varphi$ defined in an open set $\Omega$, we say that $\varphi$ touches $u$ by above at $x_0 \in \Omega$ if

$$\varphi(x_0) = u(x_0) \quad \text{and} \quad \varphi(x) \geq u(x) \ \text{in a neighborhood of } x_0.$$

Analogously, we say that $\varphi$ touches $u$ by below at $x_0 \in \Omega$ if

$$\varphi(x_0) = u(x_0) \quad \text{and} \quad \varphi(x) \leq u(x) \ \text{in a neighborhood of } x_0.$$

An upper semicontinuous function $u$ is a viscosity subsolution of the equation $F = 0$ in $\Omega$ if, for every $C^2$ function $\varphi$ touching $u$ by above at any point $x \in \Omega$, it holds

$$F\bigl(x, u(x), D\varphi(x), D^2\varphi(x)\bigr) \geq 0. \tag{8}$$

A lower semicontinuous function $u$ is a viscosity supersolution of $F = 0$ in $\Omega$ if, for every admissible $C^2$ function $\varphi$ touching $u$ by below at any point $x \in \Omega$, it holds

$$F\bigl(x, u(x), D\varphi(x), D^2\varphi(x)\bigr) \leq 0.$$

A viscosity solution is a continuous function which is both a viscosity subsolution and a viscosity supersolution of $F = 0$ at the same time.

The technique proposed in this paper requires the use of the comparison principle for viscosity solutions. Since we will have to compare a viscosity subsolution only with a classical solution, we will need only a weak version of the comparison principle. To be precise, we say that the operator $F$ satisfies the Comparison Principle if the following statement holds:

(CP) Let $u \in C(\overline{\Omega})\cap C^2(\Omega)$ and $v \in C(\overline{\Omega})$ be a classical supersolution and a viscosity subsolution of $F = 0$ such that $u \geq v$ on $\partial\Omega$. Then $u \geq v$ in $\Omega$.

Comparison principles for viscosity solutions are an active and deep field of investigation and we do not intend to give here an updated picture of the state of the art; we just refer to [9,13,24]. However, when one of the involved functions is regular the situation is much easier, and (CP) is for instance satisfied if $F$ is strictly proper, in other words if it is strictly monotone with respect to $u$.

2.2. Minkowski addition and support functions of convex sets

The Minkowski sum of two subsets $A_0$ and $A_1$ of $\mathbb{R}^n$ is simply defined as follows:

$$A_0 + A_1 = \{x + y : x \in A_0,\ y \in A_1\}.$$

Let $\mu \in (0,1)$; the Minkowski convex combination of $A_0$ and $A_1$ (with coefficient $\mu$) is given by

$$A_\mu = (1-\mu)A_0 + \mu A_1 = \bigl\{(1-\mu)x_0 + \mu x_1 : x_0 \in A_0,\ x_1 \in A_1\bigr\}.$$

The famous Brunn–Minkowski inequality states

$$|A_\mu|^{1/n} \geq (1-\mu)|A_0|^{1/n} + \mu|A_1|^{1/n} \tag{9}$$

for every couple $A_0$, $A_1$ of measurable sets such that $A_\mu$ is also measurable. In other words, (9) states that the $n$-dimensional volume (i.e. Lebesgue measure) raised to the power $1/n$ is concave with respect to Minkowski addition (see the beautiful paper by Gardner [16] for a survey on this and related inequalities).

When the involved sets are convex, Minkowski addition can be conveniently expressed in terms of support functions (see property (ii) below).

The support function $h_\Omega : \mathbb{R}^n \to \mathbb{R}$ of a bounded convex set $\Omega$ is defined as follows:

$$h_\Omega(X) = \max_{y \in \overline{\Omega}} \langle X, y \rangle, \qquad X \in \mathbb{R}^n.$$

Every support function is convex and positively homogeneous of degree 1, that is,

$$h_\Omega(X + Y) \leq h_\Omega(X) + h_\Omega(Y) \quad \text{for every } X, Y \in \mathbb{R}^n$$


and

$$h_\Omega(tX) = t\,h_\Omega(X) \quad \text{for every } X \in \mathbb{R}^n \text{ and } t \geq 0.$$

Vice versa, every convex and positively 1-homogeneous function is the support function of a convex body (i.e. a closed bounded convex set). This establishes a one-to-one correspondence between support functions and convex bodies.

Moreover the following properties hold:

(i) $h_{t\Omega} = t\,h_\Omega$ for $t \geq 0$;

(ii) $h_{\Omega_1+\Omega_2} = h_{\Omega_1} + h_{\Omega_2}$.

The latter simply says that the Minkowski addition of convex sets corresponds to the sum of the support functions.
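Property (ii) can be checked numerically on convex hulls of finite point sets, since the support function of a hull is a maximum over the generating points and the Minkowski sum of two hulls is the hull of all pairwise sums. The sketch below (ours) does exactly that.

```python
import numpy as np

rng = np.random.default_rng(1)

def h(points, X):
    """Support function of conv(points) at the directions X (rows): the max of
    <X, y> over the generating points."""
    return np.max(X @ points.T, axis=-1)

# two planar convex bodies described by generating point clouds
P = rng.normal(size=(8, 2))
Q = rng.normal(size=(8, 2)) + np.array([2.0, -1.0])

# Minkowski sum of the hulls = convex hull of all pairwise sums
PQ = (P[:, None, :] + Q[None, :, :]).reshape(-1, 2)

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])

# property (ii): h_{Omega_1 + Omega_2} = h_{Omega_1} + h_{Omega_2}
print(np.max(np.abs(h(PQ, X) - (h(P, X) + h(Q, X)))))     # ~ 0 up to rounding
```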

As already said in the introduction, we denote the mean width of $\Omega$ by $w(\Omega)$, that is,

$$w(\Omega) = \frac{1}{n\,\omega_n}\int_{S^{n-1}} \bigl[h(\Omega,\xi) + h(\Omega,-\xi)\bigr]\,d\xi = \frac{2}{n\,\omega_n}\int_{S^{n-1}} h(\Omega,\xi)\,d\xi.$$

When $\Omega$ is a ball, $w(\Omega)$ coincides with its diameter. In the plane $w(\Omega)$ coincides with the perimeter of $\Omega$, up to a factor $\pi^{-1}$.

Given a convex set $\Omega$ and a point $x \in \partial\Omega$, we denote by $\nu_\Omega(x)$ the exterior normal cone of $\Omega$ at $x$, that is

$$\nu_\Omega(x) = \bigl\{p \in \mathbb{R}^n : \langle y - x, p\rangle \leq 0 \ \text{for every } y \in \overline{\Omega}\bigr\}.$$

The normal cone of a convex set is a non-empty convex cone for every boundary point, and in fact $\Omega$ is convex if and only if $\nu_\Omega(x) \neq \emptyset$ for every $x \in \partial\Omega$. The following elementary lemma about Minkowski addition will be useful in the sequel.

Lemma 2.1. Let $\Omega_0, \Omega_1 \subseteq \mathbb{R}^n$ be open bounded convex sets and $\mu \in (0,1)$.

Then $\Omega_\mu = (1-\mu)\Omega_0 + \mu\Omega_1$ is an open bounded convex set; moreover, if $x_0 \in \overline{\Omega_0}$ and $x_1 \in \overline{\Omega_1}$ are such that $x = (1-\mu)x_0 + \mu x_1 \in \partial\Omega_\mu$, then $x_0 \in \partial\Omega_0$, $x_1 \in \partial\Omega_1$ and $\nu_{\Omega_\mu}(x) = \nu_{\Omega_0}(x_0) \cap \nu_{\Omega_1}(x_1) \neq \emptyset$.

The properties stated in the lemma can be considered folklore in the theory of convex bodies and the proof is straightforward.

For further details on convex sets, Minkowski addition and support functions, we refer to [37].

2.3. Power concave functions

Let $p \in [-\infty,+\infty]$ and $\mu \in (0,1)$. Given two real numbers $a > 0$ and $b > 0$, the quantity

$$M_p(a, b;\mu) = \begin{cases} \max\{a, b\} & p = +\infty,\\ \bigl[(1-\mu)a^p + \mu b^p\bigr]^{1/p} & \text{for } p \neq -\infty, 0, +\infty,\\ a^{1-\mu}b^{\mu} & p = 0,\\ \min\{a, b\} & p = -\infty \end{cases} \tag{10}$$

is the ($\mu$-weighted) $p$-mean of $a$ and $b$. For $a, b \geq 0$, we define $M_p(a, b;\mu)$ as above if $p \geq 0$, and we set $M_p(a, b;\mu) = 0$ if $p < 0$ and $ab = 0$. Notice that $M_p$ is continuous with respect to $(a, b) \in [0,\infty)\times[0,\infty)$ for every $p$. See [17] for more details.

A simple consequence of Jensen's inequality is that

$$M_p(a, b;\mu) \leq M_q(a, b;\mu) \quad \text{if } p \leq q. \tag{11}$$

Moreover, for every $\mu \in (0,1)$ it holds

$$\lim_{p\to+\infty} M_p(a, b;\mu) = \max\{a, b\} \quad \text{and} \quad \lim_{p\to-\infty} M_p(a, b;\mu) = \min\{a, b\}.$$
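For later use it may help to see (10) and the monotonicity (11) in executable form; the following sketch (ours) implements the $\mu$-weighted $p$-mean, including the limiting cases and the convention for $p < 0$ when $ab = 0$, and checks (11) on random samples.

```python
import numpy as np

def M(a, b, mu, p):
    """mu-weighted p-mean M_p(a, b; mu) of a, b >= 0, as in (10)."""
    a, b = float(a), float(b)
    if p == np.inf:
        return max(a, b)
    if p == -np.inf:
        return min(a, b)
    if p < 0 and a * b == 0.0:
        return 0.0                      # convention for p < 0 when ab = 0
    if p == 0:
        return a ** (1 - mu) * b ** mu  # geometric mean
    return ((1 - mu) * a ** p + mu * b ** p) ** (1.0 / p)

rng = np.random.default_rng(2)
ps = [-np.inf, -2.0, -0.5, 0.0, 0.5, 1.0, 3.0, np.inf]
for _ in range(1000):
    a, b = rng.uniform(0.01, 10.0, 2)
    mu = rng.uniform(0.01, 0.99)
    vals = [M(a, b, mu, p) for p in ps]
    # (11): p <= q implies M_p <= M_q
    assert all(v <= w + 1e-12 for v, w in zip(vals, vals[1:]))
print("monotonicity (11) holds on all sampled triples")
```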


Definition 2.2. Let $\Omega$ be an open convex set in $\mathbb{R}^n$ and $p \in [-\infty,\infty]$. A function $v : \Omega \to [0,+\infty)$ is said $p$-concave if

$$v\bigl((1-\mu)x + \mu y\bigr) \geq M_p\bigl(v(x), v(y);\mu\bigr) \quad \text{for all } x, y \in \Omega \text{ and } \mu \in (0,1).$$

In the cases $p = 0$ and $p = -\infty$, $v$ is also called log-concave and quasi-concave in $\Omega$, respectively. In other words, a non-negative function $u$, with convex support $\Omega$, is $p$-concave if:

– it is a non-negative constant in $\Omega$, for $p = +\infty$;

– $u^p$ is concave in $\Omega$, for $p > 0$;

– $\log u$ is concave in $\Omega$, for $p = 0$;

– $u^p$ is convex in $\Omega$, for $p < 0$;

– it is quasi-concave, i.e. all of its superlevel sets are convex, for $p = -\infty$.

Notice that $p = 1$ corresponds to usual concavity.

It follows from (11) that if $v$ is $p$-concave, then $v$ is $q$-concave for any $q \leq p$. Hence quasi-concavity is the weakest conceivable concavity property.

It is well known that solutions of elliptic Dirichlet problems in convex domains are often power concave. For instance, a famous result by Brascamp and Lieb [8] says that the first positive eigenfunction of the Laplace operator in a convex domain is log-concave; another classical result states that the square root of the solution to the torsion problem in a convex domain is concave, see [20,23,30]. These results about the Laplacian were both extended to the $p$-Laplacian by Sakaguchi in [34]. Power concave solutions have also been studied in [21,22,25], and more recent developments are for instance in [1,26–29,36,44]; furthermore see [14] and [5], which are strongly related to the present paper.
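The simplest instance of the quoted concavity results can be checked directly (this example is ours): in one dimension the torsion function of $(0, L)$ is $u(x) = x(L-x)/2$, and the sketch below verifies numerically that $\sqrt{u}$ is concave, i.e. that $u$ is $1/2$-concave in the sense of Definition 2.2.

```python
import numpy as np

rng = np.random.default_rng(3)

def torsion_1d(x, L):
    """Solution of u'' + 1 = 0 in (0, L) with u(0) = u(L) = 0."""
    return 0.5 * x * (L - x)

L = 3.0
worst = np.inf
for _ in range(100000):
    x, y = rng.uniform(0.0, L, 2)
    mu = rng.uniform(0.0, 1.0)
    z = (1 - mu) * x + mu * y
    # 1/2-concavity: sqrt(u((1-mu)x + mu y)) >= (1-mu) sqrt(u(x)) + mu sqrt(u(y))
    lhs = np.sqrt(torsion_1d(z, L))
    rhs = (1 - mu) * np.sqrt(torsion_1d(x, L)) + mu * np.sqrt(torsion_1d(y, L))
    worst = min(worst, lhs - rhs)
print(worst)      # expected >= 0 up to rounding
```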

2.4. The Borell–Brascamp–Lieb inequality

The Borell–Brascamp–Lieb inequality (see [6,8]) is a generalization of the Prékopa–Leindler inequality. I recall it here in the form taken from [16, Theorem 10.1].

Proposition 2.3. Let $\mu \in (0,1)$, let $f, g, h$ be nonnegative functions in $L^1(\mathbb{R}^n)$, and let $-1/n \leq s \leq \infty$. Assume that

$$h\bigl((1-\mu)x + \mu y\bigr) \geq M_s\bigl(f(x), g(y);\mu\bigr) \tag{12}$$

for all $x \in \operatorname{sprt}(f)$, $y \in \operatorname{sprt}(g)$. Then

$$\int_{\mathbb{R}^n} h\,dx \geq M_q\Bigl(\int_{\mathbb{R}^n} f\,dx,\ \int_{\mathbb{R}^n} g\,dx;\ \mu\Bigr),$$

where

$$q = \begin{cases} 1/n & \text{if } s = +\infty,\\ s/(ns+1) & \text{if } s \in (-1/n,+\infty),\\ -\infty & \text{if } s = -1/n. \end{cases} \tag{13}$$

The Prékopa–Leindler inequality corresponds to the case $s = 0$ and it is a functional version of the Brunn–Minkowski inequality.
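A one-dimensional numerical check of Proposition 2.3 (ours, with arbitrarily chosen bumps) can be carried out by building $h$ as the $M_s$-supremal convolution of $f$ and $g$ on a grid, which in particular satisfies (12), and then comparing the integrals.

```python
import numpy as np

def p_mean(a, b, mu, p):
    """mu-weighted p-mean for nonnegative arguments and finite p != 0."""
    return ((1 - mu) * a ** p + mu * b ** p) ** (1.0 / p)

def bbl_check_1d(mu=0.4, s=0.5, N=400):
    """Build h((1-mu)x + mu y) = sup M_s(f(x), g(y); mu) on a grid and verify
    the integral inequality of Proposition 2.3 with n = 1, q = s/(ns+1)."""
    n = 1
    x = np.linspace(0.0, 1.0, N)
    dx = x[1] - x[0]
    f = np.maximum(0.0, 1.0 - (10.0 * (x - 0.2)) ** 2)    # narrow bump
    g = np.maximum(0.0, 1.0 - (2.0 * (x - 0.5)) ** 2)     # wide bump

    z = (1 - mu) * x[:, None] + mu * x[None, :]           # all combinations
    vals = p_mean(f[:, None], g[None, :], mu, s)          # their s-means
    h = np.zeros(N)
    idx = np.clip(np.round(z / dx).astype(int), 0, N - 1)
    np.maximum.at(h, idx.ravel(), vals.ravel())           # grid sup-convolution

    q = s / (n * s + 1)
    lhs = h.sum() * dx                                    # integral of h
    rhs = p_mean(f.sum() * dx, g.sum() * dx, mu, q)       # M_q of the integrals
    return lhs, rhs                                       # expect lhs >= rhs

print(bbl_check_1d())
```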

3. The $(p,\mu)$-convolution of non-negative functions

From now on, throughout the paper, we consider two open bounded convex sets $\Omega_0, \Omega_1 \subset \mathbb{R}^n$ and a fixed real number $\mu \in (0,1)$, and denote by $\Omega_\mu$ the Minkowski convex combination (with coefficient $\mu$) of $\Omega_0$ and $\Omega_1$, i.e.

$$\Omega_\mu = (1-\mu)\Omega_0 + \mu\Omega_1.$$


Definition 3.1. Let $p \in \mathbb{R}$, $\mu \in (0,1)$, and let $u_0 \in C(\overline{\Omega_0})$ and $u_1 \in C(\overline{\Omega_1})$ be such that $u_i \geq 0$ in $\Omega_i$, $i = 0,1$. The $(p,\mu)$-convolution of $u_0$ and $u_1$ is the function $u_{p,\mu} : \overline{\Omega_\mu} \to \mathbb{R}$ defined as follows:

$$u_{p,\mu}(x) = \sup\bigl\{M_p\bigl(u_0(x_0), u_1(x_1);\mu\bigr) : x = (1-\mu)x_0 + \mu x_1,\ x_i \in \overline{\Omega_i},\ i = 0,1\bigr\}. \tag{14}$$

The above definition can be extended to the case $p = \pm\infty$, but we do not need it here. Let me recall, however, that the case $p = -\infty$ has been useful in [7,11] to prove the Brunn–Minkowski inequality for the $p$-capacity of convex sets.

Let $p > 0$; then, roughly speaking, the graph of $u_{p,\mu}^p$ is obtained as the Minkowski convex combination (with coefficient $\mu$) of the graphs of $u_0^p$ and $u_1^p$; precisely, we have

$$K_\mu(p) = (1-\mu)K_0(p) + \mu K_1(p),$$

where

$$K_\mu(p) = \bigl\{(x, t) \in \mathbb{R}^{n+1} : x \in \overline{\Omega_\mu},\ 0 \leq t \leq u_{p,\mu}(x)^p\bigr\},$$

$$K_i(p) = \bigl\{(x, t) \in \mathbb{R}^{n+1} : x \in \overline{\Omega_i},\ 0 \leq t \leq u_i(x)^p\bigr\}, \quad i = 0,1.$$

In other words, the $(p,\mu)$-convolution of $u_0$ and $u_1$ corresponds to the $(1/p)$-power of the supremal convolution (with coefficient $\mu$) of $u_0^p$ and $u_1^p$. When $p = 0$, the above geometric considerations continue to hold with the logarithm in place of the power $p$ and the exponential in place of the power $1/p$. When $p = 1$, $u_{1,\mu}$ is just the usual supremal convolution of $u_0$ and $u_1$. For more details on infimal/supremal convolutions of convex/concave functions, see [33,38] (and also [12,35]).
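Definition 3.1 is easy to explore numerically; the brute-force sketch below (ours) evaluates $u_{p,\mu}$ on a one-dimensional grid for two functions vanishing at the boundary of their intervals, and checks the boundary behavior (16) and the monotonicity in $p$ given by (15).

```python
import numpy as np

def p_mean(a, b, mu, p):
    """mu-weighted p-mean for nonnegative arguments (finite p > 0 here)."""
    return ((1 - mu) * a ** p + mu * b ** p) ** (1.0 / p)

def p_mu_convolution(x0, u0, x1, u1, mu, p, z):
    """Brute-force evaluation of the (p,mu)-convolution (14) on the grid z:
    u_{p,mu}(z) = sup { M_p(u0(y0), u1(y1); mu) : z = (1-mu) y0 + mu y1 }."""
    comb = (1 - mu) * x0[:, None] + mu * x1[None, :]     # admissible combinations
    vals = p_mean(u0[:, None], u1[None, :], mu, p)       # their p-means
    dz = z[1] - z[0]
    out = np.zeros_like(z)
    for k, zk in enumerate(z):
        mask = np.abs(comb - zk) <= 0.5 * dz             # representations of zk, up to the grid
        if mask.any():
            out[k] = vals[mask].max()
    return out

if __name__ == "__main__":
    mu = 0.4
    x0 = np.linspace(0.0, 1.0, 200)
    u0 = x0 * (1.0 - x0)                                 # vanishes on the boundary of (0,1)
    x1 = np.linspace(0.0, 2.0, 200)
    u1 = np.sin(0.5 * np.pi * x1)                        # vanishes on the boundary of (0,2)
    z = np.linspace(0.0, (1 - mu) * 1.0 + mu * 2.0, 200) # Omega_mu = (0, 1.4)

    u_half = p_mu_convolution(x0, u0, x1, u1, mu, 0.5, z)
    u_one = p_mu_convolution(x0, u0, x1, u1, mu, 1.0, z)

    print(u_half[0], u_half[-1])                         # (16): ~0 at the endpoints of Omega_mu
    print(bool(np.all(u_half <= u_one + 1e-12)))         # (15): u_{1/2,mu} <= u_{1,mu}
```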

From Definition 3.1 and (11), we get

$$u_{p,\mu} \leq u_{q,\mu} \quad \text{for } -\infty \leq p \leq q \leq +\infty. \tag{15}$$

Lemma 3.2. Let $p \in [-\infty,+\infty)$, $\mu \in (0,1)$. For $i = 0,1$ let $u_i \in C(\overline{\Omega_i})$ be such that $u_i = 0$ on $\partial\Omega_i$ and $u_i > 0$ in $\Omega_i$. Then $u_{p,\mu} \in C(\overline{\Omega_\mu})$ and

$$u_{p,\mu} > 0 \ \text{in } \Omega_\mu, \qquad u_{p,\mu} = 0 \ \text{on } \partial\Omega_\mu. \tag{16}$$

Proof. The proof of this lemma is almost straightforward and completely analogous to the proof of Lemma 1 in [5]. We just notice that $u_{p,\mu} > 0$ in $\Omega_\mu$ by the very definition of $u_{p,\mu}$, while $u_{p,\mu} = 0$ on $\partial\Omega_\mu$ by Lemma 2.1. □

Notice that, as $\overline{\Omega_i}$ is compact for $i = 0,1$ and $M_p$, $u_0$ and $u_1$ are continuous, the supremum in (14) is in fact a maximum. Hence for every $\bar{x} \in \overline{\Omega_\mu}$ there exist $x_0 \in \overline{\Omega_0}$ and $x_1 \in \overline{\Omega_1}$ such that

$$\bar{x} = (1-\mu)x_0 + \mu x_1, \qquad u_{p,\mu}(\bar{x}) = M_p\bigl(u_0(x_0), u_1(x_1);\mu\bigr). \tag{17}$$

The next lemma is fundamental to this paper.

Lemma 3.3. Let $p \in [0,1)$, $\mu \in (0,1)$ and $u_i \in C^1(\Omega_i)\cap C(\overline{\Omega_i})$ be such that $u_i = 0$ on $\partial\Omega_i$ and $u_i > 0$ in $\Omega_i$ for $i = 0,1$.

In case $p > 0$ assume furthermore that for $i = 0,1$ it holds

$$\liminf_{y \to x} \frac{\partial u_i(y)}{\partial \nu} > 0 \tag{18}$$

for every $x \in \partial\Omega_i$, where $\nu$ is any inward direction of $\Omega_i$ at $x$.

If $\bar{x}$ lies in the interior of $\Omega_\mu$, then the points $x_0$ and $x_1$ defined by (17) belong to the interior of $\Omega_0$ and $\Omega_1$ respectively, and

$$u_0(x_0)^{p-1}Du_0(x_0) = u_1(x_1)^{p-1}Du_1(x_1). \tag{19}$$

Proof. First we prove that $x_i \in \Omega_i$ for $i = 0,1$.

The case $p = 0$ easily follows from (16) and the definition of $M_0$, since $u_{p,\mu}(\bar{x}) > 0$ while $u_0(x_0)^{1-\mu}u_1(x_1)^{\mu} = 0$ if $x_0 \in \partial\Omega_0$ or $x_1 \in \partial\Omega_1$.


Then let $p > 0$. By contradiction, assume that (up to a relabeling) $x_0 \in \partial\Omega_0$. Then $u_0(x_0) = 0$ and $x_1$ must lie in the interior of $\Omega_1$, otherwise $u_{p,\mu}(\bar{x}) = 0$, contradicting (16). Notice that in this case

$$u_{p,\mu}(\bar{x}) = \mu^{1/p}u_1(x_1).$$

Set $v_0 = u_0^p$, $v_1 = u_1^p$ and

$$a = \bigl|Dv_1(x_1)\bigr| = p\,u_1(x_1)^{p-1}\bigl|Du_1(x_1)\bigr|.$$

By the regularity of $u_1$, we have

$$|Dv_1| < a + 1 \quad \text{in } B(x_1, r_1) \subset \Omega_1 \tag{20}$$

for $r_1 > 0$ small enough.

Now take any direction $\nu$ pointing inwards into $\Omega_0$ at $x_0$; by assumption (18) we get

$$\liminf_{x\to x_0} \frac{\partial v_0(x)}{\partial\nu} = +\infty, \tag{21}$$

whence

$$\frac{\partial v_0}{\partial\nu} > a + 1 \quad \text{in } \Omega_0\cap B(x_0, r_0) \tag{22}$$

for $r_0 > 0$ small enough.

Next we take $\rho < \min\{(1-\mu)r_0,\ \mu r_1\}$ and we consider the points

$$\tilde{x}_0 = x_0 + \frac{\rho}{1-\mu}\,\nu, \qquad \tilde{x}_1 = x_1 - \frac{\rho}{\mu}\,\nu.$$

We have

$$\tilde{x}_0 \in B(x_0, r_0)\cap\Omega_0, \qquad \tilde{x}_1 \in B(x_1, r_1)$$

and

$$\bar{x} = (1-\mu)\tilde{x}_0 + \mu\tilde{x}_1. \tag{23}$$

Then from (20) and (22) we get

$$u_0(\tilde{x}_0)^p = v_0(\tilde{x}_0) > v_0(x_0) + (a+1)\frac{\rho}{1-\mu} = (a+1)\frac{\rho}{1-\mu},$$

$$u_1(\tilde{x}_1)^p = v_1(\tilde{x}_1) \geq v_1(x_1) - (a+1)\frac{\rho}{\mu} = u_1(x_1)^p - (a+1)\frac{\rho}{\mu},$$

whence

$$\bigl[(1-\mu)u_0(\tilde{x}_0)^p + \mu u_1(\tilde{x}_1)^p\bigr]^{1/p} > \Bigl[(1-\mu)(a+1)\frac{\rho}{1-\mu} + \mu u_1(x_1)^p - \mu(a+1)\frac{\rho}{\mu}\Bigr]^{1/p} = u_{p,\mu}(\bar{x}),$$

which contradicts the definition of $u_{p,\mu}$, due to (23).

So far, we have proved that $x_i$ must stay in the interior of $\Omega_i$ for $i = 0,1$. Then by the Lagrange Multipliers Theorem we easily get (19) (in fact, it is easily seen that the latter holds if just one of $x_0$ and $x_1$ lies in the interior of the corresponding $\Omega_i$ and the involved functions are differentiable up to the boundary: indeed, if $x_0 \in \Omega_0$, then it is an interior maximum point for the function

$$f(x) = M_p\Bigl(u_0(x),\ u_1\Bigl(\frac{\bar{x} - (1-\mu)x}{\mu}\Bigr);\ \mu\Bigr)$$

and $\nabla f(x_0) = 0$ gives (19)).

The proof of the lemma is complete. □


3.1. The $(p,\mu)$-convolution of more than two functions

The definition of the $(p,\mu)$-convolution of two functions is easily extended to an arbitrary number of functions.

Let $3 \leq m \in \mathbb{N}$ and set $\Gamma_m^+ = \{(x_1,\dots,x_m)\in\mathbb{R}^m : x_i \geq 0,\ i = 1,\dots,m\}$ and

$$\Gamma_m^1 = \Bigl\{(\mu_1,\dots,\mu_m)\in\Gamma_m^+ : \mu_i > 0 \ \text{for } i = 1,\dots,m \ \text{and} \ \sum_{i=1}^m \mu_i = 1\Bigr\}.$$

Let $p \in [-\infty,+\infty]$, $\mu \in \Gamma_m^1$ and $a = (a_1,\dots,a_m)\in\Gamma_m^+$. If $\prod_{i=1}^m a_i > 0$, the $p$-mean of $a_1,\dots,a_m$ with coefficient $\mu$ is defined as follows:

$$M_p(a_1,\dots,a_m;\mu) = \begin{cases} \max\{a_1,\dots,a_m\} & p = +\infty,\\ \bigl[\sum_{i=1}^m \mu_i a_i^p\bigr]^{1/p} & p \neq -\infty, 0, +\infty,\\ \prod_{i=1}^m a_i^{\mu_i} & p = 0,\\ \min\{a_1,\dots,a_m\} & p = -\infty. \end{cases}$$

If $\prod_{i=1}^m a_i = 0$, we define $M_p(a;\mu)$ as above if $p \geq 0$ and we set $M_p(a;\mu) = 0$ if $p < 0$.

If we now consider $m$ non-negative functions $u_1, u_2,\dots,u_m$ supported in the sets $\overline{\Omega_1}, \overline{\Omega_2},\dots,\overline{\Omega_m}$, we can define

$$u_{p,\mu}(x) = \sup\Bigl\{M_p\bigl(u_1(x_1),\dots,u_m(x_m);\mu\bigr) : x_i \in \overline{\Omega_i},\ i = 1,\dots,m,\ x = \sum_{i=1}^m \mu_i x_i\Bigr\}. \tag{24}$$

Clearly all the properties and lemmas stated and proved before for the case $m = 2$ continue to hold in the case $m \geq 3$, with the obvious modifications. In particular, we explicitly state the following.

Lemma 3.4. Let $p \in [-\infty,+\infty)$ and $\mu \in \Gamma_m^1$. Let $u_i \in C(\overline{\Omega_i})$ be such that $u_i = 0$ on $\partial\Omega_i$ and $u_i > 0$ in $\Omega_i$, for $i = 1,\dots,m$.

Then $u_{p,\mu} \in C(\overline{\Omega_\mu})$ and

$$u_{p,\mu} > 0 \ \text{in } \Omega_\mu, \qquad u_{p,\mu} = 0 \ \text{on } \partial\Omega_\mu. \tag{25}$$

As before, since $\overline{\Omega_i}$ is compact for $i = 1,\dots,m$ and $M_p, u_1,\dots,u_m$ are continuous, the supremum in (24) is in fact a maximum. Hence for every $\bar{x} \in \overline{\Omega_\mu}$ there exist $x_1 \in \overline{\Omega_1},\dots,x_m \in \overline{\Omega_m}$ such that

$$\bar{x} = \sum_{i=1}^m \mu_i x_i, \qquad u_{p,\mu}(\bar{x}) = M_p\bigl(u_1(x_1),\dots,u_m(x_m);\mu\bigr). \tag{26}$$

Lemma 3.5. Let $p \in [0,1)$, $\mu \in \Gamma_m^1$ and $u_i \in C^1(\Omega_i)\cap C(\overline{\Omega_i})$ be such that $u_i = 0$ on $\partial\Omega_i$ and $u_i > 0$ in $\Omega_i$ for $i = 1,\dots,m$.

In case $p > 0$ assume furthermore that (18) holds for $i = 1,\dots,m$.

If $\bar{x}$ lies in the interior of $\Omega_\mu$, then the points $x_1,\dots,x_m$ defined by (26) belong to the interior of $\Omega_1,\dots,\Omega_m$ respectively, and

$$u_1(x_1)^{p-1}Du_1(x_1) = \dots = u_m(x_m)^{p-1}Du_m(x_m). \tag{27}$$

4. The main theorem

As before and throughout, $\Omega_0$ and $\Omega_1$ are open bounded convex sets in $\mathbb{R}^n$, $\mu \in (0,1)$ and $\Omega_\mu = (1-\mu)\Omega_0 + \mu\Omega_1$. For $i = 0,1,\mu$, we denote by $u_i$ a solution of the following problem

$$(P_i)\qquad \begin{cases} F_i\bigl(x, u_i, Du_i, D^2u_i\bigr) = 0 & \text{in } \Omega_i,\\ u_i = 0 & \text{on } \partial\Omega_i,\\ u_i > 0 & \text{in } \Omega_i, \end{cases}$$

where $F_i : \Omega_i\times[0,+\infty)\times\mathbb{R}^n\times\mathcal{S}^n \to \mathbb{R}$ is a proper elliptic operator.


If not otherwise specified, we will consider classical solutions for $i = 0,1$ (that is: $u_0 \in C^2(\Omega_0)\cap C(\overline{\Omega_0})$ and $u_1 \in C^2(\Omega_1)\cap C(\overline{\Omega_1})$, and they satisfy pointwise everywhere all the equations in $(P_0)$ and $(P_1)$), while $u_\mu \in C(\overline{\Omega_\mu})$ may be a viscosity solution of the corresponding problem $(P_\mu)$.

For $i = 0,1,\mu$ and for every fixed $(\theta, p) \in \mathbb{R}^n\times[0,\infty)$ we define $G^{(\theta)}_{i,p} : \Omega_i\times(0,+\infty)\times\mathcal{S}^n \to \mathbb{R}$ as

$$G^{(\theta)}_{i,p}(x, t, A) = F_i\bigl(x,\ t^{1/p},\ t^{\frac{1}{p}-1}\theta,\ t^{\frac{1}{p}-3}A\bigr) \quad \text{for } p > 0, \tag{28}$$

and

$$G^{(\theta)}_{i,0}(x, t, A) = F_i\bigl(x,\ e^{t},\ e^{t}\theta,\ e^{t}A\bigr). \tag{29}$$

Assumption (A$_{\mu,p}$). Let $\mu \in (0,1)$ and $p \geq 0$. We say that $F_0, F_1, F_\mu$ satisfy the assumption $(A_{\mu,p})$ if, for every fixed $\theta \in \mathbb{R}^n$, the following holds:

$$G^{(\theta)}_{\mu,p}\bigl((1-\mu)x_0 + \mu x_1,\ (1-\mu)t_0 + \mu t_1,\ (1-\mu)A_0 + \mu A_1\bigr) \geq \min\bigl\{G^{(\theta)}_{0,p}(x_0, t_0, A_0),\ G^{(\theta)}_{1,p}(x_1, t_1, A_1)\bigr\}$$

for every $x_0 \in \Omega_0$, $x_1 \in \Omega_1$, $t_0, t_1 > 0$ and $A_0, A_1 \in \mathcal{S}^n$.

Now we are ready to state the main result of the paper.

Theorem 4.1. Let $\mu \in (0,1)$ and let $\Omega_i$ and $u_i$, $i = 0,1,\mu$, be as described above. Assume that the operator $F_\mu$ satisfies the comparison principle (CP) and that $F_0, F_1, F_\mu$ satisfy the assumption $(A_{\mu,p})$ for some $p \in [0,1)$. If $p > 0$, assume furthermore that (18) holds true for $i = 0,1$.

Then

$$u_\mu\bigl((1-\mu)x_0 + \mu x_1\bigr) \geq M_p\bigl(u_0(x_0), u_1(x_1);\mu\bigr) \tag{30}$$

for every $x_0 \in \Omega_0$, $x_1 \in \Omega_1$.

We remark that assumption (18) is not needed for $p = 0$, while for $p > 0$ it is in general provided by a suitable version of Hopf's Lemma. Notice also that, for $p < 1$, (18) implies (21). In fact, we could also apply our argument to the case $p \geq 1$; in such a case, however, we would need to assume (21) directly instead of (18).
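The content of (30) can be sanity-checked in the simplest situation where everything is explicit (an illustration of ours, not an application carried out in the paper): the one-dimensional torsion problem $u'' + 1 = 0$ on intervals, with $F_0 = F_1 = F_\mu$ and $p = 1/2$. The same example illustrates Corollary 4.2 below with $r = 1$, where $q = 1/3$ and (31) is in fact an equality.

```python
import numpy as np

rng = np.random.default_rng(4)

def torsion(x, L):
    """Explicit solution of u'' + 1 = 0 in (0, L), u(0) = u(L) = 0."""
    return 0.5 * x * (L - x)

def check_pointwise(L0=1.0, L1=3.0, mu=0.4, p=0.5, trials=200000):
    """Random check of (30) for Omega_0 = (0, L0), Omega_1 = (0, L1) and
    Omega_mu = (0, (1-mu) L0 + mu L1)."""
    Lmu = (1 - mu) * L0 + mu * L1
    worst = np.inf
    for _ in range(trials):
        x0, x1 = rng.uniform(0.0, L0), rng.uniform(0.0, L1)
        lhs = torsion((1 - mu) * x0 + mu * x1, Lmu)
        rhs = ((1 - mu) * torsion(x0, L0) ** p + mu * torsion(x1, L1) ** p) ** (1.0 / p)
        worst = min(worst, lhs - rhs)
    return worst            # expected >= 0; equality when x0/L0 = x1/L1

print(check_pointwise())

# Corollary 4.2 with r = 1, n = 1, p = 1/2 gives q = 1/3; since the torsion
# function of (0, L) has L^1 norm L^3/12, both sides of (31) equal Lmu^3/12.
L0, L1, mu = 1.0, 3.0, 0.4
Lmu = (1 - mu) * L0 + mu * L1
lhs = Lmu ** 3 / 12.0
rhs = ((1 - mu) * (L0 ** 3 / 12.0) ** (1.0 / 3.0) + mu * (L1 ** 3 / 12.0) ** (1.0 / 3.0)) ** 3
print(lhs, rhs)
```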

Coupling (30) with the Borell–Brascamp–Lieb inequality (i.e. Proposition 2.3) leads to a comparison of the $L^r$ norms of $u_\mu$ with suitable combinations of the $L^r$ norms of $u_0$ and $u_1$. To be precise, we have the following corollary.

Corollary 4.2. With the same assumptions and notation of Theorem 4.1, for every $r > 0$ we have

$$\|u_\mu\|_{L^r(\Omega_\mu)} \geq M_q\bigl(\|u_0\|_{L^r(\Omega_0)},\ \|u_1\|_{L^r(\Omega_1)};\ \mu\bigr), \tag{31}$$

where

$$q = \begin{cases} \dfrac{pr}{np+r} & \text{for } r \in (0,+\infty),\\[4pt] p & \text{for } r = +\infty. \end{cases}$$

Proof. The inequality for the $L^\infty$ norms is a straightforward consequence of (30), obtained by taking $x_0$ and $x_1$ as points which realize the maximum of $u_0$ and $u_1$, respectively (in fact, in this case equality holds in (31)). The proof of the inequality for a generic $r \in (0,+\infty)$ follows from Proposition 2.3, applied to the functions $h = u_\mu^r$, $f = u_0^r$ and $g = u_1^r$ with $s = p/r$, assumption (12) being satisfied thanks to (30). □

Notice that in some special cases, involving particular operators, results similar to those we could obtain by applying Theorem 4.1 and Corollary 4.2 to the situation at hand have already been obtained (even though not explicitly stated) and used to prove Brunn–Minkowski type inequalities for variational functionals; see for instance [10,12,28,36,43]. Indeed, Theorem 4.1 could be regarded as a general Brunn–Minkowski inequality for solutions of PDEs (and then applied to obtain Brunn–Minkowski type inequalities for possibly related functionals).


5. Proof of Theorem 4.1

The proof of Theorem 4.1 essentially consists of the following lemma.

Lemma 5.1. With the same assumptions and notation of Theorem 4.1, it follows that $u_{p,\mu}$ is a viscosity subsolution of problem $(P_\mu)$.

Proof. The proof follows somehow the steps of [5,14,19], and the strategy is the following: for every $\bar{x} \in \Omega_\mu$ we construct a function $\varphi_{p,\mu} \in C^2$ which touches $u_{p,\mu}$ by below at $\bar{x}$ and such that

$$F_\mu\bigl(\bar{x}, \varphi_{p,\mu}(\bar{x}), D\varphi_{p,\mu}(\bar{x}), D^2\varphi_{p,\mu}(\bar{x})\bigr) \geq 0. \tag{32}$$

Clearly this implies that $u_{p,\mu}$ is a viscosity subsolution of $(P_\mu)$: indeed, every test function $\phi$ touching $u_{p,\mu}$ at $\bar{x}$ by above must also touch $\varphi_{p,\mu}$ at $\bar{x}$ by above, hence

$$\phi(\bar{x}) = \varphi_{p,\mu}(\bar{x}), \qquad D\phi(\bar{x}) = D\varphi_{p,\mu}(\bar{x}) \qquad \text{and} \qquad D^2\phi(\bar{x}) \geq D^2\varphi_{p,\mu}(\bar{x}),$$

and (8) follows from the ellipticity of $F_\mu$.

Then consider $\bar{x} \in \Omega_\mu$. By Lemma 3.3, there exist $x_0 \in \Omega_0$ and $x_1 \in \Omega_1$ satisfying (17) and such that (19) holds.

First we treat the case $p > 0$ and, for a small enough $r > 0$, we introduce the function $\varphi_{p,\mu} : B(\bar{x}, r) \to \mathbb{R}$ defined as follows:

$$\varphi_{p,\mu}(x) = \Bigl[(1-\mu)\,u_0\bigl(x_0 + a_0(x - \bar{x})\bigr)^p + \mu\,u_1\bigl(x_1 + a_1(x - \bar{x})\bigr)^p\Bigr]^{1/p}, \tag{33}$$

where

$$a_i = \frac{u_i(x_i)^p}{u_{p,\mu}(\bar{x})^p}, \qquad i = 0,1. \tag{34}$$

The following facts trivially hold:

(A) $(1-\mu)a_0 + \mu a_1 = 1$, by (17);

(B) $x = (1-\mu)\bigl(x_0 + a_0(x - \bar{x})\bigr) + \mu\bigl(x_1 + a_1(x - \bar{x})\bigr)$ for every $x \in B(\bar{x}, r)$, thanks to (A) and the first equation in (17);

(C) $\varphi_{p,\mu}(\bar{x}) = u_{p,\mu}(\bar{x})$;

(D) $\varphi_{p,\mu}(x) \leq u_{p,\mu}(x)$ in $B(\bar{x}, r)$ (this follows from (B) and the definition of $u_{p,\mu}$).

In particular, (C) and (D) say that $\varphi_{p,\mu}$ touches $u_{p,\mu}$ from below at $\bar{x}$. A straightforward calculation yields

$$D\varphi_{p,\mu}(\bar{x}) = \varphi_{p,\mu}(\bar{x})^{1-p}\bigl[(1-\mu)\,u_0(x_0)^{p-1}a_0\,Du_0(x_0) + \mu\,u_1(x_1)^{p-1}a_1\,Du_1(x_1)\bigr].$$

Then, by (19), (34) and the definition of $\varphi_{p,\mu}$, we get

$$D\varphi_{p,\mu}(\bar{x}) = \varphi_{p,\mu}(\bar{x})^{1-p}\,u_i(x_i)^{p-1}Du_i(x_i) \quad \text{for } i = 0,1. \tag{35}$$

Thanks to another straightforward calculation and using (19), (34), (35) and the definition of $\varphi_{p,\mu}$, we also obtain

$$D^2\varphi_{p,\mu}(\bar{x}) = (1-\mu)\frac{u_0(x_0)^{3p-1}}{\varphi_{p,\mu}(\bar{x})^{3p-1}}D^2u_0(x_0) + \mu\frac{u_1(x_1)^{3p-1}}{\varphi_{p,\mu}(\bar{x})^{3p-1}}D^2u_1(x_1) + (1-p)\,\varphi_{p,\mu}(\bar{x})^{-1}\,A\,D\varphi_{p,\mu}(\bar{x})\otimes D\varphi_{p,\mu}(\bar{x}),$$

where

$$A = 1 - \varphi_{p,\mu}(\bar{x})^{-p}\bigl[(1-\mu)u_0(x_0)^p + \mu u_1(x_1)^p\bigr].$$

Now notice that (C) and (17) give $A = 0$.


Then

$$D^2\varphi_{p,\mu}(\bar{x}) = (1-\mu)\frac{u_0(x_0)^{3p-1}}{\varphi_{p,\mu}(\bar{x})^{3p-1}}D^2u_0(x_0) + \mu\frac{u_1(x_1)^{3p-1}}{\varphi_{p,\mu}(\bar{x})^{3p-1}}D^2u_1(x_1). \tag{36}$$

Since $u_0$ and $u_1$ are classical solutions of $(P_0)$ and $(P_1)$, it follows that for $i = 0,1$

$$G^{(\theta)}_{i,p}\bigl(x_i,\ u_i(x_i)^p,\ u_i(x_i)^{3p-1}D^2u_i(x_i)\bigr) = F_i\bigl(x_i, u_i(x_i), Du_i(x_i), D^2u_i(x_i)\bigr) = 0,$$

where

$$\theta = \varphi_{p,\mu}(\bar{x})^{p-1}D\varphi_{p,\mu}(\bar{x}).$$

Then, by setting $\mu_0 = (1-\mu)$ and $\mu_1 = \mu$, assumption $(A_{\mu,p})$ entails

$$G^{(\theta)}_{\mu,p}\Bigl(\sum_{i=0}^{1}\mu_i x_i,\ \sum_{i=0}^{1}\mu_i u_i(x_i)^p,\ \sum_{i=0}^{1}\mu_i u_i(x_i)^{3p-1}D^2u_i(x_i)\Bigr) \geq 0,$$

and thanks to (C) and (36) this precisely coincides with

$$G^{(\theta)}_{\mu,p}\bigl(\bar{x},\ \varphi_{p,\mu}(\bar{x})^p,\ \varphi_{p,\mu}(\bar{x})^{3p-1}D^2\varphi_{p,\mu}(\bar{x})\bigr) \geq 0.$$

The latter implies (32) by the definition of $G^{(\theta)}_{\mu,p}$, and this concludes the proof for $p > 0$.

The case $p = 0$ is similar, the only difference being that we set

$$\varphi_{0,\mu}(x) := \exp\bigl[(1-\mu)\log u_0(x_0 + x - \bar{x}) + \mu\log u_1(x_1 + x - \bar{x})\bigr],$$

which means $a_i = 1$ for $i = 0,1$. □

The proof of Theorem 4.1 is now very easy.

Proof of Theorem 4.1. Under the assumptions of the theorem, we can apply the previous lemma to obtain that $u_{p,\mu}$ is a viscosity subsolution of $(P_\mu)$. Then by the Comparison Principle we get the claim. □

5.1. A generalization

Looking at the proof of Lemma 5.1, it is easily understood that assumption $(A_{\mu,p})$ can in fact be substituted by a slightly weaker one: precisely, what really matters is that the inequality in $(A_{\mu,p})$ holds only for those $(x_i, t_i, A_i)$ such that $G^{(\theta)}_{i,p}(x_i, t_i, A_i) = 0$, $i = 0,1$.

Moreover, it is clear that, when considering the combination of more than two Dirichlet problems, a generalized version of Theorem 4.1 continues to hold.

Exactly, let $m \in \mathbb{N}$, $m \geq 2$, and $\mu = (\mu_1, \mu_2,\dots,\mu_m) \in \Gamma_m^1$; let $\Omega_i$, $F_i$ and $u_i$ be a convex set, a proper elliptic operator and the solution of problem $(P_i)$, for $i = 1,\dots,m$ and $i = \mu$, where

$$\Omega_\mu = \sum_{i=1}^m \mu_i\Omega_i.$$

Now define $G^{(\theta)}_{i,p}$ as in (28) and (29) and set

$$Z^{(\theta)}_{i,p} = \bigl\{(x, t, A) : G^{(\theta)}_{i,p}(x, t, A) = 0\bigr\} \quad \text{for } i = 1,\dots,m.$$

Then we say that the operators $F_\mu, F_1,\dots,F_m$ satisfy the weak assumption $(WA_{\mu,p})$ if

$$(WA_{\mu,p}) \qquad S_{\mu,p}(\theta) = \bigl\{(x, t, A) : G^{(\theta)}_{\mu,p}(x, t, A) \geq 0\bigr\} \supseteq \sum_{i=1}^m \mu_i Z^{(\theta)}_{i,p} \quad \text{for every } \theta \in \mathbb{R}^n.$$
