Local solvability of the $k$-Hessian equations


HAL Id: hal-01093001

https://hal.archives-ouvertes.fr/hal-01093001v2

Preprint submitted on 14 Dec 2014

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Local solvability of the k-Hessian equations

G Tian, Qi Wang, C.-J Xu

To cite this version:

G Tian, Qi Wang, C.-J Xu. Local solvability of the k-Hessian equations. 2014. ⟨hal-01093001v2⟩


LOCAL SOLVABILITY OF THE k-HESSIAN EQUATIONS

G. TIAN, Q. WANG AND C.-J. XU

Abstract. In this work, we study the existence of local solutions in $\mathbb{R}^n$ to the $k$-Hessian equation, for which the nonhomogeneous term $f$ is permitted to change sign or to be nonnegative; if $f$ is $C^\infty$, so is the local solution. We also give a classification of the second order polynomial solutions of the $k$-Hessian equation; it is the basis on which we construct the local solutions and obtain the uniform ellipticity of the linearized operators at such constructed local solutions.

1. Introduction

In this paper, we focus on the existence of local solutions for the following $k$-Hessian equation,

(1.1) $S_k[u] = f(y, u, Du),$

on an open domain $\Omega$ of $\mathbb{R}^n$, where $2 \le k \le n$. For a smooth function $u \in C^2$, the $k$-Hessian operator $S_k$ is defined by

(1.2) $S_k[u] = S_k(D^2u) = \sigma_k(\lambda(D^2u)) = \sum_{1 \le i_1 < i_2 < \cdots < i_k \le n} \lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_k},$

where $\lambda(D^2u) = (\lambda_1, \dots, \lambda_n)$ are the eigenvalues of the Hessian matrix $D^2u$, $\sigma_k(\lambda)$ is the $k$-th elementary symmetric polynomial, and $S_k[u]$ is the sum of all principal minors of order $k$ of the Hessian matrix $D^2u$. We say that a smooth function $u$ is $k$-convex if the eigenvalues of the Hessian matrix $D^2u$ lie in the so-called Gårding cone $\Gamma_k$, defined by

$\Gamma_k(n) = \{\lambda = (\lambda_1, \lambda_2, \dots, \lambda_n) \in \mathbb{R}^n : \sigma_j(\lambda) > 0,\ 1 \le j \le k\}.$
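To make the definitions above concrete, the following minimal Python sketch computes $S_k[u]$ from the eigenvalues of a hypothetical, purely illustrative Hessian matrix and tests membership in $\Gamma_k(n)$; the helper names are ours, not the paper's.

```python
from itertools import combinations
import numpy as np

def sigma(k, lam):
    """k-th elementary symmetric polynomial sigma_k(lambda)."""
    return sum(np.prod(c) for c in combinations(lam, k))

def S_k(k, hessian):
    """S_k[u] = sigma_k(eigenvalues of D^2 u), as in (1.2)."""
    return sigma(k, np.linalg.eigvalsh(hessian))

def in_gamma_k(k, lam):
    """lambda lies in Gamma_k(n) iff sigma_j(lambda) > 0 for 1 <= j <= k."""
    return all(sigma(j, lam) > 0 for j in range(1, k + 1))

D2u = np.diag([3.0, 2.0, 1.0, -1.5])       # hypothetical Hessian of some u
lam = np.linalg.eigvalsh(D2u)
for k in (1, 2, 3, 4):
    print(k, round(S_k(k, D2u), 4), in_gamma_k(k, lam))
# sigma_1, sigma_2 > 0 but sigma_3 < 0: this u is 2-convex but not 3-convex.
```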

For the local solvability, Hong and Zuily [7] obtained the existence of $C^\infty$ local solutions of the Monge-Ampère equation

(1.3) $\det D^2u = f(y, u, Du), \quad y \in \Omega \subset \mathbb{R}^n,$

when $f \in C^\infty$ is nonnegative; this is the case $k = n$ of (1.1). The geometric background of the Monge-Ampère equation can be found in [6, 4, 12]. In this work, we only consider the Hessian equation for $2 \le k < n$, since the case $k = 1$ is classical. We will follow the method of [7] (see also [11, 14]) to construct the local solution as a perturbation of a polynomial-type solution of $S_k[u] = c$ for some real constant $c$. Since the right-hand side of (1.1) may vanish, the solution lies in the closure of $\Gamma_k$. Thus we need to study the closure of $\Gamma_k$; its boundary is

$\partial\Gamma_k(n) = \{\lambda \in \mathbb{R}^n : \sigma_j(\lambda) \ge 0,\ \sigma_k(\lambda) = 0,\ 1 \le j \le k-1\}.$

Date: December 14, 2014.

2000 Mathematics Subject Classification. 35J60; 35J70.

Key words and phrases. k-Hessian equations, local solution, uniform ellipticity, Nash-Moser iteration.


From Maclaurin's inequalities

$\left[ \frac{\sigma_k(\lambda)}{\binom{n}{k}} \right]^{1/k} \le \left[ \frac{\sigma_l(\lambda)}{\binom{n}{l}} \right]^{1/l}, \qquad \lambda \in \Gamma_k, \quad k \ge l \ge 1,$

we see that $\sigma_{k+1}(\lambda) > 0$ cannot occur for $\lambda \in \partial\Gamma_k(n)$; therefore we can split $\partial\Gamma_k$ into two parts,

$\partial\Gamma_k(n) = P_1 \cup P_2,$

with

$P_1 = \{\lambda \in \bar\Gamma_k(n) : \sigma_j(\lambda) \ge 0,\ \sigma_k(\lambda) = \sigma_{k+1}(\lambda) = 0,\ 1 \le j \le k-1\},$
$P_2 = \{\lambda \in \bar\Gamma_k(n) : \sigma_j(\lambda) \ge 0,\ \sigma_k(\lambda) = 0,\ \sigma_{k+1}(\lambda) < 0,\ 1 \le j \le k-1\}.$

Besides,

(1.4) $P_2 = \emptyset \quad \text{if } k = n.$

In Section 2, we will give more precise descriptions of $P_1$ and $P_2$, and show that $P_2 \neq \emptyset$ if $k < n$.
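As an illustration (not part of the argument), the following Python sketch spot-checks Maclaurin's inequalities on a randomly generated $\lambda \in \Gamma_k(n)$; the sampling scheme and tolerances are arbitrary choices.

```python
from itertools import combinations
from math import comb
import numpy as np

def sigma(j, lam):
    """j-th elementary symmetric polynomial of the entries of lam."""
    return sum(np.prod(c) for c in combinations(lam, j))

rng = np.random.default_rng(0)
n, k = 6, 4
lam = None
for _ in range(10_000):                     # draw until we land in Gamma_k(n)
    cand = rng.normal(1.0, 0.7, size=n)
    if all(sigma(j, cand) > 0 for j in range(1, k + 1)):
        lam = cand
        break

# Maclaurin means [sigma_j / C(n,j)]^(1/j) should be non-increasing in j
means = [(sigma(j, lam) / comb(n, j)) ** (1.0 / j) for j in range(1, k + 1)]
print(np.round(means, 4))
assert all(means[j] >= means[j + 1] - 1e-12 for j in range(k - 1))
```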

In this paper, we always discuss the $k$-Hessian equation within the framework of ellipticity, following the ideas of [8] and [9] to obtain the existence of solutions. Let us explain the ellipticity: in matrix language, the ellipticity set of the $k$-Hessian operator, $1 \le k \le n$, is

$E_k = \left\{ S \in M_s(n) : S_k(S + t\,\xi\otimes\xi) > S_k(S) > 0,\ \xi \in \mathbb{S}^{n-1},\ \forall t \in \mathbb{R}_+ \right\},$

where $M_s(n)$ is the space of $n \times n$ real symmetric matrices. The Gårding cone is then

$\Gamma_k = \left\{ S \in M_s(n) : S_k(S + tI) > S_k(S) > 0,\ \forall t \in \mathbb{R}_+ \right\}.$

It can be shown that $E_k = \Gamma_k$ only for $k = 1, n$, and the example in [9] shows that $\Gamma_k \subset E_k$ with $\mathrm{meas}(E_k \setminus \Gamma_k) > 0$. Ivochkina, Prokofeva and Yakunina [9] pointed out that the ellipticity of (1.1) is independent of the sign of $f$ if $k < n$. But for the Monge-Ampère equation (1.3), the type of the equation is determined by the sign of $f$: it is elliptic if $f > 0$, hyperbolic if $f < 0$, degenerate if $f$ vanishes at some points, and of mixed type if $f$ changes sign [5].

There are many results on the Dirichlet problem for (1.1) under the condition $f > 0$ (see [15] and the references therein); there are also results on $C^{1,1}$ weak solutions of the Dirichlet problem for (1.1) under the degenerate condition $f \ge 0$ (see [16] and the references therein). But, as for the Monge-Ampère equation, the existence of smooth solutions to the Dirichlet problem for (1.1) is a completely open problem if $f$ is not strictly positive.

In this work, for the local solution of the $k$-Hessian equation (1.1), we prove the following results.

Theorem 1.1. Let $f = f(y, u, p)$ be defined and continuous near a point $Z_0 = (y_0, u_0, p_0) \in \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n$, with $0 < \alpha < 1$ and $2 \le k < n$. Assume that $f$ is $C^\alpha$ with respect to $y$ and $C^{2,1}$ with respect to $(u, p)$. Then:

(1) If $f(Z_0) = 0$, then equation (1.1) admits a $(k-1)$-convex local solution $u \in C^{2,\alpha}$ near $y_0$ which is not $(k+1)$-convex.

(2) If $f \ge 0$ near $Z_0$, then equation (1.1) admits a $k$-convex local solution $u \in C^{2,\alpha}$ near $y_0$ which is not $(k+1)$-convex.

(3) If $f(Z_0) < 0$, then equation (1.1) admits a $(k-1)$-convex local solution $u \in C^{2,\alpha}$ near $y_0$ which cannot be $k$-convex.

Moreover, in all the cases above, the linearized operator of (1.1) at $u$ is uniformly elliptic, and if $f \in C^\infty$ near $Z_0$, then the local solution above is $C^\infty$ near $y_0$.


Remark 1.2. 1) Without loss of generality, by the translation $y \to y - y_0$ and by replacing $u$ with $u - u(0) - y \cdot Du(0)$, we can assume $Z_0 = (0, 0, 0)$ in Theorem 1.1; the local solution in Theorem 1.1 then has the form

(1.5) $u(y) = \frac{1}{2}\sum_{i=1}^{n} \tau_i y_i^2 + \varepsilon\,\bar\varepsilon^{\,4}\, w(\bar\varepsilon^{\,-2} y),$

with arbitrarily fixed $(\tau_1, \tau_2, \dots, \tau_n) \in P_2$ in cases (1) and (2); in case (3), we take some special $(\tau_1, \tau_2, \dots, \tau_n) \in \Gamma_{k-1}$. In (1.5), we always take $\varepsilon > 0$ and

(1.6) $\bar\varepsilon = \begin{cases} \varepsilon^{\alpha}, & 0 < \alpha \le \frac{1}{2}, \\ \varepsilon, & \frac{1}{2} < \alpha < 1, \end{cases}$

so that (1.5) has the same form as the solutions in [7, 11, 14], where $f$ has good smoothness.

2) Notice that, in case (1) of Theorem 1.1, $f$ is permitted to change sign near $Z_0$.

3) If $f = f(y, u, p)$ is independent of $u$ and $p$, then the assumption on $f$ reduces to $f = f(y) \in C^\alpha$, which is a necessary requirement on $f$ for the classical Schauder theory. The condition that $f$ is $C^{2,1}$ with respect to $u$ and $p$ is a technical one, needed to handle the quadratic error in the Nash-Moser iteration (see (3.14)).

4) In Theorem 1.1, we only consider the $k$-Hessian equation with $2 \le k < n$, since the Monge-Ampère equation (1.3) is treated in [5, 7].

This article consists of three sections besides the introduction. In Section 2, we give a classification of the polynomial-type solutions of $S_k[u] = c$ for a real constant $c$. Such a classification ensures the ellipticity of the linearized operators at each polynomial, and the results of this section also give a good understanding of the structure of solutions to the $k$-Hessian equation. In Section 3, Theorem 1.1 is proved by Nash-Moser iteration. Section 4 is an appendix in which three equivalent definitions of the Gårding cone are given and proved.

2. A classification of polynomial solutions

For $\lambda \in \mathbb{R}^n$, set $\psi(y) = \frac{1}{2}\sum_{i=1}^{n}\lambda_i y_i^2$; then

(2.1) $S_k[\psi] = \sigma_k(\lambda) = c,$

where $c$ is a real constant. The linearized operator of $S_k[\cdot]$ at $\psi$ is

(2.2) $L_\psi = \sum_{i=1}^{n} \sigma_{k-1;i}(\lambda)\,\partial_i^2,$

where $\sigma_{k-1;i}(\lambda)$, and more generally $\sigma_{l;i_1,i_2,\dots,i_s}(\lambda)$, is defined in (4.7). To give a classification of polynomial solutions of equation (2.1), we recall a result from Section 2 of [15].

Proposition 2.1. (See [15]) Assume that $\lambda \in \Gamma_k(n)$ is in descending order.

(i) Then we have
$\lambda_1 \ge \cdots \ge \lambda_k \ge \cdots \ge \lambda_p \ge 0 \ge \lambda_{p+1} \ge \cdots \ge \lambda_n$
with $p \ge k$.

(ii) We have
(2.3) $0 \le \sigma_{k-1;1}(\lambda) \le \sigma_{k-1;2}(\lambda) \le \cdots \le \sigma_{k-1;n}(\lambda).$

(iii) For any $\{i_1, i_2, \dots, i_s\} \subset \{1, 2, \dots, n\}$ with $l + s \le k$, we have
(2.4) $\sigma_{l;i_1,i_2,\dots,i_s}(\lambda) \ge 0.$
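The following Python sketch numerically spot-checks properties (ii) and (iii) of Proposition 2.1 on a random sample; here `sigma_rem(l, lam, idx)` plays the role of $\sigma_{l;i_1,\dots,i_s}(\lambda)$, i.e. $\sigma_l$ of $\lambda$ with the indicated entries removed, which is how (4.7) is used throughout this section. The sample sizes are illustrative.

```python
from itertools import combinations
import numpy as np

def sigma(l, lam):
    return sum(np.prod(c) for c in combinations(lam, l)) if l > 0 else 1.0

def sigma_rem(l, lam, idx):
    """sigma_{l; i_1,...,i_s}(lambda): sigma_l of lambda with entries idx removed."""
    rest = [x for i, x in enumerate(lam) if i not in set(idx)]
    return sigma(l, rest)

rng = np.random.default_rng(1)
n, k = 6, 3
lam = None
for _ in range(10_000):
    cand = np.sort(rng.normal(0.8, 1.0, size=n))[::-1]     # descending order
    if all(sigma(j, cand) > 0 for j in range(1, k + 1)):   # cand in Gamma_k(n)
        lam = cand
        break

# (ii): 0 <= sigma_{k-1;1} <= sigma_{k-1;2} <= ... <= sigma_{k-1;n}
vals = [sigma_rem(k - 1, lam, (i,)) for i in range(n)]
assert vals[0] >= -1e-12
assert all(vals[i] <= vals[i + 1] + 1e-12 for i in range(n - 1))

# (iii): sigma_{l; i_1,...,i_s}(lambda) >= 0 whenever l + s <= k
for s in range(1, k):
    for idx in combinations(range(n), s):
        for l in range(0, k - s + 1):
            assert sigma_rem(l, lam, idx) >= -1e-12
print("Proposition 2.1 (ii)-(iii) hold on the sample", np.round(lam, 3))
```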


Using (ii) of Proposition 2.1, for any $\lambda \in \Gamma_k(n)$ the linearized operator $L_\psi$ defined in (2.2) could be degenerate elliptic. We now study the non-strictly $k$-convex Gårding cone $\bar\Gamma_k(n)$ and exhibit some special uniformly elliptic cases.

Theorem 2.2. Suppose that $\lambda \in \partial\Gamma_k(n) = P_1 \cup P_2$, $2 \le k \le n-1$. Then either

(I) if $\sigma_k(\lambda) = 0$ and $\sigma_{k+1}(\lambda) < 0$, then
$\sigma_{k-1;i}(\lambda) > 0, \quad i = 1, 2, \dots, n;$

or

(II) if $\sigma_k(\lambda) = 0 = \sigma_{k+1}(\lambda)$, then
$\sigma_j(\lambda) = 0, \quad j = k+2, \dots, n,$
which means $\lambda \in \bar\Gamma_n(n)$.

In order to prove this theorem, we need several lemmas.

Lemma 2.3. Let $\lambda = (\lambda_1, \dots, \lambda_n) \in \mathbb{R}^n$ be in descending order. For $0 \le s < n-k-1$, denote $\lambda^{(s)} = (\lambda_{s+1}, \dots, \lambda_n)$. Suppose that $\lambda^{(s)} \in \bar\Gamma_k(n-s)$ and

$\sigma_{k-1;s+1}(\lambda^{(s)}) = 0, \quad \sigma_k(\lambda^{(s)}) = 0, \quad \sigma_{k+1}(\lambda^{(s)}) < 0.$

Then $\lambda^{(s+1)} = (\lambda_{s+2}, \dots, \lambda_n) \in \bar\Gamma_k(n-s-1)$ and

$\sigma_{k-1;s+2}(\lambda^{(s+1)}) = 0, \quad \sigma_k(\lambda^{(s+1)}) = 0, \quad \sigma_{k+1}(\lambda^{(s+1)}) < 0.$

Proof. It suffices to prove the lemma for $s = 0$, since we can then complete the proof by induction on the length of $\lambda^{(s)}$. Here we say that the length of $\lambda^{(n-j)}$ is $j$ if $\lambda^{(n-j)} = (\lambda_{n-j+1}, \dots, \lambda_n)$.

Thus, we suppose that

(2.5) $\sigma_{k-1;1}(\lambda) = 0, \quad \sigma_k(\lambda) = 0, \quad \sigma_{k+1}(\lambda) < 0.$

Using (4.8),

$\sigma_k(\lambda) = \lambda_1\sigma_{k-1;1}(\lambda) + \sigma_{k;1}(\lambda)$

and

$\sigma_{k+1}(\lambda) = \lambda_1\sigma_{k;1}(\lambda) + \sigma_{k+1;1}(\lambda),$

we get

(2.6) $\sigma_{k-1;1}(\lambda_1, \dots, \lambda_n) = \sigma_{k-1}(\lambda_2, \dots, \lambda_n) = 0,$
$\sigma_{k;1}(\lambda_1, \dots, \lambda_n) = \sigma_k(\lambda_2, \dots, \lambda_n) = 0,$
$\sigma_{k+1;1}(\lambda_1, \dots, \lambda_n) = \sigma_{k+1}(\lambda_2, \dots, \lambda_n) < 0.$

By (2.4) and (2.6), we have, for $\lambda \in \bar\Gamma_k(n)$ satisfying (2.5),

$\sigma_{j;1}(\lambda_1, \dots, \lambda_n) = \sigma_j(\lambda_2, \dots, \lambda_n) \ge 0, \quad \forall j \le k,$

which implies

(2.7) $\lambda^{(1)} := (\lambda_2, \lambda_3, \dots, \lambda_n) \in \bar\Gamma_k(n-1).$

Using Proposition 2.1 for $\lambda^{(1)} \in \bar\Gamma_k(n-1)$, we have

(2.8) $\lambda_2 \ge \cdots \ge \lambda_{k+1} \ge 0, \quad \sigma_{k-1;2}(\lambda^{(1)}) \ge 0, \quad \sigma_{k-2;2}(\lambda^{(1)}) \ge 0.$


Using the first equation in (2.6) and the first and third inequalities in (2.8), we have

(2.9) $0 = \sigma_{k-1;1}(\lambda) = \sigma_{k-1}(\lambda^{(1)}) = \lambda_2\,\sigma_{k-2;2}(\lambda^{(1)}) + \sigma_{k-1;2}(\lambda^{(1)}) \ge \sigma_{k-1;2}(\lambda^{(1)}).$

Then, by (2.8) and (2.9),

(2.10) $0 \ge \sigma_{k-1;2}(\lambda^{(1)}) \ge 0.$

Accordingly, from (2.6), (2.7) and (2.10), we get $\lambda^{(1)} \in \bar\Gamma_k(n-1)$ and

$\sigma_{k-1;2}(\lambda^{(1)}) = 0, \quad \sigma_k(\lambda^{(1)}) = 0, \quad \sigma_{k+1}(\lambda^{(1)}) < 0.$

This completes the proof of Lemma 2.3 for $s = 0$; then, by induction on the length of $\lambda^{(s)}$, the proof of Lemma 2.3 is finished.
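The expansion identity (4.8), $\sigma_m(\lambda) = \lambda_i\,\sigma_{m-1;i}(\lambda) + \sigma_{m;i}(\lambda)$, is used repeatedly above; the short Python check below verifies it on random data (an illustration only, not a proof).

```python
from itertools import combinations
import numpy as np

def sigma(m, lam):
    return sum(np.prod(c) for c in combinations(lam, m)) if m > 0 else 1.0

rng = np.random.default_rng(2)
lam = rng.normal(size=7)
for m in range(1, 7):
    for i in range(len(lam)):
        rest = np.delete(lam, i)
        lhs = sigma(m, lam)
        rhs = lam[i] * sigma(m - 1, rest) + sigma(m, rest)   # identity (4.8)
        assert abs(lhs - rhs) < 1e-9
print("identity (4.8) verified on a random sample")
```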

By Proposition 2.1, if $\lambda \in \bar\Gamma_{m-1}(m)$, $m > 1$, and $\sigma_{m-1}(\lambda) = 0$, then
$\sigma_{m-2;i}(\lambda) \ge 0, \quad i = 1, 2, \dots, m.$
Under the additional condition $\sigma_m(\lambda) < 0$, we have

Lemma 2.4. Let $\lambda \in \bar\Gamma_{m-1}(m)$, $m > 2$, and $\sigma_{m-1}(\lambda) = 0$. If $\sigma_m(\lambda) < 0$, then
$\sigma_{m-2;i}(\lambda) > 0, \quad i = 1, 2, \dots, m.$

Proof. By Proposition 2.1, we have $\sigma_{m-2;i}(\lambda) \ge 0$; Maclaurin's inequalities (4.5) yield $\sigma_m(\lambda) \le 0$. Thus it suffices to show that if the stated inequality fails, that is, if $\sigma_{m-2;i}(\lambda) = 0$ for some $i \in \{1, \dots, m\}$, then

$\sigma_m(\lambda) = \lambda_1\cdots\lambda_m = 0,$

contradicting $\sigma_m(\lambda) < 0$. It is enough to prove that $\sigma_{m-1}(\lambda) = 0 = \sigma_{m-2;1}(\lambda)$ implies $\sigma_m(\lambda) = 0$, since the other cases follow by the same argument.

Substituting $\sigma_{m-1}(\lambda) = 0 = \sigma_{m-2;1}(\lambda)$ into

$\sigma_{m-1}(\lambda) = \lambda_1\sigma_{m-2;1}(\lambda) + \sigma_{m-1;1}(\lambda),$

we have

$0 = \sigma_{m-1;1}(\lambda_1, \dots, \lambda_m) = \sigma_{m-1}(\lambda_2, \dots, \lambda_m) = \prod_{i=2}^{m}\lambda_i.$

Thus,

$0 = \lambda_1\prod_{i=2}^{m}\lambda_i = \sigma_m(\lambda).$

Proof of Theorem 2.2. Using the two lemmas above, we prove Theorem 2.2 by induction on $k$. Let $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_n) \in \mathbb{R}^n$ be in descending order.

Step 1: the case $k = 2$. We claim the following: let $\lambda \in \bar\Gamma_2(n)$ with $\sigma_2(\lambda) = 0$; if $\sigma_3(\lambda) < 0$, then

$\sigma_{1;i}(\lambda) > 0, \quad i = 1, 2, \dots, n.$

Equivalently, for $\lambda \in \bar\Gamma_2(n)$ with $\sigma_2(\lambda) = 0$, if $\sigma_{1;i}(\lambda) = 0$ for some $i \in \{1, \dots, n\}$, then

$\sigma_j(\lambda) = 0, \quad j = 3, \dots, n.$


By the assumptions above, we have

$\sigma_2(\lambda) = \lambda_1\sigma_{1;1}(\lambda) + \sigma_{2;1}(\lambda),$
$\sigma_{1;1}(\lambda) = \sigma_1(\lambda) - \lambda_1 = \sum_{j=2}^{n}\lambda_j = \sigma_1(\lambda_2, \dots, \lambda_n),$
$\sigma_{2;1}(\lambda) = \sigma_2(\lambda_2, \dots, \lambda_n) = \frac{(\sigma_1(\lambda) - \lambda_1)^2 - \sum_{i=2}^{n}\lambda_i^2}{2}.$

If we assume $\sigma_{1;1}(\lambda) = 0$, then $\sigma_{1;1}(\lambda) = 0$ together with $\sigma_2(\lambda) = 0$ yields

$0 = \sigma_2(\lambda) = \lambda_1\sigma_{1;1}(\lambda) + \sigma_{2;1}(\lambda) = \sigma_{2;1}(\lambda) = \frac{(\sigma_1(\lambda) - \lambda_1)^2 - \sum_{i=2}^{n}\lambda_i^2}{2} = -\frac{\sum_{i=2}^{n}\lambda_i^2}{2},$

which implies

$\sum_{i=2}^{n}\lambda_i^2 = 0,$

and thus

$\lambda_i = 0, \quad i = 2, \dots, n.$

Then

$\sigma_j(\lambda) = 0, \quad j = 2, \dots, n,$

which contradicts $\sigma_3(\lambda) < 0$; therefore $\sigma_{1;1}(\lambda) = 0$ is impossible.

Step 2: the case $2 < k \le n-1$. If $k = n-1$, this is covered by Lemma 2.4. Now we consider the general case $2 < k < n-1$.

Proof of part (I): we will prove that, for $2 < k < n-1$, if $\lambda \in \bar\Gamma_k(n)$, $\sigma_k(\lambda) = 0$ and $\sigma_{k+1}(\lambda) < 0$, then

$\sigma_{k-1;1}(\lambda) > 0.$

We prove this claim by contradiction. Recall that $\lambda^{(s)} = (\lambda_{s+1}, \dots, \lambda_n)$, and suppose that $\lambda = \lambda^{(0)} \in \bar\Gamma_k(n)$,

$\sigma_{k-1;1}(\lambda^{(0)}) = 0 \ \text{(the absurd assumption)}, \qquad \sigma_k(\lambda^{(0)}) = 0, \qquad \sigma_{k+1}(\lambda^{(0)}) < 0.$

By using Lemma 2.3 and induction on $s$ up to $s = n-k-1$, we have that $\lambda^{(n-k-1)} \in \bar\Gamma_k(k+1)$ and

$\sigma_{k-1;n-k}(\lambda^{(n-k-1)}) = \sigma_{k-1;n-k}(\lambda_{n-k}, \dots, \lambda_n) = 0,$
$\sigma_k(\lambda^{(n-k-1)}) = \sigma_k(\lambda_{n-k}, \dots, \lambda_n) = 0,$
$\sigma_{k+1}(\lambda^{(n-k-1)}) = \sigma_{k+1}(\lambda_{n-k}, \dots, \lambda_n) < 0.$

This contradicts the conclusion of Lemma 2.4 with $m = k+1$. Thus the assumption $\sigma_{k-1;1}(\lambda) = 0$ is absurd, and therefore $\sigma_{k-1;1}(\lambda) > 0$.

Proof of part (II): we suppose that $\lambda \in \bar\Gamma_k(n)$ and $\sigma_k(\lambda) = \sigma_{k+1}(\lambda) = 0$. Then $\lambda \in \bar\Gamma_{k+1}(n)$, and for any $\varepsilon > 0$, from the formula

$\sigma_{k+1}(\lambda + \varepsilon) = \sum_{j=0}^{k+1} C(j,k,n)\,\varepsilon^j\,\sigma_{k+1-j}(\lambda), \qquad C(j,k,n) = \frac{\binom{n}{k+1}\binom{k+1}{j}}{\binom{n}{k+1-j}},$

with the convention $\sigma_0(\lambda) = 1$, we have

$\lambda + \varepsilon = (\lambda_1 + \varepsilon, \lambda_2 + \varepsilon, \dots, \lambda_n + \varepsilon) \in \Gamma_{k+1}(n).$


We see that either $\sigma_{k+2}(\lambda) \ge 0$ or $\sigma_{k+2}(\lambda) < 0$. We now prove that $\sigma_{k+2}(\lambda) = 0$. First we claim that $\sigma_{k+2}(\lambda) < 0$ is impossible. Otherwise, from the assumption $\sigma_k(\lambda) = \sigma_{k+1}(\lambda) = 0$, we would have

$\sigma_k(\lambda) = 0, \quad \sigma_{k+1}(\lambda) = 0, \quad \sigma_{k+2}(\lambda) < 0.$

Using the result of part (I) (applied with $k+1$ in place of $k$), we obtain from $\sigma_{k+1}(\lambda) = 0$ and $\sigma_{k+2}(\lambda) < 0$ that

(2.11) $\sigma_{k;1}(\lambda) > 0.$

Since $\lambda \in \bar\Gamma_k(n)$, Proposition 2.1 gives $\lambda_1 \ge 0$ and $\sigma_{k-1;1}(\lambda) \ge 0$. So we have

$0 = \sigma_k(\lambda) = \lambda_1\sigma_{k-1;1}(\lambda) + \sigma_{k;1}(\lambda) \ge \sigma_{k;1}(\lambda).$

This contradicts (2.11); thus the case $\sigma_{k+2}(\lambda) < 0$ does not occur, and we obtain $\sigma_{k+2}(\lambda) \ge 0$, in which case $\lambda \in \bar\Gamma_{k+2}(n)$. Applying Maclaurin's inequalities to $\lambda + \varepsilon \in \Gamma_{k+2}(n)$, we obtain

$\left[ \frac{\sigma_{k+1}(\lambda+\varepsilon)}{\binom{n}{k+1}} \right]^{\frac{1}{k+1}} \ge \left[ \frac{\sigma_{k+2}(\lambda+\varepsilon)}{\binom{n}{k+2}} \right]^{\frac{1}{k+2}}.$

Letting $\varepsilon \to 0^+$, we see that

$\sigma_{k+1}(\lambda) = 0 \ \text{implies} \ \sigma_{k+2}(\lambda) = 0.$

Repeating the above argument for $k+1, k+2, \dots, n$, we obtain

$\sigma_{k+j}(\lambda) = 0, \quad j = 1, 2, \dots, n-k,$

that is, $\lambda \in \bar\Gamma_n(n)$, and this completes the proof.

Going back to the definitions of $P_1$ and $P_2$: by Theorem 2.2 they can be stated more precisely as, for $k < n$,

$P_1 = \{\lambda \in \bar\Gamma_k : \sigma_j(\lambda) \ge 0,\ \sigma_k(\lambda) = \dots = \sigma_n(\lambda) = 0,\ 1 \le j \le k-1\},$
$P_2 = \{\lambda \in \bar\Gamma_k : \sigma_j(\lambda) > 0,\ \sigma_k(\lambda) = 0,\ \sigma_{k+1}(\lambda) < 0,\ 1 \le j \le k-1\}.$

If $2 \le k < n$, then $P_2 \neq \emptyset$. Here is an example: let

$\lambda_i = 1,\ 1 \le i \le k-1; \qquad \lambda_k = M,\ \lambda_{k+1} = -\tfrac{1}{M}; \qquad \lambda_i = 0,\ k+1 < i \le n,$

with $M = \frac{k-1+\sqrt{(k-1)^2+4}}{2} > 1$; then $M - \frac{1}{M} = k-1$, $\sigma_j(\lambda) > 0$ $(1 \le j \le k-1)$, $\sigma_k(\lambda) = 0$ and $\sigma_{k+1}(\lambda) = -1$, which means that $P_2 \neq \emptyset$.
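The explicit example above is easy to verify numerically. The Python sketch below (with an arbitrary choice of $n$ and $k$) checks that this $\lambda$ indeed lies in $P_2$ and, as predicted by Theorem 2.2 (I), that all coefficients $\sigma_{k-1;i}(\lambda)$ of the linearized operator are strictly positive.

```python
from itertools import combinations
import numpy as np

def sigma(j, lam):
    return sum(np.prod(c) for c in combinations(lam, j)) if j > 0 else 1.0

n, k = 7, 3                                            # illustrative choice
M = (k - 1 + np.sqrt((k - 1) ** 2 + 4)) / 2            # so that M - 1/M = k - 1
lam = np.array([1.0] * (k - 1) + [M, -1.0 / M] + [0.0] * (n - k - 1))

assert all(sigma(j, lam) > 0 for j in range(1, k))     # sigma_j > 0 for j < k
assert abs(sigma(k, lam)) < 1e-10                      # sigma_k = 0
assert abs(sigma(k + 1, lam) + 1.0) < 1e-10            # sigma_{k+1} = -1, so lam in P2

# Theorem 2.2 (I): every coefficient sigma_{k-1;i}(lam) of L_psi is positive,
# i.e. the linearized operator at psi is uniformly elliptic (Theorem 2.5 (1)).
coeffs = [sigma(k - 1, np.delete(lam, i)) for i in range(n)]
print(np.round(coeffs, 6))
assert min(coeffs) > 0
```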

The significance of Theorem 2.2 is that it breaks the classical framework for the ellipticity of Hessian equations. It is well known that $S_k[u] = f$ is elliptic if $f > 0$ and degenerate elliptic if $f \ge 0$. Since $P_2 = \emptyset$ when $k = n$, the condition $f(0) = 0$ must lead to degenerate ellipticity for the Monge-Ampère equation; by Theorem 2.2, however, this is no longer true for the $k$-Hessian equation with $2 \le k < n$. The next theorem gives a complete $k$-Hessian classification of second-order polynomials in the degenerate elliptic case.

Theorem 2.5. Suppose that $\lambda \in \partial\Gamma_k(n)$, $2 \le k \le n-1$.

(1) For any $\lambda \in P_2$, $\psi = \frac{1}{2}\sum_{i=1}^{n}\lambda_i y_i^2$ is a solution of the $k$-Hessian equation $S_k[\psi] = 0$, and the linearized operator $L_\psi = \sum_{i=1}^{n}\sigma_{k-1;i}(\lambda)\,\partial_i^2$ is uniformly elliptic.


(2) For any $\lambda \in P_1$ with $\lambda_1 \ge \dots \ge \lambda_n$, $\psi = \frac{1}{2}\sum_{i=1}^{n}\lambda_i y_i^2$ is a non-strictly convex solution of the $k$-Hessian equation $S_k[\psi] = 0$, and the linearized operator $L_\psi$ is degenerate elliptic with

$\sigma_{k-1;i}(\lambda) = 0, \quad 1 \le i \le k-1.$

Moreover, if $\sigma_{k-1}(\lambda) = 0$, then $L_\psi = 0$, that is, $\sigma_{k-1;i}(\lambda) = 0$ for $1 \le i \le n$; if $\sigma_{k-1}(\lambda) > 0$, then

(2.12) $\lambda_i > 0,\ 1 \le i \le k-1; \qquad \lambda_i = 0,\ k \le i \le n; \qquad \sigma_{k-1;i}(\lambda) = 0,\ 1 \le i \le k-1; \qquad \sigma_{k-1;i}(\lambda) = \prod_{j=1}^{k-1}\lambda_j > 0,\ k \le i \le n.$

Proof. (1) is a direct consequence of (I) in Theorem 2.2. Now we prove (2). If $\lambda \in P_1$, we have $\sigma_{k-1;i}(\lambda) \ge 0$ for $1 \le i \le n$ by (2.3), and $\sigma_j(\lambda) = 0$ for $k \le j \le n$ by Theorem 2.2 (II). From $\sigma_n(\lambda) = \prod_{i=1}^{n}\lambda_i = 0$, $\lambda_j \ge 0$ $(1 \le j \le n)$ and the fact that $\lambda$ is in decreasing order, it follows that $\lambda_n = 0$ and

$\sigma_k(\lambda_1, \dots, \lambda_{n-1}) = \sigma_k(\lambda) - \lambda_n\sigma_{k-1;n}(\lambda) = \sigma_k(\lambda) = 0.$

Similarly,

$\sigma_j(\lambda_1, \dots, \lambda_{n-1}) = \sigma_j(\lambda) \ge 0, \quad \sigma_{k+1}(\lambda_1, \dots, \lambda_{n-1}) = \sigma_{k+1}(\lambda) = 0, \quad 1 \le j \le k-1.$

Applying Theorem 2.2 (II) to $(\lambda_1, \dots, \lambda_{n-1}) \in \bar\Gamma_k(n-1)$, we obtain

$\sigma_{n-1}(\lambda_1, \dots, \lambda_{n-1}) = 0,$

and, by the same reasoning as in the $n$-dimensional case, $\lambda_{n-1} = 0$. By induction on the dimension down to $k+1$, we see that $\lambda_i = 0$ for $k+1 \le i \le n$. Since $\sigma_k(\lambda) = 0$ and

$\sigma_k(\lambda) = \sigma_k(\lambda_1, \dots, \lambda_k, 0, \dots, 0) = \prod_{i=1}^{k}\lambda_i,$

we have $\lambda_k = 0$, recalling that $\lambda$ is in descending order. By virtue of

$\sum_{i=1}^{n}\sigma_{k-1;i}(\lambda) = (n-k+1)\,\sigma_{k-1}(\lambda), \qquad \sigma_{k-1;i}(\lambda) \ge 0,\ 1 \le i \le n,$

we see that, if $\sigma_{k-1}(\lambda) = 0$, then $\sigma_{k-1;i}(\lambda) = 0$ for $1 \le i \le n$; if $\sigma_{k-1}(\lambda) > 0$, then from

$0 < \sigma_{k-1}(\lambda) = \sigma_{k-1}(\lambda_1, \dots, \lambda_{k-1}, 0, \dots, 0) = \prod_{i=1}^{k-1}\lambda_i,$

we have

$\lambda_i > 0, \quad 1 \le i \le k-1.$

By the definition of $\sigma_{k-1;i}(\lambda)$ and

$\lambda = (\lambda_1, \dots, \lambda_{k-1}, 0, \dots, 0),$

we obtain the last two conclusions of (2.12).
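As a sanity check of the degenerate case described by (2.12), the following Python sketch takes a point of $P_1$ with $\sigma_{k-1}(\lambda) > 0$ (the values of $n$, $k$ and $\lambda$ are illustrative) and verifies the stated coefficients of $L_\psi$.

```python
from itertools import combinations
import numpy as np

def sigma(j, lam):
    return sum(np.prod(c) for c in combinations(lam, j)) if j > 0 else 1.0

n, k = 6, 3
lam = np.array([2.0, 1.5] + [0.0] * (n - (k - 1)))     # point of P1, descending

assert all(abs(sigma(j, lam)) < 1e-12 for j in range(k, n + 1))    # sigma_j = 0, j >= k
coeffs = [sigma(k - 1, np.delete(lam, i)) for i in range(n)]        # sigma_{k-1;i}
prod_leading = np.prod(lam[:k - 1])
print(np.round(coeffs, 6), prod_leading)
assert all(abs(c) < 1e-12 for c in coeffs[:k - 1])                  # vanish for i <= k-1
assert all(abs(c - prod_leading) < 1e-12 for c in coeffs[k - 1:])   # = prod(lam_j), i >= k
```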

If $\lambda \in \Gamma_k(n)$, we certainly have $\lambda \in \Gamma_{k-1}(n)$. If $\lambda \in P_2 \subset \partial\Gamma_k(n)$, so that $\sigma_{k+1}(\lambda) < 0$, then from

$\sum_{i=1}^{n}\sigma_{k-1;i}(\mu) = (n-k+1)\,\sigma_{k-1}(\mu), \quad \mu \in \mathbb{R}^n,$

and Theorem 2.2 (I), it follows that $\sigma_{k-1}(\lambda) > 0$ and $\lambda \in \Gamma_{k-1}(n)$. In these two cases, $0 < \sigma_{k-1;1} \le \sigma_{k-1;2} \le \dots \le \sigma_{k-1;n}$ implies uniform ellipticity. Conversely, we want to know whether and how ellipticity holds for $\lambda \in \Gamma_{k-1}(n)$. Notice also that the monotonicity formula (2.3) requires $\lambda \in \Gamma_k(n)$ rather than $\lambda \in \Gamma_{k-1}(n)$; this is addressed by Lemma 2.6 below.


Lemma 2.6. Suppose that $\lambda \in \Gamma_{k-1}(n)$, $2 \le k \le n-1$. If $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n$, then

$\sigma_{k-1;1}(\lambda) \ge 0$

is equivalent to

$0 \le \sigma_{k-1;1}(\lambda) \le \sigma_{k-1;2}(\lambda) \le \dots \le \sigma_{k-1;n}(\lambda).$

Proof. We claim that $\sigma_{k-1;2}(\lambda) \ge \sigma_{k-1;1}(\lambda)$, that is,

$\sigma_{k-1}(\lambda_1, \lambda_3, \dots, \lambda_n) \ge \sigma_{k-1}(\lambda_2, \lambda_3, \dots, \lambda_n).$

If the claim is true, using it repeatedly we obtain

$\sigma_{k-1;n}(\lambda) \ge \dots \ge \sigma_{k-1;2}(\lambda) \ge \sigma_{k-1;1}(\lambda).$

Since $\lambda \in \Gamma_{k-1}(n)$, by (4.9) we have

$\sigma_{l;1}(\lambda) > 0, \quad l + 1 \le k-1,$

which, together with the assumption

$\sigma_{k-1}(\lambda_2, \lambda_3, \dots, \lambda_n) = \sigma_{k-1;1}(\lambda) \ge 0,$

yields $(\lambda_2, \lambda_3, \dots, \lambda_n) \in \bar\Gamma_{k-1}(n-1)$. Using (4.9) again, we obtain

$\sigma_{k-2}(\lambda_3, \lambda_4, \dots, \lambda_n) = \sigma_{k-2;2}(\lambda_2, \lambda_3, \dots, \lambda_n) \ge 0.$

Let $\varepsilon = \lambda_1 - \lambda_2 \ge 0$; we have

$\sigma_{k-1;2}(\lambda) = \sigma_{k-1}(\lambda_1, \lambda_3, \dots, \lambda_n)$
$= (\lambda_2 + \varepsilon)\,\sigma_{k-2}(\lambda_3, \dots, \lambda_n) + \sigma_{k-1}(\lambda_3, \dots, \lambda_n)$
$\ge \lambda_2\,\sigma_{k-2}(\lambda_3, \dots, \lambda_n) + \sigma_{k-1}(\lambda_3, \dots, \lambda_n)$
$= \sigma_{k-1}(\lambda_2, \lambda_3, \dots, \lambda_n) = \sigma_{k-1;1}(\lambda);$

thus, the claim is proved.

We now give a characterization of ellipticity for the linearized operator of $S_k[\psi] = c$ with $c \in \mathbb{R}$.

Theorem 2.7. For any $c \in \mathbb{R}$, there exists $\lambda \in \Gamma_{k-1}(n)$ such that

(2.13) $0 < \sigma_{k-1;1}(\lambda) \le \sigma_{k-1;2}(\lambda) \le \dots \le \sigma_{k-1;n}(\lambda),$

and $\psi = \frac{1}{2}\sum_{i=1}^{n}\lambda_i y_i^2$ is a $(k-1)$-convex solution of the $k$-Hessian equation $S_k[\psi] = c$; moreover, the linearized operator of $S_k[u]$ at $\psi$,

(2.14) $L_\psi = \sum_{i=1}^{n}\sigma_{k-1;i}(\lambda)\,\partial_i^2,$

is uniformly elliptic.

Proof. If $c > 0$, we take $\lambda_1 = \lambda_2 = \dots = \lambda_n = \left[c / \binom{n}{k}\right]^{1/k} > 0$; then $\sigma_k(\lambda) = c$. If $c = 0$, see Theorem 2.5 (1). It remains to consider the case $c < 0$.

Notice that $P_2 \neq \emptyset$ when $k < n$, so we can choose $\delta = (\delta_2, \dots, \delta_n) \in P_2 \subset \partial\Gamma_{k-1}(n-1)$ with $\delta_2 \ge \delta_3 \ge \dots \ge \delta_n$, that is,

$\sigma_{k-1}(\delta_2, \dots, \delta_n) = 0, \qquad \sigma_k(\delta_2, \dots, \delta_n) < 0.$

Obviously $\delta_2 > 0$. Choose $1 > t > 0$ small such that, for $\delta_1 = \delta_2 + 1$,

$\sigma_{k-1}(\delta_2 + t, \dots, \delta_n + t) > 0, \qquad \delta_1\sigma_{k-1}(\delta_2 + t, \dots, \delta_n + t) + \sigma_k(\delta_2 + t, \dots, \delta_n + t) < 0.$


Then

$\sigma_k(\delta_1, \delta_2 + t, \dots, \delta_n + t) = \delta_1\sigma_{k-1}(\delta_2 + t, \dots, \delta_n + t) + \sigma_k(\delta_2 + t, \dots, \delta_n + t) < 0.$

Since $\sigma_k(s\lambda) = s^k\sigma_k(\lambda)$ for $s > 0$, we can choose a suitable $s > 0$ such that

$\sigma_k(s\delta_1, s(\delta_2 + t), \dots, s(\delta_n + t)) = c.$

Let $\lambda = (s\delta_1, s(\delta_2 + t), \dots, s(\delta_n + t))$; then

$\sigma_k(\lambda) = c.$

The fact that $(\lambda_2, \lambda_3, \dots, \lambda_n) \in \Gamma_{k-1}(n-1)$ and (4.9) lead to

$\sigma_l(\lambda_2, \lambda_3, \dots, \lambda_n) > 0, \quad 1 \le l \le k-1.$

Therefore, by virtue of $\lambda_1 = s\delta_1 > 0$,

$\sigma_1(\lambda) = \lambda_1 + \sigma_1(\lambda_2, \lambda_3, \dots, \lambda_n) > \sigma_1(\lambda_2, \lambda_3, \dots, \lambda_n) > 0$

and

$\sigma_l(\lambda) = \lambda_1\sigma_{l-1}(\lambda_2, \lambda_3, \dots, \lambda_n) + \sigma_l(\lambda_2, \lambda_3, \dots, \lambda_n) > 0, \quad 2 \le l \le k-1.$

By the definition of $\Gamma_{k-1}(n)$, we have proved that $\lambda \in \Gamma_{k-1}(n)$. Noticing that $\lambda \in \Gamma_{k-1}(n)$, $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n$ and $\sigma_{k-1}(\lambda_2, \lambda_3, \dots, \lambda_n) > 0$, and applying Lemma 2.6, we see that

$0 < \sigma_{k-1;1}(\lambda) \le \sigma_{k-1;2}(\lambda) \le \dots \le \sigma_{k-1;n}(\lambda),$

so the operator (2.14) is uniformly elliptic. This completes the proof.
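The construction in the proof above is completely explicit, so it can be traced numerically. The Python sketch below follows the same steps for a sample $c < 0$: it starts from a point of $P_2$ in dimension $n-1$ (built like the example after Theorem 2.2), perturbs it, and rescales; the shrinking rule for $t$ and the particular values of $n$, $k$, $c$ are our illustrative choices.

```python
from itertools import combinations
import numpy as np

def sigma(j, lam):
    return sum(np.prod(c) for c in combinations(lam, j)) if j > 0 else 1.0

def p2_example(kappa, m):
    """A point of P2 for (kappa, m): sigma_kappa = 0, sigma_{kappa+1} < 0."""
    M = (kappa - 1 + np.sqrt((kappa - 1) ** 2 + 4)) / 2
    lam = [1.0] * (kappa - 1) + [M, -1.0 / M] + [0.0] * (m - kappa - 1)
    return np.sort(lam)[::-1]                           # descending order

n, k, c = 5, 3, -2.0                                    # illustrative choices
delta = p2_example(k - 1, n - 1)                        # delta in P2 of dimension n-1
delta1 = delta[0] + 1.0

t = 0.5
while not (sigma(k - 1, delta + t) > 0 and
           delta1 * sigma(k - 1, delta + t) + sigma(k, delta + t) < 0):
    t *= 0.5                                            # shrink t until both conditions hold

lam_tilde = np.concatenate(([delta1], delta + t))
s = (c / sigma(k, lam_tilde)) ** (1.0 / k)              # sigma_k(s*lam) = s^k sigma_k(lam)
lam = s * lam_tilde

assert abs(sigma(k, lam) - c) < 1e-9                    # S_k[psi] = c
assert all(sigma(j, lam) > 0 for j in range(1, k))      # lam in Gamma_{k-1}(n)
coeffs = [sigma(k - 1, np.delete(lam, i)) for i in range(n)]
print(np.round(coeffs, 4))
assert min(coeffs) > 0 and np.all(np.diff(coeffs) >= -1e-12)   # (2.13) holds
```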

Theorem 2.8. For any $c > 0$, there exists $\lambda \in \Gamma_k(n)$ such that

$0 < \sigma_{k-1;i}(\lambda),\ 1 \le i \le n; \qquad \sigma_{k+l-1}(\lambda) > 0,\ \sigma_{k+l}(\lambda) < 0, \quad 1 \le l \le n-k.$

In particular, there exists $\lambda \in \Gamma_n(n)$ such that $\sigma_k(\lambda) = c$. Therefore, for $1 \le l \le n-k$, $\psi = \frac{1}{2}\sum_{i=1}^{n}\lambda_i y_i^2$ is a $(k+l-1)$-convex solution of the $k$-Hessian equation $S_k[\psi] = c$ which is not $(k+l)$-convex. Moreover, the linearized operator of $S_k[\cdot]$ at $\psi$,

$L_\psi = \sum_{i=1}^{n}\sigma_{k-1;i}(\lambda)\,\partial_i^2,$

is uniformly elliptic.

Proof. In the proof of Theorem 2.7, replacing $\sigma_k$ by $\sigma_{k+l}$, we obtain $\lambda \in \Gamma_{k+l-1}(n)$ with $\sigma_{k+l}(\lambda) < 0$ for $1 \le l \le n-k$. For this $\lambda$, we choose $s > 0$ such that $\sigma_k(s\lambda) = c$. The rest of the proof is the same as that of Theorem 2.7.


3. Existence of $C^\infty$ local solutions

In this section, using the classification of the preceding section, we prove Theorem 1.1, which is restated in the following precise version.

Theorem 3.1. For $2 \le k \le n-1$, let $f = f(y, u, p)$ be defined and continuous near the point $Z_0 = (0, 0, 0) \in \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n$, and let $0 < \alpha < 1$. Assume that $f$ is $C^\alpha$ with respect to $y$ and $C^{2,1}$ with respect to $(u, p)$. We have the following results.

(1) If $f(Z_0) = 0$, then (1.1) admits a $(k-1)$-convex local solution $u \in C^{2,\alpha}$ near $y_0 = 0$, which is not $(k+1)$-convex and has the following form:

(3.1) $u(y) = \frac{1}{2}\sum_{i=1}^{n} \tau_i y_i^2 + \varepsilon\,\bar\varepsilon^{\,4}\, w(\bar\varepsilon^{\,-2} y),$

with arbitrarily fixed $(\tau_1, \dots, \tau_n) \in P_2$, where $\bar\varepsilon$ is defined in (1.6), $\varepsilon > 0$ is very small, and the function $w$ satisfies

(3.2) $\|w\|_{C^{2,\alpha}} \le 1, \qquad w(0) = 0, \quad \nabla w(0) = 0.$

(2) If $f \ge 0$ near $Z_0$, then equation (1.1) admits a $k$-convex local solution $u \in C^{2,\alpha}$ near $y_0 = 0$ which is not $(k+1)$-convex and is of the form (3.1).

Moreover, equation (1.1) is uniformly elliptic with respect to the solution (3.1). If $f \in C^\infty$ near $Z_0$, then $u \in C^\infty$ near $y_0$.

Theorem 3.1 is exactly parts (1) and (2) of Theorem 1.1.

We now perform a change of unknown function $u \to w$ and a change of variable $y \to x$; the aim is to consider equation (1.1) in the domain $B_1(0)$ in the new variable $x$, so that the "local" character enters the new equation itself; see (3.3)-(3.4). The trick comes from Lin [11]. Let $\tau = (\tau_1, \dots, \tau_n) \in P_2$; then $\psi(y) = \frac{1}{2}\sum_{i=1}^{n}\tau_i y_i^2$ is a polynomial-type solution of

$S_k[\psi] = 0.$

We follow Lin [11] and introduce the function

$u(y) = \frac{1}{2}\sum_{i=1}^{n}\tau_i y_i^2 + \varepsilon\,\bar\varepsilon^{\,4} w(\bar\varepsilon^{\,-2} y) = \psi(y) + \varepsilon\,\bar\varepsilon^{\,4} w(\bar\varepsilon^{\,-2} y), \qquad \tau \in P_2,\ \varepsilon > 0,$

as a candidate solution of equation (1.1). Writing $y = \bar\varepsilon^{\,2} x$, we have

$(D_{y_j}u)(x) = \tau_j\,\bar\varepsilon^{\,2}x_j + \varepsilon\,\bar\varepsilon^{\,2}w_j(x), \quad j = 1, \dots, n,$

and

$(D_{y_jy_k}u)(x) = \delta_{kj}\tau_j + \varepsilon\,w_{jk}(x), \quad j, k = 1, \dots, n,$

where $\delta_{kj}$ is the Kronecker symbol, $w_j(x) = (D_jw)(x)$ and $w_{jk}(x) = (D^2_{jk}w)(x)$. Then (1.1) becomes

$\tilde S_k[w] = \tilde f_\varepsilon(x, w(x), Dw(x)), \quad x \in B_1(0) = \{x \in \mathbb{R}^n : |x| < 1\},$

where

$\tilde S_k[w] = S_k\big((\delta_{ij}\tau_i + \varepsilon w_{ij}(x))\big) = S_k(r(w)),$ with the symmetric matrix $r(w) = (\delta_{ij}\tau_i + \varepsilon w_{ij}(x))$, and

$\tilde f_\varepsilon(x, w(x), Dw(x)) = f\big(\bar\varepsilon^{\,2}x,\ \bar\varepsilon^{\,4}\psi(x) + \varepsilon\bar\varepsilon^{\,4}w(x),\ \tau_1\bar\varepsilon^{\,2}x_1 + \varepsilon\bar\varepsilon^{\,2}w_1(x),\ \dots,\ \tau_n\bar\varepsilon^{\,2}x_n + \varepsilon\bar\varepsilon^{\,2}w_n(x)\big).$


We now explain the smoothness condition on $f = f(x, u, p)$ and its norms: $f$ is Hölder continuous with respect to $x$, denoted $f \in C^\alpha_x$, and $C^{2,1}$ with respect to $(u, p)$, denoted $C^{2,1}_{u,p}$. We consider $f = f(x, u, p)$ defined on

$\mathcal{B} = \{(x, u, p) \in \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n : x \in B_1(0),\ |u| \le A,\ |p| \le A\}$

for some fixed $A > 0$. We say $f \in C^\alpha_x$ if

$\|f\|_{C^\alpha_x} = \|f\|_{L^\infty(\mathcal{B})} + \sup_{(x,u,p),\,(z,u,p)\in\mathcal{B},\ x \neq z} \frac{|f(x,u,p) - f(z,u,p)|}{|x-z|^\alpha} < \infty.$

When $f = f(x)$, $f \in C^\alpha_x$ is the usual $f \in C^\alpha(B_1(0))$. When defining $f = f(z, u, p) \in C^{2,1}_{u,p}$, we regard $z$ as a parameter, as follows:

$\|f\|_{C^{1,1}_{u,p}} = \sup_{\mathcal{B}}\left\{|D^\beta_{u,p}f(z,u,p)| : 0 \le |\beta| \le 2\right\} < \infty$

and

$\|f\|_{C^{2,\alpha}_{u,p}} = \|f\|_{C^{1,1}_{u,p}} + \sup\left\{ \frac{|D^\beta_{u,p}f(z,u,p) - D^\beta_{u,p}f(z,u',p')|}{|(u,p)-(u',p')|^\alpha} : (z,u,p),\,(z,u',p') \in \mathcal{B},\ (u,p) \neq (u',p'),\ |\beta| = 2 \right\} < \infty$

for $0 < \alpha < 1$, with a similar definition for $\|f\|_{C^{2,1}_{u,p}}$. Here and in the sequel, when there is no confusion, we denote $\|f\|_{C^\alpha_x}$, $\|f\|_{C^{1,1}_{u,p}}$ and $\|f\|_{C^{2,1}_{u,p}}$ by $\|f\|_{C^\alpha}$, $\|f\|_{C^{1,1}}$ and $\|f\|_{C^{2,1}}$, respectively.
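For illustration only, the following Python sketch estimates the $C^\alpha_x$ seminorm of a hypothetical $f$ (Hölder in $x$, smooth in $(u,p)$) on a finite grid; this is a discretized caricature of the definition above, not the actual norm.

```python
import numpy as np

alpha = 0.5
f = lambda x, u, p: np.abs(x) ** alpha + u ** 2 + np.sin(p)   # C^alpha in x, smooth in (u,p)

xs = np.linspace(-1.0, 1.0, 201)
u0, p0 = 0.3, -0.2
vals = f(xs, u0, p0)

# sup_{x != z} |f(x,u,p) - f(z,u,p)| / |x - z|^alpha, estimated on the grid
num = np.abs(vals[:, None] - vals[None, :])
den = np.abs(xs[:, None] - xs[None, :]) ** alpha
ratio = np.where(den > 0, num / np.maximum(den, 1e-15), 0.0)
print("estimated C^alpha_x seminorm at (u0, p0):", ratio.max())
```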

Similarly to [11], we consider the nonlinear operator

(3.3) $G(w) = \frac{1}{\varepsilon}\left[S_k(r(w)) - \tilde f_\varepsilon(x, w, Dw)\right] \quad \text{in } B_1(0).$

The linearized operator of $G$ at $w$ is

(3.4) $L_{G(w)} = \sum_{i,j=1}^{n} \frac{\partial S_k(r(w))}{\partial r_{ij}}\,\partial^2_{ij} + \sum_{i=1}^{n} a_i\,\partial_i + a,$

where

$a_i = -\frac{1}{\varepsilon}\,\frac{\partial \tilde f_\varepsilon(x,z,p)}{\partial p_i}(x,w,Dw) = -\bar\varepsilon^{\,2}\,\frac{\partial f}{\partial p_i}, \qquad a = -\frac{1}{\varepsilon}\,\frac{\partial \tilde f_\varepsilon(x,z,p)}{\partial z}(x,w,Dw) = -\bar\varepsilon^{\,4}\,\frac{\partial f}{\partial z}.$

Hereafter, we denote $S_k^{ij}(r(w)) = \frac{\partial S_k(r(w))}{\partial r_{ij}}$.

Lemma 3.2. Assume that $\tau \in P_2$ and $\|w\|_{C^2(B_1(0))} \le 1$. Then the operator $L_{G(w)}$ is uniformly elliptic if $\varepsilon > 0$ is small enough.

Proof. In order to prove the uniform ellipticity of $L_{G(w)}$,

$\sum_{i,j=1}^{n} S_k^{ij}(r(w))\,\xi_i\xi_j \ge c\,|\xi|^2, \quad \forall (x, \xi) \in B_1(0) \times \mathbb{R}^n,$

it suffices to prove

(3.5) $S_k^{ii}(r(w)) = \sigma_{k-1;i}(\tau_1, \tau_2, \dots, \tau_n) + O(\varepsilon),\ 1 \le i \le n; \qquad S_k^{ij}(r(w)) = O(\varepsilon),\ i \neq j,$

because, if (3.5) holds, we see by (2.13) that

$S_k^{ii}(r(w)) - \sum_{j=1,\,j\neq i}^{n} |S_k^{ij}(r(w))| > \frac{1}{2}\,\sigma_{k-1;i}(\tau_1, \dots, \tau_n) > 0, \quad 1 \le i \le n,$

provided $\varepsilon > 0$ is small enough; then the matrix $(S_k^{ij}(r(w)))$ is strictly diagonally dominant and $L_{G(w)}$ is a uniformly elliptic operator.

Indeed, since $S_k(r)$ is the sum of all principal minors of order $k$ of the matrix $r$, we have

$S_k^{ll}(r(w)) = S_{k-1}(r(w; l, l)),$

where $r(w; l, l)$ is the $(n-1)\times(n-1)$ matrix obtained from $r$ by deleting the $l$-th row and the $l$-th column. But $r(w) = (\delta_{ij}\tau_i + \varepsilon w_{ij}(x))$, so

$S_{k-1}(r(w; l, l)) = \sigma_{k-1;l}(\tau_1, \tau_2, \dots, \tau_n) + O(\varepsilon)$

and

$S_k^{ij}(r(w)) = O(\varepsilon), \quad i \neq j.$

This completes the proof.
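Formula (3.5) and the diagonal-dominance argument can be checked numerically. The Python sketch below computes the coefficient matrix $S_k^{ij}(r)$ by finite differences for $r = \mathrm{diag}(\tau) + \varepsilon W$ with $\tau \in P_2$; the matrix $W$, the value of $\varepsilon$ and the tolerances are illustrative choices.

```python
from itertools import combinations
import numpy as np

def sigma(j, lam):
    return sum(np.prod(c) for c in combinations(lam, j)) if j > 0 else 1.0

def S_k(k, r):
    """Sum of all principal k x k minors of the matrix r."""
    n = r.shape[0]
    return sum(np.linalg.det(r[np.ix_(idx, idx)]) for idx in combinations(range(n), k))

def dS_k(k, r, h=1e-6):
    """Finite-difference approximation of S_k^{ij}(r) = dS_k / dr_{ij}."""
    n = r.shape[0]
    grad = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            rp, rm = r.copy(), r.copy()
            rp[i, j] += h
            rm[i, j] -= h
            grad[i, j] = (S_k(k, rp) - S_k(k, rm)) / (2 * h)
    return grad

n, k, eps = 5, 3, 1e-3
M = (k - 1 + np.sqrt((k - 1) ** 2 + 4)) / 2
tau = np.array([1.0] * (k - 1) + [M, -1.0 / M] + [0.0] * (n - k - 1))   # tau in P2

rng = np.random.default_rng(3)
W = rng.uniform(-1.0, 1.0, (n, n))
W = (W + W.T) / 2                         # symmetric perturbation, entries in [-1, 1]
r = np.diag(tau) + eps * W

A = dS_k(k, r)
target = np.array([sigma(k - 1, np.delete(tau, i)) for i in range(n)])
assert np.max(np.abs(np.diag(A) - target)) < 50 * eps   # diagonal = sigma_{k-1;i}(tau) + O(eps)
off = np.abs(A - np.diag(np.diag(A)))
assert np.max(off) < 50 * eps                           # off-diagonal entries are O(eps)
assert np.all(np.diag(A) - off.sum(axis=1) > 0)         # strict diagonal dominance
print(np.round(np.diag(A), 4), np.round(target, 4))
```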

We now follow the idea of Hong and Zuily [7] to prove existence and a priori estimates for the linearized operator. In fact, following the proof of [7] step by step, we could also obtain the existence of the local solution if $f$ is smooth enough, since $L_{G(w)}$ being uniformly elliptic can be regarded as a special case of $L_{G(w)}$ being degenerately elliptic as in [7]. But if $f \in C^\alpha_x$, which is the least requirement for classical Schauder estimates, their proof no longer works, because the degeneracy results in a loss of regularity. In our case, although $L_{G(w)}$ is uniformly elliptic, the existence and the a priori Schauder estimates of classical solutions cannot be obtained directly; the difficulty is that we do not know whether the coefficient $a$ of the zeroth order term in (3.4) is non-positive. After proving the solvability of the linearized equation (Lemma 3.3), we employ a Nash-Moser procedure to prove the existence of a local solution of (1.1) in Hölder spaces. We shall use the following scheme:

(3.6) $w_0 = 0, \qquad w_m = w_{m-1} + \rho_{m-1},\ m \ge 1, \qquad L_{G(w_m)}\rho_m = g_m \ \text{in } B_1(0), \qquad \rho_m = 0 \ \text{on } \partial B_1(0), \qquad g_m = -G(w_m),$

where

$g_0(x) = -\frac{1}{\varepsilon}\Big[\sigma_k(\tau) - f\big(\bar\varepsilon^{\,2}x,\ \bar\varepsilon^{\,4}\psi(x),\ \bar\varepsilon^{\,2}\tau_1 x_1,\ \bar\varepsilon^{\,2}\tau_2 x_2,\ \dots,\ \bar\varepsilon^{\,2}\tau_n x_n\big)\Big].$
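The scheme (3.6) is a Newton-type iteration: at each step the linearized problem is solved and the correction is added. The Python sketch below runs the same loop on a toy finite-dimensional map $G$ (our own example, with the Jacobian playing the role of $L_{G(w)}$ and a linear solve replacing the Dirichlet problem (3.7)); it only illustrates the structure of the iteration, not the Nash-Moser analysis.

```python
import numpy as np

def G(w):
    """Toy nonlinear map standing in for the operator G of (3.3)."""
    x, y = w
    return np.array([x + 0.1 * np.sin(x + y) - 0.3,
                     y + 0.1 * x * y - 0.2])

def L_G(w, h=1e-7):
    """Jacobian of G at w (finite differences), playing the role of L_G(w)."""
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        J[:, j] = (G(w + e) - G(w - e)) / (2 * h)
    return J

w = np.zeros(2)                          # w_0 = 0
for m in range(20):
    g = -G(w)                            # g_m = -G(w_m)
    if np.linalg.norm(g) < 1e-12:
        break
    rho = np.linalg.solve(L_G(w), g)     # solve the linearized problem for rho_m
    w = w + rho                          # w_{m+1} = w_m + rho_m
print(m, w, G(w))                        # converges in a few iterations
```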

It is pointed out on page 107 of [3] that, if the operator $L_G$ does not satisfy the condition $a \le 0$, then, as is well known from simple examples, the Dirichlet problem for $L_{G(w)}\rho = g$ no longer has a solution in general. Notice that $a$ in (3.4) carries the factor $\bar\varepsilon^{\,4}$; we will take advantage of the smallness of $a$ to obtain existence and uniqueness of the solution of the Dirichlet problem (3.7). We will assume $\|w_m\|_{C^{2,\alpha}} \le A$ rather than $\|w_m\|_{C^{2,\alpha}} \le 1$ as in [7]; the advantage is to see how the procedure depends on $A$ (actually $A$ can be taken to be 1). We have uniform Schauder estimates for its solution as follows.

Lemma 3.3. Assume that $\|w\|_{C^{2,\alpha}(B_1(0))} \le A$. Then there exists a unique solution $\rho \in C^{2,\alpha}(B_1(0))$ of the Dirichlet problem

(3.7) $L_{G(w)}\rho = g \ \text{in } B_1(0), \qquad \rho = 0 \ \text{on } \partial B_1(0),$

for all $g \in C^\alpha(B_1(0))$. Moreover,

(3.8) $\|\rho\|_{C^{2,\alpha}(B_1(0))} \le C\,\|g\|_{C^\alpha(B_1(0))}, \quad \forall g \in C^\alpha(B_1(0)),$

where the constant $C$ depends on $A$, $\tau$ and $\|f\|_{C^{2,1}}$. Moreover, $C$ is independent of $\varepsilon$ for $0 < \varepsilon \le \varepsilon_0$, for some $\varepsilon_0 > 0$.
