HAL Id: hal-01538332

https://hal.archives-ouvertes.fr/hal-01538332

Preprint submitted on 13 Jun 2017


Hyperbolicity of the time-like extremal surfaces in Minkowski spaces

Xianglong Duan

To cite this version:

Xianglong Duan. Hyperbolicity of the time-like extremal surfaces in Minkowski spaces. 2017. hal-01538332


HYPERBOLICITY OF THE TIME-LIKE EXTREMAL SURFACES IN MINKOWSKI SPACES

XIANGLONG DUAN

Abstract. In this paper, it is established, in the case of graphs, that time-like extremal surfaces of dimension 1 + n in the Minkowski space of dimension 1 + n + m can be described by a symmetric hyperbolic system of PDEs with the very simple structure (reminiscent of the inviscid Burgers equation)
\[
\partial_t W + \sum_{j=1}^{n} A_j(W)\,\partial_{x_j} W = 0,
\qquad W : (t,x) \in \mathbb{R}^{1+n} \to W(t,x) \in \mathbb{R}^{\,n+m+\binom{m+n}{n}},
\]
where each A_j(W) is just an \(\bigl(n+m+\binom{m+n}{n}\bigr)\times\bigl(n+m+\binom{m+n}{n}\bigr)\) symmetric matrix depending linearly on W.

Introduction

In the (1+n+m)-dimensional Minkowski space R^{1+(n+m)}, we consider a time-like (1+n)-dimensional surface (called n-brane in String Theory [8]), namely,
\[
(t, x) \in \Omega \subset \mathbb{R}\times\mathbb{R}^n \;\longmapsto\; X(t,x) = \bigl(X^0(t,x), \ldots, X^{n+m}(t,x)\bigr) \in \mathbb{R}^{1+(n+m)},
\]
where Ω is a bounded open set. This surface is called an extremal surface if X is a critical point, with respect to compactly supported perturbations in the open set Ω, of the following area functional (which is the Nambu-Goto action in the case n = 1)
\[
-\iint \sqrt{-\det(G_{\mu\nu})}\,, \qquad G_{\mu\nu} = \eta_{MN}\,\partial_\mu X^M \partial_\nu X^N,
\]
where M, N = 0, 1, \ldots, n+m, μ, ν = 0, 1, \ldots, n, and η = (−1, 1, \ldots, 1) denotes the Minkowski metric, while G is the induced metric on the (1+n)-surface by η.

Here ∂_0 = ∂_t and we use the convention that the sum is taken over repeated indices.
By variational principles, the Euler-Lagrange equations give the well-known equations of extremal surfaces,
\[
(0.1)\qquad \partial_\mu\bigl(\sqrt{-G}\,G^{\mu\nu}\,\partial_\nu X^M\bigr) = 0, \qquad M = 0, 1, \ldots, n+m,
\]
where G^{μν} is the inverse of G_{μν} and G = det(G_{μν}). In this paper, we limit ourselves to the case of extremal surfaces that are graphs of the form:
\[
(0.2)\qquad X^0 = t,\quad X^i = x^i,\; i = 1,\ldots,n,\quad X^{n+\alpha} = X^{n+\alpha}(t,x),\; \alpha = 1,\ldots,m.
\]
The main purpose of this paper is to prove:

Date: June 13, 2017.

Key words and phrases. extremal surfaces, hyperbolic system of conservation laws, conservation laws with polyconvex entropy.


Theorem 0.1. In the case of a graph of the form (0.2), the equations of extremal surfaces (0.1) can be translated into a first-order symmetric hyperbolic system of PDEs, which admits the very simple form
\[
(0.3)\qquad \partial_t W + \sum_{j=1}^{n} A_j(W)\,\partial_{x_j} W = 0,
\qquad W : (t,x) \in \mathbb{R}^{1+n} \to W(t,x) \in \mathbb{R}^{\,n+m+\binom{m+n}{n}},
\]
where each A_j(W) is just an \(\bigl(n+m+\binom{m+n}{n}\bigr)\times\bigl(n+m+\binom{m+n}{n}\bigr)\) symmetric matrix depending linearly on W. Accordingly, this system is automatically well-posed, locally in time, in the Sobolev space W^{s,2} as soon as s > n/2 + 1.

The structure of (0.3) is reminiscent of the celebrated prototype of all nonlinear hyperbolic PDEs, the so-called inviscid Burgers equation ∂_t u + u ∂_x u = 0, where u and x are both just valued in R, with the simplest possible nonlinearity. Of course, to get such a simple structure, the relation to be found between X (valued in R^{1+n+m}) and W (valued in \(\mathbb{R}^{n+m+\binom{m+n}{n}}\)) must be quite involved. Actually, it will be shown more precisely that the case of extremal surfaces corresponds to a special subset of solutions of (0.3) for which W lives in a very special algebraic sub-manifold of \(\mathbb{R}^{n+m+\binom{m+n}{n}}\), which is preserved by the dynamics of (0.3).

To establish Theorem 0.1, the strategy of proof follows the concept of system of conservation laws with “polyconvex” entropy in the sense of Dafermos [4]. The first step is to lift the original system of conservation laws to a (much) larger one which enjoys a convex entropy rather than a polyconvex one. This strategy has been successfully applied in many situations, such as nonlinear Elastodynamics [5, 7] and nonlinear Electromagnetism [1, 3, 9], just to quote a few examples. In our case, the calculations will crucially start with the classical Cauchy-Binet formula.

Finally, at the end of the paper, following the ideas recently introduced in [2], we will make a connection between our result and the theory of mean-curvature flows in the Euclidean space, in any dimension and co-dimension.

Acknowledgements. The author is very grateful to his thesis advisor, Yann Brenier, for introducing the polyconvex system to him and pointing out the possibility of augmenting this system as a hyperbolic system of conservation laws, in the spirit of [1].

1. Extremal surface equations for a graph

Let us first write equations (0.1) in the case of a graph such as (0.2). We denote
\[
V_\alpha = \partial_t X^{n+\alpha}, \qquad F_{\alpha i} = \partial_i X^{n+\alpha}, \qquad \alpha = 1,\ldots,m,\; i = 1,\ldots,n.
\]
Then the induced metric tensor G_{μν} can be written as
\[
(G_{\mu\nu}) = \begin{pmatrix} -1 + |V|^2 & V^T F \\ F^T V & I_n + F^T F \end{pmatrix}.
\]
We can easily get that
\[
G = \det(G_{\mu\nu}) = -\det(I_n + F^T F)\,\bigl(1 - |V|^2 + V^T F\,(I_n + F^T F)^{-1} F^T V\bigr).
\]
So, in the case of a graph, the extremal surface can be obtained by varying the following Lagrangian of the vector V and the matrix F,
\[
\iint L(V,F), \qquad L(V,F) = -\sqrt{-G},
\]
under the constraints
\[
\partial_t F_{\alpha i} = \partial_i V_\alpha, \qquad \partial_i F_{\alpha j} = \partial_j F_{\alpha i}, \qquad \alpha = 1,\ldots,m,\; i, j = 1,\ldots,n.
\]

The resulting system combines the above constraints and

\[
\partial_t\Bigl(\frac{\partial L(V,F)}{\partial V_\alpha}\Bigr) + \partial_i\Bigl(\frac{\partial L(V,F)}{\partial F_{\alpha i}}\Bigr) = 0.
\]

Now let us denote
\[
D_\alpha = \frac{\partial L(V,F)}{\partial V_\alpha}
= \frac{\sqrt{\det(I_n + F^T F)}\,(I_m + F F^T)^{-1}_{\alpha\beta} V_\beta}{\sqrt{1 - V^T (I_m + F F^T)^{-1} V}}
\]
and the energy density h by
\[
h(D,F) = \sup_V \bigl(D\cdot V - L(V,F)\bigr) = \sqrt{\det(I_n + F^T F) + D^T (I_m + F F^T) D}.
\]
We have
\[
V_\alpha = \frac{\partial h(D,F)}{\partial D_\alpha} = \frac{(I_m + F F^T)_{\alpha\beta} D_\beta}{h}.
\]
So, the extremal surface should solve the following system for a matrix-valued function F = (F_{αi})_{m×n} and a vector-valued function D = (D_α)_{α=1,2,…,m}:
\[
(1.1)\qquad \partial_t F_{\alpha i} + \partial_i\Bigl(\frac{D_\alpha + F_{\alpha j} P_j}{h}\Bigr) = 0,
\]
\[
(1.2)\qquad \partial_t D_\alpha + \partial_i\Bigl(\frac{D_\alpha P_i + \xi'(F)_{\alpha i}}{h}\Bigr) = 0,
\]
\[
(1.3)\qquad \partial_j F_{\alpha i} = \partial_i F_{\alpha j}, \qquad 1 \le i, j \le n,\; 1 \le \alpha \le m,
\]
where
\[
(1.4)\qquad P_i = F_{\alpha i} D_\alpha, \qquad h = \sqrt{D^2 + P^2 + \xi(F)}, \qquad 1 \le i, j \le n,\; 1 \le \alpha \le m,
\]
\[
(1.5)\qquad \xi(F) = \det\bigl(I_n + F^T F\bigr), \qquad
\xi'(F)_{\alpha i} = \frac{1}{2}\,\frac{\partial \xi(F)}{\partial F_{\alpha i}} = \xi(F)\,(I_n + F^T F)^{-1}_{ij} F_{\alpha j}.
\]
In fact, we can get the above equations directly from (0.1); interested readers can refer to Appendix A for the details. Moreover, we can find that there are other conservation laws for the energy density h and the vector P defined in the above equations, namely (see Appendix B),

\[
(1.6)\qquad \partial_t h + \nabla\cdot P = 0,
\]
\[
(1.7)\qquad \partial_t P_i + \partial_j\Bigl(\frac{P_i P_j - \xi(F)\,(I_n + F^T F)^{-1}_{ij}}{h}\Bigr) = 0.
\]
Now, let us take h and P as independent variables; then we can find that the system (1.1), (1.2), (1.3), (1.6), (1.7) admits an additional conservation law for
\[
S = \frac{D^2 + P^2 + \xi(F)}{2h},
\]
namely,
\[
(1.8)\qquad \partial_t S + \nabla\cdot\Bigl(\frac{SP}{h}\Bigr)
= \partial_i\Bigl[\frac{\xi(F)\,(I_n + F^T F)^{-1}_{ij}\,(P_j - F_{\alpha j} D_\alpha)}{h^2}\Bigr].
\]


2. Lifting of the system

2.1. The minors of the matrix F. In the previous part, S is generally not a convex function of (h, D, P, F), but a polyconvex function of F, which means that S can be written as a convex function of the minors of F. Now we denote r = min{m, n}. For 1 ≤ k ≤ r and any ordered sequences 1 ≤ α_1 < α_2 < … < α_k ≤ m and 1 ≤ i_1 < i_2 < … < i_k ≤ n, let A = {α_1, α_2, …, α_k}, I = {i_1, i_2, …, i_k}; then the minor of F with respect to the rows α_1, α_2, …, α_k and the columns i_1, i_2, …, i_k is defined as
\[
[F]_{A,I} = \det\bigl(F_{\alpha_p i_q}\bigr)_{p,q = 1,\ldots,k}.
\]
For the minors [F]_{A,I}, let us first introduce the generalized Cauchy-Binet formula, which is very convenient for computing the minors of the product of two matrices.

Lemma 2.1. (Cauchy-Binet formula) Suppose M is an m × l matrix, N is an l × n matrix, I is a subset of {1, 2, …, m} with k (≤ l) elements and J is a subset of {1, 2, …, n} with k elements; then
\[
(2.1)\qquad [MN]_{I,J} = \sum_{K \subseteq \{1,2,\ldots,l\},\; |K| = k} [M]_{I,K}\,[N]_{K,J}.
\]
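As a small worked illustration of Lemma 2.1 (this example is added here and is not part of the original text), take
\[
M = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, \qquad
N = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix}, \qquad
MN = \begin{pmatrix} 4 & 5 \\ 10 & 11 \end{pmatrix},
\]
so that det(MN) = 44 − 50 = −6, while the right-hand side of (2.1) with I = J = {1, 2} and k = 2 gives
\[
[M]_{\{1,2\},\{1,2\}}[N]_{\{1,2\},\{1,2\}} + [M]_{\{1,2\},\{1,3\}}[N]_{\{1,3\},\{1,2\}} + [M]_{\{1,2\},\{2,3\}}[N]_{\{2,3\},\{1,2\}}
= (-3)(1) + (-6)(1) + (-3)(-1) = -6.
\]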

Now let us look at ξ(F) = det(I_n + F^T F); we can show that it is a convex function of the minors [F]_{A,I}. In fact, we have
\[
\xi(F) = \det\bigl(I_n + F^T F\bigr)
= 1 + \sum_{k=1}^{n} \sum_{I \subseteq \{1,2,\ldots,n\},\; |I| = k} [F^T F]_{I,I}
= 1 + \sum_{k=1}^{r} \sum_{|I|,|A| = k} [F^T]_{I,A}\,[F]_{A,I}
\]
(by the Cauchy-Binet formula). So we have
\[
(2.2)\qquad \xi(F) = 1 + \sum_{k=1}^{r} \sum_{|A|,|I| = k} [F]_{A,I}^2.
\]

The above equality tells us that ξ(F ) is a polyconvex function of F . By introducing all the minors of F as independent variables, the energy S becomes a strictly convex function of h, D, P, [F ] A,I . Now we will see that the system can be augmented as a system of conservation laws of h, D, P, [F ] A,I .
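For instance (a worked example added for illustration, not taken from the original): for m = 1, n = 2, F = (F_{11}, F_{12}) is a single row, the only nonempty minors are the entries themselves, and (2.2) reads
\[
\xi(F) = \det\begin{pmatrix} 1 + F_{11}^2 & F_{11}F_{12} \\ F_{11}F_{12} & 1 + F_{12}^2 \end{pmatrix}
= 1 + F_{11}^2 + F_{12}^2,
\]
which is indeed a convex (in fact quadratic) function of the minors F_{11}, F_{12}.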

2.2. Conservation laws for the minors [F]_{A,I}. First, we will see that the [F]_{A,I} satisfy equations similar to (1.3). For simplicity, we set [F]_{A,I} = 1 if A = I = ∅.

Proposition 2.2. Suppose F satisfies (1.3). Then for any 2 ≤ k ≤ r + 1, A = {1 ≤ α_1 < α_2 < … < α_{k−1} ≤ m}, I = {1 ≤ i_1 < i_2 < … < i_k ≤ n}, we have
\[
(2.3)\qquad \sum_{q=1}^{k} (-1)^q\, \partial_{i_q} [F]_{A,\,I\setminus\{i_q\}} = 0.
\]


Proof. This can be shown quite directly. For the left-hand side, we have
\[
\mathrm{Left} = \sum_{q=1}^{k}\;\sum_{\substack{l < q \\ 1 \le p \le k-1}} (-1)^{l+p+q}\,[F]_{A\setminus\{\alpha_p\},\,I\setminus\{i_l, i_q\}}\,\partial_{i_q} F_{\alpha_p i_l}
\;+\; \sum_{q=1}^{k}\;\sum_{\substack{l > q \\ 1 \le p \le k-1}} (-1)^{l-1+p+q}\,[F]_{A\setminus\{\alpha_p\},\,I\setminus\{i_l, i_q\}}\,\partial_{i_q} F_{\alpha_p i_l}
\]
\[
= \sum_{\substack{1 \le l < q \le k \\ 1 \le p \le k-1}} (-1)^{l+p+q}\,[F]_{A\setminus\{\alpha_p\},\,I\setminus\{i_l, i_q\}}\,
\bigl(\partial_{i_q} F_{\alpha_p i_l} - \partial_{i_l} F_{\alpha_p i_q}\bigr) = 0.
\]
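As a quick sanity check (this remark is added here and is not in the original), the simplest instance k = 2, A = {α}, I = {i_1, i_2} of (2.3) reads
\[
-\partial_{i_1}[F]_{\{\alpha\},\{i_2\}} + \partial_{i_2}[F]_{\{\alpha\},\{i_1\}}
= -\partial_{i_1}F_{\alpha i_2} + \partial_{i_2}F_{\alpha i_1} = 0,
\]
which is exactly the compatibility condition (1.3).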

With the above proposition, we can get the conservation laws for [F]_{A,I}. For A = {1 ≤ α_1 < α_2 < … < α_k ≤ m}, I = {1 ≤ i_1 < i_2 < … < i_k ≤ n}, 1 ≤ k ≤ r, we have
\[
(2.4)\qquad
\partial_t [F]_{A,I}
= \sum_{p,q=1}^{k} (-1)^{p+q}\,[F]_{A\setminus\{\alpha_p\},\,I\setminus\{i_q\}}\,\partial_t F_{\alpha_p i_q}
= -\sum_{p,q=1}^{k} (-1)^{p+q}\,[F]_{A\setminus\{\alpha_p\},\,I\setminus\{i_q\}}\,
\partial_{i_q}\Bigl(\frac{D_{\alpha_p} + F_{\alpha_p j} P_j}{h}\Bigr)
\]
\[
= -\sum_{p,q=1}^{k} (-1)^{p+q}\,
\partial_{i_q}\Bigl[\,[F]_{A\setminus\{\alpha_p\},\,I\setminus\{i_q\}}\,\frac{D_{\alpha_p} + F_{\alpha_p j} P_j}{h}\Bigr].
\]

2.3. The augmented system. Now let us consider the energy density h, the vector field P and the minors [F]_{A,I} as independent variables. The original system (1.1)-(1.3) can be augmented to the following system of conservation laws. More precisely, for h > 0, D = (D_α)_{α=1,2,…,m}, P = (P_i)_{i=1,2,…,n}, M_{A,I} with A ⊆ {1, 2, …, m}, I ⊆ {1, 2, …, n}, 1 ≤ |A| = |I| ≤ r = min{m, n}, the system is composed of the following equations:
\[
(2.5)\qquad \partial_t h + \nabla\cdot P = 0,
\]
\[
(2.6)\qquad \partial_t D_\alpha + \partial_i\Bigl(\frac{D_\alpha P_i}{h}\Bigr)
+ \sum_{\substack{A,I,i \\ \alpha\in A,\, i\in I}} (-1)^{O_A(\alpha)+O_I(i)}\,
\partial_i\Bigl(\frac{M_{A,I}\,M_{A\setminus\{\alpha\},\,I\setminus\{i\}}}{h}\Bigr) = 0,
\]
\[
(2.7)\qquad \partial_t P_i
+ \sum_{\substack{A,I,j \\ j\in I,\, i\notin I\setminus\{j\}}} (-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
\partial_j\Bigl(\frac{M_{A,(I\setminus\{j\})\cup\{i\}}\,M_{A,I}}{h}\Bigr)
+ \partial_j\Bigl(\frac{P_i P_j}{h}\Bigr)
- \partial_i\Bigl(\frac{1 + \sum_{A,I} M_{A,I}^2}{h}\Bigr) = 0,
\]
\[
(2.8)\qquad \partial_t M_{A,I}
+ \sum_{\substack{i,j \\ i\in I,\, j\notin I\setminus\{i\}}} (-1)^{O_{I\setminus\{i\}}(j)+O_I(i)}\,
\partial_i\Bigl(\frac{M_{A,(I\setminus\{i\})\cup\{j\}}\,P_j}{h}\Bigr)
+ \sum_{\substack{\alpha,i \\ \alpha\in A,\, i\in I}} (-1)^{O_A(\alpha)+O_I(i)}\,
\partial_i\Bigl(\frac{M_{A\setminus\{\alpha\},\,I\setminus\{i\}}\,D_\alpha}{h}\Bigr) = 0,
\]
\[
(2.9)\qquad \sum_{i\in I} (-1)^{O_I(i)}\,\partial_i M_{A,\,I\setminus\{i\}} = 0,
\qquad 2 \le |I| = |A| + 1 \le r + 1.
\]
Here O_A(α) denotes the number such that α is the O_A(α)-th smallest element of A ∪ {α}. All the sums are taken with the convention that A ⊆ {1, …, m}, I ⊆ {1, …, n}, 1 ≤ α ≤ m, 1 ≤ i, j ≤ n.
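For example (an illustration added here, not in the original): if A = {2, 5}, then
\[
O_A(2) = 1,\qquad O_A(5) = 2,\qquad O_A(3) = 2,\qquad O_A(7) = 3,
\]
since 3 is the second smallest element of {2, 3, 5} and 7 is the third smallest element of {2, 5, 7}. In particular, for α ∈ A and i ∈ I, the sign (−1)^{O_A(α)+O_I(i)} is just the cofactor sign attached to the entry (α, i) inside the minor indexed by A and I.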

Note that there are many different ways to enlarge the original system, since the equations can be written in many different ways in terms of minors. Although the above augmented system looks quite complicated, we will show in the following part that extending the system in this way is quite useful. Let us first show that the augmented system can be reduced to the original system under the algebraic constraints we abandoned when we enlarged the system.

Proposition 2.3. We can recover the original system (1.1)-(1.3) from the augmented system (2.5)-(2.9) under the algebraic constraints
\[
P_i = F_{\alpha i} D_\alpha, \qquad h = \sqrt{D^2 + P^2 + \xi(F)}, \qquad M_{A,I} = [F]_{A,I}.
\]

Proof. It suffices to show the following three equalities:
\[
(2.10)\qquad \xi'(F)_{\alpha i} = \sum_{\substack{A,I \\ \alpha\in A,\, i\in I}} (-1)^{O_A(\alpha)+O_I(i)}\,[F]_{A,I}\,[F]_{A\setminus\{\alpha\},\,I\setminus\{i\}},
\]
\[
(2.11)\qquad \xi(F)\,(I_n + F^T F)^{-1}_{ij}
= \Bigl(1 + \sum_{A,I} [F]_{A,I}^2\Bigr)\delta_{ij}
- \sum_{\substack{A,I \\ j\in I,\, i\notin I\setminus\{j\}}} (-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,[F]_{A,(I\setminus\{j\})\cup\{i\}}\,[F]_{A,I},
\]
\[
(2.12)\qquad \sum_{p=1}^{k} (-1)^{p+q}\,[F]_{A\setminus\{\alpha_p\},\,I\setminus\{i_q\}}\,F_{\alpha_p j}
= \begin{cases}
(-1)^{O_{(I\setminus\{i_q\})\cup\{j\}}(j)+q}\,[F]_{A,(I\setminus\{i_q\})\cup\{j\}} & j \notin I\setminus\{i_q\},\\[2pt]
0 & j \in I\setminus\{i_q\}.
\end{cases}
\]

(2.12) is obvious because of the Laplace expansion. Now, since
\[
\xi'(F)_{\alpha i} = \frac{1}{2}\,\frac{\partial}{\partial F_{\alpha i}}\Bigl(1 + \sum_{A,I}[F]_{A,I}^2\Bigr)
= \sum_{A,I} [F]_{A,I}\,\frac{\partial}{\partial F_{\alpha i}}[F]_{A,I}
= \sum_{\substack{A,I \\ \alpha\in A,\, i\in I}} (-1)^{O_A(\alpha)+O_I(i)}\,[F]_{A,I}\,[F]_{A\setminus\{\alpha\},\,I\setminus\{i\}},
\]
(2.10) is true. Let us look at (2.11). First, we have
\[
\xi(F)\delta_{ij} - \xi(F)\,(I_n + F^T F)^{-1}_{ij}
= \xi(F)\,F_{\alpha i}\,(I_m + F F^T)^{-1}_{\alpha\beta}\,F_{\beta j}
= (-1)^{\alpha+\beta}\,F_{\alpha i}\,[I_m + F F^T]_{\{\alpha\}^c,\{\beta\}^c}\,F_{\beta j}.
\]
Because
\[
[I_m + F F^T]_{\{\alpha\}^c,\{\beta\}^c}
= \sum_{k=0}^{m-1}\;\sum_{\substack{|A'|=k \\ \alpha,\beta\notin A'}} (-1)^{O_{A'}(\alpha)+O_{A'}(\beta)}\,[F F^T]_{(A'\cup\{\alpha\})^c,\,(A'\cup\{\beta\})^c}
= \sum_{k=1}^{m}\;\sum_{\substack{|A|=k \\ \alpha,\beta\in A}} (-1)^{O_A(\alpha)+O_A(\beta)+\alpha+\beta}\,[F F^T]_{A\setminus\{\alpha\},\,A\setminus\{\beta\}}
\]
\[
= \sum_{k=1}^{\min\{m,r+1\}}\;\sum_{\substack{|A|=k,\,|I'|=k-1 \\ \alpha,\beta\in A}} (-1)^{O_A(\alpha)+O_A(\beta)+\alpha+\beta}\,[F]_{A\setminus\{\alpha\},I'}\,[F]_{A\setminus\{\beta\},I'},
\]
then we have
\[
\xi(F)\delta_{ij} - \xi(F)\,(I_n + F^T F)^{-1}_{ij}
= \sum_{k=1}^{\min\{m,r+1\}}\;\sum_{\substack{|A|=k,\,|I'|=k-1 \\ \alpha,\beta\in A}} (-1)^{O_A(\alpha)+O_A(\beta)}\,[F]_{A\setminus\{\alpha\},I'}\,[F]_{A\setminus\{\beta\},I'}\,F_{\alpha i} F_{\beta j}
\]
\[
= \sum_{k=1}^{r}\;\sum_{\substack{|A|=k,\,|I'|=k-1 \\ i,j\notin I'}} (-1)^{O_{I'}(i)+O_{I'}(j)}\,[F]_{A,\,I'\cup\{i\}}\,[F]_{A,\,I'\cup\{j\}}
= \sum_{\substack{A,I \\ j\in I,\, i\notin I\setminus\{j\}}} (-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,[F]_{A,(I\setminus\{j\})\cup\{i\}}\,[F]_{A,I},
\]
which proves (2.11).

Now we can show that the augmented system has a convex entropy.

Proposition 2.4. The system (2.5)-(2.9) satisfies an additional conservation law for
\[
S(h, D, P, M_{A,I}) = \frac{1 + D^2 + P^2 + \sum_{A,I} M_{A,I}^2}{2h}.
\]
More precisely, we have
\[
(2.13)\qquad \partial_t S + \nabla\cdot\Bigl(\frac{SP}{h}\Bigr)
+ \sum_{\substack{A,I,i \\ \alpha\in A,\, i\in I}} (-1)^{O_A(\alpha)+O_I(i)}\,
\partial_i\Bigl(\frac{D_\alpha\,M_{A\setminus\{\alpha\},\,I\setminus\{i\}}\,M_{A,I}}{h^2}\Bigr)
\]
\[
+ \sum_{\substack{A,I,j \\ j\in I,\, i\notin I\setminus\{j\}}} (-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
\partial_j\Bigl(\frac{P_i\,M_{A,(I\setminus\{j\})\cup\{i\}}\,M_{A,I}}{h^2}\Bigr)
- \partial_j\Bigl(\frac{P_j\,(1 + M_{A,I}^2)}{h^2}\Bigr) = 0.
\]
We leave the proof to Appendix C.

Remark 2.5. There are many possible ways to augment the original system, because of the different ways to write a function of the minors. Finding the right way to express equations (1.2) and (1.7) so that the resulting system has a convex entropy S is somewhat technical.

3. Properties of the augmented system

3.1. Propagation speeds and characteristic fields. Let us look at the special case n = 1, where our extremal surface is just a relativistic string. In this case, the augmented system coincides with the system for h, P, D, F, where P is a scalar function and F = (F_α)_{α=1,…,m} becomes a vector. More precisely, the equations in the case n = 1 are

\[
\partial_t h + \partial_x P = 0, \qquad
\partial_t F_\alpha + \partial_x\Bigl(\frac{D_\alpha + F_\alpha P}{h}\Bigr) = 0,
\]
\[
\partial_t P + \partial_x\Bigl(\frac{P^2 - 1}{h}\Bigr) = 0, \qquad
\partial_t D_\alpha + \partial_x\Bigl(\frac{D_\alpha P + F_\alpha}{h}\Bigr) = 0.
\]

Let us denote U = (h, P, D_α, F_α); then the system can be written as
\[
\partial_t U + A(U)\,\partial_x U = 0,
\qquad
A(U) = \frac{1}{h}
\begin{pmatrix}
0 & h & 0 & 0 \\[2pt]
\dfrac{1 - P^2}{h} & 2P & 0 & 0 \\[4pt]
-\dfrac{P D + F}{h} & D & P\,I_m & I_m \\[4pt]
-\dfrac{D + P F}{h} & F & I_m & P\,I_m
\end{pmatrix}.
\]

We can find that the propagation speeds are
\[
\lambda_+ = \frac{P + 1}{h}, \qquad \lambda_- = \frac{P - 1}{h},
\]
each of them having multiplicity m + 1. The characteristic field for λ_+ is composed of
\[
v_0^+ = (h,\, P+1,\, D,\, F), \qquad v_i^+ = (0,\, 0,\, e_i,\, e_i), \quad i = 1,\ldots,m,
\]
where e_i, i = 1, …, m, is the canonical basis of R^m. The characteristic field for λ_- is composed of
\[
v_0^- = (h,\, P-1,\, D,\, F), \qquad v_i^- = (0,\, 0,\, e_i,\, -e_i), \quad i = 1,\ldots,m.
\]

We can easily check that
\[
\frac{\partial \lambda_+(U)}{\partial U}\cdot v_i^+(U) = 0, \qquad
\frac{\partial \lambda_-(U)}{\partial U}\cdot v_i^-(U) = 0, \qquad i = 0, 1, \ldots, m.
\]
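To make the check explicit (this computation is spelled out here for convenience and is not in the original text): since λ_+ = (P+1)/h depends only on h and P,
\[
\frac{\partial \lambda_+}{\partial U}\cdot v_0^+
= \frac{\partial \lambda_+}{\partial h}\,h + \frac{\partial \lambda_+}{\partial P}\,(P+1)
= -\frac{P+1}{h^2}\,h + \frac{1}{h}\,(P+1) = 0,
\]
while the fields v_i^+, i ≥ 1, have no components in the h and P directions, so the product vanishes trivially; the λ_- fields are handled in the same way.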

So the augmented system is linearly degenerate in the sense of the theory of hyperbolic conservation laws [4].

3.2. Non-conservative form. Now let us look at the non-conservative form of the augmented system (2.5)-(2.9). We denote
\[
\tau = \frac{1}{h}, \qquad d = \frac{D}{h}, \qquad v = \frac{P}{h}, \qquad m_{A,I} = \frac{M_{A,I}}{h}.
\]
For simplicity, we set m_{A,I} = τ if A = I = ∅. We have the following proposition.

Proposition 3.1. Suppose (h, D, P, M_{A,I}) is a smooth solution of (2.5)-(2.9); then (τ, d, v, m_{A,I}) is a solution of the following symmetric hyperbolic system:
\[
(3.1)\qquad \partial_t \tau + v_j\,\partial_j \tau - \tau\,\partial_j v_j = 0,
\]
\[
(3.2)\qquad \partial_t d_\alpha + v_i\,\partial_i d_\alpha
+ \sum_{A,I,i} \mathbf{1}_{\{\alpha\in A,\, i\in I\}}\,(-1)^{O_A(\alpha)+O_I(i)}\,
m_{A\setminus\{\alpha\},\,I\setminus\{i\}}\,\partial_i m_{A,I} = 0,
\]
\[
(3.3)\qquad \partial_t v_i
+ \sum_{A,I,j} \mathbf{1}_{\{j\in I,\, i\notin I\setminus\{j\}\}}\,(-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
m_{A,(I\setminus\{j\})\cup\{i\}}\,\partial_j m_{A,I}
- \sum_{A,I} m_{A,I}\,\partial_i m_{A,I} + v_j\,\partial_j v_i - \tau\,\partial_i \tau = 0,
\]
\[
(3.4)\qquad \partial_t m_{A,I} + v_j\,\partial_j m_{A,I}
+ \sum_{i,j} \mathbf{1}_{\{i\in I,\, j\notin I\setminus\{i\}\}}\,(-1)^{O_{I\setminus\{i\}}(j)+O_I(i)}\,
m_{A,(I\setminus\{i\})\cup\{j\}}\,\partial_i v_j
- m_{A,I}\,\partial_j v_j
+ \sum_{\alpha,i} \mathbf{1}_{\{\alpha\in A,\, i\in I\}}\,(-1)^{O_A(\alpha)+O_I(i)}\,
m_{A\setminus\{\alpha\},\,I\setminus\{i\}}\,\partial_i d_\alpha = 0.
\]
We can prove the above proposition by just using (2.9). It is easy to verify that this system is symmetric. If we set \(W = (\tau, d_\alpha, v_i, m_{A,I}) \in \mathbb{R}^{n+m+\binom{m+n}{n}}\), then the equations can be written as
\[
\partial_t W + \sum_j A_j(W)\,\partial_j W = 0,
\]
where A_j(W) is a symmetric matrix and, more surprisingly, a linear function of W. This is exactly the form (0.3) announced in the introduction. Notice that this system does not require any restriction on the range of W! In particular, the variable τ may admit positive, negative or null values. This is a very remarkable situation if we compare with more classical nonlinear hyperbolic systems, such as the Euler equations of gas dynamics (where typically τ should admit only positive values).
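To see where the count n + m + \(\binom{m+n}{n}\) comes from (a counting remark added here for clarity, not taken from the original): by the Vandermonde identity, the number of nonempty minors is \(\sum_{k=1}^{r}\binom{m}{k}\binom{n}{k} = \binom{m+n}{n} - 1\), and adding the component τ = m_{∅,∅} gives \(\binom{m+n}{n}\), so that
\[
\underbrace{1}_{\tau} + \underbrace{m}_{d_\alpha} + \underbrace{n}_{v_i} + \underbrace{\binom{m+n}{n}-1}_{m_{A,I},\ A,I \neq \emptyset}
= n + m + \binom{m+n}{n}.
\]
For instance, for a relativistic string in R^{1+3} (n = 1, m = 2), W has 1 + 2 + 3 = 6 components.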

Now let us prove that the two systems are equivalent when the initial data satisfies (2.9).

Proposition 3.2. Suppose the initial data for (3.1)-(3.4) satisfies (2.9), i.e.,
\[
\sum_{i\in I} (-1)^{O_I(i)}\,\partial_i\bigl(\tau^{-1} m_{A,\,I\setminus\{i\}}\bigr) = 0,
\qquad 2 \le |I| = |A| + 1 \le r + 1;
\]
then the corresponding smooth solutions satisfy (2.5)-(2.9).

Proof. We only need to prove that the smooth solutions always satisfy (2.9) provided that the initial data satisfies it. For 2 ≤ |I| = |A| + 1 ≤ r + 1, let us denote
\[
\sigma_{A,I} = \sum_{i\in I} (-1)^{O_I(i)}\,\partial_i\bigl(\tau^{-1} m_{A,\,I\setminus\{i\}}\bigr);
\]
then by (3.1) and (3.4), we have
\[
\partial_t \sigma_{A,I}
= \sum_{i\in I} (-1)^{O_I(i)}\,\partial_i\bigl(\tau^{-1}\partial_t m_{A,\,I\setminus\{i\}} - \tau^{-2} m_{A,\,I\setminus\{i\}}\,\partial_t \tau\bigr)
\]
\[
= -v_j\,\partial_j \sigma_{A,I}
+ \sum_{\substack{\alpha,i \\ \alpha\in A,\, i\in I}} (-1)^{O_A(\alpha)+O_I(i)}\,\sigma_{A\setminus\{\alpha\},\,I\setminus\{i\}}\,\partial_i d_\alpha
- \sum_{\substack{i,j \\ i\in I,\, j\notin I\setminus\{i\}}} (-1)^{O_{I\setminus\{i\}}(j)+O_I(i)}\,\sigma_{A,\,(I\setminus\{i\})\cup\{j\}}\,\partial_i v_j.
\]
Then we have the following estimate:
\[
\partial_t \sum_{A,I} \int \sigma_{A,I}^2 \;\le\; C\bigl(\|\nabla v\|, \|\nabla d\|\bigr)\,\sum_{A,I} \int \sigma_{A,I}^2.
\]
Since the initial data satisfies σ_{A,I}(0) = 0, Gronwall's lemma gives σ_{A,I} ≡ 0. With these equalities, it is easy to prove the statement by reversing the computation of the previous proposition.


Now let us look at the connection with the original system. It is obvious that the non-conservative form of the augmented system is symmetric; thus, the initial value problem is at least locally well-posed. But for the original system, this kind of property is not obvious. However, we can show that the augmented system is equivalent to the original system if the initial value satisfies the following constraints:
\[
(3.5)\qquad P_i = F_{\alpha i} D_\alpha, \qquad h = \sqrt{D^2 + P^2 + \xi(F)}, \qquad M_{A,I} = [F]_{A,I},
\]
or, in the non-conservative form,
\[
(3.6)\qquad \tau v_i = m_{\alpha i} d_\alpha, \qquad 1 = d_\alpha^2 + v_i^2 + \tau^2 + m_{A,I}^2, \qquad m_{A,I} = \tau\,[F]_{A,I}.
\]

Now let us denote
\[
\lambda = \frac{1}{2}\bigl(\tau^2 + v_i^2 + d_\alpha^2 + m_{A,I}^2 - 1\bigr), \qquad
\omega_i = \tau v_i - m_{\alpha i} d_\alpha,
\]
\[
\varphi^\alpha_{A,I} = \sum_{i\in I} (-1)^{O_A(\alpha)+O_I(i)}\,m_{A,\,I\setminus\{i\}}\,m_{\alpha i}
- \mathbf{1}_{\{\alpha\notin A\}}\,\tau\,m_{A\cup\{\alpha\},\,I},
\]
\[
\psi^i_{A,I} = \sum_{\alpha\in A} (-1)^{O_A(\alpha)+O_I(i)}\,m_{A\setminus\{\alpha\},\,I}\,m_{\alpha i}
- \mathbf{1}_{\{i\notin I\}}\,\tau\,m_{A,\,I\cup\{i\}}.
\]

It is obvious that (τ, v_i, d_α, m_{A,I}) satisfies the above constraints (3.6) if and only if λ, ω_i, φ^α_{A,I}, ψ^i_{A,I} vanish for all possible choices of A, I, α, i. Furthermore, we can show that the algebraic constraints (3.6) are preserved by the non-conservative system (3.1)-(3.4). First, we have the following lemma.

Lemma 3.3. If (τ, v_i, d_α, m_{A,I}) solves the non-conservative system (3.1)-(3.4), then λ, ω_i, φ^α_{A,I}, ψ^i_{A,I} as defined above satisfy the following equalities:
\[
(3.7)\qquad \partial_t \omega_i = \omega_i\,\partial_j v_j - \omega_j\,\partial_i v_j - v_j\,\partial_j \omega_i + \tau\,\partial_i \lambda
+ \sum_{A,I,j} \mathbf{1}_{\{j\in I\}}\,(-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,\partial_j m_{A,I}\;\psi^i_{A,\,I\setminus\{j\}},
\]
\[
(3.8)\qquad \partial_t \lambda = -v_j\,\partial_j \lambda + \tau\,\partial_i \omega_i
+ \sum_{A,\,|I'|\ge 2,\,i,j} \mathbf{1}_{\{i,j\in I'\}}\,(-1)^{O_{I'}(i)+O_{I'}(j)}\,
m_{A,\,I'\setminus\{j\}}\,\partial_j\Bigl(\frac{m_{A,\,I'\setminus\{i\}}\,\omega_i}{\tau}\Bigr)
+ \sum_{A',\,|I|\ge 2,\,\alpha,i} \mathbf{1}_{\{i\in I\}}\,(-1)^{O_{A'}(\alpha)+O_I(i)}\,
m_{A',\,I\setminus\{i\}}\,\partial_i\Bigl(\frac{\varphi^\alpha_{A',I}\,d_\alpha}{\tau}\Bigr),
\]
\[
(3.9)\qquad \partial_t \varphi^\alpha_{A,I} = 2\varphi^\alpha_{A,I}\,\partial_j v_j - v_j\,\partial_j \varphi^\alpha_{A,I}
- \sum_{j,k} \mathbf{1}_{\{j\in I,\, k\notin I\setminus\{j\}\}}\,(-1)^{O_{I\setminus\{j\}}(k)+O_I(j)}\,
\varphi^\alpha_{A,(I\setminus\{j\})\cup\{k\}}\,\partial_j v_k
+ \sum_{\beta,j} \mathbf{1}_{\{\beta\in A,\, j\in I\}}\,(-1)^{O_A(\alpha)+O_A(\beta)+O_{A\setminus\{\beta\}}(\alpha)+O_I(j)}\,
\varphi^\alpha_{A\setminus\{\beta\},\,I\setminus\{j\}}\,\partial_j d_\beta,
\]
\[
(3.10)\qquad \partial_t \psi^i_{A,I} = 2\psi^i_{A,I}\,\partial_j v_j - v_j\,\partial_j \psi^i_{A,I}
- \sum_{k} (-1)^{O_I(i)+O_I(k)}\,\psi^k_{A,I}\,\partial_i v_k
+ \sum_{\beta\in A,\, j\in I} (-1)^{O_A(\beta)+O_I(j)+O_I(i)+O_{I\setminus\{j\}}(i)}\,
\psi^i_{A\setminus\{\beta\},\,I\setminus\{j\}}\,\partial_j d_\beta
- \sum_{j,k} \mathbf{1}_{\{j\in I,\, k\notin I\setminus\{j\}\}}\,(-1)^{O_I(i)+O_I(j)+O_{I\setminus\{j\}}(k)+O_{(I\setminus\{j\})\cup\{k\}}(i)}\,
\psi^i_{A,(I\setminus\{j\})\cup\{k\}}\,\partial_j v_k.
\]

The proof of this lemma requires a very lengthy and tedious computation; interested readers can refer to Appendix D for the details. By the above lemma, we can show that the algebraic constraints are preserved. We summarise our result in the following proposition.

Proposition 3.4. Suppose (τ, v_i, d_α, m_{A,I}) is a solution to the non-conservative equations (3.1)-(3.4) and the initial data satisfies the constraints
\[
\tau v_i = m_{\alpha i} d_\alpha, \qquad 1 = d_\alpha^2 + v_i^2 + \tau^2 + m_{A,I}^2, \qquad m_{A,I} = \tau\,[F]_{A,I},
\]
where F_{αi} = τ^{−1} m_{αi}. Then the above constraints are satisfied for all times.

Proof. Let us denote
\[
\lambda = \frac{1}{2}\bigl(\tau^2 + v_i^2 + d_\alpha^2 + m_{A,I}^2 - 1\bigr), \qquad
\omega_i = \tau v_i - m_{\alpha i} d_\alpha,
\]
\[
\varphi^\alpha_{A,I} = \sum_{i\in I} (-1)^{O_A(\alpha)+O_I(i)}\,m_{A,\,I\setminus\{i\}}\,m_{\alpha i}
- \mathbf{1}_{\{\alpha\notin A\}}\,\tau\,m_{A\cup\{\alpha\},\,I},
\]
\[
\psi^i_{A,I} = \sum_{\alpha\in A} (-1)^{O_A(\alpha)+O_I(i)}\,m_{A\setminus\{\alpha\},\,I}\,m_{\alpha i}
- \mathbf{1}_{\{i\notin I\}}\,\tau\,m_{A,\,I\cup\{i\}}.
\]

It is enough to show that λ, ω_i, φ^α_{A,I}, ψ^i_{A,I} always vanish. The quantities φ^α_{A,I}, ψ^i_{A,I} satisfy (3.9) and (3.10), which form a linear symmetric system of PDEs when (τ, v_i, d_α, m_{A,I}) is regarded as a fixed coefficient; it is then easy to see that 0 is the unique solution when the initial data is 0. So we get φ^α_{A,I} = ψ^i_{A,I} = 0 for all possible choices of A, I, α, i.

Therefore, we know that m_{A,I} = τ[F]_{A,I}, where F_{αi} = τ^{−1} m_{αi}. So, by (3.7) and (3.8), λ and ω_i solve the following linear system of PDEs:
\[
(3.11)\qquad D_t \lambda = \tau\,Z_{ij}\,\partial_j \omega_i + f_i\,\omega_i,
\]
\[
(3.12)\qquad D_t \omega_i = \tau\,\partial_i \lambda + c_{ij}\,\omega_j,
\]
where D_t = ∂_t + v·∇, Z_{ij} = ξ(F)(I_n + F^T F)^{-1}_{ij} is a positive definite matrix, f_i = ∂_j Z_{ij}, and c_{ij} = δ_{ij}∇·v − ∂_i v_j. This system is of hyperbolic type and looks very much like the acoustic wave system. Since Z_{ij} is positive definite, we can find a positive definite matrix Q such that Z = Q². Performing the change of variable ω̃_i = Q_{ij}ω_j, λ and ω̃_i solve the following linear symmetric system of PDEs:
\[
(3.13)\qquad D_t \lambda = \tau\,Q_{ij}\,\partial_j \tilde{\omega}_i + \tilde{f}_i\,\tilde{\omega}_i,
\]
\[
(3.14)\qquad D_t \tilde{\omega}_i = \tau\,Q_{ij}\,\partial_j \lambda + \tilde{c}_{ij}\,\tilde{\omega}_j,
\]
where
\[
\tilde{f}_i = \tau\,Z_{jk}\,\partial_k Q^{-1}_{ij} + Q^{-1}_{ij} f_j, \qquad
\tilde{c}_{ij} = \bigl(D_t Q_{ik} + Q_{il} c_{lk}\bigr) Q^{-1}_{kj}.
\]
By standard PDE arguments, this linear symmetric system has a unique solution. Since λ = ω̃_i = 0 at t = 0, we have λ ≡ ω̃_i ≡ 0, hence ω_i ≡ 0, which completes the proof.


4. Toward mean curvature motions in the Euclidean space

We conclude this paper by explaining how mean curvature motions in the Euclidean space are related to our study of extremal surfaces in the Minkowski space.

This can be done very simply by the elementary quadratic change of time θ = t²/2 in the extremal surface equations (0.1). Let us work in the case where X^0(t, x) = t.
We perform the change of coordinate θ = t²/2, and in the new coordinate system the extremal surface is denoted by X^M(θ, x). The chain rule (with dθ/dt = t) tells us
\[
\partial_t X^0 \equiv 1, \qquad \partial_t X^M = t\,\partial_\theta X^M, \qquad M = 1, \ldots, m+n.
\]
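As a side remark (added here; the original only uses the first-order relation), applying the same chain rule twice gives
\[
\partial_t^2 = \partial_\theta + t^2\,\partial_\theta^2 = \partial_\theta + 2\theta\,\partial_\theta^2,
\]
so in the regime θ ≪ 1 the second-order (wave-type) operator in t degenerates into a first-order operator in θ, which is the mechanism behind the quadratic change of time.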

Now, for fixed θ, the slice X(θ, x) = (X^1(θ, x), …, X^{m+n}(θ, x)) is an n-dimensional manifold Σ in R^{m+n}. Let us denote the induced metric on Σ by g_{ij} = ⟨∂_i X, ∂_j X⟩, i, j = 1, …, n, and write g = det g_{ij}, with g^{ij} the inverse of g_{ij}. Then we can get that (recall t² = 2θ)
\[
G_{00} = -1 + t^2\,|\partial_\theta X|^2, \qquad
G_{0i} = G_{i0} = t\,\langle \partial_\theta X, \partial_i X\rangle = t\,h_i, \qquad
G_{ij} = g_{ij},
\]
\[
G = \det\begin{pmatrix} -1 + t^2\,|\partial_\theta X|^2 & t\,h_j \\ t\,h_i & g_{ij} \end{pmatrix}
= -\bigl(1 + 2\theta\,(h_i h_j g^{ij} - |\partial_\theta X|^2)\bigr)\,g,
\]
\[
\sqrt{-G} = \sqrt{g}\,\bigl(1 + \theta\,(h_i h_j g^{ij} - |\partial_\theta X|^2)\bigr) + O(\theta^2),
\]
\[
G^{00} = -1 + 2\theta\,(h_i h_j g^{ij} - |\partial_\theta X|^2) + O(\theta^2), \qquad
G^{0i} = G^{i0} = t\,g^{ij} h_j + O(\theta), \qquad
G^{ij} = g^{ij} + O(\theta).
\]
Therefore, (0.1) can be rewritten as

\[
0 = \partial_t\bigl(\sqrt{-G}\,G^{00}\bigr) + \partial_i\bigl(\sqrt{-G}\,G^{i0}\bigr)
= t\Bigl[-\partial_\theta(\sqrt{g}) + \sqrt{g}\,(h_i h_j g^{ij} - |\partial_\theta X|^2)
+ \partial_i\bigl(\sqrt{g}\,g^{ij} h_j\bigr)\Bigr] + O(\theta),
\]
\[
0 = \partial_t\bigl(\sqrt{-G}\,G^{00}\,\partial_t X^M\bigr) + \partial_i\bigl(\sqrt{-G}\,G^{i0}\,\partial_t X^M\bigr)
+ \partial_i\bigl(\sqrt{-G}\,G^{ij}\,\partial_j X^M\bigr)
\]
\[
= -\partial_t\bigl(t\,\sqrt{g}\,\partial_\theta X^M\bigr) + \partial_i\bigl(t^2\,\sqrt{g}\,g^{ij} h_j\,\partial_\theta X^M\bigr)
+ \partial_i\bigl(\sqrt{g}\,g^{ij}\,\partial_j X^M\bigr) + O(\theta)
= -\sqrt{g}\,\partial_\theta X^M + \partial_i\bigl(\sqrt{g}\,g^{ij}\,\partial_j X^M\bigr) + O(\theta).
\]
In the regime θ ≪ 1, we obtain the following equations:
\[
(4.1)\qquad \partial_\theta(\sqrt{g}) + \sqrt{g}\,|\partial_\theta X|^2
= \partial_i\bigl(\sqrt{g}\,g^{ij} h_j\bigr) + \sqrt{g}\,h_i h_j g^{ij},
\]
\[
(4.2)\qquad \partial_\theta X^M = \frac{1}{\sqrt{g}}\,\partial_i\bigl(\sqrt{g}\,g^{ij}\,\partial_j X^M\bigr),
\qquad M = 1, \ldots, m+n.
\]

(4.2) is exactly the equation of the n-dimensional mean curvature flow in R^{m+n}, and (4.1) is just a consequence of (4.2).
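As a concrete illustration (added here, not in the original): for a curve in the plane, n = 1 and m = 1, one has g = |∂_x X|², and (4.2) becomes
\[
\partial_\theta X = \frac{1}{|\partial_x X|}\,\partial_x\!\Bigl(\frac{\partial_x X}{|\partial_x X|}\Bigr),
\]
which is the curve-shortening flow, i.e. motion of the curve by its curvature vector.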

Remark 4.1. It can easily be shown that (4.2) is equivalent to the following equation:
\[
(4.3)\qquad \partial_\theta X^M = g^{ij}\,\partial_{ij} X^M - g^{ij} g^{kl}\,\partial_k X^M\,\partial_l X^N\,\partial_{ij} X^N.
\]
Therefore,
\[
h_i = \partial_\theta X^M\,\partial_i X^M
= \bigl(g^{jk}\,\partial_{jk} X^M - g^{jk} g^{lm}\,\partial_l X^M\,\partial_m X^N\,\partial_{jk} X^N\bigr)\,\partial_i X^M
= g^{jk}\,\partial_{jk} X^M\,\partial_i X^M - g^{jk} g^{lm} g_{il}\,\partial_m X^N\,\partial_{jk} X^N = 0.
\]
As a consequence, we have
\[
\partial_\theta(\sqrt{g}) = \frac{1}{\sqrt{g}}\,g\,g^{ij}\,\partial_{i\theta} X^M\,\partial_j X^M
= \partial_i\bigl(\sqrt{g}\,g^{ij}\,\partial_\theta X^M\,\partial_j X^M\bigr)
- \partial_\theta X^M\,\partial_i\bigl(\sqrt{g}\,g^{ij}\,\partial_j X^M\bigr)
= -\sqrt{g}\,|\partial_\theta X|^2,
\]
which is exactly (4.1) since h_i = 0, i = 1, …, n.

So, we may expect to perform for the mean-curvature flow the same type of analysis we did for the extremal surfaces, which we intend to do in a future work.

5. Appendix A

Let us denote
\[
F_{\alpha i} = \partial_i X^{n+\alpha}, \qquad V_\alpha = \partial_t X^{n+\alpha}, \qquad
\xi_{ij} = \delta_{ij} + F_{\alpha i} F_{\alpha j}, \qquad
\zeta_{\alpha\beta} = \delta_{\alpha\beta} + F_{\alpha i} F_{\beta i},
\]
and let ξ^{ij}, ζ^{αβ} be respectively the inverses of ξ_{ij}, ζ_{αβ}, and ξ = det ξ_{ij} = det ζ_{αβ}, with i, j = 1, …, n, α, β = 1, …, m. Since ξ_{ij} F_{αj} = F_{αi} + F_{βi} F_{βj} F_{αj} = F_{βi} ζ_{βα}, we have ξ^{ij} F_{αj} = ζ^{αβ} F_{βi}. Using the above notations, the induced metric G_{μν} has the following expression:
\[
(G_{\mu\nu}) = \begin{pmatrix} -1 + |V|^2 & F_{\alpha j} V_\alpha \\ F_{\alpha i} V_\alpha & \xi_{ij} \end{pmatrix},
\qquad
G = -\xi\,\bigl(1 - |V|^2 + \xi^{ij} F_{\alpha i} F_{\beta j} V_\alpha V_\beta\bigr) = -\xi\,\bigl(1 - \zeta^{\alpha\beta} V_\alpha V_\beta\bigr),
\]
\[
(G^{\mu\nu}) = G^{-1}\,\xi
\begin{pmatrix}
1 & -\zeta^{\alpha\beta} V_\alpha F_{\beta j} \\
-\zeta^{\alpha\beta} V_\alpha F_{\beta i} & (-1 + |V|^2)\,\xi^{ij} + (\xi^{ik}\xi^{jl} - \xi^{ij}\xi^{kl})\,F_{\alpha k} F_{\beta l} V_\alpha V_\beta
\end{pmatrix}.
\]

Now let’s start looking at the equation (0.1). The equation for X i , i = 1, . . . , n, reads

\[
\partial_t\Biggl(\frac{\sqrt{\xi}\,\zeta^{\alpha\beta} V_\alpha F_{\beta i}}{\sqrt{1 - \zeta^{\alpha\beta} V_\alpha V_\beta}}\Biggr)
- \partial_j\Biggl(\frac{\sqrt{\xi}\,\bigl[(-1 + |V|^2)\,\xi^{ij} + (\xi^{ik}\xi^{jl} - \xi^{ij}\xi^{kl})\,F_{\alpha k} F_{\beta l} V_\alpha V_\beta\bigr]}{\sqrt{1 - \zeta^{\alpha\beta} V_\alpha V_\beta}}\Biggr) = 0.
\]
We denote
\[
D_\alpha = -\frac{\sqrt{\xi}\,\zeta^{\alpha\beta} V_\beta}{\sqrt{1 - \zeta^{\beta\gamma} V_\beta V_\gamma}}, \qquad
P_i = F_{\alpha i} D_\alpha, \qquad
h = \frac{\sqrt{\xi}}{\sqrt{1 - \zeta^{\alpha\beta} V_\alpha V_\beta}}.
\]
Then we have
\[
V_\alpha = \frac{-D_\alpha - F_{\alpha i} P_i}{\sqrt{\xi + |D|^2 + |P|^2}}, \qquad
h = \sqrt{\xi + |D|^2 + |P|^2}.
\]
Therefore, the equation can be rewritten as
\[
(5.1)\qquad \partial_t P_i + \partial_j\Bigl(\frac{P_i P_j - \xi\,\xi^{ij}}{h}\Bigr) = 0.
\]
The equation for X^0 = t reads
\[
-\partial_t\Biggl(\frac{\sqrt{\xi}}{\sqrt{1 - \zeta^{\alpha\beta} V_\alpha V_\beta}}\Biggr)
+ \partial_j\Biggl(\frac{\sqrt{\xi}\,\zeta^{\alpha\beta} V_\alpha F_{\beta j}}{\sqrt{1 - \zeta^{\alpha\beta} V_\alpha V_\beta}}\Biggr) = 0,
\]
which can be rewritten with our new notations as
\[
(5.2)\qquad \partial_t h + \partial_j P_j = 0.
\]


The equation for X^{n+α}, α = 1, …, m, reads
\[
-\partial_t\Biggl(\frac{\sqrt{\xi}\,\bigl(V_\alpha - \zeta^{\beta\gamma} F_{\alpha j} F_{\beta j} V_\gamma\bigr)}{\sqrt{1 - \zeta^{\alpha\beta} V_\alpha V_\beta}}\Biggr)
+ \partial_j\Biggl(\frac{\sqrt{\xi}\,\bigl(\zeta^{\beta\gamma} F_{\beta j} V_\gamma V_\alpha - F_{\alpha i}\,\xi^{ik} F_{\beta k} V_\beta\,\xi^{jl} F_{\gamma l} V_\gamma\bigr)}{\sqrt{1 - \zeta^{\alpha\beta} V_\alpha V_\beta}}\Biggr)
+ \partial_j\Bigl(\sqrt{\xi}\,\sqrt{1 - \zeta^{\alpha\beta} V_\alpha V_\beta}\;\xi^{ij} F_{\alpha i}\Bigr) = 0,
\]
which can be rewritten as
\[
(5.3)\qquad \partial_t D_\alpha + \partial_j\Bigl(\frac{D_\alpha P_j + \xi\,\xi^{ij} F_{\alpha i}}{h}\Bigr) = 0.
\]
At last, since ∂_t F_{αi} = ∂_i V_α and ∂_i F_{αj} = ∂_j F_{αi}, we have
\[
(5.4)\qquad \partial_t F_{\alpha i} + \partial_i\Bigl(\frac{D_\alpha + F_{\alpha j} P_j}{h}\Bigr) = 0, \qquad
\partial_i F_{\alpha j} = \partial_j F_{\alpha i}.
\]
(5.1)-(5.4) are just the equations that we propose.

6. Appendix B

First, let’s prove the equation (1.6). Quite directly, we have

\[
\partial_t h = \frac{1}{2h}\Bigl(2 D_\alpha\,\partial_t D_\alpha + 2 P_i\,\partial_t P_i + \frac{\partial \xi(F)}{\partial F_{\alpha i}}\,\partial_t F_{\alpha i}\Bigr)
= \frac{D_\alpha\,\partial_t D_\alpha + P_i\,\partial_t (F_{\alpha i} D_\alpha) + \xi'(F)_{\alpha i}\,\partial_t F_{\alpha i}}{h}
\]
\[
= \frac{D_\alpha + F_{\alpha i} P_i}{h}\,\partial_t D_\alpha + \frac{D_\alpha P_i + \xi'(F)_{\alpha i}}{h}\,\partial_t F_{\alpha i}
= -\frac{D_\alpha + F_{\alpha j} P_j}{h}\,\partial_i\Bigl(\frac{D_\alpha P_i + \xi'(F)_{\alpha i}}{h}\Bigr)
- \frac{D_\alpha P_i + \xi'(F)_{\alpha i}}{h}\,\partial_i\Bigl(\frac{D_\alpha + F_{\alpha j} P_j}{h}\Bigr)
\]
\[
= -\partial_i\Bigl(\frac{(D_\alpha + F_{\alpha j} P_j)\,(D_\alpha P_i + \xi'(F)_{\alpha i})}{h^2}\Bigr).
\]
Now, since
\[
\xi'(F)_{\alpha i}\,(D_\alpha + F_{\alpha j} P_j)
= \xi(F)\,(I + F^T F)^{-1}_{ik} F_{\alpha k}\,(D_\alpha + F_{\alpha j} P_j)
= \xi(F)\,(I + F^T F)^{-1}_{ik}\,(P_k + F_{\alpha k} F_{\alpha j} P_j)
= \xi(F)\,(I + F^T F)^{-1}_{ik}\,(I + F^T F)_{kj} P_j
= \xi(F)\,\delta_{ij} P_j = \xi(F)\,P_i,
\]
we have
\[
\partial_t h = -\partial_i\Bigl(\frac{(D^2 + P^2 + \xi(F))\,P_i}{h^2}\Bigr) = -\partial_i P_i.
\]

Now, let us look at the equation for P_i = F_{αi} D_α. We have
\[
\partial_t P_i = \partial_t (F_{\alpha i} D_\alpha) = D_\alpha\,\partial_t F_{\alpha i} + F_{\alpha i}\,\partial_t D_\alpha.
\]
The first term is
\[
D_\alpha\,\partial_t F_{\alpha i}
= -D_\alpha\,\partial_i\Bigl(\frac{D_\alpha + F_{\alpha j} P_j}{h}\Bigr)
= -\partial_i\Bigl(\frac{D_\alpha^2 + D_\alpha F_{\alpha j} P_j}{h}\Bigr)
+ \frac{D_\alpha\,\partial_i D_\alpha + P_j F_{\alpha j}\,\partial_i D_\alpha}{h}
\]
\[
= -\partial_i\Bigl(\frac{D^2 + P^2}{h}\Bigr)
+ \frac{D_\alpha\,\partial_i D_\alpha + P_j\,\partial_i (F_{\alpha j} D_\alpha) + \xi'(F)_{\alpha j}\,\partial_i F_{\alpha j}}{h}
- \frac{P_j D_\alpha\,\partial_i F_{\alpha j} + \xi'(F)_{\alpha j}\,\partial_i F_{\alpha j}}{h}
\]
\[
= -\partial_i\Bigl(\frac{h^2 - \xi(F)}{h}\Bigr) + \partial_i h
- \frac{D_\alpha P_j + \xi'(F)_{\alpha j}}{h}\,\partial_i F_{\alpha j}
= \partial_i\Bigl(\frac{\xi(F)}{h}\Bigr)
- \frac{D_\alpha P_j + \xi'(F)_{\alpha j}}{h}\,\partial_i F_{\alpha j}.
\]
The second term is
\[
F_{\alpha i}\,\partial_t D_\alpha
= -F_{\alpha i}\,\partial_j\Bigl(\frac{D_\alpha P_j + \xi'(F)_{\alpha j}}{h}\Bigr)
= -\partial_j\Bigl(\frac{(F_{\alpha i} D_\alpha) P_j + F_{\alpha i}\,\xi'(F)_{\alpha j}}{h}\Bigr)
+ \frac{D_\alpha P_j + \xi'(F)_{\alpha j}}{h}\,\partial_j F_{\alpha i}
\]
\[
= -\partial_j\Bigl(\frac{P_i P_j + F_{\alpha i}\,\xi'(F)_{\alpha j}}{h}\Bigr)
+ \frac{D_\alpha P_j + \xi'(F)_{\alpha j}}{h}\,\partial_i F_{\alpha j}.
\]
So we have
\[
\partial_t P_i = -\partial_j\Bigl(\frac{P_i P_j + F_{\alpha i}\,\xi'(F)_{\alpha j} - \xi(F)\,\delta_{ij}}{h}\Bigr).
\]
Now, since
\[
(I + F^T F)^{-1}_{ik}\,(\delta_{jk} + F_{\alpha j} F_{\alpha k}) = \delta_{ij},
\]
we have
\[
F_{\alpha i}\,\xi'(F)_{\alpha j} - \xi(F)\,\delta_{ij}
= \xi(F)\,\bigl((I + F^T F)^{-1}_{ik} F_{\alpha j} F_{\alpha k} - \delta_{ij}\bigr)
= -\xi(F)\,(I + F^T F)^{-1}_{ij}.
\]
So we get
\[
\partial_t P_i + \partial_j\Biggl(\frac{P_i P_j - \xi(F)\,(I + F^T F)^{-1}_{ij}}{h}\Biggr) = 0.
\]

7. Appendix C

We have that

\[
\partial_t S = \frac{D_\alpha\,\partial_t D_\alpha + P_i\,\partial_t P_i + M_{A,I}\,\partial_t M_{A,I}}{h}
- \frac{1 + D_\alpha^2 + P_i^2 + M_{A,I}^2}{2h^2}\,\partial_t h.
\]
Let us look at the first term:
\[
\frac{D_\alpha\,\partial_t D_\alpha}{h}
= -\frac{D_\alpha}{h}\,\partial_j\Bigl(\frac{D_\alpha P_j}{h}\Bigr)
- \frac{D_\alpha}{h}\sum_{\substack{A,I,i \\ \alpha\in A,\, i\in I}} (-1)^{O_A(\alpha)+O_I(i)}\,
\partial_i\Bigl(\frac{M_{A,I}\,M_{A\setminus\{\alpha\},\,I\setminus\{i\}}}{h}\Bigr)
\]
(since, by (2.9), \(\sum_{i\in I} (-1)^{O_A(\alpha)+O_I(i)}\,\partial_i M_{A\setminus\{\alpha\},\,I\setminus\{i\}} = 0\))
\[
= -\frac{D_\alpha^2}{h^2}\,\partial_j P_j - P_j\,\partial_j\Bigl(\frac{D_\alpha^2}{2h^2}\Bigr)
- \sum_{\substack{A,I,i \\ \alpha\in A,\, i\in I}} (-1)^{O_A(\alpha)+O_I(i)}\,
\frac{D_\alpha\,M_{A\setminus\{\alpha\},\,I\setminus\{i\}}}{h}\,\partial_i\Bigl(\frac{M_{A,I}}{h}\Bigr).
\]
For the second term,
\[
\frac{P_i\,\partial_t P_i}{h} = L_1 + L_2,
\]
where
\[
L_1 = -\frac{P_i}{h}\sum_{\substack{A,I,j \\ j\in I,\, i\notin I\setminus\{j\}}} (-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
\partial_j\Bigl(\frac{M_{A,(I\setminus\{j\})\cup\{i\}}\,M_{A,I}}{h}\Bigr),
\qquad
L_2 = -\frac{P_i}{h}\,\partial_j\Bigl(\frac{P_i P_j}{h}\Bigr)
+ \frac{P_j}{h}\,\partial_j\Bigl(\frac{1 + \sum_{A,I} M_{A,I}^2}{h}\Bigr).
\]

Let us first prove the following equality:
\[
\partial_i M_{A,I} = \sum_{j\in I} \mathbf{1}_{\{i\notin I\setminus\{j\}\}}\,(-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
\partial_j M_{A,(I\setminus\{j\})\cup\{i\}}.
\]
In fact, since \(\mathbf{1}_{\{j\in I,\, i\notin I\setminus\{j\}\}} = \mathbf{1}_{\{j\in I,\, i\notin I\}} + \mathbf{1}_{\{j\in I,\, i=j\}}\), we have
\[
\mathrm{Right} = \mathbf{1}_{\{i\notin I\}}\sum_{j\in I} (-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,\partial_j M_{A,(I\cup\{i\})\setminus\{j\}}
+ \mathbf{1}_{\{i\in I\}}\,\partial_i M_{A,I}.
\]
For i ≠ j, we can check that
\[
O_I(j) + O_{I\setminus\{j\}}(i) \equiv O_I(i) + O_{I\cup\{i\}}(j) + 1 \pmod 2.
\]
So the right-hand side becomes
\[
\mathrm{Right} = -\mathbf{1}_{\{i\notin I\}}\sum_{j\in I} (-1)^{O_I(i)+O_{I\cup\{i\}}(j)}\,\partial_j M_{A,(I\cup\{i\})\setminus\{j\}}
+ \mathbf{1}_{\{i\in I\}}\,\partial_i M_{A,I}
\]
\[
= -\mathbf{1}_{\{i\notin I\}}\sum_{j\in I\cup\{i\}} (-1)^{O_I(i)+O_{I\cup\{i\}}(j)}\,\partial_j M_{A,(I\cup\{i\})\setminus\{j\}}
+ \bigl(\mathbf{1}_{\{i\notin I\}} + \mathbf{1}_{\{i\in I\}}\bigr)\,\partial_i M_{A,I}.
\]
Because of (2.9), we finally get
\[
\sum_{j\in I} \mathbf{1}_{\{i\notin I\setminus\{j\}\}}\,(-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,\partial_j M_{A,(I\setminus\{j\})\cup\{i\}}
= \partial_i M_{A,I}.
\]
So we have
\[
L_1 = -\sum_{A,I} \frac{P_i\,M_{A,I}}{h^2}\,\partial_i M_{A,I}
- \sum_{\substack{A,I,j \\ j\in I,\, i\notin I\setminus\{j\}}} (-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
\frac{P_i\,M_{A,(I\setminus\{j\})\cup\{i\}}}{h}\,\partial_j\Bigl(\frac{M_{A,I}}{h}\Bigr).
\]

For L_2, we have
\[
L_2 = -\frac{P_i^2}{h^2}\,\partial_j P_j - P_j\,\partial_j\Bigl(\frac{P_i^2}{2h^2}\Bigr)
+ \partial_j\Bigl(\frac{P_j\,(1 + M_{A,I}^2)}{h^2}\Bigr)
- \frac{1 + M_{A,I}^2}{h}\,\partial_j\Bigl(\frac{P_j}{h}\Bigr).
\]
Since
\[
-\frac{1 + M_{A,I}^2}{h}\,\partial_j\Bigl(\frac{P_j}{h}\Bigr)
= -\frac{1 + M_{A,I}^2}{h^2}\,\partial_j P_j
- P_j\,\partial_j\Bigl(\frac{1 + M_{A,I}^2}{2h^2}\Bigr)
+ \frac{P_j\,M_{A,I}}{h^2}\,\partial_j M_{A,I},
\]
we have
\[
\frac{P_i\,\partial_t P_i}{h}
= -\sum_{\substack{A,I,j \\ j\in I,\, i\notin I\setminus\{j\}}} (-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
\frac{P_i\,M_{A,(I\setminus\{j\})\cup\{i\}}}{h}\,\partial_j\Bigl(\frac{M_{A,I}}{h}\Bigr)
- \frac{1 + P_i^2 + M_{A,I}^2}{h^2}\,\partial_j P_j
- P_j\,\partial_j\Bigl(\frac{1 + P_i^2 + M_{A,I}^2}{2h^2}\Bigr)
+ \partial_j\Bigl(\frac{P_j\,(1 + M_{A,I}^2)}{h^2}\Bigr).
\]
Therefore, we have
\[
\frac{D_\alpha\,\partial_t D_\alpha + P_i\,\partial_t P_i + M_{A,I}\,\partial_t M_{A,I}}{h}
= -\frac{2S}{h}\,\partial_j P_j - P_j\,\partial_j\Bigl(\frac{S}{h}\Bigr) + L_3,
\]
where
\[
L_3 = \partial_j\Bigl(\frac{P_j\,(1 + M_{A,I}^2)}{h^2}\Bigr)
- \sum_{\substack{A,I,j \\ j\in I,\, i\notin I\setminus\{j\}}} (-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
\partial_j\Bigl(\frac{P_i\,M_{A,(I\setminus\{j\})\cup\{i\}}\,M_{A,I}}{h^2}\Bigr)
- \sum_{\substack{A,I,i \\ \alpha\in A,\, i\in I}} (-1)^{O_A(\alpha)+O_I(i)}\,
\partial_i\Bigl(\frac{D_\alpha\,M_{A\setminus\{\alpha\},\,I\setminus\{i\}}\,M_{A,I}}{h^2}\Bigr).
\]
So we have
\[
\partial_t S = -\frac{2S}{h}\,\partial_j P_j - P_j\,\partial_j\Bigl(\frac{S}{h}\Bigr) + L_3 - \frac{S}{h}\,\partial_t h
= -\partial_j\Bigl(\frac{S P_j}{h}\Bigr) + L_3,
\]
which completes the proof.

8. Appendix D

Equation for ω_i. First, let us compute ∂_t ω_i. By definition,
\[
\partial_t \omega_i = \partial_t \tau\, v_i + \tau\,\partial_t v_i - \partial_t m_{\alpha i}\, d_\alpha - m_{\alpha i}\,\partial_t d_\alpha.
\]
The first two terms are
\[
\partial_t \tau\, v_i + \tau\,\partial_t v_i
= v_i\,\bigl(\tau\,\partial_j v_j - v_j\,\partial_j \tau\bigr)
+ \tau\,\Bigl(\sum_{A,I} m_{A,I}\,\partial_i m_{A,I} - v_j\,\partial_j v_i + \tau\,\partial_i \tau\Bigr) + \Sigma_1
= (\tau v_i)\,\partial_j v_j - v_j\,\partial_j (\tau v_i) + \frac{\tau}{2}\,\partial_i\bigl(\tau^2 + m_{A,I}^2\bigr) + \Sigma_1,
\]
where
\[
\Sigma_1 = -\tau\sum_{A,I,j} \mathbf{1}_{\{j\in I,\, i\notin I\setminus\{j\}\}}\,(-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
m_{A,(I\setminus\{j\})\cup\{i\}}\,\partial_j m_{A,I}.
\]
Here we use the equation for m_{αi}:
\[
\partial_t m_{\alpha i} + v_j\,\partial_j m_{\alpha i} + m_{\alpha j}\,\partial_i v_j - m_{\alpha i}\,\partial_j v_j + \tau\,\partial_i d_\alpha = 0.
\]
The last two terms are
\[
-\partial_t m_{\alpha i}\, d_\alpha - m_{\alpha i}\,\partial_t d_\alpha
= d_\alpha\,\bigl(v_j\,\partial_j m_{\alpha i} + m_{\alpha j}\,\partial_i v_j - m_{\alpha i}\,\partial_j v_j + \tau\,\partial_i d_\alpha\bigr)
+ m_{\alpha i}\, v_j\,\partial_j d_\alpha + \Sigma_2
\]
\[
= -(m_{\alpha i} d_\alpha)\,\partial_j v_j + v_j\,\partial_j (m_{\alpha i} d_\alpha)
+ \frac{\tau}{2}\,\partial_i\bigl(d_\alpha^2 + v_j^2\bigr)
- \bigl(\tau v_j - m_{\alpha j} d_\alpha\bigr)\,\partial_i v_j + \Sigma_2,
\]
where
\[
\Sigma_2 = \sum_{A,I,\alpha,j} \mathbf{1}_{\{\alpha\in A,\, j\in I\}}\,(-1)^{O_A(\alpha)+O_I(j)}\,
m_{\alpha i}\,m_{A\setminus\{\alpha\},\,I\setminus\{j\}}\,\partial_j m_{A,I}.
\]
Now we have
\[
\partial_t \omega_i = \bigl(\tau v_i - m_{\alpha i} d_\alpha\bigr)\,\partial_j v_j
- v_j\,\partial_j\bigl(\tau v_i - m_{\alpha i} d_\alpha\bigr)
+ \frac{\tau}{2}\,\partial_i\bigl(\tau^2 + v_j^2 + d_\alpha^2 + m_{A,I}^2\bigr)
- \bigl(\tau v_j - m_{\alpha j} d_\alpha\bigr)\,\partial_i v_j + \Sigma_1 + \Sigma_2
\]
\[
= \omega_i\,\partial_j v_j - \omega_j\,\partial_i v_j - v_j\,\partial_j \omega_i + \tau\,\partial_i \lambda + \Sigma_1 + \Sigma_2.
\]
It is easy to check that
\[
\Sigma_1 + \Sigma_2 = \sum_{A,I,j} \mathbf{1}_{\{j\in I\}}\,(-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,\partial_j m_{A,I}\;\psi^i_{A,\,I\setminus\{j\}}.
\]
So ω_i satisfies the following equation:
\[
\partial_t \omega_i = \omega_i\,\partial_j v_j - \omega_j\,\partial_i v_j - v_j\,\partial_j \omega_i + \tau\,\partial_i \lambda
+ \sum_{A,I,j} \mathbf{1}_{\{j\in I\}}\,(-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,\partial_j m_{A,I}\;\psi^i_{A,\,I\setminus\{j\}}.
\]


Equation for λ. Now let us compute ∂_t λ. We have
\[
\partial_t \lambda = \tau\,\partial_t \tau + v_i\,\partial_t v_i + d_\alpha\,\partial_t d_\alpha + m_{A,I}\,\partial_t m_{A,I}
\]
\[
= \tau\,\bigl(\tau\,\partial_j v_j - v_j\,\partial_j \tau\bigr)
+ v_i\,\Bigl(\sum_{A,I} m_{A,I}\,\partial_i m_{A,I} - v_j\,\partial_j v_i + \tau\,\partial_i \tau\Bigr) + \Sigma_3
- d_\alpha\, v_j\,\partial_j d_\alpha + \Sigma_4
+ m_{A,I}\,\bigl(m_{A,I}\,\partial_j v_j - v_j\,\partial_j m_{A,I}\bigr) + \Sigma_5 + \Sigma_6
\]
\[
= -\frac{v_j}{2}\,\partial_j\bigl(\tau^2 + v_i^2 + d_\alpha^2 + m_{A,I}^2\bigr)
+ \tau\,\partial_i(\tau v_i) + m_{A,I}\,\partial_i(m_{A,I} v_i)
+ \Sigma_3 + \Sigma_4 + \Sigma_5 + \Sigma_6,
\]
where
\[
\Sigma_3 = -\sum_{A,I,i,j} \mathbf{1}_{\{j\in I,\, i\notin I\setminus\{j\}\}}\,(-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
m_{A,(I\setminus\{j\})\cup\{i\}}\, v_i\,\partial_j m_{A,I},
\]
\[
\Sigma_4 = -\sum_{A,I,\alpha,i} \mathbf{1}_{\{\alpha\in A,\, i\in I\}}\,(-1)^{O_A(\alpha)+O_I(i)}\,
m_{A\setminus\{\alpha\},\,I\setminus\{i\}}\, d_\alpha\,\partial_i m_{A,I},
\]
\[
\Sigma_5 = -\sum_{A,I,\alpha,i} \mathbf{1}_{\{\alpha\in A,\, i\in I\}}\,(-1)^{O_A(\alpha)+O_I(i)}\,
m_{A\setminus\{\alpha\},\,I\setminus\{i\}}\, m_{A,I}\,\partial_i d_\alpha,
\]
\[
\Sigma_6 = -\sum_{A,I,i,j} \mathbf{1}_{\{j\in I,\, i\notin I\setminus\{j\}\}}\,(-1)^{O_{I\setminus\{j\}}(i)+O_I(j)}\,
m_{A,(I\setminus\{j\})\cup\{i\}}\, m_{A,I}\,\partial_j v_i.
\]
It is easy to see that
\[
\Sigma_3 + \Sigma_6 = -\sum_{A,I,i,j} \mathbf{1}_{\{j\in I,\, i\notin I\setminus\{j\}\}}\,(-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
m_{A,(I\setminus\{j\})\cup\{i\}}\,\partial_j\bigl(m_{A,I}\, v_i\bigr)
\]
(since \(\mathbf{1}_{\{j\in I,\, i\notin I\setminus\{j\}\}} = \mathbf{1}_{\{j\in I,\, i\notin I\}} + \mathbf{1}_{\{j\in I,\, i=j\}}\))
\[
= -\sum_{A,I,i} \mathbf{1}_{\{i\in I\}}\, m_{A,I}\,\partial_i\bigl(m_{A,I}\, v_i\bigr)
- \sum_{A,I,i,j} \mathbf{1}_{\{j\in I,\, i\notin I\}}\,(-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
m_{A,(I\setminus\{j\})\cup\{i\}}\,\partial_j\bigl(m_{A,I}\, v_i\bigr)
\]
(since \(\mathbf{1}_{\{i\in I\}} = 1 - \mathbf{1}_{\{i\notin I\}}\))
\[
= -m_{A,I}\,\partial_i\bigl(m_{A,I}\, v_i\bigr)
+ \sum_{A,I,i,j} \mathbf{1}_{\{i\notin I,\, j=i\}}\,(-1)^{O_I(i)+O_{I\cup\{i\}}(j)}\,
m_{A,(I\cup\{i\})\setminus\{j\}}\,\partial_j\bigl(m_{A,I}\, v_i\bigr)
- \sum_{A,I,i,j} \mathbf{1}_{\{i\notin I,\, j\in I\}}\,(-1)^{O_I(j)+O_{I\setminus\{j\}}(i)}\,
m_{A,(I\cup\{i\})\setminus\{j\}}\,\partial_j\bigl(m_{A,I}\, v_i\bigr).
\]
Now we can easily check that, for any i ∉ I and j ∈ I,
\[
O_I(i) + O_{I\cup\{i\}}(j) \equiv O_I(j) + O_{I\setminus\{j\}}(i) + 1 \pmod 2.
\]
(We can prove this equality by discussing the cases i < j and i > j.) Because \(\mathbf{1}_{\{i\notin I,\, j=i\}} + \mathbf{1}_{\{i\notin I,\, j\in I\}} = \mathbf{1}_{\{i\notin I,\, j\in I\cup\{i\}\}}\), we have
\[
\Sigma_3 + \Sigma_6 + m_{A,I}\,\partial_i\bigl(m_{A,I}\, v_i\bigr)
= \sum_{A,I,i,j} \mathbf{1}_{\{i\notin I,\, j\in I\cup\{i\}\}}\,(-1)^{O_I(i)+O_{I\cup\{i\}}(j)}\,
m_{A,(I\cup\{i\})\setminus\{j\}}\,\partial_j\bigl(m_{A,I}\, v_i\bigr)
\]
(setting I' = I ∪ {i})
\[
= \sum_{A,\,|I'|\ge 2,\,i,j} \mathbf{1}_{\{i,j\in I'\}}\,(-1)^{O_{I'}(i)+O_{I'}(j)}\,
m_{A,\,I'\setminus\{j\}}\,\partial_j\bigl(m_{A,\,I'\setminus\{i\}}\, v_i\bigr)
\]
(and, since \(m_{A,\,I'\setminus\{i\}}\, v_i = m_{A,\,I'\setminus\{i\}}\,(\omega_i + m_{\alpha i} d_\alpha)/\tau\))
\[
= \sum_{A,\,|I'|\ge 2,\,i,j} \mathbf{1}_{\{i,j\in I'\}}\,(-1)^{O_{I'}(i)+O_{I'}(j)}\,
m_{A,\,I'\setminus\{j\}}\,\partial_j\Bigl(\frac{m_{A,\,I'\setminus\{i\}}\,\omega_i + m_{A,\,I'\setminus\{i\}}\, m_{\alpha i} d_\alpha}{\tau}\Bigr).
\]

Also, we have
\[
\Sigma_4 + \Sigma_5 = -\sum_{A,I,\alpha,i} \mathbf{1}_{\{\alpha\in A,\, i\in I\}}\,(-1)^{O_A(\alpha)+O_I(i)}\,
m_{A\setminus\{\alpha\},\,I\setminus\{i\}}\,\partial_i\bigl(m_{A,I}\, d_\alpha\bigr)
\]
(setting A' = A \ {α})
\[
= -\tau\,\partial_i\bigl(m_{\alpha i} d_\alpha\bigr)
- \sum_{A',\,|I|\ge 2,\,\alpha,i} \mathbf{1}_{\{\alpha\notin A',\, i\in I\}}\,(-1)^{O_{A'}(\alpha)+O_I(i)}\,
m_{A',\,I\setminus\{i\}}\,\partial_i\bigl(m_{A'\cup\{\alpha\},\,I}\, d_\alpha\bigr).
\]
Since
\[
\mathbf{1}_{\{\alpha\notin A'\}}\, m_{A'\cup\{\alpha\},\,I}\, d_\alpha
= \frac{1}{\tau}\Bigl(\sum_{j\in I} (-1)^{O_{A'}(\alpha)+O_I(j)}\, m_{A',\,I\setminus\{j\}}\, m_{\alpha j} - \varphi^\alpha_{A',I}\Bigr)\, d_\alpha,
\]
we have
\[
\Sigma_4 + \Sigma_5 = -\tau\,\partial_i\bigl(m_{\alpha i} d_\alpha\bigr)
+ \sum_{A',\,|I|\ge 2,\,\alpha,i} \mathbf{1}_{\{i\in I\}}\,(-1)^{O_{A'}(\alpha)+O_I(i)}\,
m_{A',\,I\setminus\{i\}}\,\partial_i\Bigl(\frac{\varphi^\alpha_{A',I}\, d_\alpha}{\tau}\Bigr)
- \sum_{A',\,|I|\ge 2,\,\alpha,i,j} \mathbf{1}_{\{i,j\in I\}}\,(-1)^{O_I(i)+O_I(j)}\,
m_{A',\,I\setminus\{i\}}\,\partial_i\Bigl(\frac{m_{A',\,I\setminus\{j\}}\, m_{\alpha j}\, d_\alpha}{\tau}\Bigr).
\]
We find that the last term cancels when we add it up with Σ_3 + Σ_6, so we have
\[
\partial_t \lambda = -v_j\,\partial_j \lambda + \tau\,\partial_i \omega_i
+ \sum_{A,\,|I'|\ge 2,\,i,j} \mathbf{1}_{\{i,j\in I'\}}\,(-1)^{O_{I'}(i)+O_{I'}(j)}\,
m_{A,\,I'\setminus\{j\}}\,\partial_j\Bigl(\frac{m_{A,\,I'\setminus\{i\}}\,\omega_i}{\tau}\Bigr)
+ \sum_{A',\,|I|\ge 2,\,\alpha,i} \mathbf{1}_{\{i\in I\}}\,(-1)^{O_{A'}(\alpha)+O_I(i)}\,
m_{A',\,I\setminus\{i\}}\,\partial_i\Bigl(\frac{\varphi^\alpha_{A',I}\, d_\alpha}{\tau}\Bigr).
\]

Equation for φ^α_{A,I}. Now let us find the equation for φ^α_{A,I}. We only consider the case |A| ≥ 2. We have
\[
\partial_t \varphi^\alpha_{A,I}
= \partial_t\Bigl(\sum_{i\in I} (-1)^{O_A(\alpha)+O_I(i)}\, m_{A,\,I\setminus\{i\}}\, m_{\alpha i}
- \mathbf{1}_{\{\alpha\notin A\}}\,\tau\, m_{A\cup\{\alpha\},\,I}\Bigr)
\]
\[
= \sum_{i\in I} (-1)^{O_A(\alpha)+O_I(i)}\, m_{\alpha i}\,\bigl(m_{A,\,I\setminus\{i\}}\,\partial_j v_j - v_j\,\partial_j m_{A,\,I\setminus\{i\}}\bigr)
+ \Sigma_7 + \Sigma_8
- \sum_{i\in I} (-1)^{O_A(\alpha)+O_I(i)}\, m_{A,\,I\setminus\{i\}}\,
\bigl(v_j\,\partial_j m_{\alpha i} + m_{\alpha j}\,\partial_i v_j - m_{\alpha i}\,\partial_j v_j + \tau\,\partial_i d_\alpha\bigr)
\]
\[
- \mathbf{1}_{\{\alpha\notin A\}}\, m_{A\cup\{\alpha\},\,I}\,\bigl(\tau\,\partial_j v_j - v_j\,\partial_j \tau\bigr)
- \mathbf{1}_{\{\alpha\notin A\}}\,\tau\,\bigl(m_{A\cup\{\alpha\},\,I}\,\partial_j v_j - v_j\,\partial_j m_{A\cup\{\alpha\},\,I}\bigr)
+ \Sigma_9 + \Sigma_{10}
\]
\[
= 2\varphi^\alpha_{A,I}\,\partial_j v_j - v_j\,\partial_j \varphi^\alpha_{A,I}
- \sum_{i\in I} (-1)^{O_A(\alpha)+O_I(i)}\, m_{A,\,I\setminus\{i\}}\,\bigl(m_{\alpha j}\,\partial_i v_j + \tau\,\partial_i d_\alpha\bigr)
+ \Sigma_7 + \Sigma_8 + \Sigma_9 + \Sigma_{10},
\]
where
\[
\Sigma_7 = -\sum_{i,j,k} \mathbf{1}_{\{i\ne j\in I,\; k\notin I\setminus\{i,j\}\}}\,
(-1)^{O_A(\alpha)+O_I(i)+O_{I\setminus\{i\}}(j)+O_{I\setminus\{i,j\}}(k)}\,
m_{A,(I\setminus\{i,j\})\cup\{k\}}\, m_{\alpha i}\,\partial_j v_k,
\]
\[
\Sigma_8 = -\sum_{\beta,i,j} \mathbf{1}_{\{\beta\in A,\; i\ne j\in I\}}\,
(-1)^{O_A(\alpha)+O_I(i)+O_A(\beta)+O_{I\setminus\{i\}}(j)}\,
m_{A\setminus\{\beta\},\,I\setminus\{i,j\}}\, m_{\alpha i}\,\partial_j d_\beta,
\]
\[
\Sigma_9 = \sum_{j,k} \mathbf{1}_{\{\alpha\notin A,\; j\in I,\; k\notin I\setminus\{j\}\}}\,
(-1)^{O_{I\setminus\{j\}}(k)+O_I(j)}\,\tau\, m_{A\cup\{\alpha\},\,(I\setminus\{j\})\cup\{k\}}\,\partial_j v_k.
\]
