
SEMICLASSICAL PARAMETRIX FOR THE MAXWELL EQUATION AND APPLICATIONS TO THE ELECTROMAGNETIC TRANSMISSION EIGENVALUES



HAL Id: hal-03143832

https://hal.archives-ouvertes.fr/hal-03143832

Submitted on 17 Feb 2021

HAL is a multi-disciplinary open access archive for the deposit and dissemination of sci-entific research documents, whether they are pub-lished or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.



Georgi Vodev

To cite this version:

Georgi Vodev. SEMICLASSICAL PARAMETRIX FOR THE MAXWELL EQUATION AND APPLICATIONS TO THE ELECTROMAGNETIC TRANSMISSION EIGENVALUES. Research in the Mathematical Sciences, Springer, in press. ⟨hal-03143832⟩


Abstract. We introduce an analog of the Dirichlet-to-Neumann map for the Maxwell equation in a bounded domain. We show that it can be approximated by a pseudodifferential operator on the boundary with a matrix-valued symbol and we compute the principal symbol. As a consequence, we obtain a parabolic region free of the transmission eigenvalues associated to the Maxwell equation.

Key words: Maxwell equation, semiclassical parametrix, transmission eigenvalues.

1. Introduction

Let Ω ⊂ R^3 be a bounded, connected domain with a C^∞-smooth boundary Γ = ∂Ω, and

consider the Maxwell equation

(1.1)  ∇ × E = iλµ(x)H in Ω,  ∇ × H = −iλε(x)E in Ω,  ν × E = f on Γ,

where λ ∈ C, |λ| ≫ 1, ν = (ν_1, ν_2, ν_3) denotes the Euclidean unit normal to Γ, and µ, ε ∈ C^∞(Ω) are scalar-valued, strictly positive functions. The functions E = (E_1, E_2, E_3) ∈ C^3 and H = (H_1, H_2, H_3) ∈ C^3 denote the electric and magnetic fields, respectively. The equation (1.1) describes the propagation of electromagnetic waves in Ω with frequency λ, moving with speed (εµ)^{-1/2}. Recall that given two vectors a = (a_1, a_2, a_3) and b = (b_1, b_2, b_3), a × b denotes the vector (a_2b_3 − a_3b_2, a_3b_1 − a_1b_3, a_1b_2 − a_2b_1), which is perpendicular to both a and b. Thus we have

∇ × E = (∂_{x_2}E_3 − ∂_{x_3}E_2, ∂_{x_3}E_1 − ∂_{x_1}E_3, ∂_{x_1}E_2 − ∂_{x_2}E_1)

and similarly for ∇ × H. Throughout this paper, given s ∈ R we will denote by H^s(Γ) the Sobolev space H^s(Γ; C^3). Introduce the spaces

H_t^s(Γ) := {f ∈ H^s(Γ) : ⟨ν(x), f(x)⟩ = 0}, s = 0, 1,

where ⟨ν, f⟩ := ν_1f_1 + ν_2f_2 + ν_3f_3. In view of Theorem 3.1 we can introduce the operator

N(λ) : H_t^1(Γ) → H_t^0(Γ)

defined by

N(λ)f = ν × H|_Γ,

which can be considered as an analog of the Dirichlet-to-Neumann map. Set h = |Re λ|^{−1} if |Re λ| ≥ |Im λ| and h = |Im λ|^{−1} if |Im λ| ≥ |Re λ|, z = hλ and θ = |Im z| ≤ 1. Clearly, in the first case we have z = 1 + iθ, while in the second case we have θ = 1. We would like to approximate the operator N(λ) by a matrix-valued h-ΨDO. It is proved in [8], [10] that the

(3)

Dirichlet-to-Neumann operator associated to the Helmholtz equation with refraction index εµ can be approximated by Op_h(ρ), where

ρ(x′, ξ′, z) = √(−r_0(x′, ξ′) + z^2(ε_0µ_0)(x′)),  Im ρ > 0,  (x′, ξ′) ∈ T*Γ,

where ε_0 = ε|_Γ, µ_0 = µ|_Γ, and r_0 ≥ 0 is the principal symbol of the operator −∆_Γ. Here ∆_Γ denotes the negative Laplace-Beltrami operator on Γ with the Riemannian metric induced by the Euclidean one. It is well-known (see Section 2) that r_0 = ⟨β, β⟩, where β = β(x′, ξ′) ∈ R^3 is a vector-valued homogeneous polynomial of order one in ξ′ which is perpendicular to the normal ν(x′), that is, ⟨β, ν⟩ = 0. Set

m = (zµ_0)^{−1}(ρI + ρ^{−1}B),

where I is the identity 3 × 3 matrix, while the matrix B is defined by Bg = ⟨β, g⟩β, g ∈ R^3.
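In coordinates, the map g ↦ ⟨β, g⟩β is realized by the rank-one matrix ββ^T, so m = (zµ_0)^{−1}(ρI + ρ^{−1}ββ^T). A minimal numerical sketch of this (all concrete values below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Illustrative values: beta is any real 3-vector, g arbitrary.
beta = np.array([0.7, -0.3, 0.0])
g = np.array([1.0, 2.0, -1.0])

# B g = <beta, g> beta is realized by the rank-one matrix B = beta beta^T.
B = np.outer(beta, beta)
print(np.allclose(B @ g, np.dot(beta, g) * beta))   # True

# The symbol m = (z mu0)^{-1} (rho I + rho^{-1} B) for sample parameters.
z, mu0, eps0 = 1.0 + 0.3j, 0.9, 1.3
r0 = beta @ beta
rho = np.sqrt(z**2 * eps0 * mu0 - r0 + 0j)          # rho^2 = z^2 eps0 mu0 - r0
if rho.imag < 0:                                    # branch with Im rho > 0
    rho = -rho
m = (rho * np.eye(3) + B / rho) / (z * mu0)
print(np.round(m, 3))
```

Since B is symmetric, so is the matrix m.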

Our main result is the following

Theorem 1.1. Let θ ≥ h^{2/5−ǫ}, where 0 < ǫ ≪ 1 is arbitrary. Then for every f ∈ H_t^1 we have the estimate

(1.2)  ‖N(λ)f − Op_h(m + hm̃)(ν × f)‖_{H^0} ≲ hθ^{−5/2}‖f‖_{H^{−1}},

where m̃ ∈ C^∞(T*Γ) is a matrix-valued function independent of h, belonging to the space S^0_{0,1} uniformly in z, and such that µ_0m̃ is independent of ε and µ.

Hereafter the Sobolev spaces are equipped with the h-semiclassical norm. Clearly, the estimate (1.2) provides a good approximation of the operator N(λ) as long as θ ≥ h^{2/5−ǫ}. It also implies the following improvement upon the estimate (3.4).

Corollary 1.2. Let θ ≥ h^{2/5−ǫ}. Then for every f ∈ H_t^1 we have the estimate

(1.3)  ‖N(λ)f‖_{H^0} ≲ θ^{−1/2}‖f‖_{H^1}.

Note that analogous estimates for the Dirichlet-to-Neumann operator associated to the Helmholtz equation are proved in [8], [10] for θ ≥ h^{1/2−ǫ}, in [12] for θ ≥ h^{2/3−ǫ}, and in [9] for θ ≥ h^{1−ǫ}, 0 < ǫ ≪ 1 being arbitrary. In the last case it is assumed that the boundary is strictly concave. In all these papers the approximation of the Dirichlet-to-Neumann map is used to get parabolic regions free of transmission eigenvalues.

To prove Theorem 1.1 we build in Section 4 a semiclassical parametrix near the boundary for the solutions to the equation (1.1). It takes the form of oscillatory integrals with a complex-valued phase function ϕ satisfying the eikonal equation mod O(x_1^N) (see (4.5)), where N ≫ 1 is arbitrary and 0 < x_1 ≪ 1 denotes the normal variable near the boundary, that is, the distance to Γ. The amplitudes satisfy some kind of transport equations mod O(x_1^N) (see (4.2)). Thus the parametrix satisfies the Maxwell equation modulo an error term which is given by oscillatory integrals with amplitudes of the form O(x_1^N) + O(h^N). To estimate the difference between the exact solution to equation (1.1) and its parametrix we use the a priori estimate (3.5). Note that there exists a different approach, suggested in [2], which could probably lead to (1.2) as well. It consists of using the results in [8], [10] to approximate the normal derivatives −ih∂_νE|_Γ and −ih∂_νH|_Γ by Op_h(ρ)E|_Γ and Op_h(ρ)H|_Γ. Thus the equation (1.1) can be reduced to a system of h-ΨDOs on Γ by restricting the equations in (1.1) to the boundary.

In analogy with the Helmholtz equation, Theorem 1.1 can be used to study the location in the complex plane of the transmission eigenvalues associated to the Maxwell equation (see Section 5). It can also be used to study the complex eigenvalues associated to the Maxwell equation with dissipative boundary conditions like the one considered in [2].


2. Preliminaries

We will first introduce the spaces of symbols which will play an important role in our analysis and will recall some basic properties of the h-ΨDOs. Given k ∈ R, δ_1, δ_2 ≥ 0, we denote by S^k_{δ_1,δ_2} the space of all functions a ∈ C^∞(T*Γ), which may depend on the semiclassical parameter h, satisfying

|∂^α_{x′}∂^β_{ξ′} a(x′, ξ′, h)| ≤ C_{α,β}⟨ξ′⟩^{k−δ_1|α|−δ_2|β|}

for all multi-indices α and β, with constants C_{α,β} independent of h. More generally, given a function ω > 0 on T*Γ, we denote by S^k_{δ_1,δ_2}(ω) the space of all functions a ∈ C^∞(T*Γ), which may depend on the semiclassical parameter h, satisfying

|∂^α_{x′}∂^β_{ξ′} a(x′, ξ′, h)| ≤ C_{α,β} ω^{k−δ_1|α|−δ_2|β|}

for all multi-indices α and β, with constants C_{α,β} independent of h and ω. Thus S^k_{δ_1,δ_2} = S^k_{δ_1,δ_2}(⟨ξ′⟩). Given a matrix-valued symbol a, we will say that a ∈ S^k_{δ_1,δ_2} if all entries of a belong to S^k_{δ_1,δ_2}. Also, given k ∈ R, 0 ≤ δ < 1/2, we denote by S^k_δ the space of all functions a ∈ C^∞(T*Γ), which may depend on the semiclassical parameter h, satisfying

|∂^α_{x′}∂^β_{ξ′} a(x′, ξ′, h)| ≤ C_{α,β} h^{−δ(|α|+|β|)}⟨ξ′⟩^{k−|β|}

for all multi-indices α and β, with constants C_{α,β} independent of h. Again, given a matrix-valued symbol a, we will say that a ∈ S^k_δ if all entries of a belong to S^k_δ. The h-ΨDO with a symbol a is defined by

(Op_h(a)f)(x′) = (2πh)^{−2} ∫∫ e^{−(i/h)⟨x′−y′,ξ′⟩} a(x′, ξ′, h) f(y′) dξ′dy′.

If a ∈ S^k_{0,1}, then the operator Op_h(a) : H^k_h(Γ) → L^2(Γ) is bounded uniformly in h, where

‖u‖_{H^k_h(Γ)} := ‖Op_h(⟨ξ′⟩^k)u‖_{L^2(Γ)}.
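For intuition, here is a toy illustration of semiclassical quantization on a periodic grid (the flat torus) rather than on Γ, with the standard sign convention: for a symbol a = a(ξ) independent of x′, Op_h(a) is the Fourier multiplier u ↦ F^{−1}[a(hξ)û]. With a(ξ) = ξ this reproduces −ih d/dx. This is a sketch for orientation only, not the boundary operator of the paper.

```python
import numpy as np

# Periodic grid on [0, 2*pi): Fourier frequencies xi are integers.
n = 256
x = 2 * np.pi * np.arange(n) / n
xi = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies 0..n/2-1, -n/2..-1
h = 0.05

def op_h(symbol, u):
    """Semiclassical quantization of an x-independent symbol a(xi):
    Op_h(a)u = F^{-1}[ a(h*xi) * F[u] ]."""
    return np.fft.ifft(symbol(h * xi) * np.fft.fft(u))

u = np.sin(3 * x)
# With a(xi) = xi, Op_h(a) = -i h d/dx, so Op_h(a) sin(3x) = -3ih cos(3x).
v = op_h(lambda s: s, u)
print(np.max(np.abs(v - (-3j * h * np.cos(3 * x)))))   # ~0

# The h-semiclassical Sobolev norm ||u||_{H_h^k} = ||Op_h(<xi>^k) u||_{L^2}:
k = 1
norm = np.sqrt(np.mean(np.abs(op_h(lambda s: (1 + s**2) ** (k / 2), u)) ** 2))
print(norm)
```

Note how for fixed u the H_h^1 norm stays bounded as h → 0, since the multiplier ⟨hξ⟩ is O(1) on each fixed frequency.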

It is also well-known (e.g. see Section 7 of [3]) that, if a ∈ S^0_δ, 0 ≤ δ < 1/2, then Op_h(a) : H^s_h(Γ) → H^s_h(Γ) is bounded uniformly in h. More generally, we have the following (see Section 2 of [8]):

Proposition 2.1. Let h^{ℓ_±}a_± ∈ S^{±k}_δ, 0 ≤ δ < 1/2, where ℓ_± ≥ 0 are some numbers. Assume in addition that the functions a_± satisfy

(2.1)  |∂^{α_1}_{x′}∂^{β_1}_{ξ′} a_+(x′, ξ′)| |∂^{α_2}_{x′}∂^{β_2}_{ξ′} a_−(x′, ξ′)| ≤ κ C_{α_1,β_1,α_2,β_2} h^{−(|α_1|+|β_1|+|α_2|+|β_2|)/2}

for all multi-indices α_1, β_1, α_2, β_2 such that |α_j| + |β_j| ≥ 1, j = 1, 2, with constants C_{α_1,β_1,α_2,β_2} > 0 independent of h and κ. Then we have

(2.2)  ‖Op_h(a_+)Op_h(a_−) − Op_h(a_+a_−)‖_{L^2(Γ)→L^2(Γ)} ≲ h + κ.

Let η ∈ C^∞(T*Γ) be such that η = 1 for r_0 ≤ C_0, η = 0 for r_0 ≥ 2C_0, where C_0 > 0 does not depend on h. It is easy to see (e.g. see Lemma 3.1 of [8]) that taking C_0 big enough we can arrange

C_1θ^{1/2} ≤ |ρ| ≤ C_2,  Im ρ ≥ C_3|θ||ρ|^{−1} ≥ C_4|θ|

on supp η, and corresponding bounds on supp(1 − η), with some constants C_j > 0. We will say that a function a ∈ C^∞(T*Γ) belongs to S^{k_1}_{δ_1,δ_2}(ω_1) + S^{k_2}_{δ_3,δ_4}(ω_2) if ηa ∈ S^{k_1}_{δ_1,δ_2}(ω_1) and (1 − η)a ∈ S^{k_2}_{δ_3,δ_4}(ω_2). It is shown in Section 3 of

[8] (see Lemma 3.2 of [8]) that

(2.3)  ρ^k ∈ S^k_{2,2}(|ρ|) + S^k_{0,1}(|ρ|) ⊂ S^{−k̃/2}_{1,1}(θ) + S^k_{0,1} ⊂ θ^{−k̃/2}S^{−N}_{1/2−ǫ} + S^k_{0,1} ⊂ θ^{−k̃/2}S^k_{1/2−ǫ}

as long as θ ≥ h^{1/2−ǫ}, uniformly in θ and h, where k̃ = 0 if k ≥ 0, k̃ = −k if k ≤ 0, and N ≫ 1 is arbitrary. Proposition 2.1 implies the following

Proposition 2.2. Let h^{1/2−ǫ} ≤ θ_± ≤ 1, ℓ_± ≥ 0, and let

a_± ∈ S^{−ℓ_±}_{1,1}(θ_±) + S^{k_±}_{0,1} ⊂ θ_±^{−ℓ_±}S^{k_±}_{1/2−ǫ}.

Then we have

(2.4)  ‖Op_h(a_+)Op_h(a_−) − Op_h(a_+a_−)‖_{H^k_h(Γ)→L^2(Γ)} ≲ hθ_+^{−1−ℓ_+}θ_−^{−1−ℓ_−},

where k = k_+ + k_− − 1.

Proof. Let η_0, η_1, η_2 ∈ C_0^∞(T*Γ) be such that η_1 = 1 on supp η, η_2 = 1 on supp η_1, η = 1 on supp η_0. Then we have

Op_h(a_+a_−) − Op_h(ηa_+η_1a_−)Op_h(η_2) − Op_h((1 − η)a_+(1 − η_0)a_−)
= Op_h(ηa_+η_1a_−)Op_h(1 − η_2) = O(h^∞) : H^k_h(Γ) → L^2(Γ),

Op_h(a_+)Op_h(a_−) − Op_h(ηa_+)Op_h(η_1a_−)Op_h(η_2) − Op_h((1 − η)a_+)Op_h((1 − η_0)a_−)
= Op_h(ηa_+)Op_h((1 − η_1)a_−) + Op_h((1 − η)a_+)Op_h(η_0a_−)
+ Op_h(ηa_+)Op_h(η_1a_−)Op_h(1 − η_2) = O(h^∞) : H^k_h(Γ) → L^2(Γ).

By assumption, ηa_+ ∈ S^{−ℓ_+}_{1,1}(θ_+), η_1a_− ∈ S^{−ℓ_−}_{1,1}(θ_−), which implies that the functions ηa_+ and η_1a_− satisfy the condition (2.1) with κ = hθ_+^{−1−ℓ_+}θ_−^{−1−ℓ_−}. Therefore, by (2.2) we have

‖(Op_h(ηa_+η_1a_−) − Op_h(ηa_+)Op_h(η_1a_−))Op_h(η_2)f‖_{L^2} ≲ hθ_+^{−1−ℓ_+}θ_−^{−1−ℓ_−}‖Op_h(η_2)f‖_{L^2} ≲ hθ_+^{−1−ℓ_+}θ_−^{−1−ℓ_−}‖f‖_{H^k_h}.

On the other hand, (1 − η)a_+ ∈ S^{k_+}_{0,1}, (1 − η_0)a_− ∈ S^{k_−}_{0,1}. The standard pseudodifferential calculus gives that, mod O(h^∞), the operator

Op_h((1 − η)a_+(1 − η_0)a_−) − Op_h((1 − η)a_+)Op_h((1 − η_0)a_−)

is an h-ΨDO with symbol hω, ω ∈ S^k_{0,1} uniformly in h, where k = k_+ + k_− − 1. Therefore,

‖Op_h((1 − η)a_+(1 − η_0)a_−)f − Op_h((1 − η)a_+)Op_h((1 − η_0)a_−)f‖_{L^2} ≲ h‖f‖_{H^k_h}.

Clearly, (2.4) follows from the above estimates. ✷

We also have

Proposition 2.3. Let h^{1/2−ǫ} ≤ θ ≤ 1, ℓ ≥ 0, and let

a ∈ S^{−ℓ}_{1,1}(θ) + S^k_{0,1} ⊂ θ^{−ℓ}S^k_{1/2−ǫ}.

Then we have

(2.5)  ‖Op_h(a)‖_{H^k_h(Γ)→L^2(Γ)} ≲ θ^{−ℓ}.

Note that these propositions remain valid for matrix-valued symbols.

We will next write the gradient ∇ in the local normal geodesic coordinates near the boundary (see also Section 2 of [2]). Fix a point y^0 ∈ Γ and let U ⊂ R^3 be a small neighbourhood of y^0. Let U_0 be a small neighbourhood of x′ = 0 in R^2 and let x′ = (x_2, x_3) be local coordinates in U_0. Then there exists a diffeomorphism s : U_0 → U ∩ Γ. Let y = (y_1, y_2, y_3) ∈ U ∩ Ω, denote by y′ ∈ Γ the closest point from y to Γ and let ν′(y′) be the unit inner normal to Γ at y′. Set x_1 = dist(y, Γ), x′ = s^{−1}(y′) and ν(x′) = ν′(s(x′)) = (ν_1(x′), ν_2(x′), ν_3(x′)). We have

y = s(x′) + x_1ν(x′)

and hence

∂/∂y_j = ν_j(x′) ∂/∂x_1 + Σ_{k=2}^{3} α_{j,k}(x) ∂/∂x_k,  where α_{j,k} = ∂x_k/∂y_j,

provided x_1 is small enough. Note that the matrix (∂x_k/∂y_j), 1 ≤ k, j ≤ 3, is the inverse of (∂y_k/∂x_j), 1 ≤ k, j ≤ 3. In particular, this implies the identities

Σ_{j=1}^{3} ν_j(x′)α_{j,k}(x) = 0, k = 2, 3.

Set ζ^1 = (1, 0, 0), ζ^2 = (0, 1, 0), ζ^3 = (0, 0, 1). Clearly, we can write the Euclidean gradient ∇ = (∂_{y_1}, ∂_{y_2}, ∂_{y_3}) in the coordinates x = (x_1, x′) as

∇ = γ(x)∇_x = ν(x′) ∂/∂x_1 + Σ_{k=2}^{3} γ(x)ζ^k ∂/∂x_k,

where γ is a smooth matrix-valued function such that γ(x)ζ^1 = ν(x′), γ(x)ζ^k = (α_{1,k}, α_{2,k}, α_{3,k}), k = 2, 3. Notice that the above identities can be rewritten in the form

(2.6)  ⟨ν(x′), γ(x)ζ^k⟩ = 0, k = 2, 3.

Let (ξ_1, ξ′), ξ′ = (ξ_2, ξ_3), be the dual variable of (x_1, x′). Then the symbol of the operator −i∇|_{x_1=0} in the coordinates (x, ξ) takes the form ξ_1ν(x′) + β(x′, ξ′), where

β(x′, ξ′) = Σ_{k=2}^{3} ξ_kγ(0, x′)ζ^k.

Thus we get that the principal symbol of −∆|_{x_1=0} is equal to

ξ_1^2 + ⟨β(x′, ξ′), β(x′, ξ′)⟩.

This implies that the principal symbol, r_0(x′, ξ′), of the positive Laplace-Beltrami operator on Γ is equal to ⟨β(x′, ξ′), β(x′, ξ′)⟩. Note also that (2.6) implies the identity

(2.7)  ⟨ν(x′), β(x′, ξ′)⟩ = 0

for all (x′, ξ′).
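As a concrete illustration of (2.6)-(2.7), here is a numerical sketch on the unit sphere (the chart and the sample point are assumptions chosen for the example, not from the paper): the rows of (∂y/∂x)^{−1} corresponding to the tangential coordinates are automatically orthogonal to ν.

```python
import numpy as np

# Chart s(x2, x3) on the unit sphere and the inner unit normal nu = -s.
def s(x2, x3):
    return np.array([np.sin(x2) * np.cos(x3), np.sin(x2) * np.sin(x3), np.cos(x2)])

x2, x3 = 0.8, 0.4
nu = -s(x2, x3)                      # inner normal, |nu| = 1

# Jacobian J = dy/dx at x1 = 0, where y = s(x') + x1*nu(x').
eps = 1e-6
ds2 = (s(x2 + eps, x3) - s(x2 - eps, x3)) / (2 * eps)
ds3 = (s(x2, x3 + eps) - s(x2, x3 - eps)) / (2 * eps)
J = np.column_stack([nu, ds2, ds3])

# Rows k = 2, 3 of J^{-1} are the vectors gamma(0,x')zeta^k = (alpha_{1,k}, alpha_{2,k}, alpha_{3,k}).
Jinv = np.linalg.inv(J)
print(abs(nu @ Jinv[1]), abs(nu @ Jinv[2]))      # both ~0, i.e. (2.6)

# Hence beta = xi2*Jinv[1] + xi3*Jinv[2] satisfies <nu, beta> = 0, i.e. (2.7).
xi2, xi3 = 1.3, -0.7
beta = xi2 * Jinv[1] + xi3 * Jinv[2]
print(abs(nu @ beta))                             # ~0
```

The orthogonality holds exactly in exact arithmetic, since row k of J^{−1} pairs to zero with column 1 of J, which is ν.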

In what follows in this section we will solve the linear system

(2.8)  ψ_0 × a − zµ_0b = a^♯,  ψ_0 × b + zε_0a = b^♯,  ν × a = g,

where ψ_0 = ρν − β and ⟨g, ν⟩ = 0. To this end, we rewrite it in the form

(2.9)  β × a + zµ_0b = ρg − a^♯,  ρν × b − β × b + zε_0a = b^♯,  ν × a = g.

Using the identity −β × (β × a) = ⟨β, β⟩a − ⟨β, a⟩β, we obtain

zρµ_0ν × b = zµ_0β × b − z^2ε_0µ_0a + zµ_0b^♯
= −β × (β × a) − z^2ε_0µ_0a + β × (ρg − a^♯) + zµ_0b^♯
= (⟨β, β⟩ − z^2ε_0µ_0)a − ⟨β, a⟩β + β × (ρg − a^♯) + zµ_0b^♯
= (r_0 − z^2ε_0µ_0)a − ⟨β, a⟩β + β × (ρg − a^♯) + zµ_0b^♯
= −ρ^2a − ⟨β, a⟩β + β × (ρg − a^♯) + zµ_0b^♯.

Taking the scalar product of this identity with ν and using that ⟨ν, β⟩ = 0 and ⟨ν, ν × b⟩ = 0, we get

⟨ν, a⟩ = ρ^{−1}⟨ν, β × g⟩ − ρ^{−2}⟨β × a^♯, ν⟩ + zµ_0ρ^{−2}⟨b^♯, ν⟩.

On the other hand, a_t = a − ⟨ν, a⟩ν satisfies ν × a_t = ν × a = g. Hence

ν × g = ν × (ν × a_t) = −⟨ν, ν⟩a_t + ⟨ν, a_t⟩ν = −a_t.

Thus we find

a = −ν × g + ρ^{−1}⟨ν, β × g⟩ν − ρ^{−2}⟨β × a^♯, ν⟩ν + zµ_0ρ^{−2}⟨b^♯, ν⟩ν,

zµ_0b = ρg + β × (ν × g) − ρ^{−1}⟨ν, β × g⟩β × ν
− a^♯ + ρ^{−2}⟨β × a^♯, ν⟩β × ν − zµ_0ρ^{−2}⟨b^♯, ν⟩β × ν,

zµ_0ν × b = −ρa + β × g + ρ^{−1}⟨β, ν × g⟩β − ρ^{−1}β × a^♯ + zρ^{−1}µ_0b^♯
= ρν × g + β × g − ⟨ν, β × g⟩ν + ρ^{−1}⟨β, ν × g⟩β
− ρ^{−1}β × a^♯ + ρ^{−1}⟨β × a^♯, ν⟩ν + zρ^{−1}µ_0b^♯ − zρ^{−1}µ_0⟨b^♯, ν⟩ν.

Since ⟨ν, g⟩ = 0 and ⟨ν, β⟩ = 0, we have

β × g − ⟨ν, β × g⟩ν = 0.

Thus we obtain

zµ_0ν × b = ρν × g + ρ^{−1}⟨β, ν × g⟩β − ρ^{−1}β × a^♯ + ρ^{−1}⟨β × a^♯, ν⟩ν + zρ^{−1}µ_0b^♯ − zρ^{−1}µ_0⟨b^♯, ν⟩ν.
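The algebra above can be sanity-checked numerically in the homogeneous case a^♯ = b^♯ = 0, which is the case used later in the parametrix construction. The following sketch (with illustrative values for ν, β, g, z, ε_0, µ_0, none taken from the paper) verifies all three equations of (2.8):

```python
import numpy as np

# Illustrative data: unit normal nu, tangential beta and g,
# i.e. <nu, beta> = <nu, g> = 0 (bilinear pairing).
nu = np.array([0.0, 0.0, 1.0])
beta = np.array([0.7, -0.3, 0.0])
g = np.array([0.2, 1.1, 0.0])
eps0, mu0 = 1.3, 0.9
z = 1.0 + 0.4j

r0 = beta @ beta
rho = np.sqrt(z**2 * eps0 * mu0 - r0 + 0j)   # rho^2 = z^2*eps0*mu0 - r0
if rho.imag < 0:                             # branch with Im rho > 0
    rho = -rho
psi0 = rho * nu - beta                       # psi_0 = rho*nu - beta

# Closed-form solution of (2.8) with a_sharp = b_sharp = 0:
a = -np.cross(nu, g) + (nu @ np.cross(beta, g)) / rho * nu
b = (rho * g - np.cross(beta, a)) / (z * mu0)

# Residuals of the three equations (all ~0):
print(np.linalg.norm(np.cross(psi0, a) - z * mu0 * b))
print(np.linalg.norm(np.cross(psi0, b) + z * eps0 * a))
print(np.linalg.norm(np.cross(nu, a) - g))
```

The second equation holds precisely because ρ satisfies the dispersion relation ρ^2 = z^2ε_0µ_0 − r_0; perturbing ρ makes its residual nonzero.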


3. A priori estimates

Let f̃ ∈ H_t^1(Γ) and let the functions U_1, U_2 ∈ L^2(Ω; C^3) be such that div U_1, div U_2 ∈ L^2(Ω) and u_1 := ⟨ν, U_1|_Γ⟩ ∈ L^2(Γ). In this section we will prove a priori estimates for the restrictions on the boundary of the solutions E and H to the Maxwell equation

(3.1)  h∇ × E = izµ(x)H + U_1 in Ω,  h∇ × H = −izε(x)E + U_2 in Ω,  ν × E = f̃ on Γ.

Since ⟨∇, ∇ × E⟩ = 0, the solutions to (3.1) must satisfy the equations

(3.2)  ⟨∇, E⟩ = (izε)^{−1}⟨∇, U_2⟩ − ε^{−1}⟨∇ε, E⟩ in Ω,  ⟨∇, H⟩ = −(izµ)^{−1}⟨∇, U_1⟩ − µ^{−1}⟨∇µ, H⟩ in Ω.

To simplify the notations, in what follows we will denote by ‖·‖ (resp. ‖·‖_0) the norm on L^2(Ω; C^3) (resp. L^2(Γ; C^3)) or on L^2(Ω) (resp. L^2(Γ)). We also set Y = (E, H), U = (U_1, U_2), and define the norms ‖Y‖, ‖U‖ and ‖div U‖ by

‖Y‖^2 = ‖E‖^2 + ‖H‖^2,  ‖U‖^2 = ‖U_1‖^2 + ‖U_2‖^2,  ‖div U‖^2 = ‖div U_1‖^2 + ‖div U_2‖^2.

By the Gauss divergence theorem we have the identity

(3.3)  ∫_Ω ⟨E, ∇ × H⟩ − ∫_Ω ⟨H, ∇ × E⟩ = ∫_Γ ⟨H × E, ν⟩.

We will use (3.3) to prove the following

Theorem 3.1. Let θ > 0 and 0 < h ≪ 1. Suppose that E and H satisfy equation (3.1) with U_1 = U_2 = 0. Then the functions f = E|_Γ, g = H|_Γ satisfy the estimate

(3.4)  ‖f‖_{H^0} + ‖g‖_{H^0} ≲ θ^{−1}‖f̃‖_{H^1}.

Suppose that E and H satisfy equation (3.1) with f̃ = 0. Then the functions f = E|_Γ, g = H|_Γ satisfy the estimate

(3.5)  ‖f‖_{H^0} + ‖g‖_{H^0} ≲ ‖u_1‖_0 + h^{−1/2}θ^{−1}‖U‖ + h^{1/2}‖div U‖.

Proof. We decompose the vector-valued functions f and g as f = f_t + f_n, g = g_t + g_n, where f_n = ⟨ν, f⟩ν, g_n = ⟨ν, g⟩ν. Clearly, we have the identities ⟨f_t, f_n⟩ = ⟨g_t, g_n⟩ = 0 and ν × f = ν × f_t, ν × g = ν × g_t, f_t = −ν × (ν × f), g_t = −ν × (ν × g). Applying (3.3) to the solutions of equation (3.1) leads to the identity

iz̄ ∫_Ω ε|E|^2 − iz ∫_Ω µ|H|^2 = ∫_Ω ⟨H̄, U_1⟩ − ∫_Ω ⟨E, Ū_2⟩ + h ∫_Γ ⟨ḡ_t × f_t, ν⟩.

Taking the real part yields the estimate

(3.6)  ‖Y‖^2 ≲ θ^{−2}‖U‖^2 + hθ^{−1}‖g_t‖_0‖f_t‖_0.

By equation (3.2) we also have

(3.7)  |⟨∇, E⟩| + |⟨∇, H⟩| ≲ |div U| + |Y|.

Restricting the first equation of (3.1) to Γ and taking the scalar product with ν leads to the estimate (3.8).


In the normal coordinates (x_1, x′), x′ ∈ Γ, the gradient takes the form ∇ = γν̃∂_{x_1} + γ∇̃_{x′}, where ν̃ = (1, 0, 0) and ∇̃_{x′} = (0, ∇_{x′}). So we have

∇|_{x_1=0} = γ_0ν̃∂_{x_1} + γ_0∇̃_{x′} = ν∂_{x_1} + γ_0∇̃_{x′},  γ_0(x′) = γ(0, x′).

Hence

⟨ν, h∇ × E⟩|_Γ = h⟨ν, ν × ∂_{x_1}E|_{x_1=0}⟩ + ⟨ν, hγ_0∇̃_{x′} × E|_{x_1=0}⟩
= ⟨ν, hγ_0∇̃_{x′} × f⟩ = ⟨ν, hγ_0∇̃_{x′} × f_t⟩ + h⟨ν, γ_0∇̃_{x′} × f_n⟩.

On the other hand,

⟨ν, γ_0∇̃_{x′} × f_n⟩ = ⟨ν, f⟩⟨ν, γ_0∇̃_{x′} × ν⟩ + ⟨ν, γ_0∇̃_{x′}(⟨ν, f⟩) × ν⟩ = ⟨ν, f⟩⟨ν, γ_0∇̃_{x′} × ν⟩.

Therefore (3.8) gives

(3.9)  ‖g_n‖_0 ≲ ‖f̃‖_{H^1} + ‖u_1‖_0 + h‖f‖_0.

We will now bound the norms of f_n and g_t. Let the function φ_0 ∈ C_0^∞(R) be such that φ_0(σ) = 1 for |σ| ≤ 1, φ_0(σ) = 0 for |σ| ≥ 2, and set φ(σ) = φ_0(σ/δ), where 0 < δ ≪ 1. Then the functions Y^♭ := (E^♭, H^♭) = (φ(x_1)E, φ(x_1)H) satisfy the equation

(3.10)  h(γν̃∂_{x_1} + γ∇̃_{x′}) × E^♭ = izµH^♭ + U_1^♭ in Ω,  h(γν̃∂_{x_1} + γ∇̃_{x′}) × H^♭ = −izεE^♭ + U_2^♭ in Ω,

where U^♭ := (U_1^♭, U_2^♭) satisfies ‖U^♭‖ ≲ ‖U‖ + h‖Y‖. By (3.7) the functions

p = ⟨γν̃, ∂_{x_1}E^♭⟩ + ⟨γ∇̃_{x′}, E^♭⟩,  q = ⟨γν̃, ∂_{x_1}H^♭⟩ + ⟨γ∇̃_{x′}, H^♭⟩

satisfy

(3.11)  |p| + |q| ≲ |div U| + |Y|.

Denote by ⟨·, ·⟩_0 the scalar product in L^2(Γ; C^3) or in L^2(Γ), that is,

⟨a, b⟩_0 = ∫_Γ ⟨a, b̄⟩ if a, b ∈ L^2(Γ; C^3),  ⟨a, b⟩_0 = ∫_Γ a b̄ if a, b ∈ L^2(Γ).

Introduce the functions

F_1(x_1) = ‖γν̃ × E^♭‖_0^2 − ‖⟨γν̃, E^♭⟩‖_0^2,  F_2(x_1) = ‖γν̃ × H^♭‖_0^2 − ‖⟨γν̃, H^♭⟩‖_0^2.

Since ν = γ_0ν̃ = γν̃|_{x_1=0}, we have

F_1(0) = ‖f_t‖_0^2 − ‖f_n‖_0^2,  F_2(0) = ‖g_t‖_0^2 − ‖g_n‖_0^2.

Using equation (3.10) we will calculate the first derivatives F_j′(x_1) = dF_j/dx_1. In view of (3.11), we get

F_1′(x_1) = 2Re⟨γν̃ × ∂_{x_1}E^♭, γν̃ × E^♭⟩_0 + 2Re⟨γ′ν̃ × E^♭, γν̃ × E^♭⟩_0
− 2Re⟨⟨γν̃, ∂_{x_1}E^♭⟩, ⟨γν̃, E^♭⟩⟩_0 − 2Re⟨⟨γ′ν̃, E^♭⟩, ⟨γν̃, E^♭⟩⟩_0
= −2Re⟨γ∇̃_{x′} × E^♭, γν̃ × E^♭⟩_0 + 2h^{−1}Re⟨izµH^♭ + U_1^♭, γν̃ × E^♭⟩_0
+ 2Re⟨⟨γ∇̃_{x′}, E^♭⟩, ⟨γν̃, E^♭⟩⟩_0 − 2Re⟨p, ⟨γν̃, E^♭⟩⟩_0 + O(‖E^♭‖_0^2)
= −2Re⟨γ∇̃_{x′} × E^♭, γν̃ × E^♭⟩_0 + 2Re⟨⟨γ∇̃_{x′}, E^♭⟩, ⟨γν̃, E^♭⟩⟩_0 + R

with a remainder term R satisfying the estimate

|R| ≲ h^{−1}‖Y‖_0^2 + h^{−1}‖U‖_0^2 + ‖E^♭‖_0‖div U_1‖_0.

Clearly, we have a similar expression for F_2′(x_1) as well. Let us see now that

(3.12)  Re⟨γ∇̃_{x′} × E^♭, γν̃ × E^♭⟩_0 − Re⟨⟨γ∇̃_{x′}, E^♭⟩, ⟨γν̃, E^♭⟩⟩_0 = O(‖E^♭‖_0^2).

It suffices to check (3.12) at the symbol level. Let ξ̃′ = (0, ξ′) denote the symbol of −i∇̃_{x′}. We have the identity

⟨γξ̃′ × E^♭, γν̃ × Ē^♭⟩ = ⟨γξ̃′, γν̃⟩⟨E^♭, Ē^♭⟩ − ⟨E^♭, γν̃⟩⟨γξ̃′, Ē^♭⟩ = −⟨E^♭, γν̃⟩⟨γξ̃′, Ē^♭⟩,

where we have used that ⟨γξ̃′, γν̃⟩ = 0 (see (2.6)). Hence

Im⟨γξ̃′ × E^♭, γν̃ × Ē^♭⟩ − Im⟨γξ̃′, E^♭⟩⟨γν̃, Ē^♭⟩ = 0,

which clearly implies (3.12). Thus we conclude

(3.13)  |F_1′(x_1) + F_2′(x_1)| ≲ h^{−1}‖Y‖_0^2 + h^{−1}‖U‖_0^2 + h‖div U‖_0^2.

Since F_j(0) = −∫_0^{2δ} F_j′(x_1)dx_1, we deduce from (3.13)

(3.14)  |F_1(0)| + |F_2(0)| ≲ h^{−1}‖Y‖^2 + h^{−1}‖U‖^2 + h‖div U‖^2.

By (3.6) and (3.14),

‖f_n‖_0^2 + ‖g_t‖_0^2 ≲ ‖f_t‖_0^2 + ‖g_n‖_0^2 + θ^{−1}‖f_t‖_0‖g_t‖_0 + h^{−1}θ^{−2}‖U‖^2 + h‖div U‖^2,

which implies

(3.15)  ‖f_n‖_0^2 + ‖g_t‖_0^2 ≲ θ^{−2}‖f_t‖_0^2 + ‖g_n‖_0^2 + h^{−1}θ^{−2}‖U‖^2 + h‖div U‖^2.

Clearly, the estimates (3.4) and (3.5) follow from (3.9) and (3.15) by taking h small enough. ✷

4. Parametrix construction

We keep the notations from the previous sections and will suppose that θ ≥ h^{2/5−ǫ}, 0 < ǫ ≪ 1. Let (x_1, x′) be the local normal geodesic coordinates introduced in Section 2. Clearly, it suffices to build the parametrix locally. Then the global parametrix is obtained by using a suitable partition of unity on Γ and summing up the corresponding local parametrices. We will be looking for a local parametrix of the solution to equation (1.1) in the form

Ẽ = (2πh)^{−2} ∫∫ e^{(i/h)(⟨y′,ξ′⟩+ϕ(x,ξ′,z))} χ(x, ξ′) a(x, y′, ξ′, z, h) dξ′dy′,

H̃ = (2πh)^{−2} ∫∫ e^{(i/h)(⟨y′,ξ′⟩+ϕ(x,ξ′,z))} χ(x, ξ′) b(x, y′, ξ′, z, h) dξ′dy′,

where χ = φ_0(x_1/δ)φ_0(x_1/|ρ|^3δ),


the function φ_0 being as in the previous section and 0 < δ ≪ 1 a parameter independent of h and θ, which is fixed in Lemma 4.1. We require that Ẽ satisfies the boundary condition ν × Ẽ = f on x_1 = 0, where f ∈ H_t^1. The phase function is of the form

ϕ = Σ_{k=0}^{N−1} x_1^k ϕ_k,  ϕ_0 = −⟨x′, ξ′⟩,  ϕ_1 = ρ,

where N ≫ 1 is an arbitrary integer and the functions ϕ_k, k ≥ 2, are determined from the eikonal equation (4.5). The amplitudes are of the form

a = Σ_{j=0}^{N−1} h^j a_j,  b = Σ_{j=0}^{N−1} h^j b_j.

In what follows we will determine the functions a_j and b_j in terms of f so that (Ẽ, H̃) satisfy the Maxwell equation modulo an error term. We have

e^{−iϕ/h}(h∇ × (e^{iϕ/h}a) − izµe^{iϕ/h}b) = i(γ∇_xϕ) × a − izµb + h(γ∇_x) × a
= Σ_{j=0}^{N−1} h^j (i(γ∇_xϕ) × a_j − izµb_j + (γ∇_x) × a_{j−1}) + h^N(γ∇_x) × a_{N−1},

e^{−iϕ/h}(h∇ × (e^{iϕ/h}b) + izεe^{iϕ/h}a) = i(γ∇_xϕ) × b + izεa + h(γ∇_x) × b
= Σ_{j=0}^{N−1} h^j (i(γ∇_xϕ) × b_j + izεa_j + (γ∇_x) × b_{j−1}) + h^N(γ∇_x) × b_{N−1},

where a_{−1} = b_{−1} = 0. We now let the functions a_j and b_j satisfy the equations

(4.1)  (γ∇_xϕ) × a_0 − zµb_0 = x_1^N Ψ_0,  (γ∇_xϕ) × b_0 + zεa_0 = x_1^N Ψ̃_0,  ν × a_0 = g on x_1 = 0,

where ν = ν(x′) = (ν_1(x′), ν_2(x′), ν_3(x′)) is the unit normal vector at x′ ∈ Γ,

g = −ν(x′) × (ν(y′) × f(y′)) = f(y′) − (ν(x′) − ν(y′)) × (ν(y′) × f(y′)),

and

(4.2)  (γ∇_xϕ) × a_j − zµb_j = i(γ∇_x) × a_{j−1} + x_1^N Ψ_j,  (γ∇_xϕ) × b_j + zεa_j = i(γ∇_x) × b_{j−1} + x_1^N Ψ̃_j,  ν × a_j = 0 on x_1 = 0,

for 1 ≤ j ≤ N − 1. We will be looking for solutions of the form

a_j = Σ_{k=0}^{N−1} x_1^k a_{j,k},  b_j = Σ_{k=0}^{N−1} x_1^k b_{j,k}.

Let us expand the functions µ, ε and γ as

µ(x) = Σ_{k=0}^{N−1} x_1^k µ_k(x′) + x_1^N M(x),  ε(x) = Σ_{k=0}^{N−1} x_1^k ε_k(x′) + x_1^N E(x),  γ(x) = Σ_{k=0}^{N−1} x_1^k γ_k(x′) + x_1^N Θ(x).

Observe that

∇_xϕ = Σ_{k=0}^{N−1} x_1^k e_k with e_0 = (ρ, −ξ_2, −ξ_3),  e_k = ((k + 1)ϕ_{k+1}, ∂_{x_2}ϕ_k, ∂_{x_3}ϕ_k), k ≥ 1.

Hence

γ∇_xϕ = Σ_{k=0}^{N−1} x_1^k ψ_k + x_1^N Θ̃,

where

ψ_0 = γ_0e_0 = ρν − β,  ψ_k = Σ_{ℓ=0}^{k} γ_ℓ e_{k−ℓ}, k ≥ 1,  Θ̃ = Σ_{k=N}^{2N−2} x_1^{k−N} ψ_k + Θ(∇_xϕ).

We will first solve equation (4.1). We let the functions a_{0,0} and b_{0,0} satisfy the equation

(4.3)  ψ_0 × a_{0,0} − zµ_0b_{0,0} = 0,  ψ_0 × b_{0,0} + zε_0a_{0,0} = 0,  ν × a_{0,0} = g.

The equation (4.3) is solved in Section 2 and we have

a_{0,0} = −ν × g + ρ^{−1}⟨ν, β × g⟩ν,

b_{0,0} = ρ(zµ_0)^{−1}g + (zµ_0)^{−1}β × (ν × g) − (zµ_0)^{−1}ρ^{−1}⟨ν, β × g⟩β × ν,

(4.4)  zµ_0ν × b_{0,0} = ρν × g + ρ^{−1}⟨β, ν × g⟩β.

Next we let ϕ satisfy the eikonal equation mod O(x_1^N):

(4.5)  ⟨γ∇_xϕ, γ∇_xϕ⟩ − z^2ε_0µ_0 = x_1^N Φ.

This equation is solved in Section 4 of [8]. The functions ϕ_k, k ≥ 2, are determined uniquely and have the following properties (see Lemma 4.1 of [8]):

Lemma 4.1. We have

(4.6)  ϕ_k ∈ S^{4−3k}_{2,2}(|ρ|) + S^1_{0,1}(|ρ|), k ≥ 1,

(4.7)  ∂_{x_1}^k Φ ∈ S^{2−3N−3k}_{2,2}(|ρ|) + S^2_{0,1}(|ρ|), k ≥ 0,

uniformly in z and 0 ≤ x_1 ≤ 2δ min{1, |ρ|^3}. Moreover, if δ > 0 is small enough, independent of ρ, we have

(4.8)  Im ϕ ≥ x_1 Im ρ/2 for 0 ≤ x_1 ≤ 2δ min{1, |ρ|^3}.

Furthermore, there are functions ϕ_k^♭ ∈ S^1_{0,1}, independent of ε and µ, such that


Set

ϕ̃ = Σ_{k=1}^{N−1} x_1^k ϕ_k.

Using the above lemma we will prove the following

Lemma 4.2. There exists a constant C > 0 such that we have the estimates

(4.9)  |∂_{x′}^α∂_{ξ′}^β e^{iϕ̃/h}| ≤ C_{α,β}θ^{−|α|−|β|}e^{−Cx_1θ/h} on supp η,  |∂_{x′}^α∂_{ξ′}^β e^{iϕ̃/h}| ≤ C_{α,β}|ξ′|^{−|β|}e^{−Cx_1|ξ′|/h} on supp(1 − η),

for 0 ≤ x_1 ≤ 2δ min{1, |ρ|^3} and all multi-indices α and β, with constants C_{α,β} > 0 independent of x_1, θ, z and h.

Proof. Let us see that the functions c_{α,β} = e^{−iϕ̃/h}∂_{x′}^α∂_{ξ′}^β e^{iϕ̃/h}, |α| + |β| ≥ 1, satisfy the bounds

(4.10)  |∂_{x′}^{α′}∂_{ξ′}^{β′} c_{α,β}| ≲ Σ_{j=1}^{|α|+|β|+|α′|+|β′|} (x_1/(h|ρ|))^j |ρ|^{−2(|α|+|β|+|α′|+|β′|−j)} on supp η,

and

(4.11)  |∂_{x′}^{α′}∂_{ξ′}^{β′} c_{α,β}| ≲ Σ_{j=1}^{|α|+|β|+|α′|+|β′|} (x_1/h)^j |ξ′|^{−(|β|+|β′|−j)} on supp(1 − η),

for all multi-indices α′ and β′. We will proceed by induction in |α| + |β|. Let α_1 and β_1 be multi-indices such that |α_1| + |β_1| = 1 and observe that

c_{α+α_1,β+β_1} = ∂_{x′}^{α_1}∂_{ξ′}^{β_1}c_{α,β} + ih^{−1}c_{α,β}∂_{x′}^{α_1}∂_{ξ′}^{β_1}ϕ̃.

More generally, we have

(4.12)  ∂_{x′}^{α′}∂_{ξ′}^{β′}c_{α+α_1,β+β_1} = ∂_{x′}^{α_1+α′}∂_{ξ′}^{β_1+β′}c_{α,β} + ih^{−1}∂_{x′}^{α′}∂_{ξ′}^{β′}(c_{α,β}∂_{x′}^{α_1}∂_{ξ′}^{β_1}ϕ̃).

By Lemma 4.1 we have

(4.13)  x_1^{−1}ϕ̃ ∈ S^1_{2,2}(|ρ|) + S^1_{0,1}(|ρ|)

for 0 ≤ x_1 ≤ 2δ min{1, |ρ|^3}. By (4.12) and (4.13), it is easy to see that if (4.10) and (4.11) hold for c_{α,β}, they hold for c_{α+α_1,β+β_1} as well. Using (4.10) together with (4.8) we obtain

|e^{iϕ̃/h}c_{α,β}| ≲ Σ_{j=1}^{|α|+|β|} (x_1/(h|ρ|))^j |ρ|^{−2(|α|+|β|−j)} e^{−2Cθx_1(h|ρ|)^{−1}}
≲ Σ_{j=1}^{|α|+|β|} θ^{−j}|ρ|^{−2(|α|+|β|−j)} e^{−Cx_1θ/h} ≲ θ^{−|α|−|β|}e^{−Cx_1θ/h}.

Similarly, by (4.11) we obtain

|e^{iϕ̃/h}c_{α,β}| ≲ Σ_{j=1}^{|α|+|β|} (x_1/h)^j |ξ′|^{−|β|+j} e^{−2Cx_1|ξ′|/h} ≲ |ξ′|^{−|β|}e^{−Cx_1|ξ′|/h}.


We take a_{0,k} = ã_{0,k}ν for k ≥ 1, where ã_{0,k} are scalar functions to be determined such that

(4.14)  ⟨γ∇_xϕ, a_0⟩ = x_1^N Φ̃.

Using that ⟨ψ_0, a_{0,0}⟩ = 0 we can expand the left-hand side as

Σ_{k=1}^{2N−2} x_1^k (Σ_{ℓ=0}^{k−1} ⟨ψ_ℓ, ν⟩ã_{0,k−ℓ} + ⟨ψ_k, a_{0,0}⟩) + x_1^N ⟨Θ̃, a_0⟩.

Therefore, if

⟨ψ_k, a_{0,0}⟩ + Σ_{ℓ=0}^{k−1} ⟨ψ_ℓ, ν⟩ã_{0,k−ℓ} = 0, 1 ≤ k ≤ N − 1,

then (4.14) is satisfied with

Φ̃ = Σ_{k=N}^{2N−2} x_1^{k−N} (Σ_{ℓ=0}^{k−1} ⟨ψ_ℓ, ν⟩ã_{0,k−ℓ} + ⟨ψ_k, a_{0,0}⟩) + ⟨Θ̃, a_0⟩.

Since ⟨ψ_0, ν⟩ = ρ, we arrive at the relations

(4.15)  ã_{0,k} = −ρ^{−1}⟨ψ_k, a_{0,0}⟩ − ρ^{−1} Σ_{ℓ=1}^{k−1} ⟨ψ_ℓ, ν⟩ã_{0,k−ℓ},

which allow us to find all ã_{0,k}, and hence to find a_0. To find b_0 we will use the expansion

(γ∇_xϕ) × a_j = (Σ_{k=0}^{N−1} x_1^k ψ_k) × (Σ_{k=0}^{N−1} x_1^k a_{j,k}) + x_1^N Θ̃ × a_j
= Σ_{k=0}^{N−1} x_1^k Σ_{ℓ=0}^{k} ψ_{k−ℓ} × a_{j,ℓ} + x_1^N Σ_{k=N}^{2N−2} x_1^{k−N} Σ_{ℓ=0}^{k} ψ_{k−ℓ} × a_{j,ℓ} + x_1^N Θ̃ × a_j,

with j = 0. We take

(4.16)  b_{0,k} = (zµ_0)^{−1} Σ_{ℓ=0}^{k} ψ_{k−ℓ} × a_{0,ℓ}, 0 ≤ k ≤ N − 1.

Then the first equation of (4.1) is satisfied with

Ψ_0 = Σ_{k=N}^{2N−2} x_1^{k−N} Σ_{ℓ=0}^{k} ψ_{k−ℓ} × a_{0,ℓ} + Θ̃ × a_0.

On the other hand, we have the identity

(γ∇_xϕ) × ((γ∇_xϕ) × a_0) = −⟨γ∇_xϕ, γ∇_xϕ⟩a_0 + ⟨γ∇_xϕ, a_0⟩γ∇_xϕ.

Therefore, in view of (4.5) and (4.14), the second equation of (4.1) is satisfied with

Ψ̃_0 = (zµ)^{−1}(−Φa_0 + Φ̃γ∇_xϕ).

To solve equation (4.2) we will use the expansion

(γ∇_x) × a_j = (Σ_{k=0}^{N−1} x_1^k γ_k∇_x) × (Σ_{k=0}^{N−1} x_1^k a_{j,k}) + x_1^N (Θ∇_x) × a_j


= Σ_{k=0}^{N−1} x_1^k Σ_{ℓ=0}^{k} ((γ_{k−ℓ}∇̃_{x′}) × a_{j,ℓ} + (ℓ + 1)γ_{k−ℓ}ν̃ × a_{j,ℓ+1})
+ x_1^N Σ_{k=N}^{2N−2} x_1^{k−N} Σ_{ℓ=0}^{k} ((γ_{k−ℓ}∇̃_{x′}) × a_{j,ℓ} + (ℓ + 1)γ_{k−ℓ}ν̃ × a_{j,ℓ+1}) + x_1^N (Θ∇_x) × a_j,

where ν̃ = (1, 0, 0) and ∇̃_{x′} = (0, ∇_{x′}). Clearly, we have similar expansions with a_j replaced by b_j. We let the functions a_{j,k} satisfy the equations

ψ_0 × a_{j,k} − zµ_0b_{j,k} = −Σ_{ℓ=0}^{k−1} (ψ_{k−ℓ} × a_{j,ℓ} − zµ_{k−ℓ}b_{j,ℓ}) + Σ_{ℓ=0}^{k} i((γ_{k−ℓ}∇̃_{x′}) × a_{j−1,ℓ} + (ℓ + 1)γ_{k−ℓ}ν̃ × a_{j−1,ℓ+1}) =: a_{j,k}^♯,

ψ_0 × b_{j,k} + zε_0a_{j,k} = −Σ_{ℓ=0}^{k−1} (ψ_{k−ℓ} × b_{j,ℓ} + zε_{k−ℓ}a_{j,ℓ}) + Σ_{ℓ=0}^{k} i((γ_{k−ℓ}∇̃_{x′}) × b_{j−1,ℓ} + (ℓ + 1)γ_{k−ℓ}ν̃ × b_{j−1,ℓ+1}) =: b_{j,k}^♯,

ν × a_{j,k} = 0,

for 1 ≤ j ≤ N − 1 and 0 ≤ k ≤ N − 1. Then the equation (4.2) is satisfied with

Ψ_j = Σ_{k=N}^{2N−2} x_1^{k−N} Σ_{ℓ=0}^{k} (ψ_{k−ℓ} × a_{j,ℓ} − zµ_{k−ℓ}b_{j,ℓ}) + Θ̃ × a_j − zMb_j
+ Σ_{k=N}^{2N−2} x_1^{k−N} Σ_{ℓ=0}^{k} ((γ_{k−ℓ}∇̃_{x′}) × a_{j−1,ℓ} + (ℓ + 1)γ_{k−ℓ}ν̃ × a_{j−1,ℓ+1}) + (Θ∇_x) × a_{j−1},

Ψ̃_j = Σ_{k=N}^{2N−2} x_1^{k−N} Σ_{ℓ=0}^{k} (ψ_{k−ℓ} × b_{j,ℓ} + zε_{k−ℓ}a_{j,ℓ}) + Θ̃ × b_j + zEa_j
+ Σ_{k=N}^{2N−2} x_1^{k−N} Σ_{ℓ=0}^{k} ((γ_{k−ℓ}∇̃_{x′}) × b_{j−1,ℓ} + (ℓ + 1)γ_{k−ℓ}ν̃ × b_{j−1,ℓ+1}) + (Θ∇_x) × b_{j−1},

where a_{−1,ℓ} = b_{−1,ℓ} = 0. The above equations are solved in Section 2 and we have the formulas

a_{j,k} = ρ^{−2}⟨β × a_{j,k}^♯, ν⟩ν + zµ_0ρ^{−2}⟨b_{j,k}^♯, ν⟩ν,

b_{j,k} = (zµ_0)^{−1}a_{j,k}^♯ − (zµ_0)^{−1}ρ^{−2}⟨β × a_{j,k}^♯, ν⟩β × ν − ρ^{−2}⟨b_{j,k}^♯, ν⟩β × ν,

(4.17)  zµ_0ρν × b_{j,k} = β × a_{j,k}^♯ − ⟨β × a_{j,k}^♯, ν⟩ν + zµ_0b_{j,k}^♯ − zµ_0⟨b_{j,k}^♯, ν⟩ν.

Thus we can express all functions a_{j,k}, b_{j,k} in terms of g. More precisely, they are of the form

a_{j,k} = A_{j,k}(x′, ξ′)f̃(y′),  b_{j,k} = B_{j,k}(x′, ξ′)f̃(y′),

where f̃(y′) = ν(y′) × f(y′) = ι_{ν(y′)}f(y′), ι_ν being a 3 × 3 matrix, and A_{j,k}, B_{j,k} are smooth matrix-valued functions whose main properties are given in Lemma 4.3 below. In what follows, given a vector-valued function a of the form A(x′, ξ′)f̃(y′), we will write a ∈ S^k_{ℓ_1,ℓ_2}f̃ if all entries of A belong to S^k_{ℓ_1,ℓ_2}.


Lemma 4.3. We have

(4.18)  A_{j,k} ∈ S^{−1−3k−5j}_{2,2}(|ρ|) + S^{−j}_{0,1}(|ρ|), j ≥ 0, k ≥ 0,

(4.19)  B_{j,k} ∈ S^{−1−3k−5j}_{2,2}(|ρ|) + S^{1−j}_{0,1}(|ρ|), j ≥ 0, k ≥ 0,

(4.20)  ι_{ν(x′)}B_{j,k} ∈ S^{−3k−5j}_{2,2}(|ρ|) + S^{1−j}_{0,1}(|ρ|), j ≥ 1, k ≥ 0,

(4.21)  ∂_{x_1}^k Ψ_j ∈ S^{−1−3(N+k)−5j}_{2,2}(|ρ|)f̃ + S^{1−j}_{0,1}(|ρ|)f̃, j ≥ 0,

(4.22)  ∂_{x_1}^k Ψ̃_j ∈ S^{−1−3(N+k)−5j}_{2,2}(|ρ|)f̃ + S^{2−j}_{0,1}(|ρ|)f̃, j ≥ 0,

uniformly in z and 0 ≤ x_1 ≤ 2δ min{1, |ρ|^3}.

Proof. By Lemma 4.1,

(4.23)  ψ_k ∈ S^{1−3k}_{2,2}(|ρ|) + S^1_{0,1}(|ρ|),  Θ̃ ∈ S^{1−3N}_{2,2}(|ρ|) + S^1_{0,1}(|ρ|).

It is easy to see from (4.15) and (4.16) by induction in k that (4.23) implies (4.18) and (4.19) for j = 0 and all k ≥ 0. To prove the assertion for all j ≥ 1 and k ≥ 0 we will proceed by induction in j + k. Suppose it is fulfilled for all 0 ≤ j ≤ J, k ≥ 0, as well as for j = J + 1 and k ≤ K, where J ≥ 0, K ≥ −1 are integers. This implies

(4.24)  a_{J+1,K+1}^♯ ∈ S^{−7−3K−5J}_{2,2}(|ρ|)f̃ + S^{−J}_{0,1}(|ρ|)f̃,

(4.25)  b_{J+1,K+1}^♯ ∈ S^{−7−3K−5J}_{2,2}(|ρ|)f̃ + S^{1−J}_{0,1}(|ρ|)f̃.

Recall that a_{j,0}^♯ = b_{j,0}^♯ = 0. Using (2.3) with k = −2 and the formulas for a_{j,k} and b_{j,k} in terms of a_{j,k}^♯ and b_{j,k}^♯, we get from (4.24) and (4.25) that (4.18) and (4.19) hold with j = J + 1 and k = K + 1, as desired. It is also clear that (4.20) follows from (4.24) and (4.25) (used with K = k − 1, J = j − 1) and (4.17) together with (2.3) with k = −1. Since the functions Ψ_j and Ψ̃_j are expressed in terms of A_{j,k}, B_{j,k}, ψ_k and Θ̃, one can derive (4.21) and (4.22) from (4.18), (4.19) and (4.23). One just needs the following simple observation: if a ∈ S^{ℓ_1}_{2,2}(|ρ|) + S^{ℓ_2}_{0,1}(|ρ|), then x_1^k a ∈ S^{ℓ_1+3k}_{2,2}(|ρ|) + S^{ℓ_2}_{0,1}(|ρ|), k ≥ 0. ✷

Clearly, we have ν × Ẽ|_{x_1=0} = f and

ν × H̃|_{x_1=0} = ι_{ν(x′)}H̃|_{x_1=0} = Σ_{j=0}^{N−1} h^j Op_h(ι_νB_{j,0})f̃ = Op_h(ι_νB_{0,0} + h(1 − η)ι_νB_{1,0})f̃ + K_1f̃,

where

K_1 = hOp_h(ηι_νB_{1,0}) + Σ_{j=2}^{N−1} h^j Op_h(ι_νB_{j,0}).


Lemma 4.4. There exists a matrix-valued function B_{1,0}^♭ ∈ S^0_{0,1} such that

(4.26)  (1 − η)ι_νB_{1,0} − B_{1,0}^♭ ∈ S^{−1}_{0,1}

and µ_0B_{1,0}^♭ is independent of ε and µ.

Proof. In view of (4.15) and (4.16) we have

−ia_{1,0}^♯ = γ_0∇̃_{x′} × a_{0,0} + ν × a_{0,1} = γ_0∇̃_{x′} × a_{0,0},

−ib_{1,0}^♯ = γ_0∇̃_{x′} × b_{0,0} + ν × b_{0,1}
= γ_0∇̃_{x′} × b_{0,0} + (zµ_0)^{−1}(ψ_1 × a_{0,0} + ψ_0 × a_{0,1})
= γ_0∇̃_{x′} × b_{0,0} + (zµ_0)^{−1}(ψ_1 × a_{0,0} − ρ^{−1}⟨ψ_1, a_{0,0}⟩ψ_0 × ν)
= γ_0∇̃_{x′} × b_{0,0} + (zµ_0)^{−1}(ψ_1 × a_{0,0} + ρ^{−1}⟨ψ_1, a_{0,0}⟩β × ν).

Thus by (4.17) we obtain

−izµ_0ι_νB_{1,0}f̃ = −izµ_0ν × b_{1,0}
= ρ^{−1}β × (γ_0∇̃_{x′} × a_{0,0}) − ρ^{−1}⟨β × (γ_0∇̃_{x′} × a_{0,0}), ν⟩ν
+ zµ_0ρ^{−1}γ_0∇̃_{x′} × b_{0,0} − zµ_0ρ^{−1}⟨γ_0∇̃_{x′} × b_{0,0}, ν⟩ν
+ ρ^{−1}ψ_1 × a_{0,0} − ρ^{−1}⟨ψ_1 × a_{0,0}, ν⟩ν + ρ^{−2}⟨ψ_1, a_{0,0}⟩β × ν.

Observe now that

ρ = i√r_0 (1 + O(r_0^{−1})) = i√r_0 + O(1/√r_0) as r_0 → ∞.

More generally, we have

(1 − η)(ρ − i√r_0) ∈ S^{−1}_{0,1},

(1 − η)(ρ^{−k} − (i√r_0)^{−k}) ∈ S^{−k−2}_{0,1}, k = 1, 2.

Define a_{0,0}^♭ and b_{0,0}^♭ by replacing in the formulas for a_{0,0} and b_{0,0} above the function ρ by i√r_0. Clearly, a_{0,0}^♭ and µ_0b_{0,0}^♭ are independent of ε and µ. Moreover, we have

(1 − η)(a_{0,0} − a_{0,0}^♭) ∈ S^{−1}_{0,1}f̃,  (1 − η)(b_{0,0} − b_{0,0}^♭) ∈ S^0_{0,1}f̃.

Define ψ_1^♭ ∈ S^1_{0,1} by replacing in the definition of ψ_1 the function ϕ_2 by ϕ_2^♭ and ρ by i√r_0. We also define B̃_{1,0}^♭ by replacing in the formula for ι_νB_{1,0} above the function ρ by i√r_0, ψ_1 by ψ_1^♭, and a_{0,0} and b_{0,0} by a_{0,0}^♭ and b_{0,0}^♭. Set B_{1,0}^♭ = (1 − η)B̃_{1,0}^♭. With this choice one can easily check that the conclusions of the lemma hold. ✷
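The large-r_0 behaviour of ρ used in the proof above is just the Taylor expansion of the square root on the branch Im ρ > 0 (a routine check, spelled out here for convenience):

```latex
\rho = \sqrt{z^2\varepsilon_0\mu_0 - r_0}
     = i\sqrt{r_0}\,\Bigl(1 - \frac{z^2\varepsilon_0\mu_0}{r_0}\Bigr)^{1/2}
     = i\sqrt{r_0}\,\Bigl(1 - \frac{z^2\varepsilon_0\mu_0}{2r_0} + O\bigl(r_0^{-2}\bigr)\Bigr)
     = i\sqrt{r_0} - \frac{i\,z^2\varepsilon_0\mu_0}{2\sqrt{r_0}} + O\bigl(r_0^{-3/2}\bigr),
\qquad r_0 \to \infty.
```

Since r_0 behaves like |ξ′|^2, the difference ρ − i√r_0 is O(⟨ξ′⟩^{−1}) away from supp η, consistent with (1 − η)(ρ − i√r_0) ∈ S^{−1}_{0,1}.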

Clearly, we can write the matrix ι_ν in the form Σ_{j=1}^{3} ν_jI_j, where I_j are constant matrices. In view of (4.4) we have

ι_νB_{0,0}f̃ = ν × b_{0,0} = m(ν × g) = mf̃ + mι_ν Σ_{j=1}^{3} (ν_j(y′) − ν_j(x′))I_jf̃,

where m = (zµ_0)^{−1}(ρI + ρ^{−1}B). Set m_0 = i(zµ_0)^{−1}√r_0(I − r_0^{−1}B). We have

Op_h(ι_νB_{0,0})f̃ = Op_h(m)f̃ + Σ_{j=1}^{3} [Op_h(mι_νI_j), ν_j]f̃
= Op_h(m)f̃ + Σ_{j=1}^{3} [Op_h((1 − η)m_0ι_νI_j), ν_j]f̃ + Σ_{j=1}^{3} [Op_h((ηm + (1 − η)(m − m_0))ι_νI_j), ν_j]f̃
= Op_h(m + hn)f̃ + Σ_{j=1}^{3} ([Op_h((1 − η)m_0ι_νI_j), ν_j] − Op_h(hn_j))f̃ + Σ_{j=1}^{3} [Op_h((ηm + (1 − η)(m − m_0))ι_νI_j), ν_j]f̃,

where n = Σ_{j=1}^{3} n_j with

n_j = −i Σ_{|α|=1} ∂_{x′}^α ν_j ∂_{ξ′}^α ((1 − η)m_0ι_νI_j).

Thus we obtain

(4.27)  ν × H̃|_{x_1=0} = Op_h(m + hm̃)f̃ + Kf̃,

where we have put m̃ = n + B_{1,0}^♭ and K = K_1 + K_2 + K_3 with

K_2 = hOp_h((1 − η)ι_νB_{1,0} − B_{1,0}^♭),

K_3 = Σ_{j=1}^{3} ([Op_h((1 − η)m_0ι_νI_j), ν_j] − Op_h(hn_j)) + Σ_{j=1}^{3} [Op_h((ηm + (1 − η)(m − m_0))ι_νI_j), ν_j].

Furthermore, it is easy to see that

∇ × (χa) = χ∇ × a + χ̃a,

where χ̃ is a smooth matrix-valued function which is a linear combination of the derivatives ∂_{x_j}χ. Therefore χ̃ is supported in δ min{1, |ρ|^3} ≤ x_1 ≤ 2δ min{1, |ρ|^3}. We have

h∇ × Ẽ − izµH̃ = (2πh)^{−2} ∫∫ e^{(i/h)(⟨y′,ξ′⟩+ϕ)} V_1(x, y′, ξ′, h, z) dξ′dy′ =: U_1,

h∇ × H̃ + izεẼ = (2πh)^{−2} ∫∫ e^{(i/h)(⟨y′,ξ′⟩+ϕ)} V_2(x, y′, ξ′, h, z) dξ′dy′ =: U_2,

where

V_1 = hχ̃a + h^Nχ(γ∇_x) × a_{N−1} + x_1^N Σ_{j=0}^{N−1} h^jχΨ_j,

V_2 = hχ̃b + h^Nχ(γ∇_x) × b_{N−1} + x_1^N Σ_{j=0}^{N−1} h^jχΨ̃_j.

Let α be a multi-index such that |α| ≤ 1. Then we can write

((h∂_x)^αU_j)(x_1, ·) = Op_h(e^{iϕ̃/h}V_j^{(α)}),

where V_j^{(0)} = V_j and

V_j^{(α)} = i∂_x^αϕ V_j + (h∂_x)^αV_j

if |α| = 1. Since (E − Ẽ, H − H̃) satisfy equation (3.1) with f̃ = 0, by (3.5) together with (4.27) we get the estimate

(4.28)  ‖N(λ)f − Op_h(m + hm̃)(ν × f)‖_{H^0} ≲ h^{−1/2}θ^{−1}‖U‖ + h^{1/2}‖div U‖ + ‖u_1‖_0 + ‖Kf̃‖_0.

We now need the following

Lemma 4.5. We have the estimates

(4.29)  ‖Kf̃‖_0 ≲ hθ^{−5/2}‖f‖_{H^{−1}},

(4.30)  ‖u_1‖_0 + ‖U‖ + ‖div U‖ ≲ h^{5ǫN/2−ℓ}‖f‖_{H^{−1}},

with some constant ℓ > 0.

Proof. By (4.20),

ηινBj,0ιν ∈ S2,2−5j(|ρ|) ⊂ S −5j/2

1,1 (θ), j≥ 1,

(1− η)ινBj,kιν ∈ S0,11−j(|ρ|) ⊂ S−10,1, j≥ 2.

Therefore Proposition 2.3 yields kK1fek0≤ N −1X j=1 hjkOph(ηινBj,0ιν)fk0+ N −1X j=2 hjkOph((1− η)ινBj,0ιν)fk0 . N −1X j=1 hjθ−5j/2kfkH−1 + N −1X j=2 hjkfkH−1 .hθ −5/2kfk H−1.
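The last summation step uses only the elementary bound $\sum_{j\ge 1}t^j\le 2t$ for $t\le 1/2$, applied with $t=h\theta^{-5/2}$, which is small in the admissible range $\theta\ge h^{2/5-\epsilon}$. The following minimal numerical illustration (the sample values of $h$, $\epsilon$, $N$ are assumptions, not from the paper) checks this arithmetic:

```python
# Illustrative check of the summation step: with t = h * theta**(-5/2) <= 1/2,
#   sum_{j=1}^{N-1} h^j * theta^(-5j/2) = sum_{j=1}^{N-1} t^j <= 2t,
# i.e. the whole sum is O(h * theta^(-5/2)).  Sample values are assumptions.
h, epsilon, N = 1e-3, 0.05, 8
theta = h ** (2/5 - epsilon)          # admissible range: theta >= h^(2/5 - epsilon)
t = h * theta ** (-5/2)               # then t = h^(5*epsilon/2) is small
partial = sum(t**j for j in range(1, N))

assert t <= 0.5
assert partial <= 2 * t
```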

Furthermore, (4.26) clearly implies $\mathcal K_2=\mathcal O(h):H^{-1}\to H^0$. To bound the norm of $\mathcal K_3$ we will use Proposition 2.2 twice: with
$$a_+=(\eta m+(1-\eta)(m-m_0))\iota_\nu I_j,\quad \theta_+=\theta,\quad a_-=\nu_j,\quad \theta_-=1,$$
and with
$$a_+=\nu_j,\quad \theta_+=1,\quad a_-=(\eta m+(1-\eta)(m-m_0))\iota_\nu I_j,\quad \theta_-=\theta.$$
Since
$$(\eta m+(1-\eta)(m-m_0))\iota_\nu I_j\in S_{2,2}^{-1}(|\rho|)+S_{0,1}^{-1}(|\rho|)\subset S_{1,1}^{-1/2}(\theta)+S_{0,1}^{-1},$$
by Proposition 2.2,
$$\|[\mathrm{Op}_h((\eta m+(1-\eta)(m-m_0))\iota_\nu I_j),\nu_j]\|_{H^{-1}\to H^0}\lesssim h\theta^{-3/2}.$$
On the other hand, the standard pseudodifferential calculus gives that, mod $\mathcal O(h)$, the operator $[\mathrm{Op}_h((1-\eta)m_0\iota_\nu I_j),\nu_j]$ is an $h$-$\Psi$DO with principal symbol $hn_j$, $n_j\in S_{0,1}^0$ being as above. This implies that
$$[\mathrm{Op}_h((1-\eta)m_0\iota_\nu I_j),\nu_j]-\mathrm{Op}_h(hn_j)$$
is an $h$-$\Psi$DO with a symbol $h^2\omega$, where $\omega\in S_{0,1}^{-1}$. Hence
$$\|[\mathrm{Op}_h((1-\eta)m_0\iota_\nu I_j),\nu_j]-\mathrm{Op}_h(hn_j)\|_{H^{-1}\to H^0}\lesssim h^2,$$

which completes the proof of (4.29). Furthermore, since
$$x_1^Ne^{-Cx_1\theta/h}\lesssim h^N\theta^{-N},\qquad x_1^Ne^{-Cx_1|\xi'|/h}\lesssim h^N|\xi'|^{-N},$$
we deduce from Lemma 4.2 that
$$(4.31)\qquad h^{-N}x_1^Ne^{i\widetilde\varphi/h}\in S_{1,1}^{-N}(\theta)+S_{0,1}^{-N}$$
uniformly in $x_1$ and $h$. On $\mathrm{supp}\,\widetilde\chi$ we have the bounds
$$e^{-Cx_1\theta/h}\le e^{-C\delta|\rho|^3\theta/h}\le e^{-\widetilde C\theta^{5/2}/h}\lesssim h^N\theta^{-5N/2},$$
$$e^{-Cx_1|\xi'|/h}\le e^{-C\delta|\xi'|/h}\lesssim h^N|\xi'|^{-N}.$$
Therefore, by Lemma 4.2 we have
$$(4.32)\qquad h^{-N}\widetilde\chi e^{i\widetilde\varphi/h}\in S_{1,1}^{-5N/2}(\theta)+S_{0,1}^{-N}.$$

Notice that $h^j\theta^{-5j/2}\le 1$ for $j\ge 1$ as long as $\theta\ge h^{2/5-\epsilon}$. Taking this into account, one can easily check that (4.31) and (4.32) together with Lemma 4.3 imply
$$(4.33)\qquad h^{-N}e^{i\widetilde\varphi/h}V_j^{(\alpha)}\in S_{1,1}^{-5N/2-\ell_\alpha}(\theta)\widetilde f+S_{0,1}^{-N+\widetilde\ell_\alpha}\widetilde f$$
with some $\ell_\alpha,\widetilde\ell_\alpha>0$ independent of $N$, whose exact values are not important in the analysis that follows. Let $N>\widetilde\ell_\alpha+1$. By (4.33) and Proposition 2.3 we get
$$(4.34)\qquad \|((h\partial_x)^\alpha U_j)(x_1,\cdot)\|_{H^0}\lesssim h^N\theta^{-5N/2-\ell_\alpha}\|\widetilde f\|_{H^{-1}}\lesssim h^{5\epsilon N/2-2\ell_\alpha/5}\|f\|_{H^{-1}}$$

as long as $\theta\ge h^{2/5-\epsilon}$, uniformly in $x_1$. Observe also that
$$h^{-N}V_1|_{x_1=0}=(\gamma\nabla_x)\times a_{N-1}|_{x_1=0}=(\gamma^0\widetilde\nabla_{x'})\times a_{N-1,0}+\nu\times a_{N-1,1}$$
$$=(\gamma^0\widetilde\nabla_{x'})\times(A_{N-1,0}\widetilde f)+\nu\times(A_{N-1,1}\widetilde f)=:\omega\widetilde f.$$
By Lemma 4.3,
$$\omega\in S_{1,1}^{-5N/2}(\theta)+S_{0,1}^{-N+1},$$
which together with Proposition 2.3 yields
$$\mathrm{Op}_h(\omega)=\mathcal O\left(\theta^{-5N/2}\right):H^{-1}\to H^0.$$
Since $U_1|_{x_1=0}=h^N\mathrm{Op}_h(\omega)\widetilde f$, we get
$$(4.35)\qquad \|U_1|_{x_1=0}\|_{H^0}\lesssim h^N\theta^{-5N/2}\|\widetilde f\|_{H^{-1}}\lesssim h^{5\epsilon N/2}\|f\|_{H^{-1}}.$$
Clearly, (4.30) follows from (4.34) and (4.35). ✷

Taking N big enough depending on ǫ, it is easy to see that the estimate (1.2) follows from (4.28) and Lemma 4.5.

5. Electromagnetic transmission eigenvalues

A complex number $\lambda$ is said to be an electromagnetic transmission eigenvalue if the following boundary-value problem has a nontrivial solution:
$$(5.1)\qquad\left\{\begin{array}{ll}
\nabla\times E_1=i\lambda\mu_1(x)H_1 & \mbox{in }\Omega,\\
\nabla\times H_1=-i\lambda\varepsilon_1(x)E_1 & \mbox{in }\Omega,\\
\nabla\times E_2=i\lambda\mu_2(x)H_2 & \mbox{in }\Omega,\\
\nabla\times H_2=-i\lambda\varepsilon_2(x)E_2 & \mbox{in }\Omega,\\
\nu\times(E_1-E_2)=0 & \mbox{on }\Gamma,\\
\nu\times(c_1H_1-c_2H_2)=0 & \mbox{on }\Gamma,
\end{array}\right.$$
where $\mu_j,\varepsilon_j\in C^\infty(\Omega)$, $c_j\in C^\infty(\Gamma)$, $j=1,2$, are scalar-valued, strictly positive functions.

The most important question that arises in the theory of transmission eigenvalues is to find conditions on the coefficients under which they form a discrete set in the complex plane. This question has been largely investigated in the context of the acoustic transmission eigenvalues, that is, those associated to the Helmholtz equation. Several sufficient conditions have been found that guarantee not only the discreteness, but also Weyl asymptotics for the counting function of the acoustic transmission eigenvalues (see [5], [6], [7]). In particular, it was proved in [6] that the existence of parabolic eigenvalue-free regions implies the Weyl asymptotics. On the other hand, such regions were obtained in [8], [9], [10], [11] and [12] under various conditions, by approximating appropriately the Dirichlet-to-Neumann operator associated to the Helmholtz equation with smooth refraction index. It was proved in [11] that, under quite general conditions on the coefficients on the boundary, all transmission eigenvalues are located in a strip $|\mathrm{Im}\,\lambda|\le C$, which turns out to be optimal. The situation, however, is very different as far as the electromagnetic transmission eigenvalues are concerned. In this context there are few results, and they are mainly concerned with the question of discreteness (see, e.g., [1], [4]). The most general one is in [1], where the authors considered the case $c_1\equiv c_2\equiv 1$ and proved the discreteness

under the condition
$$(5.2)\qquad \varepsilon_1\ne\varepsilon_2,\quad \mu_1\ne\mu_2,\quad \frac{\varepsilon_1}{\mu_1}\ne\frac{\varepsilon_2}{\mu_2}\quad\mbox{on }\Gamma.$$
They also proved that given any $\gamma>0$ there is $C_\gamma>0$ such that there are no electromagnetic transmission eigenvalues in the region $|\mathrm{Im}\,\lambda|\ge\gamma|\mathrm{Re}\,\lambda|$, $|\lambda|\ge C_\gamma$.

Our goal is to obtain a parabolic eigenvalue-free region under the condition
$$(5.3)\qquad \frac{c_1}{\mu_1}=\frac{c_2}{\mu_2},\quad \varepsilon_1\mu_1\ne\varepsilon_2\mu_2\quad\mbox{on }\Gamma.$$

Indeed, using Theorem 1.1 we will prove the following

Theorem 5.1. Under the condition (5.3), there exists a constant $C>0$ such that there are no electromagnetic transmission eigenvalues in the region
$$(5.4)\qquad |\mathrm{Im}\,\lambda|\ge C\left(|\mathrm{Re}\,\lambda|+1\right)^{5/7}.$$

Proof. Denote by $\mathcal N_j(\lambda)$, $j=1,2$, the operator introduced in Section 1 corresponding to $(\varepsilon_j,\mu_j)$, and set $T(\lambda)=c_1\mathcal N_1(\lambda)-c_2\mathcal N_2(\lambda)$. We define the functions $\rho_j$ by replacing in the definition of $\rho$ the function $\varepsilon\mu|_\Gamma$ by $\varepsilon_j\mu_j|_\Gamma$. Set $f=\nu\times E_1=\nu\times E_2\in H^1_t$. Then $\lambda$ is an electromagnetic transmission eigenvalue if $f\ne 0$ and $T(\lambda)f=0$. Therefore, to get the free region (5.4) we need to show that the operator $T(\lambda)$ is invertible there. By Theorem 1.1 we have
$$(5.5)\qquad \|\mathrm{Op}_h(T)(\nu\times f)\|_{H^0}=\|T(\lambda)f-\mathrm{Op}_h(T)(\nu\times f)\|_{H^0}\lesssim h\theta^{-5/2}\|f\|_{H^{-1}}$$
for $\theta\ge h^{2/5-\epsilon}$, where
$$T=\frac{c_1\rho_1}{\mu_1}I+\frac{c_1}{\rho_1\mu_1}B-\frac{c_2\rho_2}{\mu_2}I-\frac{c_2}{\rho_2\mu_2}B=\frac{c_1}{\mu_1}(\rho_1-\rho_2)\left(I-(\rho_1\rho_2)^{-1}B\right).$$
Since $(\rho_1-\rho_2)(\rho_1+\rho_2)=\rho_1^2-\rho_2^2=z^2\varepsilon_1\mu_1-z^2\varepsilon_2\mu_2$, we have $T=w\widetilde T$, where
$$w=\frac{z^2c_1}{\mu_1}(\varepsilon_1\mu_1-\varepsilon_2\mu_2)\ne 0,\qquad \widetilde T=(\rho_1+\rho_2)^{-1}\left(I-(\rho_1\rho_2)^{-1}B\right).$$
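As a quick sanity check (not part of the proof), the factorization of $T$ under the condition $c_1/\mu_1=c_2/\mu_2$ from (5.3) is a rational identity in the coefficients of $I$ and $B$, so it can be tested with exact rational arithmetic; the numerical values below are arbitrary illustrative stand-ins:

```python
from fractions import Fraction as F

# Illustrative check: with c1/mu1 == c2/mu2, the coefficients of I and B in
#   T = (c1*rho1/mu1) I + c1/(rho1*mu1) B - (c2*rho2/mu2) I - c2/(rho2*mu2) B
# match those of (c1/mu1)*(rho1 - rho2)*(I - (rho1*rho2)^{-1} B).
# All numerical values are assumed rational stand-ins (rho_j are complex in
# the paper, but the identity is purely algebraic).
rho1, rho2 = F(3, 2), F(-7, 5)
c1, mu1, mu2 = F(2, 3), F(5, 4), F(9, 7)
c2 = c1 * mu2 / mu1                    # enforce the condition c1/mu1 == c2/mu2

coef_I = c1*rho1/mu1 - c2*rho2/mu2     # coefficient of I in T
coef_B = c1/(rho1*mu1) - c2/(rho2*mu2) # coefficient of B in T
w_fac = (c1/mu1) * (rho1 - rho2)       # scalar factor in the factorized form

assert coef_I == w_fac
assert coef_B == -w_fac / (rho1*rho2)
```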

Using that $B^2=r_0B$, one can easily check the identity
$$(5.6)\qquad \left(I+(\rho_1\rho_2-r_0)^{-1}B\right)\left(I-(\rho_1\rho_2)^{-1}B\right)=I.$$
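The verification of (5.6) reduces, via $B^2=r_0B$, to the vanishing of a scalar coefficient: $(I+aB)(I-bB)=I+(a-b-abr_0)B$. A minimal exact-arithmetic sketch (with assumed rational stand-ins for $\rho_1\rho_2$ and $r_0$):

```python
from fractions import Fraction as F

# Sketch check of (5.6): since B^2 = r0*B,
#   (I + a*B)(I - b*B) = I + (a - b - a*b*r0)*B,
# and the B-coefficient vanishes for a = (rho1*rho2 - r0)^{-1}, b = (rho1*rho2)^{-1}.
p, r0 = F(11, 4), F(5, 3)   # assumed rational stand-ins for rho1*rho2 and r0
a = 1 / (p - r0)
b = 1 / p

assert a - b - a*b*r0 == 0  # hence the product in (5.6) equals I
```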

Lemma 5.2. For all integers $k\ge 1$ and all multi-indices $\alpha$ and $\beta$ we have the estimates
$$(5.7)\qquad \left|\partial_{x'}^\alpha\partial_{\xi'}^\beta(r_0-\rho_1\rho_2)^{-k}\right|\le\left\{\begin{array}{ll}C_{k,\alpha,\beta}\,\theta^{-k-|\alpha|-|\beta|} & \mbox{on }\mathrm{supp}\,\eta,\\ C_{k,\alpha,\beta}\,|\xi'|^{-2k-|\beta|} & \mbox{on }\mathrm{supp}\,(1-\eta),\end{array}\right.$$
$$(5.8)\qquad \left|\partial_{x'}^\alpha\partial_{\xi'}^\beta(\rho_1+\rho_2)^{-k}\right|\le\left\{\begin{array}{ll}C_{k,\alpha,\beta}\,\theta^{-|\alpha|-|\beta|} & \mbox{on }\mathrm{supp}\,\eta,\\ C_{k,\alpha,\beta}\,|\xi'|^{-k-|\beta|} & \mbox{on }\mathrm{supp}\,(1-\eta),\end{array}\right.$$
$$(5.9)\qquad \left|\partial_{x'}^\alpha\partial_{\xi'}^\beta\left((\rho_1\rho_2)^{-1}(\rho_1+\rho_2)^{-1}\right)\right|\le\left\{\begin{array}{ll}C_{\alpha,\beta}\,\theta^{-1/2-|\alpha|-|\beta|} & \mbox{on }\mathrm{supp}\,\eta,\\ C_{\alpha,\beta}\,|\xi'|^{-3-|\beta|} & \mbox{on }\mathrm{supp}\,(1-\eta).\end{array}\right.$$

Proof. We will first prove the estimates on $\mathrm{supp}\,(1-\eta)$. Since $\rho_j=i\sqrt{r_0}\left(1+\mathcal O(r_0^{-1})\right)$ as $r_0\to\infty$, we have
$$r_0-\rho_1\rho_2=2r_0\left(1+\mathcal O(r_0^{-1})\right),\qquad \rho_1+\rho_2=2i\sqrt{r_0}\left(1+\mathcal O(r_0^{-1})\right).$$
Therefore, $|r_0-\rho_1\rho_2|\ge r_0$ and $|\rho_1+\rho_2|\ge\sqrt{r_0}$ on $\mathrm{supp}\,(1-\eta)$, provided the constant $C_0$ in the definition of $\eta$ is taken large enough (which we can do without loss of generality). To prove (5.7) for all $\alpha$ and $\beta$ we will proceed by induction in $|\alpha|+|\beta|$. Suppose that (5.7) holds on $\mathrm{supp}\,(1-\eta)$ for $\alpha,\beta$ such that $|\alpha|+|\beta|\le K$ and all integers $k\ge 1$. We will show that it holds for all $\alpha,\beta$ such that $|\alpha|+|\beta|=K+1$ and all integers $k\ge 1$. Let $\alpha_1$ and $\beta_1$ be multi-indices such that

$|\alpha_1|+|\beta_1|=1$. We have
$$\partial_{x'}^{\alpha_1}\partial_{\xi'}^{\beta_1}(r_0-\rho_1\rho_2)^{-k}=-k(r_0-\rho_1\rho_2)^{-k-1}\partial_{x'}^{\alpha_1}\partial_{\xi'}^{\beta_1}(r_0-\rho_1\rho_2)$$
and more generally, if $\alpha,\beta$ are such that $|\alpha|+|\beta|=K$, we have
$$\partial_{x'}^{\alpha+\alpha_1}\partial_{\xi'}^{\beta+\beta_1}(r_0-\rho_1\rho_2)^{-k}=-k\,\partial_{x'}^{\alpha}\partial_{\xi'}^{\beta}\left((r_0-\rho_1\rho_2)^{-k-1}\partial_{x'}^{\alpha_1}\partial_{\xi'}^{\beta_1}(r_0-\rho_1\rho_2)\right).$$
Recall now that $r_0$ is a homogeneous polynomial of order two in $\xi'$. Hence $\partial_{x'}^\alpha\partial_{\xi'}^\beta r_0=\mathcal O(\langle\xi'\rangle^{2-|\beta|})$. Furthermore, by (2.3) we have $\partial_{x'}^\alpha\partial_{\xi'}^\beta(\rho_1\rho_2)=\mathcal O(\langle\xi'\rangle^{2-|\beta|})$ on $\mathrm{supp}\,(1-\eta)$. Using this, one can easily deduce from the above identity that (5.7) holds on $\mathrm{supp}\,(1-\eta)$ for $\alpha+\alpha_1$, $\beta+\beta_1$ and all integers $k\ge 1$. Clearly, the same argument also works for (5.8). The estimate (5.9) on $\mathrm{supp}\,(1-\eta)$ follows from (5.8).

To prove (5.7) on $\mathrm{supp}\,\eta$, we will use the identity
$$(r_0-\rho_1\rho_2)(r_0+\rho_1\rho_2)=r_0^2-\rho_1^2\rho_2^2=w_1(w_2r_0-z^2),\qquad w_1:=z^2\varepsilon_1\mu_1\varepsilon_2\mu_2,\quad w_2:=(\varepsilon_1\mu_1)^{-1}+(\varepsilon_2\mu_2)^{-1},$$
which we rewrite in the form
$$(r_0-\rho_1\rho_2)^{-k}=w_1^{-k}(r_0+\rho_1\rho_2)^k(w_2r_0-z^2)^{-k}.$$
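The constants $w_1$ and $w_2$ are forced by matching the coefficients of $r_0$ and $z^2$ in $r_0^2-\rho_1^2\rho_2^2$, using $\rho_j^2=z^2\varepsilon_j\mu_j-r_0$. A hedged exact-arithmetic check (all sample values are assumed rational stand-ins):

```python
from fractions import Fraction as F

# Check of the identity behind the rewriting of (r0 - rho1*rho2)^{-k}:
#   r0^2 - rho1^2*rho2^2 = w1*(w2*r0 - z^2),
# with rho_j^2 = z^2*eps_j*mu_j - r0 and the inferred constants
#   w1 = z^2*eps1*mu1*eps2*mu2,  w2 = 1/(eps1*mu1) + 1/(eps2*mu2).
z2, r0 = F(7, 2), F(4, 3)       # assumed stand-ins for z^2 and r0
e1m1, e2m2 = F(5, 6), F(9, 8)   # assumed stand-ins for eps1*mu1 and eps2*mu2

rho1sq = z2 * e1m1 - r0
rho2sq = z2 * e2m2 - r0
w1 = z2 * e1m1 * e2m2
w2 = 1 / e1m1 + 1 / e2m2

assert r0**2 - rho1sq * rho2sq == w1 * (w2 * r0 - z2)
```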

By induction, in the same way as above, one can easily prove the estimates
$$\left|\partial_{x'}^\alpha\partial_{\xi'}^\beta(w_2r_0-z^2)^{-k}\right|\le C_{k,\alpha,\beta}\,\theta^{-k-|\alpha|-|\beta|}$$
on $\mathrm{supp}\,\eta$. On the other hand, by (2.3) we have $\partial_{x'}^\alpha\partial_{\xi'}^\beta(r_0+\rho_1\rho_2)^k=\mathcal O(\theta^{-|\alpha|-|\beta|})$ on $\mathrm{supp}\,\eta$. Therefore (5.7) on $\mathrm{supp}\,\eta$ follows from the above estimates. The estimates (5.8) and (5.9) on $\mathrm{supp}\,\eta$ can be obtained in the same way, using (2.3) and the identities
$$(\rho_1+\rho_2)^{-k}=w_3^{-k}(\rho_1-\rho_2)^k,\qquad w_3:=z^2(\varepsilon_1\mu_1-\varepsilon_2\mu_2),$$
$$(\rho_1\rho_2)^{-1}(\rho_1+\rho_2)^{-1}=w_3^{-1}\left(\rho_2^{-1}-\rho_1^{-1}\right).$$
✷
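Both identities above follow from $w_3=\rho_1^2-\rho_2^2=(\rho_1-\rho_2)(\rho_1+\rho_2)$; a minimal exact-arithmetic illustration with assumed rational stand-ins for $\rho_1,\rho_2$:

```python
from fractions import Fraction as F

# Quick rational check of the two identities used for (5.8) and (5.9):
#   (rho1 + rho2)^{-1} = w3^{-1}*(rho1 - rho2),       w3 = rho1^2 - rho2^2,
#   (rho1*rho2)^{-1}*(rho1 + rho2)^{-1} = w3^{-1}*(1/rho2 - 1/rho1).
rho1, rho2 = F(8, 3), F(-5, 7)  # assumed stand-ins (rho1 + rho2 != 0)
w3 = rho1**2 - rho2**2

assert 1 / (rho1 + rho2) == (rho1 - rho2) / w3
assert 1 / (rho1 * rho2 * (rho1 + rho2)) == (1/rho2 - 1/rho1) / w3
```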

We rewrite the identity (5.6) in the form
$$(5.10)\qquad T_1\widetilde T=\langle\xi'\rangle^{-1}I,$$
where
$$T_1=\langle\xi'\rangle^{-1}(\rho_1+\rho_2)\left(I+(\rho_1\rho_2-r_0)^{-1}B\right).$$
It follows from Lemma 5.2 together with (2.3) that
$$T_1\in S_{1,1}^{-1}(\theta)+S_{0,1}^{0}\subset\theta^{-1}S_{1/2-\epsilon}^{0},\qquad \widetilde T\in S_{1,1}^{-1/2}(\theta)+S_{0,1}^{-1}\subset\theta^{-1/2}S_{1/2-\epsilon}^{-1},$$
as long as $\theta\ge h^{1/2-\epsilon}$. Therefore, by Proposition 2.3 we get
$$(5.11)\qquad \|\mathrm{Op}_h(T_1)\|_{H^0\to H^0}\lesssim\theta^{-1},$$
while Proposition 2.2 yields
$$(5.12)\qquad \|\mathrm{Op}_h(T_1\widetilde T)-\mathrm{Op}_h(T_1)\mathrm{Op}_h(\widetilde T)\|_{H^{-1}\to H^0}\lesssim h\theta^{-7/2}.$$
Combining (5.10), (5.11) and (5.12) leads to
$$(5.13)\qquad \|\mathrm{Op}_h(\langle\xi'\rangle^{-1})\widetilde f\|_{H^0}\lesssim h\theta^{-7/2}\|\widetilde f\|_{H^{-1}}+\|\mathrm{Op}_h(T_1)\mathrm{Op}_h(\widetilde T)\widetilde f\|_{H^0}\lesssim h\theta^{-7/2}\|\widetilde f\|_{H^{-1}}+\theta^{-1}\|\mathrm{Op}_h(\widetilde T)\widetilde f\|_{H^0},$$

where $\widetilde f=\nu\times f$. Since the norms $\|\mathrm{Op}_h(\langle\xi'\rangle^{-1})\widetilde f\|_{H^0}$, $\|\widetilde f\|_{H^{-1}}$ and $\|f\|_{H^{-1}}$ are equivalent, by (5.5) and (5.13) we obtain
$$(5.14)\qquad \|f\|_{H^{-1}}\lesssim h\theta^{-7/2}\|f\|_{H^{-1}}.$$
Thus, if $h\theta^{-7/2}\ll 1$ we deduce from (5.14) that $f=0$. In other words, the region $h\theta^{-7/2}\ll 1$ is free of transmission eigenvalues. It is easy to see that this region is equivalent to (5.4) on the complex plane. ✷

References

[1] F. Cakoni and H.-M. Nguyen, On the discreteness of transmission eigenvalues for the Maxwell equations, SIAM J. Math. Anal. 53 (2021), 888-913.

[2] F. Colombini, V. Petkov and J. Rauch, Eigenvalues for Maxwell's equations with dissipative boundary conditions, Asymptot. Anal. 99 (2016), 105-124.

[3] M. Dimassi and J. Sjöstrand, Spectral Asymptotics in the Semi-Classical Limit, London Mathematical Society Lecture Note Series, Vol. 268, Cambridge University Press, 1999.

[4] H. Haddar and S. Meng, The spectral analysis of the interior transmission eigenvalue problem for Maxwell's equations, J. Math. Pures Appl. 120 (2018), 1-32.

[5] H.-M. Nguyen and Q.-H. Nguyen, The Weyl law of transmission eigenvalues and the completeness of generalized transmission eigenvalues, preprint 2020.

[6] V. Petkov and G. Vodev, Asymptotics of the number of the interior transmission eigenvalues, J. Spectral Theory 7 (2017), 1-31.

[7] L. Robbiano, Counting function for interior transmission eigenvalues, Mathematical Control and Related Fields 6 (2016), 167-183.

[8] G. Vodev, Transmission eigenvalue-free regions, Comm. Math. Phys. 336 (2015), 1141-1166.

[9] G. Vodev, Transmission eigenvalues for strictly concave domains, Math. Ann. 366 (2016), 301-336.

[10] G. Vodev, Parabolic transmission eigenvalue-free regions in the degenerate isotropic case, Asymptot. Anal. 106 (2018), 147-168.

[11] G. Vodev, High-frequency approximation of the interior Dirichlet-to-Neumann map and applications to the transmission eigenvalues, Anal. PDE 11 (2018), 213-236.

[12] G. Vodev, Improved parametrix in the glancing region for the interior Dirichlet-to-Neumann map, Comm. PDE 44 (2019), 367-396.

Universit´e de Nantes, Laboratoire de Math´ematiques Jean Leray, 2 rue de la Houssini`ere, BP 92208, 44322 Nantes Cedex 03, France
