
URL: http://www.emath.fr/cocv/  DOI: 10.1051/cocv:2002042

TIME DOMAIN DECOMPOSITION IN FINAL VALUE OPTIMAL CONTROL OF THE MAXWELL SYSTEM∗,∗∗

John E. Lagnese¹ and G. Leugering²

Abstract. We consider a boundary optimal control problem for the Maxwell system with a final value cost criterion. We introduce a time domain decomposition procedure for the corresponding optimality system which leads to a sequence of uncoupled optimality systems of local-in-time optimal control problems. In the limit full recovery of the coupling conditions is achieved, and, hence, the local solutions and controls converge to the global ones. The process is inherently parallel and is suitable for real-time control applications.

Mathematics Subject Classification. 65N55, 49M27, 35Q60.

Received December 17, 2001. Revised February 16, 2002.

1. Introduction

Problems of optimal control of electromagnetic waves arise in a variety of applications, e.g. in stealth technology, design and control of antennas, diffraction optics, magnetotellurics and related fields. In many important cases the objective to be met relates to the final values of both the electric and the magnetic fields involved. Hence, the problem of exact controllability has attracted considerable interest. See Lagnese [9] as an early paper and Phung [15] or Belishev and Glasman [2] for more recent ones on this topic. The list of references is far from being exhaustive. As a more realistic requirement, one typically asks for controls that bring the final states close to a given target configuration, that is, one wants to achieve optimal controls. While the papers mentioned focus on the case of constant permeabilities and permittivities, modern applications require dealing with heterogeneous materials. In addition, in most cases real-time requirements are to be met.

Therefore, in order to obtain adjoint-based gradients and sensitivities in real-time, the large scale (or global) heterogeneous problem has to be reduced to smaller, more standard ones that can be processed in parallel.

Domain decomposition of the Maxwell system for the purpose of simulation has been considered in, e.g., Alonso and Valli [1] and Santos [16]. Domain decompositions with respect to optimality systems have been investigated by Lagnese [8]. In this paper we approach the problem of time domain decompositions of the optimality systems.

Time domain decomposition methods (TDDM) appear to be a promising tool in real-time applications, as the sometimes long time horizon, which would be prohibitive in terms of numerical calculations, can be split into smaller time intervals, even down to the underlying time grid.

Keywords and phrases: Maxwell system, optimal control, domain decomposition.

∗ Research of J.E.L. supported by the National Science Foundation through grant DMS-9972034.

∗∗ Research of G.L. supported by DFG grants Le595/12-2 and Le595/13-1.

1 Department of Mathematics, Georgetown University, Washington, DC 20057, USA; e-mail: lagnese@math.georgetown.edu

2 Fachbereich Mathematik, Technische Universität Darmstadt, Schlossgartenstrasse 7, 64289 Darmstadt, Germany; e-mail: leugering@mathematik.tu-darmstadt.de

© EDP Sciences, SMAI 2002


Thus the problem can be recast into the framework of receding horizon control problems, or into what has come to be known as instantaneous control problems. An early reference to the possible utility of TDDM for the wave equation, but with no analysis provided, may be found in Benamou [3]. The transmission conditions suggested by Benamou contain a Robin-type condition in time which can be viewed as an approximation to the transparent transmission conditions studied by Gander et al. [6].

Nevertheless, TDD methods have been analyzed in the literature only very recently; see Heinkenschloss [7] and J.-L. Lions [13]. The common approach is to perform some sort of shooting method. That is to say, at the break points one restarts the dynamics with initial values that have to be optimized in order to achieve the correct continuity with respect to the time variable at the break points. While Heinkenschloss formulates the problem as an equality constrained optimal control problem with a Lagrangian relaxation, which is solved by a Gauss–Seidel type preconditioning of a GMRES solve for a Schur-complement-type equation, Lions penalizes the defect of continuity across the break points. We, instead, mimic an augmented Lagrangian relaxation from spatial decompositions of elliptic problems, a strategy we have successfully used for wave equations in [10] (see also the references therein). This procedure leads to a sequence of local-in-time problems on the subintervals, which, in fact, turn out to be optimality systems for local-in-time optimal control problems. The procedure is completely parallel. We show convergence of the iteration and provide some useful a posteriori estimates of the error in the approximation. We emphasize that this novel time domain decomposition method can be combined with the corresponding spatial domain decomposition method, such that the resulting iteration scheme provides a decomposition into space-time subdomains or, after discretization, even into space-time atoms on the finite element level.

2. Setting the problem

Let $\Omega$ be a bounded, open, connected set in $\mathbb{R}^3$ with piecewise smooth, Lipschitz boundary $\Gamma$, and let $T>0$.

We consider the Maxwell system

\[
\begin{cases}
\varepsilon E' - \operatorname{rot} H + \sigma E = F, \\
\mu H' + \operatorname{rot} E = G & \text{in } Q := \Omega\times(0,T),\\
H_\tau - \alpha(\nu\wedge E) = J & \text{on } \Sigma := \Gamma\times(0,T),\\
E(0) = \phi,\quad H(0) = \psi & \text{in } \Omega.
\end{cases}
\tag{2.1}
\]

Here $' = \partial/\partial t$, $\wedge$ is the standard vector product operation, $\nu$ denotes the exterior pointing unit normal vector to $\Gamma$, and $H_\tau$ is the tangential component of $H$, that is,
\[
H_\tau = H - (H\cdot\nu)\nu = \nu\wedge(H\wedge\nu),
\]
and $\alpha\in L^\infty(\Gamma)$, $\alpha(x)\ge\alpha_0>0$. Further, $\varepsilon=(\varepsilon_{jk}(x))$, $\mu=(\mu_{jk}(x))$ and $\sigma=(\sigma_{jk}(x))$ are $3\times3$ Hermitian matrices with $L^\infty(\Omega)$ entries such that $\varepsilon$ and $\mu$ are uniformly positive definite and $\sigma\ge0$ in $\Omega$. The functions $F,G\in L^1(0,T;\mathcal L^2(\Omega))$ are given, while $J$ is a control input and is taken from the class

\[
\mathcal U = \mathcal L^2_\tau(\Sigma) := \{\,J \mid J\in L^2(0,T;\mathcal L^2_\tau(\Gamma))\,\},
\]
where $\mathcal L^2_\tau(\Gamma)$ denotes the space of vector valued $\mathcal L^2(\Gamma)$ functions having a zero normal component.

When $J=0$ and $\alpha=1$, the boundary condition $(2.1)_3$ is known as the Silver–Müller boundary condition. It is the first approximation to the so-called transparent boundary condition, which corresponds to the transmission of electromagnetic waves through the boundary without reflections. In general, when $\alpha>0$ on a set of positive measure in $\Gamma$, the boundary condition $(2.1)_3$ is dissipative. That is, if one defines the electromagnetic energy by

\[
\mathcal E(t) = \int_\Omega \big(\varepsilon E\cdot E + \mu H\cdot H\big)\,\mathrm dx,
\]


then, in the absence of external inputs $F,G,J$, the functional $\mathcal E(t)$ is nonincreasing. Moreover, the boundary condition $(2.1)_3$ is also regularizing. That is to say, if $\mathcal E(0)<\infty$, if $F,G$ and $J$ have the regularity assumed above, and if the support of $J$ is contained in the support of $\alpha$, the solution of (2.1) satisfies $\mathcal E(t)<\infty$ for all $t\ge0$. On the other hand, if $\alpha\equiv0$ and $J\not\equiv0$ then, in general, solutions of (2.1) will have less regularity.

In what follows, function spaces of $\mathbb C$-valued functions are denoted by capital Roman letters, while function spaces of $\mathbb C^3$-valued functions are denoted by capital script letters. We use $\alpha\cdot\beta$ to denote the natural scalar product in $\mathbb C^3$, i.e., $\alpha\cdot\beta=\sum_{j=1}^3\alpha_j\bar\beta_j$, and write $\langle\cdot,\cdot\rangle$ for the natural scalar product in various function spaces such as $L^2(\Omega)$ and $\mathcal L^2(\Omega)$. A subscript may sometimes be added to avoid confusion. The spaces $L^2(\Omega)$ and $\mathcal L^2(\Omega)$ denote the usual spaces of Lebesgue square integrable $\mathbb C$-valued functions and $\mathbb C^3$-valued functions, respectively, and similarly for the Sobolev spaces $H^s(\Omega)$, $\mathcal H^s(\Omega)$. We also denote by $\mathcal L^2_\varepsilon(\Omega)$ the space $\mathcal L^2(\Omega)$ with weight matrix $\varepsilon$ and by $\langle\phi,\psi\rangle_\varepsilon := \langle\varepsilon\phi,\psi\rangle$ the scalar product of $\phi$ and $\psi$ in that space. With this notation, the energy space is $\mathcal H := \mathcal L^2_\varepsilon(\Omega)\times\mathcal L^2_\mu(\Omega)$.

When $(\phi,\psi)\in\mathcal H$, $(F,G)\in L^1(0,T;\mathcal H)$ and $J\in\mathcal L^2_\tau(\Sigma)$, it may be proved that the system (2.1) has a unique solution with regularity $(E,H)\in C([0,T];\mathcal H)$, $\nu\wedge E|_\Sigma\in\mathcal L^2_\tau(\Sigma)$ and, moreover, the linear map from the data to $((E,H),\nu\wedge E|_\Sigma)$ is continuous in the indicated spaces (see, e.g., [11]). Therefore, given $(E_T,H_T)\in\mathcal H$, we may consider the final value optimal control problem
\[
\inf_{J\in\mathcal U}\mathcal J(J),\qquad \mathcal U := \mathcal L^2_\tau(\Sigma)
\tag{2.2}
\]

subject to (2.1), where

\[
\mathcal J(J) = \frac12\int_\Sigma |J|^2\,\mathrm d\Sigma + \frac z2\,\big\|(E(T),H(T)) - (E_T,H_T)\big\|^2_{\mathcal H}
\tag{2.3}
\]
with $z$ a positive penalty parameter. Since the cost functional $\mathcal J$ is convex, and in view of the properties of the map $((\phi,\psi),(F,G),J)\mapsto(E,H)$, it is standard theory that there exists a unique optimal control $J_{\mathrm{opt}}$. It is shown in the next section that $J_{\mathrm{opt}}$ is given by

\[
J_{\mathrm{opt}} = \nu\wedge P|_\Sigma,
\tag{2.4}
\]
where $(P,Q)$ is the solution of the backwards running adjoint system
\[
\begin{cases}
\varepsilon P' - \operatorname{rot} Q - \sigma P = 0,\\
\mu Q' + \operatorname{rot} P = 0 & \text{in } Q,\\
Q_\tau + \alpha(\nu\wedge P) = 0 & \text{on } \Sigma,\\
P(T) = z(E(T)-E_T),\quad Q(T) = z(H(T)-H_T) & \text{in } \Omega.
\end{cases}
\tag{2.5}
\]

The purpose of this paper is to develop a convergent time domain decomposition method (TDDM) to approximate the solution of the optimality system (2.1, 2.4, 2.5), and to derive a certain a posteriori estimate of the error in the approximation.

Our TDDM is introduced in Section 4, and it is shown that each of the local problems entering into the algorithm is itself an optimality system. Convergence of the algorithm is established in Section 5. A posteriori estimates of the error in the approximation, in terms of the mismatch of the iterates at the break points, are derived in Section 6.

Remark 2.1. Instead of $\mathcal L^2_\tau(\Sigma)$ one may choose as the control space $\mathcal U$ those $\mathcal L^2_\tau(\Sigma)$ functions that are supported in $\tilde\Gamma\times(0,T)$, where $\tilde\Gamma$ is a subset of $\Gamma$ of positive 2-dimensional measure. The analysis presented below may easily be adapted to this more general setting with only minor modifications.


3. The optimality system

The necessary and sufficient condition for optimality is that the directional derivative of $\mathcal J$ at $J_{\mathrm{opt}}$ in the direction of $\hat J$ is equal to zero. Therefore $J_{\mathrm{opt}}$ is the solution of the variational equation

\[
\int_\Sigma J_{\mathrm{opt}}\cdot\hat J\,\mathrm d\Sigma + z\,\big\langle (E(T)-E_T,\,H(T)-H_T),\,(\hat E(T),\hat H(T))\big\rangle_{\mathcal H} = 0,
\qquad \forall\,\hat J\in\mathcal L^2_\tau(\Sigma),
\tag{3.1}
\]
where $(\hat E,\hat H)$ is the solution of
\[
\begin{cases}
\varepsilon\hat E' - \operatorname{rot}\hat H + \sigma\hat E = 0,\\
\mu\hat H' + \operatorname{rot}\hat E = 0 & \text{in } Q,\\
\hat H_\tau - \alpha(\nu\wedge\hat E) = \hat J & \text{on } \Sigma,\\
\hat E(0) = \hat H(0) = 0 & \text{in } \Omega.
\end{cases}
\]

Let $(P,Q)$ be the solution of (2.5). Then $(P,Q)\in C([0,T];\mathcal H)$ and $\nu\wedge P|_\Sigma\in\mathcal U$. We have
\[
0 = \int_0^T\big\{\langle \varepsilon P' - \operatorname{rot} Q - \sigma P,\,\hat E\rangle + \langle \mu Q' + \operatorname{rot} P,\,\hat H\rangle\big\}\,\mathrm dt.
\tag{3.2}
\]
By utilizing Green's formula
\[
\langle \operatorname{rot}\phi,\psi\rangle = \langle\phi,\operatorname{rot}\psi\rangle + \int_\Gamma (\nu\wedge\phi)\cdot\psi_\tau\,\mathrm d\Gamma
= \langle\phi,\operatorname{rot}\psi\rangle - \int_\Gamma \phi_\tau\cdot(\nu\wedge\psi)\,\mathrm d\Gamma
\tag{3.3}
\]
we may rewrite (3.2) as
\[
0 = z\,\big\langle (E(T)-E_T,\,H(T)-H_T),\,(\hat E(T),\hat H(T))\big\rangle_{\mathcal H} + \int_\Sigma (\nu\wedge P)\cdot\hat J\,\mathrm d\Sigma,
\qquad \forall\,\hat J\in\mathcal L^2_\tau(\Sigma).
\tag{3.4}
\]

Thus (2.4) follows from (3.1) and (3.4).
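In more detail, integrating (3.2) by parts in $t$ and using the system satisfied by $(\hat E,\hat H)$ together with (3.3), one arrives (formally; this is only a sketch of the computation) at the intermediate identity
\[
0 = \Big[\big\langle (P(t),Q(t)),\,(\hat E(t),\hat H(t))\big\rangle_{\mathcal H}\Big]_{t=0}^{T}
+ \int_\Sigma \big[(\nu\wedge P)\cdot\hat H_\tau + Q_\tau\cdot(\nu\wedge\hat E)\big]\,\mathrm d\Sigma .
\]
Since $Q_\tau = -\alpha(\nu\wedge P)$ and $\hat H_\tau = \hat J + \alpha(\nu\wedge\hat E)$ on $\Sigma$, the boundary integrand collapses to $(\nu\wedge P)\cdot\hat J$, while $\hat E(0)=\hat H(0)=0$ and the end conditions $P(T)=z(E(T)-E_T)$, $Q(T)=z(H(T)-H_T)$ turn the first term into the $\mathcal H$-inner product appearing in (3.4).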

4. Time domain decomposition

We introduce a partition of the time interval [0, T] by setting

\[
0 = T_0 < T_1 < \cdots < T_K < T_{K+1} = T
\tag{4.1}
\]
and thereby decompose $[0,T]$ into $K+1$ subintervals $I_k := [T_k,T_{k+1}]$, $k=0,\dots,K$. We further introduce locally defined functions $E_k = E|_{I_k}$, $H_k = H|_{I_k}$, and so forth. We proceed to decompose the optimality system (2.1, 2.4, 2.5) into the following local systems defined on $I_k$, $k=0,\dots,K$:
\[
\begin{cases}
\varepsilon E_k' - \operatorname{rot} H_k + \sigma E_k = F_k,\\
\mu H_k' + \operatorname{rot} E_k = G_k & \text{in } Q_k := \Omega\times I_k,\\
H_{k\tau} - \alpha(\nu\wedge E_k) = \nu\wedge P_k & \text{on } \Sigma_k := \Gamma\times I_k,
\end{cases}
\tag{4.2}
\]


\[
\begin{cases}
\varepsilon P_k' - \operatorname{rot} Q_k - \sigma P_k = 0,\\
\mu Q_k' + \operatorname{rot} P_k = 0 & \text{in } Q_k,\\
Q_{k\tau} + \alpha(\nu\wedge P_k) = 0 & \text{on } \Sigma_k.
\end{cases}
\tag{4.3}
\]

The "initial" values $(E_k(T_k),H_k(T_k))$ and $(P_k(T_{k+1}),Q_k(T_{k+1}))$ for (4.2, 4.3) are given through the coupling conditions
\[
\begin{aligned}
&E_k(T_k) = E_{k-1}(T_k),\quad H_k(T_k) = H_{k-1}(T_k), && k=1,\dots,K,\\
&P_k(T_{k+1}) = P_{k+1}(T_{k+1}),\quad Q_k(T_{k+1}) = Q_{k+1}(T_{k+1}), && k=K-1,\dots,0,
\end{aligned}
\tag{4.4}
\]
together with
\[
E_0(0) = \phi,\quad H_0(0) = \psi,\qquad
P_K(T) = z(E_K(T)-E_T),\quad Q_K(T) = z(H_K(T)-H_T).
\tag{4.5}
\]
We now uncouple the local problems by uncoupling (4.4) through an iteration as follows:

\[
\begin{cases}
\beta E_k^{n+1}(T_{k+1}) - P_k^{n+1}(T_{k+1}) = \mu^n_{k,k+1},\\
\beta H_k^{n+1}(T_{k+1}) - Q_k^{n+1}(T_{k+1}) = \eta^n_{k,k+1},
\end{cases}
\quad k=0,\dots,K-1,
\qquad
\begin{cases}
\beta E_k^{n+1}(T_k) + P_k^{n+1}(T_k) = \mu^n_{k,k-1},\\
-\beta H_k^{n+1}(T_k) - Q_k^{n+1}(T_k) = \eta^n_{k,k-1},
\end{cases}
\quad k=1,\dots,K,
\tag{4.6}
\]
where $\beta>0$ and
\[
\begin{aligned}
\mu^n_{k,k+1} &= \beta E_{k+1}^n(T_{k+1}) - P_{k+1}^n(T_{k+1}), &\qquad \eta^n_{k,k+1} &= \beta H_{k+1}^n(T_{k+1}) - Q_{k+1}^n(T_{k+1}),\\
\mu^n_{k,k-1} &= \beta E_{k-1}^n(T_k) + P_{k-1}^n(T_k), &\qquad \eta^n_{k,k-1} &= -\beta H_{k-1}^n(T_k) - Q_{k-1}^n(T_k).
\end{aligned}
\tag{4.7}
\]

It is readily seen that in the limit of (4.6, 4.7) one recovers (4.4), so that (4.6, 4.7) are consistent with (4.4).
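Indeed, if the iteration converges and the superscript $n$ is dropped, then at a break point $T_k$, $k=1,\dots,K$, the relations (4.6, 4.7) coming from the intervals $I_{k-1}$ and $I_k$ read
\[
\beta E_{k-1}(T_k) - P_{k-1}(T_k) = \beta E_k(T_k) - P_k(T_k),\qquad
\beta E_k(T_k) + P_k(T_k) = \beta E_{k-1}(T_k) + P_{k-1}(T_k),
\]
and adding and subtracting these two identities gives $P_k(T_k)=P_{k-1}(T_k)$ and $E_k(T_k)=E_{k-1}(T_k)$; the same argument applied to the $H$, $Q$ relations yields the remaining conditions in (4.4).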

For the convenience of the reader we write down the complete set of uncoupled systems:

\[
\begin{cases}
\varepsilon (E_k^{n+1})' - \operatorname{rot} H_k^{n+1} + \sigma E_k^{n+1} = F_k,\\
\mu (H_k^{n+1})' + \operatorname{rot} E_k^{n+1} = G_k
\end{cases}
\;\text{in } Q_k,
\qquad
\begin{cases}
\varepsilon (P_k^{n+1})' - \operatorname{rot} Q_k^{n+1} - \sigma P_k^{n+1} = 0,\\
\mu (Q_k^{n+1})' + \operatorname{rot} P_k^{n+1} = 0
\end{cases}
\;\text{in } Q_k,
\tag{4.8}
\]
\[
\begin{cases}
H_{k\tau}^{n+1} - \alpha(\nu\wedge E_k^{n+1}) = \nu\wedge P_k^{n+1},\\
Q_{k\tau}^{n+1} + \alpha(\nu\wedge P_k^{n+1}) = 0
\end{cases}
\;\text{on } \Sigma_k,
\tag{4.9}
\]
\[
E_0^{n+1}(0) = \phi,\quad H_0^{n+1}(0) = \psi,\qquad
\begin{cases}
P_K^{n+1}(T) = z(E_K^{n+1}(T) - E_T),\\
Q_K^{n+1}(T) = z(H_K^{n+1}(T) - H_T)
\end{cases}
\;\text{in } \Omega,
\tag{4.10}
\]

subject to (4.6, 4.7).
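The structure of one sweep of this uncoupled iteration can be sketched in code. The following is a minimal structural sketch only: the callable local solvers, the dictionary-based bookkeeping of the coupling data, and all identifiers are illustrative assumptions, not part of the paper; in practice each local solve is a discretization of the local optimality system (4.8–4.10) on $Q_k$.

```python
def iterate_tddm(local_solvers, mu_fwd, eta_fwd, mu_bwd, eta_bwd, beta, n_iters):
    """Run n_iters sweeps of the time-domain decomposition iteration (4.6)-(4.10).

    local_solvers : list of K+1 callables; local_solvers[k](mu_f, eta_f, mu_b, eta_b)
        solves the local optimality system on I_k for the given coupling data and
        returns a dict of traces {'E_left','H_left','P_left','Q_left',
        'E_right','H_right','P_right','Q_right'} at t = T_k ('left') and
        t = T_{k+1} ('right').  For k = K the forward data are None (final value
        condition (4.10) is used instead); for k = 0 the backward data are None
        (initial condition E_0(0)=phi, H_0(0)=psi is used instead).
    mu_fwd, eta_fwd : dicts with keys k = 0..K-1, the data mu^n_{k,k+1}, eta^n_{k,k+1}.
    mu_bwd, eta_bwd : dicts with keys k = 1..K,   the data mu^n_{k,k-1}, eta^n_{k,k-1}.
    """
    K = len(local_solvers) - 1
    for _ in range(n_iters):
        # the K+1 local solves are mutually independent -> fully parallelizable
        traces = [local_solvers[k](mu_fwd.get(k), eta_fwd.get(k),
                                   mu_bwd.get(k), eta_bwd.get(k))
                  for k in range(K + 1)]

        # update of the coupling data according to (4.7)
        new_mu_fwd, new_eta_fwd, new_mu_bwd, new_eta_bwd = {}, {}, {}, {}
        for k in range(K):            # data of interval k at its right end T_{k+1}
            nb = traces[k + 1]        # traces of neighbour interval k+1 at T_{k+1}
            new_mu_fwd[k] = beta * nb['E_left'] - nb['P_left']
            new_eta_fwd[k] = beta * nb['H_left'] - nb['Q_left']
        for k in range(1, K + 1):     # data of interval k at its left end T_k
            nb = traces[k - 1]        # traces of neighbour interval k-1 at T_k
            new_mu_bwd[k] = beta * nb['E_right'] + nb['P_right']
            new_eta_bwd[k] = -beta * nb['H_right'] - nb['Q_right']

        mu_fwd, eta_fwd, mu_bwd, eta_bwd = new_mu_fwd, new_eta_fwd, new_mu_bwd, new_eta_bwd
    return mu_fwd, eta_fwd, mu_bwd, eta_bwd
```

The point of the sketch is that the exchange of information between neighbouring time intervals happens only through the trace combinations of (4.7), so each iteration consists of embarrassingly parallel local solves followed by a cheap update at the break points.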


We wish to show that for $(\mu^n_{k,k+1},\eta^n_{k,k+1})$ and $(\mu^n_{k,k-1},\eta^n_{k,k-1})$ given in $\mathcal H$, the local problems are well posed for $k=0,\dots,K$. To do so we shall show that the local problem with index $k$ is in fact an optimality system concentrated on the interval $I_k$, $k=0,\dots,K$.

Proposition 4.1. Assume that $(\mu^n_{k,k+1},\eta^n_{k,k+1})\in\mathcal H$, $(\mu^n_{k,k-1},\eta^n_{k,k-1})\in\mathcal H$. Set
\[
\mathcal J_k(J_k,h_{k,k-1},g_{k,k-1}) = \frac12\int_{\Sigma_k}|J_k|^2\,\mathrm d\Sigma
+ \frac1{2\beta}\Big(\|\beta E_k(T_{k+1}) - \mu^n_{k,k+1}\|^2_\varepsilon
+ \|\beta H_k(T_{k+1}) - \eta^n_{k,k+1}\|^2_\mu
+ \|(h_{k,k-1},g_{k,k-1})\|^2_{\mathcal H}\Big),\quad k=1,\dots,K-1,
\tag{4.11}
\]
\[
\mathcal J_K(J_K,h_{K,K-1},g_{K,K-1}) = \frac12\int_{\Sigma_K}|J_K|^2\,\mathrm d\Sigma
+ \frac z2\,\|(E_K(T),H_K(T)) - (E_T,H_T)\|^2_{\mathcal H}
+ \frac1{2\beta}\,\|(h_{K,K-1},g_{K,K-1})\|^2_{\mathcal H},
\tag{4.12}
\]
\[
\mathcal J_0(J_0) = \frac12\int_{\Sigma_0}|J_0|^2\,\mathrm d\Sigma
+ \frac1{2\beta}\Big(\|\beta E_0(T_1) - \mu^n_{0,1}\|^2_\varepsilon
+ \|\beta H_0(T_1) - \eta^n_{0,1}\|^2_\mu\Big).
\tag{4.13}
\]
For $k=0,\dots,K$, the system (4.8–4.10, 4.6) is the optimality system for the optimal control problem $\inf_{\mathcal U_k}\mathcal J_k$, where $\mathcal U_k = \mathcal L^2_\tau(\Sigma_k)\times\mathcal H$ for $k=1,\dots,K$, $\mathcal U_0 = \mathcal L^2_\tau(\Sigma_0)$, subject to
\[
\begin{cases}
\varepsilon E_k' - \operatorname{rot} H_k + \sigma E_k = F_k,\\
\mu H_k' + \operatorname{rot} E_k = G_k & \text{in } Q_k,\\
H_{k\tau} - \alpha(\nu\wedge E_k) = J_k & \text{on } \Sigma_k,
\end{cases}
\tag{4.14}
\]
and
\[
E_k(T_k) = \frac1\beta\,(h_{k,k-1} + \mu^n_{k,k-1}),\qquad
H_k(T_k) = -\frac1\beta\,(g_{k,k-1} + \eta^n_{k,k-1})\quad\text{in } \Omega,\quad k=1,\dots,K,
\tag{4.15}
\]
\[
E_0(0) = \phi,\quad H_0(0) = \psi\quad\text{in } \Omega.
\tag{4.16}
\]

Remark 4.1. In the local optimal control problem $\inf_{\mathcal U_k}\mathcal J_k(J_k,h_{k,k-1},g_{k,k-1})$, the control $J_k$ is, in the terminology of J.-L. Lions and O. Pironneau, the effective control, and the controls $h_{k,k-1}$, $g_{k,k-1}$ are virtual, or artificial, controls; see [12–14].

Proof. We give the proof only for $k=1,\dots,K-1$ since the proofs in the two remaining cases are similar.

The necessary and sufficient condition that $J_k,h_{k,k-1},g_{k,k-1}$ be optimal is that the directional derivative of $\mathcal J_k$ at $(J_k,h_{k,k-1},g_{k,k-1})$ in the direction of $(\hat J_k,\hat h_{k,k-1},\hat g_{k,k-1})$ be zero. This leads to the variational equation
\[
\begin{aligned}
0 ={}& \int_{\Sigma_k} J_k\cdot\hat J_k\,\mathrm d\Sigma
+ \langle \beta E_k(T_{k+1}) - \mu^n_{k,k+1},\,\hat E_k(T_{k+1})\rangle_\varepsilon
+ \langle \beta H_k(T_{k+1}) - \eta^n_{k,k+1},\,\hat H_k(T_{k+1})\rangle_\mu\\
&+ \frac1\beta\,\big\langle (h_{k,k-1},g_{k,k-1}),\,(\hat h_{k,k-1},\hat g_{k,k-1})\big\rangle_{\mathcal H},
\qquad \forall\,(\hat J_k,\hat h_{k,k-1},\hat g_{k,k-1})\in\mathcal U_k,
\end{aligned}
\tag{4.17}
\]


where
\[
\begin{cases}
\varepsilon \hat E_k' - \operatorname{rot}\hat H_k + \sigma\hat E_k = 0,\\
\mu \hat H_k' + \operatorname{rot}\hat E_k = 0 & \text{in } Q_k,\\
\hat H_{k\tau} - \alpha(\nu\wedge\hat E_k) = \hat J_k & \text{on } \Sigma_k,\\
\hat E_k(T_k) = \dfrac1\beta\,\hat h_{k,k-1},\quad \hat H_k(T_k) = -\dfrac1\beta\,\hat g_{k,k-1} & \text{in } \Omega.
\end{cases}
\tag{4.18}
\]
Introduce $(P_k,Q_k)$ as the solution of
\[
\begin{cases}
\varepsilon P_k' - \operatorname{rot} Q_k - \sigma P_k = 0,\\
\mu Q_k' + \operatorname{rot} P_k = 0 & \text{in } Q_k,\\
Q_{k\tau} + \alpha(\nu\wedge P_k) = 0 & \text{on } \Sigma_k,\\
P_k(T_{k+1}) = \beta E_k(T_{k+1}) - \mu^n_{k,k+1},\quad
Q_k(T_{k+1}) = \beta H_k(T_{k+1}) - \eta^n_{k,k+1} & \text{in } \Omega.
\end{cases}
\tag{4.19}
\]

We have
\[
\begin{aligned}
0 &= \int_{T_k}^{T_{k+1}}\big\{\langle \varepsilon P_k' - \operatorname{rot} Q_k - \sigma P_k,\,\hat E_k\rangle
+ \langle \mu Q_k' + \operatorname{rot} P_k,\,\hat H_k\rangle\big\}\,\mathrm dt\\
&= \Big[\big\langle (P_k(t),Q_k(t)),\,(\hat E_k(t),\hat H_k(t))\big\rangle_{\mathcal H}\Big]_{t=T_k}^{T_{k+1}}
+ \int_{\Sigma_k}\big[Q_{k\tau}\cdot(\nu\wedge\hat E_k) + (\nu\wedge P_k)\cdot\hat H_{k\tau}\big]\,\mathrm d\Sigma,
\end{aligned}
\tag{4.20}
\]
where we have used (3.3). By utilizing (4.18) and (4.19) it is seen that (4.20) may be written
\[
\begin{aligned}
0 ={}& \big\langle (\beta E_k(T_{k+1}) - \mu^n_{k,k+1},\,\beta H_k(T_{k+1}) - \eta^n_{k,k+1}),\,(\hat E_k(T_{k+1}),\hat H_k(T_{k+1}))\big\rangle_{\mathcal H}\\
&- \Big\langle (P_k(T_k),Q_k(T_k)),\,\Big(\frac1\beta\,\hat h_{k,k-1},\,-\frac1\beta\,\hat g_{k,k-1}\Big)\Big\rangle_{\mathcal H}
+ \int_{\Sigma_k}(\nu\wedge P_k)\cdot\hat J_k\,\mathrm d\Sigma.
\end{aligned}
\tag{4.21}
\]
It now follows from (4.21) and (4.17) that
\[
J_k = \nu\wedge P_k|_{\Sigma_k},\qquad h_{k,k-1} = -P_k(T_k),\qquad g_{k,k-1} = Q_k(T_k),
\]
from which the conclusion of Proposition 4.1 follows immediately.
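To see the last step, note that with $h_{k,k-1} = -P_k(T_k)$ and $g_{k,k-1} = Q_k(T_k)$ the initial conditions (4.15) become
\[
\beta E_k(T_k) + P_k(T_k) = \mu^n_{k,k-1},\qquad
-\beta H_k(T_k) - Q_k(T_k) = \eta^n_{k,k-1},
\]
which is the second pair of conditions in (4.6), while the end conditions in (4.19) are precisely the first pair, $\beta E_k(T_{k+1}) - P_k(T_{k+1}) = \mu^n_{k,k+1}$ and $\beta H_k(T_{k+1}) - Q_k(T_{k+1}) = \eta^n_{k,k+1}$. Together with (4.14), the identity $J_k=\nu\wedge P_k|_{\Sigma_k}$ and the adjoint equations in (4.19), these are exactly the local equations (4.8, 4.9) with the coupling data (4.6).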

Corollary 4.1. Let $(\mu^0_{k,k+1},\eta^0_{k,k+1})|_{k=0}^{K-1}$ and $(\mu^0_{k,k-1},\eta^0_{k,k-1})|_{k=1}^{K}$ be given arbitrarily in $\mathcal H$. Then the iterative procedure described by (4.6–4.10) is well defined for $n=0,1,\dots$

5. Convergence of the iterates

We consider the iterative procedure with the basic step given by (4.6–4.10). In fact, we shall consider a relaxation of the iteration step (4.6, 4.7). Thus we introduce a relaxation parameter $\rho\in[0,1)$ and consider the


iterative step (4.6, 4.7) with under-relaxation:
\[
\begin{aligned}
\beta E_k^{n+1}(T_{k+1}) - P_k^{n+1}(T_{k+1}) &= (1-\rho)\,\mu^n_{k,k+1} + \rho\,\big(\beta E_k^n(T_{k+1}) - P_k^n(T_{k+1})\big),\\
\beta H_k^{n+1}(T_{k+1}) - Q_k^{n+1}(T_{k+1}) &= (1-\rho)\,\eta^n_{k,k+1} + \rho\,\big(\beta H_k^n(T_{k+1}) - Q_k^n(T_{k+1})\big),\\
\beta E_k^{n+1}(T_k) + P_k^{n+1}(T_k) &= (1-\rho)\,\mu^n_{k,k-1} + \rho\,\big(\beta E_k^n(T_k) + P_k^n(T_k)\big),\\
-\beta H_k^{n+1}(T_k) - Q_k^{n+1}(T_k) &= (1-\rho)\,\eta^n_{k,k-1} + \rho\,\big(-\beta H_k^n(T_k) - Q_k^n(T_k)\big).
\end{aligned}
\tag{5.1}
\]
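In the notation of the code sketch following (4.10), the under-relaxed step (5.1) only changes the way the new coupling data are formed. A hedged illustration (the helper name `relax` and the commented variable names are purely illustrative, not taken from the paper):

```python
def relax(rho, neighbour_value, own_value):
    """Under-relaxed update of a single coupling datum, cf. (5.1):
    (1 - rho) * (trace combination taken from the neighbouring interval)
      + rho   * (same trace combination on the interval itself)."""
    return (1.0 - rho) * neighbour_value + rho * own_value

# e.g. for the forward datum of interval k at T_{k+1}:
#   new_mu_fwd[k] = relax(rho,
#                         beta * E_{k+1}(T_{k+1}) - P_{k+1}(T_{k+1}),   # neighbour
#                         beta * E_k(T_{k+1})     - P_k(T_{k+1}))       # own
# rho = 0 recovers the plain iteration (4.6, 4.7).
```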

We introduce the errors

\[
\tilde E_k^n = E_k^n - E_k,\quad \tilde H_k^n = H_k^n - H_k,\quad \tilde P_k^n = P_k^n - P_k,\quad \tilde Q_k^n = Q_k^n - Q_k.
\]
These satisfy

\[
\begin{cases}
\varepsilon(\tilde E_k^{n+1})' - \operatorname{rot}\tilde H_k^{n+1} + \sigma\tilde E_k^{n+1} = 0,\\
\mu(\tilde H_k^{n+1})' + \operatorname{rot}\tilde E_k^{n+1} = 0
\end{cases}
\;\text{in } Q_k,
\qquad
\begin{cases}
\varepsilon(\tilde P_k^{n+1})' - \operatorname{rot}\tilde Q_k^{n+1} - \sigma\tilde P_k^{n+1} = 0,\\
\mu(\tilde Q_k^{n+1})' + \operatorname{rot}\tilde P_k^{n+1} = 0
\end{cases}
\;\text{in } Q_k,
\tag{5.2}
\]
\[
\begin{cases}
\tilde H_{k\tau}^{n+1} - \alpha(\nu\wedge\tilde E_k^{n+1}) = \nu\wedge\tilde P_k^{n+1},\\
\tilde Q_{k\tau}^{n+1} + \alpha(\nu\wedge\tilde P_k^{n+1}) = 0
\end{cases}
\;\text{on } \Sigma_k,
\tag{5.3}
\]
\[
\tilde E_0^{n+1}(0) = \tilde H_0^{n+1}(0) = 0,\qquad
\tilde P_K^{n+1}(T) = z\tilde E_K^{n+1}(T),\quad \tilde Q_K^{n+1}(T) = z\tilde H_K^{n+1}(T)\quad\text{in } \Omega,
\tag{5.4}
\]
subject to

\[
\begin{cases}
\beta\tilde E_k^{n+1}(T_{k+1}) - \tilde P_k^{n+1}(T_{k+1}) = \tilde\mu^n_{k,k+1},\\
\beta\tilde H_k^{n+1}(T_{k+1}) - \tilde Q_k^{n+1}(T_{k+1}) = \tilde\eta^n_{k,k+1},
\end{cases}
\quad k=0,\dots,K-1,
\qquad
\begin{cases}
\beta\tilde E_k^{n+1}(T_k) + \tilde P_k^{n+1}(T_k) = \tilde\mu^n_{k,k-1},\\
-\beta\tilde H_k^{n+1}(T_k) - \tilde Q_k^{n+1}(T_k) = \tilde\eta^n_{k,k-1},
\end{cases}
\quad k=1,\dots,K,
\tag{5.5}
\]

where, for n≥1,

\[
\begin{aligned}
\tilde\mu^n_{k,k+1} &= (1-\rho)\big(\beta\tilde E_{k+1}^n(T_{k+1}) - \tilde P_{k+1}^n(T_{k+1})\big) + \rho\big(\beta\tilde E_k^n(T_{k+1}) - \tilde P_k^n(T_{k+1})\big),\\
\tilde\eta^n_{k,k+1} &= (1-\rho)\big(\beta\tilde H_{k+1}^n(T_{k+1}) - \tilde Q_{k+1}^n(T_{k+1})\big) + \rho\big(\beta\tilde H_k^n(T_{k+1}) - \tilde Q_k^n(T_{k+1})\big),\\
\tilde\mu^n_{k,k-1} &= (1-\rho)\big(\beta\tilde E_{k-1}^n(T_k) + \tilde P_{k-1}^n(T_k)\big) + \rho\big(\beta\tilde E_k^n(T_k) + \tilde P_k^n(T_k)\big),\\
\tilde\eta^n_{k,k-1} &= (1-\rho)\big(-\beta\tilde H_{k-1}^n(T_k) - \tilde Q_{k-1}^n(T_k)\big) + \rho\big(-\beta\tilde H_k^n(T_k) - \tilde Q_k^n(T_k)\big),
\end{aligned}
\tag{5.6}
\]
\[
\begin{aligned}
\tilde\mu^0_{k,k+1} &= \mu^0_{k,k+1} - \big(\beta E(T_{k+1}) - P(T_{k+1})\big), &\qquad
\tilde\eta^0_{k,k+1} &= \eta^0_{k,k+1} - \big(\beta H(T_{k+1}) - Q(T_{k+1})\big),\\
\tilde\mu^0_{k,k-1} &= \mu^0_{k,k-1} - \big(\beta E(T_k) + P(T_k)\big), &\qquad
\tilde\eta^0_{k,k-1} &= \eta^0_{k,k-1} + \big(\beta H(T_k) + Q(T_k)\big).
\end{aligned}
\tag{5.7}
\]


We shall prove the following convergence result:

Theorem 5.1. Let $\beta>0$. Then

(a) for any $\rho\in[0,1)$,
\[
\begin{aligned}
&\nu\wedge\tilde P_k^n|_{\Sigma_k} \to 0 \ \text{strongly in } \mathcal L^2_\tau(\Sigma_k),\quad k=0,\dots,K,\\
&(\tilde P_K^n,\tilde Q_K^n) \to 0 \ \text{in } C(I_K;\mathcal H),\qquad
(\tilde E_0^n,\tilde H_0^n) \to 0 \ \text{in } C(I_0;\mathcal H),\\
&\nu\wedge\tilde E_0^n|_{\Sigma_0} \to 0 \ \text{strongly in } \mathcal L^2_\tau(\Sigma_0).
\end{aligned}
\]
(b) For any $\rho\in(0,1)$ and for $k=0,\dots,K$,
\[
(\tilde E_k^n,\tilde H_k^n) \to 0 \ \text{and}\ (\tilde P_k^n,\tilde Q_k^n) \to 0 \ \text{in } C(I_k;\mathcal H),\qquad
\nu\wedge\tilde E_k^n|_{\Sigma_k} \to 0 \ \text{strongly in } \mathcal L^2_\tau(\Sigma_k).
\]

Remark 5.1. Part (a)$_{1,2}$ states that for any $\rho\in[0,1)$ the effective local optimal controls $\{\nu\wedge P_k^n|_{\Sigma_k}\}_{k=0}^K$ converge strongly to the global optimal control $\nu\wedge P|_\Sigma$, and the deviation $\|(E_K^n(T),H_K^n(T)) - (E_T,H_T)\|_{\mathcal H}$ of the local optimal trajectory $(E_K^n,H_K^n)$ at time $T$ from the target state $(E_T,H_T)$ converges to the deviation $\|(E(T),H(T)) - (E_T,H_T)\|_{\mathcal H}$ of the global optimal trajectory at time $T$ from the target state. An a posteriori estimate of the error in the approximation is derived in the next section.

Remark 5.2. When $\rho=0$ it is possible to show that
\[
\begin{aligned}
&(\tilde E_k^n,\tilde H_k^n) \to 0 \ \text{weakly}^*\text{ in } L^\infty(0,T;\mathcal H),\qquad
\nu\wedge\tilde E_k^n|_{\Sigma_k} \to 0 \ \text{weakly in } \mathcal L^2_\tau(\Sigma_k),\quad k=1,\dots,K,\\
&(\tilde P_k^n,\tilde Q_k^n) \to 0 \ \text{weakly}^*\text{ in } L^\infty(0,T;\mathcal H),\quad k=0,\dots,K-1,
\end{aligned}
\tag{5.8}
\]
provided the following backwards uniqueness property is valid: if $(E,H)$ satisfies the Maxwell system $(2.1)_{1,2}$ with $F=G=0$ and the boundary condition $(2.1)_3$ with $J=0$, and if $E(T)=H(T)=0$, then $E(0)=H(0)=0$.

Whether or not this uniqueness property holds seems to be an open question.

Proof of Theorem 5.1. Although the technical details differ, the proof given below is structurally similar to proofs given in earlier works such as [3–5], and in papers by the present authors (see, e.g., [8, 10] and references therein), all of which dealt with space domain decomposition of either direct or optimal control problems of one type or another. A key role in the convergence proofs in all of the papers is played by a fundamental recursion formula such as (5.22) below.

Set $\mathcal X = \mathcal H^{2K}$ with the standard product norm. Let
\[
X = \big\{(\mu_{k,k+1},\eta_{k,k+1})_{k=0}^{K-1},\ (\mu_{k,k-1},\eta_{k,k-1})_{k=1}^{K}\big\}\in\mathcal X
\]
and let $(E_k,H_k)$, $(P_k,Q_k)$ be the solution of (4.8–4.10, 4.6) with the superscripts $n+1$ and $n$ removed and with zero data
\[
F_k = G_k = 0,\qquad \phi = \psi = E_T = H_T = 0.
\tag{5.9}
\]
Define the linear mapping $\mathcal T:\mathcal X\mapsto\mathcal X$ by
\[
\mathcal T X = \big\{\big(\beta E_{k+1}(T_{k+1}) - P_{k+1}(T_{k+1}),\ \beta H_{k+1}(T_{k+1}) - Q_{k+1}(T_{k+1})\big)_{k=0}^{K-1},\
\big(\beta E_{k-1}(T_k) + P_{k-1}(T_k),\ -\beta H_{k-1}(T_k) - Q_{k-1}(T_k)\big)_{k=1}^{K}\big\}.
\]


Note that $X$ is a fixed point of $\mathcal T$ if and only if the transmission conditions (4.4) are satisfied, that is, if and only if $\{(E_k,H_k),(P_k,Q_k)\}_{k=0}^K$ is the solution of the global optimality system with vanishing data (5.9). Since, in this case, the optimal control is obviously $J_{\mathrm{opt}}=0$, it follows that the only fixed point of $\mathcal T$ is $X=0$.

The significance of the map $\mathcal T$ is that, if we set
\[
X^n = \big\{(\tilde\mu^{n-1}_{k,k+1},\tilde\eta^{n-1}_{k,k+1})_{k=0}^{K-1},\ (\tilde\mu^{n-1}_{k,k-1},\tilde\eta^{n-1}_{k,k-1})_{k=1}^{K}\big\}
\tag{5.10}
\]
and let $\{(\tilde E_k^n,\tilde H_k^n),(\tilde P_k^n,\tilde Q_k^n)\}_{k=0}^K$ be the solution of (5.2–5.5) with $n$ replaced by $n-1$, then
\[
\begin{aligned}
X^n &= \big\{\big(\beta\tilde E_k^n(T_{k+1}) - \tilde P_k^n(T_{k+1}),\ \beta\tilde H_k^n(T_{k+1}) - \tilde Q_k^n(T_{k+1})\big)_{k=0}^{K-1},\
\big(\beta\tilde E_k^n(T_k) + \tilde P_k^n(T_k),\ -\beta\tilde H_k^n(T_k) - \tilde Q_k^n(T_k)\big)_{k=1}^{K}\big\},\\
\mathcal T X^n &= \big\{\big(\beta\tilde E_{k+1}^n(T_{k+1}) - \tilde P_{k+1}^n(T_{k+1}),\ \beta\tilde H_{k+1}^n(T_{k+1}) - \tilde Q_{k+1}^n(T_{k+1})\big)_{k=0}^{K-1},\
\big(\beta\tilde E_{k-1}^n(T_k) + \tilde P_{k-1}^n(T_k),\ -\beta\tilde H_{k-1}^n(T_k) - \tilde Q_{k-1}^n(T_k)\big)_{k=1}^{K}\big\},
\end{aligned}
\]
and the relaxed iteration step (5.5, 5.6) may be expressed as the relaxed fixed point iteration
\[
X^{n+1} = (1-\rho)\,\mathcal T X^n + \rho\,X^n =: \mathcal T_\rho X^n.
\tag{5.11}
\]

The following result shows that $\mathcal T$ is nonexpansive.

Lemma 5.1. For any $X\in\mathcal X$,
\[
\|\mathcal T X\|^2_{\mathcal X} = \|X\|^2_{\mathcal X} - 2\mathcal F_\beta,
\tag{5.12}
\]
where
\[
\mathcal F_\beta = 2\beta\sum_{k=0}^{K}\int_{\Sigma_k}|\nu\wedge P_k|^2\,\mathrm d\Sigma
+ 2\beta z\,\|(E_K(T_{K+1}),H_K(T_{K+1}))\|^2_{\mathcal H}.
\tag{5.13}
\]

Proof. We have
\[
\|X\|^2_{\mathcal X} = \sum_{k=0}^{K-1}\big\|\big(\beta E_k(T_{k+1}) - P_k(T_{k+1}),\ \beta H_k(T_{k+1}) - Q_k(T_{k+1})\big)\big\|^2_{\mathcal H}
+ \sum_{k=1}^{K}\big\|\big(\beta E_k(T_k) + P_k(T_k),\ -\beta H_k(T_k) - Q_k(T_k)\big)\big\|^2_{\mathcal H}.
\tag{5.14}
\]

Let us define
\[
\mathcal E_{k,\beta}(t) = \beta^2\big(\|E_k(t)\|^2_\varepsilon + \|H_k(t)\|^2_\mu\big) + \|P_k(t)\|^2_\varepsilon + \|Q_k(t)\|^2_\mu.
\]
One finds after a little calculation that (5.14) may be written
\[
\begin{aligned}
\|X\|^2_{\mathcal X} ={}& \sum_{k=1}^{K-1}\big(\mathcal E_{k,\beta}(T_{k+1}) + \mathcal E_{k,\beta}(T_k)\big)
+ \mathcal E_{0,\beta}(T_1) + \mathcal E_{K,\beta}(T_K)\\
&- 2\beta\sum_{k=0}^{K-1}\operatorname{Re}\Big[\langle E_k(t),P_k(t)\rangle_\varepsilon + \langle H_k(t),Q_k(t)\rangle_\mu\Big]_{t=T_k}^{T_{k+1}}\\
&+ 2\beta\operatorname{Re}\Big(\langle E_K(T_K),P_K(T_K)\rangle_\varepsilon + \langle H_K(T_K),Q_K(T_K)\rangle_\mu\Big).
\end{aligned}
\tag{5.15}
\]


Form
\[
\begin{aligned}
0 &= \int_{T_k}^{T_{k+1}}\big\{\langle \varepsilon E_k' - \operatorname{rot} H_k + \sigma E_k,\,P_k\rangle
+ \langle \mu H_k' + \operatorname{rot} E_k,\,Q_k\rangle\big\}\,\mathrm dt\\
&= \Big[\big\langle (E_k(t),H_k(t)),\,(P_k(t),Q_k(t))\big\rangle_{\mathcal H}\Big]_{t=T_k}^{T_{k+1}}
+ \int_{\Sigma_k}|\nu\wedge P_k|^2\,\mathrm d\Sigma,
\end{aligned}
\tag{5.16}
\]
where we have used (3.3). In particular we have
\[
\begin{aligned}
\big\langle (E_K(T_K),H_K(T_K)),\,(P_K(T_K),Q_K(T_K))\big\rangle_{\mathcal H}
&= \big\langle (E_K(T_{K+1}),H_K(T_{K+1})),\,(P_K(T_{K+1}),Q_K(T_{K+1}))\big\rangle_{\mathcal H}
+ \int_{\Sigma_K}|\nu\wedge P_K|^2\,\mathrm d\Sigma\\
&= z\,\|(E_K(T_{K+1}),H_K(T_{K+1}))\|^2_{\mathcal H} + \int_{\Sigma_K}|\nu\wedge P_K|^2\,\mathrm d\Sigma.
\end{aligned}
\tag{5.17}
\]

It follows from (5.15–5.17) that
\[
\|X\|^2_{\mathcal X} = \mathcal E_\beta + \mathcal F_\beta,
\tag{5.18}
\]
where
\[
\mathcal E_\beta = \sum_{k=1}^{K-1}\big(\mathcal E_{k,\beta}(T_{k+1}) + \mathcal E_{k,\beta}(T_k)\big)
+ \mathcal E_{0,\beta}(T_1) + \mathcal E_{K,\beta}(T_K).
\tag{5.19}
\]
A similar calculation yields

\[
\begin{aligned}
\|\mathcal T X\|^2_{\mathcal X}
={}& \sum_{k=0}^{K-1}\big\|\big(\beta E_{k+1}(T_{k+1}) - P_{k+1}(T_{k+1}),\ \beta H_{k+1}(T_{k+1}) - Q_{k+1}(T_{k+1})\big)\big\|^2_{\mathcal H}\\
&+ \sum_{k=1}^{K}\big\|\big(\beta E_{k-1}(T_k) + P_{k-1}(T_k),\ -\beta H_{k-1}(T_k) - Q_{k-1}(T_k)\big)\big\|^2_{\mathcal H}\\
={}& \mathcal E_\beta + 2\beta\sum_{k=0}^{K-1}\operatorname{Re}\Big[\langle E_k(t),P_k(t)\rangle_\varepsilon + \langle H_k(t),Q_k(t)\rangle_\mu\Big]_{t=T_k}^{T_{k+1}}\\
&- 2\beta\operatorname{Re}\Big(\langle E_K(T_K),P_K(T_K)\rangle_\varepsilon + \langle H_K(T_K),Q_K(T_K)\rangle_\mu\Big)\\
={}& \mathcal E_\beta - \mathcal F_\beta.
\end{aligned}
\tag{5.20}
\]
Thus (5.12) follows from (5.18) and (5.20).
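Although the infinite-dimensional argument rests on the energy identities above, the mechanism behind the relaxed iteration (5.11) can be illustrated on a finite-dimensional toy example (purely illustrative, not from the paper): for a norm-preserving nonexpansive map whose only fixed point is 0, the unrelaxed iteration need not converge, whereas averaging with the identity does.

```python
import numpy as np

# Toy illustration of X^{n+1} = (1 - rho) * T X^n + rho * X^n, cf. (5.11).
# Here T is a plane rotation: nonexpansive, norm preserving, only fixed point 0.
theta = 0.7
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def iterate(rho, x0, n_iters=200):
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        x = (1.0 - rho) * (T @ x) + rho * x   # relaxed fixed-point step
    return np.linalg.norm(x)

x0 = [1.0, 0.0]
print("rho = 0.0 :", iterate(0.0, x0))   # norm stays 1: plain iteration does not converge
print("rho = 0.5 :", iterate(0.5, x0))   # norm decays towards the unique fixed point 0
```

For this rotation the eigenvalues of $(1-\rho)T+\rho I$ have modulus strictly less than one for every $\rho\in(0,1)$, which is the finite-dimensional analogue of the role played by the relaxation parameter in part (b) of Theorem 5.1.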

To continue with the proof of Theorem 5.1, we set
\[
\begin{aligned}
\mathcal E^n_{k,\beta}(t) &= \beta^2\big(\|\tilde E_k^n(t)\|^2_\varepsilon + \|\tilde H_k^n(t)\|^2_\mu\big) + \|\tilde P_k^n(t)\|^2_\varepsilon + \|\tilde Q_k^n(t)\|^2_\mu,\\
\mathcal E^n_\beta &= \sum_{k=1}^{K-1}\big(\mathcal E^n_{k,\beta}(T_{k+1}) + \mathcal E^n_{k,\beta}(T_k)\big)
+ \mathcal E^n_{0,\beta}(T_1) + \mathcal E^n_{K,\beta}(T_K)
= \sum_{k=0}^{K-1}\big(\mathcal E^n_{k,\beta}(T_{k+1}) + \mathcal E^n_{k+1,\beta}(T_{k+1})\big),\\
\mathcal F^n_\beta &= 2\beta\sum_{k=0}^{K}\int_{\Sigma_k}|\nu\wedge\tilde P_k^n|^2\,\mathrm d\Sigma
+ 2\beta z\,\|(\tilde E_K^n(T_{K+1}),\tilde H_K^n(T_{K+1}))\|^2_{\mathcal H}.
\end{aligned}
\]
