A sensitivity-based extrapolation technique for the numerical solution of state-constrained optimal control problems



DOI:10.1051/cocv/2009016 www.esaim-cocv.org

Michael Hintermüller (1,2) and Irwin Yousept (3)

Abstract. Sensitivity analysis (with respect to the regularization parameter) of the solution of a class of regularized state-constrained optimal control problems is performed. The theoretical results are then used to establish an extrapolation-based numerical scheme for solving the regularized problem for vanishing regularization parameter. In this context, the extrapolation technique provides excellent initializations along the sequence of reducing regularization parameters. Finally, the favorable numerical behavior of the new method is demonstrated and a comparison to classical continuation methods is provided.

Mathematics Subject Classification. 49M15, 49M37, 65K05, 90C33.

Received July 13, 2007. Revised January 28, 2009.

Published online July 2nd, 2009.

1. Introduction

The numerical treatment of optimal control problems for partial differential equations (PDEs) with pointwise state inequality constraints is challenging due to the measure-valuedness of the Lagrange multiplier associated with the state constraints; see [2] for an analytical assessment. A typical instance of such a problem is given by

$$
\begin{aligned}
&\text{minimize } J(u, y) := \tfrac12\|y-y_d\|_{L^2(\Omega)}^2+\tfrac{\alpha}{2}\|u\|_{L^2(\Omega)}^2\\
&\text{over } (u, y)\in L^2(\Omega)\times H_0^1(\Omega)\cap H^2(\Omega)\\
&\text{subject to } Ay=u \text{ in } \Omega,\quad y=0 \text{ on } \Gamma,\\
&\phantom{\text{subject to }} y_a \le y \le y_b \ \text{ a.e. in } \Omega.
\end{aligned}
\tag{P}
$$

Keywords and phrases. Extrapolation, mixed control-state constraints, PDE-constrained optimization, semismooth Newton algorithm, sensitivity, state constraints.

M.H. acknowledges support by the Austrian Ministry for Science and Education and the Austrian Science Fund FWF under START-program Y305 “Interfaces and Free Boundaries”.

∗∗I.Y. is supported by DFG Research Center Matheon, project C9: Numerical simulation and control of sublimation growth of semiconductor bulk single crystals.

1 Department of Mathematics, Humboldt-University of Berlin, Unter den Linden 6, 10099 Berlin, Germany.

hint@math.hu-berlin.de

2 Institute of Mathematics and Scientific Computing, University of Graz, 8010 Graz, Austria.

3 Institut für Mathematik, Technische Universität Berlin, Str. des 17. Juni 136, 10623 Berlin, Germany.

yousept@math.tu-berlin.de

Article published by EDP Sciences © EDP Sciences, SMAI 2009


where Ω ⊂ ℝ^N is a bounded and sufficiently regular domain, A is a second-order elliptic partial differential operator, and y_d, α, y_a, y_b are given data which will be specified shortly.

In order to have a numerical technique at hand for solving (P) with stable iteration numbers as the mesh size of the discretization is reduced, one can use an approach based on mixed control-state constraints as investigated in [12]. In fact, in order to overcome the measure-valuedness of the Lagrange multiplier associated with the pointwise inequality constraints in (P), one adds εu to y in the set of pointwise inequality constraints and then uses the result in [12], which yields an L²-property of the Lagrange multiplier associated with the modified set of constraints. The regular multiplier therefore represents a smooth approximation of the measure-valued quantity. Using this technique, (P) becomes

$$
\begin{aligned}
&\text{minimize } J(u, y) := \tfrac12\|y-y_d\|_{L^2(\Omega)}^2+\tfrac{\alpha}{2}\|u\|_{L^2(\Omega)}^2\\
&\text{over } (u, y)\in L^2(\Omega)\times H_0^1(\Omega)\\
&\text{subject to } Ay=u \text{ in } \Omega,\quad y=0 \text{ on } \Gamma,\\
&\phantom{\text{subject to }} y_a \le \varepsilon u + y \le y_b \ \text{ a.e. in } \Omega.
\end{aligned}
\tag{P$_\varepsilon$}
$$

Let (ȳ_ε, ū_ε) denote the solution of (P_ε). For solving (P_ε) efficiently, a semismooth Newton method (SSN) was proposed in [4,8], both in function space and on the discrete level. Besides locally superlinear convergence, the mesh independence of SSN for ε > 0 was established.

In view of (P), one is interested in studying the behavior of the solution algorithm as ε ↓ 0. For the numerics in the case of a vanishing regularization parameter, however, it turned out in [4,8] to be crucial to suitably tune ε as it tends to zero and, even more importantly, to initialize the algorithm for solving (P_ε) appropriately along the sequence. Ignoring these issues typically makes the numerical algorithm suffer from ill-conditioning, which is usually reflected by large iteration numbers and reduced numerical solution accuracy.

In this note we focus on this latter point and propose a numerical approach based on an extrapolation technique in order to overcome the aforementioned problems. For this, we first need to study the quality of the dependence of (ȳ_ε, ū_ε) on ε. For instance, under a strict complementarity assumption we prove differentiability of the solution to (P_ε) with respect to ε. Let (ẏ_ε, u̇_ε) denote the corresponding derivative. We then establish a system of sensitivity equations which characterizes (ẏ_ε, u̇_ε) uniquely. Subsequently, the theoretical findings are employed in our numerical approach.
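The benefit of extrapolated initializations can be illustrated on a scalar model problem. The sketch below is our own illustration, not the paper's algorithm: the model equation F(u, ε) = u³ + εu − 1 = 0 and the ε-sequence are hypothetical stand-ins. It compares plain warm-starting of Newton's method along a decreasing ε-sequence with a secant-extrapolated initialization, which mimics the first-order predictor u(ε₂) ≈ u(ε₁) + (ε₂ − ε₁)u̇(ε₁):

```python
# Illustrative sketch (our own toy model, not the paper's algorithm): continuation
# in the regularization parameter eps for the scalar model equation
#   F(u, eps) = u**3 + eps*u - 1 = 0,
# comparing plain warm starts with the secant-extrapolated initialization
#   u0 = u1 + (eps - eps1) * (u1 - u0_prev) / (eps1 - eps0).

def newton(u, eps, tol=1e-12, max_it=50):
    """Newton's method for u^3 + eps*u - 1 = 0; returns (root, update count)."""
    for it in range(max_it):
        F = u**3 + eps * u - 1.0
        if abs(F) < tol:
            return u, it
        u -= F / (3.0 * u**2 + eps)
    return u, max_it

eps_list = [0.5**k for k in range(8)]          # decreasing regularization parameters

# Plain continuation: warm-start each solve with the previous solution.
u_plain, plain_its = 1.0, []
for eps in eps_list:
    u_plain, it = newton(u_plain, eps)
    plain_its.append(it)

# Extrapolated continuation: secant predictor along the eps-sequence.
hist, extra_its = [], []
for k, eps in enumerate(eps_list):
    if k < 2:
        u0 = hist[-1][1] if hist else 1.0       # warm start for the first two solves
    else:
        (e0, v0), (e1, v1) = hist[-2], hist[-1]
        u0 = v1 + (eps - e1) * (v1 - v0) / (e1 - e0)   # first-order predictor
    u_ex, it = newton(u0, eps)
    hist.append((eps, u_ex))
    extra_its.append(it)

print(sum(plain_its[2:]), sum(extra_its[2:]))   # iteration totals for the last six steps
```

Since the secant predictor carries an initialization error of order |ε₂ − ε₁|² instead of |ε₂ − ε₁|, the inner Newton loop typically needs no more, and often fewer, iterations per continuation step.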

The remainder of the paper is organized as follows: in the rest of this section we detail the problem under investigation and settle the notation (Sect. 1.1), and we recall some results on (P_ε) (Sect. 1.2). The subsequent Section 2 is devoted to Lipschitz and differentiability properties of the solution to (P_ε) with respect to ε > 0.

Then, in Section 3 we define our semismooth Newton-type solver based on extrapolation in ε. We end this paper with a report on the numerical behavior, including a comparison to a technique without extrapolation, in Section 4. It turns out that our new method compares favorably to classical continuation methods without extrapolation.

1.1. General assumptions and notation

In connection with our model problem (P), throughout this paper we assume that Ω is an open bounded domain in ℝ^N, N ∈ {2,3}, with sufficiently smooth boundary Γ. The upper and lower bounds y_a, y_b ∈ C(Ω̄) on the state variable y satisfy y_a(x) < y_b(x) for all x ∈ Ω̄ and guarantee that the feasible set of (P) is non-empty.

Moreover, the desired state y_d ∈ L²(Ω) and α > 0 are assumed to be fixed. By u we denote the control variable.

The second-order elliptic partial differential operator A is defined by

$$
Ay(x) = -\sum_{i,j=1}^{N} D_i\big(a_{ij}(x)\,D_j y(x)\big),
$$


where the coefficient functions a_{ij} ∈ C^{0,1}(Ω̄) satisfy the ellipticity condition

$$
\sum_{i,j=1}^{N} a_{ij}(x)\,\xi_i\xi_j \ge \theta\,\|\xi\|_{\mathbb{R}^N}^2 \qquad \forall\,(\xi,x)\in\mathbb{R}^N\times\Omega
$$

for some constant θ > 0. Furthermore, A* stands for the associated adjoint operator. By G we denote the solution operator G : L²(Ω) → H₀¹(Ω) ∩ H²(Ω) that assigns to every u ∈ L²(Ω) the solution y = y(u) ∈ H₀¹(Ω) ∩ H²(Ω) of the state equation

$$
Ay = u \ \text{ in } \Omega,\qquad y = 0 \text{ on } \Gamma.
$$

We set S = ı₀G, where ı₀ denotes the compact embedding operator from H¹(Ω) into L²(Ω).

Using the above notation and assumptions, the regularized problem (P_ε) can be expressed as follows:

$$
\begin{aligned}
&\text{minimize } f(u) := \tfrac12\|Su-y_d\|_{L^2(\Omega)}^2+\tfrac{\alpha}{2}\|u\|_{L^2(\Omega)}^2 \quad\text{over } u\in L^2(\Omega)\\
&\text{subject to } y_a \le (S+\varepsilon I)u \le y_b \ \text{ a.e. in } \Omega,
\end{aligned}
\tag{P$_\varepsilon$}
$$

where I denotes the identity operator in L²(Ω). In [10] the name Lavrentiev-regularized problem was coined for (P_ε).

1.2. Standard results

For every ε > 0, standard arguments guarantee the existence of a unique solution of (P_ε). Throughout the paper we denote this solution by ū_ε with associated state ȳ_ε. As in [10], first-order optimality of (ȳ_ε, ū_ε) can be characterized as follows:

Theorem 1.1 (first-order optimality conditions for (P_ε)). The pair (ȳ_ε, ū_ε) is optimal for (P_ε) if and only if there exist an adjoint state p_ε ∈ H₀¹(Ω) ∩ H²(Ω) and Lagrange multipliers μ_{a,ε}, μ_{b,ε} ∈ L²(Ω) such that

$$
A\bar y_\varepsilon = \bar u_\varepsilon \ \text{ in } \Omega,\qquad \bar y_\varepsilon = 0 \text{ on } \Gamma, \tag{1.1}
$$
$$
A^*p_\varepsilon = \bar y_\varepsilon - y_d + \mu_{b,\varepsilon} - \mu_{a,\varepsilon} \ \text{ in } \Omega,\qquad p_\varepsilon = 0 \text{ on } \Gamma, \tag{1.2}
$$
$$
p_\varepsilon + \alpha\bar u_\varepsilon - \varepsilon(\mu_{a,\varepsilon}-\mu_{b,\varepsilon}) = 0, \tag{1.3}
$$
$$
\varepsilon\bar u_\varepsilon + \bar y_\varepsilon \ge y_a,\quad \mu_{a,\varepsilon}\ge 0,\quad (\mu_{a,\varepsilon},\ \varepsilon\bar u_\varepsilon+\bar y_\varepsilon-y_a)_{L^2(\Omega)}=0, \tag{1.4}
$$
$$
\varepsilon\bar u_\varepsilon + \bar y_\varepsilon \le y_b,\quad \mu_{b,\varepsilon}\ge 0,\quad (\mu_{b,\varepsilon},\ \varepsilon\bar u_\varepsilon+\bar y_\varepsilon-y_b)_{L^2(\Omega)}=0. \tag{1.5}
$$

Using the max operator, the complementarity system (1.4)–(1.5) can be equivalently expressed as

$$
\mu_{a,\varepsilon} = \max\big(0,\ \mu_{a,\varepsilon}-\mu_{b,\varepsilon}+\gamma(y_a-\varepsilon\bar u_\varepsilon-\bar y_\varepsilon)\big), \tag{1.6}
$$
$$
\mu_{b,\varepsilon} = \max\big(0,\ \mu_{b,\varepsilon}-\mu_{a,\varepsilon}+\gamma(\varepsilon\bar u_\varepsilon+\bar y_\varepsilon-y_b)\big), \tag{1.7}
$$


with an arbitrarily fixed γ > 0, cf. [5,13]. Note that γ acts as a weight between the inequality constraints for the primal variable and the corresponding dual variables or Lagrange multipliers. For the choice γ := α/ε² in (1.6)–(1.7) and using (1.3), a short computation yields

$$
\mu_{a,\varepsilon} = \max\Big(0,\ \frac{1}{\varepsilon}p_\varepsilon+\frac{\alpha}{\varepsilon^2}(y_a-\bar y_\varepsilon)\Big), \tag{1.8}
$$
$$
\mu_{b,\varepsilon} = \max\Big(0,\ -\frac{1}{\varepsilon}p_\varepsilon+\frac{\alpha}{\varepsilon^2}(\bar y_\varepsilon-y_b)\Big). \tag{1.9}
$$

This latter system will be useful in our subsequent analysis.
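The reformulations (1.6)–(1.7) rest on the elementary pointwise fact that, for any fixed γ > 0, the conditions μ ≥ 0, c ≥ 0, μc = 0 hold if and only if μ = max(0, μ − γc). A quick numerical sanity check of this equivalence (our own illustration; the sample values are arbitrary):

```python
# Illustrative check (sample values are our own): for any fixed gamma > 0 the
# pointwise complementarity system  mu >= 0, c >= 0, mu*c = 0  is equivalent
# to the fixed-point equation  mu == max(0, mu - gamma*c), which is the
# mechanism behind the reformulations (1.6)-(1.7).

def complementary(mu, c, tol=1e-14):
    """mu >= 0, c >= 0 and mu*c = 0 (up to round-off)."""
    return mu >= 0 and c >= 0 and abs(mu * c) < tol

def max_residual(mu, c, gamma):
    """Residual of the max-based fixed-point equation."""
    return mu - max(0.0, mu - gamma * c)

gamma = 0.5
samples = [(0.0, 2.0), (3.0, 0.0), (0.0, 0.0),    # complementary pairs
           (1.0, 1.0), (-1.0, 2.0), (2.0, -1.0)]  # violating pairs

checks = [(abs(max_residual(mu, c, gamma)) < 1e-14) == complementary(mu, c)
          for mu, c in samples]
print(checks)   # the residual vanishes exactly on the complementary pairs
```

In (1.6)–(1.7) this identity is applied pointwise almost everywhere in Ω; the value of γ only rescales the argument of the max and does not change the solution set.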

Finally, we mention that in [11], Theorem 3.3, the strong convergence in L²(Ω) of ū_ε to ū, the optimal control of (P), is proven. From the results in [9], Lemma 2.4, the weak* convergence in C(Ω̄)* as ε ↓ 0 of μ_{a,ε} and μ_{b,ε} to μ̄_a and μ̄_b, the multipliers associated with the pointwise state constraints at the solution of (P), can be inferred. Further, in [4] the Hölder continuity (with exponent 1/2) of ū_ε with respect to ε > 0 is established.

2. Regularity of solutions to (P_ε)

As pointed out in the introduction, one of our main goals is to establish Lipschitz continuity and differentiability of the mapping ε ↦ ȳ_ε. For this purpose, the transformation of (P_ε) into an associated minimization problem with pure control constraints will be useful.

We start our investigation by considering the operator (εI + S). It is well known that the linear operator S = ı₀G : L²(Ω) → L²(Ω) is positive definite. For this reason, the equation

$$
(\varepsilon I + S)z = 0
$$

admits only the trivial solution z = 0. Thus, by the Fredholm alternative, the compactness of S ensures the existence of the inverse operator (εI + S)^{-1}. Setting (εI + S)^{-1}v = u, or equivalently u = (1/ε)(v − ı₀ y(u)), in (P_ε), we transform (P_ε) into the following optimal control problem with (pointwise) box constraints imposed on the new control variable v:

$$
\begin{aligned}
&\text{minimize } f_\varepsilon(v) := \tfrac12\|S_\varepsilon v-y_d\|_{L^2(\Omega)}^2+\tfrac{\alpha}{2\varepsilon^2}\|v-S_\varepsilon v\|_{L^2(\Omega)}^2 \quad\text{over } v\in L^2(\Omega)\\
&\text{subject to } y_a \le v \le y_b \ \text{ a.e. in } \Omega.
\end{aligned}
\tag{P$_v^\varepsilon$}
$$

Here we use S_ε = ı₀G_ε, where the operator G_ε assigns to each v ∈ L²(Ω) the solution y_ε(v) = y ∈ H₀¹(Ω) ∩ H²(Ω) of the following elliptic equation:

$$
Ay + \frac{1}{\varepsilon}y = \frac{1}{\varepsilon}v \ \text{ in } \Omega,\qquad y = 0 \text{ on } \Gamma. \tag{E$_\varepsilon$}
$$
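On a discrete level, the substitution behind this transformation can be checked directly. The sketch below is our own toy discretization, not the paper's: with S = A⁻¹ for a 1D finite-difference Dirichlet Laplacian, the state y = S(εI + S)⁻¹v coincides with the solution of the shifted equation (A + ε⁻¹I)y = ε⁻¹v, and u = (v − y)/ε recovers the substituted control:

```python
import numpy as np

# Illustrative finite-difference check (our own toy discretization, not the
# paper's): with S = A^{-1} for a 1D Dirichlet Laplacian A, the control
# u = (eps*I + S)^{-1} v gives a state y = S u that coincides with the
# solution of the shifted equation (A + (1/eps) I) y = (1/eps) v, i.e. (E_eps),
# and u = (v - y)/eps recovers the substitution used in the text.

n, eps = 120, 0.1
h = 1.0 / (n + 1)
# Second-order finite differences for A = -d^2/dx^2 on (0,1) with y(0)=y(1)=0.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
S = np.linalg.inv(A)
I = np.eye(n)

v = np.sin(np.pi * np.linspace(h, 1.0 - h, n)) + 0.3   # some test data

u = np.linalg.solve(eps * I + S, v)              # u = (eps I + S)^{-1} v
y_via_S = S @ u                                  # y = S u
y_via_E = np.linalg.solve(A + I / eps, v / eps)  # direct solve of (E_eps)

print(np.max(np.abs(y_via_S - y_via_E)))         # agreement up to round-off
```

The identity holds because (εI + A⁻¹)u = v is algebraically equivalent to (εA + I)Su = v, which is (E_ε) after division by ε.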

Notice that the underlying ideas of converting (P_ε) into (P_v^ε) and of considering the state equation in the form (E_ε) were originally introduced in [10,11]. In what follows, we denote the unique solution of (P_v^ε) by v̄_ε with associated optimal state ȳ_ε. Obviously, (P_v^ε) and (P_ε) are equivalent, i.e., v̄_ε solves (P_v^ε) if and only if ū_ε = (εI + S)^{-1}v̄_ε is the optimal solution of (P_ε). We use the auxiliary problem (P_v^ε) in the following analysis.

The first derivative of f_ε at v in an arbitrary direction s ∈ L²(Ω) is given by

$$
f'_\varepsilon(v)s = \Big(S_\varepsilon v - y_d + \frac{\alpha}{\varepsilon^2}(S_\varepsilon v - v),\ S_\varepsilon s\Big)_{L^2(\Omega)} + \frac{\alpha}{\varepsilon^2}\big(v - S_\varepsilon v,\ s\big)_{L^2(\Omega)}.
$$

By standard arguments, this can be equivalently expressed as

$$
f'_\varepsilon(v) = q_\varepsilon(v) - \frac{\alpha}{\varepsilon^2}\,y_\varepsilon(v) + \frac{\alpha}{\varepsilon^2}\,v, \tag{2.1}
$$


where the adjoint state q_ε = q_ε(v) ∈ H₀¹(Ω) ∩ H²(Ω) is defined as the solution of the following adjoint equation:

$$
A^*q + \frac{1}{\varepsilon}q = \frac{1}{\varepsilon}\Big(y_\varepsilon(v) - y_d + \frac{\alpha}{\varepsilon^2}\big(y_\varepsilon(v) - v\big)\Big) \ \text{ in } \Omega,\qquad q = 0 \text{ on } \Gamma. \tag{2.2}
$$

2.1. Lipschitz continuity

Next we establish a Lipschitz property of y_ε and various other quantities. Since one might be interested in (P_ε) independently of (P), we reduce the regularity requirements on, e.g., Ω, as we do not assume to have H²(Ω)-regularity of the states and adjoints pertinent to (P_ε). We only consider a Lipschitz domain Ω ⊂ ℝ^N with N ∈ {2,3}, which is sufficient to obtain H₀¹(Ω) ∩ C(Ω̄)-regularity of the solutions to the adjoint and state equations (see the elliptic regularity results in [1]).

In what follows, let D = (ε_l, ε_u) ⊂ ℝ_{++} with 0 < ε_l < ε_u.

Lemma 2.1. There exists a positive real number c_s(ε_l) such that

$$
\|y_{\varepsilon_1}(v)-y_{\varepsilon_2}(v)\|_{H_0^1(\Omega)\cap C(\bar\Omega)} \le c_s(\varepsilon_l)\,\|v\|_{L^2(\Omega)}\,|\varepsilon_1-\varepsilon_2|
$$

for all v ∈ L²(Ω) and ε_i ∈ D, i = 1, 2.

Proof. We note that standard elliptic regularity estimates yield a constant c > 0 independent of ε and v such that

$$
\|y_\varepsilon(v)\|_{H_0^1(\Omega)\cap C(\bar\Omega)} \le c\,\|v\|_{L^2(\Omega)} \tag{2.3}
$$

for all v ∈ L²(Ω) and ε ∈ D. Now, let ε_i ∈ D, i = 1, 2, and v ∈ L²(Ω). By definition, for i = 1, 2 the states y_{ε_i}(v) = y_i satisfy

$$
Ay_i + \frac{1}{\varepsilon_i}y_i = \frac{1}{\varepsilon_i}v \ \text{ in } \Omega,\qquad y_i = 0 \text{ on } \Gamma.
$$

From this we infer

$$
A(y_1-y_2) + \frac{1}{\varepsilon_1}(y_1-y_2) = \Big(\frac{1}{\varepsilon_1}-\frac{1}{\varepsilon_2}\Big)(v-y_2) \ \text{ in } \Omega,\qquad y_1-y_2 = 0 \text{ on } \Gamma,
$$

and hence, due to (2.3),

$$
\begin{aligned}
\|y_{\varepsilon_1}(v)-y_{\varepsilon_2}(v)\|_{H_0^1(\Omega)\cap C(\bar\Omega)}
&\le \Big|\frac{1}{\varepsilon_1}-\frac{1}{\varepsilon_2}\Big|\,c\,\|v-y_{\varepsilon_2}(v)\|_{L^2(\Omega)}\\
&\le |\varepsilon_1-\varepsilon_2|\,\frac{c}{\varepsilon_l^2}\big(\|v\|_{L^2(\Omega)}+\|y_{\varepsilon_2}(v)\|_{L^2(\Omega)}\big)\\
&\le |\varepsilon_1-\varepsilon_2|\,\frac{c}{\varepsilon_l^2}\,(1+c)\,\|v\|_{L^2(\Omega)}.
\end{aligned}
$$

Setting c_s(ε_l) := (c/ε_l²)(1+c) yields the assertion.

We proceed by proving the Lipschitz continuity of the adjoint states q_ε.

Lemma 2.2. There exists a positive real number c_q(D) such that

$$
\|q_{\varepsilon_1}(v)-q_{\varepsilon_2}(v)\|_{H_0^1(\Omega)\cap C(\bar\Omega)} \le c_q(D)\,\big(\|v\|_{L^2(\Omega)}+1\big)\,|\varepsilon_1-\varepsilon_2|
$$

for all v ∈ L²(Ω) and ε_1, ε_2 ∈ D.


Proof. An argument analogous to the one in the proof of Lemma 2.1 implies that there exists a constant c(D) > 0 independent of ε and v such that

$$
\|q_\varepsilon(v)\|_{H_0^1(\Omega)\cap C(\bar\Omega)} \le c(D)\big(\|v\|_{L^2(\Omega)}+1\big) \tag{2.4}
$$

for all ε ∈ D and v ∈ L²(Ω). Let ε_1, ε_2 ∈ D. By definition q_{ε_1}(v) = q_1 and q_{ε_2}(v) = q_2 solve, for i = 1, 2,

$$
A^*q_i + \frac{1}{\varepsilon_i}q_i = \frac{1}{\varepsilon_i}\Big(y_{\varepsilon_i}(v)-y_d+\frac{\alpha}{\varepsilon_i^2}\big(y_{\varepsilon_i}(v)-v\big)\Big) \ \text{ in } \Omega,\qquad q_i = 0 \text{ on } \Gamma.
$$

Hence, the difference q_1 − q_2 satisfies

$$
A^*(q_1-q_2)+\frac{1}{\varepsilon_1}(q_1-q_2)=r \ \text{ in } \Omega,\qquad q_1-q_2=0 \text{ on } \Gamma,
$$

where

$$
\begin{aligned}
r ={}& \frac{1}{\varepsilon_1}\big(y_{\varepsilon_1}(v)-y_{\varepsilon_2}(v)\big)+\Big(\frac{1}{\varepsilon_1}-\frac{1}{\varepsilon_2}\Big)\big(y_{\varepsilon_2}(v)-y_d\big)+\frac{\alpha}{\varepsilon_1^3}\big(y_{\varepsilon_1}(v)-y_{\varepsilon_2}(v)\big)\\
&+\Big(\frac{\alpha}{\varepsilon_1^3}-\frac{\alpha}{\varepsilon_2^3}\Big)\big(y_{\varepsilon_2}(v)-v\big)+\Big(\frac{1}{\varepsilon_2}-\frac{1}{\varepsilon_1}\Big)q_2.
\end{aligned}
$$

Since

$$
\Big|\frac{1}{\varepsilon_2^3}-\frac{1}{\varepsilon_1^3}\Big| = \frac{\big(\varepsilon_2^2+\varepsilon_2\varepsilon_1+\varepsilon_1^2\big)\,|\varepsilon_2-\varepsilon_1|}{\varepsilon_1^3\varepsilon_2^3} \le \frac{3\varepsilon_u^2}{\varepsilon_l^6}\,|\varepsilon_2-\varepsilon_1|,
$$

Lemma 2.1 and (2.4) yield a constant c_q(D) > 0 independent of ε_1, ε_2 such that

$$
\|q_{\varepsilon_1}(v)-q_{\varepsilon_2}(v)\|_{H_0^1(\Omega)\cap C(\bar\Omega)} \le c\,\|r\|_{L^2(\Omega)} \le c_q(D)\,|\varepsilon_1-\varepsilon_2|\,\big(\|v\|_{L^2(\Omega)}+1\big),
$$

which ends the proof.

For arguing the Lipschitz continuity of the controls, we require uniform positive definiteness of f''_ε, which we establish next.

Lemma 2.3. There exists a positive real number δ(D) such that for all ε ∈ D we have

$$
f''_\varepsilon(v)s^2 \ge \delta(D)\,\|s\|_{L^2(\Omega)}^2 \qquad \forall\, v, s \in L^2(\Omega). \tag{2.5}
$$

Proof. Let v ∈ L²(Ω) and ε ∈ D be arbitrarily fixed. Then it holds by definition that f_ε(v) = f((S+εI)^{-1}v). Hence, for an arbitrary direction s ∈ L²(Ω), the chain rule implies

$$
\begin{aligned}
f''_\varepsilon(v)s^2 &= f''\big((S+\varepsilon I)^{-1}v\big)\big((S+\varepsilon I)^{-1}s\big)^2\\
&= \big(S(S+\varepsilon I)^{-1}s,\ S(S+\varepsilon I)^{-1}s\big)_{L^2(\Omega)}+\alpha\big((S+\varepsilon I)^{-1}s,\ (S+\varepsilon I)^{-1}s\big)_{L^2(\Omega)}\\
&\ge \alpha\big((S+\varepsilon I)^{-1}s,\ (S+\varepsilon I)^{-1}s\big)_{L^2(\Omega)} = \alpha\,\|(S+\varepsilon I)^{-1}s\|_{L^2(\Omega)}^2\\
&\ge \frac{\alpha}{\|S+\varepsilon I\|_{L^2,L^2}^2}\,\|s\|_{L^2(\Omega)}^2 \ge \frac{\alpha}{\big(\|S\|_{L^2,L^2}+\varepsilon_u\big)^2}\,\|s\|_{L^2(\Omega)}^2.
\end{aligned}
$$

Defining δ(D) := α/(‖S‖_{L²,L²} + ε_u)², the lemma is verified.
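The uniform lower bound of Lemma 2.3 can be observed on a discrete analogue. In the sketch below (our own toy operator, not from the paper; a random symmetric positive semi-definite matrix S stands in for the discretized solution operator), the Rayleigh-type quotient of the Hessian form stays above α/(‖S‖ + ε)²:

```python
import numpy as np

# Illustrative numeric check (our own toy operator, not from the paper): for a
# symmetric positive semi-definite matrix S standing in for the discretized
# solution operator, the Hessian form
#     s -> ||S (S + eps I)^{-1} s||^2 + alpha ||(S + eps I)^{-1} s||^2
# stays above  alpha / (||S|| + eps)^2 * ||s||^2,  mirroring delta(D).

rng = np.random.default_rng(1)
n, alpha = 30, 0.5
B = rng.standard_normal((n, n))
S = B @ B.T                        # symmetric positive semi-definite
norm_S = np.linalg.norm(S, 2)      # spectral norm ||S||

margins = []
for eps in [0.5, 0.25, 0.1, 0.01]:
    Minv = np.linalg.inv(S + eps * np.eye(n))
    bound = alpha / (norm_S + eps) ** 2
    for _ in range(100):
        s = rng.standard_normal(n)
        w = Minv @ s                                   # w = (S + eps I)^{-1} s
        form = (S @ w) @ (S @ w) + alpha * (w @ w)     # f_eps''(v) s^2
        margins.append(form / (s @ s) - bound)

print(min(margins))    # nonnegative up to round-off: the lower bound holds
```

The bound follows exactly as in the proof: ‖s‖ ≤ ‖S + εI‖·‖(S + εI)⁻¹s‖, and for a symmetric positive semi-definite S one has ‖S + εI‖ = ‖S‖ + ε.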


Theorem 2.1. The mapping v : D → L²(Ω), ε ↦ v̄_ε, is Lipschitz continuous.

Proof. Suppose that ε_1, ε_2 ∈ D. By definition v(ε_1) = v̄_{ε_1} and v(ε_2) = v̄_{ε_2} solve (P_v^{ε_1}) and (P_v^{ε_2}), respectively. Hence, first-order optimality yields

$$
f'_{\varepsilon_1}(\bar v_{\varepsilon_1})(v-\bar v_{\varepsilon_1}) \ge 0 \quad \forall v\in V_{ad},\qquad
f'_{\varepsilon_2}(\bar v_{\varepsilon_2})(v-\bar v_{\varepsilon_2}) \ge 0 \quad \forall v\in V_{ad},
$$

where V_ad = {v ∈ L²(Ω) | y_a ≤ v ≤ y_b}. Since v̄_{ε_1} and v̄_{ε_2} are feasible, i.e., v̄_{ε_1}, v̄_{ε_2} ∈ V_ad, the inequalities above imply

$$
\big(f'_{\varepsilon_1}(\bar v_{\varepsilon_1})-f'_{\varepsilon_2}(\bar v_{\varepsilon_2})\big)(\bar v_{\varepsilon_2}-\bar v_{\varepsilon_1}) \ge 0, \tag{2.6}
$$

which is equivalent to

$$
\big(f'_{\varepsilon_1}(\bar v_{\varepsilon_1})-f'_{\varepsilon_1}(\bar v_{\varepsilon_2})+f'_{\varepsilon_1}(\bar v_{\varepsilon_2})-f'_{\varepsilon_2}(\bar v_{\varepsilon_2})\big)(\bar v_{\varepsilon_2}-\bar v_{\varepsilon_1}) \ge 0. \tag{2.7}
$$

Since f_{ε_1} is twice Fréchet differentiable, there exists ṽ ∈ L²(Ω) such that

$$
\big(f'_{\varepsilon_1}(\bar v_{\varepsilon_1})-f'_{\varepsilon_1}(\bar v_{\varepsilon_2})\big)(\bar v_{\varepsilon_2}-\bar v_{\varepsilon_1}) = -f''_{\varepsilon_1}(\tilde v)(\bar v_{\varepsilon_2}-\bar v_{\varepsilon_1})^2.
$$

Then (2.7) yields

$$
\big(f'_{\varepsilon_1}(\bar v_{\varepsilon_2})-f'_{\varepsilon_2}(\bar v_{\varepsilon_2})\big)(\bar v_{\varepsilon_2}-\bar v_{\varepsilon_1}) \ge f''_{\varepsilon_1}(\tilde v)(\bar v_{\varepsilon_2}-\bar v_{\varepsilon_1})^2 \tag{2.8}
$$

and further

$$
\begin{aligned}
\Big(\|q_{\varepsilon_1}(\bar v_{\varepsilon_2})-q_{\varepsilon_2}(\bar v_{\varepsilon_2})\|_{L^2(\Omega)}
&+\frac{\alpha}{\varepsilon_1^2}\,\|y_{\varepsilon_1}(\bar v_{\varepsilon_2})-y_{\varepsilon_2}(\bar v_{\varepsilon_2})\|_{L^2(\Omega)}\\
&+\frac{\alpha}{\varepsilon_1^2\varepsilon_2^2}\,|\varepsilon_1-\varepsilon_2|\,|\varepsilon_1+\varepsilon_2|\,\big(\|y_{\varepsilon_2}(\bar v_{\varepsilon_2})\|_{L^2(\Omega)}+\|\bar v_{\varepsilon_2}\|_{L^2(\Omega)}\big)\Big)\,
\|\bar v_{\varepsilon_2}-\bar v_{\varepsilon_1}\|_{L^2(\Omega)} \ge f''_{\varepsilon_1}(\tilde v)(\bar v_{\varepsilon_2}-\bar v_{\varepsilon_1})^2,
\end{aligned}
$$

where we used (2.1). Due to Lemmas 2.1–2.3, there exists a constant c(D) > 0 independent of ε_1, ε_2 such that

$$
c(D)\,|\varepsilon_2-\varepsilon_1|\,\big(\|\bar v_{\varepsilon_2}\|_{L^2(\Omega)}+1\big) \ge \|\bar v_{\varepsilon_2}-\bar v_{\varepsilon_1}\|_{L^2(\Omega)}. \tag{2.9}
$$

Notice that the mapping v(·) is uniformly bounded in L^∞(Ω), since v̄_ε ∈ V_ad for all ε ∈ ℝ_{++}. This together with (2.9) proves the assertion.

We have the following immediate corollaries establishing the Lipschitz continuity of ȳ_ε and ū_ε.

Corollary 2.1. The mapping ε ↦ ȳ_ε is Lipschitz continuous from D to H₀¹(Ω) ∩ C(Ω̄).

Proof. Theorem 2.1 and Lemma 2.1 guarantee the existence of a constant c(D) > 0 independent of ε_1, ε_2 ∈ D such that

$$
\|\bar y_{\varepsilon_1}-\bar y_{\varepsilon_2}\|_{H_0^1(\Omega)\cap C(\bar\Omega)} \le \|y_{\varepsilon_1}(\bar v_{\varepsilon_1})-y_{\varepsilon_2}(\bar v_{\varepsilon_1})\|_{H_0^1(\Omega)\cap C(\bar\Omega)}+\|y_{\varepsilon_2}(\bar v_{\varepsilon_1}-\bar v_{\varepsilon_2})\|_{H_0^1(\Omega)\cap C(\bar\Omega)} \le c(D)\,|\varepsilon_1-\varepsilon_2|
$$

for all ε_1, ε_2 ∈ D.

Corollary 2.2. The mapping ε ↦ ū_ε is Lipschitz continuous from D to L²(Ω).

Proof. The assertion is immediate since, by construction, ū_ε = (1/ε)(v̄_ε − ȳ_ε) for all ε > 0.

Clearly, Corollaries 2.1 and 2.2 further imply the following result.

Corollary 2.3. The mapping ε ↦ p_ε is Lipschitz continuous from D to H₀¹(Ω) ∩ C(Ω̄).

Proof. Utilizing (1.3) in (1.2) and exploiting the Lipschitz continuity of ū_ε and ȳ_ε, respectively, yields the assertion.


2.2. Parameter sensitivity

We continue our investigation by studying the mapping ε ↦ (ȳ_ε, p_ε). Under a strict complementarity assumption, we establish the differentiability of this mapping with respect to ε. First of all, we define the following operators:

$$
g_a : D \to C(\bar\Omega),\qquad g_a(\varepsilon) = \frac{1}{\varepsilon}p_\varepsilon-\frac{\alpha}{\varepsilon^2}\bar y_\varepsilon+\frac{\alpha}{\varepsilon^2}y_a, \tag{2.10}
$$
$$
g_b : D \to C(\bar\Omega),\qquad g_b(\varepsilon) = -\frac{1}{\varepsilon}p_\varepsilon+\frac{\alpha}{\varepsilon^2}\bar y_\varepsilon-\frac{\alpha}{\varepsilon^2}y_b. \tag{2.11}
$$

Note that g_a and g_b appear in the max operators in (1.8)–(1.9):

$$
\mu_{a,\varepsilon} = \max\Big(0,\ \frac{1}{\varepsilon}p_\varepsilon+\frac{\alpha}{\varepsilon^2}(y_a-\bar y_\varepsilon)\Big) = \max\big(0, g_a(\varepsilon)\big),\qquad
\mu_{b,\varepsilon} = \max\Big(0,\ -\frac{1}{\varepsilon}p_\varepsilon+\frac{\alpha}{\varepsilon^2}(\bar y_\varepsilon-y_b)\Big) = \max\big(0, g_b(\varepsilon)\big). \tag{2.12}
$$

Therefore, in order to study the dependence of the Lagrange multipliers μ_{a,ε}, μ_{b,ε} on the regularization parameter, we need to consider the operators g_a, g_b. Suppose that ε ∈ D is arbitrarily fixed. Then the Lipschitz continuity of the mapping ε ↦ (ū_ε, ȳ_ε, p_ε) from D to L²(Ω) × H₀¹(Ω) × H₀¹(Ω) ensures the existence of a weak accumulation point (u̇, ẏ, ṗ) ∈ L²(Ω) × H₀¹(Ω) × H₀¹(Ω) of

$$
\left(\frac{\bar u_{\varepsilon_n}-\bar u_\varepsilon}{\varepsilon_n-\varepsilon},\ \frac{\bar y_{\varepsilon_n}-\bar y_\varepsilon}{\varepsilon_n-\varepsilon},\ \frac{p_{\varepsilon_n}-p_\varepsilon}{\varepsilon_n-\varepsilon}\right) \tag{2.13}
$$

as ε_n → ε ∈ D for n → ∞. Further,

$$
\begin{aligned}
\frac{1}{\varepsilon_n-\varepsilon}\big(g_a(\varepsilon_n)-g_a(\varepsilon)\big)
&= \frac{1}{\varepsilon_n-\varepsilon}\left(\frac{1}{\varepsilon_n}p_{\varepsilon_n}-\frac{\alpha}{\varepsilon_n^2}\bar y_{\varepsilon_n}+\frac{\alpha}{\varepsilon_n^2}y_a-\frac{1}{\varepsilon}p_\varepsilon+\frac{\alpha}{\varepsilon^2}\bar y_\varepsilon-\frac{\alpha}{\varepsilon^2}y_a\right)\\
&= \frac{1}{\varepsilon}\,\frac{p_{\varepsilon_n}-p_\varepsilon}{\varepsilon_n-\varepsilon}-\frac{1}{\varepsilon_n\varepsilon}\,p_{\varepsilon_n}+\frac{\alpha(\varepsilon_n+\varepsilon)}{\varepsilon^2\varepsilon_n^2}\,\bar y_{\varepsilon_n}-\frac{\alpha}{\varepsilon^2}\,\frac{\bar y_{\varepsilon_n}-\bar y_\varepsilon}{\varepsilon_n-\varepsilon}-\frac{\alpha(\varepsilon_n+\varepsilon)}{\varepsilon^2\varepsilon_n^2}\,y_a
\end{aligned}
$$

has the weak accumulation point

$$
\dot g_a(\varepsilon) = \frac{1}{\varepsilon}\dot p-\frac{1}{\varepsilon^2}p_\varepsilon+\frac{2\alpha}{\varepsilon^3}\bar y_\varepsilon-\frac{\alpha}{\varepsilon^2}\dot y-\frac{2\alpha}{\varepsilon^3}y_a
$$

in H₀¹(Ω) as ε_n → ε for n → ∞. Analogously, there exists a weak accumulation point of (1/(ε_n − ε))(g_b(ε_n) − g_b(ε)), which is given by

$$
\dot g_b(\varepsilon) = -\frac{1}{\varepsilon}\dot p+\frac{1}{\varepsilon^2}p_\varepsilon-\frac{2\alpha}{\varepsilon^3}\bar y_\varepsilon+\frac{\alpha}{\varepsilon^2}\dot y+\frac{2\alpha}{\varepsilon^3}y_b.
$$

By the compact embedding H₀¹(Ω) ⊂ L²(Ω), the accumulation points ġ_a(ε) and ġ_b(ε) are strong in L²(Ω).

Without any further assumption, in general there is a need to distinguish between upper and lower accumulation points, depending on whether ε_n ↓ ε or ε_n ↑ ε, respectively. See [7] for a similar observation in a related context. However, under some type of strict complementarity condition this distinction is not required, as all limits over ε_n-sequences yield a unique accumulation point. This motivates the following assumption.


Assumption 2.1. We assume that the solution of (P_ε) satisfies the strict complementarity condition

$$
\begin{aligned}
\operatorname{meas}\Big\{x\in\Omega \;\Big|\; g_a(\varepsilon)(x) = \tfrac{1}{\varepsilon}p_\varepsilon(x)-\tfrac{\alpha}{\varepsilon^2}\bar y_\varepsilon(x)+\tfrac{\alpha}{\varepsilon^2}y_a(x) = 0\Big\} &= 0,\\
\operatorname{meas}\Big\{x\in\Omega \;\Big|\; g_b(\varepsilon)(x) = -\tfrac{1}{\varepsilon}p_\varepsilon(x)+\tfrac{\alpha}{\varepsilon^2}\bar y_\varepsilon(x)-\tfrac{\alpha}{\varepsilon^2}y_b(x) = 0\Big\} &= 0.
\end{aligned}
\tag{SC}
$$

Notice that by (1.8)–(1.9) the Lagrange multipliers for (P_ε) are given by

$$
\mu_{a,\varepsilon} = \max\Big(0,\ \frac{1}{\varepsilon}p_\varepsilon+\frac{\alpha}{\varepsilon^2}(y_a-\bar y_\varepsilon)\Big) = g_a^+(\varepsilon),\qquad
\mu_{b,\varepsilon} = \max\Big(0,\ -\frac{1}{\varepsilon}p_\varepsilon+\frac{\alpha}{\varepsilon^2}(\bar y_\varepsilon-y_b)\Big) = g_b^+(\varepsilon),
$$

where g_a⁺ = max(0, g_a) and analogously for g_b⁺. Hence, using (1.3), the condition (SC) requires that

$$
\operatorname{meas}\big\{x\in\Omega : (\mu_{a,\varepsilon}-\mu_{b,\varepsilon})(x)+\gamma\,(y_a-\varepsilon\bar u_\varepsilon-\bar y_\varepsilon)(x) = 0\big\} = 0,
$$

or equivalently, that the set on which the max operation in (1.6) is non-differentiable is of measure zero; analogously for the set involving μ_{b,ε} and y_b.

Next we study the behavior of

$$
\frac{g_a^+(\varepsilon_n)-g_a^+(\varepsilon)}{\varepsilon_n-\varepsilon} \quad\text{and}\quad \frac{g_b^+(\varepsilon_n)-g_b^+(\varepsilon)}{\varepsilon_n-\varepsilon}
$$

as ε_n → ε for n → ∞. For this purpose we introduce

$$
S_{a,\varepsilon} := \{x\in\Omega : g_a(\varepsilon)(x) > 0\}
$$

and analogously S_{b,ε}. By S_{a,ε}^c and S_{b,ε}^c we denote the complements of S_{a,ε} and S_{b,ε} in Ω, respectively. Recall that Assumption 2.1 implies meas{x ∈ Ω : g_a(ε)(x) = 0} = 0. For this reason, we have

$$
\operatorname{meas} S_{a,\varepsilon} = \operatorname{meas}\{x\in\Omega : g_a(\varepsilon)(x) > 0\} = \operatorname{meas}\{x\in\Omega : g_a(\varepsilon)(x) \ge 0\}
$$

and analogously for S_{b,ε}. Hence, for v ∈ L²(Ω) we infer

$$
\begin{aligned}
\int_\Omega \frac{g_a^+(\varepsilon_n)-g_a^+(\varepsilon)}{\varepsilon_n-\varepsilon}\,v\,\mathrm{d}x
={}& \frac{1}{\varepsilon_n-\varepsilon}\int_{S_{a,\varepsilon}^c}\big(g_a^+(\varepsilon_n)-g_a^+(\varepsilon)\big)\,v\,\mathrm{d}x\\
&+ \frac{1}{\varepsilon_n-\varepsilon}\int_{S_{a,\varepsilon}}\big(g_a^+(\varepsilon_n)-g_a(\varepsilon_n)\big)\,v\,\mathrm{d}x
+ \frac{1}{\varepsilon_n-\varepsilon}\int_{S_{a,\varepsilon}}\big(g_a(\varepsilon_n)-g_a(\varepsilon)\big)\,v\,\mathrm{d}x.
\end{aligned}
$$

From the strict complementarity condition (SC) and the definition of the set S_{a,ε} we obtain

$$
g_a(\varepsilon)(x) < 0 \ \text{ for a.a. } x\in S_{a,\varepsilon}^c \quad\text{and}\quad g_a(\varepsilon)(x) > 0 \ \text{ for a.a. } x\in S_{a,\varepsilon}. \tag{2.14}
$$

In addition, Corollaries 2.1 and 2.3 yield

$$
\lim_{n\to\infty} g_a(\varepsilon_n) = g_a(\varepsilon) \ \text{ in } C(\bar\Omega). \tag{2.15}
$$


The latter uniform convergence together with (2.14) results in

$$
\lim_{n\to\infty} \frac{g_a^+(\varepsilon_n)(x)-g_a^+(\varepsilon)(x)}{\varepsilon_n-\varepsilon} = 0 \ \text{ for a.a. } x\in S_{a,\varepsilon}^c,\qquad
\lim_{n\to\infty} \frac{g_a^+(\varepsilon_n)(x)-g_a(\varepsilon_n)(x)}{\varepsilon_n-\varepsilon} = 0 \ \text{ for a.a. } x\in S_{a,\varepsilon}.
$$

Hence, for n → ∞, the Lebesgue dominated convergence theorem and Assumption 2.1 yield

$$
\frac{1}{\varepsilon_n-\varepsilon}\int_{S_{a,\varepsilon}^c}\big(g_a^+(\varepsilon_n)-g_a^+(\varepsilon)\big)\,v\,\mathrm{d}x \to 0,\qquad
\frac{1}{\varepsilon_n-\varepsilon}\int_{S_{a,\varepsilon}}\big(g_a^+(\varepsilon_n)-g_a(\varepsilon_n)\big)\,v\,\mathrm{d}x \to 0,
$$
$$
\frac{1}{\varepsilon_n-\varepsilon}\int_{S_{a,\varepsilon}}\big(g_a(\varepsilon_n)-g_a(\varepsilon)\big)\,v\,\mathrm{d}x \to \int_\Omega \dot g_a(\varepsilon)\,\chi_{S_{a,\varepsilon}}\,v\,\mathrm{d}x,
$$

where χ_{S_{a,ε}} is the characteristic function of S_{a,ε}. The analogous result holds true for g_b⁺. These properties of g_a⁺, g_b⁺ are important for the proof of the following theorem.

Theorem 2.2. Suppose that the solution of (P_ε) satisfies Assumption 2.1. Then the mappings y : D → L²(Ω), ε ↦ ȳ_ε, and p : D → L²(Ω), ε ↦ p_ε, are strongly differentiable at ε ∈ D.

Proof. Let (δu, δy, δp) be the difference of two weak accumulation points of

$$
\left(\frac{\bar u_{\varepsilon_n}-\bar u_\varepsilon}{\varepsilon_n-\varepsilon},\ \frac{\bar y_{\varepsilon_n}-\bar y_\varepsilon}{\varepsilon_n-\varepsilon},\ \frac{p_{\varepsilon_n}-p_\varepsilon}{\varepsilon_n-\varepsilon}\right)
$$

as ε_n → ε. By δμ_a and δμ_b we denote the differences of the associated weak accumulation points of

$$
\frac{g_a^+(\varepsilon_n)-g_a^+(\varepsilon)}{\varepsilon_n-\varepsilon} \quad\text{and}\quad \frac{g_b^+(\varepsilon_n)-g_b^+(\varepsilon)}{\varepsilon_n-\varepsilon}.
$$

Due to first-order optimality, (δu, δy, δp, δμ_a, δμ_b) is characterized by

$$
A\,\delta y = \delta u \ \text{ in } \Omega,\qquad \delta y = 0 \text{ on } \Gamma, \tag{2.16}
$$
$$
A^*\delta p = \delta y + \delta\mu_b - \delta\mu_a \ \text{ in } \Omega,\qquad \delta p = 0 \text{ on } \Gamma, \tag{2.17}
$$
$$
\delta p + \alpha\,\delta u + \varepsilon(\delta\mu_b - \delta\mu_a) = 0, \tag{2.18}
$$
$$
\delta\mu_a = \Big(\frac{1}{\varepsilon}\delta p - \frac{\alpha}{\varepsilon^2}\delta y\Big)\chi_{S_{a,\varepsilon}}, \tag{2.19}
$$
$$
\delta\mu_b = \Big(-\frac{1}{\varepsilon}\delta p + \frac{\alpha}{\varepsilon^2}\delta y\Big)\chi_{S_{b,\varepsilon}}. \tag{2.20}
$$

Inserting equations (2.18)–(2.20) in (2.16) and (2.17) and then rearranging terms, we obtain

$$
A\,\delta y + \frac{1}{\varepsilon}\big(\chi_{S_{a,\varepsilon}}+\chi_{S_{b,\varepsilon}}\big)\delta y = \frac{1}{\alpha}\big(-I+\chi_{S_{a,\varepsilon}}+\chi_{S_{b,\varepsilon}}\big)\delta p \ \text{ in } \Omega,\qquad \delta y = 0 \text{ on } \Gamma, \tag{2.21}
$$
$$
A^*\delta p + \frac{1}{\varepsilon}\big(\chi_{S_{a,\varepsilon}}+\chi_{S_{b,\varepsilon}}\big)\delta p = \Big(I+\frac{\alpha}{\varepsilon^2}\chi_{S_{a,\varepsilon}}+\frac{\alpha}{\varepsilon^2}\chi_{S_{b,\varepsilon}}\Big)\delta y \ \text{ in } \Omega,\qquad \delta p = 0 \text{ on } \Gamma. \tag{2.22}
$$

Further, let us define the bilinear form associated with the differential operator A:

$$
a : H_0^1(\Omega)\times H_0^1(\Omega) \to \mathbb{R},\qquad a(\eta,\tau) = \int_\Omega \sum_{i,j=1}^N a_{ij}(x)\,D_i\eta(x)\,D_j\tau(x)\,\mathrm{d}x.
$$


In view of (2.21)–(2.22), δy, δp ∈ H₀¹(Ω) are given by the solutions of

$$
a(\delta y, z) + \frac{1}{\varepsilon}\big((\chi_{S_{a,\varepsilon}}+\chi_{S_{b,\varepsilon}})\delta y,\ z\big)_{L^2(\Omega)} = \frac{1}{\alpha}\big((-I+\chi_{S_{a,\varepsilon}}+\chi_{S_{b,\varepsilon}})\delta p,\ z\big)_{L^2(\Omega)} \quad \forall z\in H_0^1(\Omega), \tag{2.23}
$$
$$
a(z, \delta p) + \frac{1}{\varepsilon}\big((\chi_{S_{a,\varepsilon}}+\chi_{S_{b,\varepsilon}})\delta p,\ z\big)_{L^2(\Omega)} = \Big(\big(I+\tfrac{\alpha}{\varepsilon^2}\chi_{S_{a,\varepsilon}}+\tfrac{\alpha}{\varepsilon^2}\chi_{S_{b,\varepsilon}}\big)\delta y,\ z\Big)_{L^2(\Omega)} \quad \forall z\in H_0^1(\Omega). \tag{2.24}
$$

Inserting z = δp and z = δy in (2.23) and (2.24), respectively, we find that

$$
\begin{aligned}
0 &\ge \frac{1}{\alpha}\big((-I+\chi_{S_{a,\varepsilon}}+\chi_{S_{b,\varepsilon}})\delta p,\ \delta p\big)_{L^2(\Omega)}\\
&= a(\delta y, \delta p) + \frac{1}{\varepsilon}\big((\chi_{S_{a,\varepsilon}}+\chi_{S_{b,\varepsilon}})\delta y,\ \delta p\big)_{L^2(\Omega)}\\
&= a(\delta y, \delta p) + \frac{1}{\varepsilon}\big((\chi_{S_{a,\varepsilon}}+\chi_{S_{b,\varepsilon}})\delta p,\ \delta y\big)_{L^2(\Omega)}\\
&= \Big(\big(I+\tfrac{\alpha}{\varepsilon^2}\chi_{S_{a,\varepsilon}}+\tfrac{\alpha}{\varepsilon^2}\chi_{S_{b,\varepsilon}}\big)\delta y,\ \delta y\Big)_{L^2(\Omega)} \ge 0.
\end{aligned}
$$

From this we infer δy = δp = 0, and hence by (2.18)–(2.20) also δu = δμ_a = δμ_b = 0. Thus, as ε_n → ε,

$$
\left(\frac{\bar y_{\varepsilon_n}-\bar y_\varepsilon}{\varepsilon_n-\varepsilon},\ \frac{p_{\varepsilon_n}-p_\varepsilon}{\varepsilon_n-\varepsilon}\right)
$$

has a unique weak accumulation point in H₀¹(Ω) × H₀¹(Ω). By the compact embedding of H₀¹(Ω) in L²(Ω), this pair of difference quotients admits a unique strong accumulation point in L²(Ω) × L²(Ω). Finally, since our arguments hold for any convergent subsequence, a standard argument completes the proof.

Remark 2.1. Based on the optimality conditions, the derivatives ẏ(ε) and ṗ(ε) satisfy the following system:

$$
A\,\dot y(\varepsilon) = \dot u_\varepsilon \ \text{ in } \Omega,\qquad \dot y(\varepsilon) = 0 \text{ on } \Gamma, \tag{2.25}
$$
$$
A^*\dot p(\varepsilon) = \dot y(\varepsilon) + \dot\mu_{b,\varepsilon} - \dot\mu_{a,\varepsilon} \ \text{ in } \Omega,\qquad \dot p(\varepsilon) = 0 \text{ on } \Gamma, \tag{2.26}
$$
$$
\dot p(\varepsilon) + \alpha\,\dot u_\varepsilon + (\mu_{b,\varepsilon}-\mu_{a,\varepsilon}) + \varepsilon(\dot\mu_{b,\varepsilon}-\dot\mu_{a,\varepsilon}) = 0; \tag{2.27}
$$
$$
\dot\mu_{a,\varepsilon} = \Big(\frac{1}{\varepsilon}\dot p(\varepsilon) - \frac{1}{\varepsilon^2}p_\varepsilon + \frac{2\alpha}{\varepsilon^3}\bar y_\varepsilon - \frac{\alpha}{\varepsilon^2}\dot y(\varepsilon) - \frac{2\alpha}{\varepsilon^3}y_a\Big)\chi_{S_{a,\varepsilon}}; \tag{2.28}
$$
$$
\dot\mu_{b,\varepsilon} = \Big(-\frac{1}{\varepsilon}\dot p(\varepsilon) + \frac{1}{\varepsilon^2}p_\varepsilon - \frac{2\alpha}{\varepsilon^3}\bar y_\varepsilon + \frac{\alpha}{\varepsilon^2}\dot y(\varepsilon) + \frac{2\alpha}{\varepsilon^3}y_b\Big)\chi_{S_{b,\varepsilon}}. \tag{2.29}
$$

Remark 2.2. It is also of interest to study the case where Assumption 2.1 does not hold true. In this case, differentiability with respect to ε cannot be expected. However, the existence of weak accumulation points in H₀¹(Ω) of, e.g., (ȳ_{ε_n} − ȳ_ε)/(ε_n − ε) for ε_n → ε still holds true by the Lipschitz property of ȳ_ε with respect to ε according to Corollary 2.1. On the other hand, one can no longer expect uniqueness of these weak accumulation points. In our numerics, the term on the right-hand side of (3.8) then has to be interpreted as a difference approximation of the directional derivative of ȳ_{ε_n} in the direction (ε_{n+1} − ε_n).

3. Extrapolation-based algorithm

Next we introduce a semismooth Newton (SSN) algorithm which utilizes the theoretical results of the previous section within an extrapolation framework. Conceptually, we exploit the differentiability property of the solution of (P_ε) in order to predict a solution of (P_{ε_2}) given an approximate solution of (P_{ε_1}) with ε_1 > ε_2. Hence,
