
Definition of the SRASPEN method

Let us consider the boundary value problem posed in a Lipschitz domain Ω ⊂ ℝ^d, d ∈ {1, 2, 3},

L(u) = f,  in Ω,
u = 0,  on ∂Ω.   (6.2.1)

We assume that (6.2.1) admits a unique solution in some Hilbert space V. If the boundary value problem is linear, a discretization of (6.2.1) leads to the linear system

A u = f,   (6.2.2)

while if the boundary value problem is nonlinear, we solve the nonlinear system

F(u) = 0.   (6.2.3)

In this section we introduce a new one-level nonlinear solver based on domain decomposition for the nonlinear system (6.2.3), called the SRASPEN (Substructured Restricted Additive Schwarz Preconditioning Exact Newton) method. To introduce this method and to clarify the relations between SRASPEN and other existing linear and nonlinear solvers, we take a brief excursus on domain decomposition methods to solve (6.2.2) and (6.2.3).

Let us decompose the domain Ω into N overlapping subdomains Ω′_j, that is Ω = ∪_{j∈J} Ω′_j with J := {1, 2, . . . , N} (see Fig. 6.1). For each subdomain Ω′_j, we define V_j as the restriction of V onto Ω′_j. Further, we introduce the classical restriction and prolongation operators R_j : V → V_j, P_j : V_j → V, and the restricted prolongation operators P̃_j : V_j → V. We assume that these operators satisfy

R_j P_j = I_{V_j},   Σ_{j∈J} P̃_j R_j = I,   (6.2.4)

where I_{V_j} is the identity on V_j and I is the identity on V. A classical domain decomposition method to solve a linear equation (6.2.2) is the RAS method, which, starting from an approximation u⁰, computes for n = 1, 2, . . .

u^n = u^{n−1} + Σ_{j∈J} P̃_j A_j^{−1} R_j (f − A u^{n−1}),   (6.2.5)

where the A_j are defined as A_j := R_j A P_j. Let us now rewrite the iteration (6.2.5) in an equivalent form using the hypothesis (6.2.4) and the definition of A_j,

u^n = Σ_{j∈J} P̃_j A_j^{−1} (R_j f − R_j A (I − P_j R_j) u^{n−1}).   (6.2.6)

Similarly, the RAS method can be used to solve the nonlinear equation (6.2.3). To show this, we introduce the solution operators G_j, which are defined through

R_j F(P_j G_j(u) + (I − P_j R_j) u) = 0.   (6.2.7)

The nonlinear RAS method then reads

u^n = Σ_{j∈J} P̃_j G_j(u^{n−1}).   (6.2.8)

It is possible to show that (6.2.8) reduces to (6.2.6) if F(u) is a linear function. In fact, assuming F(u) = A u − f, equation (6.2.7) becomes

R_j (A (P_j G_j(u) + (I − P_j R_j) u) − f) = 0,  that is,  G_j(u) = A_j^{−1} (R_j f − R_j A (I − P_j R_j) u),   (6.2.9)

and thus (6.2.8) reduces to (6.2.6).

We remark that (P_j R_j − I) u^{n−1} contains non-zero elements only outside subdomain Ω′_j, and in particular A (P_j R_j − I) u^{n−1} represents precisely the boundary condition for Ω′_j given the old approximation u^{n−1}. This observation suggests that the RAS method, like most domain decomposition methods, can be written in a substructured formulation.

In fact, although the iteration (6.2.6) is written in volume form, that is, it involves the whole vector u^{n−1}, only very few elements of u^{n−1} are needed to compute the new approximation u^n. For further details about the classical parallel Schwarz method in substructured formulation we refer to [80] for the two-subdomain case, and to [41] for a general decomposition into several subdomains with cross points at the continuous level.

We now define a substructured formulation for the RAS method both in the linear and in the nonlinear case. In the following we use the notation introduced in [40]. For any j ∈ J, we define the set of neighbouring indices N_j := {ℓ ∈ J : Ω′_j ∩ ∂Ω′_ℓ ≠ ∅}. Given a j ∈ J, we define the local substructure S_j := Ω′_j ∩ (∪_{ℓ∈N_j} ∂Ω′_ℓ), and the skeleton of the decomposition as S := ∪_{j∈J} S_j. We now introduce the space V_S, which can be interpreted as the space V restricted onto the substructure S. We can define it in two ways: either V_S := V|_S, or V_S := ⊗_{j∈J} V|_{S_j}. In the following sections we use the first definition, since we prefer to have full-rank operators R_S and P_S. Remark that in Chapter 4 we used the second definition, V_S := ⊗_{j∈J} V|_{S_j}, which doubles the unknowns in the overlap between interfaces and leads to operators R_S and P_S which are not full rank, and may cause problems while solving the Jacobian system inside a Newton iteration. Associated to V_S, we consider the restriction operator R_S : V → V_S and a prolongation operator P_S : V_S → V. The restriction operator R_S takes an element v ∈ V and restricts it onto the skeleton S. The prolongation operator P_S extends an element v ∈ V_S to the global space V. How this extension is done is not crucial, as we will use P_S inside a domain decomposition algorithm, and thus only the values on the skeleton S will play a role. Hence, we only require that R_S P_S = I_S, where I_S is the identity operator on V_S. In the following, P_S extends an element v_S ∈ V_S by zero in Ω \ S, but the same analysis can be adapted to any other choice of P_S.

Given a substructured approximation v⁰ ∈ V_S, for n = 1, 2, . . . we define the Substructured RAS (SRAS) method as

v^n = G_SRAS(v^{n−1}),   (6.2.10)

where G_SRAS(v) := R_S G_RAS(P_S v), with G_RAS : V → V denoting the RAS iteration operator of (6.2.6), i.e. u^n = G_RAS(u^{n−1}). The RAS and SRAS methods are obviously tightly linked, but when are they equivalent? We must impose some conditions on P_S and R_S. It is sufficient to assume that the restriction and prolongation operators satisfy

R_S G_RAS(u) = R_S G_RAS(P_S R_S u),  ∀ u ∈ V.   (6.2.11)

Heuristically, we need the operator P_S R_S to preserve all the information used by G_RAS to compute the new iterate. The formal equivalence between RAS and SRAS is shown in the following theorem.

Theorem 6.2.1 (Equivalence between RAS and SRAS). Assume that the operators R_S and P_S satisfy Assumption (6.2.11). Then, given an initial guess u⁰ ∈ V and its substructured restriction v⁰ := R_S u⁰ ∈ V_S, define the sequences {u^n} and {v^n} such that

u^n = G_RAS(u^{n−1}),   v^n = G_SRAS(v^{n−1}).

Then for every n ≥ 1, R_S u^n = v^n.

Proof. We prove the statement for n = 1 through a direct calculation. Taking the restriction of u¹ we have

R_S u¹ = R_S G_RAS(u⁰) = R_S G_RAS(P_S R_S u⁰) = R_S G_RAS(P_S v⁰) = G_SRAS(v⁰) = v¹,

where we used assumption (6.2.11) and the definition of v⁰. The other cases follow by induction.
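Theorem 6.2.1 can also be checked numerically. The sketch below reuses the illustrative 1D Poisson setup from the earlier RAS example (two overlapping subdomains; all index choices are our own assumptions): since each subdomain only reads the first unknown outside itself, a two-point skeleton S = {19, 30} satisfies (6.2.11), and R_S uⁿ = vⁿ holds along the iteration.

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.ones(n)
dom = [np.arange(0, 30), np.arange(20, n)]   # overlapping subdomains
own = [np.arange(0, 25), np.arange(25, n)]   # ownership for P~_j

def R(j):
    M = np.zeros((len(dom[j]), n))
    M[np.arange(len(dom[j])), dom[j]] = 1.0
    return M

def Pt(j):
    M = R(j).T.copy()
    M[:, ~np.isin(dom[j], own[j])] = 0.0
    return M

def G_RAS(u):
    """One RAS step in the form (6.2.6)."""
    return sum(Pt(j) @ np.linalg.solve(
        R(j) @ A @ R(j).T,
        R(j) @ f - R(j) @ A @ (u - R(j).T @ (R(j) @ u)))
        for j in range(2))

# Skeleton: the only entries of u that G_RAS reads are the first points
# outside each subdomain (indices 30 and 19 in this decomposition).
S = np.array([19, 30])
RS = np.zeros((2, n)); RS[[0, 1], S] = 1.0   # R_S (selection)
PS = RS.T                                     # P_S (extension by zero)

rng = np.random.default_rng(0)
u = rng.standard_normal(n)                    # volume iterate u^0
v = RS @ u                                    # substructured iterate v^0
for _ in range(5):
    u = G_RAS(u)                              # RAS
    v = RS @ G_RAS(PS @ v)                    # SRAS (6.2.10)
    assert np.allclose(RS @ u, v)             # Theorem 6.2.1

print("R_S u^n = v^n verified for n = 1..5")
```

The assert passes because G_RAS here depends on u only through its values on S, which is exactly condition (6.2.11).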

Similarly to the linear case, we can define a substructured RAS method in the nonlinear case. Defining

G_j^S(v^{n−1}) := R_S P̃_j G_j(P_S v^{n−1}),   (6.2.12)

we obtain the nonlinear substructured iteration

v^n = R_S Σ_{j∈J} P̃_j G_j(P_S v^{n−1}) = Σ_{j∈J} G_j^S(v^{n−1}).   (6.2.13)

The same calculations as in Theorem 6.2.1 allow one to obtain an equivalence result between nonlinear RAS and nonlinear substructured RAS.

Theorem 6.2.2 (Equivalence between nonlinear RAS and SRAS). Assume that the operators R_S and P_S satisfy R_S Σ_{j∈J} P̃_j G_j(u) = R_S Σ_{j∈J} P̃_j G_j(P_S R_S u). Then, given an initial guess u⁰ ∈ V and its substructured restriction v⁰ := R_S u⁰ ∈ V_S, define the sequences {u^n} and {v^n} such that

u^n = Σ_{j∈J} P̃_j G_j(u^{n−1}),   v^n = Σ_{j∈J} G_j^S(v^{n−1}).

Then for every n ≥ 1, R_S u^n = v^n.
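The local solves (6.2.7) and the nonlinear RAS iteration (6.2.8) can be sketched as follows. The semilinear model F(u) = A u + u³ − f, the 1D discretization and the subdomain layout are our own illustrative assumptions; each G_j is realized by an inner Newton iteration on the subdomain, exactly as prescribed by (6.2.7).

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = 10 * np.ones(n)

def F(u):                      # illustrative nonlinear function, cf. (6.2.3)
    return A @ u + u**3 - f

def J(u):                      # Jacobian of F
    return A + np.diag(3 * u**2)

dom = [np.arange(0, 30), np.arange(20, n)]   # overlapping subdomains
own = [np.arange(0, 25), np.arange(25, n)]   # ownership for P~_j

def R(j):
    M = np.zeros((len(dom[j]), n))
    M[np.arange(len(dom[j])), dom[j]] = 1.0
    return M

def Pt(j):
    M = R(j).T.copy()
    M[:, ~np.isin(dom[j], own[j])] = 0.0
    return M

def G(j, u):
    """Local solution operator G_j of (6.2.7): solve
    R_j F(P_j g + (I - P_j R_j) u) = 0 for g by Newton's method."""
    Rj, Pj = R(j), R(j).T
    g = Rj @ u                                  # initial guess
    for _ in range(30):
        w = Pj @ g + u - Pj @ (Rj @ u)          # P_j g + (I - P_j R_j) u
        r = Rj @ F(w)
        if np.linalg.norm(r) < 1e-10:
            break
        g = g - np.linalg.solve(Rj @ J(w) @ Pj, r)
    return g

u = np.zeros(n)
for _ in range(80):                             # nonlinear RAS (6.2.8)
    u = sum(Pt(j) @ G(j, u) for j in range(2))

print(np.linalg.norm(F(u)))                     # small residual at the limit
```

The fixed point of (6.2.8) is a solution of F(u) = 0, so the outer residual decays at the (linear) Schwarz rate while each inner Newton loop converges quadratically.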

In the manuscript [59], it has been proposed to use the fixed point equation of the nonlinear RAS method as a preconditioner for Newton's method, in a spirit that goes back to [19, 18]. This method has been called RASPEN (Restricted Additive Schwarz Preconditioning Exact Newton), and it consists in applying Newton's method to the fixed point equation

ℱ(u) = u − Σ_{j∈J} P̃_j G_j(u) = 0.   (6.2.14)

Here and in the article in preparation [29], we analyze a substructured version of the RASPEN method, thus called SRASPEN. It consists in applying Newton's method to the fixed point equation

ℱ_S(v) = R_S ℱ(P_S v) = R_S P_S v − Σ_{j∈J} R_S P̃_j G_j(P_S v) = v − Σ_{j∈J} G_j^S(v) = 0.   (6.2.15)
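A minimal SRASPEN sketch, under the same illustrative assumptions as before (1D cubic model problem, two subdomains, two-point skeleton, all our own choices): the outer Newton iteration acts on the substructured fixed point function of (6.2.15). For brevity, the small skeleton Jacobian is approximated here by finite differences rather than by the exact chain-rule formula derived in Section 6.2.1.

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = 10 * np.ones(n)
F = lambda u: A @ u + u**3 - f          # illustrative nonlinear problem
J = lambda u: A + np.diag(3 * u**2)     # its Jacobian

dom = [np.arange(0, 30), np.arange(20, n)]   # overlapping subdomains
own = [np.arange(0, 25), np.arange(25, n)]   # ownership for P~_j
S = np.array([19, 30])                       # skeleton unknowns

def R(j):
    M = np.zeros((len(dom[j]), n))
    M[np.arange(len(dom[j])), dom[j]] = 1.0
    return M

def Pt(j):
    M = R(j).T.copy()
    M[:, ~np.isin(dom[j], own[j])] = 0.0
    return M

def G(j, u):
    """Local solution operator G_j of (6.2.7) via an inner Newton loop."""
    Rj, Pj = R(j), R(j).T
    g = Rj @ u
    for _ in range(30):
        w = Pj @ g + u - Pj @ (Rj @ u)       # P_j g + (I - P_j R_j) u
        r = Rj @ F(w)
        if np.linalg.norm(r) < 1e-10:
            break
        g = g - np.linalg.solve(Rj @ J(w) @ Pj, r)
    return g

def FS(v):
    """Substructured function (6.2.15): v - sum_j R_S P~_j G_j(P_S v)."""
    u = np.zeros(n); u[S] = v                # u = P_S v (extension by zero)
    return v - sum((Pt(j) @ G(j, u))[S] for j in range(2))

v = np.zeros(len(S))
for _ in range(8):                           # outer Newton on F_S
    r = FS(v)
    JS = np.empty((len(S), len(S)))          # small skeleton Jacobian by FD
    eps = 1e-7
    for i in range(len(S)):
        e = np.zeros(len(S)); e[i] = eps
        JS[:, i] = (FS(v + e) - r) / eps
    v = v - np.linalg.solve(JS, r)

print(np.linalg.norm(FS(v)))                 # residual of the fixed point eq.
```

Note that the Newton system here is only 2x2, one unknown per skeleton point, which is the computational advantage of SRASPEN discussed below.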

6.2.1 Computation of the Jacobian and implementation details

To apply Newton's method, we need to compute the Jacobian of SRASPEN. Since the SRASPEN and RASPEN methods are closely related, indeed ℱ_S(v) = R_S ℱ(P_S v), we can immediately compute the Jacobian of ℱ_S once we have the Jacobian of ℱ, through the chain rule J_{ℱ_S}(v) = R_S J_ℱ(P_S v) P_S. The Jacobian of ℱ has been derived in [59], and we report the main steps for the sake of completeness. Differentiating equation (6.2.14) with respect to u leads to

J_ℱ(u) := dℱ/du(u) = I − Σ_{j∈J} P̃_j dG_j/du(u),   (6.2.16)

where J_ℱ(u) w denotes the action of the Jacobian of RASPEN on a vector w. Recall that the local inverse operators G_j : V → V_j are defined in equation (6.2.7) as the solutions of R_j F(P_j G_j(u) + (I − P_j R_j) u) = 0. Differentiating this relation yields

dG_j/du(u) = −(R_j J(u^{(j)}) P_j)^{−1} R_j J(u^{(j)}) (I − P_j R_j),  with u^{(j)} := P_j G_j(u) + (I − P_j R_j) u,   (6.2.17)

where J is the Jacobian of the original nonlinear function F. Combining the above equations (6.2.16)-(6.2.17) and defining ũ^{(j)} := P_j G_j(P_S v) + (I − P_j R_j) P_S v, we get

J_{ℱ_S}(v) = R_S J_ℱ(P_S v) P_S = Σ_{j∈J} R_S P̃_j (R_j J(ũ^{(j)}) P_j)^{−1} R_j J(ũ^{(j)}) P_S,   (6.2.18)

where we used the assumptions Σ_{j∈J} P̃_j R_j = I and R_S P_S = I_S. We remark that to assemble J_ℱ(u), or to compute its action on a given vector, one needs to calculate J(u^{(j)}), that is, to evaluate the Jacobian of the original nonlinear function F at the subdomain solutions u^{(j)}. The subdomain solutions u^{(j)} are obtained while evaluating ℱ(u), that is, while performing one step of the RAS method with initial guess equal to u. A smart implementation can use the fact that the local Jacobian matrices R_j J(u^{(j)}) P_j are already computed by the inner Newton solvers while solving the nonlinear problem on each subdomain, and hence no extra cost is required to assemble this term. Further, the matrices R_j J(u^{(j)}) differ from the local Jacobian matrices only in the very few columns corresponding to the degrees of freedom on the interfaces, and thus one could modify only those specific entries. In a lazier implementation, one can directly evaluate the Jacobian of F at the subdomain solutions u^{(j)}, without relying on already computed quantities. Concerning J_{ℱ_S}(v), we emphasize that ũ^{(j)} is the volume subdomain solution obtained by the substructured RAS method starting from a substructured function v. Thus, like u^{(j)}, ũ^{(j)} is readily available in a Newton iteration after evaluating the function ℱ_S.
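As a numerical sanity check of the chain-rule formula J_{ℱ_S}(v) = R_S J_ℱ(P_S v) P_S, the following sketch (same illustrative 1D cubic setup and two-point skeleton as in the earlier examples; none of it is data from this chapter) assembles the substructured Jacobian from the local Jacobians R_j J(ũ^{(j)}) P_j and compares it with a finite-difference approximation of the derivative of the substructured function.

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = 10 * np.ones(n)
F = lambda u: A @ u + u**3 - f          # illustrative nonlinear problem
J = lambda u: A + np.diag(3 * u**2)     # Jacobian of the original F

dom = [np.arange(0, 30), np.arange(20, n)]
own = [np.arange(0, 25), np.arange(25, n)]
S = np.array([19, 30])                  # skeleton unknowns

def R(j):
    M = np.zeros((len(dom[j]), n))
    M[np.arange(len(dom[j])), dom[j]] = 1.0
    return M

def Pt(j):
    M = R(j).T.copy()
    M[:, ~np.isin(dom[j], own[j])] = 0.0
    return M

def G(j, u):                            # local solves, cf. (6.2.7)
    Rj, Pj = R(j), R(j).T
    g = Rj @ u
    for _ in range(40):
        w = Pj @ g + u - Pj @ (Rj @ u)
        r = Rj @ F(w)
        if np.linalg.norm(r) < 1e-10:
            break
        g = g - np.linalg.solve(Rj @ J(w) @ Pj, r)
    return g

def FS(v):                              # substructured function (6.2.15)
    u = np.zeros(n); u[S] = v
    return v - sum((Pt(j) @ G(j, u))[S] for j in range(2))

RS = np.zeros((len(S), n)); RS[np.arange(len(S)), S] = 1.0
PS = RS.T

v = np.array([0.5, 0.7])                # arbitrary test point
# exact Jacobian: sum_j R_S P~_j (R_j J(u~^(j)) P_j)^{-1} R_j J(u~^(j)) P_S
JS = np.zeros((len(S), len(S)))
u = PS @ v
for j in range(2):
    Rj, Pj = R(j), R(j).T
    ut = Pj @ G(j, u) + u - Pj @ (Rj @ u)       # u~^(j)
    JS += RS @ Pt(j) @ np.linalg.solve(Rj @ J(ut) @ Pj, Rj @ J(ut) @ PS)

# central finite-difference Jacobian of F_S for comparison
JS_fd = np.empty_like(JS)
eps = 1e-5
for i in range(len(S)):
    e = np.zeros(len(S)); e[i] = eps
    JS_fd[:, i] = (FS(v + e) - FS(v - e)) / (2 * eps)

print(np.max(np.abs(JS - JS_fd)))       # agreement up to FD error
```

The two matrices agree up to the finite-difference and inner-solver tolerances, which supports the derivation above: the terms R_j J(ũ^{(j)}) P_j are exactly the local Jacobians already produced by the inner Newton solvers.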

From the computational point of view, (6.2.18) has several implications. First, the substructured Jacobian J_{ℱ_S} is a matrix of dimension N_S × N_S, where N_S is the number of unknowns on S, and thus it is a much smaller matrix than J_ℱ, whose size is N_v × N_v, with N_v the number of unknowns in volume. Hence, at each Newton iteration, the SRASPEN method must solve a much smaller system than the RASPEN method. This is even more important if one does not rely on a Krylov method, but prefers to use a direct solver, as the assembly of J_{ℱ_S} is dramatically cheaper than that of J_ℱ. Further implementation details and a more extensive comparison are available in the numerical section 6.5.