
4.2 S2S method

4.2.1 Spectral coarse space based on the eigenvectors of G: convergence

We start by considering the case in which the functions $\{\psi_j\}_{j=1}^{N_c}$ are eigenfunctions of $G$, that is, $G\psi_j = \lambda_j\psi_j$. We suppose that 1 is not an eigenvalue of $G$, so that the operator $A = I - G$ is invertible. We do not assume any orthogonality between the eigenfunctions. With such a choice of $V_c$, it holds that $A(V_c) \subseteq V_c$, which implies $P_{V_c}(Av) \neq 0$ for all $v \in V_c \setminus \{0\}$. Thus, due to Lemma 4.2.2, the matrix $A_c$ is invertible. In the proof of the convergence Theorem 4.2.4 we will need a matrix representation of the orthogonal projector.

Lemma 4.2.3 (Matrix representation of the orthogonal projector). Once the basis $\{\psi_j\}_{j=1}^{N_c}$ of $V_c$ is fixed, the action of $P_{V_c}$ has the matrix representation $P_{V_c} = P(RP)^{-1}R$.

Proof. Given a general $v \in V$, we express $P_{V_c}v = \sum_{i=1}^{N_c}\alpha_i\psi_i = P\alpha$, with $(\alpha)_j = \alpha_j$. Inserting this into the orthogonality conditions $\langle P_{V_c}v, \psi_j\rangle = \langle v, \psi_j\rangle$ we get
$$\sum_{i=1}^{N_c}\alpha_i\langle\psi_i,\psi_j\rangle = \langle v,\psi_j\rangle, \quad j = 1,\ldots,N_c,$$
which can be rewritten as $RP\alpha = Rv$, and the result follows.
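As a sanity check of Lemma 4.2.3, the following NumPy sketch (our own illustration; the names and the Euclidean inner product, for which $R = P^\top$, are assumptions) compares $P(RP)^{-1}R$ with a reference orthogonal projector built from a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
n, Nc = 50, 5
P = rng.standard_normal((n, Nc))        # non-orthogonal basis of V_c (columns)
R = P.T                                 # (R v)_j = <v, psi_j> for the Euclidean inner product

P_Vc = P @ np.linalg.solve(R @ P, R)    # matrix representation P (R P)^{-1} R

Q, _ = np.linalg.qr(P)                  # orthonormal basis of range(P)
P_ref = Q @ Q.T                         # reference orthogonal projector onto V_c

print(np.allclose(P_Vc, P_ref))         # True: both realize the same projection
```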

So far we have worked assuming that $V$ is an infinite-dimensional space. To prove the main result of this subsection, we assume from now on that $V$ is finite dimensional.

Theorem 4.2.4 (Convergence of the S2S method - eigenfunctions of $G$). Consider a finite-dimensional inner product space $V$, an invertible operator $A: V \to V$ with $A = I - G$, and a coarse space $V_c := \mathrm{span}\{\psi_1,\psi_2,\ldots,\psi_{N_c}\}$, where the $\psi_j$ are eigenvectors of $G$ associated to the eigenvalues $\lambda_j$, $j = 1,\ldots,N_c$. Furthermore, consider the operators $R$ and $P$ defined in (4.2.1) and $A_c := RAP$. Then, defining $T := G^{n_2}(I - PA_c^{-1}RA)G^{n_1}$, we have
$$\rho(T) = \max\left\{|\lambda|^{n_1+n_2} : \lambda \in \sigma(G)\setminus\{\lambda_1,\ldots,\lambda_{N_c}\}\right\}.$$

Proof. We first introduce the operator $\widetilde{T} = (I - PA_c^{-1}RA)G^{n_1+n_2}$. The operators $\widetilde{T}$ and $T$ have the same spectrum, and thus we focus on $\widetilde{T}$. The proof is divided into two parts. First, we show that $\{\psi_j\}_{j=1}^{N_c}$ are eigenvectors of $\widetilde{T}$ associated to the zero eigenvalue. Second, we show that all the other eigenvalues of $G$ are still eigenvalues of $\widetilde{T}$, by constructing the corresponding eigenvectors directly.

Let us start with the first part. If we consider a $\psi_j \in V_c$, we have
$$(I - PA_c^{-1}RA)G^{n_1+n_2}\psi_j = \lambda_j^{n_1+n_2}\psi_j - \lambda_j^{n_1+n_2}PA_c^{-1}RA\psi_j = \lambda_j^{n_1+n_2}\left(\psi_j - (1-\lambda_j)PA_c^{-1}R\psi_j\right). \quad (4.2.4)$$
We now compute the action of $A_c$ on a canonical vector $e_j$, $j = 1,\ldots,N_c$,
$$RAPe_j = RA\psi_j = (1-\lambda_j)R\psi_j,$$
which, $RAP$ being invertible, implies $e_j = (1-\lambda_j)A_c^{-1}R\psi_j$. Inserting this expression into (4.2.4) we obtain
$$(I - PA_c^{-1}RA)G^{n_1+n_2}\psi_j = \lambda_j^{n_1+n_2}(\psi_j - Pe_j) = \lambda_j^{n_1+n_2}(\psi_j - \psi_j) = 0.$$

We now focus on the remaining eigenvalues. For every eigenpair $(\psi_k, \lambda_k)$ of $G$ such that $\psi_k \notin V_c$, we show that $(\phi_k, \lambda_k^{n_1+n_2})$, with
$$\phi_k := A^{-1}(\psi_k - P_{V_c}\psi_k) = \frac{1}{1-\lambda_k}\psi_k - w, \quad (4.2.5)$$
for some $w \in V_c$, is an eigenpair of $\widetilde{T}$. We claim that $w \in V_c$, since $V_c$, being spanned by eigenvectors of $G$, is invariant under the action of $A^{-1}$. Using that $V_c \subset \mathrm{Ker}(\widetilde{T})$, we have
$$\widetilde{T}\phi_k = \widetilde{T}\frac{1}{1-\lambda_k}\psi_k = \lambda_k^{n_1+n_2}\left(\frac{1}{1-\lambda_k}\psi_k - PA_c^{-1}R\psi_k\right). \quad (4.2.6)$$
If the eigenvectors were orthonormal, we would have finished the proof, since then $R\psi_k = 0$. In the general case, we proceed as follows. We observe that
$$PA_c^{-1}R\psi_k = PA_c^{-1}RP(RP)^{-1}R\psi_k = PA_c^{-1}RP_{V_c}\psi_k = \sum_{\ell=1}^{N_c}\gamma_\ell PA_c^{-1}R\psi_\ell,$$
where the coefficients $\gamma_\ell$ are such that $\sum_{\ell=1}^{N_c}\gamma_\ell\psi_\ell$ is the orthogonal projection of $\psi_k$ onto $V_c$. Now, we recall that $R\psi_\ell = A_c\left((1-\lambda_\ell)^{-1}e_\ell\right)$ for $\ell = 1,\ldots,N_c$, and write
$$PA_c^{-1}R\psi_k = \sum_{\ell=1}^{N_c}\gamma_\ell(1-\lambda_\ell)^{-1}\psi_\ell = \sum_{\ell=1}^{N_c}\gamma_\ell A^{-1}\psi_\ell = A^{-1}P_{V_c}\psi_k = w.$$
Replacing this equality in (4.2.6), we obtain $\widetilde{T}\phi_k = \lambda_k^{n_1+n_2}\phi_k$.
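The following NumPy sketch (an illustration under assumed data, not part of the original text) checks Theorem 4.2.4 numerically. It also anticipates the discussion below: $G$ is given one eigenvalue larger than 1, so the one-level smoother diverges, yet $\rho(T) < 1$ once the corresponding eigenvector is included in $V_c$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, Nc, n1, n2 = 30, 4, 2, 1

lam = np.sort(rng.uniform(0.0, 0.9, n))[::-1]   # spectrum of G, sorted decreasingly
lam[0] = 1.1                                    # one divergent mode: rho(G) > 1
V = rng.standard_normal((n, n))                 # non-orthogonal eigenvectors
G = V @ np.diag(lam) @ np.linalg.inv(V)
A = np.eye(n) - G

P = V[:, :Nc]                                   # coarse space: the Nc slowest eigenvectors
R = P.T
Ac = R @ A @ P                                  # coarse matrix A_c = R A P
C = np.eye(n) - P @ np.linalg.solve(Ac, R @ A)  # coarse correction I - P A_c^{-1} R A
T = np.linalg.matrix_power(G, n2) @ C @ np.linalg.matrix_power(G, n1)

rho_T = np.max(np.abs(np.linalg.eigvals(T)))
print(rho_T, np.max(np.abs(lam[Nc:])) ** (n1 + n2))   # the two values coincide
```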

Theorem 4.2.4 provides very interesting insights into the convergence of the S2S method. The coarse space $V_c$ is such that the operator $T$ has the same eigenvalues as the one-level smoother $G$, except for those eigenvalues corresponding to eigenvectors which lie in $V_c$; these latter eigenvalues are mapped to zero. It follows that the choice of the coarse space is extremely important. On the one hand, if the one-level smoother $G$ has a large eigenvalue, approximately equal to 1, and $V_c$ does not contain the corresponding eigenvector, then the S2S method will be as slow as the one-level smoother. On the other hand, even if $G$ does not converge, e.g. it has an eigenvalue larger than 1, the S2S method will converge provided $V_c$ contains the corresponding eigenvector. In other words, the coarse correction can transform a divergent method into a convergent one (see [33, 30] for a similar result concerning the Neumann-Neumann method and [94] for the AS method).

We will see in Section 4.6 that for high-contrast jumping diffusion coefficients, the PSM has just a few eigenvalues approximately equal to 1. Including these very few eigenvectors in the coarse space $V_c$ yields a very fast domain decomposition solver. We also remark that if the coarse space $V_c$ consists of eigenvectors of $G$, then theoretically only one coarse correction step is sufficient to remove the error components related to the slow eigenvectors of $G$.

Constructing a coarse space based on the eigenvectors of $G$ is not always feasible, since computing these eigenvectors can be even more expensive than solving the original linear system $Au = b$. In this paragraph we briefly discuss a randomized approach to obtain good estimates of the eigenvectors of $G$, and we suppose that $\rho(G) < 1$, that is, the smoother $G$ converges. The idea is to approximate the image of the smoother $G^r$ for some positive integer $r$. Indeed, for a sufficiently large value of $r$, the image of $G^r$ contains information about the "slow" eigenvectors of $G$, which are responsible for the slow convergence of the one-level algorithm. Motivated by this observation, we use a principal component analysis (PCA) to extract information about the slowest eigenvectors from the image of $G^r$. We propose the following procedure (a code sketch follows the list):

1. Consider a set of $q$ linearly independent, randomly generated vectors $\{x_k\}_{k=1}^{q} \subset \mathbb{R}^{N_s}$, where $N_s$ is the number of degrees of freedom on the product $\otimes_{j=1}^{N}S_j$, and define the matrix $X = [x_1 \cdots x_q]$. Here, $q \geq N_c$, and $N_c$ is the desired dimension of the coarse space.

2. Use the vectors $x_k$ as initial vectors and perform $r$ smoothing steps to create the matrix $W = G^r X$. This computation can be performed in parallel, and we assume that $r$ is "small".

3. Compute the SVD of $W$: $W = U\Sigma V^\top$. This is cheap ($O(qN_s^2)$ operations) because $W \in \mathbb{R}^{N_s \times q}$ is "small", since $q$ is "small" and the $x_k$ are interface vectors.

4. Since the left singular vectors corresponding to the non-zero singular values span the image of $W$, we define $V_c := \mathrm{span}\{u_j\}_{j=1}^{N_c}$ and $P := [u_1, \ldots, u_{N_c}]$.
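A minimal Python sketch of the four steps (the function name and the toy smoother are ours; in practice `G_apply` would perform one substructured smoothing step):

```python
import numpy as np

def pca_coarse_space(G_apply, Ns, q, r, Nc, rng):
    """Build the prolongation P of a PCA coarse space.

    G_apply : callable applying the smoother, W -> G W (column-wise)
    Ns      : number of interface degrees of freedom
    q       : number of random probe vectors, q >= Nc
    r       : number of smoothing steps (small)
    Nc      : desired coarse-space dimension
    """
    W = rng.standard_normal((Ns, q))    # step 1: random probe matrix X
    for _ in range(r):                  # step 2: W = G^r X (parallelizable over columns)
        W = G_apply(W)
    U, s, _ = np.linalg.svd(W, full_matrices=False)   # step 3: thin SVD of W
    return U[:, :Nc]                    # step 4: P = [u_1, ..., u_Nc]

# Toy usage: a contractive matrix stands in for the smoother G.
rng = np.random.default_rng(2)
Ns = 200
M = rng.standard_normal((Ns, Ns))
G = M / (1.1 * np.linalg.norm(M, 2))    # rho(G) <= ||G||_2 < 1
P = pca_coarse_space(lambda W: G @ W, Ns, q=20, r=4, Nc=10, rng=rng)
```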

We emphasize that this procedure is numerically feasible since:

• $q$ is small, as it is of the order of the dimension of the coarse space, which also corresponds to the size of the linear coarse problem involving the coarse matrix $A_c$.

• The number of smoothing steps $r$ is small. Numerically, we have observed that $r \approx 1$-$2$ suffices for classical equations, and $r \approx 6$ for problems with jumping diffusion coefficients. Moreover, the smoothing steps can be performed in parallel for the $q$ columns of $X$.

• The size of the vectors $x_k$ is relatively small, as it is equal to the number of degrees of freedom on the substructures.

• The PCA technique is meant to be carried out in an offline phase, to generate a spectral coarse space which can then be used repeatedly to solve the original linear system in a many-query context.

Numerically, we have also explored the use of the randomized SVD algorithm, which shares similarities with our approach; however, we have not observed any significant advantage in terms of iteration counts. In Section 4.6, we will show that the PCA procedure permits the construction of extremely efficient spectral coarse spaces.

4.2.2 Spectral coarse space based on the eigenvectors of $G_j$: convergence analysis

In this subsection, we provide a convergence analysis for a spectral coarse space which consists of eigenfunctions of the operators $G_j$. Our proof is restricted to the two-subdomain case, and we consider the decomposition introduced in Section 1.2. Since we do not have cross points, we can simplify the notation described in Section 4.1. First, we have $S_{12}$, $S_{21}$ and $v = [v_1, v_2]^\top = [\tau_1(u_1), \tau_2(u_2)]^\top = [(u_1)_2, (u_2)_1]^\top \in H_{00}^{1/2}(\Gamma_2) \times H_{00}^{1/2}(\Gamma_1)$. The linear system $Au = b$ reads
$$\begin{pmatrix} I & -G_1 \\ -G_2 & I \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}.$$
The invertibility of $A$ follows as in Theorem 4.2.2, and we refer the reader to Lemma 3.1 in [41] for a detailed proof. The S2S iteration in error form reads as usual

$$e^{\mathrm{new}} = Te^{\mathrm{old}}, \qquad T := G^{n_2}(I - PA_c^{-1}RA)G^{n_1}.$$
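For concreteness, one iteration realizing this error propagation can be sketched as follows (a minimal illustration with assumed names, taking $R = P^\top$ and using that the stationary iteration for $Au = b$ with $A = I - G$ is $u \leftarrow Gu + b$):

```python
import numpy as np

def s2s_step(u, b, G, P, Ac, n1, n2):
    """One two-level S2S iteration; its error propagation operator is T."""
    for _ in range(n1):                           # n1 pre-smoothing steps
        u = G @ u + b
    r = b - (u - G @ u)                           # residual b - A u, with A = I - G
    u = u + P @ np.linalg.solve(Ac, P.T @ r)      # coarse correction u += P A_c^{-1} R r
    for _ in range(n2):                           # n2 post-smoothing steps
        u = G @ u + b
    return u
```

Here $A_c = RAP$ would be assembled and factorized once, so each iteration costs $n_1 + n_2$ smoothing steps plus one small coarse solve.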

To provide a convergence analysis, we study the operator $T$ and introduce the operator norm
$$\|T\|_{\mathrm{op}} := \sup_{\|v\|_{2,\infty}=1}\|Tv\|_{2,\infty}, \qquad \|v\|_{2,\infty} := \max\{\|v_1\|, \|v_2\|\} \ \text{ for } v = [v_1, v_2]^\top.$$

We now make the further hypothesis that the interfaces $\Gamma_1$ and $\Gamma_2$ can be mapped one onto the other by a simple rotation, translation and scaling. This hypothesis implies that $H_1 = H_2 =: H_0$, and we define the scalar product $\langle\cdot,\cdot\rangle := \langle\cdot,\cdot\rangle_1 = \langle\cdot,\cdot\rangle_2$. Further, we assume that there exists a set of basis functions $\{\psi_1, \psi_2, \psi_3, \ldots\} \subset H_0$, orthonormal with respect to the inner product $\langle\cdot,\cdot\rangle$, that diagonalizes the operators $G_j$:

$$G_j\psi_k = \rho_j(k)\psi_k, \quad j = 1, 2, \quad k \in \mathbb{N}. \quad (4.2.8)$$
In other words, $G_1$ and $G_2$ share a common orthonormal eigenfunction basis, but they can have different eigenvalues. The coarse space is defined as $V_c = (\mathrm{span}\{\psi_1, \psi_2, \ldots, \psi_m\})^2$. The prolongation and restriction operators are defined as in (4.2.7). To analyze the convergence behavior, we expand the error as
$$e^0 = \left[\sum_{k=1}^{\infty} c_k\psi_k, \; \sum_{\ell=1}^{\infty} d_\ell\psi_\ell\right]^\top$$
and study the operator norm of $T$.

Theorem 4.2.5 (Convergence of the S2S method - eigenfunctions of $G_j$). Consider the coarse space $V_c = (\mathrm{span}\{\psi_1, \psi_2, \ldots, \psi_m\})^2$ and the operators $P$ and $R$ defined in (4.2.7). The S2S method applied to the model problem (4.1.1) is a direct method for all the error components $(\psi_k, \psi_\ell)$ with $k, \ell \leq m$, that is, $T[\psi_k, \psi_\ell]^\top = 0$ for all $k, \ell \leq m$. Moreover, if the eigenvalues $\rho_j(k)$, $j = 1, 2$, are in absolute value non-increasing functions of $k$, the contraction factor of the S2S method, defined as $\rho_{S2S}(T) := \lim_{n\to\infty}\left(\|T^n\|_{\mathrm{op}}\right)^{1/n}$, is given by
$$\rho_{S2S}(T) = \begin{cases} |\rho_1(m+1)\rho_2(m+1)|^{\frac{n_1+n_2}{2}}, & \text{if } n_1, n_2 \text{ are both even or both odd},\\ |\rho_1(m+1)\rho_2(m+1)|^{\frac{n_1+n_2-1}{2}}\max\{|\rho_1(m+1)|, |\rho_2(m+1)|\}, & \text{otherwise}. \end{cases}$$

Proof. Let us suppose that both $n_1$ and $n_2$ are even; the other cases can be treated similarly. For $n_1$ even we define $\pi_{n_1}(k) := \rho_1^{n_1/2}(k)\,\rho_2^{n_1/2}(k)$ and study the action of the operator $T$ on a vector $[\psi_k, \psi_\ell]^\top$.

We begin with the case $k \leq m$ and $\ell \leq m$. First, let us compute the action of the operator $RAG^{n_1}$ on $[\psi_k, \psi_\ell]^\top$. Since the operators $G_j$ are diagonalized by the basis $\{\psi_k\}_k$, using (4.2.8) one obtains $G^{n_1}[\psi_k, \psi_\ell]^\top = [\pi_{n_1}(k)\psi_k, \pi_{n_1}(\ell)\psi_\ell]^\top$. Since $A$ is invertible and has the form $A = I - G$, the eigenvalues $\rho_j(k)$ must be different from one. Hence, $A[\pi_{n_1}(k)\psi_k, \pi_{n_1}(\ell)\psi_\ell]^\top \neq 0$. Now, applying the restriction operator $R$ to $A[\pi_{n_1}(k)\psi_k, \pi_{n_1}(\ell)\psi_\ell]^\top$ extracts the coefficients with respect to the coarse basis, since the basis is orthonormal and $RP$ is the identity matrix. We have then obtained
$$RAG^{n_1}[\psi_k, \psi_\ell]^\top = A_c[\pi_{n_1}(k)e_k, \pi_{n_1}(\ell)e_\ell]^\top. \quad (4.2.9)$$
Now, by computing
$$PA_c^{-1}A_c[\pi_{n_1}(k)e_k, \pi_{n_1}(\ell)e_\ell]^\top = [\pi_{n_1}(k)\psi_k, \pi_{n_1}(\ell)\psi_\ell]^\top = G^{n_1}[\psi_k, \psi_\ell]^\top, \quad (4.2.10)$$
and using (4.2.9) and (4.2.10), we have $(I - PA_c^{-1}RA)G^{n_1}[\psi_k, \psi_\ell]^\top = 0$, and hence
$$T[\psi_k, \psi_\ell]^\top = 0 \quad (4.2.11)$$
for $k \leq m$ and $\ell \leq m$. The result for $n_1$ odd follows by similar calculations.

Next, let us consider the case $k > m$ and $\ell \leq m$. Recalling that the basis $\{\psi_k\}_k$ is orthonormal, we have $R\psi_k = 0$ for $k > m$. Proceeding similarly as before, we compute
$$T[\psi_k, \psi_\ell]^\top = [\pi_{n_1+n_2}(k)\psi_k, 0]^\top, \quad (4.2.12)$$
that is, the coarse correction removes the $\ell$-th error component, which belongs to the coarse space, while the component $k$ is not affected by the coarse correction and only by the smoothing steps. For the remaining case $k > m$ and $\ell > m$, the same arguments as before imply that
$$T[\psi_k, \psi_\ell]^\top = [\pi_{n_1+n_2}(k)\psi_k, \pi_{n_1+n_2}(\ell)\psi_\ell]^\top. \quad (4.2.13)$$

We can now study the norm of $T$. To do so, we first use (4.2.11), (4.2.12) and (4.2.13), and the fact that $\{\psi_k, \psi_\ell\}_{k,\ell}$ is a basis of $H$, to write
$$Tv = T\begin{bmatrix}\sum_{k=1}^{\infty} c_k\psi_k \\ \sum_{\ell=1}^{\infty} d_\ell\psi_\ell\end{bmatrix} = \begin{bmatrix}\sum_{k=m+1}^{\infty}\pi_{n_1+n_2}(k)c_k\psi_k \\ \sum_{\ell=m+1}^{\infty}\pi_{n_1+n_2}(\ell)d_\ell\psi_\ell\end{bmatrix},$$

for any $v \in H$. Since $|\rho_1(k)|$ and $|\rho_2(k)|$ are non-increasing functions of $k$, $|\pi(k)|$ is also a non-increasing function of $k$. Therefore, using that the basis $\{\psi_k\}_k$ is orthonormal, we get
$$\|T\|_{\mathrm{op}} = \sup_{\|v\|_{2,\infty}=1}\|Tv\|_{2,\infty} \leq \max_{k,\ell > m}\left(|\pi_{n_1+n_2}(k)|, |\pi_{n_1+n_2}(\ell)|\right) = |\pi_{n_1+n_2}(m+1)|.$$
This upper bound is attained at $v = [\psi_{m+1}, 0]^\top$. Hence, $\|T\|_{\mathrm{op}} = |\pi_{n_1+n_2}(m+1)|$. Now, a similar direct calculation leads to $\|T^n\|_{\mathrm{op}} = |\pi_{n(n_1+n_2)}(m+1)|$, which implies that
$$\rho_{S2S}(T) = \lim_{n\to\infty}\left(\|T^n\|_{\mathrm{op}}\right)^{1/n} = |\pi_{n_1+n_2}(m+1)|.$$
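As a concrete illustration with hypothetical eigenvalue decay (our own example, not from the source), suppose $\rho_1(k) = \rho_2(k) = 2^{-k}$, $m = 2$ and $n_1 = n_2 = 1$. Both smoothing counts are odd, so
$$\rho_{S2S}(T) = |\rho_1(3)\rho_2(3)|^{\frac{1+1}{2}} = 2^{-6} = \frac{1}{64},$$
whereas for $n_1 = 2$, $n_2 = 1$ the parities differ and
$$\rho_{S2S}(T) = |\rho_1(3)\rho_2(3)|^{\frac{3-1}{2}}\max\{|\rho_1(3)|, |\rho_2(3)|\} = 2^{-6}\cdot 2^{-3} = 2^{-9}.$$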

Theorem 4.2.5 shows that, similarly to the case of a coarse space based on the eigenfunctions of $G$, the choice of the basis functions $\psi_k^j$ can drastically affect the convergence of the method. We conclude this section dedicated to the S2S method with two remarks, and we refer the interested reader to [42] for further details. It is natural to pose the following question: given an integer $N_c$, which coarse space of dimension $N_c$ minimizes the spectral radius of the S2S method? In other words, which coarse space of dimension $N_c$ leads to the fastest convergence? One would be tempted to say that the spectral coarse space based on the eigenfunctions of $G$ is optimal, as its convergence is determined by the largest eigenvalue associated to an eigenvector not included in the coarse space. However, we remark that the PCA and HEM coarse spaces, since they are not based on eigenfunctions of $G$ and $A$, lead to a two-level method with substantially different eigenvalues and eigenvectors. It can happen that these coarse spaces do not map the "slowest" eigenvectors of $G$ exactly into the kernel of the S2S method, and yet handle them very efficiently while being faster on the remaining part of the spectrum.

We finally remark that the substructured matrix $A$ is not symmetric, but its eigenvalues are strictly positive assuming $\sigma(G) \subset [0, 1)$. In the case of highly jumping diffusion coefficients, the largest eigenvalue of $G$ tends to one, which means that the smallest eigenvalue of $A$ tends to zero. If one uses a coarse space which is not made of eigenfunctions of $A$, the coarse matrix $A_c$ may have some negative eigenvalues, which then leads to a divergent method. We studied the effects of including a perturbed eigenvector in the coarse space, and we refer the interested reader to [42]. A simple numerical remedy is to apply the smoother $G$ a few times to the basis of the coarse space, so as to make each element of the basis closer to the eigenfunctions of $G$.

In the following section we aim to answer the following questions: is it possible to use a two-level substructured method without directly providing a definition of the coarse space $V_c$? Is it possible to define a multilevel substructured domain decomposition method? The G2S method provides a positive answer to these needs.