The domain decomposition method of Bank and Jimack as an optimized Schwarz method

MAMOOLER, Parisa

Abstract

The aim of this thesis is to introduce the Bank-Jimack domain decomposition method and study its convergence behavior. We are interested in understanding what the precise contribution of the outer coarse mesh is to the convergence behavior of the domain decomposition method proposed by Bank and Jimack. We show for a two subdomain decomposition that the outer coarse mesh can be interpreted as computing an approximation to the optimal transmission condition represented by the Dirichlet to Neumann map, and thus the method of Bank and Jimack can be viewed as an optimized Schwarz method, i.e. a Schwarz method that uses Robin or higher order transmission conditions instead of the classical Dirichlet ones.

MAMOOLER, Parisa. The domain decomposition method of Bank and Jimack as an optimized Schwarz method. Thèse de doctorat : Univ. Genève, 2019, no. Sc. 5361

DOI : 10.13097/archive-ouverte/unige:121394 URN : urn:nbn:ch:unige-1213949

Available at:

http://archive-ouverte.unige.ch/unige:121394

Disclaimer: layout of this document may differ from the published version.


UNIVERSITÉ DE GENÈVE FACULTÉ DES SCIENCES

Section de Mathématiques Professeur Martin J. Gander

The Domain Decomposition Method of Bank and Jimack as an Optimized Schwarz Method

THÈSE

Présentée à la Faculté des Sciences de l'Université de Genève pour obtenir le grade de Docteur ès Sciences, mention Mathématiques

par

Parisa MAMOOLER

de Téhéran (Iran)

Thèse N°5361

GENÈVE

Atelier d’impression ReproMail

2019


À mes parents et ma sœur

Parisa


Abstract

The aim of this thesis is to introduce the Bank-Jimack domain decomposition method and study its convergence behavior. We are interested in understanding what the precise contribution of the outer coarse mesh is to the convergence behavior of the domain decomposition method proposed by Bank and Jimack. The thesis is divided into seven different chapters.

In Chapter 1, we explain numerical methods for solving Partial Differential Equations (PDEs); more precisely, we explain the finite difference method and study its convergence properties. Throughout this thesis, we use the finite difference method to discretize the partial differential equations.

In Chapter 2, we introduce iterative methods and their convergence analysis. We also explain the convergence factor and convergence rate of the iterative methods.

In Chapter 3, we give an introduction to domain decomposition methods. We introduce the different Schwarz domain decomposition methods and we study the convergence behavior of the parallel Schwarz algorithm.

In Chapter 4, we introduce the Bank-Jimack domain decomposition method and we show for a two subdomain decomposition that the outer coarse mesh can be interpreted as computing an approximation to the optimal transmission condition represented by the Dirichlet to Neumann map, and thus the method of Bank and Jimack can be viewed as an optimized Schwarz method, i.e. a Schwarz method that uses Robin or higher order transmission conditions instead of the classical Dirichlet ones. In particular, we show that when applied to the Poisson equation in one spatial dimension, the algorithm of Bank and Jimack computes an optimal Robin parameter for any choice of the outer coarse mesh, and the method thus converges in two iterations in this case.

In Chapter 5, we study the Bank-Jimack algorithm applied to the η equation in one spatial dimension for two different coarsening methods, uniform and stretched. We show that with a uniform coarse mesh, the convergence factor of the method does not improve when the number of coarse mesh points increases, whereas with a stretched coarse mesh, the convergence behavior improves as the number of coarse mesh points increases. We compare these results with the convergence behavior of the optimized Schwarz method with Robin parameter, and we observe that the Bank-Jimack method with 2 coarse mesh points behaves like the optimized Schwarz method, and that increasing the number of coarse mesh points improves the convergence behavior, yielding a better convergence factor.

In Chapter 6, we introduce the Bank-Jimack method applied to the Poisson equation in two spatial dimensions, and using Fourier analysis we show that it can be understood from the η equation in 1D. We study its convergence behavior for two different coarsening methods. First we consider a coarse mesh only along the x axis, and then we consider a coarse mesh along the x and y axes.

Finally, in Chapter 7, we study the Bank-Jimack algorithm applied to the η equation in one spatial dimension on an unbounded domain, which greatly simplifies the analysis. We show the convergence behavior for two different coarsening methods, uniform and stretched. We observe that the convergence behavior in this case is slightly different from the one we found for the bounded domain in Chapter 5.


Résumé

Le but de cette thèse est d'exposer la méthode de décomposition de domaine de Bank-Jimack et d'étudier sa convergence. Nous nous intéressons en particulier à l'apport du maillage grossier extérieur à cette convergence. La thèse se compose de sept chapitres.

Dans le chapitre 1, nous présentons des méthodes numériques pour résoudre des Équations aux Dérivées Partielles (EDPs). Plus précisément, nous examinerons la méthode des différences finies, ainsi que ses propriétés de convergence. Tout au long de cette thèse, nous utiliserons des méthodes de différences finies pour discrétiser nos équations aux dérivées partielles.

Dans le chapitre 2, nous introduisons les méthodes itératives ainsi que leur analyse de convergence. Nous expliquons aussi le facteur de convergence et le taux de convergence d'une méthode itérative.

Dans le chapitre 3, nous introduisons les méthodes de décomposition de domaine. Nous présentons les différentes méthodes de Schwarz de décomposition de domaine, et nous étudions la convergence de l'algorithme de Schwarz parallèle.

Dans le chapitre 4, nous définissons la méthode Bank-Jimack de décomposition de domaine et nous montrons que dans le cas de deux sous-domaines, le maillage grossier extérieur peut être interprété comme une approximation de la condition de transmission optimale donnée par l'application Dirichlet to Neumann, d'où nous déduisons que la méthode de Bank et Jimack peut être vue comme une méthode de Schwarz optimisée, i.e. une méthode de Schwarz qui utilise des conditions de transmission de Robin ou d'ordre plus élevé en lieu et place des conditions classiques de Dirichlet.

En particulier, nous montrons dans le cas de l'équation de Poisson à une dimension spatiale que l'algorithme de Bank et Jimack calcule un paramètre de Robin optimal peu importe le choix du maillage grossier extérieur, et donc converge en exactement deux itérations dans ce cas.

Dans le chapitre 5, nous étudions l'algorithme Bank-Jimack appliqué à l'équation η à une dimension spatiale pour deux méthodes différentes de construction du maillage grossier : uniforme et non-uniforme. Pour un maillage grossier uniforme, la convergence de la méthode ne s'améliore pas lorsque le nombre de points du maillage grossier grandit, alors que pour un maillage grossier non-uniforme, où l'on peut choisir les pas de maillage de manière géométrique, la convergence est améliorée lorsque le nombre de points du maillage grossier augmente.

Nous comparons ces résultats avec la convergence de la méthode de Schwarz optimisée avec paramètre de Robin et nous observons que la méthode de Bank-Jimack avec deux points de maillage grossier se comporte comme la méthode de Schwarz optimisée. Aussi, augmenter le nombre de points de maillage grossier améliore la convergence ainsi que le facteur de convergence.

Dans le chapitre 6, nous présentons la méthode Bank-Jimack appliquée à l'équation de Poisson en dimension deux, et nous montrons en utilisant l'analyse de Fourier que cette méthode peut être comprise à partir de l'équation η en dimension un. Nous étudions sa convergence pour deux différentes méthodes de choix du maillage grossier. En premier lieu, nous considérons un maillage grossier uniquement le long de l'axe x, puis nous prenons un maillage grossier selon les axes x et y.

Finalement, dans le chapitre 7, nous étudions l'algorithme de Bank-Jimack appliqué à l'équation η en dimension un spatiale sur un domaine non borné, ce qui simplifie énormément l'analyse. Nous montrons la convergence pour deux méthodes différentes de maillage grossier : uniforme et non-uniforme. Nous observons que la convergence dans ce cas diffère quelque peu de celle étudiée au chapitre 5.


Acknowledgment

I would like to thank my advisor, Professor Martin J. Gander, for his enthusiasm, patience and guidance. Working with him has been a great pleasure for me; he introduced me to a new mathematical world, and always supported me all the way through with great patience. I will always remember the sparkle in his eyes after discovering the solution of a mathematical problem.

I would like to thank Dr Gabriele Ciaramella for his wonderful guidance. It was a great pleasure for me to have had the opportunity to discuss my PhD research with him, and to find an open door whenever I needed advice.

Special thanks to my colleagues and friends at the Department of Mathematics, Pascaline, Caroline, Sandie, Pratik, Marco, Tommaso, Ibrahim, Adrien, Guillaume, Pablo, Conor, Justine, Thibaut, Michal, Aitor, Eiichi, Faycal, Roman, Jeremy, Dominik, Caterina, Anthony, and Fathi, for our insightful discussions during the coffee breaks at Z-bar.

Thanks a million to my parents for their unconditional love. This journey would not have been possible without their support, patience and selflessness. They are the first teachers of my life, and their worthy advice accompanied me all the way. I would also like to thank my little sister Sepideh for being a great sister who always listens to my dreams and supports me.

Finally, I would like to thank my love, Mohammadreza, for his unconditional support, care and kindness. For an expert in electrical engineering, his mathematical knowledge is astonishing; I am grateful to have had the opportunity to talk to him about my research problems, and he was always present with great ideas.


Contents

Abstract
Résumé
Acknowledgment
Contents

1 Numerical Methods for Solving PDEs
  1.1 Introduction
  1.2 Partial Differential Equations (PDEs)
  1.3 The Finite Difference Method
    1.3.1 The Two Dimensional Poisson Equation
    1.3.2 Numerical Experiment
  1.4 Convergence Analysis of Finite Difference Methods
  1.5 Conclusion

2 Iterative Methods
  2.1 Stationary Iterative Methods
  2.2 Convergence Analysis of Iterative Methods
  2.3 Convergence Factor and Convergence Rate
  2.4 Conclusion

3 Domain Decomposition Methods
  3.1 Introduction
  3.2 The classical Schwarz algorithm
  3.3 The parallel Schwarz algorithm
  3.4 The parallel Schwarz algorithm for a model problem
  3.5 Convergence Analysis
    3.5.1 1 dimensional case
    3.5.2 2 dimensional case
  3.6 The optimized Schwarz algorithm

4 The Bank-Jimack Domain Decomposition Method
  4.1 Introduction
  4.2 The Bank-Jimack Domain Decomposition Method
  4.3 The Bank-Jimack Domain Decomposition Method in 1D for the Poisson Equation
  4.4 The Modified Bank-Jimack Method
  4.5 Three Equivalent Algorithms
  4.6 Optimized Schwarz Methods
  4.7 The Bank-Jimack Method as an Optimized Schwarz Method
  4.8 Numerical Experiments

5 The Bank-Jimack Domain Decomposition Method in 1D for the η Equation
  5.1 Model Problem: η Equation
  5.2 Optimized Schwarz Method for the η Equation
  5.3 The Bank-Jimack Method as an Optimized Schwarz Method II
  5.4 Uniform Coarse Mesh
    5.4.1 Uniform Coarse Mesh with 2 Mesh Points
    5.4.2 Uniform Coarse Mesh with 3 Mesh Points
    5.4.3 Uniform Coarse Mesh with 4 Mesh Points
  5.5 Stretched Coarse Mesh
    5.5.1 Stretched Coarse Mesh with 2 Mesh Points
    5.5.2 Stretched Coarse Mesh with 3 Mesh Points
    5.5.3 Stretched Coarse Mesh with 4 Mesh Points

6 Bank and Jimack Domain Decomposition Method in 2D
  6.1 Introduction
  6.2 Model Problem: The Poisson Equation in 2D
  6.3 The Bank-Jimack Method in 2D
    6.3.1 Coarsening Along the x Axis
    6.3.2 Coarsening Along the x and y Axis
  6.4 Convergence Analysis of the Bank-Jimack Method in 2D
  6.5 Numerical Experiments
    6.5.1 Numerical Experiment for Coarsening Along the x Axis
    6.5.2 Numerical Experiment for Coarsening Along the x and y Axis

7 The Bank-Jimack Domain Decomposition Method for the η Equation on an Unbounded Domain
  7.1 The η Equation on an Unbounded Domain
  7.2 Convergence Factor of the OSM for the η Equation
  7.3 Stretched Coarse Mesh
    7.3.1 Stretched Coarse Mesh with 2 Mesh Points
    7.3.2 Stretched Coarse Mesh with 3 Mesh Points
    7.3.3 Stretched Coarse Mesh with 4 Mesh Points

8 Conclusion

Bibliography


Chapter 1

Numerical Methods for Solving PDEs

1.1 Introduction

Partial differential equations (PDEs) are crucial in mathematical modeling. Virtually every field in science and engineering uses differential equations, for example the prediction of weather, the design of a jet engine or an internal combustion engine, the safety of a nuclear reactor, the exploration for oil, and so on. More recently their use has spread into economics, financial forecasting, image processing and other fields.

There are infinitely many partial differential equations, and new ones are discovered through modeling every day. To investigate the predictions of PDE models of such phenomena, it is often necessary to approximate their solution numerically. There are different numerical techniques for solving PDEs, including the finite difference method, the finite element method, the finite volume method, etc.

1.2 Partial Differential Equations (PDEs)

Definition 1.2.1. A partial differential equation (PDE) is a relation of the following type:

F(x_1, \dots, x_n, u_{x_1}, \dots, u_{x_n}, u_{x_1 x_1}, \dots, u_{x_n x_n}, u_{x_1 x_1 x_1}, \dots) = 0,   (1.2.1)

where the unknown u = u(x_1, \dots, x_n) is a function of n variables and u_{x_j}, \dots, u_{x_i x_j}, \dots are its partial derivatives.

For second-order equations of the form

auxx+buxy+cuyy=f(x, y, u, ux, uy), (1.2.2) 1


there is a classification based on the discriminant b^2 - 4ac:

• b^2 < 4ac: elliptic PDE, e.g. a Poisson equation u_{xx} + u_{yy} = f;

• b^2 > 4ac: hyperbolic PDE, e.g. the wave equation u_{tt} - \frac{1}{c^2} u_{xx} = 0;

• b^2 = 4ac: parabolic PDE, e.g. the (advection-)diffusion equation u_t + c u_x = u_{xx}.

In the more general case where there are more than two independent variables, equation (1.2.2) becomes

\sum_{j=1}^{m} \sum_{k=1}^{m} a_{jk} u_{x_j x_k} = f(x_1, \dots, x_m, u, u_{x_1}, \dots, u_{x_m}).   (1.2.3)

In this case, the classification into elliptic, parabolic, and hyperbolic PDEs can be generalized by looking at the eigenvalues of the matrix A = [a_{jk}]:

• The PDE is elliptic if all eigenvalues are of the same sign and not vanishing;

• The PDE is parabolic if all eigenvalues are of the same sign except one which vanishes;

• The PDE is hyperbolic if the matrix A has at least two eigenvalues of opposite sign.
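The eigenvalue test above is straightforward to implement; the sketch below is ours, not from the thesis. For the two-variable case (1.2.2), the symmetric coefficient matrix is [[a, b/2], [b/2, c]], whose eigenvalue sign pattern matches the discriminant test:

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify sum_{j,k} a_jk u_{x_j x_k} = f by the eigenvalues of A = [a_jk]."""
    ev = np.linalg.eigvalsh(np.asarray(A, dtype=float))  # A is symmetric
    pos = int(np.sum(ev > tol))
    neg = int(np.sum(ev < -tol))
    zero = len(ev) - pos - neg
    if zero == 0 and (pos == len(ev) or neg == len(ev)):
        return "elliptic"      # all eigenvalues of the same sign, none vanishing
    if zero == 1 and (pos == len(ev) - 1 or neg == len(ev) - 1):
        return "parabolic"     # one vanishing eigenvalue, the rest of one sign
    if pos >= 1 and neg >= 1:
        return "hyperbolic"    # at least two eigenvalues of opposite sign
    return "degenerate"

print(classify([[1, 0], [0, 1]]))    # Poisson equation: elliptic
print(classify([[1, 0], [0, -1]]))   # wave equation: hyperbolic
print(classify([[1, 0], [0, 0]]))    # diffusion equation: parabolic
```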

To obtain a unique solution, we must specify a physical domain \Omega \subset \mathbb{R}^d, d = 1, 2, 3, on which we consider the equation, add appropriate boundary conditions, and also initial conditions; see [22]. For a better understanding, we consider the following example:

Example 1.2.1. Laplace Equation: Laplace's equation is named after Pierre-Simon Laplace, a French mathematician. In 1799, he proved that the solar system was stable over astronomical timescales, contrary to what Newton had thought a century earlier. In the course of proving Newton wrong, Laplace investigated the equation that bears his name. The Laplace equation appears in the mathematical modeling of many phenomena, for example electricity, magnetism, fluid mechanics, gravity, soap films, etc. [18]. The Laplace equation is

-(u_{xx} + u_{yy}) = -\Delta u = 0 \quad \text{in } \Omega,   (1.2.4)

where \Delta = \nabla^2 is the Laplace operator, and u is a scalar function. The Laplace equation is an elliptic partial differential equation.


[Portraits: (a) Pierre-Simon Laplace (1749–1827) [1]; (b) Siméon Denis Poisson (1781–1840) [2].]

Example 1.2.2. Poisson Equation: Poisson's equation is a generalization of Laplace's equation, and it is named after the French mathematician Siméon Denis Poisson. In 1812, Poisson discovered that Laplace's equation is valid only outside of a solid. Poisson's equation is an elliptic partial differential equation and has broad utility in mechanical engineering and theoretical physics. For example, it arises to describe the potential field caused by a given charge or mass density distribution; see [18]. The Poisson equation is

-\Delta u = f \quad \text{in } \Omega.   (1.2.5)

There are very few partial differential equations with closed form solutions. Also, the theoretical study of the existence and uniqueness of solutions is currently an active research area.

1.3 The Finite Difference Method

The finite difference approximation is one of the simplest and oldest methods to solve differential equations. It was already known to Euler in 1768 in one space dimension, and was extended to two dimensions by Runge in 1908 to understand the torsion in a beam of arbitrary cross section, which results in having to solve the Poisson equation. The advent of finite difference techniques in numerical applications began in the early 1950s, and their development was stimulated by the emergence of computers that offered a convenient framework for dealing with complex problems of science and technology. Theoretical results have been obtained during the last five decades regarding the accuracy, stability and convergence of the finite difference method for partial differential equations. The finite difference method is based on an approximation of the differential operators in the equation by finite differences, which is natural, since the derivatives themselves are defined as the limit of a finite difference,

f'(x) := \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.

The method was immediately put to use by Richardson, who tried a retroactive forecast of the weather of May 20th, 1910 by direct computation [38]. The forecast failed dramatically because of roughness in the initial data that led to unphysical surges in pressure, but the method essentially was correct, and led to Richardson's famous book about numerical weather prediction in 1922. A first convergence proof for the finite difference method was given in 1928 in the seminal paper by Courant, Friedrichs and Lewy, and the first error estimate is due to Gerschgorin [39].

To understand the finite difference method, we use it to solve the Poisson equation in two dimensions; see [22].

1.3.1 The Two Dimensional Poisson Equation

We want to compute an approximate solution of the Poisson equation

-\Delta u = f \quad \text{in } \Omega = (0,1) \times (0,1),   (1.3.1)
u = g \quad \text{on } \partial\Omega.

For simplicity we assume the domain \Omega is the unit square. The condition u = g on the boundary \partial\Omega is called a Dirichlet boundary condition. Another condition that we can consider on the boundary is imposing the flux, which in the case of the Poisson equation is the normal derivative at the boundary, \frac{\partial u}{\partial n} := \nabla u \cdot n = g. Here n is the unit outer normal vector to the boundary \partial\Omega. This is called a Neumann boundary condition.

The idea of the finite difference method is to use a truncated Taylor series in each variable to approximate the derivatives involved in the problem. For example, for the x variable we obtain

u(x+h, y) = u(x, y) + u_x(x, y)h + u_{xx}(x, y)\frac{h^2}{2} + u_{xxx}(x, y)\frac{h^3}{3!} + u_{xxxx}(\xi_1, y)\frac{h^4}{4!},   (1.3.2)

where \xi_1 lies between x and x+h. We substitute h by -h, and we obtain

u(x-h, y) = u(x, y) - u_x(x, y)h + u_{xx}(x, y)\frac{h^2}{2} - u_{xxx}(x, y)\frac{h^3}{3!} + u_{xxxx}(\xi_2, y)\frac{h^4}{4!},   (1.3.3)

where \xi_2 lies between x-h and x. We can approximate a first derivative of u using (1.3.2):

\frac{u(x+h, y) - u(x, y)}{h} = u_x(x, y) + O(h).


We neglect the error term O(h), which gives a first order finite difference approximation of the first partial derivative of u with respect to x,

u_x(x, y) \approx \frac{u(x+h, y) - u(x, y)}{h}.   (1.3.4)

This is called a forward approximation. Similarly, we can obtain a backward approximation of the first partial derivative with respect to x from (1.3.3),

u_x(x, y) \approx \frac{u(x, y) - u(x-h, y)}{h}.   (1.3.5)

A better approximation can be obtained using the difference of (1.3.2) and (1.3.3), namely

u_x(x, y) = \frac{u(x+h, y) - u(x-h, y)}{2h} + O(h^2).   (1.3.6)

This is called the centered approximation, and it is second order accurate. We can use the same idea to obtain forward, backward and centered approximations for the partial derivative with respect to y. For the second derivative with respect to x, which appears in the Poisson equation, we add equations (1.3.2) and (1.3.3) and obtain

u(x+h, y) - 2u(x, y) + u(x-h, y) = u_{xx}(x, y)h^2 + (u_{xxxx}(\xi_1, y) + u_{xxxx}(\xi_2, y))\frac{h^4}{4!}.   (1.3.7)

Isolating the second derivative term, and assuming that the fourth derivative of u is continuous, gives

u_{xx}(x, y) = \frac{u(x+h, y) - 2u(x, y) + u(x-h, y)}{h^2} - u_{xxxx}(\xi, y)\frac{h^2}{12}.   (1.3.8)

We neglect the term u_{xxxx}(\xi, y)\frac{h^2}{12} on the right-hand side, which leads to a second order approximation of the second partial derivative of u with respect to x,

u_{xx}(x, y) \approx \frac{u(x+h, y) - 2u(x, y) + u(x-h, y)}{h^2}.   (1.3.9)

Similarly, for the y variable we obtain the approximation

u_{yy}(x, y) \approx \frac{u(x, y+h) - 2u(x, y) + u(x, y-h)}{h^2}.   (1.3.10)

Adding these two approximations, we can define a discrete approximation of the Laplace operator:

Definition 1.3.1. (Discrete Laplacian) The discrete Laplacian \Delta_h applied to u is given by

\Delta_h u(x, y) := \frac{u(x+h, y) + u(x, y+h) - 4u(x, y) + u(x-h, y) + u(x, y-h)}{h^2},   (1.3.11)


which is also called the five point star approximation of the Laplacian.
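The orders of accuracy of the forward, centered and second difference quotients (1.3.4), (1.3.6) and (1.3.9) can be checked numerically; a short Python sketch (the test function u(x) = sin(x) and the step sizes are our choice):

```python
import math

# Difference quotients applied to u(x) = sin(x) at x0 = 1,
# where u'(x0) = cos(1) and u''(x0) = -sin(1) are known exactly.
u = math.sin
du, d2u = math.cos(1.0), -math.sin(1.0)
x0 = 1.0

def errors(h):
    forward  = (u(x0 + h) - u(x0)) / h                     # (1.3.4), O(h)
    centered = (u(x0 + h) - u(x0 - h)) / (2 * h)           # (1.3.6), O(h^2)
    second   = (u(x0 + h) - 2 * u(x0) + u(x0 - h)) / h**2  # (1.3.9), O(h^2)
    return abs(forward - du), abs(centered - du), abs(second - d2u)

e_h = errors(1e-2)
e_h2 = errors(5e-3)  # halving h: the O(h) error halves, the O(h^2) errors quarter
print([e_h[i] / e_h2[i] for i in range(3)])  # ratios close to [2, 4, 4]
```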

If we apply the discrete Laplacian to u from the Poisson equation (1.3.1), we get

-\Delta_h u(x, y) = -\frac{u(x+h, y) + u(x, y+h) - 4u(x, y) + u(x-h, y) + u(x, y-h)}{h^2}   (1.3.12)
= -(u_{xx}(x, y) + u_{yy}(x, y)) + O(h^2)
= f(x, y) + O(h^2).

Thus the solution u of the Poisson equation satisfies the discrete Poisson equation -\Delta_h u(x, y) = f(x, y) at each point (x, y) \in \Omega up to a truncation error term O(h^2). The finite difference method is based on neglecting this truncation error term, and on computing an approximation of u at given grid points in the domain. In our case, where the domain \Omega is the unit square (0,1) \times (0,1), we discretize the domain with a uniform mesh with n grid points in each direction, and we obtain a mesh size h = \frac{1}{n+1}, and the grid given by x_i = ih and y_j = jh for i, j = 1, 2, \dots, n. If n is big, the mesh size h is small, and thus the truncation error O(h^2) should also be small.

Neglecting the truncation error in (1.3.12), denoting by u_{i,j} an approximation of the solution at grid point (x_i, y_j), and letting f_{i,j} := f(x_i, y_j), we need to solve the system of equations

-\Delta_h u_{i,j} = f_{i,j}, \quad i, j = 1, 2, \dots, n.   (1.3.13)

The equations for the indices i = 1, i = n, j = 1 and j = n involve the boundary values at x = 0, 1 and y = 0, 1. More precisely, in the particular case where for example i = 1 and j = 1, equation (1.3.13) becomes

-\frac{u_{2,1} + u_{1,2} - 4u_{1,1} + u_{0,1} + u_{1,0}}{h^2} = f_{1,1},   (1.3.14)

where u_{0,1} and u_{1,0} are on the boundary, so their values are given by the boundary condition u = g of problem (1.3.1). If we denote this boundary condition on each of the sides of the unit square by

u(x, 0) = p(x), \quad u(x, 1) = r(x), \quad u(0, y) = q(y), \quad u(1, y) = s(y),   (1.3.15)

then equation (1.3.14) becomes

-\frac{u_{2,1} + u_{1,2} - 4u_{1,1} + q_1 + p_1}{h^2} = f_{1,1},   (1.3.16)


Figure 1.2: Discretization of the unit square domain \Omega = (0,1) \times (0,1) [22].

where p_1 := p(x_1) and q_1 := q(y_1). Since these boundary values are known, we can put them on the right-hand side of the equation, and obtain

-\frac{u_{2,1} + u_{1,2} - 4u_{1,1}}{h^2} = f_{1,1} - \frac{1}{h^2}(q_1 + p_1).   (1.3.17)

Similarly, we obtain for the indices i = 1 and j = 2 the discrete equation

-\frac{u_{1,3} + u_{2,2} - 4u_{1,2} + u_{1,1}}{h^2} = f_{1,2} - \frac{1}{h^2} q_2.   (1.3.18)

Continuing this way for all the nodes connected to the boundary, we gather all the equations obtained for all the points on the grid in the linear system of equations

A u = f,   (1.3.19)

Au=f, (1.3.19)


where the matrix A is the n^2 \times n^2 block tridiagonal matrix

A = \frac{1}{h^2} \begin{bmatrix} T & -I & & \\ -I & T & \ddots & \\ & \ddots & \ddots & -I \\ & & -I & T \end{bmatrix}, \qquad T = \begin{bmatrix} 4 & -1 & & \\ -1 & 4 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 4 \end{bmatrix},   (1.3.20)

where T and the identity matrix I are of size n \times n, and the vectors u and f in the linear system of equations Au = f we obtained are

u = \begin{bmatrix} u_{1,1} \\ u_{1,2} \\ \vdots \\ u_{1,n} \\ u_{2,1} \\ u_{2,2} \\ \vdots \\ u_{2,n} \\ \vdots \\ u_{n,1} \\ u_{n,2} \\ \vdots \\ u_{n,n} \end{bmatrix}, \qquad
f := \begin{bmatrix} f_{1,1} - \frac{1}{h^2}(p_1 + q_1) \\ f_{1,2} - \frac{1}{h^2} q_2 \\ \vdots \\ f_{1,n} - \frac{1}{h^2}(q_n + r_1) \\ f_{2,1} - \frac{1}{h^2} p_2 \\ f_{2,2} \\ \vdots \\ f_{2,n} - \frac{1}{h^2} r_2 \\ \vdots \\ f_{n,1} - \frac{1}{h^2}(p_n + s_1) \\ f_{n,2} - \frac{1}{h^2} s_2 \\ \vdots \\ f_{n,n} - \frac{1}{h^2}(r_n + s_n) \end{bmatrix}.   (1.3.21)
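The system (1.3.19) with the block matrix (1.3.20) is convenient to assemble with Kronecker products, since h^2 A = I \otimes T_1 + T_1 \otimes I with T_1 = tridiag(-1, 2, -1). A Python/NumPy sketch (the function name is ours), verified on the manufactured solution u(x, y) = sin(πx) sin(πy), for which -\Delta u = 2π² sin(πx) sin(πy) and the boundary data p, q, r, s all vanish:

```python
import numpy as np

def laplacian_kron_2d(n):
    """Return h^2 * A from (1.3.20): kron(I, T1) + kron(T1, I),
    with T1 = tridiag(-1, 2, -1) of size n x n."""
    T1 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return np.kron(np.eye(n), T1) + np.kron(T1, np.eye(n))

n = 20
h = 1.0 / (n + 1)
A = laplacian_kron_2d(n) / h**2

x = np.arange(1, n + 1) * h               # interior grid points x_i = i*h
X, Y = np.meshgrid(x, x, indexing="ij")   # ordering u_{1,1}, u_{1,2}, ..., u_{n,n}
f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)

u = np.linalg.solve(A, f.ravel())         # solve A u = f
u_exact = (np.sin(np.pi * X) * np.sin(np.pi * Y)).ravel()
print(np.max(np.abs(u - u_exact)))        # O(h^2) error, roughly 2e-3 here
```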

1.3.2 Numerical Experiment

In this section we show some numerical experiments to observe how the finite difference method works for Poisson's equation. To do so, we write a MATLAB function called Solve2dPoisson:

function u=Solve2dPoisson(f,ai,bi,gl,gr)
% SOLVE2DPOISSON solves the 2d Poisson equation using a finite
% difference approximation.
% u=Solve2dPoisson(f,ai,bi,gl,gr) solves the two dimensional equation
% (-Delta)u=f on the domain Omega=(ai*h,bi*h)x(0,1) with Dirichlet
% boundary conditions u=gl at x=ai*h, u=gr at x=bi*h, and u=0 at y=0
% and y=1, using a finite difference approximation with (bi-ai-1)
% times length(gl) interior grid points and the same mesh size
% h=1/(length(gl)+1) in both x and y.
nx=bi-ai-1;                                % number of interior points in x
ny=length(gl);                             % number of interior points in y
h=1/(ny+1);                                % mesh size
A=1/h^2*Laplacian_kron_2d(nx,ny);          % five point finite difference matrix
f(1:ny,1)=f(1:ny,1)+gl/h^2;                % add left boundary data to the rhs
f(1:ny,end)=f(1:ny,end)+gr/h^2;            % add right boundary data to the rhs
u=A\f(:);                                  % solve the linear system
u=reshape(u,ny,nx);
u=[gl u gr];                               % append the boundary values in x
x=(ai:bi)*h;                               % plotting grid in x
y=0:h:1;                                   % plotting grid in y
ufinal=[zeros(1,nx+2); u; zeros(1,nx+2)];  % zero boundary values at y=0,1
mesh(x,y,ufinal);
end

Executing Solve2dPoisson for n = 10, 25, 50, 100, with ai=0, bi=n+1, f=ones(n,n), gl=zeros(n,1), and gr=zeros(n,1), we obtain the following plots of the solution of Poisson's equation:

[Surface plots of the computed solution: (a) n = 10 grid points, (b) n = 25 grid points, (c) n = 50 grid points, (d) n = 100 grid points.]

The matrix A is a block tridiagonal matrix, and in each row there are at most 5 non-zero entries. Such matrices are called structured, sparse matrices. Systems of this kind can be solved using Gaussian elimination, but that destroys the sparsity. There are more effective iterative methods available to solve the Poisson equation, for example preconditioned Krylov methods [49], or multigrid methods [27].
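As an illustration of exploiting sparsity (a sketch of ours, not from the thesis): since A is symmetric positive definite, the system can be solved with the conjugate gradient method, a Krylov method, using SciPy's sparse machinery:

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import cg

n = 50
h = 1.0 / (n + 1)
T1 = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
I = identity(n)
A = (kron(I, T1) + kron(T1, I)) / h**2   # sparse: at most 5 nonzeros per row
f = np.ones(n * n)                       # -Delta u = 1, zero boundary values

u, info = cg(A, f)                       # conjugate gradient iteration
print(info, u.max())                     # info == 0: converged; max near 0.0737
```

The exact maximum of the solution of -\Delta u = 1 on the unit square with zero boundary values is approximately 0.0737, which the sparse iterative solve reproduces to discretization accuracy.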

We have to be careful here, because we approximated the derivatives in the Poisson equation by finite differences, but this does not automatically imply that the solution u of the large sparse system of equations we obtained is an approximation of the solution of the Poisson equation (1.3.1); Runge just stated that this is so. One important property of numerical methods related to the truncation error is convergence, and in the following we study the convergence of this approximation.

1.4 Convergence Analysis of Finite Difference Methods

When we approximate the solution of PDEs numerically, there are two primary sources of error: rounding (or floating point) errors and truncation errors. Rounding errors are associated with the floating-point arithmetic that our computers use to perform calculations. Truncation errors, on the other hand, are errors we incur through the numerical method; these errors would exist even in the absence of rounding errors. To some extent, we cannot control rounding errors (they are determined by the precision of the machine), but we can control truncation errors. In this section we want to analyze the convergence behavior of the finite difference method.

Convergence of the finite difference approximation to the solution of the underlying partial differential equation was first proved by Courant, Friedrichs and Lewy using a maximum principle and compactness [15]. This was still done in the spirit of proving that the PDE actually has a solution, so nowhere is it assumed that the solution of the PDE exists. The first error estimate was given two years later by Gerschgorin [24]. He proved the result for a general second-order elliptic operator, and we basically follow the same steps in the simpler case of the Poisson equation with Dirichlet boundary conditions on the unit square (1.3.1), discretized by a finite difference approximation on the uniform grid given by x_i = ih and y_j = jh for i, j = 1, 2, \dots, n and h = \frac{1}{n+1}:

-\Delta_h u_{i,j} = f_{i,j}, \quad i, j = 1, \dots, n,   (1.4.1)
u_{i,j} = g_{i,j}, \quad (i, j) \in B,

where the set of boundary nodes is given by

B := \{(i, j) : i = 0, n+1 \text{ and } j = 1, \dots, n, \text{ or } j = 0, n+1 \text{ and } i = 1, \dots, n\}.   (1.4.2)

Following Gerschgorin, but for our simpler case, we now prove that the discrete solution u_{i,j} is indeed an approximation of the continuous solution u of (1.3.1).

Definition 1.4.1. (Maximum norm) Let u := \{u_{i,j}\}_{i,j=1}^{n} be defined by (1.4.1). We define the maximum norm in the interior by

\|u\|_\infty = \max_{i,j=1,\dots,n} |u_{i,j}|,

and also a maximum norm on the boundary,

\|u\|_{\infty,\partial\Omega} = \max\{|u_{i,j}| : (i, j) \in B\}.   (1.4.3)

In order to prove convergence of the finite difference approximation, we need three main ingredients:

• a truncation error estimate,

• a discrete maximum principle,

• a Poincaré type estimate.

These results are given in the following lemmas [22].


CHAPTER 1. NUMERICAL METHODS FOR SOLVING PDES 12 Lemma 1.4.1. (Truncation Error Estimate). If the solution u of the Poisson equa- tion (1.3.1), is inC4((0,1)⇥(0,1))and satisfies

|@4u

@x4|M1, |@4u

@y4|M2, 8x, y2(0,1), thenu, defined by (1.4.1), satisfies

|| hu hu||1 M1+M2

12 h2.

Proof. Using the definition of the maximum norm, the Taylor expansion (1.3.2) from the previous subsection, and the fact that $f_{i,j} = f(x_i,y_j) = \Delta u(x_i,y_j)$, we obtain

$$\begin{aligned}
\max_{i,j\in\{1,\dots,n\}} |\Delta_h u(x_i,y_j) - \Delta_h u_{i,j}| &= \max_{i,j\in\{1,\dots,n\}} \Big|\Delta u(x_i,y_j) + \frac{1}{12}\Big(\frac{\partial^4 u}{\partial x^4}(\xi_i,y_j) + \frac{\partial^4 u}{\partial y^4}(x_i,\eta_j)\Big)h^2 - f_{i,j}\Big| \qquad (1.4.4)\\
&= \max_{i,j\in\{1,\dots,n\}} \Big|\frac{1}{12}\Big(\frac{\partial^4 u}{\partial x^4}(\xi_i,y_j) + \frac{\partial^4 u}{\partial y^4}(x_i,\eta_j)\Big)h^2\Big|\\
&\le \frac{M_1+M_2}{12}\,h^2, \qquad (1.4.5)
\end{aligned}$$

which concludes the proof.
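As a quick sanity check (a sketch, not part of the thesis), one can verify the bound of Lemma 1.4.1 numerically for $u = \sin(\pi x)\sin(\pi y)$, whose fourth derivatives are bounded by $M_1 = M_2 = \pi^4$:

```python
import numpy as np

def truncation_error(n):
    """Max difference between the five-point discrete Laplacian of
    u = sin(pi x) sin(pi y) and its exact Laplacian on an n x n grid."""
    h = 1.0 / (n + 1)
    x = np.arange(0, n + 2) * h                  # include boundary nodes
    X, Y = np.meshgrid(x, x, indexing="ij")
    u = np.sin(np.pi * X) * np.sin(np.pi * Y)
    lap_u = -2.0 * np.pi**2 * u                  # exact Laplacian
    lap_h = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
             - 4.0 * u[1:-1, 1:-1]) / h**2      # discrete Laplacian, interior
    return h, np.max(np.abs(lap_h - lap_u[1:-1, 1:-1]))

h, err = truncation_error(50)
bound = (np.pi**4 + np.pi**4) / 12.0 * h**2      # (M1 + M2)/12 * h^2
print(err, bound)                                # err stays below the bound
```

For this particular $u$ the bound is essentially sharp, since the fourth derivatives attain $\pi^4$ near the center of the square.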

Lemma 1.4.2. (Discrete Maximum Principle). Solutions of the five-point finite difference discretization $\Delta_h$ of the Laplace equation satisfy a discrete maximum principle:

• If $\Delta_h u = 0$, then the approximation $u_{i,j}$ attains its maximum and minimum values on the boundary of the domain, i.e. for $(i,j) \in B$.

• If $\Delta_h u \ge 0$, then the maximum of $u_{i,j}$ is on the boundary, and if $\Delta_h u \le 0$, then the minimum of $u_{i,j}$ is on the boundary.

Proof. The equation $\Delta_h u = 0$ implies that for all $i,j = 1,2,\dots,n$ of the grid we have

$$\frac{u_{i+1,j} + u_{i,j+1} - 4u_{i,j} + u_{i-1,j} + u_{i,j-1}}{h^2} = 0.$$

Thus the numerator must be zero. Solving for $u_{i,j}$, we obtain

$$u_{i,j} = \frac{u_{i+1,j} + u_{i,j+1} + u_{i-1,j} + u_{i,j-1}}{4},$$

which means that $u_{i,j}$ equals the average of its grid neighbors, and hence cannot be a strict local maximum or minimum. Thus, the maxima and minima are attained on the boundary. If $\Delta_h u \le 0$, following the same reasoning as before, $u_{i,j}$ must be greater than or equal to the average of its grid neighbors, and hence it cannot be a local minimum. Similarly, when $\Delta_h u \ge 0$, $u_{i,j}$ must be smaller than or equal to the average of its neighbors, and hence it cannot be a local maximum.
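The discrete maximum principle is easy to observe numerically. The following sketch (an illustration, not from the thesis) computes a discrete harmonic extension of random boundary data by Jacobi sweeps and checks that the interior values stay between the boundary extrema:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
u = np.zeros((n + 2, n + 2))
# random Dirichlet data on the four sides of the square
u[0, :] = rng.random(n + 2)
u[-1, :] = rng.random(n + 2)
u[:, 0] = rng.random(n + 2)
u[:, -1] = rng.random(n + 2)

# Jacobi sweeps: each interior value becomes the average of its neighbors,
# converging to the discrete harmonic function with the given boundary data
for _ in range(20000):
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2])

boundary = np.concatenate([u[0, :], u[-1, :], u[:, 0], u[:, -1]])
interior = u[1:-1, 1:-1]
print(interior.min() >= boundary.min(), interior.max() <= boundary.max())
```

Since every interior value is an average of its neighbors, the extrema can only occur on the boundary, exactly as the lemma states.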

A grid function $w_{i,j}$ is a discrete function associating a value to every point of the grid.

Lemma 1.4.3. (Poincaré Type Estimate). For any grid function $w_{i,j}$, $i,j = 0,\dots,n+1$, such that $w_{i,j} = 0$ on the boundary, i.e. for $(i,j) \in B$, we have

$$\|w\|_\infty \le \frac{1}{8}\,\|\Delta_h w\|_\infty.$$

Proof. We consider a particular grid function defined by

$$v_{i,j} = \frac{1}{4}\Big(\big(x_i - \tfrac12\big)^2 + \big(y_j - \tfrac12\big)^2\Big).$$

Applying the discrete Laplacian to this grid function at any point of the grid $(x_i,y_j) = (ih,jh)$, $i,j = 1,\dots,n$ and $h = \frac{1}{n+1}$, we obtain

$$\begin{aligned}
\Delta_h v_{i,j} &= \frac{1}{4h^2}\Big((x_i+h-\tfrac12)^2 + (x_i-h-\tfrac12)^2 - 2(x_i-\tfrac12)^2\\
&\qquad\quad + (y_j+h-\tfrac12)^2 + (y_j-h-\tfrac12)^2 - 2(y_j-\tfrac12)^2\Big)\\
&= \frac{1}{4h^2}\big(2h^2 + 2h^2\big) = 1, \qquad (1.4.6)
\end{aligned}$$

independently of $i,j$. Furthermore, since the grid function $v$ is a paraboloid centered at $(\frac12,\frac12)$, it attains its maxima at the corners of the unit square domain, where its value equals $\frac18$, and thus we get for the maximum norm on the boundary

$$\|v\|_{\infty,\partial\Omega} = \frac{1}{8}. \qquad (1.4.9)$$

Now we consider for any grid function $w$ the inequality

$$\Delta_h w_{i,j} - \|\Delta_h w\|_\infty \le 0, \qquad (1.4.10)$$

which trivially holds, since we subtract the maximum over all $i,j = 1,\dots,n$. Then, using (1.4.6), we multiply the norm term by $1 = \Delta_h v_{i,j}$ and obtain

$$\Delta_h w_{i,j} - \|\Delta_h w\|_\infty = \Delta_h w_{i,j} - \|\Delta_h w\|_\infty\,\Delta_h v_{i,j} = \Delta_h\big(w_{i,j} - \|\Delta_h w\|_\infty v_{i,j}\big) \le 0.$$

Now using the discrete maximum principle from Lemma 1.4.2, we know that the minimum of $w_{i,j} - \|\Delta_h w\|_\infty v_{i,j}$ is attained on the boundary. Since by assumption the grid function $w_{i,j}$ equals zero on the boundary, and we found in (1.4.9) that the maximum value of $v$ on the boundary is $\frac18$, we obtain

$$-\frac{1}{8}\|\Delta_h w\|_\infty \le w_{i,j} - \|\Delta_h w\|_\infty v_{i,j} \le w_{i,j},$$

where the second inequality holds trivially, since $\|\Delta_h w\|_\infty v_{i,j} \ge 0$ by definition. With a similar argument for the inequality

$$\Delta_h w_{i,j} + \|\Delta_h w\|_\infty \ge 0,$$

we get the relations

$$\frac{1}{8}\|\Delta_h w\|_\infty \ge w_{i,j} + \|\Delta_h w\|_\infty v_{i,j} \ge w_{i,j}.$$

We therefore proved that the grid function $w_{i,j}$ lies in between

$$-\frac{1}{8}\|\Delta_h w\|_\infty \le w_{i,j} \le \frac{1}{8}\|\Delta_h w\|_\infty, \quad i,j = 1,\dots,n,$$

and thus the modulus of the grid function $w_{i,j}$ is bounded by

$$|w_{i,j}| \le \frac{1}{8}\|\Delta_h w\|_\infty, \quad i,j = 1,\dots,n.$$

We therefore obtain the norm estimate

$$\|w\|_\infty \le \frac{1}{8}\|\Delta_h w\|_\infty,$$

as desired.
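The estimate can also be checked numerically; the following sketch (not from the thesis) evaluates both sides of the inequality for the smooth grid function $\sin(\pi x)\sin(\pi y)$, which vanishes on the boundary:

```python
import numpy as np

def norms(w_interior):
    """Return (||w||_inf, ||Delta_h w||_inf) for interior values w_interior,
    extended by zero to the boundary of the unit square."""
    n = w_interior.shape[0]
    h = 1.0 / (n + 1)
    w = np.zeros((n + 2, n + 2))
    w[1:-1, 1:-1] = w_interior
    lap = (w[2:, 1:-1] + w[:-2, 1:-1] + w[1:-1, 2:] + w[1:-1, :-2]
           - 4.0 * w[1:-1, 1:-1]) / h**2
    return np.max(np.abs(w)), np.max(np.abs(lap))

n = 30
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
X, Y = np.meshgrid(x, x, indexing="ij")
w_inf, lap_inf = norms(np.sin(np.pi * X) * np.sin(np.pi * Y))
print(w_inf, lap_inf / 8)   # ||w||_inf is indeed below (1/8)||Delta_h w||_inf
```

Here $\|\Delta_h w\|_\infty \approx 2\pi^2$, so the right-hand side is roughly $2.5$ while $\|w\|_\infty \approx 1$, consistent with the lemma.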

Theorem 1.4.1. (Convergence). Assume that the solution $u$ of the Poisson equation (1.3.1) is in $C^4(\Omega)$. Then the finite difference approximation $u_{i,j}$ of (1.3.12) converges to $u$ when $h$ tends to zero, and we have the error estimate

$$\|u(x_i,y_j) - u_{i,j}\|_\infty \le C h^2,$$

where $C$ is a constant and $h$ is the mesh size.

Proof. We simply apply Lemma 1.4.3 to the norm of the difference, and then Lemma 1.4.1 to obtain

$$\|u(x_i,y_j) - u_{i,j}\|_\infty \le \frac{1}{8}\|\Delta_h(u(x_i,y_j) - u_{i,j})\|_\infty \le \frac{M_1+M_2}{96}\,h^2,$$

which concludes the convergence proof.
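The $O(h^2)$ convergence of Theorem 1.4.1 can be observed numerically: halving $h$ should reduce the maximum error by a factor of about four. A sketch (with an assumed exact solution $u = \sin(\pi x)\sin(\pi y)$, not from the thesis):

```python
import numpy as np

def solve_poisson(n):
    """Solve Delta_h u = f on the unit square, f chosen so that the exact
    solution is sin(pi x) sin(pi y); return the maximum nodal error."""
    h = 1.0 / (n + 1)
    x = np.arange(1, n + 1) * h
    X, Y = np.meshgrid(x, x, indexing="ij")
    u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)   # zero on the boundary
    f = -2.0 * np.pi**2 * u_exact                     # Delta u = f
    # five-point matrix via Kronecker products
    I = np.eye(n)
    T = -4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    A = (np.kron(I, T) + np.kron(np.eye(n, k=1) + np.eye(n, k=-1), I)) / h**2
    u_h = np.linalg.solve(A, f.ravel()).reshape(n, n)
    return np.max(np.abs(u_h - u_exact))

errors = [solve_poisson(n) for n in (9, 19, 39)]      # h = 1/10, 1/20, 1/40
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print(ratios)   # each ratio is close to 4, confirming O(h^2)
```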


1.5 Conclusion

In this chapter we introduced PDEs and the approximation of their solution by the finite difference method, which is based on approximating the differential operators using Taylor expansions. The finite difference method is easy to understand and to program, and we use it throughout this thesis. However, the finite difference method has difficulties adapting to non-rectangular geometries with various boundary conditions. In 1971, McDonald [35] proposed a new technique called the finite volume method, which is based on integrating the equation over a small so-called control volume, then reducing the volume integral of the differential operator to an integral over the boundary of the volume using the divergence theorem, and only then approximating the fluxes across the boundaries [34]. Another method for the numerical approximation of partial differential equations is the finite element method, which is based on the techniques of the calculus of variations. The essential ideas go back to Walther Ritz, who in 1908 was the first to introduce the systematic use of finite dimensional approximations of the function spaces in which the solution lives, see [41] and [40].


Chapter 2

Iterative Methods

Gaussian elimination can be used to solve an $n \times n$ linear system of equations $Ax = b$ for $n$ on the order of a few hundred or perhaps a few thousand. However, when $n$ is large, on the order of hundreds of thousands or even more, Gaussian elimination is prohibitive: it requires about $\frac{2}{3}n^3$ operations (additions, subtractions, multiplications, and divisions) to solve an $n \times n$ linear system of equations. For $n = 10^6$, this is $\frac{2}{3} \times 10^{18}$ operations, and on a computer that performs $10^9$ operations per second, this would require $\frac{2}{3} \times 10^9$ seconds, about 21 years. Moreover, storing an $n \times n$ matrix for large $n$ is another problem: storing a dense $n \times n$ matrix with $n = 10^6$ requires $10^{12}$ words of storage. This is far more storage than is typically available on a single processor. To overcome both of these difficulties, iterative methods can be used to solve linear systems of equations. These methods require only matrix-vector multiplications and the solution of a preconditioning system [26].

2.1 Stationary Iterative Methods

To derive an iterative method for solving the linear system of equations

$$Au = f, \quad A \in \mathbb{R}^{n \times n}, \quad f \in \mathbb{R}^n, \qquad (2.1.1)$$

one splits the matrix into $A = M - N$. If $M$ is invertible, this splitting induces the stationary iterative method

$$Mu^{k+1} = Nu^k + f, \quad k = 0,1,2,\dots, \qquad (2.1.2)$$

and we need an initial guess $u^0$ to start the iteration. This method is called stationary because both $M$ and $N$ are independent of the iteration index $k$.

This method is efficient if the matrix splitting is such that solving linear systems with the matrix $M$ is cheap, and the iteration (2.1.2) converges fast [23].
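As a minimal illustration (a sketch, not from the thesis), the following code runs the stationary iteration (2.1.2) with the Jacobi splitting $M = \mathrm{diag}(A)$, $N = M - A$ on a small model system:

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
f = np.array([1.0, 2.0, 3.0])

M = np.diag(np.diag(A))          # Jacobi splitting: M = diag(A)
N = M - A                        # so that A = M - N

u = np.zeros(3)                  # initial guess u^0
for k in range(100):
    u = np.linalg.solve(M, N @ u + f)   # M u^{k+1} = N u^k + f

print(np.max(np.abs(A @ u - f)))        # residual is tiny after 100 steps
```

Solving with $M$ is cheap here because $M$ is diagonal, which is exactly the point of choosing a splitting: each step costs only a matrix-vector product and a trivial solve.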


Definition 2.1.1. (Iteration Matrix). We can write iterative methods based on the splitting $A = M - N$ in the standard form by multiplying both sides by $M^{-1}$,

$$u^{k+1} = M^{-1}Nu^k + M^{-1}f, \qquad (2.1.3)$$

and for any matrix splitting $A = M - N$ with $M$ invertible, the matrix $G := M^{-1}N$ is called the iteration matrix.

We can write (2.1.3) as

$$\begin{aligned}
u^{k+1} &= M^{-1}Nu^k + M^{-1}f & (2.1.4)\\
&= M^{-1}(M - A)u^k + M^{-1}f & (2.1.5)\\
&= u^k + M^{-1}(f - Au^k),
\end{aligned}$$

and thus we can also write the iteration in the correction form

$$u^{k+1} = u^k + M^{-1}r^k, \qquad (2.1.6)$$

where $r^k = f - Au^k$ is called the residual, which is a measure of how good the approximation $u^k$ is. The matrix $M$ is called a preconditioner. Subtracting the iteration from the split system,

$$Mu = Nu + f, \qquad Mu^{k+1} = Nu^k + f,$$

we find $M(u - u^{k+1}) = N(u - u^k)$. We introduce the error $e^k := u - u^k$, and obtain a recurrence for the error,

$$Me^{k+1} = Ne^k, \quad \text{i.e.} \quad e^{k+1} = M^{-1}Ne^k. \qquad (2.1.7)$$

From (2.1.6), it follows that

$$f - Au^{k+1} = f - Au^k - AM^{-1}r^k,$$

where $r^{k+1} = f - Au^{k+1}$ and $r^k = f - Au^k$. We obtain a recurrence for the residual,

$$r^{k+1} = (I - AM^{-1})r^k = (I - AM^{-1})^{k+1}r^0. \qquad (2.1.8)$$

Since $M - A = N$, we find

$$I - AM^{-1} = NM^{-1} = M(M^{-1}N)M^{-1}, \qquad (2.1.9)$$

therefore the iteration matrices $I - AM^{-1}$ in (2.1.8) and $M^{-1}N$ in (2.1.7) are similar, and have the same eigenvalues [23].

Consider the difference of consecutive iterates,

$$d^k = u^{k+1} - u^k. \qquad (2.1.10)$$

We obtain

$$d^k = u^{k+1} - u^k = M^{-1}Nu^k + M^{-1}f - M^{-1}Nu^{k-1} - M^{-1}f = M^{-1}N(u^k - u^{k-1}) = M^{-1}Nd^{k-1}.$$

This shows that the differences of consecutive iterates obey the same recurrence as the error. Moreover, from

$$d^k = M^{-1}N(u^k - u + u - u^{k-1}) = M^{-1}N(-e^k + e^{k-1}) = -M^{-1}Ne^k + e^k = (I - M^{-1}N)e^k,$$

we obtain a relation between the difference of consecutive iterates and the true error,

$$d^k = M^{-1}Ae^k. \qquad (2.1.11)$$

From (2.1.6) we obtain $Md^k = r^k$, which shows the relation between the difference of consecutive iterates and the residual. Finally, we have the relation

$$Md^k = r^k = Ae^k. \qquad (2.1.12)$$

The solution of $Au = f$ could thus be obtained by solving $Ae^k = r^k$ and setting $u = u^k + e^k$. However, we cannot solve systems with the matrix $A$ directly, because we assume that the matrix is too large to be stored and factored. Therefore we replace the problem by an easier one: we solve $Md^k = r^k$ and iterate $u^{k+1} = u^k + d^k$.

2.2 Convergence Analysis of Iterative Methods

An important question now is whether the iteration of the form (2.1.2) or (2.1.6) converges.

To answer this question, assume (2.1.1) has a unique solution and $A = M - N$ is a splitting with an invertible matrix $M$. Consider the vector norm $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}_+$ and the corresponding induced matrix norm

$$\|A\| := \sup_{\|u\|=1} \|Au\|. \qquad (2.2.1)$$

Taking norms in the error recurrence (2.1.7), we find

$$\|e^{k+1}\| \le \|M^{-1}N\|\,\|e^k\| \le \dots \le \|M^{-1}N\|^{k+1}\,\|e^0\|. \qquad (2.2.2)$$

This shows that convergence occurs when $\|M^{-1}N\| < 1$ for the chosen norm. This condition is sufficient, but it is not necessary: for example, a triangular matrix $R$ with zero diagonal may have norm $\|R\| > 1$, but $R$ is nilpotent, so $R^k = 0$ for all $k \ge n$, and the iteration converges. Hence, we need another property that describes convergence. This leads us to the following definition [14].


Definition 2.2.1. (Spectral Radius). The spectral radius of a matrix $A \in \mathbb{R}^{n \times n}$ is

$$\rho(A) := \max_{j=1,\dots,n} |\lambda_j(A)|, \qquad (2.2.3)$$

where $\lambda_j(A)$ denotes the $j$-th eigenvalue of $A$.

Theorem 2.2.1. Let $A \in \mathbb{R}^{n \times n}$ be non-singular, $A = M - N$ with $M$ non-singular, and $f \in \mathbb{R}^n$. The stationary iterative method

$$Mu^{k+1} = Nu^k + f \qquad (2.2.4)$$

converges for any initial vector $u^0$ to the solution $u$ of the linear system $Au = f$ if and only if $\rho(M^{-1}N) < 1$, see [14].

Proof. We first show the "only if" part by contraposition and assume that $|\lambda_m| = \rho(M^{-1}N) \ge 1$. Choosing $u^0$ such that $e^0 = u - u^0$ is a corresponding eigenvector, and applying the error recurrence (2.1.7), we get

$$e^{k+1} = M^{-1}Ne^k = \dots = (M^{-1}N)^{k+1}e^0 = \lambda_m^{k+1}e^0. \qquad (2.2.5)$$

Thus, if $|\lambda_m| > 1$, then $|\lambda_m^{k+1}| \to \infty$, so the error cannot converge to zero. If $|\lambda_m| = 1$, then we also have no convergence, since the error does not decrease.

For the "if" part, we assume that $\rho(M^{-1}N) < 1$. We then consider the Jordan decomposition (see [25], page 317)

$$M^{-1}N = VJV^{-1}, \quad \text{with } V, J \in \mathbb{C}^{n \times n} \text{ and } V \text{ nonsingular.} \qquad (2.2.6)$$

The matrix $J$ is block-diagonal,

$$J = \begin{bmatrix} J_{m_1}(\lambda_1) & & & \\ & J_{m_2}(\lambda_2) & & \\ & & \ddots & \\ & & & J_{m_s}(\lambda_s) \end{bmatrix},$$

where

$$J_{m_i}(\lambda_i) = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix} \in \mathbb{C}^{m_i \times m_i}, \quad i = 1,\dots,s,$$

$s$ is the number of distinct eigenvalues $\lambda_i$, and $m_i$ is the multiplicity of $\lambda_i$. Now notice that the matrix $J_{m_i}(\lambda_i)$ can be written as the sum of a diagonal matrix and a nilpotent matrix,

$$J_{m_i}(\lambda_i) = \lambda_i I + \bar N, \qquad (2.2.7)$$

where $I$ is the $m_i \times m_i$ identity and

$$\bar N = \begin{bmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix}.$$

Now, since $\lambda_i I$ and $\bar N$ commute, we can use the binomial theorem for matrices,

$$\big(J_{m_i}(\lambda_i)\big)^k = (\lambda_i I + \bar N)^k = \sum_{r=0}^{k} \binom{k}{r} \lambda_i^{k-r} \bar N^r = \sum_{r=0}^{\min(k,\,m_i-1)} \binom{k}{r} \lambda_i^{k-r} \bar N^r, \qquad (2.2.8)$$

where the sum truncates because $\bar N$ is nilpotent. Furthermore, it is easy to see that

$$\bar N^2 = \begin{bmatrix} 0 & 0 & 1 & & \\ & 0 & 0 & \ddots & \\ & & \ddots & \ddots & 1 \\ & & & 0 & 0 \\ & & & & 0 \end{bmatrix}, \quad \dots, \quad \bar N^{m_i-1} = \begin{bmatrix} 0 & \dots & 0 & 1 \\ & \ddots & & 0 \\ & & \ddots & \vdots \\ & & & 0 \end{bmatrix}. \qquad (2.2.9)$$

Notice that $(M^{-1}N)^k = VJ^kV^{-1}$, and since $J$ is block-diagonal, we get

$$J^k = \begin{bmatrix} J_{m_1}^k(\lambda_1) & & & \\ & J_{m_2}^k(\lambda_2) & & \\ & & \ddots & \\ & & & J_{m_s}^k(\lambda_s) \end{bmatrix}. \qquad (2.2.10)$$

Using (2.2.8), we obtain the well-known expression for the powers of a Jordan block,

$$J_{m_i}^k(\lambda_i) = \begin{bmatrix} \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} & \binom{k}{2}\lambda_i^{k-2} & \dots & \binom{k}{m_i-1}\lambda_i^{k-m_i+1} \\ & \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} & \dots & \binom{k}{m_i-2}\lambda_i^{k-m_i+2} \\ & & \ddots & \ddots & \vdots \\ & & & \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} \\ & & & & \lambda_i^k \end{bmatrix}. \qquad (2.2.11)$$

Therefore, if $\rho(M^{-1}N) < 1$, then $|\lambda_i| < 1$ for all $i$, and since $\binom{k}{r}|\lambda_i|^{k-r} \to 0$ as $k \to \infty$ for each fixed $r$, we get

$$\lim_{k\to\infty} J_{m_i}^k(\lambda_i) = 0 \qquad (2.2.12)$$

for all Jordan blocks. It follows that $\lim_{k\to\infty} J^k = 0$. But this implies

$$\lim_{k\to\infty}(M^{-1}N)^k = \lim_{k\to\infty} VJ^kV^{-1} = V\Big(\lim_{k\to\infty} J^k\Big)V^{-1} = 0. \qquad (2.2.13)$$
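A small numerical illustration of Theorem 2.2.1 (a sketch, not from the thesis): the norm of the iteration matrix may well exceed one while the spectral radius, which alone decides convergence, is smaller than one:

```python
import numpy as np

# A 2x2 Jordan-block-like matrix: its infinity norm is 2.5 > 1,
# but its spectral radius is 0.5 < 1, so its powers tend to zero.
G = np.array([[0.5, 2.0],
              [0.0, 0.5]])

rho = np.max(np.abs(np.linalg.eigvals(G)))      # spectral radius = 0.5
norm_inf = np.max(np.sum(np.abs(G), axis=1))    # infinity norm = 2.5
Gk = np.linalg.matrix_power(G, 60)              # G^60

print(rho, norm_inf, np.max(np.abs(Gk)))        # powers decay despite norm > 1
```

The entries of $G^k$ first grow (the $\binom{k}{1}\lambda^{k-1}$ term of (2.2.11)) before the geometric factor $\lambda^k$ takes over, which is exactly the transient behavior the Jordan form predicts.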


Now the question is whether we can find a relation between the spectral radius and the norm of a matrix. The following lemma shows this relation.

Lemma 2.2.1. For symmetric matrices $A \in \mathbb{R}^{n \times n}$, the spectral radius equals the 2-norm, $\rho(A) = \|A\|_2$.

Proof. Using the definition of the 2-norm, we obtain

$$\|A\|_2^2 = \lambda_{\max}(A^TA) = \lambda_{\max}(A^2) = \max|\lambda(A)|^2 = \rho(A)^2. \qquad (2.2.14)$$

We should note that the spectral radius is not a norm: a norm satisfies $\|A\|_2 = 0 \Rightarrow A = 0$, but this is not true for the spectral radius; for example, for an upper triangular matrix $R$ with zero diagonal, we have $\rho(R) = 0$ but $R \ne 0$. Moreover, the triangle inequality

$$\rho(A+B) \le \rho(A) + \rho(B)$$

does not hold, as the following example shows.

Example 2.2.1. Consider $A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ and $B = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$, see [14]. Then $\rho(A+B) = 1$, but $\rho(A) = \rho(B) = 0$.

2.3 Convergence Factor and Convergence Rate

From (2.1.7) we obtain

$$e^k = (M^{-1}N)^k e^0.$$

Taking norms on both sides yields a bound for the error reduction,

$$\frac{\|e^k\|}{\|e^0\|} \le \|(M^{-1}N)^k\|.$$

The question here is how many iterations are needed for the error reduction to reach some given tolerance $\epsilon$,

$$\frac{\|e^k\|}{\|e^0\|} \le \|(M^{-1}N)^k\| < \epsilon.$$

We can write the right-hand side as

$$\|(M^{-1}N)^k\| = \Big(\|(M^{-1}N)^k\|^{\frac{1}{k}}\Big)^k < \epsilon,$$

and taking the logarithm gives

$$k > \frac{\log(\epsilon)}{\log\Big(\|(M^{-1}N)^k\|^{\frac{1}{k}}\Big)}. \qquad (2.3.1)$$


Does this equation give us useful information about $k$? At first glance it does not seem useful, since the number of necessary iterations $k$ appears on both sides; however, for large $k$ we can get a good estimate using the following lemma.

Lemma 2.3.1. For any matrix $G \in \mathbb{R}^{n \times n}$ with spectral radius $\rho(G)$ and any induced matrix norm, we have

$$\lim_{k\to\infty} \|G^k\|^{\frac{1}{k}} = \rho(G). \qquad (2.3.2)$$

Definition 2.3.1. (Convergence Factor). The mean convergence factor of an iteration matrix $G$ is the number

$$\rho_k(G) = \|G^k\|^{\frac{1}{k}}. \qquad (2.3.3)$$

The asymptotic convergence factor is the spectral radius,

$$\rho(G) = \lim_{k\to\infty} \rho_k(G). \qquad (2.3.4)$$

Definition 2.3.2. (Convergence Rate). The mean convergence rate of an iteration matrix $G$ is the number

$$R_k(G) = -\log\big(\|G^k\|^{\frac{1}{k}}\big) = -\log(\rho_k(G)). \qquad (2.3.5)$$

The asymptotic convergence rate is

$$R_\infty(G) = -\log(\rho(G)). \qquad (2.3.6)$$

Let us go back to the question of how many iteration steps $k$ are necessary until the error reduction reaches a given tolerance $\epsilon$, or until the error decreases by a factor $\sigma$, with $\epsilon = 1/\sigma$. The answer to this question is

$$k \approx \frac{-\log(\epsilon)}{R_\infty(M^{-1}N)} = \frac{\log(\sigma)}{R_\infty(M^{-1}N)}. \qquad (2.3.7)$$
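Estimate (2.3.7) can be compared with an actual iteration count; the following sketch (not from the thesis) does this for the Jacobi splitting of a small model matrix:

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
M = np.diag(np.diag(A))          # Jacobi splitting M = D, N = M - A
N = M - A
G = np.linalg.solve(M, N)        # iteration matrix M^{-1} N
rho = np.max(np.abs(np.linalg.eigvals(G)))

eps = 1e-8
# estimate (2.3.7): k ~ -log(eps) / R_inf = log(eps) / log(rho)
k_est = int(np.ceil(np.log(eps) / np.log(rho)))

# measure: run the error recurrence e^{k+1} = G e^k until ||e^k|| <= eps ||e^0||
e = np.ones(3)                   # ||e^0||_inf = 1
k = 0
while np.max(np.abs(e)) > eps:
    e = G @ e
    k += 1

print(k_est, k)                  # predicted vs. observed iteration count
```

For this matrix $\rho(M^{-1}N) = \sqrt{2}/4 \approx 0.354$, and the predicted and observed counts agree closely.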

2.4 Conclusion

In this chapter we introduced stationary iterative methods and analyzed their convergence behavior. There is a better approach, which does not require an estimate of the spectral interval of the matrix, and which leads to the so-called Krylov methods. These methods are called Krylov methods because they involve a Krylov space, which appears in a paper by Krylov from 1931, where he studied vibration phenomena arising in linear systems of second-order ordinary differential equations, see [29]. In 1952, Eduard Stiefel emphasized that the method of conjugate gradients gives successive approximations to the solution, as well as the exact solution after a finite number of steps [46]. In the publication with Magnus R. Hestenes of the same year, the finite number of steps was emphasized even more. This was misleading for the algorithm, which in finite precision arithmetic rarely converges in a finite number of steps, and so it took almost a generation of numerical analysts to expand research into more general Krylov methods like the Generalized Minimal Residual method (GMRES). The GMRES method was proposed by Yousef Saad and Martin H. Schultz in 1986, see [42].


Chapter 3

Domain Decomposition Methods

3.1 Introduction

In this chapter we will introduce domain decomposition methods. In their most common version, domain decomposition methods can be used in the framework of any discretization method for partial differential equations (such as, e.g., finite elements, finite volumes, finite differences) to make their algebraic solution more efficient on parallel computer platforms. In addition, domain decomposition methods allow the reformulation of any given boundary-value problem on a partition of the computational domain into subdomains. As such, they provide a very convenient framework for the solution of heterogeneous or multiphysics problems, i.e. those that are governed by differential equations of different kinds in different subregions of the computational domain. The basic idea behind domain decomposition methods consists in subdividing the computational domain $\Omega$, on which a boundary-value problem is set, into two or more subdomains on which discretized problems of smaller dimension are to be solved, with the further potential advantage of using parallel solution algorithms.

Hermann Amandus Schwarz was a German analyst of the 19th century. He was interested in proving the existence and uniqueness of solutions of the Laplace problem. At his time, there were no Sobolev spaces and no Lax-Milgram theorem. The only available tool was the Fourier transform, limited by its very nature to simple geometries. In order to consider more general situations, Schwarz devised an iterative algorithm for solving the Laplace problem set on a union of simple geometries, see [43].

Let the domain $\Omega$ be the union of a disk and a rectangle, see Figure 3.1. Consider the Poisson problem which consists in finding $u : \Omega \to \mathbb{R}$ such that

$$\Delta u = f \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega. \qquad (3.1.1)$$

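Schwarz's alternating idea can be illustrated in one dimension (a hedged sketch with assumptions of its own, not the disk-and-rectangle configuration of Figure 3.1): we solve $-u'' = 1$ on $(0,1)$ with two overlapping intervals, passing Dirichlet data at the artificial interfaces back and forth:

```python
import numpy as np

N = 49                           # interior points, mesh size h = 1/50
h = 1.0 / (N + 1)
x = np.arange(N + 2) * h
f = np.ones(N + 2)               # -u'' = 1, exact solution u(x) = x(1-x)/2
u = np.zeros(N + 2)              # global iterate with zero boundary data
a, b = 20, 30                    # overlap: Omega_1 = (0, x_b), Omega_2 = (x_a, 1)

def dirichlet_solve(n, rhs_vals, left, right):
    """Solve -u'' = f on n interior nodes with Dirichlet values left/right."""
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    rhs = rhs_vals.copy()
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

for _ in range(30):              # alternating Schwarz sweeps
    # subdomain 1 takes the current value of u at x_b as Dirichlet data ...
    u[1:b] = dirichlet_solve(b - 1, f[1:b], u[0], u[b])
    # ... and subdomain 2 then uses the freshly updated value at x_a
    u[a + 1:N + 1] = dirichlet_solve(N - a, f[a + 1:N + 1], u[a], u[N + 1])

err = np.max(np.abs(u - x * (1.0 - x) / 2.0))
print(err)                       # the error decreases geometrically
```

Each subdomain solve only sees a simple interval, yet the exchanged interface values drive the global iterate to the solution on the whole domain, which is precisely the mechanism Schwarz exploited for his union of simple geometries.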
