Advanced numerical tools, reordering techniques and iterative methods

Academic year: 2021


(1)

Advanced Numerical Tools

Reordering Techniques and Iterative Methods

B. Cerfontaine

ULg - ArGEnCo - FRIA Research Fellow

(2)

Table of contents

1 Introduction
2 Direct methods
3 Reordering
4 Iterative methods
5 Conclusion

(3)

Introduction Table of contents 1 Introduction 2 Direct methods 3 Reordering 4 Iterative methods 5 Conclusion

(4)

Introduction

Increasing complexity of studied phenomena (localization, anisotropy, complex 3D geometry) ⇒ increasing number of equations


(5)

Introduction

Assembling of the global tangent stiffness matrix, then solving A · x = b.

[Figure: assembly example with 6 elements, 12 nodes, 1 dof/node]

(6)

Introduction

Assembling of the global tangent stiffness matrix, then solving A · x = b.

Local stiffness matrix (element 1), scattered into the global stiffness matrix (12 dofs × 12 dofs).

[Figure: entries of element 1 highlighted in the 12 × 12 global matrix]

(7)

Introduction

Assembling of the global tangent stiffness matrix, then solving A · x = b.

Local stiffness matrix (element 2), scattered into the global stiffness matrix (12 dofs × 12 dofs).

[Figure: entries of element 2 highlighted in the 12 × 12 global matrix]

(8)

Introduction

Assembling of the global tangent stiffness matrix, then solving A · x = b.

Properties of the matrix: sparse; potentially symmetric; potentially banded.

[Figure: final non-zero pattern of the 12 × 12 global matrix]
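The scatter-add at the heart of the assembly can be sketched as follows; the `assemble` helper and the two-element 1D bar example are illustrative assumptions, not LAGAMINE's actual data structures.

```python
import numpy as np

def assemble(n_dofs, elements):
    """Scatter-add each local stiffness matrix into the global one.

    elements: list of (dof_indices, k_local) pairs, where dof_indices
    maps local dof numbers to global dof numbers (0-based).
    """
    K = np.zeros((n_dofs, n_dofs))
    for dofs, k_local in elements:
        for a, i in enumerate(dofs):
            for b, j in enumerate(dofs):
                K[i, j] += k_local[a, b]
    return K

# Two 1D bar elements sharing node 1: dofs (0, 1) and (1, 2).
k = np.array([[1.0, -1.0], [-1.0, 1.0]])   # local stiffness of a unit bar
K = assemble(3, [((0, 1), k), ((1, 2), k)])
# K is sparse in structure: entry (0, 2) stays zero because nodes 0 and 2
# share no element, while the shared dof 1 accumulates contributions.
```

The zero blocks between dofs that share no element are exactly what makes the global matrix sparse and potentially banded.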

(9)

Direct methods Table of contents 1 Introduction 2 Direct methods 3 Reordering 4 Iterative methods 5 Conclusion

(10)

Direct methods

Triangular systems

Let’s consider the following system L · x = b (N = 4):

$$\begin{pmatrix} l_{11} & 0 & 0 & 0\\ l_{21} & l_{22} & 0 & 0\\ l_{31} & l_{32} & l_{33} & 0\\ l_{41} & l_{42} & l_{43} & l_{44} \end{pmatrix} \cdot \begin{pmatrix} x_1\\x_2\\x_3\\x_4 \end{pmatrix} = \begin{pmatrix} b_1\\b_2\\b_3\\b_4 \end{pmatrix}$$

exact solution, $\sim O(N^2)$ operations, easy to implement

Forward (L matrix) or backward (U matrix) substitution:

$$x_1 = \frac{b_1}{l_{11}}, \qquad x_2 = \frac{b_2 - l_{21}\cdot x_1}{l_{22}}, \qquad \dots \qquad x_i = \frac{b_i - \sum_{k=1}^{i-1} l_{ik}\, x_k}{l_{ii}}$$
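A minimal sketch of forward substitution as written above; the 2×2 example values are illustrative.

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for lower-triangular L in O(N^2) operations:
    x_i = (b_i - sum_{k<i} l_ik x_k) / l_ii."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([4.0, 5.0])
x = forward_substitution(L, b)   # x = [2.0, 1.0]
```

Backward substitution for an upper-triangular U is the same loop run from the last row upwards.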

(11)

Direct methods

Gaussian elimination

Let’s consider the following system A · x = b (N = 4):

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14}\\ a_{21} & a_{22} & a_{23} & a_{24}\\ a_{31} & a_{32} & a_{33} & a_{34}\\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix} \cdot \begin{pmatrix} x_1\\x_2\\x_3\\x_4 \end{pmatrix} = \begin{pmatrix} b_1\\b_2\\b_3\\b_4 \end{pmatrix}$$

(12)

Direct methods

Gaussian elimination

Elimination of $a_{21}$: with the multiplier $m_{21} = a_{21}/a_{11}$,

$$a^{(2)}_{2j} \leftarrow a_{2j} - m_{21}\cdot a_{1j} \quad \text{for } j = 1,\dots,4 \;(\text{so that } a^{(2)}_{21} = 0), \qquad b^{(2)}_2 \leftarrow b_2 - m_{21}\cdot b_1$$

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14}\\ 0 & a^{(2)}_{22} & a^{(2)}_{23} & a^{(2)}_{24}\\ a_{31} & a_{32} & a_{33} & a_{34}\\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix} \cdot \begin{pmatrix} x_1\\x_2\\x_3\\x_4 \end{pmatrix} = \begin{pmatrix} b_1\\ b^{(2)}_2\\ b_3\\ b_4 \end{pmatrix}$$

(13)

Direct methods

Gaussian elimination

Elimination of $a_{31}$ and $a_{41}$: with $m_{i1} = a_{i1}/a_{11}$,

$$a^{(2)}_{ij} \leftarrow a_{ij} - m_{i1}\cdot a_{1j}, \qquad b^{(2)}_i \leftarrow b_i - m_{i1}\cdot b_1 \qquad \text{for } i = 3, 4 \text{ and } j = 2, \dots, 4$$

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14}\\ 0 & a^{(2)}_{22} & a^{(2)}_{23} & a^{(2)}_{24}\\ 0 & a^{(2)}_{32} & a^{(2)}_{33} & a^{(2)}_{34}\\ 0 & a^{(2)}_{42} & a^{(2)}_{43} & a^{(2)}_{44} \end{pmatrix} \cdot \begin{pmatrix} x_1\\x_2\\x_3\\x_4 \end{pmatrix} = \begin{pmatrix} b_1\\ b^{(2)}_2\\ b^{(2)}_3\\ b^{(2)}_4 \end{pmatrix}$$

(14)

Direct methods

Gaussian elimination

Elimination of $a_{32}$, $a_{42}$ and $a_{43}$:

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14}\\ 0 & a^{(2)}_{22} & a^{(2)}_{23} & a^{(2)}_{24}\\ 0 & 0 & a^{(3)}_{33} & a^{(3)}_{34}\\ 0 & 0 & 0 & a^{(4)}_{44} \end{pmatrix} \cdot \begin{pmatrix} x_1\\x_2\\x_3\\x_4 \end{pmatrix} = \begin{pmatrix} b_1\\ b^{(2)}_2\\ b^{(3)}_3\\ b^{(4)}_4 \end{pmatrix}$$

then the remaining upper triangular system has to be solved by backward substitution.

exact solution, $\sim O(N^3)$ operations

pivoting (row exchange) if $a_{ii} = 0$, or if $|a_{ii}|$ is very small, for numerical stability

best method for dense A matrices
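The elimination steps above, with partial pivoting added for the case of a zero (or tiny) pivot, can be sketched as:

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, then back substitution.
    O(N^3) operations; rows are swapped so the largest pivot is used."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]             # multiplier m_ik
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[0.0, 2.0], [1.0, 1.0]])        # a11 = 0 forces a row swap
b = np.array([2.0, 3.0])
x = gauss_solve(A, b)                          # x = [2.0, 1.0]
```

Without the pivot search the first step would divide by $a_{11} = 0$.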

(15)

Reordering Table of contents 1 Introduction 2 Direct methods 3 Reordering 4 Iterative methods 5 Conclusion

(16)

Reordering

Definitions

Principle: rearranging the equations in order to minimize the number of operations to carry out.

Ordering efficiency parameter:

$$\sigma = \sum_{i=1}^{N} M_i^2$$

where $M_i$ is the number of off-diagonal terms during the elimination of node i.

[Figure: 8-node example mesh with its numbering and the corresponding matrix]
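To make the definition concrete, $\sigma$ can be evaluated for a given elimination order by symbolic elimination on the mesh adjacency graph; this sketch and its 4-node path graph are illustrative assumptions, not the 8-node mesh of the figure.

```python
def ordering_sigma(adjacency, order):
    """Compute sigma = sum_i M_i^2 for a given elimination order.

    adjacency: dict node -> set of neighbours (symmetric, no self-loops).
    Eliminating a node makes all its remaining neighbours mutually
    adjacent (fill-in), as in symbolic Gaussian elimination.
    """
    adj = {u: set(vs) for u, vs in adjacency.items()}
    sigma = 0
    for u in order:
        nbrs = adj.pop(u)
        sigma += len(nbrs) ** 2            # M_i = active off-diagonal terms
        for v in nbrs:
            adj[v].discard(u)
            adj[v] |= nbrs - {v}           # fill-in among remaining neighbours
    return sigma

# Toy path 1-2-3-4: eliminating the end nodes first keeps each M_i small.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
sigma_good = ordering_sigma(path, [1, 4, 2, 3])   # 1 + 1 + 1 + 0 = 3
sigma_bad = ordering_sigma(path, [2, 3, 1, 4])    # 4 + 4 + 1 + 0 = 9
```

Even on this tiny graph the ordering changes $\sigma$ by a factor of three, which is the effect the reordering algorithms below try to exploit.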

(17)

Reordering

Definitions

Elimination of node 1 ($M_1 = 5$, in red):

red: entries to be cancelled; green: previously zero entries; purple: modified entries.

[Figure: matrix pattern during the elimination of node 1]

(18)

Reordering

Definitions

Elimination of node 2 ($M_2 = 4$, in red):

red: entries to be cancelled; green: previously zero entries; purple: modified entries.

[Figure: matrix pattern during the elimination of node 2]

(19)

Reordering

Basic Algorithm

1 Start from one node to be eliminated (blue, to be adequately chosen).

2 Choose the next one:
several possibilities (green);
for each possibility: how many new active nodes (red)?
the next node is the one which leads to the minimum number of new active nodes;
in case of equality, choose the one which has been active for the greatest number of steps.

3 Reorder following the sequence of eliminated nodes.

[Figure: elimination front propagating through the 8-node example mesh]


(27)

Reordering

Basic Algorithm

Applying the algorithm to the example mesh yields the elimination sequence 5, 3, 2, 1, 6, 4, 7, 8.

[Figure: initial and reordered node numbering of the example mesh]
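The three steps can be sketched as a greedy ordering on the adjacency graph; this is an illustrative reconstruction of the slides' basic algorithm, not LAGAMINE's implementation, and the path graph below is a toy example.

```python
def greedy_reorder(adjacency, start):
    """Greedy elimination ordering: at each step, among the active nodes,
    eliminate the one that would activate the fewest new nodes; break
    ties in favour of the node active for the greatest number of steps.
    Assumes a connected graph given as node -> set of neighbours."""
    eliminated, done = [], set()
    active_since = {start: 0}        # node -> step at which it became active
    step = 0
    while active_since:
        def cost(u):
            new = sum(1 for v in adjacency[u]
                      if v not in done and v not in active_since)
            return (new, active_since[u])   # fewest new, then oldest active
        u = min(active_since, key=cost)
        eliminated.append(u)
        done.add(u)
        del active_since[u]
        for v in adjacency[u]:              # activate the new neighbours
            if v not in done and v not in active_since:
                active_since[v] = step
        step += 1
    return eliminated

path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
order = greedy_reorder(path, 1)   # [1, 2, 3, 4]
```

On a chain the front never holds more than one new node, which is exactly the low-$\sigma$ behaviour the algorithm aims for.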

(28)

Reordering

Relation between adjacent nodes and number of operations

The number of adjacent nodes equals $M_i$; minimizing it therefore minimizes $\sigma$ and CPU time.

[Figure: reordered mesh and the corresponding matrix profiles]

(29)

Reordering

Comparison between old and new matrices

Without renumbering: $\sigma = 87$. With renumbering: $\sigma = 40$.

[Figure: non-zero pattern and fill-in of the matrix before and after reordering]

(30)

Reordering

Types of reordering widely used in LAGAMINE:

1 oil stain (ITYREN = 1): efficient for every mesh; parameters needed.

2 directional reordering (ITYREN = 2): efficient for rectangular meshes; needs a direction in which the structure has the greatest number of nodes.

3 based on the Sloan method, an improved oil stain (ITYREN = 3): includes an algorithm for the best start node; efficient for every mesh; no parameter needed; faster than oil stain (?).

(31)

Reordering

Non zero entries of matrices (before and after renumbering)

[Figure: sparsity patterns of the system matrix before and after reordering]

(32)

Reordering

Example

Oedometer test: with/without reordering, and comparison of directional vs. Sloan reordering.

[Figure: CPU time (s, log scale) and speed-up ratio vs. number of equations, for no reordering, directional reordering and Sloan reordering]

(33)

Reordering

Bibliography

E. Cuthill and J. McKee, 'Reducing the bandwidth of sparse symmetric matrices', Proc. ACM Nat. Conf., Association for Computing Machinery, New York (1969)

I.P. King, 'An automatic reordering scheme for simultaneous equations derived from network systems', International Journal for Numerical Methods in Engineering, 2, 523-533 (1970)

S.W. Sloan and M.F. Randolph, 'Automatic element reordering for finite element analysis with frontal solution schemes', International Journal for Numerical Methods in Engineering, 19, 1153-1181 (1983)

S.W. Sloan, 'An algorithm for profile and wavefront reduction of sparse matrices', International Journal for Numerical Methods in Engineering, 23, 239-251 (1986)

S.W. Sloan and W.S. Ng, 'A direct comparison of three algorithms for reducing profile and wavefront', Computers & Structures, 33, 411-419 (1989)

(34)

Iterative methods Introduction and principles Table of contents 1 Introduction 2 Direct methods 3 Reordering 4 Iterative methods

Introduction and principles

GMRES method Practical use

(35)

Iterative methods Introduction and principles

Example : Jacobi iterations

Principle: find the new value $\xi_i^{(k+1)}$ that annihilates the i-th component of the residual vector $r = b - A \cdot x_k$:

$$r_i \equiv a_{ii}\,\xi_i^{(k+1)} + \sum_{\substack{j=1\\ j\neq i}}^{n} a_{ij}\,\xi_j^{(k)} - \beta_i = 0 \qquad \text{for } i = 1,\dots,n$$

(36)

Iterative methods Introduction and principles

Example : Jacobi iterations

Annihilating $r_i$ and solving for the new component:

$$r_i \equiv 0 = a_{ii}\,\xi_i^{(k+1)} + \sum_{\substack{j=1\\ j\neq i}}^{n} a_{ij}\,\xi_j^{(k)} - \beta_i \;\Longleftrightarrow\; \xi_i^{(k+1)} = \frac{1}{a_{ii}}\Big(\beta_i - \sum_{\substack{j=1\\ j\neq i}}^{n} a_{ij}\,\xi_j^{(k)}\Big) \qquad \text{for } i = 1,\dots,n$$

(37)

Iterative methods Introduction and principles

Example : Jacobi iterations

With the splitting $A = D - E - F$, where D is the diagonal of A and $-E$, $-F$ are its strict lower and upper triangular parts, the update

$$\xi_i^{(k+1)} = \frac{1}{a_{ii}}\Big(\beta_i - \sum_{\substack{j=1\\ j\neq i}}^{n} a_{ij}\,\xi_j^{(k)}\Big) \qquad \text{for } i = 1,\dots,n$$

can be rewritten in vector form:

$$x_{k+1} = D^{-1}\cdot(E + F)\cdot x_k + D^{-1}\cdot b$$
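A minimal sketch of the vector-form Jacobi iteration; the 2×2 diagonally dominant test system is an illustrative assumption.

```python
import numpy as np

def jacobi(A, b, x0, n_iter=100):
    """Jacobi iterations in vector form:
    x_{k+1} = D^{-1} (E + F) x_k + D^{-1} b, i.e. x = (b - R x) / d,
    with d the diagonal of A and R = A - diag(d) = -(E + F)."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = x0.astype(float)
    for _ in range(n_iter):
        x = (b - R @ x) / d
    return x

# Diagonally dominant system, so the iterations converge.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([7.0, 17.0])
x = jacobi(A, b, np.zeros(2))   # converges to [1.0, 3.0]
```

Convergence is only guaranteed when the iteration matrix $D^{-1}(E+F)$ has spectral radius below one, e.g. for diagonally dominant A.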

(38)

Iterative methods Introduction and principles

Preconditioning

Principle : ”preconditioning is a means of transforming the original linear system into one which has the same solution, but which is likely to be easier to solve with an iterative solver” (Saad, 2003)

Preconditioning matrix M : close to A ;

nonsingular ;

inexpensive to solve linear system M · x = b.

One of the simplest ways of defining a preconditioner is to perform an incomplete factorization of the matrix A

(39)

Iterative methods Introduction and principles

Preconditioning

Principle : ”preconditioning is a means of transforming the original linear system into one which has the same solution, but which is likely to be easier to solve with an iterative solver” (Saad, 2003)

Applied to the left:

$$M^{-1}\cdot A\cdot x = M^{-1}\cdot b$$

or applied to the right:

$$A\cdot M^{-1}\cdot u = b, \qquad x = M^{-1}\cdot u$$

(40)

Iterative methods GMRES method Table of contents 1 Introduction 2 Direct methods 3 Reordering 4 Iterative methods

Introduction and principles

GMRES method

Practical use

(41)

Iterative methods GMRES method

Principle

Building iteratively a solution $\tilde{x}$ to the problem $A \cdot x = b$

from an initial guess $x_0$ of the solution

and a basis of linearly independent vectors of a subspace $K_m$,

through the following linear combination:

$$\tilde{x} = x_0 + \sum_{i=1}^{m} y_i \cdot v_i, \qquad v_i \in K_m$$

The size of the basis, m, is unknown a priori (iterative concept) and is much less than N, the number of equations.

(42)

Iterative methods GMRES method

Analogy with structural analysis

There are two ways of solving the problem of a beam subjected to an arbitrary time-dependent loading q(t):

1 Nodal base method: solving an N×N system.

2 Modal base method: projection onto the eigenmodes $y_i$, taken in order of increasing eigenfrequency:

$$\tilde{y}(t) = \sum_{i=1}^{m} \eta_i(t) \cdot y_i$$

leaving only $m \ll N$ decoupled equations to be solved.

[Figure: beam under loading q(t) and its first three eigenmodes $y_1$, $y_2$, $y_3$]


(44)

Iterative methods GMRES method

Which basis ?

In the following iterative method, the subspace $K_m$ is a Krylov subspace, i.e.

$$K_m(A, v) \equiv \operatorname{span}\{v, Av, A^2v, \dots, A^{m-1}v\}$$

and specifically, GMRES uses

$$K_m(A, r_0) \equiv \operatorname{span}\{r_0, Ar_0, A^2r_0, \dots, A^{m-1}r_0\}$$

where

$$r_0 = b - A\cdot x_0$$

(45)

Iterative methods GMRES method

Which basis ?

This basis can be computed iteratively, since there is a relation between two of its consecutive vectors, i.e. $v_j = A \cdot v_{j-1}$.

Then, following a Gram-Schmidt procedure for orthonormal bases:

1 $v_1 = r_0 / \lVert r_0\rVert_2$

2 from the current basis of degree m, compute $v_{m+1} = A \cdot v_m$

3 orthogonalize it against $v_i$ for $i = 1, \dots, m$:

$$v_{m+1} := v_{m+1} - \sum_{i=1}^{m} (v_{m+1}, v_i)\cdot v_i \qquad \sim O(m \times n) \text{ operations!}$$

4 normalize: $v_{m+1} = v_{m+1} / \lVert v_{m+1}\rVert_2$
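The four steps can be sketched as follows; the random 6×6 test matrix is an illustrative assumption.

```python
import numpy as np

def krylov_basis(A, r0, m):
    """Build an orthonormal basis of K_m(A, r0) with the Gram-Schmidt
    procedure of the slide: v1 = r0/||r0||, then each new vector
    A v_j is orthogonalized against all previous ones and normalized."""
    n = len(r0)
    V = np.zeros((n, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(1, m):
        w = A @ V[:, j - 1]
        for i in range(j):                 # ~O(m x n) operations per vector
            w -= (w @ V[:, i]) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
r0 = rng.standard_normal(6)
V = krylov_basis(A, r0, 4)
# The columns of V are orthonormal: V.T @ V is (close to) the identity.
```

In practice the orthogonalization coefficients are kept, since they form the small Hessenberg matrix used in the least-squares step of GMRES.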


(49)

Iterative methods GMRES method

How to compute y ? How to choose the size of m ?

If $V_m$ is the $n \times m$ matrix whose columns form the basis of $K_m$, then the solution is computed as

$$\tilde{x} = x_0 + V_m\cdot y$$

where y minimizes

$$\lVert b - A\cdot\tilde{x}\rVert_2 = \lVert b - A\cdot(x_0 + V_m\cdot y)\rVert_2 = \lVert r_0 - A\cdot V_m\cdot y\rVert_2$$

which is a least-squares problem of size $m \ll N$.

Practically, the size m of $K_m$ is increased iteratively until the residual is small enough.
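Putting the basis construction and the small least-squares problem together gives a minimal, unpreconditioned GMRES sketch without restarts; the tridiagonal test system is an illustrative assumption.

```python
import numpy as np

def gmres_minimal(A, b, x0, m):
    """Minimal GMRES sketch: build V and the (m+1) x m Hessenberg matrix
    H with the Gram-Schmidt (Arnoldi) recurrence, so that A V_m = V_{m+1} H,
    then find y minimizing || r0 - A V_m y ||_2 = || beta e1 - H y ||_2."""
    n = len(b)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # orthogonalize against the basis
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown: space is exact
            break
        V[:, j + 1] = w / H[j + 1, j]
    rhs = np.zeros(m + 1)
    rhs[0] = beta                            # beta * e1
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    return x0 + V[:, :m] @ y

# 1D Laplacian-like tridiagonal system; with m = N the solution is exact.
A = np.diag(np.full(6, 2.0)) + np.diag(np.full(5, -1.0), 1) \
    + np.diag(np.full(5, -1.0), -1)
b = np.zeros(6)
b[0] = 1.0
x = gmres_minimal(A, b, np.zeros(6), m=6)
```

A production solver would instead grow m one vector at a time, checking the residual after each step, and restart when m exceeds a prescribed bound.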

(50)

Iterative methods GMRES method

Principle

GMRES method implemented in LAGAMINE (KNSYM = ±6):

approximate solution (depends on the convergence norm!)

no need to invert the global matrix A

best method for a very large number of equations, $\sim O((m + 3 + 1/m)\cdot N + NZ)$ operations, where

m is the number of iterations, N is the number of unknowns, NZ is the number of nonzero terms in A

(51)

Iterative methods Practical use Table of contents 1 Introduction 2 Direct methods 3 Reordering 4 Iterative methods

Introduction and principles GMRES method

Practical use

(52)

Iterative methods Practical use

Practical use : oedometer cube

Systematic comparison using a cubic example:

oedometric boundary conditions
purely mechanical / hydro-mechanical behaviour
8-node 3D brick elements
varying mesh size: $N^3$ elements, with N ∈ {10, 15, 20, 25, 30, 35, 40, 45}

[Figure: cube with blocked horizontal displacement on the lateral faces, blocked vertical displacement at the base and imposed vertical displacement at the top]

(53)

Iterative methods Practical use

Practical use : oedometer cube

[Figure: left, minimum CPU time for solving the system vs. number of equations, for iterative (IT) and direct (DIR) solvers in mechanical (M) and hydro-mechanical (HM) cases; the direct solver runs out of memory for the largest systems. Right, ratio of direct to iterative CPU time (cost savings using GMRES) for the mechanical and hydro-mechanical cases]

(54)

Iterative methods Practical use

Practical use : gas injection

[Figure: gas pressure (kPa) at a given point vs. time (h), iterative vs. direct method]

Iterative and direct methods for gas injection: gas pressure at a given point.

global convergence depends on the LAGAMINE criteria PRECU and PRECF, identical for both methods

the difference in convergence rate leads to different time steps

(55)

Iterative methods Practical use

Parameters

Preconditioning parameters : LFIL, DROPTOL

The incomplete LU factorization follows the same steps as the Gaussian elimination. For a given off-diagonal term $a_{kl}$ to be eliminated:

$$m_{kl} = \frac{a_{kl}}{a_{ll}}$$

$$a_{kj} \leftarrow a_{kj} - m_{kl}\cdot a_{lj} \quad \text{for } j = 1,\dots,n, \qquad b_k \leftarrow b_k - m_{kl}\cdot b_l$$

(56)

Iterative methods Practical use

Parameters

Preconditioning parameters : LFIL, DROPTOL

During the incomplete LU factorization process, for each row i: sort the terms of row i of L and U; keep the LFIL greatest in L and in U; move to the next row.

[Figure: normalized system CPU time vs. LFIL for meshes from 10³ to 45³; the process does not converge for lower LFIL values]

(57)

Iterative methods Practical use

Parameters

Preconditioning parameters: LFIL, DROPTOL

Increasing LFIL:
increasing memory storage required
increasing computational cost for computing the preconditioner
increasing computational cost for solving the system

Decreasing LFIL:
risk of no convergence
risk of instability during the process (observed for large hydro-mechanical systems)

Increasing DROPTOL:
conditional decrease of the computational cost

Decreasing DROPTOL
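For illustration, SciPy's incomplete LU exposes parameters that play the same role as LFIL and DROPTOL (`fill_factor`, `drop_tol`, with implementation-specific meanings), and its GMRES exposes `restart` and `maxiter`, analogous to IM and MAXITS. This is a sketch on an assumed 1D Laplacian test matrix, not a LAGAMINE call.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# Sparse test system: 1D Laplacian (tridiagonal), 200 unknowns.
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU preconditioner; fill_factor / drop_tol limit the
# retained fill-in, like LFIL / DROPTOL in the slides.
ilu = spilu(A, fill_factor=10, drop_tol=1e-6)
M = LinearOperator((n, n), matvec=ilu.solve)

# restart plays the role of IM, maxiter the role of MAXITS.
x, info = gmres(A, b, M=M, restart=50, maxiter=500)
# info == 0 signals convergence
```

On a tridiagonal matrix the incomplete factorization is essentially exact (no fill-in is dropped), so the preconditioned GMRES converges in very few iterations.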


(59)

Iterative methods Practical use

Parameters

Resolution parameters: IM, MAXITS

If the size of the Krylov basis becomes greater than IM, the initial guess of the solution is deemed too far from the actual one, and the algorithm is restarted with m = 1 and $x_0 = x_m$.

The total number of iterations $n_{its}$ is equal to the number of orthogonalization processes already done, which is equal

to the actual size k of the Krylov process if $n_{its}$ < IM;

to $n_{restart} \times$ IM + k if $n_{its}$ > IM.

If no convergence occurs after $n_{its}$ = MAXITS, the resolution is stopped.

(60)

Iterative methods Practical use

Parameters

There is a relationship between the LFIL parameter and the number of iterations necessary to reach convergence.

[Figure: for the 40³ mesh and LFIL = 30, 45, 60, 75: CPU time per step, split between ILUT factorization and GMRES iterations, and number of iterations per step]

(61)

Iterative methods Practical use

Parameters

Convergence parameters: EPS, RESTOL

The internal iterative process ends if (relative convergence)

$$\lVert r_i\rVert_2 < \text{EPS}\cdot\lVert r_0\rVert_2$$

or if (absolute convergence)

$$\lVert r_i\rVert_2 < \text{RESTOL}$$

The smaller the EPS, the higher the number of iterations to reach convergence!

(62)

Iterative methods Practical use

Example of configurations (RESTOL is imposed to $10^{-40}$):

NDOFS    TYPE        LFIL   DROPTOL   IM    MAXITS   EPS
283544   cube M      45     10^-6     200   500      10^-5
220940   gallery M   130    10^-6     500   4000     10^-5
262236   cube HM     20     10^-6     200   500      10^-5
752226   loca M      200    10^-8     500   1000     10^-5

(63)

Iterative methods Practical use

Bibliography

Mainly for GMRES and iterative methods (available online)

Y. Saad, 'Iterative Methods for Sparse Linear Systems', Society for Industrial and Applied Mathematics (2003)

Or a summary:

Y. Saad and M.H. Schultz, 'GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems', SIAM Journal on Scientific and Statistical Computing, 7, 856-869 (1986)

(64)

Conclusion Table of contents 1 Introduction 2 Direct methods 3 Reordering 4 Iterative methods

Introduction and principles GMRES method

Practical use

(65)

Conclusion

Increasing size of the systems to be solved, as a corollary of the increasing complexity of modelling.

High degree of sparsity of the systems to be solved, leading to specific methods.

Reordering techniques:
very efficient
coupled with direct solvers
available in LAGAMINE

Iterative methods:
approximate solution
efficient for a very large number of equations
much more complex algorithms
parameters: a mix of art and science
available in LAGAMINE

(66)

Conclusion

FSA, créatrice d'avenirs
