
Thesis

Reference

Symplecticity and symmetry of general integration methods

LEONE, Pierre

Abstract

Analysis of the appropriate definition and characterization of symplecticity and symmetry for general integration methods.

LEONE, Pierre. Symplecticity and symmetry of general integration methods. Thèse de doctorat : Univ. Genève, 2000, no. Sc. 3174

URN : urn:nbn:ch:unige-345492

DOI : 10.13097/archive-ouverte/unige:34549

Available at:

http://archive-ouverte.unige.ch/unige:34549



UNIVERSITÉ DE GENÈVE
FACULTÉ DES SCIENCES
Section de Mathématiques
Professeur E. Hairer

Symplecticity and Symmetry of General Integration Methods

THESIS
presented to the Faculty of Sciences of the University of Geneva
to obtain the degree of Doctor of Sciences, mention mathematics
by
Pierre LEONE
of Geneva

Thèse No 3174
Genève
Atelier de reproduction de la Section de Physique
2000

Contents

1 Numerical Methods
1.1 Introduction
1.2 Runge-Kutta Methods
1.3 B-Series and Applications
1.3.1 B-Series, Trees and Elementary Differentials
1.3.2 First Application of B-Series
1.3.3 Backward Error Analysis
1.4 Multistep Methods
1.4.1 Relationship between One-Step and Linear Multistep Integrators
1.4.2 Partitioned Multistep Methods
1.4.3 P-Series, P-Trees and Elementary Differentials
1.4.4 One-Leg Methods
1.5 General Linear Methods
1.5.1 Underlying One-Step Method of a General Linear Method
1.5.2 General Methods with Multiple Eigenvalue 1
1.5.3 Pr-Series, Pr-Trees and Elementary Differentials
1.5.4 Formal Invariant Manifold Theorem

2 Symplecticity
2.1 Introduction
2.2 Symplecticity of Runge-Kutta Methods
2.2.1 Some Properties of Symplectic Runge-Kutta Methods
2.2.2 Motivation
2.2.3 Preliminaries
2.2.4 Proof of Theorem 2.2.6
2.2.5 Locations of the Poles of the Stability Function
2.3 Symplecticity of B-Series
2.4 Symplecticity of General Linear Methods
2.4.1 First Approach of Symplecticity
2.4.2 Second Approach of Symplecticity
2.5 Numerical Experiments

3 Symmetry
3.1 Introduction
3.2 Symmetry of Runge-Kutta Methods
3.3 Symmetry of B-Series
3.4 Symmetry of General Linear Methods
3.5 Numerical Experiments
3.6 Partitioned Symmetric Methods

4 Conjugacy of B-Series
4.1 Introduction
4.2 General Conjugacy
4.3 Symplectic Conjugacy

5 Appendix
5.1 Invariant Manifold Theorem

6 Résumé de la thèse en français (Summary of the thesis in French)
6.1 Introduction
6.2 Symplectic integration
6.2.1 One-step methods
6.2.2 Symplecticity of multistep and related methods
6.2.3 First definition of symplecticity of multistep methods
6.2.4 Second definition of symplecticity of multistep methods
6.3 Symmetric integration
6.4 Conjugacy

Chapter 1

Numerical Methods

1.1 Introduction

In this chapter we introduce some classical numerical methods devised to integrate ordinary differential equations of the form
$$y'(t) = f\bigl(y(t)\bigr), \qquad y(0) = y_0, \tag{1.1}$$
where $y \in \mathbb{R}^d$ and $f : \mathbb{R}^d \to \mathbb{R}^d$ satisfies a Lipschitz condition.

After a brief introduction to Runge-Kutta schemes in section 1.2, we discuss multistep formulas in section 1.4. In particular, we focus our attention on the study of the relationships between one-step and multistep methods. We show that, given a multistep method, there formally exists a manifold which is invariant under the numerical scheme, and that the dynamics of the multistep method restricted to this manifold is equivalent to the dynamics of a one-step method. Moreover, the one-step method acts on the same space as the exact solution of (1.1), i.e. the one-step method has to be seen as a map $\Phi_h : \mathbb{R}^d \to \mathbb{R}^d$, just like the flow $\varphi_h : \mathbb{R}^d \to \mathbb{R}^d$ of (1.1). This will give us a natural insight into the geometrical properties of multistep methods in the following chapters.

Results are also extended to partitioned multistep methods and one-leg methods.

Section 1.5 of this chapter is devoted to general linear methods. After briefly recalling the classical theory, we extend the approach developed in section 1.4 to general linear methods in two different ways, each corresponding to different assumptions on the coefficients of the methods.

First, we restrict our attention to the dynamics of the general linear method on an invariant manifold. As for multistep methods, we reduce the study of the dynamics of the general linear method to the study of the dynamics of a one-step method which acts on the same space as the one on which the original problem (1.1) is defined.

Secondly, we consider the general linear method as a one-step method, but in a higher-dimensional space than the one on which the problem (1.1) is defined.

1.2 Runge-Kutta Methods

Given a differential equation (1.1) and a step size $h$, we are searching for a numerical value $y_1$ which approximates the exact solution, say $y(h)$, of (1.1) at $h$. We start by considering some simple examples of numerical methods before going into Runge-Kutta methods.


We know that the exact solution $y(h)$ satisfies
$$y(h) - y_0 = \int_0^h f\bigl(y(u)\bigr)\, du. \tag{1.2}$$

In order to produce a numerical value $y_1$ which approximates $y(h)$, i.e. $y_1 \approx y(h)$, we look for an approximation of the integral, the simplest being
$$y(h) - y_0 \approx h f\bigl(y(0)\bigr). \tag{1.3}$$

If we substitute $y(h)$ by $y_1$ and replace the $\approx$ sign by an $=$ sign in (1.3), we obtain the particularly simple numerical scheme
$$y_1 = y_0 + h f(y_0),$$
called the explicit Euler method. This method is called explicit because, given the initial value $y_0$, a direct calculation enables us to compute $y_1$. We can also use the value of $y(h)$ to approximate the integral (1.2); this leads to

$$y(h) - y_0 \approx h f\bigl(y(h)\bigr),$$
and then to the numerical scheme
$$y_1 = y_0 + h f(y_1),$$
called the implicit Euler method. The method is called implicit because $y_1$ is implicitly defined.
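To make the two schemes concrete, here is a minimal Python sketch (added for illustration; the fixed-point solver for the implicit step is an assumption of this sketch, not something prescribed by the thesis):

```python
def explicit_euler_step(f, y0, h):
    """One step of the explicit Euler method: y1 = y0 + h*f(y0)."""
    return y0 + h * f(y0)

def implicit_euler_step(f, y0, h, sweeps=50):
    """One step of the implicit Euler method: y1 = y0 + h*f(y1).
    The implicitly defined y1 is computed here by fixed-point iteration,
    which converges for h small enough (h times the Lipschitz constant < 1)."""
    y1 = y0  # starting guess
    for _ in range(sweeps):
        y1 = y0 + h * f(y1)
    return y1

# Example on y' = -y, y(0) = 1, with step size h = 0.1:
f = lambda y: -y
print(explicit_euler_step(f, 1.0, 0.1))  # 0.9
print(implicit_euler_step(f, 1.0, 0.1))  # about 1/1.1 = 0.9090...
```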

In order to study how a given numerical method approximates the exact solution of (1.1), we compare the Taylor expansions around $y_0$. The first terms of the Taylor expansion of the exact solution are given by
$$y(h) = y_0 + h f(y_0) + \frac{h^2}{2!} f'(y_0) f(y_0) + \cdots \tag{1.4}$$
The Taylor expansion of the explicit Euler method is $y_1 = y_0 + h f(y_0)$; the method therefore coincides with (1.4) up to (and including) the terms of order $h$.

The first terms of the Taylor expansion of the implicit Euler method are $y_1 = y_0 + h f(y_0) + h^2 f'(y_0) f(y_0) + \cdots$. As in the explicit Euler case, the Taylor expansion coincides with (1.4) up to (and including) the term of order $h$.

The comparison of the Taylor expansions of both the explicit and the implicit Euler method shows that the numerical solution satisfies
$$y_1 - y(h) = O(h^2).$$
Both methods are said to be of order of accuracy one.

We also consider the scheme
$$y_1 = y_0 + \frac{h}{2} \bigl( f(y_0) + f(y_1) \bigr),$$
called the trapezoidal rule, because we approximate the integral (1.2) by the area of a trapezoid. Comparing its Taylor series expansion with (1.4) shows us that the terms of both

series coincide up to (and including) order two. Then, by considering a linear combination of $f(y_0)$ and $f(y_1) \approx f(y(h))$, we obtain a method which is of order of accuracy two, i.e.

$$y_1 - y(h) = O(h^3).$$
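A short sketch of the trapezoidal rule, again with the implicit relation solved by fixed-point iteration (my illustration, not code from the thesis), confirms the $O(h^3)$ local error numerically:

```python
import math

def trapezoidal_step(f, y0, h, sweeps=50):
    """One step of the trapezoidal rule y1 = y0 + h/2*(f(y0) + f(y1)),
    with the implicit equation solved by fixed-point iteration."""
    y1 = y0 + h * f(y0)  # predictor: explicit Euler guess
    for _ in range(sweeps):
        y1 = y0 + h / 2 * (f(y0) + f(y1))
    return y1

# Local error on y' = y, y(0) = 1, whose exact solution at h is exp(h):
f = lambda y: y
for h in (0.1, 0.05):
    print(h, abs(trapezoidal_step(f, 1.0, h) - math.exp(h)))
# Halving h divides the local error by roughly 8, consistent with O(h^3).
```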

Runge-Kutta methods generalize the above examples. These methods are given by
$$\begin{aligned}
k_i &= f\Bigl( y_n + h \sum_{j=1}^{s} a_{ij}\, k_j \Bigr), \qquad i = 1, \dots, s,\\
y_{n+1} &= y_n + h \sum_{j=1}^{s} b_j\, k_j,
\end{aligned} \tag{1.5}$$
where $(a_{ij})_{i,j=1,\dots,s}$ and $(b_i)_{i=1,\dots,s}$ are the coefficients of the method, $h$ is the step size and the $k_i$ are called the internal stages. Notice that the internal stages are implicitly defined and satisfy
$$k_i \approx f\bigl( y(c_i h) \bigr),$$
where $c_i = \sum_j a_{ij}$, and the numerical approximation $y_1$ is obtained as the sum of the initial value $y_0$ and a linear combination of the $k_i$'s.
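The scheme (1.5) is easy to prototype. The following minimal sketch (an illustration with assumptions of mine: the stages are computed by fixed-point iteration, which is enough for small $h$; for an explicit tableau a few sweeps already give the exact stages) performs one Runge-Kutta step for given coefficients $A = (a_{ij})$ and $b = (b_j)$:

```python
import numpy as np

def runge_kutta_step(f, y0, h, A, b, sweeps=50):
    """One step of the Runge-Kutta method (1.5) with coefficients A = (a_ij), b = (b_j).
    The internal stages k_i = f(y0 + h * sum_j a_ij k_j) are implicitly defined;
    here they are computed by fixed-point iteration, which suffices for small h."""
    A, b, y0 = np.asarray(A, float), np.asarray(b, float), np.asarray(y0, float)
    s = len(b)
    k = np.array([f(y0) for _ in range(s)])  # initial guess for the stages
    for _ in range(sweeps):
        k = np.array([f(y0 + h * sum(A[i, j] * k[j] for j in range(s)))
                      for i in range(s)])
    return y0 + h * sum(b[j] * k[j] for j in range(s))

# Example: the implicit midpoint rule (s = 1, a_11 = 1/2, b_1 = 1) applied to y' = -y:
print(runge_kutta_step(lambda y: -y, [1.0], 0.1, [[0.5]], [1.0]))  # approx [0.904762]
```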

The reader interested in a deeper insight into the historical development of Runge-Kutta methods can consult the paper [9].

Since their first introduction, the study of Runge-Kutta methods has never ceased, in many directions such as the local order of accuracy, the study of the global error, stability, and so on. We do not treat Runge-Kutta methods in these directions here and refer the reader to the textbooks of Hairer, Nørsett & Wanner [20], Hairer & Wanner [23] and Butcher [8] for a wide and deep insight into the state of the art of Runge-Kutta theory.

In the following, in order to be self-contained, we recall some definitions and results related to Runge-Kutta schemes which are relevant for our further developments.

Definition 1.2.1 A Runge-Kutta method $y_1 = \Phi_h(y_0)$ is said to be of local order $p$ if
$$y_1 - \varphi_h(y_0) = O(h^{p+1}),$$
where $\varphi_t(y_0) = y(t)$ is the exact solution at time $t$ of the problem $y' = f(y)$ with initial condition $y(0) = y_0$. This means that the Taylor series of $y_1$ and $\varphi_h(y_0)$ coincide up to (and including) the term $h^p$.

Because we will mostly be concerned with long-time integration, the concept of global order is important.

Definition 1.2.2 A Runge-Kutta method $y_1 = \Phi_h(y_0)$ is said to be of global order $p$ if
$$y_n - \varphi_{nh}(y_0) = O(h^p),$$
with $nh = T$ a constant and $h \to 0$.


Concerning the order investigations, it is sufficient to deal only with local order. This is so because both definitions are consistent with one another, in the sense that local order $p$ implies global order $p$. The idea when dealing with local order is to expand the exact and the numerical solutions of the differential equation in Taylor series and to compare both series term by term. The first terms of the series of the exact solution of (1.1) are given by (with $y = y(t) \in \mathbb{R}^d$)
$$y(t+h) = y + h f(y) + \frac{h^2}{2!} f'(y) f(y) + \frac{h^3}{3!} \bigl( f''(y)(f(y), f(y)) + f'(y) f'(y) f(y) \bigr) + \cdots,$$

and the first terms of the series of the numerical solution (1.5) by
$$\begin{aligned}
y_1 = y_0 &+ h \Bigl( \sum_{i=1}^{s} b_i \Bigr) f(y_0) + h^2 \Bigl( \sum_{i=1}^{s} b_i c_i \Bigr) f'(y_0) f(y_0)\\
&+ \frac{h^3}{2} \Bigl( \sum_{i=1}^{s} b_i c_i^2 \Bigr) f''(y_0)\bigl( f(y_0), f(y_0) \bigr) + h^3 \Bigl( \sum_{i,j=1}^{s} b_i a_{ij} c_j \Bigr) f'(y_0) f'(y_0) f(y_0) + \cdots
\end{aligned} \tag{1.6}$$
We see that the condition

$$\sum_{i=1}^{s} b_i = 1$$

is sufficient for order one, a condition which is also called the consistency condition because it means that the numerical scheme approximates the solution of the differential equation (1.1). For order two, the additional sufficient condition is

$$\sum_{i=1}^{s} b_i c_i = \frac{1}{2},$$

and finally

$$\sum_{i=1}^{s} b_i c_i^2 = \frac{1}{3}, \qquad \sum_{i=1}^{s} \sum_{j=1}^{s} b_i a_{ij} c_j = \frac{1}{3!},$$

for order three.

Actually, the conditions above are also necessary conditions for order three (see exercise 4, chapter II.2 of [20]).
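For a concrete tableau these conditions can be verified directly. As an illustration (the classical four-stage Runge-Kutta tableau used below is a standard example and an assumption of this sketch, not taken from the thesis):

```python
from fractions import Fraction as F

# Butcher tableau of the classical four-stage Runge-Kutta method (assumed example).
A = [[0, 0, 0, 0], [F(1, 2), 0, 0, 0], [0, F(1, 2), 0, 0], [0, 0, 1, 0]]
b = [F(1, 6), F(1, 3), F(1, 3), F(1, 6)]
c = [sum(row) for row in A]
s = range(4)

print(sum(b[i] for i in s) == 1)                                     # order 1 (consistency)
print(sum(b[i] * c[i] for i in s) == F(1, 2))                        # order 2
print(sum(b[i] * c[i] ** 2 for i in s) == F(1, 3))                   # order 3, first condition
print(sum(b[i] * A[i][j] * c[j] for i in s for j in s) == F(1, 6))   # order 3, second condition
```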

Although the continuation of this process is theoretically clear, it leads to inextricable computational difficulties when we try to deal with higher orders. The introduction of a new formalism is needed: the theory of B-series. The notation is similar to that of the book of Hairer, Nørsett & Wanner [20].

1.3 B-Series and Applications

B-series are not only useful for investigating the order conditions of Runge-Kutta methods; they are also a powerful tool for establishing theoretical results, and they are the basis for many further developments.

1.3.1 B-Series, Trees and Elementary Differentials

B-series originate from the derivation of order conditions for Runge-Kutta methods and the computation of successive derivatives with respect to time $t$ of the function $f(y(t))$, where $y(t)$ is the solution of the usual differential equation $y'(t) = f(y(t))$, previously numbered equation (1.1). We start by briefly explaining this point. Notice that, in order to simplify the notation, we avoid repeating the time dependence of $y = y(t)$ when the notation remains clear.

We saw in section 1.2 that the first derivatives of $y(t)$, and of $y_1$, the numerical solution obtained from a Runge-Kutta scheme applied to (1.1), are given by linear combinations of terms like
$$f(y), \qquad f'(y) f(y), \qquad f''(y)\bigl( f(y), f(y) \bigr), \qquad f'(y) f'(y) f(y), \qquad \dots,$$

called elementary differentials. The idea is to introduce a graphical representation of these derivatives. We start by associating the tree $\tau = \bullet$, consisting of a single vertex, with the function $f(y)$, denoted $f(y) = F(\bullet)(y)$. Taking the derivative of the function $f(y)$ with respect to time $t$ amounts to computing $f'(y)$ and multiplying it by $y' = f(y)$, hence

$$\frac{d}{dt} f\bigl(y(t)\bigr) = f'\bigl(y(t)\bigr)\, f\bigl(y(t)\bigr).$$

Graphically speaking, we apply the convention that taking the derivative of the function $f(y)$ amounts to adding an edge to the root of the tree $\tau = \bullet$, and multiplying by $f(y)$ amounts to adding a vertex at the end of the new edge. So we associate the tree with two vertices, $[\bullet]$, with the elementary differential $f'(y) f(y)$. Using the same notation as above we have $f'(y) f(y) = F([\bullet])(y)$.

We continue one step further. To take the derivative of the function $f'(y) f(y)$ we have to take the derivative of $f'(y)$, which leads to a term $f''(y)(f(y), f(y))$, and the derivative of $f(y)$, which leads to $f'(y) f'(y) f(y)$. Using the same graphical convention as above, we get
$$f'(y) f(y) \;\longrightarrow\; f''(y)\bigl( f(y), f(y) \bigr) \qquad \text{and} \qquad f'(y) f(y) \;\longrightarrow\; f'(y) f'(y) f(y),$$
each arrow adding one vertex to the corresponding tree (in the thesis the diagrams are drawn with trees in which the vertex corresponding to the term being differentiated is circled).

By repeating the same argument, we see that the successive derivatives of the function $f(y)$ can be written as linear combinations of expressions of the form $f(y)$, $f'(y) f(y)$, $f'(y) f'(y) f(y), \dots$, and that each expression can be represented graphically by a tree. Moreover, we see that some expressions appear more than once. For example,

consider the term $f''(y)\bigl( f(y), f'(y) f(y) \bigr) = F([\bullet, [\bullet]])(y)$. We display below three distinct ways to obtain this tree. We attach to each vertex a label in such a way that the vertex which appears first is labelled 1, the vertex which appears second is labelled 2, and so on.

$$\begin{aligned}
f(y) &\longrightarrow f'(y) f(y) \longrightarrow f''(y)\bigl( f(y), f(y) \bigr) \longrightarrow f''(y)\bigl( f'(y) f(y), f(y) \bigr),\\
f(y) &\longrightarrow f'(y) f(y) \longrightarrow f''(y)\bigl( f(y), f(y) \bigr) \longrightarrow f''(y)\bigl( f'(y) f(y), f(y) \bigr),\\
f(y) &\longrightarrow f'(y) f(y) \longrightarrow f'(y) f'(y) f(y) \longrightarrow f''(y)\bigl( f'(y) f(y), f(y) \bigr).
\end{aligned}$$
(In the thesis each chain is drawn with the corresponding labelled trees; the first two chains produce the same elementary differentials and differ only in the order in which the vertices of the final tree are labelled.)

This example shows that the number of distinct labellings of a given tree plays an important role, because it counts the number of times a given tree appears during the process of differentiating $f(y)$, $f'(y) f(y), \dots$ with respect to time $t$.

In the following, we give a more rigorous account of trees, labellings and all the material needed to introduce B-series formally.

Definition 1.3.1 The set of rooted trees $T$ is recursively defined by:
- $\emptyset \in T$ and $\bullet \in T$;
- if $t_1, \dots, t_m$ are all in $T \setminus \{\emptyset\}$, then $t = [t_1, \dots, t_m] \in T$;
where $t = [t_1, \dots, t_m]$ is the tree obtained by grafting the trees $t_1, \dots, t_m$ onto a new vertex, called the root of the tree. For example, $[\bullet]$ is the tree with two vertices, while $[\bullet, \bullet]$ and $[[\bullet]]$ are the two trees with three vertices (in the thesis, the root of each tree is drawn surrounded by a circle).

Next, we define some functions on trees.

Definition 1.3.2 The elementary differentials $F(t)(y)$, with $t \in T$, are recursively defined by:
- $F(\emptyset)(y) = y$ and $F(\bullet)(y) = f(y)$;
- $F(t)(y) = f^{(m)}(y)\bigl( F(t_1)(y), \dots, F(t_m)(y) \bigr)$;
with $t = [t_1, \dots, t_m]$, $y \in \mathbb{R}^d$ and $f : \mathbb{R}^d \to \mathbb{R}^d$. The first elementary differentials are given by
$$F(\bullet)(y) = f(y), \qquad F([\bullet])(y) = f'(y) f(y), \qquad F([\bullet, \bullet])(y) = f''(y)\bigl( f(y), f(y) \bigr), \qquad F([[\bullet]])(y) = f'(y) f'(y) f(y).$$


Definition 1.3.3 The function $\rho(t)$ counts the number of vertices of a tree $t$. It is recursively defined by
$$\rho(\emptyset) = 0, \qquad \rho(\bullet) = 1, \qquad \rho([t_1, \dots, t_m]) = 1 + \rho(t_1) + \cdots + \rho(t_m).$$
The function $\alpha(t)$ counts the monotonic labellings of the tree $t$. It is recursively defined by
$$\alpha(\emptyset) = 1, \qquad \alpha(\bullet) = 1, \qquad \alpha(t) = \binom{\rho(t) - 1}{\rho(t_1), \dots, \rho(t_m)}\, \alpha(t_1) \cdots \alpha(t_m)\, \frac{1}{\mu_1!\, \mu_2! \cdots},$$
with $t = [t_1, \dots, t_m] \in T$, where $\mu_1, \mu_2, \dots$ count the numbers of identical trees among $t_1, \dots, t_m$. Although the tree $\emptyset$ does not admit any labelling, it is convenient to define $\alpha(\emptyset) = 1$.

The monotonic labellings of a tree $t$ are obtained by attaching to each vertex of the tree an integer $1, 2, \dots, \rho(t)$, with the restriction that the root is labelled 1 and the other vertices are labelled in such a way that, following any branch from the root, the labels are monotonically increasing. For example, for the order-4 tree $t = [\bullet, [\bullet]]$ we have $\alpha(t) = 3$: of the possible assignments of the labels 2, 3, 4 to the vertices other than the root, exactly three are monotonic. The set of labelled (rooted) trees is denoted by $LT$. By convention we have $\emptyset \in LT$, and all the expressions defined above, $F(t)(y)$, $\rho(t)$ and $\alpha(t)$, are extended to labelled trees simply by ignoring the labels.
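The recursive definitions of $\rho$ and $\alpha$ translate directly into code. Below is a minimal sketch under an encoding of my own (a tree is the tuple of its subtrees, the single vertex $\bullet$ being the empty tuple); it reproduces $\alpha([\bullet, [\bullet]]) = 3$ from the example above:

```python
from math import factorial
from collections import Counter

BULLET = ()  # the tree with a single vertex

def rho(t):
    """Number of vertices of the tree t (Definition 1.3.3)."""
    return 1 + sum(rho(s) for s in t)

def alpha(t):
    """Number of monotonic labellings of the tree t (Definition 1.3.3)."""
    if t == BULLET:
        return 1
    multinom = factorial(rho(t) - 1)
    for s in t:
        multinom //= factorial(rho(s))      # multinomial coefficient
    prod = 1
    for s in t:
        prod *= alpha(s)
    sym = 1
    for count in Counter(t).values():
        sym *= factorial(count)             # mu_1! mu_2! ... for identical subtrees
    return multinom * prod // sym

# The order-4 tree [.,[.]] from the example above:
t = (BULLET, (BULLET,))
print(rho(t), alpha(t))  # expected: 4 3
```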

Definition 1.3.4 (Hairer & Wanner [20]) Let $a : T \to \mathbb{R}$. We call the series
$$B(a, y) = a(\emptyset)\, y + h\, a(\bullet)\, f(y) + \frac{h^2}{2!}\, a([\bullet])\, F([\bullet])(y) + \cdots = \sum_{t \in LT} \frac{h^{\rho(t)}}{\rho(t)!}\, a(t)\, F(t)(y) = \sum_{t \in T} \frac{h^{\rho(t)}}{\rho(t)!}\, \alpha(t)\, a(t)\, F(t)(y)$$
a B-series. In the expression above the function $a : T \to \mathbb{R}$ is extended to the set $LT$, i.e. it is viewed as $a : LT \to \mathbb{R}$, simply by ignoring the labels.

Let $t \in LT$ be a labelled tree. We denote by $s_i(t)$ the subtree formed by the vertices carrying the first $i$ labels, and by $d_i(t)$ the set of subtrees formed by the remaining vertices. For example, we have

$$\begin{aligned}
s_0(t) &= \emptyset, & d_0(t) &= \bigl\{\, [\bullet, [\bullet]] \,\bigr\},\\
s_1(t) &= \bullet, & d_1(t) &= \bigl\{\, \bullet,\ [\bullet] \,\bigr\},\\
s_2(t) &= [\bullet], & d_2(t) &= \bigl\{\, \bullet,\ \bullet \,\bigr\},\\
s_3(t) &= [[\bullet]], & d_3(t) &= \bigl\{\, \bullet \,\bigr\},\\
s_4(t) &= [\bullet, [\bullet]], & d_4(t) &= \emptyset,
\end{aligned}$$
for one monotonic labelling of the order-4 tree $t = [\bullet, [\bullet]]$ considered above. We are now able to formulate an important result.


Theorem 1.3.5 (Hairer & Wanner [20]) Let $a : T \to \mathbb{R}$ and $b : T \to \mathbb{R}$ be two mappings such that $a(\emptyset) = 1$. Then the composition of the two corresponding B-series is again a B-series,
$$B\bigl(b, B(a, y)\bigr) = B(ab, y), \tag{1.7}$$
where the "product" $ab : T \to \mathbb{R}$ is given by
$$ab(t) = \frac{1}{\alpha(t)} \sum_{\text{labellings of } t}\; \sum_{i=0}^{\rho(t)} \binom{\rho(t)}{i}\, b\bigl( s_i(t) \bigr) \prod_{z \in d_i(t)} a(z). \tag{1.8}$$
In the last expression, the first sum is over all $\alpha(t)$ different labellings of $t$.
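As an illustration of (1.7) and (1.8) (a worked example added here, not taken from the thesis), consider the tree $t = [\bullet]$, for which $\rho(t) = 2$, $\alpha(t) = 1$ and there is a single monotonic labelling, with $s_0(t) = \emptyset$, $d_0(t) = \{[\bullet]\}$, $s_1(t) = \bullet$, $d_1(t) = \{\bullet\}$, $s_2(t) = [\bullet]$, $d_2(t) = \emptyset$. Formula (1.8) then gives
$$ab([\bullet]) = \binom{2}{0} b(\emptyset)\, a([\bullet]) + \binom{2}{1} b(\bullet)\, a(\bullet) + \binom{2}{2} b([\bullet]) = b(\emptyset)\, a([\bullet]) + 2\, b(\bullet)\, a(\bullet) + b([\bullet]),$$
which is exactly the coefficient of $\frac{h^2}{2!} f'(y) f(y)$ obtained by expanding $B(b, B(a, y))$ directly.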

Corollary 1.3.6 If $a : T \to \mathbb{R}$ with $a(\emptyset) = 1$, then
$$h f\bigl( B(a, y) \bigr) = B(a', y)$$
with
$$a'(\emptyset) = 0, \qquad a'(\bullet) = 1, \qquad a'([t_1, \dots, t_m]) = \rho(t)\, a(t_1) \cdots a(t_m).$$

1.3.2 First Application of B-Series

The first application of B-series is to derive the order conditions for a Runge-Kutta method (1.5).

Theorem 1.3.7 (B-series of the exact solution) The exact solution of (1.1) is a B-series,
$$y(t_0 + h) = B(e, y_0), \qquad \text{where } y_0 = y(t_0) \text{ and } e(t) = 1 \text{ for all } t \in T.$$

Proof. If we try to compute the Taylor series of the exact solution of (1.1), terms like $f(y)$, $f'(y) f(y)$, $f''(y)(f(y), f(y))$, ... appear. Because they are elementary differentials in the sense of Definition 1.3.2, the Taylor series can be expressed as a B-series. We denote this B-series by $B(e, y)$ and substitute it into the equation $h\, y'(t_0 + h) = h f\bigl( y(t_0 + h) \bigr)$. Using Corollary 1.3.6, we obtain
$$\sum_{t \in T \setminus \{\emptyset\}} \frac{h^{\rho(t)}}{(\rho(t) - 1)!}\, \alpha(t)\, e(t)\, F(t)(y_0) = \sum_{t \in T \setminus \{\emptyset\}} \frac{h^{\rho(t)}}{\rho(t)!}\, \alpha(t)\, e'(t)\, F(t)(y_0).$$
Since the elementary differentials $F(t)(y)$ are independent (see exercise II.2.4 in [20]), we have
$$\rho(t)\, e(t) = e'(t) = \rho(t)\, e(t_1) \cdots e(t_m),$$
which enables us to determine $e(t)$ recursively (notice that $e(\emptyset) = 1$).


Theorem 1.3.8 (Runge-Kutta methods as B-series) The numerical solution of a Runge-Kutta method (1.5) is a B-series $y_1 = B(a, y_0)$, where $a(\emptyset) = 1$ and
$$a(t) = \gamma(t) \sum_{i=1}^{s} b_i\, \Phi_i(t), \qquad \text{for } t \in T, \tag{1.9}$$
where $\Phi_i : T \to \mathbb{R}$ and $\gamma : T \to \mathbb{N}$ are recursively defined by
$$\Phi_i(\bullet) = 1, \qquad \gamma(\bullet) = 1,$$
and for $t = [t_1, \dots, t_m]$ it holds that
$$\gamma(t) = \rho(t)\, \gamma(t_1) \cdots \gamma(t_m), \qquad \Phi_i(t) = \sum_{j_1, \dots, j_m} a_{i j_1} \Phi_{j_1}(t_1) \cdots a_{i j_m} \Phi_{j_m}(t_m).$$

Proof. The calculations are very similar to those of the preceding theorem. We refer the reader to the lecture notes of Hairer [16] for a complete proof.

Using both preceding theorems, we obtain order conditions for a Runge-Kutta method in terms of the coefficients of the method. A Runge-Kutta method has order $p$ if and only if
$$\sum_{i=1}^{s} b_i\, \Phi_i(t) = \frac{1}{\gamma(t)} \qquad \text{for } \rho(t) \leq p.$$
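For small trees this criterion can be checked mechanically. The sketch below (my illustration, reusing the tuple encoding of trees from the earlier sketch; the classical four-stage tableau is an assumed example) implements $\Phi_i$ and $\gamma$ recursively and verifies the conditions for all trees with $\rho(t) \leq 3$:

```python
from fractions import Fraction
from itertools import product

BULLET = ()  # a tree is a tuple of its subtrees; the single vertex is ()

def rho(t):
    return 1 + sum(rho(s) for s in t)

def gamma(t):
    # gamma(single vertex) = 1, gamma([t1,...,tm]) = rho(t)*gamma(t1)*...*gamma(tm)
    g = rho(t)
    for s in t:
        g *= gamma(s)
    return g

def Phi(i, t, A):
    """Phi_i(t) from Theorem 1.3.8, for the coefficient matrix A = (a_ij)."""
    if t == BULLET:
        return Fraction(1)
    total = Fraction(0)
    for js in product(range(len(A)), repeat=len(t)):
        term = Fraction(1)
        for j, subtree in zip(js, t):
            term *= A[i][j] * Phi(j, subtree, A)
        total += term
    return total

def satisfies_order_condition(t, A, b):
    return sum(b[i] * Phi(i, t, A) for i in range(len(b))) == Fraction(1, gamma(t))

F = Fraction
A = [[0, 0, 0, 0], [F(1, 2), 0, 0, 0], [0, F(1, 2), 0, 0], [0, 0, 1, 0]]
b = [F(1, 6), F(1, 3), F(1, 3), F(1, 6)]
trees_up_to_order_3 = [BULLET, (BULLET,), (BULLET, BULLET), ((BULLET,),)]
print(all(satisfies_order_condition(t, A, b) for t in trees_up_to_order_3))  # True
```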

1.3.3 Backward Error Analysis

Backward error analysis is an interesting approach in many areas of numerical analysis. We limit ourselves to the introduction of some results in the backward error analysis of ordinary differential equations which will be of interest in this work. We consider the application of a numerical method, of order at least one, to the ordinary differential equation

$$y' = f(y), \qquad y(0) = y_0, \tag{1.10}$$
in order to get an approximation $y_1$ of the exact solution of (1.10) with a Taylor series expansion
$$y_1 = y_0 + h f(y_0) + \frac{h^2}{2!}\, A\, f'(y_0) f(y_0) + \frac{h^3}{3!} \Bigl( B\, f''(y_0)\bigl( f(y_0), f(y_0) \bigr) + C\, f'(y_0) f'(y_0) f(y_0) \Bigr) + \cdots, \tag{1.11}$$

where $h$ is the step size of integration, and $A$, $B$, $C$, ... are real coefficients (see for example equation (1.6)). We postpone the discussion of the convergence of the series (1.11), and of the subsequent series, to the end of this section. At this first stage our expansions are only formal.


We are looking for a differential equation
$$\tilde{y}' = f(\tilde{y}) + h f_1(\tilde{y}) + h^2 f_2(\tilde{y}) + \cdots, \qquad \tilde{y}(0) = y_0, \tag{1.12}$$
such that formally $\tilde{y}(h) = y_1$. Computing the Taylor series of $\tilde{y}(h)$ using (1.12), and grouping the same powers of $h$, yields
$$\begin{aligned}
\tilde{y}(h) = y_0 + h f(y_0) &+ h^2 \Bigl( f_1(y_0) + \tfrac{1}{2!} f'(y_0) f(y_0) \Bigr)\\
&+ h^3 \Bigl( f_2(y_0) + \tfrac{1}{2!} f'(y_0) f_1(y_0) + \tfrac{1}{2!} f_1'(y_0) f(y_0) + \tfrac{1}{3!} f''(y_0)\bigl( f(y_0), f(y_0) \bigr) + \tfrac{1}{3!} f'(y_0) f'(y_0) f(y_0) \Bigr) + h^4 \cdots
\end{aligned} \tag{1.13}$$

Comparing powers of $h$ in the series (1.13) and (1.11) yields for the first terms
$$\begin{aligned}
f_1(y) &= \tfrac{1}{2!} (A - 1)\, f'(y) f(y),\\
f_2(y) &= \tfrac{1}{3!} (B - 1)\, f''(y)\bigl( f(y), f(y) \bigr) + \tfrac{1}{3!} (C - 1)\, f'(y) f'(y) f(y) - \tfrac{1}{2!} \bigl( f'(y) f_1(y) + f_1'(y) f(y) \bigr).
\end{aligned} \tag{1.14}$$
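As a concrete illustration (a worked example added here, not taken from the thesis at this point), consider the explicit Euler method, for which $A = B = C = 0$ in (1.11). Then (1.14) gives
$$f_1(y) = -\tfrac{1}{2} f'(y) f(y), \qquad f_2(y) = \tfrac{1}{12} f''(y)\bigl( f(y), f(y) \bigr) + \tfrac{1}{3} f'(y) f'(y) f(y),$$
so the modified equation starts as $\tilde{y}' = f(\tilde{y}) - \tfrac{h}{2} f'(\tilde{y}) f(\tilde{y}) + h^2 \bigl( \tfrac{1}{12} f''(\tilde{y})(f(\tilde{y}), f(\tilde{y})) + \tfrac{1}{3} f'(\tilde{y}) f'(\tilde{y}) f(\tilde{y}) \bigr) + \cdots$ For the scalar linear problem $f(y) = \lambda y$ this reduces to $\tilde{y}' = \bigl( \lambda - \tfrac{h \lambda^2}{2} + \tfrac{h^2 \lambda^3}{3} \bigr) \tilde{y} + \cdots$, the truncation of $\log(1 + h\lambda)/h$, as expected since one Euler step gives $y_1 = (1 + h\lambda) y_0$.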

Actually, this process can be repeated in order to determine recursively the functions $f_j(y)$ for $j \geq 1$. To make this last point precise, we have to notice that the coefficient of a term $h^q$ in the series (1.13) is of the form $f_{q-1} + \cdots$, where the dots account for an expression involving $f(y)$, the $f_j(y)$ for $1 \leq j \leq q-2$, and derivatives of these functions. So, comparing the coefficients of $h^q$ in the series (1.13) and (1.11) enables us to determine $f_{q-1}$, provided that $f(y)$ and the $f_j(y)$ for $1 \leq j \leq q-2$ are already known. This shows that we can recursively find the series (1.12). Moreover, the functions $f_j(y)$ for $j \geq 1$ are given by linear combinations of the elementary differentials of the function $f(y)$ of order $j+1$ (although the time dependence has been omitted to simplify the notation, we still have $f(y) = f(y(t))$ and derivatives have to be taken with respect to time $t$), and hence the series (1.12) can be written as a B-series. So, in order to fully determine the series (1.12), we make the natural hypothesis that $y_1$ can be represented as a B-series,

$$y_1 = \sum_{t \in T} \frac{h^{\rho(t)}}{\rho(t)!}\, \alpha(t)\, a(t)\, F(t)(y_0), \tag{1.15}$$
as is the case, for example, for the numerical solution obtained by a Runge-Kutta scheme.

The process described just above shows that, if we suppose (1.15) holds, then the functions $f_j$ can be written as
$$f_j(y) = \frac{1}{(j+1)!} \sum_{\rho(t) = j+1} \alpha(t)\, b(t)\, F(t)(y),$$
with the terms $b(t)$ recursively defined. For example, using equation (1.14) with $A = a([\bullet])$,
