
FINITE DIFFERENCE SCHEMES AS A MATRIX EQUATION

Claire David

Preprint hal-00081010, version 2, 3 Aug 2006. https://hal.archives-ouvertes.fr/hal-00081010v2

Abstract. Finite difference schemes are here solved by means of a linear matrix equation. The theoretical study of the related algebraic system is presented, and enables us to minimize the error due to the finite difference approximation.

Key words. Finite difference schemes, Sylvester equation

AMS subject classifications. 15A15, 15A09, 15A23

1. Introduction: Scheme classes. Finite difference schemes used to approximate linear differential equations are usually solved by means of a recursive computation. We here propose a completely different approach, which uses an equivalent matrix equation.

Consider the transport equation:

\[
\frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x} = 0, \qquad x \in [0, L],\; t \in [0, T] \tag{1.1}
\]

with the initial condition $u(x, t=0) = u_0(x)$.

Proposition 1.1. A finite difference scheme for this equation can be written in the form:

\[
\alpha\, u_i^{n+1} + \beta\, u_i^{n} + \gamma\, u_i^{n-1} + \delta\, u_{i+1}^{n} + \varepsilon\, u_{i-1}^{n} + \zeta\, u_{i+1}^{n+1} + \eta\, u_{i-1}^{n-1} + \theta\, u_{i-1}^{n+1} + \vartheta\, u_{i+1}^{n-1} = 0 \tag{1.2}
\]

where:

\[
u_l^m = u(l\,h,\; m\,\tau) \tag{1.3}
\]

$l \in \{i-1, i, i+1\}$, $m \in \{n-1, n, n+1\}$, $i = 0, \ldots, n_x$, $n = 0, \ldots, n_t$, with $h$, $\tau$ denoting respectively the mesh size and the time step ($L = n_x h$, $T = n_t \tau$).

The Courant-Friedrichs-Lewy (CFL) number is defined as $\sigma = c\,\tau / h$.

A numerical scheme is specified by selecting appropriate values of the coefficients $\alpha$, $\beta$, $\gamma$, $\delta$, $\varepsilon$, $\zeta$, $\eta$, $\theta$ and $\vartheta$ in equation (1.2). Values corresponding to the numerical schemes retained for the present work are given in Table 1.1.

The number of time steps will be denoted $n_t$, the number of space steps $n_x$. In general, $n_x \gg n_t$.

Université Pierre et Marie Curie-Paris 6, Laboratoire de Modélisation en Mécanique, UMR CNRS 7607, Boîte courrier n°162, 4 place Jussieu, 75252 Paris cedex 05, France (david@lmm.jussieu.fr).


Table 1.1
Numerical scheme coefficients.

Name           | α            | β             | γ       | δ                | ε                | ζ | η | θ       | ϑ
---------------|--------------|---------------|---------|------------------|------------------|---|---|---------|---
Leapfrog       | 1/(2τ)       | 0             | −1/(2τ) | c/(2h)           | −c/(2h)          | 0 | 0 | 0       | 0
Lax            | 1/τ          | 0             | 0       | −1/(2τ) + c/(2h) | −1/(2τ) − c/(2h) | 0 | 0 | 0       | 0
Lax-Wendroff   | 1/τ          | −1/τ + c²τ/h² | 0       | (1−σ) c/(2h)     | −(1+σ) c/(2h)    | 0 | 0 | 0       | 0
Crank-Nicolson | 1/τ + c/(2h) | −1/τ + c/(2h) | 0       | 0                | −c/(2h)          | 0 | 0 | −c/(2h) | 0
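As a companion to Table 1.1, the rows reconstructed above for the three explicit schemes can be encoded as follows. This is an illustrative Python sketch added in this rewrite, not part of the original paper; the function name and layout are hypothetical:

```python
def scheme_coefficients(name, c, tau, h):
    """Return (alpha, beta, gamma, delta, epsilon) from Table 1.1;
    the remaining coefficients (zeta, eta, theta, vartheta) are zero
    for these three schemes."""
    sigma = c * tau / h  # CFL number
    if name == "leapfrog":
        return 1/(2*tau), 0.0, -1/(2*tau), c/(2*h), -c/(2*h)
    if name == "lax":
        return 1/tau, 0.0, 0.0, -1/(2*tau) + c/(2*h), -1/(2*tau) - c/(2*h)
    if name == "lax-wendroff":
        return (1/tau, -1/tau + c**2 * tau / h**2, 0.0,
                (1 - sigma) * c / (2*h), -(1 + sigma) * c / (2*h))
    raise ValueError(f"unknown scheme: {name}")
```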

The paper is organized as follows. The equivalent matrix equation is presented in section 2. Specific properties of the involved matrices are examined in section 3. A way of minimizing the error due to the finite difference approximation is presented in section 4.

2. The Sylvester equation.

2.1. Matrix form of the finite difference problem. Let us introduce the rectangular matrix defined by:

\[
U = [u_i^n]_{1 \le i \le n_x-1,\; 1 \le n \le n_t} \tag{2.1}
\]

Theorem 2.1. The problem (1.2) can be written in the following matrix form:

\[
M_1\,U + U\,M_2 + L(U) = M_0 \tag{2.2}
\]

where $M_1$ and $M_2$ are square matrices, respectively $(n_x-1)\times(n_x-1)$ and $n_t\times n_t$, given by:

\[
M_1 = \begin{pmatrix}
\beta & \delta & 0 & \cdots & 0 \\
\varepsilon & \beta & \ddots & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & \ddots & \beta & \delta \\
0 & \cdots & 0 & \varepsilon & \beta
\end{pmatrix}, \qquad
M_2 = \begin{pmatrix}
0 & \gamma & 0 & \cdots & 0 \\
\alpha & 0 & \ddots & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & \ddots & 0 & \gamma \\
0 & \cdots & 0 & \alpha & 0
\end{pmatrix} \tag{2.3}
\]

the matrix $M_0$ being given by:

\[
M_0 = \begin{pmatrix}
-\gamma u_1^0 - \varepsilon u_0^1 - \eta u_0^0 - \theta u_0^2 - \vartheta u_2^0 & -\varepsilon u_0^2 - \eta u_0^1 - \theta u_0^3 & \cdots & -\varepsilon u_0^{n_t} - \eta u_0^{n_t-1} \\
-\gamma u_2^0 - \eta u_1^0 - \vartheta u_3^0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
-\gamma u_{n_x-2}^0 - \eta u_{n_x-3}^0 - \vartheta u_{n_x-1}^0 & 0 & \cdots & 0 \\
-\gamma u_{n_x-1}^0 - \delta u_{n_x}^1 - \eta u_{n_x-2}^0 - \zeta u_{n_x}^2 - \vartheta u_{n_x}^0 & -\delta u_{n_x}^2 - \zeta u_{n_x}^3 - \vartheta u_{n_x}^1 & \cdots & -\delta u_{n_x}^{n_t} - \vartheta u_{n_x}^{n_t-1}
\end{pmatrix} \tag{2.4}
\]

and where $L$ is a linear matrix operator which can be written as:

\[
L = L_1 + L_2 + L_3 + L_4 \tag{2.5}
\]

where $L_1$, $L_2$, $L_3$ and $L_4$ are given by:

\[
L_1(U) = \zeta \begin{pmatrix}
u_2^2 & u_2^3 & \cdots & u_2^{n_t} & 0 \\
u_3^2 & u_3^3 & \cdots & u_3^{n_t} & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
u_{n_x-1}^2 & u_{n_x-1}^3 & \cdots & u_{n_x-1}^{n_t} & 0 \\
0 & 0 & \cdots & 0 & 0
\end{pmatrix}, \qquad
L_2(U) = \eta \begin{pmatrix}
0 & 0 & \cdots & 0 \\
0 & u_1^1 & \cdots & u_1^{n_t-1} \\
0 & u_2^1 & \cdots & u_2^{n_t-1} \\
\vdots & \vdots & & \vdots \\
0 & u_{n_x-2}^1 & \cdots & u_{n_x-2}^{n_t-1}
\end{pmatrix} \tag{2.6}
\]

\[
L_3(U) = \theta \begin{pmatrix}
0 & \cdots & \cdots & \cdots & 0 \\
u_1^2 & u_1^3 & \cdots & u_1^{n_t} & 0 \\
u_2^2 & u_2^3 & \cdots & u_2^{n_t} & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
u_{n_x-2}^2 & u_{n_x-2}^3 & \cdots & u_{n_x-2}^{n_t} & 0
\end{pmatrix}, \qquad
L_4(U) = \vartheta \begin{pmatrix}
0 & u_2^1 & u_2^2 & \cdots & u_2^{n_t-1} \\
0 & u_3^1 & u_3^2 & \cdots & u_3^{n_t-1} \\
\vdots & \vdots & \vdots & & \vdots \\
0 & u_{n_x-1}^1 & u_{n_x-1}^2 & \cdots & u_{n_x-1}^{n_t-1} \\
0 & 0 & 0 & \cdots & 0
\end{pmatrix} \tag{2.7}
\]

Proposition 2.2. The right-hand side matrix $M_0$ bears the initial conditions, given for the specific value $n = 0$, which correspond to the initialization process when computing loops, and the boundary conditions, given for the specific values $i = 0$, $i = n_x$.

Denote by $u_{exact}$ the exact solution of (1.1). The corresponding matrix $U_{exact}$ will be:

\[
U_{exact} = [u_{exact}{}_i^n]_{1 \le i \le n_x-1,\; 1 \le n \le n_t} \tag{2.8}
\]

where:

\[
u_{exact}{}_i^n = u_{exact}(x_i, t_n) \tag{2.9}
\]

with $x_i = i\,h$, $t_n = n\,\tau$.

Definition 2.3. We will call error matrix the matrix defined by:

\[
E = U - U_{exact} \tag{2.10}
\]

Consider the matrix $F$ defined by:

\[
F = M_1\, U_{exact} + U_{exact}\, M_2 + L(U_{exact}) - M_0 \tag{2.11}
\]

Proposition 2.4. The error matrix $E$ satisfies:

\[
M_1\, E + E\, M_2 + L(E) = F \tag{2.12}
\]

2.2. The matrix equation.

2.2.1. Theoretical formulation. Theorem 2.5. Minimizing the error due to the approximation induced by the numerical scheme is equivalent to minimizing the norm of the matrix $E$ satisfying (2.12).

Note: Since the linear matrix operator $L$ appears only in the Crank-Nicolson scheme, we will restrict our study to the case $L = 0$. The generalization to the case $L \ne 0$ can easily be deduced.

Proposition 2.6. The problem is then the determination of the minimum norm solution of:

\[
M_1\, E + E\, M_2 = F \tag{2.13}
\]

which is a specific form of the Sylvester equation:

\[
A\,X + X\,B = C \tag{2.14}
\]

where $A$ and $B$ are respectively $m \times m$ and $n \times n$ matrices, and $C$ and $X$ are $m \times n$ matrices.

The solving of the Sylvester equation is generally based on the Schur decomposition: for a given real square $n \times n$ matrix $A$, $n$ being an even number of the form $n = 2p$, there exists a unitary matrix $U$ and an upper block triangular matrix $T$ such that:

\[
A = U^*\, T\, U \tag{2.15}
\]

where $U^*$ denotes the conjugate transpose of $U$. The diagonal blocks of the matrix $T$ correspond to the complex eigenvalues $\lambda_i$ of $A$:

\[
T = \begin{pmatrix}
T_1 & 0 & \cdots & \cdots & 0 \\
0 & \ddots & \ddots & & \vdots \\
\vdots & \ddots & T_i & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & 0 & T_p
\end{pmatrix} \tag{2.16}
\]

where the block matrices $T_i$, $i = 1, \ldots, p$, are given by:

\[
T_i = \begin{pmatrix}
\mathrm{Re}\,[\lambda_i] & \mathrm{Im}\,[\lambda_i] \\
-\,\mathrm{Im}\,[\lambda_i] & \mathrm{Re}\,[\lambda_i]
\end{pmatrix} \tag{2.17}
\]

$\mathrm{Re}$ denoting the real part of a complex number, and $\mathrm{Im}$ the imaginary part.

Due to this decomposition, solving the Sylvester equation this way requires the dimensions of the matrices to be even numbers. We will therefore, in the following, restrict our study to the case where $n_x - 1$ and $n_t$ are even numbers. It is also interesting to note that, the Schur decomposition being more stable for higher order matrices, it perfectly fits finite difference problems.
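For reference, a Schur-based Sylvester solver of the kind just described (the Bartels-Stewart algorithm) is available in SciPy. The following minimal sketch, added in this rewrite under the assumption that NumPy and SciPy are available, checks a random instance of (2.14):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
m, n = 4, 6                          # even dimensions, as assumed in the text
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
C = rng.standard_normal((m, n))

X = solve_sylvester(A, B, C)         # solves A X + X B = C
assert np.allclose(A @ X + X @ B, C)
```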

Complete parametric solutions of the generalized Sylvester equation (2.13) are given in [2], [3]. As for the determination of the solution of the Sylvester equation, it is a major topic in control theory, and has been the subject of numerous works (see [1], [6], [8], [9], [10], [11], [12]).

In [1], the method is based on the reduction of the observable pair (A, C) to an observer-Hessenberg pair (H, D), H being a block upper Hessenberg matrix. The reduction to the observer-Hessenberg form (H, D) is achieved by means of the staircase algorithm (see [4], ...).

In [9], in the specific case of B being a companion form matrix, the authors propose a very neat general complete parametric solution, which is expressed in terms of the controllability of the matrix pair (A, B), a symmetric matrix operator, and a parametric matrix in Hankel form.

We recall that a companion form, or Frobenius, matrix is one of the following kind:

\[
B = \begin{pmatrix}
0 & \cdots & \cdots & 0 & -b_0 \\
1 & 0 & \cdots & 0 & -b_1 \\
0 & 1 & \ddots & \vdots & \vdots \\
\vdots & \ddots & \ddots & 0 & \vdots \\
0 & \cdots & 0 & 1 & -b_{p-1}
\end{pmatrix} \tag{2.18}
\]

These results can be generalized through matrix block decomposition to a block companion form matrix:

\[
M_2 = \begin{pmatrix}
M_{2B_1} & 0 & \cdots & \cdots & 0 \\
0 & M_{2B_2} & 0 & \cdots & 0 \\
\vdots & 0 & \ddots & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & 0 & M_{2B_k}
\end{pmatrix} \tag{2.19}
\]

the $M_{2B_p}$, $1 \le p \le k$, being companion form matrices.

Another method is presented in [14], where the determination of the minimum-norm solution of a Sylvester equation is specifically developed. The accuracy and computational stability of the solutions are examined in [5].

2.2.2. Existence condition of the solution. Theorem 2.7. In the specific cases of the Lax and Lax-Wendroff schemes, (2.14) has a unique solution.

Proof. (2.14) has a unique solution if and only if $A$ and $-B$ have no common eigenvalue. The characteristic polynomials $P_{M_1}$, $P_{M_2}$ of $M_1$ and $M_2$ can be calculated as the determinants of, respectively, $\frac{n_x-1}{2}$ by $\frac{n_x-1}{2}$ and $\frac{n_t}{2}$ by $\frac{n_t}{2}$ diagonal block matrices:

\[
P_{M_1}(\lambda) = \left((\lambda - \beta)^2 - \delta\varepsilon\right)^{\frac{n_x-1}{2}}, \qquad P_{M_2}(\lambda) = \left(\lambda^2 - \alpha\gamma\right)^{\frac{n_t}{2}} \tag{2.20}
\]

The eigenvalues of $M_1$ are therefore $\beta \pm \sqrt{\delta\varepsilon}$, and those of $M_2$ are $\pm\sqrt{\alpha\gamma}$. In the specific cases of the Lax and Lax-Wendroff schemes, $\alpha\gamma = 0$, while $\beta \pm \sqrt{\delta\varepsilon} \ne 0$. Hence (2.14) has a unique solution, which accounts for the consistency of the given problem.

Corollary 2.8. In the specific case of the Leapfrog scheme, (2.14) has a unique solution if and only if $\tau \ne h/c$.
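The eigenvalue criterion used in this proof can also be checked numerically. The sketch below, added in this rewrite (the function name is hypothetical; NumPy assumed), tests whether the Sylvester operator $E \mapsto M_1 E + E M_2$ is nonsingular by comparing the spectra of $M_1$ and $-M_2$:

```python
import numpy as np

def sylvester_has_unique_solution(M1, M2, tol=1e-12):
    """M1 E + E M2 = F has a unique solution iff M1 and -M2
    share no eigenvalue."""
    ev1 = np.linalg.eigvals(M1)
    ev2 = np.linalg.eigvals(-M2)
    # pairwise distances between the two spectra
    gap = np.abs(ev1[:, None] - ev2[None, :])
    return gap.min() > tol
```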

3. Specific properties of the matrices $M_1$ and $M_2$.

3.0.3. Invertibility of the matrix $M_1$. Theorem 3.1. In the specific case of the Lax and Leapfrog schemes, $M_1$ is invertible.

Theorem 3.2. In the specific case of the Lax-Wendroff scheme, $M_1$ is invertible if and only if:

\[
\left(-\frac{1}{\tau} + \frac{c^2\,\tau}{h^2}\right)^{2} \ne \frac{(\sigma^2 - 1)\,c^2}{4\,h^2} \tag{3.1}
\]

Proof. The determinant $D_1$ of $M_1$ can be calculated as the determinant of a $\frac{n_x-1}{2}$ by $\frac{n_x-1}{2}$ diagonal block matrix:

\[
D_1 = \left(\beta^2 - \delta\varepsilon\right)^{\frac{n_x-1}{2}} \tag{3.2}
\]

3.0.4. Nilpotent components of the matrix $M_2$. Proposition 3.3. $M_2$ can be written as:

\[
M_2 = \alpha\, N^T + \gamma\, N \tag{3.3}
\]

where $N$ is the nilpotent matrix below, $N^T$ denoting its transpose:

\[
N = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & \ddots & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
\vdots & & & 0 & 1 \\
0 & \cdots & \cdots & 0 & 0
\end{pmatrix} \tag{3.4}
\]

Proposition 3.4. For either $\alpha = 0$ or $\gamma = 0$, $M_2$ will thus be nilpotent, of order $n_t$.

Proposition 3.5. For $\gamma = 0$, if $M_1$ is invertible, the solution at $t = n_t\,\tau$ can then be immediately determined.

Proof. In such a case, multiplying (2.2) on the right by $M_2^{\,n_t-1}$ leads to:

\[
M_1\, U\, M_2^{\,n_t-1} + U\, M_2^{\,n_t} = M_0\, M_2^{\,n_t-1} \tag{3.5}
\]

i.e., $M_2$ being nilpotent of order $n_t$:

\[
M_1\, U\, M_2^{\,n_t-1} = M_0\, M_2^{\,n_t-1} \tag{3.6}
\]

which leads to:

\[
U\, M_2^{\,n_t-1} = M_1^{-1}\, M_0\, M_2^{\,n_t-1} \tag{3.7}
\]

Due to:

\[
M_2^{\,n_t-1} = \alpha^{\,n_t-1} \begin{pmatrix}
0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 0 \\
1 & 0 & \cdots & 0
\end{pmatrix} \tag{3.8}
\]

we have:

\[
U\, M_2^{\,n_t-1} = \alpha^{\,n_t-1} \begin{pmatrix}
u_1^{n_t} & 0 & \cdots & 0 \\
u_2^{n_t} & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
u_{n_x-1}^{n_t} & 0 & \cdots & 0
\end{pmatrix} \tag{3.9}
\]

The solution at $t = n_t\,\tau$ can then be immediately determined.
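A quick numerical illustration of Propositions 3.3-3.5, added in this rewrite (NumPy assumed; the numerical values are hypothetical): for $\gamma = 0$, $M_2 = \alpha N^T$ is nilpotent of order $n_t$, and $M_2^{\,n_t-1}$ has a single nonzero entry, in its bottom-left corner.

```python
import numpy as np

nt, alpha = 6, 2.0
N = np.diag(np.ones(nt - 1), k=1)        # the nilpotent matrix N of (3.4)
M2 = alpha * N.T                          # gamma = 0 in (3.3)
P = np.linalg.matrix_power(M2, nt - 1)
assert np.allclose(np.linalg.matrix_power(M2, nt), 0.0)  # nilpotent, order nt
assert P[-1, 0] == alpha ** (nt - 1) and np.count_nonzero(P) == 1
```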

4. Minimization of the error.

4.1. Theory. Calculation yields:

\[
M_1^T M_1 = \mathrm{diag}\!\left(\begin{pmatrix} \beta^2+\delta^2 & \beta(\delta+\varepsilon) \\ \beta(\delta+\varepsilon) & \varepsilon^2+\beta^2 \end{pmatrix}, \ldots, \begin{pmatrix} \beta^2+\delta^2 & \beta(\delta+\varepsilon) \\ \beta(\delta+\varepsilon) & \varepsilon^2+\beta^2 \end{pmatrix}\right)
\]
\[
M_2^T M_2 = \mathrm{diag}\!\left(\begin{pmatrix} \gamma^2 & 0 \\ 0 & \alpha^2 \end{pmatrix}, \ldots, \begin{pmatrix} \gamma^2 & 0 \\ 0 & \alpha^2 \end{pmatrix}\right) \tag{4.1}
\]

The squared singular values of $M_1$ are the eigenvalues of the block matrix $\begin{pmatrix} \beta^2+\delta^2 & \beta(\delta+\varepsilon) \\ \beta(\delta+\varepsilon) & \varepsilon^2+\beta^2 \end{pmatrix}$, i.e.:

\[
\frac12\left(2\beta^2 + \delta^2 + \varepsilon^2 - (\delta+\varepsilon)\sqrt{4\beta^2 + \delta^2 + \varepsilon^2 - 2\delta\varepsilon}\right) \tag{4.2}
\]

of order $\frac{n_x-1}{2}$, and:

\[
\frac12\left(2\beta^2 + \delta^2 + \varepsilon^2 + (\delta+\varepsilon)\sqrt{4\beta^2 + \delta^2 + \varepsilon^2 - 2\delta\varepsilon}\right) \tag{4.3}
\]

of order $\frac{n_x-1}{2}$. The squared singular values of $M_2$ are $\alpha^2$, of order $\frac{n_t}{2}$, and $\gamma^2$, of order $\frac{n_t}{2}$.
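The expressions (4.2)-(4.3) can be verified on a single $2 \times 2$ block. The following sketch, added in this rewrite (NumPy assumed; the coefficient values are hypothetical), compares them with the squared singular values returned by an SVD:

```python
import numpy as np

beta, delta, eps = 0.5, -0.3, -0.7
block = np.array([[beta, delta], [eps, beta]])      # diagonal block of M1
s2 = np.linalg.svd(block, compute_uv=False) ** 2    # squared singular values
disc = (delta + eps) * np.sqrt(4*beta**2 + delta**2 + eps**2 - 2*delta*eps)
trace = 2*beta**2 + delta**2 + eps**2
formula = 0.5 * np.array([trace + disc, trace - disc])   # (4.2) and (4.3)
assert np.allclose(np.sort(s2), np.sort(formula))
```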

Consider the singular value decompositions of the matrices $M_1$ and $M_2$:

\[
U_1^T\, M_1\, V_1 = \begin{pmatrix} \widetilde M_1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad
U_2^T\, M_2\, V_2 = \begin{pmatrix} \widetilde M_2 & 0 \\ 0 & 0 \end{pmatrix} \tag{4.4}
\]

where $U_1$, $V_1$, $U_2$, $V_2$ are orthogonal matrices. $\widetilde M_1$, $\widetilde M_2$ are diagonal matrices, the diagonal terms of which are respectively the nonzero singular values of $M_1$ and $M_2$, i.e. the square roots of the nonzero eigenvalues of the symmetric matrices $M_1^T M_1$, $M_2^T M_2$.

Multiplying (2.13) on the left by $U_1^T$ and on the right by $V_2$ yields:

\[
U_1^T\, M_1\, E\, V_2 + U_1^T\, E\, M_2\, V_2 = U_1^T\, F\, V_2 \tag{4.5}
\]

which can also be written as:

\[
\left(U_1^T M_1 V_1\right)\left(V_1^T E\, V_2\right) + \left(U_1^T E\, U_2\right)\left(U_2^T M_2 V_2\right) = U_1^T\, F\, V_2 \tag{4.6}
\]

Set:

\[
V_1^T\, E\, V_2 = \begin{pmatrix} \widetilde E_{11} & \widetilde E_{12} \\ \widetilde E_{21} & \widetilde E_{22} \end{pmatrix}, \qquad
U_1^T\, E\, U_2 = \begin{pmatrix} \widetilde{\widetilde E}_{11} & \widetilde{\widetilde E}_{12} \\ \widetilde{\widetilde E}_{21} & \widetilde{\widetilde E}_{22} \end{pmatrix} \tag{4.7}
\]

\[
U_1^T\, F\, V_2 = \begin{pmatrix} \widetilde F_{11} & \widetilde F_{12} \\ \widetilde F_{21} & \widetilde F_{22} \end{pmatrix} \tag{4.8}
\]

We have thus:

\[
\begin{pmatrix} \widetilde M_1 \widetilde E_{11} & \widetilde M_1 \widetilde E_{12} \\ 0 & 0 \end{pmatrix}
+ \begin{pmatrix} \widetilde{\widetilde E}_{11} \widetilde M_2 & 0 \\ \widetilde{\widetilde E}_{21} \widetilde M_2 & 0 \end{pmatrix}
= \begin{pmatrix} \widetilde F_{11} & \widetilde F_{12} \\ \widetilde F_{21} & \widetilde F_{22} \end{pmatrix} \tag{4.9}
\]

It yields:

\[
\begin{cases}
\widetilde M_1 \widetilde E_{11} + \widetilde{\widetilde E}_{11} \widetilde M_2 = \widetilde F_{11} \\
\widetilde M_1 \widetilde E_{12} = \widetilde F_{12} \\
\widetilde{\widetilde E}_{21} \widetilde M_2 = \widetilde F_{21}
\end{cases} \tag{4.10}
\]

One easily deduces:

\[
\begin{cases}
\widetilde E_{12} = \widetilde M_1^{-1}\, \widetilde F_{12} \\
\widetilde{\widetilde E}_{21} = \widetilde F_{21}\, \widetilde M_2^{-1}
\end{cases} \tag{4.11}
\]

The problem is then the determination of the $\widetilde E_{11}$ and $\widetilde{\widetilde E}_{11}$ satisfying:

\[
\widetilde M_1 \widetilde E_{11} + \widetilde{\widetilde E}_{11} \widetilde M_2 = \widetilde F_{11} \tag{4.12}
\]

Denote respectively by $\tilde e_{ij}$, $\tilde{\tilde e}_{ij}$ the components of the matrices $\widetilde E_{11}$, $\widetilde{\widetilde E}_{11}$. The problem (4.12) uncouples into the independent problems:

\[
\text{minimize} \; \tilde e_{ij}^{\,2} + \tilde{\tilde e}_{ij}^{\,2} \tag{4.13}
\]

under the constraint:

\[
\left(\widetilde M_1\right)_{ii} \tilde e_{ij} + \left(\widetilde M_2\right)_{jj} \tilde{\tilde e}_{ij} = \left(\widetilde F_{11}\right)_{ij} \tag{4.14}
\]

This latter problem has the solution:

\[
\tilde e_{ij} = \frac{\left(\widetilde M_1\right)_{ii} \left(\widetilde F_{11}\right)_{ij}}{\left(\widetilde M_1\right)_{ii}^{2} + \left(\widetilde M_2\right)_{jj}^{2}}, \qquad
\tilde{\tilde e}_{ij} = \frac{\left(\widetilde M_2\right)_{jj} \left(\widetilde F_{11}\right)_{ij}}{\left(\widetilde M_1\right)_{ii}^{2} + \left(\widetilde M_2\right)_{jj}^{2}} \tag{4.15}
\]

The minimum norm solution of (2.13) will then be obtained when the norm of the matrix $\widetilde F_{11}$ is minimum. In the following, the Euclidean norm will be considered.
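The componentwise solution (4.15) translates directly into code. A minimal sketch added in this rewrite (NumPy assumed; names hypothetical), where m1_diag and m2_diag hold the diagonals of $\widetilde M_1$ and $\widetilde M_2$:

```python
import numpy as np

def minimum_norm_components(m1_diag, m2_diag, F11):
    """Solve (4.13)-(4.14) componentwise, as in (4.15)."""
    denom = m1_diag[:, None] ** 2 + m2_diag[None, :] ** 2
    E11 = m1_diag[:, None] * F11 / denom    # components of E~_11
    EE11 = m2_diag[None, :] * F11 / denom   # components of E~~_11
    return E11, EE11
```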

Due to (4.8):

\[
\|\widetilde F_{11}\| \le \|\widetilde F\| \le \|U_1\|\, \|F\|\, \|V_2\| \le \|U_1\|\, \|V_2\|\, \|M_1 U_{exact} + U_{exact} M_2 - M_0\| \tag{4.16}
\]

$U_1$ and $V_2$ being orthogonal matrices, respectively $(n_x-1) \times (n_x-1)$ and $n_t \times n_t$, we have:

\[
\|U_1\|^2 = n_x - 1, \qquad \|V_2\|^2 = n_t \tag{4.17}
\]

Also:

\[
\|M_1\|^2 = \frac{n_x-1}{2}\left(2\beta^2 + \delta^2 + \varepsilon^2\right), \qquad \|M_2\|^2 = \frac{n_t}{2}\left(\alpha^2 + \gamma^2\right) \tag{4.18}
\]

The norm of $M_0$ is obtained thanks to relation (2.4).

This results in:

\[
\|\widetilde F_{11}\| \le \sqrt{n_t\,(n_x - 1)} \left( \|U_{exact}\| \left( \sqrt{\frac{n_x-1}{2}}\, \sqrt{2\beta^2 + \delta^2 + \varepsilon^2} + \sqrt{\frac{n_t}{2}}\, \sqrt{\alpha^2 + \gamma^2} \right) + \|M_0\| \right) \tag{4.19}
\]

$\|\widetilde F_{11}\|$ can be minimized through the minimization of the right-hand side of (4.19), which is a function of the scheme parameters.
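A direct transcription of the bound (4.19), added in this rewrite (the function name and argument list are hypothetical); it can be evaluated numerically as a function of the scheme parameters:

```python
import math

def f11_norm_bound(alpha, beta, gamma, delta, eps, nx, nt,
                   u_exact_norm, m0_norm):
    """Right-hand side of (4.19)."""
    scheme_part = (math.sqrt((nx - 1) / 2)
                   * math.sqrt(2*beta**2 + delta**2 + eps**2)
                   + math.sqrt(nt / 2) * math.sqrt(alpha**2 + gamma**2))
    return math.sqrt(nt * (nx - 1)) * (u_exact_norm * scheme_part + m0_norm)
```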

4.2. Numerical example: the specific case of the Lax scheme. For the Lax scheme, $\gamma = 0$. The remaining coefficients (see Table 1.1) can be normalized through the following change of variables:

\[
\bar\alpha = h\,\alpha, \qquad \bar\beta = h\,\beta, \qquad \bar\delta = h\,\delta, \qquad \bar\varepsilon = h\,\varepsilon \tag{4.20}
\]

Set:

\[
\bar M_0 = h\, M_0 \tag{4.21}
\]

Thus:

\[
\|\bar M_0\|^2 = \bar\delta^2 \sum_{n=1}^{n_t} \left(u_{n_x}^n\right)^2 + \bar\varepsilon^2 \sum_{n=1}^{n_t} \left(u_0^n\right)^2 \tag{4.22}
\]

Advect a sinusoidal signal:

\[
u = \cos\left[\frac{2\pi}{\lambda}\,(x - c\,t)\right] \tag{4.23}
\]

through the Lax scheme, with Dirichlet boundary conditions. The right-hand side of (4.19) is then:

\[
\sqrt{\left(\frac12 + \frac{1}{2\sigma}\right)^{2} n_t\, u_0^2 + \left(\frac12 - \frac{1}{2\sigma}\right)^{2} n_t\, u_L^2}
+ \sqrt{\frac{n_x-1}{2}}\, \sqrt{\left(\frac12 + \frac{1}{2\sigma}\right)^{2} + \left(\frac12 - \frac{1}{2\sigma}\right)^{2}}
+ \frac{\sqrt{n_t}}{\sqrt{2}\,\sigma} \tag{4.24}
\]

where $u_0$, $u_L$ respectively denote the Dirichlet boundary values at $x = 0$, $x = L$. It is minimal for the admissible value $\sigma = 1$.

The value of the $L^2$ norm of the error, for two significant values of the CFL number (Case 1: $\sigma = 0.9$; Case 2: $\sigma = 0.7$), is displayed in Figure 4.1. The error curve corresponding to the value of $\sigma$ closest to 1 is the minimal one.

[Figure 4.1 appears here in the original.]

Fig. 4.1. Value of the $L^2$ norm of the error for different values of the CFL number (Case 1: $\sigma = 0.9$; Case 2: $\sigma = 0.7$).
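The experiment of this section can be reproduced with the sketch below, added in this rewrite (NumPy assumed; the parameter values are hypothetical): it advects the sinusoidal signal (4.23) with the Lax scheme under Dirichlet boundary conditions and records the $L^2$ error history for the two CFL numbers of Figure 4.1.

```python
import numpy as np

def lax_error_history(cfl, c=1.0, lam=1.0, L=1.0, T=1.0, nx=100):
    """L2 error of the Lax scheme against the exact travelling wave."""
    h = L / nx
    tau = cfl * h / c
    nt = int(T / tau)
    x = np.linspace(0.0, L, nx + 1)
    u = np.cos(2*np.pi/lam * x)                  # initial condition, t = 0
    errors = []
    for n in range(1, nt + 1):
        exact = np.cos(2*np.pi/lam * (x - c * n * tau))
        new = np.empty_like(u)
        # Lax scheme on the interior points
        new[1:-1] = 0.5*(u[2:] + u[:-2]) - 0.5*cfl*(u[2:] - u[:-2])
        new[0], new[-1] = exact[0], exact[-1]    # Dirichlet boundary values
        u = new
        errors.append(np.sqrt(h * np.sum((u - exact) ** 2)))
    return np.array(errors)

# The error history for cfl = 0.9 stays below the one for cfl = 0.7,
# in agreement with Figure 4.1.
err_09, err_07 = lax_error_history(0.9), lax_error_history(0.7)
```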

5. Conclusion. Thanks to the above results, we here propose:

1. to study the intrinsic properties of various schemes through those of the related matrices $M_1$ and $M_2$;

2. to optimize finite difference problems through minimization of the symbolic expression of the error as a function of the scheme parameters.


REFERENCES

[1] P. Van Dooren, Reduced order observers: A new algorithm and proof, Systems Control Lett., 4 (1984), pp. 243-251.

[2] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, PA (1994).

[3] H. R. Gail, S. L. Hantler and B. A. Taylor, Spectral Analysis of M/G/1 and G/M/1 type Markov chains, Adv. Appl. Probab., 28 (1996), pp. 114-165.

[4] D. L. Boley, Computing the Controllability/Observability Decomposition of a Linear Time-Invariant Dynamic System, A Numerical Approach, PhD thesis, Report STAN-CS-81-860, Dept. Comput. Sci., Stanford University, 1981.

[5] A. S. Deif, N. P. Seif, and S. A. Hussein, Sylvester's equation: accuracy and computational stability, Journal of Computational and Applied Mathematics, 61 (1995), pp. 1-11.

[6] J. Z. Hearon, Nonsingular solutions of TA − BT = C, Linear Algebra and its Applications, 16 (1977), pp. 57-63.

[7] C. H. Huo, Efficient methods for solving a nonsymmetric algebraic equation arising in stochastic fluid models, Journal of Computational and Applied Mathematics, pp. 1-21 (2004).

[8] C. C. Tsui, A complete analytical solution to the equation TA − FT = LC and its applications, IEEE Trans. Automat. Control, AC-32 (1987), pp. 742-744.

[9] B. Zhou and G. R. Duan, An explicit solution to the matrix equation AX − XF = BY, Linear Algebra and its Applications, 402 (2005), pp. 345-366.

[10] G. R. Duan, Solution to matrix equation AV + BW = EVF and eigenstructure assignment for descriptor systems, Automatica, 28 (1992), pp. 639-643.

[11] G. R. Duan, On the solution to Sylvester matrix equation AV + BW = EVF and eigenstructure assignment for descriptor systems, IEEE Trans. Automat. Control, AC-41 (1996), pp. 276-280.

[12] P. Kirrinnis, Fast algorithms for the Sylvester equation AX − XB^T = C, Theoretical Computer Science, 259 (2000), pp. 623-638.

[13] M. Konstantinov, V. Mehrmann, and P. Petkov, On properties of Sylvester and Lyapunov operators, Linear Algebra and its Applications, 312 (2000), pp. 35-71.

[14] A. Varga, A numerically reliable approach to robust pole assignment for descriptor systems, Future Generation Computer Systems, 19 (2003), pp. 1221-1230.

[15] G. B. Whitham, Linear and Nonlinear Waves, Wiley-Interscience (1974).

[16] S. Wolfram, The Mathematica Book, Cambridge University Press (1999).
