HAL Id: hal-00081010
https://hal.archives-ouvertes.fr/hal-00081010v2
Preprint submitted on 3 Aug 2006
FINITE DIFFERENCE SCHEMES AS A MATRIX EQUATION
Claire David
To cite this version:
Claire David. FINITE DIFFERENCE SCHEMES AS A MATRIX EQUATION. 2006. hal-00081010v2
CLAIRE DAVID∗

Abstract. Finite difference schemes are here solved by means of a linear matrix equation. The theoretical study of the related algebraic system is presented, and enables us to minimize the error due to a finite difference approximation.
Key words. Finite difference schemes, Sylvester equation

AMS subject classifications. 15A15, 15A09, 15A23
1. Introduction: Scheme classes. Finite difference schemes used to approximate linear differential equations are usually solved by means of a recursive computation. We here propose a completely different approach, based on an equivalent matrix equation.
Consider the transport equation:

\frac{\partial u}{\partial t} + c \, \frac{\partial u}{\partial x} = 0, \quad x \in [0, L], \; t \in [0, T]    (1.1)

with the initial condition u(x, t = 0) = u_0(x).
Proposition 1.1. A finite difference scheme for this equation can be written under the form:

\alpha \, u_i^{n+1} + \beta \, u_i^n + \gamma \, u_i^{n-1} + \delta \, u_{i+1}^n + \varepsilon \, u_{i-1}^n + \zeta \, u_{i+1}^{n+1} + \eta \, u_{i-1}^{n-1} + \theta \, u_{i-1}^{n+1} + \vartheta \, u_{i+1}^{n-1} = 0    (1.2)

where:

u_l^m = u(l h, m \tau)    (1.3)

for l ∈ {i − 1, i, i + 1}, m ∈ {n − 1, n, n + 1}, i = 0, ..., n_x, n = 0, ..., n_t, with h and τ denoting respectively the mesh size and the time step (L = n_x h, T = n_t τ).
The Courant-Friedrichs-Lewy (CFL) number is defined as σ = c τ / h.
A numerical scheme is specified by selecting appropriate values of the coefficients α, β, γ, δ, ε, ζ, η, θ and ϑ in equation (1.2). The values corresponding to the numerical schemes retained for the present work are given in Table 1.1.
The number of time steps will be denoted n_t, the number of space steps n_x. In general, n_x ≫ n_t.
∗Université Pierre et Marie Curie-Paris 6, Laboratoire de Modélisation en Mécanique, UMR CNRS 7607, Boîte courrier n°162, 4 place Jussieu, 75252 Paris cedex 05, France (david@lmm.jussieu.fr).
Table 1.1
Numerical scheme coefficients.

Name            α              β               γ         δ                  ε                   ζ   η   θ         ϑ
Leapfrog        1/(2τ)         0               −1/(2τ)   c/(2h)             −c/(2h)             0   0   0         0
Lax             1/τ            0               0         −1/(2τ) + c/(2h)   −1/(2τ) − c/(2h)    0   0   0         0
Lax-Wendroff    1/τ            −1/τ + c²τ/h²   0         (1 − σ) c/(2h)     −(1 + σ) c/(2h)     0   0   0         0
Crank-Nicolson  1/τ + c/(2h)   −1/τ + c/(2h)   0         0                  −c/(2h)             0   0   −c/(2h)   0
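An elementary sanity check on the explicit rows of Table 1.1 is that each coefficient set annihilates constant solutions of (1.1): the nine coefficients of (1.2) must sum to zero. A minimal sketch in Python (the values of c, h and τ are hypothetical):

```python
import numpy as np

c, h, tau = 1.0, 0.1, 0.05        # hypothetical values of c, h, tau
sigma = c * tau / h               # CFL number

# coefficient rows of Table 1.1, in the order (alpha, ..., vartheta) of eq. (1.2)
schemes = {
    "leapfrog":     [1/(2*tau), 0, -1/(2*tau), c/(2*h), -c/(2*h), 0, 0, 0, 0],
    "lax":          [1/tau, 0, 0, -1/(2*tau) + c/(2*h), -1/(2*tau) - c/(2*h), 0, 0, 0, 0],
    "lax_wendroff": [1/tau, -1/tau + c**2*tau/h**2, 0,
                     (1 - sigma)*c/(2*h), -(1 + sigma)*c/(2*h), 0, 0, 0, 0],
}
# a scheme consistent with (1.1) must vanish on constant fields:
# the nine coefficients sum to zero
for name, coeffs in schemes.items():
    assert abs(sum(coeffs)) < 1e-12, name
```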
The paper is organized as follows. The equivalent matrix equation is derived in section 2. Specific properties of the matrices involved are presented in section 3. A way of minimizing the error due to the finite difference approximation is presented in section 4.
2. The Sylvester equation.
2.1. Matricial form of the finite differences problem. Let us introduce the rectangular matrix defined by:

U = [u_i^n]_{1 \le i \le n_x - 1, \; 1 \le n \le n_t}    (2.1)
Theorem 2.1. The problem (1.2) can be written under the following matricial form:

M_1 U + U M_2 + L(U) = M_0    (2.2)

where M_1 and M_2 are square matrices, respectively (n_x − 1) × (n_x − 1) and n_t × n_t, given by:

M_1 = \begin{pmatrix} \beta & \delta & 0 & \dots & 0 \\ \varepsilon & \beta & \delta & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & \varepsilon & \beta & \delta \\ 0 & \dots & 0 & \varepsilon & \beta \end{pmatrix}, \quad M_2 = \begin{pmatrix} 0 & \gamma & 0 & \dots & 0 \\ \alpha & 0 & \gamma & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & \alpha & 0 & \gamma \\ 0 & \dots & 0 & \alpha & 0 \end{pmatrix}    (2.3)
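The matrices of (2.3) are banded and can be assembled directly from the scheme coefficients; a minimal sketch (the sizes and coefficient values below are hypothetical):

```python
import numpy as np

def scheme_matrices(alpha, beta, gamma, delta, eps, nx, nt):
    """Assemble M1 ((nx-1) x (nx-1)) and M2 (nt x nt) of eq. (2.3)."""
    M1 = (beta * np.eye(nx - 1)
          + delta * np.diag(np.ones(nx - 2), 1)    # superdiagonal: delta
          + eps * np.diag(np.ones(nx - 2), -1))    # subdiagonal: eps
    M2 = (gamma * np.diag(np.ones(nt - 1), 1)      # superdiagonal: gamma
          + alpha * np.diag(np.ones(nt - 1), -1))  # subdiagonal: alpha
    return M1, M2

M1, M2 = scheme_matrices(alpha=1.0, beta=2.0, gamma=0.5, delta=3.0, eps=4.0, nx=6, nt=4)
assert M1.shape == (5, 5) and M2.shape == (4, 4)
assert M1[0, 0] == 2.0 and M1[0, 1] == 3.0 and M1[1, 0] == 4.0
assert M2[0, 0] == 0.0 and M2[0, 1] == 0.5 and M2[1, 0] == 1.0
```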
the matrix M_0 being given by:

M_0 = \begin{pmatrix}
-\gamma u_1^0 - \varepsilon u_0^1 - \eta u_0^0 - \theta u_0^2 - \vartheta u_2^0 & -\varepsilon u_0^2 - \eta u_0^1 - \theta u_0^3 & \dots & -\varepsilon u_0^{n_t} - \eta u_0^{n_t - 1} \\
-\gamma u_2^0 - \eta u_1^0 - \vartheta u_3^0 & 0 & \dots & 0 \\
\vdots & \vdots & & \vdots \\
-\gamma u_{n_x - 2}^0 - \eta u_{n_x - 3}^0 - \vartheta u_{n_x - 1}^0 & 0 & \dots & 0 \\
-\gamma u_{n_x - 1}^0 - \delta u_{n_x}^1 - \eta u_{n_x - 2}^0 - \zeta u_{n_x}^2 - \vartheta u_{n_x}^0 & -\delta u_{n_x}^2 - \zeta u_{n_x}^3 - \vartheta u_{n_x}^1 & \dots & -\delta u_{n_x}^{n_t} - \vartheta u_{n_x}^{n_t - 1}
\end{pmatrix}    (2.4)
and where L is a linear matricial operator which can be written as:

L = L_1 + L_2 + L_3 + L_4    (2.5)

where L_1, L_2, L_3 and L_4 act on U through shifted copies of its entries:

(L_1(U))_{i,n} = \zeta \, u_{i+1}^{n+1} for 1 \le i \le n_x - 2, 1 \le n \le n_t - 1, and 0 otherwise;
(L_2(U))_{i,n} = \eta \, u_{i-1}^{n-1} for 2 \le i \le n_x - 1, 2 \le n \le n_t, and 0 otherwise;    (2.6)

(L_3(U))_{i,n} = \theta \, u_{i-1}^{n+1} for 2 \le i \le n_x - 1, 1 \le n \le n_t - 1, and 0 otherwise;
(L_4(U))_{i,n} = \vartheta \, u_{i+1}^{n-1} for 1 \le i \le n_x - 2, 2 \le n \le n_t, and 0 otherwise.    (2.7)
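The four operators in (2.6)-(2.7) are shift operators: each moves U by one row and/or one column, with zero padding on the vacated border. A sketch for L_1, which can equivalently be realized by pre- and post-multiplication with nilpotent shift matrices (the sizes and the value of ζ are hypothetical):

```python
import numpy as np

nxm1, nt = 4, 5                       # U is (nx-1) x nt, sizes hypothetical
U = np.arange(nxm1 * nt, dtype=float).reshape(nxm1, nt)
zeta = 2.0

# entrywise definition: (L1(U))_{i,n} = zeta * u_{i+1}^{n+1}, zero on the borders
L1 = np.zeros_like(U)
L1[:-1, :-1] = zeta * U[1:, 1:]

# equivalent matrix form: shift rows up (S on the left) and columns left (T on the right)
S = np.diag(np.ones(nxm1 - 1), 1)     # row shift: (S U)_{i,:} = U_{i+1,:}
T = np.diag(np.ones(nt - 1), -1)      # column shift: (U T)_{:,n} = U_{:,n+1}
assert np.allclose(L1, zeta * S @ U @ T)
```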
Proposition 2.2. The second member matrix M_0 bears the initial conditions, given for the specific value n = 0, which correspond to the initialization process of the computing loops, and the boundary conditions, given for the specific values i = 0, i = n_x.
Denote by u_exact the exact solution of (1.1). The corresponding matrix U_exact will be:

U_{exact} = [(U_{exact})_i^n]_{1 \le i \le n_x - 1, \; 1 \le n \le n_t}    (2.8)

where:

(U_{exact})_i^n = u_{exact}(x_i, t_n)    (2.9)

with x_i = i h, t_n = n τ.
Definition 2.3. We will call error matrix the matrix defined by:

E = U − U_{exact}    (2.10)
Consider the matrix F defined by:

F = M_1 U_{exact} + U_{exact} M_2 + L(U_{exact}) − M_0    (2.11)
Proposition 2.4. The error matrix E satisfies:

M_1 E + E M_2 + L(E) = F    (2.12)
2.2. The matrix equation.

2.2.1. Theoretical formulation. Theorem 2.5. Minimizing the error due to the approximation induced by the numerical scheme is equivalent to minimizing the norm of the matrices E satisfying (2.12).

Note: Since the linear matricial operator L appears only in the Crank-Nicolson scheme, we will restrict our study to the case L = 0. The generalization to the case L ≠ 0 can easily be deduced.
Proposition 2.6. The problem is then the determination of the minimum norm solution of:

M_1 E + E M_2 = F    (2.13)

which is a specific form of the Sylvester equation:

A X + X B = C    (2.14)

where A and B are respectively m × m and n × n matrices, and C and X are m × n matrices.
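For completeness, a numerical solution of (2.14) can be obtained with `scipy.linalg.solve_sylvester`, which implements the Schur-based Bartels-Stewart algorithm mentioned below; a minimal sketch with random data (random A and B almost surely have disjoint spectra, so a unique solution exists):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))        # m x m
B = rng.standard_normal((6, 6))        # n x n
C = rng.standard_normal((4, 6))        # m x n

X = solve_sylvester(A, B, C)           # solves A X + X B = C
assert np.allclose(A @ X + X @ B, C)   # residual check
```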
The solving of the Sylvester equation is generally based on the Schur decomposition: for a given square n × n matrix A, n being an even number of the form n = 2p, there exists a unitary matrix U and an upper triangular block matrix T such that:

A = U^* T U    (2.15)

where U^* denotes the conjugate transpose of U. The diagonal blocks of the matrix T correspond to the complex eigenvalues λ_i of A:

T = \begin{pmatrix} T_1 & 0 & \dots & \dots & 0 \\ 0 & \ddots & \ddots & & \vdots \\ \vdots & \ddots & T_i & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & \dots & \dots & 0 & T_p \end{pmatrix}    (2.16)

where the block matrices T_i, i = 1, ..., p, are given by:

T_i = \begin{pmatrix} \mathrm{Re}[\lambda_i] & \mathrm{Im}[\lambda_i] \\ -\mathrm{Im}[\lambda_i] & \mathrm{Re}[\lambda_i] \end{pmatrix}    (2.17)

Re denoting the real part of a complex number, and Im the imaginary one.
Due to this decomposition, this way of solving the Sylvester equation requires the dimensions of the matrices to be even numbers. We will therefore, in the following, restrict our study to the case where n_x − 1 and n_t are even numbers. It is interesting to note that, the Schur decomposition being more stable for higher order matrices, it perfectly fits finite difference problems.
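The real Schur form described in (2.15)-(2.17) is available as `scipy.linalg.schur` with `output='real'`, which returns an orthogonal Z and a quasi upper triangular T (2 × 2 diagonal blocks for complex conjugate eigenvalue pairs) such that A = Z T Zᵀ; a sketch with random data:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))

# real Schur decomposition: A = Z T Z^T, T quasi upper triangular
T, Z = schur(A, output='real')
assert np.allclose(Z @ T @ Z.T, A)
assert np.allclose(Z.T @ Z, np.eye(6))   # Z is orthogonal
```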
Complete parametric solutions of the generalized Sylvester equation (2.13) are given in [2], [3].
As for the determination of the solution of the Sylvester equation, it is a major topic in control theory, and has been the subject of numerous works (see [1], [6], [8], [9], [10], [11], [12]).
In [1], the method is based on the reduction of the observable pair (A, C) to an observer-Hessenberg pair (H, D), H being a block upper Hessenberg matrix. The reduction to the observer-Hessenberg form (H, D) is achieved by means of the staircase algorithm (see [4], ...).
In [9], in the specific case where B is a companion form matrix, the authors propose a very neat general complete parametric solution, which is expressed in terms of the controllability of the matrix pair (A, B), a symmetric matrix operator, and a parametric matrix in Hankel form.
We recall that a companion form, or Frobenius, matrix is one of the following kind:

B = \begin{pmatrix} 0 & \dots & \dots & 0 & -b_0 \\ 1 & 0 & \dots & 0 & -b_1 \\ 0 & 1 & \ddots & \vdots & \vdots \\ \vdots & \ddots & \ddots & 0 & \vdots \\ 0 & \dots & 0 & 1 & -b_{p-1} \end{pmatrix}    (2.18)
These results can be generalized through matrix block decomposition to a block companion form matrix:

M_2 = \begin{pmatrix} M_{2B_1} & 0 & \dots & \dots & 0 \\ 0 & M_{2B_2} & 0 & \dots & 0 \\ \vdots & 0 & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & \dots & \dots & 0 & M_{2B_k} \end{pmatrix}    (2.19)

the M_{2B_p}, 1 ≤ p ≤ k, being companion form matrices.
Another method is presented in [14], where the determination of the minimum-norm solution of a Sylvester equation is specifically developed. The accuracy and computational stability of the solutions are examined in [5].
2.2.2. Existence condition of the solution. Theorem 2.7. In the specific cases of the Lax and Lax-Wendroff schemes, (2.14) has a unique solution.

Proof. (2.14) has a unique solution if and only if A and −B have no common eigenvalues.

The characteristic polynomials P_{M_1}, P_{M_2} of M_1 and M_2 can be calculated as the determinants of block diagonal matrices with, respectively, (n_x − 1)/2 and n_t/2 diagonal 2 × 2 blocks:

P_{M_1}(\lambda) = ((\lambda - \beta)^2 - \delta \varepsilon)^{\frac{n_x - 1}{2}}, \quad P_{M_2}(\lambda) = (\lambda^2 - \alpha \gamma)^{\frac{n_t}{2}}    (2.20)

In the specific cases of the Lax and Lax-Wendroff schemes, αγ = 0 while β ± √(δε) ≠ 0. Hence, (2.12) has a unique solution, which accounts for the consistency of the given problem.
Corollary 2.8. In the specific case of the Leapfrog scheme, (2.14) has a unique solution if and only if τ ≠ h/c.
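The unique-solvability criterion can also be checked directly on the assembled matrices: the spectra of M_1 and −M_2 must be disjoint. A sketch for the Lax scheme (the grid parameters are hypothetical; for Lax, γ = 0, so M_2 is nilpotent and its eigenvalues all vanish):

```python
import numpy as np

c, h, tau = 1.0, 0.1, 0.07            # hypothetical values, sigma = 0.7 < 1
nx, nt = 11, 6                        # nx - 1 = 10 interior points

# Lax coefficients (Table 1.1)
alpha = 1 / tau
delta = -1/(2*tau) + c/(2*h)
eps = -1/(2*tau) - c/(2*h)

M1 = delta * np.diag(np.ones(nx - 2), 1) + eps * np.diag(np.ones(nx - 2), -1)  # beta = 0
M2 = alpha * np.diag(np.ones(nt - 1), -1)                                       # gamma = 0

ev1 = np.linalg.eigvals(M1)
ev2 = np.linalg.eigvals(-M2)
# unique solvability of M1 E + E M2 = F: spectra of M1 and -M2 are disjoint
gap = min(abs(l1 - l2) for l1 in ev1 for l2 in ev2)
assert gap > 1e-8
```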
3. Specific properties of the matrices M_1 and M_2.

3.0.3. Invertibility of the matrix M_1. Theorem 3.1. In the specific cases of the Lax and Leapfrog schemes, M_1 is invertible.
Theorem 3.2. In the specific case of the Lax-Wendroff scheme, M_1 is invertible if and only if:

\left( -\frac{1}{\tau} + \frac{c^2 \tau}{h^2} \right)^2 \ne \frac{(\sigma^2 - 1) c^2}{4 h^2}    (3.1)

Proof. The determinant D_1 of M_1 can be calculated as the determinant of a block diagonal matrix with (n_x − 1)/2 diagonal 2 × 2 blocks:

D_1 = (\beta^2 - \delta \varepsilon)^{\frac{n_x - 1}{2}}    (3.2)
3.0.4. Nilpotent components of the matrix M_2. Proposition 3.3. M_2 can be written as:

M_2 = \alpha \, {}^t N + \gamma \, N    (3.3)

where N is the nilpotent matrix below, and {}^t N denotes its transpose:

N = \begin{pmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ \vdots & & & 0 & 1 \\ 0 & \dots & \dots & \dots & 0 \end{pmatrix}    (3.4)
Proposition 3.4. For either α = 0 or γ = 0, M_2 will thus be nilpotent, of order n_t.

Proposition 3.5. For γ = 0, if M_1 is invertible, the solution at t = n_t τ can then be immediately determined.
Proof. In such a case, multiplying (2.2) on the right side by M_2^{n_t - 1} leads to:

M_1 U M_2^{n_t - 1} + U M_2^{n_t} = M_0 M_2^{n_t - 1}    (3.5)

i.e., since M_2^{n_t} = 0:

M_1 U M_2^{n_t - 1} = M_0 M_2^{n_t - 1}    (3.6)

which leads to:

U M_2^{n_t - 1} = M_1^{-1} M_0 M_2^{n_t - 1}    (3.7)
Due to:

M_2^{n_t - 1} = \alpha^{n_t - 1} \begin{pmatrix} 0 & 0 & \dots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \dots & 0 \\ 1 & 0 & \dots & 0 \end{pmatrix}    (3.8)

we have:

U M_2^{n_t - 1} = \alpha^{n_t - 1} \begin{pmatrix} u_1^{n_t} & 0 & \dots & 0 \\ u_2^{n_t} & 0 & \dots & 0 \\ \vdots & \vdots & & \vdots \\ u_{n_x - 1}^{n_t} & 0 & \dots & 0 \end{pmatrix}    (3.9)

The solution at t = n_t τ can then be immediately determined.
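The mechanism of (3.8)-(3.9) is easy to verify numerically: right-multiplying by M_2^{n_t - 1} projects out the last column of U, i.e. the solution at the final time. A sketch (the values of n_t, α and U are hypothetical):

```python
import numpy as np

nt, alpha = 5, 2.0                        # hypothetical values
N = np.diag(np.ones(nt - 1), 1)           # nilpotent shift matrix of eq. (3.4)
M2 = alpha * N.T                          # gamma = 0 case: M2 = alpha * N^T

# M2 is nilpotent of order nt
assert np.allclose(np.linalg.matrix_power(M2, nt), 0)

U = np.arange(1.0, 1.0 + 4 * nt).reshape(4, nt)   # hypothetical 4 x nt solution matrix
P = U @ np.linalg.matrix_power(M2, nt - 1)
# the last column of U (the solution at t = nt * tau) lands in the first
# column of P, scaled by alpha^(nt-1); all other columns vanish
assert np.allclose(P[:, 0], alpha**(nt - 1) * U[:, -1])
assert np.allclose(P[:, 1:], 0)
```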
4. Minimization of the error.

4.1. Theory. Calculation yields:

{}^t M_1 M_1 = \mathrm{diag} \left( \begin{pmatrix} \beta^2 + \delta^2 & \beta(\delta + \varepsilon) \\ \beta(\delta + \varepsilon) & \varepsilon^2 + \beta^2 \end{pmatrix}, \dots, \begin{pmatrix} \beta^2 + \delta^2 & \beta(\delta + \varepsilon) \\ \beta(\delta + \varepsilon) & \varepsilon^2 + \beta^2 \end{pmatrix} \right), \quad {}^t M_2 M_2 = \mathrm{diag} \left( \begin{pmatrix} \gamma^2 & 0 \\ 0 & \alpha^2 \end{pmatrix}, \dots, \begin{pmatrix} \gamma^2 & 0 \\ 0 & \alpha^2 \end{pmatrix} \right)    (4.1)
The squared singular values of M_1 are the eigenvalues of the 2 × 2 block above, i.e.:

\frac{1}{2} \left( 2\beta^2 + \delta^2 + \varepsilon^2 - (\delta + \varepsilon) \sqrt{4\beta^2 + \delta^2 + \varepsilon^2 - 2\delta\varepsilon} \right)    (4.2)

of multiplicity (n_x − 1)/2, and:

\frac{1}{2} \left( 2\beta^2 + \delta^2 + \varepsilon^2 + (\delta + \varepsilon) \sqrt{4\beta^2 + \delta^2 + \varepsilon^2 - 2\delta\varepsilon} \right)    (4.3)

of multiplicity (n_x − 1)/2. The squared singular values of M_2 are α^2, of multiplicity n_t/2, and γ^2, of multiplicity n_t/2.
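The closed forms (4.2)-(4.3) can be checked against a direct SVD of the 2 × 2 block of (4.1); a sketch with hypothetical coefficient values (chosen with δ + ε > 0 so that the signed square root term matches):

```python
import numpy as np

beta, delta, eps = 1.0, 0.8, 0.3          # hypothetical coefficients, delta + eps > 0
B = np.array([[beta, delta],
              [eps, beta]])               # the 2 x 2 block of the M1 decomposition

disc = (delta + eps) * np.sqrt(4*beta**2 + delta**2 + eps**2 - 2*delta*eps)
claimed = np.array([(2*beta**2 + delta**2 + eps**2 + disc) / 2,
                    (2*beta**2 + delta**2 + eps**2 - disc) / 2])  # eqs. (4.3), (4.2)

sv = np.linalg.svd(B, compute_uv=False)   # singular values, in descending order
assert np.allclose(sv**2, claimed)        # squared singular values match the formulas
```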
Consider the singular value decompositions of the matrices M_1 and M_2:

{}^t U_1 M_1 V_1 = \begin{pmatrix} \widetilde{M}_1 & 0 \\ 0 & 0 \end{pmatrix}, \quad {}^t U_2 M_2 V_2 = \begin{pmatrix} \widetilde{M}_2 & 0 \\ 0 & 0 \end{pmatrix}    (4.4)

where U_1, V_1, U_2, V_2 are orthogonal matrices, and \widetilde{M}_1, \widetilde{M}_2 are diagonal matrices, the diagonal terms of which are the nonzero singular values of M_1 and M_2 respectively.
Multiplying (2.13) on the left by {}^t U_1 and on the right by V_2 yields:

{}^t U_1 M_1 E V_2 + {}^t U_1 E M_2 V_2 = {}^t U_1 F V_2    (4.5)
which can also be written:

({}^t U_1 M_1 V_1)({}^t V_1 E V_2) + ({}^t U_1 E U_2)({}^t U_2 M_2 V_2) = {}^t U_1 F V_2    (4.6)

Set:

{}^t V_1 E V_2 = \begin{pmatrix} \widetilde{E}_{11} & \widetilde{E}_{12} \\ \widetilde{E}_{21} & \widetilde{E}_{22} \end{pmatrix}, \quad {}^t U_1 E U_2 = \begin{pmatrix} \widetilde{\widetilde{E}}_{11} & \widetilde{\widetilde{E}}_{12} \\ \widetilde{\widetilde{E}}_{21} & \widetilde{\widetilde{E}}_{22} \end{pmatrix}    (4.7)

{}^t U_1 F V_2 = \begin{pmatrix} \widetilde{F}_{11} & \widetilde{F}_{12} \\ \widetilde{F}_{21} & \widetilde{F}_{22} \end{pmatrix}    (4.8)
We have thus:

\begin{pmatrix} \widetilde{M}_1 \widetilde{E}_{11} & \widetilde{M}_1 \widetilde{E}_{12} \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} \widetilde{\widetilde{E}}_{11} \widetilde{M}_2 & 0 \\ \widetilde{\widetilde{E}}_{21} \widetilde{M}_2 & 0 \end{pmatrix} = \begin{pmatrix} \widetilde{F}_{11} & \widetilde{F}_{12} \\ \widetilde{F}_{21} & \widetilde{F}_{22} \end{pmatrix}    (4.9)

It yields:

\widetilde{M}_1 \widetilde{E}_{11} + \widetilde{\widetilde{E}}_{11} \widetilde{M}_2 = \widetilde{F}_{11}, \quad \widetilde{M}_1 \widetilde{E}_{12} = \widetilde{F}_{12}, \quad \widetilde{\widetilde{E}}_{21} \widetilde{M}_2 = \widetilde{F}_{21}    (4.10)

One easily deduces:

\widetilde{E}_{12} = \widetilde{M}_1^{-1} \widetilde{F}_{12}, \quad \widetilde{\widetilde{E}}_{21} = \widetilde{F}_{21} \widetilde{M}_2^{-1}    (4.11)
The problem is then the determination of the \widetilde{E}_{11} and \widetilde{\widetilde{E}}_{11} satisfying:

\widetilde{M}_1 \widetilde{E}_{11} + \widetilde{\widetilde{E}}_{11} \widetilde{M}_2 = \widetilde{F}_{11}    (4.12)

Denote respectively by \widetilde{e}_{ij}, \widetilde{\widetilde{e}}_{ij} the components of the matrices \widetilde{E}_{11}, \widetilde{\widetilde{E}}_{11}. The problem (4.12) uncouples into the independent problems:

minimize \sum_{i,j} \left( \widetilde{e}_{ij}^2 + \widetilde{\widetilde{e}}_{ij}^2 \right)    (4.13)

under the constraints:

(\widetilde{M}_1)_{ii} \, \widetilde{e}_{ij} + (\widetilde{M}_2)_{jj} \, \widetilde{\widetilde{e}}_{ij} = (\widetilde{F}_{11})_{ij}    (4.14)

This latter problem has the solution:

\widetilde{e}_{ij} = \frac{(\widetilde{M}_1)_{ii} (\widetilde{F}_{11})_{ij}}{(\widetilde{M}_1)_{ii}^2 + (\widetilde{M}_2)_{jj}^2}, \quad \widetilde{\widetilde{e}}_{ij} = \frac{(\widetilde{M}_2)_{jj} (\widetilde{F}_{11})_{ij}}{(\widetilde{M}_1)_{ii}^2 + (\widetilde{M}_2)_{jj}^2}    (4.15)
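Each scalar problem (4.13)-(4.14) is a minimum-norm least-squares problem for a 1 × 2 linear system, so (4.15) can be checked against the pseudo-inverse solution; a sketch with hypothetical values of the diagonal entries and data:

```python
import numpy as np

m1, m2, f = 1.5, -0.7, 2.0                # hypothetical (M1~)_ii, (M2~)_jj, (F11~)_ij
# closed-form minimum-norm solution of m1*e + m2*ee = f, eq. (4.15)
e = m1 * f / (m1**2 + m2**2)
ee = m2 * f / (m1**2 + m2**2)
assert abs(m1 * e + m2 * ee - f) < 1e-12  # constraint (4.14) is satisfied

# compare with the minimum-norm least-squares solution of the 1 x 2 system
sol, *_ = np.linalg.lstsq(np.array([[m1, m2]]), np.array([f]), rcond=None)
assert np.allclose(sol, [e, ee])
```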
The minimum norm solution of (2.13) will then be obtained when the norm of the matrix \widetilde{F}_{11} is minimum. In the following, the euclidean (Frobenius) norm will be considered.
Due to (4.8):

\| \widetilde{F}_{11} \| \le \| \widetilde{F} \| \le \| U_1 \| \, \| F \| \, \| V_2 \| \le \| U_1 \| \, \| V_2 \| \, \| M_1 U_{exact} + U_{exact} M_2 - M_0 \|    (4.16)
U_1 and V_2 being orthogonal matrices, respectively (n_x − 1) × (n_x − 1) and n_t × n_t, we have:

\| U_1 \|^2 = n_x - 1, \quad \| V_2 \|^2 = n_t    (4.17)

Also:

\| M_1 \|^2 = \frac{n_x - 1}{2} \left( 2\beta^2 + \delta^2 + \varepsilon^2 \right), \quad \| M_2 \|^2 = \frac{n_t}{2} \left( \alpha^2 + \gamma^2 \right)    (4.18)
The norm of M_0 is obtained thanks to relation (2.4). This results in:
\| \widetilde{F}_{11} \| \le \sqrt{n_t (n_x - 1)} \left\{ \| U_{exact} \| \left( \sqrt{\frac{n_x - 1}{2}} \sqrt{2\beta^2 + \delta^2 + \varepsilon^2} + \sqrt{\frac{n_t}{2}} \sqrt{\alpha^2 + \gamma^2} \right) + \| M_0 \| \right\}    (4.19)
\| \widetilde{F}_{11} \| can be minimized through the minimization of the right-hand side of (4.19), which is a function of the scheme parameters.
4.2. Numerical example: the specific case of the Lax scheme. For the Lax scheme, γ = 0. The remaining coefficients (see Table 1.1) can be normalized through the following change of variables:

\bar{\alpha} = h \, \alpha, \quad \bar{\beta} = h \, \beta, \quad \bar{\delta} = h \, \delta, \quad \bar{\varepsilon} = h \, \varepsilon    (4.20)
Set:

\bar{M}_0 = h \, M_0    (4.21)

Thus:

\| \bar{M}_0 \|^2 = \bar{\delta}^2 \sum_{n=1}^{n_t} \left( u_{n_x}^n \right)^2 + \bar{\varepsilon}^2 \sum_{n=1}^{n_t} \left( u_0^n \right)^2    (4.22)
Advect a sinusoidal signal:

u = \cos \left[ \frac{2\pi}{\lambda} (x - c \, t) \right]    (4.23)

through the Lax scheme, with Dirichlet boundary conditions.
The right-hand side of (4.19) is then:

\sqrt{ \left[ \left( \frac{1}{2} + \frac{1}{2\sigma} \right)^2 u_0^2 + \left( \frac{1}{2} - \frac{1}{2\sigma} \right)^2 u_L^2 \right] \frac{n_t}{2} } + \sqrt{\frac{n_x - 1}{2}} \sqrt{ \left( \frac{1}{2} + \frac{1}{2\sigma} \right)^2 + \left( \frac{1}{2} - \frac{1}{2\sigma} \right)^2 } + \frac{\sqrt{n_t}}{\sqrt{2} \, \sigma}    (4.24)
where u_0 and u_L respectively denote the Dirichlet boundary values at x = 0 and x = L. It is minimal for the admissible value σ = 1.

The value of the L^2 norm of the error, for two significant values of the CFL number (Case 1: σ = 0.9; Case 2: σ = 0.7), is displayed in Figure 4.1. The error curve corresponding to the value of the CFL number closest to 1 is the minimal one.
Fig. 4.1. Value of the L^2 norm of the error, as a function of the time index, for different values of the CFL number (Case 1: σ = 0.9; Case 2: σ = 0.7).
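A rough reproduction of this experiment can be sketched as follows (the grid sizes, wavelength and boundary treatment are hypothetical, not the author's exact setup): the sinusoid (4.23) is advected with the Lax scheme for the two CFL values, and the final L^2 errors are compared.

```python
import numpy as np

def lax_error(sigma, nx=200, nt=100, L=1.0, c=1.0, lam=1.0):
    """Advect u = cos(2*pi*(x - c*t)/lam) with the Lax scheme; return the final L2 error."""
    h = L / nx
    tau = sigma * h / c
    x = np.linspace(0.0, L, nx + 1)
    u = np.cos(2 * np.pi * x / lam)                   # initial condition, t = 0
    for n in range(1, nt + 1):
        t = n * tau
        un = u.copy()
        # Lax scheme: u_i^{n+1} = (u_{i+1}^n + u_{i-1}^n)/2 - (sigma/2)(u_{i+1}^n - u_{i-1}^n)
        u[1:-1] = 0.5 * (un[2:] + un[:-2]) - 0.5 * sigma * (un[2:] - un[:-2])
        exact = np.cos(2 * np.pi * (x - c * t) / lam)
        u[0], u[-1] = exact[0], exact[-1]             # Dirichlet values from the exact solution
    exact = np.cos(2 * np.pi * (x - c * nt * tau) / lam)
    return np.sqrt(h * np.sum((u - exact) ** 2))

# the error shrinks as the CFL number approaches the admissible value 1
assert lax_error(0.9) < lax_error(0.7)
```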
5. Conclusion. Thanks to the above results, we here propose:
1. to study the intrinsic properties of various schemes through those of the related matrices M_1 and M_2;
2. to optimize finite difference problems through minimization of the symbolic expression of the error as a function of the scheme parameters.
REFERENCES
[1] P. Van Dooren, Reduced order observers: A new algorithm and proof, Systems Control Lett., 4 (1984), pp. 243-251.
[2] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, PA (1994).
[3] H. R. Gail, S. L. Hantler and B. A. Taylor, Spectral Analysis of M/G/1 and G/M/1 type Markov chains, Adv. Appl. Probab., 28 (1996), pp. 114-165.
[4] D. L. Boley, Computing the Controllability/Observability Decomposition of a Linear Time-Invariant Dynamic System, A Numerical Approach, PhD thesis, Report STAN-CS-81-860, Dept. Comp. Sci., Stanford University, 1981.
[5] A. S. Deif, N. P. Seif, and S. A. Hussein, Sylvester's equation: accuracy and computational stability, Journal of Computational and Applied Mathematics, 61 (1995), pp. 1-11.
[6] J. Z. Hearon, Nonsingular solutions of TA − BT = C, Linear Algebra and its Applications, 16 (1977), pp. 57-63.
[7] C. H. Huo, Efficient methods for solving a nonsymmetric algebraic equation arising in stochastic fluid models, Journal of Computational and Applied Mathematics, pp. 1-21 (2004).
[8] C. C. Tsui, A complete analytical solution to the equation TA − FT = LC and its applications, IEEE Trans. Automat. Control, 32 (1987), pp. 742-744.
[9] B. Zhou and G. R. Duan, An explicit solution to the matrix equation AX − XF = BY, Linear Algebra and its Applications, 402 (2005), pp. 345-366.
[10] G. R. Duan, Solution to matrix equation AV + BW = EVF and eigenstructure assignment for descriptor systems, Automatica, 28 (1992), pp. 639-643.
[11] G. R. Duan, On the solution to Sylvester matrix equation AV + BW = EVF and eigenstructure assignment for descriptor systems, IEEE Trans. Automat. Control, 41 (1996), pp. 276-280.
[12] P. Kirrinnis, Fast algorithms for the Sylvester equation AX − XB^T = C, Theoretical Computer Science, 259 (2000), pp. 623-638.
[13] M. Konstantinov, V. Mehrmann, and P. Petkov, On properties of Sylvester and Lyapunov operators, Linear Algebra and its Applications, 312 (2000), pp. 35-71.
[14] A. Varga, A numerically reliable approach to robust pole assignment for descriptor systems, Future Generation Computer Systems, 19 (2003), pp. 1221-1230.
[15] G. B. Whitham, Linear and Nonlinear Waves, Wiley-Interscience (1974).
[16] S. Wolfram, The Mathematica Book, Cambridge University Press (1999).