
Chapter 3. Linear Systems of Equations

3.6 Banded matrices

3.6.4 Using Givens Rotations

We have seen in Section 3.5 that Givens rotations can be used as an alternative to LU decomposition to solve dense linear systems. This alternative is also available for banded systems: we show here how to proceed for tridiagonal systems with coefficient matrix $A$ as shown in Equation (3.62). We use Givens rotation matrices $G^{(ik)}$, which differ in only four elements from the identity,

$$g_{ii} = g_{kk} = c = \cos\alpha, \qquad g_{ik} = -g_{ki} = s = \sin\alpha.$$

Multiplying the linear system from the left by $G^{(ik)}$ changes only two rows, $a_{i:}$ and $a_{k:}$:

$$a_{i:}^{\mathrm{new}} := \cos\alpha \cdot a_{i:}^{\mathrm{old}} + \sin\alpha \cdot a_{k:}^{\mathrm{old}},$$

$$a_{k:}^{\mathrm{new}} := -\sin\alpha \cdot a_{i:}^{\mathrm{old}} + \cos\alpha \cdot a_{k:}^{\mathrm{old}}. \qquad (3.63)$$

We can choose the angle $\alpha$ to zero elements in the matrix (see Section 3.5).
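As a small illustration (a sketch with made-up data, not from the book's text), the rotation (3.63) that zeros the entry $a_{ki}$ can be computed and applied in Matlab as follows:

A = [4 1 0; 2 5 1; 0 3 6];      % small example matrix
i = 1; k = 2;                   % rotate rows 1 and 2 to zero A(2,1)
t = A(i,i)/A(k,i);              % assumes A(k,i) ~= 0
si = 1/sqrt(1+t*t); co = t*si;  % sin(alpha) and cos(alpha)
A([i k],:) = [co si; -si co]*A([i k],:)  % row k now starts with a zero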

We illustrate this for $n = 5$. In the first step we choose $G^{(12)}$, which combines the first two rows, and choose $\alpha$ such that $a_{21}^{\mathrm{new}} = 0$:

$$G^{(12)}
\begin{pmatrix}
x & x &   &   &   \\
x & x & x &   &   \\
  & x & x & x &   \\
  &   & x & x & x \\
  &   &   & x & x
\end{pmatrix}
=
\begin{pmatrix}
x & x & X &   &   \\
0 & x & x &   &   \\
  & x & x & x &   \\
  &   & x & x & x \\
  &   &   & x & x
\end{pmatrix}.$$

A fill-in element $a_{13} = X$ is generated. In the second step we take $G^{(23)}$, which changes the second and third row such that

$$G^{(23)}
\begin{pmatrix}
x & x & X &   &   \\
0 & x & x &   &   \\
  & x & x & x &   \\
  &   & x & x & x \\
  &   &   & x & x
\end{pmatrix}
=
\begin{pmatrix}
x & x & X &   &   \\
0 & x & x & X &   \\
  & 0 & x & x &   \\
  &   & x & x & x \\
  &   &   & x & x
\end{pmatrix},$$

zeroing $a_{32}$ and generating the fill-in $a_{24} = X$. The next rotation with $G^{(34)}$ yields

$$G^{(34)}
\begin{pmatrix}
x & x & X &   &   \\
0 & x & x & X &   \\
  & 0 & x & x &   \\
  &   & x & x & x \\
  &   &   & x & x
\end{pmatrix}
=
\begin{pmatrix}
x & x & X &   &   \\
0 & x & x & X &   \\
  & 0 & x & x & X \\
  &   & 0 & x & x \\
  &   &   & x & x
\end{pmatrix},$$

zeroing $a_{43}$ and generating the fill-in $a_{35} = X$.

Finally, we obtain $A$ transformed to an upper triangular matrix $R$ with $G^{(45)}$:

$$G^{(45)}
\begin{pmatrix}
x & x & X &   &   \\
0 & x & x & X &   \\
  & 0 & x & x & X \\
  &   & 0 & x & x \\
  &   &   & x & x
\end{pmatrix}
=
\begin{pmatrix}
x & x & X &   &   \\
0 & x & x & X &   \\
  & 0 & x & x & X \\
  &   & 0 & x & x \\
  &   &   & 0 & x
\end{pmatrix} = R.$$

The solution is then obtained by back-substitution. For the transformation we need only the three diagonals of the matrix $A$. They will be overwritten with the elements of the upper banded matrix $R$.

Denoting the subdiagonal, diagonal and superdiagonal of $A$ by $c$, $d$ and $e$, so that $A = \mathrm{diag}(c,-1) + \mathrm{diag}(d) + \mathrm{diag}(e,1)$, we have

$$A =
\begin{pmatrix}
d_1 & e_1 &        &         &         \\
c_1 & d_2 & e_2    &         &         \\
    & c_2 & \ddots & \ddots  &         \\
    &     & \ddots & \ddots  & e_{n-1} \\
    &     &        & c_{n-1} & d_n
\end{pmatrix}.$$

The following function ThomasGivens solves a tridiagonal system using Givens rotations. The right hand side is stored in b and overwritten with the solution.

Algorithm 3.12. Solving Tridiagonal Systems with Givens Rotations

function [b,d,e,c]=ThomasGivens(c,d,e,b);
% THOMASGIVENS solves a tridiagonal system of linear equations
% [b,d,e,c]=ThomasGivens(c,d,e,b) solves a tridiagonal linear system
% using Givens rotations. The coefficient matrix is
% A=diag(c,-1)+diag(d)+diag(e,1), and the right hand side b is
% overwritten with the solution. The R factor is also returned,
% R=diag(d)+diag(e,1)+diag(c,2).
n=length(d);
e(n)=0;
for i=1:n-1                          % elimination
  if c(i)~=0
    t=d(i)/c(i); si=1/sqrt(1+t*t); co=t*si;
    d(i)=d(i)*co+c(i)*si;            % rotate rows i and i+1 as in (3.63)
    h=e(i);
    e(i)=h*co+d(i+1)*si;
    d(i+1)=-h*si+d(i+1)*co;
    c(i)=e(i+1)*si;                  % fill-in on the second superdiagonal
    e(i+1)=e(i+1)*co;
    h=b(i);                          % rotate the right hand side as well
    b(i)=h*co+b(i+1)*si;
    b(i+1)=-h*si+b(i+1)*co;
  end
end
b(n)=b(n)/d(n);                      % back-substitution with R
b(n-1)=(b(n-1)-e(n-1)*b(n))/d(n-1);
for i=n-2:-1:1
  b(i)=(b(i)-e(i)*b(i+1)-c(i)*b(i+2))/d(i);
end
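A small usage sketch (hypothetical test data, not from the book):

n = 6;
c = rand(n-1,1); e = rand(n-1,1);
d = 4 + rand(n,1);                % diagonally dominant tridiagonal matrix
A = diag(c,-1) + diag(d) + diag(e,1);
b = A*ones(n,1);                  % exact solution is the vector of ones
x = ThomasGivens(c,d,e,b);
norm(x - ones(n,1))               % should be of the order of eps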

Problems

Problem 3.1. Consider the linear system $Ax = b$ with $A =$

a) Solve the linear system with Matlab and compare the numerical solution to the exact one. Explain the difference.

b) Now consider the perturbed linear system which is obtained by changing the right hand side slightly:

$$b = b + 10^{-9} \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

Compute again the solution and discuss the results.

Problem 3.2. Consider the linear system $Ax = b$ with $A =$

The element $a_{24} = \alpha$ has been lost. Assume, however, that before, when $\alpha$ was still available, the solution with Matlab turned out to be

>> x=A\b

Can you determine with this information the missing integer matrix element $\alpha = a_{24}$?

Problem 3.3. Plot Figure 3.2. Generate for $2 \le n \le 8$ linear systems $Ax = b$ using A=hilb(n) and b=A*ones(n,1). Then compute the solutions using Cramer's rule and also Gaussian elimination. Measure the solution time and compare the accuracy of the results. To compare the accuracy, plot the logarithm of the relative errors of both solutions as a function of $n$. Also include in your plot the quantity log(cond(A)*eps) and discuss the results.

Problem 3.4. Prove that for two square matrices $A$ and $B$, $\det(AB) = \det(A)\det(B)$. Hint: show first that the result holds for the elementary Gaussian elimination matrices $L_j$, and that every matrix can be represented using products of such matrices.

Problem 3.5. (Laplace expansion) This problem shows that the Laplace expansion formula for calculating determinants requires $O(n!)$ operations, where $n$ is the size of the matrix.

1. Let $T(n)$ be the number of operations required for a matrix of size $n$. Show that $T(n)$ satisfies the recurrence

$$T(1) = 0, \qquad T(n) = 2n + nT(n-1).$$

2. Show by induction that for $n \ge 2$,

$$2 \cdot n! \le T(n) \le 6\,(n! - (n-1)!),$$

which implies that $T(n)$ grows like $c \cdot n!$ with $2 \le c \le 6$ (checked numerically in the sketch below).
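A quick numerical check of the recurrence and the two bounds (an illustrative sketch, not part of the problem statement); each displayed row shows $n$, the lower bound, $T(n)$ and the upper bound:

T = 0;                            % T(1) = 0
for n = 2:8
  T = 2*n + n*T;                  % the recurrence T(n) = 2n + n*T(n-1)
  disp([n, 2*factorial(n), T, 6*(factorial(n)-factorial(n-1))])
end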

Problem 3.6. Show that the product of two upper (respectively lower) triangular matrices is again an upper (respectively lower) triangular matrix.

Problem 3.7. Determine for the linear system

the three elimination matrices $L_i$ and also the triangular decomposition of the matrix. The elimination may be performed without permutation of the rows.

Problem 3.8. Rewrite Algorithm 3.3, function BackSubstitution, using Matlab's scalar product notation for computing the $x_i$.

Problem 3.9. Write a function x=forwards(L,b) to solve the system $Lx = b$ with the lower triangular matrix $L$ by forward substitution using the SAXPY variant.


Problem 3.10. In the Matlab function Elimination (Algorithm 3.5) we store the factors used for the elimination in the transformed matrix $A$ instead of the emerging zeros. Change this function and write a function [L,U,P]=LU(A) to compute the triangular decomposition $PA = LU$. Compare your results with the Matlab built-in function [L,U,P]=lu(X).

Problem 3.11. Modify function [L,U,P]=LU(A) from Problem 3.10 so that with [L,U,P,alpha]=LU(A) it also computes the largest element $\alpha := \max_{i,j,k} |a_{ij}^{(k)}|$ that occurs during the elimination process. This function is used to produce Figure 3.4.

Problem 3.12. Inverse iteration is an algorithm to compute the smallest eigenvalue (in modulus) of a symmetric matrix $A$:

Choose $x_0$
for $k = 1, 2, \ldots, m$ (until convergence)
  solve $Ax_{k+1} = x_k$
  normalize $x_{k+1} := x_{k+1}/\|x_{k+1}\|$
end

Then $\lambda = x_m^T A x_m / x_m^T x_m$ is an approximation for the smallest eigenvalue.

A simple implementation of this algorithm is

x=rand(n,1)
for k=1:m
  x=A\x;
  x=x/norm(x);
end
lambda=x'*A*x

For large matrices, one can save operations if we compute the LU decomposition of the matrix $A$ only once. The iteration is then performed using the factors $L$ and $U$. This way, each iteration needs only $O(n^2)$ operations, instead of $O(n^3)$ with the program above. Use the programs LU from Problem 3.10, BackSubstitution from Problem 3.8, and forwards from Problem 3.9 to implement the inverse iteration. Experiment with a few matrices and compare your results with the correct eigenvalues obtained by eig(A).
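A minimal sketch of the intended structure, assuming the interfaces [L,U,P]=LU(A), x=forwards(L,b) and x=BackSubstitution(U,b) from the problems above:

[L,U,P] = LU(A);                  % factor once: O(n^3)
x = rand(n,1);
for k = 1:m                       % each iteration now costs only O(n^2)
  y = forwards(L,P*x);            % forward substitution: L*y = P*x
  x = BackSubstitution(U,y);      % back-substitution: U*x = y
  x = x/norm(x);
end
lambda = x'*A*x                   % approximation to the smallest eigenvalue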

Problem 3.13. Solving a linear system. We are given a linear system $Ax = b$. This time the matrix $A$ is $m \times n$ with possibly $m \neq n$ and rank $r \le \min(m, n)$. Eliminating variables will lead to a reduced system which will show if the system has solutions. Depending on the rank there might be infinitely many solutions, no solution or a unique solution. In order to determine the rank we need to reorder equations and unknowns. We look for pivot elements in the whole remaining matrix. This is called complete pivoting.

Before an elimination step we search for the pivot with largest absolute value in the whole remaining matrix and move it to the diagonal by interchanging rows and columns. If the largest pivot is very small, say if with norma=norm(A,1) we have abs(A(i,i)) < tol*norma, then the elimination process should be terminated and the rank will be assumed to be $r = i-1$.

The reduced system will then have the form

$$\begin{pmatrix} U_{r\times r} & B_{r\times (n-r)} \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \tilde{x}_1 \\ \tilde{x}_2 \end{pmatrix} = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}. \qquad (3.64)$$

We indicated the dimensions of the matrices in Equation (3.64) by subscripts. The right hand side is partitioned in the same way.

If now $c_2 \neq 0$ then the linear system has no solutions. If on the other hand $c_2 = 0$ then the variables $\tilde{x}_2$ can be chosen arbitrarily. For each choice of $\tilde{x}_2$ the first part $\tilde{x}_1$ can be computed by back-substitution in

$$U\tilde{x}_1 = c_1 - B\tilde{x}_2.$$

Thus the general solution of Equation (3.64) is

$$\begin{pmatrix} \tilde{x}_1 \\ \tilde{x}_2 \end{pmatrix} = \begin{pmatrix} U^{-1}(c_1 - B\tilde{x}_2) \\ \tilde{x}_2 \end{pmatrix},$$

and we obtain finally from this the general solution of $Ax = b$ by

$$x = P \begin{pmatrix} \tilde{x}_1 \\ \tilde{x}_2 \end{pmatrix},$$

where $P$ is the permutation matrix for the reordering of the columns (unknowns).
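A minimal sketch of assembling one such solution (names as in the text; U, B, c1, the rank r and the column permutation P are assumed to come from the elimination):

x2 = rand(n-r,1);                 % free variables, chosen arbitrarily
x1 = U\(c1 - B*x2);               % back-substitution for the first part
x  = P*[x1; x2];                  % undo the reordering of the unknowns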

Modify the function Elimination by introducing complete pivoting and compute the general solution as described above. Your Matlab function should follow the header

function [x, Xh,r,U,L,B,P,Q]=EliminationCompletePivoting(A,b,tol)

% ELIMINATIONCOMPLETEPIVOTING Linear system solve with complete pivoting

% [x,Xh,r,U,L,B,P,Q]=EliminationCompletePivoting(A,b,tol) computes a

% solution x to the possibly non-square linear system Ax=b using

% Gaussian elimination with complete pivoting, which produces the

% decomposition A=Q’*L*[U,B]*P’. Here Q and P are permutation

% matrices, L is m x r lower unit triangular, U is r x r upper

% triangular and B is r x (n-r), and r is the numerical rank: an

% element A(i,j) of the remaining matrix A’ during the elimination

% process is considered to be zero if abs(A’(i,j))<tol*norm(A,1). x

% is a particular solution such that Ax=b. Xh contains the

% nullspace, linearly independent solutions of Ax=0.

Problems 109

The computation should stop with an error message if it turns out that $Ax = b$ has no solution. Check that you have computed the decomposition

$$A = Q^T L\,[U,\, B]\,P^T.$$

Check your function with the example A=magic(10), b=ones(10,1) and b=rand(10,1).

Problem 3.14. (Diagonally dominant matrices) A matrix is said to be column diagonally dominant if, for each column $j$, the absolute value of the diagonal entry is greater than the sum of the absolute values of the off-diagonal entries, i.e., if

$$|a_{jj}| > \sum_{i \neq j} |a_{ij}| \qquad \text{for } j = 1, 2, \ldots, n.$$

Show that after one step of Gaussian elimination with no pivoting, the remaining $(n-1)\times(n-1)$ submatrix is also column diagonally dominant.

Deduce that no row exchanges will occur throughout the elimination process, even when partial pivoting is used.

Problem 3.15. Modify the Matlab function Elimination (Algorithm 3.5) to compute in a numerically stable way the determinant of a matrix.

Observe that with Gaussian elimination we obtain

$$PA = LU. \qquad (3.65)$$

Taking the determinant we get

det(P) det(A) = det(L) det(U).

Now since $L$ has a unit diagonal, it follows that $\det(L) = 1$. Furthermore $\det(P) = \pm 1$, depending on whether the number of row changes is even or odd.

Thus we obtain

$$\det(A) = (-1)^{\#\,\text{row changes}} \prod_{i=1}^{n} u_{ii}.$$
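For comparison, a minimal sketch that applies the same formula using Matlab's built-in lu (the problem itself asks for a modification of Elimination instead):

function d=DetByLU(A)
% DETBYLU computes det(A) stably from the factorization A(p,:)=L*U
[~,U,p]=lu(A,'vector');           % p encodes the row permutation
s=1;                              % sign = (-1)^(number of row changes)
for i=1:length(p)
  while p(i)~=i                   % sort p by swaps, flipping the sign
    j=p(i); p(i)=p(j); p(j)=j;
    s=-s;
  end
end
d=s*prod(diag(U));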

Problem 3.16. Write a Matlab function BandGivens which solves a banded linear system using Givens rotations. The coefficient matrix B contains the non-zero diagonals as columns (see Section 3.6.1). The header of your function should look like:

function x=BandGivens(p,q,B,b);

% BANDGIVENS solves a banded system of linear equations using

% Givens rotations. The diagonals (p upper, q lower) are

% stored as columns in B.

Problem 3.17. Write a Matlab function B=luB(p,q,B) which overwrites the given matrix B, which has as columns the nonzero diagonals of a banded matrix (see Section 3.6.1), with the LU decomposition using diagonal pivoting. Hint: it might be simpler to first adapt the elimination algorithm for an $n \times n$ matrix to the case of a banded matrix, and then use the transformation StoreBandMatrix.m (Algorithm 3.9).

Problem 3.18. Compute the coefficients of a polynomial $P(t) = at^3 + bt^2 + ct + d$ such that $P(1) = 17$, $P(-1) = 3$, $P(0.5) = 7.125$ and $P(1.5) = 34.875$. Generate the linear system for the coefficients $a$, $b$, $c$ and $d$ and solve the system with Matlab. What is the condition number of this system?
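A minimal sketch of the setup (the monomial matrix below is one natural choice):

t = [1; -1; 0.5; 1.5];            % interpolation nodes
y = [17; 3; 7.125; 34.875];       % prescribed values P(t_i)
V = [t.^3 t.^2 t ones(4,1)];      % row i encodes P(t_i) = y_i
coef = V\y                        % coefficients a, b, c, d
cond(V)                           % condition number of the system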

Problem 3.19. Suppose $A \in \mathbb{R}^{n\times n}$ is a nonsingular matrix. Recall that the 2-norm condition number $\kappa_2(A)$ is defined as

$$\kappa_2(A) = \|A\|_2 \|A^{-1}\|_2.$$

Let $y$ be a unit vector such that $\|A^{-1}\|_2 = \|A^{-1}y\|_2$, and define $x = \dfrac{A^{-1}y}{\|A^{-1}\|_2}$. Finally, let $E = -Axx^T$.

1. Show that $(A+E)x = 0$. Conclude that $A+E$ is singular.

2. Show that $\|E\|_2 \le 1/\|A^{-1}\|_2$. This implies that the relative perturbation satisfies

$$\frac{\|E\|_2}{\|A\|_2} \le \frac{1}{\kappa_2(A)}.$$

Problem 3.20. (Sherman-Morrison-Woodbury formula) Let $A \in \mathbb{R}^{n\times n}$ be an invertible matrix, and $b, u, v \in \mathbb{R}^n$.

1. Show that if $I + uv^T$ is invertible, then there exists a $\sigma$ such that

$$(I + uv^T)^{-1} = I + \sigma uv^T.$$

What is a sufficient condition for $I + uv^T$ to be invertible? Show that this is also a necessary condition.

2. Suppose we know the LU decomposition of $A$, and also the solutions of the linear systems

$$Ay = b \quad \text{and} \quad Az = u. \qquad (3.66)$$

Find an efficient algorithm to solve

$$(A + uv^T)x = b,$$

which uses only the solutions of (3.66); a numerical sketch follows below.
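The following sketch checks the resulting update formula on random data (it is the classical Sherman-Morrison correction, stated here for illustration rather than as the requested derivation):

n = 5;
A = randn(n) + n*eye(n);          % a hypothetical well-conditioned matrix
b = randn(n,1); u = randn(n,1); v = randn(n,1);
y = A\b; z = A\u;                 % the two solves of (3.66)
x = y - (v'*y)/(1 + v'*z)*z;      % valid when 1 + v'*z ~= 0
norm((A + u*v')*x - b)            % should be of the order of eps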


Problem 3.21. The Matlab function hilb computes the Hilbert matrix. For instance

>> A = hilb(4)
A =
    1.0000    0.5000    0.3333    0.2500
    0.5000    0.3333    0.2500    0.2000
    0.3333    0.2500    0.2000    0.1667
    0.2500    0.2000    0.1667    0.1429

The matrix elements are given by

$$a_{ij} = \int_0^1 t^{i-1} t^{j-1}\, dt = \frac{1}{i+j-1}. \qquad (3.67)$$

Prove that for each $n$ the matrix A=hilb(n) is positive definite. Hint: consider the expression $x^T A x$ and use Equation (3.67).
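A quick numerical check (no substitute for the proof):

min(eig(hilb(8)))                 % positive, though very close to zero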

Problem 3.22. The condition number of a rectangular matrix $A \in \mathbb{R}^{m\times n}$ can be defined by

$$\kappa(A) := \frac{\max_{\|x\|=1} \|Ax\|}{\min_{\|x\|=1} \|Ax\|}.$$

Show for the Euclidean norm $\|\cdot\|_2$ that the equality

$$\kappa(A^T A) = \kappa(A)^2$$

holds. Hint: note that the symmetric matrix $A^T A$ can be diagonalized, $A^T A = Q\Lambda Q^T$ with $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$, $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$, and show that $\max_{\|x\|_2=1} \|Ax\|_2^2 = \lambda_1$ and $\min_{\|x\|_2=1} \|Ax\|_2^2 = \lambda_n$.
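A quick numerical check of the identity (not a proof):

A = randn(8,5);                   % a random rectangular matrix
[cond(A'*A), cond(A)^2]           % the two values agree up to roundoff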

Problem 3.23. Ill-conditioned systems of linear equations. To solve this problem, you will have to write a Matlab program of about 12 lines using the functions rand, round, diag, eye, size, triu, tril, cond. The goal of this problem is to show that apparently harmless looking systems of linear equations may be very difficult to solve.

a) Generate an $n\times n$ matrix B with random integer elements in the range $b_{ij} \in [-10, 10]$. Choose for instance $n = 20$.

b) Remove the diagonal of B, save the upper triangular part in U and the lower triangular part in L, and put ones on the diagonals: $l_{ii} = u_{ii} = 1$.

c) Compute $A = L \cdot U$. What is the value of $\det(A)$ and why? Compute the determinant with det(A) and confirm your prediction. In case you have doubts about the result, compute separately det(L) and det(U).

d) Choose now an exact solution, for instance xe=ones(n,1), and compute the corresponding right hand side b=A*xe.

e) Solve $Ax = b$ using Matlab and compare the solution with the exact xe.

f) Explain the bad results by computing the condition number of A (a sketch of the whole experiment follows below).
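A minimal sketch of the experiment described in parts a)-f):

n  = 20;
B  = round(20*rand(n) - 10);      % a) random integers in [-10,10]
L  = tril(B,-1) + eye(n);         % b) strict lower part, unit diagonal
U  = triu(B,1) + eye(n);          %    strict upper part, unit diagonal
A  = L*U;                         % c) det(A) = det(L)*det(U) = 1
det(A)                            %    compare with the prediction
xe = ones(n,1);                   % d) exact solution
b  = A*xe;
x  = A\b;                         % e) numerical solution
norm(x - xe)                      %    typically far from zero ...
cond(A)                           % f) ... because A is ill-conditioned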

Chapter 4. Interpolation

The question then arises as to how we can find the values of the function log10(x) for values of the argument x which are intermediate between the tabulated values. The answer to this question is furnished by the theory of interpolation, which in its most elementary aspect may be described as the science of "reading between the lines of a mathematical table."

E. Whittaker and G. Robinson, The Calculus of Observations: a Treatise on Numerical Mathematics, 1924.

We wish to repeat that interpolation is only one way to approximate data. [...] For data with significant errors, the least squares approach is preferred.

D. Kahaner, C. Moler, S. Nash, Numerical Methods and Software, 1988.

Prerequisites: Chapters 2 and 3 are required.

Interpolation means inserting or blending in a missing value. It is the art of reading between the entries of a tabulated function (see first quote above).

We start this chapter with several introductory examples in Section 4.1, through which we explain the interpolation principle. The most common interpolation technique is to use polynomials, and we show in Section 4.2 four classical techniques: using monomials, Lagrange polynomials, Newton polynomials, and orthogonal polynomials. The latter also leads naturally to a least squares approximation, which is more desirable if the data points are contaminated by errors (see second quote above, and Chapter 6). We then show that the representations in these different bases are related by the LU and QR factorizations of the corresponding matrices. We also explain the barycentric formula, give an estimate for the interpolation error, and discuss extrapolation, which is similar to interpolation, except that the desired value lies outside the range of the given data. Section 4.3 is devoted to piecewise interpolation, which leads to the classical cubic splines. This section also contains the well-known Morrison-Woodbury formula. Section 4.4 addresses trigonometric interpolation and contains a detailed description of the fast Fourier transform.

