
Stable and Unstable Algorithms

In the document Scientific Computing (Page 52-57)

An algorithm for solving a given problem $P: \mathbb{R}^n \longrightarrow \mathbb{R}$ is a sequence of elementary operations,

$$P(x) = f_n(f_{n-1}(\cdots f_2(f_1(x)) \cdots)).$$

In general, there exist several different algorithms for a given problem.

2.6.1 Forward Stability

If the amplification of the error in the operation $f_i$ is given by the corresponding condition number $\kappa(f_i)$, we naturally obtain

$$\kappa(P) \le \kappa(f_1)\cdot\kappa(f_2)\cdots\kappa(f_n).$$

Definition 2.5. (Forward Stability) A numerical algorithm for a given problem $P$ is forward stable if

$$\kappa(f_1)\cdot\kappa(f_2)\cdots\kappa(f_n) \le C\,\kappa(P), \qquad (2.10)$$

where $C$ is a constant which is not too large, for example $C = O(n)$.

Example 2.7. Consider the following two algorithms for the problem $P(x) := \frac{1}{x(1+x)}$:

1. $x \;\to\; 1+x \;\to\; x(1+x) \;\to\; \dfrac{1}{x(1+x)}$

2. $x \;\to\; \dfrac{1}{x}$ and $x \;\to\; 1+x \;\to\; \dfrac{1}{1+x}$, followed by the subtraction $\dfrac{1}{x} - \dfrac{1}{1+x} = \dfrac{1}{x(1+x)}$

In the first algorithm, all operations are well conditioned, and hence the algorithm is forward stable. In the second algorithm, however, the last operation is a potentially very ill-conditioned subtraction, and thus this second algorithm is not forward stable.
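As a quick illustration (this numerical experiment is not from the book), the following Python sketch evaluates both algorithms in double precision and measures their relative error against an exact rational reference. For large $x$, the two reciprocals in the second algorithm are nearly equal, and the subtraction cancels most of the significant digits.

```python
from fractions import Fraction

def algo1(x):
    # x -> 1+x -> x*(1+x) -> 1/(x*(1+x)): every step is well conditioned
    return 1.0 / (x * (1.0 + x))

def algo2(x):
    # 1/x - 1/(1+x): for large x the two terms are nearly equal,
    # so the final subtraction cancels most significant digits
    return 1.0 / x - 1.0 / (1.0 + x)

def rel_err(f, x):
    # exact reference value via rational arithmetic
    exact = Fraction(1) / (Fraction(x) * (1 + Fraction(x)))
    return float(abs(Fraction(f(x)) - exact) / exact)

for k in (4, 6, 8, 10):
    x = float(10 ** k)
    print(f"x = 1e{k:02d}:  algo1 {rel_err(algo1, x):.1e}   algo2 {rel_err(algo2, x):.1e}")
```

The first algorithm stays near machine precision for all $x$, while the second loses roughly $k$ digits at $x = 10^k$, exactly as the conditioning of the final subtraction predicts.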

Roughly speaking, an algorithm executed in finite precision arithmetic is called stable if the effect of rounding errors is bounded; if, on the other hand, an algorithm increases the condition number of a problem by a large amount, then we classify it as unstable.

Example 2.8. As a second example, we consider the problem of calculating the values

$$\cos(1), \cos(\tfrac{1}{2}), \cos(\tfrac{1}{4}), \ldots, \cos(2^{-12}),$$

or more generally,

$$z_k = \cos(2^{-k}), \qquad k = 0, 1, \ldots, n.$$

By considering perturbations of the form $\hat{z}_k = \cos(2^{-k}(1+\varepsilon))$, we can calculate the condition number for the problem using Definition 2.4:

$$\frac{\left|\cos(2^{-k}(1+\varepsilon)) - \cos(2^{-k})\right|}{\left|\cos(2^{-k})\right|} \approx 2^{-k}\tan(2^{-k})\,|\varepsilon| \approx 4^{-k}|\varepsilon|,$$

so that $\kappa(P) \approx 4^{-k}$.

We consider two algorithms for recursively calculating $z_k$:

1. Double angle: we use the relation $\cos 2\alpha = 2\cos^2\alpha - 1$ to compute
$$y_n = \cos(2^{-n}), \qquad y_{k-1} = 2y_k^2 - 1, \quad k = n, n-1, \ldots, 1.$$

2. Half angle: we use $\cos\frac{\alpha}{2} = \sqrt{\frac{1+\cos\alpha}{2}}$ and compute
$$x_0 = \cos(1), \qquad x_{k+1} = \sqrt{\frac{1+x_k}{2}}, \quad k = 0, 1, \ldots, n-1.$$

The results are given in Table 2.3. We notice that the $y_k$ computed by Algorithm 1 are significantly affected by rounding errors, while the computations of the $x_k$ with Algorithm 2 do not seem to be affected. Let us analyze the condition of one step of Algorithm 1 and Algorithm 2. For Algorithm 1, one step is $f_1(y) = 2y^2 - 1$, and for the condition of the step, we obtain from

$$\frac{f_1(y(1+\varepsilon)) - f_1(y)}{f_1(y)} = \frac{f_1(y) + f_1'(y)\,y\varepsilon + O(\varepsilon^2) - f_1(y)}{f_1(y)} = \frac{4y^2\varepsilon + O(\varepsilon^2)}{2y^2 - 1}, \qquad (2.11)$$

and from the fact that $\varepsilon$ is small, that the condition number for one step is $\kappa_1 = \frac{4y^2}{|2y^2 - 1|}$. Since all the $y_k$ in this iteration are close to one, we have $\kappa_1 \approx 4$. Now to obtain $y_k$, we must perform $n-k$ iterations, so the condition


    2^{-k}           y_k - z_k              x_k - z_k
    1.000000e+00    -0.0000000005209282     0.0000000000000000
    5.000000e-01    -0.0000000001483986     0.0000000000000000
    2.500000e-01    -0.0000000000382899     0.0000000000000001
    1.250000e-01    -0.0000000000096477     0.0000000000000001
    6.250000e-02    -0.0000000000024166     0.0000000000000000
    3.125000e-02    -0.0000000000006045     0.0000000000000000
    1.562500e-02    -0.0000000000001511     0.0000000000000001
    7.812500e-03    -0.0000000000000377     0.0000000000000001
    3.906250e-03    -0.0000000000000094     0.0000000000000001
    1.953125e-03    -0.0000000000000023     0.0000000000000001
    9.765625e-04    -0.0000000000000006     0.0000000000000001
    4.882812e-04    -0.0000000000000001     0.0000000000000001
    2.441406e-04     0.0000000000000000     0.0000000000000001

Table 2.3. Stable and unstable recursions
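The two recursions can be reproduced with a few lines of Python (a sketch, not the book's code), using `math.cos` evaluations as the reference values $z_k$:

```python
import math

n = 12
z = [math.cos(2.0 ** -k) for k in range(n + 1)]  # reference values z_k

# Algorithm 1 (double angle, unstable): start at k = n and iterate down
y = [0.0] * (n + 1)
y[n] = math.cos(2.0 ** -n)
for k in range(n, 0, -1):
    y[k - 1] = 2.0 * y[k] ** 2 - 1.0

# Algorithm 2 (half angle, stable): start at k = 0 and iterate up
x = [0.0] * (n + 1)
x[0] = math.cos(1.0)
for k in range(n):
    x[k + 1] = math.sqrt((1.0 + x[k]) / 2.0)

for k in range(n + 1):
    print(f"2^-{k:2d}:  y - z = {y[k] - z[k]: .2e}   x - z = {x[k] - z[k]: .2e}")
```

The errors in the $y_k$ grow by roughly a factor of 4 per step as the recursion descends to $k = 0$, while the errors in the $x_k$ stay at the level of machine precision, matching Table 2.3.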

number becomes approximately $4^{n-k}$. Thus, the constant $C$ in (2.10) of the definition of forward stability can be estimated by

$$C \approx \frac{4^{n-k}}{\kappa(P)} \approx \frac{4^{n-k}}{4^{-k}} = 4^n.$$

This is clearly not a small constant, so the algorithm is not forward stable.

For Algorithm 2, one step is $f_2(x) = \sqrt{\frac{1+x}{2}}$. We calculate the one-step condition number similarly:

$$\frac{f_2(x+\varepsilon) - f_2(x)}{f_2(x)} = \frac{1}{2(1+x)}\,\varepsilon + O(\varepsilon^2).$$

Thus, for $\varepsilon$ small, the condition number of one step is $\kappa_2 = \frac{1}{2|1+x|}$; since all $x_k$ in this iteration are also close to one, we obtain $\kappa_2 \approx \frac{1}{4}$. To compute $x_k$, we need $k$ iterations, so the overall condition number is $4^{-k}$. Hence the stability constant $C$ in (2.10) is approximately

$$C \approx \frac{4^{-k}}{\kappa(P)} \approx 1,$$

meaning Algorithm 2 is stable.

We note that while Algorithm 1 runs through the iteration in an unstable manner, Algorithm 2 performs the same iteration, but in reverse. Thus, if the approach in Algorithm 1 is unstable, inverting the iteration leads to the stable Algorithm 2. This is also reflected in the one-step condition number estimates ($4$ and $\frac{1}{4}$, respectively), which are the inverses of each other.

Finally, using for $n = 12$ a perturbation of the size of the machine precision, $\varepsilon = 2.2204 \cdot 10^{-16}$, we obtain for Algorithm 1 that $4^{12}\varepsilon = 3.7 \cdot 10^{-9}$, which is a good estimate of the error $5 \cdot 10^{-10}$ of $y_0$ we measured in the numerical experiment.

Example 2.9. An important example of an unstable algorithm, which motivated the careful study of condition and stability, is Gaussian elimination with no pivoting (see Example 3.5 in Chapter 3). When solving linear systems using Gaussian elimination, it might happen that we eliminate an unknown using a very small pivot on the diagonal. By dividing the other entries by the small pivot, we could be introducing artificially large coefficients in the transformed matrix, thereby increasing the condition number and transforming a possibly well-conditioned problem into an ill-conditioned one. Thus, choosing small pivots makes Gaussian elimination unstable; we need to apply a pivoting strategy to get a numerically satisfactory algorithm (cf. Section 3.2).
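The effect of a tiny pivot is easy to observe on a $2\times 2$ system. The following sketch (an illustration under our own choice of data, not an example from the book) eliminates with the small entry as pivot, and then repeats the computation with the rows exchanged; the exact solution of the chosen system is very close to $x_1 = x_2 = 1$.

```python
eps = 1e-17  # tiny (1,1) entry of the matrix

def solve_2x2(a11, a12, a21, a22, b1, b2):
    # Gaussian elimination on a 2x2 system with the (1,1) entry as pivot
    m = a21 / a11              # huge multiplier when a11 is tiny
    a22p = a22 - m * a12       # the original a22 is swamped by m*a12
    b2p = b2 - m * b1
    x2 = b2p / a22p
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

# no pivoting: the tiny entry eps is used as pivot, and x1 comes out wrong
x1, x2 = solve_2x2(eps, 1.0, 1.0, 1.0, 1.0, 2.0)
print("no pivoting:  ", x1, x2)

# partial pivoting: exchange the rows first, so the large entry is the pivot
x1p, x2p = solve_2x2(1.0, 1.0, eps, 1.0, 2.0, 1.0)
print("with pivoting:", x1p, x2p)
```

Without pivoting, the multiplier $m = 1/\varepsilon$ produces entries of size $10^{17}$ in the transformed system, and all information about the original second equation is lost in the rounding; after the row exchange, both solution components are accurate to machine precision.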

Note, however, that if we solve linear equations using orthogonal transformations (Givens rotations, or Householder reflections, see Section 3.5), then the condition number of the transformed matrices remains constant. To see this, consider the transformation

$$Ax = b \quad \Longleftrightarrow \quad QAx = Qb,$$

where $Q^{\top}Q = I$. Then the 2-norm condition number of $QA$ (as defined in Theorem 3.5) satisfies

$$\kappa(QA) = \|(QA)^{-1}\|_2\,\|QA\|_2 = \|A^{-1}Q^{\top}\|_2\,\|QA\|_2 = \|A^{-1}\|_2\,\|A\|_2 = \kappa(A),$$

since the 2-norm is invariant under multiplication with orthogonal matrices.
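This invariance is easy to confirm numerically; a minimal check (assuming NumPy is available; the random matrix and seed are our own choices, not data from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# build an orthogonal Q, e.g. via QR factorization of another random matrix
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))

print(np.linalg.cond(A))      # kappa(A) in the 2-norm
print(np.linalg.cond(Q @ A))  # the same value, up to roundoff
```

The two condition numbers agree to roughly machine precision, whereas an elimination step with a small pivot can increase the condition number of the transformed matrix arbitrarily.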

Unfortunately, as we can see in Example 2.8, it is generally difficult and laborious to verify whether an algorithm is forward stable, since the condition numbers required by (2.10) are often hard to obtain for a given problem and algorithm. A different notion of stability, based on perturbations in the initial data rather than in the results of the algorithm, will often be more convenient to use in practice.

2.6.2 Backward Stability

Because of the difficulties in verifying forward stability, Wilkinson introduced a different notion of stability, which is based on the Wilkinson principle we have already seen in Section 2.4:

The result of a numerical computation on the computer is the result of an exact computation with slightly perturbed initial data.

Definition 2.6. (Backward Stability) A numerical algorithm for a given problem $P$ is backward stable if the result $\hat{y}$ obtained from the algorithm with data $x$ can be interpreted as the exact result for slightly perturbed data $\hat{x}$, i.e., $\hat{y} = P(\hat{x})$, with

$$\frac{|\hat{x}_i - x_i|}{|x_i|} \le C\,\text{eps}, \qquad (2.12)$$


where $C$ is a constant which is not too large, and eps is the precision of the machine.

Note that in order to study the backward stability of an algorithm, one does not need to calculate the condition number of the problem itself.

Also note that a backward stable algorithm does not guarantee that the error $\hat{y} - y$ is small. However, if the condition number $\kappa$ of the problem is known, then the relative forward error can be bounded by

$$\frac{|\hat{y} - y|}{|y|} \le C\,\kappa\,\text{eps}.$$

Thus, a backward stable algorithm is automatically forward stable, but not vice versa.

Example 2.10. We wish to investigate the backward stability of an algorithm for the scalar product $x^{\top}y := x_1y_1 + x_2y_2$, computed in the natural order: first the two products, then their sum.

Using the fact that storing a real number $x$ on the computer leads to a rounded quantity $x(1+\varepsilon)$, $|\varepsilon| \le \text{eps}$, and that each multiplication and addition again leads to a roundoff error of the size of eps, we find that the numerical result of this algorithm is the exact scalar product of slightly perturbed data, where each perturbed entry differs from the original by at most two factors of the form $(1+\varepsilon)$.

Hence the backward stability condition (2.12) is satisfied with the constant $C = 2$ (to first order), and thus this algorithm is backward stable.
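The backward perturbation can be exhibited explicitly in Python (a verification sketch with our own sample data, not code from the book): we measure the actual rounding error of each floating-point operation exactly with rational arithmetic, then fold those errors into perturbed data $\hat{x}$.

```python
from fractions import Fraction as Fr

x1, y1 = 0.1, 0.3
x2, y2 = 0.7, -0.2

p1 = x1 * y1   # = x1*y1*(1 + d1) in floating point
p2 = x2 * y2   # = x2*y2*(1 + d2)
s = p1 + p2    # = (p1 + p2)*(1 + d3)

# measure the actual relative rounding errors exactly
d1 = Fr(p1) / (Fr(x1) * Fr(y1)) - 1
d2 = Fr(p2) / (Fr(x2) * Fr(y2)) - 1
d3 = Fr(s) / (Fr(p1) + Fr(p2)) - 1

# fold all errors into perturbed data x^: the computed s is then the
# EXACT scalar product of x^ with the original y
x1h = Fr(x1) * (1 + d1) * (1 + d3)
x2h = Fr(x2) * (1 + d2) * (1 + d3)
assert x1h * Fr(y1) + x2h * Fr(y2) == Fr(s)

print(float(abs(d1) + abs(d3)))  # relative perturbation of x1, at most ~2*eps
```

Each perturbed entry carries exactly two $(1+\varepsilon)$ factors, which is the constant $C = 2$ from the text.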

Example 2.11. As a second example, we consider the product of two upper triangular matrices $A$ and $B$,

$$A = \begin{pmatrix} a_{11} & a_{12} \\ 0 & a_{22} \end{pmatrix}, \qquad B = \begin{pmatrix} b_{11} & b_{12} \\ 0 & b_{22} \end{pmatrix}.$$

If we neglect, for simplicity of presentation, the roundoff error from storing the entries of the matrix in finite precision arithmetic, we obtain, using the standard algorithm for computing the product of $A$ and $B$ on the computer,

$$\begin{pmatrix} a_{11}b_{11}(1+\varepsilon_1) & \big(a_{11}b_{12}(1+\varepsilon_2) + a_{12}b_{22}(1+\varepsilon_3)\big)(1+\varepsilon_4) \\ 0 & a_{22}b_{22}(1+\varepsilon_5) \end{pmatrix},$$

where $|\varepsilon_j| \le \text{eps}$ for $j = 1, \ldots, 5$. If we define the slightly modified matrices

$$\hat{A} = \begin{pmatrix} a_{11} & a_{12}(1+\varepsilon_3)(1+\varepsilon_4) \\ 0 & a_{22}(1+\varepsilon_5) \end{pmatrix}, \qquad \hat{B} = \begin{pmatrix} b_{11}(1+\varepsilon_1) & b_{12}(1+\varepsilon_2)(1+\varepsilon_4) \\ 0 & b_{22} \end{pmatrix},$$

their product is exactly the matrix we obtained by computing $AB$ numerically.

Hence the computed product is the exact product of slightly perturbed $A$ and $B$, and this algorithm is backward stable.
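This backward error identity can be verified mechanically (a sketch with our own sample entries, not code from the book): we measure the five rounding errors $\varepsilon_1, \ldots, \varepsilon_5$ exactly and check in rational arithmetic that $\hat{A}\hat{B}$ reproduces the computed product.

```python
from fractions import Fraction as Fr

a11, a12, a22 = 0.3, 0.1, 0.7
b11, b12, b22 = 0.2, 0.9, 0.4

# computed product of the two upper triangular matrices
c11 = a11 * b11
t1, t2 = a11 * b12, a12 * b22
c12 = t1 + t2
c22 = a22 * b22

# exact relative rounding error of each floating-point operation
e1 = Fr(c11) / (Fr(a11) * Fr(b11)) - 1
e2 = Fr(t1) / (Fr(a11) * Fr(b12)) - 1
e3 = Fr(t2) / (Fr(a12) * Fr(b22)) - 1
e4 = Fr(c12) / (Fr(t1) + Fr(t2)) - 1
e5 = Fr(c22) / (Fr(a22) * Fr(b22)) - 1

# perturbed matrices A^ and B^, exactly as in the text
A11, A12, A22 = Fr(a11), Fr(a12) * (1 + e3) * (1 + e4), Fr(a22) * (1 + e5)
B11, B12, B22 = Fr(b11) * (1 + e1), Fr(b12) * (1 + e2) * (1 + e4), Fr(b22)

# the exact product of A^ and B^ equals the computed product, entry by entry
assert A11 * B11 == Fr(c11)
assert A11 * B12 + A12 * B22 == Fr(c12)
assert A22 * B22 == Fr(c22)
print("computed product = exact product of slightly perturbed A and B")
```

Each $\varepsilon_j$ comes out no larger than eps, so the perturbed entries satisfy (2.12) with a small constant $C$.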

The notion of backward stability will prove to be extremely useful in Chapter 3, where we use the same type of analysis to show that Gaussian elimination is in fact stable when combined with a pivoting strategy, such as complete pivoting.
