
FORTRAN PROGRAM USING THE MODIFIED REGULA FALSI ALGORITHM 33

OUTPUT FOR EXAMPLE 3.3

EXERCISES

3.3-1 Verify that the iteration

will converge to the solution of the equation

only if, for some n0, all iterates xn with n > n0 are equal to 2, i.e., only “accidentally.”

3.3-2 For each of the following equations determine an iteration function (and an interval I) so that the conditions of Theorem 3.1 are satisfied (assume that it is desired to find the smallest positive root):

3.3-3 Write a program based on Algorithm 3.6 and use this program to calculate the smallest roots of the equations given in Exercise 3.3-2.
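Algorithm 3.6 itself is not reproduced in this excerpt; the following sketch (in Python rather than the book's FORTRAN, with illustrative names, tolerances, and the sample equation x = cos x standing in for those of Exercise 3.3-2) shows the shape such a program might take.

```python
import math

def fixed_point(g, x0, xtol=1e-8, maxiter=100):
    """Fixed-point iteration: repeat x <- g(x) until two successive
    iterates agree to within xtol, or give up after maxiter steps."""
    x = x0
    for _ in range(maxiter):
        x_next = g(x)
        if abs(x_next - x) < xtol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# Sample use on x = cos x (an illustrative equation, not one from Exercise 3.3-2):
root = fixed_point(math.cos, 0.5)         # ≈ 0.7390851332
```

The stopping criterion on successive iterates is one common choice; a production routine would also bound the error using the ratio of successive differences, as discussed in Sec. 3.4.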

3.3-4 Determine the largest interval I with the following property: For all x0 in I, fixed-point iteration with the iteration function

converges when started with x0. Are Assumptions 3.1 and 3.3 satisfied for your choice of I? What numbers are possible limits of this iteration? Can you think of a good reason for using this particular iteration? Note that the interval depends on the constant a.

3.3-5 Same as Exercise 3.3-4, but with g(x) = (x + a/x)/2.

3.3-6 The given function satisfies Assumption 3.1 for all x, and satisfies

Assumption 3.3 on any finite interval, yet fixed-point iteration with this iteration function does not converge. Why?

3.3-7 The equation e^x − 4x^2 = 0 has a root between x = 4 and x = 5. Show that we cannot find this root using fixed-point iteration with the "natural" iteration function

g(x) = (1/2)e^{x/2}

Can you find an iteration function which will correctly locate this root?

3.3-8 The equation e^x − 4x^2 = 0 also has a root between x = 0 and x = 1. Show that the iteration function g(x) = (1/2)e^{x/2} will converge to this root if x0 is chosen in the interval [0, 1].

3.4 CONVERGENCE ACCELERATION FOR FIXED-POINT ITERATION

In this section, we investigate the rate of convergence of fixed-point iteration and show how information about the rate of convergence can be used at times to accelerate convergence.

We assume that the iteration function g(x) is continuously differentiable and that, starting with some point x0, the sequence x1, x2, . . . generated by fixed-point iteration converges to some point ξ. This point is then a fixed point of g(x), and we have, by (3.19), that

(3.21) xn+1 − ξ = g(xn) − g(ξ) = g'(ηn)(xn − ξ)

for some ηn between ξ and xn, n = 1, 2, . . . . Since xn → ξ, it then follows that ηn → ξ; hence g'(ηn) → g'(ξ),

g’(x) being continuous, by assumption. Consequently,

(3.22) lim n→∞ en+1/en = g'(ξ)

where en = xn − ξ. Hence, if g'(ξ) ≠ 0, then for large enough n,

(3.23) en+1 ≈ g'(ξ)en

i.e., the error en+1 in the (n + 1)st iterate depends (more or less) linearly on the error en in the nth iterate. We therefore say that x0, x1, x2, . . . converges linearly to ξ.
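This behavior is easy to observe numerically. The following small experiment uses the illustrative iteration function g(x) = cos x (an assumption for demonstration, not an example from the text), whose fixed point is ξ ≈ 0.7390851332 with g'(ξ) = −sin ξ ≈ −0.6736.

```python
import math

g = math.cos
xi = 0.7390851332151607   # fixed point of cos x, computed beforehand
x = 0.5
ratios = []
for n in range(25):
    x_next = g(x)
    ratios.append((x_next - xi) / (x - xi))  # the error ratio e(n+1)/e(n)
    x = x_next
# as n grows, the ratios settle near g'(ξ) ≈ -0.6736,
# exhibiting the linear convergence described by (3.22)-(3.23)
```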

Now note that we can solve (3.21) for ξ. For, if g'(ηn) ≠ 1, then

(3.24) xn+1 − ξ = g'(ηn)(xn − ξ)

gives

ξ = (xn+1 − g'(ηn)xn)/(1 − g'(ηn))

Therefore

(3.25) ξ = xn+1 + [g'(ηn)/(1 − g'(ηn))](xn+1 − xn)

Of course, we do not know the number g'(ηn). But we know that the ratio

(3.26) rn = (xn+1 − xn)/(xn − xn−1) = (g(xn) − g(xn−1))/(xn − xn−1) = g'(ζn)

for some ζn between xn and xn−1, by the mean-value theorem for derivatives. For large enough n, therefore, we have rn ≈ g'(ηn),

and then the point

(3.27) x̂n = xn+1 + [rn/(1 − rn)](xn+1 − xn)

should be a very much better approximation to ξ than is xn or xn+1.

This can also be seen graphically. In effect, we obtained (3.27) by solving (3.24) for ξ after replacing g'(ηn) by the number g[xn−1, xn] and calling the solution x̂n. Thus

x̂n = g(xn) + g[xn−1, xn](x̂n − xn)

Since xn+1 = g(xn), this shows that x̂n is a fixed point of the straight line

s(x) = g(xn) + g[xn−1, xn](x − xn)

This we recognize as the linear interpolant to g(x) at xn−1, xn. If now the slope of g(x) varies little between xn−1 and ξ, that is, if g(x) is approximately a straight line between xn−1 and ξ, then the secant s(x) should be a very good approximation to g(x) in that interval; hence the fixed point x̂n of the secant should be a very good approximation to the fixed point ξ of g(x); see Fig. 3.5.

In practice, we will not be able to prove that any particular xn is “close enough” to ξ to make x̂n a better approximation to ξ than is xn or xn+1. But we can test the hypothesis that xn is “close enough” by checking the ratios rn−1, rn. If the ratios are approximately constant, we accept the hypothesis that the slope of g(x) varies little in the interval of interest; hence we believe that the secant s(x) is a good enough approximation to g(x) to make x̂n a very much better approximation to ξ than is xn. In particular, we then accept |x̂n − xn| as a good estimate for the error |en|.
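One step of the extrapolation (3.27) can be carried out directly. The iteration function g(x) = cos x below is again a stand-in chosen for illustration; it is not an example from the text.

```python
import math

g = math.cos                              # stand-in iteration function
x0 = 0.5
x1, x2 = g(x0), g(g(x0))                  # two fixed-point steps
rn = (x2 - x1) / (x1 - x0)                # difference ratio rn of (3.26)
xhat = x2 + rn / (1 - rn) * (x2 - x1)     # extrapolated point (3.27)
# xhat lies much closer to the fixed point ξ ≈ 0.7391 than x2 does
```

Even after only two iterations, the extrapolated point reduces the error by roughly an order of magnitude here, because g is nearly straight on the interval involved.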


Figure 3.5 Convergence acceleration for fixed-point iteration.

Example 3.4 The equation

has a root ξ. We choose the iteration function

(3.28)

and starting with x0 = 0, generate the sequence x1, x2, . . . by fixed-point iteration.

Some of the xn are listed in the table below. The sequence seems to converge, slowly but surely, to ξ. We also calculate the sequence of ratios rn. These too are listed in the table.

Specifically, we find

which we think is “sufficiently” constant to conclude that, for all n beyond this point, x̂n is a better approximation to ξ than is xn. This is confirmed in the table, where we have also listed the x̂n.

Whether or not any particular x̂n is a better approximation to ξ than is xn, one can prove that the sequence x̂1, x̂2, . . . converges faster to ξ than does the original sequence x0, x1, . . . ; that is,

(3.29) x̂n − ξ = o(xn − ξ) as n → ∞

[See Sec. 1.6 for the definition of o( ).]

This process of deriving from a linearly converging sequence x0, x1, x2, . . . a faster converging sequence x̂1, x̂2, . . . by (3.27) is usually called Aitken’s Δ² process. Using the abbreviations

Δxn = xn+1 − xn        Δ²xn = Δxn+1 − Δxn

from Sec. 2.6, (3.27) can be expressed in the form

(3.30) x̂n = xn+1 − (Δxn)²/Δ²xn−1

therefore the name “Δ² process.” This process is applicable to any linearly convergent sequence, whether generated by fixed-point iteration or not.

Algorithm 3.7: Aitken’s Δ² process Given a sequence x0, x1, x2, . . . converging to ξ, calculate the sequence x̂1, x̂2, . . . by (3.30).
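A direct transcription of Algorithm 3.7 might look as follows (Python is used for illustration; the test sequence comes from the stand-in iteration g(x) = cos x and is not the book's example):

```python
import math

def aitken(seq):
    """Aitken's Δ² process (3.30): x̂n = xn+1 - (Δxn)²/Δ²xn−1,
    formed from each consecutive triple (xn−1, xn, xn+1)."""
    out = []
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        d2 = c - 2 * b + a            # second difference Δ²xn−1
        out.append(c - (c - b) ** 2 / d2 if d2 != 0 else c)
    return out

# A linearly convergent test sequence: fixed-point iterates of g(x) = cos x,
# which converge to ξ ≈ 0.7390851332 (an illustrative choice).
xs = [0.5]
for _ in range(12):
    xs.append(math.cos(xs[-1]))
xhats = aitken(xs)
# the extrapolated sequence xhats closes in on ξ much faster than xs does
```

The guard on a vanishing second difference matters in practice: near convergence, Δ²xn−1 can underflow to zero and the quotient in (3.30) would otherwise divide by zero.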

If the sequence x0, x1, x2, . . . converges linearly to ξ, that is, if en+1 ≈ Ken for some constant K with |K| < 1, then the derived sequence x̂1, x̂2, . . . converges faster to ξ, in the sense of (3.29).

Furthermore, if, starting from a certain k on, the sequence of difference ratios rk, rk+1, . . . is approximately constant, then x̂k can be assumed to be a better approximation to ξ than is xk. In particular, |x̂k − xk| is then a good estimate for the error |ek|.

If, in the case of fixed-point iteration, we decide that a certain x̂k is a very much better approximation to ξ than is xk, then it is certainly wasteful to continue generating xk+1, xk+2, etc. It seems more reasonable to start fixed-point iteration afresh with x̂k as the initial guess. This leads to the following algorithm.

Algorithm 3.8: Steffensen iteration Given the iteration function g(x)