
4.4.4 Convolutions Using FFT

Let {y_j}_{j∈Z} and {z_j}_{j∈Z} be two n-periodic sequences. The convolution of y and z is also an n-periodic sequence whose kth element is defined as

(y ∗ z)_k = Σ_{j=0}^{n−1} y_{k−j} z_j.   (4.83)

The convolution has many important applications in signal processing. As an example, let {y_j}_{j∈Z} be a given n-periodic sound signal to which we would like to add an echo effect. It is easiest to see how an echo works by looking at how it acts on the unit impulse δ, i.e., when

δ_k = { 1,  k divisible by n,
        0,  else.

When one adds an echo to δ, the impulse is repeated after a fixed delay d, but with an attenuation factor α after each repetition. In other words, the transformed signal z = Hδ (where H denotes the echo operator) looks like

z_k = { α^m,  k = md, m = 0, 1, . . . , n/d,
        0,    0 ≤ k ≤ n − 1, k ≠ md,
and z_{n+k} = z_k.


Figure 4.9. Top: unit impulse δ for 16-periodic sequences. Middle: shifted impulse Eδ. Bottom: impulse response of the echo operator H, with delay d = 4 and attenuation factor α = 1/2.

The signal z is known as the impulse response of H, since it is obtained by applying H to the unit impulse δ; see Figure 4.9.
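For concreteness, the following Matlab lines generate one period of this impulse response directly from the formula above; this is only an illustrative sketch, with n, d and alpha chosen here to match Figure 4.9.

n = 16; d = 4; alpha = 0.5;          % period, delay and attenuation as in Figure 4.9
z = zeros(1,n);                       % one period of the n-periodic sequence z
for m = 0:floor((n-1)/d)
  z(m*d+1) = alpha^m;                 % z_k = alpha^m for k = m*d (Matlab indices are 1-based)
end
stem(0:n-1,z)                         % reproduces the bottom panel of Figure 4.9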

We now consider Hy for a general signal y. Let us introduce the shifting operator E satisfying (Ez)_k = z_{k−1}, i.e., it takes a sequence z and shifts it to the right by one position (see Figure 4.9). Then a general sequence y can be written as a sum of shifted impulses

y = Σ_{j=0}^{n−1} y_j E^j δ.

If H is linear and shift invariant (or time invariant, in signal processing terminology), i.e., HE = EH, then

Hy = H ( Σ_{j=0}^{n−1} y_j E^j δ ) = Σ_{j=0}^{n−1} y_j E^j Hδ = Σ_{j=0}^{n−1} y_j E^j z,

where z = Hδ is the impulse response of H. The kth element of Hy then satisfies

(Hy)_k = Σ_{j=0}^{n−1} y_j (E^j z)_k = Σ_{j=0}^{n−1} y_j z_{k−j} = (y ∗ z)_k.

In other words, Hy is obtained by convolving y with the impulse response z, which completely characterizes H. Since H is a linear operator on the space of n-periodic sequences, one can also write H in matrix form:

Hy = [ z_0      z_{n−1}  · · ·  z_1
       z_1      z_0      · · ·  z_2
        ⋮                 ⋱      ⋮
       z_{n−1}  z_{n−2}  · · ·  z_0 ] y.

The above matrix has constant values along each diagonal that “wraps around” when it hits the right edge; such matrices are called circulant. Every circulant matrix can be written as a convolution, and vice versa. The fundamental relationship between convolutions and discrete Fourier transforms is given by the following lemma.

Lemma 4.4. Let y and z be two periodic sequences. Then F_n(y ∗ z) = n (F_n(y) ⊙ F_n(z)), where ⊙ denotes pointwise multiplication.

Note that the lemma implies that the convolution operator is symmetric, i.e., y ∗ z = z ∗ y, despite the apparent asymmetry in the definition.

A straightforward computation of the convolution using (4.83) requires O(n²) operations, but the above lemma suggests the following algorithm using the FFT:

Algorithm 4.12.

Computing the convolution of two n-periodic sequences y and z

1. Use the FFT to compute ŷ = F_n y and ẑ = F_n z.

2. Compute ŵ = n (ŷ ⊙ ẑ).

3. Use the IFFT to compute w = F_n^{−1} ŵ.

Since steps 1 and 3 each cost O(n log n) and step 2 costs O(n), the overall cost is O(n log n), which is much lower than O(n²) when n is large.
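As a hedged illustration (not part of the book's algorithms), the following Matlab lines carry out Algorithm 4.12 with the built-in fft/ifft and check the result against the O(n²) definition (4.83). Since Matlab's fft does not include the 1/n factor carried by F_n here, the factor n of step 2 cancels.

n = 8;
y = randn(1,n); z = randn(1,n);       % one period each of two sample n-periodic sequences
w = ifft(fft(y).*fft(z));             % steps 1-3 of Algorithm 4.12 in Matlab's normalization

wdir = zeros(1,n);                    % direct O(n^2) evaluation of (4.83) as a sanity check
for k = 0:n-1
  for j = 0:n-1
    wdir(k+1) = wdir(k+1) + y(mod(k-j,n)+1)*z(j+1);
  end
end
disp(norm(w-wdir))                    % of the order of machine precision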

4.5 Problems

Problem 4.1. Interpolate Runge’s function

f(x) = 1 / (1 + x²)

in the interval [−5, 5], using Chebyshev nodes (see Equation (11.52)). Compare the resulting interpolation polynomial with the one with the equidistant nodes of Figure 4.1.

Problem 4.2. Rewrite Algorithm 4.4 to compute the diagonal of the divided difference table using only O(n) storage. Hint: To see which entries can be overwritten at which stage, consider the diagram below:

x0 f[x0]

x1 f[x1]→f[x0, x1]

x2 f[x2]→f[x1, x2]→f[x0, x1, x2]

x3 f[x3]→f[x2, x3]→f[x1, x2, x3]→f[x0, . . . , x3]

Here, f[x3] can be overwritten by f[x2, x3], since it is no longer needed after the latter has been computed.
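One possible realization of this hint is sketched below (only a sketch: the variable names are not from the book, and the loops run columnwise rather than rowwise as in the diagram). A single coefficient vector is overwritten in place.

% x, y: vectors of the n+1 nodes x_0,...,x_n and values f(x_0),...,f(x_n)
c = y;                                 % c is overwritten in place, O(n) storage
m = length(x);
for k = 1:m-1
  for i = m:-1:k+1                     % bottom-up, so entries are still available when read
    c(i) = (c(i)-c(i-1))/(x(i)-x(i-k));
  end
end
% afterwards c(i+1) = f[x_0,...,x_i], the diagonal of the divided difference table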

Problem 4.3. Given the “measured points”

x=[0:0.2:7];                       % generate interpolation points
y=exp(cos(x))+0.1*rand(size(x));   % with some errors

write a Matlab function

function [X,Y,n,rr]=fitpoly(x,y,delta)

% FITPOLY computes an approximating polynomial

% [X,Y,n,rr]=fitpoly(x,y,delta) computes an approximating

% polynomial of degree n to the points such that the norm of

% the residual rr <= delta. (X,Y) are interpolated points for

% plotting.

which computes the best polynomial in the least squares sense of lowest degree, using the orthogonal basis, such that the residual satisfies ‖r‖ ≤ δ.

Experiment with some values of delta. To plot the polynomial, take 10 times more equally spaced interpolation points than the given points x to evaluate the approximating polynomial. Store the nodes of the interpolation points in X, compute the corresponding values of the best polynomial and store them in the vector Y. You should then be able to plot the points and the approximating polynomial by

plot(x,y,'o');
hold on
plot(X,Y)

Compare your solution with the rather simple Matlab built-in function polyfit by using the degree n computed by fitpoly.

Problem 4.4. Solving a nonlinear equation with inverse interpolation (see also Chapter 5).

Consider the nonlinear scalar equation f(x) = 0. Starting with two function values f(x0) and f(x1) (preferably bracketing the solution) we compute


the following Aitken–Neville scheme for the interpolation value z = 0:

f(x1)  x1
f(x2)  x2  x3 := T22
f(x3)  x3  T32  x4 := T33
f(x4)  x4  T42  T43  x5 := T44
 ···   ···  ···  ···  ···

The extrapolated value in the diagonal x_{i+1} := T_{ii} is written as new value T_{i+1,1} in the first column of the scheme. Then we compute the function value f(x_{i+1}) and the new row, i.e., the elements T_{i+1,2}, . . . , T_{i+1,i+1}. If the scheme converges (use good starting values!), then the diagonal entries converge quadratically to a simple zero of f.

Write a program for inverse interpolation and solve the equations a) x − cos x = 0, b) x = e^{sin x}.
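A minimal sketch of the scheme described above, applied to equation a), is given below; it is not meant as the full solution of Problem 4.4, and the starting values and iteration count are arbitrary choices.

f  = @(x) x - cos(x);                  % equation a)
x  = [0.5; 1.0];                       % two starting values, ideally bracketing the zero
fx = f(x);
T  = zeros(12); T(1,1) = x(1); T(2,1) = x(2);
for i = 2:10
  for j = 2:i                          % new row of the inverse interpolation scheme (z = 0)
    T(i,j) = T(i,j-1) + (0-fx(i))*(T(i,j-1)-T(i-1,j-1))/(fx(i)-fx(i-j+1));
  end
  x(i+1)   = T(i,i);                   % diagonal entry becomes the next iterate
  T(i+1,1) = x(i+1);
  fx(i+1)  = f(x(i+1));
  if abs(fx(i+1)) < 1e-14, break, end
end
x(end)                                 % approximates the zero of x - cos(x)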

Problem 4.5. Assume you need to compute not only the function value P_n(z) but also the derivative P_n′(z) of an interpolation polynomial.

Investigate what would be the best way to compute P_n′(z). Consider the following representations of the interpolation polynomial: Lagrange, Barycentric, Newton, Orthogonal Polynomials, Aitken–Neville.

Problem 4.6. Use extrapolation to compute the derivative f′(1) for

f(x) = x² ln(x³ + 1) eˣ / (x³ + sin x² + ½ (sin x + cos² x + 3) + ln x).

Problem 4.7. Extrapolation of π. We will approximate the circumference of the unit circle by regular polygons. The circumference of a regular polygon with n corners on the unit circle is

U_n = 2n sin(π/n).   (4.84)

We introduce the variable

h = 1/n

and the function

T(h) = U_n / 2 = n sin(π/n) = sin(hπ)/h.   (4.85)

The Taylor series of T(h) is

T(h) = π − (π³/3!) h² + (π⁵/5!) h⁴ ∓ · · ·   (4.86)

Because of lim_{h→0} T(h) = π, we can extrapolate π from the half circumferences of some regular polygons. Only even powers of h occur; therefore we can extrapolate using (4.39).
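The extrapolation itself can be sketched as follows, independently of the table requested in the problem below: a Neville-type tableau in the variable h², since only even powers occur. The chosen sequence of polygon sizes is an assumption (the book's table of elementary circumferences is not reproduced here), and T(h) is simply evaluated from (4.85).

nvals = [2 3 4 6 8 12 16];             % assumed polygon sizes
h = 1./nvals;
E = zeros(numel(h)); E(:,1) = sin(pi*h(:))./h(:);   % T(h) = U_n/2, see (4.85)
for j = 2:numel(h)                     % extrapolation to h = 0, even powers of h only
  for i = j:numel(h)
    E(i,j) = E(i,j-1) + (E(i,j-1)-E(i-1,j-1))/((h(i-j+1)/h(i))^2-1);
  end
end
disp(E(end,end)-pi)                    % error of the extrapolated value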

Write a program and extrapolate π using the following table that contains the circumferences of polygons which can be computed by elementary mathematics:

Problem 4.8. (Euler–Mascheroni constant). The sequence

s_n = 1 + 1/2 + 1/3 + · · · + 1/n − ln n

converges. The limit has already been computed by Leonhard Euler and is denoted by γ. Compute an approximation for γ by extrapolation.

Problem 4.9. Compute the sum of the following series by extrapolation:

a)

Hint: use h = 1/n and extrapolate the limit from the partial sums

T(h) = s_n = Σ_{k=0}^{n} a_k.

Choose the sequence (4.36) for h_i.

Problem 4.10. Compute a table of the function

f(x) = ∏_{n=1}^{∞} cos(x/n)

for x = 0, 0.1, . . . , 1. Extrapolate each function value from partial products.

Problem 4.11. The function f(x) = sin x is approximated by a polynomial of degree three P₃(x) in such a way that the function values and derivatives match for x = 0 and for x = π.

Compute the polynomial and determine the maximal interpolation error in the interval (0, π).

Problem 4.12. Compute a polynomial of degree three that interpolates the following data:

x       2    3
f(x)    1    2
f′(x)   0.5  2


Compute the polynomial in two ways:

1. Make an ansatz with unknown coefficients and solve the resulting linear system.

2. Use Equation (4.49), expand and order the powers so that the result can be compared with the coefficients above.

Problem 4.13. The following table contains equidistant function values and derivatives of a function f(x).

x       h    2h   . . .  nh
f(x)    y1   y2   . . .  yn
f′(x)   y1′  y2′  . . .  yn′

Compute an approximation of the integral

∫_h^{nh} f(x) dx

by interpolating the data with a cubic spline function and by integrating the spline function.

What quadrature rule is obtained?

Problem 4.14. Write a Matlab program to interpolate with a defective spline. Use it to plot a spline through the points

x   0.0  0.5  1.0   1.5  2.0  2.5  3.0  3.5  4.0
y   2.0  1.2  0.15  1.1  0.5  2.4  2.9  0.0  1.0

Program the three variants for the boundary values of the derivatives. For periodic boundaries set y(n) = y(1).

Problem 4.15. Write a Matlab function to compute the derivatives of a periodic defective spline function:

% function ys=DerivativesPeriodicSpline(x,y)

% DERIVATIVESPERIODICSPLINE derivative of a periodic spline

% ys=DerivativesPeriodicSpline(x,y) computes the

% derivatives for the defective periodic spline passing

% through the data (x,y). It is assumed that y(1)=y(n).

The equations

x = sin t,
y = sin(2t − π/4)   (4.87)

define a closed curve. Compute n = 8 points of (4.87) with t=linspace(0,2*pi,8), interpolate them by a spline curve and compare the result with the exact curve.

Problem 4.16. Verify Equation (4.49). Hint: make the ansatz

Q_i(t) = a + bt + ct² + dt³

and determine the coefficients a, b, c and d by solving the Equations (4.48). Maple can help you.

Problem 4.17. Compute the difference scheme (4.50) algebraically and verify that the expression obtained with Equation (4.51) is the same as in Equation (4.49).

Problem 4.18. Modify Algorithm 4.11 to compute the forward FFT for vectors of length 2^m. Suggestion: To implement the division by n cleanly, first create a driver program myFFT that calls the recursive algorithm myFFT_rec, then divide the result returned by myFFT_rec by n in the driver program.
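Algorithm 4.11 is not reproduced here, so the following is only a plausible radix-2 sketch of the suggested driver/recursion split; the names myFFT and myFFT_rec come from the problem statement, while the bodies and the sign/normalization convention (forward transform with the factor 1/n) are assumptions.

function c = myFFT(y)
% MYFFT forward FFT of a vector of length n = 2^m
%   The recursion works without the 1/n factor; the driver divides once.
c = myFFT_rec(y(:))/length(y);
end

function c = myFFT_rec(y)
% MYFFT_REC unnormalized radix-2 FFT (recursive)
n = length(y);
if n == 1
  c = y;
else
  ce = myFFT_rec(y(1:2:n-1));          % even-indexed samples y_0, y_2, ...
  co = myFFT_rec(y(2:2:n));            % odd-indexed samples  y_1, y_3, ...
  w  = exp(-2i*pi*(0:n/2-1)'/n);       % twiddle factors
  c  = [ce+w.*co; ce-w.*co];
end
end

With these conventions, myFFT(y) agrees with fft(y)/length(y) for vectors whose length is a power of two.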

Problem 4.19. Generalize Algorithm 4.11 to handle vectors of length r^m for any integer r ≥ 2. Hint: To obtain a recursive algorithm, split the input vector v into r subvectors v_0, . . . , v_{r−1}, where v_k contains all components at positions equal to k mod r.

Problem 4.20. For a set of samples y = (y_0, . . . , y_{2n−1}), consider the problem of calculating the trigonometric interpolating polynomial p_{2n}(x) with p_{2n}(x_j) = y_j, x_j = jπ/n, j = 0, . . . , 2n−1. Suppose the data is perturbed to ỹ = (ỹ_0, . . . , ỹ_{2n−1}), where ỹ_k = y_k(1 + ε_k) with |ε_k| ≤ ε. The interpolating polynomial then becomes

p̃_{2n}(x) = Σ_{|k|≤n} c̃_k e^{ikx},

so that p̃_{2n}(x_j) = ỹ_j. The condition number of the problem is defined to be the smallest constant κ > 0 satisfying

‖p̃_{2n} − p_{2n}‖_{L²(0,2π)} ≤ κ · ε ‖y‖₂,

where the L² norm is defined as

‖f‖²_{L²(0,2π)} = ∫₀^{2π} |f(x)|² dx.

(a) We have seen that y = Vc, where V*V = 2nI. Conclude that

‖c̃ − c‖₂ ≤ (1/√(2n)) ‖ỹ − y‖₂ ≤ ε ‖y‖₂.

(b) Using the orthogonality relation

(1/(2π)) ∫₀^{2π} e^{ikx} e^{−ilx} dx = δ_{kl},


deduce that

‖p̃_{2n} − p_{2n}‖_{L²(0,2π)} ≤ √(2π) ‖c̃ − c‖₂ ≤ ε √(2π) ‖y‖₂.

What can one say about the conditioning of trigonometric interpolation?

Problem 4.21. Let f(x) be the 2π-periodic function shown in Figure 4.8, which satisfies

f(x) = π − |x|  for |x| ≤ π.

(a) Calculate its Fourier coefficients f̂(k) for all k. Hint: you should get

f̂(k) = { π/2,       k = 0,
          2/(k²π),   k odd,
          0,         else.

(b) Using either myFFT or the built-in Matlab function fft, calculate f̂_n(k) for several n and k, and plot the difference |f̂_n(k) − f̂(k)| as a function of n for k = 1. What is the decay rate?

(c) Verify Lemma 4.3 numerically for n = 8 and k = 1.

(d) By letting x = 0 in f(x) = Σ_{k∈Z} f̂(k) e^{ikx}, show that

Σ_{k=0}^{∞} 1/(2k+1)² = π²/8.

Problem 4.22. Write a Matlab function to evaluate the trigonometric interpolant p_n(x) for a given set of samples y:

function yy=TrigonometricInterpolation(y,xx)

% TRIGONOMETRICINTERPOLATION trigonometric interpolating polynomial

% yy=TrigonometricInterpolation(y,xx) computes p(x), the trigonometric

% interpolating polynomial through (x,y), x(j)=2*pi*(j-1)/length(y).

% It returns yy, the values of p evaluated at xx.

To test your program, use

f(x) = 10 cos(x) − 3 sin(3x) + 5 cos(3x) − 15 cos(10x)

and plot the maximum error max |p_n(x) − f(x)| for n = 4, 8, 16, 32, 64. Verify that the maximum error is close to machine precision for n = 32, 64. What is the reason behind this?

Problem 4.23. Write a Matlab function to add echoes to a given signal y:

function y=Echo(x, Fs, d, alpha)

% ECHO produces an echo effect

% y=Echo(x,Fs,d,alpha) adds an echo to the sound vector x with a

% delay of d seconds. 0 < alpha < 1 is the strength of the echo

% and Fs is the sampling rate in Hz.

To test your program use one of the sound signals already available in Matlab (chirp, gong, handel, laughter, splat and train). For example, to load the train signal, use

load train

which loads the variable y containing the actual signal and Fs, the sampling rate. To play the signal, use

sound(y,Fs)

Play the original as well as the transformed signals (with echoes) and compare.

Problem 4.24. One way of compressing a sound signal is to remove frequency components that have small coefficients, i.e., for a relative threshold τ, we transform a given signal y of length n into w, also of length n, whose discrete Fourier coefficients ŵ_n satisfy

ŵ_n(k) = { ŷ_n(k),  |ŷ_n(k)| > τ · max_j |ŷ_n(j)|,
           0,        else.

Write a Matlab function that implements this compression scheme and returns a sparse version of the n-vector ŵ_n:

function w=Compress(y,thres)

% COMPRESS removes small frequency components and compresses the signal

% w=Compress(y,thres) removes all the frequencies whose amplitude is

% less than thres times the maximum amplitude and compresses the

% resulting sparse signal.

To play the compressed sound, use the command

sound(ifft(full(w)),Fs)

Compare the sound quality and the amount of storage required by the original and compressed sound signals for different values of τ. To compare the memory usage between the uncompressed and compressed signals, use the command whos followed by a list of variable names.

Problem 4.25. Let

H = [ z_0      z_{n−1}  · · ·  z_1
      z_1      z_0      · · ·  z_2
       ⋮                 ⋱      ⋮
      z_{n−1}  z_{n−2}  · · ·  z_0 ]

be a circulant matrix. Show that every vector of the form

v = (1, e^{2πik/n}, e^{4πik/n}, . . . , e^{2πi(n−1)k/n})

is an eigenvector of H and compute its corresponding eigenvalue. What is the relationship between the eigenvalues of H and the discrete Fourier transform of z = (z_0, . . . , z_{n−1})?

Chapter 5. Nonlinear Equations

Nonlinear equations are solved as part of almost all simulations of physical processes. Physical models that are expressed as nonlinear partial differential equations, for example, become large systems of nonlinear equations when discretized. Authors of simulation codes must either use a nonlinear solver as a tool or write one from scratch.

Tim Kelley, Solving Nonlinear Equations with Newton’s Method, SIAM, 2003.

Prerequisites: This chapter requires Sections 2.5 (conditioning) and 2.8 (stopping criteria), Chapter 3 (linear equations), as well as polynomial interpolation (§4.2) and extrapolation (§4.2.8).

Solving a nonlinear equation in one variable means: given a continuous function f on the interval [a, b], we wish to find a value s ∈ [a, b] such that f(s) = 0. Such a value s is called a zero or root of the function f, or a solution of the equation f(x) = 0. For a multivariate function f : Rⁿ → Rⁿ, solving the associated system of equations means finding a vector s ∈ Rⁿ such that f(s) = 0. After an introductory example, we show in Section 5.2 the many techniques for finding a root of a scalar function: the fundamental bisection algorithm, fixed point iteration including convergence rates, and the general construction of one step formulas, where we naturally discover Newton’s method¹, and also higher order variants. We also introduce Aitken acceleration and the ε-algorithm, and show how multiple zeros have an impact on the performance of root finding methods. Multistep iteration methods and how root finding algorithms can be interpreted as dynamical systems are also contained in this section. Section 5.3 is devoted to the special case of finding zeros of polynomials. In Section 5.4, we leave the scalar case and consider nonlinear systems of equations, where fixed point iterations are the only realistic methods for finding a solution. The main workhorse for solving nonlinear systems is then Newton’s method², and variants thereof.

¹Also called Newton–Raphson, since Newton wrote down the method only for a polynomial in 1669, while Raphson, a great admirer of Newton, wrote it as a fully iterative scheme in 1690.

²First written for a system of 2 equations by Simpson in 1740.


5.1 Introductory Example

We use Kepler’s Equation as our motivating example: consider a two-body problem like a satellite orbiting the earth or a planet revolving around the sun.

Kepler discovered that the orbit is an ellipse and the central body F (earth, sun) is in a focus of the ellipse. If the ellipse is eccentric (i.e., not a circle), then the speed of the satellite P is not uniform: near the earth it moves faster than far away. Figure 5.1 depicts the situation. Kepler also discovered

Figure 5.1. Satellite P orbiting the earth F.

the law of this motion by carefully studying data from the observations by Tycho Brahe. It is called Kepler’s second law and says that the travel time is proportional to the area swept by the radius vector measured from the focus where the central body is located, see Figure 5.2. We would like to use this

Figure 5.2. Kepler’s second law: if F₁ = F₂ then t₂ = 2t₁.

law to predict where the satellite will be at a given time.

Assume that at t = 0 the satellite is at point A, the perihelion of the ellipse, nearest to the earth. Assume further that the time for completing a full orbit is T. The question is: where is the satellite at time t (for t < T)?

We need to compute the area F AP that is swept by the radius vector as a function of the angle E (see Figure 5.3). E is called the eccentric anomaly. The equation of the ellipse with semi-axes a and b is

x(E) = a cos E,   y(E) = b sin E.

To compute the infinitesimal area dI between two nearby radius vectors, we will use the cross product, see Figure 5.4. The infinitesimal vector of motion
