
From the document in C++ and MPI (Page 128-134)


3.1.5 Chebyshev Polynomials

Spectral approximations, and more specifically polynomial approximations using Chebyshev polynomials, are a very effective means of representing relatively smooth data and also numerical solutions of partial differential equations. Just as before, we can write our polynomial approximation pN(x) as a truncated series of the form

f(x) ≈ p_N(x) = Σ_{k=0}^{N} a_k T_k(x),

where T_k(x) is the kth Chebyshev polynomial. The Chebyshev polynomial series converges very fast; the polynomials are determined from the recursion relation:

T_0(x) = 1; T_1(x) = x; T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x), n ≥ 1. (3.7)

Software Suite

The following code implements this recursive formula; plots of T_k(x), k = 0, 1, 2, 3, 4 are shown in figure 3.6.

double ChebyshevPoly(int degree, double x){
  double value;

  switch(degree){
  case 0:
    value = 1.0;
    break;
  case 1:
    value = x;
    break;
  default:
    value = 2.0*x*ChebyshevPoly(degree-1,x) - ChebyshevPoly(degree-2,x);
  }

  return value;
}
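Note that each call of ChebyshevPoly for degree ≥ 2 spawns two further recursive calls, so its cost grows exponentially with the degree. As a sketch (not part of the book's Software Suite; the name ChebyshevPolyIter is ours), the same recurrence can be evaluated iteratively in time linear in the degree:

```cpp
#include <cassert>
#include <cmath>

// Iterative evaluation of T_n(x) using the same recurrence
// T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x). The loop keeps only the
// last two values, so the cost is O(degree) instead of exponential.
double ChebyshevPolyIter(int degree, double x){
  if(degree == 0) return 1.0;
  if(degree == 1) return x;
  double Tprev = 1.0; // T_0(x)
  double Tcur  = x;   // T_1(x)
  for(int n = 1; n < degree; ++n){
    double Tnext = 2.0*x*Tcur - Tprev;
    Tprev = Tcur;
    Tcur  = Tnext;
  }
  return Tcur;
}
```

The identity T_n(cos θ) = cos(nθ), discussed later in this section, provides a convenient independent check of such an implementation.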

In this example, there are two things that we want to point out. First, notice that for this particular example, we have two explicit stopping conditions: when k = 0 and when k = 1.

This is because our recurrence relation contains references to the (k−1) and (k−2) terms, and hence we need both to be valid in order to obtain the kth term. The second thing to observe in this example is the use of a not previously mentioned C++ statement, the switch statement.

The SWITCH Statement

The switch statement is a convenient way of replacing a collection of if-else statements. In the example above, we have the following logic: If the value of degree is 0, then return the number 1.0; else if the value of degree is 1, return the value of x; else the value is what is given by the recurrence relation. We could implement this using if-else statements as follows:

if(degree == 0)
  value = 1.0;
else{
  if(degree == 1)
    value = x;
  else
    value = 2.0*x*ChebyshevPoly(degree-1,x) - ChebyshevPoly(degree-2,x);
}

However, C++ has a statement named switch which accomplishes this type of logic for you. The syntax of a switch statement is as follows:

switch( variable ){
case a:
  statement 1;
  break;
case b:
  statement 2;
  statement 3;
  break;
default:
  statement 4;
}

Here, ‘variable’ is the variable that you want to test; ‘a’, ‘b’, etc. are the constant values that you want to test against (these cannot be variables; they must be constants). When the switch is executed, execution begins at the first case whose constant matches the value of the variable. All statements below that case label are executed. Hence, when all the statements that you want done for a particular case have been executed, you must use a break statement to exit the switch. The default case is the case that is executed if no other case has matched the variable.

WARNING Programmer Beware!

Do not forget to put break statements between independent cases!

Because of the “flow-through” nature of the switch statement, one can group cases together. Suppose, for example, that we wanted to implement a statement which executes statement 1 for both case 0 and case 1, and statement 2 otherwise. The following pseudo-code example demonstrates the implementation of this logic:

switch(degree){
case 0:
case 1:
  statement 1;
  break;
default:
  statement 2;
}

In this example, if either case 0 or case 1 is true, then statement 1 (and only statement 1, due to the break statement) will be executed. For all other values of the variable degree, statement 2 will be executed.
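A minimal runnable sketch of this grouping (the helper name RecursionBranch is ours, chosen to echo the Chebyshev example; it is not from the text):

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: report which branch of the Chebyshev
// recursion a given degree falls into. Cases 0 and 1 share one
// body by falling through to the same return statement.
std::string RecursionBranch(int degree){
  switch(degree){
    case 0:      // no break: falls through to case 1
    case 1:
      return "base case";      // executed for degree 0 or 1
    default:
      return "recursive case"; // executed for every other degree
  }
}
```

Because a return statement leaves the function immediately, it plays the role of break here; with plain statements, the break after the shared body is still required.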

Key Concept

Switch is a nice organizational tool for implementing if-else relationships.

Figure 3.6: Chebyshev polynomials of order 0 through 4 in the interval [−1, 1].

Properties of Chebyshev Polynomials

Next, we summarize some important properties of the Chebyshev polynomials:

Symmetry: Tn(−x) = (−1)nTn(x).

The leading coefficient of T_n(x) is 2^{n−1}, n ≥ 1.

Zeros: The roots of T_n(x) are

x_k = cos( (2k+1)/n · π/2 ), k = 0, 1, . . . , n−1.

These are called Gauss points and we will use them later in numerical integration. The roots of its derivative T_n′(x), which are the locations of the extrema of T_n(x), are the Gauss-Lobatto points and are given by x_k = cos(kπ/n). We also have that

T_n(x_k) = (−1)^k, k = 0, 1, 2, . . .
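These formulas are easy to check numerically. The sketch below (helper names are ours, not the book's) generates the Gauss and Gauss-Lobatto points and evaluates T_n through the identity T_n(cos θ) = cos(nθ):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Evaluate T_n(x) on [-1,1] via the identity T_n(cos theta) = cos(n theta).
double Tn(int n, double x){ return std::cos(n*std::acos(x)); }

// Gauss points: the n roots of T_n, x_k = cos((2k+1)/n * pi/2).
std::vector<double> GaussPoints(int n){
  const double pi = std::acos(-1.0);
  std::vector<double> x(n);
  for(int k = 0; k < n; ++k)
    x[k] = std::cos((2*k+1)*pi/(2.0*n));
  return x;
}

// Gauss-Lobatto points: the n+1 extrema of T_n, x_k = cos(k*pi/n).
std::vector<double> GaussLobattoPoints(int n){
  const double pi = std::acos(-1.0);
  std::vector<double> x(n+1);
  for(int k = 0; k <= n; ++k)
    x[k] = std::cos(k*pi/n);
  return x;
}
```

At the Gauss points T_n vanishes to machine precision, while at the Gauss-Lobatto points it alternates between +1 and −1, matching T_n(x_k) = (−1)^k.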

Orthogonality in the continuous inner product:

∫_{−1}^{1} T_k(x) T_l(x) (1 − x²)^{−1/2} dx = 0 if k ≠ l; π if k = l = 0; π/2 if k = l ≠ 0.

We often use orthogonality in the discrete inner product at the (m+1) Gauss points x_j:

Σ_{j=0}^{m} T_k(x_j) T_l(x_j) = 0 if k ≠ l; m+1 if k = l = 0; (m+1)/2 if k = l ≠ 0.

Lagrangian Interpolant: The Chebyshev Lagrangian interpolant through the N Gauss points has a simple form:

h_k(x) = T_N(x) / ( T_N′(x_k)(x − x_k) ), x ≠ x_k.

Grid Transformation: The following grid transformation maps the Gauss-Lobatto points x_k = cos(kπ/N), k = 0, . . . , N to a new set of grid points ξ_k obtained from:

ξ_k = sin^{−1}(α x_k) / sin^{−1}(α), (3.8)

where α ∈ (0, 1) defines the exact distribution. For α → 1 the new points approach an equidistant distribution and the Chebyshev approximation resembles the Fourier method. However, for stability of the approximation the new points cannot be exactly equidistant, and thus α < 1.
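A minimal sketch of the transformation (3.8), assuming Gauss-Lobatto input points (the function name is ours):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Grid transformation (3.8): map the Gauss-Lobatto points
// x_k = cos(k*pi/N) to xi_k = asin(alpha*x_k)/asin(alpha).
// The endpoints are preserved: xi_0 = 1 and xi_N = -1 for any alpha.
std::vector<double> TransformedGrid(int N, double alpha){
  const double pi = std::acos(-1.0);
  std::vector<double> xi(N+1);
  for(int k = 0; k <= N; ++k){
    double xk = std::cos(k*pi/N);
    xi[k] = std::asin(alpha*xk)/std::asin(alpha);
  }
  return xi;
}
```

For very small α, asin(αx)/asin(α) ≈ x and the transformed grid reduces to the original Gauss-Lobatto points; as α approaches 1, the interior points ξ_k tend toward an equidistant spacing (at α = 1, ξ_k = 1 − 2k/N exactly).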

MiniMax Property: Of all the nth-degree polynomials with leading coefficient 1, the polynomial 2^{1−n} T_n(x) has the smallest maximum norm in the interval [−1, 1]. The value of its maximum norm is 2^{1−n}.
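The minimax property can be spot-checked numerically by sampling 2^{1−n} T_n(x) on a dense grid (a sketch; the function name is ours):

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

// Sample |2^{1-n} T_n(x)| on a dense grid over [-1,1]; by the
// minimax property the maximum should equal 2^{1-n}.
double MonicChebyshevMaxNorm(int n){
  const double scale = std::pow(2.0, 1 - n);
  double maxval = 0.0;
  for(int i = 0; i <= 10000; ++i){
    double x = -1.0 + 2.0*i/10000.0;
    maxval = std::max(maxval, std::fabs(scale*std::cos(n*std::acos(x))));
  }
  return maxval;
}
```

By contrast, the monic polynomial x^n attains the value 1 at the endpoints, so its maximum norm on [−1, 1] is far larger than 2^{1−n}.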

Approximation Error and Convergence Rate

Let us assume that we are given the values of a function f(x) on a grid of (m+ 1) points, and we use a polynomial p(x) to represent the data on this grid. The error (or remainder r(x)) in the approximation of a function f(x) is then

|f(x) − p(x)| = |(x − x_0)(x − x_1) . . . (x − x_m)| |f^{(m+1)}(ξ)| / (m+1)!,

where ξ∈[x0, xm]. This error behaves like the polynomial curve r(x)∼(x−x0)(x−x1)(x−x2). . .(x−xm),

which oscillates, similar in fact to the least-squares approximation (section 3.1.7), and unlike the Taylor expansion approximation where the error increases exponentially as (x − x_0)^{m+1}. Now, we can attempt to find the optimum distribution of the grid points, which means that we seek to minimize the maximum magnitude of

q(x) ≡ (m+1)! r(x)/f^{(m+1)}(ξ) = (x − x_0)(x − x_1) . . . (x − x_m).

To this end, we can use the minimax property to obtain

q(x) = 2^{−m} T_{m+1}(x),

and thus the grid points x_k are the roots of the Chebyshev polynomial T_{m+1}(x), i.e.,

x_k = cos( (2k+1)/(m+1) · π/2 ), k = 0, 1, . . . , m.

We now state Rivlin’s minimax error theorem:

MiniMax Error Theorem: The maximum pointwise error of a Chebyshev series expansion that represents an arbitrary function f(x) is only a small constant away from the minimax error, i.e., the smallest possible pointwise error of any Nth-degree polynomial. The following inequality applies:

‖f(x) − p_N(x)‖_∞ ≤ C(N) ‖f(x) − mm(x)‖_∞,

where mm(x) is the best possible polynomial and the prefactor C(N) grows only logarithmically with N.

Note that for N = 128 the prefactor is less than 5, and even for N = 2,688,000 the prefactor is only about 10. Therefore, the Chebyshev expansion series is within a decimal point of the minimax approximation.

The convergence of Chebyshev polynomial expansions is similar to that of Fourier cosine series, as the following transformation applies:

x = cos θ and T_n(cos θ) = cos(nθ).

Assume an infinite expansion of the form

f(x) = Σ_{k=0}^{∞} a_k T_k(x).

The convergence rate of the expansion series is determined by the decay rate of the coefficients a_k. To this end, if f(x) has p continuous derivatives, then the coefficients decay as a_k = O(k^{−p}).

This implies that for infinitely differentiable functions the convergence rate is extremely fast. This convergence is called exponential, and it simply means that if we double the number of grid points, the approximation error will decrease by two orders of magnitude (i.e., a factor of 100), instead of the factor of four that would correspond to interpolation with quadratic polynomials and a second-order convergence rate. The above estimate also shows that in the Chebyshev approximation we can exploit the regularity, i.e., smoothness, of the function to accelerate the convergence rate of the expansion. Also, notice that unlike the Fourier series (see section 3.2), the convergence of Chebyshev series does not depend on the values of f(x) at the end points, because the boundary terms vanish automatically.
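The rapid decay of the coefficients a_k can be observed directly. The sketch below (our own helper, not the book's Software Suite code) computes approximate Chebyshev coefficients by the discrete orthogonality relation at the Gauss points:

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

// Approximate Chebyshev coefficients a_k of f via the discrete
// inner product at the (N+1) Gauss points x_j = cos((2j+1)pi/(2(N+1))).
// For smooth f the computed a_k decay very rapidly with k.
std::vector<double> ChebyshevCoefficients(std::function<double(double)> f, int N){
  const double pi = std::acos(-1.0);
  std::vector<double> a(N+1, 0.0);
  for(int k = 0; k <= N; ++k){
    double sum = 0.0;
    for(int j = 0; j <= N; ++j){
      double theta = (2*j+1)*pi/(2.0*(N+1));
      sum += f(std::cos(theta))*std::cos(k*theta); // T_k(x_j) = cos(k*theta_j)
    }
    a[k] = 2.0*sum/(N+1);
  }
  a[0] *= 0.5; // the k = 0 term carries weight c_0 = 2 in the orthogonality relation
  return a;
}
```

For an infinitely differentiable function such as e^x, the coefficients fall below round-off level after only a dozen or so terms, and summing the truncated series reproduces the function to near machine precision.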

Finally, an important consequence of the rapid convergence of Chebyshev polynomial expansions of smooth functions is that they can be differentiated term-by-term, i.e.,

d^p f(x)/dx^p = Σ_{k=0}^{∞} a_k d^p T_k(x)/dx^p, p = 1, 2, . . .

In computing Chebyshev derivatives higher than the first, inaccurate results may be obtained due to round-off. In particular, it has been observed that round-off may be significant for the second derivative for N > 128, for the third derivative for N > 64, and for the fourth derivative for N > 32. This round-off can be reduced if the grid transformation given by equation (3.8) is employed.

Example: The following example, first presented in Gottlieb & Orszag [49], shows the fast convergence of Chebyshev discretization. The exact representation for the sine function corresponding to wave number M is

sin Mπ(x + α) = 2 Σ_{n=0}^{∞} (1/c_n) J_n(Mπ) sin(Mπα + nπ/2) T_n(x),

where c_0 = 2, c_n = 1 for n ≥ 1, and J_n(x) is the Bessel function of order n. We can argue that J_n(Mπ) → 0 exponentially fast for n > Mπ, given that the Bessel function can be approximated by

J_n(Mπ) ≈ (1/√(2πn)) ( eMπ/(2n) )^n.

This result leads to the following heuristic rule for Chebyshev series approximation, proposed by Gottlieb & Orszag:

Quasi-sinusoidal rule-of-thumb: In order to resolve M complete waves, it is required that Mπ modes be retained; in other words, π polynomials should be retained per wavelength.

Although very good, such a resolution capability is less than that of a Fourier method, which requires approximately two points per wave! In fact, the above is an asymptotic result; a more practical rule for the total number of points N is

N = 6 + 4(M − 1),

which has been verified in many numerical experiments.
