
From the document Numerical Sound Synthesis (pages 60-66)


3.2 A finite difference scheme

Many time-integration techniques, including the ubiquitous Runge–Kutta family of methods and wave digital filters [127], are indeed usually presented with reference to first-order systems. In the distributed case, the important "finite difference time domain" family of methods [351] is also usually applied to first-order systems, such as the defining equations of electromagnetism.

3.1.4 Coupled systems of oscillators

Coupled second-order ODE systems occur frequently in finite element analysis, and, in musical sound synthesis, directly as descriptions of the dynamics of lumped networks, as per Section 1.2.1; what often remains, after spatial discretization of a distributed system, is a matrix equation of the form

M d^2u/dt^2 = −Ku    (3.9)

where u is an N×1 column vector, and M and K are known as, respectively, the N×N mass and stiffness matrices, which are (at least for linear and time-invariant problems) constants. M is normally diagonal, and K is usually symmetric, reflecting reciprocity of forces acting within the system defined by (3.9). If the product M^{-1}K exists and is diagonalizable, with M^{-1}K = UΛU^{-1}, then the system may be immediately decoupled as

d^2v/dt^2 = −Λv    (3.10)

where u = Uv, and Λ is the diagonal matrix containing the eigenvalues of M^{-1}K, which are normally real and non-negative. The system (3.9) will then have N frequencies ω_p (which are not necessarily distinct), given by

ω_p = √Λ_{p,p}    for p = 1, . . . , N    (3.11)
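As a concrete illustration (not from the text), the decoupling of (3.9) and the frequencies (3.11) can be computed directly. The sketch below, in Python with NumPy (the book's own examples are in Matlab), uses a hypothetical two-mass, three-spring chain with unit masses and unit spring constants, so that M = I and K is tridiagonal.

```python
import numpy as np

# Hypothetical example system: two unit masses coupled by three unit
# springs (wall-mass, mass-mass, mass-wall), so M = I and K is tridiagonal.
M = np.eye(2)
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Diagonalize M^{-1} K = U Lambda U^{-1}; the eigenvalues Lambda_{p,p}
# are the squared frequencies, per (3.11).
lam, U = np.linalg.eig(np.linalg.solve(M, K))
omega = np.sqrt(np.sort(lam.real))  # omega_p = sqrt(Lambda_{p,p})
print(omega)  # frequencies 1 and sqrt(3)
```

Here the two modal frequencies are 1 rad/s (masses moving in phase) and √3 rad/s (masses in opposition), as expected for this symmetric chain.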

Energy analysis also extends to systems such as (3.9). Left-multiplying by the row vector du^T/dt, where the superscript T indicates transposition, gives

(du^T/dt) M (d^2u/dt^2) + (du^T/dt) Ku = 0

or, using the symmetry of M and K,

dH/dt = 0    with    H = T + V,    T = (1/2) (du^T/dt) M (du/dt),    V = (1/2) u^T Ku

If M and K are positive definite, then the energy H will also be non-negative, and bounds on solution size may be derived as in the case of a single oscillator.

Lumped network approaches to sound synthesis, mentioned in Section 1.2.1, are built, essentially, around such coupled oscillator systems. The numerical treatment of such systems will appear later in Section 3.4. See Problems 3.2 and 3.3 for some simple concrete examples of coupled oscillators.

Returning to the SHO (3.1), the simplest finite difference scheme follows from replacing the second time derivative by a second time difference, as defined in (2.4). One has, in operator form,

δ_{tt} u = −ω_0^2 u    (3.12)

Note the compactness of the above representation, and in particular the absence of the time index, which is assumed to be n for each occurrence of the variable u. The finite difference scheme (3.12) is a second-order accurate approximation to (3.1); see Section 3.2.5. For reference, a code example in the Matlab programming language is provided in Section A.1.

3.2.1 As recursion

By expanding out the behavior of the operator δ_{tt}, and reintroducing the time index n, one loses compactness of representation, but gains a clearer picture of how to program such a scheme on a computer. One has

(1/k^2) (u^{n+1} − 2u^n + u^{n−1}) = −ω_0^2 u^n    (3.13)

which relates values of the time series at the three levels n+1, n, and n−1. Rewriting (3.13) so that u^{n+1} is isolated gives

u^{n+1} = (2 − ω_0^2 k^2) u^n − u^{n−1}    (3.14)

which is a two-step recursion in the time series u^n. This difference equation is identical in form to that of a two-pole digital filter under transient conditions [261].
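The recursion (3.14) is simple enough to program directly. The following is a minimal Python sketch (the function name and parameters are illustrative, not the book's Matlab code of Section A.1); with exact initial data u^0 = cos(0) and u^1 = cos(ωk), the recursion reproduces a sinusoid at the warped frequency ω = (1/k) cos^{−1}(1 − ω_0^2 k^2/2), since 2 cos(ωk) = 2 − ω_0^2 k^2.

```python
import math

def sho_scheme(u0, u1, omega0, k, n_steps):
    """Run the two-step recursion (3.14):
    u^{n+1} = (2 - omega0^2 k^2) u^n - u^{n-1}."""
    c = 2.0 - (omega0 * k) ** 2
    out = [u0, u1]
    for _ in range(n_steps - 2):
        out.append(c * out[-1] - out[-2])
    return out

# Illustrative parameters: 1 kHz oscillator at a 44.1 kHz sample rate.
k = 1.0 / 44100.0
omega0 = 2 * math.pi * 1000.0
omega = math.acos(1.0 - (omega0 * k) ** 2 / 2.0) / k  # warped frequency
u = sho_scheme(1.0, math.cos(omega * k), omega0, k, 200)
# u[n] tracks cos(omega * n * k), up to round-off error
```

Note that each update costs one multiplication and one subtraction, as discussed under computational considerations in Section 3.2.7.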

3.2.2 Initialization

The difference equation (3.14) must be initialized with two values, typically u^0 and u^1. This pair of values is slightly different from what one would normally use to initialize the continuous-time harmonic oscillator, namely the values u(0) and du/dt|_{t=0}. In the distributed case, especially under striking conditions, it is often the initial velocity which is of interest. Supposing that one has, instead of the two values u^0 and u^1, an initial value u_0 and an initial velocity condition (call it v_0), a very simple way of proceeding is to write

u^0 = u_0    and    δ_{t+} u^0 = v_0    →    u^1 = u_0 + k v_0

This approximation is only first-order accurate. More accurate settings may be derived (see Problem 3.4), but, in general, there is not much point in setting initial conditions to an accuracy greater than that of the scheme itself. Since many of the schemes used in this book will be of relatively low accuracy, the above condition is wholly sufficient for most sound synthesis applications, especially if one is operating at an elevated audio sampling rate (i.e., for a small value of k), which is usually the case. In many situations, in fact, an initial condition is not employed, as the physical model is excited by an external source (such as a bow, or reed mechanism).
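The velocity-based initialization above amounts to a one-line helper; the sketch below (a hypothetical function, not from the book's code) returns the pair (u^0, u^1) needed to start recursion (3.14).

```python
def init_from_velocity(u0, v0, k):
    """First-order accurate initialization: u^0 = u0 together with
    delta_{t+} u^0 = v0 gives u^1 = u0 + k * v0."""
    return u0, u0 + k * v0

# e.g. struck from rest at the origin with unit initial velocity,
# at a 44.1 kHz sample rate:
u_prev, u_curr = init_from_velocity(0.0, 1.0, 1.0 / 44100.0)
```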

3.2.3 Numerical instability

It is easy enough to arrive at a numerical scheme such as (3.12) approximating the SHO, and, indeed, under some conditions it does generate reasonable results, such as the sinusoidal output shown in Figure 3.3(a). But this is not necessarily the case: sometimes, the solution exhibits explosive growth, as shown in Figure 3.3(b). Such explosive growth definitely does not correspond, in any sense, to an approximation to the SHO, and is referred to as unstable.
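The behavior of Figure 3.3 is easy to reproduce with a direct Python sketch of recursion (3.14) (illustrative code, not the book's): at f_s = 44 100 Hz, the choice f_0 = 14 040 Hz just violates the stability condition derived in Section 3.2.4, since π × 14 040 ≈ 44 108 > 44 100.

```python
import math

def run_scheme(f0, fs, n_steps, u0=1.0, u1=1.0):
    """Run recursion (3.14) and return the peak absolute value reached."""
    k = 1.0 / fs
    c = 2.0 - (2.0 * math.pi * f0 * k) ** 2
    u_prev, u = u0, u1
    peak = max(abs(u0), abs(u1))
    for _ in range(n_steps):
        u_prev, u = u, c * u - u_prev
        peak = max(peak, abs(u))
    return peak

stable_peak = run_scheme(8000.0, 44100.0, 2000)     # as in Figure 3.3(a): bounded
unstable_peak = run_scheme(14040.0, 44100.0, 2000)  # as in Figure 3.3(b): explodes
```

In the stable case the peak stays within the bound C_0 determined by the initial conditions; in the unstable case it grows exponentially, by many orders of magnitude within a few thousand steps.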

Stability analysis is a key aspect of all numerical methods, and is given an involved treatment throughout the rest of this book. It is of great importance in the case of physical models of musical


[Figure 3.3: two panels plotting u against t.]

Figure 3.3 Output of scheme (3.12), running at a sample rate f_s = 44 100 Hz. In (a), sinusoidal output when f_0 = ω_0/2π = 8000 Hz, and in (b) unstable, explosive numerical behavior when f_0 = ω_0/2π = 14 040 Hz.

instruments, which almost always operate under nearly lossless conditions, and which are thus especially prone to such instability. There are various ways of analyzing this behavior; in the present case of the SHO, frequency domain methods, described in the following section, are a natural choice. Energy methods, which extend well to more complex (nonlinear) problems, appear in Section 3.2.6.

3.2.4 Frequency domain analysis

The frequency domain analysis of scheme (3.12) is very similar to that of the SHO, as outlined in Section 3.1.1. Applying a z transformation or, equivalently, inserting a test solution of the form u^n = z^n, where z = e^{sk} (see comments on the frequency domain ansatz in Section 2.3), leads to the characteristic equation

z + (k^2 ω_0^2 − 2) + z^{−1} = 0

which has the solutions

z_± = [ (2 − k^2 ω_0^2) ± √( (2 − k^2 ω_0^2)^2 − 4 ) ] / 2    (3.15)

When z_± are distinct, the solution to (3.12) will evolve according to

u^n = A_+ z_+^n + A_− z_−^n    (3.16)

Under the condition

k < 2/ω_0    (3.17)

the two roots z_± will be complex conjugates of magnitude unity (see Section 2.3.3), in which case they may be written as

z_± = e^{±jωk}

where ω = (1/k) cos^{−1}(1 − ω_0^2 k^2/2), and the solution (3.16) may be rewritten as

u^n = A cos(ωnk) + B sin(ωnk) = C_0 cos(ωnk + φ_0)    (3.18)

where

A = u^0,    B = (u^1 − u^0 cos(ωk)) / sin(ωk),    C_0 = √(A^2 + B^2),    φ_0 = tan^{−1}(B/A)

This solution is well behaved, and resembles the solution to the continuous-time SHO, from (3.2).

From the form of the solution above, the following bound on the numerical solution size, in terms of the initial conditions and ω_0, may be deduced:

|u^n| ≤ C_0    (3.19)

Condition (3.17) may be violated in two different ways. If k > 2/ω_0, then the two roots of (3.15) will be real, and one will be of magnitude greater than unity. Thus solution (3.16) will grow exponentially. If k = 2/ω_0, then the two roots of (3.15) coincide, at z_± = −1. In this case, the solution (3.16) does not hold, and there will be a term which grows linearly. In neither of the above cases does the solution behave in accordance with (3.2), and its size cannot be bounded in terms of initial conditions alone. Such growth is called numerically unstable (and in the latter case marginally unstable), and condition (3.17) serves as a stability condition.
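These regimes can be checked numerically by examining the roots of the characteristic polynomial z^2 + (k^2 ω_0^2 − 2) z + 1, obtained by multiplying the characteristic equation through by z. A short NumPy sketch (parameter values illustrative):

```python
import numpy as np

def root_magnitudes(omega0, k):
    """Magnitudes of the roots of z^2 + (k^2 omega0^2 - 2) z + 1 = 0,
    i.e., the two solutions z_± of (3.15)."""
    return np.abs(np.roots([1.0, (k * omega0) ** 2 - 2.0, 1.0]))

omega0 = 100.0
mags_stable = root_magnitudes(omega0, 0.5 * (2.0 / omega0))    # k < 2/omega0
mags_unstable = root_magnitudes(omega0, 1.5 * (2.0 / omega0))  # k > 2/omega0
# Stable case: both roots lie on the unit circle (complex conjugates);
# unstable case: both roots are real, and one has magnitude > 1.
```

Since the product of the two roots is always 1 (the constant coefficient of the monic polynomial), whenever one root leaves the unit circle the other must move inside it, and exponential growth follows.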

Stability conditions and sampling theory

Condition (3.17) may be rewritten, using the sample rate f_s = 1/k and the reference oscillator frequency f_0 = ω_0/2π, as

f_s > π f_0

which, from the point of view of sampling theory, is counterintuitive; recall the comments on this subject at the beginning of Section 2.1. One might expect that the sampling rate necessary to simulate a sinusoid at frequency f_0 should satisfy f_s > 2f_0, and not the above condition, which is more restrictive. The reason for this is that the numerical solution does not in fact oscillate at frequency ω_0, but at frequency ω given by

ω = (1/k) cos^{−1}(1 − k^2 ω_0^2/2)

This frequency warping effect is due to approximation error in the finite difference scheme (3.12) itself; such an effect will be generalized to numerical dispersion in distributed problems seen later in this book, and constitutes what is perhaps the largest single disadvantage of using time domain methods for sound synthesis. Note in particular that ω > ω_0. Perceptually, such an effect will lead to mistuning of "modes" in a simulation of a musical instrument. There are many means of combating this unwanted effect, the most direct (and crude) being to use a higher sampling rate (or smaller value of k): the numerical frequency ω approaches ω_0 in the limit as k becomes small. This approach, however, though somewhat standard in the more mainstream simulation community, is not so useful in audio, as the sample rate is usually fixed,¹ though downsampling is a possibility. Another approach, which will be outlined in Section 3.3.2, involves different types of approximations to (3.1), and will be elaborated upon extensively throughout the rest of this book.
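The warping is easy to quantify. A small Python check (frequency values illustrative) confirms that ω > ω_0, and expresses the mistuning in cents, the natural perceptual unit, at f_s = 44 100 Hz:

```python
import math

def warped_frequency(omega0, k):
    """Numerical oscillation frequency of scheme (3.12),
    omega = (1/k) acos(1 - k^2 omega0^2 / 2)."""
    return math.acos(1.0 - (k * omega0) ** 2 / 2.0) / k

fs = 44100.0
k = 1.0 / fs
for f0 in (100.0, 1000.0, 8000.0):
    omega0 = 2.0 * math.pi * f0
    f = warped_frequency(omega0, k) / (2.0 * math.pi)
    cents = 1200.0 * math.log2(f / f0)  # perceptual mistuning, in cents
    print(f"f0 = {f0:6.0f} Hz -> f = {f:10.3f} Hz  ({cents:+.3f} cents)")
```

The mistuning is negligible at low frequencies but grows rapidly toward the stability limit f_s/π, which is the source of the modal mistuning mentioned above.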

3.2.5 Accuracy

Scheme (3.12) employs only one difference operator, a second-order accurate approximation δ_{tt} to the operator d^2/dt^2. While it might be tempting to conclude that the scheme itself will generate a solution which converges to the solution of (3.1) with an error that depends on k^2 (which is in fact true in this case), the analysis of accuracy of an entire scheme is slightly more subtle than that applied to a single operator in isolation. It is again useful to consider the action of the operator δ_{tt} on a continuous function u(t). In this case, scheme (3.12) may be rewritten as

(δ_{tt} + ω_0^2) u(t) = 0    (3.20)

or, using (2.6),

(d^2/dt^2 + ω_0^2) u(t) = O(k^2)

The left-hand side of the above equation would equal zero for u(t) solving (3.1), but for approximation (3.12), there is a residual error on the order of the square of the time step k.

¹ In a distributed setting, when approximations in both space and time are employed, the spatial and temporal warping errors may cancel one another to a certain degree (indeed, perfectly, in the case of the 1D wave equation).

The accuracy of a given scheme will be at least that of the constituent operators and, in some rare cases, higher; two interesting case studies of great relevance to sound synthesis are presented in Section 3.3.4 and Section 6.2.4.
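A convergence check (not from the text) makes the second-order behavior of the global error concrete: running scheme (3.14) from exact initial data and comparing against the exact solution cos(ω_0 t), halving k should reduce the maximum error by roughly a factor of four.

```python
import math

def max_error(omega0, k, t_end):
    """Run scheme (3.14) from exact initial data u^0 = 1, u^1 = cos(omega0 k)
    and return the maximum error against the exact solution cos(omega0 t)."""
    n = int(round(t_end / k))
    c = 2.0 - (omega0 * k) ** 2
    u_prev, u = 1.0, math.cos(omega0 * k)
    err = 0.0
    for step in range(2, n + 1):
        u_prev, u = u, c * u - u_prev
        err = max(err, abs(u - math.cos(omega0 * step * k)))
    return err

omega0 = 2.0 * math.pi  # a 1 Hz oscillator, simulated over one period
e1 = max_error(omega0, 1e-3, 1.0)
e2 = max_error(omega0, 5e-4, 1.0)
print(e1 / e2)  # roughly 4, consistent with O(k^2) global accuracy
```

The dominant error here is exactly the frequency warping of Section 3.2.4, whose leading term scales as k^2.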

3.2.6 Energy analysis

Beginning from the compact representation of the difference scheme given in (3.12), one may derive a discrete conserved energy similar to that obtained for the continuous problem in Section 3.1.2.

Multiplying (3.12) by a discrete approximation to the velocity, δ_{t·} u, gives

(δ_{t·} u)(δ_{tt} u) + ω_0^2 (δ_{t·} u) u = 0

Employing the product identities given in Section 2.4.2 in the last chapter, one may immediately arrive at

δ_{t+} ( (1/2)(δ_{t−} u)^2 + (ω_0^2/2) u e_{t−} u ) = 0

or

δ_{t+} h = 0    (3.21)

with

h = t + v    and    t = (1/2)(δ_{t−} u)^2,    v = (ω_0^2/2) u e_{t−} u    (3.22)

Here, t, v, and h are clearly approximations to the kinetic, potential, and total energies, T, V, and H, respectively, for the continuous problem, as defined in (3.7). All are time series, and could be indexed as t^n, v^n, and h^n, though it should be kept in mind that the above expressions are centered about time instants t = (n − 1/2)k.

Equation (3.21) implies that the discrete time series h^n, which approximates the total energy H for the continuous-time problem, is constant. Thus the scheme (3.12) is energy conserving, in a discrete sense:

h^n = constant = h^0    (3.23)

Though one might think that the existence of a discrete conserved energy would immediately imply good behavior of the associated finite difference scheme, the discrete approximation to the potential energy, given above in (3.22), is of indeterminate sign, and thus, possibly, so is h. The determination of non-negativity conditions on h leads directly to a numerical stability condition.

Expanding out the operator notation, one may write the function h^n at time step n as

h^n = (1/(2k^2)) ( (u^n)^2 + (u^{n−1})^2 ) + ( ω_0^2/2 − 1/k^2 ) u^n u^{n−1}

which is no more than a quadratic form in the variables u^n and u^{n−1}. Applying the results given in Section 2.4.3, one may show that the quadratic form above is positive definite when

k < 2/ω_0

which is exactly the stability condition (3.17) arrived at through frequency domain analysis. See Problem 2.12. Under this condition, one further has that

|u^n| ≤ √( 2k^2 h^n / (1 − (1 − ω_0^2 k^2/2)^2) ) = √( 2k^2 h^0 / (1 − (1 − ω_0^2 k^2/2)^2) )    (3.24)

Just as in the case of frequency domain analysis, the solution may be bounded in terms of the initial conditions (or the initial energy) and the system parameters. In fact, the bound above is identical to that given in (3.19). See Problem 3.5.

Finite precision and round-off error

In infinite precision arithmetic, the quantity h^n remains exactly constant, according to (3.23). That is, the discrete kinetic energy t^n and potential energy v^n sum to the same constant, at any time step n. See Figure 3.4(a). But in a finite precision computer implementation, there will be fluctuations in the total energy at the level of the least significant bit; such a fluctuation is shown, in the case of floating point arithmetic, in Figure 3.4(b). In this case, the fluctuations are on the order of 10^{−15} of the total energy, and the bit quantization of these variations is evident. Numerical energy is conserved to machine accuracy.
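The conservation property (3.23), and its machine-precision fluctuation, can be checked with a direct Python sketch of (3.22), using the same parameters as Figure 3.4 (ω_0 = 1, k = 1/10, u^0 = 0.2, u^1 = −0.4):

```python
omega0, k = 1.0, 0.1
u_prev, u = 0.2, -0.4  # u^0 and u^1, as in Figure 3.4
c = 2.0 - (omega0 * k) ** 2

def h(u_curr, u_last):
    """Discrete energy (3.22): h = (1/2)(delta_{t-} u)^2
    + (omega0^2/2) u e_{t-} u."""
    return ((u_curr - u_last) / k) ** 2 / 2.0 \
        + (omega0 ** 2 / 2.0) * u_curr * u_last

h0 = h(u, u_prev)
drift = 0.0
for _ in range(60):
    u_prev, u = u, c * u - u_prev        # recursion (3.14)
    drift = max(drift, abs(h(u, u_prev) - h0) / abs(h0))
print(drift)  # tiny relative drift, at the level of machine round-off
```

In exact arithmetic the drift would be identically zero; in double-precision floating point it sits near 10^{−16}, in agreement with Figure 3.4(b).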

The analysis of conservative properties of finite difference schemes is younger than frequency domain analysis, and as a result, much less well known. It is, in some respects, far more powerful than the body of frequency domain analysis techniques which forms the workhorse of scheme analysis. The energy method [161] is the name given to a class of analysis techniques which do not employ frequency domain concepts. The study of finite difference schemes which furthermore possess conservation properties (of energy, and other invariants) experienced a good deal of growth in the 1980s [157, 307] and 1990s [225, 404, 145], and is related to more modern symplectic integration techniques [308, 324].

[Figure 3.4: two panels plotting energy quantities against time step n.]

Figure 3.4 (a) Variation of the discrete potential energy v^n (solid grey line), discrete kinetic energy t^n (dotted grey line), and total discrete energy h^n (solid black line), plotted against time step n, for the output of scheme (3.12). In this case, the values ω_0 = 1 and k = 1/10 are used, and scheme (3.12) is initialized with the values u^0 = 0.2 and u^1 = −0.4. (b) Variation of the error in energy h_e^n, defined, at time step n, as h_e^n = (h^n − h^0)/h^0, plotted as black points. Multiples of single-bit variation are plotted as grey lines.


3.2.7 Computational considerations

The computational requirements of scheme (3.12) are most easily seen through an examination of the Matlab code in Section A.1. In each updating cycle, the algorithm requires one multiplication and one addition. Interestingly, this is nearly the same amount of computation required to generate values of a sinusoid through table reading, as is common practice in sound synthesis applications [240], as mentioned in Section 1.1.3, but with the advantage of a much reduced memory cost, as mentioned below. The use of recursive structures as generators of sinusoids has been discussed by Smith and Cook [335]. In fact, with a small amount of additional work, a scheme can be developed which calculates exact values of the sinusoid; see Section 3.3.4.

Though, for the sake of clarity of presentation, the algorithm in Section A.1 appears to make use of three memory locations, one for the current value of the computed solution and two for the previous two values, this may be reduced to two locations through overwriting. See Programming Exercise 3.1.
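One way of realizing the two-location update (a hypothetical sketch, not the solution to Programming Exercise 3.1): at each step, the oldest stored value is overwritten in place by the newly computed one, so only two state variables are ever held.

```python
def sho_samples(omega0, k, u0, u1, n):
    """Generate n samples of scheme (3.14) using only two state
    variables: the oldest value is overwritten in place each step."""
    c = 2.0 - (omega0 * k) ** 2
    a, b = u0, u1          # two memory locations: previous and current
    out = [a, b]
    for _ in range(n - 2):
        a = c * b - a      # overwrite the oldest value with u^{n+1}
        a, b = b, a        # relabel so that b always holds the newest
        out.append(b)
    return out
```

The output list here exists only so the samples can be inspected; in a streaming synthesis context each new value of b would be written directly to the output buffer, and the memory footprint of the oscillator state remains two locations.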
