
4.1.2 Analog Filter Identification

The FIR system identification method above is useful if the system to be modelled is linear with a finite memory and hence looks like a FIR filter. The FIR method also requires data to be discretely spaced, with different observations separated in time by some integer multiple of the sampling period.

In many cases, this FIR methodology might be insufficient. For example, if the underlying system has an infinite memory, then modelling with a FIR filter effectively truncates that memory. A simple example of an infinite memory system is an IIR filter. If the IIR filter's impulse response decays rapidly, then FIR modelling will suffice. On the other hand, if the impulse response is h(t) = sin(t), then a FIR model might be quite poor. Another situation that is poorly modelled by a FIR filter is time series data that is irregularly sampled over time. To build a FIR model, the sampling rate must be increased until the observation points approximately coincide with sample points. We then interpret the data as being regularly sampled with a high sampling rate and many missing data points. However, increasing the sampling rate increases the number of FIR coefficients required to define impulse responses of a certain duration. For example, if we were to double the sample rate, we would need to double the number of coefficients to maintain the impulse response duration, as the sketch below illustrates. In such cases, it is simpler to use an analog model.
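To make the coefficient count concrete, here is a minimal sketch; the duration and sample rates are hypothetical values, purely for illustration:

```python
# Number of FIR taps needed to span a fixed impulse response duration.
duration = 2.0             # impulse response duration to capture, in seconds

for fs in (100.0, 200.0):  # sample rates in Hz; doubling fs doubles the taps
    n_taps = int(duration * fs)
    print(f"fs = {fs:6.1f} Hz -> {n_taps} coefficients")
```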

For analog filter identification, it would be nice to use Gaussian filters, with a Gaussian impulse response. The beauty of this is that a cascade of Gaussian filters is still a Gaussian filter¹. For system identification this would mean that our generative model consists of white noise sources driving Gaussian filters to produce the inputs, which drive more Gaussian filters to produce the outputs. Finally, the dependent Gaussian model would have Gaussian auto- and cross-covariance functions, simplifying the analysis. Unfortunately, Gaussian filters are acausal, and therefore unrealisable. It seems unreasonable to identify a system as unrealisable.

¹A Gaussian convolved with a Gaussian is another Gaussian. Furthermore, a Gaussian filter has a Gaussian frequency response, and the multiplication of two Gaussians is another Gaussian.
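The footnote's closure property is easy to verify in the frequency domain: the product of two zero-mean Gaussian functions of ω is again Gaussian, so by the convolution theorem a cascade of Gaussian filters has a Gaussian frequency response, and hence a Gaussian impulse response. A minimal check:

$$\exp\!\left(-\frac{\omega^2}{2\sigma_1^2}\right)\exp\!\left(-\frac{\omega^2}{2\sigma_2^2}\right) = \exp\!\left(-\frac{\omega^2}{2\sigma^2}\right), \qquad \frac{1}{\sigma^2} = \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}$$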

Consider a single-input single-output (SISO) system where the input x(t) and output y(t) functions are related by an ordinary linear differential equation with constant coefficients [2]:

$$\sum_{n=0}^{N} b_n \frac{d^n y(t)}{dt^n} = \sum_{m=0}^{M} a_m \frac{d^m x(t)}{dt^m}$$

If we take the Laplace transform, $\mathcal{L}[\cdot]$, of both sides and rearrange, we find the system transfer function H(s) as a real rational function of the complex frequency s:

$$H(s) = \frac{Y(s)}{X(s)} = \frac{\sum_{m=0}^{M} a_m s^m}{\sum_{n=0}^{N} b_n s^n}$$

The corresponding impulse response is, in general, infinite in duration and is found from the inverse Laplace transform $h(t) = \mathcal{L}^{-1}[H(s)]$. For H(s) to be BIBO stable, the roots of the denominator polynomial $\sum_{n=0}^{N} b_n s^n$ must have negative real parts [16]. That is, the poles of the system transfer function must lie in the left half of the s-plane. We can enforce this by choosing a parameterisation that directly encodes the position of each pole. That is, we factorise the denominator and parameterise it as $\prod_{n=0}^{N}(s - \beta_n)$, where the parameters $\beta_n \in \mathbb{C}$ are the system poles with $\Re(\beta_n) \leq 0$ (in the left half s-plane).
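This pole parameterisation is also convenient numerically; the following sketch (pole values are hypothetical) expands the product into denominator coefficients and checks stability:

```python
import numpy as np

# Hypothetical left-half-plane poles; complex poles appear in conjugate
# pairs so that the expanded polynomial has real coefficients.
poles = np.array([-0.5, -1.0 + 2.0j, -1.0 - 2.0j])

# Denominator coefficients of prod_n (s - beta_n), highest power first.
denom = np.real_if_close(np.poly(poles))
print(denom)                                  # [1.   2.5  6.   2.5]

# BIBO stability: every root must lie strictly in the left half s-plane.
assert np.all(np.roots(denom).real < 0)
```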

The transfer function H(s) is the ratio of two polynomials with real, constant coefficients. If we cascade two such systems G(s) and H(s), then the combined transfer function is equal to the product K(s) = G(s)H(s). This combined transfer function is also the ratio of two polynomials in s, and therefore maps the input to the output via an ordinary differential equation with constant coefficients.
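In code, cascading is just polynomial multiplication of the numerators and denominators; a small sketch, with coefficients chosen to match the example later in this section:

```python
import numpy as np

# G(s) = 1/(s + a) and H(s) = 1/(s + b), coefficients highest power first.
a, b = 1.0, 2.0                      # hypothetical pole locations
G_num, G_den = [1.0], [1.0, a]
H_num, H_den = [1.0], [1.0, b]

# Cascade: K(s) = G(s)H(s), so multiply numerators and denominators.
K_num = np.polymul(G_num, H_num)     # [1.]
K_den = np.polymul(G_den, H_den)     # [1. 3. 2.] == s^2 + (a+b)s + ab
print(K_num, K_den)
```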

We can now specify a generative model by starting with a Gaussian white noise input w(t) filtered by G to produce the input function x(t), which is then filtered by H to form the output function y(t). Overall, y(t) is equivalent to the output of filter K when excited by w(t). Given observations of x(t) and y(t), we can build a dependent Gaussian process model which has parameters equal to the coefficients in the differential equations that describe filters G and H. Of course, when we infer the parameters for H we are effectively building a model of the system we are interested in.
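A numerical sketch of this generative model (the time grid, noise scaling, and pole values are assumptions for illustration) drives sampled white noise through G and then H with SciPy's LTI simulator:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
a, b = 1.0, 2.0                         # hypothetical pole locations
t = np.linspace(0.0, 20.0, 2001)
dt = t[1] - t[0]

# Sampled approximation to unit-intensity continuous white noise w(t):
# the variance of each sample scales as 1/dt.
w = rng.standard_normal(t.size) / np.sqrt(dt)

G = signal.lti([1.0], [1.0, a])         # G(s) = 1/(s + a)
H = signal.lti([1.0], [1.0, b])         # H(s) = 1/(s + b)

_, x, _ = signal.lsim(G, U=w, T=t)      # input process  x(t)
_, y, _ = signal.lsim(H, U=x, T=t)      # output y(t); same as K driven by w
```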

The covariance functions for the dependent Gaussian processes model are most easily constructed in the frequency domain. For example, for an input separation of τ,

$$\operatorname{cov}_{xy}(\tau) = \mathcal{F}^{-1}\!\left[\,\overline{G(j\omega)}\,K(j\omega)\,\right](\tau) \qquad (4.19)$$

where ω = 2πf and $\overline{G}$ is the complex conjugate of G. In words, the cross covariance function between x(t) and y(t + τ) is found from the inverse Fourier transform of the product of the frequency response functions of the filters G and K. We find G(jω) from the Laplace transform G(s) by letting s → jω. Finding $\mathcal{F}^{-1}[H(j\omega)]$ is most easily accomplished by computing the partial fraction expansion of H(jω), and then referring to a table of Fourier transform pairs.
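The table lookup relies on standard pairs such as the following, stated here for reference with u(τ) the unit step and ℜ(a) > 0:

$$\mathcal{F}^{-1}\!\left[\frac{1}{a + j\omega}\right](\tau) = e^{-a\tau}u(\tau), \qquad \mathcal{F}^{-1}\!\left[\frac{1}{a - j\omega}\right](\tau) = e^{a\tau}u(-\tau)$$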

As a simple example, consider a SISO system where the filters G and H are given by

$$G(s) = \frac{1}{s + a}, \qquad H(s) = \frac{1}{s + b} \qquad (4.20)$$

with a, b ≥ 0 to ensure stability. The cascade filter K is equal to the product

$$K(s) = G(s)H(s) = \frac{1}{(s + a)(s + b)} \qquad (4.21)$$

To build the dependent Gaussian processes model we require the covariance functions cov(x(t + τ), x(t)), cov(x(t + τ), y(t)), cov(y(t + τ), x(t)) and cov(y(t + τ), y(t)). These are found via the inverse Fourier transform as described above; for example, the output autocovariance is

$$\operatorname{cov}_{yy}(\tau) = \frac{1}{2ab(a+b)(a-b)}\left(a\exp(-b|\tau|) - b\exp(-a|\tau|)\right) \qquad (4.25)$$
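Equation (4.25) is easy to sanity check by numerically evaluating $\operatorname{cov}_{yy}(\tau) = \frac{1}{2\pi}\int |K(j\omega)|^2 e^{j\omega\tau}\,d\omega$ on a dense frequency grid (a sketch; the values of a, b and τ are arbitrary choices):

```python
import numpy as np

a, b, tau = 1.0, 2.0, 0.7                   # hypothetical values
w = np.linspace(-200.0, 200.0, 400_001)     # dense frequency grid

# Output power spectrum: |K(jw)|^2 = 1 / ((w^2 + a^2)(w^2 + b^2)).
S = 1.0 / ((w**2 + a**2) * (w**2 + b**2))

numeric = np.trapz(S * np.exp(1j * w * tau), w).real / (2.0 * np.pi)
analytic = (a * np.exp(-b * abs(tau)) - b * np.exp(-a * abs(tau))) \
           / (2.0 * a * b * (a + b) * (a - b))
print(numeric, analytic)                    # agree to several decimal places
```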

The covariance functions are continuous, but not differentiable at τ = 0, because the underlying filters have impulse responses with a discontinuity at t = 0. For example,

$$h(t) = \begin{cases} \exp(-at) & t \geq 0 \\ 0 & t < 0 \end{cases} \qquad (4.26)$$

This framework can be used to specify more complicated models, with higher order filters and multiple inputs and outputs. However, doing so results in increasingly complicated covariance functions and an increasing number of parameters. For very complicated models, one would want to automate the derivation of covariance functions given the filter transfer functions; a sketch of one possible starting point follows.
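One possible starting point (a sketch, not an implementation from the source) uses SymPy to perform the partial fraction expansion, after which each first-order term maps to an exponential through the Fourier pairs given earlier:

```python
import sympy as sp

s = sp.symbols('s')
a, b = sp.symbols('a b', positive=True)

# K(s) from the example above; any rational transfer function can go here.
K = 1 / ((s + a) * (s + b))

# Each partial fraction term c/(s + p) corresponds to c*exp(-p*t)*u(t), so
# the inverse transform (and hence each covariance term) can be read off.
print(sp.apart(K, s))
# e.g. 1/((b - a)*(a + s)) - 1/((b - a)*(b + s)), up to SymPy's ordering
```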