
Example 2.7.3 Consider the set of polynomials

P_0(t) = 1,   P_1(t) = t,   P_2(t) = \frac{1}{2}(3t^2 - 1),   P_3(t) = \frac{1}{2}(5t^3 - 3t).

Then it may be verified by direct computation, when the inner product is defined as

\langle f, g \rangle = \int_{-1}^{1} f(t) g(t) \, dt,

that these polynomials are orthogonal. These polynomials are the first few Legendre polynomials, all of which are orthogonal over [-1, 1]. □
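As a quick numerical illustration (not part of the text), the following sketch checks this orthogonality by evaluating \int_{-1}^{1} P_m(t) P_n(t)\,dt with Gauss-Legendre quadrature; the function names and quadrature order are arbitrary choices.

```python
# Numerical check that the first few Legendre polynomials are orthogonal
# under <f, g> = \int_{-1}^{1} f(t) g(t) dt.
import numpy as np

def inner(f, g, n_pts=200):
    """Approximate the integral of f(t)*g(t) over [-1, 1] by Gauss-Legendre quadrature."""
    t, w = np.polynomial.legendre.leggauss(n_pts)
    return np.sum(w * f(t) * g(t))

# P_0 .. P_3 as callables; a coefficient vector with a 1 in slot n selects P_n.
P = [np.polynomial.Legendre([0] * n + [1]) for n in range(4)]

for m in range(4):
    for n in range(4):
        print(m, n, round(inner(P[m], P[n]), 6))
# Off-diagonal entries are (numerically) zero; the diagonal entries equal 2/(2n+1).
```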

2.8 Weighted inner products

For a finite-dimensional vector space, a weighted inner product can be obtained by inserting a Hermitian weighting matrix W between the elements:

\langle x, y \rangle_W = y^H W x.

The concept of orthogonality is defined with respect to the particular inner product used: changing the inner product may change the orthogonality relationship between vectors.

Example 2.8.1 Consider the vectors x_1 and x_2.

It is easily verified that these vectors are not orthogonal with respect to the usual inner product x_1^T x_2. However, for the weighted inner product

\langle x_1, x_2 \rangle_W = x_2^T W x_1,

with an appropriately chosen Hermitian weighting matrix W, the vectors x_1 and x_2 are orthogonal.
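To make this concrete, here is a small sketch with made-up vectors and weighting matrix (the specific values in the book's example are not reproduced here); it simply evaluates the two inner products.

```python
# Hypothetical vectors x1, x2 and weighting matrix W illustrating that
# orthogonality depends on the inner product: <x, y>_W = y^H W x.
import numpy as np

x1 = np.array([1.0, 1.0])
x2 = np.array([1.0, -2.0])
W = np.diag([2.0, 1.0])     # Hermitian, positive definite

standard = x2 @ x1          # usual inner product: 1 - 2 = -1, not orthogonal
weighted = x2 @ W @ x1      # weighted inner product: 2 - 2 = 0, orthogonal
print(standard, weighted)
```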

In order for the weighted inner product to be used to define a norm, as in

\|x\|_W^2 = \langle x, x \rangle_W = x^H W x,

it is necessary that x^H W x > 0 for all x \neq 0. A matrix W with this property is said to be positive definite.
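One practical way to test positive definiteness numerically (a sketch, not from the text) is to attempt a Cholesky factorization, which succeeds only for positive-definite matrices.

```python
# Test whether a Hermitian matrix W satisfies x^H W x > 0 for all x != 0.
import numpy as np

def is_positive_definite(W):
    try:
        np.linalg.cholesky(W)   # succeeds only if W is positive definite
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.diag([2.0, 1.0])))         # True
print(is_positive_definite(np.array([[1.0, 0.0],
                                     [0.0, 0.0]])))      # False: only positive semidefinite
```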

Example 2.8.2 The weighted inner product of the previous example cannot be used as a norm because there are nonzero vectors x for which the product x^T W x = 0, which violates the conditions for a norm.

Weighting can also be applied to integral inner products. If there is some function w(t) \geq 0 over [a, b], then an inner product can be defined as

\langle f, g \rangle = \int_a^b f(t) g(t) w(t) \, dt.


The weighting can be used to place more emphasis on certain parts of the function. (More precisely, we must have w(t) \geq 0, with w(t) = 0 only on a set of measure zero.)

Example 2.8.3 Let us define a set of polynomials by

T_n(t) = \cos(n \cos^{-1}(t))

for t \in [-1, 1]. The first few of these (obtained by application of trigonometric identities) are

T_0(t) = 1,   T_1(t) = t,   T_2(t) = 2t^2 - 1,   T_3(t) = 4t^3 - 3t.

A plot of the first few of these is shown in figure 2.11. These polynomials are the Chebyshev polynomials. They have the interesting property that over the interval [-1, 1], all the extrema of the functions have the values -1 or 1. This property makes them very useful for approximation of functions. Furthermore, the Chebyshev polynomials are orthogonal with weight function

w(t) = \frac{1}{\sqrt{1 - t^2}}

over the interval [-1, 1]. The orthogonality relationship between the Chebyshev polynomials is

\int_{-1}^{1} \frac{T_m(t) T_n(t)}{\sqrt{1 - t^2}} \, dt =
\begin{cases} 0 & m \neq n, \\ \pi/2 & m = n \neq 0, \\ \pi & m = n = 0. \end{cases}
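The following sketch (not from the text) verifies this weighted orthogonality numerically using Gauss-Chebyshev quadrature, which absorbs the weight 1/\sqrt{1 - t^2} into the quadrature rule; the number of nodes N is an arbitrary choice.

```python
# Check Chebyshev orthogonality with weight w(t) = 1/sqrt(1 - t^2) using
# Gauss-Chebyshev quadrature:
#   \int_{-1}^{1} f(t)/sqrt(1 - t^2) dt  ~=  (pi/N) * sum_k f(t_k),
#   t_k = cos((2k - 1) pi / (2N)).
import numpy as np

def T(n, t):
    """Chebyshev polynomial T_n(t) = cos(n arccos t)."""
    return np.cos(n * np.arccos(t))

N = 64
k = np.arange(1, N + 1)
t_k = np.cos((2 * k - 1) * np.pi / (2 * N))

def weighted_inner(m, n):
    return np.pi / N * np.sum(T(m, t_k) * T(n, t_k))

print(weighted_inner(2, 3))   # ~ 0      (m != n)
print(weighted_inner(3, 3))   # ~ pi/2   (m = n != 0)
print(weighted_inner(0, 0))   # ~ pi     (m = n = 0)
```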

We can define a weighted inner product on the vector space of m \times n matrices by

\langle A, B \rangle = \operatorname{tr}(B^H W A),

where W is a (Hermitian) symmetric positive-definite m x m matrix.
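A minimal sketch of this matrix inner product and its induced norm follows; the example matrices and the diagonal choice of W are made-up values for illustration.

```python
# Weighted matrix inner product <A, B> = tr(B^H W A) and induced norm ||A||_W.
import numpy as np

m, n = 3, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
W = np.diag([3.0, 2.0, 1.0])                      # Hermitian positive definite, m x m

inner_AB = np.trace(B.conj().T @ W @ A)           # <A, B> = tr(B^H W A)
norm_A = np.sqrt(np.trace(A.conj().T @ W @ A))    # ||A||_W = sqrt(<A, A>)
print(inner_AB, norm_A)
```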

Using a norm induced from a weighted inner product, we can define a weighted distance between two vectors:

d_W(x, y)^2 = \|x - y\|_W^2 = (x - y)^H W (x - y).    (2.17)

Figure 2.11: Chebyshev polynomials T_0(t) through T_5(t) for t \in [-1, 1].

Example 2.8.4 A weighted distance arises naturally in many signal detection, estimation, and pattern recognition problems in non-white Gaussian noise. In this example, a detection problem is considered. Detection problems are discussed more fully in chapter 11.

Let S \in \mathbb{R}^n be a signal which takes on one of two different values, either S = s_0 or S = s_1. One of these signals is chosen at random with equal probability, either by a binary data transmitter or by nature. The signal S is observed in the presence of additive Gaussian noise N, which has mean 0 and covariance matrix R. The observation Y can be modeled as

Y = S + N.

Given an observation Y = y, we evaluate the likelihood of y under each of the two hypotheses and determine our decision about S on the basis of which likelihood function is largest. (This is the maximum likelihood decision rule.) Canceling common factors in the comparison, this is equivalent to comparing

(y - s_0)^T R^{-1} (y - s_0) \quad \text{with} \quad (y - s_1)^T R^{-1} (y - s_1).    (2.19)

The comparison in (2.19) corresponds to computing the weighted distances d_W(y, s_0)^2 and d_W(y, s_1)^2 with weighting matrix W = R^{-1}, with the maximum likelihood choice being that which has the minimum weighted distance. This weighted distance measure arises commonly in pattern recognition problems and is known as the Mahalanobis distance.
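The sketch below illustrates this decision rule with made-up signals, covariance, and observation (none of these values come from the text): it forms W = R^{-1} and picks the signal with the smaller weighted (Mahalanobis) distance.

```python
# Maximum likelihood detection via the weighted distance
#   d_W(y, s)^2 = (y - s)^T W (y - s),  W = R^{-1}.
import numpy as np

R = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # noise covariance (non-white)
W = np.linalg.inv(R)                # weighting matrix W = R^{-1}

s0 = np.array([1.0, 0.0])
s1 = np.array([0.0, 1.0])
y = np.array([0.8, 0.3])            # observed vector

def mahalanobis_sq(y, s, W):
    d = y - s
    return d @ W @ d

d0, d1 = mahalanobis_sq(y, s0, W), mahalanobis_sq(y, s1, W)
decision = 0 if d0 < d1 else 1      # choose the signal at minimum weighted distance
print(d0, d1, decision)
```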

Further simplifications are often possible in this comparison. Expanding,

\|y - s_0\|_W^2 = y^T W y - 2 y^T W s_0 + s_0^T W s_0,

and similarly for \|y - s_1\|_W^2. If s_0 and s_1 have the same inner-product norm, so that s_0^T W s_0 = s_1^T W s_1, then, when comparing \|y - s_0\|_W^2 with \|y - s_1\|_W^2, these terms cancel, as well as the y^T W y term. The choice is made depending on whether

y^T W s_0 \quad \text{or} \quad y^T W s_1

is larger, that is, depending on which weighted inner product is largest. The inner product is thus seen to be a similarity measure: the signal s is chosen that is most similar to the received signal vector, where the similarity is determined by the weighted inner product.
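A short sketch of this equivalent correlation form of the detector follows; the signals and weighting matrix are illustrative values chosen so that s_0^T W s_0 = s_1^T W s_1 holds, and are not taken from the text.

```python
# When ||s0||_W = ||s1||_W, the ML rule reduces to picking the larger
# weighted inner product y^T W s_i (a weighted correlation).
import numpy as np

W = np.diag([2.0, 1.0])
s0 = np.array([1.0, 0.0])
s1 = np.array([0.0, np.sqrt(2.0)])   # chosen so s0^T W s0 == s1^T W s1 == 2

y = np.array([0.9, 0.2])

c0 = y @ W @ s0                      # weighted correlation with s0
c1 = y @ W @ s1                      # weighted correlation with s1
decision = 0 if c0 > c1 else 1       # pick the "most similar" signal
print(c0, c1, decision)
```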

2.8.1 Expectation as an inner product

The examples of weighted inner products up until now have been of deterministic functions.

An important generalization develops when a joint density is used as a weighting function in the inner product. Let X and Y be random variables with joint density f_{X,Y}(x, y). We define an inner product between them as

\langle X, Y \rangle = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x\, y\, f_{X,Y}(x, y) \, dx \, dy.

This inner product is, of course, an expectation, and introduction of this inner product allows the conceptual power of vector spaces to be applied to mean-square estimation theory. Thus

\langle X, Y \rangle = E[XY].
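As a sanity check (not from the text), this inner product can be estimated by a sample average; the correlated Gaussian pair below is a made-up example whose true value of E[XY] is the covariance entry 0.6.

```python
# Monte Carlo estimate of the inner product <X, Y> = E[XY].
import numpy as np

rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)
X, Y = samples[:, 0], samples[:, 1]

inner_XY = np.mean(X * Y)   # sample estimate of E[XY]
print(inner_XY)             # close to 0.6
```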


For a random vector Y, note that we can write this inner product as

\langle Y, Y \rangle = E[Y^H Y].

Another notation that is sometimes convenient is to write

\langle Y, Y \rangle = \operatorname{tr}(E[Y Y^H]),

where tr(X) is the trace operator, the sum of the elements on the diagonal of the square matrix X (see section C.3).

When the vector-space viewpoint is applied to problems of minimization, as discussed subsequently, there are two major approaches to the problem. In the first case, an inner product is used that is not based on an expectation; minimization of this sort is referred to as least-squares (LS) in the signal processing literature. When an inner product is used that is defined as an expectation, then the approximation obtained is referred to as a minimum mean-squares (MMS) approximation. In fact, both approximation techniques rely on precisely the same theory, but simply employ inner products suited to the needs of the particular problem.

2.9 Hilbert and Banach spaces

With the definitions of metric spaces and inner-product spaces behind us, we are now ready to introduce the spaces in which most of the work in signal processing is performed.

Definition 2.31 A complete normed vector space is called a Banach space. A complete normed vector space with an inner product (in which the norm is the induced norm) is called a Hilbert space.

See box 2.3 for an introduction to the man Hilbert.

Some examples of Banach and Hilbert spaces:


Box 2.3: David Hilbert (1862-1943)


David Hilbert has been called the "greatest mathematician of recent times."

Born and educated at Königsberg, he received a professorship in Göttingen in 1895.

Throughout his life he worked in a variety of areas, including algebraic invariants, algebraic numbers, calculus of variations, spectral theory and Hilbert space, and axiomatics. He is well known for proposing, in 1900, 23 significant mathematical problems. Work on these problems since that time has tremendously enriched mathematics.

He spent considerable effort working on the foundations of mathematics, attempting to prove that mathematics provides an internally consistent system, so that it is not possible, for example, to prove that "F and not-F" is true.

His efforts were doomed, however; Kurt Gödel demonstrated, in 1931, that it is impossible for any sufficiently rich formal deductive system to prove consistency of the system by the system itself. There are, Gödel showed, formally undecidable propositions, which cannot be proven to be either true or false, and the consistency of the system is one of these propositions.

3. The sequence space \ell_p(0, \infty) is a Banach space. When p = 2, it is a Hilbert space.

4. The space L_p[a, b] is a Banach space. When p = 2 it is a Hilbert space. The Hilbert space of functions with domain over the whole real line is denoted L_2(\mathbb{R}).

Because of the utility of having the norm induced from an inner product, the emphasis in this and succeeding chapters is on Hilbert spaces.

It can be shown that if a normed vector space is finite-dimensional, then it is complete [238, p. 267]. Hence, every normed finite-dimensional space is a Banach space; if the norm is induced from an inner product then it is also a Hilbert space. Furthermore, every finite-dimensional subspace of a space is complete.

The orthogonal complement of a subspace is itself a subspace (see exercise 2.10-52).

The orthogonal complement has the following properties:
