
2.2 Vector spaces

2.2.2 Linear independence

We will first examine the question of the uniqueness of the representation as a linear combination.

Definition 2.17 Let $S$ be a vector space, and let $T$ be a subset of $S$. The set $T$ is linearly independent if, for each finite nonempty subset of $T$ (say $\{p_1, p_2, \ldots, p_n\}$), the only set of scalars satisfying the equation
$$c_1 p_1 + c_2 p_2 + \cdots + c_n p_n = 0$$
is the trivial solution $c_1 = c_2 = \cdots = c_n = 0$.

The set of vectors $p_1, p_2, \ldots, p_n$ is said to be linearly dependent if there exists a set of scalar coefficients $c_1, c_2, \ldots, c_n$, not all zero, such that
$$c_1 p_1 + c_2 p_2 + \cdots + c_n p_n = 0.$$
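Although the definition applies to arbitrary (possibly infinite) sets of vectors in any vector space, for finitely many vectors in $\mathbb{R}^n$ linear independence can be checked numerically. The following sketch is not from the text (the function name and tolerance are illustrative choices); it uses the fact that $k$ vectors are independent exactly when the matrix having them as columns has rank $k$, and it is applied to the vectors that appear in example 2.2.7(2) below.

```python
import numpy as np

def is_linearly_independent(vectors, tol=1e-10):
    """Return True if the given vectors in R^n are linearly independent.

    The vectors are independent exactly when the matrix having them as
    columns has rank equal to the number of vectors, i.e. when the only
    solution of c_1 p_1 + ... + c_k p_k = 0 is the trivial one c = 0.
    """
    A = np.column_stack(vectors)                 # n x k matrix of columns
    return np.linalg.matrix_rank(A, tol=tol) == A.shape[1]

# The vectors of example 2.2.7(2) below are linearly dependent:
p1 = np.array([2.0, -3.0, 4.0])
p2 = np.array([-1.0, 6.0, -2.0])
p3 = np.array([1.0, 6.0, 2.0])
print(is_linearly_independent([p1, p2, p3]))     # False
```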

Example 2.2.7

1. The functions $p_1(t), p_2(t), p_3(t), p_4(t) \in S$ of example 2.2.5 are linearly dependent, because
$$p_4(t) + p_3(t) - p_1(t) = 0;$$
that is, there is a nonzero linear combination of the functions which is equal to zero.


2. The vectors $p_1 = [2, -3, 4]^T$, $p_2 = [-1, 6, -2]^T$, and $p_3 = [1, 6, 2]^T$ are linearly dependent, since
$$4 p_1 + 5 p_2 - 3 p_3 = 0.$$

3. The functions $p_1(t) = t$ and $p_2(t) = 1 + t$ are linearly independent. □

Definition 2.18 Let $T$ be a set of vectors in a vector space $S$ over a set of scalars $R$ (the number of vectors in $T$ could be infinite). The set of vectors $V$ that can be reached by all possible (finite) linear combinations of vectors in $T$ is the span of the vectors. This is denoted by
$$V = \operatorname{span}(T).$$
That is, for any $x \in V$, there is some set of coefficients $\{c_i\}$ in $R$ such that
$$x = c_1 p_1 + c_2 p_2 + \cdots + c_n p_n,$$
where each $p_i \in T$.

It may be observed that $V$ is a subspace of $S$. We also observe that $V = \operatorname{span}(T)$ is the smallest subspace of $S$ containing $T$, in the sense that, for every subspace $M \subset S$ such that $T \subset M$, we have $V \subset M$.

The span of a set of vectors can be thought of as a line (if it occupies one dimension), or as a plane (if it occupies two dimensions), or as a hyperplane (if it occupies more than two dimensions). In this book we will speak of the plane spanned by a set, regardless of its dimensionality.
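In $\mathbb{R}^n$, membership in the span of finitely many vectors can also be tested numerically: $x \in \operatorname{span}\{p_1, \ldots, p_k\}$ exactly when appending $x$ to the $p_i$ does not increase the rank of the matrix they form. The sketch below is illustrative only (the function name and tolerance are not from the text); it is applied to the plane of example 2.2.8(1) below.

```python
import numpy as np

def in_span(vectors, x, tol=1e-10):
    """Return True if x lies in span{vectors} for vectors in R^n."""
    A = np.column_stack(vectors)
    Ax = np.column_stack(vectors + [x])
    # x is in the span exactly when adding it does not increase the rank.
    return np.linalg.matrix_rank(Ax, tol=tol) == np.linalg.matrix_rank(A, tol=tol)

# The span of [1,1,0]^T and [0,1,0]^T is the xy plane in R^3:
p1 = np.array([1.0, 1.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])
print(in_span([p1, p2], np.array([3.0, 7.0, 0.0])))   # True  (third coordinate zero)
print(in_span([p1, p2], np.array([0.0, 0.0, 1.0])))   # False (leaves the xy plane)
```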

Example 2.2.8

1. Let $p_1 = [1, 1, 0]^T$ and $p_2 = [0, 1, 0]^T$ be in $\mathbb{R}^3$. Linear combinations of these vectors are
$$x = c_1 p_1 + c_2 p_2 = [c_1, \; c_1 + c_2, \; 0]^T$$
for $c_i \in \mathbb{R}$. The space $V = \operatorname{span}\{p_1, p_2\}$ is a subset of the space $\mathbb{R}^3$: it is the plane in which the vectors $[1, 1, 0]^T$ and $[0, 1, 0]^T$ lie, which is the $xy$ plane in the usual coordinate system, as shown in figure 2.8.

2. Let $p_1(t) = 1 + t$ and $p_2(t) = t$. Then $V = \operatorname{span}\{p_1, p_2\}$ is the set of all polynomials up to degree 1. The set $V$ could be envisioned abstractly as a "plane" lying in the space of all polynomials.

Figure 2.8: A subspace of $\mathbb{R}^3$.


Definition 2.19 Let $T$ be a set of vectors in a vector space $S$ and let $V \subset S$ be a subspace. If every vector $x \in V$ can be written as a linear combination of vectors in $T$, then $T$ is a spanning set of $V$. □

Example 2.2.9

1. The vectors $p_1 = [1, 6, 5]^T$, $p_2 = [-2, 4, 2]^T$, $p_3 = [1, 1, 0]^T$, and $p_4 = [7, 5, 2]^T$ form a spanning set of $\mathbb{R}^3$.

2. The functions $p_1(t) = 1 + t$, $p_2(t) = 1 + t^2$, $p_3(t) = t^2$, and $p_4(t) = 2$ form a spanning set of the set of polynomials up to degree 2. □
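The second spanning set can be checked with the same rank-based test by representing each polynomial as its coefficient vector in the monomial basis $(1, t, t^2)$. The sketch below is illustrative and assumes the functions as reconstructed in the example above.

```python
import numpy as np

# Coefficient vectors in the monomial basis (1, t, t^2): a + b t + c t^2 -> [a, b, c]
p1 = np.array([1.0, 1.0, 0.0])   # 1 + t
p2 = np.array([1.0, 0.0, 1.0])   # 1 + t^2
p3 = np.array([0.0, 0.0, 1.0])   # t^2
p4 = np.array([2.0, 0.0, 0.0])   # 2

A = np.column_stack([p1, p2, p3, p4])
# The polynomials span the degree-<=2 polynomials exactly when their
# coefficient vectors span R^3, i.e. when the rank is 3.
print(np.linalg.matrix_rank(A))   # 3 -> a spanning set (though not linearly independent)
```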

Linear independence provides us with what we need for a unique representation as a linear combination, as the following theorem shows.

Theorem 2.1 Let $S$ be a vector space, and let $T$ be a nonempty subset of $S$. The set $T$ is linearly independent if and only if, for each nonzero $x \in \operatorname{span}(T)$, there is exactly one finite subset of $T$, which we will denote as $\{p_1, p_2, \ldots, p_m\}$, and a unique set of scalars $\{c_1, c_2, \ldots, c_m\}$, such that
$$x = c_1 p_1 + c_2 p_2 + \cdots + c_m p_m.$$
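For a finite-dimensional numerical illustration: when the columns of a square matrix form a linearly independent spanning set of $\mathbb{R}^n$, the unique coefficients promised by the theorem are obtained by solving one linear system. The sketch below is not from the text; it uses the vectors $p_1, p_2, p_3, p_4$ of example 2.2.9 as reconstructed above.

```python
import numpy as np

# Columns p1, p2, p3 are linearly independent, so every x in R^3 has exactly
# one coefficient vector c with P c = x.
P = np.column_stack([np.array([1.0, 6.0, 5.0]),
                     np.array([-2.0, 4.0, 2.0]),
                     np.array([1.0, 1.0, 0.0])])
x = np.array([7.0, 5.0, 2.0])     # here x = p4

c = np.linalg.solve(P, x)         # unique since P is square and invertible
print(c)                          # the coefficients c1, c2, c3
print(np.allclose(P @ c, x))      # True: x = c1 p1 + c2 p2 + c3 p3
```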

Proof We will show that "$T$ linearly independent" implies a unique representation. Suppose that there are two sets of vectors in $T$,
$$\{p_1, p_2, \ldots, p_m\} \quad\text{and}\quad \{q_1, q_2, \ldots, q_n\},$$
and corresponding nonzero coefficients such that
$$x = c_1 p_1 + c_2 p_2 + \cdots + c_m p_m \quad\text{and}\quad x = d_1 q_1 + d_2 q_2 + \cdots + d_n q_n.$$
We need to show that $n = m$ and $p_i = q_i$ for $i = 1, 2, \ldots, m$, and that $c_i = d_i$.

We note that
$$c_1 p_1 + c_2 p_2 + \cdots + c_m p_m - d_1 q_1 - d_2 q_2 - \cdots - d_n q_n = 0.$$
Since $c_1 \neq 0$, by the definition of linear independence the vector $p_1$ must be an element of the set $\{q_1, q_2, \ldots, q_n\}$ and the corresponding coefficients must be equal; say, $p_1 = q_1$ and $c_1 = d_1$. Similarly, since $c_2 \neq 0$, we can say that $p_2 = q_2$ and $c_2 = d_2$. Proceeding in this way, we must have $p_i = q_i$ for $i = 1, 2, \ldots, m$, and $c_i = d_i$.

Conversely, suppose that for each $x \in \operatorname{span}(T)$ the representation
$$x = c_1 p_1 + c_2 p_2 + \cdots + c_m p_m$$
is unique. Assume to the contrary that $T$ is linearly dependent, so that there are vectors $p_1, p_2, \ldots, p_m$ such that
$$p_1 = -a_2 p_2 - a_3 p_3 - \cdots - a_m p_m. \tag{2.10}$$
But this gives two representations of the vector $p_1$: itself, and the linear combination (2.10). Since this contradicts the unique representation, $T$ must be linearly independent. □

2.2.3 Basis and dimension

Up to this point we have used the term "dimension" freely and without formal definition. We have not clarified what is meant by "finite-dimensional" and "infinite-dimensional" vector spaces. In this section we amend this omission by defining the Hamel basis of a vector space.

Definition 2.20 Let $S$ be a vector space, and let $T$ be a set of vectors from $S$ such that $\operatorname{span}(T) = S$. If $T$ is linearly independent, then $T$ is said to be a Hamel basis for $S$. □


Example 2.2.10

1. The set of vectors in the last example is not linearly independent, since $p_4$ can be written as a linear combination of $p_1$, $p_2$, and $p_3$. However, the set $T = \{p_1, p_2, p_3\}$ is linearly independent and spans the space $\mathbb{R}^3$. Hence $T$ is a (Hamel) basis for $\mathbb{R}^3$.

2. The vectors
$$e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad e_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$
form another (Hamel) basis for $\mathbb{R}^3$. This basis is often called the natural basis.

3. The vectors $p_1(t) = 1$, $p_2(t) = t$, $p_3(t) = t^2$ form a (Hamel) basis for the set $S = \{\text{all polynomials of degree} \leq 2\}$.

Another (Hamel) basis for $S$ is the set of polynomials $\{q_1(t) = 2,\ q_2(t) = t + t^2,\ q_3(t) = t\}$. □

As this example shows, there is not necessarily a unique (Hamel) basis for a vector space. However, the following theorem shows that every basis for a vector space has a common attribute: the cardinality, or number of elements in the basis.
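To illustrate that the two bases in item 3 describe the same space with the same number of elements, the sketch below (illustrative, not from the text) writes one polynomial in both bases by working with coefficient vectors relative to the monomial basis $(1, t, t^2)$.

```python
import numpy as np

# Basis {q1, q2, q3} of the degree-<=2 polynomials, as coefficient columns
# in the monomial basis (1, t, t^2):
Q = np.column_stack([[2.0, 0.0, 0.0],    # q1(t) = 2
                     [0.0, 1.0, 1.0],    # q2(t) = t + t^2
                     [0.0, 1.0, 0.0]])   # q3(t) = t

x = np.array([3.0, 5.0, 4.0])            # x(t) = 3 + 5t + 4t^2 in the basis {1, t, t^2}

d = np.linalg.solve(Q, x)                # coordinates of the same x(t) in {q1, q2, q3}
print(d)                                  # [1.5, 4.0, 1.0]
print(np.allclose(Q @ d, x))              # True: both three-element bases represent x(t)
```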

Theorem 2.2 If $T_1$ and $T_2$ are Hamel bases for a vector space $S$, then $T_1$ and $T_2$ have the same cardinality.

The proof of this theorem is split into two pieces: the finite-dimensional case, and the infinite-dimensional case. The latter may be omitted on a first reading.

Proof (Finite-dimensional case) Suppose
$$T_1 = \{p_1, p_2, \ldots, p_m\} \quad\text{and}\quad T_2 = \{q_1, q_2, \ldots, q_n\}$$
are two Hamel bases of $S$. Express the point $q_1 \in T_2$ as
$$q_1 = c_1 p_1 + c_2 p_2 + \cdots + c_m p_m.$$
At least one of the coefficients $c_i$ must be nonzero; let us take this as $c_1$. We can then write
$$p_1 = \frac{1}{c_1}\left(q_1 - c_2 p_2 - \cdots - c_m p_m\right).$$
By this means we can eliminate $p_1$ as a basis vector in $T_1$ and use instead the set $\{q_1, p_2, \ldots, p_m\}$ as a basis. Similarly, we write
$$q_2 = d_1 q_1 + d_2 p_2 + \cdots + d_m p_m$$

and as before eliminate $p_2$ (at least one of $d_2, \ldots, d_m$ must be nonzero, since $q_1$ and $q_2$ are linearly independent), so that $\{q_1, q_2, p_3, \ldots, p_m\}$ forms a basis. Continuing in this way, we can eliminate each $p_i$, showing that $\{q_1, \ldots, q_n\}$ spans the same space as $\{p_1, \ldots, p_m\}$.

We can conclude that $m \geq n$. Suppose, to the contrary, that $n > m$. Then a vector such as $q_{m+1}$, which does not fall in the basis set $\{q_1, \ldots, q_m\}$, would have to be linearly dependent on that set, which violates the fact that $T_2$ is itself a basis. Reversing the argument, we find that $n \geq m$. In combination, then, we conclude that $m = n$.

(Infinite-dimensional case) Let $T_1$ and $T_2$ be bases. For an $x \in T_1$, let $T_2(x)$ denote the unique finite set of points in $T_2$ needed to express $x$.


Claim: If $y \in T_2$, then $y \in T_2(x)$ for some $x \in T_1$. Proof: Since a point $y$ is in $S$, $y$ must be a finite linear combination of vectors in $T_1$; say,
$$y = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n$$
for some set of vectors $x_i \in T_1$. Expanding each $x_i$ in terms of $T_2(x_i)$ then expresses $y$ as a finite linear combination of vectors in $T_2(x_1) \cup \cdots \cup T_2(x_n)$, so that, by the uniqueness of the representation, $y \in T_2(x_i)$ for some $i$.

Since for every $y \in T_2$ there is some $x \in T_1$ such that $y \in T_2(x)$, it follows that
$$T_2 \subset \bigcup_{x \in T_1} T_2(x).$$
Noting that there are $|T_1|$ sets in this union,² each of which is finite, we conclude that $|T_2| \leq |T_1|$. Now turning the argument around, we conclude that $|T_1| \leq |T_2|$. By these two inequalities we conclude that $|T_1| = |T_2|$. □

²Recall that the notation $|S|$ indicates the cardinality of the set $S$; see section A.1.

On the strength of this theorem, we can state a consistent definition for the dimension of a vector space.

Definition 2.21 Let $T$ be a Hamel basis for a vector space $S$. The cardinality of $T$ is the dimension of $S$. This is denoted as $\dim(S)$. It is the number of linearly independent vectors required to span the space. □

Since the dimension of a vector space is unique, we can conclude that a basis $T$ for a vector space $S$ is a smallest set of vectors whose linear combinations can form every vector in $S$, in the sense that every other spanning set for $S$ contains at least $|T|$ vectors.
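One way to see this minimality in practice is to extract a basis from a spanning set greedily, keeping only those vectors that are not already in the span of the vectors kept so far; the number retained is $\dim(S)$. The sketch below is illustrative (the function name, tolerance, and the use of the spanning set of example 2.2.9 are assumptions, not from the text).

```python
import numpy as np

def extract_basis(vectors, tol=1e-10):
    """Greedily keep each vector not in the span of those already kept.

    The vectors kept form a basis of span{vectors}; their number is the
    dimension of that span.
    """
    basis = []
    for v in vectors:
        candidate = basis + [v]
        A = np.column_stack(candidate)
        if np.linalg.matrix_rank(A, tol=tol) == len(candidate):
            basis = candidate
    return basis

# The spanning set of R^3 from example 2.2.9(1) reduces to a 3-element basis,
# matching dim(R^3) = 3.
spanning = [np.array([1.0, 6.0, 5.0]), np.array([-2.0, 4.0, 2.0]),
            np.array([1.0, 1.0, 0.0]), np.array([7.0, 5.0, 2.0])]
print(len(extract_basis(spanning)))   # 3
```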

The last remaining fact, which we will not prove, shows the importance of the Hamel basis: Every vector space has a Hamel basis. So, for many purposes, whatever we want to do with a vector space can be done to the Hamel basis.

Example 2.2.11 Let $S$ be the set of all polynomials. Then a polynomial $x(t) \in S$ can be written as a linear combination of the functions $\{1, t, t^2, \ldots\}$. It can be shown (see exercise 2.2-32) that this set of functions is linearly independent. Hence the dimension of $S$ is infinite. □

Example 2.2.12 [Bernard Friedman, Principles and Techniques of Applied Mathematics, Dover, 1990.] To illustrate that infinite-dimensional vector spaces can be difficult to work with, and that particular care is required, we demonstrate that for an infinite-dimensional vector space $S$, an infinite set of linearly independent vectors which span $S$ need not form a basis for $S$.

Let $X$ be the infinite-sequence space, with elements of the form $(x_1, x_2, x_3, \ldots)$, where each $x_i \in \mathbb{R}$. The set of vectors
$$p_j = (1, 0, 0, \ldots, 0, 1, 0, \ldots), \qquad j = 2, 3, \ldots,$$
where the second 1 is in the $j$th position, forms a set of linearly independent vectors.

We first show that the set $\{p_j,\ j = 2, 3, \ldots\}$ spans $X$. Let $x = (x_1, x_2, x_3, \ldots)$ be an arbitrary element of $X$. Let
$$a_n = x_1 - x_2 - x_3 - \cdots - x_n,$$
and let $r_n$ be an integer larger than $n |a_n|$. Now consider the sequence of vectors
$$y_n = x_2 p_2 + x_3 p_3 + \cdots + x_n p_n + \frac{a_n}{r_n}\left(p_{n+1} + p_{n+2} + \cdots + p_P\right),$$
where $P = n + r_n$. For example,
$$y_n = \left(x_1, x_2, \ldots, x_n, \frac{a_n}{r_n}, \ldots, \frac{a_n}{r_n}, 0, 0, \ldots\right).$$
In the limit as $n \to \infty$, the residual term $a_n/r_n$ becomes arbitrarily small (since $|a_n|/r_n < 1/n$), and $y_n \to x$. So there is a representation for $x$ using this infinite set of basis functions.

However, and this is the subtle but important point, the representation exists as a result of a limiting process. There is no finite set of fixed scalars $c_2, c_3, \ldots, c_N$ such that the sequence $x = (1, 0, 0, \ldots)$ can be written in terms of the basis functions as
$$x = (1, 0, 0, \ldots) = c_2 p_2 + c_3 p_3 + \cdots + c_N p_N.$$

When we introduced the concept of linear combinations in definition 2.15, only finite sums were allowed. Since representing $x$ would require an infinite sum, the set of functions $p_2, p_3, \ldots$ does not form a basis.

It may be objected that it would be straightforward to simply allow the infinite sum $\sum_{i=2}^{\infty} c_i p_i$ and have done with the matter. But dealing with infinite series always requires more care than dealing with finite series, so we consider this as a different case. □
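As a purely illustrative check of the construction in this example, the sketch below builds $y_n$ for $x = (1, 0, 0, \ldots)$ and shows that its first coordinate already equals $x_1$ while the residual entries $a_n/r_n$ shrink like $1/n$. The truncation of the sequence space to finitely many coordinates, and the helper y_n, are assumptions of the sketch, not part of the text.

```python
import numpy as np

def y_n(x_head, n, width):
    """First `width` coordinates of y_n for a sequence whose nonzero part is x_head."""
    x = np.zeros(width)
    x[:len(x_head)] = x_head
    a_n = x[0] - x[1:n].sum()                # a_n = x_1 - x_2 - ... - x_n
    r_n = int(np.floor(n * abs(a_n))) + 1    # an integer larger than n*|a_n|
    y = np.zeros(width)
    # x_2 p_2 + ... + x_n p_n   (p_j has a 1 in positions 1 and j)
    y[0] += x[1:n].sum()
    y[1:n] += x[1:n]
    # (a_n / r_n)(p_{n+1} + ... + p_{n+r_n})
    y[0] += a_n
    y[n:min(n + r_n, width)] += a_n / r_n
    return y

x_head = [1.0]                               # x = (1, 0, 0, ...)
for n in (5, 50, 500):
    y = y_n(x_head, n, width=2 * n + 10)
    print(n, y[0], np.max(np.abs(y[1:])))    # first coordinate is 1; residual -> 0
```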
