1111: Linear Algebra I

Dr. Vladimir Dotsenko (Vlad) Lecture 20

Linear maps

Example 1. Let V be the vector space of all polynomials in one variable x. Consider the function α: V → V that maps every polynomial f(x) to 3f(x)f′(x). This is not a linear map; for example, 1 ↦ 0 and x ↦ 3x, but x + 1 ↦ 3(x + 1) = 3x + 3, which is not equal to 3x + 0 = α(x) + α(1).
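If you want to double-check such computations by machine, here is a small sympy sketch (not part of the original notes) that evaluates α on 1, x and x + 1 and confirms that additivity fails.

```python
import sympy as sp

x = sp.symbols('x')

def alpha(f):
    # the map f(x) |-> 3 f(x) f'(x) from Example 1
    return sp.expand(3 * f * sp.diff(f, x))

print(alpha(sp.Integer(1)))             # 0
print(alpha(x))                         # 3*x
print(alpha(x + 1))                     # 3*x + 3
print(alpha(x) + alpha(sp.Integer(1)))  # 3*x, so alpha(x + 1) != alpha(x) + alpha(1)
```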

Definition 1. Let ϕ: V → W be a linear map, and let e_1, ..., e_n and f_1, ..., f_m be bases of V and W respectively. Let us compute the coordinates of the vectors ϕ(e_i) with respect to the basis f_1, ..., f_m:

ϕ(e_1) = a_{11}f_1 + a_{21}f_2 + ··· + a_{m1}f_m,
ϕ(e_2) = a_{12}f_1 + a_{22}f_2 + ··· + a_{m2}f_m,
...
ϕ(e_n) = a_{1n}f_1 + a_{2n}f_2 + ··· + a_{mn}f_m.

The matrix

A_{ϕ,e,f} = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}

is called the matrix of the linear map ϕ with respect to the given bases. For each k, its k-th column is the column of coordinates of the image ϕ(e_k).
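As an illustration of this definition (not from the notes), here is a short numpy sketch that assembles such a matrix column by column for a hypothetical map ϕ: R³ → R² given by ϕ(x, y, z) = (x + 2y, 3z), taking the standard bases so that coordinate columns coincide with the vectors themselves.

```python
import numpy as np

def phi(v):
    # hypothetical linear map phi(x, y, z) = (x + 2y, 3z)
    x, y, z = v
    return np.array([x + 2 * y, 3 * z])

basis_V = np.eye(3)  # rows are the standard basis vectors e_1, e_2, e_3 of R^3
A = np.column_stack([phi(e) for e in basis_V])  # column k is the coordinate column of phi(e_k)
print(A)
# [[1. 2. 0.]
#  [0. 0. 3.]]
```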

Similarly to how we proved it for transition matrices, we have the following result.

Lemma 1. Let ϕ: V → W be a linear map, and let e_1, ..., e_n and f_1, ..., f_m be bases of V and W respectively. Suppose that x_1, ..., x_n are the coordinates of some vector v relative to the basis e_1, ..., e_n, and y_1, ..., y_m are the coordinates of ϕ(v) relative to the basis f_1, ..., f_m. In the notation above, we have

\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix} = A_{ϕ,e,f} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.

Proof. The proof is indeed very analogous to the one for transition matrices: we have

v = x_1 e_1 + ··· + x_n e_n,

so that

ϕ(v) = x_1 ϕ(e_1) + ··· + x_n ϕ(e_n).

Substituting the expansion of the ϕ(e_i)'s in terms of the f_j's, we get

ϕ(v) = x_1(a_{11}f_1 + a_{21}f_2 + ··· + a_{m1}f_m) + ··· + x_n(a_{1n}f_1 + a_{2n}f_2 + ··· + a_{mn}f_m) =
= (a_{11}x_1 + a_{12}x_2 + ··· + a_{1n}x_n)f_1 + ··· + (a_{m1}x_1 + a_{m2}x_2 + ··· + a_{mn}x_n)f_m.

Since we know that coordinates are uniquely defined, we conclude that

a_{11}x_1 + a_{12}x_2 + ··· + a_{1n}x_n = y_1,
...
a_{m1}x_1 + a_{m2}x_2 + ··· + a_{mn}x_n = y_m,

which is what we want to prove.
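A quick numerical sanity check of the lemma (again an illustration with the same hypothetical map ϕ(x, y, z) = (x + 2y, 3z), not part of the notes): multiplying a coordinate column by the matrix built as in Definition 1 reproduces the coordinates of ϕ(v).

```python
import numpy as np

def phi(v):
    x, y, z = v
    return np.array([x + 2 * y, 3 * z])

# matrix of phi with respect to the standard bases (Definition 1)
A = np.column_stack([phi(e) for e in np.eye(3)])

v = np.array([5.0, -1.0, 2.0])   # coordinate column x_1, x_2, x_3 of v
print(A @ v)                     # [3. 6.]  -- the y_i predicted by Lemma 1
print(phi(v))                    # [3. 6.]  -- phi(v) computed directly
```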

The next statement is also similar to the corresponding one for transition matrices (also, we sort of used this statement implicitly when we talked about matrix products long ago).

Lemma 2. Let U, V, and W be vector spaces, and let ψ: U → V and ϕ: V → W be linear maps. Finally, let e_1, ..., e_n, f_1, ..., f_m, and g_1, ..., g_k be bases of U, V, and W respectively. Then

A_{ϕ∘ψ,e,g} = A_{ϕ,f,g} A_{ψ,e,f}.

Proof. First, let us note that

(ϕ∘ψ)(u_1 + u_2) = ϕ(ψ(u_1 + u_2)) = ϕ(ψ(u_1) + ψ(u_2)) = ϕ(ψ(u_1)) + ϕ(ψ(u_2)) = (ϕ∘ψ)(u_1) + (ϕ∘ψ)(u_2),
(ϕ∘ψ)(c·u) = ϕ(ψ(c·u)) = ϕ(cψ(u)) = cϕ(ψ(u)) = c(ϕ∘ψ)(u),

so ϕ∘ψ is a linear map.

Let us now prove the second statement, the formula for the matrices. We take a vector u ∈ U and apply the formula of Lemma 1. On the one hand, we have

(ϕ∘ψ(u))_g = A_{ϕ∘ψ,e,g} u_e.

On the other hand, we obtain

(ϕ∘ψ(u))_g = (ϕ(ψ(u)))_g = A_{ϕ,f,g} (ψ(u))_f = A_{ϕ,f,g} (A_{ψ,e,f} u_e) = (A_{ϕ,f,g} A_{ψ,e,f}) u_e.

Therefore

A_{ϕ∘ψ,e,g} u_e = (A_{ϕ,f,g} A_{ψ,e,f}) u_e

for every u_e. From our previous classes we know that knowing Av for all vectors v completely determines the matrix A, so

A_{ϕ∘ψ,e,g} = A_{ϕ,f,g} A_{ψ,e,f},

as required.
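Here is a small numerical check of the lemma (an illustration with hypothetical maps ψ: R² → R³ and ϕ: R³ → R², not from the notes): the matrix of the composition, built directly from Definition 1, coincides with the product of the two matrices.

```python
import numpy as np

def psi(u):
    # hypothetical linear map psi: R^2 -> R^3
    a, b = u
    return np.array([a + b, a - b, 2 * a])

def phi(v):
    # hypothetical linear map phi: R^3 -> R^2
    x, y, z = v
    return np.array([x + 2 * y, 3 * z])

def matrix_of(f, dim):
    # matrix of a linear map with respect to standard bases (Definition 1)
    return np.column_stack([f(e) for e in np.eye(dim)])

A_psi = matrix_of(psi, 2)                     # 3 x 2
A_phi = matrix_of(phi, 3)                     # 2 x 3
A_comp = matrix_of(lambda u: phi(psi(u)), 2)  # matrix of phi o psi, 2 x 2

print(np.allclose(A_comp, A_phi @ A_psi))     # True, as Lemma 2 predicts
```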

Example 2. Let us consider the linear map X: P_2 → P_3 discussed earlier. Let us take the basis e_1 = 1, e_2 = x, e_3 = x² of P_2 and the basis f_1 = 1, f_2 = x, f_3 = x², f_4 = x³ of P_3, and compute A_{X,e,f}. Note that X(e_1) = x·1 = x = f_2, X(e_2) = x·x = x² = f_3, and X(e_3) = x·x² = x³ = f_4. Therefore

A_{X,e,f} = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.

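The same matrix can be recomputed mechanically; the following sympy sketch (not part of the original notes) reads off the coordinate columns of X(e_k) in the monomial basis.

```python
import sympy as sp

x = sp.symbols('x')

def coords(p, degrees):
    # coordinate column of a polynomial in the monomial basis x^0, x^1, ...
    p = sp.expand(p)
    return [p.coeff(x, d) for d in degrees]

basis_P2 = [sp.Integer(1), x, x**2]
# rows are the coordinate columns of X(e_k) = x * e_k; transpose to get A_{X,e,f}
A_X = sp.Matrix([coords(x * e, range(4)) for e in basis_P2]).T
print(A_X)   # Matrix([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
```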

Example 3. Let us consider the linear map D: P_3 → P_2. Let us take the basis e_1 = 1, e_2 = x, e_3 = x², e_4 = x³ of P_3 and the basis f_1 = 1, f_2 = x, f_3 = x² of P_2, and let us compute A_{D,e,f}. Note that D(e_1) = 1′ = 0, D(e_2) = x′ = 1 = f_1, D(e_3) = (x²)′ = 2x = 2f_2, and D(e_4) = (x³)′ = 3x² = 3f_3. Therefore

A_{D,e,f} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.

Example 4. It is also useful to remark that

A_{D,e,f} A_{X,e,f} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix},

which is indeed A_{D∘X,e,e}, as our previous lemma suggests.
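The following sympy sketch (not part of the original notes) verifies this: the product A_{D,e,f} A_{X,e,f} coincides with the matrix of D∘X: P_2 → P_2, that is, of the map p(x) ↦ (x p(x))′, in the basis 1, x, x².

```python
import sympy as sp

x = sp.symbols('x')

def coords(p, degrees):
    # coordinate column of a polynomial in the monomial basis x^0, x^1, ...
    p = sp.expand(p)
    return [p.coeff(x, d) for d in degrees]

basis_P2 = [sp.Integer(1), x, x**2]
basis_P3 = [sp.Integer(1), x, x**2, x**3]

A_X  = sp.Matrix([coords(x * e, range(4)) for e in basis_P2]).T              # 4 x 3
A_D  = sp.Matrix([coords(sp.diff(e, x), range(3)) for e in basis_P3]).T      # 3 x 4
A_DX = sp.Matrix([coords(sp.diff(x * e, x), range(3)) for e in basis_P2]).T  # matrix of D o X

print(A_D * A_X)            # Matrix([[1, 0, 0], [0, 2, 0], [0, 0, 3]])
print(A_D * A_X == A_DX)    # True, as Lemma 2 predicts
```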

Example 5. Consider the vector space M_2 of all 2×2-matrices. Let us define a function β: M_2 → M_2 by the formula

β(X) = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} X.

Let us check that this map is a linear operator. Indeed, by properties of matrix products

β(X_1 + X_2) = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}(X_1 + X_2) = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} X_1 + \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} X_2 = β(X_1) + β(X_2),

β(cX) = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}(cX) = c \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} X = cβ(X).

We consider the basis of matrix units in M_2: e_1 = E_{11}, e_2 = E_{12}, e_3 = E_{21}, e_4 = E_{22}. We have

β(e_1) = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix} = e_1 + e_3,

β(e_2) = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} = e_2 + e_4,

β(e_3) = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = e_1,

β(e_4) = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = e_2,

so

A_{β,e} = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}.

It is important to remark that a lot of students get this last example wrong, saying that the matrix of β is

\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}

since β multiplies every matrix by \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}. The problem here is that the space of 2×2-matrices is four-dimensional, so β must be represented by a 4×4-matrix.
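To see the four-dimensionality concretely, here is a short numpy sketch (not part of the original notes) that recomputes the 4×4 matrix A_{β,e}: since the coordinates of a 2×2-matrix in the basis of matrix units are just its entries read row by row, each column of A_{β,e} is the flattened matrix β(e_k).

```python
import numpy as np

M = np.array([[1, 1],
              [1, 0]])
beta = lambda X: M @ X   # the map beta from Example 5

# matrix units E11, E12, E21, E22, in this order
units = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]]),
         np.array([[0, 0], [1, 0]]), np.array([[0, 0], [0, 1]])]

# column k lists the coordinates of beta(e_k) in the basis of matrix units,
# i.e. the entries of beta(e_k) read row by row
A_beta = np.column_stack([beta(E).reshape(4) for E in units])
print(A_beta)
# [[1 0 1 0]
#  [0 1 0 1]
#  [1 0 0 0]
#  [0 1 0 0]]
```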

