1111: Linear Algebra I

Dr. Vladimir Dotsenko (Vlad) Lecture 17

Example 1. The spanning set that we constructed for the solution set of an arbitrary system of linear equations was, as we remarked, linearly independent, so in fact it provided a basis of that vector space.

Example 2. The monomials x^k, k ≥ 0, form a basis of the space of polynomials in one variable. Note that this basis is infinite, but we nevertheless only consider finite linear combinations at all stages.
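As a quick illustration (this example is mine, not from the original notes): even though the basis is infinite, any particular polynomial is a finite linear combination of monomials, e.g.
\[
3 - 2x + 5x^4 = 3 \cdot 1 + (-2) \cdot x + 5 \cdot x^4,
\]
which uses only the three basis vectors 1, x, x^4; no infinite sums are ever needed.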

Dimension

Note that in R^n we proved that a linearly independent system of vectors consists of at most n vectors, and a complete system of vectors consists of at least n vectors. In a general vector space V, there is no a priori n that can play this role. Moreover, the previous example shows that sometimes no n bounding the size of a linearly independent system of vectors exists at all. It is, however, possible to prove a version of those statements that is valid in every vector space.

Theorem 1. Let V be a vector space, and suppose that e_1, ..., e_k is a linearly independent system of vectors and that f_1, ..., f_m is a complete system of vectors. Then k ≤ m.

Proof. Assume the contrary, so that k > m. Since f_1, ..., f_m is a complete system, we can find coefficients a_{ij} for which

\[
\begin{aligned}
e_1 &= a_{11}f_1 + a_{21}f_2 + \cdots + a_{m1}f_m,\\
e_2 &= a_{12}f_1 + a_{22}f_2 + \cdots + a_{m2}f_m,\\
&\;\;\vdots\\
e_k &= a_{1k}f_1 + a_{2k}f_2 + \cdots + a_{mk}f_m.
\end{aligned}
\]

Let us look for linear combinations c_1 e_1 + ... + c_k e_k that are equal to zero (since these vectors are assumed linearly independent, we should not find any nontrivial ones). Such a combination, once we substitute the expressions above, becomes

\[
c_1(a_{11}f_1 + a_{21}f_2 + \cdots + a_{m1}f_m) + c_2(a_{12}f_1 + a_{22}f_2 + \cdots + a_{m2}f_m) + \cdots + c_k(a_{1k}f_1 + a_{2k}f_2 + \cdots + a_{mk}f_m)
\]
\[
= (a_{11}c_1 + a_{12}c_2 + \cdots + a_{1k}c_k)f_1 + \cdots + (a_{m1}c_1 + a_{m2}c_2 + \cdots + a_{mk}c_k)f_m.
\]
This means that if we ensure

\[
\begin{aligned}
a_{11}c_1 + a_{12}c_2 + \cdots + a_{1k}c_k &= 0,\\
&\;\;\vdots\\
a_{m1}c_1 + a_{m2}c_2 + \cdots + a_{mk}c_k &= 0,
\end{aligned}
\]

then this linear combination is automatically zero. But since we assume k > m, this homogeneous system has more unknowns than equations, and therefore has a nontrivial solution c_1, ..., c_k, so the vectors e_1, ..., e_k are linearly dependent, a contradiction.
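To make the mechanism of the proof concrete (a worked instance added here, not part of the original lecture), take V = R^2 with the complete system f_1 = (1, 0), f_2 = (0, 1), so m = 2, and the k = 3 vectors e_1 = (1, 0), e_2 = (0, 1), e_3 = (1, 1). The system from the proof reads
\[
\begin{aligned}
c_1 + c_3 &= 0,\\
c_2 + c_3 &= 0,
\end{aligned}
\]
two equations in three unknowns; it has the nontrivial solution c_1 = 1, c_2 = 1, c_3 = -1, and indeed e_1 + e_2 - e_3 = 0, exhibiting the linear dependence the theorem guarantees.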


This result leads, indirectly, to an important new notion.

Definition 1. We say that a vector space V is finite-dimensional if it has a basis consisting of finitely many vectors. Otherwise we say that V is infinite-dimensional.

Example 3. Clearly, R^n is finite-dimensional. The space of all polynomials is infinite-dimensional: finitely many polynomials can only produce polynomials of bounded degree as linear combinations.

Lemma 1. Let V be a finite-dimensional vector space. Then every basis of V consists of the same number of vectors.

Proof. Indeed, having a basis consisting of n elements implies, in particular, having a complete system of n vectors, so by our theorem it is impossible to have a linearly independent system of more than n vectors.

Thus, every basis has finitely many elements, and for two bases e_1, ..., e_k and f_1, ..., f_m we have k ≤ m and m ≤ k, so m = k.

Definition 2. For a finite-dimensional vector space V, the number of vectors in a basis of V is called the dimension of V, and is denoted by dim(V).

Example 4. The dimension of R^n is equal to n, as expected.

Example 5. The dimension of the space of polynomials in one variable x of degree at most n is equal to n + 1, since it has a basis 1, x, ..., x^n.

Example 6. The dimension of the space of m × n matrices is equal to mn.
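To see where mn comes from (a standard argument, sketched here for the 2 × 2 case): the mn matrix units E_{ij}, with 1 in position (i, j) and 0 elsewhere, form a basis, since every matrix decomposes uniquely as
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
= a \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
+ b \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
+ c \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
+ d \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
\]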

Example 7. The dimension of the solution space to a system of homogeneous linear equations is equal to the number of free unknowns.
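A minimal worked instance (added for illustration): the single homogeneous equation x_1 + x_2 + x_3 = 0 in R^3 has pivotal unknown x_1 and free unknowns x_2 = s, x_3 = t, so the solution set is
\[
\{\, s(-1, 1, 0) + t(-1, 0, 1) : s, t \in \mathbb{R} \,\},
\]
with basis (-1, 1, 0), (-1, 0, 1); its dimension is 2, the number of free unknowns.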

Coordinates

Let V be a finite-dimensional vector space, and let e_1, ..., e_n be a basis of V.

Definition 3. For a vector v ∈ V, the scalars c_1, ..., c_n for which
\[
v = c_1 e_1 + c_2 e_2 + \cdots + c_n e_n
\]
are called the coordinates of v with respect to the basis e_1, ..., e_n.

Lemma 2. The above definition makes sense: each vector has (unique) coordinates.

Proof. Existence follows from the spanning property of a basis. For uniqueness, note that if v = c_1 e_1 + ... + c_n e_n = c'_1 e_1 + ... + c'_n e_n, then (c_1 - c'_1) e_1 + ... + (c_n - c'_n) e_n = 0, and linear independence forces c_i = c'_i for each i.
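For example (an illustration not in the notes), take V = R^2 with the basis e_1 = (1, 1), e_2 = (1, -1). To find the coordinates of v = (3, 1), solve c_1 e_1 + c_2 e_2 = v:
\[
c_1 + c_2 = 3, \qquad c_1 - c_2 = 1 \quad\Longrightarrow\quad c_1 = 2,\ c_2 = 1,
\]
so the coordinates of v with respect to this basis are (2, 1), even though its coordinates with respect to the standard basis are (3, 1).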
