
A Very Brief Introduction to Vector Spaces
Ben Wallis

1 The Basics.

Before we can define vector spaces, we need to first understand what is meant
by a field. There are a variety of ways to characterize fields. For instance,
a field is a commutative ring with identity for which every nonzero element
has a multiplicative inverse. Alternatively, a field can be understood as a set
which is an additive abelian group whose nonzero elements form an abelian
group under multiplication, where the distributive laws hold between addition
and multiplication operations. In this introduction, though, we take a more
elementary approach, as follows.
Definition 1.1. A field is a nonempty set F equipped with operations + :
F × F → F (addition) and · : F × F → F (multiplication) satisfying the
following field axioms:
(F1) F is closed under addition and multiplication, i.e. a + b ∈ F and a · b ∈ F
for all a, b ∈ F . (Notice that the term “closed” has a different meaning here
than in the topological sense.)
(F2) Addition and multiplication are each commutative, i.e.
a + b = b + a and a · b = b · a
for all a, b ∈ F .
(F3) Addition and multiplication are each associative, i.e.
a + (b + c) = (a + b) + c and a · (b · c) = (a · b) · c
for all a, b, c ∈ F .
(F4) There exist distinct and unique additive and multiplicative identities in
F , i.e. there are unique 0, 1 ∈ F with 0 ≠ 1 such that
0 + a = a + 0 = a and 1 · a = a · 1 = a
for all a ∈ F .
(F5) F is closed under additive inverses, i.e. for every a ∈ F there exists a
unique element −a ∈ F such that a + (−a) = 0.
(F6) The set F^× of nonzero elements in F is closed under multiplicative
inverses, i.e. for every nonzero a ∈ F there exists a unique element
a^{-1} ∈ F such that a · a^{-1} = 1.
(F7) The distributive laws hold, i.e.
a · (b + c) = a · b + a · c and (a + b) · c = a · c + b · c
for all a, b, c ∈ F .
Usually we dispense with the symbol · and just write ab in place of a · b. Some
authors do not require that 1 ≠ 0, but this fact follows from the others as long
as F contains more than a single element. Additionally, some of these axioms
are redundant, namely the uniqueness of identities and inverses, as well as the
second distributive law.
Notice that we can treat F as an abelian group under addition. The set of
nonzero elements in F likewise forms an abelian group under multiplication,
which motivates the notation
F^× := {a ∈ F : a ≠ 0}.
Some of the most common examples of fields include the sets C (the complex
numbers), R (the reals) and Q (the rationals). In contrast, the set Z of integers
is not a field since it is not closed under multiplicative inverses, even though it
satisfies all six of the other field axioms.
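(For instance, 2 ∈ Z has no multiplicative inverse within Z, since 1/2 is not
an integer.)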
Fields need not be infinite. Consider the simplest example of a field,
Z/2Z := {0, 1}
with operations + and · defined by

1 + 1 = 0 + 0 = 0 · 1 = 1 · 0 = 0 · 0 = 0 and 1 + 0 = 0 + 1 = 1 · 1 = 1.
In fact, Z/pZ is a field for any prime p.
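To see why primality matters here, note for example that in Z/5Z the element
2 has multiplicative inverse 3, since 2 · 3 = 6 ≡ 1 (mod 5). By contrast, in
Z/6Z we have 2 · 3 ≡ 0 (mod 6) with 2, 3 ≠ 0, and (as Proposition 1.2 below
shows) a field admits no such zero divisors, so Z/6Z is not a field.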
Some properties of fields follow immediately from the definition. We list them
here:
Proposition 1.2. Let F be a field. Then:
(i) The cancellation laws hold for F , i.e. for all a, b, c ∈ F , if a + c = b + c
then a = b, and if a · c = b · c with c ≠ 0 then a = b.
(ii) a · 0 = 0 · a = 0 for all a ∈ F .
(iii) The product of nonzero elements in F is again nonzero; i.e., if a, b ∈ F
with ab = 0 then either a = 0 or b = 0.
(iv) −(−a) = a for all a ∈ F .
(v) a(−b) = (−a)b = −(ab) for all a, b ∈ F ; in particular (−1)a = −a.
(vi) (−a)(−b) = ab for all a, b ∈ F .
(vii) (ab)^{-1} = a^{-1} b^{-1} for all nonzero a, b ∈ F , and −(a + b) = (−a) + (−b)
for all a, b ∈ F .
(viii) (−a)^{-1} = −(a^{-1}) for all nonzero a ∈ F .
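To illustrate how these follow from the field axioms, here is a short derivation
of (ii): using (F4) and (F7),
a · 0 + a · 0 = a · (0 + 0) = a · 0 = a · 0 + 0,
so cancelling a · 0 via (i) gives a · 0 = 0; then 0 · a = 0 follows from
commutativity (F2).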
For the most part, one can think of a field as being analogous to the real
numbers. The only catch is that we must remember that fields need not carry
an order relation or topological structure. And of course, as we have already
seen, fields need not be infinite.
We can also define a subfield of a field F to be a subset G of F which is a field
in its own right under the same operations as F . To determine whether or not
some subset G of a field F is a subfield, it suffices to verify that G contains 0
and 1, and is closed under addition, multiplication and both kinds of inverses.
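For example, Q is a subfield of R, which is in turn a subfield of C, all under
the usual operations. By contrast, the interval [0, 1] ⊆ R contains 0 and 1 and
is closed under multiplication, but it is not a subfield of R because it is not
closed under addition.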
Recall that a homomorphism is a structure-preserving map, where the structure
in question varies from context to context. In particular, a field homomorphism
is a map φ between fields which satisfies

φ(a + b) = φ(a) + φ(b) and φ(ab) = φ(a)φ(b).
From this it follows that also
φ(−a) = −φ(a), φ(a^{-1}) = φ(a)^{-1}, φ(0) = 0 and φ(1) = 1,
where a ≠ 0 in the second identity.
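To see, for instance, why φ(0) = 0, apply φ to 0 = 0 + 0:
φ(0) = φ(0 + 0) = φ(0) + φ(0),
and cancel φ(0) from both sides. Then for any a we get
0 = φ(0) = φ(a + (−a)) = φ(a) + φ(−a),
so that φ(−a) = −φ(a).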

Recall that an isomorphism is a bijective homomorphism. Field isomorphisms
are always invertible; that is, if φ : F → G is a field isomorphism, then there
exists a map φ^{-1} : G → F such that φ^{-1} ◦ φ = φ ◦ φ^{-1} = ι (the identity
map ι(a) = a). Notice that the inverse of an isomorphism is itself an
isomorphism.


If there exists a field isomorphism between F and G, then we say that F and G
are isomorphic as fields. In that case we write
F ≅ G.
Definition 1.3. Let F be a field. A polynomial over F in x is an expression
a_m x^m + a_{m−1} x^{m−1} + · · · + a_1 x + a_0
with coefficients a_0 , · · · , a_m ∈ F , where x is a variable ranging over elements
of F . We may denote this polynomial as
p(x) = a_m x^m + a_{m−1} x^{m−1} + · · · + a_1 x + a_0 .
We say that p has degree n, written deg p = n, whenever a_n ≠ 0 and a_k = 0
for all n < k ≤ m. In this case a_n is called the leading coefficient of p. If
a_n = 1 then p is said to be monic. The set of all polynomials over F in x
is denoted F [x].
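For example, over Q the expression
p(x) = 0 · x^4 + x^3 + 2x + 5
has degree 3 and leading coefficient 1, so p is monic; trailing zero coefficients
do not affect the degree.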
Please note that a polynomial is an expression, not a function. Even though
each such expression defines a unique function in the obvious way, the reverse
is not always true. For example, it is possible to find p(x), q(x) ∈ (Z/2Z)[x] for
which p = q as functions but p(x) ≠ q(x) as polynomials.
With these tools in hand, we can proceed to define vector spaces.
Definition 1.4. Let F be a field. A vector space over F is a nonempty set
V equipped with operations + : V × V → V (addition) and · : F × V → V
(scalar multiplication), satisfying the following vector space axioms.
(VS1) Addition is commutative and associative, i.e.
u + v = v + u and (u + v) + w = u + (v + w)
for all u, v, w ∈ V .
(VS2) V contains a zero element, i.e. there is 0 ∈ V such that
v + 0 = 0 + v = v
for all v ∈ V .

(VS3) For every v ∈ V there is a unique −v ∈ V such that
v + (−v) = (−v) + v = 0,
and in particular −v = (−1)v.
(VS4) For every v ∈ V we have 1v = v and 0v = 0.
(VS5) Scalar multiplication is associative, i.e. for all a, b ∈ F and v ∈ V we have
(ab)v = a(bv).
(VS6) All distributive laws hold, i.e. for all a, b ∈ F and u, v ∈ V we have
a(u + v) = au + av and (a + b)v = av + bv.
In this case the elements of V are called vectors and the elements of F are
called scalars.
We must take care with notation of vector spaces. Notice that the symbol +
can denote either of two distinct operations—addition of vectors or addition
of scalars. Similarly, multiplication can take place either between two scalars
(yielding again a scalar) or else between a scalar and a vector (yielding a vector).
Furthermore, the symbol 0 can denote either of two distinct elements: the
additive identity in F or else the zero vector in V . Usually, though, these
distinctions are made clear by context.
The simplest vector space is the zero space {0} (over any field). However, one
of the most common examples of a vector space is Euclidean 3-space R3 over
the real numbers R, with operations
(x1 , x2 , x3 ) + (y1 , y2 , y3 ) = (x1 + y1 , x2 + y2 , x3 + y3 )
and

a(x1 , x2 , x3 ) = (ax1 , ax2 , ax3 ).
More generally, for any field F the set
F n = {(a1 , a2 , · · · , an ) : a1 , a2 , · · · , an ∈ F }
is a vector space over F with coordinate-wise operations.

We can generalize further still on this observation. Let Mm×n (F ) denote the
set of all m × n matrices with entries in a field F . Then Mm×n (F ) is a vector
space over F under the obvious entry-wise operations.
Not all vector spaces are quite so boring, though. Consider the following proposition.
Proposition 1.5. Let X be a nonempty set and F a field. Then the set F(X, F )
of functions f : X → F is a vector space over F , with operations defined by

(af )(x) = a(f (x)) and (f + g)(x) = f (x) + g(x).
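Here the zero vector is the constant function taking the value 0 everywhere,
and the additive inverse of f is given by (−f )(x) = −f (x); each of the axioms
(VS1)–(VS6) can then be checked pointwise by applying the corresponding
axiom in the field F .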
If V is a vector space over a field F , it is natural to define a vector
subspace as a subset W of V which is itself a vector space over F with
operations inherited from V . To determine whether a set W ⊆ V is a vector
subspace of V , it suffices to check that W is nonempty and closed under
addition and scalar multiplication.
The spaces V and {0} are trivial subspaces of a vector space V , but one can find
more interesting examples as well. For instance the set F [x] of polynomials in x
with coefficients from F can be viewed as a vector subspace of the set F(F, F )
of all functions f : F → F .
Definition 1.6. Let V be a vector space over a field F , and let S ⊆ V be a
subset (not necessarily a subspace). A linear combination of vectors in S is
an expression of the form
a1 v1 + a2 v2 + · · · + an vn ,
where v1 , v2 , · · · , vn ∈ S have respective coefficients a1 , a2 , · · · , an ∈ F .
In other words, a linear combination in S is any finite sum of scalar multiples
of elements in S. Note that even though linear combinations are always finite,
there is otherwise no limit to how many terms the sum can have.
Definition 1.7. Let S be a subset of a vector space V . If there is a linear
combination of vectors in S whose sum is zero but whose coefficients are not
all zero, then we say that S is linearly dependent. Otherwise S is said to be
linearly independent.
For example, let V = R3 and F = R. Then the set
S = {(1, 0, 0), (0, 1, 0), (1, 1, 0)}
is linearly dependent, because
(1, 0, 0) + (0, 1, 0) − (1, 1, 0) = 0.
However, the set
B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
is linearly independent since

a1 (1, 0, 0) + a2 (0, 1, 0) + a3 (0, 0, 1) = (a1 , a2 , a3 )
is zero iff a1 = a2 = a3 = 0.
As it turns out, the set B has a very important property: If we add any vector to
this set, then the result is a linearly dependent set. In this sense, B is maximally
linearly independent. Such sets are very special, but to see why we must first
develop some additional machinery.
Definition 1.8. Let S be a nonempty subset of a vector space V . Define the
span of S, written span S, as the set of all linear combinations of vectors in S.
By convention we let span ∅ = {0}.
It’s easy to see that span S is a vector subspace of V . In case span S = V , we
say that S generates V . If furthermore S is linearly independent, we call it a
basis for V .
Now we can see what was so special about B. Notice that any vector (x1 , x2 , x3 ) ∈
R3 can be written as
x1 (1, 0, 0) + x2 (0, 1, 0) + x3 (0, 0, 1),
a linear combination of vectors in B. Since B is linearly independent, that
means it is a basis for the vector space R3 .
Proposition 1.9. Every vector space has a basis.
Proof: Let S be the collection of all linearly independent subsets of a vector
space V . Notice that ∅ ∈ S, so that S is a nonempty set partially ordered by
inclusion. Let C be a chain in S. We claim that ⋃C = {v : v ∈ C for some
C ∈ C} is linearly independent. For let v1 , · · · , vn ∈ ⋃C. Then there are
C1 , · · · , Cn ∈ C containing v1 , · · · , vn , respectively. Since C is a chain, there
is k ∈ {1, · · · , n} such that C1 , · · · , Cn ⊆ Ck and hence v1 , · · · , vn ∈ Ck .
Since Ck is linearly independent, no linear combination of v1 , · · · , vn with
coefficients not all zero sums to zero. Since v1 , · · · , vn was an arbitrary finite
collection of vectors in ⋃C, it follows that ⋃C is linearly independent. Thus
every chain in S has an upper bound in S, which by Zorn's lemma means S has
a maximal element. It's easy to see that this maximal element is a basis for V .
Proposition 1.10. Let B1 and B2 be bases for a vector space V . If B1 is
finite, then so is B2 , and furthermore B1 and B2 both contain the same number
of vectors. Otherwise B1 and B2 are both infinite.
These propositions together permit us to define the dimension of a vector space
V with a finite basis as the number, say n, of vectors in that basis; in that case
we write dim V = n. If instead the bases of V are infinite, then we say that V
has infinite dimension, and write dim V = ∞.
Proposition 1.11. Let F be a field and let n ∈ Z+ be a positive integer. Define
ek as the vector in F n whose jth coordinate is zero for every j ≠ k, but
whose kth coordinate is 1. Then the set
{e1 , e2 , · · · , en }
is a basis for F n , called the canonical basis. In particular, dim F n = n.
Of course, not all vector spaces have finite dimension. For instance Q[x] is
infinite-dimensional with a countable basis.
As with so many other algebraic structures, we may carry over the notion of
homomorphisms to vector spaces. Let φ : V → W be a map between vector
spaces over the same field F . If for all u, v ∈ V and scalars a ∈ F we have

φ(u + v) = φ(u) + φ(v) and φ(av) = aφ(v),
then we say that φ is a homomorphism, or, more commonly, a linear map.
If furthermore φ is bijective, then it is a vector space isomorphism. In that
case we say that V and W are isomorphic, and write
V ≅ W.
Also, if φ : V → W is a vector space isomorphism then its inverse map
φ^{-1} : W → V exists and is itself a vector space isomorphism.
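For example, the projection map φ : R3 → R2 given by
φ(x1 , x2 , x3 ) = (x1 , x2 )
is a linear map, since addition and scalar multiplication in both spaces are
coordinate-wise; it is surjective but not injective, so it is not an isomorphism.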


Similar to quotient groups in group theory, we can also define quotient vector
spaces. Let W be a subspace of a vector space V over a field F . Define the
cosets of W as
[v] := v + W := {v + w : w ∈ W }.
Then the operations

[u] + [v] = [u + v] and a[v] = [av]
are well defined so that
V /W := {[v] : v ∈ V }
is a vector space over F . This vector space is called the quotient vector space
of V by W .
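To check that addition of cosets is well defined, suppose [u] = [u′ ] and
[v] = [v ′ ], i.e. u − u′ ∈ W and v − v ′ ∈ W . Then
(u + v) − (u′ + v ′ ) = (u − u′ ) + (v − v ′ ) ∈ W,
so [u + v] = [u′ + v ′ ]. Scalar multiplication is similar, since a(v − v ′ ) ∈ W .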


Exercises:
1. Let F and G be fields with two elements each. Show that F and G are
isomorphic. [Hint: Every field contains distinct elements 0 and 1.]
2. Find a pair p(x), q(x) ∈ (Z/2Z)[x] of polynomials in x with coefficients from
Z/2Z = {0, 1} for which p = q as functions but p(x) ≠ q(x) as polynomials.
3. Show that
V = { ( a b ; 0 0 ) : a, b ∈ R }
(the set of 2 × 2 matrices whose second row is zero) is a vector space over R,
and find a basis for V . What is its dimension? [Hint: You may use the fact
that M2×2 (R) is a vector space over R.]
4. Let
S = {(1, 1, 0), (2, 1, 0)}.
Then span S is a subspace of R3 . Show that
V = {(a, 0, 0) : a ∈ R}
is a subspace of span S, and that dim V = 1.
5. Let W be a subspace of a vector space V , and define π : V → V /W by
π(v) = [v].
(a) Show that π is a linear map.
(b) Show that the quotient space V /{0} is isomorphic to V .
(c) If B_V is a basis for V , show that
B_{V/W} := {[u] : u ∈ B_V }
spans V /W , i.e. show that span B_{V/W} = V /W .
(d) Show that dim V = dim(V /W ) + dim W .
[Hint: For part (d), consider separately the cases where V is finite- and
infinite-dimensional.]

6. Recall that the set Q[x] of polynomials in x with rational coefficients forms
a vector space over Q.
(a) Describe the subspace span{1, x, x2 }.
(b) Describe the subspace span{x, x2 }.
(c) Prove that Q[x] has infinite dimension by finding an infinite linearly independent subset.
7. Let W1 and W2 be subspaces of a vector space V .
(a) Show that W1 ∩ W2 is a subspace of V .
(b) Show that dim(W1 ∩ W2 ) ≤ dim W1 .
8. Let W1 and W2 be subspaces of a vector space V , and define the sum of W1
and W2 to be
W1 + W2 := {w1 + w2 : w1 ∈ W1 , w2 ∈ W2 }.
Show that W1 + W2 = span(W1 ∪ W2 ).
9. Prove Proposition 1.10.
