Vector space

In mathematics, a vector space (or linear space) is a collection of objects (called vectors) that, informally speaking, may be scaled and added. More formally, a vector space is a set on which two operations, called (vector) addition and (scalar) multiplication, are defined and satisfy certain natural axioms which are listed below. Vector spaces are the basic objects of study in linear algebra, and are used throughout mathematics, science, and engineering.

The most familiar vector spaces are two- and three-dimensional Euclidean spaces. Vectors in these spaces can be represented by ordered pairs or triples of real numbers, and are isomorphic to geometric vectors—quantities with a magnitude and a direction, usually depicted as arrows. These vectors may be added together using the parallelogram rule (vector addition) or multiplied by real numbers (scalar multiplication). The behavior of geometric vectors under these operations provides a good intuitive model for the behavior of vectors in more abstract vector spaces, which need not have a geometric interpretation. For example, the set of (real) polynomials forms a vector space.

More abstract examples of what can constitute a vector space appear later in this article, for instance in the sections on examples, basic constructions and generalizations.

Motivation and definition

The space R2, consisting of pairs of real numbers (x, y), is a common example of a vector space. It qualifies as one because any two pairs (here called vectors) can be added:

(x1, y1) + (x2, y2) = (x1 + x2, y1 + y2),
and any vector (x, y) can be multiplied by a real number s to yield another vector (sx, sy). The general vector space notion is a generalization of this idea. It is more general in several ways:

  • other fields instead of the real numbers, such as complex numbers or finite fields, are allowed.
  • the dimension, which is two above, is arbitrary.
  • most importantly, elements of vector spaces are not usually expressed as linear combinations of one particular set of vectors, i.e. there is no preferred way of writing the vector (x, y) as

(x, y) = x · (1, 0) + y · (0, 1)
over
(x, y) = (−1/3·x + 2/3·y) · (−1, 1) + (1/3·x + 1/3·y) · (2, 1)
The pairs of vectors {(1, 0), (0, 1)} and {(−1, 1), (2, 1)} are called bases of R2 (see below).
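
As an informal numeric check of the two representations above (a sketch in Python with NumPy; the particular vector (x, y) = (4, 5) is an arbitrary choice for illustration):

    import numpy as np

    x, y = 4.0, 5.0                         # an arbitrary sample vector (x, y)

    # representation in the basis (1, 0), (0, 1)
    v_standard = x * np.array([1.0, 0.0]) + y * np.array([0.0, 1.0])

    # representation in the basis (-1, 1), (2, 1)
    c1 = -x / 3 + 2 * y / 3                 # coefficient of (-1, 1)
    c2 = x / 3 + y / 3                      # coefficient of (2, 1)
    v_other = c1 * np.array([-1.0, 1.0]) + c2 * np.array([2.0, 1.0])

    print(np.allclose(v_standard, [x, y]))  # True
    print(np.allclose(v_other, [x, y]))     # True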

Definition

Let F be a field (such as the rationals, reals or complex numbers), whose elements will be called scalars. A vector space over the field F is a set V together with two binary operations: vector addition V × V → V, written v + w, and scalar multiplication F × V → V, written a v,

satisfying the axioms below. Let u, v, w be arbitrary elements of V, and let a, b be arbitrary elements of F.

  • Associativity of addition: u + (v + w) = (u + v) + w
  • Commutativity of addition: v + w = w + v
  • Identity element of addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
  • Inverse elements of addition: for all v ∈ V, there exists an element w ∈ V, called the additive inverse of v, such that v + w = 0.
  • Distributivity of scalar multiplication with respect to vector addition: a (v + w) = a v + a w
  • Distributivity of scalar multiplication with respect to field addition: (a + b) v = a v + b v
  • Compatibility of scalar multiplication with field multiplication: a (b v) = (ab) v
  • Identity element of scalar multiplication: 1 v = v, where 1 denotes the multiplicative identity in F
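
The axioms can be spot-checked (though of course not proven) for R2 on a few sample vectors and scalars; the following is a minimal sketch in Python with NumPy, with arbitrarily chosen values:

    import numpy as np

    u, v, w = np.array([1., 2.]), np.array([3., -1.]), np.array([0.5, 4.])
    a, b = 2.0, -3.0
    zero = np.array([0., 0.])

    checks = [
        np.allclose(u + (v + w), (u + v) + w),    # associativity of addition
        np.allclose(v + w, w + v),                # commutativity of addition
        np.allclose(v + zero, v),                 # identity element of addition
        np.allclose(v + (-v), zero),              # additive inverses
        np.allclose(a * (v + w), a * v + a * w),  # distributivity over vector addition
        np.allclose((a + b) * v, a * v + b * v),  # distributivity over field addition
        np.allclose(a * (b * v), (a * b) * v),    # compatibility of multiplications
        np.allclose(1.0 * v, v),                  # identity of scalar multiplication
    ]
    print(all(checks))  # True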

Elementary remarks

The first four axioms can be subsumed by requiring the set of vectors to be an abelian group under addition, and the rest are equivalent to a ring homomorphism f from the field into the endomorphism ring of the group of vectors. Then scalar multiplication a v is defined as (f(a))(v). This can be seen as the starting point of defining vector spaces without referring to a field.

Some sources choose to also include two axioms of closure, u + v ∈ V and a v ∈ V for all a, u, and v. When the operations are interpreted as maps with codomain V, these closure axioms hold by definition, and do not need to be stated independently. Closure, however, must be checked to determine whether a subset of a vector space is a subspace.

Expressions of the form “v a”, where v ∈ V and a ∈ F, are, strictly speaking, not defined. Because of the commutativity of the underlying field, however, “a v” and “v a” are often treated synonymously. Additionally, if v ∈ V, w ∈ V, and a ∈ F, and the vector space V is additionally an algebra over the field F, then a v w = v a w, which makes it convenient to consider “a v” and “v a” to represent the same vector.

There are a number of properties that follow easily from the vector space axioms. Some of them derive from elementary group theory, applied to the (additive) group of vectors: for example the zero vector 0V and the additive inverse −v of a vector v are unique. Other properties can be derived from the distributive law, for example scalar multiplication by zero yields the zero vector and no other scalar multiplication yields the zero vector.

History

The notion of a vector space stems conceptually from affine geometry, via the introduction of coordinates in the plane or in ordinary three-dimensional space. Around 1636, the French mathematicians Descartes and Fermat founded analytic geometry by tying the solutions of an equation in two variables to the determination of a plane curve.

To arrive at geometric solutions without using coordinates, Bernhard Bolzano introduced in 1804 certain operations on points, lines and planes, which are predecessors of vectors. This work was taken up in the concept of barycentric coordinates introduced by August Ferdinand Möbius in 1827. The founding step in the definition of vectors was Bellavitis' notion of the bipoint, an oriented segment one of whose ends is the origin and the other one a target.

The notion of vector was reconsidered with the presentation of complex numbers by Jean-Robert Argand and William Rowan Hamilton, and the inception of quaternions by the latter, these being elements of R2 and R4, respectively. Treating them via linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations.

In 1857, Cayley introduced the matrix notation which allows one to harmonize and simplify the writing of linear maps between vector spaces.

At the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations. His work goes beyond the framework of vector spaces, since his introduction of multiplication led him to the concept of algebras. Nonetheless, the concepts of dimension and linear independence are present, as well as the scalar product (1844). The priority of these discoveries was disputed with Cauchy's publication Sur les clefs algébriques.

The Italian mathematician Peano, one of whose important contributions was the rigorous axiomatisation of extant concepts, in particular the construction of sets, was one of the first to give the modern definition of vector spaces, around the end of the 19th century.

An important development of this concept is due to the construction of function spaces by Henri Lebesgue. This was later formalized by David Hilbert and by Stefan Banach, in the latter's 1920 PhD thesis.

At this time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces. Also at this time, the first studies concerning infinite dimensional vector spaces were done.

Linear maps and matrices

Two given vector spaces V and W (over the same field F) can be related by linear maps (also called linear transformations) from V to W. These are functions that are compatible with the relevant structure—i.e., they preserve sums and scalar products:
f(v + w) = f(v) + f(w) and f(a · v) = a · f(v).

An isomorphism is a linear map f : V → W for which there exists an inverse map g : W → V such that the two possible compositions f ∘ g and g ∘ f are identity maps. Equivalently, f is both one-to-one (injective) and onto (surjective). If there exists an isomorphism between V and W, the two spaces are said to be isomorphic; they are then essentially identical as vector spaces, since all identities holding in V are, via f, transported to similar ones in W, and vice versa via g.

Given any two vector spaces V and W, the set of linear maps V → W forms a vector space, denoted HomF(V, W) (also L(V, W)): two such maps f and g are added by adding them pointwise, i.e.

(f + g)(v) = f(v) + g(v)
and scalar multiplication is given by
(a·f)(v) = a·f(v).

The case of W = F, the base field, is of particular interest. The space of linear maps from V to F is called the dual vector space, denoted V∗.

Matrices

Matrices are a useful notion to encode linear maps. They are written as a rectangular array of scalars, i.e. elements of some field F. Any m-by-n matrix A = (aij) gives rise to a linear map from Fn, the vector space consisting of n-tuples x = (x1, ..., xn), to Fm, given by

(x_1, x_2, \ldots, x_n) \mapsto \left( \sum_{j=1}^n a_{1j} x_j, \sum_{j=1}^n a_{2j} x_j, \ldots, \sum_{j=1}^n a_{mj} x_j \right),
or, using the matrix multiplication of the matrix A with the coordinate vector x:
\mathbf{x} \mapsto A \mathbf{x}.
Moreover, after choosing bases of V and W (see below), any linear map is uniquely represented by a matrix via this assignment.

The determinant of a square matrix tells whether the associated map is an isomorphism or not: to be so it is sufficient and necessary that the determinant is nonzero.
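
The following sketch (Python with NumPy; the particular matrix is an arbitrary example) illustrates both points: the map x ↦ Ax preserves sums and scalar multiples, and a nonzero determinant signals that it is an isomorphism:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 3.]])                # an arbitrary 2-by-2 example
    f = lambda x: A @ x                     # the associated linear map F^2 -> F^2

    x, y, a = np.array([1., -2.]), np.array([0.5, 4.]), 3.0
    print(np.allclose(f(x + y), f(x) + f(y)))   # preserves sums
    print(np.allclose(f(a * x), a * f(x)))      # preserves scalar multiples

    print(np.linalg.det(A))                     # ~5, nonzero, so the map is an isomorphism
    A_inv = np.linalg.inv(A)                    # the inverse map exists
    print(np.allclose(A_inv @ f(x), x))         # True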

Eigenvalues and eigenvectors

A particularly important case are endomorphisms, i.e. linear maps f : V → V. In this case, vectors v can be compared with their image under f, f(v). Any nonzero vector v satisfying λ · v = f(v), where λ is a scalar, is called an eigenvector of f with eigenvalue λ. Rephrased, this means that v is an element of the kernel of the difference f − λ · Id (where Id is the identity map V → V). In the finite-dimensional case, this can be rephrased using determinants: f having eigenvalue λ is equivalent to
det (f − λ · Id) = 0.
Spelling out the definition of the determinant, the left-hand side turns out to be a polynomial function in λ, called the characteristic polynomial of f. If the field F is large enough to contain a zero of this polynomial (which automatically happens for F algebraically closed, such as F = C), any linear map has at least one eigenvector. The vector space V may or may not be spanned by eigenvectors, a phenomenon governed by the Jordan–Chevalley decomposition.
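
A small numerical illustration (Python with NumPy; the matrix is an arbitrary example): eigenvalues are zeros of det(f − λ · Id), and each eigenvector v satisfies f(v) = λ · v:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])                        # arbitrary symmetric example
    eigvals, eigvecs = np.linalg.eig(A)             # eigenvalues 3 and 1 (order may vary)
    print(eigvals)

    lam, v = eigvals[0], eigvecs[:, 0]
    print(np.allclose(A @ v, lam * v))                          # A v = lambda v
    print(np.isclose(np.linalg.det(A - lam * np.eye(2)), 0))    # det(A - lambda Id) = 0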

Subspaces and quotient spaces

In general, a nonempty subset W of a vector space V that is closed under addition and scalar multiplication is called a subspace of V. Subspaces of V are vector spaces (over the same field) in their own right. The intersection of all subspaces containing a given set of vectors is called its span. Expressed in terms of elements, the span is the subspace consisting of finite sums (called linear combinations)

a1v1 + a2v2 + ... + anvn,
where the ai and vi (i = 1, ..., n) are scalars and vectors, respectively.

The counterpart to subspaces are quotient vector spaces. Given any subspace W ⊆ V, the quotient space V/W ("V modulo W") is defined as follows: as a set, it consists of v + W = {v + w : w ∈ W}, where v is an arbitrary vector in V. The sum of two such elements v1 + W and v2 + W is (v1 + v2) + W, and scalar multiplication is given by a · (v + W) = (a · v) + W. The key point in this definition is that v1 + W = v2 + W if and only if the difference of v1 and v2 lies in W. This way, the quotient space "forgets" information that is contained in the subspace W.

For any linear map f : V → W, the kernel ker(f) consists of the elements v that are mapped to 0 in W. The kernel and the image im(f) = {f(v) : v ∈ V} are linear subspaces of V and W, respectively. There is a fundamental isomorphism

V / ker(f) ≅ im(f).
The existence of kernels and images as above is part of the statement that the category of vector spaces (over a fixed field F) is an abelian category.
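
In finite dimensions, the isomorphism above yields the rank–nullity relation dim V = dim ker(f) + dim im(f); a quick check with SymPy (the matrix is an arbitrary example):

    import sympy as sp

    A = sp.Matrix([[1, 2, 3],
                   [2, 4, 6]])                 # arbitrary 2-by-3 example of rank 1
    kernel_basis = A.nullspace()               # basis of ker(f)
    rank = A.rank()                            # dim im(f)

    print(len(kernel_basis))                   # 2  (dimension of the kernel)
    print(rank)                                # 1  (dimension of the image)
    print(len(kernel_basis) + rank == A.cols)  # True: dim ker + dim im = dim V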

Examples of vector spaces

Coordinate spaces and function spaces

The first example of a vector space over a field F is the field itself, equipped with its standard addition and multiplication. This is the particular case n = 1 of the vector space usually denoted Fn (n a positive integer), known as the coordinate space. Its elements are n-tuples
(f1, f2, ..., fn), where the fi are elements of F.
Infinite coordinate sequences, and more generally functions from any fixed set Ω to a field F, also form vector spaces. The latter applies in particular to common geometric situations, such as Ω being the real line or an interval, open subsets of Rn, etc. Vector spaces of this type are called function spaces. Many notions in topology and analysis, such as continuity, integrability or differentiability, are well-behaved with respect to linearity: sums and scalar multiples of functions possessing such a property still have that property. Hence, sets of such functions are vector spaces. The methods of functional analysis provide finer information about these spaces, see below. The vector space F[x] is given by polynomial functions, i.e.
f (x) = rnxn + rn−1xn−1 + ... + r1x + r0, where the coefficients r0, ..., rn are in F,
or power series, which are similar, except that infinitely many terms are allowed.

Systems of linear equations

Systems of linear equations also lead to vector spaces. Indeed this source may be seen as one of the historical reasons for developing this notion. For example, the solutions of
a + 3b + c = 0
4a + 2b + 2c = 0
given by triples with arbitrary a, b = a/2, and c = −5a/2 form a vector space. In matrix notation, this can be interpreted as the solution of the equation
Ax = 0,
where x is the vector (a, b, c) and A is the matrix
\begin{bmatrix} 1 & 3 & 1 \\ 4 & 2 & 2 \end{bmatrix}.
Equivalently, this solution space is the kernel of the linear map attached to A (see above).
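
The solution space of this particular system can be computed symbolically; a sketch with SymPy (the rescaling at the end merely matches the parametrization used above):

    import sympy as sp

    A = sp.Matrix([[1, 3, 1],
                   [4, 2, 2]])
    basis = A.nullspace()                   # the kernel is one-dimensional
    v = basis[0]
    print(v.T)                              # Matrix([[-2/5, -1/5, 1]])

    # rescaling so that the first entry is a recovers (a, a/2, -5a/2):
    a = sp.symbols('a')
    print(sp.simplify((a / v[0]) * v.T))    # Matrix([[a, a/2, -5*a/2]])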

In a similar vein, the solutions of homogeneous linear differential equations, for example

f ''(x) + 2f '(x) + f (x) = 0
also form vector spaces: since the derivatives of the sought function f appear linearly (as opposed to f ''(x)2, for example) and (f + g)' = f ' + g ', any linear combination of solutions is still a solution. In this particular case the solutions are given by f (x) = a e−x + b x e−x, where a and b are arbitrary constants and e = 2.718... is Euler's number.
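
A quick symbolic check of this solution family with SymPy (the use of SymPy here is an illustrative choice, not part of the original text):

    import sympy as sp

    x, a, b = sp.symbols('x a b')
    f = (a + b * x) * sp.exp(-x)                      # the general solution a e^{-x} + b x e^{-x}
    residual = sp.diff(f, x, 2) + 2 * sp.diff(f, x) + f
    print(sp.simplify(residual))                      # 0, so f solves the equation

    # dsolve recovers the same two-parameter family:
    y = sp.Function('y')
    print(sp.dsolve(y(x).diff(x, 2) + 2 * y(x).diff(x) + y(x), y(x)))
    # Eq(y(x), (C1 + C2*x)*exp(-x))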

Algebraic number theory

A common situation in algebraic number theory is a field F containing a smaller field E. Then, by the given multiplication and addition operations of F, F becomes an E-vector space; F is then also called a field extension of E. For example, the complex numbers C are a vector space over R. Another example is Q(z), the smallest field containing the rationals and some complex number z. The dimension of this vector space (see below) is closely tied to whether z is algebraic or transcendental.

Basic constructions

In addition to the above concrete examples, there are a number of standard linear algebraic constructions that yield vector spaces related to given ones. In addition to the concrete definitions given below, they are also characterized by universal properties, which determine an object X by specifying the linear maps from X to any other vector space.

Direct product and direct sum

The direct product \prod_{i \in I} V_i of a family of vector spaces Vi, where i runs through some index set I, consists of tuples (vi)i ∈ I, i.e. for any index i, one element vi of Vi is given. Addition and scalar multiplication are performed componentwise:
(vi) + (wi) = (vi + wi),
a · (vi) = (a · vi).
A variant of this construction is the direct sum \oplus_{i \in I} V_i (also called the coproduct and denoted \coprod_{i \in I} V_i), where only tuples with finitely many nonzero vectors are allowed. If the index set I is finite, the two constructions agree, but they differ otherwise.

Tensor product

The tensor product V ⊗F W, or simply V ⊗ W, is a vector space consisting of finite (formal) sums of symbols
v1 ⊗ w1 + v2 ⊗ w2 + ... + vn ⊗ wn,
subject to certain rules mimicking bilinearity, such as
a · (v ⊗ w) = (a · v) ⊗ w = v ⊗ (a · w).
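
For coordinate spaces, simple tensors v ⊗ w can be represented by outer products of coordinate vectors; a minimal NumPy sketch of the bilinearity rule above (the vectors and the scalar are arbitrary examples):

    import numpy as np

    v = np.array([1., 2.])
    w = np.array([3., -1., 4.])
    a = 5.0

    vw = np.outer(v, w)                              # a representation of the simple tensor v ⊗ w
    print(np.allclose(a * vw, np.outer(a * v, w)))   # a·(v⊗w) = (a·v)⊗w
    print(np.allclose(a * vw, np.outer(v, a * w)))   # a·(v⊗w) = v⊗(a·w)

    # v1⊗v2 and v2⊗v1 are in general different tensors:
    v1, v2 = np.array([1., 0.]), np.array([0., 1.])
    print(np.allclose(np.outer(v1, v2), np.outer(v2, v1)))  # False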

The tensor product—one of the central notions of multilinear algebra—can be seen as the extension of the hierarchy of scalars, vectors and matrices. Via the fundamental isomorphism

HomF(V, W) ≅ V∗ ⊗F W (for finite-dimensional V),
matrices, which are essentially the same as linear maps and hence contained in the left-hand side, translate into an element of the tensor product of the dual of V with W.

In the important case V ⊗ V, the tensor product can be loosely thought of as adjoining formal "products" of vectors (which, ad hoc, don't exist in vector spaces). In general, there are no relations between the two tensors v1 ⊗ v2 and v2 ⊗ v1. Forcing two such elements to be equal leads to the symmetric algebra, whereas forcing v1 ⊗ v2 = − v2 ⊗ v1 yields the exterior algebra. The latter is the linear algebraic foundation of differential forms: they are elements of the exterior algebra of the cotangent space of a manifold. Tensors, i.e. elements of some tensor product, have various applications: for example the Riemann curvature tensor encodes all curvatures of a manifold at once, which finds applications in general relativity, where the Einstein curvature tensor describes the curvature of space-time.

Bases and dimension

If, in a (finite or infinite) set {vi}i ∈ I, no vector can be removed without changing the span, the set is said to be linearly independent. Equivalently, an equation

0 = a1vi1 + a2vi2 + ... + anvin
can only hold if all scalars a1, ..., an equal zero.

A linearly independent set whose span is V is called a basis for V. Hence, every element can be expressed as a finite sum of basis elements, and any such representation is unique (once a basis is chosen). Vector spaces are sometimes introduced from this coordinatised viewpoint.
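
For finitely many coordinate vectors, linear independence can be tested via the rank of the matrix having them as columns; a quick NumPy check, using the two bases of R2 from the motivation section and one dependent set as examples:

    import numpy as np

    B1 = np.column_stack(([1., 0.], [0., 1.]))    # the standard basis of R^2
    B2 = np.column_stack(([-1., 1.], [2., 1.]))   # the other basis mentioned above
    D = np.column_stack(([1., 2.], [2., 4.]))     # a linearly dependent set

    print(np.linalg.matrix_rank(B1))   # 2: independent, hence a basis of R^2
    print(np.linalg.matrix_rank(B2))   # 2: independent, hence a basis of R^2
    print(np.linalg.matrix_rank(D))    # 1: dependent, not a basis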

Using Zorn’s Lemma (which is equivalent to the axiom of choice), it can be proven that every vector space has a basis. It follows from the ultrafilter lemma, which is weaker than the axiom of choice, that all bases of a given vector space have the same cardinality. This cardinality is called the dimension of the vector space. Historically, the existence of bases was first shown by Felix Hausdorff. It is known that, given the rest of the axioms, this statement is in fact equivalent to the axiom of choice.

For example, the dimension of the coordinate space Fn is n, since any element (x1, x2, ..., xn) of this space can be uniquely expressed as a linear combination of the n vectors e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ..., 0), up to en = (0, 0, ..., 0, 1), namely the sum

\sum_{i=1}^n x_i \mathbf{e}_i.

By the uniqueness of the decomposition of any element into a linear combination of chosen basis elements vi, linear maps f are completely determined by specifying the images f(vi). Given two vector spaces V and W of the same dimension, a choice of bases of V and W together with a bijection between these bases gives rise to the map sending any basis element of V to the corresponding basis element of W. This map is, by its very definition, an isomorphism. Therefore, vector spaces over a given field are determined up to isomorphism by their dimension. In particular, any n-dimensional vector space over F is isomorphic to Fn.

Vector spaces with additional structures

From the point of view of linear algebra, vector spaces are completely understood insofar as any vector space is characterized, up to isomorphism, by its dimension. However, the needs of functional analysis require considering additional structures, especially with respect to convergence of infinite series. Moreover, the notion of a basis as explained above can be difficult to apply to infinite-dimensional spaces, also calling for an adapted approach.

Therefore, it is common to study vector spaces with certain additional structures. This is often necessary to recover ordinary notions from geometry or analysis.

Topological vector spaces

Convergence issues are addressed by considering vector spaces V that also carry a compatible topology, i.e. a structure that allows one to talk about elements being close to each other. Compatible here means that addition and scalar multiplication should be continuous maps: if x and y in V, and a in F, vary by a bounded amount (the field also has to carry a topology in this setting), then so do x + y and a x.

Only in such topological vector spaces can one consider infinite sums of vectors, i.e. series, through the notion of convergence. For example, the term

\sum_{i=0}^{\infty} f_i,
where the fi are elements of a given vector space of real or complex functions, means the limit of the corresponding finite sums of functions.

A way of ensuring the existence of limits of infinite series as above is to restrict attention to complete vector spaces, i.e. spaces in which any Cauchy sequence (which can be thought of as a sequence that "should" possess a limit) does have a limit. Roughly, completeness means the absence of holes. For example, the rationals are not complete, since there are sequences of rational numbers converging to irrational numbers such as √2. A less immediate example is provided by spaces of functions equipped with the Riemann integral.

In the realm of topological vector spaces, such as Banach and Hilbert spaces, all notions should be coherent with the topology. For example, instead of considering all linear maps V → W, it is useful to require maps to be continuous. In particular, the dual space V∗ then consists of continuous functionals V → R (or C). If V is some vector space of (well-behaved) functions, this dual space, called the space of distributions, whose elements can be thought of as generalized functions, finds applications in solving differential equations. Applying the dual construction twice yields the bidual V∗∗. There is always a natural, injective map V → V∗∗. This map may or may not be an isomorphism; if it is, V is called reflexive.

Banach spaces

Banach spaces, named in honor of Stefan Banach, are complete normed vector spaces, i.e. vector spaces whose topology comes from a norm, a datum that allows one to measure the lengths of vectors.

A common example is the vector space ℓp consisting of infinite vectors with real entries x = (x1, x2, ...) whose p-norm (1 ≤ p ≤ ∞), given by

\|\mathbf{x}\|_p := \left( \sum_i |x_i|^p \right)^{1/p} for p < ∞ and \|\mathbf{x}\|_\infty := \sup_i |x_i|,
is finite. In the case of finitely many entries, i.e. Rn, the topology does not yield additional insight—in fact, all topologies on finite-dimensional topological vector spaces are equivalent, i.e. give rise to the same notion of convergence. In the infinite-dimensional situation, however, the topologies for different p are inequivalent. E.g. the sequence xn of vectors
x_n = (2^{-n}, 2^{-n}, ..., 2^{-n}, 0, 0, ...)—the first 2^n components are 2^{-n}, the following ones are 0
yields
\|x_n\|_1 = \sum_{i=1}^{2^n} 2^{-n} = 1 and \|x_n\|_\infty = \sup(2^{-n}, 0) = 2^{-n},
i.e. the sequence xn, with n tending to ∞, converges to the zero vector for p = ∞, but does not for p = 1. This is an example of the remark that the study of topological vector spaces is richer than that of vector spaces without additional data.
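
A numerical illustration of this example (Python with NumPy; only a few values of n are sampled):

    import numpy as np

    for n in (1, 4, 8, 12):
        x_n = np.full(2**n, 2.0**-n)               # first 2^n entries equal 2^{-n}, the rest are 0
        print(n,
              np.linalg.norm(x_n, ord=1),          # stays equal to 1
              np.linalg.norm(x_n, ord=np.inf))     # tends to 0 as n grows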

More generally, it is possible to consider functions endowed with a norm that replaces the sum in the above p-norm by an integral, specifically the Lebesgue integral

\|f\|_p := \left( \int |f(x)|^p \, dx \right)^{1/p}.
The set of integrable functions on a given domain Ω (for example an interval) satisfying |f |p < ∞, and equipped with this norm is denoted Lp(Ω).

Since the above uses the Lebesgue integral (as opposed to the Riemann integral), these spaces are complete. Concretely this means that for any sequence of functions satisfying the condition

\lim_{k, n \to \infty} \int_\Omega |f_k(x) - f_n(x)|^p \, dx = 0,
there exists a function f(x) belonging to the vector space Lp(Ω) such that
\lim_{k \to \infty} \int_\Omega |f(x) - f_k(x)|^p \, dx = 0.

Imposing boundedness conditions not only on the function, but also on its derivatives leads to Sobolev spaces.

Hilbert spaces

Slightly more special, but equally crucial to functional analysis, is the case where the topology is induced by an inner product, which allows one to measure angles between vectors. This entails that lengths of vectors can be defined too, namely by \|\mathbf{v}\| = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}. If such a space is complete, it is called a Hilbert space, in honor of David Hilbert.

A key case is the Hilbert space L2(Ω), whose inner product is given by

\langle f \mid g \rangle = \int_\Omega \overline{f(x)} g(x) \, dx, with \overline{f(x)} being the complex conjugate of f(x).

Reversing this direction of thought, i.e. finding a sequence of functions fn that approximate a given function, is equally crucial. Early analysis, in the guise of the Taylor approximation, established an approximation of differentiable functions f by polynomials. Ad hoc, this technique is local: approximating f closely at some point x may not approximate the function globally. The Stone–Weierstrass theorem, however, states that every continuous function on [a, b] can be approximated as closely as desired by a polynomial. More generally, and more conceptually, the theorem yields a simple description of which "basic functions" suffice to generate a Hilbert space, in the sense that the closure of their span (i.e. finite sums and limits thereof) is the whole space. For distinction, a basis in the linear algebraic sense as above is then called a Hamel basis. Not only does the theorem exhibit polynomials as sufficient for approximation purposes; together with the Gram–Schmidt process it also allows the construction of a basis of orthogonal polynomials. Orthogonality means that \langle p \mid q \rangle = 0, i.e. the polynomials obtained don't interfere with each other. Similar statements hold for other orthogonal families, such as Legendre polynomials, Bessel functions and hypergeometric functions.
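
A small numerical sketch of the Gram–Schmidt construction (Python with NumPy, with the inner product ⟨p, q⟩ = ∫ p(x) q(x) dx over [−1, 1] approximated by a Riemann sum; grid size and tolerances are arbitrary choices): starting from 1, x, x2, it yields mutually orthogonal polynomials, scalar multiples of the first Legendre polynomials:

    import numpy as np

    x = np.linspace(-1.0, 1.0, 20001)
    dx = x[1] - x[0]
    inner = lambda p, q: np.sum(p * q) * dx          # approximate integral over [-1, 1]

    monomials = [np.ones_like(x), x, x**2]           # 1, x, x^2
    ortho = []
    for p in monomials:                              # classical Gram-Schmidt
        for q in ortho:
            p = p - inner(p, q) / inner(q, q) * q
        ortho.append(p)

    # pairwise inner products of distinct members are (numerically) zero
    print(inner(ortho[0], ortho[1]), inner(ortho[0], ortho[2]), inner(ortho[1], ortho[2]))
    # the third polynomial is proportional to the Legendre polynomial (3x^2 - 1)/2
    print(np.allclose(ortho[2], x**2 - 1/3, atol=1e-3))   # True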

Resolving general functions into sums of trigonometric functions is known as the Fourier expansion, a technique much used in engineering. It is possible to describe any (sufficiently well-behaved) function f(x) on a bounded, closed interval (or equivalently, any periodic function) as the limit of the following sum

f_N(x) = \frac{a_0}{2} + \sum_{m=1}^{N} \left[ a_m \cos(mx) + b_m \sin(mx) \right] as N → ∞, with suitable coefficients am and bm, called Fourier coefficients. This expansion is surprising insofar as countably many functions, namely the rational multiples of sin(mx) and cos(mx), where m takes values in the integers, are enough to express any other such function, of which there are uncountably many.
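
As a numerical illustration (Python with NumPy; the function f(x) = x on [−π, π] and the truncation N = 50 are arbitrary choices), the Fourier coefficients can be approximated by Riemann sums and the partial sum fN compared with f:

    import numpy as np

    x = np.linspace(-np.pi, np.pi, 20001)
    dx = x[1] - x[0]
    f = x.copy()                                       # the (periodically extended) function f(x) = x

    N = 50
    a0 = np.sum(f) * dx / np.pi
    f_N = np.full_like(x, a0 / 2)
    for m in range(1, N + 1):
        a_m = np.sum(f * np.cos(m * x)) * dx / np.pi   # Fourier coefficients of f
        b_m = np.sum(f * np.sin(m * x)) * dx / np.pi
        f_N += a_m * np.cos(m * x) + b_m * np.sin(m * x)

    # maximum error away from the jump at ±π; it shrinks as N grows
    interior = np.abs(x) < 2.0
    print(np.max(np.abs(f_N[interior] - f[interior])))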

The solutions to various important differential equations can be interpreted in terms of Hilbert spaces. For example, a great many fields in physics and engineering lead to such equations and frequently solutions with particular physical properties are used as basis functions, often orthogonal, that serve as the axes in a corresponding Hilbert space.

As an example from physics, the time-dependent Schrödinger equation in quantum mechanics describes the change of physical properties in time, by means of a partial differential equation determining a wavefunction. Definite values for physical properties such as energy, or momentum, correspond to eigenvalues of an associated (linear) differential operator and the associated wavefunctions are called eigenstates. The spectral theorem describes the representation of linear operators that act upon functions in terms of these eigenfunctions and their eigenvalues.

Distributions

Distributions (or "generalized functions") are a powerful instrument for solving differential equations, and they go beyond the framework of Hilbert spaces. Concretely, a distribution is a linear map assigning a number to each function in a given vector space. A standard example is given by integrating the function over some domain Ω:
f \mapsto \int_\Omega f(x) \, dx.
The great use of distributions stems from the fact that standard analytic notions such as derivatives can be generalized to distributions. Thus differential equations can be solved in the distributional sense first; this can be accomplished using Green's functions. The solution found can then, in some cases, be proven (e.g. using the Riesz representation theorem) to be an actual function.

Algebras over fields

In general, vector spaces do not possess a multiplication operation. (Finite-dimensional vector spaces over finite fields can, however, be endowed with the structure of a finite field.) A vector space equipped with an additional bilinear operator defining the multiplication of two vectors is an algebra over a field. An important example is the ring of polynomials F[x] in one variable x with coefficients in a field F, or similarly with several variables. In this case the multiplication is both commutative and associative. These rings, and their quotient rings, form the basis of algebraic geometry, because they are rings of functions of algebraic geometric objects.

Another crucial example are Lie algebras, which are neither commutative nor associative, but the failure to be so is constrained by the following identities ([x, y] denotes the product of x and y):

[x, y] = −[y, x] and [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 (the Jacobi identity)
The standard example is the vector space of n-by-n matrices, setting [x, y] to be the commutator xy − yx. Lie algebras are intimately connected to Lie groups. A special case of Lie algebras are Poisson algebras.
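
A quick numeric check that the matrix commutator satisfies both constraints (Python with NumPy, on randomly generated 3-by-3 matrices):

    import numpy as np

    rng = np.random.default_rng(0)
    x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))
    bracket = lambda a, b: a @ b - b @ a              # the commutator [a, b] = ab - ba

    print(np.allclose(bracket(x, y), -bracket(y, x)))                 # antisymmetry
    jacobi = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
    print(np.allclose(jacobi, np.zeros((3, 3))))                      # Jacobi identity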

Ordered vector spaces

An ordered vector space is a vector space equipped with an order ≤, i.e. vectors can be compared. Rn can be ordered, for example, by comparing the coordinates of the vectors. Riesz spaces present further key cases.

Generalizations

Modules

Modules are to rings what vector spaces are to fields: the very same axioms, applied to a ring R instead of a field F, yield modules. In contrast to the good understanding of vector spaces offered by linear algebra, the theory of modules is in general much more complicated. This is due to the presence of elements r ∈ R that do not possess multiplicative inverses. For example, modules need not have bases, as the Z-module (i.e. abelian group) Z/2 shows; those modules that do (including all vector spaces) are known as free modules.

Vector bundles

A family of vector spaces, parametrised continuously by some topological space X, is a vector bundle. More precisely, a vector bundle E over X is given by a continuous map

π : E → X,
which is locally a product of X with some (fixed) vector space V: for every point x in X, there is a neighborhood U of x such that the restriction of π to π−1(U) is equivalent to the projection V × U → U. The case dim V = 1 is called a line bundle. The interest in this notion comes from the fact that while the situation is simple locally, there may be global twisting phenomena. For example, the Möbius strip can be seen as a line bundle over the circle S1 (at least if one extends the bounded interval to infinity). The (non-)existence of vector bundles with certain properties can tell something about the underlying space X. For example, over the 2-sphere S2 there is no tangent vector field which is everywhere nonzero, as opposed to the circle S1. The study of all vector bundles over some topological space is known as K-theory. An algebraic counterpart to vector bundles are locally free modules, which—in the guise of projective modules—are important in homological algebra and algebraic K-theory.

Affine and projective spaces

Affine spaces can be thought of as vector spaces whose origin is not specified. Formally, an affine space is a set with a free and transitive vector space action. In particular, a vector space is an affine space over itself, by the structure map
V2 → V, (a, b) ↦ a − b.
Sets of the form x + Rm (viewed as a subset of some bigger Rn), i.e. translates of a linear subspace by a fixed vector x, are affine spaces, too.

The set of one-dimensional subspaces of a fixed vector space V is known as projective space, an important geometric object formalizing the idea of parallel lines intersecting at infinity. More generally, the Grassmann manifold consists of linear subspaces of higher (fixed) dimension n. Finally, flag manifolds parametrize flags, i.e. chains of subspaces (with fixed dimension)

0 = V0 ⊂ V1 ⊂ ... ⊂ Vn = V

