In mathematics, a vector space (or linear space) is a collection of objects (called vectors) that, informally speaking, may be scaled and added. More formally, a vector space is a set on which two operations, called (vector) addition and (scalar) multiplication, are defined and satisfy certain natural axioms which are listed below. Vector spaces are the basic objects of study in linear algebra, and are used throughout mathematics, science, and engineering.

## Motivation and definition

A guiding example is the plane R^{2} (see below).
### Definition

Let F be a field (such as the rationals, reals or complex numbers), whose elements will be called scalars. A vector space over the field F is a set V together with two binary operations,

### Elementary remarks

The first four axioms can be subsumed by requiring the set of vectors to be an abelian group under addition, and the rest are equivalent to a ring homomorphism f from the field into the endomorphism ring of the group of vectors. Then scalar multiplication a v is defined as (f(a))(v). This can be seen as the starting point of defining vector spaces without referring to a field.

## History

The notion of a vector space stems conceptually from affine geometry, via the introduction of coordinates in the plane or in ordinary three-dimensional space. Around 1636, the French mathematicians Descartes and Fermat founded analytic geometry by tying the solutions of an equation in two variables to the determination of a plane curve.

## Linear maps and matrices

Two given vector spaces V and W (over the same field F) can be related by linear maps (also called linear transformations) from V to W. These are functions that are compatible with the relevant structure—i.e., they preserve sums and scalar products:
### Matrices

### Eigenvalues and eigenvectors

A particularly important case are endomorphisms, i.e. maps f : V → V. In this case, vectors v can be compared to their image under f, f(v). Any vector v satisfying λ · v = f(v), where λ is a scalar, is called an eigenvector of f, with eigenvalue λ. Rephrased, this means that v is an element of the kernel of the difference f − λ · Id (where Id is the identity map V → V). In the finite-dimensional case, this can be rephrased using determinants: f having eigenvalue λ is equivalent to
## Subspaces and quotient spaces

## Examples of vector spaces

### Coordinate spaces and function spaces

The first example of a vector space over a field F is the field itself, equipped with its standard addition and multiplication. This is the case n = 1 of the vector space usually denoted F^{n}, known as the coordinate space, where n is an integer. Its elements are n-tuples
Functions from any fixed set Ω to the field F also form vector spaces, with addition and scalar multiplication performed pointwise. Vector spaces arising in this way are called function spaces. Many notions in topology and analysis, such as continuity, integrability or differentiability, are well-behaved with respect to linearity: sums and scalar multiples of functions possessing such a property still have that property. Hence, the set of such functions is a vector space. The methods of functional analysis provide finer information about these spaces (see below). The vector space F[x] is given by polynomial functions, i.e.
### Systems of linear equations

Systems of linear equations also lead to vector spaces; indeed, they may be seen as one of the historical reasons for developing the notion. For example, the solutions of
Similarly, the solutions of homogeneous linear differential equations form vector spaces: since differentiation is linear, i.e. (a · f)' = a · f ' and (f + g)' = f ' + g ', any linear combination of solutions is still a solution. For the equation f ''(x) + 2f '(x) + f (x) = 0, for example, the solutions are given by f(x) = a·e^{−x} + b·x·e^{−x}, where a and b are arbitrary constants, and e = 2.718... is Euler's number.
### Algebraic number theory

A common situation in algebraic number theory is a field F containing a smaller field E. Then, by the given multiplication and addition operations of F, F becomes an E-vector space; F is also called a field extension of E. As such, C, the field of complex numbers, is a vector space over R. Another example is Q(z), the smallest field containing the rationals and some complex number z. The dimension of this vector space (see below) is closely tied to z being algebraic or transcendental.
## Basic constructions

In addition to the above concrete examples, there are a number of standard linear-algebraic constructions that yield vector spaces related to given ones. In addition to the concrete definitions given below, they are also characterized by universal properties, which determine an object X by specifying the linear maps from X to any other vector space.
### Direct product and direct sum

The direct product $\prod_{i \in I} V_i$ of a family of vector spaces V_{i}, where i runs through some index set I, consists of tuples (v_{i})_{i ∈ I}, i.e. for any index i, one element v_{i} of V_{i} is given. Addition and scalar multiplication are performed componentwise:
### Tensor product

The tensor product V ⊗_{F} W, or simply V ⊗ W, is a vector space consisting of finite (formal) sums of symbols
## Bases and dimension

## Vector spaces with additional structures

From the point of view of linear algebra, vector spaces are completely understood insofar as any vector space is characterized, up to isomorphism, by its dimension. However, the needs of functional analysis require considering additional structures, especially with respect to convergence of infinite series. Moreover, the notion of a basis as explained above can be difficult to apply to infinite-dimensional spaces, also calling for an adapted approach.

### Topological vector spaces

Convergence issues are addressed by considering vector spaces V which also carry a compatible topology, i.e. a structure that makes it possible to talk about elements being close to each other. Compatible here means that addition and scalar multiplication must be continuous maps: if x and y in V, and a in F, vary by a bounded amount (the field also has to carry a topology in this setting), then so do x + y and ax.

#### Banach spaces

For finite-dimensional vector spaces such as R^{n}, the topology does not yield additional insight—in fact, all topologies on a finite-dimensional topological vector space are equivalent, i.e. give rise to the same notion of convergence. In the infinite-dimensional situation, however, the topologies for different p are inequivalent. For example, the sequence x_{n} of vectors
x_{n}, with n tending to ∞, converges to the zero vector for p = ∞, but does not for p = 1. This illustrates that the study of topological vector spaces is richer than that of vector spaces without additional data. The space of functions f with $\|f\|_p < \infty$, equipped with this norm, is denoted L^{p}(Ω).
#### Hilbert spaces

for suitable coefficients a_{m} and b_{m}, called Fourier coefficients. This expansion is surprising insofar as countably many functions, namely the rational multiples of sin(mx) and cos(mx), where m takes values in the integers, are enough to express any other function, of which there are uncountably many.

#### Distributions

Distributions (or "generalized functions") are a powerful instrument for solving differential equations, and extend beyond the framework of Hilbert spaces. Concretely, a distribution is a map assigning a number to any function in a given vector space. A standard example is given by integrating the function over some domain Ω:
### Algebras over fields

### Ordered vector spaces

An ordered vector space is a vector space equipped with an order ≤, i.e. vectors can be compared. R^{n} can be ordered, for example, by comparing the coordinates of the vectors. Riesz spaces present further key cases.
## Generalizations

### Modules

Modules are to rings what vector spaces are to fields: the very same axioms, applied to a ring R instead of a field F, yield modules. In contrast to the good understanding of vector spaces offered by linear algebra, the theory of modules is in general much more complicated. This is due to the presence of elements r ∈ R that do not possess multiplicative inverses. For example, modules need not have bases, as the Z-module (i.e. abelian group) Z/2 shows; those modules that do (including all vector spaces) are known as free modules.
### Vector bundles

locally, i.e. over small enough open subsets U of X, the restriction of π to π^{−1}(U) equals the projection V × U → U. The case dim V = 1 is called a line bundle. The interest in this notion comes from the fact that while the situation is simple to survey locally, there may be global twisting phenomena. For example, the Möbius strip can be seen as a line bundle over the circle S^{1} (at least if one extends the bounded interval to infinity). The (non-)existence of vector bundles with certain properties can tell something about the underlying space X. For example, over the 2-sphere S^{2}, there is no tangent vector field which is everywhere nonzero, as opposed to the circle S^{1}. The study of all vector bundles over some topological space is known as K-theory. An algebraic counterpart to vector bundles are locally free modules, which—in the guise of projective modules—are important in homological algebra and algebraic K-theory.
### Affine and projective spaces

Affine spaces can be thought of as vector spaces whose origin is not specified. Formally, an affine space is a set with a transitive vector space action. In particular, a vector space is an affine space over itself, by the structure map
For example, an affine plane x + R^{m} (viewed as a subset of some bigger R^{n}), i.e. a linear subspace R^{m} moved by a fixed vector x, is an affine space, too.


## See also

The most familiar vector spaces are two- and three-dimensional Euclidean spaces. Vectors in these spaces can be represented by ordered pairs or triples of real numbers, and are isomorphic to geometric vectors—quantities with a magnitude and a direction, usually depicted as arrows. These vectors may be added together using the parallelogram rule (vector addition) or multiplied by real numbers (scalar multiplication). The behavior of geometric vectors under these operations provides a good intuitive model for the behavior of vectors in more abstract vector spaces, which need not have a geometric interpretation. For example, the set of (real) polynomials forms a vector space.

A much broader idea of what constitutes a vector space can be gleaned from the See also section of this article, which links to more abstract examples of the concept.

The space R^{2} consisting of pairs of real numbers (x, y) is a common example of a vector space. It is one because any two pairs (here: vectors) can be added:

- (x_{1}, y_{1}) + (x_{2}, y_{2}) = (x_{1} + x_{2}, y_{1} + y_{2}),

and any pair can be multiplied by a real number a:

- a · (x, y) = (a · x, a · y).

The general definition of a vector space differs from this example in several ways:

- other fields instead of the real numbers, such as complex numbers or finite fields, are allowed.
- the dimension, which is two above, is arbitrary.
- most importantly, elements of vector spaces are not usually expressed as linear combinations of a particular set of vectors; i.e., there is no preferred representation of the vector (x, y), say as

- (x, y) = x · (1, 0) + y · (0, 1)

rather than

- (x, y) = (−1/3·x + 2/3·y) · (−1, 1) + (1/3·x + 1/3·y) · (2, 1).
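That these two representations pick out the same vector can be checked numerically; a minimal Python sketch (the helper names `std` and `alt` are ours):

```python
def std(x, y):
    # (x, y) = x·(1, 0) + y·(0, 1)
    return (x * 1 + y * 0, x * 0 + y * 1)

def alt(x, y):
    # (x, y) = (−x/3 + 2y/3)·(−1, 1) + (x/3 + y/3)·(2, 1)
    a = -x / 3 + 2 * y / 3
    b = x / 3 + y / 3
    return (a * -1 + b * 2, a * 1 + b * 1)

for x, y in [(1.0, 2.0), (-3.5, 0.25), (7.0, -4.0)]:
    sx, sy = std(x, y)
    ax, ay = alt(x, y)
    assert abs(sx - x) < 1e-12 and abs(sy - y) < 1e-12
    assert abs(ax - x) < 1e-12 and abs(ay - y) < 1e-12
```

Both decompositions recover (x, y) exactly up to floating-point rounding, illustrating that neither basis is preferred.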

- vector addition: V × V → V denoted v + w, where v, w ∈ V, and
- scalar multiplication: F × V → V denoted av, where a ∈ F and v ∈ V,

satisfying the axioms below. Let u, v, w be arbitrary vectors in V, and a, b scalars in F.

| Axiom | Statement |
|---|---|
| Associativity of addition | u + (v + w) = (u + v) + w |
| Commutativity of addition | v + w = w + v |
| Identity element of addition | There exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V. |
| Inverse elements of addition | For all v ∈ V, there exists an element w ∈ V, called the additive inverse of v, such that v + w = 0. |
| Distributivity of scalar multiplication with respect to vector addition | a (v + w) = a v + a w |
| Distributivity of scalar multiplication with respect to field addition | (a + b) v = a v + b v |
| Compatibility of scalar multiplication with field multiplication | a (b v) = (ab) v |
| Identity element of scalar multiplication | 1 v = v, where 1 denotes the multiplicative identity in F |

Some sources choose to also include two axioms of closure u + v ∈ V and a v ∈ V for all a, u, and v. When the operations are interpreted as maps with codomain V, these closure axioms hold by definition, and do not need to be stated independently. Closure, however, must be checked to determine whether a subset of a vector space is a subspace.
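The axioms can be verified mechanically for a concrete candidate space. A sketch over pairs of rationals, using exact `Fraction` arithmetic (the helper names `add` and `smul` are ours):

```python
from fractions import Fraction
from itertools import product

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def smul(a, v):
    return (a * v[0], a * v[1])

zero = (Fraction(0), Fraction(0))
vecs = [(Fraction(1), Fraction(2)), (Fraction(-3), Fraction(1, 2)), zero]
scalars = [Fraction(2), Fraction(-1, 3), Fraction(0), Fraction(1)]

for u, v, w in product(vecs, repeat=3):
    assert add(u, add(v, w)) == add(add(u, v), w)   # associativity
    assert add(v, w) == add(w, v)                   # commutativity
    assert add(v, zero) == v                        # additive identity
    assert add(v, smul(Fraction(-1), v)) == zero    # additive inverses

for a, b in product(scalars, repeat=2):
    for v, w in product(vecs, repeat=2):
        assert smul(a, add(v, w)) == add(smul(a, v), smul(a, w))  # distributivity over vectors
        assert smul(a + b, v) == add(smul(a, v), smul(b, v))      # distributivity over scalars
        assert smul(a, smul(b, v)) == smul(a * b, v)              # compatibility
        assert smul(Fraction(1), v) == v                          # scalar identity
```

Of course a finite check over sample vectors is evidence, not a proof; the axioms hold for all pairs of rationals by the field axioms of Q.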

Expressions of the form “v a”, where v ∈ V and a ∈ F, are, strictly speaking, not defined. Because of the commutativity of the underlying field, however, “a v” and “v a” are often treated synonymously. Additionally, if the vector space V is also an algebra over the field F, then for v, w ∈ V and a ∈ F one has (a v) w = v (a w), which makes it convenient to consider “a v” and “v a” to represent the same vector.

There are a number of properties that follow easily from the vector space axioms. Some of them derive from elementary group theory, applied to the additive group of vectors: for example, the zero vector 0 ∈ V and the additive inverse −v of a vector v are unique. Other properties follow from the distributive law: for example, scalar multiplication by the zero scalar yields the zero vector, and a v = 0 holds only if a = 0 or v = 0.

To achieve geometric solutions without using coordinates, Bernhard Bolzano introduced in 1804 certain operations on points, lines and planes, which are predecessors of vectors. This work was taken up in the concept of barycentric coordinates of August Ferdinand Möbius in 1827. A founding step toward the definition of vectors was Bellavitis' notion of the bipoint, an oriented segment one of whose ends is the origin and the other a target.

The notion of vector was reconsidered with the presentation of complex numbers by Jean-Robert Argand and William Rowan Hamilton, and with the latter's invention of quaternions, these being elements of R^{2} and R^{4}, respectively. Treating them as linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations.

In 1857, Cayley introduced the matrix notation which allows one to harmonize and simplify the writing of linear maps between vector spaces.

At the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations. His work exceeds the framework of vector spaces, since his introduction of multiplication led him to the concept of algebras. Nonetheless, the concepts of dimension and linear independence are present, as well as the scalar product (1844). Priority for these discoveries was disputed with Cauchy, on account of his publication Sur les clefs algébriques.

The Italian mathematician Peano, one of whose important contributions was the rigorous axiomatisation of extant concepts, in particular the construction of sets, was one of the first to give the modern definition of vector spaces, around the end of the 19th century.

An important development of this concept is due to the construction of function spaces by Henri Lebesgue. This was later formalized by David Hilbert and by Stefan Banach, in the latter's 1920 PhD thesis.

At this time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces. Also at this time, the first studies concerning infinite dimensional vector spaces were done.

- f(v + w) = f(v) + f(w) and f(a · v) = a · f(v).

An isomorphism is a linear map f : V → W such that there exists an inverse map g : W → V, i.e. a map for which the two possible compositions f ∘ g : W → W and g ∘ f : V → V are identity maps. Equivalently, f is both one-to-one (injective) and onto (surjective). If there exists an isomorphism between V and W, the two spaces are said to be isomorphic; they are then essentially identical as vector spaces, since all identities holding in V are, via f, transported to similar ones in W, and vice versa via g.

Given any two vector spaces V and W, the set of linear maps V → W forms a vector space Hom_{F}(V, W) (also denoted L(V, W)): two such maps f and g are added pointwise, i.e.

- (f + g)(v) = f(v) + g(v),

and multiplied by a scalar a via

- (a·f)(v) = a·f(v).

The case of W = F, the base field, is of particular interest. The space of linear maps from V to F is called the dual vector space, denoted V^{∗}.

Matrices are a useful notion to encode linear maps. They are written as rectangular arrays of scalars, i.e. elements of some field F. Any m-by-n matrix A gives rise to a linear map from F^{n}, the vector space of n-tuples x = (x_{1}, ..., x_{n}), to F^{m}, via

- $(x_1, x_2, \ldots, x_n) \mapsto \left(\sum_{j=1}^n a_{1j} x_j,\; \sum_{j=1}^n a_{2j} x_j,\; \ldots,\; \sum_{j=1}^n a_{mj} x_j \right)$,

- $\mathbf{x} \mapsto A\mathbf{x}$.
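The action of a matrix on a tuple can be spelled out directly; a minimal pure-Python sketch (`mat_vec` is our own helper name):

```python
def mat_vec(A, x):
    """Apply an m-by-n matrix A (a list of rows) to an n-vector x,
    producing the m-vector whose i-th entry is sum_j A[i][j] * x[j]."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1, 3, 1],
     [4, 2, 2]]           # a 2-by-3 matrix: a linear map from F^3 to F^2
x = [2, 1, -5]
print(mat_vec(A, x))      # → [0, 0]
```

The printed result shows that (2, 1, −5) lies in the kernel of this particular map.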

The determinant of a square matrix tells whether the associated map is an isomorphism: the map is an isomorphism if and only if the determinant is nonzero.

- det (f − λ · Id) = 0.
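For a 2-by-2 matrix A, det(A − λ·Id) = 0 expands to the quadratic λ² − tr(A)·λ + det(A) = 0; a small sketch solving it (assuming real eigenvalues; the function name is ours):

```python
import math

def eigenvalues_2x2(A):
    """Eigenvalues of a 2x2 matrix via det(A - λI) = λ² - tr(A)·λ + det(A) = 0."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)   # assumes the eigenvalues are real
    return (tr - disc) / 2, (tr + disc) / 2

A = [[2, 1], [0, 3]]                      # upper triangular: eigenvalues 2 and 3
lo, hi = eigenvalues_2x2(A)
assert (lo, hi) == (2.0, 3.0)

# the eigenvector (1, 0) for λ = 2 indeed satisfies A v = λ v:
v = (1, 0)
Av = (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])
assert Av == (2 * v[0], 2 * v[1])
```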

In general, a nonempty subset W of a vector space V that is closed under addition and scalar multiplication is called a subspace of V. Subspaces of V are vector spaces (over the same field) in their own right. The intersection of all subspaces containing a given set of vectors is called its span. Expressed in terms of elements, the span is the subspace consisting of finite sums (called linear combinations)

- a_{1}v_{1} + a_{2}v_{2} + ... + a_{n}v_{n},

where the a_{i} and v_{i} (i = 1, ..., n) are scalars and vectors, respectively.

The counterpart to subspaces are quotient vector spaces. Given any subspace W ⊂ V, the quotient space V/W ("V modulo W") is defined as follows: as a set, it consists of v + W = {v + w, w ∈ W}, where v is an arbitrary vector in V. The sum of two such elements v_{1} + W and v_{2} + W is (v_{1} + v_{2}) + W, and scalar multiplication is given by a · (v + W) = (a · v) + W. The key point in this definition is that v_{1} + W = v_{2} + W if and only if the difference of v_{1} and v_{2} lies in W. This way, the quotient space "forgets" information that is contained in the subspace W.
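The criterion v₁ + W = v₂ + W if and only if v₁ − v₂ ∈ W can be illustrated for the subspace W = span{(1, 1)} of R²; a sketch (the helper names are ours):

```python
def in_W(v, tol=1e-12):
    # W = span{(1, 1)}: a vector lies in W iff its two coordinates agree
    return abs(v[0] - v[1]) < tol

def same_coset(v1, v2):
    # v1 + W equals v2 + W iff the difference v1 - v2 lies in W
    return in_W((v1[0] - v2[0], v1[1] - v2[1]))

assert same_coset((2.0, 3.0), (5.0, 6.0))      # difference (-3, -3) lies in W
assert not same_coset((2.0, 3.0), (2.0, 4.0))  # difference (0, -1) does not
```

Here the quotient "forgets" the W-direction: each coset is determined by the single number y − x of any representative (x, y), so R²/W is one-dimensional.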

For any linear map f: V → W, the kernel ker(f) consists of the elements v ∈ V that are mapped to 0 in W. The kernel and the image im(f) = {f(v), v ∈ V} are linear subspaces of V and W, respectively. There is a fundamental isomorphism

- V / ker(f) ≅ im(f).

- (f_{1}, f_{2}, ..., f_{n}), where the f_{i} are elements of F.

- f(x) = r_{n}x^{n} + r_{n−1}x^{n−1} + ... + r_{1}x + r_{0}, where the coefficients r_{0}, ..., r_{n} are in F,

- a + 3b + c = 0

- 4a + 2b + 2c = 0

- Ax = 0, where A is the matrix

- $A = \begin{bmatrix} 1 & 3 & 1 \\ 4 & 2 & 2 \end{bmatrix}$.
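That the solutions of this homogeneous system are closed under addition and scalar multiplication can be checked directly; the particular solution (2, 1, −5) below was found by hand:

```python
def is_solution(v):
    # the system: a + 3b + c = 0 and 4a + 2b + 2c = 0
    a, b, c = v
    return a + 3 * b + c == 0 and 4 * a + 2 * b + 2 * c == 0

v = (2, 1, -5)                      # one nonzero solution
assert is_solution(v)
assert is_solution((0, 0, 0))       # the zero vector always solves Ax = 0

# closure under scalar multiplication and addition:
assert is_solution((6, 3, -15))     # 3 · v
w = (-2, -1, 5)                     # (-1) · v
assert is_solution((v[0] + w[0], v[1] + w[1], v[2] + w[2]))
```

The asserts pass because substituting a sum (or multiple) of solutions into each equation distributes over the terms, which is exactly the subspace property.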

In a similar vein, the solutions of homogeneous linear differential equations, for example

- f ''(x) + 2f '(x) + f (x) = 0
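For this equation, the characteristic polynomial r² + 2r + 1 has the double root r = −1, so e^{−x} and x·e^{−x} are solutions; a numerical sketch checking that linear combinations again solve the equation (the derivatives are computed by hand):

```python
import math

# f(x) = x·e^{-x}, one solution of f'' + 2f' + f = 0
def f(x):   return x * math.exp(-x)
def fp(x):  return (1 - x) * math.exp(-x)        # f'
def fpp(x): return (x - 2) * math.exp(-x)        # f''

for x in [-1.0, 0.0, 0.5, 3.0]:
    assert abs(fpp(x) + 2 * fp(x) + f(x)) < 1e-12

# any linear combination a·e^{-x} + b·x·e^{-x} is again a solution:
a, b = 2.5, -4.0
def g(x):   return a * math.exp(-x) + b * f(x)
def gp(x):  return -a * math.exp(-x) + b * fp(x)
def gpp(x): return a * math.exp(-x) + b * fpp(x)

for x in [-1.0, 0.0, 0.5, 3.0]:
    assert abs(gpp(x) + 2 * gp(x) + g(x)) < 1e-12
```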

- (v_{i}) + (w_{i}) = (v_{i} + w_{i}).

- a · (v_{i}) = (a · v_{i}),

- v_{1} ⊗ w_{1} + v_{2} ⊗ w_{2} + ... + v_{n} ⊗ w_{n},

- a · (v ⊗ w) = (a · v) ⊗ w = v ⊗ (a · w).

The tensor product—one of the central notions of multilinear algebra—can be seen as the extension of the hierarchy of scalars, vectors and matrices. Via the fundamental isomorphism

- Hom_{F}(V, W) ≅ V^{∗} ⊗_{F} W,

In the important case V ⊗ V, the tensor product can be loosely thought of as adjoining formal "products" of vectors (which, ad hoc, do not exist in vector spaces). In general, there are no relations between the two tensors v_{1} ⊗ v_{2} and v_{2} ⊗ v_{1}. Forcing two such elements to be equal leads to the symmetric algebra, whereas forcing v_{1} ⊗ v_{2} = −v_{2} ⊗ v_{1} yields the exterior algebra. The latter is the linear-algebraic foundation of differential forms, which are elements of the exterior algebra of the cotangent space of a manifold. Tensors, i.e. elements of some tensor product, have various applications: for example, the Riemann curvature tensor encodes all curvatures of a manifold at once, which finds applications in general relativity, where the Einstein curvature tensor describes the curvature of space-time.

If, in a (finite or infinite) set {v_{i}}_{i ∈ I} no vector can be removed without changing the span, the set is said to be linearly independent. Equivalently, an equation

- 0 = a_{1}v_{i_1} + a_{2}v_{i_2} + ... + a_{n}v_{i_n}

can only hold if all scalars a_{1}, ..., a_{n} equal zero.

A linearly independent set whose span is V is called a basis for V. Hence, every element can be expressed as a finite sum of basis elements, and any such representation is unique (once a basis is chosen). Vector spaces are sometimes introduced from this coordinatised viewpoint.

Using Zorn’s Lemma (which is equivalent to the axiom of choice), it can be proven that every vector space has a basis. It follows from the ultrafilter lemma, which is weaker than the axiom of choice, that all bases of a given vector space have the same cardinality. This cardinality is called the dimension of the vector space. Historically, the existence of bases was first shown by Felix Hausdorff. It is known that, given the rest of the axioms, this statement is in fact equivalent to the axiom of choice.

For example, the dimension of the coordinate space F^{n} is n, since any element (x_{1}, x_{2}, ..., x_{n}) of this space can be uniquely expressed as a linear combination of the n vectors e_{1} = (1, 0, ..., 0), e_{2} = (0, 1, 0, ..., 0), ..., e_{n} = (0, 0, ..., 0, 1), namely the sum

- $\sum_{i=1}^n x_i \mathbf{e}_i$.
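The expansion of a tuple in the standard basis can be reproduced in a few lines; a sketch (the helper names are ours):

```python
def standard_basis(n):
    """Return the n standard basis vectors e_1, ..., e_n of F^n."""
    return [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

def expand(x):
    """Reassemble x as the linear combination sum_i x_i · e_i."""
    n = len(x)
    total = [0] * n
    for xi, ei in zip(x, standard_basis(n)):
        total = [t + xi * e for t, e in zip(total, ei)]
    return tuple(total)

x = (4, -1, 7)
assert expand(x) == x     # the decomposition recovers x, and it is unique
```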

By the uniqueness of the decomposition of any element into a linear combination of chosen basis elements v_{i}, linear maps f are completely determined by specifying the images f(v_{i}). Given two vector spaces V and W of the same dimension, a choice of bases of V and W together with a bijection between the two sets of basis vectors gives rise to the linear map sending each basis element of V to the corresponding basis element of W. By its very definition, this map is an isomorphism. Therefore, vector spaces over a given field are determined, up to isomorphism, by their dimension; in particular, any n-dimensional vector space over F is isomorphic to F^{n}.

Therefore, it is common to study vector spaces with certain additional structures. This is often necessary to recover ordinary notions from geometry or analysis.

Only in such topological vector spaces can one consider infinite sums of vectors, i.e. series, through the notion of convergence. For example, the expression

- $\sum_{i=0}^{\infty} f_i$, where the f_{i} are elements of a given vector space of real or complex functions, denotes the limit of the corresponding finite partial sums.

A way of ensuring the existence of limits of infinite series as above is to restrict attention to complete vector spaces, i.e. spaces in which every Cauchy sequence (a sequence that "should" possess a limit, in that its terms eventually become arbitrarily close to one another) does have a limit. Roughly, completeness means the absence of holes. For example, the rationals are not complete, since there are sequences of rational numbers converging to irrational numbers such as $\sqrt{2}$. A less immediate example is provided by spaces of functions equipped with the Riemann integral.

In the realm of topological vector spaces, such as Banach and Hilbert spaces, all notions should be coherent with the topology. For example, instead of considering all linear maps V → W, it is useful to require maps to be continuous. In particular, the (topological) dual space V^{∗} consists of continuous functionals V → R (or C). If V is some vector space of (well-behaved) functions, this dual space, the space of distributions, whose elements can be thought of as generalized functions, finds applications in solving differential equations. Applying the dual construction twice yields the bidual V^{∗∗}. There is always a natural, injective map V → V^{∗∗}, which may or may not be an isomorphism; if it is, V is called reflexive.

Banach spaces, named in honor of Stefan Banach, are complete normed vector spaces, i.e. vector spaces whose topology comes from a norm, a datum that makes it possible to measure the length of vectors.

A common example is the vector space l^{p}, consisting of infinite sequences of real numbers x = (x_{1}, x_{2}, ...) whose p-norm (1 ≤ p ≤ ∞) is finite, where the norm is given by

- $\|\mathbf{x}\|_p := \left(\sum_i |x_i|^p\right)^{1/p}$ for p < ∞, and $\|\mathbf{x}\|_\infty := \sup_i |x_i|$.

- x_{n} = (2^{−n}, 2^{−n}, ..., 2^{−n}, 0, 0, ...)—the first 2^{n} components are 2^{−n}, the following ones are 0

- $\|x_n\|_1 = \sum_{i=1}^{2^n} 2^{-n} = 1$ and $\|x_n\|_\infty = \sup(2^{-n}, 0) = 2^{-n}$,
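Both norms of the vectors x_n can be recomputed directly; a sketch (`p_norm` is our own helper name):

```python
def p_norm(x, p):
    """The p-norm of a finite real vector, with p = float('inf') for the sup-norm."""
    if p == float("inf"):
        return max(abs(xi) for xi in x)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

for n in range(1, 6):
    x_n = [2.0 ** -n] * (2 ** n)        # first 2^n entries are 2^{-n}, the rest 0
    assert abs(p_norm(x_n, 1) - 1.0) < 1e-9          # ||x_n||_1 = 1 for every n
    assert p_norm(x_n, float("inf")) == 2.0 ** -n    # ||x_n||_∞ = 2^{-n} → 0
```

The 1-norm stays at 1 while the sup-norm tends to 0, which is exactly the inequivalence of the two topologies described above.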

More generally, it is possible to consider functions endowed with a norm that replaces the sum in the above p-norm by an integral, specifically the Lebesgue integral

- $\|f\|_p := \left(\int |f(x)|^p \, dx\right)^{1/p}$.

Since the above uses the Lebesgue integral (as opposed to the Riemann integral), these spaces are complete. Concretely, this means that for any sequence of functions f_{k} in L^{p}(Ω) satisfying the condition

- $\lim_{k, n \to \infty} \int_\Omega |f_k(x) - f_n(x)|^p \, dx = 0,$

there exists a function f(x) belonging to L^{p}(Ω) such that

- $\lim_{k \to \infty} \int_\Omega |f(x) - f_k(x)|^p \, dx = 0.$

Imposing boundedness conditions not only on the function, but also on its derivatives leads to Sobolev spaces.

Slightly more special, but equally crucial to functional analysis, is the case where the topology is induced by an inner product, which makes it possible to measure angles between vectors. It also entails that lengths of vectors can be defined, namely by $\|\mathbf{v}\| = \sqrt{\langle v, v \rangle}$. If such a space is complete, it is called a Hilbert space, in honor of David Hilbert.

A key case is the Hilbert space L^{2}(Ω), whose inner product is given by

- $\langle f \mid g \rangle = \int_\Omega \overline{f(x)}\, g(x)\, dx$, where $\overline{f(x)}$ is the complex conjugate of f(x).

Reversing this direction of thought, i.e. finding a sequence of functions f_{n} that approximates a given function, is equally crucial. Early analysis, in the guise of the Taylor approximation, established an approximation of differentiable functions f by polynomials. This technique is only local: approximating f closely at some point x may not approximate the function globally. The Stone-Weierstrass theorem, however, states that every continuous function on [a, b] can be approximated as closely as desired by a polynomial. More generally, and more conceptually, the theorem yields a simple description of which "basic functions" suffice to generate a Hilbert space, in the sense that the closure of their span (i.e. finite sums and limits of those) is the whole space. For distinction, a basis in the linear-algebraic sense as above is then called a Hamel basis. Not only does the theorem exhibit polynomials as sufficient for approximation purposes; together with the Gram-Schmidt process, it also allows the construction of a basis of orthogonal polynomials, orthogonality meaning that $\langle p \mid q \rangle = 0$ for distinct basis polynomials p and q. Beyond ordinary polynomials, similar statements hold for Legendre polynomials, Bessel functions and hypergeometric functions.
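The Gram-Schmidt process mentioned above can be run on the monomials 1, x, x² with the inner product ⟨p, q⟩ = ∫ p(x)q(x) dx over [−1, 1]; with exact rational arithmetic it produces (up to scaling) the first Legendre polynomials. A sketch (representing polynomials as coefficient lists, lowest degree first, is our choice):

```python
from fractions import Fraction

def integrate_monomial(k):
    # ∫_{-1}^{1} x^k dx = 0 for odd k, 2/(k+1) for even k
    return Fraction(0) if k % 2 else Fraction(2, k + 1)

def inner(p, q):
    # <p, q> = ∫_{-1}^{1} p(x) q(x) dx, computed exactly term by term
    return sum(ci * dj * integrate_monomial(i + j)
               for i, ci in enumerate(p) for j, dj in enumerate(q))

def gram_schmidt(polys):
    ortho = []
    for p in polys:
        p = list(map(Fraction, p))
        for q in ortho:
            coeff = inner(p, q) / inner(q, q)
            # subtract the projection of p onto q (padding to equal length)
            p = [pi - coeff * qi
                 for pi, qi in zip(p + [0] * (len(q) - len(p)),
                                   q + [0] * (len(p) - len(q)))]
        ortho.append(p)
    return ortho

# monomials 1, x, x^2
basis = gram_schmidt([[1], [0, 1], [0, 0, 1]])
assert basis[2] == [Fraction(-1, 3), Fraction(0), Fraction(1)]   # x^2 - 1/3
assert inner(basis[0], basis[1]) == 0 and inner(basis[1], basis[2]) == 0
```

The third polynomial, x² − 1/3, is the Legendre polynomial P₂ up to a constant factor.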

Resolving general functions into sums of trigonometric functions is known as the Fourier expansion, a technique much used in engineering. It is possible to describe any function f(x) on a bounded, closed interval (or equivalently, any periodic function) as the limit of the following sum

- $\frac{a_0}{2} + \sum_{m=1}^{\infty} \left[ a_m \cos(mx) + b_m \sin(mx) \right]$

The solutions to various important differential equations can be interpreted in terms of Hilbert spaces. For example, a great many fields in physics and engineering lead to such equations and frequently solutions with particular physical properties are used as basis functions, often orthogonal, that serve as the axes in a corresponding Hilbert space.

As an example from physics, the time-dependent Schrödinger equation in quantum mechanics describes the change of physical properties in time, by means of a partial differential equation determining a wavefunction. Definite values for physical properties such as energy, or momentum, correspond to eigenvalues of an associated (linear) differential operator and the associated wavefunctions are called eigenstates. The spectral theorem describes the representation of linear operators that act upon functions in terms of these eigenfunctions and their eigenvalues.

- $f \mapsto \int_\Omega f(x)\, dx$

In general, vector spaces do not possess a multiplication operation. (One exception: any finite-dimensional vector space over a finite field can be given the structure of a (finite) field.) A vector space equipped with an additional bilinear operator defining the multiplication of two vectors is an algebra over a field. An important example is the ring of polynomials F[x] in one variable x with coefficients in a field F, or similarly in several variables. In this case the multiplication is both commutative and associative. These rings and their quotient rings form the basis of algebraic geometry, because they are the rings of functions of algebraic geometric objects.

Another crucial example are Lie algebras, which are neither commutative nor associative; the failure to be so is measured by the constraints below ([x, y] denotes the product of x and y):

- [x, y] = −[y, x] (anticommutativity) and [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 (Jacobi identity)
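A concrete Lie algebra is R³ with the cross product as bracket; both constraints can be checked directly on sample vectors. A sketch (the helper names are ours):

```python
def cross(x, y):
    # the cross product on R^3, serving as the Lie bracket [x, y]
    return (x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0])

def add3(*vs):
    return tuple(sum(c) for c in zip(*vs))

def neg(v):
    return tuple(-c for c in v)

x, y, z = (1, 2, 3), (-4, 0, 5), (2, -1, 1)

# anticommutativity: [x, y] = -[y, x]
assert cross(x, y) == neg(cross(y, x))

# Jacobi identity: [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
jac = add3(cross(x, cross(y, z)), cross(y, cross(z, x)), cross(z, cross(x, y)))
assert jac == (0, 0, 0)
```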

A family of vector spaces, parametrised continuously by some topological space X, is a vector bundle. More precisely, a vector bundle E over X is given by a continuous map

- π : E → X,

- V^{2} → V, (a, b) ↦ a − b.

The set of one-dimensional subspaces of a fixed vector space V is known as projective space, an important geometric object formalizing the idea of parallel lines intersecting at infinity. More generally, the Grassmann manifold consists of linear subspaces of higher (fixed) dimension n. Finally, flag manifolds parametrize flags, i.e. chains of subspaces (with fixed dimension)

- 0 = V_{0} ⊂ V_{1} ⊂ ... ⊂ V_{n} = V

- Vector (geometry), for vectors in physics
- Vector field
- Vector spaces without fields
- Coordinates (mathematics)

- Vector fields in cylindrical and spherical coordinates
- Lie derivative
- Covariant derivative
- Clifford algebra

Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.

Last updated on Saturday October 11, 2008 at 09:12:22 PDT (GMT -0700)
