In linear algebra, a basis for a vector space of dimension n is a sequence of n vectors α_{1}, ..., α_{n} with the property that every vector in the space can be expressed uniquely as a linear combination of the basis vectors. Since it is often desirable to work with more than one basis for a vector space, it is of fundamental importance in linear algebra to be able to easily transform coordinate-wise representations of vectors and linear transformations taken with respect to one basis into their equivalent representations with respect to another basis. Such a transformation is called a change of basis.

## Preliminary notions

The standard basis for R^{n} is {e_{1}, ..., e_{n}}, where e_{j} = (0, ..., 1, ..., 0) is the element of R^{n} with 1 in the j-th place and 0s elsewhere.

## Change of coordinates

First we examine the question of how the coordinates of a vector ξ in V change when we select another basis. Suppose {α_{1}, ..., α_{n}} and {α'_{1}, ..., α'_{n}} are two ordered bases for V. Let φ_{1} and φ_{2} be the corresponding coordinate isomorphisms from R^{n} to V, i.e. φ_{1}(e_{j}) = α_{j} and φ_{2}(e_{j}) = α'_{j} for j = 1, ..., n. If x = (x_{1}, ..., x_{n}) is the coordinate n-tuple of ξ with respect to the first basis, so that ξ = φ_{1}(x), then the coordinate tuple of ξ with respect to the second basis is φ_{2}^{-1}(ξ) = φ_{2}^{-1}(φ_{1}(x)). Now the map φ_{2}^{-1} o φ_{1} is an automorphism on R^{n} and therefore has a matrix p. Moreover, the j-th column of p is φ_{2}^{-1} o φ_{1}(e_{j}) = φ_{2}^{-1}(α_{j}), that is, the coordinate n-tuple of α_{j} with respect to the second basis {α'_{1}, ..., α'_{n}}. Thus y = φ_{2}^{-1}(φ_{1}(x)) = px is the coordinate n-tuple of ξ with respect to the basis {α'_{1}, ..., α'_{n}}.
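This computation can be sketched numerically (using NumPy; the two bases below are illustrative). The columns of B1 and B2 hold the old and new basis vectors, and p is obtained by solving B2 p = B1 rather than forming the inverse explicitly:

```python
import numpy as np

# Two ordered bases for V = R^2, stored as the columns of B1 (old) and B2 (new).
B1 = np.array([[1.0, 1.0],
               [0.0, 1.0]])   # alpha_1 = (1, 0), alpha_2 = (1, 1)
B2 = np.array([[2.0, 0.0],
               [0.0, 1.0]])   # alpha'_1 = (2, 0), alpha'_2 = (0, 1)

# Column j of p is the coordinate tuple of alpha_j with respect to the new
# basis; p is the matrix of phi_2^{-1} o phi_1, namely B2^{-1} B1.
p = np.linalg.solve(B2, B1)

# A vector xi with coordinate tuple x in the old basis ...
x = np.array([3.0, -1.0])
xi = B1 @ x

# ... has coordinate tuple y = p x in the new basis.
y = p @ x
assert np.allclose(B2 @ y, xi)
```

Using `np.linalg.solve` instead of inverting B2 is the standard numerically stable choice.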
## The matrix of a linear transformation

Now suppose T : V → W is a linear transformation, {α_{1}, ..., α_{n}} is a basis for V and {β_{1}, ..., β_{m}} is a basis for W. Let φ and ψ be the coordinate isomorphisms for V and W, respectively, relative to the given bases. Then the map T_{1} = ψ^{-1} o T o φ is a linear transformation from R^{n} to R^{m}, and therefore has a matrix t; its j-th column is ψ^{-1}(T(α_{j})) for j = 1, ..., n. This matrix is called the matrix of T with respect to the ordered bases {α_{1}, ..., α_{n}} and {β_{1}, ..., β_{m}}. If η = T(ξ) and y and x are the coordinate tuples of η and ξ, then y = ψ^{-1}(T(φ(x))) = tx. Conversely, if ξ is in V and x = φ^{-1}(ξ) is the coordinate tuple of ξ with respect to {α_{1}, ..., α_{n}}, and we set y = tx and η = ψ(y), then η = ψ(T_{1}(x)) = T(ξ). That is, if ξ is in V and η is in W and x and y are their coordinate tuples, then y = tx if and only if η = T(ξ).

### Change of basis

Now we ask what happens to the matrix of T : V → W when we change bases in V and W. Let {α_{1}, ..., α_{n}} and {β_{1}, ..., β_{m}} be ordered bases for V and W respectively, and suppose we are given a second pair of bases {α'_{1}, ..., α'_{n}} and {β'_{1}, ..., β'_{m}}. Let φ_{1} and φ_{2} be the coordinate isomorphisms taking the usual basis in R^{n} to the first and second bases for V, and let ψ_{1} and ψ_{2} be the isomorphisms taking the usual basis in R^{m} to the first and second bases for W.

## The matrix of an endomorphism

An important case of the matrix of a linear transformation is that of an endomorphism, that is, a linear map from a vector space V to itself: the case W = V.
We can naturally take {β_{1}, ..., β_{n}} = {α_{1}, ..., α_{n}} and {β'_{1}, ..., β'_{n}} = {α'_{1}, ..., α'_{n}}. The matrix of the linear map T is then necessarily square.
### Change of basis

## The matrix of a bilinear form

### Change of basis

## Example from mechanics

(this example to be replaced or amended)

## Important instances

In abstract vector space theory the change of basis concept is innocuous; it seems to add little. Yet there are cases in associative algebras where a change of basis is sufficient to turn a caterpillar into a butterfly, figuratively speaking:
## See also

## External links

Although the terminology of vector spaces is used below and the symbol R can be taken to mean the field of real numbers, the results discussed hold whenever R is a commutative ring and vector space is everywhere replaced with free R-module.

If T : R^{n} → R^{m} is a linear transformation, the m × n matrix of T is the matrix t whose j-th column is T(e_{j}) for j = 1, ..., n. In this case we have T(x) = tx for all x in R^{n}, where we regard x as a column vector and the multiplication on the right side is matrix multiplication. It is a basic fact in linear algebra that the vector space Hom(R^{n}, R^{m}) of all linear transformations from R^{n} to R^{m} is naturally isomorphic to the space R^{m × n} of m × n matrices over R; that is, a linear transformation T : R^{n} → R^{m} is for all intents and purposes equivalent to its matrix t.
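A small sketch of this correspondence (NumPy, with an illustrative map T : R^{2} → R^{3}) builds t column by column from the images T(e_{j}):

```python
import numpy as np

# An illustrative linear map T : R^2 -> R^3.
def T(x):
    x1, x2 = x
    return np.array([x1 + x2, 2.0 * x1, -x2])

# The j-th column of the matrix t is T(e_j).
e = np.eye(2)
t = np.column_stack([T(e[:, j]) for j in range(2)])

# For every x in R^2, T(x) equals the matrix-vector product t x.
x = np.array([1.0, 4.0])
assert np.allclose(T(x), t @ x)
```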

We will also make use of the following simple observation.

Theorem Let V and W be vector spaces, let {α_{1}, ..., α_{n}} be a basis for V, and let {γ_{1}, ..., γ_{n}} be any n vectors in W. Then there exists a unique linear transformation T : V → W with T(α_{j}) = γ_{j} for j = 1, ..., n.

This unique T is defined by T(x_{1}α_{1} + ... + x_{n}α_{n}) = x_{1}γ_{1} + ... + x_{n}γ_{n}. Of course, if {γ_{1}, ..., γ_{n}} happens to be a basis for W, then T is bijective as well as linear; in other words, T is an isomorphism. If in this case we also have W = V, then T is said to be an automorphism.

Now let V be a vector space over R and suppose {α_{1}, ..., α_{n}} is a basis for V. By definition, if ξ is a vector in V then ξ = x_{1}α_{1} + ... + x_{n}α_{n} for a unique choice of scalars x_{1}, ..., x_{n} in R called the coordinates of ξ relative to the ordered basis {α_{1}, ..., α_{n}}. The vector x = (x_{1}, ..., x_{n}) in R^{n} is called the coordinate tuple of ξ (relative to this basis). The unique linear map φ : R^{n} → V with φ(e_{j}) = α_{j} for j = 1, ..., n is called the coordinate isomorphism for V and the basis {α_{1}, ..., α_{n}}. Thus φ(x) = ξ if and only if ξ = x_{1}α_{1} + ... + x_{n}α_{n}.
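As a sketch (NumPy, with an illustrative basis of R^{2}), the coordinate tuple of ξ is found by solving a linear system, and φ is simply multiplication by the matrix whose columns are the basis vectors:

```python
import numpy as np

# A basis {alpha_1, alpha_2} of R^2, stored as the columns of A.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])   # alpha_1 = (1, 0), alpha_2 = (1, 2)

# The coordinate isomorphism phi sends x to x_1 alpha_1 + x_2 alpha_2.
def phi(x):
    return A @ x

# Its inverse recovers the coordinate tuple of xi by solving A x = xi.
xi = np.array([3.0, 4.0])
x = np.linalg.solve(A, xi)
assert np.allclose(phi(x), xi)
```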

Theorem Suppose U, V and W are vector spaces of finite dimension and an ordered basis is chosen for each. If T : U → V and S : V → W are linear transformations with matrices s and t, then the matrix of the linear transformation S o T : U → W (with respect to the given bases) is st.
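A minimal numerical check of this theorem (NumPy; the matrices are illustrative and the bases are the standard ones, so the coordinate isomorphisms are identities):

```python
import numpy as np

# Illustrative matrices: T : R^2 -> R^3 has matrix t, S : R^3 -> R^2 has matrix s.
t = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])
s = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

# The composite S o T acts on x as s (t x), so its matrix is the product s t.
x = np.array([1.0, -1.0])
assert np.allclose(s @ (t @ x), (s @ t) @ x)
```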

Let T_{1} = ψ_{1}^{-1} o T o φ_{1}, and T_{2} = ψ_{2}^{-1} o T o φ_{2} (both maps taking R^{n} to R^{m}), and let t_{1} and t_{2} be their respective matrices. Let p and q be the matrices of the change-of-coordinates automorphisms φ_{2}^{-1} o φ_{1} on R^{n} and ψ_{2}^{-1} o ψ_{1} on R^{m}.

The relationships of these various maps to one another are illustrated in the following commutative diagram.

(insert standard change-of-basis diagram)

Since we have T_{2} = ψ_{2}^{-1} o T o φ_{2} = (ψ_{2}^{-1} o ψ_{1}) o T_{1} o (φ_{1}^{-1} o φ_{2}), and since composition of linear maps corresponds to matrix multiplication, it follows that

- t_{2} = q t_{1} p^{-1}.
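The formula above can be checked numerically. In this sketch (NumPy) the first bases of V = R^{2} and W = R^{2} are the standard ones and the second bases are illustrative:

```python
import numpy as np

# Old bases (standard) and illustrative new bases for V = R^2 and W = R^2,
# stored as columns.
V1, W1 = np.eye(2), np.eye(2)
V2 = np.array([[1.0, 1.0],
               [0.0, 1.0]])
W2 = np.array([[2.0, 0.0],
               [1.0, 1.0]])

# Matrix of T with respect to the old bases.
t1 = np.array([[1.0, 2.0],
               [3.0, 4.0]])

# Change-of-coordinates matrices p (on R^n) and q (on R^m).
p = np.linalg.solve(V2, V1)   # matrix of phi_2^{-1} o phi_1
q = np.linalg.solve(W2, W1)   # matrix of psi_2^{-1} o psi_1

# Matrix of T with respect to the new bases.
t2 = q @ t1 @ np.linalg.inv(p)

# Direct check: the new coordinates of T(xi), computed two ways, agree.
x = np.array([1.0, 2.0])             # old coordinates of some xi
eta = t1 @ x                         # old coordinates of T(xi)
assert np.allclose(t2 @ (p @ x), np.linalg.solve(W2, W1 @ eta))
```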

For an endomorphism we apply the same change of basis to domain and codomain, so that q = p and the change of basis formula becomes

- t_{2} = p t_{1} p^{-1}.

In this situation the invertible matrix p is called a change-of-basis matrix for the vector space V, and the equation above says that the matrices t_{1} and t_{2} are similar.
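A quick sketch of this similarity (NumPy, with an illustrative t_{1} and p): basis-independent quantities of the endomorphism, such as trace, determinant and eigenvalues, are unchanged by t_{2} = p t_{1} p^{-1}:

```python
import numpy as np

t1 = np.array([[2.0, 1.0],
               [0.0, 3.0]])
p = np.array([[1.0, 1.0],
              [1.0, 2.0]])   # any invertible change-of-basis matrix

t2 = p @ t1 @ np.linalg.inv(p)

# Similar matrices represent the same endomorphism in different bases, so
# basis-independent quantities agree.
assert np.isclose(np.trace(t2), np.trace(t1))
assert np.isclose(np.linalg.det(t2), np.linalg.det(t1))
assert np.allclose(np.sort(np.linalg.eigvals(t2)), np.sort(np.linalg.eigvals(t1)))
```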

A bilinear form on a vector space V over a field R is a mapping V × V → R which is linear in both arguments. That is, B : V × V → R is bilinear if the maps

- $v \mapsto B(v, w)$

- $v \mapsto B(w, v)$

are linear for each fixed w in V.

The Gram matrix G attached to a basis $\alpha_1, \dots, \alpha_n$ is defined by

- $G_{i,j} = B(\alpha_i, \alpha_j).$

If $v = \sum_i x_i \alpha_i$ and $w = \sum_i y_i \alpha_i$ are the expressions of vectors v, w with respect to this basis, then the bilinear form is given by

- $B(v, w) = x^\top G y.$

The matrix G is symmetric if and only if the bilinear form B is a symmetric bilinear form.

If P is the invertible matrix representing a change of basis from $\alpha_1, \dots, \alpha_n$ to $\alpha'_1, \dots, \alpha'_n$, then the Gram matrix transforms by the matrix congruence

- $G' = P^\top G P.$
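Both formulas can be sketched numerically (NumPy; the bilinear form and the bases here are illustrative):

```python
import numpy as np

# Illustrative symmetric bilinear form on R^2.
def B(u, v):
    return u[0] * v[0] + 2.0 * u[1] * v[1]

# A basis of R^2, as the columns of A, and its Gram matrix G_{ij} = B(a_i, a_j).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
G = np.array([[B(A[:, i], A[:, j]) for j in range(2)] for i in range(2)])

# B(v, w) = x^T G y for coordinate tuples x, y.
x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
v, w = A @ x, A @ y
assert np.isclose(B(v, w), x @ G @ y)

# Under a change of basis with matrix P, the Gram matrix transforms by congruence.
P = np.array([[2.0, 1.0],
              [0.0, 1.0]])   # columns: new basis vectors in old coordinates
A2 = A @ P                   # new basis vectors
G2 = np.array([[B(A2[:, i], A2[:, j]) for j in range(2)] for i in range(2)])
assert np.allclose(G2, P.T @ G @ P)
```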

Let's say we have a train rolling on rails. Using a Cartesian coordinate system, let the rail be headed straight in the X-direction (east, on most maps). Now, if we push the train in the X-direction, it will move, but if we try to push the train in the Y-direction (north), it won't be able to move (without derailing).

We could formulate this as a matrix, where the first column shows acceleration of the train when pushed in the X-direction =(1,0), and the second column shows the acceleration of the train when pushed in the Y-direction =(0,0):

- $A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$

Now, let's say that we want the rail to be headed in a northeasterly direction (45 degrees on the compass).

How should our matrix look now? If we push the train in the X-direction, it should move a little in the X-direction, but it should also move in the Y-direction.

A basis change lets us find the matrix B which describes the movement of the train on a northeasterly rail.

All we need to do is to change basis to a basis where the X-axis is in the direction of the rails, multiply with our matrix A, and then change back to the original basis.

A rotation matrix for a 45 degree rotation looks like this:

- $R = \begin{pmatrix} \cos 45^\circ & -\sin 45^\circ \\ \sin 45^\circ & \cos 45^\circ \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}$

Let the direction we're pushing the train in be P. Putting P in our new basis:

- $R^{-1} P$

Applying our matrix A:

- $A (R^{-1} P)$

Changing back to the original basis:

- $R (A (R^{-1} P))$

And using the associativity of matrix multiplication, we can remove the parentheses:

- $R A R^{-1} P$

And identify the matrix we were looking for:

- $R A R^{-1}$

And by remembering that the inverse of a rotation matrix is simply its transpose (this step isn't really necessary, but transpose is a quicker operation to do by hand than finding the inverse), our final answer is:

- $B = R A R^{T} = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$

We can now see (by looking at the first column of the matrix) that if we push the train in the X-direction, it will move in the direction of the rail.

If we try to push in the north-westerly direction, the train will not move:

- $B P = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, taking $P = \begin{pmatrix} -1 \\ 1 \end{pmatrix}$ for the north-westerly direction.

So the matrix we found seems to do the trick.
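The whole computation can be reproduced numerically (NumPy; the matrices follow the text above, using R^{-1} = R^{T} for a rotation):

```python
import numpy as np

# Acceleration matrix in rail-aligned coordinates: a push along the rail moves
# the train, a push perpendicular to it does nothing.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# Rotation by 45 degrees (the rails now head north-east).
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
R = np.array([[c, -s],
              [s,  c]])

# Change basis, apply A, change back: B = R A R^{-1} = R A R^T.
B = R @ A @ R.T

# A push due east moves the train along the rail (north-east) ...
assert np.allclose(B @ np.array([1.0, 0.0]), np.array([0.5, 0.5]))

# ... while a push to the north-west does not move it at all.
assert np.allclose(B @ np.array([-1.0, 1.0]), np.array([0.0, 0.0]))
```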

- In the split-complex number plane there is an alternative “diagonal basis”. The standard hyperbola x^{2} − y^{2} = 1 becomes xy = 1 after the change of basis. Transformations of the plane that leave the hyperbolae in place correspond to each other, modulo a change of basis. The contextual difference is profound enough to then separate Lorentz boost from squeeze mapping. A panoramic view of the literature of these mappings can be taken using the underlying change of basis.
- With the 2 × 2 real matrices one finds the beginning of a catalogue of linear algebras due to Arthur Cayley. His associate James Cockle put forward in 1849 his algebra of coquaternions or split-quaternions, which are the same algebra as the 2 × 2 real matrices, just laid out on a different matrix basis. Once again it is the concept of change of basis that synthesizes Cayley’s matrix algebra and Cockle’s coquaternions.

- integral transform, the continuous analogue of change of basis.

- MIT Linear Algebra Lecture on Change of Bases at Google Video, from MIT OpenCourseWare

Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.

Last updated on Friday April 04, 2008 at 20:01:47 PDT (GMT -0700)

View this article at Wikipedia.org - Edit this article at Wikipedia.org - Donate to the Wikimedia Foundation
