Calculus of variations is a field of mathematics that deals with functionals, as opposed to ordinary calculus, which deals with functions. Such functionals can, for example, be formed as integrals involving an unknown function and its derivatives. The interest is in extremal functions: those making the functional attain a maximum or minimum value.

Perhaps the simplest example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is obviously a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, where the optical length depends upon the material of the medium. One corresponding concept in mechanics is the principle of least action.

Many important problems involve functions of several variables. Solutions of boundary value problems for the Laplace equation satisfy the Dirichlet principle. Plateau's problem requires finding a surface of minimal area that spans a given contour in space: the solution or solutions may be found by dipping a wire frame in a solution of soap suds. Although such experiments are relatively easy to perform, their mathematical interpretation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivial topology.

## Weak and strong extrema

Recall that the supremum norm for real continuous functions on a topological space $X$ is defined as
- $\|y\| = \sup\{\,|y(x)| : x \in X\,\}$.

A functional $J(y)$ defined on some appropriate space of functions $V$ with norm $\|\cdot\|_V$ is said to have a weak extremum at the point $y_0$ if there exists some $\delta > 0$ such that, for all functions $y$ with

- $\|y - y_0\|_V < \delta$,

$J(y_0) - J(y)$ has the same sign. Typically, $V$ is the space of $r$-times continuously differentiable functions on a compact subset $E$ of the real line, with its norm given by

- $\|y\|_V = \sum_{n=0}^{r} \sup\{\,|y^{(n)}(x)| : x \in E\,\}$.

This norm is just the sum of the supremum norms of $y$ and its derivatives.

A functional $J$ is said to have a strong extremum at $y_0$ if $J(y_0) - J(y)$ has the same sign for all functions in a $\delta$-neighbourhood of $y_0$ in the norm of continuous functions, as opposed to whichever norm the space may have been given. If $y_0$ is a strong extremum for $J$, then it is also a weak extremum, but the converse may not hold. Finding strong extrema is usually more difficult than finding weak extrema, and in what follows it will be assumed that we are looking for weak extrema.

## The Euler–Lagrange equation

Under ideal conditions, the maxima and minima of a given function may be located by finding the points where its derivative vanishes. By analogy, solutions of smooth variational problems may be obtained by solving the associated Euler–Lagrange equation. To illustrate this process, consider the problem of finding the shortest curve in the plane that connects two points $(x_1, y_1)$ and $(x_2, y_2)$. The arc length is given by

- $A[f] = \int_{x_1}^{x_2} \sqrt{1 + [f'(x)]^2}\,dx,$

with

- $f'(x) = \frac{df}{dx},$

and where $y = f(x)$, $f(x_1) = y_1$, and $f(x_2) = y_2$. The function $f$ should have at least one derivative in order for the arc-length integral to be defined. Furthermore, if $f_0$ is a local minimum and $f_1$ is an arbitrary function that has at least one derivative and vanishes at the endpoints $x_1$ and $x_2$, then we must have

- $A[f_0] \le A[f_0 + \epsilon f_1]$

for any number $\epsilon$ close to 0. Therefore, the derivative of $A[f_0 + \epsilon f_1]$ with respect to $\epsilon$ (the first variation of $A$) must vanish at $\epsilon = 0$. Thus

- $\int_{x_1}^{x_2} \frac{f_0'(x)\, f_1'(x)}{\sqrt{1 + [f_0'(x)]^2}}\,dx = 0$

for any choice of the function $f_1$. We may interpret this condition as the vanishing of all directional derivatives of $A[f_0]$ in the space of differentiable functions, and this is formalized by requiring the Fréchet derivative of $A$ to vanish at $f_0$. If we assume that $f_0$ has two continuous derivatives (or if we consider weak derivatives), then we may use integration by parts:

- $\int_a^b u(x)\, v'(x)\,dx = \left[u(x)\, v(x)\right]_a^b - \int_a^b u'(x)\, v(x)\,dx$

with the substitution

- $u(x) = \frac{f_0'(x)}{\sqrt{1 + [f_0'(x)]^2}}, \quad v'(x) = f_1'(x),$

then we have

- $\left[u(x)\, v(x)\right]_{x_1}^{x_2} - \int_{x_1}^{x_2} f_1(x)\, \frac{d}{dx}\left[\frac{f_0'(x)}{\sqrt{1 + [f_0'(x)]^2}}\right] dx = 0,$

but the first term is zero since $v(x) = f_1(x)$ was chosen to vanish at $x_1$ and $x_2$, where the evaluation is taken. Therefore,

- $\int_{x_1}^{x_2} f_1(x)\, \frac{d}{dx}\left[\frac{f_0'(x)}{\sqrt{1 + [f_0'(x)]^2}}\right] dx = 0$

for any twice differentiable function $f_1$ that vanishes at the endpoints of the interval. This is a special case of the fundamental lemma of the calculus of variations:

- $I = \int_{x_1}^{x_2} f_1(x)\, H(x)\,dx = 0$

for any differentiable function $f_1(x)$ that vanishes at the endpoints of the interval. Since $f_1(x)$ is an arbitrary function within the integration range, we conclude that $H(x) = 0$. Therefore,

- $\frac{d}{dx}\left[\frac{f_0'(x)}{\sqrt{1 + [f_0'(x)]^2}}\right] = 0.$

It follows from this equation that

- $\frac{d^2 f_0}{dx^2} = 0,$

and hence the extremals are straight lines.
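
As a sanity check, this reduction can be reproduced with a computer algebra system. The following sketch (an illustration, not part of the original article) applies SymPy's `euler_equations` helper to the arc-length integrand:

```python
# Sketch: the Euler-Lagrange equation for the arc-length functional
# A[f] = int sqrt(1 + f'(x)^2) dx reduces to f''(x) = 0.
from sympy import Function, sqrt, symbols, simplify
from sympy.calculus.euler import euler_equations

x = symbols('x')
f = Function('f')

L = sqrt(1 + f(x).diff(x)**2)        # arc-length integrand
eq, = euler_equations(L, f(x), x)    # -d/dx(dL/df') + dL/df = 0
# The equation simplifies to f''(x) = 0 up to a nonzero factor,
# so the extremals are straight lines.
print(simplify(eq))
```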

A similar calculation holds in the general case where

- $A[f] = \int_{x_1}^{x_2} L(x, f, f')\,dx,$

and $f$ is required to have two continuous derivatives. Again, we find an extremal $f_0$ by setting $f = f_0 + \epsilon f_1$, taking the derivative with respect to $\epsilon$, and setting $\epsilon = 0$ at the end:

$$
\begin{align}
\left.\frac{dA}{d\epsilon}\right|_{\epsilon = 0}
&= \int_{x_1}^{x_2} \left.\frac{dL}{d\epsilon}\right|_{\epsilon = 0} dx \\
&= \int_{x_1}^{x_2} \left(\frac{\partial L}{\partial f} f_1 + \frac{\partial L}{\partial f'} f_1'\right) dx \\
&= \int_{x_1}^{x_2} \left(\frac{\partial L}{\partial f} f_1 - f_1\, \frac{d}{dx}\frac{\partial L}{\partial f'}\right) dx + \left[f_1\, \frac{\partial L}{\partial f'}\right]_{x_1}^{x_2} \\
&= 0,
\end{align}
$$

where we have used the chain rule in the second line and integration by parts in the third. As before, the last term in the third line vanishes due to our choice of $f_1$. Finally, according to the fundamental lemma of the calculus of variations, we find that $f_0$ will satisfy the Euler–Lagrange equation

- $-\frac{d}{dx}\frac{\partial L}{\partial f'} + \frac{\partial L}{\partial f} = 0.$

In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal $f$. The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremal. Sufficient conditions for an extremal are discussed in the references.

## The Beltrami Identity

Frequently in physical problems it turns out that $\partial L/\partial x = 0$, that is, the Lagrangian does not depend explicitly on $x$. In that case, the Euler–Lagrange equation can be simplified using the Beltrami identity:

- $L - f'\frac{\partial L}{\partial f'} = C,$

where $C$ is a constant.
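
To see why the identity holds, one can differentiate its left-hand side along a solution; the short computation below is a sketch of the standard argument:

```latex
\frac{d}{dx}\left(L - f'\frac{\partial L}{\partial f'}\right)
  = \frac{\partial L}{\partial x}
  + f'\left(\frac{\partial L}{\partial f}
  - \frac{d}{dx}\frac{\partial L}{\partial f'}\right)
```

The second term vanishes whenever the Euler–Lagrange equation holds, so if $\partial L/\partial x = 0$ as well, the left-hand side has zero derivative and must equal a constant $C$.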

## du Bois Reymond's theorem

The discussion thus far has assumed that extremal functions possess two continuous derivatives, although the existence of the integral $A$ requires only first derivatives of trial functions. The condition that the first variation vanish at an extremal may be regarded as a weak form of the Euler–Lagrange equation. The theorem of du Bois Reymond asserts that this weak form implies the strong form. If $L$ has continuous first and second derivatives with respect to all of its arguments, and if

- $\frac{\partial^2 L}{(\partial f')^2} \ne 0,$

then $f_0$ has two continuous derivatives, and it satisfies the Euler–Lagrange equation.

## Fermat's principle

Fermat's principle states that light takes a path that (locally) minimizes the optical length between its endpoints. If the $x$-coordinate is chosen as the parameter along the path, and $y = f(x)$ along the path, then the optical length is given by

- $A[f] = \int_{x_0}^{x_1} n(x, f(x))\, \sqrt{1 + f'(x)^2}\,dx,$

where the refractive index $n(x, y)$ depends upon the material. If we try $f(x) = f_0(x) + \epsilon f_1(x)$, then the first variation of $A$ (the derivative of $A$ with respect to $\epsilon$) is

- $\delta A[f_0, f_1] = \int_{x_0}^{x_1} \left[\frac{n(x, f_0)\, f_0'(x)\, f_1'(x)}{\sqrt{1 + f_0'(x)^2}} + n_y(x, f_0)\, f_1 \sqrt{1 + f_0'(x)^2}\right] dx.$

After integration by parts of the first term within brackets, we obtain the Euler–Lagrange equation

- $-\frac{d}{dx}\left[\frac{n(x, f_0)\, f_0'}{\sqrt{1 + f_0'^2}}\right] + n_y(x, f_0)\, \sqrt{1 + f_0'^2} = 0.$

The light rays may be determined by integrating this equation.

### Snell's law

There is a discontinuity of the refractive index when light enters or leaves a lens. Let

- $n(x, y) = n_{-} \quad \text{if} \quad x < 0,$

- $n(x, y) = n_{+} \quad \text{if} \quad x > 0,$

where $n_{-}$ and $n_{+}$ are constants. Then the Euler–Lagrange equation holds as before in the region where $x < 0$ or $x > 0$, and in fact the path is a straight line there, since the refractive index is constant. At $x = 0$, $f$ must be continuous, but $f'$ may be discontinuous. After integration by parts in the separate regions and using the Euler–Lagrange equations, the first variation takes the form

- $\delta A[f_0, f_1] = f_1(0)\left[n_{-}\frac{f_0'(0^{-})}{\sqrt{1 + f_0'(0^{-})^2}} - n_{+}\frac{f_0'(0^{+})}{\sqrt{1 + f_0'(0^{+})^2}}\right].$

The factor multiplying $n_{-}$ is the sine of the angle of the incident ray with the $x$ axis, and the factor multiplying $n_{+}$ is the sine of the angle of the refracted ray with the $x$ axis. Snell's law for refraction requires that these terms be equal. As this calculation demonstrates, Snell's law is equivalent to the vanishing of the first variation of the optical path length.
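
In symbols, $n_{-}\sin\theta_{-} = n_{+}\sin\theta_{+}$, where $\theta_{\mp}$ are the angles with the $x$ axis. A small numeric sketch (the indices and angle are illustrative choices, roughly air to glass, and not taken from the article):

```python
# Sketch: Snell's law n_minus*sin(theta_minus) = n_plus*sin(theta_plus),
# solved for the angle of the refracted ray.
import math

n_minus, n_plus = 1.0, 1.5         # illustrative indices (air, glass)
theta_minus = math.radians(30.0)   # incident angle with the x axis

theta_plus = math.asin(n_minus * math.sin(theta_minus) / n_plus)
print(math.degrees(theta_plus))    # about 19.5 degrees
```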

### Fermat's principle in three dimensions

It is expedient to use vector notation: let $X = (x_1, x_2, x_3)$, let $t$ be a parameter, let $X(t)$ be the parametric representation of a curve $C$, and let $\dot X(t)$ be its tangent vector. The optical length of the curve is given by

- $A[C] = \int_{t_0}^{t_1} n(X)\, \sqrt{\dot X \cdot \dot X}\,dt.$

Note that this integral is invariant with respect to changes in the parametric representation of $C$. The Euler–Lagrange equations for a minimizing curve have the symmetric form

- $\frac{d}{dt} P = \sqrt{\dot X \cdot \dot X}\; \nabla n,$

where

- $P = \frac{n(X)\, \dot X}{\sqrt{\dot X \cdot \dot X}}.$

It follows from the definition that $P$ satisfies

- $P \cdot P = n(X)^2.$

Therefore the integral may also be written as

- $A[C] = \int_{t_0}^{t_1} P \cdot \dot X\,dt.$

This form suggests that if we can find a function ψ whose gradient is given by P, then the integral A is given by the difference of ψ at the endpoints of the interval of integration. Thus the problem of studying the curves that make the integral stationary can be related to the study of the level surfaces of ψ. In order to find such a function, we turn to the wave equation, which governs the propagation of light.

#### Connection with the wave equation

The wave equation for an inhomogeneous medium is

- $u_{tt} = c^2\, \nabla \cdot \nabla u,$

where $c$ is the velocity, which generally depends upon $X$. Wave fronts for light are characteristic surfaces for this partial differential equation: they satisfy

- $\varphi_t^2 = c(X)^2\, \nabla\varphi \cdot \nabla\varphi.$

We may look for solutions in the form

- $\varphi(t, X) = t - \psi(X).$

In that case, $\psi$ satisfies

- $\nabla\psi \cdot \nabla\psi = n^2,$

where $n = 1/c$. According to the theory of first-order partial differential equations, if $P = \nabla\psi$, then $P$ satisfies

- $\frac{dP}{ds} = n\, \nabla n$

along a system of curves (the light rays) that are given by

- $\frac{dX}{ds} = P.$

These equations for the solution of a first-order partial differential equation are identical to the Euler–Lagrange equations if we make the identification

- $\frac{ds}{dt} = \frac{\sqrt{\dot X \cdot \dot X}}{n}.$

We conclude that the function $\psi$ is the value of the minimizing integral $A$ as a function of the upper end point. That is, when a family of minimizing curves is constructed, the values of the optical length satisfy the characteristic equation corresponding to the wave equation. Hence, solving the associated first-order partial differential equation is equivalent to finding families of solutions of the variational problem. This is the essential content of the Hamilton–Jacobi theory, which applies to more general variational problems.
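
As a numerical illustration of the ray equations $dX/ds = P$, $dP/ds = n\,\nabla n$, the following sketch traces a ray through a medium whose index grows linearly in one direction (the index profile and step size are illustrative assumptions, not from the article):

```python
# Sketch: integrate the ray equations dX/ds = P, dP/ds = n*grad(n)
# with forward Euler steps, for an illustrative index n(X) = 1 + 0.2*x3.
import numpy as np

def n(X):
    return 1.0 + 0.2 * X[2]

def grad_n(X):
    return np.array([0.0, 0.0, 0.2])

X = np.array([0.0, 0.0, 0.0])   # starting point
P = np.array([1.0, 0.0, 0.0])   # initial direction, |P| = n(X) = 1
ds = 0.01
for _ in range(1000):
    X = X + ds * P
    P = P + ds * n(X) * grad_n(X)

print(X)  # the ray bends toward the region of higher refractive index
```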

## The action principle

The action was defined by Hamilton to be the time integral of the Lagrangian, $L$, which is defined as a difference of energies:

- $L = T - U,$

where $T$ is the kinetic energy and $U$ is the potential energy of a mechanical system. The action is then

- $A[C] = \int_{t_0}^{t_1} L(X, \dot X)\,dt,$

and Hamilton's principle states that the actual trajectory renders the action stationary. The associated Euler–Lagrange equations are the equations of motion:

- $\frac{d}{dt} \frac{\partial L}{\partial \dot X} = \frac{\partial L}{\partial X}.$

The conjugate momenta $P$ are defined by

- $P = \frac{\partial L}{\partial \dot X}.$

For example, if the kinetic energy is

- $T = \frac{1}{2} m\, \dot x^2,$

then

- $P = m\, \dot x.$

Hamiltonian mechanics results if the conjugate momenta are introduced in place of $\dot X$ and the Lagrangian is replaced by the Hamiltonian $H$, defined by

- $H(X, P) = -L(X, \dot X) + P \cdot \dot X.$

The analogue of the eikonal equation of optics is the Hamilton–Jacobi equation for the function $\psi$:

- $\frac{\partial \psi}{\partial t} + H(X, \nabla\psi) = 0.$
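
As a concrete instance, the sketch below derives Lagrange's equation for a one-dimensional harmonic oscillator (the system and symbols are illustrative, not taken from the article), using the same SymPy helper as before:

```python
# Sketch: Lagrange's equation for L = (1/2)*m*xdot^2 - (1/2)*k*x^2,
# an illustrative harmonic oscillator.
from sympy import Function, Rational, symbols
from sympy.calculus.euler import euler_equations

t, m, k = symbols('t m k', positive=True)
x = Function('x')

L = Rational(1, 2) * m * x(t).diff(t)**2 - Rational(1, 2) * k * x(t)**2
eq, = euler_equations(L, x(t), t)
print(eq)  # Eq(-k*x(t) - m*Derivative(x(t), (t, 2)), 0), i.e. m*x'' = -k*x
```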

## Functions of several variables

Variational problems that involve multiple integrals arise in numerous applications. For example, if $\varphi(x, y)$ denotes the displacement of a membrane above the domain $D$ in the $x, y$ plane, then its potential energy is proportional to its surface area:

- $U[\varphi] = \iint_D \sqrt{1 + \nabla\varphi \cdot \nabla\varphi}\,dx\,dy.$

Plateau's problem consists of finding a function that minimizes the surface area while assuming prescribed values on the boundary of $D$; the solutions are called minimal surfaces. The Euler–Lagrange equation for this problem is nonlinear:

- $\varphi_{xx}(1 + \varphi_y^2) + \varphi_{yy}(1 + \varphi_x^2) - 2\varphi_x \varphi_y \varphi_{xy} = 0.$

### Dirichlet's principle

It is often sufficient to consider only small displacements of the membrane, whose energy difference from no displacement is approximated by

- $V[\varphi] = \frac{1}{2}\iint_D \nabla\varphi \cdot \nabla\varphi\,dx\,dy.$

The functional $V$ is to be minimized among all trial functions $\varphi$ that assume prescribed values on the boundary of $D$. If $u$ is the minimizing function and $v$ is an arbitrary smooth function that vanishes on the boundary of $D$, then the first variation of $V[u + \epsilon v]$ must vanish:

- $\left.\frac{d}{d\epsilon} V[u + \epsilon v]\right|_{\epsilon = 0} = \iint_D \nabla u \cdot \nabla v\,dx\,dy = 0.$

Provided that $u$ has two derivatives, we may apply the divergence theorem to obtain

- $\iint_D \nabla \cdot (v\, \nabla u)\,dx\,dy = \iint_D \nabla u \cdot \nabla v\,dx\,dy + \iint_D v\, \nabla \cdot \nabla u\,dx\,dy = \int_C v\, \frac{\partial u}{\partial n}\,ds,$

where $C$ is the boundary curve of $D$, $s$ is arclength along $C$, and $\partial u/\partial n$ is the normal derivative of $u$ on $C$. Since $v$ vanishes on $C$ and the first variation vanishes, it follows that

- $\iint_D v\, \nabla \cdot \nabla u\,dx\,dy = 0$

for all smooth functions $v$ that vanish on the boundary of $D$. The proof for the one-dimensional case may be adapted to this case to show that

- $\nabla \cdot \nabla u = 0$ in $D$.
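
Numerically, minimizing a discretized version of $V$ leads to the same conclusion; the sketch below relaxes a grid toward the discrete Laplace equation by Jacobi iteration (the grid size and boundary data are illustrative assumptions):

```python
# Sketch: relax toward the minimizer of a discretized Dirichlet energy;
# the minimizer satisfies the discrete Laplace equation in the interior.
import numpy as np

N = 50
u = np.zeros((N, N))
u[0, :] = 1.0                          # illustrative boundary values
for _ in range(5000):
    # Drive each interior value toward the average of its neighbours,
    # the condition for a discrete harmonic function.
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])

print(u[N // 2, N // 2])               # a value of the harmonic solution
```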

The difficulty with this reasoning is the assumption that the minimizing function $u$ must have two derivatives. Riemann argued that the existence of a smooth minimizing function was assured by the connection with the physical problem: membranes do indeed assume configurations with minimal potential energy. Riemann named this idea Dirichlet's principle in honor of his teacher Dirichlet. However, Weierstrass gave an example of a variational problem with no solution: minimize

- $W[\varphi] = \int_{-1}^{1} (x \varphi')^2\,dx$

among all functions $\varphi$ that satisfy $\varphi(-1) = -1$ and $\varphi(1) = 1$. $W$ can be made arbitrarily small by choosing functions that make the transition between $-1$ and $1$ in a small neighbourhood of the origin, yet no admissible function makes $W = 0$, so the infimum is not attained.

### Generalization to other boundary value problems

A more general expression for the potential energy of a membrane is

- $V[\varphi] = \iint_D \left[\frac{1}{2} \nabla\varphi \cdot \nabla\varphi + f(x, y)\, \varphi\right] dx\,dy + \int_C \left[\frac{1}{2} \sigma(s)\, \varphi^2 + g(s)\, \varphi\right] ds.$

This corresponds to an external force density $f(x, y)$ in $D$, an external force $g(s)$ on the boundary $C$, and elastic forces with modulus $\sigma(s)$ acting on $C$. If $u$ minimizes $V$ with no restriction imposed on its boundary values, the first variation of $V[u + \epsilon v]$ must vanish for every smooth $v$:

- $\iint_D \left[\nabla u \cdot \nabla v + f v\right] dx\,dy + \int_C \left[\sigma u v + g v\right] ds = 0.$

Applying the divergence theorem as before, this becomes

- $\iint_D \left[-v\, \nabla \cdot \nabla u + v f\right] dx\,dy + \int_C v \left[\frac{\partial u}{\partial n} + \sigma u + g\right] ds = 0.$

If we first restrict to functions $v$ that vanish on $C$, the boundary integral drops out and we conclude, as before, that

- $-\nabla \cdot \nabla u + f = 0$ in $D$.

If we then allow $v$ to assume arbitrary boundary values, the boundary integral must vanish as well, so $u$ must also satisfy the boundary condition

- $\frac{\partial u}{\partial n} + \sigma u + g = 0$ on $C$.

This boundary condition is a consequence of the minimizing property of $u$: it is not imposed beforehand. Such conditions are called natural boundary conditions.

The preceding reasoning is not valid if $\sigma$ vanishes identically on $C$. In such a case, we could allow a trial function $\varphi \equiv c$, where $c$ is a constant. For such a trial function,

- $V[c] = c\left[\iint_D f\,dx\,dy + \int_C g\,ds\right].$

By appropriate choice of $c$, $V$ can assume any value unless the quantity inside the brackets vanishes. Therefore the variational problem is meaningless unless

- $\iint_D f\,dx\,dy + \int_C g\,ds = 0.$

This condition implies that the net external forces on the system are in equilibrium.

## Eigenvalue problems

Both one-dimensional and multi-dimensional eigenvalue problems can be formulated as variational problems.

### Sturm-Liouville problems

The Sturm-Liouville eigenvalue problem involves a general quadratic form

- $Q[\varphi] = \int_{x_1}^{x_2} \left[p(x)\, \varphi'(x)^2 + q(x)\, \varphi(x)^2\right] dx,$

where $\varphi$ is restricted to functions that satisfy the boundary conditions

- $\varphi(x_1) = 0, \quad \varphi(x_2) = 0.$

Let $R$ be the normalization integral

- $R[\varphi] = \int_{x_1}^{x_2} r(x)\, \varphi(x)^2\,dx,$

where $p(x)$ and $r(x)$ are required to be positive. The primary variational problem is to minimize the ratio $Q/R$ among all $\varphi$ satisfying the endpoint conditions. The minimizing function $u$ satisfies the Euler–Lagrange equation

- $-(p u')' + q u - \lambda r u = 0,$

where $\lambda$ is the quotient

- $\lambda = \frac{Q[u]}{R[u]}.$

This $\lambda$ is the lowest eigenvalue of the equation with the given boundary conditions; the minimizing function will be denoted by $u_1(x)$. The next smallest eigenvalue and eigenfunction can be obtained by minimizing $Q$ under the additional constraint

- $\int_{x_1}^{x_2} r(x)\, u_1(x)\, \varphi(x)\,dx = 0.$

This procedure can be extended to obtain the complete sequence of eigenvalues and eigenfunctions for the problem.
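
The minimization of $Q/R$ has a direct numerical counterpart: discretizing turns the quotient into a matrix Rayleigh quotient, whose minimum is the smallest matrix eigenvalue. Below is a sketch for the illustrative case $p = r = 1$, $q = 0$ on $(0, \pi)$, where the exact lowest eigenvalue is $1$ with eigenfunction $\sin x$:

```python
# Sketch: lowest eigenvalue of -u'' = lambda*u on (0, pi) with
# u(0) = u(pi) = 0, via a finite-difference discretization.
import numpy as np

N = 200
x = np.linspace(0.0, np.pi, N + 2)   # grid including both endpoints
h = x[1] - x[0]

# Tridiagonal matrix representing -u'' at the interior grid points.
main = 2.0 * np.ones(N) / h**2
off = -1.0 * np.ones(N - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

print(np.linalg.eigvalsh(A)[0])      # close to the exact value 1.0
```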

The variational problem also applies to more general boundary conditions. Instead of requiring that $\varphi$ vanish at the endpoints, we may impose no condition at the endpoints, and set

- $Q[\varphi] = \int_{x_1}^{x_2} \left[p(x)\, \varphi'(x)^2 + q(x)\, \varphi(x)^2\right] dx + a_1\, \varphi(x_1)^2 + a_2\, \varphi(x_2)^2,$

where $a_1$ and $a_2$ are constants. If $u$ minimizes the ratio $Q/R$, the first variation of the ratio is

- $V_1 = \frac{2}{R[u]} \left(\int_{x_1}^{x_2} \left[p(x)\, u'(x) v'(x) + q(x)\, u(x) v(x) - \lambda\, r(x)\, u(x) v(x)\right] dx + a_1\, u(x_1) v(x_1) + a_2\, u(x_2) v(x_2)\right),$

where $\lambda = Q[u]/R[u]$ as before. After an integration by parts,

- $\frac{R[u]}{2} V_1 = \int_{x_1}^{x_2} v(x) \left[-(p u')' + q u - \lambda r u\right] dx + v(x_1)\left[-p(x_1)\, u'(x_1) + a_1\, u(x_1)\right] + v(x_2)\left[p(x_2)\, u'(x_2) + a_2\, u(x_2)\right].$

If we first require that $v$ vanish at the endpoints, the first variation vanishes for all such $v$ only if

- $-(p u')' + q u - \lambda r u = 0 \quad \text{for} \quad x_1 < x < x_2.$

If $u$ satisfies this condition, then the first variation will vanish for arbitrary $v$ only if

- $-p(x_1)\, u'(x_1) + a_1\, u(x_1) = 0, \quad \text{and} \quad p(x_2)\, u'(x_2) + a_2\, u(x_2) = 0.$

These latter conditions are the natural boundary conditions for this problem: they are not imposed on the trial functions, but follow from the minimization.

### Eigenvalue problems in several dimensions

Eigenvalue problems in higher dimensions are defined in analogy with the one-dimensional case. For example, given a domain $D$ with boundary $B$ in three dimensions we may define

- $Q[\varphi] = \iiint_D \left[p(X)\, \nabla\varphi \cdot \nabla\varphi + q(X)\, \varphi^2\right] dx\,dy\,dz + \iint_B \sigma(S)\, \varphi^2\,dS,$

and the normalization integral

- $R[\varphi] = \iiint_D r(X)\, \varphi(X)^2\,dx\,dy\,dz.$

Let $u$ be the function that minimizes the quotient $Q[\varphi]/R[\varphi]$, with no condition prescribed on the boundary $B$. The Euler–Lagrange equation satisfied by $u$ is

- $-\nabla \cdot (p(X)\, \nabla u) + q(X)\, u - \lambda\, r(X)\, u = 0,$

where

- $\lambda = \frac{Q[u]}{R[u]},$

and $u$ must also satisfy the natural boundary condition

- $p(S)\, \frac{\partial u}{\partial n} + \sigma(S)\, u = 0$

on the boundary $B$.

## See also

- Isoperimetric inequality
- Variational principle
- Fermat's principle
- Principle of least action
- Infinite-dimensional optimization
- Functional analysis
- Perturbation methods
- Young measure
- Optimal control

## Reference books

- Gelfand, I.M. and Fomin, S.V.: Calculus of Variations, Dover Publ., 2000
- Lebedev, L.P. and Cloud, M.J.: The Calculus of Variations and Functional Analysis with Optimal Control and Applications in Mechanics, World Scientific, 2003, pages 1-98
- Charles Fox: An Introduction to the Calculus of Variations, Dover Publ., 1987
- Forsyth, A.R.: Calculus of Variations, Dover, 1960
- Sagan, Hans: Introduction to the Calculus of Variations, Dover, 1992
- Weinstock, Robert: Calculus of Variations with Applications to Physics and Engineering, Dover, 1974
- Clegg, J.C.: Calculus of Variations, Interscience Publishers Inc., 1968
- Courant, R.: Dirichlet's principle, conformal mapping and minimal surfaces. Interscience, 1950.
- Courant, R. and D. Hilbert: Methods of Mathematical Physics, Vol I. Interscience Press, 1953.
- Elsgolc, L.E.: Calculus of Variations, Pergamon Press Ltd., 1962
- Jost, J. and X. Li-Jost: Calculus of Variations. Cambridge University Press, 1998.

## References

- Johan Byström, Lars-Erik Persson, and Fredrik Strömberg, Chapter III: Introduction to the calculus of variations (undated).
- Calculus of variations example problems.
- Chapter 8: Calculus of Variations, from Optimization for Engineering Systems, by Ralph W. Pike, Louisiana State University
