
statistical mechanics, quantitative study of systems consisting of a large number of interacting elements, such as the atoms or molecules of a solid, liquid, or gas, or the individual quanta of light (see photon) making up electromagnetic radiation. Although the nature of each individual element of a system and the interactions between any pair of elements may both be well understood, the large number of elements and possible interactions can present an almost overwhelming challenge to the investigator who seeks to understand the behavior of the system. Statistical mechanics provides a mathematical framework upon which such an understanding may be built. Since many systems in nature contain large numbers of elements, the applicability of statistical mechanics is broad. In contrast to thermodynamics, which approaches such systems from a macroscopic, or large-scale, point of view, statistical mechanics usually approaches systems from a microscopic, or atomic-scale, point of view. The foundations of statistical mechanics can be traced to the 19th-century work of Ludwig Boltzmann, and the theory was further developed in the early 20th century by J. W. Gibbs. In its modern form, statistical mechanics recognizes three broad types of systems: those that obey Maxwell-Boltzmann statistics, those that obey Bose-Einstein statistics, and those that obey Fermi-Dirac statistics. Maxwell-Boltzmann statistics apply to systems of classical particles, such as the atmosphere, in which considerations from the quantum theory are small enough that they may be ignored. The other two types of statistics concern quantum systems: systems in which quantum-mechanical properties cannot be ignored. Bose-Einstein statistics apply to systems of bosons (particles that have integral values of the quantum-mechanical property called spin); an unlimited number of bosons can be placed in the same state. 
Photons, for instance, are bosons, and so the study of electromagnetic radiation, such as the radiation of a black body, involves the use of Bose-Einstein statistics. Fermi-Dirac statistics apply to systems of fermions (particles that have half-integral values of spin); no two fermions can exist in the same state. Electrons are fermions, and so Fermi-Dirac statistics must be employed for a full understanding of the conduction of electrons in metals. Statistical mechanics has also yielded deep insights in the understanding of magnetism, phase transitions, and superconductivity.

The Columbia Electronic Encyclopedia Copyright © 2004.

Licensed from Columbia University Press

Branch of physics that combines the principles and procedures of statistics with the laws of both classical mechanics and quantum mechanics. It considers the average behaviour of a large number of particles rather than the behaviour of any individual particle, drawing heavily on the laws of probability, and aims to predict and explain the measurable properties of macroscopic (bulk) systems on the basis of the properties and behaviour of their microscopic constituents.

Encyclopedia Britannica, 2008. Encyclopedia Britannica Online.

Statistical mechanics is the application of probability theory, which includes mathematical tools for dealing with large populations, to the field of mechanics, which is concerned with the motion of particles or objects when subjected to a force. Statistical mechanics, sometimes called statistical physics, can be viewed as a subfield of physics and chemistry. Pioneers in establishing the field were Ludwig Boltzmann and Josiah Willard Gibbs.

It provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk properties of materials that can be observed in everyday life, thereby explaining thermodynamics as a natural result of statistics and mechanics (classical and quantum) at the microscopic level. In particular, it can be used to calculate the thermodynamic properties of bulk materials from the spectroscopic data of individual molecules.

This ability to make macroscopic predictions based on microscopic properties is the main advantage of statistical mechanics over thermodynamics. Both theories are governed by the second law of thermodynamics through the medium of entropy. However, entropy in thermodynamics can only be known empirically, whereas in statistical mechanics it is a function of the distribution of the system over its microstates.

The fundamental postulate in statistical mechanics (also known as the equal a priori probability postulate) is the following:

- Given an isolated system in equilibrium, it is found with equal probability in each of its accessible microstates.

This postulate is a fundamental assumption in statistical mechanics: it states that a system in equilibrium does not have any preference for any of its available microstates. Given Ω microstates at a particular energy, the probability of finding the system in a particular microstate is p = 1/Ω.

This postulate is necessary because it allows one to conclude that for a system at equilibrium, the thermodynamic state (macrostate) that corresponds to the largest number of microstates is also the most probable macrostate of the system.
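The counting that underlies this statement can be made concrete with a toy model (an illustrative sketch, not from the source: three two-state spins whose energy is the number of up spins):

```python
from itertools import product

# Hypothetical toy model: three two-state spins; energy = number of "up" spins.
# Isolate the system at total energy E = 2: which microstates are accessible?
accessible = [s for s in product((0, 1), repeat=3) if sum(s) == 2]

omega = len(accessible)   # Omega, the number of accessible microstates
p = 1.0 / omega           # equal a priori probability of each one

print(accessible)   # [(0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(omega, p)     # 3 0.333...
```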

The postulate is justified in part, for classical systems, by Liouville's theorem (Hamiltonian), which shows that if the distribution of system points through accessible phase space is uniform at some time, it remains so at later times.

Similar justification for a discrete system is provided by the mechanism of detailed balance.

This allows for the definition of the information function (in the context of information theory):

- $I = \sum_i \rho_i \ln \rho_i$

When all the $\rho_i$ are equal, $I$ is minimal, which reflects the fact that we have minimal information about the system. When our information is maximal, i.e. one $\rho_i$ is equal to one and the rest are zero (we know which state the system is in), the function is maximal. This "information function" is the same as the reduced entropic function in thermodynamics.
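This behaviour of $I$ is easy to verify numerically (a sketch; the three-state distributions below are arbitrary illustrative choices):

```python
import math

def info(rho):
    """I = sum_i rho_i ln(rho_i), with the convention 0 * ln(0) = 0."""
    return sum(r * math.log(r) for r in rho if r > 0)

uniform = [1/3, 1/3, 1/3]   # no idea which state the system is in
certain = [1.0, 0.0, 0.0]   # state known exactly

print(info(uniform))   # -ln 3 ≈ -1.0986: the minimal value of I
print(info(certain))   # 0.0: the maximal value of I
```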

The entropy of such a system can only increase, so that the maximum of its entropy corresponds to an equilibrium state for the system.

Because an isolated system has constant energy, the total energy of the system does not fluctuate. Thus, the system can access only those of its microstates that correspond to a given value E of the energy. The internal energy of the system is then strictly equal to its energy.

Let us call $\Omega(E)$ the number of microstates corresponding to this value of the system's energy. The macroscopic state of maximal entropy for the system is the one in which all microstates are equally likely to occur, with probability $1/\Omega(E)$, during the system's fluctuations.

- $S = k_B \ln \Omega(E)$

- where

- $S$ is the system entropy,

- $k_B$ is Boltzmann's constant.

When instead the system can exchange heat with its surroundings (the canonical ensemble), the probability of occupying microstate $i$ is

- $P_i = \frac{e^{-\beta E_i}}{\sum_j e^{-\beta E_j}}$

- where $\beta = \frac{1}{kT}$,

The temperature $T$ arises from the fact that the system is in thermal equilibrium with its environment. The probabilities of the various microstates must add to one, and the normalization factor in the denominator is the canonical partition function:

- $Z = \sum_j e^{-\beta E_j}$

where $E_i$ is the energy of the $i$-th microstate of the system. The partition function is a measure of the number of states accessible to the system at a given temperature. The article canonical ensemble contains a derivation of Boltzmann's factor and the form of the partition function from first principles.

To sum up, the probability of finding a system at temperature $T$ in a particular state with energy $E_i$ is

- $P_i = \frac{e^{-\beta E_i}}{Z}$

The partition function can be used to find the expected (average) value of any microscopic property of the system, which can then be related to macroscopic variables. For instance, the expected value of the microscopic energy $E$ is interpreted as the microscopic definition of the thermodynamic variable internal energy $U$, and can be obtained by differentiating the logarithm of the partition function with respect to $\beta$. Indeed,

- $\langle E \rangle = \frac{\sum_i E_i e^{-\beta E_i}}{Z} = -\frac{1}{Z} \frac{dZ}{d\beta}$

implies, together with the interpretation of $\langle E \rangle$ as $U$, the following microscopic definition of internal energy:

- $U := -\frac{d \ln Z}{d\beta}.$
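This identity can be checked numerically on a toy spectrum (a hypothetical two-level system with energies 0 and 1 in arbitrary units; the finite-difference step is an implementation detail of the sketch):

```python
import math

def Z(beta, energies):
    """Canonical partition function Z = sum_j exp(-beta * E_j)."""
    return sum(math.exp(-beta * E) for E in energies)

def mean_energy(beta, energies):
    """<E> = sum_i E_i exp(-beta * E_i) / Z."""
    return sum(E * math.exp(-beta * E) for E in energies) / Z(beta, energies)

energies = [0.0, 1.0]   # hypothetical two-level spectrum (arbitrary units)
beta = 2.0

# Central finite difference of ln Z with respect to beta
h = 1e-6
dlnZ = (math.log(Z(beta + h, energies)) - math.log(Z(beta - h, energies))) / (2 * h)

print(mean_energy(beta, energies))   # ≈ 0.1192
print(-dlnZ)                         # same value: U = -d ln Z / d beta
```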

The entropy can be calculated by (see Shannon entropy)

- $\frac{S}{k} = -\sum_i p_i \ln p_i = \sum_i \frac{e^{-\beta E_i}}{Z}\left(\beta E_i + \ln Z\right) = \ln Z + \beta U$

which implies that

- $-\frac{\ln Z}{\beta} = U - TS = F$

is the free energy of the system or in other words,

- $Z = e^{-\beta F}.$
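A quick numerical sketch (again a hypothetical two-level system, in units with $k = 1$) confirms that the two routes to the free energy, $-\ln Z/\beta$ and $U - TS$, agree:

```python
import math

energies = [0.0, 1.0]   # hypothetical two-level spectrum (arbitrary units)
beta = 2.0              # with k = 1, temperature T = 1 / beta

Z = sum(math.exp(-beta * E) for E in energies)
p = [math.exp(-beta * E) / Z for E in energies]

U = sum(P_i * E for P_i, E in zip(p, energies))   # internal energy <E>
S = -sum(P_i * math.log(P_i) for P_i in p)        # Gibbs/Shannon entropy (k = 1)

F_from_Z = -math.log(Z) / beta    # F = -ln Z / beta
F_thermo = U - S / beta           # F = U - T S
print(F_from_Z, F_thermo)         # the two expressions agree
```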

Having microscopic expressions for the basic thermodynamic potentials $U$ (internal energy), $S$ (entropy) and $F$ (free energy) is sufficient to derive expressions for other thermodynamic quantities. The basic strategy is as follows. There may be an intensive or extensive quantity that enters explicitly in the expression for the microscopic energy $E_i$, for instance magnetic field (intensive) or volume (extensive). Then, the conjugate thermodynamic variables are derivatives of the internal energy: the macroscopic magnetization (extensive) is the derivative of $U$ with respect to the (intensive) magnetic field, and the pressure (intensive) is the derivative of $U$ with respect to volume (extensive).

The treatment in this section assumes no exchange of matter (i.e. fixed mass and fixed particle numbers). However, the volume of the system is variable which means the density is also variable.

This probability can be used to find the average value, which corresponds to the macroscopic value, of any property, $J$, that depends on the energetic state of the system by using the formula:

- $\langle J \rangle = \sum_i p_i J_i = \sum_i J_i \frac{e^{-\beta E_i}}{Z}$

where $\langle J \rangle$ is the average value of property $J$. This equation can be applied to the internal energy, $U$:

- $U = \sum_i E_i \frac{e^{-\beta E_i}}{Z}$

Subsequently, these equations can be combined with known thermodynamic relationships between $U$ and $V$ to arrive at an expression for pressure in terms of only temperature, volume and the partition function. Similar relationships in terms of the partition function can be derived for other thermodynamic properties, as shown in the following table; see also the detailed explanation in configuration integral.
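As a standard worked example (the classical ideal gas, stated here as an illustration rather than taken from the source), the configurational dependence $Z \propto V^N$ combined with $P = \frac{1}{\beta}\left(\frac{\partial \ln Z}{\partial V}\right)_{N,T}$ immediately yields the ideal gas law:

```latex
Z = \frac{f(T)^N\, V^N}{N!}
\;\Longrightarrow\;
\ln Z = N \ln V + \text{(terms independent of } V\text{)},
\qquad
P = \frac{1}{\beta}\left(\frac{\partial \ln Z}{\partial V}\right)_{N,T}
  = \frac{N}{\beta V} = \frac{N k T}{V}.
```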

| Quantity | Expression |
|---|---|
| Helmholtz free energy | $F = -\frac{\ln Z}{\beta}$ |
| Internal energy | $U = -\left(\frac{\partial \ln Z}{\partial \beta}\right)_{N,V}$ |
| Pressure | $P = -\left(\frac{\partial F}{\partial V}\right)_{N,T} = \frac{1}{\beta}\left(\frac{\partial \ln Z}{\partial V}\right)_{N,T}$ |
| Entropy | $S = k\,(\ln Z + \beta U)$ |
| Gibbs free energy | $G = F + PV = -\frac{\ln Z}{\beta} + \frac{V}{\beta}\left(\frac{\partial \ln Z}{\partial V}\right)_{N,T}$ |
| Enthalpy | $H = U + PV$ |
| Constant-volume heat capacity | $C_V = \left(\frac{\partial U}{\partial T}\right)_{N,V}$ |
| Constant-pressure heat capacity | $C_P = \left(\frac{\partial H}{\partial T}\right)_{N,P}$ |
| Chemical potential | $\mu_i = -\frac{1}{\beta}\left(\frac{\partial \ln Z}{\partial N_i}\right)_{T,V,N}$ |

To clarify, this is not a grand canonical ensemble.

It is often useful to consider the energy of a given molecule to be distributed among a number of modes. For example, translational energy refers to that portion of energy associated with the motion of the center of mass of the molecule. Configurational energy refers to that portion of energy associated with the various attractive and repulsive forces between molecules in a system. The other modes are all considered to be internal to each molecule. They include rotational, vibrational, electronic and nuclear modes. If we assume that each mode is independent (a questionable assumption) the total energy can be expressed as the sum of each of the components:

- $E = E_t + E_c + E_n + E_e + E_r + E_v$

where the subscripts $t$, $c$, $n$, $e$, $r$, and $v$ correspond to translational, configurational, nuclear, electronic, rotational and vibrational modes, respectively. The relationship in this equation can be substituted into the very first equation to give:

- $Z = \sum_i e^{-\beta(E_{ti} + E_{ci} + E_{ni} + E_{ei} + E_{ri} + E_{vi})}$

- $= \sum_i e^{-\beta E_{ti}}\, e^{-\beta E_{ci}}\, e^{-\beta E_{ni}}\, e^{-\beta E_{ei}}\, e^{-\beta E_{ri}}\, e^{-\beta E_{vi}}$

If we can assume all these modes are completely uncoupled and uncorrelated, so all these factors are in a probability sense completely independent, then

- $Z = Z_t\, Z_c\, Z_n\, Z_e\, Z_r\, Z_v$

Thus a partition function can be defined for each mode. Simple expressions have been derived relating each of the various modes to various measurable molecular properties, such as the characteristic rotational or vibrational frequencies.
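The factorization can be verified directly on a sketch with two hypothetical uncoupled modes (made-up level spacings, arbitrary units): summing $e^{-\beta E}$ over all combined states gives the same result as multiplying the per-mode partition functions.

```python
import math
from itertools import product

beta = 1.0
# Hypothetical level sets for two independent, uncoupled modes (arbitrary units)
levels_a = [0.0, 0.5, 1.0]
levels_b = [0.0, 2.0]

Z_a = sum(math.exp(-beta * E) for E in levels_a)
Z_b = sum(math.exp(-beta * E) for E in levels_b)

# Direct sum over all combined states, with E = E_a + E_b (modes uncoupled)
Z_total = sum(math.exp(-beta * (Ea + Eb)) for Ea, Eb in product(levels_a, levels_b))

print(Z_total, Z_a * Z_b)   # equal: the partition function factorizes
```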

Expressions for the various molecular partition functions are shown in the following table.

| Mode | Partition function |
|---|---|
| Nuclear | $Z_n = 1 \qquad (T < 10^8\ \mathrm{K})$ |
| Electronic | $Z_e = W_0 e^{kT D_e} + W_1 e^{-\theta_{e1}/T} + \cdots$ |
| Vibrational | $Z_v = \prod_j \frac{e^{-\theta_{vj}/2T}}{1 - e^{-\theta_{vj}/T}}$ |
| Rotational (linear) | $Z_r = \frac{T}{\sigma\,\theta_r}$ |
| Rotational (non-linear) | $Z_r = \frac{1}{\sigma}\sqrt{\frac{\pi T^3}{\theta_A \theta_B \theta_C}}$ |
| Translational | $Z_t = \frac{(2\pi m k T)^{3/2}}{h^3}$ |
| Configurational (ideal gas) | $Z_c = V$ |
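As a numerical sketch of the vibrational partition function, take a single vibrational mode with characteristic temperature $\theta_v \approx 3395\,\mathrm{K}$ (the textbook value for N$_2$, quoted here as an assumption):

```python
import math

def Z_vib(T, thetas):
    """Z_v = prod_j exp(-theta_j / 2T) / (1 - exp(-theta_j / T))."""
    z = 1.0
    for theta in thetas:
        z *= math.exp(-theta / (2 * T)) / (1 - math.exp(-theta / T))
    return z

# N2 has a single vibrational mode; theta_v ≈ 3395 K (assumed textbook value)
print(Z_vib(300.0, [3395.0]))   # ≈ 0.0035: vibrations are mostly frozen out at room temperature
```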

These equations can be combined with those in the first table to determine the contribution of a particular energy mode to a thermodynamic property. For example, the "rotational pressure" could be determined in this manner. The total pressure could be found by summing the pressure contributions from all of the individual modes, i.e.:

- $P = P_t + P_c + P_n + P_e + P_r + P_v$

In the grand canonical ensemble, $V$, $T$ and the chemical potentials are fixed. If the system under study is open (matter can be exchanged) and particle number is not conserved, we must introduce chemical potentials $\mu_j$, $j = 1, \ldots, n$, and replace the canonical partition function with the grand canonical partition function:

- $\Xi(V, T, \mu) = \sum_i \exp\left(\beta\left[\sum_{j=1}^n \mu_j N_{ij} - E_i\right]\right)$

where $N_{ij}$ is the number of particles of the $j$-th species in the $i$-th configuration. Sometimes we also have other variables to add to the partition function, one corresponding to each conserved quantity. Most of them, however, can safely be interpreted as chemical potentials. In most condensed-matter systems, things are nonrelativistic and mass is conserved. Most condensed-matter systems of interest also conserve particle number approximately (metastably), and the mass (nonrelativistically) is none other than the sum over particle types of the number of each type of particle times its mass. At fixed mass, the volume, which is the conjugate variable to pressure, is inversely proportional to the density. For the rest of this article, we will ignore this complication and pretend chemical potentials don't matter. See grand canonical ensemble.

Let's rework everything using a grand canonical ensemble this time. The volume is left fixed and does not figure in this treatment. As before, $j$ indexes the particle species and $i$ indexes the microstates:

- $U = \sum_i E_i \frac{\exp\left(-\beta\left(E_i - \sum_j \mu_j N_{ij}\right)\right)}{\Xi}$

- $N_j = \sum_i N_{ij} \frac{\exp\left(-\beta\left(E_i - \sum_j \mu_j N_{ij}\right)\right)}{\Xi}$
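The smallest possible open system makes these formulas concrete: a single fermionic orbital of energy $\varepsilon$ (a hypothetical toy system) has only two configurations, $N = 0$ and $N = 1$, and its mean occupancy reproduces the Fermi-Dirac distribution mentioned at the start of this article:

```python
import math

def occupancy(beta, eps, mu):
    # Two configurations: N = 0 with E = 0, and N = 1 with E = eps
    Xi = 1.0 + math.exp(-beta * (eps - mu))   # grand partition function
    return math.exp(-beta * (eps - mu)) / Xi  # <N> from the grand ensemble

beta, eps, mu = 2.0, 1.0, 0.5    # arbitrary illustrative values
n = occupancy(beta, eps, mu)
fermi_dirac = 1.0 / (math.exp(beta * (eps - mu)) + 1.0)
print(n, fermi_dirac)   # identical: <N> is the Fermi-Dirac distribution
```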

| Quantity | Expression |
|---|---|
| Grand potential | $\Phi_G = -\frac{\ln \Xi}{\beta}$ |
| Internal energy | $U = -\left(\frac{\partial \ln \Xi}{\partial \beta}\right)_{\mu} + \sum_i \frac{\mu_i}{\beta}\left(\frac{\partial \ln \Xi}{\partial \mu_i}\right)_{\beta}$ |
| Particle number | $N_i = \frac{1}{\beta}\left(\frac{\partial \ln \Xi}{\partial \mu_i}\right)_{\beta}$ |
| Entropy | $S = k\left(\ln \Xi + \beta U - \beta \sum_i \mu_i N_i\right)$ |
| Helmholtz free energy | $F = \Phi_G + \sum_i \mu_i N_i = -\frac{\ln \Xi}{\beta} + \sum_i \frac{\mu_i}{\beta}\left(\frac{\partial \ln \Xi}{\partial \mu_i}\right)_{\beta}$ |

All the above descriptions differ in the way they allow the given system to fluctuate between its configurations.

In the micro-canonical ensemble, the system exchanges no energy with the outside world, and is therefore not subject to energy fluctuations, while in the canonical ensemble, the system is free to exchange energy with the outside in the form of heat.

In the thermodynamic limit, which is the limit of large systems, fluctuations become negligible, so that all these descriptions converge to the same description. In other words, the macroscopic behavior of a system does not depend on the particular ensemble used for its description.

Given these considerations, the best ensemble to choose for the calculation of the properties of a macroscopic system is the one that allows the result to be most easily derived.

The study of long-chain polymers has been a source of problems within the realm of statistical mechanics since about the 1950s. One of the reasons scientists were interested in their study, however, is that the equations governing the behaviour of a polymer chain are independent of the chain's chemistry. What is more, the governing equation turns out to be a random (diffusive) walk in space. Indeed, Schrödinger's equation is itself a diffusion equation in imaginary time, $t' = it$.

The first example of a random walk is one in space, whereby a particle undergoes a random motion due to external forces in its surrounding medium. A typical example would be a pollen grain in a beaker of water. If one could somehow "dye" the path the pollen grain has taken, the path observed is defined as a random walk.

Consider a toy problem: a train moving along a 1D track in the x-direction. Suppose that the train moves a fixed distance $b$ either forwards or backwards, depending on whether a coin lands heads or tails when flipped. Let's start by considering the statistics of the steps the toy train takes (where $S_i$ is the $i$-th step taken):

$\langle S_i \rangle = 0$ ; due to a priori equal probabilities

$\langle S_i S_j \rangle = b^2 \delta_{ij}$

The second quantity is known as the correlation function. The delta is the Kronecker delta, which tells us that if the indices $i$ and $j$ are different, the result is 0, but if $i = j$ the Kronecker delta is 1, so the correlation function returns a value of $b^2$. This makes sense, because if $i = j$ then we are considering the same step. Rather trivially, it can then be shown that the average displacement of the train on the x-axis is 0:

$x = \sum_{i=1}^{N} S_i$

$\langle x \rangle = \left\langle \sum_{i=1}^{N} S_i \right\rangle$

$\langle x \rangle = \sum_{i=1}^{N} \langle S_i \rangle$

As stated, $\langle S_i \rangle = 0$, so the sum of zeros is still zero. The same method can be used to calculate the root-mean-square displacement for this problem. The result of this calculation is given below:

$x_{rms} = \sqrt{\langle x^2 \rangle} = b \sqrt{N}$
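The $b\sqrt{N}$ scaling is easy to confirm by simulating many coin-flip trains (a sketch; the step length, step count and trial count are arbitrary choices):

```python
import random
import statistics

random.seed(0)                  # reproducible sketch
b, N, trials = 1.0, 100, 5000   # step length, steps per walk, number of walks

def final_position():
    # Each step is +b or -b with equal probability (the coin flip)
    return sum(random.choice((b, -b)) for _ in range(N))

x2_mean = statistics.fmean(final_position() ** 2 for _ in range(trials))
x_rms = x2_mean ** 0.5
print(x_rms)   # ≈ b * sqrt(N) = 10, up to sampling error
```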

From the diffusion equation it can be shown that the distance a diffusing particle moves in a medium is proportional to the square root of the time the system has been diffusing for, where the proportionality constant is the square root of the diffusion constant. The above relation, although cosmetically different, reveals similar physics, where $N$ is simply the number of steps moved (loosely connected with time) and $b$ is the characteristic step length. As a consequence we can consider diffusion as a random walk process.

Random walks in space can be thought of as snapshots of the path taken by a random walker in time. One such example is the spatial configuration of long chain polymers.

There are two types of random walk in space: self-avoiding random walks, where the links of the polymer chain interact and do not overlap in space, and pure random walks, where the links of the polymer chain are non-interacting and links are free to lie on top of one another. The former type is most applicable to physical systems, but their solutions are harder to get at from first principles.

By considering a freely jointed, non-interacting polymer chain, the end-to-end vector is $\mathbf{R} = \sum_{i=1}^{N} \mathbf{r}_i$, where $\mathbf{r}_i$ is the vector position of the $i$-th link in the chain.
As a result of the central limit theorem, if $N \gg 1$ we expect a Gaussian distribution for the end-to-end vector. We can also make statements about the statistics of the links themselves:

$\langle \mathbf{r}_i \rangle = 0$ ; by the isotropy of space

$\langle \mathbf{r}_i \cdot \mathbf{r}_j \rangle = 3 b^2 \delta_{ij}$ ; all the links in the chain are uncorrelated with one another

Using the statistics of the individual links, it is easily shown that $\langle \mathbf{R} \rangle = 0$ and $\langle \mathbf{R} \cdot \mathbf{R} \rangle = 3Nb^2$. Notice this last result is the same as that found for random walks in time.
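Both results can be checked by sampling freely jointed chains (a sketch; following the convention above that $\langle \mathbf{r}_i \cdot \mathbf{r}_i \rangle = 3b^2$, each Cartesian component of a link is drawn with variance $b^2$):

```python
import random

random.seed(1)                  # reproducible sketch
b, N, chains = 1.0, 50, 4000    # segment scale, links per chain, sample size

def end_to_end_sq():
    # R = sum of N link vectors; each Cartesian component ~ Normal(0, b^2),
    # so <r_i . r_i> = 3 b^2, matching the convention in the text
    R = [0.0, 0.0, 0.0]
    for _ in range(N):
        for k in range(3):
            R[k] += random.gauss(0.0, b)
    return sum(c * c for c in R)

mean_R2 = sum(end_to_end_sq() for _ in range(chains)) / chains
print(mean_R2)   # ≈ 3 * N * b**2 = 150, up to sampling error
```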

Assuming, as stated, that the distribution of end-to-end vectors for a very large number of identical polymer chains is Gaussian, the probability distribution has the following form:

$P = \frac{1}{\left(\frac{2 \pi N b^2}{3}\right)^{3/2}} \exp\left(\frac{-3 \mathbf{R} \cdot \mathbf{R}}{2Nb^2}\right)$

What use is this to us? Recall that according to the principle of equally likely a priori probabilities, the number of microstates, Ω, at some physical value is directly proportional to the probability distribution at that physical value, viz.:

$\Omega\left(\mathbf{R}\right) = c\, P\left(\mathbf{R}\right)$

where $c$ is an arbitrary proportionality constant. Given our distribution function, there is a maximum corresponding to $\mathbf{R} = 0$. Physically this amounts to there being more microstates with an end-to-end vector of 0 than with any other value. Now by considering

$S\left(\mathbf{R}\right) = k_B \ln \Omega\left(\mathbf{R}\right)$

$\Delta S\left(\mathbf{R}\right) = S\left(\mathbf{R}\right) - S\left(0\right)$

$\Delta F = -T\, \Delta S\left(\mathbf{R}\right)$

where $F$ is the Helmholtz free energy, it is trivial to show that

$\Delta F = k_B T \frac{3R^2}{2Nb^2} = \frac{1}{2} K R^2 \quad ; \quad K = \frac{3 k_B T}{Nb^2}$

A Hookean spring!

This result is known as the entropic spring result and amounts to saying that upon stretching a polymer chain you are doing work on the system to drag it away from its (preferred) equilibrium state. An example of this is a common elastic band, composed of long-chain (rubber) polymers. By stretching the elastic band you are doing work on the system and the band behaves like a conventional spring. What is particularly astonishing about this result, however, is that the work done in stretching the polymer chain can be related entirely to the change in entropy of the system as a result of the stretching.
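Plugging representative numbers into $K = 3k_B T/(Nb^2)$ gives a feel for the magnitudes involved (a back-of-envelope sketch; the segment count and length are illustrative guesses, not measured rubber parameters):

```python
# Back-of-envelope entropic spring constant; N and b are hypothetical values
k_B = 1.380649e-23   # Boltzmann's constant, J/K
T = 300.0            # room temperature, K
N = 1000             # assumed number of freely jointed segments
b = 1e-9             # assumed segment length, 1 nm

K = 3 * k_B * T / (N * b ** 2)
print(K)   # ≈ 1.24e-5 N/m: a single chain is an extremely soft spring
```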

- Dangerously irrelevant
- Fluctuation dissipation theorem
- Ising Model
- List of notable textbooks in statistical mechanics
- Important Publications in Statistical Mechanics
- Ludwig Boltzmann
- Mean field theory
- Nanomechanics
- Paul Ehrenfest
- Statistical physics
- Thermodynamic limit



- Philosophy of Statistical Mechanics article by Lawrence Sklar for the Stanford Encyclopedia of Philosophy.
- Sklogwiki - Thermodynamics, statistical mechanics, and the computer simulation of materials. SklogWiki is particularly orientated towards liquids and soft condensed matter.

Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.

Last updated on Friday October 10, 2008 at 20:53:09 PDT (GMT -0700)

View this article at Wikipedia.org - Edit this article at Wikipedia.org - Donate to the Wikimedia Foundation

Copyright © 2014 Dictionary.com, LLC. All rights reserved.