Throughout the scientific world where measurements are made in SI units, thermodynamic temperature is measured in kelvins (symbol: K). Many engineering fields in the U.S., however, measure thermodynamic temperature using the Rankine scale.
By international agreement, the unit “kelvin” and its scale are defined by two points: absolute zero, and the triple point of Vienna Standard Mean Ocean Water (water with a specified blend of hydrogen and oxygen isotopes). Absolute zero, the coldest possible temperature, is defined as being precisely 0 K and −273.15 °C. The triple point of water is defined as being precisely 273.16 K and 0.01 °C. This definition does three things: it fixes the magnitude of the kelvin as precisely 1 part in 273.16 of the difference between absolute zero and the triple point of water; it establishes that one kelvin has precisely the same magnitude as one degree on the Celsius scale; and it establishes the difference between the two scales’ null points as precisely 273.15 kelvins (0 K = −273.15 °C and 273.16 K = 0.01 °C).
Temperatures expressed in kelvins are converted to degrees Rankine simply by multiplying by 1.8 as follows: TK × 1.8 = T°R, where TK and T°R are temperatures in kelvins and degrees Rankine respectively. Temperatures expressed in Rankine are converted to kelvins by dividing by 1.8 as follows: T°R ÷ 1.8 = TK.
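The two conversions can be sketched in Python (an illustrative snippet; the function names are mine):

```python
def kelvin_to_rankine(t_k: float) -> float:
    """T(°R) = T(K) × 1.8"""
    return t_k * 1.8

def rankine_to_kelvin(t_r: float) -> float:
    """T(K) = T(°R) ÷ 1.8"""
    return t_r / 1.8

# Water's triple point, 273.16 K, is 491.688 °R
print(kelvin_to_rankine(273.16))
```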
| | Kelvin | Celsius | Peak emittance wavelength of black-body photons |
|---|---|---|---|
| Absolute zero (precisely by definition) | 0 K | −273.15 °C | ∞ |
| Coldest measured temperature | 450 pK | −273.14999999955 °C | 6,400 kilometers |
| One millikelvin (precisely by definition) | 0.001 K | −273.149 °C | 2.897 77 meters (radio, FM band) |
| Water’s triple point (precisely by definition) | 273.16 K | 0.01 °C | 10,608.3 nm (long-wavelength I.R.) |
| Water’s boiling point A | 373.1339 K | 99.9839 °C | 7766.03 nm (mid-wavelength I.R.) |
| Incandescent lamp B | 2500 K | ≈2200 °C | 1160 nm |
| Sun’s visible surface D | 5778 K | 5505 °C | 501.5 nm |
| | 28,000 K | 28,000 °C | 100 nm (far-ultraviolet light) |
| Sun’s core E | 16 MK | 16 million °C | 0.18 nm (X-rays) |
| | 350 MK | 350 million °C | 8.3 × 10⁻³ nm |
| Sandia National Labs’ Z machine E | 2 GK | 2 billion °C | 1.4 × 10⁻³ nm |
| Core of a high-mass star on its last day E | 3 GK | 3 billion °C | 1 × 10⁻³ nm |
| Merging binary neutron star system E | 350 GK | 350 billion °C | 8 × 10⁻⁶ nm |
| Relativistic Heavy Ion Collider E | 1 TK | 1 trillion °C | 3 × 10⁻⁶ nm |
| CERN’s proton vs. nucleus collisions E | 10 TK | 10 trillion °C | 3 × 10⁻⁷ nm |
| Universe 5.391 × 10⁻⁴⁴ s after the Big Bang E | 1.417 × 10³² K | 1.417 × 10³² °C | 1.616 × 10⁻²⁶ nm |
The thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the average, or “mean”, kinetic energy of a specific kind of particle motion known as translational motion. These simple movements in the three x-, y-, and z-axis dimensions of space mean the particles move in the three spatial degrees of freedom. This particular form of kinetic energy is sometimes referred to as kinetic temperature. Translational motion is but one form of heat energy and is what gives gases not only their temperature but also their pressure and the vast majority of their volume. This relationship between the temperature, pressure, and volume of gases is established by the ideal gas law’s formula pV = nRT and is embodied in the gas laws.
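The ideal gas law can be exercised numerically; a minimal Python sketch (the values and names are illustrative):

```python
R = 8.314462618  # molar gas constant, J/(mol·K)

def pressure(n_mol: float, t_kelvin: float, volume_m3: float) -> float:
    """Ideal gas law solved for pressure: p = nRT / V (pascals)."""
    return n_mol * R * t_kelvin / volume_m3

# One mole at 273.15 K in 22.4 liters comes out close to one atmosphere (~101 kPa)
print(pressure(1.0, 273.15, 0.0224))
```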
The extent to which the kinetic energy of translational motion of an individual atom or molecule (particle) in a gas contributes to the pressure and volume of that gas is a proportional function of thermodynamic temperature as established by the Boltzmann constant (symbol: kB). The Boltzmann constant also relates the thermodynamic temperature of a gas to the mean kinetic energy of an individual particle’s translational motion as follows:

    Emean = (3/2) kB T

where Emean is the mean kinetic energy per particle.
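The mean translational kinetic energy per particle, (3/2)kBT, can be evaluated directly (an illustrative Python sketch):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def mean_translational_energy(t_kelvin: float) -> float:
    """Mean kinetic energy of a particle's translational motion: (3/2)·kB·T."""
    return 1.5 * K_B * t_kelvin

# At water's triple point (273.16 K) this is about 5.66e-21 joules
print(mean_translational_energy(273.16))
```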
While the Boltzmann constant is useful for finding the mean kinetic energy of a particle, it is important to note that even when a substance is isolated and in thermodynamic equilibrium (all parts are at a uniform temperature and no heat is going into or out of it), the translational motions of individual atoms and molecules occur across a wide range of speeds (see animation in Fig. 1 above). At any one instant, the proportion of particles moving at a given speed within this range is determined by probability as described by the Maxwell–Boltzmann distribution. The graph shown here in Fig. 2 shows the speed distribution of 5500 K helium atoms. They have a most probable speed of 4.780 km/s (0.2092 s/km). However, a certain proportion of atoms at any given instant are moving faster while others are moving relatively slowly; some are momentarily at a virtual standstill (off the x-axis to the right). This graph uses inverse speed for its x-axis so the shape of the curve can easily be compared to the curves in Fig. 5 below. In both graphs, zero on the x-axis represents infinite temperature. Additionally, the x- and y-axes of both graphs are scaled proportionally.
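The most probable speed, the peak of the Maxwell–Boltzmann distribution, is √(2kBT/m); checking it against the 5500 K helium figure (a Python sketch, with constants taken from standard references):

```python
import math

K_B = 1.380649e-23                 # Boltzmann constant, J/K
M_HE = 4.0026 * 1.66053906660e-27  # mass of a helium-4 atom, kg

def most_probable_speed(t_kelvin: float, mass_kg: float) -> float:
    """Peak of the Maxwell-Boltzmann speed distribution: sqrt(2·kB·T/m)."""
    return math.sqrt(2.0 * K_B * t_kelvin / mass_kg)

# Helium at 5500 K: about 4.780 km/s, matching Fig. 2
print(most_probable_speed(5500.0, M_HE) / 1000.0)
```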
The kinetic energy stored internally in molecules allows a substance to contain more heat energy at a given temperature (and in the case of gases, at a given pressure and volume), and to absorb more of it for a given temperature increase. This is because any kinetic energy that is, at a given instant, bound in internal motions is not at that same instant contributing to the molecules’ translational motions. This extra kinetic energy simply increases the amount of heat energy a substance absorbs for a given temperature rise. This property is known as a substance’s specific heat capacity.
Different molecules absorb different amounts of heat energy for each incremental increase in temperature; that is, they have different specific heat capacities. High specific heat capacity arises, in part, because certain substances’ molecules possess more internal degrees of freedom than others do. For instance, room-temperature nitrogen, which is a diatomic molecule, has five active degrees of freedom: the three comprising translational motion plus two internal rotational degrees of freedom. Not surprisingly, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) of the monatomic gases. Another example is gasoline (see table showing its specific heat capacity). Gasoline can absorb a large amount of heat energy per mole with only a modest temperature change because each molecule comprises an average of 21 atoms and therefore has many internal degrees of freedom. Even larger, more complex molecules can have dozens of internal degrees of freedom.
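The five-thirds ratio follows from equipartition, where each active degree of freedom contributes R/2 to the constant-volume molar heat capacity (an illustrative Python sketch):

```python
R = 8.314462618  # molar gas constant, J/(mol·K)

def molar_cv(active_degrees_of_freedom: int) -> float:
    """Equipartition estimate of constant-volume molar heat capacity."""
    return active_degrees_of_freedom * R / 2.0

cv_monatomic = molar_cv(3)  # translation only
cv_nitrogen = molar_cv(5)   # translation plus two rotations

# The ratio is exactly 5/3
print(cv_nitrogen / cv_monatomic)
```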
Heat conduction is the diffusion of heat energy from hot parts of a system to cold. A “system” can be either a single bulk entity or a plurality of discrete bulk entities. The term “bulk” in this context means a statistically significant quantity of particles (which can be a microscopic amount). Whenever heat energy diffuses within an isolated system, temperature differences within the system decrease (and entropy increases).
One particular heat conduction mechanism occurs when translational motion—the particle motion underlying temperature—transfers momentum from particle to particle in collisions. In gases, these translational motions are of the nature shown above in Fig. 1. As can be seen in that animation, not only does momentum (heat) diffuse throughout the volume of the gas through serial collisions, but entire molecules or atoms can advance forward into new territory, bringing their kinetic energy with them. Consequently, temperature differences equalize throughout gases very quickly—especially for light atoms or molecules; convection speeds this process even more.
Translational motion in solids, however, takes the form of phonons (see Fig. 4 at right). Phonons are constrained, quantized wave packets traveling at the speed of sound for a given substance. The manner in which phonons interact within a solid determines a variety of its properties, including its thermal conductivity. In electrically insulating solids, phonon-based heat conduction is usually inefficient, and such solids are considered thermal insulators (such as glass, plastic, rubber, ceramic, and rock). This is because in solids, atoms and molecules are locked into place relative to their neighbors and are not free to roam.
Metals, however, are not restricted to only phonon-based heat conduction. Heat energy conducts through metals extraordinarily quickly because, instead of direct molecule-to-molecule collisions, the vast majority of heat energy is mediated via very light, mobile conduction electrons. This is why there is a near-perfect correlation between metals’ thermal conductivity and their electrical conductivity. Conduction electrons imbue metals with their extraordinary conductivity because they are delocalized (i.e., not tied to a specific atom) and behave rather like a sort of “quantum gas” due to the effects of zero-point energy (for more on ZPE, see Note 1 below). Furthermore, electrons are relatively light, with a rest mass only 1⁄1836th that of a proton. This is about the same ratio as a .22 Short bullet (29 grains or 1.88 g) compared to the rifle that shoots it. As Isaac Newton’s third law of motion holds, the forces the propellant exerts on bullet and rifle are equal in magnitude and opposite in direction.
However, a bullet accelerates faster than a rifle given an equal force. Since kinetic energy increases as the square of velocity, nearly all the kinetic energy goes into the bullet, not the rifle, even though both experience the same force from the expanding propellant gases. In the same manner, because they are much less massive, mobile conduction electrons readily bear heat energy. Additionally, because they are delocalized and very fast, kinetic heat energy conducts extremely quickly through metals with abundant conduction electrons.
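The bullet–rifle energy split follows from momentum conservation: with equal and opposite momenta p, kinetic energy p²/2m divides inversely with mass. A small sketch (the rifle mass is a hypothetical value chosen to mirror the 1836:1 proton–electron ratio):

```python
m_bullet_g = 1.88          # .22 Short bullet, grams (from the text)
m_rifle_g = 1.88 * 1836.0  # hypothetical rifle mass mirroring the proton/electron ratio

# Equal and opposite momentum p; KE = p^2 / (2m), so the kinetic-energy
# ratio is simply the inverse of the mass ratio.
ke_ratio = m_rifle_g / m_bullet_g
bullet_share = m_rifle_g / (m_rifle_g + m_bullet_g)

print(ke_ratio)      # the bullet carries 1836 times the rifle's kinetic energy
print(bullet_share)  # i.e., about 99.95% of the total
```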
Black-body radiation diffuses heat energy throughout a substance as the photons are absorbed by neighboring atoms, transferring momentum in the process. Black-body photons also easily escape from a substance and can be absorbed by the ambient environment; kinetic energy is lost in the process.
As established by the Stefan–Boltzmann law, the intensity of black-body radiation increases as the fourth power of absolute temperature. Thus, a black body at 824 K (just short of glowing dull red) emits 60 times the radiant power as it does at 296 K (room temperature). This is why one can so easily feel the radiant heat from hot objects at a distance. At higher temperatures, such as those found in an incandescent lamp, black-body radiation can be the principal mechanism by which heat energy escapes a system.
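The factor of 60 quoted above follows directly from the fourth-power law (a quick Python check):

```python
def radiant_power_ratio(t_hot_k: float, t_cold_k: float) -> float:
    """Stefan-Boltzmann law: emitted power scales as T^4."""
    return (t_hot_k / t_cold_k) ** 4

# 824 K vs. 296 K: roughly 60 times the radiant power
print(radiant_power_ratio(824.0, 296.0))
```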
Even though heat energy is liberated or absorbed during phase transitions, pure chemical elements, compounds, and eutectic alloys exhibit no temperature change whatsoever while they undergo them (see Fig. 7, below right). Consider one particular type of phase transition: melting. When a solid is melting, crystal lattice chemical bonds are being broken apart; the substance is transitioning from what is known as a more ordered state to a less ordered state. In Fig. 7, the melting of ice is shown within the lower left box heading from blue to green.
At one specific thermodynamic point, the melting point (which is 0 °C across a wide pressure range in the case of water), all the atoms or molecules are—on average—at the maximum energy threshold their chemical bonds can withstand without breaking away from the lattice. Chemical bonds are quantized forces: they either hold fast, or break; there is no in-between state. Consequently, when a substance is at its melting point, every joule of added heat energy only breaks the bonds of a specific quantity of its atoms or molecules, converting them into a liquid of precisely the same temperature; no kinetic energy is added to translational motion (which is what gives substances their temperature). The effect is rather like popcorn: at a certain temperature, additional heat energy can’t make the kernels any hotter until the transition (popping) is complete. If the process is reversed (as in the freezing of a liquid), heat energy must be removed from a substance.
As stated above, the heat energy required for a phase transition is called latent heat. In the specific cases of melting and freezing, it’s called enthalpy of fusion or heat of fusion. If the molecular bonds in a crystal lattice are strong, the heat of fusion can be relatively great, typically in the range of 6 to 30 kJ per mole for water and most of the metallic elements. If the substance is one of the monatomic gases (which have little tendency to form molecular bonds), the heat of fusion is more modest, ranging from 0.021 to 2.3 kJ per mole. Relatively speaking, phase transitions can be truly energetic events. To completely melt ice at 0 °C into water at 0 °C, one must add roughly 80 times the heat energy as is required to increase the temperature of the same mass of liquid water by one degree Celsius. The metals’ ratios are even greater, typically in the range of 400 to 1200 times. And the phase transition of boiling is much more energetic than melting. For instance, the energy required to completely boil or vaporize water (what is known as enthalpy of vaporization) is roughly 540 times that required for a one-degree increase.
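The ratios quoted for water can be reproduced from standard handbook values (assumed here; they are not given in the text):

```python
C_WATER = 4.184   # specific heat of liquid water, J/(g·°C)
H_FUSION = 334.0  # enthalpy of fusion of ice, J/g
H_VAPOR = 2257.0  # enthalpy of vaporization of water, J/g

print(H_FUSION / C_WATER)  # roughly 80: melting ice vs. a one-degree rise
print(H_VAPOR / C_WATER)   # roughly 540: boiling vs. a one-degree rise
```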
Water’s sizable enthalpy of vaporization is why one’s skin can be burned so quickly as steam condenses on it (heading from red to green in Fig. 7 above). In the opposite direction, this is why one’s skin feels cool as liquid water on it evaporates (a process that occurs at a sub-ambient wet-bulb temperature that is dependent on relative humidity). Water’s highly energetic enthalpy of vaporization is also an important factor underlying why “solar pool covers” (floating, insulated blankets that cover swimming pools when not in use) are so effective at reducing heating costs: they prevent evaporation. For instance, the evaporation of just 20 mm of water from a 1.29-meter-deep pool chills its water 8.4 degrees Celsius.
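The pool figure can be checked per unit area of surface: evaporating a 20 mm layer removes latent heat from the remaining 1.29 m water column (a Python sketch; the water-property constants are standard handbook values, not from the text):

```python
H_VAPOR = 2257.0  # enthalpy of vaporization of water, J/g
C_WATER = 4.184   # specific heat of liquid water, J/(g·°C)

def evaporative_cooling_c(evaporated_mm: float, depth_m: float) -> float:
    """Temperature drop (°C) when a thin surface layer evaporates.
    Computed per cm^2 of pool surface (1 mm of water = 0.1 g/cm^2)."""
    evaporated_g = evaporated_mm * 0.1  # g/cm^2 lost to evaporation
    column_g = depth_m * 100.0          # g/cm^2 in the full water column
    return (evaporated_g * H_VAPOR) / (column_g * C_WATER)

# 20 mm evaporated from a 1.29-meter-deep pool: about 8.4 °C of cooling
print(evaporative_cooling_c(20.0, 1.29))
```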
Note that whereas absolute zero is the point of zero thermodynamic temperature and is also the point at which the particle constituents of matter have minimal motion, absolute zero is not necessarily the point at which a substance contains zero heat energy; one must be very precise with what one means by “heat energy.” Often, all the phase changes that can occur in a substance will have occurred by the time it reaches absolute zero. However, this is not always the case. Notably, T = 0 helium remains liquid at room pressure and must be under a pressure of at least 25 bar to crystallize. This is because helium’s heat of fusion (the energy required to melt solid helium) is so low (only 21 J mol⁻¹) that the motion-inducing effect of zero-point energy is sufficient to prevent it from freezing at lower pressures. Only under at least 25 bar of pressure will this latent heat energy be liberated as helium freezes while approaching absolute zero. A further complication is that many solids change their crystal structure to more compact arrangements at extremely high pressures (up to millions of bars). These are known as solid–solid phase transitions, wherein latent heat is liberated as a crystal lattice changes to a more thermodynamically favorable, compact one.
The above complexities make for rather cumbersome blanket statements regarding the internal energy in T = 0 substances. Regardless of pressure though, what can be said is that at absolute zero, all solids with a lowest-energy crystal lattice, such as those with a closest-packed arrangement (see Fig. 8, above left), contain minimal internal energy, retaining only that due to the ever-present background of zero-point energy. One can also say that for a given substance at constant pressure, absolute zero is the point of lowest enthalpy (a measure of work potential that takes internal energy, pressure, and volume into consideration). Lastly, it is always true to say that all T = 0 substances contain zero kinetic heat energy.
Earth’s proximity to the Sun is why almost everything near Earth’s surface is warm, with a temperature substantially above absolute zero. Solar radiation constantly replenishes heat energy that Earth loses into space, and a relatively stable state of equilibrium is achieved. Because of the wide variety of heat diffusion mechanisms (one of which is black-body radiation, which occurs at the speed of light), objects on Earth rarely vary too far from the global mean surface and air temperature of 287 to 288 K (14 to 15 °C). The more an object’s or system’s temperature varies from this average, the more rapidly it tends to come back into equilibrium with the ambient environment.
Loosely stated, temperature controls the flow of heat between two systems, and the universe as a whole, as with any natural system, tends to progress so as to maximize entropy. This suggests that there should be a relationship between temperature and entropy. To elucidate this, consider first the relationship between heat, work, and temperature. One way to study this is to analyze a heat engine, which is a device for converting heat into mechanical work, such as the Carnot heat engine. Such a heat engine functions by using a temperature gradient between a high temperature TH and a low temperature TC to generate work, and the work done (per cycle, say) by the heat engine is equal to the difference between the heat energy qH put into the system at the high temperature and the heat qC ejected at the low temperature (in that cycle). The efficiency of the engine is the work divided by the heat put into the system, or

    efficiency = wcy / qH = (qH − qC) / qH = 1 − qC/qH    (Equation 1)
where wcy is the work done per cycle. Thus the efficiency depends only on qC/qH. Because qC and qH correspond to heat transfer at the temperatures TC and TH, respectively, the ratio qC/qH should be a function f of these temperatures:

    qC/qH = f(TH, TC)    (Equation 2)
Carnot’s theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, a heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3. This can only be the case if

    f(T1, T3) = f(T1, T2) × f(T2, T3)
Now specialize to the case in which T1 is a fixed reference temperature: the temperature of the triple point of water. Then for any T2 and T3,

    f(T2, T3) = f(T1, T3) / f(T1, T2) = (273.16 K × f(T1, T3)) / (273.16 K × f(T1, T2))
It follows immediately that, defining the temperature by T = 273.16 K × f(T1, T),

    qC/qH = f(TH, TC) = TC/TH    (Equation 3)
Substituting Equation 3 back into Equation 1 gives a relationship for the efficiency in terms of temperature:

    efficiency = 1 − qC/qH = 1 − TC/TH    (Equation 4)
Notice that for TC = 0 the efficiency is 100%, and that the efficiency would exceed 100% for TC < 0. Since an efficiency greater than 100% violates the first law of thermodynamics, zero must be the minimum possible temperature. This has an intuitive interpretation: temperature is a measure of the motion of particles, so no system can, on average, have less motion than the minimum permitted by quantum physics. In fact, as of June 2006, the coldest man-made temperature was 450 pK.
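Equation 4 is a one-liner, and it makes the TC = 0 boundary case explicit (an illustrative Python sketch):

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum (reversible) heat-engine efficiency: 1 - TC/TH (Equation 4)."""
    return 1.0 - t_cold_k / t_hot_k

# Between water's boiling and triple points: about 26.8% at best
print(carnot_efficiency(373.16, 273.16))

# A cold reservoir at absolute zero would allow 100% efficiency
print(carnot_efficiency(300.0, 0.0))
```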
Subtracting the right-hand side of Equation 4 from the middle portion and rearranging gives

    qH/TH − qC/TC = 0
where the negative sign indicates heat ejected from the system. This relationship suggests the existence of a state function S (i.e., a function which depends only on the state of the system, not on how it reached that state) defined (up to an additive constant) by

    dS = dqrev / T    (Equation 5)
where the subscript indicates heat transfer in a reversible process. The function S corresponds to the entropy of the system, mentioned previously, and the change of S around any cycle is zero (as is necessary for any state function). Equation 5 can be rearranged to get an alternative definition for temperature in terms of entropy and heat:

    T = dqrev / dS
For a system in which the entropy S is a function S(E) of its energy E, the thermodynamic temperature T is therefore given by

    1/T = dS/dE

so that the reciprocal of the thermodynamic temperature is the rate of increase of entropy with energy.
In the following notes, wherever numeric equalities are shown in “concise form”, such as 1.85487(14) × 10⁴³, the two digits between the parentheses denote the uncertainty at 1σ (one standard deviation, 68% confidence level) in the two least significant digits of the significand.