Thermodynamic temperature, also known as absolute temperature, is a physical quantity that measures temperature starting from absolute zero, the point at which particles have minimal thermal motion.
Thermodynamic temperature is typically expressed using the Kelvin scale, on which the unit of measurement is the kelvin (unit symbol: K). The kelvin is the same interval as the degree Celsius, used on the Celsius scale, but the two scales are offset so that 0 K corresponds to absolute zero. For comparison, a temperature of 295 K corresponds to 21.85 °C and 71.33 °F. Another absolute scale of temperature is the Rankine scale, which is based on the Fahrenheit degree interval.
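The scale relationships above can be sketched in Python; the helper names are illustrative, not from any standard library:

```python
# Illustrative conversion helpers for the scales discussed above.
# The kelvin and the degree Celsius share the same interval; the scales
# are offset by 273.15, and Fahrenheit uses a 9/5-sized interval.

def kelvin_to_celsius(k):
    return k - 273.15

def kelvin_to_fahrenheit(k):
    return kelvin_to_celsius(k) * 9 / 5 + 32

print(round(kelvin_to_celsius(295), 2))     # 21.85
print(round(kelvin_to_fahrenheit(295), 2))  # 71.33
```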
Historically, thermodynamic temperature was defined by Lord Kelvin in terms of a relation between the macroscopic quantities thermodynamic work and heat transfer as defined in thermodynamics, but the kelvin was redefined by international agreement in 2019 in terms of phenomena that are now understood as manifestations of the kinetic energy of free motion of particles such as atoms, molecules, and electrons.
Thermodynamic temperature can be defined in purely thermodynamic terms using the Carnot cycle. Thermodynamic temperature was rigorously defined historically long before particles such as atoms, molecules, and electrons were fully understood.
The International System of Units (SI) specifies the absolute scale for measuring temperature, and the unit of measure kelvin (symbol: K) for specific values along the scale. A temperature interval of one degree Celsius is the same as one kelvin. Since the 2019 revision of the SI, the kelvin has been defined in relation to the physical property underlying thermodynamic temperature: the kinetic energy of atomic free particle motion. The revision fixed the Boltzmann constant at exactly 1.380649×10−23 J⋅K−1.[1]
The property that imbues material substances with a temperature can be readily understood by examining the ideal gas law, which relates, through the Boltzmann constant, how heat energy causes precisely defined changes in the pressure and temperature of certain gases. This is because monatomic gases like helium and argon behave kinetically like freely moving, perfectly elastic, spherical billiard balls that move only in a specific subset of the possible motions that can occur in matter: the three translational degrees of freedom. The translational degrees of freedom are the familiar billiard ball-like movements along the x-, y-, and z-axes of 3D space (see Fig. 1, below). This is why the noble gases all have the same heat capacity per atom and why that value is lowest of all the gases.
Molecules (two or more chemically bound atoms), however, have internal structure and therefore additional internal degrees of freedom (see Fig. 3, below), which has the effect that molecules absorb more heat energy for any given rise in temperature than do the monatomic gases. Heat energy is borne in all available degrees of freedom; in accordance with the equipartition theorem, all available internal degrees of freedom have the same average energy as the three external degrees of freedom. However, the property that gives all gases their pressure, which is the net force per unit area on a container arising from gas particles recoiling off it, is a function of the kinetic energy borne in the freely moving atoms' and molecules' three translational degrees of freedom.[2]
Fixing the Boltzmann constant at a specific value had the effect of precisely establishing the magnitude of the kelvin in terms of the average kinetic behavior of the noble gases. Moreover, the starting point of the thermodynamic temperature scale, absolute zero, was reaffirmed as the point at which zero average kinetic energy remains in a sample; the only remaining particle motion being that comprising random vibrations due to zero-point energy.
Temperature scales are numerical. The numerical zero of a temperature scale is not bound to the absolute zero of temperature. Nevertheless, some temperature scales have their numerical zero coincident with the absolute zero of temperature. Examples are the Kelvin temperature scale and the Rankine temperature scale. Other temperature scales have their numerical zero far from the absolute zero of temperature. Examples are the Celsius scale and the Fahrenheit scale.
At the zero point of thermodynamic temperature, absolute zero, the particle constituents of matter have minimal motion and can become no colder.[3][4] Absolute zero, which is a temperature of zero kelvins (0 K), precisely corresponds to −273.15 °C and −459.67 °F. Matter at absolute zero has no remaining transferable average kinetic energy and the only remaining particle motion is due to an ever-pervasive quantum mechanical phenomenon called ZPE (zero-point energy).[5] Though the atoms in, for instance, a container of liquid helium that was precisely at absolute zero would still jostle slightly due to zero-point energy, a theoretically perfect heat engine with such helium as one of its working fluids could never transfer any net kinetic energy (heat energy) to the other working fluid and no thermodynamic work could occur.
Temperature is generally expressed in absolute terms when scientifically examining temperature's interrelationships with certain other physical properties of matter such as its volume or pressure (see Gay-Lussac's law), or the wavelength of its emitted black-body radiation. Absolute temperature is also useful when calculating chemical reaction rates (see Arrhenius equation). Furthermore, absolute temperature is typically used in cryogenics and related phenomena like superconductivity, as per the following example usage:
"Conveniently, tantalum's transition temperature (Tc) of 4.4924 kelvins is slightly above the 4.2221 K boiling point of helium."
Though there have been many other temperature scales throughout history, there have been only two scales for measuring thermodynamic temperature that have absolute zero as their null point (0): the Kelvin scale and the Rankine scale.
Throughout the scientific world where modern measurements are nearly always made using the International System of Units, thermodynamic temperature is measured using the Kelvin scale. The Rankine scale is part of English engineering units and finds use in certain engineering fields, particularly in legacy reference works. The Rankine scale uses the degree Rankine (symbol: °R) as its unit, which is the same magnitude as the degree Fahrenheit (symbol: °F).
A unit increment of one kelvin is exactly 1.8 times one degree Rankine; thus, to convert a specific temperature on the Kelvin scale to the Rankine scale, x K = 1.8 x °R, and to convert from a temperature on the Rankine scale to the Kelvin scale, x °R = x/1.8 K. Consequently, absolute zero is "0" for both scales, but the melting point of water ice (0 °C and 273.15 K) is 491.67 °R.
To convert temperature intervals (a span or difference between two temperatures), the formulas from the preceding paragraph are applicable; for instance, an interval of 5 kelvins is precisely equal to an interval of 9 degrees Rankine.
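As a sketch, the Kelvin–Rankine relationship described in the two preceding paragraphs translates directly into code (the function names are mine):

```python
# Kelvin <-> Rankine: same null point (absolute zero), intervals differ by 1.8x.

def kelvin_to_rankine(k):
    return k * 1.8

def rankine_to_kelvin(r):
    return r / 1.8

# Melting point of water ice: 273.15 K on the Kelvin scale.
print(round(kelvin_to_rankine(273.15), 2))  # 491.67
```

Because both scales share the same zero, the same factor of 1.8 converts both specific temperatures and temperature intervals.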
For 65 years, between 1954 and the 2019 revision of the SI, a temperature interval of one kelvin was defined as 1/273.16 of the temperature difference between the triple point of water and absolute zero. The 1954 resolution by the International Bureau of Weights and Measures (BIPM), plus later resolutions and publications, defined the triple point of water as precisely 273.16 K and acknowledged that it was "common practice", given previous conventions (namely, that 0 °C had long been defined as the melting point of water and that the triple point of water had long been experimentally determined to be indistinguishably close to 0.01 °C), to take the difference between the Celsius scale and the Kelvin scale as 273.15 K, which is to say that 0 °C corresponds to 273.15 K.[6] The net effect of this and later resolutions was twofold: 1) they defined absolute zero as precisely 0 K, and 2) they defined that the triple point of special isotopically controlled water called Vienna Standard Mean Ocean Water occurred at precisely 273.16 K and 0.01 °C. One effect of these resolutions was that the melting point of water, while very close to 273.15 K and 0 °C, was not a defining value and was subject to refinement with more precise measurements.
The 1954 BIPM standard did a good job of establishing—within the uncertainties due to isotopic variations between water samples—temperatures around the freezing and triple points of water, but required intermediate values between the triple point and absolute zero, as well as extrapolated values from room temperature and beyond, to be experimentally determined via apparatus and procedures in individual labs. This shortcoming was addressed by the International Temperature Scale of 1990, or ITS‑90, which defined 13 additional points, from 13.8033 K to 1,357.77 K. While definitional, ITS‑90 had—and still has—some challenges, partly because eight of its extrapolated values depend upon the melting or freezing points of metal samples, which must remain exceedingly pure lest their melting or freezing points be affected—usually depressed.
The 2019 revision of the SI was primarily for the purpose of decoupling much of the SI system's definitional underpinnings from the kilogram, which was the last physical artifact defining an SI base unit (a platinum/iridium cylinder stored under three nested bell jars in a safe located in France) and which had highly questionable stability. The solution required that four physical constants, including the Boltzmann constant, be definitionally fixed.
Assigning the Boltzmann constant a precisely defined value had no practical effect on modern thermometry except for the most exquisitely precise measurements. Before the revision, the triple point of water was exactly 273.16 K and 0.01 °C and the Boltzmann constant was experimentally determined to be 1.38064903(51)×10−23 J/K, where the "(51)" denotes the uncertainty in the two least significant digits (the 03) and equals a relative standard uncertainty of 0.37 ppm.[7] Afterwards, by defining the Boltzmann constant as exactly 1.380649×10−23 J/K, the 0.37 ppm uncertainty was transferred to the triple point of water, which became an experimentally determined value of 273.1600±0.0001 K (0.0100±0.0001 °C). That the triple point of water ended up being exceedingly close to 273.16 K after the SI revision was no accident; the final value of the Boltzmann constant was determined, in part, through clever experiments with argon and helium that used the triple point of water for their key reference temperature.[8][9]
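The uncertainty transfer described above is a one-line calculation; this is a numerical sketch, not a metrological procedure:

```python
# A 0.37 ppm relative standard uncertainty applied to the triple point of water.
triple_point_k = 273.16
relative_uncertainty = 0.37e-6  # 0.37 parts per million

absolute_uncertainty = triple_point_k * relative_uncertainty
print(round(absolute_uncertainty, 4))  # 0.0001 (kelvins)
```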
Notwithstanding the 2019 revision, water triple-point cells continue to serve in modern thermometry as exceedingly precise calibration references at 273.16 K and 0.01 °C. Moreover, the triple point of water remains one of the 14 calibration points comprising ITS‑90, which spans from the triple point of hydrogen (13.8033 K) to the freezing point of copper (1,357.77 K), which is a nearly hundredfold range of thermodynamic temperature.
Relationship of temperature, motions, conduction, and thermal energy
Figure 1 The translational motion of particles of matter such as atoms and molecules is directly related to temperature. Here, the size of helium atoms relative to their spacing is shown to scale under 1950 atmospheres of pressure. These room-temperature atoms have a certain average speed (slowed down here two trillion-fold). At any given instant however, a particular helium atom may be moving much faster than average while another may be nearly motionless. Five atoms are colored red to facilitate following their motions. This animation illustrates statistical mechanics, which is the science of how the group behavior of a large collection of microscopic objects emerges from the kinetic properties of each individual object.
Nature of kinetic energy, translational motion, and temperature
The thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the average kinetic energy of a specific kind of particle motion known as translational motion. These simple movements along the three x-, y-, and z-axis dimensions of space mean the particles move in the three spatial degrees of freedom. This particular form of kinetic energy is sometimes referred to as kinetic temperature. Translational motion is but one form of heat energy and is what gives gases not only their temperature, but also their pressure and the vast majority of their volume. This relationship between the temperature, pressure, and volume of gases is established by the ideal gas law's formula pV = nRT and is embodied in the gas laws.
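As a quick numerical sketch of pV = nRT (the molar volume figure below is a commonly tabulated value, assumed here rather than taken from the text):

```python
# Pressure of one mole of an ideal gas at 0 °C occupying the molar volume.
R = 8.314462618  # molar gas constant, J/(mol·K)
n = 1.0          # amount of substance, mol
T = 273.15       # thermodynamic temperature, K (0 °C)
V = 0.022414     # volume, m^3 (22.414 L, molar volume at 0 °C and 1 atm)

p = n * R * T / V
print(round(p))  # ≈ 101325 Pa, i.e., one standard atmosphere
```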
Though the kinetic energy borne exclusively in the three translational degrees of freedom constitutes the thermodynamic temperature of a substance, molecules, as can be seen in Fig. 3, can have other degrees of freedom, all of which fall under three categories: bond length, bond angle, and rotational. Not all three additional categories are necessarily available to all molecules, and even for molecules that can experience all three, some can be "frozen out" below a certain temperature. Nonetheless, all those degrees of freedom that are available to the molecules under a particular set of conditions contribute to the specific heat capacity of the substance; which is to say, they increase the amount of heat (kinetic energy) required to raise a given amount of the substance by one kelvin or one degree Celsius.
The relationship of kinetic energy, mass, and velocity is given by the formula Ek = 1/2mv2.[10] Accordingly, particles with one unit of mass moving at one unit of velocity have precisely the same kinetic energy, and precisely the same temperature, as those with four times the mass but half the velocity.
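The equal-energy claim above can be verified directly (a minimal sketch in arbitrary consistent units):

```python
# Kinetic energy: E_k = (1/2) m v^2.
def kinetic_energy(m, v):
    return 0.5 * m * v ** 2

# One unit of mass at one unit of velocity...
print(kinetic_energy(1.0, 1.0))  # 0.5
# ...equals four times the mass at half the velocity.
print(kinetic_energy(4.0, 0.5))  # 0.5
```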
The Boltzmann constant relates the thermodynamic temperature of a gas to the mean kinetic energy of a particle's translational motion:

    Ē = (3/2) kB T

where:
Ē is the mean kinetic energy for an individual particle
kB is the Boltzmann constant
T is the thermodynamic temperature of the bulk quantity of the substance
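Numerically, this relation can be sketched as follows (the 295 K example temperature is my choice, not from the text):

```python
# Mean translational kinetic energy per particle: E = (3/2) * k_B * T.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact since the 2019 SI revision)

def mean_kinetic_energy(T):
    return 1.5 * k_B * T

# At 295 K this comes to roughly 6.1e-21 J per particle.
print(mean_kinetic_energy(295.0))
```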
Figure 2 The translational motions of helium atoms occur across a range of speeds. Compare the shape of this curve to that of a Planck curve in Fig. 5 below.
While the Boltzmann constant is useful for finding the mean kinetic energy in a sample of particles, it is important to note that even when a substance is isolated and in thermodynamic equilibrium (all parts are at a uniform temperature and no heat is going into or out of it), the translational motions of individual atoms and molecules occur across a wide range of speeds (see animation in Fig. 1 above). At any one instant, the proportion of particles moving at a given speed within this range is determined by probability as described by the Maxwell–Boltzmann distribution. The graph shown here in Fig. 2 shows the speed distribution of 5500 K helium atoms. They have a most probable speed of 4.780 km/s (0.2092 s/km). However, a certain proportion of atoms at any given instant are moving faster while others are moving relatively slowly; some are momentarily at a virtual standstill (off the x-axis to the right). This graph uses inverse speed for its x-axis so the shape of the curve can easily be compared to the curves in Fig. 5 below. In both graphs, zero on the x-axis represents infinite temperature. Additionally, the x- and y-axes on both graphs are scaled proportionally.
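The quoted most probable speed can be reproduced from the Maxwell–Boltzmann distribution's v_p = sqrt(2 kB T / m); the helium atomic mass below is a commonly tabulated value, assumed here:

```python
import math

k_B = 1.380649e-23                   # Boltzmann constant, J/K
u = 1.66053906660e-27                # atomic mass constant, kg
m_He = 4.002602 * u                  # mass of a helium-4 atom, kg

T = 5500.0                           # temperature, K
v_p = math.sqrt(2 * k_B * T / m_He)  # most probable speed, m/s
print(round(v_p / 1000, 3))          # ≈ 4.78 km/s
```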
Although very specialized laboratory equipment is required to directly detect translational motions, the resultant collisions by atoms or molecules with small particles suspended in a fluid produce Brownian motion that can be seen with an ordinary microscope. The translational motions of particles are very fast[11] and temperatures close to absolute zero are required to directly observe them. For instance, when scientists at the NIST achieved a record-setting low temperature of 700 nK (billionths of a kelvin) in 1994, they used optical lattice laser equipment to adiabatically cool cesium atoms. They then turned off the entrapment lasers and directly measured atom velocities of 7 mm per second in order to calculate their temperature.[12] Formulas for calculating the velocity and speed of translational motion are given in the following footnote.[13]
Figure 2.5 This simulation illustrates an argon atom as it would appear through a 400-power optical microscope featuring a reticle graduated with 50 μm (0.05 mm) tick marks. This atom is moving with a velocity of 14.43 μm/s, which gives the atom a kinetic temperature of one-trillionth of a kelvin. The atom requires 13.9 seconds to travel 200 μm (0.2 mm). Though the atom is being invisibly jostled due to zero-point energy, its translational motion seen here comprises all its kinetic energy.
It is not difficult to imagine atomic motions due to kinetic temperature, nor to distinguish between such motions and those due to zero-point energy. Consider the following hypothetical thought experiment, as illustrated in Fig. 2.5 at left, with an atom that is exceedingly close to absolute zero. Imagine peering through a common optical microscope set to 400 power, which is about the maximum practical magnification for optical microscopes. Such microscopes generally provide fields of view a bit over 0.4 mm in diameter. At the center of the field of view is a single levitated argon atom (argon comprises about 0.93% of air) that is illuminated and glowing against a dark backdrop. If this argon atom were at a beyond-record-setting one-trillionth of a kelvin above absolute zero,[14] and were moving perpendicular to the field of view towards the right, it would require 13.9 seconds to move from the center of the image to the 200 μm tick mark; this travel distance is about the same as the width of the period at the end of this sentence on modern computer monitors. As the argon atom slowly moved, the positional jitter due to zero-point energy would be much less than the 200 nm (0.0002 mm) resolution of an optical microscope. Importantly, the atom's translational velocity of 14.43 μm/s constitutes all its retained kinetic energy due to not being precisely at absolute zero. Were the atom precisely at absolute zero, imperceptible jostling due to zero-point energy would cause it to very slightly wander, but the atom would perpetually be located, on average, at the same spot within the field of view. This is analogous to a boat that has had its motor turned off and is now bobbing slightly in relatively calm and windless ocean waters; even though the boat randomly drifts to and fro, it stays in the same spot in the long term and makes no headway through the water.
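The figures in this thought experiment can be checked with the per-axis rms speed sqrt(kB T / m); the argon atomic mass below is a commonly tabulated value, assumed here:

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
m_Ar = 39.948 * 1.66053906660e-27   # mass of an argon atom, kg

T = 1e-12                           # one-trillionth of a kelvin
v = math.sqrt(k_B * T / m_Ar)       # per-axis rms speed, m/s
travel_s = 200e-6 / v               # time to cross 200 um

print(round(v * 1e6, 2))            # ≈ 14.43 (um/s)
print(round(travel_s, 1))           # ≈ 13.9 (seconds)
```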
Accordingly, an atom that was precisely at absolute zero would not be "motionless", and yet, a statistically significant collection of such atoms would have zero net kinetic energy available to transfer to any other collection of atoms. This is because regardless of the kinetic temperature of the second collection of atoms, they too experience the effects of zero-point energy. Such are the consequences of statistical mechanics and the nature of thermodynamics.
Figure 3 Molecules have internal structure because they are composed of atoms that have different ways of moving within molecules. Being able to store kinetic energy in these internal degrees of freedom contributes to a substance's specific heat capacity, allowing it to contain more internal energy at the same temperature.
As mentioned above, there are other ways molecules can jiggle besides the three translational degrees of freedom that imbue substances with their kinetic temperature. As can be seen in the animation at right, molecules are complex objects; they are a population of atoms, and thermal agitation can strain their internal chemical bonds in three different ways: via rotation, bond length, and bond angle movements; these are all types of internal degrees of freedom. This makes molecules distinct from monatomic substances (consisting of individual atoms) like the noble gases helium and argon, which have only the three translational degrees of freedom (the x-, y-, and z-axes). Kinetic energy is stored in molecules' internal degrees of freedom, which gives them an internal temperature. Even though these motions are called "internal", the external portions of molecules still move—rather like the jiggling of a stationary water balloon. This permits the two-way exchange of kinetic energy between internal motions and translational motions with each molecular collision. Accordingly, as internal energy is removed from molecules, both their kinetic temperature (the kinetic energy of translational motion) and their internal temperature simultaneously diminish in equal proportions. This phenomenon is described by the equipartition theorem, which states that for any bulk quantity of a substance in equilibrium, the kinetic energy of particle motion is evenly distributed among all the active degrees of freedom available to the particles. Since the internal temperature of molecules is usually equal to their kinetic temperature, the distinction is usually of interest only in the detailed study of non-local thermodynamic equilibrium (non-LTE) phenomena such as combustion, the sublimation of solids, and the diffusion of hot gases in a partial vacuum.
The kinetic energy stored internally in molecules causes substances to contain more heat energy at any given temperature and to absorb additional internal energy for a given temperature increase. This is because any kinetic energy that is, at a given instant, bound in internal motions, is not contributing to the molecules' translational motions at that same instant.[15] This extra kinetic energy simply increases the amount of internal energy that substance absorbs for a given temperature rise. This property is known as a substance's specific heat capacity.
Different molecules absorb different amounts of internal energy for each incremental increase in temperature; that is, they have different specific heat capacities. High specific heat capacity arises, in part, because certain substances' molecules possess more internal degrees of freedom than others do. For instance, room-temperature nitrogen, which is a diatomic molecule, has five active degrees of freedom: the three comprising translational motion plus two internal rotational degrees of freedom. Not surprisingly, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) of the monatomic gases.[16] Another example is gasoline (see table showing its specific heat capacity). Gasoline can absorb a large amount of heat energy per mole with only a modest temperature change because each molecule comprises an average of 21 atoms and therefore has many internal degrees of freedom. Even larger, more complex molecules can have dozens of internal degrees of freedom.
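The five-thirds ratio follows directly from equipartition; a minimal sketch:

```python
# Molar heat capacity at constant volume: C_v = (f/2) R for f active
# degrees of freedom.
R = 8.314462618  # molar gas constant, J/(mol·K)

def molar_cv(f):
    return f / 2 * R

cv_monatomic = molar_cv(3)  # helium, argon: translation only
cv_diatomic = molar_cv(5)   # room-temperature N2: translation + two rotations
print(round(cv_diatomic / cv_monatomic, 4))  # 1.6667, i.e., five-thirds
```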
Diffusion of thermal energy: entropy, phonons, and mobile conduction electrons
Figure 4 The temperature-induced translational motion of particles in solids takes the form of phonons. Shown here are phonons with identical amplitudes but with wavelengths ranging from 2 to 12 average inter-molecule separations (a).
Heat conduction is the diffusion of thermal energy from hot parts of a system to cold parts. A system can be either a single bulk entity or a plurality of discrete bulk entities. The term bulk in this context means a statistically significant quantity of particles (which can be a microscopic amount). Whenever thermal energy diffuses within an isolated system, temperature differences within the system decrease (and entropy increases).
One particular heat conduction mechanism occurs when translational motion, the particle motion underlying temperature, transfers momentum from particle to particle in collisions. In gases, these translational motions are of the nature shown above in Fig. 1. As can be seen in that animation, not only does momentum (heat) diffuse throughout the volume of the gas through serial collisions, but entire molecules or atoms can move forward into new territory, bringing their kinetic energy with them. Consequently, temperature differences equalize throughout gases very quickly—especially for light atoms or molecules; convection speeds this process even more.[17]
Translational motion in solids, however, takes the form of phonons (see Fig. 4 at right). Phonons are constrained, quantized wave packets that travel at the speed of sound of a given substance. The manner in which phonons interact within a solid determines a variety of its properties, including its thermal conductivity. In electrically insulating solids, phonon-based heat conduction is usually inefficient[18] and such solids are considered thermal insulators (such as glass, plastic, rubber, ceramic, and rock). This is because in solids, atoms and molecules are locked into place relative to their neighbors and are not free to roam.
Metals, however, are not restricted to only phonon-based heat conduction. Thermal energy conducts through metals extraordinarily quickly because instead of direct molecule-to-molecule collisions, the vast majority of thermal energy is mediated via very light, mobile conduction electrons. This is why there is a near-perfect correlation between metals' thermal conductivity and their electrical conductivity.[19] Conduction electrons imbue metals with their extraordinary conductivity because they are delocalized (i.e., not tied to a specific atom) and behave rather like a sort of quantum gas due to the effects of zero-point energy (for more on ZPE, see Note 1 below). Furthermore, electrons are relatively light, with a rest mass only 1⁄1836 that of a proton. As Newton's third law of motion is commonly stated,
Law #3: All forces occur in pairs, and these two forces are equal in magnitude and opposite in direction.
However, a bullet accelerates faster than a rifle given an equal force. Since kinetic energy increases as the square of velocity, nearly all the kinetic energy goes into the bullet, not the rifle, even though both experience the same force from the expanding propellant gases. In the same manner, because conduction electrons are far less massive than atoms, thermal energy is readily borne by them. Additionally, because they are delocalized and very fast, kinetic thermal energy conducts extremely quickly through metals with abundant conduction electrons.
Figure 5 The spectrum of black-body radiation has the form of a Planck curve. A 5500 K black-body has a peak emittance wavelength of 527 nm. Compare the shape of this curve to that of a Maxwell distribution in Fig. 2 above.
Thermal radiation is a byproduct of the collisions arising from various vibrational motions of atoms. These collisions cause the electrons of the atoms to emit thermal photons (known as black-body radiation). Photons are emitted anytime an electric charge is accelerated (as happens when electron clouds of two atoms collide). Even individual molecules with internal temperatures greater than absolute zero emit black-body radiation from their atoms. In any bulk quantity of a substance at equilibrium, black-body photons are emitted across a range of wavelengths in a spectrum that has a bell curve-like shape called a Planck curve (see graph in Fig. 5 at right). The top of a Planck curve (the peak emittance wavelength) is located in a particular part of the electromagnetic spectrum depending on the temperature of the black-body. Substances at extreme cryogenic temperatures emit at long radio wavelengths whereas extremely hot temperatures produce short gamma rays (see § Table of thermodynamic temperatures).
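The peak emittance wavelength given in the Fig. 5 caption follows from Wien's displacement law, λ_peak = b/T (the value of b below is a commonly tabulated constant, assumed here):

```python
# Wien's displacement law: peak wavelength of a black-body spectrum.
b = 2.897771955e-3  # Wien displacement constant, m·K

T = 5500.0             # black-body temperature, K
peak_nm = b / T * 1e9  # peak wavelength in nanometers
print(round(peak_nm))  # ≈ 527 nm
```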
Black-body radiation diffuses thermal energy throughout a substance as the photons are absorbed by neighboring atoms, transferring momentum in the process. Black-body photons also easily escape from a substance and can be absorbed by the ambient environment; kinetic energy is lost in the process.
As established by the Stefan–Boltzmann law, the intensity of black-body radiation increases as the fourth power of absolute temperature. Thus, a black-body at 824 K (just short of glowing dull red) emits 60 times as much radiant power as it does at 296 K (room temperature). This is why one can so easily feel the radiant heat from hot objects at a distance. At higher temperatures, such as those found in an incandescent lamp, black-body radiation can be the principal mechanism by which thermal energy escapes a system.
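The factor of 60 quoted above is just the fourth-power ratio from the Stefan–Boltzmann law:

```python
# Radiant power scales as T^4 (Stefan–Boltzmann law), so the ratio of
# emitted power at two temperatures is (T1/T2)^4.
ratio = (824 / 296) ** 4
print(round(ratio))  # 60
```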
Figure 6 Ice and water: two phases of the same substance
The kinetic energy of particle motion is just one contributor to the total thermal energy in a substance; another is phase transitions, which are the potential energy of molecular bonds that can form in a substance as it cools (such as during condensing and freezing). The thermal energy required for a phase transition is called latent heat. This phenomenon may more easily be grasped by considering it in the reverse direction: latent heat is the energy required to break chemical bonds (such as during evaporation and melting). Almost everyone is familiar with the effects of phase transitions; for instance, steam at 100 °C can cause severe burns much faster than the 100 °C air from a hair dryer. This occurs because a large amount of latent heat is liberated as steam condenses into liquid water on the skin.
Even though thermal energy is liberated or absorbed during phase transitions, pure chemical elements, compounds, and eutectic alloys exhibit no temperature change whatsoever while they undergo them (see Fig. 7, below right). Consider one particular type of phase transition: melting. When a solid is melting, crystal lattice chemical bonds are being broken apart; the substance is transitioning from what is known as a more ordered state to a less ordered state. In Fig. 7, the melting of ice is shown within the lower left box heading from blue to green.
Figure 7 Water's temperature does not change during phase transitions as heat flows into or out of it. The total heat capacity of a mole of water in its liquid phase (the green line) is 7.5507 kJ.
At one specific thermodynamic point, the melting point (which is 0 °C across a wide pressure range in the case of water), all the atoms or molecules are, on average, at the maximum energy threshold their chemical bonds can withstand without breaking away from the lattice. Chemical bonds are all-or-nothing forces: they either hold fast, or break; there is no in-between state. Consequently, when a substance is at its melting point, every joule of added thermal energy only breaks the bonds of a specific quantity of its atoms or molecules,[33] converting them into a liquid of precisely the same temperature; no kinetic energy is added to translational motion (which is what gives substances their temperature). The effect is rather like popcorn: at a certain temperature, additional thermal energy cannot make the kernels any hotter until the transition (popping) is complete. If the process is reversed (as in the freezing of a liquid), thermal energy must be removed from a substance.
As stated above, the thermal energy required for a phase transition is called latent heat. In the specific cases of melting and freezing, it is called enthalpy of fusion or heat of fusion. If the molecular bonds in a crystal lattice are strong, the heat of fusion can be relatively great, typically in the range of 6 to 30 kJ per mole for water and most of the metallic elements.[34] If the substance is one of the monatomic gases (which have little tendency to form molecular bonds) the heat of fusion is more modest, ranging from 0.021 to 2.3 kJ per mole.[35] Relatively speaking, phase transitions can be truly energetic events. To completely melt ice at 0 °C into water at 0 °C, one must add roughly 80 times as much thermal energy as is required to increase the temperature of the same mass of liquid water by one degree Celsius. The metals' ratios are even greater, typically in the range of 400 to 1200 times.[36] The phase transition of boiling is much more energetic still. For instance, the energy required to completely boil or vaporize water (what is known as enthalpy of vaporization) is roughly 540 times that required for a one-degree increase.[37]
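These ratios can be spot-checked numerically. The sketch below assumes standard handbook values for water (latent heat of fusion ≈ 333.55 J/g, specific heat ≈ 4.186 J/(g·K), enthalpy of vaporization ≈ 2257 J/g), which are reference values rather than figures from this article:

```python
# Spot check of the latent-heat ratios quoted above, using assumed
# handbook values for water (not figures from this article).
HEAT_OF_FUSION = 333.55        # J/g, energy to melt ice at 0 °C
HEAT_OF_VAPORIZATION = 2257.0  # J/g, energy to boil water at 100 °C
SPECIFIC_HEAT = 4.186          # J/(g·K), liquid water

melt_ratio = HEAT_OF_FUSION / SPECIFIC_HEAT
boil_ratio = HEAT_OF_VAPORIZATION / SPECIFIC_HEAT

print(f"melting takes ~{melt_ratio:.0f}x a one-degree rise")  # roughly 80
print(f"boiling takes ~{boil_ratio:.0f}x a one-degree rise")  # roughly 540
```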
Water's sizable enthalpy of vaporization is why one's skin can be burned so quickly as steam condenses on it (heading from red to green in Fig. 7 above); water vapor (gas phase) liquefies on the skin, releasing a large amount of energy (enthalpy) to the environment, including the skin, and causing skin damage. In the opposite direction, this is why one's skin feels cool as liquid water on it evaporates (a process that occurs at a sub-ambient wet-bulb temperature that is dependent on relative humidity); the evaporation of water on the skin takes a large amount of energy from the environment, including the skin, reducing the skin temperature. Water's highly energetic enthalpy of vaporization is also an important factor underlying why solar pool covers (floating, insulated blankets that cover swimming pools when the pools are not in use) are so effective at reducing heating costs: they prevent evaporation, and with it the large amount of energy that evaporation carries away from the water. For instance, the evaporation of just 20 mm of water from a 1.29 m-deep pool chills its water 8.4 °C (15.1 °F).
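The pool figure follows from the same enthalpy of vaporization. Reckoned per square metre of surface, 20 mm of water is 20 kg and a 1.29 m depth is 1290 kg; a sketch under those assumptions (the water constants are assumed handbook values):

```python
# Evaporative cooling of a pool, reckoned per square metre of surface.
HEAT_OF_VAPORIZATION = 2257.0  # kJ/kg, assumed handbook value for water
SPECIFIC_HEAT = 4.186          # kJ/(kg·K), liquid water

evaporated = 20.0    # kg/m², i.e. a 20 mm layer of water
pool_mass = 1290.0   # kg/m², i.e. a 1.29 m deep column

energy_removed = evaporated * HEAT_OF_VAPORIZATION      # kJ/m²
delta_t = energy_removed / (pool_mass * SPECIFIC_HEAT)  # kelvins
print(f"temperature drop ≈ {delta_t:.1f} °C")  # about 8.4
```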
The total energy of all translational and internal particle motions, including that of conduction electrons, plus the potential energy of phase changes, plus the zero-point energy[5] of a substance together constitute its internal energy.
As a substance cools, different forms of internal energy and their related effects simultaneously decrease in magnitude: the latent heat of available phase transitions is liberated as a substance changes from a less ordered state to a more ordered state; the translational motions of atoms and molecules diminish (their kinetic energy or temperature decreases); the internal motions of molecules diminish (their internal energy or temperature decreases); conduction electrons (if the substance is an electrical conductor) travel somewhat slower;[38] and black-body radiation's peak emittance wavelength increases (the photons' energy decreases). When particles of a substance are as close as possible to complete rest and retain only ZPE (zero-point energy)-induced quantum mechanical motion, the substance is at the temperature of absolute zero (T = 0).
Figure 9 Due to the effects of zero-point energy, helium at ambient pressure remains a superfluid even when exceedingly close to absolute zero; it will not freeze unless under 25 bar of pressure (c. 25 atmospheres).
Whereas absolute zero is the point of zero thermodynamic temperature and is also the point at which the particle constituents of matter have minimal motion, absolute zero is not necessarily the point at which a substance contains zero internal energy; one must be very precise with what one means by internal energy. Often, all the phase changes that can occur in a substance will have occurred by the time it reaches absolute zero. However, this is not always the case. Notably, T = 0 helium remains liquid at room pressure (Fig. 9 at right) and must be under a pressure of at least 25 bar (2.5 MPa) to crystallize. This is because helium's heat of fusion (the energy required to melt helium ice) is so low (only 21 joules per mole) that the motion-inducing effect of zero-point energy is sufficient to prevent it from freezing at lower pressures.
A further complication is that many solids change their crystal structure to more compact arrangements at extremely high pressures (up to millions of bars, or hundreds of gigapascals). These are known as solid–solid phase transitions wherein latent heat is liberated as a crystal lattice changes to a more thermodynamically favorable, compact one.
The above complexities make for rather cumbersome blanket statements regarding the internal energy in T = 0 substances. Regardless of pressure though, what can be said is that at absolute zero, all solids with a lowest-energy crystal lattice such as those with a closest-packed arrangement (see Fig. 8, above left) contain minimal internal energy, retaining only that due to the ever-present background of zero-point energy.[5][39] One can also say that for a given substance at constant pressure, absolute zero is the point of lowest enthalpy (a measure of work potential that takes internal energy, pressure, and volume into consideration).[40] Lastly, all T = 0 substances contain zero kinetic thermal energy.[5][13]
Practical applications for thermodynamic temperature
Thermodynamic temperature is useful not only to scientists; it can also be useful to laypeople in many disciplines involving gases. By expressing variables in absolute terms and applying Gay-Lussac's law of temperature/pressure proportionality, solutions to everyday problems are straightforward; for instance, calculating how a temperature change affects the pressure inside an automobile tire. If the tire has a cold gage[41] pressure of 200 kPa, then its absolute pressure is 300 kPa.[42][43] Room temperature ("cold" in tire terms) is 296 K. If the tire temperature is 20 °C hotter (20 kelvins), the solution is calculated as 316 K / 296 K = 6.8% greater thermodynamic temperature and therefore 6.8% greater absolute pressure; that is, an absolute pressure of 320 kPa, which is a gage pressure of 220 kPa.
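The tire calculation is a direct application of Gay-Lussac's law in absolute terms; a minimal sketch (the 100 kPa atmospheric pressure is an assumed round value):

```python
# Gay-Lussac's law at constant volume: P1/T1 = P2/T2, absolute values only.
ATMOSPHERIC = 100.0  # kPa, assumed ambient pressure

gauge_cold = 200.0                 # kPa, cold gage pressure
p_cold = gauge_cold + ATMOSPHERIC  # 300 kPa absolute
t_cold, t_hot = 296.0, 316.0       # kelvins

p_hot = p_cold * t_hot / t_cold
print(f"{p_hot:.0f} kPa absolute, {p_hot - ATMOSPHERIC:.0f} kPa gage")
# prints "320 kPa absolute, 220 kPa gage"
```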
The thermodynamic temperature is closely linked to the ideal gas law and its consequences. It can be linked also to the second law of thermodynamics. The thermodynamic temperature can be shown to have special properties, and in particular can be seen to be uniquely defined (up to some constant multiplicative factor) by considering the efficiency of idealized heat engines. Thus the ratio T2/T1 of two temperatures T1 and T2 is the same in all absolute scales.
Strictly speaking, the temperature of a system is well-defined only if it is at thermal equilibrium. From a microscopic viewpoint, a material is at thermal equilibrium if the net heat exchanged between its individual particles cancels out. There are many possible scales of temperature, derived from a variety of observations of physical phenomena.
Loosely stated, temperature differences dictate the direction of heat flow between two systems such that their combined energy is maximally distributed among their lowest possible states. We call this distribution "entropy". To better understand the relationship between temperature and entropy, consider the relationship between heat, work and temperature illustrated in the Carnot heat engine. The engine converts heat into work by exploiting the temperature gradient between a higher-temperature heat source, TH, and a lower-temperature heat sink, TC, through a gas-filled piston. The work done per cycle is equal in magnitude to the net heat taken up, which is the sum of the heat qH taken up by the engine from the high-temperature source, plus the waste heat given off by the engine, qC < 0.[44] The efficiency of the engine is the work divided by the heat put into the system, or
η = wcy/qH = (qH + qC)/qH = 1 − |qC|/|qH|        (1)
where wcy is the work done per cycle. Thus the efficiency depends only on |qC|/|qH|.
Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures T1 and T2 must have the same efficiency, that is to say, the efficiency is a function of only the temperatures:
η = 1 − |q2|/|q1| = 1 − f(T1, T2)        (2)
In addition, a reversible heat engine operating between a pair of thermal reservoirs at temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3. If this were not the case, then energy (in the form of q) would be wasted or gained, resulting in different overall efficiencies every time a cycle is split into component cycles; clearly a cycle can be composed of any number of smaller cycles as an engine design choice, and any reversible engine operating between the same reservoirs at T1 and T3 must be equally efficient regardless of the engine design.
If we choose engines such that the work done by the one-cycle engine and the two-cycle engine is the same, then the efficiency of each heat engine is written as below:
η1 = 1 − |q3|/|q1| = 1 − f(T1, T3)
η2 = 1 − |q2|/|q1| = 1 − f(T1, T2)
η3 = 1 − |q3|/|q2| = 1 − f(T2, T3)
Here, engine 1 is the one-cycle engine, and engines 2 and 3 make up the two-cycle engine with the intermediate reservoir at T2. We have also used the fact that the heat q2 passes through the intermediate thermal reservoir at T2 without losing its energy (i.e., q2 is not lost during its passage through the reservoir at T2). This fact can be proved as follows:
w2 + w3 = (|q1| − |q2|) + (|q2| − |q3|) = |q1| − |q3| = w1
In order to have consistency in the last equation, the heat q2 flowing from engine 2 into the intermediate reservoir must be equal to the heat q2 flowing out from the reservoir into engine 3.
With this understanding of q1, q2 and q3, mathematically,
f(T1, T3) = |q3|/|q1| = (|q2|/|q1|)(|q3|/|q2|) = f(T1, T2) f(T2, T3)
But since the first function is not a function of T2, the product of the final two functions must result in the removal of T2 as a variable. The only way is therefore to define the function f as follows:
f(T1, T2) = g(T2)/g(T1)
and
f(T2, T3) = g(T3)/g(T2)
so that
f(T1, T3) = |q3|/|q1| = g(T3)/g(T1)
I.e. the ratio of heat exchanged is a function of the respective temperatures at which it occurs. We can choose any monotonic function for our g(T);[45] it is a matter of convenience and convention that we choose g(T) = T. Choosing then one fixed reference temperature (i.e. the triple point of water), we establish the thermodynamic temperature scale.
Such a definition coincides with that of the ideal gas derivation; also it is this definition of the thermodynamic temperature that enables us to represent the Carnot efficiency in terms of TH and TC, and hence derive that the (complete) Carnot cycle is isentropic:
|qC|/|qH| = f(TH, TC) = TC/TH        (3)
Substituting this back into our first formula for efficiency yields a relationship in terms of temperature:
η = 1 − |qC|/|qH| = 1 − TC/TH        (4)
Note that for TC = 0 the efficiency is 100% and that the efficiency would become greater than 100% for TC < 0, which is unrealistic. Subtracting 1 from the right-hand side and from the middle portion of Equation (4) gives −|qC|/|qH| = −TC/TH and thus, since qC < 0, qC/TC + qH/TH = 0.[46][44]
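A quick numerical illustration of the Carnot efficiency η = 1 − TC/TH and the entropy balance qH/TH + qC/TC = 0 (the reservoir temperatures below are arbitrary examples, not taken from the text):

```python
# Carnot efficiency between a hot and a cold reservoir (kelvins).
def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    return 1.0 - t_cold / t_hot

# Example: boiling-water source, ice-water sink.
t_hot, t_cold = 373.15, 273.15
eta = carnot_efficiency(t_hot, t_cold)
print(f"efficiency ≈ {eta:.1%}")  # about 26.8%

# For a reversible cycle the rejected heat satisfies qC/qH = -TC/TH,
# so the entropy balance qH/TH + qC/TC = 0 holds exactly.
q_hot = 1000.0                    # J drawn from the hot reservoir
q_cold = -q_hot * t_cold / t_hot  # J, negative: heat given off
assert abs(q_hot / t_hot + q_cold / t_cold) < 1e-12
```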
The generalization of this equation is the Clausius theorem, which proposes the existence of a state function S (i.e., a function which depends only on the state of the system, not on how it reached that state) defined (up to an additive constant) by
S = ∫ dqrev/T        (5)
where the subscript rev indicates heat transfer in a reversible process. The function S is the entropy of the system, mentioned previously, and the change of S around any cycle is zero (as is necessary for any state function). Equation 5 can be rearranged to get an alternative definition for temperature in terms of entropy and heat (to avoid a logic loop, we should first define entropy through statistical mechanics):
T = dqrev/dS
For a constant-volume system (so no mechanical work w is done, and dU = dqrev) in which the entropy S(U) is a function of its internal energy U, the thermodynamic temperature is therefore given by
1/T = dS/dU
so that the reciprocal of the thermodynamic temperature is the rate of change of entropy with respect to the internal energy at the constant volume.
Guillaume Amontons (1663–1705) published two papers in 1702 and 1703 that may be used to credit him as being the first researcher to deduce the existence of a fundamental (thermodynamic) temperature scale featuring an absolute zero. He made the discovery while endeavoring to improve upon the air thermometers in use at the time. His J-tube thermometers comprised a mercury column that was supported by a fixed mass of air entrapped within the sensing portion of the thermometer. In thermodynamic terms, his thermometers relied upon the volume/temperature relationship of gas under constant pressure. His measurements of the boiling point of water and the melting point of ice showed that regardless of the mass of air trapped inside his thermometers or the weight of mercury the air was supporting, the air volume at the ice point was always reduced by the same ratio. This observation led him to posit that a sufficient reduction in temperature would reduce the air volume to zero. In fact, his calculations projected that absolute zero was equivalent to −240 °C, only 33.15 degrees short of the true value of −273.15 °C. Amontons's discovery of a one-to-one relationship between absolute temperature and absolute pressure was rediscovered a century later and popularized within the scientific community by Joseph Louis Gay-Lussac. Today, this principle of thermodynamics is commonly known as Gay-Lussac's law but is also known as Amontons's law.
In 1742, Anders Celsius (1701–1744) created a "backwards" version of the modern Celsius temperature scale. In Celsius's original scale, zero represented the boiling point of water and 100 represented the melting point of ice. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that ice's melting point was effectively unaffected by pressure. He also determined with remarkable precision how water's boiling point varied as a function of atmospheric pressure. He proposed that zero on his temperature scale (water's boiling point) would be calibrated at the mean barometric pressure at mean sea level.
Coincident with the death of Anders Celsius in 1744, the botanist Carl Linnaeus (1707–1778) effectively reversed[47][48][full citation needed] Celsius's scale upon receipt of his first thermometer featuring a scale where zero represented the melting point of ice and 100 represented water's boiling point. The custom-made Linnaeus thermometer, for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time. For the next 204 years, the scientific and thermometry communities worldwide referred to this scale as the centigrade scale. Temperatures on the centigrade scale were often reported simply as degrees or, when greater specificity was desired, degrees centigrade. The symbol for temperature values on this scale was °C (in several formats over the years). Because the term centigrade was also the French-language name for a unit of angular measurement (one-hundredth of a right angle) and had a similar connotation in other languages, the term "centesimal degree" was used when very precise, unambiguous language was required by international standards bodies such as the International Bureau of Weights and Measures (BIPM). The 9th CGPM (General Conference on Weights and Measures) and the CIPM (International Committee for Weights and Measures) formally adopted[49] the name degree Celsius (symbol: °C) in 1948.
In his book Pyrometrie (1777)[50] completed four months before his death, Johann Heinrich Lambert (1728–1777), sometimes incorrectly referred to as Joseph Lambert, proposed an absolute temperature scale based on the pressure/temperature relationship of a fixed volume of gas. This is distinct from the volume/temperature relationship of gas under constant pressure that Guillaume Amontons discovered 75 years earlier. Lambert stated that absolute zero was the point where a simple straight-line extrapolation reached zero gas pressure and was equal to −270 °C.
Notwithstanding the work of Guillaume Amontons 85 years earlier, Jacques Alexandre César Charles (1746–1823) is often credited with discovering (circa 1787), but not publishing, that the volume of a gas under constant pressure is proportional to its absolute temperature. The formula he created was V1/T1 = V2/T2.
Joseph Louis Gay-Lussac (1778–1850) published work in 1802 (acknowledging the unpublished lab notes of Jacques Charles fifteen years earlier) describing how the volume of gas under constant pressure changes linearly with its absolute (thermodynamic) temperature. This behavior is called Charles's law and is one of the gas laws. His are the first known formulas to use the number 273 for the expansion coefficient of gas relative to the melting point of ice (indicating that absolute zero was equivalent to −273 °C).
William Thomson (1824–1907), also known as Lord Kelvin, wrote in his 1848 paper "On an Absolute Thermometric Scale"[51] of the need for a scale whereby infinite cold (absolute zero) was the scale's zero point, and which used the degree Celsius for its unit increment. Like Gay-Lussac, Thomson calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the Kelvin thermodynamic temperature scale. Thomson's value of −273 was derived from 0.00366, which was the accepted expansion coefficient of gas per degree Celsius relative to the ice point. The inverse of −0.00366 expressed to five significant digits is −273.22 °C, which is remarkably close to the true value of −273.15 °C.
In the paper he proposed to define temperature using idealized heat engines. In detail, he proposed that, given three heat reservoirs at temperatures T1 > T2 > T3, if two reversible heat engines (Carnot engines), one working between T1 and T2 and the other between T2 and T3, can produce the same amount of mechanical work by letting the same amount of heat pass through, then the temperature intervals are defined to be equal: T1 − T2 = T2 − T3.
Note that like Carnot, Kelvin worked under the assumption that heat is conserved ("the conversion of heat (or caloric) into mechanical effect is probably impossible"), and if heat goes into the heat engine, then heat must come out.[52]
Kelvin, realizing after Joule's experiments that heat is not a conserved quantity but is convertible with mechanical work, modified his scale in the 1851 work An Account of Carnot's Theory of the Motive Power of Heat. In this work, he defined temperature as follows:[53]
Given two heat reservoirs at temperatures T1 and T2, and a reversible heat engine working between them such that, during an engine cycle, heat q1 moves into the engine and heat q2 comes out of the engine, then T1/T2 = q1/q2.
The above definition fixes the ratios between absolute temperatures, but it does not fix a scale for absolute temperature. For the scale, Thomson proposed to use the Celsius degree, that is, 1/100 of the interval between the freezing and boiling points of water.
In 1859 Macquorn Rankine (1820–1872) proposed a thermodynamic temperature scale similar to William Thomson's but which used the degree Fahrenheit for its unit increment, that is, 1/180 of the interval between the freezing and boiling points of water. This absolute scale is known today as the Rankine thermodynamic temperature scale.
Ludwig Boltzmann (1844–1906) made major contributions to thermodynamics between 1877 and 1884 through an understanding of the role that particle kinetics and black body radiation played. His name is now attached to several of the formulas used today in thermodynamics.
Gas thermometry experiments carefully calibrated to the melting point of ice and boiling point of water showed in the 1930s that absolute zero was equivalent to −273.15 °C.
Resolution 3[54] of the 9th General Conference on Weights and Measures (CGPM) in 1948 fixed the triple point of water at precisely 0.01 °C. At this time, the triple point still had no formal definition for its equivalent kelvin value, which the resolution declared "will be fixed at a later date". The implication is that if the value of absolute zero measured in the 1930s was truly −273.15 °C, then the triple point of water (0.01 °C) was equivalent to 273.16 K. Additionally, both the International Committee for Weights and Measures (CIPM) and the CGPM formally adopted[55] the name Celsius for the degree Celsius and the Celsius temperature scale.[58]
Resolution 3[59] of the 10th CGPM in 1954 gave the Kelvin scale its modern definition by choosing the triple point of water as its upper defining point (with no change to absolute zero being the null point) and assigning it a temperature of precisely 273.16 kelvins (what was actually written 273.16 degrees Kelvin at the time). This, in combination with Resolution 3 of the 9th CGPM, had the effect of defining absolute zero as being precisely zero kelvins and −273.15 °C.
Resolution 3[60] of the 13th CGPM in 1967/1968 renamed the unit increment of thermodynamic temperature kelvin, symbol K, replacing degree absolute, symbol °K. Further, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also decided in Resolution 4[61] that "The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water".
The CIPM affirmed in 2005[62] that for the purposes of delineating the temperature of the triple point of water, the definition of the Kelvin thermodynamic temperature scale would refer to water having an isotopic composition defined as being precisely equal to the nominal specification of Vienna Standard Mean Ocean Water.
In November 2018, the 26th General Conference on Weights and Measures (CGPM) changed the definition of the kelvin by fixing the Boltzmann constant to 1.380649×10−23 when expressed in the unit J/K. This change (and other changes in the definition of SI units) was made effective on the 144th anniversary of the Metre Convention, 20 May 2019.
In the following notes, wherever numeric equalities are shown in concise form, such as 1.85487(14)×1043, the two digits between the parentheses denote the uncertainty at 1-σ (1 standard deviation, 68% confidence level) in the two least significant digits of the significand.
^ abcdeAbsolute zero's relationship to zero-point energy. While scientists are achieving temperatures ever closer to absolute zero, they cannot fully achieve a state of zero temperature. However, even if scientists could remove all kinetic thermal energy from matter, quantum mechanical zero-point energy (ZPE) causes particle motion that can never be eliminated. Encyclopædia Britannica Online defines zero-point energy as the "vibrational energy that molecules retain even at the absolute zero of temperature". ZPE is the result of all-pervasive energy fields in the vacuum between the fundamental particles of nature; it is responsible for the Casimir effect and other phenomena. See Zero Point Energy and Zero Point Field. See also Solid Helium (archived 2008-02-12 at the Wayback Machine) by the University of Alberta's Department of Physics to learn more about ZPE's effect on Bose–Einstein condensates of helium.
Although absolute zero (T = 0) is not a state of zero molecular motion, it is the point of zero temperature and, in accordance with the Boltzmann constant, is also the point of zero particle kinetic energy and zero kinetic velocity. To understand how atoms can have zero kinetic velocity and simultaneously be vibrating due to ZPE, consider the following thought experiment: two T = 0 helium atoms in zero gravity are carefully positioned and observed to have an average separation of 620 pm between them (a gap of ten atomic diameters). It is an "average" separation because ZPE causes them to jostle about their fixed positions. Then one atom is given a kinetic kick of precisely 83 yoctokelvins (1 yK = 1×10−24 K). This is done in a way that directs this atom's velocity vector at the other atom. With 83 yK of kinetic energy between them, the 620 pm gap through their common barycenter would close at a rate of 719 pm/s and they would collide after 0.862 second. This is the same speed as shown in the Fig. 1 animation above. Before being given the kinetic kick, both T = 0 atoms had zero kinetic energy and zero kinetic velocity because they could persist indefinitely in that state and relative orientation even though both were being jostled by ZPE. At T = 0, no kinetic energy is available for transfer to other systems.
Note too that absolute zero serves as the baseline atop which thermodynamics and its equations are founded because they deal with the exchange of thermal energy between "systems" (a plurality of particles and fields modeled as an average). Accordingly, one may examine ZPE-induced particle motion within a system that is at absolute zero but there can never be a net outflow of thermal energy from such a system. Also, the peak emittance wavelength of black-body radiation shifts to infinity at absolute zero; indeed, a peak no longer exists and black-body photons can no longer escape. Because of ZPE, however, virtual photons are still emitted at T = 0. Such photons are called "virtual" because they can't be intercepted and observed. Furthermore, this zero-point radiation has a unique zero-point spectrum. However, even though a T = 0 system emits zero-point radiation, no net heat flow Q out of such a system can occur because if the surrounding environment is at a temperature greater than T = 0, heat will flow inward, and if the surrounding environment is at T = 0, there will be an equal flux of ZP radiation both inward and outward. A similar Q equilibrium exists at T = 0 with the ZPE-induced spontaneous emission of photons (which is more properly called a stimulated emission in this context). The graph at upper right illustrates the relationship of absolute zero to zero-point energy. The graph also helps in the understanding of how zero-point energy got its name: it is the vibrational energy matter retains at the zero-kelvin point. Derivation of the classical electromagnetic zero-point radiation spectrum via a classical thermodynamic operation involving van der Waals forces, Daniel C. Cole, Physical Review A, 42 (1990) 1847.
^At non-relativistic temperatures of less than about 30 GK, classical mechanics is sufficient to calculate the velocity of particles. At 30 GK, individual neutrons (the constituent of neutron stars and one of the few materials in the universe with temperatures in this range) have a γ (gamma, or Lorentz factor) of 1.0042. Thus, the classic Newtonian formula for kinetic energy is in error by less than half a percent for temperatures less than 30 GK.
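The quoted Lorentz factor can be sanity-checked by treating the neutron's kinetic energy as the mean translational energy (3/2)kBT at 30 GK and comparing it with the neutron rest energy (both constants below are assumed reference values, not figures from this note):

```python
# Lorentz factor of a neutron with mean translational energy (3/2) kB T.
K_B = 1.380649e-23                 # J/K, Boltzmann constant
NEUTRON_REST_ENERGY = 1.50535e-10  # J, about 939.57 MeV (assumed value)

T = 30e9                 # 30 GK
kinetic = 1.5 * K_B * T  # mean translational kinetic energy, joules
gamma = 1.0 + kinetic / NEUTRON_REST_ENERGY
print(f"gamma ≈ {gamma:.4f}")  # close to the quoted 1.0042
```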
^Even room–temperature air has an average molecular translational speed of 1822 km/hour. Assumptions: Average molecular weight of wet air = 28.838 g/mol and T = 296.15 K. Assumption's primary variables: An altitude of 194 m above mean sea level (the world–wide median altitude of human habitation), an indoor temperature of 23 °C, a dew point of 9 °C (40.85% relative humidity), and 760 mmHg (101 kPa) sea level–corrected barometric pressure.
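The 1822 km/h figure matches the root-mean-square translational speed v = √(3RT/M) under the note's stated conditions; a sketch:

```python
import math

# RMS translational speed of air molecules: v = sqrt(3 R T / M).
R = 8.314462  # J/(mol·K), molar gas constant
M = 0.028838  # kg/mol, mean molar mass of wet air (from the note)
T = 296.15    # K, the note's indoor temperature

v = math.sqrt(3 * R * T / M)  # metres per second
print(f"{v * 3.6:.0f} km/h")  # about 1822
```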
^Kastberg, A.; et al. (27 February 1995). "Adiabatic Cooling of Cesium to 700 nK in an Optical Lattice". Physical Review Letters. 74 (9): 1542–1545. Bibcode:1995PhRvL..74.1542K. doi:10.1103/PhysRevLett.74.1542. PMID10059055. A record cold temperature of 450 pK in a Bose–Einstein condensate of sodium atoms (achieved by A. E. Leanhardt et al. of MIT)[citation needed] equates to an average vector-isolated atom velocity of 0.4 mm/s and an average atom speed of 0.7 mm/s.
^ abThe rate of translational motion of atoms and molecules is calculated based on thermodynamic temperature as follows:
ṽ = √(kBT/m)
where
ṽ is the vector-isolated mean velocity of translational particle motion, kB is the Boltzmann constant, T is the thermodynamic temperature, and m is the molecular mass.
The mean speed (not vector-isolated velocity) of an atom or molecule along any arbitrary path is calculated as follows:
v = ṽ√3 = √(3kBT/m)
where v is the mean speed of translational particle motion.
The mean energy of the translational motions of a substance's constituent particles correlates to their mean speed, not velocity. Thus, substituting for v in the classic formula for kinetic energy, Ek = 1/2mv2 produces precisely the same value as does Emean = 3/2kBT (as shown in § Nature of kinetic energy, translational motion, and temperature). The Boltzmann constant and its related formulas establish that absolute zero is the point of both zero kinetic energy of particle motion and zero kinetic velocity (see also Note 1 above).
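As a numerical check, the vector-isolated mean velocity √(kBT/m) and the mean speed √(3kBT/m) reproduce the 0.4 mm/s and 0.7 mm/s figures quoted for the 450 pK sodium condensate in the notes above (the sodium-23 atomic mass below is an assumed reference value):

```python
import math

K_B = 1.380649e-23         # J/K, Boltzmann constant
M_NA23 = 23 * 1.66054e-27  # kg, mass of one sodium-23 atom (assumed)
T = 450e-12                # 450 pK

v_vector = math.sqrt(K_B * T / M_NA23)    # vector-isolated mean velocity
v_mean = math.sqrt(3 * K_B * T / M_NA23)  # mean speed, sqrt(3) larger
print(f"{v_vector * 1e3:.1f} mm/s, {v_mean * 1e3:.1f} mm/s")  # 0.4, 0.7
```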
^One-trillionth of a kelvin is to one kelvin as the thickness of two sheets of kitchen aluminum foil (0.04 mm) is to the distance around Earth at the equator.
^The internal degrees of freedom of molecules cause their external surfaces to vibrate and can also produce overall spinning motions (what can be likened to the jiggling and spinning of an otherwise stationary water balloon). If one examines a single molecule as it impacts a container's wall, some of the kinetic energy borne in the molecule's internal degrees of freedom can constructively add to its translational motion during the instant of the collision and extra kinetic energy will be transferred into the container's wall. This would induce an extra, localized, impulse-like contribution to the average pressure on the container. However, since the internal motions of molecules are random, they have an equal probability of destructively interfering with translational motion during a collision with a container's walls or another molecule. Averaged across any bulk quantity of a gas, the internal thermal motions of molecules have zero net effect upon the temperature, pressure, or volume of a gas. Molecules' internal degrees of freedom simply provide additional locations where kinetic energy is stored. This is precisely why molecular-based gases have greater specific heat capacity than monatomic gases (where additional internal energy must be added to achieve a given temperature rise).
^When measured at constant volume, since different amounts of work must be performed if measured at constant pressure. Nitrogen's Cv (100 kPa, 20 °C) equals 20.8 J⋅mol–1⋅K–1 vs. the monatomic gases, which equal 12.4717 J⋅mol–1⋅K–1. Freeman, W. H. "Part 3: Change". Physical Chemistry(PDF). Exercise 21.20b, p. 787. Archived from the original(PDF) on 2007-09-27. See also Nave, R. "Molar Specific Heats of Gases". HyperPhysics. Georgia State University.
^The speed at which thermal energy equalizes throughout the volume of a gas is very rapid. However, since gases have extremely low density relative to solids, the heat flux (the thermal power passing per area) through gases is comparatively low. This is why the dead-air spaces in multi-pane windows have insulating qualities.
^Correlation is 752 (W⋅m−1⋅K−1)/(MS⋅cm), σ = 81, through a 7:1 range in conductivity. Value and standard deviation based on data for Ag, Cu, Au, Al, Ca, Be, Mg, Rh, Ir, Zn, Co, Ni, Os, Fe, Pa, Pt, and Sn. Data from CRC Handbook of Chemistry and Physics, 1st Student Edition.
^The cited emission wavelengths are for true black bodies in equilibrium. In this table, only the sun so qualifies. CODATA recommended value of 2.897771955...×10−3 m⋅K used for Wien displacement law constant b.
^A record cold temperature of 450 ±80 pK in a Bose–Einstein condensate (BEC) of sodium (23Na) atoms was achieved in 2003 by researchers at MIT. Leanhardt, A. E.; et al. (12 September 2003). "Cooling Bose–Einstein Condensates Below 500 Picokelvin". Science. 301 (5639): 1515. Bibcode:2003Sci...301.1513L. doi:10.1126/science.1088827. PMID12970559. The thermal velocity of the atoms averaged about 0.4 mm/s. This record's peak emittance black-body radiation wavelength of 6400 km is roughly the radius of Earth.
^The peak emittance wavelength of 2.897 77 m corresponds to a frequency of 103.456 MHz.
^"Sun Fact Sheet". NASA Space Science Center Coordinated Archive. Archived from the original on 1998-02-22. Retrieved 2023-08-27.
^The 350 MK value is the maximum peak fusion fuel temperature in a thermonuclear weapon of the Teller–Ulam configuration (commonly known as a "hydrogen bomb"). Peak temperatures in Gadget-style fission bomb cores (commonly known as an "atomic bomb") are in the range of 50 to 100 MK. "Nuclear Weapons Frequently Asked Questions". 3.2.5 Matter At High Temperatures.[full citation needed] All referenced data was compiled from publicly available sources.
^Peak temperature for a bulk quantity of matter was achieved by a pulsed-power machine used in fusion physics experiments. The term "bulk quantity" draws a distinction from collisions in particle accelerators, wherein high "temperature" applies only to the debris from two subatomic particles or nuclei at any given instant. The >2 GK temperature was achieved over a period of about ten nanoseconds during "shot Z1137". In fact, the iron and manganese ions in the plasma averaged 3.58±0.41 GK (309±35 keV) for 3 ns (ns 112 through 115). Haines, M. G.; et al. (2006). "Ion Viscous Heating in a Magnetohydrodynamically Unstable Z Pinch at Over 2 × 10⁹ Kelvin". Physical Review Letters. 96 (7): 075003. Bibcode:2006PhRvL..96g5003H. doi:10.1103/PhysRevLett.96.075003. PMID 16606100. For a press summary of this article, see "Sandia's Z machine exceeds two billion degrees Kelvin". Sandia. March 8, 2006. Archived from the original on 2006-07-02.
^Core temperature of a high–mass (>8–11 solar masses) star after it leaves the main sequence on the Hertzsprung–Russell diagram and begins the alpha process (which lasts one day) of fusing silicon–28 into heavier elements in the following steps: sulfur–32 → argon–36 → calcium–40 → titanium–44 → chromium–48 → iron–52 → nickel–56. Within minutes of finishing the sequence, the star explodes as a Type II supernova.
^Based on a computer model that predicted a peak internal temperature of 30 MeV (350 GK) during the merger of a binary neutron star system (which produces a gamma-ray burst). The neutron stars in the model were 1.2 and 1.6 solar masses respectively, were roughly 20 km in diameter, and were orbiting around their barycenter (common center of mass) at about 390 Hz during the last several milliseconds before they completely merged. The 350 GK portion was a small volume located at the pair's developing common core and varied from roughly 1 to 7 km across over a time span of around 5 ms. Imagine two city-sized objects of unimaginable density orbiting each other at the same frequency as the G4 musical note (the 28th white key on a piano). At 350 GK, the average neutron has a vibrational speed of 30% the speed of light and a relativistic mass 5% greater than its rest mass. Oechslin, R.; Janka, H.-T. (2006). "Torus formation in neutron star mergers and well-localized short gamma-ray bursts". Monthly Notices of the Royal Astronomical Society. 368 (4): 1489–1499. arXiv:astro-ph/0507099v2. Bibcode:2006MNRAS.368.1489O. doi:10.1111/j.1365-2966.2006.10238.x. S2CID 15036056. For a summary, see "Short Gamma-Ray Bursts: Death Throes of Merging Neutron Stars". Max-Planck-Institut für Astrophysik. Retrieved 24 September 2024.
^Battersby, Stephen (2 March 2011). "Eight extremes: The hottest thing in the universe". New Scientist. While the details of this process are currently unknown, it must involve a fireball of relativistic particles heated to something in the region of a trillion kelvin.
^Water's enthalpy of fusion (0 °C, 101.325 kPa) equates to 0.062284 eV per molecule, so adding one joule of thermal energy to 0 °C water ice causes 1.0021×10²⁰ water molecules to break away from the crystal lattice and become liquid.
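The per-molecule arithmetic in this note can be checked directly; a small sketch using the exact SI values of the Avogadro constant and the electronvolt:

```python
AVOGADRO = 6.02214076e23  # molecules per mole (exact)
EV = 1.602176634e-19      # joules per electronvolt (exact)
H_FUS = 6009.5            # J/mol, enthalpy of fusion of ice (0 degC, 101.325 kPa)

# Energy needed to free one molecule from the crystal lattice:
energy_per_molecule_j = H_FUS / AVOGADRO
energy_per_molecule_ev = energy_per_molecule_j / EV  # ~0.062284 eV

# Molecules melted per joule of added thermal energy:
molecules_per_joule = 1.0 / energy_per_molecule_j    # ~1.0021e20
```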
^Water's enthalpy of fusion is 6.0095 kJ⋅mol⁻¹ (0 °C, 101.325 kPa). Chaplin, Martin. "Water Properties (including isotopologues)". Water Structure and Science. London South Bank University. Archived from the original on 2020-11-21. The only metals with enthalpies of fusion not in the range of 6–30 kJ⋅mol⁻¹ are (on the high side) Ta, W, and Re, and (on the low side) most of the group 1 (alkali) metals plus Ga, In, Hg, Tl, Pb, and Np.
^For xenon, available values range from 2.3 to 3.1 kJ/mol. "Xenon – 54Xe: the essentials". WebElements. Retrieved 24 September 2024. Helium's heat of fusion of only 0.021 kJ/mol reflects such a weak bonding force that zero-point energy prevents helium from freezing unless it is under a pressure of at least 25 atmospheres.
^H2O specific heat capacity, Cp = 0.075327 kJ⋅mol−1⋅K−1 (25 °C); enthalpy of fusion = 6.0095 kJ/mol (0 °C, 101.325 kPa); enthalpy of vaporization (liquid) = 40.657 kJ/mol (100 °C). Chaplin, Martin. "Water Properties (including isotopologues)". Water Structure and Science. London South Bank University. Archived from the original on 2020-11-21.
^Mobile conduction electrons are delocalized, i.e. not tied to a specific atom, and behave rather like a sort of quantum gas due to the effects of zero-point energy. Consequently, even at absolute zero, conduction electrons still move between atoms at the Fermi velocity of about 1.6×106 m/s. Kinetic thermal energy adds to this speed and also causes delocalized electrons to travel farther away from the nuclei.
^No other crystal structure can exceed the 74.048% packing density of a closest-packed arrangement. The two regular crystal lattices found in nature that have this density are hexagonal close packed (HCP) and face-centered cubic (FCC). These regular lattices are at the lowest possible energy state. Diamond has a crystal structure based on an FCC lattice (the diamond cubic structure), though its packing density is well below that of a closest-packed arrangement. Note too that suitable crystalline chemical compounds, although usually composed of atoms of different sizes, can be considered as closest-packed structures when considered at the molecular level. One such compound is the common mineral known as magnesium aluminum spinel (MgAl2O4). It has a face-centered cubic crystal lattice and no change in pressure can produce a lattice with a lower energy state.
^Nearly half of the 92 naturally occurring chemical elements that can freeze under a vacuum also have a closest-packed crystal lattice. This set includes beryllium, osmium, neon, and iridium (but excludes helium); such elements therefore have zero latent heat of phase transitions to contribute to internal energy (symbol: U). In the calculation of enthalpy (formula: H = U + pV), internal energy may exclude different sources of thermal energy (particularly ZPE) depending on the nature of the analysis. Accordingly, all T = 0 closest-packed matter under a perfect vacuum has either minimal or zero enthalpy, depending on the nature of the analysis. Alberty, Robert A. (2001). "Use of Legendre Transforms in Chemical Thermodynamics" (PDF). Pure and Applied Chemistry. 73 (8): 1349. doi:10.1351/pac200173081349.
^Regarding the spelling "gage" vs. "gauge" in the context of pressures measured relative to atmospheric pressure, the preferred spelling varies by country and even by industry, and both spellings are often used within a particular industry or country. Industries in British English-speaking countries typically use the spelling "gauge pressure" for both the quantity and the instrument. Many of the largest American manufacturers of pressure transducers and instrumentation use the spelling gage pressure (the convention used here) in their formal documentation to distinguish the quantity from the instrument, which is spelled pressure gauge.
^Pressure also must be in absolute terms. The air still in a tire at a gage pressure of 0 kPa expands too as it gets hotter. It is not uncommon for engineers to overlook that one must work in terms of absolute pressure when compensating for temperature. For instance, a dominant manufacturer of aircraft tires published a document on temperature-compensating tire pressure that used gage pressure in the formula. However, the high gage pressures involved (180 psi; 12.4 bar; 1.24 MPa) mean the error would be quite small. With low-pressure automobile tires, where gage pressures are typically around 2 bar (200 kPa), failing to adjust to absolute pressure results in a significant error. "Aircraft tire ratings" (PDF). Air Michelin. Archived from the original (PDF) on 2010-02-15.[better source needed]
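The point about working in absolute pressure can be sketched as follows; the 100 kPa ambient pressure and the example temperatures are assumed illustration values:

```python
ATM = 100.0  # kPa, assumed ambient (atmospheric) pressure

def compensated_gage_pressure(gage_kpa: float, t1_c: float, t2_c: float) -> float:
    """Gage pressure after a temperature change at constant volume.

    Gay-Lussac's law (p proportional to T) applies to ABSOLUTE pressure
    and ABSOLUTE temperature, so convert in, scale, convert back out.
    """
    t1_k, t2_k = t1_c + 273.15, t2_c + 273.15
    absolute_kpa = (gage_kpa + ATM) * t2_k / t1_k
    return absolute_kpa - ATM

# A tire filled to 200 kPa gage at 20 degC, then cooled to 0 degC:
correct = compensated_gage_pressure(200.0, 20.0, 0.0)  # ~179.5 kPa gage
naive = 200.0 * 273.15 / 293.15                        # ~186.4 kPa: scaling gage pressure is wrong
```

Scaling the gage pressure directly overstates the cold-tire pressure by several kPa, which is the "significant error" the note describes for automobile tires.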
^A difference of 100 kPa is used here instead of the 101.325 kPa value of one standard atmosphere. In 1982, the International Union of Pure and Applied Chemistry (IUPAC) recommended that for the purposes of specifying the physical properties of substances, the standard pressure (atmospheric pressure) should be defined as precisely 100 kPa (≈ 750.062 Torr). Besides being a round number, this had a very practical effect: relatively few people live and work at precisely sea level; 100 kPa equates to the mean pressure at an altitude of about 112 m, which is closer to the 194 m worldwide median altitude of human habitation. For especially low-pressure or high-accuracy work, true atmospheric pressure must be measured. "Standard pressure". Compendium of Chemical Terminology (online 3rd ed.). International Union of Pure and Applied Chemistry. 2014. doi:10.1351/goldbook.S05921.
^ abPlanck, M. (1945). Treatise on Thermodynamics. Dover Publications. §§90, 137, eqs. (39), (40), and (65).
^A justification still needs to be added for requiring the function g(T) to be monotonic; the Carnot efficiency (the efficiency common to all reversible engines) may provide one.
^Fermi, E. (1956). Thermodynamics. Dover Publications. p. 48, eq. (64).
^Lambert, Johann Heinrich (1779). Pyrometrie. Berlin: Haude & Spener.
^Thomson, William (October 1848). "On an Absolute Thermometric Scale". Philosophical Magazine. Also published in Thomson, William (1882). Mathematical and Physical Papers. Vol. 1. Cambridge University Press. pp. 100–106.
^Lemons, Don S. (2020). "Chapter 4: Absolute Temperature". Thermodynamic Weirdness: From Fahrenheit to Clausius (First MIT Press paperback ed.). MIT Press. ISBN 978-0-262-53894-7. OCLC 1143850952.
^Lemons, Don S. (2020). "Chapter 8: Absolute Temperature—Again". Thermodynamic Weirdness: From Fahrenheit to Clausius (1st paperback ed.). Cambridge, Massachusetts: MIT Press. ISBN 978-0-262-53894-7. OCLC 1143850952.
^Quinn, Terry (1990). Temperature (2nd ed.). Academic Press. ISBN0-12-569681-7.
^According to The Oxford English Dictionary (OED), the term "Celsius's thermometer" had been used at least as early as 1797. Further, the term "The Celsius or Centigrade thermometer" was again used in reference to a particular type of thermometer at least as early as 1850. The OED also cites this 1928 reporting of a temperature: "My altitude was about 5,800 metres, the temperature was 28° Celsius". However, dictionaries seek to find the earliest use of a word or term and are not a useful resource as regards the terminology used throughout the history of science. According to several writings of Terry Quinn CBE FRS, Director of the BIPM (1988–2004), including Temperature Scales from the early days of thermometry to the 21st century[56] as well as Temperature,[57] the term Celsius in connection with the centigrade scale was not used whatsoever by the scientific or thermometry communities until after the CIPM and CGPM adopted the term in 1948. The BIPM was not even aware that degree Celsius was in sporadic, non-scientific use before that time. The twelve-volume, 1933 edition of the OED did not even have a listing for the word Celsius (but did have listings for both centigrade and centesimal in the context of temperature measurement). The 1948 adoption of Celsius accomplished three objectives:
All common temperature scales would have their units named after someone closely associated with them; namely, Kelvin, Celsius, Fahrenheit, Réaumur and Rankine.
Notwithstanding the important contribution of Linnaeus who gave the Celsius scale its modern form, Celsius's name was the obvious choice because it began with the letter C. Thus, the symbol °C that for centuries had been used in association with the name centigrade could continue to be used and would simultaneously inherit an intuitive association with the new name.
The new name eliminated the ambiguity of the term centigrade, freeing it to refer exclusively to the French-language name for the unit of angular measurement.
Thermodynamic temperature is the fundamental physical quantity in thermodynamics that characterizes the thermal state of a system, defined by the relation T = (∂U/∂S)_{V,N}, where U is the internal energy, S is the entropy, V is the volume, and N is the number of particles. This definition, derived from the fundamental thermodynamic relation dU = T dS − p dV + Σᵢ μᵢ dNᵢ, is independent of any specific material properties or microscopic structure, rendering it universally applicable to diverse systems, including non-material entities such as black holes, where the Hawking temperature follows analogous thermodynamic principles.[1][2][3] In contrast, the kinetic temperature, which relates temperature to the average translational kinetic energy of particles (e.g., (3/2)kT per particle for an ideal gas, with k the Boltzmann constant), is a less general concept, primarily suited to classical gases and reliant on assumptions about particle motion.[4] Thermodynamic temperature is realized on an absolute scale beginning at absolute zero, the theoretical minimum temperature of 0 kelvin where thermal motion ceases; it measures the average internal energy, primarily the kinetic energy of the random motion of particles in a system.[5] It serves as a fundamental parameter in thermodynamics, distinct from empirical temperature scales like Celsius or Fahrenheit, which are based on observable properties such as expansion or freezing points without reference to an absolute lower limit.[6]

In the International System of Units (SI), thermodynamic temperature is quantified using the kelvin (K), defined by fixing the Boltzmann constant at exactly 1.380649×10⁻²³ joules per kelvin, linking temperature directly to energy.[5] This definition, adopted in 2019, realizes the kelvin through a fundamental physical constant rather than material properties; before 2019 the kelvin was instead defined by fixing the triple point of water at exactly 273.16 K, such that one kelvin was 1/273.16 of the thermodynamic temperature of the triple point above absolute zero.[7] The scale relates to the Celsius scale by T(°C) = T(K) − 273.15, so that water freezes at 273.15 K and boils at 373.15 K under standard conditions.[7]

The thermodynamic temperature scale emerges from the second law of thermodynamics: the efficiency of a reversible heat engine operating between two reservoirs is given by η = 1 − T_c/T_h, where T_c and T_h are the absolute temperatures of the cold and hot reservoirs, a result that holds for all systems independent of the working substance.[6] For an ideal gas, the average translational kinetic energy per particle is (3/2)kT, where k is the Boltzmann constant, illustrating the direct proportionality between temperature and molecular agitation.[5] This absolute framework is essential for precise calculations in fields like statistical mechanics, cryogenics, and cosmology, where temperatures range from near 0 K in deep space to tens of millions of kelvins in stellar cores.[5][8]
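The two relations above, the reversible-engine efficiency and the mean translational kinetic energy, can be evaluated numerically. A minimal sketch with assumed reservoir temperatures:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact since 2019)

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """eta = 1 - Tc/Th, valid only for absolute temperatures."""
    return 1.0 - t_cold_k / t_hot_k

def mean_translational_ke(t_k: float) -> float:
    """(3/2) k T per particle of an ideal gas, in joules."""
    return 1.5 * K_B * t_k

# Assumed reservoirs at 500 K and 300 K:
eta = carnot_efficiency(500.0, 300.0)  # 0.4
ke = mean_translational_ke(300.0)      # ~6.21e-21 J per particle
```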
Fundamentals
Overview
Thermodynamic temperature is an absolute measure of the average translational kinetic energy of particles in a system, with temperature directly proportional to the average kinetic energy per degree of freedom.[4] This concept provides a fundamental quantification of thermal energy at the microscopic level, reflecting the random motion of atoms and molecules.[9]

Unlike relative temperature scales such as Celsius or Fahrenheit, which are empirical and defined by arbitrary reference points like the freezing and boiling points of water, thermodynamic temperature uses an absolute scale that avoids negative values and is independent of specific substances.[10] The SI unit for thermodynamic temperature is the kelvin (K), which sets absolute zero as its lower limit, corresponding to the theoretical point where molecular motion ceases.[11]

At its core, thermodynamic temperature quantifies the tendency of systems to spontaneously exchange heat until they reach thermal equilibrium, where no net heat transfer occurs between them.[12] This principle underlies the zeroth law of thermodynamics, which establishes temperature as the property ensuring equilibrium.[13]

Thermodynamic temperature also emerges from the second law of thermodynamics, which governs the direction of heat flow and the increase of entropy, providing a rigorous framework for defining temperature independent of empirical calibrations.[14]
Absolute Zero
Absolute zero is defined as the lowest possible thermodynamic temperature, exactly 0 kelvin (K), corresponding to −273.15 degrees Celsius (°C) or 0 degrees Rankine (°R).[15] At this temperature, the thermal motion of particles theoretically ceases, representing the point where a system's entropy reaches its minimum value, as dictated by the third law of thermodynamics.[16]

Theoretically, absolute zero implies the absence of translational kinetic energy among particles, such that classical thermal agitation vanishes; however, quantum mechanical effects ensure that zero-point energy persists, preventing a complete halt to all motion even at this limit.[17] This residual energy arises from the Heisenberg uncertainty principle, which maintains inherent fluctuations in particle positions and momenta.[18]

The third law of thermodynamics further specifies that the entropy of a perfect crystalline substance is zero at absolute zero, implying that no additional heat can be extracted from such a system without altering its structure.[16] Consequently, absolute zero is unattainable through any finite number of thermodynamic processes, as each step removes only a portion of the remaining entropy, requiring an infinite sequence to reach exactly 0 K.[19]

Experimental efforts have approached absolute zero using techniques like adiabatic demagnetization, in which paramagnetic materials are cooled in a magnetic field and then demagnetized to achieve temperatures below 1 millikelvin (mK).[20] Laser cooling, another method, employs tuned laser light to reduce atomic velocities via the Doppler effect, achieving temperatures around 100 microkelvin, with advanced sub-Doppler and evaporative techniques reaching the nanokelvin range.[21] The lowest temperature achieved experimentally is 38 picokelvin (as of 2021), using ultracold quantum gases in optical lattices.[22] Near these ultra-low temperatures, phenomena such as superfluidity emerge in helium-4 below 2.17 K, where the liquid exhibits zero viscosity and flows without friction due to Bose–Einstein condensation.[23]
Temperature Scales
Kelvin Scale
The kelvin, symbol K, is the base unit of thermodynamic temperature in the International System of Units (SI). Prior to the 2019 revision, it was defined as the fraction 1/273.16 of the thermodynamic temperature of the triple point of water (T_TPW).[24] Following the redefinition, the kelvin is now defined by fixing the numerical value of the Boltzmann constant k = 1.380649×10⁻²³ J⋅K⁻¹, where the joule is the SI unit of energy.[25] This establishes the magnitude of the kelvin in terms of fundamental physical constants, ensuring an absolute scale with zero at absolute zero, the unattainable lower limit of temperature.[26]

The interval of one kelvin equals the interval of one degree Celsius, but the Kelvin scale is absolute, commencing at 0 K rather than being shifted to a freezing-point reference.[7] The conversion between the scales is given by T(K) = T(°C) + 273.15, where temperatures in Celsius are offset by this exact value to align with the absolute zero point.[7] Common reference points include the ice point of water at 0 °C, which corresponds to 273.15 K, and normal human body temperature of approximately 37 °C, equivalent to about 310 K.[7][27]

In scientific applications, the Kelvin scale is essential for contexts spanning extreme temperatures, such as cryogenics, where liquid nitrogen boils at 77 K under standard atmospheric pressure, facilitating low-temperature experiments and superconductivity studies.[28] At the opposite end, high-temperature plasmas in fusion research routinely exceed millions of kelvins; for instance, experimental confinement has achieved 100 million K for sustained durations.[29] This absolute metric ensures consistent thermodynamic calculations across disciplines, from quantum mechanics to astrophysics.
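The Kelvin–Celsius conversion and the reference points above can be sketched as:

```python
K_B = 1.380649e-23  # J/K, fixed by the 2019 SI definition

def celsius_to_kelvin(t_c: float) -> float:
    return t_c + 273.15

def kelvin_to_celsius(t_k: float) -> float:
    return t_k - 273.15

ice_point = celsius_to_kelvin(0.0)     # 273.15 K
body_temp = celsius_to_kelvin(37.0)    # 310.15 K, "about 310 K"
ln2_boiling = kelvin_to_celsius(77.0)  # ~-196 degC, liquid nitrogen
thermal_energy_1k = K_B * 1.0          # kT at 1 K: 1.380649e-23 J
```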
Rankine Scale
The Rankine scale (°R) is an absolute thermodynamic temperature scale that sets 0 °R at absolute zero and uses the same-sized degree intervals as the Fahrenheit scale, such that a change of 1 °R equals a change of 1 °F.[30][31] It was proposed in 1859 by the Scottish engineer and physicist William John Macquorn Rankine (1820–1872), after whom it is named, as part of his contributions to thermodynamics.[30]

The scale converts from Fahrenheit temperatures by adding 459.67 to the °F value, yielding the formula T(°R) = T(°F) + 459.67.[30] It relates to the Kelvin scale through the factor 9/5, given by T(°R) = (9/5) T(K).[31]

Historically, the Rankine scale emerged alongside the Fahrenheit scale in the 19th century to provide an absolute reference for thermodynamic calculations in English-unit systems, particularly as steam power expanded during the Industrial Revolution.[30] In practical engineering, it remains in use in the United States for computations involving imperial units, such as analyzing steam-engine efficiency and refrigeration cycles, where absolute temperatures are required for equations like the ideal gas law.[31][32]

For example, the boiling point of water at standard atmospheric pressure is about 672 °R (671.67 °R exactly, corresponding to 212 °F), while a typical room temperature of around 77 °F equates to approximately 537 °R.[31][30]
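The conversion formulas above can be sketched as follows, checking them against the reference points in the text:

```python
def fahrenheit_to_rankine(t_f: float) -> float:
    """T(degR) = T(degF) + 459.67"""
    return t_f + 459.67

def kelvin_to_rankine(t_k: float) -> float:
    """T(degR) = (9/5) T(K)"""
    return t_k * 9.0 / 5.0

boiling_point = fahrenheit_to_rankine(212.0)  # 671.67 degR (~672 degR)
room_temp = fahrenheit_to_rankine(77.0)       # 536.67 degR (~537 degR)
# The two routes agree: the ice point, 273.15 K or 32 degF, is 491.67 degR.
ice_point = kelvin_to_rankine(273.15)
```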
Definition and Measurement
Historical Definitions
In the early 19th century, the relationship between gas volume and temperature at constant pressure was established through experimental work, providing an initial empirical basis for temperature measurement. Jacques Charles observed around 1787 that the volume of a gas is directly proportional to its temperature, though he did not publish his findings. This proportionality was independently confirmed and published by Joseph Louis Gay-Lussac in 1802, who demonstrated that various gases expand by approximately the same fraction (about 1/273) for each degree Celsius rise in temperature between 0 °C and 100 °C, laying the groundwork for later absolute scales.[33][34]

Sadi Carnot introduced a foundational concept in 1824 with his analysis of the ideal heat engine cycle, emphasizing the role of temperature in determining efficiency without reference to an absolute scale. In his work Réflexions sur la puissance motrice du feu, Carnot described a reversible cycle operating between a hot reservoir at temperature T_hot and a cold reservoir at T_cold, for which the efficiency η is given by η = 1 − T_cold/T_hot, using temperatures on an arbitrary scale such as Celsius. This formulation highlighted the importance of temperature differences in thermodynamic processes but did not yet define an absolute temperature.[35]

In the 1850s, Rudolf Clausius and William Thomson (Lord Kelvin) formalized the concept of absolute thermodynamic temperature by building on Carnot's efficiency formula and integrating it with the emerging principle of energy conservation. Clausius, in his 1854 memoir, derived a temperature scale independent of material properties by considering the integral of heat transfers divided by temperature in reversible processes, effectively defining thermodynamic temperature T such that for a Carnot cycle, Q_hot/T_hot + Q_cold/T_cold = 0 (with the heat flows taken as signed quantities). Kelvin independently proposed an absolute scale in 1848, based on a function of temperature derived from Carnot's analysis of engine efficiency, and by 1851 he adopted the ideal-gas temperature as the basis for the Kelvin scale.[35]

The transition to a practical thermodynamic scale relied on the constant-volume gas thermometer, which measures temperature via the pressure of a gas held at fixed volume, approximating the ideal-gas behavior in which pressure is proportional to absolute temperature. This instrument provided an operational definition calibrated against reproducible fixed points, evolving from early 19th-century designs to high-precision versions by the mid-20th century. In 1954, the 10th General Conference on Weights and Measures (CGPM) defined the Kelvin scale by fixing the triple point of water, where ice, liquid water, and water vapor coexist in equilibrium, at exactly 273.16 K, establishing a universal reference for thermodynamic temperature measurements.[36][35]
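The constant-volume gas-thermometer relation described above can be sketched under the 1954 fixed point: in the ideal-gas limit, temperature follows the pressure ratio relative to the triple-point reading. The pressure values here are assumed for illustration:

```python
T_TP = 273.16  # K, triple point of water (1954 CGPM definition)

def gas_thermometer_temperature(p: float, p_tp: float) -> float:
    """T = 273.16 K * (p / p_tp) at constant volume, in the ideal-gas limit."""
    return T_TP * p / p_tp

# A gas sample whose pressure reads 20% higher than at the triple point:
t = gas_thermometer_temperature(1.2, 1.0)  # 327.792 K
```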
Modern Redefinition of the Kelvin
The 2019 revision of the SI redefined the kelvin by fixing the numerical value of the Boltzmann constant, establishing thermodynamic temperature as a quantity derived from fundamental physical constants rather than material artifacts. The kelvin, symbol K, is the SI unit of thermodynamic temperature; it is defined by taking the fixed numerical value of the Boltzmann constant k to be exactly 1.380649×10⁻²³ when expressed in the unit J⋅K⁻¹, which is equal to kg⋅m²⋅s⁻²⋅K⁻¹.[26] This definition implies that a temperature of 1 K corresponds to a thermal energy kT of 1.380649×10⁻²³ J.[26] By anchoring the kelvin to k, the redefinition integrates temperature measurement with the energy unit (the joule), which itself derives from fixed values of the Planck constant, the speed of light, and the caesium hyperfine transition frequency, thereby linking the kelvin seamlessly to the broader SI framework.[24]

The primary rationale for this redefinition was to eliminate the kelvin's dependence on the triple point of water, an empirical reference prone to practical challenges in reproducibility and universality across diverse conditions. Prior to 2019, the kelvin had been defined since 1967 as exactly 1/273.16 of the thermodynamic temperature of the triple point of water (T_TPW), assigning T_TPW the exact value of 273.16 K and specifying Vienna Standard Mean Ocean Water as the isotopic composition.[24] This artifact-based approach, while stable, limited metrological advances because it required physical realization of the triple point, which could vary slightly with isotopic composition or preparation method, introducing uncertainties on the order of 50 μK among standard cells.[24] The new constant-based definition enhances precision by rooting the unit in a universal property from statistical mechanics, where k relates macroscopic temperature to microscopic energy per degree of freedom, thereby promoting invariance and accessibility independent of specific substances.[26]

In comparison to the previous definition, the 2019 redefinition maintains continuity in the scale's size (the temperature of the water triple point remains numerically 273.16 K for practical purposes) but shifts the foundational standard from a reproducible physical event to an exact constant value, reducing reliance on secondary calibrations.[26] This change also ties the kelvin directly to the joule, as k's fixed value in J⋅K⁻¹ now derives temperature intervals from energy measurements, inverting the prior hierarchy in which temperature defined energy scales in thermal contexts.[37] The transition caused no disruption to existing temperature scales, with the Celsius scale still offset by exactly 273.15 from the kelvin, but it fosters innovations in linking thermometry to electrical and quantum standards.[26]

The redefinition has profound implications for metrology, enabling direct realization of the kelvin through primary methods that measure thermal fluctuations tied to k, such as Johnson noise thermometry (JNT), which infers temperature from the mean-square voltage noise across a resistor via the Nyquist relation ⟨V²⟩ = 4kTRΔf.[26] In JNT, the noise power is compared to pseudo-random waveforms synthesized from Josephson junctions serving as quantum-accurate voltage references, achieving relative uncertainties of approximately 4 parts per million (ppm) at the water triple point and around 1 mK (about 3 ppm) near 300 K, with suitability for temperatures from 4 K to 1000 K.[38] These advances support higher-precision calibrations, potentially extending to parts-per-billion levels in future implementations, and broaden applications in fields requiring traceable temperature measurements without fixed-point artifacts.[26]
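The Nyquist relation used in Johnson noise thermometry can be evaluated directly; a minimal sketch with an assumed resistor value and measurement bandwidth:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(t_k: float, r_ohm: float, bandwidth_hz: float) -> float:
    """RMS thermal-noise voltage across a resistor: sqrt(4 k T R delta_f)."""
    return math.sqrt(4.0 * K_B * t_k * r_ohm * bandwidth_hz)

# Assumed 10 kohm resistor at 300 K measured over a 10 kHz bandwidth:
v_noise = johnson_noise_vrms(300.0, 10e3, 10e3)  # ~1.29 microvolts
```

Inverting the same relation, a measured noise voltage yields T once R and the bandwidth are known, which is how JNT realizes the kelvin from the fixed value of k.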
Microscopic Basis
Kinetic Theory and Translational Motion
In the kinetic theory of gases, thermodynamic temperature quantifies the average translational kinetic energy of particles in an ideal gas, providing a microscopic interpretation of macroscopic temperature. For a monatomic ideal gas in three-dimensional space, the average translational kinetic energy per molecule is (3/2)k_B T, where k_B = 1.380649×10⁻²³ J/K is the Boltzmann constant and T is the absolute temperature in kelvins. This relation arises from the equipartition theorem, which assigns (1/2)k_B T to each quadratic degree of freedom in the energy; the three translational degrees (along x, y, and z) yield the factor of 3/2. James Clerk Maxwell established this foundational link in his 1860 derivation of the dynamical theory of gases, showing that temperature measures the random translational motion of molecules, assuming elastic collisions and negligible intermolecular forces.[39]

The root-mean-square speed v_rms, defined as the square root of the mean of the squared speeds, follows directly from the average kinetic energy: (1/2)m⟨v²⟩ = (3/2)k_B T, so

v_rms = √⟨v²⟩ = √(3 k_B T / m),

where m is the mass of a molecule.[39] This expression shows that molecular speeds increase with temperature, in proportion to √T. For dry air (primarily N₂ and O₂, average m ≈ 4.8×10⁻²⁶ kg) at room temperature T = 300 K, v_rms ≈ 500 m/s, comparable to the speed of sound in air and illustrating the vigorous motion underlying everyday thermal phenomena.

The distribution of molecular speeds is given by the Maxwell–Boltzmann distribution, derived by Maxwell in 1860 and later generalized by Ludwig Boltzmann. The probability density function for speeds v is

f(v) dv = 4πv² (m / (2π k_B T))^(3/2) exp(−m v² / (2 k_B T)) dv,

which peaks at the most probable speed v_p = √(2 k_B T / m) and has a mean speed ⟨v⟩ = √(8 k_B T / (π m)).[39] As temperature rises, the distribution broadens and shifts to higher speeds, with the exponential tail reflecting the Boltzmann factor exp(−E/k_B T), where E is the kinetic energy; higher T populates higher-energy states more fully. This distribution underpins transport properties such as diffusion and viscosity in gases.

At extremely high temperatures, relativistic effects modify the classical kinetic theory, as molecular speeds approach the speed of light c. The non-relativistic approximation holds when v_rms ≪ c, which requires k_B T ≪ mc², or T ≪ 10¹³ K for typical atomic masses (e.g., protons).[40] Above this threshold, as in astrophysical plasmas or early-universe conditions, the average energy per particle approaches 3 k_B T in the ultra-relativistic limit, altering pressure and energy relations significantly.
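The characteristic speeds above can be evaluated for a concrete gas; a sketch for molecular nitrogen (molar mass 28.0134 g/mol) at 300 K:

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
AVOGADRO = 6.02214076e23  # 1/mol

def characteristic_speeds(molar_mass_kg_per_mol: float, t_k: float):
    """Return (v_p, v_mean, v_rms) in m/s for a Maxwell-Boltzmann gas."""
    m = molar_mass_kg_per_mol / AVOGADRO  # mass of one molecule, kg
    v_p = math.sqrt(2.0 * K_B * t_k / m)                 # most probable speed
    v_mean = math.sqrt(8.0 * K_B * t_k / (math.pi * m))  # mean speed
    v_rms = math.sqrt(3.0 * K_B * t_k / m)               # root-mean-square speed
    return v_p, v_mean, v_rms

v_p, v_mean, v_rms = characteristic_speeds(28.0134e-3, 300.0)
# Roughly 422, 476, and 517 m/s; note v_p < v_mean < v_rms always.
```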
Internal Energy and Molecular Degrees of Freedom
In classical statistical mechanics, the equipartition theorem asserts that each quadratic term in the energy expression of a system in thermal equilibrium contributes an average of 21kBT to the total energy per molecule, where kB is Boltzmann's constant and T is the thermodynamic temperature./University_Physics_II_-Thermodynamics_Electricity_and_Magnetism(OpenStax)/02%3A_The_Kinetic_Theory_of_Gases/2.04%3A_Heat_Capacity_and_Equipartition_of_Energy) For molecular systems, these degrees of freedom encompass translational motion (three per molecule), rotational motion (two for linear molecules like diatomic gases), and vibrational motion (two per vibrational mode, accounting for both kinetic and potential energy).[41] This theorem provides the microscopic foundation for linking thermodynamic temperature to the distribution of molecular energies across all accessible modes.The total internal energy U of an ideal gas is thus given by U=2fNkBT, where N is the number of molecules and f is the effective number of degrees of freedom, which depends on temperature because not all modes are equally excited at all temperatures.[42] For a monatomic ideal gas, f=3 (translational only), yielding U=23NkBT. For diatomic gases like nitrogen or oxygen at room temperature, f=5 (three translational plus two rotational), but at higher temperatures above approximately 1000 K, the vibrational modes become significantly excited, increasing f to 7 and thus raising the internal energy contribution per molecule.[43] This temperature-dependent activation reflects the quantum nature of vibrational energy levels, where excitation requires thermal energy comparable to the spacing between those levels.At low temperatures, higher-energy modes such as rotation and vibration progressively "freeze out," meaning their populations diminish exponentially, so the internal energy approaches the translational limit of 23NkBT as T nears 0 K. 
At absolute zero, the internal energy reaches its minimum value for the system but remains non-zero due to zero-point energy, the residual ground-state energy inherent in quantum mechanical systems like harmonic oscillators representing molecular vibrations.[44] The third law of thermodynamics implies that, for a perfect crystal in its ground state, this minimum energy configuration is unique, with entropy approaching zero, underscoring the unattainability of absolute zero in finite processes while defining the baseline for thermodynamic temperature scales.[45]
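The relation U = (f/2)NkBT lends itself to a short numerical illustration. In this sketch (function names are illustrative), the degrees-of-freedom counts follow the text: 3 for a monatomic gas, 5 for a diatomic gas at room temperature:

```python
# Sketch of U = (f/2) N k_B T from the equipartition theorem.

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro's number (molecules per mole)

def internal_energy(n_molecules, temperature_k, dof):
    """Total internal energy of an ideal gas, U = (f/2) N k_B T."""
    return 0.5 * dof * n_molecules * K_B * temperature_k

# One mole of a monatomic gas (f = 3) at 300 K: U = (3/2) R T ~ 3.74 kJ
print(internal_energy(N_A, 300.0, 3))
# The same mole as a diatomic gas at room temperature (f = 5): ~6.24 kJ
print(internal_energy(N_A, 300.0, 5))
```

Raising f from 5 to 7 when vibrations activate increases U per molecule by kBT, as described above.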
Macroscopic Relations
Ideal Gas Law
The ideal gas law provides a fundamental macroscopic relation connecting thermodynamic temperature to the pressure, volume, and amount of a gas, expressed as PV = nRT, where P is the pressure, V is the volume, n is the number of moles, T is the absolute temperature, and R is the universal gas constant, approximately 8.314 J/(mol·K).[46] An equivalent form, PV = NkBT, uses N for the total number of particles and kB for Boltzmann's constant, with R = NAkB linking the molar scale to the per-particle scale via Avogadro's number NA.[47]

Historically, the ideal gas law emerged from empirical observations: Boyle's law (1662) established that pressure and volume are inversely proportional at constant temperature, Charles's law (1787) showed volume proportional to temperature at constant pressure, and Avogadro's law (1811) indicated volume proportional to the number of molecules at constant pressure and temperature.[48] These were unified by Émile Clapeyron in 1834 into the modern form.[49] In thermometry, the law underpins constant-volume gas thermometers, where temperature is inferred from pressure changes in a fixed volume of gas, providing a practical realization of the thermodynamic temperature scale over moderate ranges.[50]

From kinetic theory, the law derives from the molecular interpretation of pressure as P = (1/3)ρv_rms^2, where ρ is the mass density and v_rms is the root-mean-square speed of the molecules.[51] This connects to temperature through the average translational kinetic energy per molecule, (1/2)mv_rms^2 = (3/2)kBT, yielding PV = NkBT upon substitution and simplification.[52]

The law holds well for gases at low densities (corresponding to low pressures) and high temperatures, where intermolecular forces and molecular volumes are negligible.[53] Deviations occur at high density or low temperature, such as near condensation, and can be approximated by corrections like the van der Waals equation without altering the core ideal behavior.[54]
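A minimal sketch of PV = nRT, rearranged for pressure and for temperature (the second form mirrors how a constant-volume gas thermometer infers temperature from pressure; function names are illustrative):

```python
# Sketch: the ideal gas law PV = nRT rearranged for one variable at a time.
R = 8.314  # universal gas constant, J/(mol*K)

def pressure(n_mol, temperature_k, volume_m3):
    """P = nRT / V for an ideal gas."""
    return n_mol * R * temperature_k / volume_m3

def temperature(pressure_pa, volume_m3, n_mol):
    """T = PV / nR, as used in constant-volume gas thermometry."""
    return pressure_pa * volume_m3 / (n_mol * R)

# One mole at 273.15 K in 22.4 L gives roughly one atmosphere.
p = pressure(1.0, 273.15, 0.0224)
print(p)  # ~1.01e5 Pa

# Inverting the relation recovers the temperature.
print(temperature(p, 0.0224, 1.0))  # 273.15
```

The same functions deviate from real-gas behavior at high density or low temperature, as noted above.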
Internal Energy and Heat Capacities
In thermodynamics, the internal energy U of a system is a function of temperature T, particularly for processes at constant volume, where the change in internal energy is given by dU = CV dT, with CV denoting the heat capacity at constant volume.[55] This relation arises because, for many systems like ideal gases, U depends solely on T, and CV is defined as CV = (∂U/∂T)V.[56]

For an ideal gas, the molar heat capacity at constant volume is CV,m = (f/2)R, where f is the number of degrees of freedom per molecule and R is the universal gas constant.[56] For example, monatomic gases have f = 3, yielding CV,m = (3/2)R, while diatomic gases at room temperature have f = 5, giving CV,m = (5/2)R.[56] The molar heat capacity at constant pressure is then CP,m = CV,m + R, reflecting the additional work done in expansion against pressure.[55]

During phase changes, such as melting or boiling, the temperature remains constant while heat is absorbed or released as latent heat L.[57] According to the first law of thermodynamics, ΔU = Q − W, where Q is the heat transfer (equal to mL for mass m) and W is the work done, often PΔV for constant-pressure processes; thus, the internal energy change arises from alterations in molecular potential energy without a temperature shift.[58]

Heat capacities exhibit temperature dependence due to the progressive excitation of molecular vibrational and rotational modes at higher temperatures.[59] For solids, the Dulong-Petit law states that at sufficiently high temperatures (well above the characteristic vibrational temperature), the molar heat capacity at constant volume approaches approximately 3R per mole of atoms, as each atom contributes three vibrational degrees of freedom, each carrying both kinetic and potential energy (kBT per mode).[59] This classical limit holds for most metals at room temperature, providing a benchmark for thermal properties in engineering applications.[59]
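The interplay of sensible heat (Q = mcΔT) and latent heat (Q = mL) described above can be illustrated with a short calculation. The constants are standard textbook values, not taken from this article, and the function name is an illustrative choice:

```python
# Sketch: heat absorbed taking 0 C ice to warm liquid water, combining
# latent heat of fusion (no temperature change) with sensible heating.

C_WATER = 4186.0   # specific heat of liquid water, J/(kg*K)
L_FUSION = 3.34e5  # latent heat of fusion of ice, J/kg

def heat_to_melt_and_warm(mass_kg, final_temp_c):
    """Heat taking ice at 0 C to liquid water at final_temp_c."""
    latent = mass_kg * L_FUSION               # temperature holds at 0 C
    sensible = mass_kg * C_WATER * final_temp_c
    return latent + sensible

# 1 kg of ice to 25 C water: 334 kJ (melting) + ~105 kJ (warming).
print(heat_to_melt_and_warm(1.0, 25.0))  # ~4.39e5 J
```

Note that the latent term dominates: melting the ice costs more than three times as much heat as the subsequent 25-degree warming, even though the temperature never changes during melting.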
Heat Transfer Mechanisms
Conduction and Phonons
Thermal conduction represents a primary mechanism for heat transfer in solids and liquids, driven by temperature gradients that arise from differences in thermodynamic temperature across a material. In these media, heat flows from regions of higher temperature to regions of lower temperature, a diffusive process consistent with the second law of thermodynamics. This transfer occurs through the collective motion of particles or quasiparticles, such as phonons in insulators and electrons in metals, rather than through free molecular collisions as in gases. The macroscopic description of this process is captured by Fourier's law, which posits that the heat flux q is proportional to the negative gradient of temperature ∇T, expressed as q = −κ∇T, where κ is the thermal conductivity, a material-specific property that quantifies the material's ability to conduct heat.[60][61]

In insulating solids, thermal conduction is predominantly mediated by phonons, which are quantized vibrational modes of the atomic lattice. These phonons act as carriers of thermal energy, propagating through the crystal lattice much like particles in a gas, with their transport governed by the kinetic theory of heat conduction.
The thermal conductivity κ in such materials is limited by the mean free path l of phonons, which is determined by scattering from lattice defects, boundaries, and anharmonic interactions; a simplified expression from this theory yields κ ≈ (1/3)Cvl, where C is the specific heat capacity per unit volume and v is the speed of sound in the material, approximating the average phonon velocity.[62][63] This phonon-based mechanism highlights how temperature influences conduction: higher temperatures increase phonon populations and thus enhance energy transport, though scattering also intensifies, often producing a peak in κ at intermediate temperatures.

In metallic solids, electrons dominate thermal conduction due to their high mobility, contributing significantly more than phonons in many cases. The relationship between thermal and electrical conductivities is encapsulated in the Wiedemann-Franz law, which states that κ/(σT) = L, where σ is the electrical conductivity, T is the absolute temperature, and L is the Lorenz number, approximately 2.44×10^−8 W·Ω·K^−2 for many metals at room temperature.[64] This law arises from the free-electron model, where the same electrons responsible for charge transport also carry heat, with the temperature dependence reflecting the Fermi-Dirac statistics of electrons.[65]

The process of heat conduction inherently involves entropy production, ensuring compliance with the second law of thermodynamics. When heat flows down a temperature gradient according to Fourier's law, the entropy gained by the colder region exceeds the entropy lost by the hotter region, resulting in a net positive entropy production rate, Ṡ = ∫ q·∇(1/T) dV > 0.[66] At thermal equilibrium, where ∇T = 0, no heat flows and entropy production ceases, maintaining a state of maximum entropy for the isolated system.
This entropic perspective underscores conduction as an irreversible process that dissipates the potential for work inherent in temperature differences.
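Fourier conduction can be illustrated with an explicit finite-difference relaxation of the 1D heat equation dT/dt = α·d²T/dx². Grid size, boundary temperatures, and the time-step coefficient below are illustrative choices:

```python
# Sketch: explicit finite-difference relaxation of 1D heat conduction in a
# rod with fixed-temperature ends, showing diffusion toward steady state.

def step(temps, alpha_dt_dx2):
    """One explicit update of dT/dt = alpha * d2T/dx2 (ends held fixed)."""
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + alpha_dt_dx2 * (temps[i-1] - 2*temps[i] + temps[i+1])
    return new

# Rod with a 400 K hot end and a 300 K cold end, interior initially cold.
temps = [400.0] + [300.0] * 9
for _ in range(2000):
    temps = step(temps, 0.25)  # coefficient <= 0.5 keeps the scheme stable

# The steady state approaches the linear profile implied by q = -kappa*dT/dx.
print([round(t) for t in temps])
```

Once the transient decays, the profile is linear, which is exactly the condition under which the heat flux q = −κ∇T is uniform along the rod.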
Black-Body Radiation
Black-body radiation refers to the electromagnetic radiation emitted by an idealized object known as a black body, which absorbs all incident radiation irrespective of wavelength or direction and re-emits energy dependent solely on its temperature. This idealization serves as a fundamental model for understanding thermal radiation, where the black body acts as both a perfect absorber and a perfect emitter in thermal equilibrium. The concept underpins the spectral distribution of radiation from heated objects, providing a temperature-dependent signature that distinguishes thermal emission from other light sources.

The spectral energy density u(ν,T) of black-body radiation within a cavity, representing the energy per unit volume per unit frequency interval at temperature T, is described by Planck's law:

u(ν,T) = (8πhν^3/c^3) · 1/(exp(hν/kBT) − 1),

where ν is the frequency, h is Planck's constant, c is the speed of light in vacuum, and kB is the Boltzmann constant. This formula, derived from quantum considerations of oscillator energies in equilibrium with radiation, resolves the ultraviolet catastrophe predicted by classical theories and accurately matches experimental spectra for thermal emitters. Integrating u(ν,T) over all frequencies yields the total energy density u = aT^4, where a = 4σ/c and σ is the Stefan-Boltzmann constant.

The Stefan-Boltzmann law quantifies the total radiative power emitted by a black body, stating that the energy flux from its surface is j = σT^4, or, for a body of surface area A, the total power P = σAT^4. Here σ = 5.670×10^−8 W·m^−2·K^−4, a value fixed exactly by fundamental constants (quoted here to four digits).
This law emerges from integrating Planck's spectral distribution and highlights the strong temperature dependence of total emission; the geometric factor of c/4 relating the surface flux to the cavity energy density, j = (c/4)u, arises from isotropic radiation within the cavity escaping through one surface.

Wien's displacement law relates the peak of the black-body spectrum to temperature, asserting that the wavelength λmax at which the spectral radiance is maximum satisfies λmax·T = b, where b ≈ 2.898×10^−3 m·K is Wien's displacement constant. This implies that as temperature increases, the peak emission shifts to shorter wavelengths, explaining the color progression from red to blue in heated objects. The law provides a practical tool for inferring temperatures from observed spectra, such as in stellar atmospheres.

In heat transfer contexts, black-body radiation serves as a primary mechanism for energy exchange, particularly dominant at elevated temperatures (above ~1000 K) or in vacuum, where molecular collisions enabling conduction or convection are negligible. The process involves irreversible entropy production, with the entropy flux φs = (4/3)(j/T) for black-body emission, reflecting the thermodynamic cost of non-equilibrium radiation transfer in open systems.
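The Stefan-Boltzmann and Wien laws above reduce to one-line formulas; this sketch evaluates both using the constants quoted in the text (function names are illustrative):

```python
# Sketch: Stefan-Boltzmann total flux and Wien peak wavelength.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
B_WIEN = 2.898e-3  # Wien's displacement constant, m K

def radiant_flux(temperature_k):
    """Black-body surface flux j = sigma * T^4, in W/m^2."""
    return SIGMA * temperature_k ** 4

def peak_wavelength(temperature_k):
    """Wien's law: lambda_max = b / T, in metres."""
    return B_WIEN / temperature_k

# A surface near the Sun's effective temperature (~5800 K):
print(radiant_flux(5800))     # ~6.4e7 W/m^2
print(peak_wavelength(5800))  # ~5.0e-7 m, i.e. ~500 nm (visible light)
```

The T^4 dependence means doubling the temperature multiplies the emitted flux sixteenfold, while the peak wavelength halves, consistent with the color shift described above.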
Applications and Examples
Practical Uses
Thermodynamic temperature plays a central role in thermometry, where various devices exploit material properties that vary predictably with temperature to enable precise measurements. Resistance temperature detectors (RTDs) operate on the principle that the electrical resistance of metals such as platinum increases with temperature due to the positive temperature coefficient of resistivity, allowing accurate sensing over a wide range from −200 °C to 850 °C.[67] Thermocouples, another common type, rely on the Seebeck effect, in which a voltage is generated across a junction of two dissimilar metals due to a temperature gradient, providing robust measurements in harsh environments up to 2300 °C.[68] For non-contact applications, infrared pyrometers detect thermal radiation emitted by objects, inferring temperature from the intensity of infrared light based on black-body principles, which is essential for measuring moving or inaccessible surfaces without physical interaction.[69]

In engineering, thermodynamic temperature is critical for optimizing systems like heating, ventilation, and air conditioning (HVAC), where efficiency is quantified by the coefficient of performance (COP), ideally approaching the Carnot limit for heating of COP = T_hot / (T_hot − T_cold) with temperatures in kelvin, guiding design to maximize heat transfer while minimizing energy input.[70] Superconductivity applications, such as magnetic resonance imaging (MRI) machines and particle accelerators, require maintaining temperatures below about 10 K to achieve zero electrical resistance, enabling high-current operation with minimal energy loss and strong magnetic fields.[71]

Scientific research leverages thermodynamic temperature to probe extreme conditions, as seen in cosmology, where the cosmic microwave background (CMB) radiation maintains a near-uniform temperature of 2.725 K, serving as a key observable for understanding the universe's early evolution and large-scale structure.[72] In fusion research, plasmas are heated to
approximately 10^8 K to overcome electrostatic repulsion between nuclei, facilitating deuterium-tritium reactions in devices like tokamaks for potential clean energy production.[73] Advances in quantum thermometry since 2019 have introduced enhanced precision using non-Gaussian states and topological probes, achieving sub-kelvin sensitivity in noisy environments through quantum coherence and entanglement, surpassing classical limits in cryogenic and quantum information systems.[74]

Everyday applications of thermodynamic temperature ensure safety and functionality, such as in food storage, where refrigerators are maintained at about 4 °C (277 K) to inhibit bacterial growth, keeping perishables below the danger zone that begins at roughly 4 °C (40 °F).[75] In electronics, active cooling systems maintain junction temperatures below 125 °C (398 K) for semiconductors and surface temperatures below 60 °C (333 K) for batteries to prevent thermal runaway, a cascading failure in which heat generation exceeds dissipation, leading to device malfunction or fire in batteries and circuits.[76]
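The Carnot COP limits mentioned for HVAC design follow directly from reservoir temperatures in kelvin. This sketch (function names are illustrative; real systems fall well below these ideal values) evaluates both the heating and cooling forms:

```python
# Sketch of ideal (Carnot) coefficients of performance.

def cop_heating(t_hot_k, t_cold_k):
    """Carnot COP for a heat pump delivering heat at t_hot_k."""
    return t_hot_k / (t_hot_k - t_cold_k)

def cop_cooling(t_hot_k, t_cold_k):
    """Carnot COP for a refrigerator extracting heat at t_cold_k."""
    return t_cold_k / (t_hot_k - t_cold_k)

# Heat pump keeping a room at 294 K (21 C) against 273 K (0 C) outdoors:
print(cop_heating(294.0, 273.0))  # 14.0: the ideal limit, never reached
# Refrigerator interior at 277 K (4 C) in a 294 K kitchen:
print(round(cop_cooling(294.0, 277.0), 1))
```

Both limits grow as the temperature difference shrinks, which is why efficiency analyses must use absolute (kelvin) temperatures rather than Celsius values.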
Table of Thermodynamic Temperatures
The table below serves as a concise reference for selected thermodynamic temperatures spanning extreme low and high values in natural, biological, and technological contexts, expressed in kelvin (K). These values illustrate the broad applicability of thermodynamic temperature in describing thermal equilibrium across scales, from quantum systems to astrophysical environments. Recent measurements, such as those for planetary interiors and atmospheres, have refined estimates for certain entries, incorporating data from missions like NASA's Parker Solar Probe and ground-based observations up to 2025.
Phenomenon | Temperature (K) | Notes
Absolute zero | 0 | The theoretical minimum temperature at which thermal motion ceases, defining the Kelvin scale.
In ancient Greek philosophy, Aristotle conceptualized temperature through the qualitative properties of hot and cold, associating them with the four elements: fire as hot and dry, air as hot and moist, water as cold and moist, and earth as cold and dry.[77] These qualities were seen as fundamental to natural changes, with heat and cold arising from processes like rarefaction and condensation that altered the density of matter.[78] Building on this, the Roman physician Galen integrated temperature into his humoral theory of medicine around the 2nd century CE, positing four bodily fluids—blood (hot and moist), phlegm (cold and moist), yellow bile (hot and dry), and black bile (cold and dry)—whose balance maintained health and vitality.[79] Galen further linked innate heat, generated by the heart as a kind of furnace, to the sustenance of life, viewing imbalances in hot and cold humors as causes of disease.[80]

By the 17th century, empirical approaches emerged with the invention of thermoscopes, simple devices that indicated temperature changes without precise scales.
Galileo Galilei constructed the first known air thermoscope around 1592, consisting of a glass bulb connected to a tube immersed in water, in which expanding or contracting air caused the water level to rise or fall with temperature variations.[81] This instrument, refined by contemporaries such as Santorio Santorio, marked a shift toward quantitative observation of thermal effects.[82] In 1724, Daniel Gabriel Fahrenheit advanced this by developing the first standardized mercury-in-glass thermometer and scale, calibrating it with fixed points: 0 °F for a brine mixture of ice, water, and ammonium chloride; 32 °F for the freezing point of water; and initially 96 °F for human body temperature, with the scale later adjusted so that water's boiling point fell at 212 °F.[83]

The 18th century saw the rise of the caloric theory, which treated heat as an invisible, weightless fluid called caloric that flowed from hotter to colder bodies until equilibrium was reached.[84] Scottish chemist Joseph Black contributed foundational ideas in the 1760s by distinguishing between sensible heat (perceived temperature change) and latent heat (absorbed or released without temperature change, as in melting or boiling), through precise experiments on substances like ice and steam.[85] Antoine Lavoisier formalized the theory in the 1780s, naming the fluid "caloric" and integrating it into his chemical framework, where it explained combustion and heat capacities but was assumed to be conserved like mass.[84] Despite its successes in accounting for latent heat, the theory faltered in explaining phenomena where heat appeared to be generated indefinitely without a corresponding caloric source.

A pivotal challenge came in 1798 from Benjamin Thompson, Count Rumford, who observed excessive heat production during the boring of cannon barrels in Munich, where friction from dull tools generated enough heat to boil water continuously—far exceeding any plausible caloric reservoir in the metal.[85] Rumford argued that heat resulted from the mechanical
motion of particles rather than a conserved fluid, suggesting an early kinetic interpretation that undermined caloric theory's core assumption.[86]
Evolution of the Kelvin Scale
In 1848, William Thomson, later known as Lord Kelvin, proposed the first absolute temperature scale, derived from Sadi Carnot's theory of heat engine efficiency, which posits that the efficiency of a reversible heat engine depends solely on the temperatures of its reservoirs.[36] Thomson argued that an absolute zero must exist at which no heat could be extracted, estimating this point at approximately −273 °C based on the properties of air under constant volume. This scale provided a thermodynamic foundation independent of arbitrary fixed points, marking a shift from empirical scales like Celsius to one grounded in physical principles.

By the 1870s, the Kelvin scale gained traction internationally following the 1875 Treaty of the Metre, which established the International Committee for Weights and Measures to standardize scientific measurements. In 1887, the committee adopted the normal hydrogen thermometer scale as a practical realization of the absolute scale, enabling consistent measurements across laboratories; this was advanced by Hugh Longbourne Callendar's experiments at the Cavendish Laboratory, where he demonstrated the first reliable absolute temperature determinations using a platinum resistance thermometer calibrated against gas thermometry.
International adoption accelerated through the early 20th century, culminating in the 10th General Conference on Weights and Measures (CGPM) in 1954, which formally incorporated the kelvin into the International System of Units (SI) and defined the triple point of water—where ice, liquid water, and water vapor coexist in equilibrium—as exactly 273.16 K to anchor the scale.

From the 1960s onward, refinements to the Kelvin scale emphasized primary thermometry methods to bridge practical scales with thermodynamic ideals, notably through acoustic gas thermometry (AGT), which measures temperature via the speed of sound in a monatomic gas such as argon or helium.[87] AGT experiments, beginning with early implementations in the 1970s and achieving high precision by the 2000s, quantified deviations between the International Temperature Scale of 1990 (ITS-90) and true thermodynamic temperature, with uncertainties reduced to parts in 10^6 by 2010. These efforts directly supported the 2019 SI redefinition of the kelvin at the 26th CGPM, which fixed the Boltzmann constant at exactly 1.380649 × 10^{-23} J/K, eliminating the need for physical artifacts like the water triple point and tying the scale to fundamental constants for universal reproducibility. An earlier milestone had come in 1887 with Callendar's validation of absolute measurements, which confirmed Thomson's estimate of absolute zero to within 0.5% using resistance thermometry.

Post-2019, the fixed Boltzmann constant has enabled unprecedented precision in low-temperature physics, such as Bose-Einstein condensate (BEC) experiments achieving nanokelvin regimes with relative uncertainties below 10^{-6}, facilitating advances in quantum simulation and ultracold atomic clocks.