Chemical potential

from Wikipedia

In thermodynamics, the chemical potential of a species is the energy that can be absorbed or released due to a change of the particle number of the given species, e.g. in a chemical reaction or phase transition. The chemical potential of a species in a mixture is defined as the rate of change of free energy of a thermodynamic system with respect to the change in the number of atoms or molecules of the species that are added to the system. Thus, it is the partial derivative of the free energy with respect to the amount of the species, all other species' concentrations in the mixture remaining constant. When both temperature and pressure are held constant, and the number of particles is expressed in moles, the chemical potential is the partial molar Gibbs free energy.[1][2] At chemical equilibrium or in phase equilibrium, the total sum of the product of chemical potentials and stoichiometric coefficients is zero, as the free energy is at a minimum.[3][4][5] In a system in diffusion equilibrium, the chemical potential of any chemical species is uniformly the same everywhere throughout the system.[6]

In semiconductor physics, the chemical potential of a system of electrons is known as the Fermi level.[7]

Overview


Particles tend to move from higher chemical potential to lower chemical potential because this reduces the free energy. In this way, chemical potential is a generalization of "potentials" in physics such as gravitational potential. When a ball rolls down a hill, it is moving from a higher gravitational potential (higher internal energy thus higher potential for work) to a lower gravitational potential (lower internal energy). In the same way, as molecules move, react, dissolve, melt, etc., they will always tend naturally to go from a higher chemical potential to a lower one, changing the particle number, which is the conjugate variable to chemical potential.

A simple example is a system of dilute molecules diffusing in a homogeneous environment. In this system, the molecules tend to move from areas with high concentration to low concentration, until eventually, the concentration is the same everywhere. The microscopic explanation for this is based on kinetic theory and the random motion of molecules. However, it is simpler to describe the process in terms of chemical potentials: For a given temperature, a molecule has a higher chemical potential in a higher-concentration area and a lower chemical potential in a low concentration area. Movement of molecules from higher chemical potential to lower chemical potential is accompanied by a release of free energy. Therefore, it is a spontaneous process.
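The concentration dependence described here can be sketched numerically. Below is a minimal illustration assuming the ideal-dilute form μ = μ° + RT ln(c/c°); the concentrations and reference state are invented for the example:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def chemical_potential(c, c_ref, mu_ref=0.0, T=298.15):
    """Ideal-dilute chemical potential: mu = mu_ref + R*T*ln(c/c_ref)."""
    return mu_ref + R * T * math.log(c / c_ref)

# A molecule in a 10 mM region vs. a 1 mM region (same reference state):
mu_high = chemical_potential(0.010, 0.001)
mu_low = chemical_potential(0.001, 0.001)
assert mu_high > mu_low  # diffusion from high to low concentration releases free energy
print(mu_high - mu_low)  # = R*T*ln(10), about 5708 J/mol at 298.15 K
```

The positive difference is the free energy released per mole as molecules move down the concentration gradient, which is why the mixing is spontaneous.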

Another example, not based on concentration but on phase, is an ice cube on a plate above 0 °C. An H2O molecule that is in the solid phase (ice) has a higher chemical potential than a water molecule that is in the liquid phase (water) above 0 °C. When some of the ice melts, H2O molecules convert from solid to the warmer liquid where their chemical potential is lower, so the ice cube shrinks. At the temperature of the melting point, 0 °C, the chemical potentials in water and ice are the same; the ice cube neither grows nor shrinks, and the system is in equilibrium.

A third example is illustrated by the chemical reaction of dissociation of a weak acid HA (such as acetic acid, A− = CH3COO−):

HA ⇌ H+ + A−

Vinegar contains acetic acid. When acid molecules dissociate, the concentration of the undissociated acid molecules (HA) decreases and the concentrations of the product ions (H+ and A−) increase. Thus the chemical potential of HA decreases and the sum of the chemical potentials of H+ and A− increases. When the sums of the chemical potentials of reactants and products are equal, the system is at equilibrium and there is no tendency for the reaction to proceed in either the forward or backward direction. This explains why vinegar is acidic: acetic acid dissociates to some extent, releasing hydrogen ions into the solution.

Chemical potentials are important in many aspects of multi-phase equilibrium chemistry, including melting, boiling, evaporation, solubility, osmosis, partition coefficient, liquid-liquid extraction and chromatography. In each case the chemical potential of a given species at equilibrium is the same in all phases of the system.[6]

In electrochemistry, ions do not always tend to go from higher to lower chemical potential, but they do always go from higher to lower electrochemical potential. The electrochemical potential completely characterizes all of the influences on an ion's motion, while the chemical potential includes everything except the electric force. (See below for more on this terminology.)

Thermodynamic definition


The chemical potential μ_i of species i (atomic, molecular or nuclear) is defined, as all intensive quantities are, by the phenomenological fundamental equation of thermodynamics. This holds for both reversible and irreversible infinitesimal processes:[8]

dU = T \, dS - P \, dV + \sum_i \mu_i \, dN_i

where dU is the infinitesimal change of internal energy U, dS the infinitesimal change of entropy S, dV the infinitesimal change of volume V for a thermodynamic system in thermal equilibrium, and dN_i the infinitesimal change of particle number N_i of species i as particles are added or subtracted. T is absolute temperature, P is pressure, and V is volume. Other work terms, such as those involving electric, magnetic or gravitational fields, may be added.

From the above equation, the chemical potential is given by

\mu_i = \left( \frac{\partial U}{\partial N_i} \right)_{S,V,N_{j \neq i}}

This is because the internal energy U is a state function, so if its differential exists, then the differential is an exact differential such as

dU = \sum_{k=1}^{N} \left( \frac{\partial U}{\partial x_k} \right) dx_k

for independent variables x_1, x_2, ..., x_N of U.

This expression of the chemical potential as a partial derivative of U with respect to the corresponding species particle number is inconvenient for condensed-matter systems, such as chemical solutions, as it is hard to hold the volume and entropy constant while particles are added. A more convenient expression may be obtained by making a Legendre transformation to another thermodynamic potential: the Gibbs free energy G = U + PV - TS. From the differential dG = dU + P \, dV + V \, dP - T \, dS - S \, dT (the product rule applied to d(PV) and d(TS)) and using the above expression for dU, a differential relation for dG is obtained:

dG = -S \, dT + V \, dP + \sum_i \mu_i \, dN_i

As a consequence, another expression for \mu_i results:

\mu_i = \left( \frac{\partial G}{\partial N_i} \right)_{T,P,N_{j \neq i}}

and the change in Gibbs free energy of a system that is held at constant temperature and pressure is simply

dG = \sum_i \mu_i \, dN_i

In thermodynamic equilibrium, when the system concerned is at constant temperature and pressure but can exchange particles with its external environment, the Gibbs free energy is at its minimum for the system, that is dG = 0. It follows that

\sum_i \mu_i \, dN_i = 0

Use of this equality provides the means to establish the equilibrium constant for a chemical reaction.
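As a quick numerical sketch of how this equality leads to an equilibrium constant: at constant T and P, the standard reaction Gibbs energy is Δμ° = ∑ν_iμ_i° and K = exp(−Δμ°/RT). The standard chemical potentials below are invented for illustration, not measured data:

```python
import math

R = 8.314    # J/(mol*K)
T = 298.15   # K

# Hypothetical standard chemical potentials (J/mol) for HA <=> H+ + A-;
# the numbers are illustrative only, not real acetic acid data.
mu_std = {"HA": -400_000.0, "H+": 0.0, "A-": -390_000.0}
nu = {"HA": -1, "H+": 1, "A-": 1}  # stoichiometric coefficients

delta_mu_std = sum(nu[s] * mu_std[s] for s in nu)  # +10,000 J/mol here
K = math.exp(-delta_mu_std / (R * T))
print(K)  # K < 1: with these numbers the acid dissociates only partially
```

A positive Δμ° gives K < 1, i.e. equilibrium lies toward the undissociated acid, consistent with the weak-acid example above.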

By making further Legendre transformations from U to other thermodynamic potentials like the enthalpy H = U + PV and the Helmholtz free energy F = U - TS, expressions for the chemical potential may be obtained in terms of these:

\mu_i = \left( \frac{\partial H}{\partial N_i} \right)_{S,P,N_{j \neq i}} = \left( \frac{\partial F}{\partial N_i} \right)_{T,V,N_{j \neq i}}

These different forms for the chemical potential are all equivalent, meaning that they have the same physical content, and may be useful in different physical situations.

Applications


The Gibbs–Duhem equation is useful because it relates individual chemical potentials. For example, in a binary mixture, at constant temperature and pressure, the chemical potentials of the two participants A and B are related by

d\mu_B = -\frac{n_A}{n_B} \, d\mu_A

where n_A is the number of moles of A and n_B is the number of moles of B. Every instance of phase or chemical equilibrium is characterized by a constant. For instance, the melting of ice is characterized by a temperature, known as the melting point, at which solid and liquid phases are in equilibrium with each other. Chemical potentials can be used to explain the slopes of lines on a phase diagram by using the Clapeyron equation, which in turn can be derived from the Gibbs–Duhem equation.[9] They are used to explain colligative properties such as melting-point depression by the application of pressure.[10] Henry's law for the solute can be derived from Raoult's law for the solvent using chemical potentials.[11][12]
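The Clapeyron slope mentioned above, dP/dT = ΔH_fus/(TΔV_m), can be checked with rounded handbook figures for the ice–water transition; the numbers below are approximate:

```python
# Clapeyron slope dP/dT = dH_fus / (T * dV_m) for melting ice.
# Values are standard handbook figures, rounded.
dH_fus = 6010.0     # J/mol, enthalpy of fusion of ice
T_m = 273.15        # K, normal melting point
V_ice = 19.65e-6    # m^3/mol (ice is less dense than liquid water)
V_liq = 18.02e-6    # m^3/mol

slope = dH_fus / (T_m * (V_liq - V_ice))  # Pa/K
print(slope / 1e5)  # about -135 bar/K: pressure lowers the melting point
```

The negative slope (because ice is less dense than water) is exactly the melting-point depression by pressure cited above.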

History


Chemical potential was first described by the American engineer, chemist and mathematical physicist Josiah Willard Gibbs. He defined it as follows:

If to any homogeneous mass in a state of hydrostatic stress we suppose an infinitesimal quantity of any substance to be added, the mass remaining homogeneous and its entropy and volume remaining unchanged, the increase of the energy of the mass divided by the quantity of the substance added is the potential for that substance in the mass considered.

Gibbs later noted[citation needed] also that for the purposes of this definition, any chemical element or combination of elements in given proportions may be considered a substance, whether capable or not of existing by itself as a homogeneous body. This freedom to choose the boundary of the system allows the chemical potential to be applied to a huge range of systems. The term can be used in thermodynamics and physics for any system undergoing change. Chemical potential is also referred to as partial molar Gibbs energy (see also partial molar property). Chemical potential is measured in units of energy/particle or, equivalently, energy/mole.

In his 1873 paper A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, Gibbs introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e. bodies, being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volumeentropyinternal energy graph, Gibbs was able to determine three states of equilibrium, i.e. "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. In 1876, Gibbs built on this framework by introducing the concept of chemical potential so to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words from the aforementioned paper, Gibbs states:

If we wish to express in a single equation the necessary and sufficient condition of thermodynamic equilibrium for a substance when surrounded by a medium of constant pressure P and temperature T, this equation may be written:

\delta(\epsilon - T\eta + P\nu) = 0

where δ refers to the variation produced by any variations in the state of the parts of the body, and (when different parts of the body are in different states) in the proportion in which the body is divided between the different states. The condition of stable equilibrium is that the value of the expression in the parenthesis shall be a minimum.

In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body.

Electrochemical, internal, external, and total chemical potential


The abstract definition of chemical potential given above—total change in free energy per extra mole of substance—is more specifically called total chemical potential.[13][14] If two locations have different total chemical potentials for a species, some of the difference may be due to potentials associated with "external" force fields (electric potential energy, gravitational potential energy, etc.), while the rest would be due to "internal" factors (density, temperature, etc.).[13] Therefore, the total chemical potential can be split into internal chemical potential and external chemical potential:

\mu_{tot} = \mu_{int} + \mu_{ext}

where

\mu_{ext} = qV_{ele} + mgh + \cdots

i.e., the external potential is the sum of electric potential, gravitational potential, etc. (where q and m are the charge and mass of the species, V_ele and h are the electric potential[15] and height of the container, respectively, and g is the acceleration due to gravity). The internal chemical potential includes everything else besides the external potentials, such as density, temperature, and enthalpy. This formalism can be understood by assuming that the total energy of a system, U, is the sum of two parts: an internal energy, U_int, and an external energy due to the interaction of each particle with an external field, U_ext = N(qV_ele + mgh). The definition of chemical potential applied to U = U_int + U_ext yields the above expression for \mu_tot.
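A small numerical sketch of the external-potential terms, with illustrative numbers (a monovalent cation, an arbitrary 0.1 V electric potential, and a 1 m height; none of these values come from the text):

```python
# External chemical potential per mole: mu_ext = N_A*q*V_ele + M*g*h
N_A = 6.022e23   # 1/mol, Avogadro constant
e = 1.602e-19    # C, elementary charge
g = 9.81         # m/s^2, gravitational acceleration

def mu_external(q_per_particle, V_ele, molar_mass, h):
    """External chemical potential (J/mol): electric + gravitational terms."""
    return N_A * q_per_particle * V_ele + molar_mass * g * h

# A monovalent cation (q = +e, molar mass 23 g/mol) at 0.1 V and 1 m height:
mu_ext = mu_external(e, 0.1, 0.023, 1.0)
print(mu_ext)  # electric term (~9650 J/mol) dwarfs the gravitational one (~0.23 J/mol)
```

The comparison shows why gravitational contributions are usually negligible for ions in laboratory-scale containers while electric contributions are not.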

The phrase "chemical potential" sometimes means "total chemical potential", but that is not universal.[13] In some fields, in particular electrochemistry, semiconductor physics, and solid-state physics, the term "chemical potential" means internal chemical potential, while the term electrochemical potential is used to mean total chemical potential.[16][17][18][19][20]

Systems of particles


Electrons in solids


Electrons in solids have a chemical potential, defined the same way as the chemical potential of a chemical species: The change in free energy when electrons are added or removed from the system. In the case of electrons, the chemical potential is usually expressed in energy per particle rather than energy per mole, and the energy per particle is conventionally given in units of electronvolt (eV).

Chemical potential plays an especially important role in solid-state physics and is closely related to the concepts of work function, Fermi energy, and Fermi level. For example, n-type silicon has a higher internal chemical potential of electrons than p-type silicon. In a p–n junction diode at equilibrium, the internal chemical potential varies from the p-type to the n-type side, while the total chemical potential (electrochemical potential, or Fermi level) is constant throughout the diode.

As described above, when describing chemical potential, one has to say "relative to what". In the case of electrons in semiconductors, internal chemical potential is often specified relative to some convenient point in the band structure, e.g., the bottom of the conduction band. It may also be specified "relative to vacuum", yielding a quantity known as the work function; however, the work function varies from surface to surface even on a completely homogeneous material. Total chemical potential, on the other hand, is usually specified relative to electrical ground.

In atomic physics, the chemical potential of the electrons in an atom is sometimes[21] said to be the negative of the atom's electronegativity. Likewise, the process of chemical potential equalization is sometimes referred to as the process of electronegativity equalization. This connection comes from the Mulliken electronegativity scale. By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is seen that the Mulliken chemical potential is a finite difference approximation of the derivative of the electronic energy with respect to the number of electrons, i.e.,

\mu_{Mulliken} = -\chi_{Mulliken} = -\frac{IP + EA}{2}
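As a numerical illustration of this finite-difference estimate, using rounded handbook values of the ionization potential and electron affinity of chlorine:

```python
# Mulliken finite-difference estimate: mu = -(IP + EA)/2 = -chi_Mulliken
# Ionization potential and electron affinity of chlorine, in eV (rounded handbook values).
IP = 12.97
EA = 3.61

mu_electronic = -(IP + EA) / 2
print(mu_electronic)  # about -8.29 eV, the negative of the Mulliken electronegativity
```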

Sub-nuclear particles


In recent years,[when?] thermal physics has applied the definition of chemical potential to systems in particle physics and its associated processes. For example, in a quark–gluon plasma or other QCD matter, at every point in space there is a chemical potential for photons, a chemical potential for electrons, a chemical potential for baryon number, electric charge, and so forth.

In the case of photons, photons are bosons and can very easily and rapidly appear or disappear. Therefore, at thermodynamic equilibrium, the chemical potential of photons is in most physical situations always and everywhere zero. The reason is that if the chemical potential somewhere were higher than zero, photons would spontaneously disappear from that area until the chemical potential returned to zero; likewise, if the chemical potential somewhere were less than zero, photons would spontaneously appear until the chemical potential returned to zero. Since this process occurs extremely rapidly (at least in the presence of dense charged matter, or in the walls of the textbook example for a photon gas of blackbody radiation), it is safe to assume that the photon chemical potential here is never different from zero. A physical situation in which the chemical potential for photons can differ from zero is a material-filled optical microcavity, with spacings between cavity mirrors in the wavelength regime. In such two-dimensional cases, photon gases with tuneable chemical potential, much like gases of material particles, can be observed.[22]

Electric charge is different because it is intrinsically conserved, i.e. it can be neither created nor destroyed. It can, however, diffuse. The "chemical potential of electric charge" controls this diffusion: electric charge, like anything else, will tend to diffuse from areas of higher chemical potential to areas of lower chemical potential.[23] Other conserved quantities, like baryon number, behave the same way. In fact, each conserved quantity is associated with a chemical potential and a corresponding tendency to diffuse so as to equalize it.[24]

In the case of electrons, the behaviour depends on temperature and context. At low temperatures, with no positrons present, electrons cannot be created or destroyed. Therefore, there is an electron chemical potential that might vary in space, causing diffusion. At very high temperatures, however, electrons and positrons can spontaneously appear out of the vacuum (pair production), so the chemical potential of electrons by themselves becomes a less useful quantity than the chemical potential of the conserved quantities like (electrons minus positrons).

The chemical potentials of bosons and fermions are related to the number of particles and the temperature by Bose–Einstein statistics and Fermi–Dirac statistics, respectively.
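These occupation laws can be written out directly; a minimal sketch, with energies in eV:

```python
import math

k_B = 8.617e-5  # Boltzmann constant, eV/K

def fermi_dirac(E, mu, T):
    """Mean occupation of a fermion level at energy E for chemical potential mu."""
    return 1.0 / (math.exp((E - mu) / (k_B * T)) + 1.0)

def bose_einstein(E, mu, T):
    """Mean occupation of a boson level; requires mu < E for convergence."""
    return 1.0 / (math.exp((E - mu) / (k_B * T)) - 1.0)

# A fermion level at E = mu is exactly half filled, at any temperature:
print(fermi_dirac(0.5, 0.5, 300.0))  # 0.5
# A photon gas has mu = 0, so the occupation depends only on E and T:
print(bose_einstein(0.1, 0.0, 300.0))
```

The chemical potential enters both distributions only through E − μ, which is why shifting μ shifts the whole occupation curve along the energy axis.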

Chemical potential in mixtures and solutions

Chemical potentials for various hypothetical non-ideal substances in solution.

In a mixture or solution, the chemical potential of a substance depends strongly on its relative concentration, which is usually quantified by the mole fraction x. The exact dependence is sensitive to the substance, the solvent, and the presence of any other substances in the solution; however, two universal behaviours appear at the extremes of concentration:[25][26]

  • In the regime where the substance is nearly pure (i.e., it is a nearly pure solvent), the chemical potential approaches a logarithmic dependence:

    \mu \approx \mu^\ominus_{pure} + RT \ln x, as x \to 1,

    where \mu^\ominus_{pure} is the chemical potential of the pure substance. This universal form applies since it is a colligative property of all solutions. For a volatile solvent, this corresponds to Raoult's law.
  • In the regime where the substance is very dilute (i.e., it is a very dilute solute), the chemical potential also approaches a logarithmic dependence, though with a different offset \mu^\ominus_{dilute}:

    \mu \approx \mu^\ominus_{dilute} + RT \ln x, as x \to 0.

    This universal dependence is a consequence of the dissolved particles of the dilute substance being so far from each other that they act effectively independently, analogous to an ideal gas. For a volatile solute, this corresponds to Henry's law.
  • Note that if the solution contains solutes that dissociate (such as an electrolyte/salt), the above forms do apply, but only with certain definitions.[29]

The adjacent figure shows the dependence of \mu on x for various hypothetical substances, where a logarithmic scale is used for x (so the above limiting forms appear as straight lines). The dashed lines show, for each case, one of the two limiting forms stated above. Note that for the special case of an ideal mixture (ideal solution), the chemical potential is exactly \mu = \mu^\ominus_{pure} + RT \ln x over the entire range, and \mu^\ominus_{dilute} = \mu^\ominus_{pure}.

In the study of chemistry, and especially in tabulated data and thermodynamic models for real solutions, it is common to re-parameterize the chemical potential in solution as a dimensionless activity or activity coefficient that quantifies the deviation of \mu from a chosen logarithmic ideal such as the above. In the case of solutes, the dilute logarithmic ideal may be written in terms of molarity, molality, vapor pressure, or mass fraction, among others, instead of mole fraction.[11] Whichever choice is made will influence the values of the offset, the activity, and the activity coefficient, which may cause some confusion.[30][31]
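A short sketch of how an activity coefficient re-parameterizes a measured chemical potential against the Raoult-type logarithmic ideal; the 50 J/mol deviation from ideality is invented for illustration:

```python
import math

R, T = 8.314, 298.15  # J/(mol*K), K

def mu_ideal(mu_pure, x):
    """Ideal-solution (Raoult) chemical potential: mu = mu_pure + R*T*ln(x)."""
    return mu_pure + R * T * math.log(x)

def activity_coefficient(mu_actual, mu_pure, x):
    """gamma from the deviation of mu from the ideal form: a = gamma*x."""
    return math.exp((mu_actual - mu_pure) / (R * T)) / x

# A hypothetical solvent at x = 0.9 whose measured mu sits 50 J/mol above ideal:
mu_meas = mu_ideal(0.0, 0.9) + 50.0
gamma = activity_coefficient(mu_meas, 0.0, 0.9)
print(gamma)  # slightly above 1: positive deviation from Raoult's law
```

An ideal solution would give γ = 1 exactly; here the chosen reference (mole fraction, Raoult ideal) fixes both the offset and the resulting γ, illustrating the convention dependence mentioned above.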


Sources


Citations

  1. ^ Atkins 2006[page needed]
  2. ^ Opacity, Walter F. Huebner, W. David Barfield, ISBN 1461487978, p. 105.
  3. ^ Atkins 2002, pp. 227, section 9.2
  4. ^ Baierlein, Ralph (April 2001). "The elusive chemical potential" (PDF). American Journal of Physics. 69 (4): 423–434. Bibcode:2001AmJPh..69..423B. doi:10.1119/1.1336839.
  5. ^ Job, G.; Herrmann, F. (February 2006). "Chemical potential–a quantity in search of recognition" (PDF). European Journal of Physics. 27 (2): 353–371. Bibcode:2006EJPh...27..353J. CiteSeerX 10.1.1.568.9205. doi:10.1088/0143-0807/27/2/018. S2CID 16146320. Archived from the original (PDF) on 2015-09-24. Retrieved 2009-02-12.
  6. ^ a b Atkins 2002, pp. 141, section 6.4
  7. ^ Kittel 1980, pp. 357
  8. ^ Mandl, F. Statistical Physics, Wiley, London, 1971, ISBN 0-471-56658-6, p. 88.
  9. ^ Atkins 2006, pp. 126, section 4.1
  10. ^ Atkins 2006, pp. 150–155, section 5.5
  11. ^ a b McQuarrie, D. A.; Simon, J. D. Physical Chemistry – A Molecular Approach, p. 968, University Science Books, 1997.
  12. ^ Atkins 2006, pp. 143–145, section 5.3
  13. ^ a b c Kittel 1980, pp. 124
  14. ^ Thermodynamics in Earth and Planetary Sciences by Jibamitra Ganguly, p. 240. This text uses "internal", "external", and "total chemical potential" as in this article.
  15. ^ Mortimer, R. G. Physical Chemistry, 3rd ed., p. 352, Academic Press, 2008.
  16. ^ Electrochemical Methods by Bard and Faulkner, 2nd edition, Section 2.2.4(a), 4–5.
  17. ^ Electrochemistry at Metal and Semiconductor Electrodes, by Norio Sato, pages 4–5.
  18. ^ Physics Of Transition Metal Oxides, by Sadamichi Maekawa, p. 323.
  19. ^ The Physics of Solids: Essentials and Beyond, by Eleftherios N. Economou, page 140. In this text, total chemical potential is usually called "electrochemical potential", but sometimes just "chemical potential". The internal chemical potential is referred to by the unwieldy phrase "chemical potential in the absence of the [electric] field".
  20. ^ Solid State Physics by Ashcroft and Mermin, page 257 note 36. Page 593 of the same book uses, instead, an unusual "flipped" definition, where "chemical potential" is the total chemical potential, which is constant in equilibrium, and "electrochemical potential" is the internal chemical potential; presumably this unusual terminology was an unintentional mistake.
  21. ^ Morell, Christophe, Introduction to Density Functional Theory of Chemical Reactivity: The so-called Conceptual DFT Archived 2017-08-28 at the Wayback Machine, retrieved May 2016.
  22. ^ J. Klaers; J. Schmitt; F. Vewinger & M. Weitz (2010). "Bose–Einstein condensation of photons in an optical microcavity". Nature. 468 (7323): 545–548. arXiv:1007.4088. Bibcode:2010Natur.468..545K. doi:10.1038/nature09567. PMID 21107426. S2CID 4349640.
  23. ^ Baierlein, Ralph (2003). Thermal Physics. Cambridge University Press. ISBN 978-0-521-65838-6. OCLC 39633743.
  24. ^ Hadrons and Quark-Gluon Plasma, by Jean Letessier, Johann Rafelski, p. 91.
  25. ^ Guggenheim (1985). Thermodynamics (8 ed.).
  26. ^ Atkins 2006, section 5
  27. ^ "Raoult's Law". Libretexts. 2 October 2013.
  28. ^ Atkins 2006, p. 163, section 5.9
  29. ^ The mole fraction must be calculated using the actual concentration of solute particles in solution and not just the formal concentration; these differ by the van 't Hoff factor. Moreover, the dilute-limit form applies only to the chemical potentials of the dissociated species; the chemical potential of the undissociated species instead varies as i RT ln x for van 't Hoff factor i.[27] For ionic species the individual chemical potentials may be ill-defined, and the form really only applies to a charge-neutral average of ionic chemical potentials (or electrochemical potentials).[28]
  30. ^ "9.5: Activity Coefficients in Mixtures of Nonelectrolytes". 5 November 2014.
  31. ^ Přáda, Adam (9 March 2019). "On chemical activities".

from Grokipedia
Chemical potential is a fundamental thermodynamic quantity that describes the change in the Gibbs free energy of a system upon the addition or removal of one mole of a specified substance, while keeping temperature, pressure, and the amounts of other substances constant. It is formally defined as the partial derivative of the Gibbs free energy G with respect to the number of moles n_i of component i, expressed mathematically as \mu_i = \left( \frac{\partial G}{\partial n_i} \right)_{T,P,n_j}, where T is temperature, P is pressure, and n_j are the moles of the other components. This makes it an intensive property, akin to the partial molar Gibbs energy, and it serves as the driving force for processes like diffusion, chemical reactions, and phase changes, in which substances spontaneously move from regions of higher to lower chemical potential until equilibrium is reached. Introduced by American physicist J. Willard Gibbs in his seminal 1876–1878 work On the Equilibrium of Heterogeneous Substances, the concept of chemical potential revolutionized the understanding of heterogeneous systems and multicomponent equilibria. Gibbs originally termed it the "intrinsic potential" to denote the energy contribution per unit mass at constant entropy and volume, later refined to its modern form; the phrase "chemical potential" was popularized by chemist Wilder Dwight Bancroft in 1899 to distinguish it from electrical potentials. In practical terms, for an ideal gas it takes the form \mu = \mu^\circ + RT \ln(P/P^\circ), where \mu^\circ is the standard chemical potential, R is the gas constant, and P^\circ is the standard pressure (typically 1 bar); for non-ideal systems, activity coefficients adjust this expression to \mu_i = \mu_i^\circ + RT \ln a_i, with a_i as the activity.

These formulations highlight its role in predicting equilibrium conditions, such as when chemical potentials equalize across phases in boiling, freezing, or dissolution processes. Beyond classical thermodynamics, chemical potential extends to statistical mechanics, where it represents the energy cost of adding a particle to the system in the grand canonical ensemble, influencing particle-number fluctuations and distribution functions such as the Fermi–Dirac and Bose–Einstein statistics. In applications across chemistry and physics, it governs phase stability (ensuring, for instance, that in a binary mixture the chemical potentials of each component match in coexisting phases) and drives phenomena such as diffusion and electrochemical reactions. Its equality across phases at equilibrium underpins the design of separation processes in industries such as pharmaceuticals.

Introduction

Overview and Definition

In thermodynamics, the chemical potential, denoted \mu, quantifies the change in the Gibbs free energy G associated with the addition or removal of a single particle (or mole) of a substance to a system at constant temperature T and pressure P. Formally, for a component i in a mixture, it is expressed as \mu_i = \left( \frac{\partial G}{\partial n_i} \right)_{T,P,n_{j \neq i}}, where n_i is the number of moles of species i and n_{j \neq i} are the moles of the other components, held constant. This partial derivative reflects the contribution of that component to the overall free energy of the system. Intuitively, the chemical potential can be understood as a measure of the "escaping tendency" of particles from a system, determining whether particles are more likely to enter or leave under given conditions. Particles naturally diffuse from regions of higher chemical potential to those of lower chemical potential, minimizing the system's total free energy, much like how solutes move across a semipermeable membrane in osmosis. At equilibrium, the chemical potential of each species must be uniform throughout connected phases or subsystems, ensuring no net flow occurs. The chemical potential is typically expressed in units of energy per mole, such as joules per mole (J/mol) or kilojoules per mole (kJ/mol), in macroscopic contexts, or electronvolts (eV) per particle in microscopic or solid-state applications. For multi-component systems, the subscript i distinguishes the chemical potential of each species, \mu_i. Analogous to electrical potential, which drives the flow of charges from high to low potential, chemical potential governs changes in particle number, linking thermodynamic stability to compositional variations.

Physical Significance

The chemical potential plays a central role in determining equilibrium conditions in thermodynamic systems, where its equality across phases or components ensures stability and prevents spontaneous change. In phase transitions such as vaporization, the chemical potential of a substance in the liquid phase equals that in the vapor phase at the transition temperature, maintaining coexistence without net phase change. Similarly, during dissolution, the chemical potential of the solute balances between the solid and solution phases, setting solubility limits and preventing further dissolution once equilibrium is reached. This equality criterion underlies the stability of multiphase systems, as any disparity would induce mass transfer or phase transformation until uniformity is achieved. The chemical potential interacts closely with other thermodynamic variables, particularly influencing mixing behaviors and gas-phase properties. In mixtures, it incorporates contributions from the entropy of mixing, which favors spontaneous blending by reducing the overall free energy through increased disorder among components. For gaseous systems, the chemical potential relates to fugacity, an effective pressure that accounts for non-ideal deviations, allowing the extension of equilibrium criteria from ideal to real gases. These interplays highlight how chemical potential integrates energetic and entropic effects to govern compositional changes in diverse media. Conceptually, the chemical potential serves as a unifying bridge between classical thermodynamics and statistical mechanics, providing a framework for understanding particle exchanges at the molecular level. It quantifies the energy cost of adding or removing particles, enabling predictions of spontaneity in open systems where matter flows to minimize free energy. This perspective connects macroscopic equilibrium to microscopic distributions, such as the Fermi–Dirac statistics for fermions, essential for analyzing complex systems from gases to solids.

In everyday contexts, the chemical potential drives biological and technological processes reliant on equilibrium maintenance. In osmosis, prevalent in cellular biology, water moves across semipermeable membranes to equalize the chemical potential of the solvent, countering solute concentration gradients and regulating cell turgor. In electrochemistry, such as battery operation, differences in chemical potential between electrodes propel ion intercalation, generating voltage in devices like lithium-ion cells, where electrode materials sustain high potentials for energy storage. These examples illustrate its practical impact on life-sustaining and energy-harvesting mechanisms.

Theoretical Foundations

Thermodynamic Definition

In classical , the chemical potential arises as a fundamental intensive variable in the description of systems involving variable particle numbers. For a single-component system at constant TT and PP, the chemical potential μ\mu is defined as the GG per mole, given by μ=G/n\mu = G/n. This equality follows from the Euler relation for the in homogeneous systems, where G=μnG = \mu n. For multicomponent systems, the chemical potential of component ii, denoted μi\mu_i, is the of the total with respect to the number of moles of that component, holding TT, PP, and the amounts of all other components fixed: μi=(Gni)T,P,nji\mu_i = \left( \frac{\partial G}{\partial n_i} \right)_{T,P,n_{j \neq i}}. The concept originates from the first law and applied to open s. The UU of a is a function of SS, VV, and mole numbers {ni}\{n_i\}, leading to the : dU=TdSPdV+iμidni,dU = T \, dS - P \, dV + \sum_i \mu_i \, dn_i, where TT is the and PP is the . Here, μi\mu_i represents the energy contribution associated with adding one mole of component ii while keeping SS and VV constant, specifically μi=(Uni)S,V,nji\mu_i = \left( \frac{\partial U}{\partial n_i} \right)_{S,V,n_{j \neq i}}. To adapt this to other natural variables, Legendre transforms yield the differential forms for the remaining thermodynamic potentials. The F=UTSF = U - TS has dF=SdTPdV+iμidnidF = -S \, dT - P \, dV + \sum_i \mu_i \, dn_i, so μi=(Fni)T,V,nji\mu_i = \left( \frac{\partial F}{\partial n_i} \right)_{T,V,n_{j \neq i}}. The H=U+PVH = U + PV satisfies dH=TdS+VdP+iμidnidH = T \, dS + V \, dP + \sum_i \mu_i \, dn_i, giving μi=(Hni)S,P,nji\mu_i = \left( \frac{\partial H}{\partial n_i} \right)_{S,P,n_{j \neq i}}. Finally, the G=HTS=UTS+PVG = H - TS = U - TS + PV obeys dG=SdT+VdP+iμidnidG = -S \, dT + V \, dP + \sum_i \mu_i \, dn_i, confirming the primary definition in terms of GG. 
A key consequence is the Gibbs-Duhem equation, derived from the Euler relation $G = \sum_i \mu_i n_i$ and the differential $dG$. Differentiating the Euler relation and substituting $dG$ yields the intensive form:

$$ S \, dT - V \, dP + \sum_i n_i \, d\mu_i = 0. $$

This equation constrains the variations of intensive variables: at constant $T$ and $P$, $\sum_i n_i \, d\mu_i = 0$, implying that changes in composition affect chemical potentials in an interdependent manner. For a single-component system, it simplifies to $s \, dT - v \, dP + d\mu = 0$, where $s = S/n$ and $v = V/n$ are molar quantities, highlighting the interdependence of $\mu$, $T$, and $P$.[](https://chem.libretexts.org/Courses/Millersville_University/CHEM_341-_Physical_Chemistry_I/07%3A_Mixtures_and_Solutions/7.04%3A_The_Gibbs-Duhem_Equation) The chemical potential also connects to partial molar properties through thermodynamic identities. Specifically, $\mu_i = \bar{h}_i - T \bar{s}_i$, where $\bar{h}_i = \left( \frac{\partial H}{\partial n_i} \right)_{T,P,n_{j \neq i}}$ is the partial molar [enthalpy](/page/Enthalpy) and $\bar{s}_i = \left( \frac{\partial S}{\partial n_i} \right)_{T,P,n_{j \neq i}}$ is the partial molar [entropy](/page/Entropy) of component $i$. This relation emerges from substituting the definitions of $G$, $H$, and $S$ into the partial derivative for $\mu_i$, providing insight into $\mu_i$ as a Gibbs energy contribution balancing enthalpic and entropic effects at constant $T$ and $P$.

### Statistical Mechanics Derivation

In [statistical mechanics](/page/Statistical_mechanics), the chemical potential $\mu$ emerges naturally within the grand canonical ensemble, which describes a [system](/page/System) in contact with a [reservoir](/page/Reservoir) that allows exchange of both [energy](/page/Energy) and particles, thereby fixing the [temperature](/page/Temperature) $T$, [volume](/page/Volume) $V$, and $\mu$.
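The constant-$T,P$ Gibbs-Duhem constraint above can be verified numerically. The sketch below assumes an ideal binary mixture with $\mu_i = \mu_i^0 + RT \ln x_i$; the mole numbers and perturbation are arbitrary illustrative values:

```python
import math

R, T = 8.314, 300.0
n1, n2 = 2.0, 3.0          # moles of each component (illustrative)
ntot = n1 + n2

def mu(x, mu0=0.0):
    """Ideal-mixture chemical potential, mu = mu0 + R*T*ln(x)."""
    return mu0 + R * T * math.log(x)

# perturb composition at constant T, P: move dn moles from 2 to 1
dn = 1e-7
x1, x2 = n1 / ntot, n2 / ntot
x1p, x2p = (n1 + dn) / ntot, (n2 - dn) / ntot

dmu1 = mu(x1p) - mu(x1)
dmu2 = mu(x2p) - mu(x2)
residual = n1 * dmu1 + n2 * dmu2
print(residual)  # vanishes to first order in dn, per Gibbs-Duhem
```

Each individual term $n_i \, d\mu_i$ is of order $RT\,dn$, yet the sum cancels, showing the interdependence of the composition changes.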
The chemical potential $\mu$ serves as the [Lagrange multiplier](/page/Lagrange_multiplier) that controls the average number of particles $\langle N \rangle$ in the system, analogous to how temperature controls average energy. Here, $\mu$ is defined per particle.[](https://www.damtp.cam.ac.uk/user/tong/statphys/statphys.pdf)[](https://scholar.harvard.edu/files/schwartz/files/7-ensembles.pdf) The grand partition function $\Xi(T, V, \mu)$ for the ensemble is defined as

$$ \Xi(T, V, \mu) = \sum_{N=0}^{\infty} e^{\beta \mu N} Z(N, V, T), $$

where $\beta = 1/(k_B T)$, $k_B$ is Boltzmann's constant, and $Z(N, V, T)$ is the canonical partition function for $N$ particles. This sum accounts for all possible particle numbers, weighted by the fugacity $z = e^{\beta \mu}$. The grand potential $\Omega(T, V, \mu)$ is then given by

$$ \Omega(T, V, \mu) = -k_B T \ln \Xi(T, V, \mu), $$

which relates to the [pressure](/page/Pressure) $p$ via $\Omega = -p V$ and provides the thermodynamic foundation for $\mu$. The average particle number is obtained as

$$ \langle N \rangle = k_B T \left( \frac{\partial \ln \Xi}{\partial \mu} \right)_{T,V}, $$

demonstrating that $\mu$ is the conjugate variable to $N$, determined inversely from the desired $\langle N \rangle$. In the thermodynamic limit of large system size, this statistical definition coincides with the thermodynamic one expressed in particle numbers, $\mu = \left( \frac{\partial F}{\partial N} \right)_{T,V}$, where $F$ is the [Helmholtz free energy](/page/Helmholtz_free_energy) and $\mu$ is understood per particle.
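The relation $\langle N \rangle = k_B T \left( \partial \ln \Xi / \partial \mu \right)_{T,V}$ can be checked directly for a simple case. The sketch below assumes non-interacting classical particles, for which the sum over $N$ gives $\ln \Xi = z_1 e^{\beta\mu}$ with a single-particle partition function $z_1$ (the numbers are hypothetical):

```python
import math

kB = 1.380649e-23        # J/K, Boltzmann constant
T = 300.0
beta = 1.0 / (kB * T)
z1 = 1e6                 # hypothetical single-particle partition function
mu = -0.3 * 1.602e-19    # chemical potential in J (about -0.3 eV)

def ln_Xi(mu_val):
    """ln of grand partition function for non-interacting classical
    particles: sum_N e^(beta*mu*N) z1^N / N! = exp(z1 * e^(beta*mu))."""
    return z1 * math.exp(beta * mu_val)

# numerical derivative of ln(Xi) with respect to mu
h = 1e-24  # J
N_avg = kB * T * (ln_Xi(mu + h) - ln_Xi(mu - h)) / (2 * h)
N_exact = z1 * math.exp(beta * mu)  # closed-form <N>
print(N_avg, N_exact)  # the two agree to high precision
```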
Note that in molar thermodynamic formulations, the chemical potential per mole is $\mu_\mathrm{mol} = N_A \mu$, where $N_A$ is Avogadro's number.[](https://www.damtp.cam.ac.uk/user/tong/statphys/statphys.pdf)[](https://itp.uni-frankfurt.de/~gros/Vorlesungen/TD/10_Grand_canonical_ensemble.pdf)[](https://scholar.harvard.edu/files/schwartz/files/7-ensembles.pdf) For an ideal classical gas, the grand partition function simplifies to $\Xi = \exp\left( z V / \lambda^3 \right)$, where $\lambda = h / \sqrt{2 \pi m k_B T}$ is the thermal de Broglie wavelength and $h$ is Planck's constant. This yields $\langle N \rangle = z V / \lambda^3$, so solving for $\mu$ gives

$$ \mu = k_B T \ln \left( \rho \lambda^3 \right), $$

with $\rho = \langle N \rangle / V$ the [number density](/page/Number_density); here, the single-particle translational partition function $Z_1 = V / \lambda^3$ enters implicitly through the density relation. This expression highlights $\mu$'s dependence on density and [temperature](/page/Temperature); it is negative for dilute gases, where quantum effects are negligible.[](https://www.damtp.cam.ac.uk/user/tong/statphys/statphys.pdf)[](https://itp.uni-frankfurt.de/~gros/Vorlesungen/TD/10_Grand_canonical_ensemble.pdf) In quantum statistics, $\mu$ appears directly in the occupation number distributions. For fermions obeying Fermi-Dirac statistics, the average occupation of a state with energy $\varepsilon$ is

$$ f(\varepsilon) = \frac{1}{e^{\beta (\varepsilon - \mu)} + 1}, $$

where $\mu$ (often the [Fermi energy](/page/Fermi_energy) at $T=0$) sets the filling level, ensuring the total $\langle N \rangle$ matches the required value; for bosons, it is

$$ f(\varepsilon) = \frac{1}{e^{\beta (\varepsilon - \mu)} - 1}, $$

with $\mu$ below the lowest single-particle energy to avoid divergence. Both forms reduce to the Boltzmann distribution $f(\varepsilon) \approx e^{\beta (\mu - \varepsilon)}$ in the classical limit $e^{\beta (\varepsilon - \mu)} \gg 1$, i.e., when $\mu$ is large and negative.
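A short sketch of the three occupation functions, showing how the Fermi-Dirac and Bose-Einstein distributions collapse onto the Boltzmann form in the non-degenerate regime (the energies and $\mu$ below are illustrative values):

```python
import math

kT = 0.025  # thermal energy in eV, roughly room temperature

def fermi_dirac(eps, mu):
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

def bose_einstein(eps, mu):
    # valid only for eps > mu, otherwise the occupation diverges
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

def boltzmann(eps, mu):
    return math.exp(-(eps - mu) / kT)

mu = -0.5   # eV; well below the lowest state energy, as bosons require
eps = 0.2   # eV; (eps - mu)/kT = 28, deep in the classical regime
print(fermi_dirac(eps, mu), bose_einstein(eps, mu), boltzmann(eps, mu))
# all three nearly coincide; at eps = mu the Fermi function is exactly 1/2
```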
The grand partition function for non-interacting quantum particles is a product over single-particle states, $\Xi = \prod_i (1 \pm e^{-\beta (\varepsilon_i - \mu)})^{\pm 1}$ (upper signs for fermions, lower for bosons), from which $\mu$ is extracted via the average $N$.[](https://www.damtp.cam.ac.uk/user/tong/statphys/statphys.pdf)

## Variations and Extensions

### Electrochemical Potential

The electrochemical potential $\bar{\mu}_i$ of a charged species $i$ is defined as $\bar{\mu}_i = \mu_i + z_i F \phi$, where $\mu_i$ is the [chemical potential](/page/Chemical_potential), $z_i$ is the charge number (in units of elementary charge), $F$ is the [Faraday constant](/page/Faraday_constant) (approximately 96,485 C/mol), and $\phi$ is the local electric potential at the position of the species.[](https://personal.colby.edu/personal/t/twshattu/PhysicalChemistryText/Part1/Ch21.pdf) This formulation extends the [chemical potential](/page/Chemical_potential) by incorporating the electrostatic contribution to the energy of the charged particle in an electric field.[](https://www.sciencedirect.com/topics/physics-and-astronomy/electrochemical-potential) For neutral species where $z_i = 0$, the electrochemical potential reduces to the pure [chemical potential](/page/Chemical_potential) $\mu_i$, highlighting its specific relevance to ions and charged carriers rather than uncharged particles.[](https://personal.colby.edu/personal/t/twshattu/PhysicalChemistryText/Part1/Ch21.pdf) The derivation of the electrochemical potential stems from the first law of thermodynamics applied to systems with electrical work. The differential change in internal energy $dU$ for such systems is expressed as $dU = T dS - P dV + \sum_j \mu_j dn_j + \phi dQ$, where the term $\phi dQ$ accounts for the reversible electrical work, with $dQ = \sum_i z_i F dn_i$ representing the charge transfer associated with changes in particle numbers $dn_i$.
Substituting this into the energy differential yields an effective potential per species that combines the chemical driving force $\mu_i dn_i$ with the electrostatic work $z_i F \phi dn_i$, resulting in the electrochemical potential $\bar{\mu}_i = \mu_i + z_i F \phi$.[](https://personal.colby.edu/personal/t/twshattu/PhysicalChemistryText/Part1/Ch21.pdf) This form ensures the thermodynamic description captures both diffusive and migratory forces on charged species. In electrochemistry, the electrochemical potential governs the equilibrium conditions at interfaces, such as electrodes or membranes in batteries, where $\bar{\mu}_i$ must be equal on both sides to prevent net charge or mass transfer.[](https://personal.colby.edu/personal/t/twshattu/PhysicalChemistryText/Part1/Ch21.pdf) This equality drives processes like ion transport and redox reactions, determining the direction and extent of electrochemical transformations. The Nernst equation relates the electrode potential to the activities of species in a half-cell reaction via $E = E^0 + \frac{RT}{zF} \ln \frac{a_\text{ox}}{a_\text{red}}$, where $E^0$ is the standard electrode potential related to the standard Gibbs free energy change by $\Delta G^0 = -z F E^0 = \sum \nu_i \mu_i^0$, with $\mu_i^0$ the standard chemical potentials of the reactants and products; this arises from setting the electrochemical potentials equal at equilibrium.

### Internal, External, and Total Chemical Potentials

The total chemical potential $\mu$ of a species in a system is decomposed into an internal component $\mu_\text{int}$ and an external component $\mu_\text{ext}$, expressed as $\mu = \mu_\text{int} + \mu_\text{ext}$. This decomposition separates the contributions from the system's intrinsic properties and environmental influences, ensuring that equilibrium conditions are maintained across varying spatial or energetic landscapes.
The internal chemical potential $\mu_\text{int}$ encompasses the kinetic energy, interparticle interactions, and entropic effects specific to the species within the local environment, reflecting changes in the system's free energy upon adding particles without external perturbations. In contrast, the external chemical potential $\mu_\text{ext}$ arises from position-dependent potential energies imposed by external fields, such as gravitational or electrostatic forces, which shift the effective energy landscape for particle distribution. In the formalism of statistical mechanics, particularly within the grand canonical ensemble, the chemical potential relates to the average partial derivative of the Hamiltonian $H$ with respect to the particle number $N$, adjusted for external terms: $\mu \approx \left\langle \frac{\partial H}{\partial N} \right\rangle$, where external contributions are isolated in $\mu_\text{ext}$ to account for non-uniform fields. For many fluid systems under standard conditions, $\mu_\text{ext} \approx 0$ because gravitational or other field effects are negligible over typical scales, allowing $\mu \approx \mu_\text{int}$. However, in the presence of fields, this separation is crucial for deriving equilibrium distributions, as the total $\mu$ must remain constant throughout the system to balance diffusive and drift fluxes. A representative example is sedimentation equilibrium in colloidal suspensions under gravity, where $\mu_\text{ext} = m g h$ (with $m$ the particle mass, $g$ the gravitational acceleration, and $h$ the height), leading to an exponential decay in particle density with height to maintain uniform total $\mu$. In quantum systems like semiconductors, $\mu_\text{ext}$ incorporates potentials from band structures or applied fields that cause band bending, such as in p-n junctions where electrostatic potentials align the [Fermi level](/page/Fermi_level) while preserving constant total $\mu$ across the interface. 
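The sedimentation example can be made concrete: requiring a constant total $\mu = \mu_\text{int}(n) + mgh$, with an ideal dilute $\mu_\text{int} = k_B T \ln(n \lambda^3)$, forces an exponential density profile. In the sketch below the particle mass is a hypothetical colloidal value:

```python
import math

kB = 1.380649e-23  # J/K, Boltzmann constant
T = 293.0          # K
g = 9.81           # m/s^2
m = 1e-18          # kg, hypothetical colloidal particle mass

def density_ratio(h):
    """n(h)/n(0) implied by a height-independent total chemical
    potential: mu_int(n(h)) + m*g*h = mu_int(n(0))."""
    return math.exp(-m * g * h / (kB * T))

# gravitational length kB*T/(m*g): height over which density drops by 1/e
ell = kB * T / (m * g)
print(ell, density_ratio(ell))  # ratio at h = ell is exp(-1) ≈ 0.368
```

For this mass the decay length is a fraction of a millimetre, which is why sedimentation profiles are observable for colloids but negligible for small molecules.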
The total chemical potential $\mu$ governs overall thermodynamic equilibrium and particle exchange between subsystems, dictating global distributions and phase stability, whereas $\mu_\text{int}$ remains species-specific, varying with local composition, temperature, and interactions to describe intrinsic behavior. This distinction highlights how external fields modulate equilibrium without altering the fundamental species properties encoded in $\mu_\text{int}$.

## Applications in Equilibrium Systems

### Phase Equilibria and Mixtures

In multi-phase systems, the condition for thermodynamic equilibrium requires that the chemical potential of each component $i$ be equal in all coexisting phases $\alpha$ and $\beta$, expressed as $\mu_i^\alpha = \mu_i^\beta$. This equality ensures no net transfer of matter between phases, minimizing the total Gibbs free energy of the system.[](https://www.gps.caltech.edu/~asimow/tutorial1.html)[](https://ocw.mit.edu/courses/3-012-fundamentals-of-materials-science-fall-2005/e4abb04acc7a7b31d1c0964d8caa225f_lec17t.pdf) For mixtures, the chemical potential of component $i$ in a solution is given by $\mu_i = \mu_i^0(T, P) + RT \ln a_i$, where $\mu_i^0(T, P)$ is the standard chemical potential at temperature $T$ and pressure $P$, $R$ is the gas constant, and $a_i$ is the activity of the component. The activity $a_i$ accounts for non-ideal behavior and is defined as $a_i = \gamma_i x_i$, with $x_i$ the mole fraction and $\gamma_i$ the activity coefficient.
In ideal solutions, $\gamma_i = 1$, so $a_i = x_i$ and interactions between molecules are negligible beyond random mixing; deviations occur in non-ideal solutions where $\gamma_i \neq 1$, reflecting attractive or repulsive intermolecular forces that alter the effective concentration.[](https://serc.carleton.edu/research_education/equilibria/activitymodels.html)[](https://ocw.mit.edu/courses/3-012-fundamentals-of-materials-science-fall-2005/fa8c38555a138d5b6287be8c4905e324_lec11t_note.pdf) Colligative properties arise from the equality of chemical potentials between phases in dilute solutions, particularly when a non-volatile solute lowers the solvent's chemical potential. For vapor-liquid equilibrium, Raoult's law describes the solvent's vapor pressure as $P = x_{\text{solvent}} P^0$, where $P^0$ is the pure solvent vapor pressure, leading to vapor pressure lowering proportional to solute concentration. This reduction shifts the boiling point upward and freezing point downward to restore equilibrium; for example, the boiling point elevation $\Delta T_b = K_b m$ and freezing point depression $\Delta T_f = K_f m$ (with $m$ as molality and $K_b, K_f$ as constants) maintain $\mu_{\text{solvent}}^{\text{liquid}} = \mu_{\text{solvent}}^{\text{vapor}}$ or $\mu_{\text{solvent}}^{\text{liquid}} = \mu_{\text{solvent}}^{\text{solid}}$. Similarly, osmotic pressure occurs across a semipermeable membrane separating a solution from pure solvent, where an external pressure $\pi$ is applied to equalize the solvent's chemical potentials: $\mu_{\text{solvent}}^{\text{pure}} = \mu_{\text{solvent}}^{\text{solution}} + \pi V_m$, yielding $\pi = -\frac{RT}{V_m} \ln a_{\text{solvent}} \approx \frac{RT}{V_m} x_{\text{solute}}$ for dilute solutions, with $V_m$ the molar volume of the solvent. 
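The osmotic-pressure relation above can be evaluated for a representative dilute aqueous solution; the mole fraction below is an illustrative value, corresponding roughly to a 0.1 mol/L solution:

```python
import math

R, T = 8.314, 298.15
Vm = 1.8e-5          # m^3/mol, molar volume of water
x_solute = 0.0018    # solute mole fraction (illustrative, ~0.1 mol/L)

# ideal dilute solution: solvent activity equals its mole fraction
a_solvent = 1.0 - x_solute
# exact balance of solvent chemical potentials across the membrane
pi_exact = -(R * T / Vm) * math.log(a_solvent)
# dilute-limit (van 't Hoff-type) approximation
pi_dilute = (R * T / Vm) * x_solute
print(pi_exact, pi_dilute)  # ~2.5e5 Pa (about 2.5 atm), agreeing to ~0.1%
```

The large value for such a dilute solution illustrates why osmotic effects dominate water balance in cells.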
For the solute in dilute solutions, Henry's law applies, $P_{\text{solute}} = K_H x_{\text{solute}}$, where $K_H$ is Henry's constant, further illustrating activity-driven phase shifts.[](https://chemistry.coe.edu/piper/posts/groves-thermo-kinetics/lecture_slides/ColligativeProperties_CHEM361A.pdf)[](https://rmfabicon.files.wordpress.com/2010/08/lecture-6-colligative-properties.pdf)[](https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/16:_The_Chemical_Activity_of_the_Components_of_a_Solution/16.12:_Colligative_Properties_-_Osmotic_Pressure) In binary phase diagrams, chemical potentials determine stable phase boundaries through the common tangent construction on plots of Gibbs free energy versus composition at fixed $T$ and $P$. The common tangent to the free energy curves of two phases touches at the equilibrium compositions, ensuring $\mu_1^\alpha = \mu_1^\beta$ and $\mu_2^\alpha = \mu_2^\beta$ for components 1 and 2; the intercepts of the tangent line at the pure-component axes give the chemical potentials of the two components. This rule identifies coexisting phases and tie lines, while the lever rule calculates phase fractions based on overall composition.[](https://www.gps.caltech.edu/~asimow/tutorial1.html)[](https://ocw.mit.edu/courses/3-012-fundamentals-of-materials-science-fall-2005/e4abb04acc7a7b31d1c0964d8caa225f_lec17t.pdf) The Gibbs phase rule extends to mixtures by incorporating chemical potential constraints: the degrees of freedom $f = c - p + 2$, where $c$ is the number of components and $p$ is the number of phases. This arises because equilibrium imposes $c(p-1)$ constraints from equal chemical potentials across phases, reducing the independent variables from total composition, temperature, and pressure.
For a binary system ($c=2$) with two phases ($p=2$), $f=2$, allowing specification of $T$ and composition to define the state.[](https://www.doitpoms.ac.uk/tlplib/phase-diagrams/gibbs.php)

### Chemical Reactions and Reaction Equilibria

In chemical reactions, the chemical potential plays a central role in determining the direction and extent of the reaction at equilibrium. For a general reaction such as $a \mathrm{A} + b \mathrm{B} \rightleftharpoons c \mathrm{C} + d \mathrm{D}$, the change in Gibbs free energy $\Delta G$ is given by $\Delta G = \sum \nu_i \mu_i$, where $\nu_i$ are the stoichiometric coefficients (positive for products and negative for reactants) and $\mu_i$ is the chemical potential of species $i$. This combination of chemical potentials serves as the driving force for the reaction: if $\Delta G < 0$, the reaction proceeds forward; if $\Delta G > 0$, it proceeds backward; and at equilibrium, $\Delta G = 0$.[](https://ronlevygroup.cst.temple.edu/courses/2019_fall/chem5302/lectures/chem5302_lecture8.pdf) At equilibrium, the condition $\sum \nu_i \mu_i = 0$ must hold, ensuring no net change in the system's composition. The chemical potential for each [species](/page/Species) is expressed as $\mu_i = \mu_i^0 + RT \ln a_i$, where $\mu_i^0$ is the standard chemical potential, $R$ is the [gas constant](/page/Gas_constant), $T$ is the [temperature](/page/Temperature), and $a_i$ is the activity of species $i$. Substituting this into the equilibrium condition yields $\sum \nu_i \mu_i^0 + RT \ln \prod a_i^{\nu_i} = 0$, or $\Delta G^0 + RT \ln Q = 0$, where $\Delta G^0 = \sum \nu_i \mu_i^0$ is the standard [Gibbs free energy](/page/Gibbs_free_energy) change and $Q = \prod a_i^{\nu_i}$ is the [reaction quotient](/page/Reaction_quotient).
At equilibrium, $Q = K$, the [equilibrium constant](/page/Equilibrium_constant), leading to the relation $K = \exp(-\Delta G^0 / RT) = \prod a_i^{\nu_i}$.[](https://ronlevygroup.cst.temple.edu/courses/2019_fall/chem5302/lectures/chem5302_lecture8.pdf) This derivation highlights how the balance of chemical potentials across the reaction equates the free energy contributions of reactants and products. The standard chemical potential $\mu_i^0$ corresponds to the standard molar [Gibbs free energy](/page/Gibbs_free_energy) $G_{m,i}^0$ for pure [species](/page/Species) $i$ at a reference state, typically 1 bar and the system [temperature](/page/Temperature).[](https://sites.engineering.ucsb.edu/~jbraw/chemreacfun/ch3/slides-thermo.pdf) For real systems deviating from ideality, activities $a_i$ incorporate corrections via activity coefficients $\gamma_i$, such that $a_i = \gamma_i x_i$ for liquids or $a_i = \gamma_i (p_i / p^0)$ for gases, where $x_i$ is [mole fraction](/page/Mole_fraction) and $p_i$ is [partial pressure](/page/Partial_pressure).[](https://sites.engineering.ucsb.edu/~jbraw/chemreacfun/ch3/slides-thermo.pdf) These corrections ensure the [equilibrium constant](/page/Equilibrium_constant) accurately reflects non-ideal behaviors in mixtures. Le Chatelier's principle describes how perturbations to equilibrium shift the reaction position to counteract the change, which can be understood through variations in chemical potentials.
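The relation $K = \exp(-\Delta G^0/RT)$ derived above is straightforward to evaluate; the sketch below uses a hypothetical $\Delta G^0 = -20$ kJ/mol rather than data for any specific reaction:

```python
import math

R, T = 8.314, 298.15
dG0 = -20e3  # J/mol, assumed standard Gibbs energy change (hypothetical)

# equilibrium constant from the standard chemical potentials
K = math.exp(-dG0 / (R * T))
print(K)  # roughly 3.2e3: equilibrium strongly favors products

# the inverse direction: recover DeltaG0 from a measured K
dG0_back = -R * T * math.log(K)
```

The exponential sensitivity means that a change of only a few kJ/mol in the standard chemical potentials shifts $K$ by an order of magnitude.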
An increase in temperature raises the chemical potentials of species differently based on enthalpy contributions, shifting endothermic reactions forward via the van 't Hoff equation $\frac{d \ln K}{dT} = \frac{\Delta H^0}{RT^2}$.[](https://sites.esm.psu.edu/~vfm5153/TSM/lecture14.html) Changes in pressure affect gaseous species' chemical potentials as $\mu_i(p) = \mu_i^0 + RT \ln (p_i / p^0)$, favoring the side with fewer moles of gas to minimize the potential increase.[](https://sites.esm.psu.edu/~vfm5153/TSM/lecture14.html) Similarly, altering concentrations modifies $\mu_i$ through activity terms, driving the system to restore balance by shifting toward the side that reduces the perturbation.[](https://sites.esm.psu.edu/~vfm5153/TSM/lecture14.html) In electrochemical reactions, the [Gibbs free energy](/page/Gibbs_free_energy) change relates to the cell potential $E$ via $\Delta G = -nFE$, where $n$ is the number of electrons transferred and $F$ is Faraday's constant, linking [electrochemical potential](/page/Electrochemical_potential)s in [redox](/page/Redox) processes to electrical work.[](https://web.physics.ucsb.edu/~lecturedemonstrations/Composer/Pages/52.48.html) This connection arises because the [electrochemical potential](/page/Electrochemical_potential) differences between oxidized and reduced [species](/page/Species) drive the reaction, with $\Delta G^0 = -nFE^0$ at standard conditions.[](https://web.physics.ucsb.edu/~lecturedemonstrations/Composer/Pages/52.48.html)

## Specific Systems and Contexts

### Electrons in Solids

In [solid-state physics](/page/Solid-state_physics), the chemical potential μ of electrons in solids serves as a key parameter governing their distribution across energy states, particularly within the framework of band theory.
For fermions like electrons, μ determines the occupancy of states via the Fermi-Dirac distribution function, $ f(E) = \frac{1}{1 + \exp\left(\frac{E - \mu}{kT}\right)} $, where $ k $ is the [Boltzmann constant](/page/Boltzmann_constant) and $ T $ is [temperature](/page/Temperature). This distribution ensures that at [absolute zero](/page/Absolute_zero), all states below μ are fully occupied and those above are empty, reflecting the [Pauli exclusion principle](/page/Pauli_exclusion_principle).[](http://hyperphysics.phy-astr.gsu.edu/hbase/Solids/Fermi.html)[](https://sites.chemengr.ucsb.edu/~ceweb/courses/che142242/pdfs/lecture_3_chex42.pdf) The Fermi level $ E_F $, defined as the chemical potential at $ T = 0 $ K ($ E_F = \mu(T=0) $), plays a crucial role in determining band filling. In metals, where the conduction band is partially filled, $ E_F $ lies within the band, setting the energy up to which states are occupied and influencing properties like electrical conductivity. For typical metals such as [copper](/page/Copper), $ E_F \approx 7 $ eV, corresponding to a Fermi temperature $ T_F = E_F / k \approx 80,000 $ K, far exceeding room temperature, which maintains near-complete degeneracy of the electron gas.[](https://scholar.harvard.edu/files/schwartz/files/13-metals.pdf)[](https://users.physics.ox.ac.uk/~Steane/teaching/Fermi_gas.pdf) At finite temperatures, μ exhibits a weak dependence on $ T $ in metals due to the high degeneracy. The approximation for low $ T $ (where $ kT \ll E_F $) is $ \mu \approx E_F \left[1 - \frac{\pi^2}{12} \left(\frac{kT}{E_F}\right)^2 \right] $, derived from the Sommerfeld expansion of the [Fermi gas](/page/Fermi_gas) model; this slight decrease in μ with increasing $ T $ arises from the thermal smearing of the [Fermi surface](/page/Fermi_surface) while conserving total electron number. 
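The Sommerfeld correction quoted above is tiny for a typical metal, which the following sketch makes explicit for a copper-like Fermi energy of about 7 eV:

```python
import math

EF = 7.0      # eV, Fermi energy (typical textbook value for copper)
kT = 0.0259   # eV, thermal energy at T = 300 K

# low-temperature Sommerfeld expansion:
# mu ≈ E_F * [1 - (pi^2/12) * (kT/E_F)^2]
mu = EF * (1.0 - (math.pi ** 2 / 12.0) * (kT / EF) ** 2)
shift = EF - mu
print(shift)  # ~8e-5 eV: mu is essentially pinned at E_F in metals
```

The downward shift of order $10^{-4}$ eV is negligible compared with $E_F$, which is why treating $\mu \approx E_F$ at room temperature is an excellent approximation for metals.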
In contrast, semiconductors feature a band gap between the valence band (maximum energy $ E_v $) and conduction band (minimum energy $ E_c $), with $ E_F $ (or μ at $ T=0 $) typically in the gap for intrinsic cases, leading to exponentially low carrier densities.[](https://users.physics.ox.ac.uk/~Steane/teaching/Fermi_gas.pdf)[](https://www.physics.rutgers.edu/~chakhalian/CM2019/EXTRA/The%20chemical%20potential%20of%20an%20ideal%20intrinsic%20semiconductor.pdf) For intrinsic semiconductors, the chemical potential $ \mu_i $ is temperature-dependent and positioned near the midgap to balance electron and hole concentrations. It is given by $ \mu_i = \frac{E_c + E_v}{2} + \frac{kT}{2} \ln\left(\frac{N_v}{N_c}\right) $, where $ N_c $ and $ N_v $ are the effective densities of states in the conduction and valence bands, respectively; this expression accounts for differences in effective masses and ensures equal electron ($ n $) and hole ($ p $) densities ($ n_i = p_i = \sqrt{N_c N_v} \exp\left(-\frac{E_g}{2kT}\right) $, with $ E_g = E_c - E_v $). As $ T \to 0 $, $ \mu_i $ approaches the midgap energy $ (E_c + E_v)/2 $; with increasing $ T $ it shifts toward the band edge with the smaller effective density of states. Note that in relativistic nuclear matter, chemical potentials typically include rest mass contributions, unlike in non-relativistic condensed matter contexts.[](https://www.physics.rutgers.edu/~chakhalian/CM2019/EXTRA/The%20chemical%20potential%20of%20an%20ideal%20intrinsic%20semiconductor.pdf)[](https://sites.chemengr.ucsb.edu/~ceweb/courses/che142242/pdfs/lecture_3_chex42.pdf) Doping introduces impurities to shift μ and enhance conductivity. In n-type semiconductors, donor atoms add [electron](/page/Electron)s, raising μ toward $ E_c $ (e.g., by $ \Delta\mu \approx kT \ln(N_D / n_i) $, where $ N_D $ is donor concentration); the [electron](/page/Electron) concentration follows the non-degenerate [approximation](/page/Approximation) $ n = N_c \exp\left(-\frac{E_c - \mu}{kT}\right) \approx N_D $.
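The intrinsic-semiconductor expressions above can be evaluated with standard textbook parameters for silicon at 300 K; the $N_c$, $N_v$, and $E_g$ values below are common reference numbers, assumed rather than taken from the source:

```python
import math

kT = 0.02585               # eV, thermal energy at 300 K
Ec, Ev = 1.12, 0.0         # band edges in eV (gap Eg = Ec - Ev = 1.12 eV)
Nc, Nv = 2.8e19, 1.04e19   # effective densities of states, cm^-3

# intrinsic chemical potential: midgap plus a small DOS correction
mu_i = (Ec + Ev) / 2.0 + (kT / 2.0) * math.log(Nv / Nc)
# intrinsic carrier density n_i = sqrt(Nc*Nv) * exp(-Eg/(2kT))
n_i = math.sqrt(Nc * Nv) * math.exp(-(Ec - Ev) / (2.0 * kT))
print(mu_i, n_i)
# mu_i ≈ 0.547 eV, slightly below midgap (0.56 eV) since Nv < Nc;
# n_i comes out on the order of 1e10 cm^-3
```

The correction term displaces $\mu_i$ by only about 13 meV, confirming that the intrinsic level sits very close to midgap.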
Conversely, in p-type semiconductors, acceptors create [hole](/page/Hole)s, lowering μ toward $ E_v $ (e.g., $ \Delta\mu \approx -kT \ln(N_A / n_i) $, with $ N_A $ acceptor concentration), yielding [hole](/page/Hole) concentration $ p = N_v \exp\left(-\frac{\mu - E_v}{kT}\right) \approx N_A $. These shifts, often by 0.1–0.3 eV for doping levels around $ 10^{16} $–$ 10^{18} $ cm⁻³ in [silicon](/page/Silicon), enable control over majority carriers without filling the bands as in metals.[](https://sites.chemengr.ucsb.edu/~ceweb/courses/che142242/pdfs/lecture_3_chex42.pdf)[](https://courses.physics.ucsd.edu/2019/Spring/physics230/LECTURES/PHY152B_CH03.pdf) In applications like p-n junctions, the difference in μ between n-type (higher μ) and p-type (lower μ) regions creates a gradient that drives [diffusion](/page/Diffusion) of carriers across the junction upon contact, forming a [depletion region](/page/Depletion_region) and built-in potential $ V_{bi} = (\mu_n - \mu_p)/e \approx 0.7 $ V for [silicon](/page/Silicon). This [electrochemical potential](/page/Electrochemical_potential) gradient sustains equilibrium with no net current, but under bias, it enables rectification; the Fermi-Dirac statistics directly dictate carrier injection and transport, underpinning [diode](/page/Diode) operation. The position of $ E_F $ (μ at low T) relative to the [density of states](/page/Density_of_states) also governs junction characteristics, with μ aligning across the device in equilibrium to minimize free energy.[](http://galileo.phys.virginia.edu/classes/312/problems/assign7/ass7-3.PDF)[](https://fab.cba.mit.edu/classes/862.16/notes/semiconductor_materials_devices.pdf)

### Nuclear and Sub-nuclear Particles

In nuclear matter, such as that found in the cores of neutron stars, the chemical potentials of neutrons ($\mu_n$) and protons ($\mu_p$) govern the composition and stability of the dense medium.
These potentials exceed the rest mass energies due to the extreme densities, with the excess over rest mass ($\mu_n - m_n c^2$) typically around 200–300 MeV in the core, depending on the equation of state, and are intimately linked to the nuclear binding energy per particle, which influences the overall equation of state. The difference $\mu_n - \mu_p$ maintains beta equilibrium with electrons ($\mu_n - \mu_p \approx \mu_e \approx 100$–$200$ MeV), while the high Fermi momenta associated with these chemical potentials generate degeneracy pressure from neutrons and protons, providing the primary support against gravitational collapse in neutron stars.[](https://arxiv.org/pdf/1802.08427)[](https://arxiv.org/pdf/1911.01980)[](https://iopscience.iop.org/article/10.1088/1402-4896/ad03c8/pdf)[](https://link.aps.org/doi/10.1103/PhysRevLett.132.232701) At the sub-nuclear level, the baryon chemical potential $\mu_B$ becomes particularly relevant in the quark-gluon plasma (QGP) phase of quantum chromodynamics (QCD), where it quantifies deviations from zero net baryon density. For light quarks, $\mu_B = 3 \mu_q$, reflecting baryon number conservation with each quark carrying one-third of a baryon's charge; this relation holds in the deconfined phase accessed in high-energy conditions. Thermodynamically, $\mu_B$ is defined as the partial derivative of the energy density $\epsilon$ with respect to the baryon number density $n_B$ at fixed entropy, $\mu_B = \partial \epsilon / \partial n_B$, enabling the mapping of the QCD phase diagram. For massless bosons like photons and gluons in thermal equilibrium, the chemical potential is zero ($\mu = 0$), as their particle number is not conserved and adjusts freely to maintain equilibrium without an associated conserved charge. In contrast, neutrinos in core-collapse supernovae exhibit a non-zero chemical potential $\mu_\nu$, which drives lepton number asymmetry and influences neutrino transport, contributing to the explosion mechanism by enhancing degeneracy effects.
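For a sense of scale, a rough non-relativistic estimate of the neutron Fermi energy at nuclear saturation density can be sketched as follows; this is a kinetic estimate only (the full chemical potential additionally includes rest mass and interaction contributions, as discussed above):

```python
import math

hbar_c = 197.327   # MeV*fm (hbar * c)
m_n = 939.565      # MeV, neutron rest-mass energy
n0 = 0.16          # fm^-3, nuclear saturation density

# Fermi momentum of a degenerate neutron gas: kF = (3*pi^2*n)^(1/3)
kF = (3.0 * math.pi ** 2 * n0) ** (1.0 / 3.0)       # fm^-1
# non-relativistic Fermi energy: E_F = (hbar*c*kF)^2 / (2*m_n*c^2)
EF = (hbar_c * kF) ** 2 / (2.0 * m_n)               # MeV
print(kF, EF)  # kF ≈ 1.68 fm^-1, EF ≈ 58 MeV
```

Even this lowest-order estimate gives tens of MeV, consistent with the statement that nuclear chemical potentials sit far above typical condensed-matter scales.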
In relativistic systems, chemical potentials often exceed the rest mass (μ > m c^2) for fermions, corresponding to high degeneracy without instability.[](https://arxiv.org/pdf/nucl-th/9408029)[](https://link.aps.org/doi/10.1103/PhysRevD.100.076009)[](https://arxiv.org/abs/1012.5266)[](https://arxiv.org/pdf/1908.11382)[](https://academic.oup.com/mnras/article/327/4/1307/1009533) For particle-antiparticle pairs, such as electron-positron or quark-antiquark production, the process becomes energetically favorable when 2|μ| > 2 m c^2 (i.e., |μ| > m c^2), leading to spontaneous pair creation that alters the medium's composition and [thermodynamics](/page/Thermodynamics). These considerations are critical in relativistic environments like [neutron star](/page/Neutron_star) interiors or QGP.[](https://web.pa.msu.edu/people/pratts/phy831/lectures/chapter2.pdf)[](https://physics.stackexchange.com/questions/831746/chemical-potentials-in-a-pair-creation-process) Recent heavy-ion collision experiments at the [Large Hadron Collider](/page/Large_Hadron_Collider) (LHC), including data from the 2024 Pb-Pb run at $\sqrt{s_{NN}} = 5.02$ TeV and preliminary analyses as of late 2025, reveal variations in the baryon chemical potential $\mu_B$ across collision centralities and energies, typically $\mu_B \sim 0-100$ MeV near mid-rapidity. These measurements, extracted from [hadron](/page/Hadron) yields and fluctuations, indicate $\mu_B$ gradients that signal phase transitions from hadronic matter to QGP, with lower $\mu_B$ values at higher energies suppressing baryon density while enhancing [strangeness](/page/Strangeness) production. 
Such results constrain QCD models and highlight the role of $\mu_B$ in mapping the critical endpoint of the [phase diagram](/page/Phase_diagram).[](https://arxiv.org/html/2511.01413v1)[](https://arxiv.org/pdf/2405.03174)[](https://arxiv.org/html/2408.08501v2)[](https://home.cern/news/news/accelerators/record-data-lhc-2024)

## Non-Equilibrium and Advanced Applications

### Non-Equilibrium Thermodynamics

In [non-equilibrium thermodynamics](/page/Non-equilibrium_thermodynamics), the chemical potential is extended beyond uniform equilibrium states to describe systems where spatial or temporal variations drive [transport](/page/Transport) and reaction processes. Under the local equilibrium hypothesis, the chemical potential $\mu_i$ for [species](/page/Species) $i$ is defined locally at each point in space, allowing thermodynamic relations to hold approximately in small volumes even as the system as a whole departs from equilibrium. This local $\mu_i(\mathbf{r})$ can vary spatially, creating gradients that act as thermodynamic forces propelling fluxes of [matter](/page/Matter), [heat](/page/Heat), or charge.[](https://pmc.ncbi.nlm.nih.gov/articles/PMC58686/) The fluxes of [species](/page/Species) $i$, denoted $\mathbf{J}_i$, are linearly related to the thermodynamic forces in the regime near equilibrium, according to Onsager's reciprocal relations:

$$ \mathbf{J}_i = \sum_j L_{ij} \mathbf{X}_j , $$

where for isothermal chemical-potential contributions $\mathbf{X}_j = -\frac{1}{T} \nabla \mu_j$, $L_{ii}$ is the phenomenological coefficient for direct [diffusion](/page/Diffusion), the cross-coefficients satisfy $L_{ij} = L_{ji}$, and the other $\mathbf{X}_j$ are conjugate forces (e.g., [temperature](/page/Temperature) or [electric field](/page/Electric_field) gradients). This framework, derived from microscopic time-reversal [symmetry](/page/Symmetry), ensures that the [entropy production](/page/Entropy_production) remains non-negative, quantifying dissipation in irreversible processes.
For uncoupled isothermal [diffusion](/page/Diffusion), the relation simplifies to $\mathbf{J}_i = -\frac{L_{ii}}{T} \nabla \mu_i$, highlighting how chemical potential inhomogeneities directly govern mass transport.[](https://personal.colby.edu/personal/t/twshattu/PhysicalChemistryText/Part1/CH22.pdf) For chemical reactions in non-equilibrium settings, the chemical potential enters through the reaction affinity $A = -\sum_k \nu_k \mu_k$, where $\nu_k$ is the stoichiometric coefficient of [species](/page/Species) $k$. The affinity measures the driving force away from equilibrium, and the reaction flux is proportional to it in the linear regime: $J_r = L_{rr} A / T$. The associated entropy production from reactions, $\sigma_r = J_r A / T \geq 0$, ensures that the second law holds locally. In diffusive systems, the total entropy production rate also includes terms from matter fluxes, $\sigma = \sum_i \mathbf{J}_i \cdot \mathbf{X}_i \geq 0$ with $\mathbf{X}_i = -\nabla \mu_i / T$ for isothermal diffusion, linking chemical potential gradients to irreversible dissipation.[](https://personal.colby.edu/personal/t/twshattu/PhysicalChemistryText/Part1/CH22.pdf) Near equilibrium, these relations recover classical transport laws. Fick's first law of diffusion, $\mathbf{J}_i = -D_i \nabla c_i$ (with $c_i$ the concentration), emerges from the chemical potential gradient for ideal solutions: $\nabla \mu_i \approx RT \nabla \ln c_i$ yields the diffusion coefficient $D_i = L_{ii} R / c_i$ (for molar concentration $c_i$ and gas constant $R$; $D_i = L_{ii} k_B / n_i$ for number density $n_i$).
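A short numerical check (one-dimensional, with an assumed phenomenological coefficient and a made-up linear concentration profile) that the flux $-\frac{L_{ii}}{T}\nabla\mu_i$ for an ideal solution reproduces Fick's law:

```python
import numpy as np

# Verify numerically that J = -(L/T) dmu/dx reduces to Fick's law
# J = -D dc/dx for an ideal solution, where mu = mu0 + R*T*ln(c) and the
# local diffusion coefficient follows from the flux law as D = L*R/c.
R = 8.314   # gas constant, J/(mol K)
T = 298.0   # K
L = 1e-9    # assumed phenomenological coefficient

x = np.linspace(0.0, 1e-3, 2001)           # position, m
c = 1.0 + 500.0 * x                        # concentration, mol/m^3
mu = R * T * np.log(c)                     # mu - mu0, ideal solution

J_onsager = -(L / T) * np.gradient(mu, x)  # flux from the potential gradient
J_fick = -(L * R / c) * np.gradient(c, x)  # Fick's law with D = L*R/c

assert np.allclose(J_onsager, J_fick)
```

The agreement holds pointwise because $\nabla \ln c = \nabla c / c$; for non-ideal solutions the activity coefficient would enter the logarithm and the two expressions would differ.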
This connection underscores how non-equilibrium chemical potentials unify phenomenological diffusion with thermodynamic driving forces.[](https://personal.colby.edu/personal/t/twshattu/PhysicalChemistryText/Part1/CH22.pdf) A prominent example is the proton motive force in biological membranes, where the electrochemical potential gradient for protons, $\Delta \tilde{\mu}_{\mathrm{H^+}} = \Delta \mu_{\mathrm{H^+}} + F \Delta \psi$ (with $F$ the Faraday constant and $\psi$ the membrane potential), drives ATP synthesis via chemiosmosis. This gradient, maintained by electron transport chains, powers proton flux through ATP synthase, exemplifying how chemical potential differences sustain non-equilibrium steady states in living systems. In reaction-diffusion systems, such as the Belousov-Zhabotinsky reaction, spatial $\mu$ gradients couple autocatalytic reactions with diffusion, generating spatiotemporal patterns such as waves or spirals that propagate due to local affinity imbalances. These linear approximations break down far from equilibrium, where nonlinear responses dominate and the Onsager relations no longer hold strictly. In such regimes, [stochastic](/page/Stochastic) descriptions via master equations become essential, treating chemical potentials as effective fields in probabilistic transitions between microstates, with [entropy production](/page/Entropy_production) emerging from trajectory averages rather than local gradients. This ties non-equilibrium chemical potentials to fluctuation theorems, providing a bridge to mesoscopic and nanoscale dynamics.
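The magnitude of the proton motive force can be estimated with the split into chemical and electrical terms; all numbers below are assumed, order-of-magnitude values for a mitochondrial inner membrane, not measurements:

```python
# Order-of-magnitude sketch of the proton motive force, splitting the
# proton electrochemical gradient into a chemical (pH) term and an
# electrical (membrane potential) term. All values are illustrative.
R = 8.314         # J/(mol K)
T = 310.0         # K, body temperature
F = 96485.0       # C/mol, Faraday constant

delta_pH = 0.5    # pH(in) - pH(out): matrix more alkaline (assumed)
delta_psi = 0.15  # membrane potential, V (assumed)

delta_mu_chem = 2.303 * R * T * delta_pH  # chemical term, J/mol
delta_mu_elec = F * delta_psi             # electrical term, J/mol
pmf = delta_mu_chem + delta_mu_elec       # total driving force, J/mol

print(f"chemical term:   {delta_mu_chem / 1000:.1f} kJ/mol")
print(f"electrical term: {delta_mu_elec / 1000:.1f} kJ/mol")
print(f"total:           {pmf / 1000:.1f} kJ/mol per mole of protons")
```

With these assumed values the electrical term dominates, and the total of roughly 15-20 kJ/mol per proton is what ATP synthase harvests across several translocated protons per ATP.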
### Computational and Nanoscale Systems

In [computational chemistry](/page/Computational_chemistry), the grand canonical Monte Carlo (GCMC) method is widely employed, often alongside [molecular dynamics](/page/Molecular_dynamics) simulations, to model systems in which the chemical potential $\mu$ is fixed while the particle number fluctuates, enabling control over average densities in open systems such as porous materials or interfaces.[](https://espressomd.github.io/tutorials/grand_canonical_monte_carlo/grand_canonical_monte_carlo.html) This ensemble is particularly useful for studying adsorption and phase transitions, where $\mu$ dictates the equilibrium particle exchange with a [reservoir](/page/Reservoir). To compute $\mu$ itself, the Widom insertion technique is a standard approach: test particles are inserted at random positions and the ensemble average of the resulting Boltzmann factor yields an estimate of the excess chemical potential.[](https://pubs.aip.org/aip/jcp/article/156/13/134110/2840996/Widom-insertion-method-in-simulations-with-Ewald) Recent refinements address challenges in charged systems, such as [Ewald summation](/page/Ewald_summation) corrections for ionic chemical potentials in electrolytes.[](https://arxiv.org/abs/2203.09625) Density functional theory (DFT) provides another cornerstone for computing chemical potentials in nanoscale systems, where $\mu$ is defined as the derivative of the total energy $E$ with respect to the number of particles $N$ at fixed external potential:
$$ \mu = \left( \frac{\partial E}{\partial N} \right)_{v} $$

## History

In Gibbs' original formulation, the chemical potential of a component is the derivative of the energy with respect to its amount at constant entropy and volume:
$$ \mu_i = \left( \frac{\partial E}{\partial n_i} \right)_{S,V,n_{j \neq i}} $$
This quantity represents the change in energy when an infinitesimal amount of the component is added to the system at constant entropy and volume. Gibbs originally termed this quantity the "intrinsic potential."
The phrase "chemical potential" was later popularized by the chemist Wilder Dwight Bancroft in 1899.[](https://www.physics.rutgers.edu/grad/601/CM2019/EXTRA/chemical_potential_meaning.pdf) Gibbs integrated $\mu$ into the differential form of the first law of thermodynamics for open systems, yielding the fundamental relation:
$$ dE = T\,dS - P\,dV + \sum_i \mu_i\,dn_i $$
where $T$ is temperature, $P$ is pressure, and the sum runs over all components $i$. This equation encapsulates how the chemical potential governs the exchange of matter alongside heat and work. A pivotal insight from Gibbs' analysis was the equilibrium condition for heterogeneous systems: for each component, the chemical potential must be equal in all coexisting phases, $\mu_i^\alpha = \mu_i^\beta$ for phases $\alpha$ and $\beta$. This equality ensures that no net transfer of matter occurs, serving as the criterion for phase stability and chemical equilibrium. From this, Gibbs derived the phase rule, quantifying the variance of a system with $c$ components and $p$ phases as $f = c - p + 2$, where $f$ denotes the number of intensive variables (such as temperature and pressure) that can be varied independently without altering the number of phases. The rule provided a predictive tool for phase diagrams, linking composition, phases, and external constraints. Gibbs' formulation drew directly on prior thermodynamic potentials developed by [Rudolf Clausius](/page/Rudolf_Clausius), who emphasized [entropy](/page/Entropy) maximization and [energy conservation](/page/Energy_conservation) in his 1865 work, and by William Thomson (Lord Kelvin), whose absolute temperature scale and availability concepts informed equilibrium criteria.[](https://doi.org/10.1063/1.881258) Gibbs was the first to employ the [symbol](/page/Symbol) $\mu$ specifically for this "potential for the intensive influence of the component on the [energy](/page/Energy)," distinguishing it from mechanical or thermal potentials.
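Gibbs' phase rule above reduces to a one-line computation (the function name is illustrative):

```python
def gibbs_phase_rule(components: int, phases: int) -> int:
    """Degrees of freedom f = c - p + 2 for a non-reacting system."""
    return components - phases + 2

# Pure water: along the liquid-vapor coexistence curve one intensive
# variable remains free; at the triple point none do.
assert gibbs_phase_rule(1, 2) == 1
assert gibbs_phase_rule(1, 3) == 0
```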
Initially, Gibbs' ideas found recognition only within a small circle that included James Clerk Maxwell, who praised their rigor but noted their limited accessibility; the dense mathematical exposition and absence of experimental illustrations contributed to this delay, with broader impact emerging only in the early [20th century](/page/20th_century).[](https://doi.org/10.1063/1.881258) This thermodynamic definition of chemical potential endures as the cornerstone of Gibbs' legacy in describing equilibrium in diverse systems.[](https://doi.org/10.1063/1.881258)

### Post-Gibbs Evolution and Modern Refinements

In the early 20th century, [Gilbert N. Lewis](/page/Gilbert_N._Lewis) introduced the concept of activity for non-ideal solutions (1907), which accounts for deviations from ideality through the [activity coefficient](/page/Activity_coefficient) $\gamma_i$ in $a_i = \gamma_i x_i$, with $\mu_i = \mu_i^0 + RT \ln a_i$; this was further refined in his 1923 [book](/page/Book) with Merle Randall, improving predictions for [electrolyte](/page/Electrolyte) and solution behavior.[](https://web.phys.ntnu.no/~stovneng/TFY4165_2022/oving10/ov9/chemical_potential.pdf)[](https://www.britannica.com/biography/Gilbert-N-Lewis) The 1920s and 1930s saw deeper integration with [statistical mechanics](/page/Statistical_mechanics), notably through Enrico Fermi's [1926](/page/1926) derivation of Fermi-Dirac statistics for indistinguishable fermions, in which the chemical potential $\mu$ determines the occupation number $f(\epsilon) = \frac{1}{e^{(\epsilon - \mu)/kT} + 1}$, crucial for electrons in metals.
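The Fermi-Dirac occupation and its limiting behavior are easy to verify numerically (the energies and assumed Fermi level below are illustrative, not for any specific metal):

```python
import math

# Fermi-Dirac occupation f(e) = 1 / (exp((e - mu)/kT) + 1). Energies are
# in eV; kT = 0.025 eV (~room temperature) is an illustrative default.
def fermi_dirac(energy_eV, mu_eV, kT_eV=0.025):
    return 1.0 / (math.exp((energy_eV - mu_eV) / kT_eV) + 1.0)

mu = 5.0  # assumed chemical potential (Fermi level), eV

# Limits: states far below mu are filled, far above are empty, and a
# state exactly at mu is half-occupied at any temperature.
assert fermi_dirac(4.0, mu) > 0.999
assert fermi_dirac(6.0, mu) < 0.001
assert abs(fermi_dirac(mu, mu) - 0.5) < 1e-12
```

The distribution is also particle-hole symmetric about $\mu$: occupations at $\mu - \delta$ and $\mu + \delta$ always sum to one.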
[Paul Dirac](/page/Paul_Dirac) independently contributed to this framework in the same year, solidifying the role of chemical potential in quantum distributions for half-integer spin particles.[](https://insti.physics.sunysb.edu/~cpherzog/phys305fall2008/distributionremarks.pdf) Building on these, the [grand canonical ensemble](/page/Grand_canonical_ensemble)—originally inspired by Gibbs—was formalized in quantum contexts during [the 1930s](/page/The_1930s), allowing fluctuations in particle number at fixed $ \mu $, as detailed in works by [Wolfgang Pauli](/page/Wolfgang_Pauli) and others applying it to ideal gases and early quantum many-body problems.[](https://www.damtp.cam.ac.uk/user/tong/statphys/statphys.pdf) Mid-20th-century developments extended chemical potential to non-equilibrium regimes, with Lars Onsager's 1931 reciprocal relations describing coupled [transport phenomena](/page/Transport_phenomena) driven by gradients in chemical potential, such as $ \mathbf{J}_i = \sum_k L_{ik} \nabla (-\mu_k / T) $, linking [diffusion](/page/Diffusion) and heat flow in irreversible processes near equilibrium.[](https://arxiv.org/html/2508.00207v1) [Ilya Prigogine](/page/Ilya_Prigogine) further revolutionized the field from the 1940s through the 1970s by incorporating chemical potential into dissipative structures, where far-from-equilibrium systems self-organize through [entropy production](/page/Entropy_production) involving $ \mu $-driven reaction fluxes, as exemplified in his analysis of chemical oscillations like the Belousov-Zhabotinsky reaction.[](https://www.nobelprize.org/uploads/2018/06/prigogine-lecture.pdf) In the late 20th and early 21st centuries, chemical potential found applications in [quantum field theory](/page/Quantum_field_theory) for [particle physics](/page/Particle_physics) starting in the 1960s, where it parameterizes [baryon](/page/Baryon) or [quark](/page/Quark) number conservation in finite-density systems, such as in the grand canonical 
partition function $ Z = \mathrm{Tr} [e^{-\beta (H - \mu N)}] $. This evolved into relativistic formulations within [quantum chromodynamics](/page/Quantum_chromodynamics) (QCD) in the 1970s, introducing [quark](/page/Quark) chemical potentials $ \mu_q $ to model phase transitions in hot, dense matter, as in [lattice QCD](/page/Lattice_QCD) simulations of the quark-gluon plasma.[](https://www.osti.gov/biblio/1785953) [Density functional theory](/page/Density_functional_theory) (DFT), formalized by Pierre Hohenberg and [Walter Kohn](/page/Walter_Kohn) in 1964, redefined the chemical potential as $ \mu = \left( \frac{\partial E}{\partial N} \right)_{v} $, the derivative of the ground-state energy with respect to particle number at fixed external potential $v$, enabling efficient computation of electronic structure in materials.[](https://people.chem.ucsb.edu/metiu/horia/OldFiles/115C/KH_Ch4.pdf) Key refinements in the [2000s](/page/2000s) addressed nanoscale effects, incorporating finite-size corrections to the chemical potential in mesoscopic systems, such as the surface (Gibbs-Thomson) contribution $ \Delta \mu \approx \frac{2 \gamma V_m}{r} $ for nanoparticles, where $ \gamma $ is the [surface tension](/page/Surface_tension), $V_m$ the molar volume, and $ r $ the [radius](/page/Radius), improving models for quantum dots and colloids. More recently, in the 2020s, [artificial intelligence](/page/Artificial_intelligence) and [machine learning](/page/Machine_learning) have enabled predictive modeling of chemical potentials in complex materials, with graph neural networks trained on DFT datasets forecasting $ \mu $ for [alloy](/page/Alloy) phase stability and battery electrolytes, achieving errors below 0.1 eV in high-throughput screening.[](https://pubs.acs.org/doi/abs/10.1021/acs.jpclett.2c02612)
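A rough sketch of the Gibbs-Thomson shift $\Delta \mu = 2 \gamma V_m / r$, using rounded, assumed gold-like values purely for illustration, shows why the correction grows rapidly below about 10 nm:

```python
# Order-of-magnitude Gibbs-Thomson shift Delta_mu = 2 * gamma * V_m / r
# for a nanoparticle. gamma and V_m are rounded, assumed values.
N_A = 6.022e23        # Avogadro's number, 1/mol
E_CHARGE = 1.602e-19  # J per eV
gamma = 1.5           # surface energy, J/m^2 (assumed)
V_m = 10.2e-6         # molar volume, m^3/mol (approx. for gold)

shifts = {}  # radius in nm -> Delta_mu in J/mol
for r_nm in (1.0, 5.0, 50.0):
    d_mu = 2.0 * gamma * V_m / (r_nm * 1e-9)
    shifts[r_nm] = d_mu
    per_atom_eV = d_mu / N_A / E_CHARGE
    print(f"r = {r_nm:5.1f} nm -> {d_mu / 1000:7.2f} kJ/mol ({per_atom_eV:.3f} eV/atom)")
```

The shift scales as $1/r$, so a 1 nm particle sees tens of kJ/mol (a few tenths of an eV per atom) while a 50 nm particle is already close to bulk behavior.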