Energy level
A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized.
In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on further and further from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4, ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N, ...).
Each shell can contain only a fixed number of electrons: the first shell can hold up to two electrons, the second up to eight (2 + 6), the third up to 18 (2 + 6 + 10), and so on. In general, the nth shell can hold up to 2n² electrons.[1] Since electrons are electrically attracted to the nucleus, an atom's electrons generally occupy outer shells only once the inner shells have been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See the Madelung rule for details.) For an explanation of why electrons exist in these shells, see electron configuration.[2]
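The shell-capacity rule above can be made concrete in a few lines of Python (a sketch; the function names are our own, not standard):

```python
# Maximum electron capacity of the nth shell is 2n^2; each subshell
# with azimuthal quantum number l holds 2(2l + 1) electrons.
def shell_capacity(n: int) -> int:
    """Maximum number of electrons the nth shell can hold."""
    return 2 * n ** 2

def subshell_capacities(n: int) -> list[int]:
    """Capacities 2(2l + 1) for l = 0 .. n-1; they sum to 2n^2."""
    return [2 * (2 * l + 1) for l in range(n)]

for n in (1, 2, 3, 4):
    caps = subshell_capacities(n)
    assert sum(caps) == shell_capacity(n)
    print(n, shell_capacity(n), caps)
```

The subshell breakdown reproduces the 2, then 2 + 6, then 2 + 6 + 10 pattern quoted above.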
If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy.
If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. An energy level is regarded as degenerate if there is more than one measurable quantum mechanical state associated with it.
Explanation
Quantized energy levels result from the wave behavior of particles, which gives a relationship between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave functions that have well defined energies have the form of a standing wave.[3] States having well-defined energies are called stationary states because they are the states that do not change in time. Informally, these states correspond to a whole number of wavelengths of the wavefunction along a closed path (a path that ends where it started), such as a circular orbit around an atom, where the number of wavelengths gives the type of atomic orbital (0 for s-orbitals, 1 for p-orbitals and so on). Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator.
Any superposition (linear combination) of energy states is also a quantum state, but such states change with time and do not have well-defined energies. A measurement of the energy results in the collapse of the wavefunction, which results in a new state that consists of just a single energy state. Measurement of the possible energy levels of an object is called spectroscopy.
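The particle in a box mentioned above is the simplest system in which these discrete levels can be computed directly. A minimal Python sketch, assuming the standard infinite-well result En = n²π²ħ²/(2mL²), which is not derived in the text here:

```python
import math

# Particle in a 1D infinite well: E_n = n^2 * pi^2 * hbar^2 / (2 m L^2).
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def box_level_ev(n: int, width_m: float, mass_kg: float = M_E) -> float:
    """Energy of the nth level (n = 1, 2, ...) in eV."""
    return (n ** 2 * math.pi ** 2 * HBAR ** 2) / (2 * mass_kg * width_m ** 2) / EV

# An electron confined to a 1 nm well has discrete, n^2-spaced levels:
for n in (1, 2, 3):
    print(n, round(box_level_ev(n, 1e-9), 3))  # about 0.376, 1.504, 3.384 eV
```

Note how the level spacing grows with n, in contrast to the continuous energies available to a free (unconfined) electron.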
History
The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926.[4]
Atoms
Intrinsic energy levels
In the formulas below for the energies of electrons at various levels in an atom, the zero point for energy is set when the electron in question has completely left the atom, i.e. when the electron's principal quantum number n = ∞. When the electron is bound to the atom at any closer value of n, the electron's energy is lower and is considered negative.
Orbital state energy level: atom/ion with nucleus + one electron
Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by:
- E = −hcR∞ Z² / n²
(typically between 1 eV and 10³ eV), where R∞ is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is the Planck constant, and c is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n.
This equation is obtained from combining the Rydberg formula for any hydrogen-like element (shown below) with E = hν = hc / λ assuming that the principal quantum number n above = n1 in the Rydberg formula and n2 = ∞ (principal quantum number of the energy level the electron descends from, when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data.
An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants.
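The level formula above is easy to evaluate numerically. A short sketch, assuming the value R∞hc ≈ 13.6057 eV (the function name is our own):

```python
# Hydrogen-like level energies: E = -R_inf * h * c * Z^2 / n^2.
RYDBERG_EV = 13.605693  # R_inf * h * c expressed in eV

def level_energy_ev(Z: int, n: int) -> float:
    """Energy of level n for nuclear charge Z; negative means bound."""
    return -RYDBERG_EV * Z ** 2 / n ** 2

print(round(level_energy_ev(1, 1), 2))  # hydrogen ground state: -13.61 eV
print(round(level_energy_ev(2, 1), 2))  # He+ ground state: -54.42 eV
print(round(level_energy_ev(1, 2), 2))  # hydrogen first excited level: -3.4 eV
```

The Z² scaling makes one-electron ions of heavier elements much more tightly bound than hydrogen.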
Electron–electron interactions in atoms
If there is more than one electron around the atom, electron–electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low.
For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with Z as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where Z is substituted with an effective nuclear charge symbolized as Zeff that depends strongly on the principal quantum number.
In such cases, the orbital types (determined by the azimuthal quantum number ℓ), as well as their levels within the atom, affect Zeff and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account. For filling an atom with electrons in the ground state, the lowest energy levels are filled first, consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule.
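The Madelung rule mentioned above orders subshells by increasing n + ℓ, with ties broken by smaller n. A small sketch of that ordering (our own illustration, not a standard API):

```python
# Madelung (n + l) ordering used by the Aufbau principle: subshells fill
# in order of increasing n + l, ties broken by the smaller n.
def madelung_order(max_n: int) -> list[tuple[int, int]]:
    """All (n, l) subshells up to max_n, in Madelung filling order."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

SPDF = "spdfg"  # conventional letters for l = 0, 1, 2, 3, 4
order = [f"{n}{SPDF[l]}" for n, l in madelung_order(4)]
print(order)  # ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', ...]
```

Note that 4s (n + ℓ = 4) precedes 3d (n + ℓ = 5), matching the observed filling order in the periodic table.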
Fine structure splitting
Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field), and the Darwin term (a contact interaction of s-shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of 10−3 eV.
Hyperfine structure
This even finer structure is due to the electron–nucleus spin–spin interaction, which shifts the energy levels by a typical order of magnitude of 10−4 eV.
Energy levels due to external fields
Zeeman effect
[edit]There is an interaction energy associated with the magnetic dipole moment, μL, arising from the electronic orbital angular momentum, L, given by
with
- .
Additionally taking into account the magnetic momentum arising from the electron spin.
Due to relativistic effects (Dirac equation), there is a magnetic momentum, μS, arising from the electron spin
- ,
with gS the electron-spin g-factor (about 2), resulting in a total magnetic moment, μ,
- .
The interaction energy therefore becomes
- .
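For scale, this interaction energy can be evaluated in the weak-field limit, where each sublevel shifts by roughly mj g μB B (this simplified Landé-type form and the constant value are our assumptions, not spelled out in the text):

```python
# Weak-field Zeeman shift: U = m_j * g * mu_B * B.
MU_B_EV_PER_T = 5.7883818060e-5  # Bohr magneton in eV per tesla

def zeeman_shift_ev(m_j: float, g: float, b_tesla: float) -> float:
    """Energy shift of a sublevel with magnetic quantum number m_j."""
    return m_j * g * MU_B_EV_PER_T * b_tesla

# A bare electron spin (g_S ~ 2) in a 1 T field splits into two sublevels:
split = zeeman_shift_ev(0.5, 2.0, 1.0) - zeeman_shift_ev(-0.5, 2.0, 1.0)
print(split)  # on the order of 1e-4 eV
```

A splitting on the order of 10−4 eV per tesla is comparable to the hyperfine scale quoted earlier and well below the fine-structure scale.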
Stark effect
The Stark effect is the shifting and splitting of energy levels due to an external static electric field, the electric-field analogue of the Zeeman effect.
Molecules
Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms; generally, the total energy of the bonded atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels, forming bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower-energy bonding orbital, which may be signified by symbols such as σ or π depending on the situation. Corresponding antibonding orbitals can be signified by adding an asterisk: σ* or π*. A non-bonding orbital in a molecule is an orbital with electrons in outer shells that do not participate in bonding, and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals; the electrons in an n orbital are typically lone pairs.[5] In polyatomic molecules, different vibrational and rotational energy levels are also involved.
Roughly speaking, a molecular energy state (i.e., an eigenstate of the molecular Hamiltonian) is the sum of the electronic, vibrational, rotational, nuclear, and translational components, such that
- E = Eelectronic + Evibrational + Erotational + Enuclear + Etranslational,
where Eelectronic is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule.
The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance.
Energy level diagrams
There are various types of energy level diagrams for bonds between atoms in a molecule.
Examples include molecular orbital diagrams, Jablonski diagrams, and Franck–Condon diagrams.
Energy level transitions
Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels.
Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom is a form of ionization, effectively moving the electron out to an orbital with an infinite principal quantum number, so far away as to have practically no further effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, 2nd, 3rd, etc. of the highest-energy electrons, respectively, from an atom originally in the ground state. Energy in corresponding opposite quantities can also be released, sometimes as photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning no photon is emitted or absorbed.
Such an atom, ion, or molecule can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can drop to a lower energy level by spontaneously emitting a photon equal to the energy difference. A photon's energy is equal to the Planck constant (h) times its frequency (f) and is thus proportional to its frequency, or inversely proportional to its wavelength (λ).[5]
- ΔE = hf = hc / λ,
since c, the speed of light, equals fλ.[5]
Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons, providing information on the energy levels and electronic structure of the material analyzed.
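The relation ΔE = hc/λ gives a convenient conversion between transition energies and photon wavelengths; a sketch using hc ≈ 1239.84 eV·nm (the helper names are our own):

```python
# Convert between a transition energy (eV) and a photon wavelength (nm).
HC_EV_NM = 1239.841984  # h * c expressed in eV * nm

def wavelength_nm(delta_e_ev: float) -> float:
    """Photon wavelength for a given transition energy."""
    return HC_EV_NM / delta_e_ev

def energy_ev(lam_nm: float) -> float:
    """Transition energy for a given photon wavelength."""
    return HC_EV_NM / lam_nm

# Hydrogen Balmer-alpha (n = 3 -> 2): Delta_E ~ 1.89 eV, i.e. ~656 nm (red).
delta_e = 13.6057 * (1 / 4 - 1 / 9)
print(round(wavelength_nm(delta_e), 1))
```

The same two-line conversion underlies reading energy-level spacings straight off a measured spectrum.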
An asterisk is commonly used to designate an excited state. An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π*, meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital.[5][6] The reverse transitions, which return these excited molecules to their ground states, are also possible and can be designated σ* → σ, π* → π, or π* → n.
A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics.
Higher temperature causes fluid atoms and molecules to move faster, increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (i.e., to higher internal energy levels). As temperature rises, the translational, vibrational, and rotational contributions to molecular heat capacity thus let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide, transferring heat between each other. At even higher temperatures, electrons can be thermally excited to higher-energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly coloured glow.
An electron farther from the nucleus has higher potential energy than one closer to the nucleus and is therefore less tightly bound, since its potential energy is negative and inversely dependent on its distance from the nucleus.[7]
Crystalline materials
Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal.
References
[edit]- ^ Re: Why do electron shells have set limits ? madsci.org, 17 March 1999, Dan Berger, Faculty Chemistry/Science, Bluffton College
- ^ Electron Subshells. Corrosion Source. Retrieved on 1 December 2011.
- ^ Tipler, Paul A.; Mosca, Gene (2004). Physics for Scientists and Engineers, 5th Ed. Vol. 2. W. H. Freeman and Co. p. 1129. ISBN 0716708108.
- ^ Ruedenberg, Klaus; Schwarz, W. H. Eugen (February 13, 2013). "Three Millennia of Atoms and Molecules". Pioneers of Quantum Chemistry. ACS Symposium Series. Vol. 1122. American Chemical Society. pp. 1–45. doi:10.1021/bk-2013-1122.ch001. ISBN 9780841227163.
- ^ a b c d UV-Visible Absorption Spectra
- ^ Theory of Ultraviolet-Visible (UV-Vis) Spectroscopy
- ^ "Electron Density and Potential Energy". Archived from the original on 2010-07-18. Retrieved 2010-10-07.
Fundamental Concepts
Definition and Explanation
In quantum mechanics, an energy level refers to a specific, discrete value of total energy that a quantum system, such as an atom or molecule, can possess. Unlike classical systems, where energy can vary continuously, quantum systems are constrained to these quantized states by wave-particle duality.[1] The quantization arises because particles exhibit wave-like behavior, and in confined spaces, like the potential well around a nucleus, the wave function must satisfy boundary conditions, leading to standing waves with only certain allowed wavelengths and thus discrete energies.[7][8] A basic example is the electron in an atom, where orbitals correspond to fixed energy levels rather than arbitrary values; an electron can occupy these levels but cannot have energies in between. This discrete nature was first postulated in Niels Bohr's 1913 model of the hydrogen atom, where he introduced the idea of stationary states, non-radiating orbits with quantized energies, to explain atomic stability.[9]
The concept of energy levels is crucial for understanding the stability of matter, as systems naturally occupy the lowest available energy state (the ground state) unless excited. These levels also govern atomic and molecular spectra, where transitions between them produce or absorb light at specific wavelengths, enabling technologies like lasers and spectroscopy. Furthermore, energy levels dictate how quantum systems interact with external fields or other particles, influencing chemical bonds, electronic properties, and quantum computing applications.[10][5]
Quantum Mechanical Framework
In quantum mechanics, the theoretical foundation for energy levels is provided by the time-independent Schrödinger equation, which describes stationary states of a quantum system: Hψ = Eψ, where H is the Hamiltonian operator representing the total energy, ψ is the wave function, and E is the energy eigenvalue. Solutions to this equation for bound systems, where the particle is confined by a potential, yield discrete energy eigenvalues En, corresponding to quantized energy levels, rather than a continuum as in classical mechanics. The equation poses an eigenvalue problem, in which the possible energy levels are the eigenvalues of the Hamiltonian operator, and the associated wave functions are the eigenfunctions that define the probability distribution of the particle. Boundary conditions imposed by the system's potential enforce quantization; for instance, in the introductory model of a particle in a one-dimensional infinite potential well of width L, the wave function must vanish at the boundaries x = 0 and x = L, leading to energy levels En = n²π²ħ²/(2mL²) for n = 1, 2, 3, ..., where m is the particle mass and ħ is the reduced Planck constant.[11] Quantum systems can exist in superpositions of these energy eigenstates, ψ = Σn cn ψn, where the coefficients cn determine the amplitude for each level. Upon measurement of the energy, the system collapses to one of the eigenstates, selecting the discrete energy level En with probability |cn|². Energy levels in quantum systems, particularly atomic ones, are typically expressed in electronvolts (eV), where 1 eV = 1.602176634 × 10⁻¹⁹ J, facilitating comparisons with experimental data; conversions to joules or wavenumbers (cm⁻¹, where 1 cm⁻¹ ≈ 1.2398 × 10⁻⁴ eV) are common for spectroscopic applications.[12]
Historical Background
Classical Precursors
In the late 19th century, spectroscopy revealed discrete spectral lines in atomic emissions, challenging the classical view of continuous energy transitions. Johann Balmer's 1885 analysis of hydrogen's visible spectrum identified a series of lines fitting an empirical formula relating wavelengths to integer values, suggesting quantized energy changes rather than smooth variations.[13] This discovery implied that atoms could only emit or absorb radiation at specific frequencies, hinting at underlying discrete energy states, though Balmer himself interpreted it within classical optics without proposing atomic mechanisms.[13] The Rayleigh-Jeans law, derived in 1900 from classical electromagnetism, attempted to describe blackbody radiation but failed dramatically at short wavelengths, predicting infinite energy density in the ultraviolet region—known as the ultraviolet catastrophe.[14] This inadequacy exposed limitations in classical theory for explaining thermal radiation from atomic oscillators, as the law assumed continuous energy distribution without bounds.[14] To resolve this, Max Planck introduced his quantum hypothesis in 1900, proposing that energy is exchanged in discrete packets, or quanta, of energy E = hν, where h is a constant (now the Planck constant) and ν is the frequency, for oscillators in blackbody radiation.[15] This discreteness successfully matched experimental spectra, marking the first departure from classical continuity, though Planck initially viewed it as a mathematical expedient rather than a fundamental atomic property.[15] Early 20th-century atomic models, such as J.J. 
Thomson's 1904 plum pudding model, depicted atoms as uniform spheres of positive charge embedding electrons, assuming continuous energy levels for electron oscillations.[16] Similarly, Ernest Rutherford's 1911 nuclear model concentrated positive charge in a central nucleus with orbiting electrons, yet it relied on classical mechanics predicting continuous energies and spiral decay, conflicting with observed stable, discrete spectral lines.[17] These models highlighted anomalies in line spectra, paving the way for quantum resolutions like the Bohr model.[17]
Development in Quantum Mechanics
In 1913, Niels Bohr introduced a seminal model for the hydrogen atom that marked the beginning of quantized energy levels in atomic structure. Bohr postulated that electrons orbit the nucleus in stable, circular paths where the angular momentum is quantized according to L = nħ, with n a positive integer and ħ the reduced Planck constant. This quantization condition prevented classical radiation losses, resulting in discrete energy levels En = −13.6 eV / n², which successfully derived the empirical Balmer series of spectral lines observed in hydrogen emissions.[18] Building on Bohr's framework, Arnold Sommerfeld extended the model in 1916 to account for relativistic effects and more complex atomic spectra. By allowing electrons to follow elliptical orbits in three dimensions, Sommerfeld incorporated special relativity into the quantization rules, introducing additional quantum numbers and the fine-structure constant α = e²/(4πε₀ħc), where e is the elementary charge, ε₀ the vacuum permittivity, and c the speed of light. This extension explained the fine splitting of spectral lines beyond Bohr's predictions, laying groundwork for understanding relativistic corrections in atomic energy levels. The wave-particle duality underpinning modern quantum mechanics emerged with Louis de Broglie's 1924 hypothesis that all matter possesses wave-like properties. De Broglie proposed that particles, such as electrons, have an associated wavelength λ = h/p, where h is Planck's constant and p the momentum, extending the dual nature already accepted for light to massive particles. This idea bridged classical mechanics and wave optics, suggesting that electron orbits in atoms could be standing waves, which inspired subsequent wave-based formulations of quantum theory. In 1925, Werner Heisenberg developed matrix mechanics, the first complete quantum mechanical formalism, which reframed atomic dynamics without classical trajectories. 
Heisenberg represented physical quantities like position and momentum as infinite arrays (matrices), with non-commuting relations leading to quantized energy levels as eigenvalues of the Hamiltonian matrix. This approach resolved inconsistencies in the old quantum theory by emphasizing observable quantities, and the Heisenberg uncertainty principle, formalized in 1927, further clarified how quantum confinement in bound systems inherently discretizes energy due to the trade-off between position and momentum uncertainties, Δx Δp ≥ ħ/2. Complementing Heisenberg's work, Erwin Schrödinger formulated wave mechanics in 1926, providing an equivalent yet more intuitive description through differential equations. Schrödinger's time-independent equation treats the electron's state as a wave function ψ, with discrete energy eigenvalues corresponding to bound solutions that satisfy boundary conditions, unifying the matrix and wave pictures and confirming Bohr's energy quantization as a general eigenvalue problem. A relativistic synthesis arrived in 1928 with Paul Dirac's relativistic wave equation for the electron, which merged quantum mechanics and special relativity. This linear wave equation naturally incorporated electron spin and predicted spin-orbit coupling effects on energy levels, explaining fine-structure phenomena more rigorously than prior semi-classical models, though it also anticipated the existence of positrons.
Energy Levels in Atoms
Hydrogen-like Atoms
Hydrogen-like atoms, also known as hydrogenic atoms, consist of a nucleus with atomic number Z and a single electron, such as the hydrogen atom (Z = 1) or ions like He⁺ (Z = 2) and Li²⁺ (Z = 3). These systems provide the simplest exact solutions to the quantum mechanical description of atomic energy levels due to the absence of electron-electron interactions. The time-independent Schrödinger equation for such a system, in the center-of-mass frame, treats the electron's motion relative to the nucleus using the reduced mass μ = meM/(me + M), where me is the electron mass and M is the nuclear mass.[19][20] The Schrödinger equation separates into radial and angular parts in spherical coordinates owing to the Coulomb potential's spherical symmetry. The angular part yields spherical harmonics Yℓm(θ, φ), characterized by the azimuthal quantum number ℓ (integers from 0 to n − 1) and the magnetic quantum number m (integers from −ℓ to ℓ). The radial part, involving associated Laguerre polynomials, introduces the principal quantum number n (positive integers n ≥ 1), which determines the number of radial nodes (n − ℓ − 1). The full wavefunction is ψnℓm = Rnℓ(r) Yℓm(θ, φ).[20] The bound-state energy levels depend solely on n: En = −13.6 eV × Z²/n², where the constant derives from the Bohr radius a₀ = 4πε₀ħ²/(mee²), scaled by 1/Z for hydrogen-like atoms; the negative sign indicates bound states relative to the zero-energy continuum. This formula arises from quantizing the radial equation, analogous to a 1D infinite well but with an effective centrifugal potential. In the non-relativistic approximation, levels with the same n but different ℓ and m are degenerate, with degeneracy n², as the energy is independent of the angular momentum quantum numbers.[19][20] As n → ∞, En → 0, marking the ionization threshold where the electron is unbound. The ground-state binding (ionization) energy is thus 13.6 Z² eV; for example, hydrogen requires 13.6 eV, He⁺ needs 54.4 eV, and Li²⁺ demands 122.4 eV to ionize from n = 1. 
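These level spacings translate directly into spectral-line wavelengths; a short sketch, assuming the Rydberg formula for hydrogen-like ions (the constant is the CODATA R∞; the function name is our own):

```python
# Rydberg formula for hydrogen-like ions:
#   1/lambda = R * Z^2 * (1/n1^2 - 1/n2^2), with n2 > n1.
R_INF_PER_NM = 1.0973731568e-2  # Rydberg constant in 1/nm

def line_wavelength_nm(Z: int, n1: int, n2: int) -> float:
    """Wavelength of the photon emitted in the n2 -> n1 transition."""
    inv_lambda = R_INF_PER_NM * Z ** 2 * (1.0 / n1 ** 2 - 1.0 / n2 ** 2)
    return 1.0 / inv_lambda

# First lines of the hydrogen Lyman (->1), Balmer (->2), Paschen (->3) series:
print(round(line_wavelength_nm(1, 1, 2), 1))  # ~121.5 nm (ultraviolet)
print(round(line_wavelength_nm(1, 2, 3), 1))  # ~656.1 nm (visible red)
print(round(line_wavelength_nm(1, 3, 4), 1))  # roughly 1875 nm (infrared)
```

Using R∞ rather than the reduced-mass Rydberg constant shifts the results by a small fraction of a nanometre.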
These predictions are verified experimentally through atomic spectroscopy, where transitions between levels produce spectral series matching the Rydberg formula 1/λ = R Z² (1/n₁² − 1/n₂²), with the Rydberg constant R derived from the energy spacing. The Lyman series (transitions to n = 1, ultraviolet) was observed in 1906, the Balmer series (to n = 2, visible and ultraviolet) in 1885, and the Paschen series (to n = 3, infrared) in 1908, all aligning precisely with quantum mechanical calculations.[21][22]
Multi-Electron Atoms
In multi-electron atoms, the presence of electron-electron repulsion significantly complicates the determination of energy levels compared to hydrogen-like atoms, where the potential is purely Coulombic and the energies depend solely on the principal quantum number n. The mutual repulsion creates an effective potential that varies with the angular momentum quantum number ℓ, as inner electrons imperfectly screen the nuclear charge, making subshells within the same n (e.g., 2s and 2p) non-degenerate, with s orbitals lower in energy than p orbitals. This shielding effect reduces the penetration of outer electrons toward the nucleus, leading to energy levels that increase more slowly with atomic number Z. The Pauli exclusion principle governs the occupancy of these levels, stating that no two electrons in an atom can share the same set of four quantum numbers: principal n, azimuthal ℓ, magnetic mℓ, and spin ms. Formulated by Wolfgang Pauli in 1925 to explain atomic spectra, this principle ensures that each orbital holds at most two electrons with opposite spins, resulting in the filling of shells (up to 2n² electrons) and subshells (up to 2(2ℓ + 1) electrons). It underpins the electronic structure of all elements, preventing collapse into the lowest state and enforcing the periodic table's shell-based organization.[23][24] To approximate the many-body Hamiltonian, the Hartree-Fock method treats electrons in a self-consistent mean field, where each electron moves in an effective potential combining the nuclear attraction and the average repulsion from all others, represented via a Slater determinant of one-electron orbitals. Introduced by Douglas Hartree in 1928 as a numerical self-consistent field approach and refined by Vladimir Fock in 1930 to include antisymmetrization and exchange effects, this method yields orbital energies that approximate the total ground-state energy, though it neglects instantaneous correlations. 
For the helium atom's ground state, Hartree-Fock predicts an energy of approximately −77.8 eV, underestimating the experimental value of −79.0 eV by about 1.5% due to the omission of correlation.[25][26] For greater accuracy, configuration interaction (CI) extends the Hartree-Fock wavefunction by linearly combining the reference determinant with those from excited configurations, capturing electron correlation through explicit multi-electron excitations. This post-Hartree-Fock approach, pioneered in atomic calculations like those for helium by Egil Hylleraas in 1929, improves energy estimates by accounting for deviations from mean-field behavior. In helium, Hylleraas-CI methods with thousands of terms achieve ground-state energies accurate to within about 10 picohartrees of the exact non-relativistic value, demonstrating CI's power for few-electron systems. The ordering of filled configurations follows the Aufbau principle, which builds atomic ground states by occupying orbitals from lowest to highest energy, typically sequenced by increasing n + ℓ (the Madelung rule), with subshells of the same n + ℓ filled in order of increasing n. For degenerate subshells, Hund's rules determine the lowest-energy term: first, maximize the total spin S for the highest multiplicity 2S + 1; second, for that S, maximize the total orbital angular momentum L; third, for lighter elements, take J = |L − S| for less-than-half-filled shells and J = L + S for more-than-half-filled shells. Developed by Friedrich Hund in 1925–1927 to interpret atomic spectra, these empirical rules arise from minimizing Coulomb repulsion while respecting Pauli exclusion, explaining configurations like carbon's 1s²2s²2p², with a triplet ground state (³P₀). These principles manifest in periodic trends, such as ionization energies, which reflect the stability of filled shells. 
Ionization energy generally increases across a period due to the rising effective nuclear charge tightening electron binding, with jumps at the noble gases (e.g., He 24.6 eV, Ne 21.6 eV) from closed shells, and decreases down a group from the enhanced screening by added shells (e.g., Li 5.4 eV, Na 5.1 eV). Exceptions occur at half-filled subshells (e.g., N 14.5 eV > O 13.6 eV), per Hund's maximization of exchange stabilization.
Relativistic and Spin Effects
In atomic physics, relativistic effects and electron spin introduce corrections to the non-relativistic energy levels, leading to the fine structure observed in spectral lines. The Dirac equation, which combines quantum mechanics with special relativity, provides an exact treatment for the hydrogen atom, incorporating spin naturally. The resulting energy levels depend on the principal quantum number n and the total angular momentum quantum number j, given approximately by E_{n,j} = E_n [1 + (α²/n²)(n/(j + 1/2) − 3/4)], where E_n = −13.6 eV/n² is the non-relativistic Bohr energy and α ≈ 1/137 is the fine-structure constant. This formula splits the levels of given n that are degenerate in the orbital angular momentum l according to j, with the shift scaling as α² times the Rydberg energy, explaining the fine splitting in hydrogen's Lyman and Balmer series.[27]

A key component of the fine structure is spin-orbit coupling, arising from the interaction between the electron's spin magnetic moment and the magnetic field generated by its orbital motion in the nuclear Coulomb field. The perturbation Hamiltonian for this coupling is H_SO = ξ(r) L·S, where L and S are the orbital and spin angular momentum operators, respectively; the coefficient ξ(r) depends on the nuclear charge and decreases with increasing n. This interaction splits levels with the same n and l but different j, such as 2p_{1/2} and 2p_{3/2}, with the lower-j state lying lower in energy for a normal (positive) coupling constant. In multi-electron atoms like sodium, this manifests as the splitting of the 3p_{1/2} and 3p_{3/2} levels in the first excited state, producing the closely spaced D lines in the yellow sodium spectrum at approximately 589.0 nm and 589.6 nm.[28]

Hyperfine structure further refines these levels through the interaction between the electron's total angular momentum J and the nuclear spin I, primarily via the magnetic dipole mechanism.
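The approximate Dirac fine-structure formula for hydrogen can be evaluated directly. The Python sketch below (constants rounded, names illustrative) reproduces the roughly 11 GHz splitting between the 2p_{1/2} and 2p_{3/2} levels.

```python
# Evaluate E_{n,j} = E_n [1 + (alpha^2/n^2)(n/(j + 1/2) - 3/4)]
# for hydrogen, with E_n = -Ry/n^2.
ALPHA = 1 / 137.035999   # fine-structure constant
RY_EV = 13.605693        # Rydberg energy in eV

def energy(n, j):
    e_n = -RY_EV / n**2
    return e_n * (1 + (ALPHA**2 / n**2) * (n / (j + 0.5) - 0.75))

# 2p_{3/2} - 2p_{1/2} fine-structure splitting; 1 eV = 241799 GHz.
split_ev = energy(2, 1.5) - energy(2, 0.5)
print(f"2p splitting: {split_ev * 241.799e3:.1f} GHz")  # close to the measured 10.969 GHz
```

The α²-scale smallness of the correction is visible here: the splitting is about 4.5×10⁻⁵ eV against level energies of a few eV.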
The nuclear magnetic dipole moment couples to the magnetic field produced by the electrons at the nucleus, yielding an energy splitting proportional to g_I μ_N μ_B ⟨1/r³⟩ I·J, where g_I is the nuclear g-factor, μ_N the nuclear magneton, μ_B the Bohr magneton, and ⟨1/r³⟩ the expectation value of the inverse cube of the electron-nucleus distance (non-zero for l > 0; for s-states, it is replaced by the Fermi contact term from the spin density at the nucleus). The total angular momentum F = I + J labels the hyperfine levels, with the splitting scaling as the product of the magnetic moments and inversely with the cube of the atomic size. In neutral hydrogen, this interaction splits the ground state (1s_{1/2}, I = 1/2) into F = 1 and F = 0 components separated by 1420 MHz, corresponding to the 21 cm radio emission line pivotal in astrophysics for mapping interstellar hydrogen.[29]

Quantum electrodynamics (QED) introduces additional corrections beyond the Dirac theory, most notably the Lamb shift, which arises from vacuum fluctuations and the electron's interaction with virtual photons. This radiative correction shifts the energy levels by an amount of order α³ times the Rydberg energy, affecting s-states more than p-states because of their higher probability density near the nucleus. In hydrogen, it lifts the degeneracy between the 2s_{1/2} and 2p_{1/2} fine-structure levels, with the shift of 1057.8 MHz measured in the 1947 microwave experiment by Willis Lamb and Robert Retherford using a beam of excited atoms and stimulated transitions. This anomaly, initially unexplained by Dirac theory, validated QED as the perturbative framework for atomic structure.[30]

In alkali atoms, such as lithium, sodium, and potassium, hyperfine structure is prominently observed in microwave spectra because their single valence electron enhances the interaction.
For instance, the ground-state hyperfine splitting in ⁷Li (2S_{1/2}, I = 3/2) between F = 2 and F = 1 is 803.5 MHz, and in ²³Na (3S_{1/2}, I = 3/2) it is 1771.6 MHz, as measured by atomic-beam microwave spectroscopy; such transitions enable precise atomic clocks and quantum sensing applications.[31][32]
External Field Perturbations
External magnetic and electric fields perturb the energy levels of atoms by coupling to their magnetic and electric dipole moments, respectively, leading to shifts and splittings that depend on the field strength and the atomic structure. These perturbations are analyzed using time-independent perturbation theory, where the interaction Hamiltonian is added to the unperturbed atomic Hamiltonian. For weak fields the effects are linear in the field strength, while stronger fields can cause nonlinear responses or decoupling of angular momenta.[33]

The Zeeman effect describes the splitting of atomic energy levels in a weak external magnetic field B, arising from the interaction of the atom's magnetic moment with the field. In the normal Zeeman effect, observed in transitions without electron spin involvement, the energy shift is ΔE = μ_B m_l B, where μ_B is the Bohr magneton, m_l is the orbital magnetic quantum number, and B is the magnetic field magnitude along the quantization axis. This was first observed by Pieter Zeeman in 1896 for spectral lines of sodium and calcium.[34] The anomalous Zeeman effect occurs when electron spin contributes, as in most atomic transitions, leading to more complex splittings governed by the total angular momentum J. The energy shift is given by ΔE = g_J μ_B m_J B, where m_J is the projection of J along B, and g_J is the Landé g-factor, g_J = 1 + [J(J+1) + S(S+1) − L(L+1)] / [2J(J+1)], which accounts for the relative orientations of orbital and spin angular momenta.
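As a quick numerical check of the Landé formula, the minimal Python sketch below (helper name arbitrary) evaluates g_J for the sodium levels involved in the D lines.

```python
# Landé g-factor: g_J = 1 + [J(J+1) + S(S+1) - L(L+1)] / [2 J(J+1)],
# evaluated for the sodium ground and first excited levels.
def lande_g(J, L, S):
    return 1 + (J*(J+1) + S*(S+1) - L*(L+1)) / (2 * J * (J+1))

print(f"{lande_g(0.5, 0, 0.5):.3f}")  # 3S_{1/2}: 2.000
print(f"{lande_g(0.5, 1, 0.5):.3f}")  # 3P_{1/2}: 0.667
print(f"{lande_g(1.5, 1, 0.5):.3f}")  # 3P_{3/2}: 1.333
```

The different g_J values of the upper and lower levels are what produce the "anomalous" multi-line Zeeman patterns, since ΔE = g_J μ_B m_J B differs between them.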
This formulation was developed by Alfred Landé in 1921 to explain observed splittings inconsistent with the normal effect, building on the fine structure as the unperturbed basis.[34] For example, in sodium atoms, the Zeeman splitting of the D-line transition is used in atomic magnetic resonance experiments to probe hyperfine interactions and field strengths up to several tesla.[35] In the Paschen-Back regime, where the magnetic interaction μ_B B exceeds the fine-structure splitting, L and S decouple and the energy levels are approximately E ≈ E_0 + μ_B (m_l + 2 m_s) B, with m_l and m_s as good quantum numbers. This regime shifts the spectrum toward the normal Zeeman pattern, but with the spin contribution doubled by the electron's g-factor of 2. The effect was discovered by Friedrich Paschen and Ernst Back in 1912 through observations of spectral lines in strong fields, and explained theoretically by Arnold Sommerfeld in 1913 using anisotropic electron orbits.[34]

The Stark effect refers to the shifting and splitting of energy levels in an external electric field E, due to the interaction H' = −d·E, where d is the electric dipole operator. For non-degenerate states the shift is quadratic, ΔE = −(1/2) α E², with α the static polarizability. In degenerate states, however, such as the n = 2 level of hydrogen, linear shifts occur, ΔE = ±3 e a₀ E, where a₀ is the Bohr radius, arising from first-order perturbation theory mixing states of opposite parity; the relevant matrix element is ⟨2s| z |2p₀⟩ = −3 a₀, with the field taken along z. This linear Stark effect was first observed by Johannes Stark in 1913 in hydrogen and helium spectra from electric discharges.[33] In such discharges, the splitting of the hydrogen Balmer lines provides a direct measure of the electric field strength in the plasma.[33] The AC Stark effect, or light shift, arises from dynamic perturbation by an off-resonant laser field oscillating at frequency ω, shifting levels by an amount proportional to the laser intensity I, ΔE = −α(ω) I / (2 ε₀ c), where α(ω) is the dynamic polarizability.
For far-detuned fields (ω far from any atomic transition frequency), this creates conservative potentials for trapping neutral atoms in optical dipole traps, with red-detuned lasers forming attractive potentials and blue-detuned lasers repulsive ones. This effect underpins laser cooling and trapping techniques, as detailed in the foundational review on optical dipole traps.[36]
Energy Levels in Molecules
Electronic, Vibrational, and Rotational Levels
In molecules, energy levels arise from the combined contributions of electronic, vibrational, and rotational degrees of freedom, forming a hierarchical structure in which electronic transitions occur on the scale of electronvolts, vibrational transitions on hundreds to thousands of wavenumbers, and rotational transitions on tens of wavenumbers. The Born-Oppenheimer approximation underpins this separation by treating nuclear motion as slow compared to electronic motion, owing to the large mass disparity between electrons and nuclei; the electronic wavefunction then depends parametrically on fixed nuclear positions, yielding potential energy curves V(R) that govern the nuclear dynamics. This approximation, introduced by Max Born and J. Robert Oppenheimer, allows the molecular Schrödinger equation to be decoupled into electronic and nuclear parts, with the nuclear Hamiltonian incorporating the electronic potential V(R).[37]

Electronic energy levels in molecules resemble those in atoms but are modified by internuclear interactions: molecular orbitals formed from linear combinations of atomic orbitals split into bonding orbitals (lower energy, increased electron density between the nuclei) and antibonding orbitals (higher energy, with nodal planes between the nuclei).[38] In conjugated π systems, such as benzene, the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) define the frontier orbitals, with the HOMO-LUMO gap influencing reactivity and optical properties; in ethylene, for example, the π bonding orbital lies above the σ bonding orbitals and serves as the HOMO, while the π* antibonding orbital is the LUMO.

Vibrational energy levels describe nuclear oscillations along bonds, modeled initially as a harmonic oscillator with energies E_v = ħω(v + 1/2), where v = 0, 1, 2, ... is the vibrational quantum number and ω = √(k/μ) is the angular frequency, with k the force constant and μ the reduced mass for a diatomic molecule.
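As a rough numerical illustration of these harmonic-oscillator levels, the Python sketch below estimates the H₂ vibrational spacing from the reduced mass and an assumed force constant (k ≈ 570 N/m, an approximate literature-level value used purely for illustration); it lands near the ~4400 cm⁻¹ scale characteristic of H₂.

```python
# Harmonic-oscillator estimate for a diatomic: E_v = hbar*omega*(v + 1/2),
# omega = sqrt(k/mu). k ~ 570 N/m for H2 is an assumed, approximate value.
import math

HBAR = 1.054571817e-34   # J s
C_CM = 2.99792458e10     # speed of light, cm/s
M_H = 1.6735575e-27      # hydrogen atom mass, kg
EV = 1.602177e-19        # J per eV

k = 570.0                # N/m (assumed)
mu = M_H / 2             # reduced mass of H2
omega = math.sqrt(k / mu)                  # angular frequency, rad/s
wavenumber = omega / (2 * math.pi * C_CM)  # vibrational spacing, cm^-1

print(f"spacing ~ {wavenumber:.0f} cm^-1")        # ~4400 cm^-1 scale
print(f"E_0 = {HBAR * omega * 0.5 / EV:.2f} eV")  # zero-point energy ~0.27 eV
```

The large spacing relative to k_B T at room temperature (~200 cm⁻¹) is why most H₂ molecules sit in v = 0.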
Real bonds exhibit anharmonicity because the dissociation energy is finite, leading to corrections that decrease the level spacings at higher v and enable overtone transitions. For polyatomic molecules, the vibrations decompose into 3N − 6 (nonlinear) or 3N − 5 (linear) normal modes, each treated as an independent harmonic oscillator with a distinct frequency corresponding to a collective atomic displacement such as a stretch or a bend.

Rotational energy levels arise from nuclear tumbling, approximated as a rigid rotor with energies E_J = B J(J + 1), where J = 0, 1, 2, ... is the rotational quantum number and B = ħ²/(2I) is the rotational constant, with I the moment of inertia. At high J, centrifugal forces elongate the bonds, introducing distortion corrections that reduce the effective B and the level spacings. For the hydrogen molecule (H₂), the ground-state vibrational spacing is approximately 4400 cm⁻¹ (ω_e = 4401.21 cm⁻¹), while rotational spacings are about 120 cm⁻¹ (2B_e ≈ 121.7 cm⁻¹ with B_e = 60.853 cm⁻¹), illustrating the scale separation; in polyatomics like water, the normal modes include the symmetric stretch (~3650 cm⁻¹), asymmetric stretch (~3750 cm⁻¹), and bend (~1595 cm⁻¹).[39]
Potential Energy Surfaces and Diagrams
In quantum chemistry, a potential energy surface (PES) represents the potential energy of a molecule as a function of its nuclear coordinates, denoted V(R), where R collectively specifies the positions of the nuclei.[40] Minima on a PES correspond to stable bound states, such as equilibrium molecular geometries, while saddle points indicate transition states associated with reaction barriers.[41] These surfaces form the multidimensional energy landscape that underpins the Born-Oppenheimer approximation, separating nuclear and electronic motion across molecular configurations.[42]

A PES can be described in the adiabatic or the diabatic representation. Adiabatic surfaces arise from solving the electronic Schrödinger equation at fixed nuclear positions, yielding eigenstates whose curves avoid crossing because of non-adiabatic coupling, often manifesting as avoided crossings among excited states.[40] In contrast, diabatic surfaces maintain a consistent electronic character across geometries, allowing direct curve crossings, which simplifies the modeling of non-adiabatic dynamics such as electron transfer or photochemical processes.[43] This distinction is crucial for interpreting excited-state behavior: adiabatic surfaces reflect the instantaneous electronic eigenstates, while diabatic ones facilitate the analysis of state mixing.[44]

Energy level diagrams visualize the discrete vibrational and rotational levels superimposed on the electronic PES, often stacking them schematically to illustrate molecular spectra.
The Franck-Condon principle governs vertical electronic transitions in these diagrams, positing that the nuclei remain effectively stationary during the ultrafast electronic rearrangement, so that the overlaps between vibrational wavefunctions on the two electronic surfaces determine the transition intensities.[45] For instance, in diatomic molecules, absorption from the ground electronic state to an excited state appears as a vertical line on the PES, with the most probable transitions occurring where the vibrational overlap is maximal, often producing progressions of vibrational bands.[46]

Jablonski diagrams extend these representations by depicting singlet and triplet electronic states along with radiative and non-radiative processes. These diagrams illustrate intersystem crossing (ISC), a spin-forbidden transition from a singlet excited state to a triplet state, which enables phosphorescence by populating lower-energy triplet levels that decay slowly to the ground state.[47] In such diagrams, solid arrows denote radiative transitions such as fluorescence (singlet-to-singlet) or phosphorescence (triplet-to-singlet), while wavy lines indicate non-radiative pathways such as ISC or internal conversion, providing a qualitative map of excited-state relaxation in molecules.[48]

Computational methods, particularly ab initio approaches, are essential for constructing accurate PES.
Density functional theory (DFT) and higher-level methods such as coupled-cluster theory compute V(R) by solving the electronic problem at many geometries, enabling global surfaces to be mapped for dynamics simulations.[49] For vibrational levels in diatomic molecules, the Morse potential serves as a seminal empirical model, V(r) = D_e (1 − e^{−a(r − r_e)})², where D_e is the dissociation energy, r_e the equilibrium bond length, and a a parameter controlling the width of the well; it captures anharmonicity better than the harmonic oscillator while still admitting exact quantum solutions for the bound states.[50] These techniques, validated against experimental spectra, underpin predictions of molecular stability and reactivity.[51]
Transitions Between Energy Levels
Selection Rules and Transition Probabilities
Selection rules dictate which transitions between quantum states are permitted or forbidden under specific interaction mechanisms, primarily arising from conservation laws and symmetry considerations in quantum electrodynamics. For electric dipole (E1) transitions, the dominant mechanism in atomic and molecular spectroscopy, the selection rules require a change in the orbital angular momentum quantum number of Δl = ±1 and in the magnetic quantum number of Δm_l = 0, ±1, reflecting the vector nature of the dipole operator and the photon's angular momentum.[52] Additionally, these transitions necessitate a change in the parity of the wavefunction, as the electric dipole operator is odd under parity inversion, ensuring that only states of opposite parity can couple effectively.[53] Conservation of total angular momentum imposes further restrictions, particularly in the LS (Russell-Saunders) coupling scheme common for light atoms. Here, the change in the total angular momentum quantum number must satisfy ΔJ = 0, ±1, with the prohibition of 0 ↔ 0 transitions to avoid violating angular momentum conservation by the spin-1 photon.[54] Spin angular momentum is conserved, yielding ΔS = 0 and ΔM_S = 0, which suppresses spin-flip transitions unless higher-order effects intervene.[54] These rules ensure that only certain energy level transitions, such as those between p and s orbitals in hydrogen-like atoms, are allowed via E1 mechanisms.[52] The probability of an allowed transition is quantified by the transition dipole moment, defined as μ_{if} = ⟨ψ_f | e r | ψ_i⟩, where ψ_i and ψ_f are the initial and final wavefunctions, e is the electron charge, and r is the position operator.[55] The transition rate is proportional to the square of this matrix element's magnitude, |μ_{if}|², which determines the strength of the coupling between states.[55] In the semiclassical treatment of radiation-matter interactions, these probabilities are encapsulated by Einstein's
coefficients: the spontaneous emission coefficient A_{if} governs the rate of decay from the upper to the lower state, while the absorption and stimulated emission coefficients B_{if} and B_{fi} describe upward and downward transitions induced by radiation fields, related by A_{if} / B_{if} = 8π h ν³ / c³ in thermal equilibrium.[56] Transitions violating the E1 selection rules are termed forbidden and proceed via weaker mechanisms such as magnetic dipole (M1) or electric quadrupole (E2) interactions, which do not require a parity change or Δl = ±1.[57] For instance, M1 transitions allow Δl = 0 while conserving parity, and E2 permits Δl = 0, ±2, but both have much smaller matrix elements, leading to longer lifetimes, typically on the order of milliseconds for M1 decays in atomic systems compared to nanoseconds for E1.[58] In perturbation theory, the general transition rate for a weak interaction between a discrete initial state and continuum final states is given by Fermi's golden rule, w = (2π/ℏ) |V_{if}|² ρ(E), where V_{if} is the perturbation matrix element and ρ(E) is the density of final states at energy E.[59] This formula underpins the calculation of rates for both radiative and non-radiative processes, providing a foundational tool for predicting transition probabilities in quantum systems.[59]
Applications in Spectroscopy
Spectroscopy leverages transitions between quantized energy levels in atoms and molecules to probe their structure and dynamics, providing insight into electronic, vibrational, and rotational states through the absorption or emission of light at specific wavelengths.[60] In absorption and emission spectroscopy, atoms or molecules absorb photons to excite electrons from lower to higher energy levels, or emit photons during relaxation, producing characteristic spectral lines whose positions reveal the energy level spacings.[60] These lines are broadened by mechanisms such as Doppler broadening, arising from the thermal motion of the particles, which typically yields linewidths on the order of gigahertz (GHz) in optical spectra of room-temperature gases.[61] Pressure (collisional) broadening further widens the lines through interactions between particles, to an extent depending on density and collision rates, and often dominates in denser media.[60] Raman spectroscopy extends these techniques by detecting the inelastic scattering of light, where the energy shift of the scattered photons corresponds to differences between vibrational energy levels in the ground electronic state, enabling non-destructive analysis of molecular vibrations without direct absorption. These Stokes and anti-Stokes shifts, typically in the range of 100–3000 cm⁻¹, provide fingerprints of molecular bonds and conformations, complementing infrared absorption methods. Laser spectroscopy achieves higher resolution by employing tunable narrow-linewidth lasers; for instance, saturated absorption spectroscopy uses a counter-propagating pump-probe configuration to suppress Doppler broadening, resolving hyperfine splittings down to megahertz (MHz) scales in atomic spectra such as those of the alkali metals.[62] This technique has enabled precise measurements of energy level fine structure, essential for
atomic clocks and quantum optics applications.[63] Photoelectron spectroscopy directly measures energy levels by ionizing atoms or molecules with photons and analyzing the kinetic energies of the ejected electrons, from which ionization potentials are derived as the difference between the photon energy and the electron kinetic energy. In ultraviolet photoelectron spectroscopy (UPS), the valence levels are probed, revealing orbital energies and bonding characteristics in molecules, with binding energies typically spanning 5–20 eV for valence electrons. Time-resolved variants, such as pump-probe spectroscopy using femtosecond lasers, track ultrafast dynamics following photoexcitation; for example, a pump pulse excites vibrational levels while a delayed probe monitors relaxation processes such as intramolecular vibrational redistribution on picosecond timescales.[64] These methods have elucidated energy transfer in photochemical reactions, with time resolution down to about 100 femtoseconds.[64] In astrophysics, spectroscopy of energy level transitions identifies atomic and molecular compositions of distant celestial objects through redshifted absorption or emission lines, where the wavelength shift indicates recession velocity via the Doppler effect.[65] Fraunhofer lines in the solar spectrum, dark absorption features from the Sun's photosphere, correspond to electronic transitions in elements such as hydrogen and metals, allowing determination of solar atmospheric abundances.[65] Similar lines in stellar and galactic spectra, shifted by cosmic expansion, reveal the chemical evolution of the universe, from hydrogen-dominated early stars to metal-enriched later ones.[65]
Energy Levels in Crystalline Solids
Band Theory Overview
Band theory describes the formation of continuous energy bands in periodic solids, extending the discrete energy levels of isolated atoms into quasi-continuous spectra: the periodic lattice potential allows electron wavefunctions to extend throughout the crystal.[66] In crystalline solids, electrons are not confined to single atoms but delocalize, leading to energy bands separated by band gaps in which no states exist.[67] The foundation of band theory is the Bloch theorem, which states that the eigenfunctions of an electron in a periodic potential can be written as plane waves modulated by a lattice-periodic function, ψ_{nk}(r) = e^{ik·r} u_{nk}(r), where u_{nk}(r) has the periodicity of the lattice and k is a wavevector in the first Brillouin zone of the reciprocal lattice.[68] This form leads to energy eigenvalues that organize into bands labeled by the band index n, with the dispersion relation E_n(k) plotted in k-space determining the band structure.[68]

Two key models illustrate band formation. The nearly free electron model treats electrons as nearly free plane waves weakly perturbed by the lattice potential; Bragg scattering at the Brillouin zone boundaries, where wavevectors satisfy k = ±G/2 (with G a reciprocal lattice vector), splits the degenerate states and opens the energy gaps.
In contrast, the tight-binding model starts from localized atomic orbitals on the lattice sites, whose overlap forms Bloch states; the resulting bands have widths proportional to the hopping integral t, which measures the interatomic coupling, yielding narrow bands when the orbitals overlap weakly.[69] Band gaps classify materials: insulators have a large gap (typically several eV), preventing conduction; semiconductors have a small gap (on the order of 1 eV), allowing thermal excitation across it; and metals have overlapping valence and conduction bands, or zero gap, providing free carriers.[70] For example, silicon is a semiconductor with an indirect band gap of approximately 1.12 eV at 300 K, where the conduction band minimum occurs at a different k-point from the valence band maximum.[71] The density of states g(E), which counts the available states per energy interval, exhibits singular features at van Hove singularities, critical points in the band structure where the dispersion flattens (saddle points or extrema), leading to sharp structure in g(E) that influences properties such as the electronic heat capacity.[70]
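The tight-binding picture can be made concrete with a one-dimensional, nearest-neighbor sketch in Python (all parameter values are arbitrary): the single band E(k) = ε₀ − 2t cos(ka) has width 4t, tying the bandwidth directly to the hopping integral.

```python
# One-dimensional, single-orbital tight-binding band: E(k) = eps0 - 2 t cos(k a).
# eps0, t, a are arbitrary illustrative values; the bandwidth is always 4t.
import math

eps0, t, a = 0.0, 1.0, 1.0   # on-site energy (eV), hopping (eV), lattice constant

# Sample k across the first Brillouin zone, -pi/a .. pi/a.
ks = [math.pi * (i / 100 - 1) / a for i in range(201)]
band = [eps0 - 2 * t * math.cos(k * a) for k in ks]

print(f"band bottom: {min(band):.2f} eV (at k = 0)")
print(f"band top:    {max(band):.2f} eV (at the zone boundary)")
print(f"bandwidth:   {max(band) - min(band):.2f} eV (= 4t)")
```

Shrinking t (weaker orbital overlap) flattens the band toward the isolated-atom limit, which is exactly the narrow-band behavior the tight-binding model predicts.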