Heat transfer physics
from Wikipedia

Heat transfer physics describes the kinetics of energy storage, transport, and transformation by the principal energy carriers: phonons (lattice vibration waves), electrons, fluid particles, and photons.[1][2][3][4][5] Heat is thermal energy stored in the temperature-dependent motion of particles, including electrons, atomic nuclei, individual atoms, and molecules. Heat is transferred to and from matter by the principal energy carriers. The state of energy stored within matter, or transported by the carriers, is described by a combination of classical and quantum statistical mechanics. Energy is also converted among the various carriers. The heat transfer processes (or kinetics) are governed by the rates at which various related physical phenomena occur, for example the rate of particle collisions in classical mechanics. These states and kinetics determine the heat transfer, i.e., the net rate of energy storage or transport. Governing these processes from the atomic level (atom or molecule length scale) to macroscale are the laws of thermodynamics, including conservation of energy.

Introduction

Variation of equilibrium particle distribution function with respect to energy for different energy carriers.
Kinetics of atomic-level energy transport and transition interaction[5]
Length-time scale regimes for ab initio, MD, Boltzmann transport, and macroscopic treatments of heat transfer.[5]

Heat is thermal energy associated with the temperature-dependent motion of particles. The macroscopic energy equation for an infinitesimal volume used in heat transfer analysis is[6]

∇⋅q = −ρcp(∂T/∂t) + Σi,j ṡi-j,

where q is the heat flux vector, ρcp(∂T/∂t) is the temporal change of internal energy (ρ is density, cp is specific heat capacity at constant pressure, T is temperature and t is time), and ṡi-j is the energy conversion to and from thermal energy (i and j denote the principal energy carriers). So the terms represent energy transport, storage, and transformation. The heat flux vector q is composed of three macroscopic fundamental modes, which are conduction (qk = −k∇T, k: thermal conductivity), convection (qu = ρcpuT, u: velocity), and radiation (qr = ∫0∞ ∫4π s Iph,ω dΩ dω, with dΩ = sinθ dθ dφ, ω: angular frequency, θ: polar angle, Iph,ω: spectral, directional radiation intensity, s: unit vector), i.e., q = qk + qu + qr.
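The relative size of the three flux modes can be illustrated with a back-of-the-envelope calculation. A minimal sketch, with all numerical values (water film thickness, speeds, temperatures) chosen purely for illustration, not taken from the article:

```python
# Compare the three macroscopic heat flux modes for a thin water film.
# All numbers below are illustrative assumptions.
k = 0.6                            # thermal conductivity of water, W/(m K)
dT_dx = (310.0 - 300.0) / 1e-3     # gradient across a 1 mm film, K/m
q_k = k * dT_dx                    # conduction magnitude, Fourier law q_k = k |dT/dx|

rho, c_p = 1000.0, 4180.0          # density (kg/m^3), specific heat (J/(kg K))
u, T = 1e-4, 305.0                 # slow advection speed (m/s), temperature (K)
q_u = rho * c_p * u * T            # convection flux rho * c_p * u * T

sigma_SB = 5.670e-8                # Stefan-Boltzmann constant, W/(m^2 K^4)
q_r = sigma_SB * (310.0**4 - 300.0**4)  # blackbody exchange estimate for radiation

print(q_k, q_u, q_r)
```

Even for this modest 10 K difference, conduction across the thin film and advection dominate the radiative exchange by orders of magnitude, which is typical for condensed media near room temperature.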

Once the states and kinetics of the energy conversion and the thermophysical properties are known, heat transfer is described by the above equation. These atomic-level mechanisms and kinetics are addressed in heat transfer physics. The microscopic thermal energy is stored, transported, and transformed by the principal energy carriers: phonons (p), electrons (e), fluid particles (f), and photons (ph).[7]

Length and time scales


Thermophysical properties of matter and the kinetics of interaction and energy exchange among the principal carriers are based on atomic-level configuration and interaction.[1] Transport properties such as thermal conductivity are calculated from these atomic-level properties using classical and quantum physics.[5][8] Quantum states of the principal carriers (e.g., momentum, energy) are derived from the Schrödinger equation (called first-principles or ab initio), and the interaction rates (for kinetics) are calculated using the quantum states and quantum perturbation theory (formulated as the Fermi golden rule).[9] A variety of ab initio (Latin for "from the beginning") solvers (software) exist (e.g., ABINIT, CASTEP, Gaussian, Q-Chem, Quantum ESPRESSO, SIESTA, VASP, WIEN2k). Electrons in the inner shells (core) are not involved in heat transfer, and calculations are greatly reduced by proper approximations about the inner-shell electrons.[10]

The quantum treatments, including equilibrium and nonequilibrium ab initio molecular dynamics (MD), involving larger lengths and times are limited by the computational resources, so various alternate treatments with simplifying assumptions have been used for the states and kinetics.[11] In classical (Newtonian) MD, the motion of atoms or molecules (particles) is based on empirical or effective interaction potentials, which in turn can be based on curve fits of ab initio calculations or curve fits to thermophysical properties. From the ensembles of simulated particles, static or dynamic thermal properties or scattering rates are derived.[12][13]

At yet larger length scales (mesoscale, involving many mean free paths), the Boltzmann transport equation (BTE), which is based on classical Hamiltonian-statistical mechanics, is applied. The BTE considers particle states in terms of position and momentum vectors (x, p), and this is represented as the state occupation probability. The occupation has equilibrium distributions (the known boson, fermion, and Maxwell–Boltzmann distributions), and transport of energy (heat) is due to nonequilibrium (caused by a driving force or potential). Central to the transport is the role of scattering, which turns the distribution toward equilibrium. The scattering is represented by the relaxation time or the mean free path. The relaxation time (or its inverse, the interaction rate) is found from other calculations (ab initio or MD) or empirically. The BTE can be solved numerically, e.g., with the Monte Carlo method.[14]

Depending on the length and time scale, the proper level of treatment (ab initio, MD, or BTE) is selected. Heat transfer physics analyses may involve multiple scales (e.g., BTE using interaction rates from ab initio or classical MD) with states and kinetics related to thermal energy storage, transport, and transformation.

So, heat transfer physics covers the four principal energy carriers and their kinetics from classical and quantum mechanical perspectives. This enables multiscale (ab initio, MD, BTE and macroscale) analyses, including low-dimensionality and size effects.[2]

Phonon


The phonon (quantized lattice vibration wave) is a central thermal energy carrier, contributing to heat capacity (sensible heat storage) and conductive heat transfer in the condensed phase, and playing a very important role in thermal energy conversion. Its transport properties are represented by the phonon conductivity tensor Kp (W/m-K, from the Fourier law qk,p = −Kp⋅∇T) for bulk materials, and by the phonon boundary resistance ARp,b [K/(W/m2)] for solid interfaces, where A is the interface area. The phonon specific heat capacity cv,p (J/kg-K) includes the quantum effect. The thermal energy conversion rate involving phonons is included in ṡi-j. Heat transfer physics describes and predicts cv,p, Kp, Rp,b (or conductance Gp,b) and ṡi-j, based on atomic-level properties.

For an equilibrium potential ⟨φ⟩o of a system with N atoms, the total potential ⟨φ⟩ is found by a Taylor series expansion at the equilibrium, and this can be approximated by the second derivatives (the harmonic approximation) as

⟨φ⟩ = ⟨φ⟩o + (1/2) Σi,j di⋅Γij⋅dj,

where di is the displacement vector of atom i, and Γ is the spring (or force) constant given by the second-order derivatives of the potential. The equation of motion for the lattice vibration in terms of the displacement of atoms [d(jl,t): displacement vector of the j-th atom in the l-th unit cell at time t] is

m(j) d2d(jl,t)/dt2 = −Σj′l′ Γ(jl, j′l′)⋅d(j′l′,t),

where m is the atomic mass and Γ is the force constant tensor. The atomic displacement is the summation over the normal modes, d = Σα sα exp[i(κp⋅x − ωpt)] [sα: unit vector of mode α, ωp: angular frequency of the wave, and κp: wave vector]. Using this plane-wave displacement, the equation of motion becomes the eigenvalue equation[15][16]

ωp2(κp) M s = D(κp) s,

where M is the diagonal mass matrix and D is the harmonic dynamical matrix. Solving this eigenvalue equation gives the relation between the angular frequency ωp and the wave vector κp, and this relation is called the phonon dispersion relation. Thus, the phonon dispersion relation is determined by the matrices M and D, which depend on the atomic structure and the strength of interaction among constituent atoms (the stronger the interaction and the lighter the atoms, the higher is the phonon frequency and the larger is the slope dωp/dκp). The Hamiltonian of the phonon system with the harmonic approximation is[15][17][18]

Hp = Σi pi2/2m + (1/2) Σi,j di⋅Dij⋅dj,

where Dij is the dynamical matrix element between atoms i and j, di (dj) is the displacement of atom i (j), and p is momentum. From this and the solution of the dispersion relation, the phonon annihilation operator for the quantum treatment is defined as

bκ,α = N−1/2 Σi exp(−iκ⋅xi) sα⋅[(mωp,α/2ħ)1/2 di + i(1/2mħωp,α)1/2 pi],

where N is the number of normal modes divided by α and ħ is the reduced Planck constant. The creation operator b†κ,α is the adjoint of the annihilation operator. The Hamiltonian in terms of b†κ,α and bκ,α is Hp = Σκ,α ħωp,α[b†κ,αbκ,α + 1/2], and b†κ,αbκ,α is the phonon number operator. The energy of the quantum harmonic oscillator is Ep = Σκ,α [fp(κ,α) + 1/2]ħωp,α(κp), and thus the quantum of phonon energy is ħωp.
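For a one-dimensional monatomic chain the eigenvalue problem collapses to a scalar, so the dispersion relation can be checked directly against the textbook closed form ω = 2(Γ/m)^1/2 |sin(κa/2)|. A minimal sketch with illustrative values of Γ, m, and a (not material data):

```python
import math

# 1D monatomic chain: the dynamical "matrix" is the scalar
# D(kappa) = 2*Gamma*(1 - cos(kappa*a)), and omega^2 * m = D(kappa).
Gamma = 10.0    # force constant, N/m (assumed)
m = 1.0e-26     # atomic mass, kg (assumed)
a = 3.0e-10     # lattice constant, m (assumed)

def omega(kappa):
    D = 2.0 * Gamma * (1.0 - math.cos(kappa * a))  # dynamical matrix element
    return math.sqrt(D / m)                        # solve omega^2 m = D

kappa_edge = math.pi / a                           # Brillouin-zone edge
w_edge = omega(kappa_edge)
w_closed = 2.0 * math.sqrt(Gamma / m) * abs(math.sin(kappa_edge * a / 2.0))
print(w_edge, w_closed)   # eigenvalue solution matches the closed form
```

The slope dωp/dκp near κ = 0 is the acoustic group (sound) speed; stiffer bonds (larger Γ) and lighter atoms raise the whole dispersion curve, as stated above.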

The phonon dispersion relation gives all possible phonon modes within the Brillouin zone (the zone within the primitive cell in reciprocal space), and the phonon density of states Dp (the number density of possible phonon modes). The phonon group velocity up,g is the slope of the dispersion curve, dωp/dκp. Since the phonon is a boson, its occupancy follows the Bose–Einstein distribution {fpo = [exp(ħωp/kBT) − 1]−1, kB: Boltzmann constant}. Using the phonon density of states and this occupancy distribution, the phonon energy is Ep(T) = ∫ Dp(ωp)fp(ωp,T)ħωp dωp, and the phonon density is np(T) = ∫ Dp(ωp)fp(ωp,T) dωp. The phonon heat capacity cv,p (in solids cv,p = cp,p, cv,p: constant-volume heat capacity, cp,p: constant-pressure heat capacity) is the temperature derivative of the phonon energy; for the Debye model (linear dispersion model) it is[19]

cv,p = (9kB/m)(T/TD)3 ∫0TD/T x4ex/(ex − 1)2 dx,

where TD is the Debye temperature, m is the atomic mass, and n is the atomic number density (the number density of phonon modes for the crystal is 3n). This gives the Debye T3 law at low temperatures and the Dulong–Petit law at high temperatures.

From the kinetic theory of gases,[20] the thermal conductivity of principal carrier i (p, e, f and ph) is

ki = (1/3) ni cv,i ui λi,

where ni is the carrier density, the heat capacity cv,i is per carrier, ui is the carrier speed, and λi is the mean free path (the distance traveled by a carrier before a scattering event). Thus, the larger the carrier density, heat capacity and speed, and the less significant the scattering, the higher is the conductivity. For phonons, λp represents the interaction (scattering) kinetics of phonons and is related to the scattering relaxation time τp or rate (= 1/τp) through λp = upτp. Phonons interact with other phonons, and with electrons, boundaries, impurities, etc., and λp combines these interaction mechanisms through the Matthiessen rule. At low temperatures, scattering by boundaries is dominant; with increasing temperature the interaction rates with impurities, electrons and other phonons become important, and finally phonon-phonon scattering dominates for T > 0.2TD. The interaction rates are reviewed in [21] and include quantum perturbation theory and MD.
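The kinetic-theory formula and the Matthiessen rule combine directly. A minimal sketch with purely illustrative numbers (carrier density, per-carrier heat capacity, speed, and the three mean free paths are assumptions, not material data):

```python
# Kinetic-theory conductivity k = (1/3) n c u lam, with the Matthiessen rule
# combining scattering mechanisms: 1/lam = sum_i (1/lam_i).
n = 5e28            # carrier density, 1/m^3 (assumed)
c = 2e-23           # heat capacity per carrier, J/K (assumed)
u = 5000.0          # carrier speed, m/s (assumed)

lam_boundary = 1e-6   # boundary scattering mean free path, m
lam_impurity = 2e-7   # impurity scattering
lam_phonon = 5e-8     # phonon-phonon scattering (shortest, so it dominates)

lam = 1.0 / (1.0 / lam_boundary + 1.0 / lam_impurity + 1.0 / lam_phonon)
k = n * c * u * lam / 3.0
print(lam, k)
```

Because the rates add, the combined mean free path is always shorter than the shortest individual one, which is why the dominant (most frequent) scattering mechanism controls the conductivity.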

A number of conductivity models are available with approximations regarding the dispersion and λp.[17][19][21][22][23][24][25] Using the single-mode relaxation time approximation (∂fp′/∂t|s = −fp′/τp) and the gas kinetic theory, the Callaway phonon (lattice) conductivity model is[21][26]

kp = (1/3) ∫ τp up,g2 ħωp (∂fpo/∂T) Dp(ωp) dωp.

With the Debye model (a single group velocity up,g and the specific heat capacity calculated above), this becomes

kp = (kB/2π2up,g)(kBT/ħ)3 ∫0TD/T τp(x) x4ex/(ex − 1)2 dx,   x = ħωp/kBT,

where a is the lattice constant (a = n−1/3 for a cubic lattice) and n is the atomic number density (the Debye temperature TD is related to up,g and a). The Slack phonon conductivity model, mainly considering acoustic phonon scattering (three-phonon interactions), is given as[27][28]

kp = A⟨M⟩Va1/3TD,∞3/(⟨γ2G⟩No2/3T),

where A is a collection of physical constants (A ≈ 3.1×10−6 with ⟨M⟩ in atomic mass units and Va1/3 in Å), ⟨M⟩ is the mean atomic weight of the atoms in the primitive cell, Va = 1/n is the average volume per atom, TD,∞ is the high-temperature Debye temperature, T is the temperature, No is the number of atoms in the primitive cell, and ⟨γ2G⟩ is the mode-averaged square of the Grüneisen constant or parameter at high temperatures. This model has been widely tested with pure nonmetallic crystals, and the overall agreement is good, even for complex crystals.

Based on kinetics and atomic structure considerations, materials of high crystallinity and strong interactions, composed of light atoms (such as diamond and graphene), are expected to have large phonon conductivity. Solids with more than one atom in the smallest unit cell representing the lattice have two types of phonons, i.e., acoustic and optical. (Acoustic phonons are in-phase movements of atoms about their equilibrium positions, while optical phonons are out-of-phase movements of adjacent atoms in the lattice.) Optical phonons have higher energies (frequencies) but make a smaller contribution to conduction heat transfer, because of their smaller group velocity and occupancy.

Phonon transport across hetero-structure boundaries (represented by Rp,b, the phonon boundary resistance) is modeled, according to the boundary scattering approximations, with the acoustic and diffuse mismatch models.[29] Larger phonon transmission (small Rp,b) occurs at boundaries where the material pair has similar phonon properties (up, Dp, etc.), while in contrast a large Rp,b occurs when one material is softer (lower cut-off phonon frequency) than the other.

Electron


Quantum electron energy states are found using the electron quantum Hamiltonian, which is generally composed of kinetic (−ħ2∇2/2me) and potential energy (φe) terms. The atomic orbital, a mathematical function describing the wave-like behavior of either an electron or a pair of electrons in an atom, can be found from the Schrödinger equation with this electron Hamiltonian. Hydrogen-like atoms (a nucleus and an electron) allow a closed-form solution to the Schrödinger equation with the electrostatic potential (the Coulomb law). The Schrödinger equation of atoms or atomic ions with more than one electron has not been solved analytically, because of the Coulomb interactions among electrons. Thus, numerical techniques are used, and an electron configuration is approximated as a product of simpler hydrogen-like atomic orbitals (isolated electron orbitals). Molecules with multiple atoms (nuclei and their electrons) have molecular orbitals (MO, a mathematical function for the wave-like behavior of an electron in a molecule), obtained from simplified solution techniques such as the linear combination of atomic orbitals (LCAO). The molecular orbital is used to predict chemical and physical properties, and the difference between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is a measure of the excitability of the molecule.

In a crystal structure of metallic solids, the free electron model (zero potential, φe = 0) is used for the behavior of valence electrons. However, in a periodic lattice (crystal) there is a periodic crystal potential, so the electron Hamiltonian becomes[19]

He = −ħ2∇2/2me + φc(x),

where me is the electron mass and the periodic potential is expressed as φc(x) = Σg φg exp[i(g⋅x)] (g: reciprocal lattice vector). The time-independent Schrödinger equation with this Hamiltonian is given as the eigenvalue equation

He ψe,κ = Ee(κe) ψe,κ,

where the eigenfunction ψe,κ is the electron wave function and the eigenvalue Ee(κe) is the electron energy (κe: electron wavevector). The relation between the wavevector κe and the energy Ee provides the electronic band structure. In practice, a lattice as a many-body system includes interactions between electrons and nuclei in the potential, but this calculation can be too intricate. Thus, many approximate techniques have been suggested, one of which is density functional theory (DFT), which uses functionals of the spatially dependent electron density instead of the full interactions. DFT is widely used in ab initio software (ABINIT, CASTEP, Quantum ESPRESSO, SIESTA, VASP, WIEN2k, etc.). The electron specific heat is based on the energy states and the occupancy distribution (the Fermi–Dirac statistics). In general, the heat capacity of electrons is small, except at very high temperatures when they are in thermal equilibrium with phonons (lattice). Electrons contribute to heat conduction (in addition to charge carrying) in solids, especially in metals. The thermal conductivity tensor of a solid is the sum of the electric and phonon thermal conductivity tensors, K = Ke + Kp.

Electrons are affected by two thermodynamic forces [from the charge, ∇(EF/ec), where EF is the Fermi level and ec is the electron charge, and from the temperature gradient, ∇(1/T)], because they carry both charge and thermal energy; thus the electric current je and the heat flow q are described with the thermoelectric tensors (Aee, Aet, Ate, and Att) from the Onsager reciprocal relations[30] as

je = Aee⋅∇(EF/ec) + Aet⋅∇(1/T),
q = Ate⋅∇(EF/ec) + Att⋅∇(1/T).

Converting these equations to a je equation in terms of the electric field ee and ∇T, and a q equation in terms of je and ∇T (using scalar coefficients αee, αet, αte, and αtt for isotropic transport, instead of Aee, Aet, Ate, and Att), gives

je = αee ee − αet∇T,
q = αte je − αtt∇T.

The electrical conductivity/resistivity σe (Ω−1m−1)/ρe (Ω-m), the electric thermal conductivity ke (W/m-K) and the Seebeck/Peltier coefficients αS (V/K)/αP (V) are then defined as

σe = 1/ρe = αee,  αS = αet/αee,  ke = αtt,  αP = αte = TαS.

Various carriers (electrons, magnons, phonons, and polarons) and their interactions substantially affect the Seebeck coefficient.[31][32] The Seebeck coefficient can be decomposed into two contributions, αS = αS,pres + αS,trans, where αS,pres is the sum of the contributions to the carrier-induced entropy change, i.e., αS,pres = αS,mix + αS,spin + αS,vib (αS,mix: entropy of mixing, αS,spin: spin entropy, and αS,vib: vibrational entropy). The other contribution, αS,trans, is the net energy transferred in moving a carrier divided by qT (q: carrier charge). The electron contributions to the Seebeck coefficient are mostly in αS,pres, and αS,mix is usually dominant in lightly doped semiconductors. The change of the entropy of mixing upon adding an electron to a system gives the so-called Heikes formula

αS,mix = −(kB/q) ln[feo/(1 − feo)],

where feo = N/Na is the ratio of electrons to sites (carrier concentration). Using the chemical potential (μ), the thermal energy (kBT) and the Fermi function, this can be expressed in the alternative form αS,mix = (kB/q)[(Ee − μ)/(kBT)]. Extending the Seebeck effect to spins, a ferromagnetic alloy is a good example. The contribution to the Seebeck coefficient resulting from the electrons' presence altering the system's spin entropy is given by αS,spin = ΔSspin/q = (kB/q)ln[(2s + 1)/(2s0 + 1)], where s0 and s are the net spins of the magnetic site in the absence and presence of the carrier, respectively. Many vibrational effects involving electrons also contribute to the Seebeck coefficient; the softening of the vibrational frequencies, which produces a change of the vibrational entropy, is one example. The vibrational entropy is the negative temperature derivative of the free energy, i.e.,

Svib = kB ∫0∞ dω Dp(ω){x/[exp(x) − 1] − ln[1 − exp(−x)]},  x = ħω/kBT,

where Dp(ω) is the phonon density of states of the structure. In the high-temperature limit, with series expansions of the hyperbolic functions, this simplifies to αS,vib = ΔSvib/q = (kB/q)Σi(−Δωi/ωi).
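The Heikes formula makes the doping dependence of αS,mix concrete. A minimal sketch (sign convention as written above; the fillings chosen are illustrative):

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602177e-19    # carrier charge magnitude, C

def heikes(f):
    # Heikes entropy-of-mixing Seebeck contribution:
    # alpha_S,mix = -(kB/q) ln[f/(1-f)] = (kB/q) ln[(1-f)/f],
    # f = electron-per-site ratio
    return (kB / q) * math.log((1.0 - f) / f)

# lightly doped case (f << 1): large alpha_S,mix, of order 100s of uV/K
print(heikes(0.01) * 1e6, "uV/K")
# half filling: the mixing contribution vanishes by symmetry
print(heikes(0.5) * 1e6, "uV/K")
```

The prefactor kB/q ≈ 86 μV/K sets the natural scale of thermoelectric voltages, and the logarithm shows why dilute carriers (lightly doped semiconductors) give much larger Seebeck coefficients than metals.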

The Seebeck coefficient derived in the above Onsager formulation is the mixing component αS,mix, which dominates in most semiconductors. The vibrational component in high-band-gap materials such as B13C2 is very important.
Considering the microscopic transport (transport being a result of nonequilibrium), the electron BTE in the relaxation-time approximation is

∂fe/∂t + ue⋅∇fe + Fte⋅(∂fe/∂pe) = (feo − fe)/τe,

where ue is the electron velocity vector, fe (feo) is the electron nonequilibrium (equilibrium) distribution, τe is the electron scattering time, Ee is the electron energy, and Fte is the electric and thermal force from ∇(EF/ec) and ∇(1/T). Relating the thermoelectric coefficients to the microscopic transport equations for je and q, the thermal, electric, and thermoelectric properties are calculated. Thus, ke increases with the electrical conductivity σe and temperature T, as the Wiedemann–Franz law states [ke/(σeTe) = (1/3)(πkB/ec)2 = 2.44×10−8 W-Ω/K2]. Electron transport (represented by σe) is a function of the carrier density ne,c and the electron mobility μe (σe = ecne,cμe). μe is determined by the electron scattering rates (or relaxation time τe) of the various interaction mechanisms, including interactions with other electrons, phonons, impurities and boundaries.
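The Wiedemann–Franz relation gives a quick estimate of the electronic thermal conductivity from the electrical conductivity alone. A minimal sketch using a nominal room-temperature conductivity for copper (an illustrative value):

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
ec = 1.602177e-19   # electron charge, C

# Lorenz number L = (1/3)(pi kB / ec)^2 = 2.44e-8 W-Ohm/K^2
L = (math.pi * kB / ec)**2 / 3.0

sigma_e = 5.96e7    # electrical conductivity of copper, 1/(Ohm m) (nominal)
T = 300.0           # K
k_e = L * sigma_e * T   # electronic thermal conductivity via Wiedemann-Franz
print(L, k_e)
```

The result, around 430 W/m-K, is close to copper's measured total conductivity, consistent with electrons dominating heat conduction in good metals.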

Electrons interact with the other principal energy carriers. Electrons accelerated by an electric field relax through energy conversion to phonons (in semiconductors, mostly optical phonons), which is called Joule heating. Energy conversion between electric potential and phonon energy is considered in thermoelectrics such as Peltier cooling and the thermoelectric generator. Also, the study of interaction with photons is central in optoelectronic applications (e.g., the light-emitting diode, solar photovoltaic cells, etc.). Interaction rates or energy conversion rates can be evaluated using the Fermi golden rule (from perturbation theory) with the ab initio approach.

Fluid particle


The fluid particle is the smallest unit (atom or molecule) in the fluid phase (gas, liquid or plasma) that preserves its chemical bonds. The energy of a fluid particle is divided into potential, electronic, translational, vibrational, and rotational energies. Heat (thermal) energy storage in a fluid particle is through the temperature-dependent particle motion (translational, vibrational, and rotational energies). The electronic energy is included only if the temperature is high enough to ionize or dissociate the fluid particles or to include other electronic transitions. These quantum energy states of the fluid particles are found using their respective quantum Hamiltonians, which are Hf,t = −(ħ2/2m)∇2, Hf,v = −(ħ2/2m)∇2 + Γx2/2 and Hf,r = −(ħ2/2If)∇2 for the translational, vibrational and rotational modes (Γ: spring constant, If: the moment of inertia of the molecule). From the Hamiltonian, the quantized fluid particle energy states Ef and partition functions Zf [with the Maxwell–Boltzmann (MB) occupancy distribution] are found as[33]

  • translational: Zf,t = (2πmkBT/hP2)3/2 V (hP: Planck constant)
  • vibrational: Zf,v = [1 − exp(−Tf,v/T)]−1
  • rotational: Zf,r = T/Tf,r (high-temperature limit)
  • total: Zf = Zf,tZf,vZf,r

Here, gf is the degeneracy, n, l, and j are the translational, vibrational and rotational quantum numbers, Tf,v is the characteristic temperature for vibration (= ħωf,v/kB, ωf,v: vibration frequency), and Tf,r is the rotational temperature [= ħ2/(2IfkB)]. The average specific internal energy is related to the partition function through

⟨ef⟩ = kBT2(∂ln Zf/∂T).

With the energy states and the partition function, the fluid particle specific heat capacity cv,f is the summation of the contributions from the various kinetic energies (for a non-ideal gas the potential energy is also added). Because the total number of degrees of freedom in a molecule is determined by its atomic configuration, cv,f has different formulas depending on the configuration:[33]

  • monatomic ideal gas: cv,f = 3Rg/2M
  • diatomic ideal gas: cv,f = (Rg/M){5/2 + (Tf,v/T)2 exp(Tf,v/T)/[exp(Tf,v/T) − 1]2}
  • nonlinear, polyatomic ideal gas: cv,f = (Rg/M){3 + Σi=13No−6 (Tf,v,i/T)2 exp(Tf,v,i/T)/[exp(Tf,v,i/T) − 1]2}

where Rg is the gas constant (= NAkB, NA: the Avogadro constant) and M is the molecular mass (kg/kmol). (For the polyatomic ideal gas, No is the number of atoms in a molecule.) In gases, the constant-pressure specific heat capacity cp,f has a larger value, and the difference depends on the temperature T, the volumetric thermal expansion coefficient β and the isothermal compressibility κ [cp,f − cv,f = Tβ2/(ρfκ), ρf: the fluid density]. For dense fluids, the interactions between the particles (the van der Waals interaction) should be included, and cv,f and cp,f change accordingly. The net motion of particles (under gravity or external pressure) gives rise to the convection heat flux qu = ρfcp,fufT. The conduction heat flux qk for an ideal gas is derived with the gas kinetic theory or the Boltzmann transport equation, and the thermal conductivity is

kf = (1/3) nf m cv,f ⟨uf2⟩ τf-f,

where ⟨uf2⟩1/2 is the RMS (root mean square) thermal speed [(3kBT/m)1/2 from the MB distribution function, m: atomic mass] and τf-f is the relaxation time (or intercollision time period) [(21/2π d2nf⟨uf⟩)−1 from the gas kinetic theory, ⟨uf⟩: average thermal speed (8kBT/πm)1/2, d: the collision diameter of the fluid particle (atom or molecule), nf: the fluid number density].
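These formulas can be chained into an order-of-magnitude estimate for a real gas. A minimal sketch for argon at ambient conditions; the collision diameter is an assumed illustrative value, and the elementary kinetic-theory formula is known to underestimate the measured conductivity by a modest factor:

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
Rg = 8.314           # gas constant, J/(mol K)
NA = 6.02214e23      # Avogadro constant, 1/mol

# monatomic ideal gas (argon): c_v,f = (3/2) Rg / M
M = 39.95e-3         # molar mass, kg/mol
cv = 1.5 * Rg / M    # J/(kg K)

# elementary kinetic-theory conductivity k_f = (1/3) n cbar <u> lam
T, p = 300.0, 101325.0
m = M / NA                                         # atomic mass, kg
d = 3.4e-10                                        # collision diameter, m (assumed)
n_f = p / (kB * T)                                 # number density, 1/m^3
u_mean = math.sqrt(8 * kB * T / (math.pi * m))     # average thermal speed, m/s
lam = 1.0 / (math.sqrt(2) * math.pi * d**2 * n_f)  # mean free path, m
cbar = 1.5 * kB                                    # heat capacity per atom, J/K
k_f = n_f * cbar * u_mean * lam / 3.0              # W/(m K)
print(cv, u_mean, lam, k_f)
```

The mean free path (tens of nanometers) and thermal speed (hundreds of m/s) come out at their familiar ambient-gas scales, and k_f lands at the milliwatt-per-meter-kelvin order typical of monatomic gases.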

kf is also calculated using molecular dynamics (MD), which simulates the physical movements of the fluid particles with the Newton equations of motion (classical) and a force field (from ab initio calculations or empirical properties). For the calculation of kf, the equilibrium MD with the Green–Kubo relations, which express the transport coefficients in terms of integrals of time correlation functions (considering fluctuations), or nonequilibrium MD (prescribing a heat flux or temperature difference in the simulated system) are generally employed.

Fluid particles can interact with the other principal carriers. Vibrational or rotational modes, which have relatively high energy, are excited or decay through interaction with photons. Gas lasers employ the interaction kinetics between fluid particles and photons, and laser cooling has also been considered in the CO2 gas laser.[34][35] Also, fluid particles can be adsorbed on solid surfaces (physisorption and chemisorption), and the frustrated vibrational modes in adsorbates (fluid particles) decay by creating e−-h+ pairs or phonons. These interaction rates are also calculated through ab initio calculations on the fluid particle and the Fermi golden rule.[36]

Photon

Spectral photon absorption coefficient for typical gas, liquid, and solid phases. For the solid phase, examples of polymer, oxide, semiconductor, and metals are given.

The photon is the quantum of electromagnetic (EM) radiation and the energy carrier for radiation heat transfer. The EM wave is governed by the classical Maxwell equations, and the quantization of the EM wave is used for phenomena such as blackbody radiation (in particular, to explain the ultraviolet catastrophe). The quantum of EM wave (photon) energy of angular frequency ωph is Eph = ħωph, and it follows the Bose–Einstein distribution function (fph). The photon Hamiltonian for the quantized radiation field (second quantization) is[37][38]

Hph = (1/2) ∫ (εoee2 + be2/μo) dV = Σα ħωph,α [c†αcα + 1/2],

where ee and be are the electric and magnetic fields of the EM radiation, εo and μo are the free-space permittivity and permeability, V is the interaction volume, ωph,α is the photon angular frequency of the α mode, and c†α and cα are the photon creation and annihilation operators. The vector potential ae of the EM fields (ee = −∂ae/∂t and be = ∇×ae) is

ae = Σα (ħ/2εoωph,αV)1/2 sph,α [cα exp(iκα⋅x − iωph,αt) + c†α exp(−iκα⋅x + iωph,αt)],

where sph,α is the unit polarization vector and κα is the wave vector.

Blackbody radiation, among the various types of photon emission, employs the photon gas model with a thermalized energy distribution and without interphoton interaction. From the linear dispersion relation (i.e., dispersionless), the phase and group speeds are equal (uph = dωph/dκ = ωph/κ, uph: photon speed), and the Debye (used for dispersionless photons) density of states is Dph,b,ω = ωph2/π2uph3. With Dph,b,ω and the equilibrium distribution fph, the photon energy spectral distribution dIb,ω or dIb,λ (λph: wavelength) and the total emissive power Eb are derived as

Ib,ω = ħωph3/{4π3uph2[exp(ħωph/kBT) − 1]} (Planck law),
Eb = σSBT4 (Stefan–Boltzmann law, σSB: the Stefan–Boltzmann constant).
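The Stefan–Boltzmann law follows from integrating the Planck spectrum over frequency, which can be checked numerically. A minimal sketch in a nondimensional variable x = ħω/kBT, where the known result ∫0∞ x3/(ex − 1) dx = π4/15 produces σSB = π2kB4/(60ħ3uph2):

```python
import math

kB = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054572e-34      # reduced Planck constant, J s
c0 = 2.99792458e8        # speed of light (photon speed in vacuum), m/s

T = 300.0
# prefactor of the emissive-power integral in x = hbar*omega/(kB*T)
pref = kB**4 * T**4 / (4 * math.pi**2 * c0**2 * hbar**3)

N, x_max = 20000, 50.0   # midpoint rule; the x > 50 tail is negligible
h = x_max / N
s = 0.0
for i in range(1, N + 1):
    x = (i - 0.5) * h
    s += x**3 / (math.exp(x) - 1.0)
E_b = pref * s * h       # integrated Planck spectrum

sigma_SB = math.pi**2 * kB**4 / (60 * hbar**3 * c0**2)
print(E_b, sigma_SB * T**4)   # the two agree: E_b = sigma_SB T^4
```

At 300 K both evaluate to about 459 W/m2, and the closed form reproduces the familiar σSB = 5.67×10−8 W/m2-K4.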

Compared with blackbody radiation, laser emission has high directionality (small solid angle ΔΩ) and spectral purity (narrow bands Δω). Lasers range from the far-infrared to the X-ray/γ-ray regimes, based on resonant transitions (stimulated emission) between electronic energy states.[39]

Near-field radiation from thermally excited dipoles and other electric/magnetic transitions is very effective within a short distance (of the order of the wavelength) from the emission sites.[40][41][42]

The BTE for the photon particle momentum pph = ħωphs/uph along direction s, experiencing absorption/emission ṡph-a (= uphσph,ω[fph(ωph,T) − fph(s)], σph,ω: spectral absorption coefficient) and generation/removal ṡph,i, is[43][44]

∂fph/∂t + uphs⋅∇fph = uphσph,ω[fph(ωph,T) − fph(s)] + ṡph,i.

In terms of the radiation intensity (Iph,ω = uphfphħωphDph,ω/4π, Dph,ω: photon density of states), this is called the equation of radiative transfer (ERT),[44]

(1/uph)∂Iph,ω/∂t + s⋅∇Iph,ω = σph,ω(Ib,ω − Iph,ω) + ṡph,i,

and the net radiative heat flux vector is

qr = ∫0∞ ∫4π s Iph,ω dΩ dω.

From the Einstein population rate equation, the spectral absorption coefficient σph,ω in the ERT is proportional to the interaction probability (absorption) rate, the Einstein coefficient B12 (J−1 m3 s−1), which gives the probability per unit time per unit spectral energy density of the radiation field (1: ground state, 2: excited state), and to the electron density ne (in the ground state).[45] This can be obtained using the transition dipole moment μe with the FGR and the relationship between the Einstein coefficients. Averaging σph,ω over ω gives the average photon absorption coefficient σph.

For the case of an optically thick medium of length L, i.e., σphL ≫ 1, using the gas kinetic theory, the photon conductivity kph is 16σSBT3/3σph (σSB: the Stefan–Boltzmann constant, σph: the average photon absorption coefficient), and the photon volumetric heat capacity nphcv,ph is 16σSBT3/uph.
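These optically thick (diffusion-limit) expressions are simple enough to evaluate directly. A minimal sketch with an assumed hot medium and absorption coefficient (illustrative values only):

```python
sigma_SB = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
u_ph = 2.998e8          # photon speed, taken as c for a medium with n ~ 1 (assumed)

def k_ph(T, sigma_abs):
    # optically thick photon conductivity: k_ph = 16 sigma_SB T^3 / (3 sigma_abs)
    return 16.0 * sigma_SB * T**3 / (3.0 * sigma_abs)

def n_cv_ph(T):
    # photon volumetric heat capacity: n_ph c_v,ph = 16 sigma_SB T^3 / u_ph
    return 16.0 * sigma_SB * T**3 / u_ph

T, sigma_abs = 1500.0, 100.0   # hot medium, absorption coefficient 1/m (assumed)
print(k_ph(T, sigma_abs), n_cv_ph(T))
```

The strong T3 dependence is why radiative (photon) conduction becomes competitive with phonon conduction only in hot, weakly absorbing media such as semitransparent melts and furnaces.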

Photons have the largest range of energy and are central in a variety of energy conversions. Photons interact with electric and magnetic entities, for example electric dipoles, which in turn are excited by optical phonons or fluid particle vibrations, or the transition dipole moments of electronic transitions. In heat transfer physics, the interaction kinetics of photons is treated using perturbation theory (the Fermi golden rule) and the interaction Hamiltonian. The photon-electron (dipole) interaction Hamiltonian is[46]

Hph-e = −pe⋅ee,

where pe is the dipole moment vector and a† and a are the creation and annihilation operators of the internal motion of the electron. Photons also participate in ternary interactions, e.g., phonon-assisted photon absorption/emission (transition of electron energy level).[47][48] The vibrational modes in fluid particles can decay or become excited by emitting or absorbing photons. Examples are solid and molecular-gas laser cooling.[49][50][51]

Using ab initio calculations based on first principles along with EM theory, various radiative properties, such as the dielectric function (electrical permittivity, εe,ω), the spectral absorption coefficient (σph,ω), and the complex refractive index (mω), are calculated for the various interactions between photons and electric/magnetic entities in matter.[52][53] For example, the imaginary part (εe,c,ω) of the complex dielectric function (εe,ω = εe,r,ω + i εe,c,ω) for an electronic transition across a bandgap is[3]

εe,c,ω = (4π2ec2/me2ω2V) Σκ ΣVB,CB wκ |pij|2 δ[Ej(κ) − Ei(κ) − ħω],

where V is the unit-cell volume, VB and CB denote the valence and conduction bands, wκ is the weight associated with a κ-point, and pij is the transition momentum matrix element. The real part εe,r,ω is obtained from εe,c,ω using the Kramers–Kronig relation[54]

εe,r,ω = 1 + (2/π) P ∫0∞ ω′εe,c,ω′/(ω′2 − ω2) dω′.

Here, P denotes the principal value of the integral.

In another example, for the far-IR regions, where the optical phonons are involved, the dielectric function (εe,ω) is calculated as

εe,ω = εe,∞ Πj (ωLO,j2 − ω2 − iγLO,jω)/(ωTO,j2 − ω2 − iγTO,jω),

where LO and TO denote the longitudinal and transverse optical phonon modes, j runs over all the IR-active modes, and γ is the temperature-dependent damping term in the oscillator model. εe,∞ is the high-frequency dielectric permittivity, which can be calculated with a DFT calculation when the ions are treated as an external potential.

From these dielectric function (εe,ω) calculations (e.g., ABINIT, VASP, etc.), the complex refractive index mω (= nω + iκω, nω: refraction index and κω: extinction index) is found, i.e., mω2 = εe,ω = εe,r,ω + iεe,c,ω. The surface reflectance R of an ideal surface at normal incidence from vacuum or air is given as[55] R = [(nω − 1)2 + κω2]/[(nω + 1)2 + κω2]. The spectral absorption coefficient is then found from σph,ω = 2ωκω/uph. The spectral absorption coefficients for various electric entities are listed in the table below.[56]
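The last two closed-form relations are direct to evaluate once nω and κω are known. A minimal sketch with illustrative optical constants (loosely silicon-like in the visible, not tabulated data):

```python
import math

def reflectance(n, kappa):
    # normal-incidence reflectance from vacuum:
    # R = [(n - 1)^2 + kappa^2] / [(n + 1)^2 + kappa^2]
    return ((n - 1.0)**2 + kappa**2) / ((n + 1.0)**2 + kappa**2)

def absorption_coeff(omega, kappa, u_ph=2.998e8):
    # spectral absorption coefficient sigma_ph,omega = 2 omega kappa / u_ph
    return 2.0 * omega * kappa / u_ph

# illustrative optical constants near the visible (assumed values)
n, kappa = 3.9, 0.02
omega = 2 * math.pi * 5e14        # ~600 nm light, rad/s
print(reflectance(n, kappa), absorption_coeff(omega, kappa))
```

Even a small extinction index gives an absorption coefficient of order 105 1/m at optical frequencies (absorption depths of microns), while the reflectance is set almost entirely by the large real index.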

Mechanism Relation (σph,ω)
Electronic absorption transition (atom, ion or molecule) , [ne,A: number density of ground state, ωe,g: transition angular frequency, : spontaneous emission rate (s−1), μe: transition dipole moment, : bandwidth]
Free carrier absorption (metal) (ne,c: number density of conduction electrons, : average electron momentum relaxation time, εo: free space electrical permittivity)
Direct-band absorption (semiconductor) (nω: index of refraction, Dph-e: joint density of states)
Indirect-band absorption (semiconductor) with phonon absorption: (aph-e-p,a: phonon absorption coupling coefficient, ΔEe,g: bandgap, ℏωp: phonon energy)
with phonon emission: (aph-e-p,e: phonon emission coupling coefficient)
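As a numerical illustration of the reflectance and absorption-coefficient relations above (the optical constants are hypothetical round numbers, not values from the article, and uph is taken as the speed of light in vacuum):

```python
def normal_reflectance(n, kappa):
    # R = [(n - 1)^2 + kappa^2] / [(n + 1)^2 + kappa^2] for normal incidence from vacuum
    return ((n - 1.0) ** 2 + kappa ** 2) / ((n + 1.0) ** 2 + kappa ** 2)

def absorption_coefficient(omega, kappa, u_ph=2.998e8):
    # sigma_ph,omega = 2 * omega * kappa / u_ph, in 1/m
    return 2.0 * omega * kappa / u_ph

# Hypothetical optical constants (n = 3.9, kappa = 0.02) at omega = 2e15 rad/s
R = normal_reflectance(3.9, 0.02)           # fraction of normally incident light reflected
sigma = absorption_coefficient(2e15, 0.02)  # spectral absorption coefficient, 1/m
```

A transparent, index-matched surface (n = 1, κ = 0) gives R = 0, as the formula requires.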

from Grokipedia
Heat transfer is the science that examines the exchange of thermal energy between physical systems due to temperature differences, distinct from thermodynamics, which deals with the amount of energy transferred rather than its rate of transfer. This field encompasses the prediction and analysis of energy flows in various media, governed by fundamental principles that ensure conservation of energy. The three primary mechanisms of heat transfer (conduction, convection, and radiation) operate under distinct physical processes, often occurring simultaneously in real-world scenarios. Conduction involves the transfer of thermal energy through direct molecular or atomic interactions within a material, without bulk motion of the substance, and is quantified by Fourier's law: the heat flux is proportional to the negative temperature gradient, with the thermal conductivity k as the proportionality constant (e.g., k = 401 W/m·K for copper and k = 0.026 W/m·K for air). This mechanism dominates in solids and stagnant fluids, where energy moves from higher- to lower-temperature regions via collisions between particles. Thermal conductivity, a key material property, varies with temperature and substance composition, enabling comparisons of heat-conducting abilities across materials like metals (high k) and insulators (low k). Convection combines conduction at a surface with bulk fluid motion, transferring energy between a solid boundary and an adjacent moving fluid, and follows Newton's law of cooling: the rate is proportional to the temperature difference between the surface and the fluid, with the convection coefficient h determining the strength (typically 2–25 W/m²·K for free convection in air). It is classified into natural convection, driven by buoyancy from density differences, and forced convection, induced by external means like fans or pumps. This mode is prevalent in liquids and gases, influencing phenomena from weather patterns to heat-exchanger designs.
Radiation is the emission and absorption of electromagnetic waves from matter at any temperature above absolute zero, independent of a medium and capable of occurring in a vacuum. It is governed by the Stefan-Boltzmann law, which states that the emissive power of a blackbody is \sigma T^4, where \sigma = 5.67 \times 10^{-8} W/m²·K⁴ is the Stefan-Boltzmann constant; the heat transfer between surfaces depends on emissivity and the fourth-power temperature difference. All bodies emit radiation based on their surface temperature and emissivity, with hotter objects radiating more intensely across the spectrum. Unlike conduction and convection, radiation does not require contact or an intervening fluid, making it essential for high-temperature applications and interstellar heat transfer. The study of heat transfer integrates these mechanisms through energy balance equations derived from conservation of energy, applied to control volumes for steady or transient conditions, forming the basis for solving complex problems in physics and engineering. The scientific study of heat transfer emerged in the late 18th and early 19th centuries. Key milestones include Joseph Fourier's 1822 publication of the analytical theory of heat conduction, Isaac Newton's 1701 law of cooling for convection, and the 1879–1884 derivation of the Stefan-Boltzmann law for blackbody radiation by Josef Stefan and Ludwig Boltzmann.

Introduction

Overview and Importance

Heat transfer physics examines the movement of thermal energy from regions of higher to lower temperature, driven by temperature differences between bodies or substances. This process is distinct from mass transfer, which occurs due to concentration gradients, and momentum transfer, which results from velocity differences. The field classifies heat transfer into three primary modes: conduction, involving direct transfer through molecular collisions primarily in solids; convection, which relies on the bulk motion of fluids; and radiation, the propagation of energy via electromagnetic waves through vacuum or transparent media. Heat transfer principles are essential in thermodynamics for analyzing energy conversion processes, in materials science for engineering thermal conductivity and insulation, and in climate modeling for simulating atmospheric circulation and energy balance. In engineering, they enable designs like heat exchangers for efficient thermal management in power plants and electronics cooling systems to maintain device reliability under high loads. Everyday applications illustrate these concepts, such as conduction during cooking on a hot pan, convection in weather-driven air currents, and radiation in solar warming of Earth's surface.

Historical Context

The understanding of heat transfer began in the 18th century with the caloric theory, which posited heat as an invisible, weightless fluid called caloric that flowed from hotter to cooler bodies to equalize temperatures. This theory, advanced by Antoine Lavoisier and others, explained phenomena like thermal expansion and melting but failed to account for heat generation from mechanical work. By the early 19th century, experiments challenged caloric theory; Benjamin Thompson (Count Rumford) in 1798 observed seemingly unlimited heat from friction in cannon boring, suggesting heat as a form of motion rather than a conserved substance. The caloric theory was definitively replaced by the kinetic theory of heat in the mid-19th century, viewing heat as the random motion of microscopic particles. James Prescott Joule's 1847 experiments quantified the mechanical equivalent of heat, establishing its convertibility with work and laying the groundwork for the first law of thermodynamics. Joseph Fourier advanced conduction specifically in his 1822 monograph Théorie analytique de la chaleur, introducing the heat equation and trigonometric series expansions to describe steady and transient heat flow in solids, independent of the caloric fluid model of heat. Building on this, James Clerk Maxwell in the 1860s formalized the kinetic theory of gases, deriving transport properties like thermal conductivity from molecular collisions in his 1867 paper and 1871 book Theory of Heat. Ludwig Boltzmann extended these ideas in the 1870s through statistical mechanics, linking macroscopic thermodynamics to probabilistic molecular behaviors via the Boltzmann transport equation (1872), which underpins heat transfer in dilute gases. The 20th century integrated quantum mechanics into heat transfer, particularly for radiation and solids. Max Planck's 1900 derivation of the blackbody radiation law introduced quantized energy packets (quanta), resolving the ultraviolet catastrophe and founding quantum theory, with profound implications for radiative heat exchange. In solid-state physics, the 1920s saw quantum descriptions of lattice vibrations as phonons, building on Peter Debye's 1912 continuum model for specific heat and Max Born's 1912 discrete lattice dynamics, enabling microscopic understanding of heat conduction in insulators.
A key milestone in convection was Lord Rayleigh's 1916 analysis of buoyancy-driven instability in heated fluid layers, deriving the critical Rayleigh number for the onset of cellular motion in his seminal paper. Post-1950s, electronic computers enabled numerical solutions to complex heat transfer problems, with early numerical methods at institutions like Berkeley in the 1950s–1960s transforming analysis from analytical approximations to simulations of coupled conduction, convection, and radiation.

Fundamental Concepts

Heat, Temperature, and Energy Transfer

In thermodynamics, temperature serves as a macroscopic measure of the average kinetic energy associated with the random motion of molecules in a system. For an ideal gas, this corresponds directly to the average translational kinetic energy of the particles, where higher temperatures indicate greater molecular speeds and thus more vigorous collisions. In solids, temperature reflects the average energy of vibrational modes, encompassing both kinetic energy from atomic oscillations and potential energy stored in the intermolecular bonds or lattice deformations. This statistical interpretation arises from the kinetic theory, which equates thermal agitation to the ensemble average of microscopic energies, providing a bridge between macroscopic observables and molecular dynamics. Heat represents the transfer of thermal energy between systems or regions arising solely from a temperature difference, or thermal gradient, without accompanying net work or mass flow. Unlike internal energy, which is the total microscopic energy content of a system—including kinetic, potential, and other forms stored within its constituents—thermal energy specifically denotes the portion attributable to random thermal motion. Heat, the transfer of this thermal energy, is path-dependent. The zeroth law of thermodynamics formalizes this by stating that if two systems are each in thermal equilibrium with a third system, they are in thermal equilibrium with each other, enabling the consistent definition and measurement of temperature as the property governing such equilibrium. This law underpins the concept of thermal equilibrium, where no net heat flows between systems at the same temperature. Energy transfer in thermodynamic systems occurs through three primary modes: heat, work, and mass transfer. Heat is uniquely driven by thermal gradients, resulting in the spontaneous flow of energy from higher- to lower-temperature regions until equilibrium is reached.
In contrast, work involves an organized, macroscopic energy exchange, such as through mechanical forces or pressure-volume changes, while mass transfer accompanies the movement of matter carrying its associated energy, as in open systems where mass is exchanged. These distinctions ensure that heat is identified only when the transfer is indiscernible from random molecular interactions induced by temperature disparities, aligning with the first law of thermodynamics, which conserves total energy across all modes.

Governing Laws and Equations

The macroscopic description of heat transfer is governed by empirical laws that relate heat flux to temperature differences or gradients, supplemented by the heat equation for time-dependent behavior. These laws provide the foundational constitutive relations for conduction, convection, and radiation, enabling the formulation of boundary value problems in engineering and physics applications. Fourier's law of heat conduction states that the heat flux \mathbf{q} through a material is proportional to the negative gradient of temperature \nabla T, expressed as \mathbf{q} = -k \nabla T, where k is the thermal conductivity, a material property with units of W/(m·K). This law, derived empirically from observations of steady-state heat flow in solids, assumes local thermodynamic equilibrium and isotropic material behavior, with heat flowing from higher- to lower-temperature regions. It was first systematically formulated by Joseph Fourier in his 1822 treatise Théorie analytique de la chaleur, based on experiments and mathematical analysis of heat propagation in homogeneous media. For convective heat transfer, Newton's law of cooling describes the heat flux q from a surface at temperature T_s to a surrounding fluid at T_\infty as q = h (T_s - T_\infty), where h is the convective heat transfer coefficient (units W/(m²·K)), which encapsulates fluid motion and property effects. This empirical relation, originating from Isaac Newton's 1701 observations of cooling rates in air, applies to both forced and natural convection and is valid under the assumption of a thin boundary layer where the bulk fluid temperature remains constant. Thermal radiation between surfaces is governed by the Stefan-Boltzmann law, which for a gray body states that the net heat flux q is q = \epsilon \sigma (T^4 - T_{\text{sur}}^4), where \epsilon is the emissivity (0 ≤ \epsilon ≤ 1), \sigma = 5.67 \times 10^{-8} W/(m²·K⁴) is the Stefan-Boltzmann constant, and T_{\text{sur}} is the surroundings temperature.
This law, empirically established by Josef Stefan in 1879 through measurements of radiant heat from platinum filaments and theoretically derived by Ludwig Boltzmann in 1884 using thermodynamic principles, quantifies emission integrated over all wavelengths and assumes diffuse emission. The heat equation unifies these mechanisms into a partial differential equation for temperature evolution: \frac{\partial T}{\partial t} = \alpha \nabla^2 T, where \alpha = k/(\rho c_p) is the thermal diffusivity (units m²/s), with \rho as density and c_p as specific heat capacity. Derived by Fourier by combining conservation of energy with his conduction law, this parabolic PDE describes diffusive heat spread in the absence of sources or sinks; for conduction-dominated cases, it applies directly, while convection and radiation introduce additional terms via boundary conditions. Solutions require specifying initial conditions T(\mathbf{x}, 0) and boundary conditions, such as Dirichlet (fixed T), Neumann (fixed flux via Fourier's law), or Robin (convective, incorporating Newton's law) types, to model realistic interfaces.
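As a sketch of how this parabolic PDE is solved in practice, the explicit finite-difference scheme below advances the 1-D heat equation with Dirichlet (fixed-temperature) ends; the copper property values and grid sizes are illustrative assumptions, not data from the text.

```python
def step_heat_1d(T, alpha, dx, dt):
    """One explicit Euler step of dT/dt = alpha * d2T/dx2, ends held fixed (Dirichlet)."""
    Tn = T[:]
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2.0*T[i] + T[i-1])
    return Tn

# Copper-like diffusivity alpha = k/(rho*cp) ~ 401/(8960*385) m^2/s (assumed values)
alpha = 401.0 / (8960.0 * 385.0)
dx, dt = 1e-3, 0.002                 # stability requires alpha*dt/dx^2 <= 0.5
T = [100.0] + [0.0] * 49             # hot left end, cold interior and right end
for _ in range(1000):                # march 2 s of simulated time
    T = step_heat_1d(T, alpha, dx, dt)
```

The stability constraint alpha·dt/dx² ≤ 1/2 is the standard limit for this explicit scheme; here the value is about 0.23.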

Mechanisms of Heat Transfer

Conduction in Solids

Heat conduction in solids occurs through the transfer of thermal energy via collisions and vibrations of atoms and molecules within the material lattice, without any net displacement or bulk motion of the substance. This process arises from the random motion of particles, where higher-energy atoms or molecules interact with adjacent lower-energy ones, progressively distributing heat from hotter to cooler regions. Unlike other mechanisms, conduction relies solely on direct particle interactions within a stationary medium, making it the primary mode of heat transfer in rigid solids such as metals, ceramics, and polymers. The quantitative description of conduction in solids is governed by Fourier's law, which posits that the heat flux vector \mathbf{q} is proportional to the negative gradient of temperature, expressed as \mathbf{q} = -k \nabla T, where k is the thermal conductivity of the material. Thermal conductivity k, with units of W/(m·K), quantifies a solid's ability to conduct heat and depends on intrinsic properties like atomic structure, density, and temperature; for instance, it typically decreases with rising temperature in metals due to increased electron-phonon scattering. In one-dimensional steady-state scenarios, this simplifies to q_x = -k \, dT/dx, directly linking the heat flow rate per unit area to the temperature difference across the material. This law, originally formulated by Joseph Fourier in 1822, forms the foundation for analyzing conduction in engineering applications like insulation design and performance analysis. In dielectrics and insulators, conduction is predominantly mediated by phonons—quantized lattice vibrations—resulting in relatively low thermal conductivities, such as approximately 1 W/(m·K) for typical glass at room temperature, which enhances their utility in thermal barriers. Conversely, metals exhibit high thermal conductivities, often exceeding 100 W/(m·K) at 300 K, primarily due to the dominant role of free electrons as energy carriers, though phonons contribute secondarily.
These distinctions arise from the electronic band structure: insulators lack free charge carriers, relying on lattice dynamics, while metals' delocalized electrons enable efficient energy transport. Phonons and electrons serve as the key microscopic carriers, with their behaviors explored in greater detail under dedicated sections. Conduction in solids can manifest as steady-state, where temperature profiles remain constant over time after initial equilibration, or transient, involving time-dependent temperature evolution. Steady-state conduction is exemplified by uniform heat flow through a long metallic rod with fixed end temperatures, where the linear temperature gradient yields a constant flux calculable via Fourier's law. Transient conduction, in contrast, occurs during heating or cooling phases, such as the initial diffusion of warmth through a building exposed to outdoor temperature changes, requiring solutions to the heat equation \frac{\partial T}{\partial t} = \alpha \frac{\partial^2 T}{\partial x^2} (with \alpha = k/(\rho c_p) the thermal diffusivity) to predict spatiotemporal profiles. These regimes are critical for applications like electronic cooling, where transient effects dominate short timescales, versus pipeline insulation, where steady-state efficiency is emphasized.
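The steady-state case reduces to a one-line flux calculation from Fourier's law; the sketch below compares a metal and an insulator across the same layer, using handbook-style conductivities (401 W/m·K for copper, ~1 W/m·K for glass) as assumed inputs.

```python
def conduction_flux(k, T_hot, T_cold, thickness):
    """Steady 1-D Fourier flux (W/m^2) through a plane layer: q = k * (T_hot - T_cold) / L."""
    return k * (T_hot - T_cold) / thickness

# Same 2 cm layer and 50 K drop; only the material differs (assumed property values)
q_copper = conduction_flux(401.0, 350.0, 300.0, 0.02)   # ~1.0e6 W/m^2
q_glass  = conduction_flux(1.0,   350.0, 300.0, 0.02)   # ~2.5e3 W/m^2
```

The ~400x ratio between the two fluxes mirrors the conductivity ratio, since geometry and temperature drop are identical.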

Convection in Fluids

Convection in fluids involves the transport of thermal energy through the macroscopic motion of fluid elements, where the advection of energy by bulk flow is coupled with molecular conduction. This process occurs in liquids and gases, enhancing heat transfer rates compared to pure conduction due to the mixing induced by fluid velocities. The convective heat flux can be expressed as q = h (T_s - T_\infty), where h is the convective heat transfer coefficient, T_s is the surface temperature, and T_\infty is the free-stream temperature, though this form is an empirical extension of conduction principles. Natural convection arises when fluid motion is driven by buoyancy forces resulting from density variations caused by temperature gradients within the fluid. Hotter fluid parcels become less dense and rise, while cooler, denser fluid descends, establishing circulatory patterns that transport heat. The Boussinesq approximation simplifies the modeling of these flows by assuming fluid properties, including density, remain constant except in the buoyancy term of the momentum equation, where density variations are linearly related to temperature changes via \rho = \rho_0 [1 - \beta (T - T_0)], with \beta as the thermal expansion coefficient; this approximation is valid for small temperature differences and moderate Rayleigh numbers up to approximately 10^9. Examples include heat transfer from a hot vertical plate to surrounding air, where the flow develops a thermal boundary layer adjacent to the surface. Forced convection, in contrast, is induced by external mechanical forces such as pumps, fans, or wind, which impose a bulk velocity on the fluid independent of temperature-induced buoyancy. This leads to the development of velocity and thermal boundary layers near solid surfaces, where heat transfer is influenced by the flow regime—laminar or turbulent—determined by the balance of inertial and viscous forces. In external flows, like flow over an airfoil, the boundary layer grows along the surface, transitioning to turbulence at higher velocities; internal flows, such as in pipes, exhibit fully developed profiles after an entrance length.
The process significantly boosts heat transfer in engineering applications, including cooling of electronic components and heat exchangers. Key dimensionless numbers characterize convective flows in fluids. The Reynolds number, \mathrm{Re} = \rho u L / \mu, where \rho is density, u is a characteristic velocity, L is a length scale, and \mu is the dynamic viscosity, quantifies the ratio of inertial to viscous forces, with \mathrm{Re} < 5 \times 10^5 typically indicating laminar flow and higher values suggesting turbulence in external boundary layers. The Prandtl number, \mathrm{Pr} = \nu / \alpha = \mu c_p / k, with kinematic viscosity \nu, thermal diffusivity \alpha, specific heat c_p, and thermal conductivity k, compares momentum diffusivity to thermal diffusivity, influencing the relative thickness of the velocity and thermal boundary layers; for air, \mathrm{Pr} \approx 0.7, while for water, \mathrm{Pr} \approx 7. For natural convection, the Grashof number, \mathrm{Gr} = g \beta (T_s - T_\infty) L^3 / \nu^2, where g is gravitational acceleration, measures the ratio of buoyancy to viscous forces, with high \mathrm{Gr} promoting vigorous convection; the Rayleigh number, \mathrm{Ra} = \mathrm{Gr} \cdot \mathrm{Pr}, further combines these effects to predict the onset of instability in enclosed flows. These parameters enable scaling and prediction of heat transfer rates across similar systems.
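These definitions translate directly into code; a minimal sketch, assuming textbook-style air properties near 300 K (the numbers are illustrative, not from the text):

```python
def reynolds(rho, u, L, mu):
    """Re = rho*u*L/mu: inertial vs. viscous forces."""
    return rho * u * L / mu

def prandtl(mu, cp, k):
    """Pr = mu*cp/k: momentum vs. thermal diffusivity."""
    return mu * cp / k

def grashof(g, beta, dT, L, nu):
    """Gr = g*beta*dT*L^3/nu^2: buoyancy vs. viscous forces."""
    return g * beta * dT * L**3 / nu**2

# Assumed air properties at ~300 K: density, dynamic viscosity, cp, conductivity
rho, mu, cp, k = 1.16, 1.85e-5, 1007.0, 0.026
Re = reynolds(rho, u=2.0, L=0.1, mu=mu)                      # 2 m/s over a 10 cm plate
Pr = prandtl(mu, cp, k)                                      # ~0.7 for air
Ra = grashof(9.81, beta=1/300.0, dT=20.0, L=0.1, nu=mu/rho) * Pr   # Ra = Gr * Pr
```

For an ideal gas, beta = 1/T, which is why 1/300 K^-1 appears in the Grashof evaluation.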

Thermal Radiation

Thermal radiation is the emission of electromagnetic waves from a material due to its thermal motion, enabling heat transfer without physical contact or a propagating medium. This mechanism dominates in high-temperature scenarios and vacuum conditions, where conduction and convection are absent. The intensity and spectrum of thermal radiation depend solely on the body's temperature, as established by foundational principles in radiative physics. A blackbody represents the ideal case of thermal radiation, defined as a perfect absorber that emits the maximum possible radiation at a given temperature. The spectral distribution of blackbody radiation is described by Planck's law, which quantifies the radiance B(\nu, T) as a function of frequency \nu and temperature T: B(\nu, T) = \frac{2 h \nu^3}{c^2} \frac{1}{e^{h\nu / kT} - 1}, where h is Planck's constant, c is the speed of light, and k is Boltzmann's constant. This law resolved the ultraviolet catastrophe of classical theory by introducing energy quantization. From Planck's law, Wien's displacement law follows, stating that the wavelength \lambda_{\max} at which the spectral radiance peaks is inversely proportional to temperature: \lambda_{\max} T = b, with Wien's constant b \approx 2898 \, \mu\mathrm{m \cdot K}. This relation explains why hotter objects emit shorter-wavelength radiation, shifting from infrared to visible light. Real materials approximate blackbodies through the gray body model, where the emissivity \epsilon (a measure of emission efficiency, 0 < \epsilon \leq 1) is assumed constant across wavelengths. Kirchhoff's law of thermal radiation asserts that, for a body in thermal equilibrium, the emissivity equals the absorptivity \alpha at each wavelength: \epsilon(\lambda) = \alpha(\lambda).
This equivalence ensures detailed balance in radiation exchange, allowing gray body approximations to simplify calculations by treating surfaces as diffuse emitters and absorbers with uniform \epsilon. In enclosures, such as cavities or bounded systems, radiative heat transfer between surfaces is analyzed using view factors and the radiosity method. The view factor F_{ij} quantifies the fraction of radiation leaving surface i that directly intercepts surface j, accounting for geometry and orientation. Radiosity J_i represents the total outgoing radiation from surface i, comprising emitted and reflected components: J_i = \epsilon_i \sigma T_i^4 + \rho_i \sum_j J_j F_{ij}, where \rho_i = 1 - \alpha_i is the reflectivity and \sigma is the Stefan-Boltzmann constant. The net heat transfer rate between surfaces is then q_i = A_i (J_i - \sum_j J_j F_{ij}), enabling efficient computation for complex configurations. Thermal radiation is critical in vacuum environments, such as spacecraft thermal control, where it serves as the primary mode for dissipating excess heat to deep space. In high-temperature industrial processes like furnaces, radiation becomes the dominant mode of heat transfer at temperatures above 1000 K, influencing design for uniform heating and energy efficiency.
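Planck's law and Wien's displacement law are easy to evaluate numerically; the solar and terrestrial temperatures below are illustrative assumptions.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck constant, light speed, Boltzmann constant

def planck_radiance(nu, T):
    """Spectral radiance B(nu, T) of a blackbody, W m^-2 sr^-1 Hz^-1."""
    return 2.0 * H * nu**3 / C**2 / (math.exp(H * nu / (KB * T)) - 1.0)

def wien_peak_wavelength(T):
    """Wavelength (m) of peak spectral radiance: lambda_max = b / T, b ~ 2898 um*K."""
    return 2.898e-3 / T

# The Sun (~5800 K) peaks near the visible; Earth (~300 K) peaks in the infrared
lam_sun = wien_peak_wavelength(5800.0)    # ~0.5 micrometers
lam_earth = wien_peak_wavelength(300.0)   # ~9.7 micrometers
```

The two peak wavelengths differ by the inverse temperature ratio, illustrating the shift from visible to infrared emission described above.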

Microscopic Carriers

Phonons

Phonons are quasiparticles that represent the quantized collective vibrations of atoms in a crystal lattice, serving as the primary mediators of heat transfer in insulating solids where electronic contributions are minimal. These vibrations arise from the harmonic interactions between atoms, but deviations from perfect harmonicity introduce anharmonicity, enabling the phonon scattering essential for finite thermal conductivity. In crystalline materials, phonon modes are dispersed into branches: acoustic branches, where adjacent atoms move in phase akin to sound waves, and optical branches, involving out-of-phase oscillations typically prominent in compounds with multiple atoms per unit cell. The phonon contribution to the specific heat capacity of solids is captured by the Debye model, which approximates the lattice vibrations as a continuum of modes with linear dispersion up to a Debye frequency, treating phonons as a bosonic gas. Developed by Peter Debye in 1912, this model resolves the limitations of earlier approaches by accounting for a density of states proportional to ω², yielding a low-temperature heat capacity C ∝ T³ due to the excitation of only long-wavelength acoustic phonons. At higher temperatures, the model approaches the classical Dulong-Petit limit of 3R per mole of atoms, where R is the gas constant, reflecting full excitation of all modes. This T³ dependence has been experimentally verified across numerous insulators, underscoring phonons' role in thermal equilibrium. In dielectrics, thermal conductivity κ arises from the diffusive transport of phonon energy, expressed via kinetic theory as κ = (1/3) C v ℓ, with C the volumetric phonon specific heat, v the average phonon group velocity (on the order of the sound speed, ~10³ m/s), and ℓ the mean free path determined by scattering.
Anharmonicity in the interatomic potential introduces nonlinear terms that enable phonon-phonon interactions, primarily through normal (momentum-conserving) and Umklapp (momentum-reversing) processes; Umklapp scattering, first elucidated by Peierls, dominates at higher temperatures by randomizing phonon momentum and limiting ℓ to values typically 10–100 nm, resulting in a peak in κ followed by an inverse temperature dependence. In pure dielectrics like diamond or silicon, this mechanism yields exceptionally high room-temperature κ (>1000 W/m·K for diamond), highlighting phonons' efficiency in non-metallic heat conduction, while Umklapp scattering ensures realistic bounds.
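The kinetic-theory expression κ = (1/3) C v ℓ lends itself to a quick order-of-magnitude estimate; the silicon-like property values below are assumptions chosen only to illustrate the scaling, not measured data from the text.

```python
def kinetic_conductivity(c_vol, v, mfp):
    """Kinetic-theory estimate kappa = (1/3) * C * v * l, in W/(m*K)."""
    return c_vol * v * mfp / 3.0

# Assumed silicon-like values near room temperature:
# volumetric heat capacity ~1.66e6 J/(m^3*K), group velocity ~6400 m/s,
# effective phonon mean free path ~40 nm
kappa_si = kinetic_conductivity(1.66e6, 6400.0, 40e-9)   # ~140 W/(m*K)
```

The result lands near silicon's measured room-temperature conductivity (~150 W/m·K), showing how a ~10–100 nm mean free path sets the scale of κ in crystalline dielectrics.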

Electrons

In metallic systems, free electrons serve as the primary carriers of heat through conduction, distinct from the bosonic phonon-mediated transport dominant in insulators. This fermionic nature arises from the Pauli exclusion principle, enabling electrons to efficiently transfer energy via their drift under a temperature gradient. At room temperature, electronic contributions typically account for over 90% of the total thermal conductivity in pure metals like copper and silver, far exceeding lattice contributions. The classical description of electronic heat transport originates from the Drude model, proposed in 1900, which treats conduction electrons as a classical ideal gas of non-interacting particles moving freely between ionic lattice sites and scattering isotropically. In this framework, the thermal conductivity k emerges from the kinetic energy flux carried by electrons, yielding k = \frac{1}{3} C_e v_F l, where C_e is the electronic specific heat per unit volume, v_F is the Fermi velocity, and l is the mean free path determined by scattering events. The model further links thermal and electrical conductivities through the Wiedemann-Franz law, empirically observed in 1853 and theoretically derived by Drude, stating \frac{k}{\sigma} = L T, with L = \frac{\pi^2}{3} \left( \frac{k_B}{e} \right)^2 as the Lorenz number (k_B is Boltzmann's constant and e is the electron charge). This proportionality holds because both heat and charge currents are driven by the same electron population under similar relaxation times. Quantum refinements, introduced by Sommerfeld in 1928, incorporate Fermi-Dirac statistics to describe the degenerate electron gas in metals, where the Fermi energy E_F (typically 2–10 eV) greatly exceeds the thermal energy k_B T at room temperature, rendering the electron distribution nearly step-like at the Fermi level.
This degeneracy suppresses classical equipartition, resulting in an electronic specific heat C_e = \gamma T, with the linear coefficient \gamma = \frac{\pi^2}{3} N(E_F) k_B^2 proportional to the density of states N(E_F) at the Fermi energy. Consequently, only electrons within \sim k_B T of E_F participate in thermal excitation, limiting C_e to about 1% of the classical Dulong-Petit value for lattice heat capacity at 300 K. Electronic thermal transport is limited by scattering mechanisms, primarily from impurities (static defects) and phonons (dynamic lattice vibrations), which determine the resistivity \rho = 1/\sigma. Matthiessen's rule, formulated in the 1860s, approximates the total resistivity as the sum of a temperature-independent impurity contribution \rho_i and a temperature-dependent phonon contribution \rho_{ph}, so \rho = \rho_i + \rho_{ph}(T), valid when the scattering rates are independent and the relaxation times add inversely. Impurity scattering dominates at low temperatures (T < 10 K), while phonon scattering, scaling as T^5 at low T and linearly at high T, prevails near room temperature, reducing the mean free path to \sim 10–100 nm in typical metals. This electronic dominance persists in most metals at room temperature, enabling high thermal conductivities like 400 W/m·K in copper, but exceptions occur in superconductors below their critical temperature T_c, where electron pairing into Cooper pairs eliminates resistive scattering, drastically altering heat transport to quasiparticle-mediated conduction with near-zero electrical resistivity. No bulk superconductor exists at room temperature under ambient pressure, confining such exceptions to cryogenic conditions.
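The Wiedemann-Franz relation can be checked numerically; a minimal sketch, assuming a handbook-style electrical conductivity for copper (the value is an assumption, not from the text):

```python
import math

def lorenz_number():
    """Sommerfeld value L = (pi^2/3) * (k_B/e)^2 ~ 2.44e-8 W*Ohm/K^2."""
    kB, e = 1.381e-23, 1.602e-19
    return (math.pi**2 / 3.0) * (kB / e)**2

def wf_thermal_conductivity(sigma, T):
    """Electronic thermal conductivity from Wiedemann-Franz: k = L * sigma * T."""
    return lorenz_number() * sigma * T

# Copper at 300 K with assumed electrical conductivity ~5.96e7 S/m
k_cu = wf_thermal_conductivity(5.96e7, 300.0)
```

The estimate comes out near 440 W/m·K, within ~10% of copper's measured ~400 W/m·K, consistent with the law holding when the same scattering governs both currents.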

Fluid Particles

In convective heat transfer, fluid particles—ranging from individual molecules to macroscopic parcels—serve as carriers of thermal energy through bulk motion and interactions within the fluid. In gases, the kinetic theory describes heat transfer primarily through molecular collisions, where faster-moving molecules from hotter regions collide with slower ones from cooler regions, redistributing kinetic energy. This process is formalized in the Chapman-Enskog theory, which derives transport coefficients like the thermal conductivity k and viscosity \mu from the Boltzmann equation by expanding the distribution function in powers of the Knudsen number. The first-order approximation yields expressions such as k = \frac{15}{4} \frac{k_B}{m} \mu for a monatomic gas, linking thermal conductivity directly to viscosity via molecular properties like the mass m and Boltzmann constant k_B, highlighting the collisional nature of energy transfer over the mean free path between collisions. In liquids, the mechanism shifts to predominantly short-range intermolecular interactions, such as van der Waals forces and hydrogen bonding, due to the close packing of molecules (intermolecular distances 0.3–0.5 nm), contrasting with the longer-range, collision-dominated transfers in gases, where mean free paths are much larger (tens of nm at STP). These short-range forces enable rapid energy exchange through vibrational and rotational modes, resulting in thermal conductivities typically 10–100 times higher than in gases for similar substances, though still lower than in solids. Molecular dynamics simulations confirm that in non-polar liquids like argon, about 80–90% of heat conduction arises from potential energy transfer via these interactions, rather than purely kinetic contributions.
Flow regime significantly influences the role of fluid particles: in laminar convection, energy is carried by orderly molecular or parcel motion along streamlines, limited by viscous diffusion, whereas in turbulent flows, chaotic eddies act as effective "particles" that mix hot and cold fluid parcels across scales from millimeters to the flow domain size, dramatically enhancing transfer rates by factors of 10–100 over laminar cases. These eddies, described in models like Prandtl's mixing-length theory, promote intermittent bursts of entrainment and ejection in the near-wall region, increasing the effective diffusivity of heat. Fluid particles play a critical role in thermal boundary layers, thin regions (~0.1–1 mm thick) adjacent to surfaces where velocity and temperature gradients steepen, driving convective transfer. Here, particles advect heat from the surface into the bulk fluid, with the enhancement quantified by the Nusselt number \mathrm{Nu} = hL/k, where h is the convective heat transfer coefficient, L a characteristic length, and k the fluid's thermal conductivity; \mathrm{Nu} > 1 indicates that convection dominates conduction, as originally proposed in Nusselt's similitude analysis for pipe flows. In boundary layers, \mathrm{Nu} scales with the Reynolds number \mathrm{Re} and Prandtl number \mathrm{Pr} (e.g., \mathrm{Nu} \approx 0.023 \, \mathrm{Re}^{0.8} \mathrm{Pr}^{0.4} for turbulent pipe flow), reflecting how particle motion amplifies transfer relative to pure conduction across the layer thickness.
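The quoted turbulent correlation (the Dittus-Boelter form) can be evaluated directly; the water properties and pipe diameter below are illustrative assumptions, not values from the text.

```python
def dittus_boelter_nu(Re, Pr, heating=True):
    """Nu = 0.023 * Re^0.8 * Pr^n for turbulent pipe flow (n = 0.4 heating, 0.3 cooling)."""
    n = 0.4 if heating else 0.3
    return 0.023 * Re**0.8 * Pr**n

def h_from_nu(Nu, k, D):
    """Convective coefficient h = Nu * k / D, in W/(m^2*K)."""
    return Nu * k / D

# Assumed case: water in a 2.5 cm pipe, Re = 1e5, Pr = 7, k = 0.6 W/(m*K)
Nu = dittus_boelter_nu(1e5, 7.0)
h = h_from_nu(Nu, 0.6, 0.025)
```

With Nu on the order of 500, convection across the pipe is hundreds of times more effective than conduction alone over the same diameter.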

Photons

Photons serve as the quanta of the electromagnetic field responsible for radiative heat transfer, carrying energy across vacuums or media where other carriers like phonons or electrons cannot operate. Each photon possesses an energy given by E = h\nu, where h is Planck's constant and \nu is the frequency of the electromagnetic wave. This quantization resolves the classical ultraviolet catastrophe in blackbody radiation, enabling a discrete description of energy transport. In thermal equilibrium, the spectrum of blackbody radiation emerges from the statistical distribution of photon occupation numbers, following Bose-Einstein statistics for indistinguishable bosons with zero chemical potential. The average occupation number for photons of frequency \nu is \langle n \rangle = \frac{1}{e^{h\nu / kT} - 1}, where k is Boltzmann's constant and T is temperature; this yields the spectral energy density u(\nu, T) = \frac{8\pi h \nu^3}{c^3} \langle n \rangle, with c the speed of light. Photons thus mediate heat transfer by populating modes according to this distribution, with higher temperatures increasing both the number and energy of emitted photons. The processes of absorption and emission by matter are governed by the Einstein coefficients, which quantify transition probabilities between atomic or molecular energy levels. The coefficient B_{12} describes the rate of absorption (photon uptake raising an electron from a lower to a higher state), while B_{21} and A_{21} govern stimulated and spontaneous emission (photon release), respectively; for non-degenerate levels, B_{12} = B_{21}. In thermal equilibrium, detailed balance ensures that the rate of absorptions equals the combined rate of stimulated and spontaneous emissions, leading to the relation \frac{A_{21}}{B_{21}} = \frac{8\pi h \nu^3}{c^3}, so that the ratio of spontaneous to stimulated emission rates is e^{h\nu / kT} - 1, which reproduces the Planck spectrum. Within media, photon transport depends on interactions like scattering and absorption, distinguishing transparent (optically thin, low optical depth) from opaque (optically thick, high optical depth) regimes. In transparent media, photons propagate with minimal interruption, enabling direct radiative exchange across distances. In opaque media, frequent scattering and absorption randomize photon directions, approximating a diffusion process akin to conduction.
The Rosseland approximation models this diffusive limit by defining a mean opacity κR (a frequency-averaged opacity weighted by the blackbody intensity) and a radiative conductivity kr = 16σT³/(3κRρ), where σ is the Stefan–Boltzmann constant and ρ is the density; the radiative flux then becomes qr = −kr∇T. This holds for optical depths τ ≫ 1, where scattering contributes to the effective opacity without altering the diffusion form. Photons dominate heat transfer at high temperatures or in vacuums, where their propagation is unimpeded by matter. A prominent example is the cosmic microwave background (CMB), relic photons from the early universe at T ≈ 2.725 K, carrying thermal energy isotropically across cosmic voids and illustrating radiative equilibrium on vast scales.
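The Rosseland diffusion relation above can be evaluated directly; this is a minimal sketch assuming κR is the Rosseland mean opacity per unit mass (m²/kg) and a one-dimensional temperature gradient, with the numerical values chosen purely for illustration:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_conductivity(T, kappa_R, rho):
    """Rosseland radiative conductivity k_r = 16 sigma T^3 / (3 kappa_R rho), W/(m K).
    kappa_R: mean opacity per unit mass (m^2/kg); rho: density (kg/m^3)."""
    return 16.0 * SIGMA * T**3 / (3.0 * kappa_R * rho)

def radiative_flux_1d(T, kappa_R, rho, dT_dx):
    """Diffusive radiative flux q_r = -k_r dT/dx, valid only for optical depth tau >> 1."""
    return -radiative_conductivity(T, kappa_R, rho) * dT_dx

# Illustrative (assumed) values: T = 1000 K, kappa_R = 1 m^2/kg, rho = 1 kg/m^3
k_r = radiative_conductivity(1000.0, 1.0, 1.0)   # ~302 W/(m K)
```

The T³ dependence shows why photons overtake conduction and convection at high temperatures.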

Characteristic Scales

Length Scales

In heat transfer physics, length scales play a crucial role in determining the dominant mechanisms and applicable approximations, spanning from atomic dimensions to macroscopic system sizes. At the microscopic level, the atomic spacing represents the fundamental interatomic distance in materials, typically 1–5 Å (0.1–0.5 nm), which sets the baseline for the lattice vibrations and molecular interactions underlying conduction and radiation. The mean free path, denoted l or λ, is another key microscopic scale, defined as the average distance traveled by heat-carrying particles (such as molecules, phonons, or electrons) between collisions. In gases at standard conditions (e.g., air at 300 K and 1 atm), this length is approximately 65 nm, though it can range from 10 nm to 100 nm depending on pressure, temperature, and gas type, influencing the transition between collision-dominated and free-molecular regimes. In solids, the mean free path for phonons, the primary heat carriers in insulators, is typically 10–100 nm at room temperature in crystalline solids due to scattering by lattice defects and phonon–phonon interactions, though it is smaller (∼1 nm) in amorphous materials and can reach hundreds of nanometres in high-purity crystals at low temperatures. Moving to mesoscopic scales, the boundary layer thickness δ emerges in convective heat transfer as the region near a surface where velocity and temperature gradients are significant, typically scaling as δ ∼ L/√Re for laminar flow over a surface of characteristic length L at Reynolds number Re.
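The gas-phase mean free path quoted above follows from kinetic theory as λ = kT/(√2 π d² p); the sketch below assumes an effective molecular diameter d ≈ 3.7 × 10⁻¹⁰ m for air (an illustrative value, not from the original text):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T, p, d):
    """Kinetic-theory mean free path lambda = kT / (sqrt(2) pi d^2 p), in metres.
    T: temperature (K), p: pressure (Pa), d: effective molecular diameter (m)."""
    return KB * T / (math.sqrt(2.0) * math.pi * d**2 * p)

# Air near 300 K and 1 atm, with an assumed effective diameter d ~ 3.7e-10 m
lam = mean_free_path(300.0, 101325.0, 3.7e-10)   # ~67 nm, consistent with the ~65 nm figure above
```

Since λ scales inversely with pressure, rarefied gases quickly reach the free-molecular regime where the continuum heat-conduction picture breaks down.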