Adiabatic theorem
The adiabatic theorem is a concept in quantum mechanics. Its original form, due to Max Born and Vladimir Fock (1928), was stated as follows:

A physical system remains in its instantaneous eigenstate if a given perturbation is acting on it slowly enough and if there is a gap between the eigenvalue and the rest of the Hamiltonian's spectrum.[1]
In simpler terms, a quantum mechanical system subjected to gradually changing external conditions adapts its functional form, but when subjected to rapidly varying conditions there is insufficient time for the functional form to adapt, so the spatial probability density remains unchanged.
Adiabatic pendulum
At the 1911 Solvay conference, Einstein gave a lecture on the quantum hypothesis, which states that the energy of atomic oscillators is quantized in integer multiples of $h\nu$. After Einstein's lecture, Hendrik Lorentz commented that, classically, if a simple pendulum is shortened by holding the wire between two fingers and sliding the fingers down, its energy would seemingly change smoothly as the pendulum is shortened. This would show that the quantum hypothesis is invalid for macroscopic systems, and since a macroscopic system can be made microscopic by degrees, it would seem to invalidate the quantum hypothesis altogether. Einstein replied that although both the energy and the frequency would change, their ratio would still be conserved, thus saving the quantum hypothesis.[2]
Before the conference, Einstein had just read a paper by Paul Ehrenfest on the adiabatic hypothesis.[3] We know that he had read it because he mentioned it in a letter to Michele Besso written before the conference.[4][5]
Diabatic vs. adiabatic processes
| Diabatic | Adiabatic |
|---|---|
| Rapidly changing conditions prevent the system from adapting its configuration during the process, hence the spatial probability density remains unchanged. Typically there is no eigenstate of the final Hamiltonian with the same functional form as the initial state. The system ends in a linear combination of states that sum to reproduce the initial probability density. | Gradually changing conditions allow the system to adapt its configuration, hence the probability density is modified by the process. If the system starts in an eigenstate of the initial Hamiltonian, it will end in the corresponding eigenstate of the final Hamiltonian.[6] |
At some initial time $t_0$ a quantum-mechanical system has an energy given by the Hamiltonian $\hat H(t_0)$; the system is in an eigenstate of $\hat H(t_0)$ labelled $\psi(x,t_0)$. Changing conditions modify the Hamiltonian in a continuous manner, resulting in a final Hamiltonian $\hat H(t_1)$ at some later time $t_1$. The system will evolve according to the time-dependent Schrödinger equation, to reach a final state $\psi(x,t_1)$. The adiabatic theorem states that the modification to the system depends critically on the time $\tau = t_1 - t_0$ during which the modification takes place.
For a truly adiabatic process we require $\tau \to \infty$; in this case the final state will be an eigenstate of the final Hamiltonian $\hat H(t_1)$, with a modified configuration:

$$|\psi(x,t_1)|^2 \neq |\psi(x,t_0)|^2.$$
The degree to which a given change approximates an adiabatic process depends on both the energy separation between $\psi(x,t_0)$ and adjacent states, and the ratio of the interval $\tau$ to the characteristic timescale of the evolution of $\psi(x,t_0)$ for a time-independent Hamiltonian, $\tau_{\rm int} = 2\pi\hbar/E_0$, where $E_0$ is the energy of $\psi(x,t_0)$.
Conversely, in the limit $\tau \to 0$ we have infinitely rapid, or diabatic passage; the configuration of the state remains unchanged:

$$|\psi(x,t_1)|^2 = |\psi(x,t_0)|^2.$$
The so-called "gap condition" included in Born and Fock's original definition given above refers to a requirement that the spectrum of $\hat H$ is discrete and nondegenerate, such that there is no ambiguity in the ordering of the states (one can easily establish which eigenstate of $\hat H(t_1)$ corresponds to $\psi(t_0)$). In 1999 J. E. Avron and A. Elgart reformulated the adiabatic theorem to adapt it to situations without a gap.[7]
Comparison with the adiabatic concept in thermodynamics
The term "adiabatic" is traditionally used in thermodynamics to describe processes without the exchange of heat between system and environment (see adiabatic process); more precisely, these processes are usually faster than the timescale of heat exchange. (For example, a pressure wave is adiabatic with respect to a heat wave, which is not adiabatic.) In thermodynamics, "adiabatic" is therefore often used as a synonym for a fast process.
The definition in classical and quantum mechanics[8] is instead closer to the thermodynamic concept of a quasistatic process: a process that is almost always at equilibrium, i.e. slower than the internal energy-exchange timescales. (By this criterion an ordinary atmospheric heat wave is quasi-static, while a pressure wave is not.) In mechanics, "adiabatic" is therefore often used as a synonym for a slow process.
In the quantum context, adiabatic means, for example, that the timescale of electron–photon interactions is much faster (almost instantaneous) compared with the average timescale of electron and photon propagation. The interactions can therefore be modeled as continuous propagation of electrons and photons (i.e. states at equilibrium) punctuated by instantaneous quantum jumps between states.

In this heuristic context, the adiabatic theorem essentially says that quantum jumps are preferably avoided: the system tries to conserve its state and its quantum numbers.[9]

The quantum-mechanical concept of adiabaticity is related to the adiabatic invariant; it is often used in the old quantum theory and has no direct relation to heat exchange.
Example systems
Simple pendulum
[edit]As an example, consider a pendulum oscillating in a vertical plane. If the support is moved, the mode of oscillation of the pendulum will change. If the support is moved sufficiently slowly, the motion of the pendulum relative to the support will remain unchanged. A gradual change in external conditions allows the system to adapt, such that it retains its initial character. The detailed classical example is available in the Adiabatic invariant page and here.[10]
Quantum harmonic oscillator
The classical nature of a pendulum precludes a full description of the effects of the adiabatic theorem. As a further example consider a quantum harmonic oscillator as the spring constant $k$ is increased. Classically this is equivalent to increasing the stiffness of a spring; quantum-mechanically the effect is a narrowing of the potential energy curve in the system Hamiltonian.

If $k$ is increased adiabatically, then the system at time $t$ will be in an instantaneous eigenstate $\psi(t)$ of the current Hamiltonian $\hat H(t)$, corresponding to the initial eigenstate of $\hat H(0)$. For the special case of a system like the quantum harmonic oscillator described by a single quantum number, this means the quantum number will remain unchanged. Figure 1 shows how a harmonic oscillator, initially in its ground state, $n = 0$, remains in the ground state as the potential energy curve is compressed; the functional form of the state adapts to the slowly varying conditions.

For a rapidly increased spring constant, the system undergoes a diabatic process in which the system has no time to adapt its functional form to the changing conditions. While the final state must look identical to the initial state for a process occurring over a vanishing time period, there is no eigenstate of the new Hamiltonian, $\hat H(t_1)$, that resembles the initial state. The final state is composed of a linear superposition of many different eigenstates of $\hat H(t_1)$ which sum to reproduce the form of the initial state.
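The diabatic case can be quantified through the overlap of the old and new ground states. For two harmonic-oscillator ground states (Gaussians) with frequencies $\omega_1$ and $\omega_2$, a standard Gaussian-integral result gives $|\langle 0_{\omega_2}|0_{\omega_1}\rangle|^2 = 2\sqrt{\omega_1\omega_2}/(\omega_1+\omega_2)$. The minimal sketch below evaluates it for a sudden quadrupling of the spring constant (which doubles $\omega$); the specific numbers are illustrative:

```python
import math

def ground_overlap_sq(w1, w2):
    # |<0_{w2}|0_{w1}>|^2 for two harmonic-oscillator ground states
    # (standard Gaussian overlap integral, with m = hbar = 1)
    return 2.0*math.sqrt(w1*w2)/(w1 + w2)

# A sudden quadrupling of the spring constant k doubles omega. The old ground
# state is then no longer an eigenstate; its weight on the new ground state is:
p00 = ground_overlap_sq(1.0, 2.0)
print(p00)   # 2*sqrt(2)/3 ≈ 0.9428; the remaining weight spreads over excited states
```

Because the overlap is strictly less than 1, the suddenly quenched state is necessarily a superposition of several eigenstates of the new Hamiltonian, exactly as described above.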
Avoided curve crossing
For a more widely applicable example, consider a 2-level atom subjected to an external magnetic field.[11] The states, labelled $|1\rangle$ and $|2\rangle$ using bra–ket notation, can be thought of as atomic angular-momentum states, each with a particular geometry. For reasons that will become clear these states will henceforth be referred to as the diabatic states. The system wavefunction can be represented as a linear combination of the diabatic states:

$$|\Psi(t)\rangle = c_1(t)\,|1\rangle + c_2(t)\,|2\rangle.$$

With the field absent, the energetic separation of the diabatic states is equal to $\hbar\omega_0$; the energy of state $|1\rangle$ increases with increasing magnetic field (a low-field-seeking state), while the energy of state $|2\rangle$ decreases with increasing magnetic field (a high-field-seeking state). Assuming the magnetic-field dependence is linear, the Hamiltonian matrix for the system with the field applied can be written

$$\mathbf H = \begin{pmatrix} \mu B(t) - \hbar\omega_0/2 & a \\ a & \hbar\omega_0/2 - \mu B(t) \end{pmatrix},$$

where $\mu$ is the magnetic moment of the atom, assumed to be the same for the two diabatic states, and $a$ is some time-independent coupling between the two states. The diagonal elements are the energies of the diabatic states ($E_1(t)$ and $E_2(t)$); however, as $\mathbf H$ is not a diagonal matrix, it is clear that these states are not eigenstates of $\mathbf H$, due to the off-diagonal coupling constant.

The eigenvectors of the matrix are the eigenstates of the system, which we will label $|\phi_1(t)\rangle$ and $|\phi_2(t)\rangle$, with corresponding eigenvalues

$$\varepsilon_{1,2}(t) = \mp\frac{1}{2}\sqrt{4a^2 + \left(\hbar\omega_0 - 2\mu B(t)\right)^2}.$$

It is important to realise that the eigenvalues $\varepsilon_1(t)$ and $\varepsilon_2(t)$ are the only allowed outputs for any individual measurement of the system energy, whereas the diabatic energies $E_1(t)$ and $E_2(t)$ correspond to the expectation values for the energy of the system in the diabatic states $|1\rangle$ and $|2\rangle$.

Figure 2 shows the dependence of the diabatic and adiabatic energies on the value of the magnetic field; note that for non-zero coupling the eigenvalues of the Hamiltonian cannot be degenerate, and thus we have an avoided crossing. If an atom is initially in an eigenstate in zero magnetic field (on the red curve, at the extreme left), an adiabatic increase in magnetic field will ensure the system remains in an eigenstate of the Hamiltonian throughout the process (it follows the red curve). A diabatic increase in magnetic field will ensure the system follows the diabatic path (the dotted blue line), such that the system undergoes a transition to the other eigenstate. For finite magnetic-field slew rates there will be a finite probability of finding the system in either of the two eigenstates. See below for approaches to calculating these probabilities.
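The avoided crossing is easy to reproduce numerically. The sketch below assumes the standard two-level form $\mathbf H = \begin{pmatrix} E_1 & a \\ a & E_2 \end{pmatrix}$ with linearly field-dependent diagonal energies; the zero-field splitting and coupling values are illustrative:

```python
import math

hbar_omega0 = 1.0   # zero-field splitting (illustrative)
a = 0.1             # off-diagonal coupling (illustrative)

def energies(muB):
    # Diabatic energies: the diagonal of H = [[E1, a], [a, E2]]
    E1 = muB - hbar_omega0/2
    E2 = hbar_omega0/2 - muB
    # Adiabatic energies: eigenvalues of the symmetric 2x2 matrix
    r = math.hypot(a, muB - hbar_omega0/2)
    return (E1, E2), (-r, r)

# At the diabatic crossing muB = hbar_omega0/2, the adiabatic gap is smallest
# and equals 2a: the avoided crossing.
(_, _), (Em, Ep) = energies(hbar_omega0/2)
print(Ep - Em)   # equals 2*a
```

Away from the crossing the adiabatic energies approach the diabatic straight lines, while at the crossing they are separated by the minimum gap $2a$ set entirely by the coupling.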
These results are extremely important in atomic and molecular physics for control of the energy-state distribution in a population of atoms or molecules.
Mathematical statement
Under a slowly changing Hamiltonian $H(t)$ with instantaneous eigenstates $|n(t)\rangle$ and corresponding energies $E_n(t)$, a quantum system evolves from the initial state

$$|\psi(0)\rangle = \sum_n c_n(0)\,|n(0)\rangle$$

to the final state

$$|\psi(t)\rangle = \sum_n c_n(t)\,|n(t)\rangle,$$

where the coefficients undergo the change of phase

$$c_n(t) = c_n(0)\,e^{i\theta_n(t)}\,e^{i\gamma_n(t)},$$

with the dynamical phase

$$\theta_n(t) = -\frac{1}{\hbar}\int_0^t E_n(t')\,dt'$$

and geometric phase

$$\gamma_n(t) = i\int_0^t \langle n(t')|\dot n(t')\rangle\,dt'.$$

In particular, $|c_n(t)|^2 = |c_n(0)|^2$, so if the system begins in an eigenstate of $H(0)$, it remains in an eigenstate of $H(t)$ during the evolution, with only a change of phase.
Proofs
**Proof given by Sakurai in Modern Quantum Mechanics[12]**

This proof is partly inspired by one given by Sakurai in Modern Quantum Mechanics.[12] The instantaneous eigenstates $|n(t)\rangle$ and energies $E_n(t)$, by assumption, satisfy the time-independent Schrödinger equation

$$H(t)\,|n(t)\rangle = E_n(t)\,|n(t)\rangle$$

at all times $t$. Thus, they constitute a basis that can be used to expand the state

$$|\psi(t)\rangle = \sum_n c_n(t)\,e^{i\theta_n(t)}\,|n(t)\rangle, \qquad \theta_n(t) = -\frac{1}{\hbar}\int_0^t E_n(t')\,dt',$$

at any time $t$. The evolution of the system is governed by the time-dependent Schrödinger equation

$$i\hbar\,|\dot\psi(t)\rangle = H(t)\,|\psi(t)\rangle,$$

where the overdot denotes a time derivative (see Notation for differentiation § Newton's notation). Insert the expansion of $|\psi(t)\rangle$, use $H|n\rangle = E_n|n\rangle$, differentiate with the product rule, take the inner product with $\langle m(t)|$ and use orthonormality of the eigenstates to obtain

$$\dot c_m(t) = -c_m(t)\,\langle m(t)|\dot m(t)\rangle \;-\; \sum_{n\neq m} c_n(t)\,e^{i(\theta_n(t)-\theta_m(t))}\,\langle m(t)|\dot n(t)\rangle.$$

This coupled first-order differential equation is exact and expresses the time evolution of the coefficients in terms of inner products $\langle m|\dot n\rangle$ between the eigenstates and the time-differentiated eigenstates. But it is possible to re-express the inner products for $n \neq m$ in terms of matrix elements of the time-differentiated Hamiltonian $\dot H$. To do so, differentiate both sides of the time-independent Schrödinger equation with respect to time using the product rule to get

$$\dot H\,|n\rangle + H\,|\dot n\rangle = \dot E_n\,|n\rangle + E_n\,|\dot n\rangle.$$

Again take the inner product with $\langle m|$ (for $m \neq n$) and use $\langle m|H = E_m\langle m|$ and orthonormality to find

$$\langle m|\dot n\rangle = \frac{\langle m|\dot H|n\rangle}{E_n - E_m} \qquad (m \neq n).$$

Insert this into the differential equation for the coefficients to obtain

$$\dot c_m(t) = -c_m(t)\,\langle m(t)|\dot m(t)\rangle \;-\; \sum_{n\neq m} c_n(t)\,e^{i(\theta_n-\theta_m)}\,\frac{\langle m|\dot H|n\rangle}{E_n - E_m}.$$

This differential equation describes the time evolution of the coefficients, but now in terms of matrix elements of $\dot H$. To arrive at the adiabatic theorem, neglect the second term on the right-hand side. This is valid if the rate of change of the Hamiltonian is small and there is a finite gap $E_n - E_m \neq 0$ between the energies; it is known as the adiabatic approximation. Under the adiabatic approximation,

$$\dot c_m(t) = -c_m(t)\,\langle m(t)|\dot m(t)\rangle,$$

which integrates precisely to the adiabatic theorem

$$c_m(t) = c_m(0)\,e^{i\gamma_m(t)}, \qquad \gamma_m(t) = i\int_0^t \langle m(t')|\dot m(t')\rangle\,dt',$$

with the phases defined in the statement of the theorem. The dynamical phase is real because it involves an integral over a real energy. To see that the geometric phase is purely real, differentiate the normalization $\langle m(t)|m(t)\rangle = 1$ of the eigenstates and use the product rule to find that

$$\langle \dot m(t)|m(t)\rangle + \langle m(t)|\dot m(t)\rangle = \left(\langle m(t)|\dot m(t)\rangle\right)^* + \langle m(t)|\dot m(t)\rangle = 0.$$

Thus, $\langle m|\dot m\rangle$ is purely imaginary, so the geometric phase is purely real.
**Proof with the details of the adiabatic approximation[13][14]**

We formulate the statement of the theorem as follows: for a slowly varying Hamiltonian $H(t)$ in the time range $T$, the solution of the Schrödinger equation with initial condition

$$\psi(0) = \psi_n(0),$$

where $\psi_n(t)$ is an eigenvector of the instantaneous Schrödinger equation

$$H(t)\,\psi_n(t) = E_n(t)\,\psi_n(t),$$

can be approximated as

$$\psi(t) \approx e^{i\theta_n(t)}\,e^{i\gamma_n(t)}\,\psi_n(t),$$

where the adiabatic approximation consists of neglecting $\langle\psi_m|\dot\psi_n\rangle$ for $m \neq n$, $\theta_n(t) = -\frac{1}{\hbar}\int_0^t E_n(t')\,dt'$ is the dynamical phase, and

$$\gamma_n(t) = i\int_0^t \langle\psi_n(t')|\dot\psi_n(t')\rangle\,dt'$$

is the geometric phase, also called the Berry phase.

And now we are going to prove the theorem. Consider the time-dependent Schrödinger equation

$$i\hbar\,\frac{\partial\psi(t)}{\partial t} = H(t)\,\psi(t).$$

We would like to know the relation between an initial state $\psi(0)$ and its final state $\psi(T)$ in the adiabatic limit $T \to \infty$. First redefine time as $\lambda = t/T \in [0,1]$:

$$i\hbar\,\frac{\partial\psi(\lambda)}{\partial\lambda} = T\,H(\lambda)\,\psi(\lambda).$$

At every point in time $H(\lambda)$ can be diagonalized with eigenvalues $E_n(\lambda)$ and eigenvectors $\psi_n(\lambda)$. Since the eigenvectors form a complete basis at any time, we can expand $\psi(\lambda)$ as

$$\psi(\lambda) = \sum_n c_n(\lambda)\,\psi_n(\lambda)\,e^{iT\theta_n(\lambda)}, \qquad \theta_n(\lambda) = -\frac{1}{\hbar}\int_0^\lambda E_n(\lambda')\,d\lambda'.$$

The factor $e^{iT\theta_n(\lambda)}$ is called the dynamic phase factor. By substitution into the Schrödinger equation, another equation for the variation of the coefficients can be obtained:

$$i\hbar\sum_n\left(\dot c_n\,\psi_n + c_n\,\dot\psi_n + i\,T\,c_n\,\dot\theta_n\,\psi_n\right)e^{iT\theta_n} = T\sum_n c_n\,E_n\,\psi_n\,e^{iT\theta_n}.$$

The term $\dot\theta_n$ gives $-E_n/\hbar$, and so the third term on the left side cancels with the right side, leaving

$$\sum_n\left(\dot c_n\,\psi_n + c_n\,\dot\psi_n\right)e^{iT\theta_n} = 0.$$

Now taking the inner product with an arbitrary eigenfunction $\psi_m$, the $\langle\psi_m|\psi_n\rangle$ on the left gives $\delta_{nm}$, which is 1 only for $m = n$ and otherwise vanishes. The remaining part gives

$$\dot c_m = -\sum_n c_n\,\langle\psi_m|\dot\psi_n\rangle\,e^{iT(\theta_n - \theta_m)}.$$

For $T \to \infty$ the exponential factors oscillate faster and faster and intuitively will eventually suppress nearly all terms on the right side. The only exceptions are terms for which $\theta_n - \theta_m$ has a critical point, i.e. $E_n(\lambda) = E_m(\lambda)$. This is trivially true for $m = n$. Since the adiabatic theorem assumes a gap between the eigenenergies at any time, this cannot hold for $m \neq n$. Therefore, only the $m = n$ term will remain in the limit $T \to \infty$.

In order to show this more rigorously we first need to remove the diagonal term $\langle\psi_m|\dot\psi_m\rangle$. This can be done by defining

$$b_m(\lambda) = c_m(\lambda)\,e^{-i\gamma_m(\lambda)}, \qquad \gamma_m(\lambda) = i\int_0^\lambda \langle\psi_m(\lambda')|\dot\psi_m(\lambda')\rangle\,d\lambda'.$$

We obtain

$$\dot b_m = -\sum_{n\neq m} b_n\,\langle\psi_m|\dot\psi_n\rangle\,e^{i(\gamma_n-\gamma_m)}\,e^{iT(\theta_n-\theta_m)}.$$

This equation can be integrated:

$$b_m(\lambda) - b_m(0) = -\int_0^\lambda \sum_{n\neq m} b_n(\lambda')\,\langle\psi_m|\dot\psi_n\rangle\,e^{i(\gamma_n-\gamma_m)}\,e^{iT(\theta_n-\theta_m)}\,d\lambda',$$

or, written in vector notation,

$$\vec b(\lambda) - \vec b(0) = -\int_0^\lambda \hat K(\lambda',T)\,\vec b(\lambda')\,d\lambda'.$$

Here $\hat K(\lambda',T)$ is a matrix, and the integral is basically a Fourier transform in $T$. It follows from the Riemann–Lebesgue lemma that $\int_0^\lambda \hat K(\lambda',T)\,d\lambda' \to 0$ as $T \to \infty$. As the last step, take the norm on both sides of the above equation,

$$\|\vec b(\lambda) - \vec b(0)\| \le \int_0^\lambda \|\hat K(\lambda',T)\|\,\|\vec b(\lambda')\|\,d\lambda',$$

and apply Grönwall's inequality to conclude that $\|\vec b(\lambda) - \vec b(0)\| \to 0$ for $T \to \infty$. This concludes the proof of the adiabatic theorem: in the adiabatic limit the eigenstates of the Hamiltonian evolve independently of each other.

If the system is prepared in an eigenstate $\psi_n(0)$, its time evolution is given by

$$\psi(t) = e^{i\theta_n(t)}\,e^{i\gamma_n(t)}\,\psi_n(t).$$

So, for an adiabatic process, a system starting from the $n$th eigenstate remains in that $n$th eigenstate, just as it does for time-independent processes, only picking up a couple of phase factors. The phase factor $e^{i\gamma_n(t)}$ can be canceled out by an appropriate choice of gauge for the eigenfunctions. However, if the adiabatic evolution is cyclic, then $\gamma_n$ becomes a gauge-invariant physical quantity, known as the Berry phase.
**Generic proof in parameter space**

Let us start from a parametric Hamiltonian $H(\vec\lambda(t))$, where the parameters $\vec\lambda(t)$ are slowly varying in time; what counts as slow is set essentially by the energy distance between the eigenstates (through the uncertainty principle we can define from this gap a timescale that shall always be much shorter than the timescale considered). This way we also require that, while slowly varying, the eigenstates remain clearly separated in energy (e.g., also when we generalize this to the case of bands, as in the TKNN formula, the bands shall remain clearly separated). Given that they do not intersect, the states are ordered, and in this sense this is also one of the meanings of the name topological order.

We have the instantaneous Schrödinger equation

$$i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = H(\vec\lambda(t))\,|\psi(t)\rangle$$

and the instantaneous eigenstates

$$H(\vec\lambda)\,|\psi_n(\vec\lambda)\rangle = E_n(\vec\lambda)\,|\psi_n(\vec\lambda)\rangle.$$

Plugging the generic solution

$$|\psi(t)\rangle = \sum_n a_n(t)\,|\psi_n(\vec\lambda(t))\rangle$$

into the full Schrödinger equation and multiplying by a generic eigenvector $\langle\psi_m|$ gives

$$\dot a_m(t) = -\frac{i}{\hbar}\,E_m(\vec\lambda(t))\,a_m(t) - \sum_n a_n(t)\,\langle\psi_m|\dot\psi_n\rangle.$$

And if we introduce the adiabatic approximation

$$\langle\psi_m|\dot\psi_n\rangle \approx 0 \quad \text{for each } m \neq n,$$

we have

$$a_m(t) = e^{i\theta_m(t)}\,e^{i\gamma_m(t)}\,a_m(0),$$

where

$$\theta_m(t) = -\frac{1}{\hbar}\int_0^t E_m(t')\,dt', \qquad \gamma_m(t) = i\int_C \langle\psi_m|\nabla_{\vec\lambda}\,\psi_m\rangle\cdot d\vec\lambda,$$

and $C$ is the path in the parameter space. This is the same as the statement of the theorem, but in terms of the coefficients of the total wave function and its initial state.[15] This is slightly more general than the other proofs, given that we consider a generic set of parameters, and we see that the Berry phase acts as a local geometric quantity in the parameter space. Finally, integrals of local geometric quantities can give topological invariants, as in the case of the Gauss–Bonnet theorem.[16] In fact, if the path $C$ is closed, then the Berry phase is invariant under gauge transformations and becomes a physical quantity.
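The gauge-invariant character of the Berry phase for a closed path $C$ can be checked with the standard discretized formula $\gamma = -\arg\prod_k \langle\psi(\lambda_k)|\psi(\lambda_{k+1})\rangle$, which needs no derivatives of the eigenstates. The sketch below applies it to the textbook example of a spin-1/2 eigenstate dragged around a cone of half-angle $\theta$, where the known answer is $\gamma = -\pi(1-\cos\theta)$ (minus half the enclosed solid angle); the cone angle and discretization are illustrative choices:

```python
import math, cmath

theta = math.pi/3   # cone half-angle of the closed parameter loop (illustrative)
N = 4000            # number of discretization points along the loop

def state(phi):
    # spin-1/2 eigenstate aligned with n(theta, phi)
    return (math.cos(theta/2), cmath.exp(1j*phi)*math.sin(theta/2))

# Gauge-invariant discrete Berry phase: minus the argument of the product of
# successive overlaps around the closed loop phi: 0 -> 2*pi.
prod = 1 + 0j
for k in range(N):
    bra = state(2*math.pi*k/N)
    ket = state(2*math.pi*(k+1)/N)
    prod *= bra[0].conjugate()*ket[0] + bra[1].conjugate()*ket[1]

berry = -cmath.phase(prod)
print(berry, -math.pi*(1 - math.cos(theta)))   # both close to -pi/2
```

Because each overlap is computed between neighbouring states, any smooth per-point phase redefinition of the eigenstates cancels around the closed loop, illustrating why the cyclic Berry phase is a physical quantity.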
Example applications
Often a solid crystal is modeled as a set of independent valence electrons moving in a mean, perfectly periodic potential generated by a rigid lattice of ions. With the adiabatic theorem we can go beyond this static picture and include the motion of the valence electrons across the crystal together with the thermal motion of the ions, as in the Born–Oppenheimer approximation.[17]
This explains many phenomena in the scope of:

- thermodynamics: the temperature dependence of specific heat, thermal expansion, melting
- transport phenomena: the temperature dependence of the electric resistivity of conductors, the temperature dependence of the electric conductivity of insulators, some properties of low-temperature superconductivity
- optics: optical absorption in the infrared for ionic crystals, Brillouin scattering, Raman scattering
Deriving conditions for diabatic vs adiabatic passage
We will now pursue a more rigorous analysis.[18] Making use of bra–ket notation, the state vector of the system at time $t$ can be written $|\psi(t)\rangle$, where the spatial wavefunction alluded to earlier is the projection of the state vector onto the eigenstates of the position operator:

$$\psi(x,t) = \langle x|\psi(t)\rangle.$$
It is instructive to examine the limiting cases, in which $\tau$ is very large (adiabatic, or gradual change) and very small (diabatic, or sudden change).
Consider a system Hamiltonian undergoing continuous change from an initial value $\hat H_0$, at time $t_0$, to a final value $\hat H_1$, at time $t_1$, where $\tau = t_1 - t_0$. The evolution of the system can be described in the Schrödinger picture by the time-evolution operator, defined by the integral equation

$$U(t,t_0) = 1 - \frac{i}{\hbar}\int_{t_0}^{t} \hat H(t')\,U(t',t_0)\,dt',$$

which is equivalent to the Schrödinger equation

$$i\hbar\,\frac{\partial}{\partial t}U(t,t_0) = \hat H(t)\,U(t,t_0),$$

along with the initial condition $U(t_0,t_0) = 1$. Given knowledge of the system wave function at $t_0$, the evolution of the system up to a later time $t$ can be obtained using

$$|\psi(t)\rangle = U(t,t_0)\,|\psi(t_0)\rangle.$$
The problem of determining the adiabaticity of a given process is equivalent to establishing the dependence of $U(t_1,t_0)$ on $\tau$.
To determine the validity of the adiabatic approximation for a given process, one can calculate the probability of finding the system in a state other than that in which it started. Using bra–ket notation and the definition $|0\rangle \equiv |\psi(t_0)\rangle$, we have:

$$\zeta = \langle 0|U^\dagger(t_1,t_0)\,U(t_1,t_0)|0\rangle - \langle 0|U^\dagger(t_1,t_0)|0\rangle\,\langle 0|U(t_1,t_0)|0\rangle.$$

We can expand $U(t_1,t_0)$:

$$U(t_1,t_0) = 1 + \frac{1}{i\hbar}\int_{t_0}^{t_1}\hat H(t)\,dt + \frac{1}{(i\hbar)^2}\int_{t_0}^{t_1}dt'\int_{t_0}^{t'}dt''\,\hat H(t')\,\hat H(t'') + \cdots$$

In the perturbative limit we can take just the first two terms and substitute them into our equation for $\zeta$, recognizing that

$$\bar H \equiv \frac{1}{\tau}\int_{t_0}^{t_1}\hat H(t)\,dt$$

is the system Hamiltonian, averaged over the interval $t_0 \to t_1$. We have:

$$\zeta = \langle 0|\left(1 + \tfrac{i}{\hbar}\tau\bar H\right)\left(1 - \tfrac{i}{\hbar}\tau\bar H\right)|0\rangle - \langle 0|\left(1 + \tfrac{i}{\hbar}\tau\bar H\right)|0\rangle\,\langle 0|\left(1 - \tfrac{i}{\hbar}\tau\bar H\right)|0\rangle.$$

After expanding the products and making the appropriate cancellations, we are left with:

$$\zeta = \frac{\tau^2}{\hbar^2}\left(\langle 0|\bar H^2|0\rangle - \langle 0|\bar H|0\rangle\langle 0|\bar H|0\rangle\right),$$

giving

$$\zeta = \frac{\tau^2\,\Delta\bar H^2}{\hbar^2},$$

where $\Delta\bar H$ is the root-mean-square deviation of the system Hamiltonian averaged over the interval of interest.
The sudden approximation is valid when $\zeta \ll 1$ (i.e. the probability of finding the system in a state other than that in which it started approaches zero); thus the validity condition is given by

$$\tau \ll \frac{\hbar}{\Delta\bar H},$$

which is a statement of the time–energy form of the Heisenberg uncertainty principle.
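The estimate $\zeta \approx \tau^2\,\Delta\bar H^2/\hbar^2$ can be sanity-checked on a two-level system with a constant Hamiltonian $\hat H = c\,\sigma_x$ acting for a short time $\tau$: starting from $|0\rangle = (1,0)^T$ the exact departure probability is $\sin^2(c\tau/\hbar)$, while $\Delta\bar H = c$ for this state. A minimal sketch with illustrative numbers:

```python
import math

coupling, tau, hbar = 0.7, 0.05, 1.0   # illustrative values

# H = [[0, c], [c, 0]] held constant over [0, tau]; the exact propagator is
# exp(-i H tau/hbar) = cos(c tau/hbar) I - i sin(c tau/hbar) sigma_x.
amp_stay = math.cos(coupling*tau/hbar)        # <0|U|0> for |0> = (1, 0)
zeta_exact = 1 - amp_stay**2                  # = sin^2(c tau/hbar)
zeta_approx = (tau*coupling/hbar)**2          # tau^2 (Delta H)^2 / hbar^2
print(zeta_exact, zeta_approx)
```

For $c\tau/\hbar \ll 1$ the two expressions agree to leading order, confirming that the departure probability grows quadratically in $\tau$ at short times.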
Diabatic passage
In the limit $\tau \to 0$ we have infinitely rapid, or diabatic passage:

$$\lim_{\tau\to 0} U(t_1,t_0) = 1.$$

The functional form of the system remains unchanged:

$$|\langle x|\psi(t_1)\rangle|^2 = |\langle x|\psi(t_0)\rangle|^2.$$

This is sometimes referred to as the sudden approximation. The validity of the approximation for a given process can be characterized by the probability that the state of the system remains unchanged, $1 - \zeta$.
Adiabatic passage
In the limit $\tau \to \infty$ we have infinitely slow, or adiabatic passage. The system evolves, adapting its form to the changing conditions:

$$|\langle x|\psi(t_1)\rangle|^2 \neq |\langle x|\psi(t_0)\rangle|^2.$$

If the system is initially in an eigenstate of $\hat H(t_0)$, after a period $\tau$ it will have passed into the corresponding eigenstate of $\hat H(t_1)$.

This is referred to as the adiabatic approximation. The validity of the approximation for a given process can be determined from the probability that the final state of the system is different from the initial state, $\zeta$.
Calculating adiabatic passage probabilities
[edit]The Landau–Zener formula
In 1932 an analytic solution to the problem of calculating adiabatic transition probabilities was published separately by Lev Landau and Clarence Zener,[19] for the special case of a linearly changing perturbation in which the time-varying component does not couple the relevant states (hence the coupling term in the diabatic Hamiltonian matrix is independent of time).
The key figure of merit in this approach is the Landau–Zener velocity:

$$v_{\rm LZ} = \frac{\partial E_2/\partial t - \partial E_1/\partial t}{\partial E_2/\partial q - \partial E_1/\partial q} \approx \frac{dq}{dt},$$

where $q$ is the perturbation variable (electric or magnetic field, molecular bond length, or any other perturbation to the system), and $E_1$ and $E_2$ are the energies of the two diabatic (crossing) states. A large $v_{\rm LZ}$ results in a large diabatic transition probability and vice versa.

Using the Landau–Zener formula, the probability $P_{\rm D}$ of a diabatic transition is given by

$$P_{\rm D} = e^{-2\pi\Gamma}, \qquad \Gamma = \frac{a^2/\hbar}{\left|\dfrac{\partial}{\partial t}\left(E_2 - E_1\right)\right|},$$

where $a$ is the off-diagonal coupling between the two states.
The numerical approach
For a transition involving a nonlinear change in perturbation variable or time-dependent coupling between the diabatic states, the equations of motion for the system dynamics cannot be solved analytically. The diabatic transition probability can still be obtained using one of the wide variety of numerical solution algorithms for ordinary differential equations.
The equations to be solved can be obtained from the time-dependent Schrödinger equation:

$$i\hbar\,\dot{\underline c}^A(t) = \mathbf H_A(t)\,\underline c^A(t),$$

where $\underline c^A(t)$ is a vector containing the adiabatic state amplitudes, $\mathbf H_A(t)$ is the time-dependent adiabatic Hamiltonian,[11] and the overdot represents a time derivative.

Comparison of the initial conditions used with the values of the state amplitudes following the transition can yield the diabatic transition probability. In particular, for a two-state system,

$$P_{\rm D} = |c^A_2(t_1)|^2$$

for a system that began with $|c^A_1(t_0)|^2 = 1$.
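For the linear-sweep case, such a numerical integration can be checked directly against the Landau–Zener formula. The sketch below integrates the two-level Schrödinger equation with $\mathbf H(t) = \begin{pmatrix}\alpha t/2 & a\\ a & -\alpha t/2\end{pmatrix}$ using a hand-rolled RK4 stepper (plain Python; the sweep rate, coupling, and time window are illustrative choices):

```python
import math

hbar = 1.0
a = 0.2        # constant diabatic coupling (illustrative)
alpha = 1.0    # sweep rate of the diabatic energy difference (illustrative)

def deriv(t, psi):
    # i*hbar d(psi)/dt = H(t) psi  =>  d(psi)/dt = -i/hbar * H(t) psi
    c1, c2 = psi
    d1 = (-1j/hbar)*(0.5*alpha*t*c1 + a*c2)
    d2 = (-1j/hbar)*(a*c1 - 0.5*alpha*t*c2)
    return d1, d2

def rk4_step(t, psi, dt):
    j1 = deriv(t, psi)
    j2 = deriv(t + dt/2, tuple(p + dt/2*k for p, k in zip(psi, j1)))
    j3 = deriv(t + dt/2, tuple(p + dt/2*k for p, k in zip(psi, j2)))
    j4 = deriv(t + dt, tuple(p + dt*k for p, k in zip(psi, j3)))
    return tuple(p + dt/6*(q1 + 2*q2 + 2*q3 + q4)
                 for p, q1, q2, q3, q4 in zip(psi, j1, j2, j3, j4))

t, dt = -40.0, 0.002
psi = (1 + 0j, 0j)        # diabatic state |1> far before the crossing
while t < 40.0:
    psi = rk4_step(t, psi, dt)
    t += dt

P_numeric = abs(psi[0])**2                      # probability of staying diabatic
P_LZ = math.exp(-2*math.pi*a*a/(hbar*alpha))    # Landau-Zener prediction
print(P_numeric, P_LZ)
```

For these parameters $\Gamma = a^2/(\hbar\alpha) = 0.04$, so the predicted diabatic probability is $e^{-0.08\pi} \approx 0.78$, which the integration reproduces to within a percent or so once the time window extends well past the crossing.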
References
[edit]- ^ Born, M. and Fock, V. A. (1928). "Beweis des Adiabatensatzes". Zeitschrift für Physik A. 51 (3–4): 165–180. Bibcode:1928ZPhy...51..165B. doi:10.1007/BF01343193. S2CID 122149514.
- ^ Instituts Solvay, Brussels Institut international de physique Conseil de physique; Solvay, Ernest; Langevin, Paul; Broglie, Maurice de; Einstein, Albert (1912). La théorie du rayonnement et les quanta : rapports et discussions de la réunion tenue à Bruxelles, du 30 octobre au 3 novembre 1911, sous les auspices de M.E. Solvay. University of British Columbia Library. Paris, France: Gauthier-Villars. p. 450.
- ^ Ehrenfest, P. (1911). "Welche Züge der Lichtquantenhypothese spielen in der Theorie der Wärmestrahlung eine wesentliche Rolle?". Annalen der Physik. 36: 91–118. Reprinted in Klein (1959), pp. 185–212.
- ^ "Letter to Michele Besso, 21 October 1911, translated in Volume 5: The Swiss Years: Correspondence, 1902-1914 (English translation supplement), page 215". einsteinpapers.press.princeton.edu. Retrieved 2024-04-17.
- ^ Laidler, Keith J. (1994-03-01). "The meaning of "adiabatic"". Canadian Journal of Chemistry. 72 (3): 936–938. Bibcode:1994CaJCh..72..936L. doi:10.1139/v94-121. ISSN 0008-4042.
- ^ Kato, T. (1950). "On the Adiabatic Theorem of Quantum Mechanics". Journal of the Physical Society of Japan. 5 (6): 435–439. Bibcode:1950JPSJ....5..435K. doi:10.1143/JPSJ.5.435.
- ^ Avron, J. E. and Elgart, A. (1999). "Adiabatic Theorem without a Gap Condition". Communications in Mathematical Physics. 203 (2): 445–463. arXiv:math-ph/9805022. Bibcode:1999CMaPh.203..445A. doi:10.1007/s002200050620. S2CID 14294926.
- ^ Griffiths, David J. (2005). "10". Introduction to Quantum Mechanics. Pearson Prentice Hall. ISBN 0-13-111892-7.
- ^ Zwiebach, Barton (Spring 2018). "L15.2 Classical adiabatic invariant". MIT 8.06 Quantum Physics III. Archived from the original on 2021-12-21.
- ^ Zwiebach, Barton (Spring 2018). "Classical analog: oscillator with slowly varying frequency". MIT 8.06 Quantum Physics III. Archived from the original on 2021-12-21.
- ^ a b Stenholm, Stig (1994). "Quantum Dynamics of Simple Systems". The 44th Scottish Universities Summer School in Physics: 267–313.
- ^ a b Sakurai, J. J.; Napolitano, Jim (2020-09-17). Modern Quantum Mechanics (3 ed.). Cambridge University Press. Bibcode:2020mqm..book.....S. doi:10.1017/9781108587280. ISBN 978-1-108-58728-0.
- ^ a b Zwiebach, Barton (Spring 2018). "L16.1 Quantum adiabatic theorem stated". MIT 8.06 Quantum Physics III. Archived from the original on 2021-12-21.
- ^ a b "MIT 8.06 Quantum Physics III".
- ^ Bernevig, B. Andrei; Hughes, Taylor L. (2013). Topological insulators and Topological superconductors. Princeton university press. pp. Ch. 1.
- ^ Haldane. "Nobel Lecture" (PDF).
- ^ Bottani, Carlo E. (2017–2018). Solid State Physics Lecture Notes. pp. 64–67.
- ^ Messiah, Albert (1999). "XVII". Quantum Mechanics. Dover Publications. ISBN 0-486-40924-4.
- ^ Zener, C. (1932). "Non-adiabatic Crossing of Energy Levels". Proceedings of the Royal Society of London, Series A. 137 (833): 696–702. Bibcode:1932RSPSA.137..696Z. doi:10.1098/rspa.1932.0165. JSTOR 96038.
Classical Perspectives
Adiabatic Invariants in Classical Mechanics
In classical mechanics, adiabatic invariants are quantities that remain approximately constant for systems undergoing slow changes in their parameters, provided the variation timescale is much longer than the system's natural oscillation period. These invariants arise in Hamiltonian systems where the Hamiltonian depends on a slowly varying external parameter $\lambda(t)$, such as a time-dependent potential or field strength. The concept ensures that certain phase-space integrals preserve their value, enabling approximate solutions to otherwise complex time-dependent problems.[6]

For periodic motion, the primary adiabatic invariant is the action variable $J$, defined as the line integral over one complete cycle in phase space:

$$J = \oint p\,dq,$$

where $p$ is the momentum conjugate to the coordinate $q$. This action integral remains invariant under adiabatic changes to the Hamiltonian parameters, meaning $J$ changes by a negligible amount over many oscillation periods.[7] The invariance holds because rapid oscillations average out short-term fluctuations, while slow parameter drifts do not significantly alter the enclosed phase-space area.[8]

The concept of adiabatic invariants in classical mechanics was introduced by Paul Ehrenfest in his 1916 paper, where he demonstrated the constancy of such quantities for slowly varying systems as a foundation for broader adiabatic principles. Ehrenfest showed that for a Hamiltonian $H(q,p;\lambda(t))$ with slowly varying $\lambda(t)$, the action satisfies $\dot J \approx 0$ when the rate of change of $\lambda$ is small compared to the oscillation frequency.[9]

A classic example occurs in a harmonic oscillator subject to a slowly varying spring constant $k(t)$ or frequency $\omega(t)$. Here, the energy scales with $\omega$, so as $\omega$ changes gradually, $E/\omega$ stays constant, implying the amplitude adjusts as $\omega^{-1/2}$ to maintain the invariant.[10] Another prominent case is the motion of a charged particle in a slowly varying magnetic field $B$, where the magnetic moment $\mu = m v_\perp^2/2B$ (with $m$ the particle mass and $v_\perp$ the perpendicular velocity) serves as the adiabatic invariant, preserving the gyro-orbit area despite field-strength changes.[11]

To derive this invariance, consider a system with separable fast and slow dynamics. The motion decomposes into rapid periodic oscillations around a slowly evolving guiding center. The action is computed by averaging the Hamiltonian over the fast angle variable, yielding an effective slow Hamiltonian that depends on $J$ but not its conjugate angle. Perturbation theory then shows that the rate of change $\dot J$ is of higher order in the small adiabaticity parameter, vanishing in the adiabatic limit. This averaging over fast oscillations ensures the invariant's approximate conservation.[7] These classical adiabatic invariants lay the groundwork for understanding similar phenomena in quantum mechanics via the correspondence principle.

The Adiabatic Pendulum
The adiabatic pendulum serves as a classic illustration of adiabatic invariance in classical mechanics, where the length of the pendulum string varies slowly over time. Consider a simple pendulum consisting of a mass $m$ attached to a string of length $\ell(t)$, suspended from a fixed point, with the variation in $\ell$ occurring at a rate much slower than the natural oscillation period $T = 2\pi\sqrt{\ell/g}$, where $g$ is the acceleration due to gravity.[6][10] This slow change ensures that the motion remains nearly periodic throughout the evolution, allowing the system to follow adiabatic conditions without significant excitation of higher modes.[6]

The key result is that the adiabatic invariant for this system is the action variable $J = \oint p\,dq$, which remains conserved under the slow variation of $\ell$. For small-amplitude oscillations, where the pendulum behaves as a harmonic oscillator, this invariant simplifies to $J = E/\omega$, with $E$ the total oscillation energy and $\omega = \sqrt{g/\ell}$ the angular frequency. Thus, the ratio $E/\omega$ stays constant, implying that the energy adjusts proportionally to the frequency as the length changes.[6][10]

To derive this, approximate the pendulum motion for small angles $\theta$, yielding the Hamiltonian

$$H = \frac{p_\theta^2}{2m\ell^2} + \frac{1}{2}\,m g \ell\,\theta^2,$$

where $p_\theta$ is the angular momentum conjugate to $\theta$. The frequency is $\omega = \sqrt{g/\ell}$, scaling as $\ell^{-1/2}$. In action-angle variables, the action is the area enclosed by the phase-space trajectory divided by $2\pi$. For the elliptical trajectory of the harmonic approximation, $J = E/\omega = \frac{1}{2}m\sqrt{g\ell^3}\,\theta_0^2$, where $\theta_0$ is the angular amplitude. Under slow variation of $\ell$, canonical perturbation theory or time-averaging over one oscillation period shows that $\dot J \approx 0$ to first order, as the perturbation from $\dot\ell$ averages to zero over the fast oscillatory motion. Substituting confirms that $E/\omega$ remains invariant.[6][10]

As the length decreases slowly, the frequency increases, and to preserve $J$, the amplitude must adjust such that $\theta_0 \propto \ell^{-3/4}$, causing the angular excursions to grow while the linear displacement scales as $\ell\theta_0 \propto \ell^{1/4}$. This results in the bob swinging with increasing angular vigor but a gradually shrinking linear reach, maintaining the phase-space area constant; conversely, lengthening the string reduces the angular amplitude, damping the motion without energy loss to non-adiabatic effects.[6]

Relation to Thermodynamics
In thermodynamics, an adiabatic process is characterized by the absence of heat exchange between the system and its surroundings, expressed as $\delta Q = 0$.[12] For an ideal gas in a reversible adiabatic process, the first law of thermodynamics and the ideal gas law yield the relation $PV^\gamma = \text{const}$, where $P$ is pressure, $V$ is volume, and $\gamma = C_P/C_V$ is the ratio of specific heats.[12] This relation governs changes in macroscopic variables like pressure and volume under work alone, without thermal interactions.

The terminology "adiabatic" in the context of adiabatic invariants and the adiabatic theorem was borrowed from thermodynamics but repurposed in classical mechanics by Paul Ehrenfest in his 1916 paper.[9] Ehrenfest introduced the concept to describe quantities that remain invariant under slow variations of system parameters, drawing an analogy to the insulated, heat-free evolution in thermodynamic processes but applying it to dynamical systems with time-dependent Hamiltonians.[13]

A fundamental distinction arises in the nature of the processes: thermodynamic adiabaticity pertains to isolated systems with fixed Hamiltonians, where energy conservation follows from no heat transfer, allowing for both slow and rapid changes as long as insulation is maintained.[14] In contrast, the adiabatic theorem concerns systems with slowly varying parameters in the Hamiltonian, where invariants are preserved only under gradual evolution to avoid disruptions in the system's state.[14] This precludes a direct analogy, as thermodynamic adiabaticity does not guarantee the preservation of mechanical adiabatic invariants; for instance, a sudden compression in a thermodynamic adiabatic process may violate the slow-change condition required for invariant constancy in mechanics.[14] Classical adiabatic invariants thus serve as a conceptual bridge, linking thermodynamic insulation to dynamical stability under controlled parameter shifts.[13]

Quantum Mechanical Formulation
Mathematical Statement
The quantum adiabatic theorem provides a precise description of how a quantum system evolves under a slowly varying Hamiltonian. Consider a time-dependent self-adjoint Hamiltonian $H(t)$ acting on a Hilbert space, with instantaneous eigenvalues $E_n(t)$ and corresponding orthonormal eigenstates $|n(t)\rangle$ satisfying $H(t)|n(t)\rangle = E_n(t)|n(t)\rangle$. Assume the spectrum is non-degenerate, meaning $E_n(t) \neq E_m(t)$ for all $n \neq m$ and all $t$ in the interval $[0, T]$, and that there exists a positive minimum energy gap $g = \min_t \min_{m \neq n} |E_m(t) - E_n(t)| > 0$. If the system is prepared at $t = 0$ in the exact eigenstate $|n(0)\rangle$, and evolves according to the time-dependent Schrödinger equation $i\hbar\, \partial_t |\psi(t)\rangle = H(t)|\psi(t)\rangle$, then for sufficiently slow variation of $H(t)$, the state at time $t$ remains approximately in the instantaneous eigenstate up to a phase factor:

$$|\psi(t)\rangle \approx e^{i\theta_n(t) + i\gamma_n(t)}\, |n(t)\rangle,$$

where the total phase is the sum of the dynamic phase $\theta_n(t)$ and the geometric phase $\gamma_n(t)$. This approximation holds with fidelity approaching 1 as the variation slows, ensuring the final state at $t = T$ aligns closely with $|n(T)\rangle$ up to the phase.

The dynamic phase arises from the instantaneous energy and is given by

$$\theta_n(t) = -\frac{1}{\hbar} \int_0^t E_n(t')\, dt'.$$

The geometric phase, also known as the Berry phase, accounts for the variation in the eigenstate itself and takes the form

$$\gamma_n(t) = i \int_0^t \langle n(t') | \partial_{t'} n(t') \rangle\, dt'.$$

For cyclic evolutions where the Hamiltonian parameters return to their initial values after time $T$, the geometric phase simplifies to a line integral over the parameter space or, for closed paths, a surface integral involving the Berry curvature. Thus, the total phase is $\theta_n(t) + \gamma_n(t)$. These phases ensure that while the state tracks the evolving eigenstate, observable quantities tied to the eigenstate (such as probabilities) remain invariant under adiabatic change.

The theorem's validity relies on specific assumptions beyond non-degeneracy and the gap condition. The Hamiltonian must vary sufficiently slowly, quantified by the adiabatic parameter

$$\varepsilon = \max_{t,\, m \neq n} \frac{\hbar\, |\langle m(t) | \partial_t H(t) | n(t) \rangle|}{\left(E_m(t) - E_n(t)\right)^2}.$$

This parameter measures the ratio of the transition-inducing matrix elements to the squared energy differences, scaled by $\hbar$; when $\varepsilon$ is small, transitions to other eigenstates are negligible, and the error in the approximation scales as $O(\varepsilon)$.
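As a concrete illustration of the geometric phase, the Berry phase picked up by a spin-1/2 ground state as a unit magnetic field is carried around a cone of half-angle $\vartheta$ equals half the enclosed solid angle, $\gamma = \pi(1-\cos\vartheta)$. The sketch below (parameters chosen purely for illustration) computes this numerically from the gauge-invariant discretized overlap (Wilson-loop) formula:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state(theta, phi):
    """Ground state of H = n·sigma for a unit field at angles (theta, phi)."""
    n = np.array([np.sin(theta)*np.cos(phi),
                  np.sin(theta)*np.sin(phi),
                  np.cos(theta)])
    H = n[0]*sx + n[1]*sy + n[2]*sz
    _, vecs = np.linalg.eigh(H)
    return vecs[:, 0]              # eigh sorts eigenvalues ascending

theta = np.pi/3                    # cone half-angle of the field loop
N = 2000                           # discretization of the closed loop in phi
states = [ground_state(theta, 2*np.pi*k/N) for k in range(N)]

# Closed-loop product of successive overlaps is gauge invariant,
# so the arbitrary eigenvector phases from eigh cancel out.
prod = 1.0 + 0.0j
for k in range(N):
    prod *= np.vdot(states[k], states[(k + 1) % N])

berry = -np.angle(prod)                   # discretized Berry phase
expected = np.pi*(1 - np.cos(theta))      # half the enclosed solid angle
print(berry, expected)                    # both close to pi/2
```

The closed-loop product makes the result independent of the arbitrary phase each call to `eigh` attaches to its eigenvectors, which is the standard trick for computing Berry phases on a discrete parameter grid.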
Additionally, the eigenstates $|n(t)\rangle$ are assumed to be smooth functions of time, and the initial alignment $|\psi(0)\rangle = |n(0)\rangle$ guarantees that the evolution stays within the $n$th eigensubspace. The final-state fidelity $F = |\langle n(T)|\psi(T)\rangle|^2$ quantifies the theorem's accuracy at the boundary $t = T$.[15]

Proofs of the Theorem
The standard proof of the adiabatic theorem employs time-dependent perturbation theory by expanding the wave function in the instantaneous eigenbasis of the time-dependent Hamiltonian $H(t)$.[1] Assume the system starts in the $n$th instantaneous eigenstate of $H(0)$, with $H(t)|m(t)\rangle = E_m(t)|m(t)\rangle$. The state at time $t$ is written as

$$|\psi(t)\rangle = \sum_m c_m(t)\, e^{i\theta_m(t)}\, |m(t)\rangle, \qquad \theta_m(t) = -\frac{1}{\hbar}\int_0^t E_m(t')\, dt',$$

where $\theta_m(t)$ accounts for the dynamic phase.[16] Substituting into the time-dependent Schrödinger equation and projecting onto $\langle m(t)|$ yields the coefficients' evolution:[1]

$$\dot{c}_m = -c_m \langle m|\dot{m}\rangle - \sum_{l \neq m} c_l\, \langle m|\dot{l}\rangle\, e^{i(\theta_l - \theta_m)}.$$

The coupling terms for $l \neq m$ arise from differentiating the eigenvalue equation, giving

$$\langle m|\dot{l}\rangle = \frac{\langle m|\dot{H}|l\rangle}{E_l - E_m},$$

assuming non-zero energy gaps.[16] For slow variations, parameterize $t = \tau s$ with slowness parameter $\tau$ and rescaled time $s \in [0, 1]$, so that $\dot{H} = O(1/\tau)$. The transition amplitude integrates to order $1/\tau$, vanishing as $\tau \to \infty$, while $c_n \to 1$ up to a phase, ensuring the system tracks the $n$th eigenstate.[1] This perturbative approach highlights that transitions are suppressed by the inverse energy gaps and the rate of change $\dot{H}$.[16]

A rigorous sketch of the proof follows directly from this framework, emphasizing the smallness of the coupling under adiabatic conditions. The diagonal term $\langle m|\dot{m}\rangle$ is pure imaginary and can be absorbed into a geometric phase, while the off-diagonal terms drive transitions. Integrating the equation for $c_m$ from $0$ to $\tau$ shows that the boundary contributions at $t = 0$ and $t = \tau$ vanish if the initial condition is an eigenstate and the final projection is onto the evolved basis, with the integral bounded by $O(1/\tau)$ for large $\tau$, assuming bounded $\dot{H}$ and persistent gaps. This establishes that the overlap $|\langle n(\tau)|\psi(\tau)\rangle| \to 1$ as the change slows, rigorously for finite-dimensional or gapped systems.[15]

Alternative proofs include the original formulation by Born and Fock in 1928, which used a series expansion in powers of the slowness parameter to show convergence to the adiabatic limit for analytic Hamiltonians, even near eigenvalue crossings of finite multiplicity.
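The $1/\tau$ suppression of the leakage out of the tracked eigenstate can be checked by integrating the Schrödinger equation in rescaled time for a two-level model. The Hamiltonian and all parameters below are arbitrary choices for illustration, not drawn from any of the cited proofs:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sweep H(s) = (2s-1)*E0*sigma_z + D*sigma_x on s in [0,1],
# with physical time t = tau*s and hbar = 1.
E0, D = 2.0, 1.0
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def H(s):
    return (2*s - 1)*E0*sz + D*sx

def leakage(tau):
    """Probability of leaving the instantaneous ground state after the sweep."""
    _, vecs = np.linalg.eigh(H(0.0))
    psi0 = vecs[:, 0].astype(complex)            # start in the ground state
    rhs = lambda s, psi: -1j*tau*(H(s) @ psi)    # i dpsi/ds = tau H(s) psi
    sol = solve_ivp(rhs, (0.0, 1.0), psi0, rtol=1e-10, atol=1e-10)
    _, vecs = np.linalg.eigh(H(1.0))
    return 1.0 - abs(np.vdot(vecs[:, 0], sol.y[:, -1]))**2

fast, slow = leakage(5.0), leakage(40.0)
print(fast, slow)    # leakage shrinks as the sweep slows (larger tau)
```

Slowing the sweep by a factor of eight reduces the final occupation of the excited branch by orders of magnitude, as the perturbative estimate predicts.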
In 1950, Kato provided a more general proof for self-adjoint operators with isolated eigenvalues, employing time-dependent perturbation theory for spectral projections and demonstrating uniform convergence without assuming analyticity, applicable to unbounded operators under smoothness conditions. Messiah's textbook derivation extends this by integrating the coefficient equations explicitly, showing that boundary terms dominate and vanish in the adiabatic limit, with error estimates tied to the minimal gap and variation rate.[17] These proofs hold under assumptions of non-degenerate spectra with finite gaps; when energy gaps close (e.g., near avoided crossings), the denominators diverge, allowing significant transitions even for slow changes.[15] Similarly, if the variation is not sufficiently slow relative to the inverse gaps, diabatic effects emerge, invalidating the approximation.[1]

Process Distinctions and Conditions
Diabatic vs. Adiabatic Processes
In quantum mechanics, an adiabatic process refers to the slow variation of a system's Hamiltonian over time, such that if the system begins in an instantaneous eigenstate, it remains in the corresponding evolving eigenstate, accumulating only dynamic and geometric phases without significant population transfer to other states. This behavior is governed by the adiabatic theorem, originally formulated by Born and Fock, which ensures negligible non-adiabatic coupling between eigenstates during such evolution.[18] In contrast, a diabatic process arises from rapid changes in the Hamiltonian, causing the system to deviate from the instantaneous eigenstates and undergo transitions, resulting in a superposition of multiple eigenstates. Qualitatively, adiabatic processes minimize these transitions, achieving high fidelity (approximately 1) between the initial and final states in the adiabatic basis, whereas diabatic processes maximize transitions, often leading to excitations and reduced fidelity. In the extreme limit of the sudden approximation, where the Hamiltonian changes instantaneously, the wavefunction remains unchanged in the original basis, but its projection onto the new eigenstates determines the outcome.[18]

The key distinction between these processes lies in the timescale of Hamiltonian variation relative to the inverse of the energy gaps between eigenstates: adiabaticity holds when the evolution time $T$ greatly exceeds $\hbar/\Delta_{\min}$ (where $\Delta_{\min}$ is the minimum energy gap), ensuring the system tracks the eigenstates; diabatic behavior dominates when $T$ is much shorter, as the system cannot adapt quickly enough.[18] Extensions to open quantum systems, where environmental interactions introduce decoherence, modify the adiabatic theorem by requiring the dynamical superoperator to evolve independently for each eigenstate subspace, though decoherence can disrupt state preservation even in slow evolutions.[19]

Conditions for Adiabaticity
The quantitative conditions for a quantum process to remain adiabatic are derived from the time-dependent Schrödinger equation using the instantaneous eigenbasis of the time-varying Hamiltonian $H(t)$. Consider a system starting in the $n$-th instantaneous eigenstate at $t = 0$, with $H(t)|m(t)\rangle = E_m(t)|m(t)\rangle$. The wavefunction is expanded as $|\psi(t)\rangle = \sum_m c_m(t)\, e^{i\theta_m(t)} |m(t)\rangle$, where $\theta_m(t) = -\frac{1}{\hbar}\int_0^t E_m(t')\,dt'$ is the dynamical phase, and the coefficients satisfy $\dot{c}_m = -\sum_l c_l \langle m|\dot{l}\rangle\, e^{i(\theta_l - \theta_m)}$. The non-adiabatic coupling term is $\langle m|\dot{n}\rangle = \langle m|\dot{H}|n\rangle/(E_n - E_m)$ for $m \neq n$. To minimize transitions, the magnitude of this coupling must be small compared to the oscillation frequency induced by the energy difference, leading to the adiabatic condition (in units where $\hbar = 1$)

$$\max_{m \neq n} \frac{|\langle m|\dot{H}|n\rangle|}{\left(E_m - E_n\right)^2} \ll 1,$$

ensuring the probability of remaining in $|n(t)\rangle$ approaches 1 as the process slows.

This condition implies a requirement on the spectral gap $\Delta(t) = \min_{m \neq n} |E_m(t) - E_n(t)|$. For the process to be adiabatic at all times, the gap must remain strictly positive everywhere along the evolution path, preventing significant excitations due to rapid relative changes near close energy levels. In terms of the total evolution time $T$, adiabaticity holds if $T \gg \hbar \max_t |\langle m|\dot{H}|n\rangle|/\Delta_{\min}^2$, where $\Delta_{\min}$ is the smallest gap encountered; this timescale ensures the Hamiltonian varies slowly enough relative to the inverse gap frequency, bounding non-adiabatic errors to order $1/T$.

Extensions of these conditions distinguish local adiabaticity, where the condition holds instantaneously at each $t$, from global adiabaticity, requiring the integrated non-adiabatic amplitude over the full path to remain small, which may necessitate splitting the evolution into segments if local violations occur. Near degeneracies, where $\Delta(t)$ approaches zero, the standard gap condition breaks down, necessitating modified theorems that rely on smooth spectral projections rather than strict gaps to maintain approximate adiabatic following, though the required $T$ scales unfavorably with the closeness of the degeneracy.

Characteristics of Diabatic Passage
In diabatic passages, the Hamiltonian parameters of a quantum system vary rapidly, preventing the system from following the instantaneous eigenstates and leading to non-adiabatic transitions between energy levels.[20] This regime contrasts with adiabatic evolution, where slow changes ensure state preservation.[21] A key feature of highly diabatic processes is the sudden approximation, applicable when the timescale of change satisfies $\tau \ll \hbar/\Delta E$, with $\Delta E$ representing the relevant energy scale, such as the minimum spectral gap or the energy fluctuation of the initial state.[21] Under this condition, the wavefunction remains effectively frozen in the initial basis during the perturbation, resulting in an instantaneous projection onto the final instantaneous eigenstates upon completion of the change.[21] This approximation simplifies calculations of post-transition state distributions but highlights the system's inability to adapt dynamically. Diabatic passages generally induce transition dynamics characterized by partial population transfer from the initial eigenstate to excited or orthogonal states, rather than complete retention in the ground state.[20] Accompanying this transfer is a loss of quantum coherence, as the rapid evolution disrupts phase relationships between basis states, often leading to decoherence-like effects even in closed systems. Representative examples of diabatic regimes include ultrafast laser pulses applied to molecular systems, where femtosecond-scale excitations drive non-adiabatic electron dynamics and populate transient excited states.
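In the sudden limit, the post-quench populations are simply the squared overlaps of the frozen initial state with the new eigenbasis. A minimal sketch with an arbitrary two-level example (initial Hamiltonian $\sigma_x$, quenched instantaneously to $\sigma_z$):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

# Ground state of the initial Hamiltonian H_i = sigma_x
_, vecs_i = np.linalg.eigh(sx)
psi = vecs_i[:, 0]                 # proportional to |0> - |1>

# Sudden quench to H_f = sigma_z: the wavefunction is frozen, and the
# populations are the projections onto the new eigenstates.
_, vecs_f = np.linalg.eigh(sz)
populations = np.abs(vecs_f.T @ psi)**2
print(populations)                 # [0.5, 0.5]: an equal superposition
```

The frozen state is an equal-weight superposition of the new eigenstates, the hallmark of a fully diabatic (sudden) quench.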
Similarly, quench dynamics in condensed matter systems, such as abrupt changes in interaction strength in ultracold atomic gases, exemplify diabatic evolution by generating defects and excitations beyond the ground state manifold.[22] In quantum computing, diabatic control enables fast gate operations, such as single-qubit rotations in superconducting qubits, by intentionally operating in the rapid-change regime to bypass the slowdown required for adiabatic fidelity.[23]

Illustrative Examples
Simple Pendulum
In the quantum mechanical treatment of the simple pendulum, the time-independent Schrödinger equation for the angular coordinate $\theta$ is

$$-\frac{\hbar^2}{2ml^2}\frac{d^2\psi}{d\theta^2} - mgl\cos\theta\, \psi = E\psi,$$

where $m$ is the mass, $l$ is the length, $g$ is the gravitational acceleration, and $\hbar$ is the reduced Planck constant. After the substitution $\theta = 2x$ and the introduction of the dimensionless parameters $a = 8ml^2E/\hbar^2$ (related to the energy) and $q$ (proportional to the potential depth, with $|q| = 4m^2gl^3/\hbar^2$), this reduces to the Mathieu equation

$$\frac{d^2\psi}{dx^2} + (a - 2q\cos 2x)\,\psi = 0,$$

with characteristic values $a_n(q)$ determining the discrete energy eigenvalues $E_n$. The corresponding eigenfunctions are the periodic Mathieu functions: even cosine-elliptic functions $\mathrm{ce}_n$ for even-parity states and sine-elliptic functions $\mathrm{se}_n$ for odd-parity states, labeled by the quantum number $n$. These form a complete basis for the Hilbert space on $\theta \in (-\pi, \pi]$, reflecting the periodic boundary conditions of the pendulum.

When the pendulum length varies slowly with time, $l = l(t)$, the Hamiltonian acquires time dependence through both the kinetic and potential terms, altering the parameter $q$. If the variation rate satisfies $|\dot{l}/l| \ll (E_{n+1} - E_n)/\hbar$, the frequency spacing between adjacent energy levels for low $n$, the process is adiabatic. In this regime, the adiabatic theorem ensures that if the system is prepared in the $n$th instantaneous eigenstate at the initial time, the evolved state remains $|n(t)\rangle$ up to a dynamic and geometric phase, preserving the occupation of the $n$th level. Transitions to other levels are exponentially suppressed, maintaining quantum coherence within the manifold.

This quantum preservation of the quantum number mirrors the classical adiabatic invariant, the action $J = \oint p_\theta\, d\theta$, which remains constant under slow changes in $l$. In the semiclassical limit of large $n$, Bohr–Sommerfeld quantization relates $J = (n + \tfrac{1}{2})h$, so the invariance of $J$ directly corresponds to the fixed $n$, bridging classical and quantum descriptions. For the anharmonic pendulum spectrum, this quantization captures the libration (small oscillations) or rotation (full swings) regimes, with level spacings decreasing nonlinearly as $n$ increases.
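The constancy of the classical invariant can be checked by integrating the small-angle equation of motion for a pendulum whose length is slowly shortened, $\ddot\theta + 2(\dot l/l)\dot\theta + (g/l)\theta = 0$, and comparing $E/\omega$ before and after the ramp. The ramp and parameters below are illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

g, m = 9.81, 1.0
T = 300.0                          # slow shortening over many periods
l0, l1 = 1.0, 0.8                  # initial and final lengths

def length(t):
    return l0 + (l1 - l0)*t/T      # linear, slow ramp of the length

def rhs(t, y):
    th, om = y
    l = length(t)
    ldot = (l1 - l0)/T
    # small-angle EOM for a variable-length pendulum
    return [om, -2*(ldot/l)*om - (g/l)*th]

sol = solve_ivp(rhs, (0.0, T), [0.1, 0.0], rtol=1e-10, atol=1e-12,
                t_eval=[0.0, T])

def invariant(t, th, thdot):
    l = length(t)
    E = 0.5*m*l**2*thdot**2 + 0.5*m*g*l*th**2   # oscillation energy
    return E/np.sqrt(g/l)                       # adiabatic invariant E/omega

I0 = invariant(0.0, *sol.y[:, 0])
I1 = invariant(T, *sol.y[:, -1])
print(I0, I1)     # nearly equal: E/omega is conserved under slow change
```

Although both $E$ and $\omega$ change by several percent over the ramp, their ratio stays fixed to well below a percent, which is exactly Einstein's reply to Lorentz at the Solvay conference.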
Visually, as $l$ decreases slowly, the librational levels crowd toward the shrinking barrier top, since the well depth $2mgl$ decreases; the spacings narrow particularly for higher $n$, where anharmonicity dominates. However, the probability distribution $|\psi_n(\theta)|^2$ remains locked to the $n$th Mathieu function at each instant, avoiding population transfer. This contrasts with sudden changes, where excitations to higher states occur, highlighting the theorem's role in controlled quantum evolution.

Quantum Harmonic Oscillator
The quantum harmonic oscillator provides an exactly solvable model to illustrate the adiabatic theorem when the Hamiltonian varies slowly through a time-dependent frequency. The Hamiltonian is

$$H(t) = \frac{p^2}{2m} + \frac{1}{2}m\omega(t)^2 x^2,$$

where $m$ is the mass, $p$ is the momentum operator, $x$ is the position operator, and $\omega(t)$ is the angular frequency, which varies slowly with time.[24] In the adiabatic regime, where the rate of change $|\dot{\omega}/\omega|$ is much smaller than $\omega$, the instantaneous eigenstates (number states $|n(t)\rangle$ labeled by quantum number $n$) diagonalize $H(t)$ with eigenvalues $E_n(t) = (n + \tfrac{1}{2})\hbar\omega(t)$.[24][25] If the system is initially prepared in an eigenstate $|n(0)\rangle$ at $t = 0$, the adiabatic theorem predicts that it will evolve to remain in the corresponding instantaneous eigenstate $|n(t)\rangle$ at a later time $t$, up to a dynamical phase and a geometric (Berry) phase accumulated due to the slow parameter variation.[24] This evolution preserves the quantum number $n$, which serves as an adiabatic invariant analogous to the classical action variable $E/\omega$ in the limit of slow changes.[24] The theorem holds rigorously to all orders in the slowness parameter for this model, as the transition probabilities between different levels vanish in the adiabatic limit.[24] For pure number states, the adiabatic evolution maintains their purity, with the state remaining a number state of the instantaneous Hamiltonian throughout the process.[24] In contrast, coherent states (superpositions of number states) undergo a more complex transformation under slow frequency variation.
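The instantaneous spectrum $E_n = (n + \tfrac{1}{2})\hbar\omega$ can be reproduced by diagonalizing a finite-difference discretization of $H$ on a grid; the grid and units ($\hbar = m = 1$) below are illustrative choices:

```python
import numpy as np

hbar = m = 1.0
omega = 1.0
N, L = 400, 20.0                    # grid points and box size
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

# Kinetic energy via a three-point second-derivative stencil,
# plus the harmonic potential on the diagonal.
D2 = (np.diag(-2.0*np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dx**2
H = -(hbar**2/(2*m))*D2 + np.diag(0.5*m*omega**2*x**2)

E = np.linalg.eigvalsh(H)[:5]
print(E)    # close to [0.5, 1.5, 2.5, 3.5, 4.5]
```

The lowest eigenvalues match the equally spaced ladder $(n + \tfrac{1}{2})\hbar\omega$ to the accuracy of the stencil, and the same matrix can be rebuilt with any instantaneous $\omega(t)$ to obtain the instantaneous eigenbasis used throughout this section.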
A slow change in $\omega(t)$ induces squeezing in the coherent state, displacing it in phase space while preserving its overall coherence properties relative to the instantaneous basis; however, the state evolves into a squeezed coherent state when viewed in the original basis.[25][26] This squeezing arises from the Bogoliubov transformation connecting the creation and annihilation operators at different times, but the adiabatic approximation ensures minimal population transfer to other number states.[25] A special case is the linear frequency ramp, where $\omega(t)$ varies linearly from an initial value $\omega_i$ to a final value $\omega_f$ over time $T$, such that $\omega(t) = \omega_i + (\omega_f - \omega_i)\,t/T$. In this scenario, the exact evolution can be computed using invariant operators, revealing that the phase accumulation includes both the dynamical component, proportional to the integral of $\omega(t)$, and a geometric phase that depends on the path in parameter space.[24] For sufficiently slow ramps (large $T$), the fidelity to the target state approaches unity, confirming adiabatic following without excitations.[25]

Avoided Level Crossings
In quantum mechanics, avoided level crossings arise in the adiabatic theorem when two nearly degenerate energy levels of a time-dependent Hamiltonian approach each other but repel due to off-diagonal coupling, preventing an actual degeneracy except at specific parameter values. This phenomenon is central to understanding transitions near near-degeneracies, where the adiabatic approximation may break down if the passage is not sufficiently slow.[27] A canonical example is the two-level system described by the Hamiltonian

$$H(t) = \frac{\varepsilon(t)}{2}\,\sigma_z + \Delta\,\sigma_x,$$

where $\sigma_z$ and $\sigma_x$ are Pauli matrices, $\Delta$ is a constant coupling strength, and $\varepsilon(t)$ varies slowly through zero. The instantaneous eigenvalues are $E_\pm(t) = \pm\tfrac{1}{2}\sqrt{\varepsilon(t)^2 + 4\Delta^2}$, forming hyperbolic branches that avoid crossing at $\varepsilon = 0$ with a minimum energy gap of $2\Delta$. In the adiabatic limit, a slow passage through the crossing causes the system to follow one continuous energy branch, effectively switching the labels of the instantaneous eigenstates at the avoidance point.[27] When the parameters trace a closed loop encircling the degeneracy point in parameter space, the adiabatic evolution imparts a Berry phase of $\pi$ to the wavefunction, manifesting as a sign change and influencing interference effects in cyclic processes.[28] In molecular physics, avoided level crossings appear as conical intersections, where electronic potential energy surfaces touch at a point, as seen in the Jahn-Teller effect for systems with degenerate ground states coupled to vibrational modes, leading to distortion and observable geometric phases in photodissociation and spectroscopy.[29] If the passage through such crossings is rapid, diabatic transitions between levels can occur, bypassing adiabatic following.[27]

Applications and Quantitative Methods
Key Applications
The adiabatic theorem underpins adiabatic quantum computation, a paradigm for solving optimization problems by evolving a quantum system slowly from an initial Hamiltonian with a known ground state to a final Hamiltonian encoding the problem instance, ensuring the system remains in the ground state if the evolution is sufficiently gradual. This approach was formalized by Farhi et al., who demonstrated its potential for NP-complete problems like satisfiability through adiabatic evolution of the quantum state. Commercial implementations, such as D-Wave's quantum annealers, apply this principle to real-world optimization tasks including portfolio management, machine learning, and materials simulation by leveraging superconducting qubits in slowly varying magnetic fields to minimize energy landscapes.[30] In atomic and molecular physics, the theorem enables stimulated Raman adiabatic passage (STIRAP), a coherent technique for transferring population between quantum states without populating intermediate levels, thereby suppressing spontaneous emission losses. Introduced by Gaubatz et al., STIRAP uses counter-intuitive pulse sequences—where the Stokes pulse precedes the pump pulse—to follow a dark state that decouples the system from the excited state, achieving near-unity transfer efficiencies in systems like alkali atoms and molecules. This method has become essential for quantum state preparation in cold atom experiments, Bose-Einstein condensate manipulation, and precision spectroscopy, with extensions to fractional STIRAP for multi-level systems.[31] In condensed matter physics, adiabatic pumping exploits the theorem to achieve quantized charge or spin transport in mesoscopic systems without net voltage bias, by cyclically varying system parameters like gate voltages or fluxes. 
Seminal work by Thouless established that the pumped charge per cycle is an integer multiple of the elementary charge, determined topologically by the Chern number of the system's band structure, robust against disorder in the adiabatic limit. This phenomenon manifests in topological insulators and quantum Hall systems, where slow parameter sweeps induce directional particle flow, as demonstrated in experiments with optical lattices and semiconductor nanowires for applications in robust quantum transport and metrology.[32][33] The adiabatic theorem also informs techniques in nuclear magnetic resonance (NMR) spectroscopy, where adiabatic pulses (frequency- and amplitude-modulated radiofrequency fields) ensure uniform spin manipulation insensitive to magnetic field inhomogeneities or offsets. As reviewed by Tannús and Garwood, these pulses, such as hyperbolic secant or BIR-4 designs, follow adiabatic rapid passage principles to achieve broadband inversion or refocusing, enhancing signal quality in high-resolution and in vivo NMR studies of biomolecules and materials. In molecular dynamics simulations, adiabatic bias methods apply the theorem to drive conformational changes across energy barriers by introducing a time-dependent biasing potential that evolves slowly, maintaining the system near equilibrium paths. Developed by Marchi and Ballone, this approach computes free-energy profiles for rare events like protein folding or ligand binding, offering computational efficiency over unbiased simulations while preserving statistical accuracy for complex biomolecular systems.[35]

Adiabatic Passage Probabilities
The probability of successful adiabatic passage in quantum systems is determined by the likelihood that the system remains in the instantaneous eigenstate of the time-dependent Hamiltonian throughout the evolution, versus the probability of non-adiabatic transitions to other eigenstates. In the adiabatic basis, the time evolution of the state coefficients satisfies coupled differential equations in which the off-diagonal elements represent non-adiabatic couplings. For a system starting in eigenstate $|n\rangle$, the first-order transition amplitude to another eigenstate $|m\rangle$ (with $m \neq n$) is given by the integral expression derived from time-dependent perturbation theory in the adiabatic frame:

$$c_m(T) \approx -\int_0^T \langle m(t)|\dot{n}(t)\rangle\, e^{i[\theta_n(t) - \theta_m(t)] + i[\gamma_n(t) - \gamma_m(t)]}\, dt,$$

where $\langle m|\dot{n}\rangle = \langle m|\dot{H}|n\rangle/(E_n - E_m)$, the exponential includes the dynamical phase from the energy differences and the geometric (Berry) phase difference, and $T$ is the total evolution time.[36] The corresponding transition probability is then $P_{n \to m} = |c_m(T)|^2$, which quantifies the deviation from perfect adiabatic following and is typically small under adiabatic conditions.[36]

When the Hamiltonian varies slowly, parameterized by a small dimensionless rate (such as $1/T$ for total time $T$), the adiabatic approximation holds to leading order, but finite $T$ introduces small non-adiabatic corrections of order $1/T$ in the amplitudes. These corrections arise from the non-zero $\langle m|\dot{n}\rangle$, which scales with $\dot{H}$ times the inverse energy gap, ensuring that transition probabilities remain suppressed, $P_{n \to m} = O(1/T^2)$, as long as the adiabaticity condition is satisfied globally.[37] Near points of small energy gaps, such as avoided crossings, these corrections can become locally significant, potentially increasing transition risks, though the overall passage success depends on the integrated effect.[37] In multi-level systems, adiabatic passage enables coherent population transfer along a chain of coupled states, where the system follows a dark state (a superposition decoupled from lossy intermediate levels) to achieve near-unity transfer efficiency without populating excited states.
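The dark state underlying such transfer can be verified directly for a three-level Λ system in the rotating-wave approximation: on two-photon resonance with pump and Stokes Rabi frequencies $\Omega_p$, $\Omega_s$, the superposition $(\Omega_s|1\rangle - \Omega_p|3\rangle)/\sqrt{\Omega_p^2 + \Omega_s^2}$ is an exact zero-energy eigenstate with no amplitude on the lossy intermediate level. A sketch with arbitrary Rabi frequencies and detuning:

```python
import numpy as np

Op, Os = 0.7, 1.3        # pump and Stokes Rabi frequencies (arbitrary)
delta = 0.5              # one-photon detuning of the intermediate level

# Lambda-system RWA Hamiltonian in the basis (|1>, |2>, |3>),
# with |2> the (lossy) intermediate excited state
H = 0.5*np.array([[0.0, Op,      0.0],
                  [Op,  2*delta, Os ],
                  [0.0, Os,      0.0]])

dark = np.array([Os, 0.0, -Op]) / np.hypot(Op, Os)
print(H @ dark)          # zero vector: an exact zero-energy eigenstate
```

Because the dark state has no component on level 2 for any ratio $\Omega_p/\Omega_s$, slowly rotating that ratio from 0 to ∞ (the counter-intuitive pulse order) carries the population from level 1 to level 3 without ever populating the lossy intermediate state.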
This is exemplified in techniques like stimulated Raman adiabatic passage (STIRAP), where counter-intuitive pulse sequences drive the population from an initial ground state to a target state via multiple intermediate levels, with transition probabilities to unwanted states minimized by maintaining adiabaticity throughout the chain. The general framework extends naturally, with off-diagonal couplings between consecutive states in the chain determining the fidelity of the transfer. Recent advancements post-2020 have employed machine learning to optimize the design of adiabatic passages, particularly for complex Hamiltonians in quantum state preparation, by parameterizing control fields and using neural networks to minimize non-adiabatic losses, with optimization costs that scale logarithmically in the evolution depth.[38]

The Landau–Zener Formula
The Landau–Zener model provides an exact analytic solution for the transition probability in a two-level quantum system driven linearly through an avoided energy level crossing, serving as a cornerstone for understanding nonadiabatic dynamics in the adiabatic theorem. The system's Hamiltonian is

$$H(t) = \frac{vt}{2}\,\sigma_z + \Delta\,\sigma_x,$$

where $v$ is the constant sweep rate controlling the time variation of the diagonal energy difference, $\Delta$ is the constant off-diagonal coupling that sets the minimum gap $2\Delta$ between the adiabatic energy levels at $t = 0$, and $\sigma_z$, $\sigma_x$ are the Pauli matrices acting on the two diabatic basis states.[39] Assuming the system starts in the ground state at $t \to -\infty$, the probability that it ends in the excited level at $t \to +\infty$ (equivalently, that it remains in its initial diabatic basis state, jumping between the adiabatic branches) is given by the Landau–Zener formula:

$$P_{\mathrm{LZ}} = \exp\!\left(-\frac{2\pi\Delta^2}{\hbar v}\right).$$

This result was derived exactly by solving the time-dependent Schrödinger equation using parabolic cylinder functions, though an equivalent approximate form arises from the WKB (semiclassical) method applied to the adiabatic approximation breakdown near the crossing.[39] The formula highlights the transition from adiabatic to diabatic behavior as a function of the sweep rate $v$: for slow sweeps, where $\hbar v \ll \Delta^2$ (large adiabatic parameter $\Delta^2/\hbar v$), $P_{\mathrm{LZ}} \to 0$, so the system remains in the instantaneous adiabatic ground state with high fidelity; conversely, for fast sweeps, where $\hbar v \gg \Delta^2$ (small adiabatic parameter), $P_{\mathrm{LZ}} \to 1$, and the system follows the diabatic state, impulsively crossing without transitioning between adiabatic branches.[39] The formula originated from independent works in 1932 by Lev Davidovich Landau, who applied it to atomic collision processes, and Clarence Zener, who considered it in the context of molecular potential curve crossings.[40] Extensions of the Landau–Zener formula address more complex scenarios, such as multi-level systems or multiple sequential avoided crossings, often using the independent crossing approximation, where transition probabilities at each crossing multiply under weak inter-crossing interference.[41]

Numerical Calculation Approaches
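The Landau–Zener formula of the preceding section is a standard benchmark for time-dependent solvers: integrating the two-level Schrödinger equation across the crossing and comparing the final diabatic survival probability with $\exp(-2\pi\Delta^2/\hbar v)$. A sketch with $\hbar = 1$ and arbitrary sweep parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

v, D = 1.0, 0.5                    # sweep rate and coupling (hbar = 1)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def rhs(t, psi):
    H = 0.5*v*t*sz + D*sx          # H(t) = (v t / 2) sigma_z + Delta sigma_x
    return -1j*(H @ psi)

T = 100.0                          # start and end far from the crossing at t = 0
psi0 = np.array([1, 0], dtype=complex)   # initial diabatic basis state
sol = solve_ivp(rhs, (-T, T), psi0, rtol=1e-9, atol=1e-9)

P_num = abs(sol.y[0, -1])**2       # probability of staying in the diabatic state
P_LZ = np.exp(-2*np.pi*D**2/v)     # Landau-Zener prediction, about 0.208
print(P_num, P_LZ)
```

The numerically integrated survival probability agrees with the closed-form prediction up to small finite-time oscillations that decay as the integration window grows, which is how the benchmark mentioned under numerical approaches is typically run.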
Numerical approaches to simulating adiabatic and diabatic dynamics often rely on solving the time-dependent Schrödinger equation (TDSE) for systems where analytic solutions are unavailable. The split-operator method, introduced by Feit, Fleck, and Steiger in 1982, propagates the wavefunction by decomposing the evolution operator into kinetic and potential energy components, enabling efficient Fourier transform-based computations for wavepacket dynamics in atomic and molecular systems. This technique preserves unitarity and is particularly suited for studying adiabatic passages in one- and multi-dimensional potentials, with applications in laser-driven processes where high accuracy is achieved over long times. Complementing this, the Crank-Nicolson method provides an implicit, unitary scheme for discretizing the TDSE on a spatial grid, offering second-order accuracy in time and unconditional stability, which is advantageous for simulating nonadiabatic transitions near adiabatic limits in quantum chemical reactions.[42][43] For periodically driven systems, Floquet theory extends the adiabatic theorem by incorporating quasi-energy states, which diagonalize the time-evolution operator over one period and allow tracking of adiabatic following under high-frequency driving. This framework identifies conditions for adiabaticity based on quasi-energy spacings and driving frequency, enabling numerical simulations of stroboscopic evolution where the system remains close to instantaneous Floquet eigenstates despite periodic perturbations. Computational implementations involve diagonalizing the Floquet Hamiltonian to compute quasi-energies and monitor transitions, as demonstrated in driven two-level systems like the Schwinger-Rabi model. In many-body systems, where direct TDSE solution becomes intractable, Monte Carlo methods adapted for adiabatic evolution provide stochastic sampling of ground-state properties during slow parameter changes. 
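A minimal split-operator propagation in the spirit described above (half-step potential, Fourier-space kinetic step) can demonstrate adiabatic following for a harmonic trap whose frequency is slowly ramped; the grid, ramp, and units ($\hbar = m = 1$) are illustrative choices, not taken from the cited implementations:

```python
import numpy as np

hbar = m = 1.0
N, L = 256, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L/N
k = 2*np.pi*np.fft.fftfreq(N, d=dx)            # momentum grid

T, dt = 100.0, 0.01                            # slow ramp, small time step
w_i, w_f = 1.0, 1.5

def omega(t):
    return w_i + (w_f - w_i)*t/T               # linear frequency ramp

def ground(w):
    g = np.exp(-0.5*w*x**2)                    # Gaussian ground state
    return g/np.linalg.norm(g)

psi = ground(w_i).astype(complex)              # start in the ground state
expK = np.exp(-1j*dt*hbar*k**2/(2*m))          # full kinetic step in k-space

t = 0.0
for _ in range(int(T/dt)):
    V = 0.5*m*omega(t + dt/2)**2*x**2          # midpoint potential
    expV = np.exp(-1j*V*dt/(2*hbar))           # half-step potential factor
    psi = expV*np.fft.ifft(expK*np.fft.fft(expV*psi))   # V/2 -> K -> V/2
    t += dt

fidelity = abs(np.vdot(ground(w_f), psi))**2
print(fidelity)    # close to 1: the state tracked the instantaneous ground state
```

Each step is unitary (pure phase factors and FFT pairs), so the norm is preserved to machine precision, and the final overlap with the instantaneous ground state stays near unity because the ramp rate $\dot\omega/\omega^2$ is small throughout.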
The adiabatic quantum Monte Carlo (AQMC) algorithm, proposed in 2021, mitigates the fermion sign problem by gradually increasing interactions, yielding variational upper bounds on energies with exponential improvement in average sign for models like the Hubbard lattice, achieving accuracy comparable to exact diagonalization for doped systems up to dozens of sites. Tensor network methods, such as matrix product states (MPS) and projected entangled pair states (PEPS), simulate adiabatic preparation by evolving frustration-free Hamiltonians along gapped paths, with time-dependent variational principles (TDVP) enabling efficient computation for one-dimensional chains up to 5000 sites and two-dimensional lattices up to 10×10, reaching fidelities above 0.99 in polylogarithmic times. These approaches validate against benchmarks like Landau-Zener transitions for transition probabilities.[44][45][46] Quantum optimal control techniques further enhance numerical simulations by engineering adiabatic paths to minimize diabatic errors. The GRAPE (Gradient Ascent Pulse Engineering) algorithm, developed by Khaneja et al. in 2005, optimizes control pulses via gradient-based iterations on the TDSE fidelity, applied to counteract transitions at avoided crossings by shortening evolution times while maintaining near-unitary adiabatic fidelity, as shown in two-level systems where gate times are reduced by factors of 10 compared to linear ramps. This method has been extended to many-body contexts for robust state transfer in quantum information processing.[47]

References
- https://doi.org/10.1002/(SICI)1099-1492(199712)10:8<423::AID-NBM488>3.0.CO;2-X
