Quantum electrodynamics
from Wikipedia

In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics.[1][2][3] In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved.[2] QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons and represents the quantum counterpart of classical electromagnetism giving a complete account of matter and light interaction.[2][3]

In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.[2]: Ch1  It is the most precise and stringently tested theory in physics.[4][5]

History

Paul Dirac

The first formulation of a quantum theory describing radiation and matter interaction is attributed to Paul Dirac, who during the 1920s computed the coefficient of spontaneous emission of an atom.[6] He is credited with coining the term "quantum electrodynamics".[7]

Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, Werner Heisenberg and Enrico Fermi,[8] physicists came to believe that, in principle, it was possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck,[9] and Victor Weisskopf,[10] in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer.[11] At higher orders in the series infinities emerged, making such computations meaningless and casting doubt on the theory's internal consistency. This suggested that special relativity and quantum mechanics were fundamentally incompatible.

Hans Bethe

Difficulties increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the energy levels of the hydrogen atom,[12] later known as the Lamb shift, and of the magnetic moment of the electron.[13] These experiments exposed discrepancies that the theory was unable to explain.

A first indication of a possible solution was given by Hans Bethe in 1947.[14][15] He made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Willis Lamb and Robert Retherford.[14] Despite limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result with good experimental agreement. This procedure was named renormalization.

Feynman (center) and Oppenheimer (right) at Los Alamos.

Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga,[16] Julian Schwinger,[17][18] Richard Feynman[1][19][20] and Freeman Dyson,[21][22] it was finally possible to produce fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Tomonaga, Schwinger, and Feynman were jointly awarded the 1965 Nobel Prize in Physics for their work in this area.[23] Their contributions, and Dyson's, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed unlike the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Dyson later showed that the two approaches were equivalent.[21] Renormalization, the need to attach a physical meaning to certain divergences appearing in the theory through integrals, became one of the fundamental aspects of quantum field theory and is seen as a criterion for a theory's general acceptability. Even though renormalization works well in practice, Feynman was never entirely comfortable with its mathematical validity, referring to renormalization as a "shell game" and "hocus pocus".[2]: 128

Neither Feynman nor Dirac was happy with that way of approaching the observations made in theoretical physics, above all in quantum mechanics.[24]

QED is the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1970s, developed by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on Schwinger's pioneering work, Gerald Guralnik, Dick Hagen, and Tom Kibble,[25][26] Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force.

Feynman's view of quantum electrodynamics


Introduction


Near the end of his life, Richard Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The Strange Theory of Light and Matter,[2] a classic non-mathematical exposition of QED from the point of view articulated below.

The key components of Feynman's presentation of QED are three basic actions.[2]: 85 

A photon goes from one place and time to another place and time.
An electron goes from one place and time to another place and time.
An electron emits or absorbs a photon at a certain place and time.
Feynman diagram elements

These actions are represented in the form of visual shorthand by the three basic elements of diagrams: a wavy line for the photon, a straight line for the electron and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron. These can all be seen in the adjacent diagram.

As well as the visual shorthand for the actions, Feynman introduces another kind of shorthand for the numerical quantities called probability amplitudes. The probability is the square of the absolute value of the total probability amplitude, $\text{probability} = |f(\text{amplitude})|^2$. If a photon moves from one place and time A to another place and time B, the associated quantity is written in Feynman's shorthand as P(A to B), and it depends on only the momentum and polarization of the photon. The similar quantity for an electron moving from C to D is written E(C to D). It depends on the momentum and polarization of the electron, in addition to a constant Feynman calls n, sometimes called the "bare" mass of the electron: it is related to, but not the same as, the measured electron mass. Finally, the quantity that tells us about the probability amplitude for an electron to emit or absorb a photon Feynman calls j, sometimes called the "bare" charge of the electron: it is a constant, and is related to, but not the same as, the measured electron charge e.[2]: 91

QED is based on the assumption that complex interactions of many electrons and photons can be represented by fitting together a suitable collection of the above three building blocks and then using the probability amplitudes to calculate the probability of any such complex interaction. It turns out that the basic idea of QED can be communicated while assuming that the square of the total of the probability amplitudes mentioned above (P(A to B), E(C to D) and j) acts just like our everyday probability (a simplification made in Feynman's book). Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman.

The basic rules of probability amplitudes that will be used are:[2]: 93

  (a) If an event can occur via a number of indistinguishable alternative processes (also called "virtual" processes), then its probability amplitude is the sum of the probability amplitudes of the alternatives.
  (b) If a virtual process involves a number of independent or concomitant sub-processes, then the probability amplitude of the total (compound) process is the product of the probability amplitudes of the sub-processes.

The indistinguishability criterion in (a) is very important: it means that there is no observable feature present in the given system that in any way "reveals" which alternative is taken. In such a case, one cannot observe which alternative actually takes place without changing the experimental setup in some way (e.g. by introducing a new apparatus into the system). Whenever one is able to observe which alternative takes place, one always finds that the probability of the event is the sum of the probabilities of the alternatives. Indeed, if this were not the case, the very term "alternatives" to describe these processes would be inappropriate. What (a) says is that once the physical means for observing which alternative occurred is removed, one cannot still say that the event is occurring through "exactly one of the alternatives" in the sense of adding probabilities; one must add the amplitudes instead.[2]: 82 

Similarly, the independence criterion in (b) is very important: it only applies to processes which are not "entangled".
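To make rules (a) and (b) concrete, here is a minimal Python sketch with made-up complex amplitudes (illustrative numbers only, not real QED quantities): multiply within a virtual process, add across indistinguishable alternatives, and square the total only at the end.

```python
# Toy illustration of amplitude rules (a) and (b); all numbers are made up.

# Rule (b): within one virtual process, amplitudes of sub-processes multiply.
amp_alternative_1 = (0.3 + 0.4j) * (0.5 - 0.2j)
amp_alternative_2 = (0.1 - 0.6j) * (0.4 + 0.3j)

# Rule (a): amplitudes of indistinguishable alternatives add.
total_amplitude = amp_alternative_1 + amp_alternative_2

# Only the total amplitude is squared to give the observable probability.
probability = abs(total_amplitude) ** 2
print(probability)

# If the alternatives were distinguishable, the probabilities would add instead:
classical = abs(amp_alternative_1) ** 2 + abs(amp_alternative_2) ** 2
print(classical)   # differs from `probability` by the interference term
```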

Basic constructions


Suppose we start with one electron at a certain place and time (this place and time being given the arbitrary label A) and a photon at another place and time (given the label B). A typical question from a physical standpoint is: "What is the probability of finding an electron at C (another place and a later time) and a photon at D (yet another place and time)?". The simplest process to achieve this end is for the electron to move from A to C (an elementary action) and for the photon to move from B to D (another elementary action). From a knowledge of the probability amplitudes of each of these sub-processes – E(A to C) and P(B to D) – we would expect to calculate the probability amplitude of both happening together by multiplying them, using rule b) above. This gives a simple estimated overall probability amplitude, which is squared to give an estimated probability.[citation needed]

Compton scattering

But there are other ways in which the result could come about. The electron might move to a place and time E, where it absorbs the photon; then move on before emitting another photon at F; then move on to C, where it is detected, while the new photon moves on to D. The probability of this complex process can again be calculated by knowing the probability amplitudes of each of the individual actions: three electron actions, two photon actions and two vertexes – one emission and one absorption. We would expect to find the total probability amplitude by multiplying the probability amplitudes of each of the actions, for any chosen positions of E and F. We then, using rule a) above, have to add up all these probability amplitudes for all the alternatives for E and F. (This is not elementary in practice and involves integration.) But there is another possibility, which is that the electron first moves to G, where it emits a photon, which goes on to D, while the electron moves on to H, where it absorbs the first photon, before moving on to C. Again, we can calculate the probability amplitude of these possibilities (for all points G and H). We then have a better estimation for the total probability amplitude by adding the probability amplitudes of these two possibilities to our original simple estimate. Incidentally, the name given to this process of a photon interacting with an electron in this way is Compton scattering.[citation needed]
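A toy numerical version of this sum over intermediate points, with hypothetical amplitude functions E and P standing in for the real propagators and a made-up coupling j, might look like the following sketch:

```python
import cmath
import itertools

j = 0.1  # toy coupling standing in for Feynman's j (made-up value)

def E(x, y):
    """Toy stand-in for the electron amplitude E(x to y); not the real propagator."""
    d = sum((a - b) ** 2 for a, b in zip(x, y)) + 1.0
    return cmath.exp(1j * d) / d

def P(x, y):
    """Toy stand-in for the photon amplitude P(x to y)."""
    d = sum((a - b) ** 2 for a, b in zip(x, y)) + 1.0
    return cmath.exp(2j * d) / d

A, B, C, D = (0, 0), (1, 0), (0, 3), (1, 3)
grid = [(x, t) for x in range(2) for t in range(4)]  # crude spacetime lattice

# Zeroth estimate: electron goes A -> C, photon goes B -> D (rule (b): multiply).
amplitude = E(A, C) * P(B, D)

# First correction: sum over all intermediate points Ept, Fpt where the photon
# is absorbed and a new one is emitted (two vertices, factor j each; rule (a): add).
for Ept, Fpt in itertools.product(grid, repeat=2):
    amplitude += (j ** 2) * E(A, Ept) * P(B, Ept) * E(Ept, Fpt) * E(Fpt, C) * P(Fpt, D)

print(abs(amplitude) ** 2)  # estimated probability in this toy model
```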

An infinite number of other intermediate "virtual" processes exist in which photons are absorbed or emitted. For each of these processes, a Feynman diagram could be drawn describing it. This implies a complex computation for the resulting probability amplitudes, but provided it is the case that the more complicated the diagram, the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as one wants to the original question. This is the basic approach of QED. To calculate the probability of any interactive process between electrons and photons, it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves some calculation involving definite rules to find the associated probability amplitude.

That basic scaffolding remains when one moves to a quantum description, but some conceptual changes are needed. One is that whereas we might expect in our everyday life that there would be some constraints on the points to which a particle can move, that is not true in full quantum electrodynamics. There is a nonzero probability amplitude of an electron at A, or a photon at B, moving as a basic action to any other place and time in the universe. That includes places that could only be reached at speeds greater than that of light and also earlier times. (An electron moving backwards in time can be viewed as a positron moving forward in time.)[2]: 89, 98–99 

Probability amplitudes

Feynman replaces complex numbers with spinning arrows, which start at emission and end at detection of a particle. The sum of all resulting arrows gives a final arrow whose length squared equals the probability of the event. In this diagram, light emitted by the source S can reach the detector at P by bouncing off the mirror (in blue) at various points. Each one of the paths has an arrow associated with it (whose direction changes uniformly with the time taken for the light to traverse the path). To correctly calculate the total probability for light to reach P starting at S, one needs to sum the arrows for all such paths. The graph below depicts the total time spent to traverse each of the paths above.

Quantum mechanics introduces an important change in the way probabilities are computed. Probabilities are still represented by the usual real numbers we use for probabilities in our everyday world, but probabilities are computed as the square modulus of probability amplitudes, which are complex numbers.

Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams, which are simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude arrows are fundamental to the description of the world given by quantum theory. They are related to our everyday ideas of probability by the simple rule that the probability of an event is the square of the length of the corresponding amplitude arrow. So, for a given process, if two probability amplitudes, v and w, are involved, the probability of the process will be given either by

$$P = |v + w|^2$$

or

$$P = |v \times w|^2.$$
The rules as regards adding or multiplying, however, are the same as above. But where you would expect to add or multiply probabilities, instead you add or multiply probability amplitudes that now are complex numbers.

Addition of probability amplitudes as complex numbers
Multiplication of probability amplitudes as complex numbers

Addition and multiplication are common operations in the theory of complex numbers and are given in the figures. The sum is found as follows. Let the start of the second arrow be at the end of the first. The sum is then a third arrow that goes directly from the beginning of the first to the end of the second. The product of two arrows is an arrow whose length is the product of the two lengths. The direction of the product is found by adding the angles that each of the two have been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction.
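The mirror example described in the caption above can be mimicked numerically. In this sketch (arbitrary units and a made-up geometry), each path's travel time sets the angle of its arrow, and the arrows are summed:

```python
import cmath
import math

S, P = (-5.0, 3.0), (5.0, 3.0)      # source and detector above the mirror (y = 0)
omega = 40.0                         # arrow rotation rate (arbitrary units)

def travel_time(x):
    """Time for light to go S -> mirror point (x, 0) -> P, at unit speed."""
    return math.dist(S, (x, 0.0)) + math.dist((x, 0.0), P)

# One arrow per mirror point; its angle turns uniformly with travel time.
arrows = [cmath.exp(1j * omega * travel_time(x))
          for x in [i * 0.05 - 5.0 for i in range(201)]]

total = sum(arrows)
print(abs(total) ** 2)   # length squared of the final arrow ~ probability

# Arrows from the mirror's center (stationary travel time) point the same
# way and reinforce; arrows from the edges spin rapidly and cancel.
```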

That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough because it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore, P(A to B) consists of 16 complex numbers, or probability amplitude arrows.[2]: 120–121  There are also some minor changes to do with the quantity j, which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping.

Associated with the fact that the electron can be polarized is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we exchange two electron events, the resulting amplitude is the reverse – the negative – of the first. The simplest case would be two electrons starting at A and B ending at C and D. The amplitude would be calculated as the "difference", E(A to D) × E(B to C) − E(A to C) × E(B to D), where we would expect, from our everyday idea of probabilities, that it would be a sum.[2]: 112–113 
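In toy form, with a hypothetical amplitude function standing in for E, the Fermi–Dirac rule is a two-term antisymmetrized product whose sign flips if the two detection events are exchanged:

```python
import cmath

def E(x, y):
    """Hypothetical stand-in for E(x to y); not the real electron propagator."""
    d = sum((a - b) ** 2 for a, b in zip(x, y)) + 1.0
    return cmath.exp(1j * d) / d

A, B, C, D = (0, 0), (1, 0), (0, 3), (1, 3)

# Fermi-Dirac statistics: the diagram with the two electron events exchanged
# enters with a minus sign, so the two-electron amplitude is a "difference".
amp_direct   = E(A, D) * E(B, C)
amp_exchange = E(A, C) * E(B, D)
print(abs(amp_direct - amp_exchange) ** 2)
```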

Propagators


Finally, one has to compute P(A to B) and E(C to D) corresponding to the probability amplitudes for the photon and the electron respectively. These are essentially the solutions of the Dirac equation, which describes the behavior of the electron's probability amplitude, and of the Maxwell equations, which describe the behavior of the photon's probability amplitude. These are called Feynman propagators. The translation to a notation commonly used in the standard literature is as follows:

$$P(A \text{ to } B) \to D_F(x_B - x_A), \qquad E(C \text{ to } D) \to S_F(x_D - x_C),$$

where a shorthand symbol such as $x_A$ stands for the four real numbers that give the time and position in three dimensions of the point labeled A.

Mass renormalization

Electron self-energy loop

A problem arose historically which held up progress for twenty years: although we start with the assumption of three basic "simple" actions, the rules of the game say that if we want to calculate the probability amplitude for an electron to get from A to B, we must take into account all the possible ways: all possible Feynman diagrams with those endpoints. Thus there will be a way in which the electron travels to C, emits a photon there and then absorbs it again at D before moving on to B. Or it could do this kind of thing twice, or more. In short, we have a fractal-like situation in which if we look closely at a line, it breaks up into a collection of "simple" lines, each of which, if looked at closely, are in turn composed of "simple" lines, and so on ad infinitum. This is a challenging situation to handle. If adding that detail only altered things slightly, then it would not have been too bad, but disaster struck when it was found that the simple correction mentioned above led to infinite probability amplitudes. In time this problem was "fixed" by the technique of renormalization. However, Feynman himself remained unhappy about it, calling it a "dippy process",[2]: 128  and Dirac also criticized this procedure, saying "in mathematics one does not get rid of infinities when it does not please you".[24]

Conclusions


Within the above framework physicists were then able to calculate to a high degree of accuracy some of the properties of electrons, such as the anomalous magnetic dipole moment. However, as Feynman points out, it fails to explain why particles such as the electron have the masses they do. "There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don't understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem."[2]: 152 

Mathematical formulation


QED action


Mathematically, QED is an abelian gauge theory with the symmetry group U(1), defined on Minkowski space (flat spacetime). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field. The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field in natural units gives rise to the action[27]: 78

$$S_\text{QED} = \int d^4x \left[ -\frac{1}{4} F^{\mu\nu} F_{\mu\nu} + \bar\psi\,(i\gamma^\mu D_\mu - m)\,\psi \right]$$

where

  • $\gamma^\mu$ are Dirac matrices.
  • $\psi$ is a bispinor field of spin-1/2 particles (e.g. the electron–positron field).
  • $\bar\psi \equiv \psi^\dagger \gamma^0$, called "psi-bar", is sometimes referred to as the Dirac adjoint.
  • $D_\mu \equiv \partial_\mu + ieA_\mu + ieB_\mu$ is the gauge covariant derivative.
    • $e$ is the coupling constant, equal to the electric charge of the bispinor field.
    • $A_\mu$ is the covariant four-potential of the electromagnetic field generated by the electron itself. It is also known as a gauge field or a connection.
    • $B_\mu$ is the external field imposed by an external source.
  • $m$ is the mass of the electron or positron.
  • $F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$ is the electromagnetic field tensor. This is also known as the curvature of the gauge field.

Expanding the covariant derivative reveals a second useful form of the Lagrangian (external field $B_\mu$ set to zero for simplicity)

$$\mathcal{L} = \bar\psi\,(i\gamma^\mu \partial_\mu - m)\,\psi - \frac{1}{4} F_{\mu\nu} F^{\mu\nu} - e j^\mu A_\mu,$$

where $j^\mu$ is the conserved current arising from Noether's theorem. It is written

$$j^\mu = \bar\psi \gamma^\mu \psi.$$

Equations of motion


Expanding the covariant derivative in the Lagrangian gives

$$\mathcal{L} = i\bar\psi\gamma^\mu\partial_\mu\psi - e\bar\psi\gamma^\mu(A_\mu + B_\mu)\psi - m\bar\psi\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}.$$

For simplicity, $B_\mu$ has been set to zero, with no loss of generality. Alternatively, we can absorb $B_\mu$ into a new gauge field $A'_\mu = A_\mu + B_\mu$ and relabel the new field as $A_\mu$.

From this Lagrangian, the equations of motion for the $\psi$ and $A_\mu$ fields can be obtained.

Equation of motion for ψ


These arise most straightforwardly by considering the Euler–Lagrange equation for $\bar\psi$. Since the Lagrangian contains no $\partial_\mu\bar\psi$ terms, we immediately get

$$\frac{\partial\mathcal{L}}{\partial\bar\psi} = (i\gamma^\mu\partial_\mu - m)\psi - e\gamma^\mu A_\mu\psi = 0,$$

so the equation of motion can be written

$$(i\gamma^\mu\partial_\mu - m)\psi = e\gamma^\mu A_\mu\psi.$$

Equation of motion for Aμ

Using the Euler–Lagrange equation for the $A_\mu$ field,

$$\partial_\nu\left(\frac{\partial\mathcal{L}}{\partial(\partial_\nu A_\mu)}\right) - \frac{\partial\mathcal{L}}{\partial A_\mu} = 0, \qquad (3)$$

the derivatives this time are

$$\partial_\nu\left(\frac{\partial\mathcal{L}}{\partial(\partial_\nu A_\mu)}\right) = \partial_\nu\left(\partial^\nu A^\mu - \partial^\mu A^\nu\right), \qquad \frac{\partial\mathcal{L}}{\partial A_\mu} = -e\bar\psi\gamma^\mu\psi.$$

Substituting back into (3) leads to

$$\partial_\nu\left(\partial^\nu A^\mu - \partial^\mu A^\nu\right) = e\bar\psi\gamma^\mu\psi,$$

which can be written in terms of the current $j^\mu = \bar\psi\gamma^\mu\psi$ as

$$\partial_\nu F^{\nu\mu} = e j^\mu.$$

Now, if we impose the Lorenz gauge condition $\partial_\mu A^\mu = 0$, the equations reduce to

$$\Box A^\mu = e j^\mu,$$

which is a wave equation for the four-potential, the QED version of the classical Maxwell equations in the Lorenz gauge. (The square represents the wave operator, $\Box = \partial_\mu\partial^\mu$.)
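The Lorenz-gauge equation has the structure of an ordinary sourced wave equation, which can be illustrated with a toy 1+1-dimensional finite-difference integration (a single potential component, unit wave speed, and a static point source are all assumptions of this sketch):

```python
import numpy as np

# Toy 1+1D wave equation  d^2A/dt^2 - d^2A/dx^2 = j(x),
# the structural analogue of  Box A^mu = e j^mu  in Lorenz gauge.
nx, nt, dx, dt = 201, 400, 0.1, 0.05        # dt/dx < 1 for stability
A_prev = np.zeros(nx)
A_curr = np.zeros(nx)
j = np.zeros(nx)
j[nx // 2] = 1.0                            # static point source at the center

c2 = (dt / dx) ** 2
for _ in range(nt):
    A_next = np.zeros(nx)                   # boundaries held at zero
    A_next[1:-1] = (2 * A_curr[1:-1] - A_prev[1:-1]
                    + c2 * (A_curr[2:] - 2 * A_curr[1:-1] + A_curr[:-2])
                    + dt ** 2 * j[1:-1])
    A_prev, A_curr = A_curr, A_next

print(A_curr[nx // 2])   # the potential builds up around the source
```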

Interaction picture


This theory can be straightforwardly quantized by treating bosonic and fermionic sectors[clarification needed] as free. This permits us to build a set of asymptotic states that can be used to start computation of the probability amplitudes for different processes. In order to do so, we have to compute an evolution operator $U$, which for a given initial state $|i\rangle$ will give a final state $\langle f|$ in such a way as to have[27]: 5

$$M_{fi} = \langle f | U | i \rangle.$$

This technique is also known as the S-matrix, and $M_{fi}$ is therefore known as the S-matrix element. The evolution operator is obtained in the interaction picture, where time evolution is given by the interaction Hamiltonian, which is the integral over space of the second term in the Lagrangian density given above:[27]: 123

$$V = e \int d^3x\, \bar\psi \gamma^\mu \psi A_\mu.$$

This can also be written in terms of an integral over the interaction Hamiltonian density $\mathcal{H}_\text{int} = e\bar\psi\gamma^\mu\psi A_\mu$. Thus, one has[27]: 86

$$U = T \exp\left[-i \int_{t_0}^{t} dt'\, V(t')\right],$$

where $T$ is the time-ordering operator. This evolution operator only has meaning as a series, and what we get here is a perturbation series with the fine-structure constant as the development parameter. This series expansion of the probability amplitude is called the Dyson series, and is given by:

$$U = \sum_{n=0}^{\infty} \frac{(-i)^n}{n!} \int dt_1 \cdots dt_n\, T\left[V(t_1)\cdots V(t_n)\right].$$

As $M_{fi}$ typically contains delta functions that are not physically measurable, it is convenient to define the invariant amplitude $\mathcal{M}_{fi}$ with:

$$M_{fi} = \delta_{fi} + i(2\pi)^4\, \delta^4(p_f - p_i)\, \mathcal{M}_{fi},$$

where $p_i$ and $p_f$ are the total initial and final four-momenta.

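The content of the Dyson series can be checked numerically on a toy system. The sketch below compares the time-ordered exponential for a hypothetical two-level "interaction" V(t) against its Dyson expansion truncated at second order:

```python
import numpy as np

# Toy check of the Dyson series for a 2-level system with a
# time-dependent "interaction" V(t) = lam * cos(t) * sigma_x (made up).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
lam = 0.3

def V(t):
    return lam * np.cos(t) * sx

# "Exact" time-ordered exponential: product of many short-time steps,
# with later times multiplied on the left.
n, t0, t1 = 4000, 0.0, 2.0
dt = (t1 - t0) / n
U_exact = np.eye(2, dtype=complex)
for k in range(n):
    t = t0 + (k + 0.5) * dt
    U_exact = (np.eye(2) - 1j * V(t) * dt) @ U_exact

# Dyson series through second order:
# U = 1 - i*Int dt V(t) + (-i)^2 * Int dt Int_{t0}^{t} dt' V(t) V(t') + ...
ts = np.linspace(t0, t1, 400)
dts = ts[1] - ts[0]
term1 = -1j * sum(V(t) for t in ts) * dts
term2 = -sum(V(t) @ V(tp) for i, t in enumerate(ts) for tp in ts[:i]) * dts ** 2
U_dyson = np.eye(2) + term1 + term2

print(np.abs(U_exact - U_dyson).max())   # small for weak coupling lam
```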
Feynman diagrams


Despite the conceptual clarity of the Feynman approach to QED, almost no early textbooks follow him in their presentation. When performing calculations, it is much easier to work with the Fourier transforms of the propagators. Experimental tests of quantum electrodynamics are typically scattering experiments. In scattering theory, particles' momenta rather than their positions are considered, and it is convenient to think of particles as being created or annihilated when they interact. Feynman diagrams then look the same, but the lines have different interpretations. The electron line represents an electron with a given energy and momentum, with a similar interpretation of the photon line. A vertex diagram represents the annihilation of one electron and the creation of another together with the absorption or creation of a photon, each having specified energies and momenta.

Using Wick's theorem on the terms of the Dyson series, all the terms of the S-matrix for quantum electrodynamics can be computed through the technique of Feynman diagrams. In this case, the rules for drawing are the following:[27]: 801–802

  • Each external line contributes the appropriate spinor or polarization factor: $u(p)$ or $\bar u(p)$ for incoming or outgoing electrons, $\bar v(p)$ or $v(p)$ for positrons, and $\epsilon_\mu(p)$ or $\epsilon^*_\mu(p)$ for incoming or outgoing photons.
  • Each internal photon line contributes a propagator $\frac{-i g_{\mu\nu}}{p^2 + i\epsilon}$, and each internal electron line a propagator $\frac{i(\gamma^\mu p_\mu + m)}{p^2 - m^2 + i\epsilon}$.
  • Each vertex contributes a factor $-ie\gamma^\mu$, with four-momentum conserved at every vertex.
  • Each closed fermion loop carries an overall factor of $-1$, and the Dirac matrices along a fermion line are written against the direction of the arrow.

To these rules we must add a further one for closed loops that implies an integration on momenta $\int \frac{d^4p}{(2\pi)^4}$, since these internal ("virtual") particles are not constrained to any specific energy–momentum, even that usually required by special relativity (see Propagator for details). The signature of the metric is $(+,-,-,-)$.

From them, computations of probability amplitudes are straightforwardly given. An example is Compton scattering, with an electron and a photon undergoing elastic scattering. The tree-level Feynman diagrams in this case are the two in which the photon is absorbed before the new one is emitted, and in which the emission precedes the absorption,[27]: 158–159 and so we are able to get the corresponding amplitude at the first order of a perturbation series for the S-matrix:

$$\mathcal{M} = -e^2\, \bar u(\vec p\,', s') \left( \not\epsilon\,'^{*}\, \frac{\not p + \not k + m}{(p+k)^2 - m^2}\, \not\epsilon + \not\epsilon\, \frac{\not p - \not k' + m}{(p-k')^2 - m^2}\, \not\epsilon\,'^{*} \right) u(\vec p, s),$$
from which we can compute the cross section for this scattering.

Nonperturbative phenomena


The predictive success of quantum electrodynamics largely rests on the use of perturbation theory, expressed in Feynman diagrams. However, quantum electrodynamics also leads to predictions beyond perturbation theory. In the presence of very strong electric fields, it predicts that electrons and positrons will be spontaneously produced, so causing the decay of the field. This process, called the Schwinger effect,[28] cannot be understood in terms of any finite number of Feynman diagrams and hence is described as nonperturbative. Mathematically, it can be derived by a semiclassical approximation to the path integral of quantum electrodynamics.
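In natural units, the often-quoted leading-order Schwinger result for the pair-production rate per unit volume in a constant electric field of strength $E$ is

$$\Gamma \simeq \frac{(eE)^2}{4\pi^3} \sum_{n=1}^{\infty} \frac{1}{n^2} \exp\!\left(-\frac{n\pi m^2}{eE}\right),$$

whose non-analytic dependence on the coupling through $\exp(-\pi m^2/eE)$ makes explicit why no finite order of perturbation theory in $e$ can reproduce it.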

Renormalizability


Higher-order terms can be straightforwardly computed for the evolution operator, but these terms display diagrams containing the following simpler ones:[27]: ch 10 the one-loop vacuum polarization (photon self-energy), the electron self-energy, and the vertex correction.

These, being closed loops, imply the presence of diverging integrals having no mathematical meaning. To overcome this difficulty, a technique called renormalization has been devised, producing finite results in very close agreement with experiments. A criterion for the theory being meaningful after renormalization is that the number of diverging diagrams is finite. In this case, the theory is said to be "renormalizable". The reason for this is that to get observables renormalized, one needs a finite number of constants to maintain the predictive value of the theory untouched. This is exactly the case for quantum electrodynamics, which displays just three diverging diagrams. This procedure gives observables in very close agreement with experiment, as seen, for example, in the electron gyromagnetic ratio.

Renormalizability has become an essential criterion for a quantum field theory to be considered as a viable one. All the theories describing fundamental interactions, except gravitation, whose quantum counterpart is only conjectural and presently under very active research, are renormalizable theories.

Nonconvergence of series


An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero.[29] The basic argument goes as follows: if the coupling constant were negative, this would be equivalent to the Coulomb force constant being negative. This would "reverse" the electromagnetic interaction so that like charges would attract and unlike charges would repel. This would render the vacuum unstable against decay into a cluster of electrons on one side of the universe and a cluster of positrons on the other side of the universe. Because the theory is "sick" for any negative value of the coupling constant, the series does not converge but is at best an asymptotic series.
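Dyson's argument can be illustrated with a classic toy asymptotic series (the Euler series, not a QED amplitude): the partial sums improve until an optimal truncation order of roughly 1/x and then diverge, exactly the behavior expected of the QED expansion.

```python
import math

# Toy asymptotic series: sum_n (-1)^n * n! * x^n has zero radius of
# convergence, yet truncating near the "optimal order" n ~ 1/x approximates
# its Borel sum F(x) = Integral_0^inf e^(-t) / (1 + x*t) dt very well.
x = 0.1

def borel_sum(x, steps=200000, tmax=40.0):
    """Midpoint-rule evaluation of the Borel-summed integral."""
    dt = tmax / steps
    return sum(math.exp(-(k + 0.5) * dt) / (1 + x * (k + 0.5) * dt) * dt
               for k in range(steps))

exact = borel_sum(x)
partial = 0.0
for n in range(25):
    partial += (-1) ** n * math.factorial(n) * x ** n
    if n in (5, 10, 15, 20):
        print(n, abs(partial - exact))
# The error shrinks until n ~ 1/x = 10, then grows without bound:
# the series is asymptotic, not convergent, just as Dyson argued for QED.
```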

From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy.[30] The coupling constant runs to infinity at finite energy, signalling a Landau pole. The problem is essentially that QED appears to suffer from quantum triviality issues. This is one of the motivations for embedding QED within a Grand Unified Theory.
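A back-of-the-envelope sketch of the one-loop running (electron loop only, leading logarithm; a textbook-level simplification, not a precision formula) shows how slowly the coupling grows and where the one-loop pole would sit:

```python
import math

# One-loop running of the QED coupling with only the electron loop:
# alpha(Q) = alpha0 / (1 - (2*alpha0 / (3*pi)) * ln(Q / m_e))
alpha0, m_e = 1 / 137.035999, 0.000511   # alpha at the electron mass; m_e in GeV

def alpha(Q):
    return alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q / m_e))

for Q in (0.000511, 1.0, 91.19, 1e16):   # from m_e up to GUT-like scales
    print(Q, alpha(Q))

# The denominator vanishes at the (one-loop, electron-only) Landau pole:
Q_pole = m_e * math.exp(3 * math.pi / (2 * alpha0))
print(Q_pole)   # an astronomically large energy scale
```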

Electrodynamics in curved spacetime


This theory can be extended, at least as a classical field theory, to curved spacetime. This arises similarly to the flat spacetime case, from coupling a free electromagnetic theory to a free fermion theory and including an interaction which promotes the partial derivative in the fermion theory to a gauge-covariant derivative.

from Grokipedia
Quantum electrodynamics (QED) is the fundamental quantum field theory that describes the interactions of electrically charged particles with the electromagnetic field, unifying quantum mechanics and special relativity in the realm of electromagnetism. Developed primarily in the late 1940s, QED resolves longstanding issues in earlier quantum theories by incorporating renormalization techniques to handle infinite quantities arising in calculations, enabling precise predictions of physical phenomena. The theory's modern formulation emerged from the independent work of Sin-Itiro Tomonaga, Julian Schwinger, and Richard P. Feynman, who addressed divergences in quantum electrodynamic calculations through innovative mathematical frameworks, including Schwinger's operator methods and Feynman's path integral approach with diagrammatic representations. For their contributions to re-establishing QED as a consistent and predictive theory, the trio shared the 1965 Nobel Prize in Physics. Tomonaga's efforts focused on relativistically invariant extensions, while Schwinger and Feynman provided practical calculational tools that overcame the infinities plaguing earlier attempts. QED's hallmark successes include its extraordinary precision in matching experimental observations, such as the Lamb shift—a small energy difference in hydrogen atom levels explained by virtual photon exchanges—and the anomalous magnetic moment of the electron, where theoretical predictions agree with measurements to over 10 decimal places. These tests, conducted at facilities like those at NIST and SLAC, confirm QED as the most accurately verified physical theory, with applications extending to particle physics, atomic spectroscopy, and even solid-state phenomena like the quantum Hall effect.

Overview

Definition and Scope

Quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics, providing a complete description of the interactions between charged particles and the electromagnetic field. It integrates the principles of quantum mechanics, special relativity, and the classical Maxwell equations to model phenomena such as the emission, absorption, and scattering of light by matter at the quantum level. In QED, the key interacting particles are electrons, which are fermions with spin $\frac{1}{2}$ represented by the Dirac spinor field $\psi$, and photons, which are massless bosons with spin 1 described by the vector field $A^\mu$. These fields capture the quantum nature of matter and light, respectively, enabling precise predictions for processes involving charged leptons and electromagnetic radiation. The scope of QED is confined to electromagnetic interactions, excluding the strong nuclear force (governed by quantum chromodynamics) and the weak force (unified with electromagnetism in electroweak theory). At its core, QED exhibits gauge invariance under local U(1) transformations, a symmetry principle that dictates the form of the interactions and ensures the theory's consistency. The strength of these interactions is quantified by the dimensionless fine-structure constant $\alpha \approx \frac{1}{137}$, which sets the scale for electromagnetic coupling.

Physical Significance

Quantum electrodynamics (QED) stands as the most precise physical theory ever tested, with predictions matching experimental measurements to an unprecedented degree of accuracy. A prime example is the anomalous magnetic moment of the electron, denoted as $a_e = (g-2)/2$, where QED calculations agree with observations to more than 10 decimal places, specifically up to $a_e = 0.00115965218046(18)$ (as of the 2022 CODATA recommended values), demonstrating the theory's reliability in describing subtle quantum corrections to the electron's spin-magnetic moment interaction. This level of precision, achieved through higher-order perturbative calculations involving up to five loops in the Feynman diagram expansion, underscores QED's success in handling infinities via renormalization, a technique that removes divergences while preserving finite, observable predictions. QED achieves a profound unification of quantum mechanics and special relativity specifically for electromagnetic interactions, providing a consistent framework for processes involving charged particles and photons at relativistic speeds. Unlike earlier relativistic quantum mechanics, which suffered from issues like negative probability densities in the Klein-Gordon equation or infinite self-energies in Dirac theory, QED resolves these by treating particles as excitations of quantized fields, ensuring causality and unitarity through its field-theoretic structure. This unification, formalized in the 1940s by Tomonaga, Schwinger, and Feynman, enables accurate descriptions of phenomena such as Compton scattering and pair production, where relativistic effects and quantum fluctuations interplay seamlessly. As the paradigmatic quantum field theory (QFT), QED laid the groundwork for the entire edifice of modern particle physics, inspiring the development of more complex gauge theories within the Standard Model. Its renormalization procedure and use of Feynman diagrams provided tools essential for extending QFT to non-abelian interactions, directly influencing the formulation of quantum chromodynamics (QCD) and electroweak theory. QED's success validated the gauge principle as the cornerstone of fundamental interactions, paving the way for the SU(3) × SU(2) × U(1) symmetry structure of the Standard Model, which unifies all known forces except gravity. Beyond fundamental particle physics, QED's principles underpin key applications in atomic physics, where it explains fine and hyperfine structure splittings in atoms with high fidelity, enabling precise atomic clocks and spectroscopy. In laser technology, QED governs light-matter interactions at the quantum level, facilitating the design of coherent photon sources through stimulated emission and cavity quantum electrodynamics effects in semiconductor lasers. In condensed matter physics, QED-inspired field theories describe emergent phenomena like the quantum Hall effect, where quantized conductance plateaus arise from topological protection analogous to Aharonov-Bohm phases in gauge fields, linking microscopic quantum rules to macroscopic transport properties. QED exemplifies an abelian gauge theory based on the U(1) symmetry group, where the photon mediates interactions without self-coupling, leading to simpler perturbative expansions compared to non-abelian theories. In contrast, QCD relies on the non-abelian SU(3) color group, introducing gluon self-interactions that generate asymptotic freedom and confinement, phenomena absent in QED's perturbative regime.
This distinction highlights QED's role as a benchmark for gauge theories, where abelian simplicity allows exact solvability in many limits, while informing strategies for tackling non-abelian complexities in strong interactions.
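As a rough numerical illustration, the electron anomaly can be reconstructed from the first few coefficients of the expansion $a_e = \sum_n C_{2n}(\alpha/\pi)^n$. In the sketch below, only Schwinger's term $C_2 = 1/2$ is exact; the higher coefficients are approximate values quoted from memory of the literature and should be verified against current references. The point is only how quickly the series converges:

```python
import math

alpha = 1 / 137.035999084          # fine-structure constant (CODATA-era value)
x = alpha / math.pi

# Perturbative coefficients of a_e = sum_n C_{2n} * (alpha/pi)^n.
# The first entry is Schwinger's exact 1/2; the rest are approximate
# literature values (verify before relying on them).
C = [0.5, -0.328478965, 1.181241456, -1.9106, 6.737]

a_e = sum(c * x ** (n + 1) for n, c in enumerate(C))
print(a_e)   # ~ 0.00115965218..., matching experiment to ~10 digits
```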

Historical Development

Precursors in Classical and Quantum Theories

The foundations of quantum electrodynamics (QED) emerged from the integration of classical electromagnetism, special relativity, and early quantum mechanics, each addressing key aspects of electromagnetic interactions but revealing incompatibilities when combined. In the 1860s, James Clerk Maxwell formulated a set of equations that unified electricity, magnetism, and optics into a single coherent theory of electromagnetism, describing how electric and magnetic fields propagate as waves at the speed of light. These equations, presented in Maxwell's seminal 1865 paper, provided a classical framework for electromagnetic phenomena but treated fields as continuous and deterministic, without accounting for quantum discreteness or relativistic effects on matter. The advent of special relativity in 1905, introduced by Albert Einstein, fundamentally altered this picture by establishing that physical laws must be invariant under Lorentz transformations, linking space and time while imposing constraints on simultaneity and causality. Einstein's theory revealed inconsistencies in classical electromagnetism when applied to moving observers, particularly in the transformation of electromagnetic fields, necessitating a relativistic reformulation of any theory involving charged particles and radiation. Meanwhile, the development of non-relativistic quantum mechanics in the mid-1920s offered a probabilistic description of matter: Erwin Schrödinger's 1926 wave equation governed the evolution of quantum states for particles like electrons, while Werner Heisenberg's 1927 uncertainty principle quantified the inherent limits on simultaneously measuring conjugate variables such as position and momentum. However, these quantum formulations were incompatible with special relativity, as they failed to preserve Lorentz invariance and led to acausal effects or negative probabilities for high-speed particles. A pivotal advance came in 1928 with Paul Dirac's relativistic wave equation for the electron, which successfully merged quantum mechanics and special relativity by incorporating spin and yielding solutions consistent with observed atomic spectra. Dirac's equation predicted the existence of negative-energy states, which he later interpreted as "holes" representing positively charged particles with the same mass as electrons—antimatter counterparts—resolving issues like infinite vacuum energy in a preliminary way. This prediction was spectacularly confirmed in 1932 when Carl Anderson observed tracks of these "positive electrons," or positrons, in cosmic ray experiments using a cloud chamber. Building on these insights, early attempts at quantum field theory emerged, notably in the 1928 work of Pascual Jordan and Wolfgang Pauli, who applied quantization procedures to the electromagnetic field itself, treating it as a relativistic quantum system of oscillators. Their formalism, while covariant and extending Dirac's approach to fields, encountered severe divergences—infinite self-energies and probabilities—highlighting the need for a more robust synthesis to handle interactions between quantized fields and matter.

Formulation in the 1940s

The formulation of quantum electrodynamics (QED) in the 1940s addressed fundamental inconsistencies in earlier attempts to reconcile quantum mechanics with special relativity and electromagnetism, particularly those arising from Paul Dirac's relativistic quantum mechanics of the 1920s and 1930s. Dirac's equation successfully described the electron as a relativistic particle and predicted the existence of the positron through his "hole theory," interpreting negative-energy states in the Dirac sea as positron vacancies. However, this framework encountered severe difficulties when applied to interacting fields, including infinite self-energies for electrons and the production of unphysical runaway solutions in external fields, rendering perturbative calculations divergent and non-covariant. These issues prompted renewed efforts during World War II, with Sin-Itiro Tomonaga developing the first fully relativistic and covariant perturbation theory for QED in 1943, though wartime conditions delayed its publication until 1946. Tomonaga's approach, worked out in isolation in Japan, generalized the interaction representation to maintain Lorentz invariance in the S-matrix formalism, allowing consistent calculations of processes like electron-photon scattering without violating causality. Independently, Julian Schwinger in the United States advanced QED through operator methods and his quantum action principle during the mid-1940s. Schwinger's variational framework, building on canonical transformations, provided a systematic way to derive equations of motion and compute Green's functions for interacting fields, enabling precise predictions such as the Lamb shift in hydrogen atom spectra. Richard Feynman introduced a complementary path integral formulation in 1948, offering an intuitive space-time approach to non-relativistic quantum mechanics that he extended to relativistic QED. Feynman's method summed amplitudes over all possible particle paths, naturally incorporating positron propagation as backward-moving electrons and leading to his iconic diagrams for visualizing perturbation series. This resolved limitations in Dirac's hole theory by treating positrons on equal footing with electrons without invoking an infinite sea. In 1949, Freeman Dyson synthesized the approaches of Tomonaga, Schwinger, and Feynman, demonstrating their mathematical equivalence and establishing the renormalizability of QED to all orders in perturbation theory. Dyson's proof showed that infinities could be absorbed into redefined physical parameters like charge and mass, yielding finite, observable predictions that matched experiments.

Postwar Refinements and Acceptance

Following the experimental observation of the Lamb shift in 1947 by Willis Lamb and Robert Retherford, Hans Bethe provided a groundbreaking non-relativistic calculation that attributed the energy level splitting in hydrogen to the electron's self-energy, i.e. its interaction with the quantized electromagnetic field as described by quantum electrodynamics (QED), yielding a value of approximately 1040 MHz in close agreement with experiment. This work not only confirmed QED's predictive power but also highlighted the need for renormalization to handle infinities in higher-order corrections. In the 1950s, refinements by physicists such as Robert Karplus, Abraham Klein, and others incorporated relativistic effects and improved the precision of the Lamb shift prediction to within 1% of experimental measurements, solidifying QED's empirical validation. The growing confidence in QED was underscored by prestigious recognitions, including the 1949 Nobel Prize in Physics awarded to Hideki Yukawa for his meson theory of nuclear forces, which paralleled and influenced early QFT approaches relevant to QED's development. More directly, the 1965 Nobel Prize in Physics was jointly awarded to Sin-Itiro Tomonaga, Julian Schwinger, and Richard Feynman for their foundational reformulation of QED, resolving divergences through renormalization and establishing a consistent perturbative framework. These awards marked QED's transition from a problematic theory to a rigorously tested cornerstone of particle physics. A key theoretical advancement came in 1950 with John Ward's derivation of the Ward identities, which ensured the preservation of gauge invariance in QED scattering amplitudes despite renormalization, allowing reliable calculations of processes like electron-photon interactions. These identities linked vertex functions to propagators, providing a consistency check that bolstered the theory's internal coherence. In the 1950s, Freeman Dyson further developed the S-matrix approach, originally proposed for scattering processes, into a powerful tool for QED by demonstrating its equivalence to field-theoretic perturbation theory and enabling systematic summation of diagrams to all orders in the fine-structure constant. Dyson's work, along with contributions from others like Murray Gell-Mann and Francis Low, facilitated practical computations of higher-order effects, enhancing QED's applicability to real-world phenomena. In 1961, QED began integrating into broader unification efforts, notably through Sheldon Glashow's proposal of a gauge theory based on SU(2) × U(1) symmetry that incorporated QED as the low-energy limit of an electroweak interaction, laying groundwork for the Standard Model. This shift positioned QED not as an isolated theory but as the electromagnetic sector of a unified electroweak framework, paving the way for subsequent weak interaction predictions.

Mathematical Formulation

Lagrangian and Action

The Lagrangian density of quantum electrodynamics (QED) provides the foundational framework for describing the interactions between electrons (or other charged fermions) and photons within a relativistic quantum field theory. It combines the free Dirac field for spin-1/2 particles, the free electromagnetic field, and the interaction term via minimal coupling. This structure ensures gauge invariance under U(1) transformations, reflecting the local symmetry of electromagnetism. The complete QED Lagrangian density is given by $\mathcal{L} = \mathcal{L}_\text{Dirac} + \mathcal{L}_\text{Maxwell} + \mathcal{L}_\text{interaction}$, where the Dirac term is $\mathcal{L}_\text{Dirac} = \bar{\psi}(i\gamma^\mu D_\mu - m)\psi$, the Maxwell term is $\mathcal{L}_\text{Maxwell} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$, and the interaction is incorporated through the covariant derivative $D_\mu = \partial_\mu + ieA_\mu$, with $\mathcal{L}_\text{interaction} = -e\bar{\psi}\gamma^\mu\psi A_\mu$ emerging from the expansion of the Dirac term. Here, $\psi$ is the Dirac spinor field, $m$ its mass, $A_\mu$ the photon four-potential, $e$ the elementary charge, $\gamma^\mu$ the Dirac matrices, and $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ the electromagnetic field strength tensor. The Dirac and interaction components originate from the relativistic quantization of the electron, incorporating minimal coupling to the electromagnetic potential to preserve gauge invariance. The Maxwell term describes the dynamics of the free photon field in relativistic form, ensuring consistency with special relativity. The action functional $S$ is then defined as the spacetime integral $S = \int d^4x\, \mathcal{L}$ over Minkowski spacetime, from which the theory's dynamics follow via the principle of least action. This action principle, adapted to quantum fields, underpins both operator-based and path-integral formulations of QED. In the classical limit, it derives from the coupled Dirac–Maxwell equations, where the spinor field sources the electromagnetic field, and vice versa, quantized subsequently to incorporate quantum effects. For quantization, particularly in the path-integral approach, the electromagnetic field's gauge freedom requires a gauge-fixing term to ensure well-defined propagators. A common choice is the Lorenz gauge condition $\partial^\mu A_\mu = 0$, implemented via a term like $\mathcal{L}_\text{gf} = -\frac{1}{2\xi}(\partial^\mu A_\mu)^2$ added to the Lagrangian, with $\xi = 1$ for the Feynman gauge. This classical Dirac–Maxwell action is quantized either through canonical operator methods, generating commutation relations for field operators, or via functional integrals over field configurations weighted by $e^{iS/\hbar}$, as developed in the mid-20th-century reformulations of QED. Feynman's path-integral evaluation of this action also facilitates the perturbative computation of amplitudes using diagrams.

Equations of Motion

The equations of motion in quantum electrodynamics (QED) are derived from the action principle applied to the QED Lagrangian, which combines the Dirac field for electrons with the electromagnetic field while incorporating their interaction through minimal coupling. The action is given by $S = \int d^4x \left[ \bar{\psi}(i\gamma^\mu D_\mu - m)\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu} \right]$, where $D_\mu = \partial_\mu + ieA_\mu$ is the covariant derivative, $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field strength tensor, $\psi$ is the Dirac spinor, $\bar{\psi} = \psi^\dagger\gamma^0$, $m$ is the electron mass, and $e$ is the elementary charge (with the sign convention for electrons). This form ensures gauge invariance under local U(1) transformations and was central to the covariant formulation of QED. The field equations follow from the Euler–Lagrange equations, $\frac{\delta S}{\delta\phi} = 0$, for each field $\phi$. Varying with respect to $\bar{\psi}$ yields the Dirac equation in the presence of the electromagnetic field: $(i\gamma^\mu D_\mu - m)\psi = 0$, which describes the dynamics of the electron field coupled to the photon field $A_\mu$. Varying with respect to $A^\mu$ produces the inhomogeneous Maxwell equations sourced by the electron current: $\partial_\nu F^{\nu\mu} = -e\bar{\psi}\gamma^\mu\psi$, where the current $j^\mu = -e\bar{\psi}\gamma^\mu\psi$ acts as the source for the electromagnetic field. These coupled equations encapsulate the interaction between matter and radiation in a relativistic quantum framework. In the interaction picture, the total Hamiltonian is split into free and interaction parts to facilitate perturbation theory. The free fields $\psi_0$ and $A_{0\mu}$ satisfy the uncoupled Dirac and Maxwell equations, respectively, while the interaction Hamiltonian density is $\mathcal{H}_\text{int} = e\bar{\psi}\gamma^\mu\psi A_\mu$, obtained by expanding the interaction term from the covariant derivative in the Lagrangian. This separation allows time evolution to be treated as free propagation interrupted by interactions. The covariant structure of these equations, with all indices contracted appropriately and relying on the Minkowski metric, preserves Lorentz invariance, ensuring the theory is consistent with special relativity. This invariance is manifest in the action and directly inherited by the equations of motion. In the classical limit, fixing the electromagnetic potential $A_\mu$ as an external field recovers the Dirac equation for a charged particle in an electromagnetic background, describing phenomena like the Zeeman effect relativistically. Conversely, treating the current $j^\mu = -e\bar{\psi}\gamma^\mu\psi$ as a fixed classical source yields the sourced Maxwell equations, reducing to standard electrodynamics for macroscopic currents. These limits bridge QED to classical theories while highlighting the quantum-relativistic unification.

Quantization Procedure

The quantization of quantum electrodynamics (QED) begins with the canonical approach, where the classical fields are elevated to operators satisfying specific commutation or anticommutation relations derived from the Poisson brackets of the Lagrangian formalism. For the electromagnetic field, quantization is typically performed in the Coulomb gauge $\nabla\cdot\mathbf{A} = 0$, where the equal-time commutation relations are $[A_i(\mathbf{x}), \pi_j(\mathbf{y})] = i\delta_{ij}\delta^3(\mathbf{x}-\mathbf{y})$, with the canonical momentum $\pi_j = -\dot{A}_j$ (in the mostly minus metric convention), projecting onto the two transverse photon polarizations. The temporal component $A_0$ is not dynamical and is determined by the Gauss law constraint $\nabla\cdot\mathbf{E} = e\bar{\psi}\gamma^0\psi$, enforcing gauge invariance. For the fermionic electron field, represented by the Dirac spinor $\psi_\alpha(x)$ and its conjugate $\psi^\dagger_\beta(y)$, the quantization follows the spin-statistics theorem, imposing anticommutation relations at equal times: $\{\psi_\alpha(\mathbf{x}), \psi^\dagger_\beta(\mathbf{y})\} = \delta_{\alpha\beta}\delta^3(\mathbf{x}-\mathbf{y})$, while $\{\psi_\alpha(\mathbf{x}), \psi_\beta(\mathbf{y})\} = 0$ and $\{\psi^\dagger_\alpha(\mathbf{x}), \psi^\dagger_\beta(\mathbf{y})\} = 0$. The canonical momentum for the fermion is $\pi_\beta(y) = i\psi^\dagger_\beta(y)$, leading to the consistent equal-time anticommutator $\{\psi_\alpha(\mathbf{x}), \pi_\beta(\mathbf{y})\} = i\delta_{\alpha\beta}\delta^3(\mathbf{x}-\mathbf{y})$. These relations enforce the Pauli exclusion principle for electrons and are essential for constructing antisymmetric multi-fermion states. An alternative formulation employs the path integral approach, introduced by Feynman, where the generating functional $Z$ for QED correlation functions is given by the integral over all field configurations: $Z = \int \mathcal{D}\psi\,\mathcal{D}\bar{\psi}\,\mathcal{D}A\, \exp\left(iS[\psi,\bar{\psi},A]\right)$, with the action $S$ from the QED Lagrangian $\mathcal{L} = \bar{\psi}(i\not{D} - m)\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}$, where $\not{D} = \gamma^\mu(\partial_\mu + ieA_\mu)$. The integrals over the fermionic fields $\psi$ and $\bar{\psi}$ are performed using Grassmann variables, which anticommute and yield the determinant $\det(i\not{D} - m)$ upon integration, while the bosonic photon field integral requires gauge fixing due to the redundancy under $A_\mu \to A_\mu + \partial_\mu\Lambda$. This path integral framework naturally leads to Feynman rules for perturbation theory and is equivalent to the canonical method for gauge-invariant observables. To handle the gauge invariance in the path integral, the Faddeev–Popov method introduces ghost fields to compensate for the infinite volume of the gauge orbit, effectively fixing the gauge by inserting a $\delta$-function constraint, such as the Lorenz gauge $\partial_\mu A^\mu = 0$, along with a determinant that manifests as auxiliary anticommuting ghost fields integrated over in the measure; in QED, being abelian, these ghosts decouple from the physical fields. This procedure ensures the path integral is well-defined and independent of the gauge choice for physical quantities.

The Hilbert space of QED states is constructed as a Fock space, built from a vacuum state $|0\rangle$ annihilated by all annihilation operators, $a_{\mathbf{k},\lambda}|0\rangle = 0$ for photons (with momentum $\mathbf{k}$ and polarization $\lambda$) and $b_{\mathbf{p},s}|0\rangle = 0$, $d_{\mathbf{p},s}|0\rangle = 0$ for electrons and positrons (spin $s$), respectively. Multi-particle states are generated by applying creation operators, such as $|\mathbf{k}_1,\lambda_1;\dots;\mathbf{k}_n,\lambda_n\rangle = a^\dagger_{\mathbf{k}_1,\lambda_1}\cdots a^\dagger_{\mathbf{k}_n,\lambda_n}|0\rangle$ for photons (symmetric under exchange) and antisymmetric combinations for fermions, forming the tensor product over all particle numbers while respecting the canonical relations. This structure allows the description of arbitrary processes with variable particle content. The interaction picture provides a framework for time evolution in perturbation theory by separating free and interacting Hamiltonians.
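The Fock-space ladder-operator algebra can be made concrete by truncating a single photon mode to a finite number of quanta, a standard numerical device (the truncation itself is the only approximation in this sketch):

```python
import numpy as np

# Truncated Fock-space representation of a single photon mode:
# creation/annihilation operators as (N+1)x(N+1) matrices.
N = 12
a = np.diag(np.sqrt(np.arange(1, N + 1)), k=1)   # annihilation operator
adag = a.conj().T                                 # creation operator

# The canonical commutator [a, a†] = 1 holds exactly except in the last
# row/column, an artifact of truncating the infinite-dimensional space.
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N + 1)[:-1, :-1]))  # True

# Build a two-photon state a† a† |0> and check its occupation number.
vac = np.zeros(N + 1)
vac[0] = 1.0
state = adag @ adag @ vac
state /= np.linalg.norm(state)
number_op = adag @ a
print(state @ number_op @ state)   # 2.0
```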

Perturbation Theory and Feynman Diagrams

Probability Amplitudes

In quantum electrodynamics (QED), probability amplitudes represent the fundamental quantities used to compute the likelihood of specific physical processes involving electrons, positrons, and photons. These amplitudes arise from Richard Feynman's path integral formulation, which generalizes the time evolution operator from non-relativistic quantum mechanics to relativistic quantum field theory. The amplitude for a transition from an initial state $|i\rangle$ to a final state $|f\rangle$ over time $t$ is given by $\langle f|\exp(-iHt/\hbar)|i\rangle$, where $H$ is the Hamiltonian of the system. This expression encodes the quantum mechanical evolution, ensuring unitary time development that conserves total probability, as the norm of the state vector remains preserved under the unitary operator $\exp(-iHt/\hbar)$. Feynman's approach reformulates this amplitude as a path integral over all possible spacetime histories of the fields, extending the non-relativistic sum over paths to interacting relativistic fields. The probability amplitude is thus $\int \mathcal{D}x\, \exp(iS/\hbar)$, where the integral is taken over all field configurations $x$, and $S$ is the classical action functional for the QED Lagrangian. Unlike the classical principle of least action, which selects a single extremal path, the quantum superposition principle requires summing contributions from all histories, weighted by the phase factor $\exp(iS/\hbar)$; paths near the classical trajectory interfere constructively, while others cancel due to rapid phase oscillations. This transition from particle paths in non-relativistic quantum mechanics to field configurations in QED accommodates the creation and annihilation of particles, essential for relativistic invariance. The squared modulus of the amplitude yields the observable transition probability, maintaining unitarity across the full Hilbert space of QED states. For instance, in electron-electron scattering, the amplitude is the coherent sum over all possible photon-exchange paths between the electrons, capturing both direct and exchange contributions without classical analogs. Feynman diagrams serve as a mnemonic device for visualizing these path sums, though the underlying amplitudes are computed via the path integral.
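A minimal numerical illustration of the amplitude $\langle f|\exp(-iHt/\hbar)|i\rangle$ and its unitarity, using a toy three-level Hamiltonian (with $\hbar = 1$) rather than a field theory:

```python
import numpy as np

# Transition amplitudes <f| exp(-i H t) |i> for a toy 3-level Hamiltonian,
# illustrating unitary evolution and probability conservation.
H = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.3],
              [0.0, 0.3, 3.0]])          # Hermitian, so exp(-iHt) is unitary

t = 1.7
evals, evecs = np.linalg.eigh(H)          # diagonalize to exponentiate
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

i_state = np.array([1.0, 0.0, 0.0])       # start in level 0
amplitudes = U @ i_state                  # <f|U|i> for each final state f
probabilities = np.abs(amplitudes) ** 2

print(probabilities)
print(probabilities.sum())                # 1.0: unitarity conserves probability
```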

Diagram Construction and Rules

Feynman diagrams serve as pictorial representations of the terms in the perturbative expansion of the S-matrix in quantum electrodynamics (QED), facilitating the visualization of particle interactions through photon exchange between charged particles. Introduced by Richard Feynman, these diagrams encode the probability amplitudes for processes involving electrons and photons by depicting the topology of interactions in a spacetime framework.

The basic elements of Feynman diagrams in QED are electron lines, photon lines, and vertices. Electron lines, representing the propagation of electrons or positrons, are drawn as straight lines with arrows indicating the direction of charge flow: arrows pointing forward in time for electrons and backward for positrons, reflecting the structure of the Dirac field. Photon lines, depicting the exchange of virtual photons, are drawn as wavy lines without arrows, since the photon is its own antiparticle. Vertices mark the points of interaction where an electron line meets a photon line, representing the emission or absorption of a photon by a charged particle.

Construction of Feynman diagrams follows specific rules to ensure they correspond to physical processes. Diagrams are conventionally oriented with time progressing from left to right, so the sequence of interactions can be read chronologically along the horizontal axis. Momentum is conserved at each vertex, with photon lines carrying the momentum transferred between interacting particles. Incoming real particles are represented by external lines entering the diagram from the left, and outgoing real particles by external lines exiting to the right; these external lines connect to the initial and final states of the scattering process. Loops, formed by closed paths of internal lines, account for virtual particles that are not directly observable but contribute to intermediate states. Topological equivalence means that diagrams are identified if one can be continuously deformed into the other without altering the connectivity of lines, which avoids overcounting in the perturbative series; the diagrams capture the invariant structure of the interaction terms in the Lagrangian, independent of specific spatial arrangements.

A representative example is Compton scattering, in which an incoming electron and photon interact to produce an outgoing electron and photon. At lowest order, two topologically distinct tree-level diagrams contribute: in each, two vertices are joined by an internal (virtual) electron line and attached to the two external photon lines, with the incoming photon absorbed before the outgoing photon is emitted in one diagram and after it in the other. These tree-level diagrams capture the leading contribution to the scattering amplitude without loops. A simple topological bookkeeping for such diagrams is sketched below.
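Some of this bookkeeping is easy to automate. The sketch below (a hypothetical helper, not a full diagram generator) records only the vertex and internal-line counts of a connected diagram and applies Euler's formula $L = I - V + 1$ for the loop number; the two examples correspond to the tree-level Compton topology and the one-loop electron self-energy:

```python
# Minimal bookkeeping for QED diagram topology (illustrative only).
# A connected diagram with V vertices and I internal lines has
# L = I - V + 1 loops, and its amplitude carries a factor e^V.

def loops(n_vertices: int, n_internal: int) -> int:
    """Loop number of a connected diagram via Euler's formula."""
    return n_internal - n_vertices + 1

# Tree-level Compton scattering: 2 vertices, 1 internal fermion line.
assert loops(2, 1) == 0          # tree level, amplitude ~ e^2

# One-loop electron self-energy: 2 vertices, internal fermion + photon line.
assert loops(2, 2) == 1          # one loop, still ~ e^2

print("Compton tree diagram:", loops(2, 1), "loops")
print("Electron self-energy:", loops(2, 2), "loops")
```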

Propagators and Vertices

In quantum electrodynamics (QED), propagators describe the propagation of virtual electrons and photons in Feynman diagrams, representing the free-field Green's functions that connect interaction vertices. They are essential for computing probability amplitudes in perturbation theory, where they account for the intermediate states between scattering events. The interaction vertices, in turn, encode the local coupling between charged particles and the electromagnetic field, dictated by the QED Lagrangian.

The electron propagator in momentum space, for a free Dirac field of mass $m$, is

$$S(p) = \frac{i(\gamma^\mu p_\mu + m)}{p^2 - m^2 + i\epsilon},$$

where $\gamma^\mu$ are the Dirac matrices, $p^\mu$ is the 4-momentum, and the infinitesimal $i\epsilon$ enforces the correct boundary conditions for time-ordered products. This form emerges from inverting the Dirac operator with the Feynman prescription for handling positive- and negative-energy solutions, allowing a consistent sum over all paths in the path-integral formulation.

For the photon, in the Feynman gauge, where the gauge-fixing term simplifies calculations while preserving Lorentz invariance, the propagator is

$$D_{\mu\nu}(p) = \frac{-i g_{\mu\nu}}{p^2 + i\epsilon},$$

with $g_{\mu\nu}$ the Minkowski metric tensor of signature $(+,-,-,-)$. In this gauge the propagator carries unphysical longitudinal and timelike components, but their contributions cancel in physical observables, as guaranteed by the Ward identity, while intermediate computations are streamlined.

At each interaction vertex involving an electron, positron, and photon, the Feynman rule assigns a factor $-ie\gamma^\mu$, where $e > 0$ is the elementary charge and the index $\mu$ contracts with the photon line. This vertex factor derives directly from the minimal-coupling term in the QED interaction Lagrangian, $\mathcal{L}_\text{int} = -e\,\bar{\psi}\gamma^\mu\psi\,A_\mu$, ensuring gauge invariance under $U(1)$ transformations. Momentum is strictly conserved at every vertex, $\sum p^\mu_\text{in} = \sum p^\mu_\text{out}$, as required by the translational invariance of the theory.

Higher-order corrections modify these bare propagators through loop insertions. The electron self-energy, arising from virtual photon emission and reabsorption, dresses the propagator and shifts the effective mass, while vacuum polarization, due to virtual electron-positron pairs, alters the photon propagator by screening the charge at short distances. These effects are incorporated perturbatively by inserting the corresponding one-loop diagrams into the free propagators, though full renormalization is required to obtain finite results.
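The algebraic structure of the electron propagator can be checked numerically. The sketch below builds the Dirac matrices in the Dirac representation and verifies that $(\not{p} - m)\,S(p) \approx i\mathbf{1}$, which follows from $(\not{p} - m)(\not{p} + m) = p^2 - m^2$; the test momentum and mass values are arbitrary inputs, and the overall factor of $i$ follows the propagator convention used above:

```python
import numpy as np

# Dirac matrices in the Dirac representation, metric (+,-,-,-).
I2 = np.eye(2); Z2 = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
gamma = [g0] + gs                      # gamma^0 ... gamma^3

def slash(p):
    """p-slash = gamma^mu p_mu for contravariant components p^mu."""
    return p[0] * gamma[0] - sum(p[i] * gamma[i] for i in (1, 2, 3))

m, eps = 0.511, 1e-9
p = np.array([1.0, 0.2, -0.3, 0.4])   # arbitrary off-shell test momentum
p2 = p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

S = 1j * (slash(p) + m * np.eye(4)) / (p2 - m**2 + 1j * eps)

# (p-slash - m) S(p) reduces to i * identity, up to the i*eps shift.
print(np.allclose((slash(p) - m * np.eye(4)) @ S, 1j * np.eye(4)))  # True
```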

Renormalization

Need for Renormalization

In the perturbative formulation of quantum electrodynamics, higher-order terms in the expansion introduce Feynman diagrams containing closed loops of virtual particles. These loop contributions lead to ultraviolet divergences: the corresponding momentum integrals fail to converge at large momenta because the theory has no natural high-energy cutoff. A prototypical example is the divergent integral encountered in vacuum polarization processes,

$$\int \frac{d^4 k}{(2\pi)^4} \, \frac{1}{k^2},$$

which behaves as $\Lambda^2/(16\pi^2)$ in a hard-cutoff regularization scheme and therefore diverges as $\Lambda \to \infty$. Such divergences imply that the bare parameters of the Lagrangian, the electron mass $m_0$ and charge $e_0$, cannot directly match the finite, experimentally measured physical quantities $m$ and $e$. Quantum vacuum fluctuations generate infinite corrections to these parameters, so the physical mass emerges as $m = m_0 + \delta m$ with $\delta m \to \infty$ as the cutoff is removed, and the physical charge as $e = e_0\sqrt{Z_3}$, where $Z_3$ is the photon field-strength renormalization constant.
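The quoted cutoff dependence can be verified by rotating the loop momentum to Euclidean space (Wick rotation, $k^0 \to ik_E^4$), after which the integral reduces, up to a phase, to

$$\int^{\Lambda} \frac{d^4 k_E}{(2\pi)^4}\,\frac{1}{k_E^2} = \frac{2\pi^2}{(2\pi)^4}\int_0^{\Lambda} k_E\,dk_E = \frac{2\pi^2}{(2\pi)^4}\cdot\frac{\Lambda^2}{2} = \frac{\Lambda^2}{16\pi^2},$$

where the factor $2\pi^2$ is the surface area of the unit 3-sphere produced by the angular integration.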