Renormalization group
In theoretical physics, the renormalization group (RG) is a formal apparatus that allows systematic investigation of the changes of a physical system as viewed at different scales. In particle physics, it reflects the changes in the underlying physical laws (codified in a quantum field theory) as the energy (or mass) scale at which physical processes occur varies. In this context, a change in scale is called a scale transformation. The renormalization group is intimately related to scale invariance and conformal invariance, symmetries in which a system appears the same at all scales (self-similarity);[a] at a fixed point of the renormalization group flow, the field theory is conformally invariant.

As the scale varies, it is as if one is changing the magnifying power of a notional microscope viewing the system (only in the direction of decreasing magnification, since the RG is a semigroup and has no well-defined inverse operation). In renormalizable theories, systems exhibit self-similarity across different scales, with parameters that describe system components changing as the scale varies. The components, or fundamental variables, may relate to atoms, elementary particles, atomic spins, etc. The parameters of the theory typically describe the interactions of the components. These may be variable couplings which measure the strength of various forces, or mass parameters themselves. The components themselves may appear to be composed of more of the self-same components as one goes to shorter distances.

For example, in quantum electrodynamics (QED), an electron appears to be composed of electron and positron pairs and photons, as one views it at higher resolution, at very short distances. The electron at such short distances has a slightly different electric charge than does the dressed electron seen at large distances, and this change, or running, in the value of the electric charge is determined by the renormalization group equation.

History


The idea of scale transformations and scale invariance is old in physics: Scaling arguments were commonplace for the Pythagorean school, Euclid, and up to Galileo.[1] They became popular again at the end of the 19th century, perhaps the first example being the idea of enhanced viscosity of Osborne Reynolds, as a way to explain turbulence.

The renormalization group was initially developed for particle physics applications but has since been applied to solid-state physics, fluid mechanics, physical cosmology, and even nanotechnology. An early article[2] by Ernst Stueckelberg and André Petermann in 1953 anticipates the idea in quantum field theory. Stueckelberg and Petermann opened the field conceptually. They noted that renormalization exhibits a group of transformations which transfers quantities from the bare terms to the counter terms. They introduced a function h(e) in quantum electrodynamics (QED), which is now known as the beta function (see below).

Beginnings


Murray Gell-Mann and Francis E. Low restricted the idea to scale transformations in QED in 1954,[3] which are the most physically significant, and focused on asymptotic forms of the photon propagator at high energies. They determined the variation of the electromagnetic coupling in QED, by appreciating the simplicity of the scaling structure of that theory. They thus discovered that the coupling parameter g(μ) at the energy scale μ is effectively given by the (one-dimensional translation) group equation

g(μ) = G⁻¹[(μ/M)^d G(g(M))],

or equivalently, G(g(μ)) = (μ/M)^d G(g(M)), for an arbitrary function G (known as Wegner's scaling function, after Franz Wegner) and a constant d, in terms of the coupling g(M) at a reference scale M.

Gell-Mann and Low realized in these results that the effective scale μ can be chosen arbitrarily, and can vary to define the theory at any other scale κ:

g(κ) = G⁻¹[(κ/μ)^d G(g(μ))].

The gist of the RG is this group property: as the scale μ varies, the theory presents a self-similar replica of itself, and any scale can be accessed similarly from any other scale, by group action, a formal transitive conjugacy of couplings[4] in the mathematical sense (Schröder's equation).

On the basis of this (finite) group equation and its scaling property, Gell-Mann and Low could then focus on infinitesimal transformations, and invented a computational method based on a mathematical flow function ψ(g) = G d/(∂G/∂g) of the coupling parameter g, which they introduced. Like the function h(e) of Stueckelberg–Petermann, their function determines the differential change of the coupling g(μ) with respect to a small change in energy scale μ through a differential equation, the renormalization group equation:

∂g/∂(ln μ) = ψ(g) = β(g).

The modern name, the beta function, was introduced by Curtis Callan and Kurt Symanzik in 1970.[5][6] Since it is a mere function of g, integration in g of a perturbative estimate of it permits specification of the renormalization trajectory of the coupling, that is, its variation with energy, effectively the function G in this perturbative approximation. The renormalization group prediction (cf. the Stueckelberg–Petermann and Gell-Mann–Low works) was confirmed 40 years later at the Large Electron–Positron Collider (LEP) experiments: the fine-structure "constant" of QED was measured[7] to be about 1/127 at energies close to 200 GeV, as opposed to the standard low-energy physics value of 1/137.[b]
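
The measured drift from 1/137 to roughly 1/127 can be reproduced to leading-log accuracy with a short numerical sketch. The fermion masses below (especially the effective light-quark values) are rough illustrative assumptions, not precision inputs:

```python
import math

# One-loop QED running of the inverse fine-structure constant:
#   d(1/alpha)/d(ln mu) = -(2/(3*pi)) * sum_i N_c,i * Q_i^2
# summed over charged fermions with mass below mu. Masses (GeV) are rough
# effective (constituent-like) values -- an assumption of this sketch.
FERMIONS = [  # (mass GeV, |charge| Q, color factor N_c)
    (0.000511, 1.0, 1), (0.1057, 1.0, 1), (1.777, 1.0, 1),   # e, mu, tau
    (0.3, 2/3, 3), (0.3, 1/3, 3), (0.5, 1/3, 3),             # u, d, s
    (1.5, 2/3, 3), (4.5, 1/3, 3), (173.0, 2/3, 3),           # c, b, t
]

def inv_alpha(mu, inv_alpha_0=137.036):
    """Leading-log 1/alpha(mu); each fermion contributes above its mass."""
    result = inv_alpha_0
    for mass, q, nc in FERMIONS:
        if mu > mass:
            result -= (2.0 / (3.0 * math.pi)) * nc * q**2 * math.log(mu / mass)
    return result

print(inv_alpha(200.0))  # roughly 127 at 200 GeV, vs. 137 at low energy
```

Despite the crude mass choices, the screening logarithms land close to the LEP-era value, which is the point of the exercise.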

Deeper understanding


The renormalization group emerges from the renormalization of the quantum field variables, which normally has to address the problem of infinities in a quantum field theory.[c] This problem of systematically handling the infinities of quantum field theory to obtain finite physical quantities was solved for QED by Richard Feynman, Julian Schwinger and Shin'ichirō Tomonaga, who received the 1965 Nobel prize for these contributions. They effectively devised the theory of mass and charge renormalization, in which the infinity in the momentum scale is cut off by an ultra-large regulator, Λ.[d]

The dependence of physical quantities, such as the electric charge or electron mass, on the scale Λ is hidden, effectively swapped for the longer-distance scales at which the physical quantities are measured, and, as a result, all observable quantities end up being finite instead, even for an infinite Λ. Gell-Mann and Low thus realized in these results that, infinitesimally, while a tiny change in g is provided by the above RG equation given ψ(g), the self-similarity is expressed by the fact that ψ(g) depends explicitly only upon the parameter(s) of the theory, and not upon the scale μ. Consequently, the above renormalization group equation may be solved for (G and thus) g(μ).

A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilation group of conventional renormalizable theories, considers methods where widely different scales of lengths appear simultaneously. It came from condensed matter physics: Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group.[9] The "blocking idea" is a way to define the components of the theory at large distances as aggregates of components at shorter distances.

This approach covered the conceptual point and was given full computational substance in the extensive important work of Kenneth Wilson. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1975,[10] as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971.[11][12][13] He was awarded the Nobel prize for these decisive contributions in 1982.[14]

Reformulation


Meanwhile, the RG in particle physics had been reformulated in more practical terms by Callan and Symanzik in 1970.[5][15] The above beta function, which describes the "running of the coupling" parameter with scale, was also found to amount to the "canonical trace anomaly", which represents the quantum-mechanical breaking of scale (dilation) symmetry in a field theory.[e] Applications of the RG to particle physics exploded in number in the 1970s with the establishment of the Standard Model.

In 1973,[16][17] it was discovered that a theory of interacting colored quarks, called quantum chromodynamics, had a negative beta function. This means that, starting from an initial high-energy value, the coupling grows as the energy is lowered, until it blows up (diverges) at a special value of μ. This special value is the scale of the strong interactions, μ = ΛQCD, and occurs at about 200 MeV. Conversely, the coupling becomes weak at very high energies (asymptotic freedom), and the quarks become observable as point-like particles, in deep inelastic scattering, as anticipated by Feynman–Bjorken scaling. QCD was thereby established as the quantum field theory controlling the strong interactions of particles.
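
The running just described follows from the one-loop solution of the QCD renormalization group equation; the values Λ ≈ 0.2 GeV and nf = 5 active flavors below are illustrative assumptions, and this is a crude one-loop estimate, not a precision determination:

```python
import math

def alpha_s(mu_gev, lam=0.2, nf=5):
    """One-loop running strong coupling:
        alpha_s(mu) = 2*pi / (b0 * ln(mu/Lambda)),  b0 = 11 - 2*nf/3.
    Lambda ~ 0.2 GeV and nf = 5 are illustrative assumptions."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return 2.0 * math.pi / (b0 * math.log(mu_gev / lam))

print(alpha_s(91.2))   # ~0.13 at the Z mass (one-loop, crude)
print(alpha_s(0.21))   # large: the coupling blows up as mu -> Lambda_QCD
```

The negative beta function shows up as the coupling shrinking with energy (asymptotic freedom) and diverging as μ approaches Λ from above.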

Momentum space RG also became a highly developed tool in solid state physics, but was hindered by the extensive use of perturbation theory, which prevented the theory from succeeding in strongly correlated systems.[f]

Conformal symmetry


Conformal symmetry is associated with the vanishing of the beta function. This can occur naturally if a coupling constant is attracted, by running, toward a fixed point at which β(g) = 0. In QCD, the fixed point occurs at short distances where g → 0 and is called a (trivial) ultraviolet fixed point. For heavy quarks, such as the top quark, the coupling to the mass-giving Higgs boson runs toward a non-zero (non-trivial) infrared fixed point, first predicted by Pendleton and Ross (1981),[18] and C. T. Hill.[19] The top quark Yukawa coupling lies slightly below the infrared fixed point of the Standard Model, suggesting the possibility of additional new physics, such as sequential heavy Higgs bosons.[citation needed]

In string theory, conformal invariance of the string world-sheet is a fundamental symmetry: β = 0 is a requirement. Here, β is a function of the geometry of the space-time in which the string moves. This determines the space-time dimensionality of the string theory and enforces Einstein's equations of general relativity on the geometry. The RG is of fundamental importance to string theory and theories of grand unification.

It is also the modern key idea underlying critical phenomena in condensed matter physics.[20] Indeed, the RG has become one of the most important tools of modern physics.[21]

Block spin


This section introduces pedagogically a picture of RG which may be easiest to grasp: the block spin RG, devised by Leo P. Kadanoff in 1966.[9]

Consider a 2D solid, a set of atoms in a perfect square array, as depicted in the figure.

Assume that atoms interact among themselves only with their nearest neighbours, and that the system is at a given temperature T. The strength of their interaction is quantified by a certain coupling J. The physics of the system will be described by a certain formula, say the Hamiltonian H(T, J).

The solid is then divided into blocks of 2×2 squares ("blocks"), described in terms of block variables that represent the average behavior within each block. Further assume that, by some lucky coincidence, the physics of block variables is described by a formula of the same kind, but with different values for T and J: H(T′, J′). (This isn't exactly true, in general, but it is often a good first approximation.)

The original problem may be computationally intractable due to the large number of atomic variables involved. Now, in the renormalized problem we have only one fourth of them. But why stop now? Another iteration of the same kind leads to H(T″, J″), and only one sixteenth of the atoms. We are increasing the observation scale with each RG step.

Of course, the best idea is to iterate until there is only one very big block. Since the number of atoms in any real sample of material is very large, this is more or less equivalent to finding the long range behavior of the RG transformation which took (T, J) → (T′, J′) and (T′, J′) → (T″, J″). Often, when iterated many times, this RG transformation leads to a certain number of fixed points.

To be more concrete, consider a magnetic system (e.g., the Ising model), in which the J coupling denotes the trend of neighbor spins to be aligned. The configuration of the system is the result of the tradeoff between the ordering J term and the disordering effect of temperature.

For many models of this kind there are three fixed points:

  1. T = 0 and J → ∞. This means that, at the largest size, temperature becomes unimportant, i.e., the disordering factor vanishes. Thus, at large scales, the system appears to be ordered. We are in a ferromagnetic phase.
  2. T → ∞ and J → 0. Exactly the opposite; here, temperature dominates, and the system is disordered at large scales.
  3. A nontrivial point between them, T = Tc and J = Jc. At this point, changing the scale does not change the physics, because the system is in a fractal state. It corresponds to the Curie phase transition, and is also called a critical point.

So, if we are given a certain material with given values of T and J, all we have to do in order to find out the large-scale behaviour of the system is to iterate the pair (T, J) until we find the corresponding fixed point.
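
The iteration to a fixed point can be made concrete with the one exactly solvable case of this kind: decimating every other spin of the 1D Ising chain maps the dimensionless coupling K = J/T to K′ = ½ ln cosh 2K. A minimal sketch:

```python
import math

def decimate(k):
    """Exact block-spin (decimation) step for the 1D Ising chain:
    summing out every other spin maps K = J/T to K' = 0.5*ln(cosh(2K))."""
    return 0.5 * math.log(math.cosh(2.0 * k))

k = 1.0  # a fairly strong initial coupling
for step in range(20):
    k = decimate(k)
print(k)  # flows to the K = 0 (disordered) fixed point
```

Every positive starting coupling flows to K = 0, which is the RG statement of the fact that the 1D Ising model has no phase transition at any nonzero temperature.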

Elementary theory


In more technical terms, let us assume that we have a theory described by a certain function Z of the state variables {s_i} and a certain set of coupling constants {J_k}. This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.

Now we consider a certain blocking transformation of the state variables {s_i} → {s̃_i}; the number of s̃_i must be less than the number of s_i. Now let us try to rewrite the function Z only in terms of the s̃_i. If this is achievable by a certain change in the parameters, {J_k} → {J̃_k}, then the theory is said to be renormalizable.

Most fundamental theories of physics, such as quantum electrodynamics, quantum chromodynamics and the electroweak interaction, but not gravity, are exactly renormalizable. Also, most theories in condensed matter physics are approximately renormalizable, from superconductivity to fluid turbulence.

The change in the parameters is implemented by a certain beta function, {J̃_k} = β({J_k}), which is said to induce a renormalization group flow (or RG flow) on the J-space. The values of J under the flow are called running couplings.

As was stated in the previous section, the most important information in the RG flow is its fixed points. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to exhibit quantum triviality, possessing what is called a Landau pole, as in quantum electrodynamics. For a φ4 interaction, Michael Aizenman proved that this theory is indeed trivial, for space-time dimension D ≥ 5.[22] For D = 4, the triviality has yet to be proven rigorously, but lattice computations have provided strong evidence for this. This fact is important, as quantum triviality can be used to bound or even predict parameters such as the Higgs boson mass in asymptotic safety scenarios. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question.
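
As a toy illustration of a nontrivial fixed point, one can flow a one-loop Wilson–Fisher-type beta function in d = 4 − ε dimensions toward the infrared; the value ε = 0.1 and the one-loop coefficient below are illustrative assumptions:

```python
import math

EPS = 0.1                      # d = 4 - eps (an assumption for illustration)
B = 3.0 / (16.0 * math.pi**2)  # one-loop coefficient of the quartic coupling

def beta(g):
    """Toy Wilson-Fisher beta function: dg/d(ln mu) = -EPS*g + B*g^2."""
    return -EPS * g + B * g * g

# Flow toward the infrared (decreasing mu): dg/dt = -beta(g), t = -ln mu.
g, dt = 1.0, 0.01
for _ in range(200_000):
    g -= beta(g) * dt

g_star = EPS / B               # the nontrivial fixed point, g* = 16*pi^2*eps/3
print(g, g_star)               # the flow settles onto g*
```

Any small starting coupling is driven to the same g*, which is the mechanism behind universality: the infrared physics forgets the microscopic initial value of g.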

Since the RG transformations in such systems are lossy (i.e., the number of variables decreases; see, as an example in a different context, lossy data compression), there need not be an inverse for a given RG transformation. Thus, in such lossy systems, the renormalization group is, in fact, a semigroup, as lossiness implies that there is no unique inverse for each element.
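
The lossiness is easy to exhibit directly: a majority-rule blocking (a minimal coarse-graining chosen here purely for illustration) maps two distinct microscopic configurations to the same block configuration, so no inverse step can recover the original:

```python
def majority_block(spins):
    """Coarse-grain a 1D chain of +/-1 spins: each block of 3 spins maps to
    the sign of its sum (majority rule). Length must be a multiple of 3."""
    return [1 if sum(spins[i:i + 3]) > 0 else -1
            for i in range(0, len(spins), 3)]

a = [+1, +1, -1,  -1, -1, +1]
b = [+1, +1, +1,  -1, -1, -1]
print(majority_block(a), majority_block(b))  # both give [1, -1]
```

Two different microstates collapse onto one block state, so the transformation has no unique inverse: the RG is a semigroup.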

Relevant and irrelevant operators and universality classes


Consider a certain observable A of a physical system undergoing an RG transformation. How the magnitude of the observable behaves as the length scale of the system goes from small to large determines the importance of the observable for the scaling law:

If its magnitude ...    then the observable is ...
always increases        relevant
always decreases        irrelevant
other                   marginal

A relevant observable is needed to describe the macroscopic behaviour of the system; irrelevant observables are not needed. Marginal observables may or may not need to be taken into account. A remarkably broad fact is that most observables are irrelevant, i.e., the macroscopic physics is dominated by only a few observables in most systems.

As an example, in microscopic physics, to describe a system consisting of a mole of carbon-12 atoms we need of the order of 10²³ (Avogadro's number) variables, while to describe it as a macroscopic system (12 grams of carbon-12) we only need a few.

Before Wilson's RG approach, there was an astonishing empirical fact to explain: The coincidence of the critical exponents (i.e., the exponents of the reduced-temperature dependence of several quantities near a second order phase transition) in very disparate phenomena, such as magnetic systems, superfluid transition (Lambda transition), alloy physics, etc. So in general, thermodynamic features of a system near a phase transition depend only on a small number of variables, such as the dimensionality and symmetry, but are insensitive to details of the underlying microscopic properties of the system.

This coincidence of critical exponents for ostensibly quite different physical systems, called universality, is easily explained using the renormalization group, by demonstrating that the differences in phenomena among the individual fine-scale components are determined by irrelevant observables, while the relevant observables are shared in common. Hence many macroscopic phenomena may be grouped into a small set of universality classes, specified by the shared sets of relevant observables.[g]
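
A fixed point with a single relevant direction also yields a critical exponent: linearizing the RG map at the fixed point gives an eigenvalue λ > 1 for the relevant (temperature) direction, and ν = ln b / ln λ for scale factor b. The Migdal–Kadanoff-style recursion K′ = ½ ln cosh 4K used below is one common approximate bond-moving-plus-decimation form for the 2D Ising coupling, chosen here only for illustration:

```python
import math

def mk_step(k):
    """Approximate Migdal-Kadanoff b = 2 recursion for the 2D Ising coupling
    K = J/T: move bonds (K -> 2K), then decimate (K' = 0.5*ln cosh(2*2K))."""
    return 0.5 * math.log(math.cosh(4.0 * k))

# Locate the nontrivial fixed point K* by bisection on f(K) = mk_step(K) - K.
lo, hi = 0.2, 0.6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mk_step(mid) - mid < 0.0:
        lo = mid
    else:
        hi = mid
k_star = 0.5 * (lo + hi)

# The relevant eigenvalue lambda = dK'/dK at K*, then nu = ln b / ln lambda.
lam = 2.0 * math.tanh(4.0 * k_star)   # derivative of mk_step at K*
nu = math.log(2.0) / math.log(lam)
print(k_star, lam, nu)  # lambda > 1: the temperature direction is relevant
```

The approximation gives ν ≈ 1.3, not the exact 2D Ising value ν = 1, but it shows the mechanism: the exponent is a property of the fixed point and its linearized flow, not of the microscopic details.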

Momentum space


Renormalization groups, in practice, come in two main "flavors". The Kadanoff picture explained above refers mainly to the so-called real-space RG.

Momentum-space RG, on the other hand, has a longer history despite its relative subtlety. It can be used for systems where the degrees of freedom can be cast in terms of the Fourier modes of a given field. The RG transformation proceeds by integrating out a certain set of high-momentum (large-wavenumber) modes. Since large wavenumbers are related to short length scales, the momentum-space RG produces an essentially analogous coarse-graining effect to that of real-space RG.

Momentum-space RG is usually performed on a perturbation expansion. The validity of such an expansion is predicated upon the actual physics of a system being close to that of a free field system. In this case, one may calculate observables by summing the leading terms in the expansion. This approach has proved successful for many theories, including most of particle physics, but fails for systems whose physics is very far from any free system, i.e., systems with strong correlations.

As an example of the physical meaning of RG in particle physics, consider an overview of charge renormalization in quantum electrodynamics (QED). Suppose we have a point positive charge of a certain true (or bare) magnitude. The electromagnetic field around it has a certain energy, and thus may produce some virtual electron–positron pairs (for example). Although virtual particles annihilate very quickly, during their short lives the electron will be attracted by the charge, and the positron will be repelled. Since this happens uniformly everywhere near the point charge, where its electric field is sufficiently strong, these pairs effectively create a screen around the charge when viewed from far away. The measured strength of the charge will depend on how close our measuring probe can approach the point charge, bypassing more of the screen of virtual particles the closer it gets. Hence the value of a certain coupling constant (here, the electric charge) acquires a dependence on the distance scale.

Momentum and length scales are related inversely, according to the de Broglie relation: the higher the energy or momentum scale we may reach, the lower the length scale we may probe and resolve. Therefore, practitioners of momentum-space RG sometimes speak of integrating out high momenta or high energies from their theories.
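
The inverse relation can be sketched numerically using ħc ≈ 197.327 MeV·fm:

```python
HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*femtometers

def probe_length_fm(energy_gev):
    """Length scale resolved by a probe of the given energy: l ~ hbar*c / E."""
    return HBAR_C_MEV_FM / (energy_gev * 1000.0)

print(probe_length_fm(0.001))  # ~197 fm at 1 MeV
print(probe_length_fm(200.0))  # ~1e-3 fm at LEP-era energies
```

Raising the energy by five orders of magnitude shortens the resolvable length by the same factor, which is why "high momentum" and "short distance" are used interchangeably.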

Exact renormalization group equations


An exact renormalization group equation (ERGE) is one that takes irrelevant couplings into account. There are several formulations.

The Wilson ERGE is the simplest conceptually, but is practically impossible to implement. Fourier transform into momentum space after Wick rotating into Euclidean space. Insist upon a hard momentum cutoff, p2 ≤ Λ2, so that the only degrees of freedom are those with momenta less than Λ. The partition function is

Z = ∫_{p2 ≤ Λ2} Dφ exp(−SΛ[φ]).

For any positive Λ′ less than Λ, define SΛ′ (a functional over field configurations φ whose Fourier transform has momentum support within p2 ≤ Λ′2) as

exp(−SΛ′[φ]) = ∫_{Λ′ ≤ p ≤ Λ} Dφ exp(−SΛ[φ]),

where the functional integral runs only over the Fourier modes with momenta between Λ′ and Λ.

If SΛ depends only on φ and not on derivatives of φ, the integral factorizes mode by mode; either way, since only modes of φ with momenta between Λ′ and Λ are integrated over, the left-hand side may still depend on modes of φ with support outside that range. Obviously,

Z = ∫_{p2 ≤ Λ′2} Dφ exp(−SΛ′[φ]),

so the partition function is reproduced with the lower cutoff.

In fact, this transformation is transitive: if you compute SΛ′ from SΛ and then compute SΛ″ from SΛ′, this gives you the same Wilsonian action as computing SΛ″ directly from SΛ.

The Polchinski ERGE involves a smooth UV regulator cutoff.[23] Basically, the idea is an improvement over the Wilson ERGE. Instead of a sharp momentum cutoff, it uses a smooth cutoff. Essentially, we suppress contributions from momenta greater than Λ heavily. The smoothness of the cutoff, however, allows us to derive a functional differential equation in the cutoff scale Λ. As in Wilson's approach, we have a different action functional for each cutoff energy scale Λ. Each of these actions is supposed to describe exactly the same model, which means that their partition functionals have to match exactly.

In other words (for a real scalar field; generalizations to other fields are obvious),

ZΛ = ∫ Dφ exp(−½ φ⋅RΛ⋅φ − Sint,Λ[φ]),

and ZΛ is really independent of Λ! We have used the condensed DeWitt notation here. We have also split the bare action SΛ into a quadratic kinetic part and an interacting part Sint,Λ. This split most certainly isn't clean. The "interacting" part can very well also contain quadratic kinetic terms. In fact, if there is any wave-function renormalization, it most certainly will. This can be somewhat reduced by introducing field rescalings. RΛ is a function of the momentum p, and the second term in the exponent is

½ ∫ d^dp/(2π)^d φ(−p) RΛ(p) φ(p)

when expanded.

When p2 ≪ Λ2, RΛ(p)/p2 is essentially 1. When p2 ≫ Λ2, RΛ(p)/p2 becomes very large and approaches infinity. RΛ(p)/p2 is always greater than or equal to 1 and is smooth. Basically, this leaves the fluctuations with momenta less than the cutoff Λ unaffected but heavily suppresses contributions from fluctuations with momenta greater than the cutoff. This is obviously a huge improvement over Wilson's sharp cutoff.

The condition that ZΛ be independent of Λ, d ZΛ/dΛ = 0, can be satisfied by (but not only by) an interacting action obeying Polchinski's flow equation,

∂Λ Sint,Λ = ½ ∫ d^dp/(2π)^d ∂Λ(RΛ(p)⁻¹) [ (δSint,Λ/δφ(p))(δSint,Λ/δφ(−p)) − δ²Sint,Λ/δφ(p)δφ(−p) ].

Jacques Distler claimed without proof that this ERGE is not correct nonperturbatively.[24]

The effective average action ERGE involves a smooth IR regulator cutoff. The idea is to take all fluctuations right up to an IR scale k into account. The effective average action (EAA) will be accurate for fluctuations with momenta larger than k. As the parameter k is lowered, the effective average action approaches the effective action, which includes all quantum and classical fluctuations. In contrast, for large k the effective average action is close to the "bare action". So, the effective average action interpolates between the "bare action" and the effective action.

For a real scalar field, one adds an IR cutoff term

ΔSk[φ] = ½ ∫ d^dp/(2π)^d φ(−p) Rk(p) φ(p)

to the action S, where Rk is a function of both k and p such that for p2 ≫ k2, Rk(p) is very tiny and approaches 0, while for p2 ≪ k2, Rk(p) stays of order k2. Rk is both smooth and nonnegative. Its large value for small momenta leads to a suppression of their contribution to the partition function, which is effectively the same thing as neglecting large-scale fluctuations.
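
These limits can be checked numerically for one common (assumed, not unique) regulator choice, Rk(p) = p2/(exp(p2/k2) − 1):

```python
import math

def r_k(p2, k2):
    """Exponential IR regulator often used with the effective average action:
    R_k(p) = p^2 / (exp(p^2/k^2) - 1). One common choice, not the only one."""
    return p2 / math.expm1(p2 / k2)

k2 = 1.0
print(r_k(1e-6, k2))   # p << k: R_k approaches k^2, suppressing IR modes
print(r_k(100.0, k2))  # p >> k: R_k is exponentially tiny, UV modes untouched
```

For small momenta the regulator acts like a mass of order k, freezing out large-scale fluctuations; for large momenta it vanishes exponentially, leaving the UV physics alone.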

One can use the condensed DeWitt notation, ½ φ⋅Rk⋅φ, for this IR regulator term.

So,

exp(Wk[J]) = ∫ Dφ exp(−S[φ] − ½ φ⋅Rk⋅φ + J⋅φ),

where J is the source field. The Legendre transform of Wk ordinarily gives the effective action. However, the action that we started off with is really S[φ] + ½ φ⋅Rk⋅φ, and so, to get the effective average action, we subtract off ½ φ⋅Rk⋅φ. In other words, the relation φ = δWk/δJ can be inverted to give Jk[φ], and we define the effective average action Γk as

Γk[φ] = −Wk[Jk[φ]] + Jk[φ]⋅φ − ½ φ⋅Rk⋅φ.

Hence,

∂k Γk[φ] = ½ Tr[(Γk(2)[φ] + Rk)⁻¹ ∂k Rk],

where Γk(2) denotes the second functional derivative of Γk. This is the ERGE, which is also known as the Wetterich equation.[25]

As shown by Morris the effective action Γk is in fact simply related to Polchinski's effective action Sint via a Legendre transform relation.[26]

As there are infinitely many choices of Rk, there are also infinitely many different interpolating ERGEs. Generalization to other fields like spinorial fields is straightforward.

Although the Polchinski ERGE and the effective average action ERGE look similar, they are based upon very different philosophies. In the effective average action ERGE, the bare action is left unchanged (and the UV cutoff scale—if there is one—is also left unchanged) but the IR contributions to the effective action are suppressed whereas in the Polchinski ERGE, the QFT is fixed once and for all but the "bare action" is varied at different energy scales to reproduce the prespecified model. Polchinski's version is certainly much closer to Wilson's idea in spirit. Note that one uses "bare actions" whereas the other uses effective (average) actions.

Renormalization group improvement of the effective potential


The renormalization group can also be used to compute effective potentials at orders higher than one loop. This kind of approach is particularly interesting for computing corrections to the Coleman–Weinberg[27] mechanism. To do so, one must write the renormalization group equation in terms of the effective potential. For a model with a single coupling λ, the equation takes the form

( μ ∂/∂μ + βλ ∂/∂λ + γ φ ∂/∂φ ) Veff(φ) = 0,

where βλ is the beta function of the coupling and γ is the field anomalous dimension.

In order to determine the effective potential, it is useful to write Veff as

Veff = φ4 S(λ, L),

where S is a power series in L = ln(φ2/μ2).

Using the above ansatz, it is possible to solve the renormalization group equation perturbatively and find the effective potential up to the desired order. A pedagogical explanation of this technique is given in reference [28].
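
As a minimal sketch of what RG improvement buys, the following resums the leading logarithms of a massless quartic theory by evaluating the running coupling at the scale μ = φ; the one-loop beta function βλ = 3λ2/(16π2) and the specific numbers are illustrative assumptions:

```python
import math

def lam_running(lam0, log_ratio):
    """One-loop running quartic coupling of massless lambda*phi^4 theory:
    beta_lambda = 3*lambda^2/(16*pi^2), integrated in closed form."""
    b = 3.0 / (16.0 * math.pi**2)
    return lam0 / (1.0 - b * lam0 * log_ratio)

def v_improved(lam0, phi, mu0):
    """RG-improved tree potential: evaluate lambda at the scale mu = phi,
    which resums the leading logarithms ln(phi/mu0)."""
    return lam_running(lam0, math.log(phi / mu0)) * phi**4 / 24.0

# For small logs the resummed coupling reproduces the fixed-order series:
lam0, phi, mu0 = 0.1, 1.05, 1.0
series = lam0 + 3.0 * lam0**2 / (16.0 * math.pi**2) * math.log(phi / mu0)
print(lam_running(lam0, math.log(phi / mu0)), series)
```

Expanding the closed form in λ0 reproduces the fixed-order logarithms term by term, while the unexpanded form remains sensible at field values where the logs are large and fixed-order perturbation theory breaks down.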


Remarks

  1. ^ Note that scale transformations are a strict subset of conformal transformations, in general, the latter including additional symmetry generators associated with special conformal transformations.
  2. ^ Early applications to quantum electrodynamics are discussed in the influential 1959 book The Theory of Quantized Fields by Nikolay Bogolyubov and Dmitry Shirkov.[8]
  3. ^ Although note that the RG exists independently of the infinities.
  4. ^ The regulator parameter Λ could ultimately be taken to be infinite – infinities reflect the pileup of contributions from an infinity of degrees of freedom at infinitely high energy scales.
  5. ^ Remarkably, the trace anomaly and the running coupling quantum mechanical procedures can themselves induce mass.
  6. ^ For strongly correlated systems, variational techniques are a better alternative.
  7. ^ A superb technical exposition is the classic article Zinn-Justin, Jean (2010). "Critical Phenomena: Field theoretical approach". Scholarpedia. 5 (5): 8346. Bibcode:2010SchpJ...5.8346Z. doi:10.4249/scholarpedia.8346. For example, for Ising-like systems with a Z2 symmetry or, more generally, for models with an O(N) symmetry, the Gaussian (free) fixed point is long-distance stable above space dimension four, marginally stable in dimension four, and unstable below dimension four. See Quantum triviality.

Citations

  1. ^ "Introduction to Scaling Laws". av8n.com. Archived from the original on 2014-06-21. Retrieved 2013-03-15.
  2. ^ Stueckelberg, E.C.G.; Petermann, A. (1953). "La renormalisation des constants dans la théorie de quanta". Helv. Phys. Acta (in French). 26: 499–520.
  3. ^ Gell-Mann, M.; Low, F. E. (1954). "Quantum Electrodynamics at Small Distances" (PDF). Physical Review. 95 (5): 1300–1312. Bibcode:1954PhRv...95.1300G. doi:10.1103/PhysRev.95.1300.
  4. ^ Curtright, T.L.; Zachos, C.K. (March 2011). "Renormalization Group Functional Equations". Physical Review D. 83 (6) 065019. arXiv:1010.5174. Bibcode:2011PhRvD..83f5019C. doi:10.1103/PhysRevD.83.065019. S2CID 119302913.
  5. ^ a b Callan, C.G. (1970). "Broken scale invariance in scalar field theory". Physical Review D. 2 (8): 1541–1547. Bibcode:1970PhRvD...2.1541C. doi:10.1103/PhysRevD.2.1541.
  6. ^ Schwartz, Matthew D. (2013-12-14). Quantum Field Theory and the Standard Model (1 ed.). Cambridge University Press. p. 314. doi:10.1017/9781139540940. ISBN 978-1-108-98503-1.
  7. ^ Fritzsch, Harald (2002). "Fundamental Constants at High Energy". Fortschritte der Physik. 50 (5–7): 518–524. arXiv:hep-ph/0201198. Bibcode:2002ForPh..50..518F. doi:10.1002/1521-3978(200205)50:5/7<518::AID-PROP518>3.0.CO;2-F. S2CID 18481179.
  8. ^ Bogoliubov, N.N.; Shirkov, D.V. (1959). The Theory of Quantized Fields. New York, NY: Interscience.
  9. ^ a b Kadanoff, Leo P. (1966). "Scaling laws for Ising models near Tc". Physics Physique Fizika. 2 (6): 263. doi:10.1103/PhysicsPhysiqueFizika.2.263.
  10. ^ Wilson, K.G. (1975). "The renormalization group: Critical phenomena and the Kondo problem". Rev. Mod. Phys. 47 (4): 773. Bibcode:1975RvMP...47..773W. doi:10.1103/RevModPhys.47.773.
  11. ^ Wilson, K.G. (1971). "Renormalization group and critical phenomena. I. Renormalization group and the Kadanoff scaling picture". Physical Review B. 4 (9): 3174–3183. Bibcode:1971PhRvB...4.3174W. doi:10.1103/PhysRevB.4.3174.
  12. ^ Wilson, K. (1971). "Renormalization group and critical phenomena. II. Phase-space cell analysis of critical behavior". Physical Review B. 4 (9): 3184–3205. Bibcode:1971PhRvB...4.3184W. doi:10.1103/PhysRevB.4.3184.
  13. ^ Wilson, K.G.; Fisher, M. (1972). "Critical exponents in 3.99 dimensions". Physical Review Letters. 28 (4): 240. Bibcode:1972PhRvL..28..240W. doi:10.1103/physrevlett.28.240.
  14. ^ Wilson, Kenneth G. "Wilson's Nobel Prize address" (PDF). NobelPrize.org.
  15. ^ Symanzik, K. (1970). "Small distance behaviour in field theory and power counting". Communications in Mathematical Physics. 18 (3): 227–246. Bibcode:1970CMaPh..18..227S. doi:10.1007/BF01649434. S2CID 76654566.
  16. ^ Gross, D.J.; Wilczek, F. (1973). "Ultraviolet behavior of non-Abelian gauge theories". Physical Review Letters. 30 (26): 1343–1346. Bibcode:1973PhRvL..30.1343G. doi:10.1103/PhysRevLett.30.1343.
  17. ^ Politzer, H.D. (1973). "Reliable perturbative results for strong interactions". Physical Review Letters. 30 (26): 1346–1349. Bibcode:1973PhRvL..30.1346P. doi:10.1103/PhysRevLett.30.1346.
  18. ^ Pendleton, Brian; Ross, Graham (1981). "Mass and mixing angle predictions from infrared fixed points". Physics Letters B. 98 (4): 291–294. Bibcode:1981PhLB...98..291P. doi:10.1016/0370-2693(81)90017-4.
  19. ^ Hill, Christopher T. (1981). "Quark and lepton masses from renormalization group fixed points". Physical Review D. 24 (3): 691–703. Bibcode:1981PhRvD..24..691H. doi:10.1103/PhysRevD.24.691.
  20. ^ Shankar, R. (1994). "Renormalization-group approach to interacting fermions". Reviews of Modern Physics. 66 (1): 129–192. arXiv:cond-mat/9307009. Bibcode:1994RvMP...66..129S. doi:10.1103/RevModPhys.66.129.
  21. ^ Adzhemyan, L.Ts.; Kim, T.L.; Kompaniets, M.V.; Sazonov, V.K. (August 2015). "Renormalization group in the infinite-dimensional turbulence: determination of the RG-functions without renormalization constants". Nanosystems: Physics, Chemistry, Mathematics. 6 (4): 461. doi:10.17586/2220-8054-2015-6-4-461-469.
  22. ^ Aizenman, M. (1981). "Proof of the triviality of Φ⁴_d field theory and some mean-field features of Ising models for d > 4". Physical Review Letters. 47 (1): 1–4. Bibcode:1981PhRvL..47....1A. doi:10.1103/PhysRevLett.47.1.
  23. ^ Polchinski, Joseph (1984). "Renormalization and Effective Lagrangians". Nucl. Phys. B. 231 (2): 269. Bibcode:1984NuPhB.231..269P. doi:10.1016/0550-3213(84)90287-6.
  24. ^ Distler, Jacques. "000648.html". golem.ph.utexas.edu.
  25. ^ Wetterich, Christof (1993). "Exact evolution equations for the effective potential". Phys. Lett. B. 301 (1): 90. arXiv:1710.05815. Bibcode:1993PhLB..301...90W. doi:10.1016/0370-2693(93)90726-X.
  26. ^ Morris, Tim R. (1994). "The Exact renormalization group and approximate solutions". Int. J. Mod. Phys. A. 9 (14): 2411. arXiv:hep-ph/9308265. Bibcode:1994IJMPA...9.2411M. doi:10.1142/S0217751X94000972. S2CID 15749927.
  27. ^ Coleman, Sidney; Weinberg, Erick (1973-03-15). "Radiative Corrections as the Origin of Spontaneous Symmetry Breaking". Physical Review D. 7 (6): 1888–1910. arXiv:hep-th/0507214. Bibcode:1973PhRvD...7.1888C. doi:10.1103/PhysRevD.7.1888. ISSN 0556-2821. S2CID 6898114.
  28. ^ Souza, Huan; Bevilaqua, L. Ibiapina; Lehum, A. C. (2020-08-05). "Renormalization group improvement of the effective potential in six dimensions". Physical Review D. 102 (4): 045004. arXiv:2005.03973. Bibcode:2020PhRvD.102d5004S. doi:10.1103/PhysRevD.102.045004.

References

from Grokipedia
The renormalization group (RG) is a fundamental framework in theoretical physics and statistical mechanics that elucidates how the effective behavior of a physical system changes across different length scales or energy scales.[1] It involves iteratively integrating out short-wavelength (high-energy) degrees of freedom to derive coarse-grained effective theories, revealing scale-invariant properties, universality classes near critical points, and the scale-dependent evolution of coupling constants via beta functions.[2] This approach, pioneered by Kenneth Wilson, ensures that physical observables remain independent of ultraviolet cutoffs and regularization schemes, providing insights into phenomena like phase transitions, quantum field theories, and condensed matter systems.[3]

Historical Development

Origins in Quantum Electrodynamics

The development of renormalization in quantum electrodynamics (QED) emerged from efforts to address ultraviolet divergences that plagued perturbative calculations of quantum field theories in the 1930s and 1940s. These infinities arose in higher-order Feynman diagrams, particularly in the self-energy of the electron and vacuum polarization effects, rendering predictions ill-defined without a systematic procedure to handle them. Ernst Stueckelberg anticipated key aspects of renormalization in his 1934 work on a manifestly covariant perturbation theory for the Dirac electron, where he employed four-dimensional Fourier transforms to ensure relativistic invariance in processes like Compton scattering, laying groundwork for managing divergent integrals through redefined parameters.[4] The experimental discovery of the Lamb shift in 1947 provided a crucial impetus, revealing discrepancies between Dirac theory predictions and atomic spectra that demanded refined QED calculations. Hans Bethe promptly addressed this by introducing mass renormalization, estimating the shift through a non-relativistic approximation that subtracted infinite self-energy contributions, effectively redefining the observed electron mass in terms of bare parameters. Julian Schwinger advanced this in 1948 by computing the electron's anomalous magnetic moment as $\alpha / 2\pi$ using variational principles and proper-time methods, demonstrating that divergences could be absorbed without altering the finite result. 
Freeman Dyson solidified the framework in 1949 with a comprehensive perturbative analysis, proving that mass, charge, and field renormalizations suffice to eliminate infinities to all orders in QED, via multiplicative redefinitions such as the renormalized charge $e = Z_3^{1/2} e_0$, with $Z_3$ the photon wave-function renormalization constant.[5] These efforts highlighted the scale-dependent nature of renormalized parameters, as ultraviolet divergences implied that physical quantities vary with the energy scale $\mu$ of observation. In 1954, Murray Gell-Mann and Francis Low formalized this through the concept of a running coupling constant, deriving the logarithmic evolution of the fine-structure constant $\alpha(\mu^2) = \alpha + \frac{\alpha^2}{3\pi} \ln(\mu^2 / m^2)$, where $m$ is the electron mass, capturing how vacuum polarization screens the bare charge at different scales. This running was encapsulated in the beta function $\beta(g) = \mu \frac{dg}{d\mu}$, which for QED at one loop yields $\beta(\alpha) = \frac{\alpha^2}{3\pi} > 0$, indicating that the coupling strengthens at higher energies. The Callan-Symanzik equation, developed independently by Curtis Callan and Kurt Symanzik in 1970, generalized this to an evolution equation for Green's functions, $\left( \mu \frac{\partial}{\partial \mu} + \beta(g) \frac{\partial}{\partial g} + n \gamma \right) \Gamma = 0$, where $\gamma$ is the anomalous dimension, providing a differential framework for how couplings and fields transform under scale changes in renormalized perturbation theory.[6] Despite these advances, early renormalization in QED remained confined to perturbative expansions around the weak-coupling regime, relying on asymptotic series without a non-perturbative group structure to unify scale transformations across theories. 
Stueckelberg and André Petermann extended the idea in 1953 by positing a renormalization group of transformations among equivalent perturbative definitions, but this was still tied to QED's diagrammatic methods rather than a broader invariance principle. These limitations underscored the need for a more general formulation to interpret fixed points, such as the Gaussian ultraviolet fixed point in QED, beyond order-by-order calculations.
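The leading-log running described by the Gell-Mann–Low result can be reproduced numerically. Below is a minimal Python sketch of the one-loop flow $d\alpha/dt = \alpha^2/(3\pi)$ with $t = \ln(\mu^2/m^2)$; it keeps only the electron loop, so the scales and the resulting high-energy value are illustrative rather than a full Standard Model running.

```python
import math

def alpha_running(alpha0, t, steps=100_000):
    """Euler-integrate the one-loop QED flow d(alpha)/dt = alpha^2/(3*pi),
    where t = ln(mu^2/m^2) and alpha0 is the coupling at mu = m."""
    a = alpha0
    dt = t / steps
    for _ in range(steps):
        a += dt * a * a / (3 * math.pi)
    return a

def alpha_resummed(alpha0, t):
    """Leading-log resummation of the same flow:
    1/alpha(t) = 1/alpha0 - t/(3*pi)."""
    return alpha0 / (1 - alpha0 * t / (3 * math.pi))

alpha0 = 1 / 137.035999                  # fine-structure constant at the electron scale
t = math.log((91.19e9 / 0.511e6) ** 2)   # illustrative run from m_e up to ~M_Z (in eV)
print(alpha_resummed(alpha0, t))         # coupling strengthens toward higher energies
```

The Euler integration and the closed-form resummation agree closely, illustrating how the differential beta function encodes the same physics as the summed leading logarithms.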

Block Spin Transformations in Statistical Mechanics

In the mid-1960s, efforts to understand critical phenomena in statistical mechanics led to the development of block spin transformations as a means to connect microscopic spin models to macroscopic behavior. Leo P. Kadanoff introduced this real-space coarse-graining approach in 1966, motivated by discrepancies between mean-field predictions and experimental or exact results for critical exponents in systems like the Ising model.[7] His idea emphasized the separation of scales near critical points, where short-wavelength fluctuations could be averaged out to reveal effective long-wavelength physics, predating Kenneth Wilson's systematic formulation by several years. The core procedure involves partitioning the lattice into blocks of linear size $ b > 1 $, typically $ b^d $ sites in $ d $ dimensions, and replacing the spins within each block with a single effective "block spin" obtained by averaging the original spins. This decimation reduces the degrees of freedom while preserving the partition function's singular behavior relevant to critical phenomena. Under rescaling by a factor $ b $, the blocking is constructed to preserve the partition function, $ Z' = Z $ up to non-singular factors, so that the singular part of the free-energy density transforms as $ f(\tau, h) = b^{-d} f(\tau', h') $, invariant in form but with transformed couplings. The new Hamiltonian $ H' $ features rescaled couplings, for example, the reduced temperature $ \tau' = b^y \tau $ and magnetic field $ h' = b^x h $, where $ y $ and $ x $ are scaling exponents, leading to iterative mappings of the parameters.[7][8] An illustrative application is to the two-dimensional Ising model with nearest-neighbor interactions. 
Starting from the original Hamiltonian with short-range ferromagnetic couplings, the block spin transformation—such as for $ b=2 $ on a square lattice—generates an effective model where interactions extend to next-nearest neighbors or further, effectively turning short-range couplings into longer-range ones after one or more iterations. This demonstrates how coarse-graining near the critical point $ K_c $ (where $ K_c = R(K_c) $ under the recursion map $ R $) captures the emergence of scale-invariant behavior, with fixed points dictating universal critical exponents like $ \nu = 1/y $.[8] These 1960s developments highlighted the role of scale transformations in explaining universality without perturbative continuum methods.[7]
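Kadanoff's blocking step can be illustrated directly on a spin configuration. The following Python sketch coarse-grains a $b=2$ square-lattice Ising configuration by majority rule; the majority rule and random tie-breaking are common illustrative choices rather than part of a unique prescription.

```python
import numpy as np

def block_spin(spins, b=2):
    """Coarse-grain an Ising configuration (entries +/-1) by majority rule
    over non-overlapping b x b blocks; ties are broken at random."""
    L = spins.shape[0]
    assert L % b == 0
    # Reshape so each b x b block can be summed in one vectorized step.
    blocks = spins.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    signs = np.sign(blocks)
    ties = blocks == 0
    rng = np.random.default_rng(0)
    signs[ties] = rng.choice([-1, 1], size=ties.sum())
    return signs.astype(int)

# A fully ordered configuration stays ordered under blocking, as expected
# near the zero-temperature (ordered) fixed point.
config = np.ones((8, 8), dtype=int)
print(block_spin(config))  # 4 x 4 array of +1
```

Iterating this map on equilibrium configurations at different temperatures is the real-space picture behind the flow toward the ordered, disordered, or critical fixed points.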

Wilsonian Reformulation

In the early 1970s, Kenneth Wilson developed a reformulation of the renormalization group (RG) that provided a non-perturbative framework unifying quantum field theory (QFT) and statistical mechanics. In his seminal 1971 papers, Wilson introduced the RG as a semigroup of transformations acting on the action functional of a theory, allowing for the systematic integration of high-momentum modes to generate effective theories at coarser scales.[9] This approach built upon earlier ideas in statistical mechanics but extended them to continuum field theories by emphasizing the flow of couplings under scale transformations. A comprehensive review of these ideas appeared in 1974, co-authored with John Kogut, which formalized the Wilsonian RG as a tool for analyzing critical behavior and phase transitions.[10] The core of the Wilsonian reformulation lies in the iterative process of coarse-graining via momentum-space integration. Starting with a theory cutoff at momentum scale Λ, one integrates out fluctuations in a thin shell between Λ/b and Λ, where b > 1 is the rescaling factor. The remaining low-momentum modes are then rescaled by b to restore the original cutoff Λ, leading to an effective action that incorporates the effects of the integrated modes. This procedure generates a flow equation for the effective potential U(φ, t), where φ represents the field and t = ln b parametrizes the RG "time" or scale evolution. 
For scalar theories like φ⁴, the flow captures how interactions evolve, enabling the study of fixed points where the theory becomes scale-invariant.[10] This momentum-shell method generalizes discrete block-spin transformations from lattice models into a continuous framework suitable for QFT.[11] Wilson's contributions earned him the 1982 Nobel Prize in Physics "for his theory of the renormalization group and its applications to critical phenomena," recognizing the profound impact on understanding phase transitions and scaling laws.[12] A key insight from this framework is its resolution of the triviality problem in φ⁴ theory in four dimensions. At the Gaussian fixed point, the quartic coupling is a marginally irrelevant operator, meaning its influence diminishes under RG flow toward the infrared, causing the continuum limit to be a free theory regardless of bare interactions. This occurs because higher-order operators become increasingly irrelevant, suppressing non-trivial interactions at long distances.[10][13] The scaling behavior of couplings is quantified by the Wilsonian beta function. Near fixed points, the linearised RG transformation for a general coupling $g_i$ associated with an operator is $g_i' \approx b^{y_i} g_i$, where $d$ is the spacetime dimension, $y_i$ is the scaling eigenvalue, and higher-order terms contribute to the full flow; the associated continuous beta function is $\beta(g_i) \approx y_i g_i + \cdots$. The sign of $y_i$ determines relevance: positive $y_i$ indicates a relevant operator that grows under coarse-graining, while negative $y_i$ signals irrelevance. Near fixed points, this reveals the stability structure, with the Gaussian fixed point in $d=4$ featuring the $\phi^4$ term as marginally irrelevant ($y_u = 4 - d = 0$ at the upper critical dimension, but effectively irrelevant due to the positive quadratic term in the beta function). 
This formulation underpins the classification of operators and the prediction of universal critical exponents.[11][10]

Connections to Conformal Symmetry

The connections between the renormalization group (RG) and conformal symmetry were first elucidated in the 1970s by Alexander Polyakov, who demonstrated that correlation functions at critical points exhibit invariance under the full conformal group, linking scale invariance emerging from RG fixed points to enhanced conformal symmetry in quantum field theories.[14] This insight laid the foundation for understanding how RG flows approach conformally invariant theories at criticality. In the 1980s, John Cardy extended these ideas to systems with boundaries, developing boundary conformal field theory (BCFT) to describe surface critical behavior while preserving bulk conformal invariance, which proved essential for applications in statistical mechanics and string theory. A pivotal link occurs at RG fixed points, where the beta functions vanish, rendering the theory scale-invariant; in spacetime dimensions $d > 2$, this scale invariance typically enhances to full conformal symmetry provided there are no relevant dimensionful operators or other symmetry-breaking terms that could introduce a scale.[15] In two dimensions, this enhancement is exact for unitary theories, as exemplified by the minimal models of conformal field theory, which represent RG fixed points parameterized by coprime integers $(p, q)$ and feature a finite spectrum of primary operators with central charge $c = 1 - 6(p - q)^2 / (p q)$, capturing universal critical behavior in systems like the Ising model.[16] The trace anomaly of the stress-energy tensor further bridges RG dynamics and conformal symmetry, with the trace $T^\mu_\mu$ proportional to the beta functions times the corresponding operators in the Lagrangian, quantifying the breaking of conformal invariance away from fixed points. This relation manifests in the anomalous Ward identity for the dilatation current $D^\mu$, which couples scale transformations to RG flows:
\partial_\mu D^\mu = \beta(g) \frac{\partial \mathcal{L}}{\partial g}
Here, the left side represents the divergence of the dilatation current, while the right side encodes the running of the coupling $g$ under RG transformations, illustrating how conformal symmetry is restored precisely when $\beta(g) = 0$.

Fundamental Concepts

Coarse-Graining and Scale Transformations

The renormalization group (RG) is conceptualized as a transformation $ T_b $, with $ b > 1 $ denoting the linear rescaling factor, that maps an original Hamiltonian $ H $ to a renormalized Hamiltonian $ H' = T_b(H) $. This mapping preserves the partition function and correlation functions at distances much larger than the microscopic scale, up to overall rescalings, thereby maintaining the physical content at long wavelengths while altering the description at shorter scales. The transformation effectively integrates out degrees of freedom with momenta between the reduced cutoff $ \Lambda / b $ and the original ultraviolet cutoff $ \Lambda $, leading to an effective theory valid on the coarser lattice spacing $ b a $, with $ a $ the microscopic lattice constant.[17] Central to the RG procedure is coarse-graining, which reduces the number of degrees of freedom by averaging over microscopic configurations or, in quantum formulations, by performing a partial trace over short-scale modes. In classical statistical mechanics, this often involves summing the Boltzmann weights over subsets of variables, yielding an effective Hamiltonian for the remaining variables that incorporates emergent interactions. For example, in spin systems, one might average the energy contributions within spatial blocks to define collective variables, ensuring the long-distance thermodynamics remains unchanged. An early and influential implementation of this averaging appears in Kadanoff's block spin approach for the Ising model. A concrete illustration occurs in lattice spin models, where an initial Hamiltonian featuring only nearest-neighbor couplings, such as $ H = -J \sum_{\langle i j \rangle} \sigma_i \sigma_j $ for Ising spins $ \sigma = \pm 1 $, undergoes coarse-graining to produce $ H' $ with additional longer-range terms. 
During the block averaging or decimation process, correlations between spins in adjacent blocks induce effective interactions that extend beyond immediate neighbors, potentially including next-nearest or further couplings proportional to powers of the original $ J $, thus enriching the interaction structure while preserving the overall scale invariance at criticality.[17] The RG transformations exhibit the semigroup property $ T_{b_1} \circ T_{b_2} = T_{b_1 b_2} $ for $ b_1, b_2 > 1 $, meaning that applying successive rescalings is equivalent to a single transformation at the composite scale. This compositional rule underpins the iterative nature of RG, enabling a hierarchical description of the system across arbitrarily large scales without inconsistencies in the flow of parameters.[18]
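The semigroup property can be checked exactly in the one-dimensional Ising chain, where decimation yields the standard closed recursion $\tanh K' = (\tanh K)^b$. A short Python sketch (the coupling value is arbitrary):

```python
import math

def decimate(K, b):
    """Exact decimation of the 1D Ising chain: tracing out all but every
    b-th spin maps the coupling K to K' with tanh K' = (tanh K)^b."""
    return math.atanh(math.tanh(K) ** b)

K = 1.3
# Semigroup property: T_2 composed with T_2 equals T_4.
print(decimate(decimate(K, 2), 2), decimate(K, 4))  # the two values coincide
```

Because $(\tanh K)^{b_1 b_2} = ((\tanh K)^{b_2})^{b_1}$, successive decimations compose exactly as $ T_{b_1} \circ T_{b_2} = T_{b_1 b_2} $, and the coupling flows monotonically toward the disordered fixed point $K = 0$, consistent with the absence of a finite-temperature transition in one dimension.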

Fixed Points and RG Flows

In the renormalization group (RG) framework, fixed points represent special configurations of coupling constants where the theory remains invariant under scale transformations, meaning the beta functions vanish and the couplings do not evolve with the energy scale.[19] These fixed points dictate the long-distance behavior of physical systems, serving as attractors or repellers in the space of possible theories. Ultraviolet (UV) fixed points characterize the high-energy completion of a theory, where interactions become scale-invariant at short distances, often corresponding to asymptotically free or conformal behaviors in quantum field theories.[20] In contrast, infrared (IR) fixed points describe the low-energy effective theories, governing the emergence of phases and critical phenomena at long distances, such as in condensed matter systems near phase transitions.[20] RG flows describe the trajectories of coupling constants $ g_i $ as the scale parameter $ l $ is varied, with increasing $ l $ corresponding to flowing toward the IR, parameterized by $ l = \ln(b) $, where $ b $ is the rescaling factor. The evolution is governed by the flow equations $ \frac{dg_i}{dl} = \beta_i(\mathbf{g}) $, where $ \beta_i $ are the beta functions encoding how interactions change under coarse-graining. 
Fixed points occur at values $ \mathbf{g}^* $ where $ \beta_i(\mathbf{g}^*) = 0 $, separating basins of attraction for different physical phases; for instance, flows originating from microscopic Hamiltonians typically converge to an IR fixed point that determines macroscopic properties like correlation lengths.[19] The Gaussian fixed point, located at $ \mathbf{g}^* = 0 $, corresponds to free-field theories without interactions and is stable in the IR for dimensions $ d > 4 $, reflecting mean-field behavior above the upper critical dimension.[20] Non-trivial fixed points, such as the Wilson-Fisher fixed point in $ \phi^4 $ theory for $ 2 < d < 4 $, indicate interacting theories with non-mean-field critical exponents and arise from perturbative expansions in $ \epsilon = 4 - d $.[20] Near a fixed point $ \mathbf{g}^* $, the flows can be linearized by expanding the beta functions: $ \delta g_i = g_i - g_i^* $, leading to $ \frac{d \delta g_i}{dl} = \sum_j y_{ij} \delta g_j $, where $ y_{ij} = \left. \frac{\partial \beta_i}{\partial g_j} \right|_{\mathbf{g}^*} $ form the stability matrix with eigenvalues $ y_k $.[20] The real parts of these eigenvalues determine the rates at which trajectories approach or depart from the fixed point along the RG flow; negative eigenvalues indicate directions attracting to the fixed point in the IR (or repelling in the UV), corresponding to irrelevant operators, while positive ones signify repulsion in the IR (attraction in the UV), corresponding to relevant operators.[19] This linearization reveals the structure of the phase space, with the Gaussian fixed point exhibiting instabilities below four dimensions due to relevant perturbations that drive flows toward non-trivial fixed points.[20] Coarse-graining procedures generate these flows by successively integrating out short-wavelength modes, mapping the theory to an effective description at coarser scales.[19]
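These flow equations become concrete at one loop for single-component $\phi^4$ theory, where in a common normalization $du/dl = \epsilon u - \frac{3}{16\pi^2} u^2$ with $l$ increasing toward the IR. The Python sketch below (the value of $\epsilon$ is illustrative) locates the Wilson-Fisher zero by Newton iteration and reads off the stability eigenvalue:

```python
import math

EPS = 0.5                      # epsilon = 4 - d, an illustrative perturbative value
C = 3 / (16 * math.pi ** 2)    # one-loop coefficient for one-component phi^4

def beta(u):
    """One-loop flow du/dl = EPS*u - C*u**2, with l increasing toward the IR."""
    return EPS * u - C * u * u

def beta_prime(u):
    """Derivative of beta: the 1x1 stability matrix at a fixed point."""
    return EPS - 2 * C * u

u = 1.5 * EPS / C              # seed Newton away from the Gaussian zero at u = 0
for _ in range(50):
    u -= beta(u) / beta_prime(u)

u_star = EPS / C               # analytic Wilson-Fisher value, 16*pi^2*EPS/3
print(u, u_star, beta_prime(u_star))   # eigenvalue -EPS: IR-attractive direction
```

The stability eigenvalue at the Wilson-Fisher point is $-\epsilon < 0$ (IR-attractive along $u$), while at the Gaussian point it is $+\epsilon > 0$, reproducing the statement that below four dimensions the Gaussian fixed point is unstable and flows are driven to the non-trivial one.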

Relevant, Irrelevant, and Marginal Operators

In the renormalization group (RG) framework, perturbations to a fixed-point Hamiltonian or action are classified according to their behavior under scale transformations, determined by the eigenvalues $ y_i $ of the linearized RG transformation matrix around the fixed point. These eigenvalues govern how the couplings associated with operators evolve along RG trajectories. Relevant operators correspond to $ y_i > 0 $, where the couplings grow under coarse-graining toward the infrared (IR), destabilizing the fixed point and driving the system away from criticality.[21] Irrelevant operators have $ y_i < 0 $, causing their couplings to decay in the IR, rendering them insignificant for long-distance physics.[21] Marginal operators feature $ y_i = 0 $, resulting in scale-invariant behavior to linear order but typically leading to logarithmic corrections from higher-order terms in the beta function.[13] The classification is intimately tied to the scaling dimensions of the operators. For an operator $ \mathcal{O}_i $ in the effective action, the scaling dimension $ \Delta_i $ relates to the RG eigenvalue via
\Delta_i = d - y_i,
where $ d $ is the spacetime dimension.[13] Thus, relevant operators satisfy $ \Delta_i < d $ (or $ y_i > 0 $), irrelevant ones have $ \Delta_i > d $ (or $ y_i < 0 $), and marginal operators obey $ \Delta_i = d $ (or $ y_i = 0 $).[13] These $ y_i $ emerge as eigenvalues from linearizing the RG beta functions $ \frac{dg_i}{dl} = \beta_i(g) \approx y_i g_i $ near the fixed point, with $ l $ the logarithmic RG scale parameter increasing toward the IR.[21] A canonical example is the mass term in $ \phi^4 $ theory, given by the operator $ m^2 \phi^2 / 2 $ in the action $ S = \int d^d x \left[ \frac{1}{2} (\partial \phi)^2 + \frac{m^2}{2} \phi^2 + \frac{\lambda}{4!} \phi^4 \right] $. In the Gaussian fixed-point theory (free scalar field), the scaling dimension is $ \Delta_{\phi^2} = d - 2 $, yielding $ y = 2 > 0 $ and classifying it as relevant.[22] This relevance implies that even small positive $ m^2 $ grows under RG flow, driving the system toward the disordered (massive) phase away from the critical point at $ m^2 = 0 $.[22] Operators sharing the same relevant exponents $ y_i $ belong to the same universality class, ensuring that long-distance critical properties remain invariant under changes to irrelevant microscopic details. This insensitivity to irrelevant operators underscores the predictive power of RG analysis, focusing solely on the finite number of relevant directions that dictate phase transitions and scaling laws.[21]

Mathematical Frameworks

Momentum Space Formulation

In the momentum space formulation of the renormalization group (RG) in quantum field theory (QFT), an ultraviolet (UV) cutoff Λ is imposed on the momenta to regulate divergences, effectively defining a theory valid up to this high-energy scale.[20] This approach, developed in the context of perturbative QFT, proceeds by iteratively integrating out high-momentum modes in thin spherical shells around the cutoff, contrasting with real-space methods that operate on discrete spatial blocks.[20] The core step involves selecting a rescaling factor b > 1 and integrating out the "fast" modes with momenta k satisfying Λ/b < |k| < Λ, leaving the "slow" modes with |k| < Λ/b.[23] This shell integration updates the effective action through a path integral over the fast fields, generating corrections to the low-energy effective Lagrangian via the relation $ e^{-S_\mathrm{eff}[\phi_<]} = \int D\phi_> \, e^{-S[\phi_<, \phi_>]} $, where the integral over fast fields $\phi_>$ produces loop contributions confined to the shell. 
After integration, momenta are rescaled as k' = b k (and similarly for coordinates x' = x / b and fields φ' = ζ φ, with ζ chosen to preserve the kinetic term), restoring the cutoff to Λ and mapping the theory to an equivalent one at a coarser scale.[20] This process ensures momentum conservation, as reducible Feynman diagrams with internal lines confined to the shell and external legs in the slow sector vanish due to the orthogonality of momentum regions.[23] For fermionic fields, the procedure follows analogously via shell decimation, where fast fermionic modes near the Fermi surface (with |ε(k)| between Λ/b and Λ, ε being the energy relative to the Fermi energy) are integrated out using Grassmann path integrals, updating the effective four-fermion interactions while preserving the quadratic free action form under rescaling.[24] Unlike real-space coarse-graining, which discretely averages over local blocks and suits lattice models, the momentum space method employs continuous logarithmic scales, making it particularly suited for perturbative expansions in continuum QFT where momentum is a natural variable.[20] This discrete shell-by-shell iteration provides the foundation for analyzing RG flows toward infrared fixed points.[20]
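A single shell integration can be carried out explicitly. The Python sketch below evaluates the $d=3$ one-loop tadpole integral restricted to the shell $\Lambda/b < |k| < \Lambda$ (in $\phi^4$ theory this integral, times $\lambda/2$, is the shell contribution to the mass correction); the quadrature is checked against the closed form, and all parameter values are illustrative.

```python
import math

def shell_integral(Lam, b, m, n=100_000):
    """Momentum-shell integral in d = 3:
    I = (1/(2*pi^2)) * int_{Lam/b}^{Lam} dk k^2 / (k^2 + m^2),
    i.e. int d^3k/(2*pi)^3 1/(k^2+m^2) restricted to Lam/b < |k| < Lam,
    evaluated with the midpoint rule."""
    lo, hi = Lam / b, Lam
    h = (hi - lo) / n
    s = 0.0
    for i in range(n):
        k = lo + (i + 0.5) * h
        s += k * k / (k * k + m * m)
    return s * h / (2 * math.pi ** 2)

def shell_exact(Lam, b, m):
    """Closed form via int k^2/(k^2+m^2) dk = k - m*atan(k/m)."""
    f = lambda k: k - m * math.atan(k / m)
    return (f(Lam) - f(Lam / b)) / (2 * math.pi ** 2)

print(shell_integral(1.0, 2.0, 0.1), shell_exact(1.0, 2.0, 0.1))
```

Because the integrand is supported only on the shell, the result is finite for any $b$; iterating such shell integrals while rescaling $k' = bk$ after each step is precisely the discrete momentum-shell procedure described above.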

Wilsonian Effective Action

The Wilsonian effective action provides a scale-dependent description of quantum field theories by systematically integrating out high-momentum fluctuations above a cutoff scale $k$, resulting in an effective theory valid for physics below that scale. In this framework, the action $S_k[\phi]$ or more commonly the effective average action $\Gamma_k[\phi]$ encodes the dynamics after coarse-graining, with the cutoff $k$ playing the role of the renormalization scale. As $k$ is lowered from an ultraviolet (UV) cutoff $\Lambda$ to zero, $\Gamma_k[\phi]$ flows to the full one-particle irreducible (1PI) effective action $\Gamma[\Phi]$, which includes all quantum fluctuations and generates the physical correlation functions. This approach, rooted in Wilson's momentum-shell integration, generalizes perturbative renormalization to non-perturbative regimes by treating the action as a functional that evolves continuously with scale.[25] The evolution of the Wilsonian effective action is governed by a functional renormalization group (RG) equation, which describes the differential flow under changes in the scale $k$. A key form of this equation, incorporating an infrared (IR) regulator to suppress low-momentum modes, is
\partial_t \Gamma_k = \frac{1}{2} \mathrm{Tr} \left[ (\Gamma_k^{(2)} + R_k)^{-1} \partial_t R_k \right],
where $t = \ln(k/\Lambda)$, $\Gamma_k^{(2)}$ is the second functional derivative of $\Gamma_k$ (the Hessian), and $R_k(p)$ is the regulator function that vanishes for momenta $|p| \gg k$ but suppresses modes with $|p| < k$, ensuring the trace is finite and the flow captures only the contribution from the shell around $k$. This equation, a variant inspired by Polchinski's exact RG formulation, allows for the systematic inclusion of quantum corrections across all scales without relying on perturbation theory.[25][26] The relation between the Wilsonian action and the full 1PI effective action involves a Legendre transform from the generating functional $W_k[J]$ of connected correlators to $\Gamma_k[\phi]$, where the expectation value $\phi = \delta W_k / \delta J$ serves as the variable conjugate to the external source $J$. At $k=0$, with $R_k=0$, this yields the standard effective action $\Gamma[\Phi]$ satisfying the same Legendre relation without scale dependence. This setup handles non-perturbative effects, such as phase transitions and bound-state formation, by solving the flow equation numerically or approximately for the full functional. It has been instrumental in searches for asymptotic safety in quantum gravity and gauge theories, where UV fixed points ensure renormalizability through relevant operators only.[25]

Exact Renormalization Group Equations

The exact renormalization group equations provide a framework for describing the scale dependence of the effective average action $\Gamma_k[\Phi]$ in quantum field theory, where $k$ is an infrared cutoff scale that interpolates between microscopic and macroscopic physics. These equations capture the full non-perturbative renormalization group flow without relying on perturbative expansions, allowing for the study of critical phenomena and quantum effects across all scales. In 1993, Christof Wetterich derived a central exact evolution equation for $\Gamma_k$, which governs its dependence on the renormalization scale. The equation takes the form
\partial_t \Gamma_k[\Phi] = \frac{1}{2} \mathrm{STr} \left[ \partial_t R_k \left( \Gamma_k^{(2)} + R_k \right)^{-1} \right],
where $t = \ln(k/\Lambda)$ with $\Lambda$ the ultraviolet cutoff, $\mathrm{STr}$ denotes the supertrace over field components (including a minus sign for fermions), $R_k$ is a regulator function suppressing low-momentum modes, and $\Gamma_k^{(2)}$ is the second functional derivative of $\Gamma_k$ with respect to the fields $\Phi$. This flow equation originates from the Wilsonian effective action at the ultraviolet scale $k = \Lambda$ and evolves it down to the full effective action at $k \to 0$.[27]
To solve the Wetterich equation numerically, the local potential approximation (LPA) is commonly employed, where the effective action is truncated to $\Gamma_k[\Phi] = \int d^dx \left[ \frac{1}{2} (\partial \Phi)^2 + U_k(\Phi) \right]$, focusing on the scale-dependent potential $U_k$ while neglecting higher-derivative terms. This approximation simplifies the functional differential equation into a partial differential equation for $U_k$, enabling efficient computational studies of fixed points and flow trajectories in scalar field theories. The Wetterich equation is readily extended to theories involving fermions and gauge fields by appropriately defining the regulator $R_k$ and supertrace, allowing for non-perturbative analyses of systems like quantum chromodynamics and the standard model. In principle, the equation is exact for any theory, as it derives from the exact path integral formulation, and it yields precise results for free theories where the flow reduces to a simple rescaling without interactions.[27]
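In the LPA with the optimized (Litim) regulator the supertrace can be evaluated analytically, and in $d = 3$ the potential flow closes as $\partial_t U_k(\phi) = \frac{k^5}{6\pi^2}\,\frac{1}{k^2 + U_k''(\phi)}$, up to normalization conventions. The Python sketch below Euler-steps this flow in the symmetric phase; the grid, bare couplings, and step counts are illustrative choices only.

```python
import numpy as np

c3 = 1.0 / (6 * np.pi ** 2)           # d = 3 threshold constant (convention-dependent)
phi = np.linspace(-1.0, 1.0, 201)
dphi = phi[1] - phi[0]
U = 0.05 * phi ** 2 + 0.1 * phi ** 4  # bare potential at k = Lambda = 1 (symmetric phase)

k, dt = 1.0, -1e-3                    # t = ln(k/Lambda) runs from 0 down to -2
for _ in range(2000):
    Upp = np.gradient(np.gradient(U, dphi), dphi)  # finite-difference U''(phi)
    U = U + dt * c3 * k ** 5 / (k ** 2 + Upp)      # Euler step of the LPA flow
    k *= np.exp(dt)                                # lower the cutoff scale

print(U.min(), U[100])  # potential stays finite, smooth, and Z2-symmetric
```

Refinements in practice include solving for the fixed-point form of the dimensionless potential, adding a running wave-function renormalization (the LPA′), and using implicit time-stepping for stiff regions of the flow.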

Applications and Extensions

Universality Classes in Critical Phenomena

In the renormalization group (RG) framework, universality classes arise because physical systems near critical points that share the same relevant operators and symmetry properties flow under successive coarse-graining transformations to the same fixed point in the space of effective theories.[21] This convergence implies that macroscopic critical behavior, characterized by critical exponents, becomes independent of microscopic details such as lattice structure or short-range interactions, depending only on the dimensionality $d$, the range of interactions, and the symmetry of the order parameter.[21] Relevant operators, which grow under RG flow and drive the system away from the fixed point, act as classifiers that group systems into distinct universality classes, while irrelevant operators contribute to corrections beyond leading-order scaling.[21] A prime example is the Ising model, which describes ferromagnetism with a scalar order parameter and $\mathbb{Z}_2$ symmetry. In two dimensions, the model admits an exact solution via transfer matrix methods, yielding critical exponents such as the correlation length exponent $\nu = 1$ and the anomalous dimension $\eta = \frac{1}{4}$, which are determined precisely without RG approximation.[28] In three dimensions, the Ising model belongs to the O(1) universality class, where RG analysis predicts $\nu \approx 0.63$ and $\eta \approx 0.036$, influenced by the leading irrelevant operator at the fixed point that governs scaling corrections.[21] These exponents capture the divergence of the correlation length $\xi \sim |t|^{-\nu}$ and the power-law decay of correlations $G(r) \sim 1/r^{d-2+\eta}$ at criticality, with $t$ the reduced temperature. The Potts model generalizes the Ising case to $q$ discrete states per site with $S_q$ permutation symmetry, providing further illustrations of universality. 
For q=2q=2, it reduces to the Ising model and shares its universality class in any dimension.[21] For q=3q=3 in three dimensions, the model falls into a distinct universality class, with RG flows leading to different fixed-point values for exponents like ν0.76\nu \approx 0.76 and η0.035\eta \approx 0.035, reflecting the enlarged symmetry and altered relevant operator spectrum.[29] Higher qq values, such as q=4q=4, exhibit first-order transitions in three dimensions but continuous ones in two, demarcating class boundaries via the stability of the RG fixed point.[29] RG theory further predicts hyperscaling relations among exponents, valid below the upper critical dimension, such as 2α=dν2 - \alpha = d \nu, where α\alpha governs the specific heat singularity CtαC \sim |t|^{-\alpha}; this links thermodynamic response to correlation volume scaling and holds for the Ising and Potts classes in low dimensions.[30] Additionally, dimensionless quantities like the Binder cumulant U=1M43M22U = 1 - \frac{\langle M^4 \rangle}{3 \langle M^2 \rangle^2}, with MM the order parameter, attain universal values at criticality within a given class, independent of system size or microscopic parameters; for the two-dimensional Ising class, Uc0.6107U_c \approx 0.6107.[31] These ratios serve as practical diagnostics for identifying universality in simulations of diverse systems.[31]
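The hyperscaling relation quoted above can be checked directly against tabulated exponents. The snippet below compares $2 - \alpha$ with $d\nu$ for the exact two-dimensional Ising values and the commonly quoted three-dimensional estimates ($\alpha \approx 0.110$, $\nu \approx 0.630$ are standard numerical values, used here purely for illustration):

```python
# Check the hyperscaling relation 2 - alpha = d * nu for the Ising class.
cases = {
    "2D Ising (exact)": dict(d=2, alpha=0.0, nu=1.0),
    "3D Ising (numerical estimates)": dict(d=3, alpha=0.110, nu=0.630),
}
for name, c in cases.items():
    lhs = 2 - c["alpha"]          # specific-heat side of the relation
    rhs = c["d"] * c["nu"]        # correlation-volume side
    print(f"{name}: 2 - alpha = {lhs:.3f}, d*nu = {rhs:.3f}")
```

Both cases agree to within the quoted precision, as hyperscaling requires below the upper critical dimension.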

RG Improvement of Effective Potentials

In quantum field theory (QFT), the effective potential $ U(\phi) $ describes the vacuum structure and field-dependent energy at the one-particle-irreducible level, but fixed-order perturbative calculations often suffer from large logarithmic corrections when there is a separation of scales. Renormalization group (RG) improvement addresses this by evolving the potential along the RG flow from ultraviolet (UV) to infrared (IR) scales, resumming these logarithms to yield a more reliable approximation beyond fixed-order perturbation theory. The method integrates the RG equation for the scale-dependent potential $ U(\phi, t) $, where $ t = \ln(\mu / \mu_0) $ with $ \mu $ the renormalization scale, thereby incorporating the running of couplings and masses.[32] The core of the approach is to solve the differential RG flow equation $ \frac{dU(\phi, t)}{dt} = \beta[U] $, where $ \beta[U] $ encodes the beta functions and anomalous dimensions derived from the theory's interactions, integrated from the UV cutoff $ t_{\text{UV}} $ to the IR scale $ t_{\text{IR}} $. In the leading-logarithm approximation, suitable for weakly coupled theories, the improved potential takes the form
$$ U(\phi) \approx U_0(\phi) \exp\left( \int_{t_{\text{UV}}}^{t_{\text{IR}}} \frac{\beta(g)}{g} \, dt \right), $$
where $ U_0(\phi) $ is the tree-level or bare potential, $ g $ the relevant coupling (e.g., the quartic coupling $ \lambda $), and the exponential resums the leading logarithmic contributions from the running of $ g $. Equivalently, one can substitute running couplings evaluated at $ \mu \sim \phi $ into the perturbative expression for the potential, capturing scale-dependent effects systematically. Such resummation is particularly effective in theories with multiple scales, where naive perturbation theory breaks down due to secular terms.[32] A prominent application is the Higgs effective potential in the Standard Model (SM), where RG improvement refines stability analyses by accurately tracking the running quartic coupling $ \lambda(\mu) $ up to high scales, revealing potential metastability or vacuum-decay risks. In the SM, two-loop RG evolution shows $ \lambda $ decreasing and possibly turning negative around $10^{10}$–$10^{12}$ GeV; the improved potential mitigates perturbative uncertainties near these scales, providing tighter bounds on the Higgs mass (e.g., a lower limit of roughly 130 GeV for absolute stability up to the Planck scale). The resummation also helps address issues near Landau poles in non-asymptotically free sectors, such as the U(1)_Y gauge coupling, by extending the validity of the potential beyond the naive perturbative regime, while asymptotically free sectors such as QCD remain under better perturbative control. The advantages of RG improvement include its ability to capture non-perturbative aspects of the vacuum structure, such as phase transitions or multiple minima, through the full flow dynamics, often building on exact RG equations such as the Wetterich functional flow. Unlike fixed-order methods, it ensures renormalization-scale independence to the resummed order and enhances predictive power in effective field theories with scale hierarchies, as demonstrated in scalar and Yukawa models.[32]
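The leading-log substitution $\mu \sim \phi$ can be made concrete in a toy single-scalar $\phi^4$ model with one-loop beta function $\beta_\lambda = 3\lambda^2/(16\pi^2)$; the couplings and scales below are illustrative assumptions, not SM inputs:

```python
import numpy as np

def lam_running(lam0, t):
    """One-loop running quartic coupling, t = ln(mu / mu0),
    from beta_lambda = 3 lambda^2 / (16 pi^2) in a toy phi^4 model."""
    return lam0 / (1.0 - 3.0 * lam0 * t / (16.0 * np.pi ** 2))

def U_improved(phi, lam0, mu0):
    """Leading-log improved potential: running coupling evaluated at mu ~ phi."""
    t = np.log(np.abs(phi) / mu0)
    return lam_running(lam0, t) * phi ** 4 / 24.0

lam0, mu0 = 0.1, 1.0   # illustrative bare coupling and reference scale
for phi in (1.0, 10.0, 100.0):
    t = np.log(phi / mu0)
    print(f"phi = {phi:6.1f}: lambda(phi) = {lam_running(lam0, t):.5f}, "
          f"U_improved = {U_improved(phi, lam0, mu0):.4g}")
```

Since $\lambda$ grows with scale in this model, the improved potential exceeds the tree-level one at large field values; the pole of `lam_running` at $t = 16\pi^2/(3\lambda_0)$ reproduces the Landau-pole behavior mentioned above.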

Numerical and Computational Methods

Numerical and computational methods in the renormalization group (RG) framework enable the study of critical phenomena and fixed points in complex systems where analytical solutions are intractable. These approaches approximate RG flows through discrete transformations or iterative solvers, often leveraging Monte Carlo simulations, tensor decompositions, or functional equations to compute scaling exponents and effective theories with high precision. Real-space RG techniques, such as tensor network renormalization (TNR), provide a powerful way to coarse-grain lattice models by representing partition functions or Hamiltonians as tensor networks and applying isometric projections that preserve the entanglement structure during scale transformations. TNR, introduced in 2015, converges to scale-invariant fixed points and can generate multi-scale entanglement renormalization ansatz (MERA) representations of ground states, facilitating the extraction of critical data in two-dimensional systems such as the Ising model.[33] Monte Carlo renormalization group (MCRG) methods, pioneered by Swendsen in 1979, integrate stochastic sampling with block-spin transformations to map high-resolution lattices onto coarser ones, allowing numerical determination of RG flows without prior knowledge of the effective Hamiltonian. This approach has been refined with cluster-based updates to improve efficiency near criticality, enabling accurate computation of exponents in spin models. In functional RG formulations, the local potential approximation with anomalous dimension (LPA′), which includes a running wave-function renormalization alongside the effective potential, is used to solve the flow equations numerically for scalar field theories, capturing non-perturbative effects beyond mean-field theory.
LPA' truncations have been used to compute RG flows in O(N) models, yielding critical exponents such as the anomalous dimension η ≈ 0.036 for N=1 in three dimensions with percent-level precision.[34][35] Recent advances in the 2020s incorporate machine learning to parameterize RG transformations, where neural networks learn coarse-graining rules from data, approximating flows in lattice models and identifying fixed points more efficiently than traditional methods. For instance, deep learning architectures have been trained to mimic single-step RG flows, achieving convergence to universal exponents in Ising-like systems with reduced computational cost. These numerical techniques extend to quantum chromodynamics (QCD) on the lattice, where tensor RG methods complement Monte Carlo simulations by providing sign-problem-free access to phase diagrams and renormalization constants in finite-density regimes.[36][37][38] For disordered systems, numerical RG approaches handle quenched randomness by averaging over disorder realizations during iterative decimations, revealing multifractal scaling and localization transitions in models like the Anderson Hamiltonian. Real-space RG variants, such as those using transfer matrices, compute conductance distributions and critical states in one-dimensional chains with up to thousands of sites, confirming logarithmic scaling of the localization length. These methods, often combined with exact RG equations solved via pseudospectral techniques, provide benchmarks for universality classes in random environments.[39][40]
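A minimal, fully analytic example of iterative real-space coarse-graining (far simpler than the cited methods, and only an approximation) is the Migdal–Kadanoff bond-moving recursion for the two-dimensional Ising model, $K' = \frac{1}{2}\ln\cosh(4K)$ with rescaling factor $b = 2$. Iterating it to the nontrivial fixed point and linearizing the flow yields estimates of the critical coupling and $\nu$:

```python
import math

def mk_step(K):
    """One Migdal-Kadanoff coarse-graining step for the 2D Ising model (b = 2)."""
    return 0.5 * math.log(math.cosh(4.0 * K))

# Locate the nontrivial fixed point K* by bisection on f(K) = mk_step(K) - K,
# which is negative below K* and positive above it on this bracket.
lo, hi = 0.2, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mk_step(mid) - mid < 0.0:
        lo = mid
    else:
        hi = mid
Kstar = 0.5 * (lo + hi)

# Linearize the flow at K*: thermal eigenvalue lambda_T = dK'/dK = 2 tanh(4K*),
# then nu = ln b / ln lambda_T.
lam_T = 2.0 * math.tanh(4.0 * Kstar)
nu = math.log(2.0) / math.log(lam_T)
print(f"K* = {Kstar:.4f}, nu = {nu:.3f}")
```

The recursion gives $K^* \approx 0.305$ and $\nu \approx 1.34$, against the exact 2D Ising values $K_c \approx 0.4407$ and $\nu = 1$, illustrating both the mechanics of a real-space RG step and why quantitatively accurate work requires the more sophisticated methods described above.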

References
