from Wikipedia

In particle physics and string theory (M-theory), the Arkani-Hamed, Dimopoulos, Dvali model (ADD), also known as the model with large extra dimensions (LED), is a model framework that attempts to solve the hierarchy problem (why is the force of gravity so weak compared to the electromagnetic force and the other fundamental forces?). The model tries to explain this by postulating that our universe, with its four dimensions (three spatial ones plus time), exists on a membrane in a higher-dimensional space. It is then suggested that the other forces of nature (the electromagnetic force, strong interaction, and weak interaction) operate within this membrane and its four dimensions, while the hypothetical gravity-bearing particle, the graviton, can propagate across the extra dimensions. This would explain why gravity is very weak compared to the other fundamental forces.[1] The energy scale associated with the extra dimensions in ADD is around the TeV scale, which makes the model experimentally accessible at current colliders, unlike many exotic extra-dimensional hypotheses whose relevant scale lies near the Planck scale.[2]

The model was proposed by Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali in 1998.[3][4]

One way to test the theory is to collide two protons in the Large Hadron Collider so that they interact and produce particles. If a graviton were formed in the collision, it could propagate into the extra dimensions, resulting in an imbalance of transverse momentum. Searches at the Large Hadron Collider have not been decisive thus far.[5][6][7][8][9][10] However, the operation range of the LHC (13 TeV collision energy) covers only a small part of the predicted range in which evidence for LED would be recorded (a few TeV to 10^16 TeV).[11] This suggests that the theory might be more thoroughly tested with more advanced technology.

Proponents' views

Traditionally, in theoretical physics, the Planck scale is the highest energy scale, and all dimensionful parameters are measured in terms of it. There is a great hierarchy between the weak scale and the Planck scale, and explaining the ratio of the strengths of the weak force and gravity is the focus of much of beyond-Standard-Model physics. In models of large extra dimensions, the fundamental scale is much lower than the Planck scale. This occurs because the power law of gravity changes. For example, when there are two extra dimensions of size d, the gravitational force falls as 1/r^4 for objects with r ≪ d and as 1/r^2 for objects with r ≫ d. If we want the Planck scale to be equal to the next accelerator energy (1 TeV), we should take d to be approximately 1 mm. For larger numbers of dimensions, fixing the Planck scale at 1 TeV, the size of the extra dimensions becomes smaller, down to around a femtometer for six extra dimensions.
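The quoted sizes can be checked numerically. A minimal sketch, assuming the relation M_Pl^2 ~ M_*^{n+2} R^n with M_* = 1 TeV and dropping order-one geometric factors (the names `radius_m` and `HBARC_M_GEV` are our own):

```python
# Hedged numeric check of the sizes quoted above: with the fundamental
# scale M_* set to 1 TeV, M_Pl^2 ~ M_*^(n+2) * R^n fixes the common size R
# of n extra dimensions (natural units; order-one factors are not tracked).
HBARC_M_GEV = 1.9733e-16   # hbar*c: 1 GeV^-1 expressed in metres

M_PL = 1.22e19   # 4D Planck mass in GeV
M_STAR = 1.0e3   # assumed fundamental scale, 1 TeV in GeV

def radius_m(n: int) -> float:
    """Size of each of n extra dimensions, in metres."""
    r_inv_gev = (M_PL**2 / M_STAR**(n + 2)) ** (1.0 / n)  # R in GeV^-1
    return r_inv_gev * HBARC_M_GEV

for n in (2, 6):
    print(f"n={n}: R ~ {radius_m(n):.1e} m")
# n=2 comes out at the millimetre scale and n=6 at the femtometre scale
# (tens of fm here; convention-dependent geometric factors shift the
# exact value), in line with the estimates in the text.
```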

By reducing the fundamental scale to the weak scale, the fundamental theory of quantum gravity, such as string theory, might be accessible at colliders such as the Tevatron or the LHC.[12] There has been progress in generating large volumes in the context of string theory.[13] Having the fundamental scale accessible allows the production of black holes at the LHC,[10][14][15] though there are constraints on the viability of this possibility at LHC energies.[16] There are other signatures of large extra dimensions at high-energy colliders.[17][18][19][20][21]

Many of the mechanisms used to explain problems in the Standard Model invoked very high energies. In the years after the publication of ADD, much of the work of the beyond-the-Standard-Model physics community went into exploring how these problems could be solved with a low scale of quantum gravity. Almost immediately, there was an alternative to the see-saw mechanism as an explanation for the neutrino mass.[22][23] Using extra dimensions as a new source of small numbers allowed for new mechanisms for understanding the masses and mixings of the neutrinos.[24][25]

Another problem with a low scale of quantum gravity is the possible existence of proton-decay, flavor-violating, and CP-violating operators suppressed only by the TeV scale. These would be phenomenologically disastrous. Physicists quickly realized that there were novel mechanisms for obtaining the small numbers necessary to explain these very rare processes.[26][27][28][29][30]

Opponents' views

In the traditional view, the enormous gap in energy between the mass scales of ordinary particles and the Planck mass is reflected in the fact that virtual processes involving black holes or gravity are strongly suppressed. The suppression of these terms is the principle of renormalizability – in order to see an interaction at low energy, it must have the property that its coupling only changes logarithmically as a function of the Planck scale. Nonrenormalizable interactions are weak only to the extent that the Planck scale is large.

Virtual gravitational processes do not conserve anything except gauge charges, because black holes decay into anything with the same charge. Therefore, it is difficult to suppress interactions at the gravitational scale. One way to do it is by postulating new gauge symmetries. A different way to suppress these interactions in the context of extra-dimensional models is the "split fermion scenario" proposed by Arkani-Hamed and Schmaltz in their paper "Hierarchies without Symmetries from Extra Dimensions".[31] In this scenario, the wavefunctions of particles that are bound to the brane have a finite width significantly smaller than the extra dimension, but the center (e.g. of a Gaussian wave packet) can be dislocated along the direction of the extra dimension in what is known as a "fat brane". When the additional dimension(s) are integrated out to obtain the effective coupling of higher-dimensional operators on the brane, the result is suppressed by the exponential of the square of the distance between the centers of the wave functions: a dislocation of only a few times the typical width of the wave function already generates a suppression by many orders of magnitude.
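As an illustration (our own sketch, not the paper's calculation): for brane profiles of the form exp(-(y-c)^2/σ^2), the overlap integral along the extra dimension yields a factor exp(-d^2/(2σ^2)) for centers separated by a distance d:

```python
# Illustrative sketch of the split-fermion suppression: two Gaussian
# profiles exp(-(y-c)^2 / sigma^2) centred a distance d apart overlap
# (up to normalisation) as exp(-d^2 / (2*sigma^2)).
import math

def overlap_suppression(d: float, sigma: float) -> float:
    """Relative overlap of two Gaussian brane profiles separated by d."""
    return math.exp(-d**2 / (2.0 * sigma**2))

for k in (2, 5, 10):   # separation in units of the wavefunction width
    print(f"d = {k} sigma: suppression ~ {overlap_suppression(k, 1.0):.1e}")
# Already at d = 10 sigma the factor is ~2e-22: a modest dislocation
# produces the many-orders-of-magnitude suppression described above.
```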

In electromagnetism, the electron magnetic moment is described by perturbative processes derived from the QED Lagrangian:

\mathcal{L}_{\rm QED} = \bar\psi \left( i\gamma^\mu D_\mu - m_e \right)\psi - \tfrac{1}{4} F_{\mu\nu} F^{\mu\nu},

which is calculated and measured to one part in a trillion. But it is also possible to include a Pauli term in the Lagrangian:

\mathcal{L}_{\rm Pauli} = A\, \bar\psi \sigma^{\mu\nu} F_{\mu\nu} \psi,

and the magnetic moment would change by an amount of order A m_e. The reason the magnetic moment is correctly calculated without this term is that the coefficient A has the dimension of inverse mass. The mass scale is at most the Planck mass, so with the usual Planck scale the effect of A would only be seen at the 20th decimal place.

Since the electron magnetic moment is measured so accurately, and since the scale at which it is measured is the electron mass, a term of this kind would be visible even if the Planck scale were only about 10^9 electron masses, on the order of 1000 TeV. This is much higher than the proposed Planck scale in the ADD model.

QED is not the full theory, and the Standard Model does not have many possible Pauli terms. A good rule of thumb is that a Pauli term is like a mass term: in order to generate it, the Higgs must enter. But in the ADD model, the Higgs vacuum expectation value is comparable to the Planck scale, so the Higgs field can contribute to any power without suppression. One coupling which generates a Pauli term is the same as the electron mass term, except with an extra factor of \sigma^{\mu\nu} F_{\mu\nu}, where F_{\mu\nu} is the U(1) gauge field strength. This operator is dimension-six; it contains one power of the Higgs expectation value and is suppressed by two powers of the Planck mass. It should start contributing to the electron magnetic moment at the sixth decimal place. A similar term should contribute to the muon magnetic moment at the third or fourth decimal place.

The neutrinos are only massless because the dimension-five operator (HL)(HL)/M does not appear. But neutrinos have a mass scale of approximately 0.01 eV, which is 14 orders of magnitude smaller than the scale of the Higgs expectation value of 1 TeV. This means that the term is suppressed by a mass M such that

m_\nu = \frac{\langle H \rangle^2}{M}.

Substituting \langle H \rangle \approx 1 TeV and m_\nu \approx 0.01 eV gives M \approx 10^{17} GeV. So this is where the neutrino masses suggest new physics: close to the traditional Grand Unification Theory (GUT) scale, a few orders of magnitude less than the traditional Planck scale. The same term in a large extra dimension model would give a mass to the neutrino in the MeV-GeV range, comparable to the mass of the other particles.
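A minimal numeric check of this estimate, assuming round numbers m_ν ~ 0.01 eV and ⟨H⟩ ~ 1 TeV for the dimension-five operator (HL)(HL)/M:

```python
# Hedged numeric check: the dimension-five operator (H L)(H L)/M gives
# m_nu ~ <H>^2 / M, so the observed neutrino mass scale fixes the
# suppression mass M (round numbers, our own choice).
higgs_vev_gev = 1.0e3     # <H> ~ 1 TeV
m_nu_gev = 0.01e-9        # m_nu ~ 0.01 eV, expressed in GeV

M = higgs_vev_gev**2 / m_nu_gev   # suppression scale in GeV
print(f"M ~ {M:.0e} GeV")
# ~1e17 GeV: near the traditional GUT scale, a couple of orders of
# magnitude below the traditional Planck scale.
```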

In this view, models with large extra dimensions miscalculate the neutrino masses by inappropriately assuming that the mass is due to interactions with a hypothetical right-handed partner. The only reason to introduce a right-handed partner is to produce neutrino masses in a renormalizable GUT. If the Planck scale is small so that renormalizability is no longer an issue, there are many neutrino mass terms which do not require extra particles.

For example, at dimension-six there is a Higgs-free term which couples the lepton doublets to the quark doublets, schematically L L \bar{Q} Q / M^2, which is a coupling to the strong-interaction quark condensate. Even with a relatively low-energy pion scale, this type of interaction could conceivably give the neutrino a mass of size \langle \bar{q} q \rangle / M^2 \sim \Lambda_{\rm QCD}^3 / M^2, which is only a factor of 10^7 less than the pion condensate itself at 200 MeV. This would be some 10 eV of mass, about a thousand times bigger than what is measured.
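An order-of-magnitude check of this estimate, assuming Λ ~ 200 MeV for the quark condensate scale and M ~ 1 TeV for the low quantum-gravity scale:

```python
# Order-of-magnitude check (our own round numbers) of the estimate
# m_nu ~ Lambda^3 / M^2 from a condensate-induced neutrino mass.
lam_gev = 0.2    # Lambda ~ 200 MeV, in GeV
m_gev = 1.0e3    # M ~ 1 TeV, in GeV

m_nu_ev = lam_gev**3 / m_gev**2 * 1e9   # convert GeV -> eV
print(f"m_nu ~ {m_nu_ev:.0f} eV")       # ~8 eV, i.e. 'some 10 eV'
print(f"suppression vs 200 MeV: {0.2e9 / m_nu_ev:.1e}")  # ~1e7
```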

This term also allows for lepton-number-violating pion decays, and for proton decay. In fact, all operators with dimension greater than four generically violate CP, baryon number, and lepton number. The only way to suppress them is to deal with them term by term, which nobody has done.

The popularity, or at least prominence, of these models may have been enhanced because they allow the possibility of black hole production at the LHC, which has attracted significant attention.

Empirical tests

Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions.[5][6][7][8][9][10]

In 2012, the Fermi-LAT collaboration published limits on the ADD model of large extra dimensions from astrophysical observations of neutron stars. If the unification scale is at a TeV, then for n < 4 the results imply that the compactification topology is more complicated than a torus, i.e., than all large extra dimensions (LED) having the same size. For flat LED of the same size, the lower limits on the unification scale are consistent with n ≥ 4.[32] The details of the analysis are as follows: a sample of six gamma-ray-faint neutron-star sources not reported in the first Fermi gamma-ray source catalog are selected as good candidates for this analysis, based on age, surface magnetic field, distance, and galactic latitude. Based on 11 months of data from Fermi-LAT, 95% CL upper limits on the size of extra dimensions are obtained from each source, as well as 95% CL lower limits on the (n+4)-dimensional Planck scale M_D. In addition, the limits from all of the analyzed neutron stars have been combined statistically using two likelihood-based methods. The results give more stringent limits on LED than previously quoted from individual neutron-star sources in gamma rays, and for n < 4 they are more stringent than current collider limits from the LHC.[33]

from Grokipedia
Large extra dimensions (LED), also known as the ADD model, is a theoretical framework in high-energy physics that proposes the existence of additional spatial dimensions beyond the familiar three, compactified on scales potentially as large as millimeters, to address the hierarchy problem: the vast disparity between the Planck scale (~10^19 GeV) and the electroweak scale (~TeV). In this model, gravity propagates freely through these extra dimensions, diluting its apparent strength in our four-dimensional (4D) spacetime, while Standard Model particles and forces are confined to a 3-brane embedded in the higher-dimensional bulk. First proposed in 1998, the model eliminates the need for supersymmetry or technicolor by setting the fundamental short-distance scale at the weak scale, with the observed Planck scale emerging as an effective low-energy phenomenon due to the volume of the extra dimensions. The core motivation for LED stems from resolving the hierarchy problem, where quantum corrections would otherwise push the Higgs mass far above the electroweak scale unless fine-tuned. By introducing δ ≥ 2 flat, compact extra dimensions of radius R ~ 1/TeV to millimeters (depending on δ), the (4+δ)-dimensional Planck mass M_D is lowered to ~TeV, unifying gravity with gauge interactions at accessible energies without invoking new particles beyond the Standard Model. For instance, with δ=2, the extra dimensions could span ~0.1 mm, leading to gravitational deviations from Newton's law at sub-millimeter distances, transitioning from 1/r² to 1/r⁴ behavior. This setup predicts Kaluza-Klein (KK) gravitons, massive excitations of the graviton in the extra dimensions, which could manifest as resonances or missing-energy signatures in high-energy collisions. Experimental searches for LED have yielded stringent constraints, primarily through deviations in gravity at short ranges, collider signatures, and astrophysical observations. Tabletop torsion-balance experiments limit the radius to R < 30 μm for δ=2, corresponding to M_D > 4.0 TeV.
At the Large Hadron Collider (LHC), ATLAS and CMS analyses of monojet plus missing transverse energy events set M_D > 5.9–11.2 TeV for δ=2–6, while astrophysical bounds from supernova 1987A cooling and neutron star heating push M_D > 27–1700 TeV for δ=2. Notable predictions include micro black hole production at TeV scales and potential energy loss to gravitons in high-p_T processes, though no evidence has been found, spurring ongoing searches at future colliders like the High-Luminosity LHC. Stabilization mechanisms, such as fluxes or radion-stabilization dynamics, are required to prevent the extra dimensions from expanding uncontrollably.

Introduction

Concept and Definition

Large extra dimensions (LED) constitute a theoretical framework in high-energy physics proposing the existence of additional spatial dimensions beyond the three observed in everyday experience, dimensions that are large in scale, ranging from subatomic distances up to potentially millimeter sizes, rather than being tightly curled up at the Planck length. In this paradigm, introduced by Arkani-Hamed, Dimopoulos, and Dvali, the Standard Model (SM) particles and forces are confined to a (3+1)-dimensional hypersurface called a brane, embedded within a higher-dimensional bulk spacetime, while gravity propagates freely throughout the bulk, diluting its effective strength in our brane-localized perception. This setup aims to address the hierarchy problem, the vast disparity between the electroweak scale (~246 GeV) and the Planck scale (~10^19 GeV), by allowing the fundamental gravitational scale to be lowered to around the TeV range through the geometry of spacetime. Unlike the small extra dimensions typical in string theory, where the additional six or seven dimensions are compactified at the Planck length of approximately 10^{-35} meters, making them inaccessible to current experiments, LED models feature flat, loosely compactified or even uncompactified dimensions large enough to influence physics at observable scales. LED also differ from warped extra-dimension scenarios, such as the Randall-Sundrum model, which employ a single curved fifth dimension with anti-de Sitter geometry to generate the hierarchy via an exponential warp factor that suppresses gravitational interactions on the brane without requiring large flat volumes. In LED, the extra dimensions remain flat, ensuring that only gravity leaks into the bulk, preserving the localization of SM gauge interactions on the brane. The central parameter governing LED phenomenology is the radius R of the extra dimensions, which compactifies them into a topology such as a torus, with the inverse scale 1/R defining the threshold for new physics effects such as Kaluza-Klein excitations of gravitons.
For two extra dimensions, R can extend up to about 0.1 mm, placing 1/R near 10^{-3} eV, while for six dimensions it shrinks to around 10^{-12} cm (~10^{-14} m), yielding 1/R ~ 10 MeV, potentially detectable through a dense tower of Kaluza-Klein modes in high-energy collisions or, for lower δ, in precision gravity tests.
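The unit conversion behind these thresholds is a one-liner using ħc ≈ 197.3 MeV·fm (the function name is our own):

```python
# Quick unit conversion behind the quoted numbers: the Kaluza-Klein
# threshold is 1/R in natural units, converted via hbar*c ~ 197.3 MeV*fm.
HBARC_MEV_FM = 197.327

def kk_threshold_mev(radius_m: float) -> float:
    """1/R in MeV for a compactification radius given in metres."""
    radius_fm = radius_m / 1e-15
    return HBARC_MEV_FM / radius_fm

print(f"R = 0.1 mm  -> 1/R ~ {kk_threshold_mev(1e-4) * 1e6:.1e} eV")
print(f"R = 3e-14 m -> 1/R ~ {kk_threshold_mev(3e-14):.1f} MeV")
# R = 0.1 mm gives ~2e-3 eV, and R ~ 1e-14-1e-13 m gives the ~10 MeV
# scale quoted above.
```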

Motivation from the Hierarchy Problem

The hierarchy problem in particle physics arises from the enormous disparity between the electroweak scale, set by the Higgs vacuum expectation value of approximately 246 GeV, and the Planck scale of about 1.22 × 10^19 GeV, which governs gravity. In the Standard Model, radiative corrections to the Higgs mass from virtual loops involving top quarks, electroweak gauge bosons, and other particles would naturally generate contributions of order the cutoff scale (presumed to be the Planck scale), pushing the physical Higgs mass far beyond observed values unless the bare Higgs mass parameter is exquisitely fine-tuned to cancel these effects with a precision of roughly 1 part in 10^32. This unnatural sensitivity to high-scale physics motivates extensions beyond the Standard Model that stabilize the electroweak scale without such tuning. Large extra dimensions (LED) address this hierarchy by proposing a geometric mechanism that lowers the fundamental scale of quantum gravity, denoted M_*, to around the TeV range, making the electroweak and gravity scales naturally comparable without invoking fine-tuning or additional symmetries. In this framework, gravity propagates through the full higher-dimensional bulk, while Standard Model fields are confined to a lower-dimensional brane, leading to an effective dilution of gravitational strength in our observable four-dimensional spacetime due to the large volume of the extra dimensions. This reduces the apparent Planck scale from a fundamental parameter to an emergent one, arising from the product of the higher-dimensional Planck mass and the extra-dimensional volume, thus eliminating the need for the extreme cancellations required in the Standard Model. Unlike supersymmetry, which stabilizes the Higgs mass through pairwise cancellations between bosonic and fermionic loops but introduces superpartners that can mediate rapid proton decay in grand unified theories or induce flavor-violating processes exceeding experimental limits, LED achieves naturalness through this spatial geometry alone, avoiding such phenomenological challenges.
The analogy often used is that of gravity "leaking" into the extra dimensions, akin to a force spreading over a larger effective area, which weakens its apparent strength in four dimensions without altering the underlying dynamics at short distances.

Historical Background

Early Theories of Extra Dimensions

The concept of extra dimensions in physics originated in the early twentieth century with efforts to unify fundamental forces. In 1921, Theodor Kaluza proposed extending general relativity to five dimensions, where the fifth dimension is spatial and the theory's equations naturally incorporate both gravity and electromagnetism as geometric effects. This framework, known as Kaluza's theory, treats the electromagnetic field as arising from the off-diagonal components of the five-dimensional metric tensor. In 1926, Oskar Klein advanced this idea by providing a quantum-mechanical interpretation and a mechanism to explain the absence of observable effects from the extra dimension. Klein suggested that the fifth dimension is compactified into a small circle of radius on the order of the Planck length, approximately 10^{-33} cm, rendering it undetectable at macroscopic scales. In this compactification, particles propagating around the extra dimension acquire quantized momentum modes, termed Kaluza-Klein (KK) modes, which manifest as a tower of charged particles with masses inversely proportional to the compactification radius, effectively reproducing the spectrum of electromagnetic interactions in four dimensions. The idea of extra dimensions gained renewed prominence in the 1980s through string theory, which requires a higher-dimensional spacetime for mathematical consistency. Superstring theories demand exactly 10 dimensions to ensure anomaly cancellation, where quantum inconsistencies in the gauge and gravitational sectors are resolved only in this dimensionality. The additional six dimensions beyond the observed four are compactified on tiny scales, typically Calabi-Yau manifolds with radii around the string scale of 10^{-32} cm, to reproduce the effective four-dimensional physics observed in nature. An early exploration of larger extra dimensions within string theory appeared in 1988, when Ignatios Antoniadis and collaborators investigated mechanisms for supersymmetry breaking.
In their work on string compactifications, they considered scenarios where one or more dimensions could have sizes corresponding to the TeV scale to mediate supersymmetry breaking while preserving string theory's consistency, though the scales remained small compared to macroscopic distances. However, these early models with small extra dimensions faced significant challenges related to the hierarchy problem, the vast disparity between the electroweak scale (~246 GeV) and the Planck scale (~10^19 GeV). Compactification at Planckian scales implied unification of forces at extremely high energies, which did not alleviate the need for fine-tuning in the Higgs sector and instead amplified the sensitivity of low-energy parameters to ultraviolet physics without introducing stabilizing mechanisms.

The ADD Model

The ADD model was proposed in 1998 by Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali as a solution to the hierarchy problem, positing that the apparent weakness of gravity arises from its propagation into large extra dimensions rather than relying on supersymmetry or technicolor. In their framework, the universe consists of a (4+n)-dimensional spacetime, with n ≥ 2 flat extra dimensions compactified on a torus of radius R, while Standard Model fields are confined to a 3-brane embedded in this bulk. This setup lowers the fundamental gravitational scale M_* to the TeV range (approximately 1–100 TeV), enabling unification of gravity with gauge interactions at energies accessible to particle accelerators. A central feature of the model is the relation between the observed 4D Planck scale M_Pl (~10^19 GeV) and the higher-dimensional scale, given by M_Pl^2 ≈ M_*^{n+2} R^n, which fixes the size R of the extra dimensions once M_* is set near the weak scale. The key innovation lies in making these extra dimensions sufficiently large to suppress quantum-gravity effects at low energies while remaining consistent with existing gravitational tests, yet small enough to yield novel predictions; for instance, with n=2, R ~ 0.5 mm, and for n=6, R ~ 0.1 MeV^{-1} (~2 × 10^{-14} m). This scale allows gravitons to propagate freely in the bulk, diluting gravity's strength in 4D while opening possibilities for observable deviations, such as modifications to the inverse-square law or production of Kaluza-Klein gravitons. Building on earlier ideas of compact extra dimensions in string theory, the ADD proposal marked a pivotal shift by advocating dimensions large enough for direct experimental probing, transforming them from a theoretical curiosity into a testable paradigm. It inspired subsequent developments, such as the Randall-Sundrum warped extra-dimension model in 1999, which addressed similar issues through warping rather than volume dilution.
Upon publication, the model gained rapid adoption throughout the late 1990s and 2000s as a compelling alternative to supersymmetry for resolving the hierarchy problem, inspiring extensive theoretical extensions and shaping search strategies at facilities like the LHC, where signatures such as missing energy from graviton emission became standard benchmarks. Its influence is evident in over 5,000 citations of the original paper, underscoring its role in revitalizing extra-dimensional physics.

Theoretical Framework

Brane-World Scenarios

In brane-world scenarios central to LED models, the observable universe is represented as a (p+1)-dimensional hypersurface, specifically a 3+1-dimensional brane for p=3, embedded within a higher-dimensional bulk spacetime of total dimension D=4+d, where d is the number of extra spatial dimensions. This setup confines the Standard Model (SM) fields, such as gauge bosons and fermions, to the brane through localization mechanisms, including orbifolding, which applies parity boundary conditions (e.g., via S^1/Z_2 orbifolds) to restrict field propagation perpendicular to the brane while allowing zero modes along it. The ADD model serves as the primary instantiation of this scenario, integrating brane localization with flat extra dimensions to address gravitational weakness. Unlike the SM fields, gravity propagates freely in the full bulk, with gravitons able to explore all dimensions. At low energies, where the wavelength of gravitational disturbances exceeds the size R, this bulk propagation yields an effective four-dimensional description, manifesting as the observed Newtonian 1/r potential and recovering general relativity on scales much larger than R. The effective Planck scale M_Pl relates to the fundamental (4+d)-dimensional scale M_* via M_Pl^2 ≈ M_*^{2+d} V_d, where V_d is the extra-dimensional volume, the gravitational coupling being diluted by the higher-dimensional volume. These scenarios emphasize flat extra dimensions, characterized by the Minkowski metric
ds^2 = \eta_{\mu\nu} dx^\mu dx^\nu + dy^m dy_m,

where \eta_{\mu\nu} is the four-dimensional Minkowski metric and y^m (m = 1, ..., d) are the extra coordinates. This flat geometry contrasts with warped brane-world models, such as those proposed by Randall and Sundrum, which incorporate exponential curvature in the extra dimension to localize gravity near the brane without relying on large flat volumes.
The theory remains stable and consistent as a low-energy effective description below the cutoff scale M_*, with the ADD setup free of ghosts and tachyons due to the positive-definite metric and appropriate boundary conditions in the compactified bulk. This ensures perturbative unitarity and the absence of instabilities in the gravitational sector, validating the model's use for phenomenological predictions.

Compactification and Size of Extra Dimensions

In large extra dimension (LED) models, the extra dimensions must be compactified on manifolds that render them unobservable at low energies while allowing gravity to propagate through them. The simplest approach is toroidal compactification, where the n extra dimensions are each curled into a circle of common radius R, forming an n-torus with total volume V_n = (2\pi R)^n and ensuring spatial periodicity. Orbifold compactifications provide an alternative, projecting out certain modes via discrete symmetries to achieve desirable geometric properties, such as fixed points for field localization. This compactification geometry directly relates the fundamental higher-dimensional scales to the observed 4D physics via the equation

M_{\rm Pl}^2 = M_*^{2+n} V_n,

where M_{\rm Pl} ≈ 1.2 × 10^19 GeV is the 4D Planck mass, M_* is the fundamental scale suppressing higher-dimensional gravitational interactions (typically ~TeV in LED models), and V_n is the extra-dimensional volume. This relation explains the weakness of 4D gravity as a dilution effect: the gravitational coupling, strong at the scale M_*, appears feeble in 4D because the gravitational flux spreads over the large volume V_n. The permissible size R of the extra dimensions depends sensitively on n, with V_n fixed to reproduce M_{\rm Pl} for a given M_*. Assuming M_* ≈ 1 TeV, representative estimates yield R ~ 0.5 mm for n=2, R ~ 10^{-6} mm for n=3, and progressively smaller values for n=4 to 7 (e.g., R ~ 10^{-11} mm for n=6), as higher n requires tinier radii to maintain the same volume. Models with n=1 are ruled out, as they imply R ~ 10^{11} m, comparable to astronomical scales, leading to unacceptable deviations from Newtonian gravity in the solar system. Thus, viable LED scenarios feature n=2 to 7.
Quantum corrections in LED models primarily induce logarithmic running of M_* from gravitational loops, but these effects are mild and do not destabilize the hierarchy between the weak scale and the millimeter-sized dimensions below M_*, avoiding the need for fine-tuning. Standard Model fields are localized on a brane within the bulk to prevent their propagation into the extra dimensions.
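The n=1 exclusion can be made concrete with a short sketch of the relation M_Pl^2 = M_*^{2+n} V_n, assuming the torus volume V_n = (2πR)^n and M_* = 1 TeV (conventions shift order-one factors, so only the scales matter):

```python
# Sketch of why n=1 is excluded: solve M_Pl^2 = M_*^(2+n) * (2*pi*R)^n
# for R with M_* = 1 TeV. Order-one factors vary between conventions;
# only the resulting scale of R is meaningful here.
import math

HBARC_M_GEV = 1.9733e-16   # 1 GeV^-1 expressed in metres
M_PL = 1.2e19              # 4D Planck mass, GeV
M_STAR = 1.0e3             # fundamental scale, GeV

def radius_m(n: int) -> float:
    vol = M_PL**2 / M_STAR**(2 + n)               # V_n in GeV^-n
    return vol ** (1.0 / n) / (2 * math.pi) * HBARC_M_GEV

print(f"n=1: R ~ {radius_m(1):.0e} m (astronomical -> excluded)")
print(f"n=2: R ~ {radius_m(2):.0e} m (sub-millimetre -> testable)")
```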

Physical Predictions

Modifications to Gravity

In large extra dimension (LED) models, gravitational interactions deviate from the predictions of general relativity at scales comparable to the size of the extra dimensions, R, leading to testable modifications of the Newtonian force law. For distances r ≪ R, gravity behaves as in a higher-dimensional spacetime with n extra dimensions, following an inverse-power law F ~ 1/r^{2+n} rather than the familiar 1/r^2. This arises because the graviton, the mediator of gravity, propagates freely in the full (4+n)-dimensional bulk, while Standard Model particles are confined to a 3-brane. As r increases beyond R, the extra dimensions become irrelevant, and the force transitions back to the 4D inverse-square law, with the effective 4D Newton constant G diluted by the volume of the extra dimensions, G ~ 1/(M_*^{n+2} R^n), where M_* is the fundamental Planck scale in higher dimensions. The effective gravitational potential between two masses m_1 and m_2 in LED can be approximated as V(r) ≈ - m_1 m_2 / (M_*^{n+2} r^{n+1}) for r ≪ R, reflecting the higher-dimensional form, while for r ≫ R it becomes V(r) ≈ - (G m_1 m_2 / r) (1 + \sum_{k=1}^\infty c_k (R/r)^{n k}), incorporating power-law corrections from Kaluza-Klein modes that become negligible at large distances. These deviations, testable at sub-millimeter scales (e.g., for n=2, deviations around 100 μm to 1 mm), motivate precision gravity experiments to probe R. The compactification volume determines the transition scale, with R related to M_* via M_{\rm Pl}^2 ~ M_*^{n+2} R^n, allowing R to be macroscopic if M_* ~ TeV.
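To see the crossover concretely, here is a toy sketch (our own illustration, natural units, order-one factors and KK corrections dropped) of the two-regime potential; the branches coincide at r = R precisely because G ~ 1/(M_*^{n+2} R^n):

```python
# Toy two-regime gravitational potential for n extra dimensions of size R
# and fundamental scale M_* (natural units; order-one factors dropped).
def potential(r: float, R: float, n: int, m_star: float,
              m1: float = 1.0, m2: float = 1.0) -> float:
    """Piecewise LED potential: higher-dimensional inside R, Newtonian outside."""
    G_eff = 1.0 / (m_star**(n + 2) * R**n)   # diluted 4D Newton constant
    if r < R:
        return -m1 * m2 / (m_star**(n + 2) * r**(n + 1))  # (4+n)-dimensional
    return -G_eff * m1 * m2 / r                            # ordinary 1/r

# The two branches agree at the crossover r = R:
R, n, m_star = 1.0, 2, 1.0
inside = potential(0.999999 * R, R, n, m_star)
outside = potential(1.000001 * R, R, n, m_star)
print(abs(inside - outside) < 1e-4)   # True
```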

Kaluza-Klein Modes and New Particles

In large extra dimension (LED) models, quantum fields that propagate into the bulk, such as the graviton, develop a Kaluza-Klein (KK) tower of excitations due to the boundary conditions imposed by compactification on a torus or similar manifold. These modes arise from quantized momentum components in the extra dimensions, labeled by an integer vector \vec{n}