Learning with errors
from Wikipedia

In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms.[1] It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it.[2] In more technical terms, it refers to the computational problem of inferring a linear $n$-ary function $f$ over a finite ring from given samples $y_i = f(\mathbf{x}_i)$, some of which may be erroneous. The LWE problem is conjectured to be hard to solve,[1] and thus to be useful in cryptography.

More precisely, the LWE problem is defined as follows. Let $\mathbb{Z}_q$ denote the ring of integers modulo $q$ and let $\mathbb{Z}_q^n$ denote the set of $n$-vectors over $\mathbb{Z}_q$. There exists a certain unknown linear function $f : \mathbb{Z}_q^n \to \mathbb{Z}_q$, and the input to the LWE problem is a sample of pairs $(\mathbf{x}, y)$, where $\mathbf{x} \in \mathbb{Z}_q^n$ and $y \in \mathbb{Z}_q$, so that with high probability $y = f(\mathbf{x})$. Furthermore, the deviation from the equality is according to some known noise model. The problem calls for finding the function $f$, or some close approximation thereof, with high probability.

The LWE problem was introduced by Oded Regev in 2005[3] (who won the 2018 Gödel Prize for this work); it is a generalization of the parity learning problem. Regev showed that the LWE problem is as hard to solve as several worst-case lattice problems. Subsequently, the LWE problem has been used as a hardness assumption to create public-key cryptosystems,[3][4] such as the ring learning with errors key exchange by Peikert.[5]

Definition

Denote by $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ the additive group of reals modulo one. Let $\mathbf{s} \in \mathbb{Z}_q^n$ be a fixed vector. Let $\phi$ be a fixed probability distribution over $\mathbb{T}$. Denote by $A_{\mathbf{s},\phi}$ the distribution on $\mathbb{Z}_q^n \times \mathbb{T}$ obtained as follows.

  1. Pick a vector $\mathbf{a} \in \mathbb{Z}_q^n$ from the uniform distribution over $\mathbb{Z}_q^n$,
  2. Pick a number $e \in \mathbb{T}$ from the distribution $\phi$,
  3. Evaluate $t = \langle \mathbf{a}, \mathbf{s} \rangle / q + e$, where $\langle \mathbf{a}, \mathbf{s} \rangle = \sum_i a_i s_i$ is the standard inner product in $\mathbb{Z}_q^n$, the division is done in the field of reals (or more formally, this "division by $q$" is notation for the group homomorphism mapping $1 \in \mathbb{Z}_q$ to $1/q \in \mathbb{T}$), and the final addition is in $\mathbb{T}$.
  4. Output the pair $(\mathbf{a}, t)$.

The learning with errors problem $\mathrm{LWE}_{q,\phi}$ is to find $\mathbf{s} \in \mathbb{Z}_q^n$, given access to polynomially many samples of choice from $A_{\mathbf{s},\phi}$.
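
The four steps above can be sketched in Python. This is a toy instantiation: the Gaussian shape of the noise and the tiny parameter values are illustrative assumptions, since the definition allows any fixed distribution $\phi$ on the torus.

```python
import random

def lwe_sample(s, q, noise_std):
    """Draw one sample (a, t) from the distribution A_{s,phi} defined above.

    Here phi is taken to be a Gaussian on the torus (an illustrative choice);
    the definition permits any fixed distribution on reals modulo one.
    """
    n = len(s)
    a = [random.randrange(q) for _ in range(n)]     # step 1: uniform a in Z_q^n
    e = random.gauss(0.0, noise_std)                # step 2: noise e from phi
    inner = sum(x * y for x, y in zip(a, s)) % q    # <a, s> in Z_q
    t = (inner / q + e) % 1.0                       # step 3: divide by q, add e mod 1
    return a, t                                     # step 4: output the pair

# Toy parameters, far too small for any security
q, s = 97, [3, 14, 15]
samples = [lwe_sample(s, q, 0.005) for _ in range(10)]
```

Each second component lies on the torus $[0, 1)$; recovering $\mathbf{s}$ from many such pairs is exactly the search version of the problem.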

For every $\alpha > 0$, denote by $D_\alpha$ the one-dimensional Gaussian with zero mean and variance $\alpha^2/(2\pi)$, that is, the density function is $D_\alpha(x) = \rho_\alpha(x)/\alpha$ where $\rho_\alpha(x) = e^{-\pi (x/\alpha)^2}$, and let $\Psi_\alpha$ be the distribution on $\mathbb{T}$ obtained by considering $D_\alpha$ modulo one. The version of LWE considered in most of the results is $\mathrm{LWE}_{q,\Psi_\alpha}$.
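
As a quick sketch (Python, using the standard library's Gaussian sampler), drawing from $\Psi_\alpha$ amounts to sampling a normal variable with the stated variance and reducing modulo one:

```python
import math
import random

def sample_psi(alpha):
    """One draw from Psi_alpha: a zero-mean Gaussian of variance alpha^2/(2*pi),
    reduced modulo one onto the torus [0, 1)."""
    sigma = alpha / math.sqrt(2.0 * math.pi)  # std dev so variance = alpha^2/(2*pi)
    return random.gauss(0.0, sigma) % 1.0

# For small alpha the mass concentrates near 0 (and, by wrap-around, near 1)
draws = [sample_psi(0.01) for _ in range(1000)]
```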

Decision version

The LWE problem described above is the search version of the problem. In the decision version (DLWE), the goal is to distinguish between noisy inner products and uniformly random samples from $\mathbb{Z}_q^n \times \mathbb{T}$ (practically, some discretized version of it). Regev[3] showed that the decision and search versions are equivalent when $q$ is a prime bounded by some polynomial in $n$.

Solving decision assuming search

Intuitively, if we have a procedure for the search problem, the decision version can be solved easily: just feed the input samples for the decision problem to the solver for the search problem. Denote the given samples by $\{(\mathbf{a}_i, b_i)\} \subset \mathbb{Z}_q^n \times \mathbb{T}$. If the solver returns a candidate $\mathbf{s}$, then for all $i$, calculate $b_i - \langle \mathbf{a}_i, \mathbf{s} \rangle / q$. If the samples are from an LWE distribution, then the results of this calculation will be distributed according to $\phi$, but if the samples are uniformly random, these quantities will be distributed uniformly as well.
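
This check can be made concrete in Python. The 0.1 acceptance window and 90% threshold below are arbitrary illustrative choices that work when the noise distribution is much narrower than the window:

```python
def residual(a, b, s_cand, q):
    """Distance of b - <a, s_cand>/q from 0 on the torus [0, 1)."""
    r = (b - sum(x * y for x, y in zip(a, s_cand)) % q / q) % 1.0
    return min(r, 1.0 - r)

def looks_like_lwe(samples, s_cand, q, window=0.1, threshold=0.9):
    """Accept if most residuals fall near 0 mod 1, as they do for genuine
    LWE samples under the correct secret; uniform samples land inside the
    window only with probability about 2*window each."""
    close = sum(1 for a, b in samples if residual(a, b, s_cand, q) < window)
    return close >= threshold * len(samples)
```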

Solving search assuming decision

For the other direction, given a solver for the decision problem, the search version can be solved as follows: recover $\mathbf{s}$ one coordinate at a time. To obtain the first coordinate $s_1$, make a guess $k \in \mathbb{Z}_q$, and do the following. Choose a number $r \in \mathbb{Z}_q$ uniformly at random. Transform each given sample $(\mathbf{a}, b)$ by calculating $(\mathbf{a} + (r, 0, \ldots, 0),\; b + rk/q)$. Send the transformed samples to the decision solver.

If the guess $k$ was correct, the transformation takes the distribution $A_{\mathbf{s},\chi}$ to itself, and otherwise, since $q$ is prime, it takes it to the uniform distribution. So, given a polynomial-time solver for the decision problem that errs with very small probability, since $q$ is bounded by some polynomial in $n$, it only takes polynomial time to guess every possible value for $k$ and use the solver to see which one is correct.

After obtaining $s_1$, we follow an analogous procedure for each other coordinate $s_j$. Namely, we transform our $b_i$ samples the same way, and transform our $\mathbf{a}_i$ samples by calculating $\mathbf{a}_i + (0, \ldots, r, \ldots, 0)$, where the $r$ is in the $j$-th coordinate.[3]
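
The transformation used in this reduction is short enough to write out in Python, and its distributional effect (a correct guess preserves the LWE distribution, a wrong guess randomizes it when $q$ is prime) can be verified experimentally. The torus-form samples and Gaussian noise are illustrative assumptions:

```python
import random

def transform(sample, k, j, q):
    """Transform one sample for guess k at coordinate j: pick r uniformly,
    add r to a_j and add r*k/q to b (mod 1). If k = s_j this maps A_{s,chi}
    to itself; if k != s_j and q is prime, it maps it to uniform."""
    a, b = sample
    r = random.randrange(q)
    a2 = list(a)
    a2[j] = (a2[j] + r) % q
    return a2, (b + r * k / q) % 1.0
```

With the correct guess, residuals of the transformed samples against the true secret still equal the small noise terms; with a wrong guess they scatter over the torus.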

Peikert[4] showed that this reduction, with a small modification, works for any $q$ that is a product of distinct, small (polynomial in $n$) primes. The main idea is that if $q = q_1 q_2 \cdots q_t$, then for each $q_\ell$, guess and check whether $s_j$ is congruent to $0 \bmod q_\ell$, and then use the Chinese remainder theorem to recover $s_j$.

Average case hardness

Regev[3] showed the random self-reducibility of the LWE and DLWE problems for arbitrary $q$ and $\chi$. Given samples $\{(\mathbf{a}_i, b_i)\}$ from $A_{\mathbf{s},\chi}$, it is easy to see that $\{(\mathbf{a}_i, b_i + \langle \mathbf{a}_i, \mathbf{t} \rangle / q)\}$ are samples from $A_{\mathbf{s}+\mathbf{t},\chi}$.

So, suppose there was some set $\mathcal{S} \subseteq \mathbb{Z}_q^n$ such that $|\mathcal{S}| / |\mathbb{Z}_q^n| = 1/\mathrm{poly}(n)$, and for distributions $A_{\mathbf{s},\chi}$ with $\mathbf{s} \in \mathcal{S}$, DLWE was easy.

Then there would be some distinguisher $\mathcal{A}$ who, given samples $\{(\mathbf{a}_i, b_i)\}$, could tell whether they were uniformly random or from $A_{\mathbf{s},\chi}$ for some $\mathbf{s} \in \mathcal{S}$. If we need to distinguish uniformly random samples from $A_{\mathbf{s},\chi}$, where $\mathbf{s}$ is chosen uniformly at random from $\mathbb{Z}_q^n$, we could simply try different values $\mathbf{t}$ sampled uniformly at random from $\mathbb{Z}_q^n$, calculate $\{(\mathbf{a}_i, b_i + \langle \mathbf{a}_i, \mathbf{t} \rangle / q)\}$, and feed these samples to $\mathcal{A}$. Since $\mathcal{S}$ comprises a large fraction of $\mathbb{Z}_q^n$, with high probability, if we choose a polynomial number of values for $\mathbf{t}$, we will find one such that $\mathbf{s} + \mathbf{t} \in \mathcal{S}$, and $\mathcal{A}$ will successfully distinguish the samples.
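
The re-randomization step is mechanical; a Python sketch (torus-form samples assumed, as in the definition above):

```python
def shift_secret(samples, t, q):
    """Map samples of A_{s,chi} to samples of A_{s+t,chi} without knowing s:
    (a, b) -> (a, b + <a, t>/q mod 1)."""
    shifted = []
    for a, b in samples:
        inner = sum(x * y for x, y in zip(a, t)) % q
        shifted.append((a, (b + inner / q) % 1.0))
    return shifted
```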

Thus, no such $\mathcal{S}$ can exist, meaning LWE and DLWE are (up to a polynomial factor) as hard in the average case as they are in the worst case.

Hardness results

Regev's result

For an $n$-dimensional lattice $L$, let the smoothing parameter $\eta_\varepsilon(L)$ denote the smallest $s$ such that $\rho_{1/s}(L^* \setminus \{0\}) \leq \varepsilon$, where $L^*$ is the dual of $L$ and $\rho_\alpha(x) = e^{-\pi (x/\alpha)^2}$ is extended to sets by summing over function values at each element in the set. Let $D_{L,r}$ denote the discrete Gaussian distribution on $L$ of width $r$ for a lattice $L$ and real $r > 0$. The probability of each $x \in L$ is proportional to $\rho_r(x)$.

The discrete Gaussian sampling problem (DGS) is defined as follows: an instance of $\mathrm{DGS}_\phi$ is given by an $n$-dimensional lattice $L$ and a number $r \geq \phi(L)$. The goal is to output a sample from $D_{L,r}$. Regev shows that there is a reduction from $\mathrm{GapSVP}_{100\sqrt{n}\,\gamma(n)}$ to $\mathrm{DGS}_{\sqrt{n}\,\gamma(n)/\lambda(L^*)}$ for any function $\gamma(n)$.

Regev then shows that there exists an efficient quantum algorithm for $\mathrm{DGS}_{\sqrt{2n}\,\eta_\varepsilon(L)/\alpha}$ given access to an oracle for $\mathrm{LWE}_{q,\Psi_\alpha}$ for integer $q$ and $\alpha \in (0, 1)$ such that $\alpha q > 2\sqrt{n}$. This implies the hardness for LWE. Although the proof of this assertion works for any $q$, for creating a cryptosystem the modulus $q$ has to be polynomial in $n$.

Peikert's result

Peikert proves[4] that there is a probabilistic polynomial time reduction from the $\mathrm{GapSVP}_{\zeta,\gamma}$ problem in the worst case to solving $\mathrm{LWE}_{q,\Psi_\alpha}$ using $\mathrm{poly}(n)$ samples for parameters $\alpha \in (0, 1)$, $\gamma(n) \geq n/(\alpha\sqrt{\log n})$, $\zeta(n) \geq \gamma(n)$, and $q \geq (\zeta(n)/\sqrt{n})\,\omega(\sqrt{\log n})$.

Use in cryptography

The LWE problem serves as a versatile problem used in the construction of several[3][4][6][7] cryptosystems. In 2005, Regev[3] showed that the decision version of LWE is hard assuming quantum hardness of the lattice problems $\mathrm{SVP}$ and $\mathrm{SIVP}$ (for $\alpha$ as above) and with $q \leq \mathrm{poly}(n)$. In 2009, Peikert[4] proved a similar result assuming only the classical hardness of the related problem $\mathrm{GapSVP}_{\zeta,\gamma}$. The disadvantage of Peikert's result is that it bases itself on a non-standard version of an easier (when compared to SIVP) problem, GapSVP.

Public-key cryptosystem

Regev[3] proposed a public-key cryptosystem based on the hardness of the LWE problem. The cryptosystem as well as the proof of security and correctness are completely classical. The system is characterized by $m$, $q$, and a probability distribution $\chi$ on $\mathbb{T}$. The setting of the parameters used in proofs of correctness and security is

  • $q$, usually a prime number between $n^2$ and $2n^2$,
  • $m = (1+\varepsilon)(n+1)\log q$ for an arbitrary constant $\varepsilon > 0$,
  • $\chi = \Psi_{\alpha(n)}$ for $\alpha(n) \in o(1/(\sqrt{n}\log n))$, where $\Psi_\beta$ is the probability distribution obtained by sampling a normal variable with mean $0$ and standard deviation $\beta/\sqrt{2\pi}$ and reducing the result modulo $1$.

The cryptosystem is then defined by:

  • Private key: The private key is a vector $\mathbf{s} \in \mathbb{Z}_q^n$ chosen uniformly at random.
  • Public key: Choose $m$ vectors $\mathbf{a}_1, \ldots, \mathbf{a}_m \in \mathbb{Z}_q^n$ uniformly and independently. Choose error offsets $e_1, \ldots, e_m \in \mathbb{T}$ independently according to $\chi$. The public key consists of $\left(\mathbf{a}_i,\, b_i = \langle \mathbf{a}_i, \mathbf{s} \rangle / q + e_i\right)_{i=1}^m$.
  • Encryption: The encryption of a bit $x \in \{0, 1\}$ is done by choosing a random subset $S$ of $\{1, \ldots, m\}$ and then defining $\mathrm{Enc}(x)$ as $\left(\sum_{i \in S} \mathbf{a}_i,\; x/2 + \sum_{i \in S} b_i\right)$.
  • Decryption: The decryption of $(\mathbf{a}, b)$ is $0$ if $b - \langle \mathbf{a}, \mathbf{s} \rangle / q$ is closer to $0$ than to $1/2$ (modulo $1$), and $1$ otherwise.
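
A toy, insecure instantiation of the four algorithms above in Python; the concrete parameter values and the Gaussian error distribution are illustrative assumptions (real parameters follow the settings listed earlier):

```python
import random

def keygen(n, m, q, std):
    """Private key s uniform in Z_q^n; public key is m torus-form LWE samples."""
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    b = [(sum(x * y for x, y in zip(a, s)) % q / q + random.gauss(0, std)) % 1.0
         for a in A]
    return s, (A, b)

def encrypt(pk, bit, q):
    """Pick a random subset S of the samples; output (sum of a_i, bit/2 + sum of b_i)."""
    A, b = pk
    S = [i for i in range(len(A)) if random.random() < 0.5]
    a_sum = [sum(A[i][j] for i in S) % q for j in range(len(A[0]))]
    t = (bit / 2 + sum(b[i] for i in S)) % 1.0
    return a_sum, t

def decrypt(s, ct, q):
    """0 if t - <a, s>/q is closer to 0 than to 1/2 on the torus, else 1."""
    a, t = ct
    r = (t - sum(x * y for x, y in zip(a, s)) % q / q) % 1.0
    return 0 if min(r, 1.0 - r) < abs(r - 0.5) else 1
```

Correct decryption holds because the torus residual equals $x/2$ plus the sum of at most $m$ small error terms, which stays well below $1/4$ for the chosen noise width.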

The proof of correctness follows from the choice of parameters and some probability analysis. The proof of security is by reduction to the decision version of LWE: an algorithm for distinguishing between encryptions (with the above parameters) of $0$ and $1$ can be used to distinguish between $A_{\mathbf{s},\chi}$ and the uniform distribution over $\mathbb{Z}_q^n \times \mathbb{T}$.

CCA-secure cryptosystem

Peikert[4] proposed a system that is secure even against any chosen-ciphertext attack.

Key exchange

The idea of using LWE and Ring LWE for key exchange was proposed at the University of Cincinnati in 2011 by Jintai Ding. The idea comes from the associativity of matrix multiplication, and the errors are used to provide the security. The paper[8] appeared in 2012 after a provisional patent application was filed that year.

The security of the protocol is proven based on the hardness of solving the LWE problem. In 2014, Peikert presented a key-transport scheme[9] following the same basic idea as Ding's, which also uses Ding's idea of sending an additional 1-bit signal for rounding. The "New Hope" implementation,[10] selected for Google's post-quantum experiment,[11] uses Peikert's scheme with a variation in the error distribution.

Ring learning with errors signature (RLWE-SIG)

An RLWE version of the classic Feige–Fiat–Shamir identification protocol was created and converted to a digital signature in 2011 by Lyubashevsky. The details of this signature were extended in 2012 by Güneysu, Lyubashevsky, and Pöppelmann and published in their paper "Practical Lattice-Based Cryptography – A Signature Scheme for Embedded Systems". These papers laid the groundwork for a variety of recent signature algorithms, some based directly on the ring learning with errors problem and some not tied to the same hard RLWE problems.

from Grokipedia
Learning with Errors (LWE) is a fundamental computational problem in cryptography, defined as the task of distinguishing samples of the form $(a, b = \langle a, s \rangle + e \bmod q)$, where $a$ is chosen uniformly at random from $\mathbb{Z}_q^n$, $s$ is a fixed secret vector, and $e$ is a small error term drawn from a bounded distribution such as a discrete Gaussian, from uniformly random pairs $(a, u)$ over $\mathbb{Z}_q^n \times \mathbb{Z}_q$. This average-case problem captures the difficulty of decoding noisy linear equations modulo $q$, with parameters $n$ (dimension), $q$ (modulus, often polynomial in $n$), and a noise level controlled by $\alpha q$ where $\alpha = 1/\mathrm{poly}(n)$. Introduced by Oded Regev in 2005, LWE extends earlier ideas from learning parity with noise and provides a versatile foundation for secure cryptographic primitives resistant to quantum attacks. The hardness of LWE is equivalent to the worst-case complexity of approximating fundamental lattice problems, such as the shortest vector problem (SVP) and the shortest independent vectors problem (SIVP), to within $\tilde{O}(n/\alpha)$ factors, via quantum and classical reductions. Specifically, Regev's original work established a quantum reduction showing that solving LWE implies efficient approximation of these lattice problems, while subsequent results by Regev, Peikert, and others provided classical reductions for decision-LWE, confirming its robustness even against non-quantum adversaries. No algorithms running in time substantially below $2^{\Omega(n)}$ are known for LWE, making it a reliable assumption for security levels comparable to factoring or discrete logarithms. LWE has enabled a broad range of lattice-based cryptographic constructions, including public-key encryption schemes secure under chosen-plaintext attacks, as first demonstrated by Regev.

Its applications extend to key encapsulation mechanisms (KEMs), digital signatures, identity-based encryption, and fully homomorphic encryption, with variants like Ring-LWE (RLWE) and Module-LWE optimizing efficiency by operating over ring or module structures for faster computation and smaller key sizes. Notably, the NIST Post-Quantum Cryptography Standardization Process selected CRYSTALS-Kyber, a Module-LWE-based KEM (standardized as ML-KEM in FIPS 203), as a primary quantum-resistant scheme for general encryption, underscoring LWE's role in the transition to post-quantum cryptography.

Definition

The Learning with Errors (LWE) problem is an average-case hard problem over the integers modulo $q$, conjectured to be computationally difficult and serving as a foundational assumption in post-quantum cryptography. It involves recovering a secret vector from a collection of noisy linear equations, where the noise is drawn from a small error distribution, making it resistant to known efficient algorithms. Formally, the search-LWE problem is parameterized by the dimension $n$, a modulus $q > 0$, an error distribution $\chi$ over $\mathbb{Z}$ (typically a discrete Gaussian distribution with standard deviation $\sigma$), and the number of samples $m$. The input consists of $m$ pairs $(a_i, b_i)$ for $i = 1$ to $m$, where each $a_i$ is chosen uniformly at random from $\mathbb{Z}_q^n$, the secret $s$ is a fixed vector in $\mathbb{Z}_q^n$, and $b_i = \langle a_i, s \rangle + e_i \pmod{q}$ with $e_i$ sampled from $\chi$ (ensuring $e_i$ is small relative to $q$). The goal is to recover the secret vector $s$. This setup draws motivation from the task of linear regression with additive noise, where one aims to learn an underlying linear function from perturbed observations, but adapted to modular arithmetic to connect to the hardness of lattice problems like the shortest vector problem. In cryptography, LWE's average-case hardness enables the construction of secure primitives resistant to quantum attacks.

For illustration, consider a small instance with $n = 2$, $q = 7$, and $\chi$ uniform over $\{-1, 0, 1\}$. Suppose the secret is $s = (3, 5) \pmod{7}$. One possible sample might be $a_1 = (2, 4)$, $e_1 = 1$, yielding $b_1 = \langle (2,4), (3,5) \rangle + 1 = (6 + 20) + 1 = 27 \equiv 6 \pmod{7}$, so the pair is $((2,4), 6)$. Another sample could be $a_2 = (1, 3)$, $e_2 = -1$, giving $b_2 = \langle (1,3), (3,5) \rangle - 1 = (3 + 15) - 1 = 17 \equiv 3 \pmod{7}$, for the pair $((1,3), 3)$. Recovering $s$ from such noisy equations becomes challenging as $n$ and $m$ grow, even with multiple samples.
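
The arithmetic of this toy instance is easy to check in code (Python):

```python
def lwe_eq(a, s, e, q):
    """b = <a, s> + e mod q, the integer form of an LWE sample."""
    return (sum(x * y for x, y in zip(a, s)) + e) % q

b1 = lwe_eq((2, 4), (3, 5), 1, 7)    # (6 + 20) + 1 = 27, and 27 mod 7 = 6
b2 = lwe_eq((1, 3), (3, 5), -1, 7)   # (3 + 15) - 1 = 17, and 17 mod 7 = 3
```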

Historical Development

The learning with errors (LWE) problem was introduced by Oded Regev in as a generalization of earlier parity learning problems, such as the learning parity with noise (LPN) problem, providing a new average-case hardness assumption rooted in lattice problems. This formulation aimed to bridge the gap between worst-case lattice hardness assumptions—known to be computationally difficult—and more efficient average-case problems suitable for cryptographic applications. In 2008, , Peikert, and Vaikuntanathan introduced the dual-Regev encryption scheme, which offered improved efficiency and security guarantees compared to the original construction, solidifying LWE's role as a foundation for public-key . During the , the field expanded significantly through Chris Peikert's contributions, including efficient worst-case to average-case reductions that enhanced the practicality of LWE for and novel sampling techniques for generating secure error distributions. Concurrently, in 2010, Vadim Lyubashevsky, Chris Peikert, and Oded Regev introduced ring-LWE (RLWE), a structured variant that leveraged ideal lattices to achieve greater efficiency while preserving hardness, enabling applications like . LWE gained institutional recognition starting in 2016 with its inclusion in the U.S. National Institute of Standards and Technology (NIST) post-quantum cryptography standardization process, which sought quantum-resistant algorithms amid concerns over large-scale quantum computers. This culminated in 2022 when NIST selected the LWE-based CRYSTALS-Kyber key-encapsulation mechanism as one of the first standardized post-quantum algorithms, marking LWE's transition from theoretical construct to deployed standard. From 2023 to 2025, research has intensified on practical attacks against LWE, with benchmarking frameworks evaluating solvers like uSVP and SALSA on real-world parameters from schemes such as , revealing insights into concrete security levels. 
Parallel efforts have focused on quantum hardness proofs, including analyses of LWE variants resistant to quantum algorithms, such as those incorporating quantum amplitudes to strengthen oblivious sampling and maintain post-quantum security.

Problem Formulations

Search LWE

The search version of the Learning with Errors (LWE) problem, denoted Search-LWE$_{n,q,\chi}$, requires recovering a secret vector $s \in \mathbb{Z}_q^n$ given $m$ samples $(a_i, b_i)$ for $i = 1, \ldots, m$, where each $a_i$ is chosen uniformly at random from $\mathbb{Z}_q^n$, $b_i = \langle a_i, s \rangle + e_i \pmod{q}$, and the errors $e_i$ are drawn independently from a distribution $\chi$ over $\mathbb{Z}_q$. Typically, $s$ is also sampled from $\chi$ or uniformly, and $\chi$ is a bounded or discrete Gaussian distribution centered at $0$. Search-LWE is assumed to be computationally hard when $m$ is polynomial in the dimension $n$, the modulus $q$ is polynomial in $n$, and $\chi$ has small width relative to $q$, such as a discrete Gaussian with standard deviation $\sigma = \alpha q$ where $\alpha = 1/\mathrm{poly}(n)$. This regime ensures the errors are large enough to obscure the secret (e.g., $\alpha q \geq 2\sqrt{n}$), which is the condition required by the worst-case hardness reductions.