Recent from talks
Nothing was collected or created yet.
Security level
View on WikipediaIn cryptography, security level is a measure of the strength that a cryptographic primitive — such as a cipher or hash function — achieves. Security level is usually expressed as a number of "bits of security" (also security strength),[1] where n-bit security means that the attacker would have to perform 2n operations to break it,[2] but other methods have been proposed that more closely model the costs for an attacker.[3] This allows for convenient comparison between algorithms and is useful when combining multiple primitives in a hybrid cryptosystem, so there is no clear weakest link. For example, AES-128 (key size 128 bits) is designed to offer a 128-bit security level, which is considered roughly equivalent to a RSA using 3072-bit key.
In this context, security claim or target security level is the security level that a primitive was initially designed to achieve, although "security level" is also sometimes used in those contexts. When attacks are found that have lower cost than the security claim, the primitive is considered broken.[4][5]
In symmetric cryptography
[edit]Symmetric algorithms usually have a strictly defined security claim. For symmetric ciphers, it is typically equal to the key size of the cipher — equivalent to the complexity of a brute-force attack.[5][6] Cryptographic hash functions with output size of n bits usually have a collision resistance security level n/2 and a preimage resistance level n. This is because the general birthday attack can always find collisions in 2n/2 steps.[7] For example, SHA-256 offers 128-bit collision resistance and 256-bit preimage resistance.
However, there are some exceptions to this. The Phelix and Helix are 256-bit ciphers offering a 128-bit security level.[5][8] The SHAKE variants of SHA-3 are also different: for a 256-bit output size, SHAKE-128 provides 128-bit security level for both collision and preimage resistance.[9]
In asymmetric cryptography
[edit]The design of most asymmetric algorithms (i.e. public-key cryptography) relies on neat mathematical problems that are efficient to compute in one direction, but inefficient to reverse by the attacker. However, attacks against current public-key systems are always faster than brute-force search of the key space. Their security level isn't set at design time, but represents a computational hardness assumption, which is adjusted to match the best currently known attack.[6]
Various recommendations have been published that estimate the security level of asymmetric algorithms, which differ slightly due to different methodologies.
- For the RSA cryptosystem at 128-bit security level, NIST and ENISA recommend using 3072-bit keys[10][11] and IETF 3253 bits.[12][13] The conversion from key length to a security level estimate is based on the complexity of the GNFS.[14]: §7.5
- Diffie–Hellman key exchange and DSA are similar to RSA in terms of the conversion from key length to a security level estimate.[14]: §7.5
- Elliptic curve cryptography requires shorter keys, so the recommendations for 128-bit are 256-383 (NIST), 256 (ENISA) and 242 bits (IETF). The conversion from key size f to security level is approximately f / 2: this is because the method to break the Elliptic Curve Discrete Logarithm Problem, the rho method, finishes in 0.886 sqrt(2f) additions.[15]
Typical levels
[edit]The following table are examples of typical security levels for types of algorithms as found in s5.6.1.1 of the US NIST SP-800-57 Recommendation for Key Management.[16] : Table 2
| Security Bits | Symmetric Key | Finite Field/Discrete Logarithm (DSA, DH, MQV) |
Integer Factorization (RSA) |
Elliptic Curve (ECDSA, EdDSA, ECDH, ECMQV) |
|---|---|---|---|---|
| 80 | 2TDEA[a] | L = 1024, N = 160 | k = 1024 | 160 ≤ f ≤ 223 |
| 112 | 3TDEA[a] | L = 2048, N =224 | k = 2048 | 224 ≤ f ≤ 255 |
| 128 | AES-128 | L = 3072, N = 256 | k = 3072 | 256 ≤ f ≤ 383 |
| 192 | AES-192 | L = 7680, N = 384 | k = 7680 | 384 ≤ f ≤ 511 |
| 256 | AES-256 | L = 15360, N = 512 | k = 15360 | f ≥ 512 |
Under NIST recommendation, a key of a given security level should only be transported under protection using an algorithm of equivalent or higher security level.[14]
The security level is given for the cost of breaking one target, not the amortized cost for group of targets. It takes 2128 operations to find a AES-128 key, yet the same number of amortized operations is required for any number m of keys. On the other hand, breaking m ECC keys using the rho method require sqrt(m) times the base cost.[15][17]
Meaning of "broken"
[edit]A cryptographic primitive is considered broken when an attack is found to have less than its advertised level of security. However, not all such attacks are practical: most currently demonstrated attacks take fewer than 240 operations, which translates to a few hours on an average PC. The costliest demonstrated attack on hash functions is the 261.2 attack on SHA-1, which took 2 months on 900 GTX 970 GPUs, and cost US$75,000 (although the researchers estimate only $11,000 was needed to find a collision).[18]
Aumasson draws the line between practical and impractical attacks at 280 operations. He proposes a new terminology:[19]
- A broken primitive has an attack taking ≤ 280 operations. An attack can be plausibly carried out.
- A wounded primitive has an attack taking between 280 and around 2100 operations. An attack is not possible right now, but future improvements are likely to make it possible.
- An attacked primitive has an attack that is cheaper than the security claim, but much costlier than 2100. Such an attack is too far from being practical.
- Finally, an analyzed primitive is one with no attacks cheaper than its security claim.
Quantum attacks
[edit]The field of post-quantum cryptography considers the security level of cryptographic algorithms in the face of a hypothetical attacker possessing a quantum computer.
- Most quantum attacks on symmetric ciphers provide a square-root speedup to their classical counterpart, thereby halving the security level provided. (The exception is the slide attack with Simon's algorithm, though it has not proved useful in attacking AES.) For example, AES-256 would provide 128 bits of quantum security, which is still considered plenty.[20][21]
- Shor's algorithm promises a massive speedup in solving the factoring problem, the discrete logarithm problem, and the period-finding problem, so long as a sufficiently large quantum computer on the order of millions of cubits is available. This would spell the end of RSA, DSA, DH, MQV, ECDSA, EdDSA, ECDH, and ECMQV in their current forms.[22]
Even though quantum computers capable of these operations have yet to appear, adversaries of today may choose to "harvest now, decrypt later": to store intercepted ciphertexts so that they can be decrypted when sufficiently powerful quantum computers become available. As a result, governments and businesses have already begun work on moving to quantum-resistant algorithms. Examples of these effort include Google and Cloudflare's tests of hybrid post-quantum TLS on the Internet and[23] NSA's release of Commercial National Security Algorithm Suite 2.0 in 2022.
References
[edit]- ^ NIST Special Publication 800-57 Part 1, Revision 5. Recommendation for Key Management: Part 1 – General, p. 17.
- ^ Lenstra, Arjen K. "Key Lengths: Contribution to The Handbook of Information Security" (PDF).
- ^ Bernstein, Daniel J.; Lange, Tanja (4 June 2012). "Non-uniform cracks in the concrete: the power of free precomputation" (PDF). Advances in Cryptology - ASIACRYPT 2013. Lecture Notes in Computer Science. pp. 321–340. doi:10.1007/978-3-642-42045-0_17. ISBN 978-3-642-42044-3.
- ^ Aumasson, Jean-Philippe (2011). Cryptanalysis vs. Reality (PDF). Black Hat Abu Dhabi.
- ^ a b c Bernstein, Daniel J. (25 April 2005). Understanding brute force (PDF). ECRYPT STVL Workshop on Symmetric Key Encryption.
- ^ a b Lenstra, Arjen K. (9 December 2001). "Unbelievable Security: Matching AES Security Using Public Key Systems" (PDF). Advances in Cryptology — ASIACRYPT 2001. Lecture Notes in Computer Science. Vol. 2248. Springer, Berlin, Heidelberg. pp. 67–86. doi:10.1007/3-540-45682-1_5. ISBN 978-3-540-45682-7.
- ^ Alfred J. Menezes; Paul C. van Oorschot; Scott A. Vanstone. "Chapter 9 - Hash Functions and Data Integrity" (PDF). Handbook of Applied Cryptography. p. 336.
- ^ Ferguson, Niels; Whiting, Doug; Schneier, Bruce; Kelsey, John; Lucks, Stefan; Kohno, Tadayoshi (24 February 2003). "Helix: Fast Encryption and Authentication in a Single Cryptographic Primitive" (PDF). Fast Software Encryption. Lecture Notes in Computer Science. Vol. 2887. Springer, Berlin, Heidelberg. pp. 330–346. doi:10.1007/978-3-540-39887-5_24. ISBN 978-3-540-20449-7.
- ^ Dworkin, Morris J. (August 2015). SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions (PDF) (Report). NIST. p. 23. doi:10.6028/nist.fips.202.
- ^ Barker, Elaine (2020). Recommendation for Key Management, Part 1 -- General (PDF) (Report). NIST. NIST. pp. 54–55. doi:10.6028/NIST.SP.800-57pt1r5.
- ^ Algorithms, key size and parameters report – 2014. ENISA. Publications Office. 2013. p. 37. doi:10.2824/36822. ISBN 978-92-9204-102-1.
{{cite book}}: CS1 maint: others (link) - ^ Orman, Hilarie; Hoffman, Paul (April 2004). Determining Strengths For Public Keys Used For Exchanging Symmetric Keys. RFC. IETF. doi:10.17487/RFC3766.
- ^ Giry, Damien. "Keylength - Compare all Methods". keylength.com. Retrieved 2017-01-02.
- ^ a b c "Implementation Guidance for FIPS 140-2 and the Cryptographic Module Validation Program" (PDF).
- ^ a b "The rho method". Retrieved 21 February 2024.
- ^ Barker, Elaine (May 2020). Recommendation for Key Management, Part 1: General (PDF) (Report). NIST. NIST. p. 158. CiteSeerX 10.1.1.106.307. doi:10.6028/nist.sp.800-57pt1r5.
- ^ "After ECDH with Curve25519, is it pointless to use anything stronger than AES-128?". Cryptography Stack Exchange.
- ^ Gaëtan Leurent; Thomas Peyrin (2020-01-08). SHA-1 is a Shambles: First Chosen-Prefix Collision on SHA-1 and Application to the PGP Web of Trust (PDF) (Report). IACR Cryptology ePrint Archive.
- ^ Aumasson, Jean-Philippe (2020). Too Much Crypto (PDF). Real World Crypto Symposium.
- ^ Bonnetain, Xavier; Naya-Plasencia, María; Schrottenloher, André (11 June 2019). "Quantum Security Analysis of AES". IACR Transactions on Symmetric Cryptology. 2019 (2): 55–93. doi:10.13154/tosc.v2019.i2.55-93.
- ^ O'Shea, Dan (April 26, 2022). "AES-256 joins the quantum resistance". Fierce Electronics. Retrieved September 26, 2023.
- ^ WOHLWEND, JEREMY (2016). "ELLIPTIC CURVE CRYPTOGRAPHY: PRE AND POST QUANTUM" (PDF).
- ^ Bernstein, Daniel J. (2024-01-02). "Double encryption: Analyzing the NSA/GCHQ arguments against hybrids. #nsa #quantification #risks #complexity #costs".
Further reading
[edit]- Aumasson, Jean-Philippe (2020). Too Much Crypto (PDF). Real World Crypto Symposium.
See also
[edit]Security level
View on GrokipediaFundamentals
Definition and purpose
In cryptography, a security level refers to the minimum amount of computational resources, such as the number of operations or processing time, required for an adversary to successfully break a cryptographic primitive, such as an encryption algorithm or hash function. This strength is typically quantified in bits, where a k-bit security level indicates that breaking the system would require approximately 2^k operations, equivalent to the effort needed to exhaustively search a k-bit key space in an ideal scenario.[3][4] The primary purpose of security levels is to provide a standardized metric for assessing and comparing the resilience of cryptographic systems against brute-force attacks and more sophisticated cryptanalytic methods, thereby facilitating informed decisions on algorithm selection and parameter configuration to achieve desired protection durations. By expressing security in comparable terms, this framework helps ensure that cryptographic implementations can safeguard sensitive data against foreseeable computational threats over extended periods, such as decades.[3][5] The concept of security level in bits emerged in the late 1990s and early 2000s amid growing concerns over the adequacy of existing key sizes and the push for robust cryptographic standards, evolving from informal, ad-hoc assessments to precise, formalized evaluations. This development was closely tied to initiatives like the Advanced Encryption Standard (AES) selection process, initiated by NIST in 1997 and finalized in 2001, which emphasized quantifiable security to replace the vulnerable Data Encryption Standard (DES). Seminal work, such as the 2000 analysis by Lenstra and Verheul, provided foundational guidelines for aligning key sizes across algorithm types to equivalent security levels, marking a shift toward systematic cryptographic design.[6][5]Measurement in bits
In cryptography, the security level is quantified in bits, where a level of bits indicates that the most efficient known attack requires approximately computational operations to succeed. This logarithmic measure captures the exponential effort needed to compromise a system, drawing from foundational concepts like exhaustive search for key recovery or the birthday paradox for collision-finding in hash functions. For instance, under exhaustive search, an adversary must evaluate roughly half the possible inputs on average to identify the correct one, establishing a baseline for this metric.[7] For brute-force attacks, the security level is approximately , where is the size of the search space, such as the number of possible keys. To derive this, consider a uniform key space of size ; an exhaustive search succeeds after an expected trials, each involving a fixed-cost operation like encryption or verification. The effort in operations is thus , and the security level satisfies , yielding . For large , the subtraction of 1 bit is negligible, so is conventionally taken as bits, reflecting the full key length's contribution to brute-force resistance. In generic cases beyond brute force, , where is the minimum time complexity over all feasible attacks, often adjusted for success probability in formal games; this ensures the measure accounts for non-uniform or probabilistic adversaries.[7][8][4] Several factors influence this measurement beyond raw key size. Algorithm-specific weaknesses can reduce effective security if a non-brute-force attack achieves lower complexity, prioritizing the minimum over all known methods. Parallelization distributes workload across hardware, potentially shortening wall-clock time but preserving total operations at ; however, global limits on computational resources, such as energy or chip availability, constrain practicality. In quantum settings, Grover's algorithm offers a quadratic speedup for unstructured search, halving effective bit security (e.g., reducing -bit effort to ), though full implications for post-quantum migration are addressed elsewhere.[4][9] The bit measure primarily denotes time complexity in operations, but space requirements provide a complementary constraint. Attacks like meet-in-the-middle may demand storage alongside reduced time, creating trade-offs where space limits feasibility even if time is manageable; security level thus implicitly considers the dominant resource barrier, whether temporal or spatial. In symmetric cryptography, this framework directly quantifies effort for key exhaustion.[7][4]Symmetric cryptography
Relation to key size
In symmetric cryptography, the security level is directly proportional to the key size under ideal conditions, but practical considerations such as meet-in-the-middle attacks often reduce the effective strength to approximately half the key size in bits for certain constructions. According to Claude Shannon's 1949 theorem on perfect secrecy, a symmetric encryption scheme achieves perfect secrecy—and thus a security level equal to the key entropy—only if the key space is at least as large as the message space, ensuring that the ciphertext reveals no information about the plaintext beyond the key's entropy. This model assumes a one-time uniformly random key with entropy matching or exceeding the message length, establishing the foundational link where security level equals key entropy in bits. For an ideal single-encryption symmetric cipher with an n-bit key, brute-force key recovery requires 2^n operations, yielding an n-bit security level. However, the practical security level is limited to the minimum of n bits (from brute force) and the effort against known structural weaknesses, which can drop to n/2 bits due to meet-in-the-middle attacks. In the perfect secrecy model, this aligns with Shannon's bound, where deviations from ideal key entropy reduce achievable security. A key example of this reduction occurs in double encryption, where two independent n-bit keys are used in cascade (total key size 2n bits). The meet-in-the-middle attack exploits this by precomputing all 2^n possible encryptions from the plaintext with the second key and storing them, then performing 2^n decryptions from the ciphertext with the first key to find a matching intermediate value; the time and space complexity is thus O(2^n), providing only n-bit security instead of 2n bits. This vulnerability demonstrates why double encryption does not double the security, effectively halving the strength relative to the total key size; triple encryption is used instead to approach higher levels, as in 3DES. For the AES-128 block cipher, brute-force security is 128 bits, but the biclique attack—a meet-in-the-middle variant—reduces full-round key recovery to approximately 126.1 bits of effort (2^{126.1} operations), with 2^{88} chosen plaintexts, showing how structural attacks can marginally lower the practical level below the nominal key size while still providing robust n-bit security overall. In contrast to asymmetric cryptography, symmetric schemes maintain a near-linear relation, though the scaling differs significantly in effort requirements.Attack models and effort
In symmetric cryptography, attack models primarily focus on methods that exploit the structure of the algorithm or the key space to recover the secret key with less effort than exhaustive search. The brute-force attack involves systematically trying all possible keys until the correct one is found, requiring up to key trials for a -bit key, where each trial typically involves encrypting a known plaintext and comparing it to the ciphertext.[10] Dictionary attacks, applicable when keys are derived from human-chosen passwords, reduce effort by testing a predefined list of likely candidates rather than the full key space, with complexity proportional to the dictionary size rather than .[11] Side-channel attacks, while often practical, theoretically measure effort through the number of observable traces (e.g., power consumption or timing) needed to distinguish key-dependent patterns, though they are more implementation-specific than purely computational.[12] Effort estimation in symmetric cryptography quantifies security as the logarithm base 2 of the minimum resources (time or data) required for the best-known attack, often expressed in bits. For block ciphers, brute-force remains the baseline at bits for a -bit key, but cryptanalytic methods like differential and linear cryptanalysis can lower this if the cipher's design allows high-probability differentials or linear approximations. Differential cryptanalysis, introduced by Biham and Shamir, exploits probabilistic differences between plaintext pairs propagating through the cipher rounds; for the full 16-round Data Encryption Standard (DES), it requires approximately chosen plaintexts and equivalent time complexity to recover the key.[13] Linear cryptanalysis, developed by Matsui, uses linear approximations over XOR operations across rounds; applied to full DES, it demands about known plaintexts and encryptions for key recovery, establishing the security level as the exponent in this complexity.[14] These methods highlight how the security level for block ciphers is often the minimum of brute-force effort and the complexity of the strongest structural attack. A notable historical demonstration of brute-force effort occurred in 1998, when the Electronic Frontier Foundation's DES cracker—a specialized hardware machine costing about $250,000—recovered a 56-bit DES key in 56 hours by performing encryptions at a rate of roughly 90 billion keys per second, far exceeding theoretical estimates on general-purpose computers at the time and underscoring the gap between idealized and real-world computational feasibility.[15] To contextualize higher security levels, operations equate to approximately trials; even on a modern supercomputer capable of operations per second as of 2025, this would take approximately 12 days. However, as of 2025, 80-bit security is generally viewed as inadequate for sensitive long-term applications, with recommendations favoring at least 128 bits.[16]Asymmetric cryptography
Computational hardness assumptions
In asymmetric cryptography, security relies on computational hardness assumptions, which posit that certain mathematical problems are intractable to solve efficiently on classical computers. These assumptions underpin widely used schemes by ensuring that deriving private keys from public information requires infeasible computational effort. The primary problems include integer factorization, the discrete logarithm problem in finite fields, and the elliptic curve discrete logarithm problem.[17][18] The integer factorization problem forms the basis for the security of the RSA cryptosystem, introduced in 1978. It involves decomposing a large composite integer , where and are large primes, into its prime factors. The RSA scheme assumes that while multiplication of large primes is straightforward, factoring the resulting product is computationally hard, with no known polynomial-time algorithm available for general instances on classical computers. Security levels are thus tied to the size of , as larger instances exponentially increase the effort required for factorization using the best known algorithms, such as the general number field sieve.[17][17] The discrete logarithm problem (DLP) underpins the security of Diffie-Hellman key exchange and related schemes, first proposed in 1976. Formally, given a cyclic group of order , a generator , and an element , the DLP is to find an integer such that , or equivalently in the multiplicative group modulo a prime , . No efficient polynomial-time algorithm exists for solving the DLP in general groups suitable for cryptography, such as prime-order subgroups of , making the problem's hardness dependent on the group size and structure.[18][19][19] The elliptic curve discrete logarithm problem (ECDLP) extends the DLP to elliptic curve groups over finite fields, providing the foundation for elliptic curve cryptography (ECC) schemes. Proposed independently by Neal Koblitz and Victor Miller in 1985, the ECDLP requires finding an integer such that , where and are points on an elliptic curve defined over a finite field . Like the standard DLP, no polynomial-time solution is known on classical computers, but the ECDLP offers stronger security per bit due to the more compact group structure, with hardness scaling with the curve's order. These assumptions originated in the 1970s with the RSA paper's reliance on factoring hardness and Diffie-Hellman's use of the DLP, where security was heuristically argued based on the absence of efficient algorithms. Over time, cryptographic proofs evolved to provide more rigorous security guarantees, particularly in the random oracle model introduced by Bellare and Rogaway in 1993, which idealizes hash functions as random oracles to demonstrate that schemes remain secure if the underlying problems are hard. This model has facilitated provable security reductions for many asymmetric primitives, bridging practical implementations with theoretical foundations.[17][18][20]Equivalent security estimates
Equivalent security estimates for asymmetric cryptography map the computational effort required to solve underlying hard problems—such as integer factorization for RSA or the elliptic curve discrete logarithm problem (ECDLP) for ECC—to equivalent symmetric security levels, typically measured in bits of security against brute-force attacks. These estimates are derived from the complexity of the best-known classical algorithms, providing a benchmark for comparable protection levels. For instance, the National Institute of Standards and Technology (NIST) has established mappings based on empirical and asymptotic analyses of attack costs, ensuring that an asymmetric key size offers security roughly equivalent to a symmetric key of the specified bit length.[7] A primary estimation method involves assessing the running time of the general number field sieve (GNFS) for factoring the modulus in RSA, which provides the foundational security estimate. The asymptotic complexity of GNFS for factoring an integer is given by where denotes the subexponential function. The security level in bits is then approximately the base-2 logarithm of this time complexity, yielding a generic form , with for natural logarithms adjusted to bits via . This log-scale formula highlights how security grows sublinearly with modulus size, guiding the selection of larger keys for higher protection.[21] NIST's mappings translate these complexities into practical equivalents, correlating asymmetric key sizes with symmetric security levels based on GNFS and ECDLP solvers like Pollard's rho adapted for curves. For RSA, a 2048-bit modulus provides approximately 112 bits of security, comparable to breaking a 112-bit symmetric key via exhaustive search, while a 3072-bit modulus achieves 128 bits, aligning with AES-128's strength against known attacks. These estimates account for the highest attack efforts, such as GNFS variants optimized for general composites.[7] The following table summarizes key NIST equivalents for common security levels:| Security Level (bits) | RSA Modulus Size (bits) | ECC Curve Size (bits) |
|---|---|---|
| 112 | 2048 | 224–255 |
| 128 | 3072 | 256–383 |
| 192 | 7680 | 384–511 |
| 256 | 15360 | 512+ |
