Cryptography

Cryptography, or cryptology (from Ancient Greek: κρυπτός, romanized: kryptós "hidden, secret"; and γράφειν graphein, "to write", or -λογία -logia, "study", respectively[1]), is the practice and study of techniques for secure communication in the presence of adversarial behavior.[2] More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages.[3] Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others.[4] Core concepts related to information security (data confidentiality, data integrity, authentication and non-repudiation) are also central to cryptography.[5] Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords and military communications.
Cryptography prior to the modern age was effectively synonymous with encryption, converting readable information (plaintext) to unintelligible nonsense text (ciphertext), which can only be read by reversing the process (decryption). The sender of an encrypted (coded) message shares the decryption (decoding) technique only with the intended recipients to preclude access from adversaries. The cryptography literature often uses the names "Alice" (or "A") for the sender, "Bob" (or "B") for the intended recipient, and "Eve" (or "E") for the eavesdropping adversary.[6] Since the development of rotor cipher machines in World War I and the advent of computers in World War II, cryptography methods have become increasingly complex and their applications more varied.
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. While it is theoretically possible to break into a well-designed system, it is infeasible in actual practice to do so. Such schemes, if well designed, are therefore termed "computationally secure". Theoretical advances (e.g., improvements in integer factorization algorithms) and faster computing technology require these designs to be continually reevaluated and, if necessary, adapted. Information-theoretically secure schemes that provably cannot be broken even with unlimited computing power, such as the one-time pad, are much more difficult to use in practice than the best theoretically breakable but computationally secure schemes.
The growth of cryptographic technology has raised a number of legal issues in the Information Age. Cryptography's potential for use as a tool for espionage and sedition has led many governments to classify it as a weapon and to limit or even prohibit its use and export.[7] In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation.[8][9] Cryptography also plays a major role in digital rights management and copyright infringement disputes with regard to digital media.[10]
Terminology
The first use of the term "cryptograph" (as opposed to "cryptogram") dates back to the 19th century – originating from "The Gold-Bug", a story by Edgar Allan Poe.[11][12]
Until modern times, cryptography referred almost exclusively to "encryption", which is the process of converting ordinary information (called plaintext) into an unintelligible form (called ciphertext).[13] Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that carry out the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible cyphertexts, finite possible keys, and the encryption and decryption algorithms that correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks.
There are two main types of cryptosystems: symmetric and asymmetric. In symmetric systems, the only ones known until the 1970s, the same secret key encrypts and decrypts a message. Data manipulation in symmetric systems is significantly faster than in asymmetric systems. Asymmetric systems use a "public key" to encrypt a message and a related "private key" to decrypt it. The advantage of asymmetric systems is that the public key can be freely published, allowing parties to establish secure communication without having a shared secret key. In practice, asymmetric systems are used to first exchange a secret key, and then secure communication proceeds via a more efficient symmetric system using that key.[14] Examples of asymmetric systems include Diffie–Hellman key exchange, RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), and Post-quantum cryptography. Secure symmetric algorithms include the commonly used AES (Advanced Encryption Standard) which replaced the older DES (Data Encryption Standard).[15] Insecure symmetric algorithms include children's language tangling schemes such as Pig Latin or other cant, and all historical cryptographic schemes, however seriously intended, prior to the invention of the one-time pad early in the 20th century.
In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, "wallaby" replaces "attack at dawn"). A cypher, in contrast, is a scheme for changing or substituting an element below such a level (a letter, a syllable, or a pair of letters, etc.) to produce a cyphertext.
Cryptanalysis is the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to "crack" encryption algorithms or their implementations.
Some use the terms "cryptography" and "cryptology" interchangeably in English,[16] while others (including US military practice generally) use "cryptography" to refer specifically to the use and practice of cryptographic techniques and "cryptology" to refer to the combined study of cryptography and cryptanalysis.[17][18] English is more flexible than several other languages in which "cryptology" (done by cryptologists) is always used in the second sense above. RFC 2828 advises that steganography is sometimes included in cryptology.[19]
The study of characteristics of languages that have some application in cryptography or cryptology (e.g. frequency data, letter combinations, universal patterns, etc.) is called cryptolinguistics. Cryptolinguistics is especially used in military intelligence applications for deciphering foreign communications.[20][21]
History
Before the modern era, cryptography focused on message confidentiality (i.e., encryption)—conversion of messages from a comprehensible form into an incomprehensible one and back again at the other end, rendering it unreadable by interceptors or eavesdroppers without secret knowledge (namely the key needed for decryption of that message). Encryption attempted to ensure secrecy in communications, such as those of spies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality concerns to include techniques for message integrity checking, sender/receiver identity authentication, digital signatures, interactive proofs and secure computation, among others.
Classic cryptography
The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet).[22] Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter three positions further down the alphabet.[23] Suetonius reports that Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone in Egypt (c. 1900 BCE), but this may have been done for the amusement of literate observers rather than as a way of concealing information.
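The Caesar cipher described above is simple enough to state in a few lines of code. The following Python sketch (an illustration added here, not part of any cited source) shifts each Latin letter by a fixed amount, reproducing the 'fly at once' to 'gmz bu podf' example with a shift of one:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each Latin letter by `shift` positions, wrapping around the
    alphabet; characters other than letters pass through unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

print(caesar('fly at once', 1))   # -> 'gmz bu podf'
print(caesar('hello', 3))         # -> 'khoor' (Caesar's reported shift of three)
```

A negative shift undoes the encryption, which is why the cipher's entire "key" is just the shift value, one of only 25 possibilities.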
The Greeks of Classical times are said to have known of ciphers (e.g., the scytale transposition cipher claimed to have been used by the Spartan military).[24] Steganography (i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, from Herodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair.[13] Other steganography methods involve 'hiding in plain sight,' such as using a music cipher to disguise an encrypted message within a regular piece of sheet music. More modern examples of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information.
In India, the 2000-year-old Kama Sutra of Vātsyāyana speaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones.[13]
In Sassanid Persia, there were two secret scripts, according to the Muslim author Ibn al-Nadim: the šāh-dabīrīya (literally "King's script") which was used for official correspondence, and the rāz-saharīya which was used to communicate secret messages with other countries.[25]
David Kahn notes in The Codebreakers that modern cryptology originated among the Arabs, the first people to systematically document cryptanalytic methods.[26] Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels.[27]

Ciphertexts produced by a classical cipher (and some modern ciphers) will reveal statistical information about the plaintext, and that information can often be used to break the cipher. After the discovery of frequency analysis, nearly all such ciphers could be broken by an informed attacker.[28] Such classical ciphers still enjoy popularity today, though mostly as puzzles (see cryptogram). The Arab mathematician and polymath Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering Cryptographic Messages), which described the first known use of frequency analysis cryptanalysis techniques.[29][30]


Language letter frequencies may offer little help for some extended historical encryption techniques, such as the homophonic cipher, that tend to flatten the frequency distribution. For those ciphers, language letter group (or n-gram) frequencies may provide an attack.
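Frequency analysis can be demonstrated directly on a shift cipher: try every possible shift and score each candidate plaintext against typical English letter frequencies. A minimal sketch (the frequency table is approximate, and the helper names are illustrative):

```python
from collections import Counter

# Approximate relative frequencies of letters in English text (percent).
ENGLISH_FREQ = {
    'a': 8.2, 'b': 1.5, 'c': 2.8, 'd': 4.3, 'e': 12.7, 'f': 2.2, 'g': 2.0,
    'h': 6.1, 'i': 7.0, 'j': 0.15, 'k': 0.77, 'l': 4.0, 'm': 2.4, 'n': 6.7,
    'o': 7.5, 'p': 1.9, 'q': 0.095, 'r': 6.0, 's': 6.3, 't': 9.1, 'u': 2.8,
    'v': 0.98, 'w': 2.4, 'x': 0.15, 'y': 2.0, 'z': 0.074,
}

def shift_text(text: str, shift: int) -> str:
    """Caesar-shift lowercase letters; pass other characters through."""
    return ''.join(chr((ord(c) - 97 + shift) % 26 + 97) if c.isalpha() else c
                   for c in text.lower())

def crack_caesar(ciphertext: str) -> int:
    """Try all 26 shifts; score each candidate plaintext against English
    letter frequencies (chi-squared) and return the best-scoring shift."""
    best_shift, best_score = 0, float('inf')
    for shift in range(26):
        candidate = shift_text(ciphertext, -shift)
        counts = Counter(c for c in candidate if c.isalpha())
        total = sum(counts.values())
        score = sum((counts.get(letter, 0) / total - f / 100) ** 2 / (f / 100)
                    for letter, f in ENGLISH_FREQ.items())
        if score < best_score:
            best_shift, best_score = shift, score
    return best_shift

plaintext = ('cryptography prior to the modern age was effectively synonymous '
             'with encryption the conversion of readable information into '
             'unintelligible text')
ct = shift_text(plaintext, 7)
print(crack_caesar(ct))  # recovers the shift: 7
```

Note that the attack needs a reasonable amount of ciphertext: statistics of a few dozen letters may not yet resemble the language's overall distribution.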
Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of the polyalphabetic cipher, most clearly by Leon Battista Alberti around the year 1467, though there is some indication that it was already known to Al-Kindi.[30] Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automatic cipher device, a wheel that implemented a partial realization of his invention. In the Vigenère cipher, a polyalphabetic cipher, encryption uses a key word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, but this was first published about ten years later by Friedrich Kasiski.[31]
Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is not a sensible nor practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim—'the enemy knows the system'.
Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented such as the cipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's wheel cypher (not publicly known, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several patented, among them rotor machines—famously including the Enigma machine used by the German government and military from the late 1920s and during World War II.[32] The ciphers implemented by better quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI.[33]
Early computer-era cryptography
Cryptanalysis of the new mechanical ciphering devices proved to be both difficult and laborious. In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitive tasks, such as military code breaking (decryption). This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine.
Extensive open academic research into cryptography is relatively recent, beginning in the mid-1970s. In the early 1970s IBM personnel designed the Data Encryption Standard (DES) algorithm that became the first federal government cryptography standard in the United States.[34] In 1976 Whitfield Diffie and Martin Hellman published the Diffie–Hellman key exchange algorithm.[35] In 1977 the RSA algorithm was published in Martin Gardner's Scientific American column.[36] Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally.
Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure. The one-time pad is one, and was proven to be so by Claude Shannon. There are a few important algorithms that have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA, and some other systems, are secure, but even so, proof of unbreakability is unavailable since the underlying mathematical problem remains open. In practice, these are widely used, and are believed unbreakable in practice by most competent observers. There are systems similar to RSA, such as one by Michael O. Rabin, that are provably secure provided factoring n = pq is infeasible; however, these are quite unusable in practice. The discrete logarithm problem is the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the solvability or insolvability of the discrete log problem.[37]
As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so specified key lengths must grow accordingly.[38] The potential impact of quantum computing is already being considered by some cryptographic system designers developing post-quantum cryptography. The announced imminence of small implementations of these machines may be making the need for preemptive caution rather more than merely speculative.[5]
Modern cryptography
Claude Shannon's two papers, his 1948 paper on information theory, and especially his 1949 paper on cryptography, laid the foundations of modern cryptography and provided a mathematical basis for future cryptography.[39][40] His 1949 paper has been noted as having provided a "solid theoretical basis for cryptography and for cryptanalysis",[41] and as having turned cryptography from an "art to a science".[42] As a result of his contributions and work, he has been described as the "founding father of modern cryptography".[43]
Prior to the early 20th century, cryptography was mainly concerned with linguistic and lexicographic patterns. Since then cryptography has broadened in scope, and now makes extensive use of mathematical subdisciplines, including information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics.[44] Cryptography is also a branch of engineering, but an unusual one since it deals with active, intelligent, and malevolent opposition; other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems and quantum physics.
Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation on binary bit sequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible.
Research into post-quantum cryptography (PQC) has intensified because practical quantum computers would break widely deployed public-key systems such as RSA, Diffie–Hellman and ECC. A 2017 review in Nature surveys the leading PQC families—lattice-based, code-based, multivariate-quadratic and hash-based schemes—and stresses that standardisation and deployment should proceed well before large-scale quantum machines become available.[45]
Symmetric-key cryptography
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976.[35]

Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A block cipher enciphers input in blocks of plaintext as opposed to individual characters, the input form used by a stream cipher.
The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block cipher designs that have been designated cryptography standards by the US government (though DES's designation was finally withdrawn after the AES was adopted).[46] Despite its deprecation as an official standard, DES (especially its still-approved and much more secure triple-DES variant) remains quite popular; it is used across a wide range of applications, from ATM encryption[47] to e-mail privacy[48] and secure remote access.[49] Many other block ciphers have been designed and released, with considerable variation in quality. Many, even some designed by capable practitioners, have been thoroughly broken, such as FEAL.[5][50]
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material. RC4 is a widely used stream cipher.[5] Block ciphers can be used as stream ciphers by generating blocks of a keystream (in place of a Pseudorandom number generator) and applying an XOR operation to each bit of the plaintext with each bit of the keystream.[51]
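The keystream construction just described, generating blocks of key material and XORing them with the plaintext, can be sketched in a few lines. Here SHA-256 over key, nonce, and a counter stands in for the block cipher; this is a toy illustration of the counter-mode idea, not a vetted cipher:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Produce `length` keystream bytes by hashing key||nonce||counter in
    fixed-size blocks (counter mode, with SHA-256 standing in for a block
    cipher). Toy illustration only; real systems use AES-CTR or similar."""
    out = b''
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b'attack at dawn'
ct = xor_cipher(b'secret key', b'nonce-01', msg)
assert xor_cipher(b'secret key', b'nonce-01', ct) == msg  # XOR is self-inverse
```

The nonce matters: reusing a (key, nonce) pair reuses the keystream, which is the same mistake as reusing a one-time pad.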
Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt;[5][45] this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort.
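The MAC idea can be illustrated with HMAC from the Python standard library; the key, message, and amounts below are invented for the example:

```python
import hmac
import hashlib

key = b'shared secret key'
message = b'wire transfer: 100 to account 42'

# The sender computes a tag over the message using the shared secret key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Receiver recomputes the tag and compares in constant time; a tampered
    message (or wrong key) fails the check."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

assert verify(key, message, tag)
assert not verify(key, b'wire transfer: 9999 to account 42', tag)
```

Unlike a bare hash, the tag cannot be recomputed by an attacker who modifies the message in transit, because doing so requires the secret key.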
Public-key cryptography
Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as the square of the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret.

In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion of public-key (also, more generally, called asymmetric key) cryptography in which two different but mathematically related keys are used—a public key and a private key.[54] A public key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair.[55] The historian David Kahn described public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".[56]
In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, the public key is used for encryption, while the private or secret key is used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting the Diffie–Hellman key exchange protocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on a shared encryption key.[35] The X.509 standard defines the most commonly used format for public key certificates.[57]
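The key-exchange protocol works because modular exponentiation commutes: (g^a)^b = (g^b)^a mod p, so both parties reach the same value while only g^a and g^b travel over the wire. A toy sketch with a deliberately small Mersenne prime (real deployments use 2048-bit groups or elliptic curves):

```python
import secrets

# Demo group parameters: a small Mersenne prime and a fixed base.
# Far too small for real security; illustration only.
p = 2**127 - 1
g = 3

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1          # Alice's private key
b = secrets.randbelow(p - 2) + 1          # Bob's private key
A = pow(g, a, p)                          # Alice sends A to Bob
B = pow(g, b, p)                          # Bob sends B to Alice

# Both arrive at the same shared secret without ever transmitting it.
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)
assert alice_secret == bob_secret
```

An eavesdropper who sees p, g, A, and B must solve a discrete logarithm to recover either private exponent, which is believed infeasible for well-chosen large groups.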
Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system. This race was finally won in 1978 by Ronald Rivest, Adi Shamir, and Len Adleman, whose solution has since become known as the RSA algorithm.[58]
The Diffie–Hellman and RSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Other asymmetric-key algorithms include the Cramer–Shoup cryptosystem, ElGamal encryption, and various elliptic curve techniques.
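The RSA key relationship can be shown with classic textbook-sized parameters (tiny primes chosen purely for illustration; real keys use primes of a thousand or more bits plus a padding scheme such as OAEP):

```python
# Textbook RSA with tiny demo primes.
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent via modular inverse: 2753

m = 65                         # a message, encoded as an integer < n
c = pow(m, e, n)               # encrypt with the public key: 2790
assert pow(c, d, n) == m       # decrypt with the private key
```

The public pair (n, e) can be published freely; recovering d from it is believed to require factoring n, which is infeasible at realistic sizes.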
A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments.[59] Reportedly, around 1970, James H. Ellis had conceived the principles of asymmetric key cryptography. In 1973, Clifford Cocks invented a solution that was very similar in design rationale to RSA.[59][60] In 1974, Malcolm J. Williamson is claimed to have developed the Diffie–Hellman key exchange.[61]

Public-key cryptography is also used for implementing digital signature schemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic of being easy for a user to produce, but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs, etc.).[50]
Public-key algorithms are most often based on the computational complexity of "hard" problems, often from number theory. For example, the hardness of RSA is related to the integer factorization problem, while Diffie–Hellman and DSA are related to the discrete logarithm problem. The security of elliptic curve cryptography is based on number theoretic problems involving elliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.[5]
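A hybrid cryptosystem of the kind just described can be sketched end to end: a random session key encrypts the bulk message with a fast symmetric step, and only that short key is wrapped with the expensive public-key step. The sketch below reuses textbook RSA parameters and a hash-derived keystream, both toy choices for illustration only:

```python
import hashlib
import secrets

# Toy RSA key pair (p=61, q=53); real systems use 2048-bit moduli and padding.
n, e, d = 3233, 17, 2753

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Symmetric step: XOR data with a hash-derived keystream (toy cipher)."""
    ks = b''
    ctr = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + ctr.to_bytes(8, 'big')).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, ks))

# Sender: random session key for the bulk data, RSA-wrap only the key.
session_key = secrets.token_bytes(16)
ciphertext = stream_xor(session_key, b'the actual, possibly long, message')
wrapped = [pow(byte, e, n) for byte in session_key]   # byte-wise wrap (toy!)

# Receiver: unwrap the session key with the private exponent, then decrypt.
recovered_key = bytes(pow(c, d, n) for c in wrapped)
assert stream_xor(recovered_key, ciphertext) == b'the actual, possibly long, message'
```

Only the 16-byte key pays the public-key cost; the message itself, however long, goes through the cheap symmetric step, which is exactly why hybrid schemes dominate in practice.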
Cryptographic hash functions
Cryptographic hash functions are functions that take a variable-length input and return a fixed-length output, which can be used in, for example, a digital signature. For a hash function to be secure, it must be difficult to compute two inputs that hash to the same value (collision resistance) and to compute an input that hashes to a given output (preimage resistance). MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but is vulnerable to collisions as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit."[52] Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when NIST announced that Keccak would be the new SHA-3 hash algorithm.[53] Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security.
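Two of the properties above, fixed-length output and drastic sensitivity to any change of input, are easy to observe with the standard library's SHA-256:

```python
import hashlib

# A one-character change of input produces a completely different digest
# (the "avalanche" behavior expected of a good hash function).
h1 = hashlib.sha256(b'attack at dawn').hexdigest()
h2 = hashlib.sha256(b'attack at dusk').hexdigest()

assert h1 != h2
assert len(h1) == len(h2) == 64          # 256 bits = 64 hex characters

# Output length is fixed regardless of input size.
assert len(hashlib.sha256(b'x' * 10_000).hexdigest()) == 64
```

Collision resistance cannot be demonstrated this way, of course; it is a claim about the infeasibility of any search for two colliding inputs.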
Cryptanalysis
The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion.
It is a common misconception that every encryption method can be broken. In connection with his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is unbreakable, provided the key material is truly random, never reused, kept secret from all possible attackers, and of equal or greater length than the message.[62] Most ciphers, apart from the one-time pad, can be broken with enough computational effort by brute force attack, but the amount of effort needed may be exponentially dependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time-pad remains the only theoretically unbreakable cipher. Although well-implemented one-time-pad encryption cannot be broken, traffic analysis is still possible.
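The one-time pad's perfect secrecy can be made concrete: for any candidate plaintext of the right length there exists a key that "decrypts" the ciphertext to it, so the ciphertext alone carries no information beyond the length. A short sketch:

```python
import secrets

# One-time pad: XOR with a truly random key as long as the message.
msg = b'attack at dawn'
key = secrets.token_bytes(len(msg))
ct = bytes(m ^ k for m, k in zip(msg, key))

# The correct key recovers the message.
assert bytes(c ^ k for c, k in zip(ct, key)) == msg

# But a different key "decrypts" the same ciphertext to a different, equally
# plausible message, so an attacker holding only ct can rule nothing out.
fake = b'defend at dusk'
fake_key = bytes(c ^ m for c, m in zip(ct, fake))
assert bytes(c ^ k for c, k in zip(ct, fake_key)) == fake
```

The scheme's practical burden is visible in the code: the key must be as long as the message, truly random, and never reused.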
There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what Eve (an attacker) knows and what capabilities are available. In a ciphertext-only attack, Eve has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, Eve has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, Eve may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. In a chosen-ciphertext attack, Eve may be able to choose ciphertexts and learn their corresponding plaintexts.[5] Finally, in a man-in-the-middle attack, Eve gets in between Alice (the sender) and Bob (the recipient), accesses and modifies the traffic, and then forwards it to the recipient.[63] Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved).
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2⁵⁵ decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2⁴³ known plaintexts (with their corresponding ciphertexts) and approximately 2⁴³ DES operations.[64] This is a considerable improvement over brute force attacks.
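The brute-force strategy can be sketched on a deliberately toy cipher, a repeating two-byte XOR whose keyspace of 2¹⁶ is small enough to exhaust in milliseconds (unlike DES's 2⁵⁶). Given one known plaintext–ciphertext pair, the attacker simply tries every key:

```python
from itertools import product

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy repeating-XOR "cipher" with a 2-byte key: keyspace of only 2**16.
    return bytes(p ^ key[i % 2] for i, p in enumerate(plaintext))

known_plain = b"known plaintext block"
secret_key = bytes([0x3C, 0xA7])
observed_cipher = toy_encrypt(known_plain, secret_key)

# Known-plaintext brute force: try every key until one reproduces the pair.
recovered = next(
    bytes(k) for k in product(range(256), repeat=2)
    if toy_encrypt(known_plain, bytes(k)) == observed_cipher
)
assert recovered == secret_key
```

Real ciphers resist this only because their keyspace is astronomically larger; the attack itself is always available in principle.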
Public-key algorithms are based on the computational difficulty of various problems. The most famous of these are the difficulty of integer factorization of semiprimes and the difficulty of calculating discrete logarithms, neither of which has been proven solvable in polynomial time (P) using only a classical Turing-complete computer. Much public-key cryptanalysis concerns designing algorithms in P that can solve these problems, or using other technologies, such as quantum computers. For instance, the best-known algorithms for solving the elliptic curve-based version of discrete logarithm are much more time-consuming than the best-known algorithms for factoring, at least for problems of more or less equivalent size. Thus, to achieve an equivalent strength of encryption, techniques that depend upon the difficulty of factoring large composite numbers, such as the RSA cryptosystem, require larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves, first proposed in the mid-1980s, have become increasingly popular.
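The role of the discrete logarithm can be illustrated with a toy Diffie–Hellman exchange over a small prime; the parameters below are illustrative only, chosen precisely so that the brute-force discrete-log search Eve performs is feasible. Real deployments use moduli of 2048 bits or more (or elliptic-curve groups) to make that search infeasible:

```python
# Toy Diffie-Hellman key exchange (illustrative parameters, NOT secure).
p, g = 2087, 5            # small prime modulus and base
a, b = 123, 456           # Alice's and Bob's secret exponents

A = pow(g, a, p)          # Alice publishes g^a mod p
B = pow(g, b, p)          # Bob publishes g^b mod p
shared = pow(B, a, p)
assert shared == pow(A, b, p)   # both sides derive the same secret

# Eve, seeing only (p, g, A, B), must solve a discrete logarithm.
# At this toy size a linear scan suffices; at real sizes it does not.
eve_x = next(x for x in range(1, p) if pow(g, x, p) == A)
assert pow(B, eve_x, p) == shared   # Eve recovers the shared secret
```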
While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, they may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic analysis[65] and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting too short keys, will make any system vulnerable, regardless of other virtues. Social engineering and other attacks against humans (e.g., bribery, extortion, blackmail, espionage, rubber-hose cryptanalysis or torture) are often employed because they are far more cost-effective, and far faster, than pure cryptanalysis.
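A classic source of the timing leak described above is a byte-by-byte comparison that returns at the first mismatch: its running time reveals how many leading bytes of a guess are correct, letting an attacker recover a secret one byte at a time. The sketch below contrasts such a leaky check with the constant-time comparison in Python's standard hmac module (the secret value is illustrative):

```python
import hmac

SECRET = b"s3cret-PIN"

def leaky_check(guess: bytes) -> bool:
    # Returns at the FIRST mismatching byte, so the elapsed time leaks
    # how long the correct prefix of the guess is -- a timing side channel.
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
    return True

def constant_time_check(guess: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # so its running time does not depend on where the guess goes wrong.
    return hmac.compare_digest(guess, SECRET)

assert leaky_check(b"s3cret-PIN") and constant_time_check(b"s3cret-PIN")
assert not leaky_check(b"s3cret-PIX")
```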
Cryptographic primitives
Much of the theoretical work in cryptography concerns cryptographic primitives—algorithms with basic cryptographic properties—and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc.
Cryptosystems
One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, or cryptosystem. Cryptosystems (e.g., ElGamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protected backup data). Such cryptosystems are sometimes called cryptographic protocols.
Some widely known cryptosystems include RSA, Schnorr signature, ElGamal encryption, and Pretty Good Privacy (PGP). More complex cryptosystems include electronic cash[66] systems, signcryption systems, etc. Some more 'theoretical'[clarification needed] cryptosystems include interactive proof systems,[67] (like zero-knowledge proofs)[68] and systems for secret sharing.[69][70]
Lightweight cryptography
Lightweight cryptography (LWC) concerns cryptographic algorithms developed for strictly constrained environments. The growth of the Internet of Things (IoT) has spurred research into lightweight algorithms better suited to such environments, which impose strict constraints on power consumption and processing power while still demanding security.[71] Algorithms such as PRESENT, AES, and SPECK are examples of the many LWC algorithms that have been developed to meet the standard set by the National Institute of Standards and Technology.[72]
Applications
Cryptography is widely used on the internet to help protect user data and prevent eavesdropping. To ensure secrecy during transmission, many systems use private-key cryptography to protect transmitted information. With public-key systems, one can maintain secrecy without a master key or a large number of keys.[73] Some tools, such as BitLocker and VeraCrypt, do not primarily use public-private key cryptography: VeraCrypt, for example, uses a password hash to derive the single symmetric key, although it can also be configured to operate with public-private key pairs. The open-source library OpenSSL, written in C, provides free encryption software and tools. The most commonly used encryption cipher is AES,[74] as it enjoys hardware acceleration on all x86-based processors that support the AES-NI instruction set extension. A close contender is ChaCha20-Poly1305, an authenticated cipher built on a stream cipher; it is commonly used on mobile devices, which are typically ARM-based and lack AES-NI.
Cybersecurity
Cryptography can be used to secure communications by encrypting them. Websites use encryption via HTTPS.[75] "End-to-end" encryption, where only sender and receiver can read messages, is implemented for email in Pretty Good Privacy and for secure messaging in general in WhatsApp, Signal and Telegram.[75]
Operating systems use encryption to keep passwords secret, conceal parts of the system, and ensure that software updates are truly from the system maker.[75] Instead of storing plaintext passwords, computer systems store hashes thereof; then, when a user logs in, the system passes the given password through a cryptographic hash function and compares it to the hashed value on file. In this manner, neither the system nor an attacker has at any point access to the password in plaintext.[75]
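The salted-hash login flow described above can be sketched with PBKDF2 from Python's standard hashlib module; the password and iteration count here are illustrative (real systems tune the count, and often prefer dedicated schemes such as scrypt or Argon2):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Store only (salt, digest); the plaintext password is never stored.
    # The iteration count deliberately slows down offline guessing.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Re-derive the hash with the stored salt and compare in constant time.
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

The per-user random salt ensures that identical passwords produce different stored hashes, defeating precomputed lookup tables.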
Encryption is sometimes applied to an entire drive. For example, University College London has implemented BitLocker (a program by Microsoft) to render drive data opaque without users logging in.[75]
Cryptocurrencies and cryptoeconomics
Cryptographic techniques enable cryptocurrency technologies, such as distributed ledger technologies (e.g., blockchains), which underpin cryptoeconomics applications such as decentralized finance (DeFi). Key cryptographic techniques that enable cryptocurrencies and cryptoeconomics include, but are not limited to: cryptographic keys, cryptographic hash functions, asymmetric (public key) encryption, multi-factor authentication (MFA), end-to-end encryption (E2EE), and zero-knowledge proofs (ZKP).
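The central role of hash functions in distributed ledgers can be sketched as a hash chain: each block commits to the previous block's hash, so tampering with any earlier entry invalidates every later link. The block fields below are illustrative, not any particular blockchain's format:

```python
import hashlib
import json

def block_hash(block):
    # Hash a canonical serialization of the block's entire contents,
    # including the "prev" field that links it to its predecessor.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, data in enumerate(["tx: alice->bob 5", "tx: bob->carol 2"], start=1):
    chain.append({"index": i, "data": data, "prev": block_hash(chain[-1])})

# The chain verifies: every block's "prev" matches its predecessor's hash.
assert all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

# Tampering with history breaks every subsequent link.
chain[1]["data"] = "tx: alice->bob 5000"
assert chain[2]["prev"] != block_hash(chain[1])
```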
Quantum computing cybersecurity
Estimates suggest that a quantum computer could reduce the effort required to break today’s strongest RSA or elliptic-curve keys from millennia to mere seconds, rendering current protocols (such as the versions of TLS that rely on those keys) insecure.[76]
To mitigate this “quantum threat”, researchers are developing quantum-resistant algorithms whose security rests on problems believed to remain hard for both classical and quantum computers.[77]
Legal issues
Prohibitions
Cryptography has long been of interest to intelligence gathering and law enforcement agencies.[9] Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China and Iran, a license is still required to use cryptography.[7] Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.[78]
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography.[9] One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List.[79] Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.
Export controls
In the 1990s, there were several challenges to US export regulation of cryptography. After the source code for Philip Zimmermann's Pretty Good Privacy (PGP) encryption program found its way onto the Internet in June 1991, a complaint by RSA Security (then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and the FBI, though no charges were ever filed.[80][81] Daniel J. Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution.[82]
In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled.[83] Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000;[84] there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook E-mail client programs similarly can transmit and receive emails via TLS, and can send and receive email encrypted with S/MIME. Many Internet users do not realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally do not find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.[citation needed]
NSA involvement
Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy.[9] The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography.[85] DES was designed to be resistant to differential cryptanalysis,[86] a powerful and general cryptanalytic technique known to the NSA and IBM that became publicly known only when Biham and Shamir rediscovered and announced it in the late 1980s.[87] According to Steven Levy, IBM had discovered differential cryptanalysis independently,[81] but kept the technique secret at the NSA's request. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.
Another instance of the NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (called Skipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak to assist its intelligence efforts. The whole initiative was also criticized based on its violation of Kerckhoffs's Principle, as the scheme included a special escrow key held by the government for use by law enforcement (i.e. wiretapping).[81]
Digital rights management
Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use of copyrighted material, widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes.[88] This had a noticeable impact on the cryptography research community, since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in the EU Copyright Directive. Similar restrictions are called for by treaties signed by World Intellectual Property Organization member-states.
The United States Department of Justice and FBI have not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one. Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into an Intel security design for fear of prosecution under the DMCA.[89] Cryptologist Bruce Schneier has argued that the DMCA encourages vendor lock-in, while inhibiting actual measures toward cyber-security.[90] Both Alan Cox (longtime Linux kernel developer) and Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet. In both cases, the Motion Picture Association of America sent out numerous DMCA takedown notices, and there was a massive Internet backlash[10] triggered by the perceived impact of such notices on fair use and free speech.
Forced disclosure of encryption keys
In the United Kingdom, the Regulation of Investigatory Powers Act gives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security.[8] Successful prosecutions have occurred under the Act; the first, in 2009,[91] resulted in a term of 13 months' imprisonment.[92] Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation.
In the United States, the federal criminal case of United States v. Fricosu addressed whether a search warrant can compel a person to reveal an encryption passphrase or password.[93] The Electronic Frontier Foundation (EFF) argued that this is a violation of the protection from self-incrimination given by the Fifth Amendment.[94] In 2012, the court ruled that under the All Writs Act, the defendant was required to produce an unencrypted hard drive for the court.[95]
In many jurisdictions, the legal status of forced disclosure remains unclear.
The 2016 FBI–Apple encryption dispute concerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected.
As a potential countermeasure to forced disclosure, some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data (such as that of a drive which has been securely wiped).
See also
- Collision attack
- Comparison of cryptography libraries
- Cryptovirology – Securing and encrypting virology
- Crypto Wars – Attempts to limit access to strong cryptography
- Encyclopedia of Cryptography and Security – Book by Technische Universiteit Eindhoven
- Global surveillance – Mass surveillance across national borders
- Indistinguishability obfuscation – Type of cryptographic software obfuscation
- Information theory – Scientific study of digital information
- Outline of cryptography
- List of cryptographers
- List of multiple discoveries
- List of cryptography books
- List of open-source Cypherpunk software
- List of unsolved problems in computer science – List of unsolved computational problems
- Pre-shared key – Method to set encryption keys
- Secure cryptoprocessor
- Strong cryptography – Term applied to cryptographic systems that are highly resistant to cryptanalysis
- Syllabical and Steganographical Table – Eighteenth-century work believed to be the first cryptography chart
- World Wide Web Consortium's Web Cryptography API – World Wide Web Consortium cryptography standard
References
- ^ Liddell, Henry George; Scott, Robert; Jones, Henry Stuart; McKenzie, Roderick (1984). A Greek-English Lexicon. Oxford University Press.
- ^ Rivest, Ronald L. (1990). "Cryptography". In J. Van Leeuwen (ed.). Handbook of Theoretical Computer Science. Vol. 1. Elsevier.
- ^ Bellare, Mihir; Rogaway, Phillip (21 September 2005). "Introduction". Introduction to Modern Cryptography. p. 10.
- ^ Sadkhan, Sattar B. (December 2013). "Key note lecture multidisciplinary in cryptology and information security". 2013 International Conference on Electrical Communication, Computer, Power, and Control Engineering (ICECCPCE). pp. 1–2. doi:10.1109/ICECCPCE.2013.6998773. ISBN 978-1-4799-5633-3. S2CID 22378547.
- ^ a b c d e f g Menezes, A.J.; van Oorschot, P.C.; Vanstone, S.A. (1997). Handbook of Applied Cryptography. Taylor & Francis. ISBN 978-0-8493-8523-0.
- ^ a b Biggs, Norman (2008). Codes: An introduction to Information Communication and Cryptography. Springer. p. 171.
- ^ a b "Overview per country". Crypto Law Survey. February 2013. Archived from the original on 1 January 2013. Retrieved 26 March 2015.
- ^ a b "UK Data Encryption Disclosure Law Takes Effect". PC World. 1 October 2007. Archived from the original on 20 January 2012. Retrieved 26 March 2015.
- ^ a b c d Ranger, Steve (24 March 2015). "The undercover war on your internet secrets: How online surveillance cracked our trust in the web". TechRepublic. Archived from the original on 12 June 2016. Retrieved 12 June 2016.
- ^ a b Doctorow, Cory (2 May 2007). "Digg users revolt over AACS key". Boing Boing. Archived from the original on 12 May 2015. Retrieved 26 March 2015.
- ^ Whalen, Terence (1994). "The Code for Gold: Edgar Allan Poe and Cryptography". Representations. 46 (46). University of California Press: 35–57. doi:10.2307/2928778. JSTOR 2928778.
- ^ Rosenheim, Shawn (1997). The Cryptographic Imagination: Secret Writing from Edgar Poe to the Internet. Johns Hopkins University Press. p. 20. ISBN 978-0801853319.
- ^ a b c d Kahn, David (1967). The Codebreakers. ISBN 978-0-684-83130-5.
- ^ "An Introduction to Modern Cryptosystems". Archived from the original on 17 November 2015. Retrieved 12 October 2015.
- ^ Sharbaf, M.S. (1 November 2011). "Quantum cryptography: An emerging technology in network security". 2011 IEEE International Conference on Technologies for Homeland Security (HST). pp. 13–19. doi:10.1109/THS.2011.6107841. ISBN 978-1-4577-1376-7. S2CID 17915038.
- ^ "cryptology | Britannica". www.britannica.com. Archived from the original on 10 July 2022. Retrieved 22 June 2022.
- ^ Oded Goldreich, Foundations of Cryptography, Volume 1: Basic Tools, Cambridge University Press, 2001, ISBN 0-521-79172-3
- ^ "Cryptology (definition)". Merriam-Webster's Collegiate Dictionary (11th ed.). Merriam-Webster. Retrieved 26 March 2015.
- ^ R. Shirey (May 2000). Internet Security Glossary. Internet Engineering Task Force. doi:10.17487/RFC2828. RFC 2828. Informational. Obsoleted by RFC 4949.
- ^ Military.com (13 May 2021). "What's a Cryptologic Linguist?". Military.com. Retrieved 17 July 2023.
- ^ James D. Benson; Michael J. Cummings; William S. Greaves, eds. (January 1988). Linguistics in a Systemic Perspective. John Benjamins Publishing Company. p. 38. ISBN 9789027278760.
- ^ Saltzman, Benjamin A. (1 October 2018). "Vt hkskdkxt: Early Medieval Cryptography, Textual Errors, and Scribal Agency". Speculum. 93 (4): 975–1009. doi:10.1086/698861. ISSN 0038-7134. S2CID 165362817. Archived from the original on 26 February 2022. Retrieved 26 February 2022.
- ^ Katz, Jonathan; Lindell, Yehuda (2014). Introduction to Modern Cryptography (2nd ed.). Chapman and Hall. p. 9. ISBN 9781466570269.
- ^ I︠A︡shchenko, V.V. (2002). Cryptography: an introduction. AMS Bookstore. p. 6. ISBN 978-0-8218-2986-8.
- ^ electricpulp.com. "CODES – Encyclopaedia Iranica". www.iranicaonline.org. Archived from the original on 5 March 2017. Retrieved 4 March 2017.
- ^ Kahn, David (1996). The Codebreakers: The Comprehensive History of Secret Communication from Ancient Times to the Internet. Simon and Schuster. ISBN 978-1439103555. Archived from the original on 1 July 2023. Retrieved 16 October 2020.
- ^ Broemeling, Lyle D. (1 November 2011). "An Account of Early Statistical Inference in Arab Cryptology". The American Statistician. 65 (4): 255–257. doi:10.1198/tas.2011.10191. S2CID 123537702.
- ^ Cryptography Exam Study Essentials - A Comprehensive Guide to Cryptography Concepts for Exams (1st ed.). Cybellium Ltd (published 26 October 2024). 2024. p. 78. ISBN 9781836794936.
- ^ Singh, Simon (2000). The Code Book. New York: Anchor Books. pp. 14–20. ISBN 978-0-385-49532-5.
- ^ a b Al-Kadi, Ibrahim A. (April 1992). "The origins of cryptology: The Arab contributions". Cryptologia. 16 (2): 97–126. doi:10.1080/0161-119291866801.
- ^ Schrödel, Tobias (October 2008). "Breaking Short Vigenère Ciphers". Cryptologia. 32 (4): 334–337. doi:10.1080/01611190802336097. S2CID 21812933.
- ^ Hakim, Joy (1995). A History of US: War, Peace and all that Jazz. New York: Oxford University Press. ISBN 978-0-19-509514-2.
- ^ Gannon, James (2001). Stealing Secrets, Telling Lies: How Spies and Codebreakers Helped Shape the Twentieth Century. Washington, D.C.: Brassey's. ISBN 978-1-57488-367-1.
- ^ "The Legacy of DES – Schneier on Security". www.schneier.com. 6 October 2004. Archived from the original on 23 February 2022. Retrieved 26 January 2022.
- ^ a b c Diffie, Whitfield; Hellman, Martin (November 1976). "New Directions in Cryptography" (PDF). IEEE Transactions on Information Theory. IT-22 (6): 644–654. CiteSeerX 10.1.1.37.9720. doi:10.1109/tit.1976.1055638. Archived (PDF) from the original on 3 December 2017. Retrieved 16 November 2015.
- ^ Singh, Simon (1999). The Code Book: The Science of Secrecy From Ancient Egypt To Quantum Cryptography (First Anchor Books ed.). New York: Anchor Books. pp. 278. ISBN 978-0-385-49532-5.
- ^ Cryptography: Theory and Practice, Third Edition (Discrete Mathematics and Its Applications), 2005, by Douglas R. Stinson, Chapman and Hall/CRC
- ^ Blaze, Matt; Diffie, Whitefield; Rivest, Ronald L.; Schneier, Bruce; Shimomura, Tsutomu; Thompson, Eric; Wiener, Michael (January 1996). "Minimal key lengths for symmetric ciphers to provide adequate commercial security". Fortify. Archived from the original on 24 September 2015. Retrieved 26 March 2015.
- ^ Piper, F. C.; Murphy, Sean (2002). Cryptography: A Very Short Introduction. Very short introductions. Oxford; New York: Oxford University Press. p. 75. ISBN 978-0-19-280315-3. OCLC 48932608.
- ^ Hoffstein, Jeffrey; Pipher, Jill Catherine; Silverman, Joseph H. (2014). An Introduction to Mathematical Cryptography (2nd ed.). New York: Springer. p. 263. ISBN 978-1-4939-1710-5. OCLC 891676484.
- ^ O'Regan, Gerard (2008). A Brief History of Computing. London: Springer. p. 61. ISBN 978-1-84800-083-4. OCLC 183149167.
- ^ Zheng, Zhiyong (2022). Modern Cryptography Volume 1: A Classical Introduction to Informational and Mathematical Principle. Financial Mathematics and Fintech. Singapore: Springer Singapore. pp. vi. doi:10.1007/978-981-19-0920-7. ISBN 978-981-19-0919-1.
- ^ Bruen, Aiden A.; Forcinito, Mario (2005). Cryptography, Information Theory, and Error-Correction: A Handbook for the 21st Century. Hoboken, N.J: Wiley-Interscience. p. 3. ISBN 978-0-471-65317-2. OCLC 56191935.
- ^ Diffie, W.; Hellman, M. (1 September 2006). "New directions in cryptography". IEEE Transactions on Information Theory. 22 (6): 644–654. doi:10.1109/TIT.1976.1055638. Archived from the original on 19 April 2022. Retrieved 19 April 2022.
- ^ a b Bernstein, Daniel J.; Lange, Tanja (14 September 2017). "Post-quantum cryptography". Nature. 549 (7671): 188–194. Bibcode:2017Natur.549..188B. doi:10.1038/nature23461. ISSN 0028-0836. PMID 28905891. S2CID 4446249. Archived from the original on 10 July 2022. Retrieved 26 August 2022.
- ^ "FIPS PUB 197: The official Advanced Encryption Standard" (PDF). Computer Security Resource Center. National Institute of Standards and Technology. Archived from the original (PDF) on 7 April 2015. Retrieved 26 March 2015.
- ^ "NCUA letter to credit unions" (PDF). National Credit Union Administration. July 2004. Archived (PDF) from the original on 12 September 2014. Retrieved 26 March 2015.
- ^ J. Callas; L. Donnerhacke; H. Finney; R. Thayer (November 1998). OpenPGP Message Format. Network Working Group. doi:10.17487/RFC2440. RFC 2440. Proposed Standard. Obsoleted by RFC 4880.
- ^ Golen, Pawel (19 July 2002). "SSH". WindowSecurity. Archived from the original on 29 October 2009. Retrieved 26 March 2015.
- ^ a b Schneier, Bruce (1996). Applied Cryptography (2nd ed.). Wiley. ISBN 978-0-471-11709-4.
- ^ Paar, Christof (2009). Understanding cryptography : a textbook for students and practitioners. Jan Pelzl. Berlin: Springer. p. 123. ISBN 978-3-642-04101-3. OCLC 567365751.
- ^ a b "Announcing Request for Candidate Algorithm Nominations for a New Cryptographic Hash Algorithm (SHA–3) Family" (PDF). Federal Register. 72 (212). 2 November 2007. Archived (PDF) from the original on 28 February 2008.
- ^ a b "NIST Selects Winner of Secure Hash Algorithm (SHA-3) Competition". NIST. National Institute of Standards and Technology. 2 October 2012. Archived from the original on 2 April 2015. Retrieved 26 March 2015.
- ^ Diffie, Whitfield; Hellman, Martin (8 June 1976). "Multiuser cryptographic techniques". Proceedings of the June 7-10, 1976, national computer conference and exposition on - AFIPS '76. Vol. 45. pp. 109–112. doi:10.1145/1499799.1499815. S2CID 13210741.
- ^ Ralph Merkle was working on similar ideas at the time and encountered publication delays, and Hellman has suggested that the term used should be Diffie–Hellman–Merkle asymmetric key cryptography.
- ^ Kahn, David (Fall 1979). "Cryptology Goes Public". Foreign Affairs. 58 (1): 141–159. doi:10.2307/20040343. JSTOR 20040343.
- ^ "Using Client-Certificate based authentication with NGINX on Ubuntu". SSLTrust. Archived from the original on 26 August 2019. Retrieved 13 June 2019.
- ^ Rivest, Ronald L.; Shamir, A.; Adleman, L. (1978). "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems" (PDF). Communications of the ACM. 21 (2): 120–126. CiteSeerX 10.1.1.607.2677. doi:10.1145/359340.359342. S2CID 2873616. Archived from the original (PDF) on 16 November 2001. Previously released as an MIT "Technical Memo" in April 1977, and published in Martin Gardner's Scientific American Mathematical recreations column
- ^ a b Wayner, Peter (24 December 1997). "British Document Outlines Early Encryption Discovery". The New York Times. Archived from the original on 27 June 2017. Retrieved 26 March 2015.
- ^ Cocks, Clifford (20 November 1973). "A Note on 'Non-Secret Encryption'" (PDF). CESG Research Report. Archived (PDF) from the original on 27 July 2011. Retrieved 22 July 2009.
- ^ Singh, Simon (1999). The Code Book. Doubleday. pp. 279–292. ISBN 9780385495318.
- ^ Shannon, Claude; Weaver, Warren (1949). The Mathematical Theory of Communication. Bibcode:1949mtc..book.....S.
- ^ "An Example of a Man-in-the-middle Attack Against Server Authenticated SSL-sessions" (PDF). Archived (PDF) from the original on 3 June 2016. Retrieved 13 October 2015.
- ^ Junod, Pascal (2001). "On the Complexity of Matsui's Attack". Selected Areas in Cryptography (PDF). Lecture Notes in Computer Science. Vol. 2259. pp. 199–211. doi:10.1007/3-540-45537-X_16. ISBN 978-3-540-43066-7.
- ^ Song, Dawn; Wagner, David A.; Tian, Xuqing (2001). "Timing Analysis of Keystrokes and Timing Attacks on SSH" (PDF). Tenth USENIX Security Symposium.
- ^ Brands, S. (1994). "Untraceable Off-line Cash in Wallet with Observers". Advances in Cryptology – CRYPTO' 93. Lecture Notes in Computer Science. Vol. 773. pp. 302–318. doi:10.1007/3-540-48329-2_26. ISBN 978-3-540-57766-9. Archived from the original on 26 July 2011.
- ^ Babai, László (1985). "Trading group theory for randomness". Proceedings of the seventeenth annual ACM symposium on Theory of computing – STOC '85. pp. 421–429. CiteSeerX 10.1.1.130.3397. doi:10.1145/22145.22192. ISBN 978-0-89791-151-1. S2CID 17981195.
- ^ Goldwasser, S.; Micali, S.; Rackoff, C. (1989). "The Knowledge Complexity of Interactive Proof Systems". SIAM Journal on Computing. 18 (1): 186–208. CiteSeerX 10.1.1.397.4002. doi:10.1137/0218012.
- ^ Blakley, G. (June 1979). "Safeguarding cryptographic keys". 1979 International Workshop on Managing Requirements Knowledge (MARK). Vol. 48. pp. 313–317. doi:10.1109/MARK.1979.8817296. ISBN 978-1-5090-3181-8.
- ^ Shamir, A. (1979). "How to share a secret". Communications of the ACM. 22 (11): 612–613. doi:10.1145/359168.359176. S2CID 16321225.
- ^ Gunathilake, Nilupulee A.; Al-Dubai, Ahmed; Buchana, William J. (2 November 2020). "Recent Advances and Trends in Lightweight Cryptography for IoT Security". 2020 16th International Conference on Network and Service Management (CNSM). Izmir, Turkey: IEEE. pp. 1–5. doi:10.23919/CNSM50824.2020.9269083. ISBN 978-3-903176-31-7. S2CID 227277538. Archived from the original on 24 April 2021. Retrieved 24 April 2021.
- ^ Thakor, Vishal A.; Razzaque, Mohammad Abdur; Khandaker, Muhammad R. A. (2021). "Lightweight Cryptography Algorithms for Resource-Constrained IoT Devices: A Review, Comparison and Research Opportunities". IEEE Access. 9: 28177–28193. Bibcode:2021IEEEA...928177T. doi:10.1109/ACCESS.2021.3052867. ISSN 2169-3536. S2CID 232042514.
- ^ Cohen, Fred (1995). "2.4 – Applications of Cryptography". all.net. Archived from the original on 24 August 1999. Retrieved 21 December 2021.
- ^ "4 Common Encryption Methods to Shield Sensitive Data From Prying Eyes". GetApp. Archived from the original on 14 May 2022. Retrieved 14 May 2022.
- ^ a b c d e Chamberlain, Austin (12 March 2017). "Applications of Cryptography | UCL Risky Business". blogs.ucl.ac.uk. Archived from the original on 26 February 2018. Retrieved 21 December 2021.
- ^ "Cryptography use cases: From secure communication to data security". IBM. 17 January 2024. Retrieved 1 August 2025.
- ^ "Prepping For Post-Quantum Cryptography". IEEE Spectrum. 16 April 2024. Retrieved 1 August 2025.
- ^ "6.5.1 What Are the Cryptographic Policies of Some Countries?". RSA Laboratories. Archived from the original on 16 April 2015. Retrieved 26 March 2015.
- ^ Rosenoer, Jonathan (1995). "Cryptography & Speech". CyberLaw. Archived from the original on 1 December 2005. Retrieved 23 June 2006.
- ^ "Case Closed on Zimmermann PGP Investigation". IEEE Computer Society's Technical Committee on Security and Privacy. 14 February 1996. Archived from the original on 11 June 2010. Retrieved 26 March 2015.
- ^ a b c Levy, Steven (2001). Crypto: How the Code Rebels Beat the Government – Saving Privacy in the Digital Age. Penguin Books. p. 56. ISBN 978-0-14-024432-8. OCLC 244148644.
- ^ "Bernstein v USDOJ". Electronic Privacy Information Center. United States Court of Appeals for the Ninth Circuit. 6 May 1999. Archived from the original on 13 August 2009. Retrieved 26 March 2015.
- ^ "Dual-use List – Category 5 – Part 2 – "Information Security"" (PDF). Wassenaar Arrangement. Archived from the original on 26 September 2018. Retrieved 26 March 2015.
- ^ ".4 United States Cryptography Export/Import Laws". RSA Laboratories. Archived from the original on 31 March 2015. Retrieved 26 March 2015.
- ^ Schneier, Bruce (15 June 2000). "The Data Encryption Standard (DES)". Crypto-Gram. Archived from the original on 2 January 2010. Retrieved 26 March 2015.
- ^ Coppersmith, D. (May 1994). "The Data Encryption Standard (DES) and its strength against attacks" (PDF). IBM Journal of Research and Development. 38 (3): 243–250. doi:10.1147/rd.383.0243. Archived from the original on 4 March 2016. Retrieved 26 March 2015.
- ^ Biham, E.; Shamir, A. (1991). "Differential cryptanalysis of DES-like cryptosystems". Journal of Cryptology. 4 (1): 3–72. doi:10.1007/bf00630563. S2CID 206783462.
- ^ "The Digital Millennium Copyright Act of 1998" (PDF). United States Copyright Office. Archived (PDF) from the original on 8 August 2007. Retrieved 26 March 2015.
- ^ Ferguson, Niels (15 August 2001). "Censorship in action: why I don't publish my HDCP results". Archived from the original on 1 December 2001. Retrieved 16 February 2009.
- ^ Schneier, Bruce (6 August 2001). "Arrest of Computer Researcher Is Arrest of First Amendment Rights". InternetWeek. Archived from the original on 7 March 2017. Retrieved 7 March 2017.
- ^ Williams, Christopher (11 August 2009). "Two convicted for refusal to decrypt data". The Register. Archived from the original on 17 March 2015. Retrieved 26 March 2015.
- ^ Williams, Christopher (24 November 2009). "UK jails schizophrenic for refusal to decrypt files". The Register. Archived from the original on 26 March 2015. Retrieved 26 March 2015.
- ^ Ingold, John (4 January 2012). "Password case reframes Fifth Amendment rights in context of digital world". The Denver Post. Archived from the original on 2 April 2015. Retrieved 26 March 2015.
- ^ Leyden, John (13 July 2011). "US court test for rights not to hand over crypto keys". The Register. Archived from the original on 24 October 2014. Retrieved 26 March 2015.
- ^ "Order Granting Application under the All Writs Act Requiring Defendant Fricosu to Assist in the Execution of Previously Issued Search Warrants" (PDF). United States District Court for the District of Colorado. Archived (PDF) from the original on 9 June 2021. Retrieved 26 March 2015.
Further reading
- Arbib, Jonathan; Dwyer, John (2011). Discrete Mathematics for Cryptography. Algana Publishing. ISBN 978-1-907934-01-8.
- Becket, B (1988). Introduction to Cryptology. Blackwell Scientific Publications. ISBN 978-0-632-01836-9. OCLC 16832704. Excellent coverage of many classical ciphers and cryptography concepts and of the "modern" DES and RSA systems.
- Esslinger, Bernhard; et al. The CrypTool Script (PDF) (10th ed.). Archived from the original (PDF) on 22 July 2011. Retrieved 23 December 2013. CrypTool is a widely used open-source e-learning program on cryptography and cryptanalysis.
- In Code: A Mathematical Journey by Sarah Flannery (with David Flannery). Popular account of Sarah's award-winning project on public-key cryptography, co-written with her father.
- James Gannon, Stealing Secrets, Telling Lies: How Spies and Codebreakers Helped Shape the Twentieth Century, Washington, D.C., Brassey's, 2001, ISBN 1-57488-367-4.
- Oded Goldreich, Foundations of Cryptography Archived 9 August 2016 at the Wayback Machine, in two volumes, Cambridge University Press, 2001 and 2004.
- Alvin's Secret Code by Clifford B. Hicks (children's novel that introduces some basic cryptography and cryptanalysis).
- Introduction to Modern Cryptography Archived 16 October 2009 at the Wayback Machine by Jonathan Katz and Yehuda Lindell.
- Ibrahim A. Al-Kadi, "The Origins of Cryptology: the Arab Contributions," Cryptologia, vol. 16, no. 2 (April 1992), pp. 97–126.
- Christof Paar, Jan Pelzl, Understanding Cryptography, A Textbook for Students and Practitioners. Archived 31 October 2020 at the Wayback Machine Springer, 2009. (Slides, online cryptography lectures and other information are available on the companion web site.) Very accessible introduction to practical cryptography for non-mathematicians.
- "Max Planck Encyclopedia of Public International Law". Archived from the original on 1 May 2018. Retrieved 15 December 2021. Gives an overview of international law issues regarding cryptography.
- Introduction to Modern Cryptography by Phillip Rogaway and Mihir Bellare, a mathematical introduction to theoretical cryptography including reduction-based security proofs. PDF download Archived 24 September 2009 at the Wayback Machine.
- Stallings, William (2013). Cryptography and Network Security: Principles and Practice (6th ed.). Prentice Hall. ISBN 978-0-13-335469-0.
- Tenzer, Theo (2021): Super Secreto – The Third Epoch of Cryptography: Multiple, exponential, quantum-secure and above all, simple and practical Encryption for Everyone, Norderstedt, ISBN 978-3755761174.
- Johann-Christoph Woltag, 'Coded Communications (Encryption)' in Rüdiger Wolfrum (ed) Max Planck Encyclopedia of Public International Law (Oxford University Press 2009).
External links
- The dictionary definition of cryptography at Wiktionary
- Media related to Cryptography at Wikimedia Commons
- Cryptography on In Our Time at the BBC
- Crypto Glossary and Dictionary of Technical Cryptography Archived 4 July 2022 at the Wayback Machine
- A Course in Cryptography by Raphael Pass & Abhi Shelat – offered at Cornell in the form of lecture notes.
- For more on the use of cryptographic elements in fiction, see: Dooley, John F. (23 August 2012). "Cryptology in Fiction". Archived from the original on 29 July 2020. Retrieved 20 February 2015.
- The George Fabyan Collection at the Library of Congress has early editions of works of seventeenth-century English literature, publications relating to cryptography.
Terminology and Fundamentals
Definitions and Basic Principles
Cryptography is the discipline encompassing principles, means, and methods for transforming data to conceal its semantic content, thereby enabling secure communication amid adversarial threats.[1] At its core, it involves converting intelligible data, termed plaintext, into an unintelligible format known as ciphertext via encryption, which employs a cryptographic algorithm and a secret key; the inverse operation, decryption, reverses this to recover the plaintext using the corresponding key.[7][8] Cryptographic keys consist of bit strings that control the algorithm's operation, determining the specific transformation applied during encryption and decryption.[9] Algorithms, often called ciphers, specify the mathematical steps for these transformations, ranging from simple substitution methods to complex computational routines resistant to reversal without the key.[10] In symmetric encryption, a single shared key suffices for both encryption and decryption, facilitating efficiency but requiring secure key distribution; asymmetric encryption, by contrast, uses mathematically linked public-private key pairs, allowing public dissemination of the encryption key without compromising security.[11][9] Fundamental principles guiding cryptographic systems include confidentiality, which restricts access to authorized parties; data integrity, ensuring information remains unaltered during transmission or storage; authentication, verifying the legitimacy of communicants or data origins; and non-repudiation, binding actions to their performers to preclude denial.[12][10] These objectives derive from the need to counter threats like eavesdropping, tampering, impersonation, and disavowal, with effectiveness hinging on the secrecy and strength of keys alongside algorithm robustness against known attacks.[1][13]
Security Models: Information-Theoretic vs Computational
Information-theoretic security, also known as unconditional or perfect security, refers to cryptographic systems where no information about the plaintext is leaked through the ciphertext, even to an adversary with unlimited computational resources and time. This concept was formalized by Claude Shannon in his 1949 paper "Communication Theory of Secrecy Systems," where perfect secrecy is defined such that the posterior probability distribution over possible plaintexts given the ciphertext is identical to the prior distribution, implying zero mutual information between plaintext and ciphertext.[14] Achieving this requires the key space to be at least as large as the message space, as per Shannon's theorem, ensuring that for every ciphertext, every possible plaintext is equally likely under some key.[15] The one-time pad exemplifies information-theoretic security: it encrypts a message by XORing it with a truly random key of equal length, used only once, producing ciphertext indistinguishable from random noise without the key. This construction, devised by Gilbert Vernam in 1917 and strengthened by Joseph Mauborgne's stipulation that the random key never be reused, guarantees perfect secrecy because the adversary cannot distinguish the ciphertext from uniform randomness, regardless of attack sophistication, provided key generation and usage rules are followed strictly.[16] However, practical limitations include the need for secure key distribution and storage of keys as long as messages, rendering it inefficient for most applications beyond niche uses like diplomatic communications.[14] In contrast, computational security assumes adversaries are polynomially bounded in resources, relying on the intractability of specific mathematical problems under feasible computation. Security holds if breaking the system requires superpolynomial time, such as solving the integer factorization problem for RSA or finding short vectors in lattices for some modern schemes, with no efficient algorithms known as of 2023 despite extensive cryptanalysis.[17] Examples include the Advanced Encryption Standard (AES), standardized by NIST in 2001 after a public competition, against whose 128-bit variant no attack materially faster than the 2^128-operation brute-force search is known, and elliptic curve cryptography, where security stems from the elliptic curve discrete logarithm problem.[18] This model underpins modern cryptography but remains conditional: advances in quantum computing, such as Shor's algorithm demonstrated on small instances in 2001, threaten systems like RSA by enabling efficient factoring.[17]
| Aspect | Information-Theoretic Security | Computational Security |
|---|---|---|
| Adversary Model | Unlimited computation and time | Polynomial-time bounded |
| Security Guarantee | Absolute: no information leakage possible | Probabilistic: negligible success probability |
| Key Length Requirement | At least message length (e.g., one-time pad) | Fixed, independent of message (e.g., 256 bits for AES) |
| Practicality | Impractical for large-scale use due to key management | Widely deployed, efficient, but assumption-dependent |
| Examples | One-time pad | AES, RSA, elliptic curve Diffie-Hellman |
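The one-time pad's XOR construction summarized above can be sketched in a few lines of Python. This is an illustrative fragment, not a production implementation; the function names are chosen here for clarity, and the standard library's `secrets` module stands in for a true random source:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The key must be random, exactly as long as the message, and never reused.
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is self-inverse, so decryption is the same operation with the same key.
otp_decrypt = otp_encrypt

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))
ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message

# Perfect secrecy, informally: for ANY candidate plaintext of the same
# length there is a key that maps this ciphertext to it, so the ciphertext
# alone gives even an unbounded adversary no way to prefer one message.
candidate = b"RETREAT AT TEN"
explaining_key = bytes(c ^ p for c, p in zip(ciphertext, candidate))
assert otp_encrypt(candidate, explaining_key) == ciphertext
```

Reusing a pad voids the guarantee: XORing two ciphertexts produced under the same key yields the XOR of the two plaintexts, exposing their combined redundancy to statistical attack.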
History
Ancient and Classical Periods
The earliest documented use of cryptography for secure correspondence emerged among the ancient Spartans around 400 BC, employing a transposition cipher device known as the scytale.[19] This method involved wrapping a strip of parchment around a cylindrical baton of fixed diameter, writing the message along the spiral, then unwrapping the strip to produce a jumbled text; reconstruction required a matching baton to realign the characters.[19] Historical accounts, including those from Plutarch, describe its application in military dispatches to prevent interception by enemies, though some scholars debate whether it served primarily as a cipher or a message authentication tool due to the need for identical batons at sender and receiver ends.[20] In classical Greece, further developments included references to cryptographic techniques in military treatises, such as those by Aeneas Tacticus in the 4th century BC, who discussed methods for securing communications against betrayal.[21] Earlier traces appear in Mesopotamian records around 1500 BC, where a scribe obscured a pottery glaze formula using cuneiform substitutions, representing a rudimentary form of secret writing rather than systematic encryption.[22] Egyptian tomb inscriptions from circa 1900 BC employed anomalous hieroglyphs, potentially to conceal ritual knowledge from the uninitiated, though this practice bordered on steganography—hiding meaning through obscurity—rather than formal ciphering.[23] During the Roman Republic, Julius Caesar reportedly utilized a substitution cipher in military correspondence around 58–50 BC, shifting each letter in the Latin alphabet by three positions (e.g., A to D, B to E), rendering plaintext unintelligible without the key shift value.[24] Known as the Caesar cipher, this monoalphabetic technique was simple yet effective against casual readers, as evidenced by Suetonius's accounts of Caesar's encrypted orders to commanders.[25] Its vulnerability to frequency analysis stemmed from preserved letter distributions, but it marked an advancement in deliberate alphabetic substitution for state secrecy.[24] These ancient methods relied on shared secrets or physical devices, lacking mathematical complexity, and were driven by wartime needs to protect strategic information from adversaries.[26] By the end of the classical period, around the 5th century AD, such practices had influenced later Roman and Byzantine codes, though systematic cryptanalysis remained undeveloped until medieval times.[19]
Medieval to 19th Century Advances
During the Islamic Golden Age, Arab scholars made foundational contributions to cryptanalysis. Al-Kindi (c. 801–873 AD), in his treatise Risāla fī istikhrāj al-muʿammā (Manuscript on Deciphering Cryptographic Messages), systematically described frequency analysis, the first known method to break monoalphabetic substitution ciphers by comparing ciphertext letter frequencies to those in the target language.[2] This empirical approach exploited the non-uniform distribution of letters in natural language, enabling decryption without the key.[27] Al-Kindi also outlined early polyalphabetic substitution concepts, using multiple substitution alphabets to obscure frequency patterns, though full implementation awaited later developments.[28] In Europe, Renaissance humanists advanced cipher design amid diplomatic and ecclesiastical needs. Leon Battista Alberti's De componendis cifris (c. 1467) introduced the earliest documented polyalphabetic cipher, employing a rotating disk with mixed alphabets to vary substitutions periodically, enhancing resistance to frequency analysis. Johannes Trithemius (1462–1516) expanded this in Polygraphia (published 1518), presenting the tabula recta—a square table of shifted alphabets—and progressive ciphers where each letter shifted by an increasing amount, laying groundwork for systematic polyalphabetics despite mystical framing.[29] The 16th century saw practical polyalphabetic ciphers proliferate. Giovan Battista Bellaso described a keyed variant in La cifra del Sig. Giovan Battista Bellaso (1553), using a repeating keyword to select rows from a tabula recta for substitution, later popularized by Blaise de Vigenère in Traicté des chiffres (1586) as the autokey-strengthened "le chiffre carré."[30][31] This Vigenère cipher resisted attacks for centuries due to its key-dependent multiple alphabets, finding use in military and state secrets. Mechanical aids emerged, such as wheel-based devices for generating substitutions, exemplified by 16th-century French cipher machines resembling books with dials for diplomatic encoding.[32] By the 19th century, cryptanalytic techniques caught up. Charles Babbage (c. 1846) and Friedrich Kasiski (1863) independently developed methods to determine Vigenère key lengths by identifying repeated ciphertext sequences, whose distances revealed key periodicity via greatest common divisors, followed by per-position frequency analysis.[33][34] Kasiski's Die Geheimschriften und die Dechiffrir-Kunst formalized this, undermining polyalphabetics reliant on short keys.[35] Innovative ciphers addressed these vulnerabilities. Charles Wheatstone invented the Playfair cipher in 1854, a digraphic substitution using a 5x5 key-derived grid, encrypting letter pairs according to their rectangle, row, or column positions and yielding digrams resistant to simple frequency analysis; Lord Playfair promoted its adoption, with British forces employing it during the Boer War (1899–1902).[36][37] These advances reflected cryptography's evolution from ad hoc substitutions to structured systems balancing security and usability, driven by statecraft demands.[38]
20th Century Mechanization and Wars
The mechanization of cryptography in the 20th century advanced significantly through rotor-based cipher machines, which automated substitution ciphers using rotating wheels with wired permutations to scramble plaintext. Edward Hebern patented the first such device in the United States in 1924, building on a 1917 prototype that integrated electrical circuitry with typewriter components for enciphering messages.[39][2] These innovations addressed the limitations of manual systems, enabling faster encryption for military and diplomatic use amid rising global tensions. During World War I, cryptographic efforts relied primarily on manual methods, but the interwar period saw rotor machines proliferate. The German Enigma machine, invented by Arthur Scherbius and commercially introduced in 1923, featured multiple rotors, a reflector, and a plugboard, generating vast key spaces—approximately 10^23 possibilities in its military variants.[40][41] Adopted by the German military from 1926, Enigma secured naval, army, and air force communications during World War II, with operators selecting daily rotor orders, ring settings, and plugboard connections to vary substitutions dynamically.[3] Polish cryptanalysts, including Marian Rejewski, Jerzy Różycki, and Henryk Zygalski, achieved the first breaks of Enigma in December 1932 using mathematical analysis of message permutations and intercepted traffic, constructing "Zygalski sheets" and the electromechanical "Bomba" device by 1938 to exploit weaknesses in German procedures.[42] In 1939, they shared their techniques, replicas, and insights with British and French intelligence, enabling further advancements at Bletchley Park.[43] Alan Turing and his team developed the Bombe machine, deploying over 200 by war's end to test rotor settings against cribs—known or guessed plaintext—deciphering millions of messages and contributing to Allied victories, such as in the Battle of the Atlantic.[3][42] For higher-level strategic communications, Germany employed the Lorenz SZ40/42 cipher attachment from 1941, a 12-wheel teleprinter machine producing a pseudorandom keystream added modulo-2 to plaintext, used for Hitler's direct links to field commanders.[41] British codebreakers at Bletchley, led by Max Newman and Tommy Flowers, exploited operator errors and messages sent in depth—reuse of identical key settings—to recover the wheel patterns by hand methods, culminating in the Colossus, the world's first programmable electronic computer, operational by 1944 for automated cryptanalysis.[44] These breaks yielded Ultra intelligence, informing operations like D-Day.[45] The United States developed the SIGABA (ECM Mark II) rotor machine in the 1930s, featuring 15 rotors in two banks with irregular stepping to resist attacks, producing over 10^26 keys; it remained unbroken during the war and served until the 1950s.[46] Mechanized cryptography thus not only intensified wartime secrecy but also spurred computational breakthroughs, with codebreaking efforts demonstrating that procedural flaws often undermined even complex machines.[41]
Computer Era and Public-Key Emergence (1970s–1990s)
The advent of digital computers in the mid-20th century enabled the implementation of more sophisticated cryptographic algorithms in software and hardware, marking the transition to the computer era of cryptography. In the early 1970s, IBM researchers developed the Lucifer cipher, which evolved into the Data Encryption Standard (DES), a symmetric-key block cipher using a 56-bit key to encrypt 64-bit blocks through 16 rounds of Feistel network operations.[47] The U.S. National Bureau of Standards (NBS, now NIST) selected a modified version of Lucifer and published DES as Federal Information Processing Standard 46 in 1977, making it the first widely adopted cryptographic standard for non-classified government and commercial use.[48] DES relied on computational hardness assumptions, such as the difficulty of exhaustive key search without specialized hardware, though its key length later proved vulnerable to brute-force attacks by the 1990s.[2] A key challenge in symmetric cryptography remained secure key distribution over insecure channels, prompting innovations in asymmetric or public-key cryptography.
In 1976, Whitfield Diffie and Martin Hellman published "New Directions in Cryptography," introducing the concept of public-key distribution systems where parties could agree on a shared secret key without prior exchange, using the discrete logarithm problem over finite fields for key exchange.[49] This protocol, known as Diffie-Hellman key exchange, allowed each party to publish a public value while keeping a private exponent secret, enabling secure key derivation resistant to eavesdroppers who observe only public parameters.[50] Their work broke the monopoly on high-quality cryptography held by governments and laid the foundation for asymmetric systems by decoupling encryption keys from decryption keys.[51] Building on this, Ron Rivest, Adi Shamir, and Leonard Adleman invented the RSA cryptosystem in 1977, providing a practical public-key method for both encryption and digital signatures based on the integer factorization problem.[52] In RSA, a public key consists of a modulus n (product of two large primes) and exponent e, while the private key is the corresponding decryption exponent d; encryption raises plaintext to e modulo n, and decryption uses d, secure under the assumption that factoring large n is computationally infeasible.[53] Published in Communications of the ACM and popularized in Scientific American, RSA enabled secure communication and authentication without shared secrets, revolutionizing applications like secure email and e-commerce precursors.[52] The 1980s and 1990s saw proliferation of public-key variants and standards, including ElGamal encryption in 1985 using discrete logs for probabilistic encryption and the Digital Signature Algorithm (DSA) proposed by NIST in 1991 for government use.[54] Phil Zimmermann released Pretty Good Privacy (PGP) in 1991, implementing RSA and Diffie-Hellman for email encryption, which faced U.S. export restrictions due to cryptography's classification as a munition but spurred civilian adoption.[2] These developments shifted cryptography from government silos to open research, with computational security models emphasizing average-case hardness against polynomial-time adversaries, though concerns over backdoors and key length persisted, as evidenced by DES's eventual replacement needs.[49] By the late 1990s, public-key infrastructure (PKI) emerged for certificate management, underpinning secure web protocols like SSL/TLS.[55]
Digital Age Proliferation (2000s–Present)
In 2001, the National Institute of Standards and Technology (NIST) finalized the Advanced Encryption Standard (AES), selecting the Rijndael algorithm after a multi-year competition to replace the aging Data Encryption Standard (DES) for securing sensitive government data and beyond.[56] AES supports key sizes of 128, 192, or 256 bits and operates on 128-bit blocks, enabling efficient hardware and software implementations that facilitated its rapid integration into protocols like IPsec and Wi-Fi encryption standards such as WPA2.[57] This standardization marked a pivotal shift toward computationally secure symmetric cryptography in everyday digital infrastructure, with AES now underpinning billions of encrypted transactions daily across financial systems and data storage.[58] Elliptic curve cryptography (ECC) saw accelerated adoption throughout the 2000s, offering stronger security per bit length compared to RSA, which reduced computational overhead for resource-constrained devices like mobile phones and embedded systems.[59] NIST incorporated ECC into standards such as ECDSA for digital signatures in 2006, while the U.S. 
National Security Agency (NSA) endorsed its use for government systems in 2005, promoting curves like P-256 for key exchange and authentication.[60] By the mid-2000s, ECC underpinned protocols in SSL/TLS certificates and smart cards, enabling scalable public-key infrastructure (PKI) for e-commerce and secure communications, though concerns over curve selection—later tied to NSA influence—prompted scrutiny of potentially weakened parameters.[61] The proliferation of HTTPS, built on TLS evolutions from SSL, transformed web security, with adoption surging due to vulnerabilities like POODLE in older protocols and browser enforcement of encryption.[62] TLS 1.3, finalized in 2018 by the IETF, streamlined handshakes and mandated forward secrecy, contributing to over 95% of websites using HTTPS by 2024.[63] This widespread deployment, driven by certificate authorities issuing millions of TLS certificates annually, secured online banking, email, and cloud services against man-in-the-middle attacks, reflecting cryptography's embedding in the internet's core.[64] Bitcoin, described in Satoshi Nakamoto's 2008 white paper and launched in 2009, introduced blockchain's reliance on cryptographic primitives like SHA-256 hashing for proof-of-work and ECDSA for transaction signing, spawning an ecosystem of cryptocurrencies that demonstrated decentralized consensus without trusted intermediaries.[65] By 2017, the market capitalization of cryptocurrencies exceeded $800 billion, accelerating innovations in zero-knowledge proofs (e.g., zk-SNARKs in Zcash, 2016) for privacy-preserving verifications and smart contracts on Ethereum (2015), which extended cryptography to programmable finance.[66] These applications highlighted cryptography's role in enabling trust-minimized systems, though vulnerabilities like the 2010 Bitcoin overflow bug underscored ongoing risks in implementation.[67] Edward Snowden's 2013 disclosures revealed NSA efforts to undermine cryptographic standards, including a backdoor in the Dual_EC_DRBG
random number generator standardized by NIST in 2006, eroding trust in U.S.-led processes and spurring global adoption of end-to-end encryption in messaging apps like Signal (protocol open-sourced 2013).[68][69] The revelations prompted the IETF to prioritize "pervasive monitoring" mitigation in protocols and accelerated audits of libraries like OpenSSL, whose Heartbleed vulnerability (disclosed 2014) exposed 17% of HTTPS servers to memory leaks, reinforcing demands for rigorous, open-source cryptographic hygiene.[70][71] Anticipating quantum computing threats to ECC and RSA—via algorithms like Shor's—NIST initiated a post-quantum cryptography standardization in 2016, selecting algorithms such as CRYSTALS-Kyber for key encapsulation in 2022 and finalizing three standards (FIPS 203, 204, 205) in August 2024 for migration by 2030.[72] This effort, involving over 80 submissions and years of cryptanalysis, addresses lattice-based and hash-based schemes resilient to quantum attacks, with hybrid implementations already tested in TLS to bridge classical and quantum eras without disrupting deployed systems.[73] By 2025, enterprises faced mandates to inventory quantum-vulnerable cryptography, signaling a proactive proliferation toward resilient primitives amid advancing quantum hardware demonstrations.[74]
Theoretical Foundations
Claude Shannon's Contributions
Claude Elwood Shannon's seminal 1949 paper, "Communication Theory of Secrecy Systems," published in the Bell System Technical Journal, established the theoretical foundations of cryptography by applying principles from information theory to analyze secrecy systems.[75] In this work, Shannon modeled encryption as a communication channel where the goal is to ensure that intercepted ciphertexts reveal no information about the plaintext to an unauthorized party possessing unlimited computational resources.[76] He categorized secrecy systems into types such as transposition, substitution, and product systems, evaluating their theoretical limits rather than practical implementations.[77] Shannon defined perfect secrecy as a property where the mutual information between the plaintext and ciphertext is zero, meaning the a posteriori probability distribution of possible plaintexts after observing the ciphertext equals the a priori distribution, providing no evidentiary advantage to the adversary.[75] He proved that achieving perfect secrecy requires the key space to be at least as large as the message space, with the key drawn uniformly at random and used only once; otherwise, redundancy in the plaintext or key reuse enables statistical inference attacks.[75] This information-theoretic criterion contrasts with weaker computational security assumptions, highlighting that practical ciphers must rely on the intractability of certain problems rather than absolute secrecy.[78] The one-time pad cipher, involving modular addition of a random key as long as the message, exemplifies perfect secrecy under Shannon's conditions, as the ciphertext distribution remains uniform and independent of the plaintext.[75] Shannon formalized its unbreakable nature theoretically, though he noted logistical challenges in key generation, distribution, and disposal preclude widespread use.[79] His analysis extended to the "secrecy capacity" of channels, quantifying the maximum secure rate as the 
difference between channel capacity and equivocation induced by noise or adversaries.[75] Building on his 1948 formulation of entropy as a measure of uncertainty in information sources, Shannon applied it to cryptography to assess key entropy requirements and system redundancy.[80] High-entropy keys resist brute-force attacks by maximizing uncertainty, while low-entropy plaintexts (e.g., natural language with predictable frequencies) demand longer keys for secrecy, underscoring the need to eliminate exploitable patterns.[81] This entropy-based framework influenced subsequent evaluations of cryptographic strength, distinguishing provably secure systems from those vulnerable to frequency analysis or other statistical methods.[75]
Complexity Theory and Hard Problems
Cryptographic security relies on computational complexity theory to establish that certain problems lack efficient solutions by probabilistic polynomial-time algorithms, despite being solvable in exponential time. These hardness assumptions form the basis for primitives like encryption and signatures, positing that inverting specific functions or solving particular problems is infeasible for adversaries with bounded resources. Unlike information-theoretic security, which guarantees unconditional secrecy, computational security holds as long as no efficient attack exists, even if theoretical breaks are possible given unlimited computation.[82] This framework emphasizes average-case hardness over random instances generated by keys, rather than worst-case scenarios, aligning with practical usage where inputs follow probabilistic distributions.[83] Central to these foundations are one-way functions, defined as polynomial-time computable functions f that are easy to evaluate but hard to invert: for any probabilistic polynomial-time adversary A, the probability that A finds a preimage of f(x) for a uniformly random input x is negligible in the input length.[84] Trapdoor one-way functions extend this by incorporating a secret trapdoor that enables inversion, underpinning public-key systems. The existence of one-way functions, while unproven, is equivalent to the feasibility of basic private-key cryptography, including pseudorandom generators and secure symmetric encryption, via constructions that amplify weak hardness into strong security properties.[85] No explicit one-way function has been proven secure without relying on specific number-theoretic assumptions, but candidates derive from problems like modular exponentiation.
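The asymmetry behind a candidate one-way function can be sketched numerically. The parameters below are toy-sized (far too small for security) purely so the brute-force inversion terminates:

```python
# Modular exponentiation as a candidate one-way function (toy parameters).
# The forward direction costs O(log x) multiplications via square-and-multiply;
# inversion (a discrete logarithm) is done here by exhaustive search, which
# scales exponentially in the bit length of the modulus.
p, g = 104729, 6          # small prime modulus; deployed systems use >= 2048 bits
x = 4242                  # secret exponent
y = pow(g, x, p)          # easy direction

def brute_force_dlog(g, y, p):
    """Recover some exponent e with g^e = y (mod p) by exhaustive search."""
    acc = 1
    for e in range(p - 1):
        if acc == y:
            return e
        acc = acc * g % p
    return None

e = brute_force_dlog(g, y, p)
assert e is not None and pow(g, e, p) == y   # feasible only because p is tiny
```

Doubling the bit length of p roughly squares the cost of the search while barely affecting the forward `pow`, which is the gap one-way functions formalize.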
Prominent hard problems include integer factorization, where decomposing a semiprime n = pq (with p and q large primes of similar bit length) into its factors resists efficient algorithms; the general number field sieve represents the state-of-the-art, with the largest factored RSA-250 (829 bits) requiring approximately 2,900 core-years in 2020, rendering 2048-bit instances (common in TLS) computationally prohibitive at over a million core-years.[86] The discrete logarithm problem similarly assumes hardness in cyclic groups: given a generator g and an element y = g^x (or a point multiple Q = xP on elliptic curves), recovering x defies subexponential attacks for parameters exceeding 256 bits, with index calculus methods scaling poorly for prime fields.[87] These assumptions hold empirically, as no polynomial-time algorithms exist despite decades of scrutiny, though quantum algorithms like Shor's threaten them, motivating post-quantum alternatives based on lattice problems (e.g., learning with errors) or hash-based signatures.[88] Provable security formalizes these via reductions: a scheme is secure if breaking it implies solving a hard problem with comparable efficiency. For instance, the RSA cryptosystem's semantic security under chosen-plaintext attacks reduces to the factoring assumption, meaning an adversary factoring moduli can decrypt ciphertexts, but the converse holds only probabilistically.[89] Diffie-Hellman key exchange security reduces to the computational Diffie-Hellman assumption, equivalent to discrete log hardness in generic groups. Such reductions quantify security losses (e.g., via negligible functions) but rely on black-box models, which complexity theory shows cannot capture all separations, as relativization barriers imply some impossibilities.
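The multiply-versus-factor asymmetry can be made concrete at toy sizes. Trial division is far weaker than the sieving methods used on the RSA challenge numbers, but the scaling message is the same:

```python
# Multiplying two primes is instant, while recovering them by trial division
# takes O(sqrt(n)) divisions: noticeable at 40 bits, hopeless at the 2048-bit
# moduli common in TLS (where even the number field sieve needs vast effort).
def smallest_factor(n: int) -> int:
    """Return the smallest prime factor of n (n itself if n is prime)."""
    if n % 2 == 0:
        return 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n

p, q = 999_983, 1_000_003   # toy primes; real RSA primes are around 1024 bits each
n = p * q                   # the easy direction
assert smallest_factor(n) == p   # ~500,000 trial divisions for this 40-bit semiprime
```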
Average-case hardness enables these reductions, in contrast to worst-case NP-hardness, since cryptographic instances avoid pathological cases; for lattices, worst-case-to-average-case reductions preserve security for approximate shortest vector problems.[90] Overall, while no assumption is unconditionally proven (as P vs. NP remains open), empirical evidence and the absence of breaks sustain their use, with ongoing research refining parameters via competitions like NIST's post-quantum standardization.[91]
Symmetric-Key Techniques
Block Ciphers: Evolution from DES to AES
The Data Encryption Standard (DES), a symmetric-key block cipher, processes plaintext in 64-bit blocks using a 56-bit effective key length derived from a 64-bit input with 8 parity bits. Originally developed by IBM in the early 1970s as a refinement of the Lucifer cipher, it employs a Feistel network structure with 16 rounds of substitution, permutation, and key mixing operations. The National Bureau of Standards (now NIST) published DES as Federal Information Processing Standard (FIPS) 46 on January 15, 1977, mandating its use for unclassified government data encryption. By the 1990s, DES's 56-bit key proved vulnerable to brute-force attacks as computational power advanced; for instance, the Electronic Frontier Foundation's DES Cracker hardware recovered a key in 56 hours in 1998, demonstrating feasibility for dedicated attackers.[92] This short key length, rather than any known structural weakness, left exhaustive search as the practical threat and prompted interim mitigations like Triple DES (3DES), which applies the DES algorithm three times sequentially (encrypt-decrypt-encrypt) with two or three distinct keys, yielding an effective strength of up to 112 bits against brute force.[48] NIST incorporated 3DES into FIPS 46-3, reaffirmed on October 25, 1999, as a backward-compatible extension while planning a successor.[48] However, 3DES retained DES's 64-bit block size, exposing long-lived keys to birthday-bound collision attacks, while meet-in-the-middle attacks cap its effective security below the nominal key length and triple processing makes it markedly slower.
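The key property of DES's Feistel structure, that decryption runs the same network with the round keys reversed even when the round function is not invertible, can be sketched as follows (a truncated keyed hash stands in for DES's expansion, S-box, and permutation steps; this is an illustration, not DES):

```python
# Minimal Feistel-network sketch: encrypting with round keys k1..k16 and then
# re-running the network with k16..k1 recovers the plaintext, even though the
# round function F itself is one-way.
import hashlib

def F(half: bytes, round_key: bytes) -> bytes:
    return hashlib.sha256(half + round_key).digest()[:4]   # toy round function

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel(block: bytes, round_keys) -> bytes:
    L, R = block[:4], block[4:]        # 64-bit block split into 32-bit halves
    for k in round_keys:
        L, R = R, xor(L, F(R, k))      # one Feistel round
    return R + L                       # final swap, as in DES

key_schedule = [bytes([i]) * 4 for i in range(16)]   # 16 toy round keys
ct = feistel(b"8 bytes!", key_schedule)
assert feistel(ct, key_schedule[::-1]) == b"8 bytes!"   # same network decrypts
```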
To address DES's obsolescence, NIST launched a public competition in 1997 for a new symmetric block cipher standard, soliciting algorithms resistant to cryptanalytic advances with a 128-bit block size and key lengths of 128, 192, or 256 bits.[93] Fifteen candidates were submitted; five advanced to the finalist round in 1999 after initial evaluations of security, efficiency, and implementation characteristics.[94] On October 2, 2000, NIST selected the Rijndael algorithm, designed by Belgian cryptographers Joan Daemen and Vincent Rijmen, as the winner, citing its balance of security margins, software/hardware performance, and flexibility.[93] Rijndael, standardized as the Advanced Encryption Standard (AES) in FIPS 197 on November 26, 2001, uses a substitution-permutation network with 10, 12, or 14 rounds depending on key length, providing provable bounds against differential and linear cryptanalysis beyond DES's Feistel design.[58] AES marked a shift toward larger parameters and open competition, supplanting DES entirely by 2005 when NIST withdrew FIPS 46-3 approval, though 3DES lingered in legacy systems until phased out.[95] Unlike DES's fixed structure tuned for 1970s hardware, AES prioritizes side-channel resistance and scalability, with no practical breaks despite extensive analysis; its adoption in protocols like TLS and IPsec underscores the evolution from ad-hoc government selection to rigorous, global peer review.[58]
Stream Ciphers and Applications
Stream ciphers are symmetric-key algorithms that encrypt plaintext by combining it with a pseudorandom keystream generated from a secret key, typically via bitwise XOR operation on individual bits or bytes.[96][97] This process produces ciphertext of the same length as the plaintext, enabling encryption of data streams without fixed block sizes or padding requirements.[98] The keystream is produced by a pseudorandom number generator (PRNG) initialized with the key and often a nonce or counter to ensure uniqueness; security depends on the keystream's indistinguishability from true randomness and resistance to prediction without the key.[99] Unlike block ciphers, which process fixed-size blocks and may introduce latency in modes like CBC, stream ciphers support continuous, low-latency encryption ideal for real-time data flows such as voice or video streams.[100] However, they are vulnerable to keystream reuse: if the same keystream is XORed with two plaintexts, an attacker can recover the XOR of the plaintexts by XORing the ciphertexts, compromising confidentiality.[97] Early designs often employed linear feedback shift registers (LFSRs) for keystream generation, but these proved susceptible to linear cryptanalysis and correlation attacks unless combined with nonlinear components.[101] Notable historical stream ciphers include RC4, developed by Ron Rivest in 1987 with variable key lengths up to 2048 bits, which powered protocols like WEP for Wi-Fi (introduced 1997) and early TLS versions.[102] RC4's internal state permutations exhibited biases, enabling the Fluhrer-Mantin-Shamir (FMS) attack in 2001 that broke WEP with minimal traffic, and later statistical biases in TLS exposed in 2013, leading to its deprecation by 2015 in major browsers and RFC 7465.[103] Similarly, A5/1, a 64-bit stream cipher using three LFSRs for GSM mobile networks since the 1990s, was practically broken by 2009 through time-memory trade-off attacks requiring feasible precomputation and hours 
of runtime to decrypt conversations.[104] Modern stream ciphers prioritize software efficiency and cryptanalytic resistance. Salsa20, introduced by Daniel J. Bernstein in 2005, generates keystream via 20 rounds of addition-rotation-XOR (ARX) operations on a 512-bit state, offering high speed without hardware acceleration.[101] Its variant, ChaCha20, rearranges Salsa20's round function to improve diffusion per round, giving better resistance to differential and linear attacks while maintaining 256-bit keys and nonce-based uniqueness.[105] ChaCha20 has withstood extensive analysis without practical breaks and is recommended for scenarios where AES is inefficient.[105] Applications of stream ciphers span legacy and contemporary protocols, though insecure ones like RC4 and A5/1 have been phased out. In wireless, A5/1 persists in some 2G networks despite vulnerabilities, while Wi-Fi evolved to block-based WPA2/3.[106] Secure modern uses include TLS 1.3 via the ChaCha20-Poly1305 authenticated encryption with associated data (AEAD) mode, defined in RFC 7905 (2016), which provides confidentiality, integrity, and efficiency for web traffic, especially on mobile devices lacking AES-NI instructions.[107] ChaCha20 also supports SSH encryption, as in draft-ietf-sshm-chacha20-poly1305 (updated 2025), enhancing remote access security.[108] These deployments pair stream encryption with authenticators like Poly1305 to mitigate malleability, ensuring robustness against active attacks.[105]
Public-Key Systems
Diffie-Hellman and Key Agreement
Diffie–Hellman key exchange is a cryptographic protocol that allows two parties, without prior shared secrets, to jointly compute a shared secret key over a public communications channel, which can then serve as a symmetric encryption key.[109] The protocol was publicly introduced in 1976 by Whitfield Diffie and Martin E. Hellman in their seminal paper "New Directions in Cryptography," marking a foundational advancement in public-key cryptography by enabling secure key agreement without direct key transmission.[49] Although classified precursors existed earlier at institutions like GCHQ, the Diffie-Hellman publication spurred open research and practical implementations.[110] The protocol operates in a finite cyclic group, typically the multiplicative group of integers modulo a large prime p, where p is a prime number and g is a generator (primitive root) of the group. Both parties publicly agree on p and g. One party, say Alice, selects a private exponent a (a random integer between 2 and p − 2), computes the public value A = g^a mod p, and sends A to Bob. Bob similarly chooses private b, computes B = g^b mod p, and sends B to Alice. Alice then computes the shared secret s = B^a mod p, while Bob computes s = A^b mod p. An eavesdropper observing p, g, A, and B cannot efficiently compute s without solving for the discrete logarithm.[109][111] Security relies on the computational difficulty of the discrete logarithm problem: given p, g, and g^a mod p, finding a is infeasible for large p (typically 2048 bits or more in practice).[112] The protocol assumes the Diffie-Hellman problem—computing g^ab from g^a and g^b—is hard, which holds under standard cryptographic assumptions equivalent to discrete log hardness in prime-order groups.
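The exchange can be sketched end to end in a few lines; a 61-bit Mersenne prime stands in here for the 2048-bit-plus groups (or elliptic curve groups) used in deployed systems:

```python
# Toy Diffie-Hellman round trip over Z_p^* (illustration only: the modulus is
# small enough to attack, and no authentication is performed).
import secrets

p = 2**61 - 1                     # small Mersenne prime modulus
g = 3                             # public base

a = secrets.randbelow(p - 3) + 2  # Alice's private exponent
b = secrets.randbelow(p - 3) + 2  # Bob's private exponent
A = pow(g, a, p)                  # Alice sends A = g^a mod p
B = pow(g, b, p)                  # Bob sends B = g^b mod p

# Both arrive at g^(ab) mod p; an eavesdropper sees only p, g, A, and B.
assert pow(B, a, p) == pow(A, b, p)
```

The shared value would then be fed through a key-derivation function rather than used directly, and real protocols authenticate A and B to stop man-in-the-middle substitution.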
However, it does not inherently provide authentication, making it susceptible to man-in-the-middle attacks where an attacker impersonates one party to both; authentication is typically added via digital signatures or certificates in protocols using Diffie-Hellman.[49][109] Diffie-Hellman has been integral to numerous standards and protocols, including Transport Layer Security (TLS) for ephemeral key exchanges (DHE), Internet Key Exchange (IKE) in IPsec for VPNs, and Secure Shell (SSH).[113] Elliptic curve variants (ECDH) offer equivalent security with smaller key sizes, widely adopted for efficiency in mobile and constrained environments.[114] Despite advances in quantum computing threatening discrete log via Shor's algorithm, post-quantum alternatives are under development, with Diffie-Hellman still dominant in classical settings as of 2025.[115]
RSA: Factoring-Based Security
The RSA cryptosystem, developed by Ronald Rivest, Adi Shamir, and Leonard Adleman in 1977 and published in 1978, derives its security from the computational intractability of factoring the product of two large prime numbers into their constituent factors.[52] In the algorithm, a modulus n = pq is generated, where p and q are distinct primes of comparable size (typically hundreds of digits each), and an encryption exponent e is chosen coprime to φ(n) = (p − 1)(q − 1). The private exponent d satisfies ed ≡ 1 (mod φ(n)), enabling decryption via m = c^d mod n for ciphertext c. Factoring n reveals p and q, allowing computation of φ(n) and thus d, which compromises the system; conversely, possession of n and e does not efficiently yield the factors under the RSA assumption.[52][116] The integer factorization problem, particularly for semiprimes (products of two primes), lacks a proven polynomial-time classical algorithm, though subexponential methods exist. Early approaches like trial division and Pollard's rho are ineffective for large n, scaling poorly beyond small factors. The quadratic sieve (QS), introduced in the 1980s, improved efficiency for numbers up to about 100 digits by sieving for smooth relations in a factor base, but was superseded for larger instances by the general number field sieve (GNFS), developed in the 1990s, which optimizes by working in both rational and algebraic number fields to find congruences yielding a dependency for square-root computation modulo n. GNFS has asymptotic complexity L_n[1/3, c] = exp((c + o(1)) (ln n)^(1/3) (ln ln n)^(2/3)), where c = (64/9)^(1/3) ≈ 1.923, making it the fastest practical method for cryptographic sizes.[117][118] Empirical evidence of hardness is drawn from factorization challenges, such as the RSA Factoring Challenge numbers. The 768-bit (232-digit) RSA-768 was factored in December 2009 using GNFS, requiring approximately 2,000 core-years on 2009-era hardware and marking the largest general semiprime factored to date at that time.
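The key setup, decryption, and the factoring shortcut can be connected with classroom-sized numbers (textbook RSA: no padding, insecure parameters, illustration only):

```python
# Textbook RSA with tiny primes, showing why factoring n breaks the scheme.
from math import gcd

p, q = 61, 53
n = p * q                    # modulus n = 3233
phi = (p - 1) * (q - 1)      # φ(n) = 3120
e = 17                       # public exponent, coprime to φ(n)
assert gcd(e, phi) == 1
d = pow(e, -1, phi)          # private exponent d = 2753 (modular inverse, Python 3.8+)

m = 65                       # message, must be < n
c = pow(m, e, n)             # encrypt: c = m^e mod n
assert pow(c, d, n) == m     # decrypt: c^d mod n recovers m

# Anyone who factors n recovers φ(n), and with it the full private key d.
assert pow(e, -1, (61 - 1) * (53 - 1)) == d
```

Real deployments add randomized padding such as OAEP; raw ("textbook") RSA as above is malleable and insecure even at full key sizes.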
Subsequent records include RSA-240 (795 bits, 240 digits) factored in November 2019 after 900 core-years of computation on modern servers, and RSA-250 (829 bits, 250 digits) factored in May 2020 by a similar effort involving distributed sieving and linear algebra over thousands of cores. These feats underscore that while advances in hardware and algorithms erode smaller keys—e.g., 512-bit RSA moduli were routinely factored by the early 2000s—keys of 2048 bits or larger remain secure against classical attacks, with estimated GNFS times exceeding billions of years on current supercomputers.[119][120] Security recommendations reflect this: the U.S. National Institute of Standards and Technology (NIST) deems 2048-bit RSA moduli acceptable through 2030 for most applications, advising 3072 bits for extended protection against potential algorithmic improvements, while deprecating keys below 2048 bits post-2030 due to feasible factoring risks. Implementation flaws, such as poor prime generation leading to detectable patterns or small prime differences exploitable by Fermat's method, can undermine even large keys, emphasizing the need for rigorous randomness and balance in p and q. Quantum computing poses a future threat via Shor's algorithm, which factors in polynomial time on a fault-tolerant quantum machine, but current noisy intermediate-scale quantum devices require infeasible resources (e.g., millions of qubits for 2048-bit RSA), preserving classical security in the near term.[121]
Elliptic Curve Variants
Elliptic curve cryptography employs various mathematical representations of elliptic curves to optimize computations such as point addition and scalar multiplication, with the short Weierstrass form y² = x³ + ax + b serving as the foundational model for many standards.[122] This form facilitates general elliptic curve operations but can be less efficient for specific tasks like key exchange or signatures. Variants like Montgomery and twisted Edwards forms address these by exploiting symmetries for faster, constant-time implementations resistant to side-channel attacks.[123] The Standards for Efficient Cryptography Group (SECG) and the National Institute of Standards and Technology (NIST) have standardized numerous Weierstrass curves with predefined domain parameters, including prime field sizes and base points, to ensure interoperability. For instance, secp256r1 (also known as NIST P-256) uses a 256-bit prime field and provides approximately 128 bits of security, while secp256k1 employs a Koblitz curve over a 256-bit field with parameters a = 0 and b = 7, originally selected for efficient arithmetic in software implementations.[124][122] These curves underpin protocols like ECDSA for digital signatures and ECDH for key agreement, with secp256k1 notably adopted in Bitcoin's protocol since 2009 for its balance of security and performance.[125] Montgomery curves, defined by By² = x³ + Ax² + x, enable efficient ladder-based scalar multiplication without requiring full point recovery, making them suitable for Diffie-Hellman key exchange. Curve25519, a specific Montgomery curve over a 255-bit prime field, was designed by Daniel J. Bernstein in 2005 with parameters chosen for resistance to known attacks and high-speed implementation, as formalized in RFC 7748 (2016).
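Point addition and double-and-add scalar multiplication on a short Weierstrass curve can be sketched over a tiny field; the parameters below are chosen only so the arithmetic is legible, whereas real curves use roughly 256-bit prime fields:

```python
# Affine point arithmetic on a toy short Weierstrass curve y^2 = x^3 + ax + b
# over F_p, with None encoding the point at infinity.
p, a, b = 97, 2, 3

def on_curve(P):
    if P is None:
        return True
    x, y = P
    return (y * y - (x**3 + a * x + b)) % p == 0

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (3, 6)                            # on the curve: 6^2 = 3^3 + 2*3 + 3 (mod 97)
assert on_curve(G) and on_curve(mul(11, G))
assert mul(7, mul(11, G)) == mul(11, mul(7, G))   # commutativity ECDH relies on
```

The final assertion is the heart of ECDH: each party scalar-multiplies the other's public point by its own secret and lands on the same point. Production code additionally uses constant-time ladders and complete formulas to avoid the exceptional cases handled explicitly here.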
This variant achieves 128-bit security and is widely used in modern protocols like TLS 1.3 due to its audited, transparent parameter generation process.[126] Twisted Edwards curves, given by ax² + y² = 1 + dx²y², offer complete addition formulas that avoid exceptional cases, enhancing resistance to fault attacks and enabling unified, exception-free implementations. Ed25519, based on the Edwards form birationally equivalent to Curve25519, supports deterministic signatures via EdDSA and was standardized in RFC 8032 (2016), providing 128-bit security with smaller keys than comparable RSA systems. These non-Weierstrass variants prioritize software efficiency and verifiability over historical NIST selections. Security evaluations of NIST-recommended curves, such as those in FIPS 186-4, have faced scrutiny following 2013 disclosures of NSA influence in parameter selection, including the compromised Dual_EC_DRBG which relied on elliptic curves with non-transparent points.[127] While no exploitable weaknesses have been demonstrated in curves like P-256 themselves, the opaque generation process—contrasting with explicitly constructed alternatives like Curve25519—has led experts to recommend audited or random-parameter curves for new deployments to mitigate potential subversion risks.[128] NIST's SP 800-186 (2020) now permits additional curves like secp256k1 alongside its own, reflecting ongoing adaptation to these concerns.[122]
Integrity and Authentication Primitives
Hash Functions: Design and Iterations (SHA Family)
Cryptographic hash functions must satisfy core security properties: preimage resistance, rendering it computationally infeasible to find an input producing a specified hash output; second-preimage resistance, preventing discovery of a distinct input yielding the same hash as a given input; and collision resistance, where locating any two inputs with identical hashes is equally intractable, ideally requiring effort proportional to 2^(n/2) for an n-bit output under the birthday paradox.[129] These functions also demonstrate the avalanche effect, such that altering even one input bit propagates changes to roughly half the output bits, ensuring strong diffusion and resistance to differential analysis.[130] The SHA family, developed under NIST oversight primarily by the NSA, iterates on these principles through evolving constructions. SHA-1 and SHA-2 rely on the Merkle-Damgård paradigm, which pads the input message to a multiple of the block size (512 bits for SHA-1, 512 or 1024 bits for SHA-2 variants), appends the message length, and iteratively applies a compression function—combining bitwise operations (AND, OR, XOR, NOT), modular addition, rotations, and shifts—to an internal state initialized by a fixed vector, yielding the final digest after processing all blocks.[129] This structure inherits collision resistance from the underlying compression function's assumed one-wayness but proves vulnerable to extensions like length-extension attacks, where an attacker appends data to a known hash without knowing the original input.[130] SHA-0, an initial 160-bit prototype published in 1993, employed this construction but contained an undisclosed flaw enabling collisions via differential paths, prompting its withdrawal shortly after publication.[129] SHA-1, its 1995 revision specified in FIPS 180-1, added a one-bit rotation to the message expansion to mitigate the weakness, outputting 160 bits and achieving initial collision resistance estimated at 80
bits; however, advances in cryptanalysis, including Wang's 2005 differential attack, eroded confidence, with NIST deprecating SHA-1 in 2011 and prohibiting its use in digital signatures by 2013.[129] A practical collision was realized in February 2017 by Google and CWI researchers, who generated two distinct PDF files sharing the same SHA-1 hash after approximately 6,500 CPU-years of computation, confirming theoretical breaks had become feasible and underscoring SHA-1's unsuitability for security-critical applications.[131][132] Addressing potential systemic risks in Merkle-Damgård designs—such as shared vulnerabilities exploitable across MD5, SHA-1, and early SHA-2—the SHA-2 family, published in 2001 and refined in FIPS 180-2 (2002) and FIPS 180-4 (2015), expanded to variants SHA-224, SHA-256, SHA-384, and SHA-512 (with truncated siblings SHA-512/224 and SHA-512/256), doubling or quadrupling block and state sizes while altering constants and primitive operations to avert carry-over attacks from SHA-1.[129] These maintain 112- to 256-bit collision resistance matching their halved output sizes, with no practical breaks reported as of 2025, though NIST recommends diversification beyond SHA-2 for long-term resilience.[130] SHA-3, standardized in FIPS 202 (2015) following NIST's 2007-2012 competition won by Keccak, diverges fundamentally by adopting the sponge construction: an inner state of fixed width (1600 bits) absorbs padded input blocks via the Keccak-f permutation (iterated 24 rounds of substitutions, permutations, and XORs), applying multi-rate padding (pad10*1) to handle arbitrary lengths; once absorbed, the state is "squeezed" to extract output by repeatedly applying the permutation and yielding rate-sized (e.g., 1088-bit) chunks.[133] This yields SHA3-224, SHA3-256, SHA3-384, and SHA3-512 with equivalent security to SHA-2 counterparts, plus extendable-output functions SHAKE128 and SHAKE256 for variable-length digests, offering resistance to length-extension
and implementation diversity without relying on Merkle-Damgård's iterative chaining.[129]
| Variant | Output Size (bits) | Block Size (bits) | Initial Publication | Collision Resistance (bits) | Construction |
|---|---|---|---|---|---|
| SHA-1 | 160 | 512 | 1995 (FIPS 180-1) | <80 | Merkle-Damgård |
| SHA-256 | 256 | 512 | 2001 (FIPS 180-2) | 128 | Merkle-Damgård |
| SHA-512 | 512 | 1024 | 2001 (FIPS 180-2) | 256 | Merkle-Damgård |
| SHA3-256 | 256 | 1088 (sponge rate) | 2015 (FIPS 202) | 128 | Sponge |
| SHA3-512 | 512 | 576 (sponge rate) | 2015 (FIPS 202) | 256 | Sponge |
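The fixed digest sizes and the avalanche effect described above are easy to observe with Python's hashlib, which implements both families; a one-character input change should flip close to half of the SHA-256 output bits:

```python
# SHA-2 vs SHA-3 from the standard library: digest sizes and avalanche behavior.
import hashlib

d1 = hashlib.sha256(b"cryptography").digest()
d2 = hashlib.sha256(b"cryptographie").digest()   # slightly different input

flipped = sum(bin(x ^ y).count("1") for x, y in zip(d1, d2))
print(f"SHA-256 avalanche: {flipped} of 256 bits flipped")

for name in ("sha1", "sha256", "sha512", "sha3_256", "sha3_512"):
    h = hashlib.new(name, b"abc")
    print(f"{name:>8}: {h.digest_size * 8:4d}-bit digest {h.hexdigest()[:16]}...")
```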
Message Authentication Codes and AEAD
A message authentication code (MAC) is a symmetric-key cryptographic mechanism that generates a fixed-size tag from a message and a shared secret key, allowing a verifier to confirm the message's origin and detect alterations.[135] The security of a MAC relies on its resistance to existential forgery under adaptive chosen-message attacks, where an adversary cannot produce a valid tag for a new message with non-negligible probability after querying the MAC oracle polynomially many times.[136] MACs assume the secrecy of the key and do not provide confidentiality, focusing solely on integrity and authentication.[135] Common MAC constructions derive from hash functions or block ciphers. Hash-based MACs, such as HMAC, nest a cryptographic hash function H with the key K as follows: HMAC(K, m) = H((K* ⊕ opad) ∥ H((K* ⊕ ipad) ∥ m)), where K* is K padded (or first hashed, if overlong) to the block size of H, and opad and ipad are fixed padding constants.[136] Developed by Bellare, Canetti, and Krawczyk in 1996, HMAC inherits security from the compression function of H under minimal assumptions, proving secure if H behaves as a pseudorandom function family.[136] It was standardized in FIPS 198-1 in 2008 and RFC 2104 in 1997, supporting variable-length messages and widely deployed due to its efficiency with hashes like SHA-256.[135][137] Block-cipher-based MACs include CBC-MAC, which chains blocks via a block cipher E starting from a zero initialization vector: tag = E_K(m_n ⊕ E_K(...
⊕ E_K(m_1) ...)), secure for fixed-length messages of length equal to a multiple of the block size if E is a pseudorandom permutation.[138] However, basic CBC-MAC is insecure for variable-length messages, as length-extension attacks allow forgery by appending blocks to a known tag.[139] To address this, variants like CMAC (or XCBC) incorporate key-dependent padding or distinct subkeys, providing provable security for arbitrary lengths under the pseudorandom cipher assumption, as specified in NIST SP 800-38B (2005).[138] Authenticated encryption with associated data (AEAD) extends MACs by combining confidentiality and authentication in a single primitive, encrypting a plaintext P to ciphertext C while producing a tag authenticating both P and optional unencrypted associated data A, all under a secret key K and nonce N.[140] AEAD schemes must satisfy privacy (indistinguishability of ciphertexts) and authenticity (rejection of invalid C, A, tag pairs) against chosen-plaintext and chosen-ciphertext adversaries, with associated data protected only for integrity, not secrecy.[141] This design avoids pitfalls of separate encrypt-then-MAC compositions, such as vulnerability to padding oracles if not carefully implemented, by integrating both via a unified nonce-based construction.[141] Prominent AEAD modes include Galois/Counter Mode (GCM), proposed by McGrew and Viega in 2004, which uses counter mode for encryption and a polynomial-based universal hash (GHASH) over GF(2^128) for authentication: the tag combines GHASH(A ∥ C) with a counter-derived value, masked by the key-derived hash subkey.[141] Standardized in NIST SP 800-38D (2007), GCM achieves 128-bit security for up to 2^32 messages per key if nonces are unique and the block cipher (typically AES) resists distinguishing attacks, though nonce reuse catastrophically leaks plaintext and enables forgeries.[140] [141] Other modes like CCM (Counter with CBC-MAC) parallelize encryption and MAC computation for efficiency in 
constrained environments.[139] AEAD is integral to protocols like TLS 1.3, prioritizing modes with parallelizable authentication to minimize latency.[140]
Protocols and Advanced Constructions
Modes of Operation for Block Ciphers
Modes of operation for block ciphers define algorithms that apply a fixed-block-size symmetric cipher, such as AES, to variable-length data while achieving security goals like confidentiality, integrity, and resistance to certain attacks. These modes address the limitation of block ciphers processing only discrete blocks (e.g., 128 bits for AES) by specifying how plaintext blocks interact with ciphertext, initialization vectors (IVs), or nonces to produce secure output. Without a mode, direct application risks insecure patterns or reuse vulnerabilities; modes ensure semantic security under chosen-plaintext attacks when properly implemented.[142][143] The foundational confidentiality-only modes were standardized by the National Bureau of Standards (NBS, predecessor to NIST) in Federal Information Processing Standard (FIPS) PUB 81 in 1980 for use with DES, including Electronic Codebook (ECB), Cipher Block Chaining (CBC), Cipher Feedback (CFB), and Output Feedback (OFB). These were later updated in NIST Special Publication (SP) 800-38A in 2001 to include Counter (CTR) mode and apply to approved ciphers like AES. ECB encrypts each block independently, offering parallelism but leaking statistical patterns in plaintext (e.g., identical blocks yield identical ciphertext), making it unsuitable for most uses beyond encrypting single-block values such as keys. CBC chains each plaintext block with the prior ciphertext via XOR before encryption, requiring a random IV for the first block to prevent deterministic attacks, but it is sequential and vulnerable to padding oracle exploits if not authenticated.[144][143] CFB and OFB transform the block cipher into a self-synchronizing or asynchronous stream cipher, respectively: CFB XORs plaintext with cipher output from the previous (shifted) ciphertext feedback, tolerating bit errors but propagating them; OFB generates a keystream by repeatedly encrypting an IV, avoiding error propagation but, like any unauthenticated stream mode, leaving ciphertext bit-flips undetected.
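ECB's pattern leakage, and how a counter-derived keystream hides it, can be sketched with a truncated keyed hash standing in for a real block cipher (a toy construction, not AES):

```python
# ECB versus a CTR-style keystream with a toy keyed "block cipher".
import hashlib

KEY = b"0123456789abcdef"

def toy_block_encrypt(block: bytes) -> bytes:
    # deterministic keyed function over 16-byte blocks; AES would sit here
    return hashlib.sha256(KEY + block).digest()[:16]

pt = b"ATTACK AT DAWN!!" * 2                  # two identical 16-byte blocks
blocks = [pt[i:i + 16] for i in range(0, len(pt), 16)]

ecb = [toy_block_encrypt(blk) for blk in blocks]
assert ecb[0] == ecb[1]                       # ECB leaks the repetition

ctr = [bytes(x ^ y for x, y in zip(blk, toy_block_encrypt(i.to_bytes(16, "big"))))
       for i, blk in enumerate(blocks)]       # keystream = E_K(counter), then XOR
assert ctr[0] != ctr[1]                       # identical blocks now encrypt differently
```

The same XOR structure is why counter reuse is fatal: encrypting two messages under the same counter values hands an attacker the XOR of the plaintexts.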
CTR mode, added in SP 800-38A in 2001, encrypts a counter (incremented per block from a nonce) to produce a keystream XORed with plaintext, enabling high parallelism and eliminating padding; however, nonce reuse under the same key catastrophically leaks information. These modes mandate unique IVs or nonces per key to avoid two-time-pad weaknesses, where reuse enables XOR-based plaintext recovery.[144][142] For combined confidentiality and authentication, NIST introduced modes like Galois/Counter Mode (GCM) in SP 800-38D (2007), which uses CTR for encryption and polynomial-based Galois-field multiplication for authentication tags, offering efficiency in hardware and resistance to forgery for up to 2^32 invocations under a key. Counter with CBC-MAC (CCM) in SP 800-38C (2004) pairs CTR encryption with a CBC-based MAC, suitable for constrained environments like wireless protocols. These authenticated encryption with associated data (AEAD) modes prevent the malleability attacks inherent in confidentiality-only modes, where ciphertext modification can alter predictable plaintext without detection. XEX-based Tweaked Codebook with Ciphertext Stealing (XTS) in SP 800-38E (2010) targets disk-sector encryption, using a tweakable block cipher to avoid IV management while handling partial blocks securely.
| Mode | Primary Security Goal | Key Properties | Standardization | Common Applications |
|---|---|---|---|---|
| ECB | Confidentiality | Parallelizable, no IV; pattern leakage | SP 800-38A (2001) | Key wrapping (avoid for data) |
| CBC | Confidentiality | Chaining with IV; sequential; padding needed | FIPS 81 (1980); SP 800-38A | Legacy file encryption |
| CFB/OFB | Confidentiality (stream-like) | Error handling varies; no full-block alignment | FIPS 81; SP 800-38A | Streaming data |
| CTR | Confidentiality | Parallel, nonce-based; no error propagation | SP 800-38A | High-throughput protocols |
| GCM | Authenticated Encryption | CTR + GHASH tag; nonce misuse vulnerability | SP 800-38D (2007) | TLS, IPsec |
| CCM | Authenticated Encryption | CTR + CBC-MAC; fixed-length tags | SP 800-38C (2004) | 802.11 Wi-Fi |
| XTS | Confidentiality (tweakable) | Sector-specific; no IV | SP 800-38E (2010) | Disk encryption (e.g., BitLocker) |
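The two-time-pad failure from nonce reuse in CTR mode can be shown concretely. The keyed hash below is an assumed stand-in for the AES-CTR keystream generator, and the messages are invented examples:

```python
# Sketch of why CTR nonce reuse is catastrophic: reusing (key, nonce)
# makes the keystream cancel out of the XOR of two ciphertexts.
import hashlib

def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: hash(key || nonce || counter) per block.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 16, b"n" * 12
p1 = b"transfer $100 to alice"
p2 = b"transfer $999 to mallo"   # same length, same (key, nonce) pair

c1 = xor(p1, ctr_keystream(key, nonce, len(p1)))
c2 = xor(p2, ctr_keystream(key, nonce, len(p2)))

# The keystream cancels: C1 xor C2 == P1 xor P2, so knowing or guessing
# one plaintext reveals the other -- the "two-time pad" weakness.
recovered = xor(xor(c1, c2), p1)
print(recovered)  # b'transfer $999 to mallo'
```

No key material is recovered at any point; the attacker exploits only the reused keystream, which is why unique nonces per key are mandated.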
Zero-Knowledge Proofs and Applications
Zero-knowledge proofs are cryptographic protocols enabling a prover to demonstrate possession of certain information to a verifier, confirming the validity of a statement without disclosing the underlying data or any extraneous details.[146] First formalized in 1985 by Shafi Goldwasser, Silvio Micali, and Charles Rackoff in their paper "The Knowledge Complexity of Interactive Proof Systems," these proofs extend interactive proof systems by incorporating a zero-knowledge property, allowing simulation of transcripts that reveal no additional knowledge beyond the statement's truth.[147] Such proofs must satisfy three core properties: completeness, where an honest prover with valid information convinces an honest verifier with overwhelming probability; soundness, ensuring that a cheating prover cannot convince the verifier of a false statement except with negligible probability; and zero-knowledge, meaning the verifier's view is computationally indistinguishable from an interaction simulated without access to the secret, thus preserving privacy.[148] [149] Early constructions were interactive and computationally intensive, relying on assumptions like the hardness of quadratic residuosity. Non-interactive variants emerged to address practicality, including zk-SNARKs (zero-knowledge succinct non-interactive arguments of knowledge), which produce compact proofs verifiable in logarithmic time relative to the computation size, though they require a trusted setup phase vulnerable to compromise if the setup parameters are generated adversarially.[150] zk-SNARKs leverage pairing-based elliptic curve cryptography and polynomial commitments, enabling efficient verification for complex computations. 
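The three defining properties above can be illustrated with a classic sigma protocol: a Schnorr-style interactive proof of knowledge of a discrete logarithm. The parameters below are tiny toy values chosen for readability, not secure ones:

```python
# Minimal interactive sigma protocol (Schnorr-style) proving knowledge
# of x with y = g^x mod p, without revealing x. Toy parameters only.
import secrets

p = 1019          # safe prime: p = 2*509 + 1
g = 2             # element of order p - 1 modulo p
order = p - 1     # order of g (1018)

x = 777                       # prover's secret
y = pow(g, x, p)              # public value y = g^x mod p

def prove(challenge, secret, r):
    # Response s = r + c*x mod order; the commitment t = g^r was sent first.
    return (r + challenge * secret) % order

def verify(t, challenge, s):
    # Completeness/soundness check: accept iff g^s == t * y^c (mod p).
    return pow(g, s, p) == (t * pow(y, challenge, p)) % p

r = secrets.randbelow(order)      # prover's fresh randomness
t = pow(g, r, p)                  # commitment
c = secrets.randbelow(order)      # verifier's random challenge
s = prove(c, x, r)

print(verify(t, c, s))                 # True: honest prover convinces
print(verify(t, 5, prove(5, 123, r)))  # False: a prover with the wrong secret fails
```

The response s reveals nothing on its own because r masks x; a simulator can produce accepting (t, c, s) transcripts without knowing x by picking s and c first, which is the honest-verifier zero-knowledge property.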
In contrast, zk-STARKs (zero-knowledge scalable transparent arguments of knowledge) eliminate trusted setups through transparent hash-based commitments and FRI (Fast Reed-Solomon Interactive Oracle Proofs of Proximity), yielding larger proofs that resist quantum attacks via information-theoretic soundness.[150] [151] Applications span privacy-preserving transactions and scalable verification in distributed systems. In blockchain, zk-SNARKs underpin Zcash's shielded transactions, launched in October 2016, allowing users to prove valid spends of private balances without revealing amounts or addresses; the 2018 Sapling upgrade adopted the Groth16 proving system for greater efficiency. For scalability, zk-rollups aggregate thousands of off-chain transactions into succinct proofs submitted on-chain, as implemented in Ethereum layer-2 solutions like StarkNet using zk-STARKs, reducing gas costs by over 90% while maintaining settlement security on the base layer.[151] [152] Beyond blockchain, zero-knowledge proofs facilitate anonymous authentication in identity systems and verifiable outsourcing of computation, where clients confirm correct execution of programs without learning inputs or outputs. These protocols decouple proof from disclosure in adversarial settings, though practical deployments must mitigate risks like proof malleability or verifier collusion.[146]
Homomorphic and Functional Encryption
Homomorphic encryption refers to cryptographic schemes enabling computations on encrypted data such that the result, when decrypted, matches the outcome of the same operations performed on the underlying plaintexts.[153] This property preserves data privacy during processing, as the data owner retains decryption control while outsourcing computation to untrusted parties.[154] Schemes are classified by computational capabilities. Partially homomorphic encryption (PHE) supports unlimited instances of a single operation, such as addition in the Paillier scheme or multiplication in RSA.[155] Somewhat homomorphic encryption (SHE) permits both addition and multiplication but restricts circuit depth due to accumulating noise that eventually prevents correct decryption.[156] Fully homomorphic encryption (FHE), introduced by Craig Gentry in 2009 using ideal lattices, overcomes this by incorporating "bootstrapping" to refresh ciphertexts and enable arbitrary-depth circuits without noise overflow.[153] Gentry's construction, detailed in his STOC 2009 paper, relies on the hardness of lattice problems like shortest-vector approximation.[157] Functional encryption generalizes homomorphic approaches by allowing decryption keys that reveal only specified functions of the encrypted data, rather than the full plaintext.[158] Formally defined by Boneh, Sahai, and Waters in 2011, it supports predicates or computations: a key associated with a function f, applied to an encryption of a message m, yields f(m) and leaks nothing else.[159] Homomorphic encryption emerges as a special case where f is the identity function, but functional encryption enables finer-grained control, such as inner-product evaluations or attribute-based access.[160] Constructions often build on bilinear maps or lattices, inheriting similar security assumptions.[159] Applications include privacy-preserving machine learning, where models train or infer on encrypted datasets, and secure multiparty computation in cloud environments.[161] For instance, FHE facilitates encrypted genomic analysis or financial modeling without exposing sensitive inputs.[162] However, practical deployment faces challenges: FHE operations incur 10^4 to 10^6 times the cost of plaintext equivalents due to large ciphertexts (kilobytes to megabytes) and bootstrapping overhead, which can require seconds per operation on standard hardware.[163] Noise growth in SHE and bootstrapping in FHE demand leveled implementations or periodic recryption, limiting scalability for deep neural networks.[164] Recent advances mitigate these issues. By 2023, libraries like TFHE and HElib had optimized bootstrapping, achieving sub-second times for certain operations via techniques like programmable bootstrapping.[165] In 2025, schemes integrated FHE with support vector machines for privacy-preserving classification, reducing latency through lattice-based accelerations.[166] Functional encryption variants have enabled efficient predicate matching, though full realizations remain computationally intensive, often relying on indistinguishability obfuscation assumptions unproven in practice.[167] Security proofs hold under post-quantum assumptions like learning with errors, but real-world efficiency gains depend on hardware acceleration such as GPUs.[168]
Cryptanalysis Methods
Brute-Force and Known-Plaintext Attacks
A brute-force attack, also known as exhaustive key search, systematically enumerates all possible keys in a cipher's key space to identify the one that correctly decrypts a given ciphertext into intelligible plaintext.[169] The computational effort required scales exponentially with key length, typically demanding on the order of 2^k operations for a k-bit key, assuming uniform key distribution and no exploitable weaknesses. This method establishes a fundamental security baseline for symmetric ciphers, where resistance is quantified by the infeasibility of completing the search within practical time and resource constraints, such as those available to state actors or large-scale computing clusters. Historical demonstrations underscore the practical limits of brute-force viability. The Data Encryption Standard (DES), with its 56-bit key space of 2^56 possibilities, was rendered insecure by specialized hardware like the Electronic Frontier Foundation's (EFF) DES Cracker, completed in 1998 and capable of testing 90 billion keys per second.[170] This machine decrypted a DES Challenge message in 56 hours, confirming that brute-force attacks on short keys are achievable with dedicated resources costing under $250,000 at the time.[171] In contrast, modern standards like AES-128, with 2^128 keys, defy brute force: even exhaustive parallelization across global supercomputing power would require time exceeding 156 times the universe's age (approximately 14 billion years) to exhaust half the space on average.[172] Known-plaintext attacks exploit access to one or more pairs of corresponding plaintext and ciphertext, enabling deduction of the key or recovery of additional plaintext without full key enumeration.[173] This scenario arises naturally when plaintext patterns are predictable, such as standard protocol headers or repeated salutations in diplomatic traffic.
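Both attack styles can be sketched on a Caesar shift, whose 26-key space makes exhaustion trivial and whose structure lets a single known letter pair recover the key. The message and crib below are invented examples:

```python
# Exhaustive key search over the 26-key Caesar space, then instant key
# recovery from one known plaintext/ciphertext letter pair.
import string

A = string.ascii_uppercase

def caesar_encrypt(text, shift):
    return "".join(A[(A.index(ch) + shift) % 26] for ch in text)

ciphertext = caesar_encrypt("MEETATDAWN", 19)

# Brute force: decrypt under every key, keep candidates containing a crib.
candidates = [(k, caesar_encrypt(ciphertext, (26 - k) % 26)) for k in range(26)]
hits = [k for k, pt in candidates if "DAWN" in pt]
print(hits)  # [19]

# Known plaintext: one aligned letter pair reveals the shift directly,
# with no search at all.
shift = (A.index(ciphertext[0]) - A.index("M")) % 26
print(shift)  # 19
```

The same asymmetry scales up: brute force costs grow with the key space, while known-plaintext shortcuts exploit structure, which is exactly what modern substitution-permutation designs are built to eliminate.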
Classical substitution ciphers, including the Caesar shift, succumb rapidly; a single known letter pair reveals the fixed shift offset.[174] Polyalphabetic ciphers like Vigenère can be broken by aligning known cribs (plaintext snippets) against ciphertext to isolate the repeating key stream via frequency analysis or Kasiski examination.[175] In linear algebra-based systems such as the Hill cipher, known-plaintext pairs suffice to solve for the matrix key through Gaussian elimination, assuming sufficient crib length matching the block size. Robust modern ciphers mitigate these attacks by design; for instance, AES resists key recovery from known plaintext beyond brute-force equivalents due to its substitution-permutation structure, which diffuses information across rounds without linear shortcuts.[176] Nonetheless, known-plaintext vulnerabilities persist in implementations with weak padding or malleable modes, emphasizing the need for authenticated encryption to bind plaintext integrity.[177] These attacks highlight a core dependency in cipher design: security degrades when plaintext predictability correlates with key-material exposure, underscoring the need for empirical testing against realistic adversary knowledge.
Advanced Techniques: Differential, Linear, and Side-Channel
Differential cryptanalysis, introduced by Eli Biham and Adi Shamir in 1990, exploits probabilistic relationships between differences in pairs of plaintexts and the corresponding differences in ciphertexts to recover key bits in block ciphers.[178] The method tracks how a chosen input difference propagates through the cipher's rounds, particularly through substitution-permutation networks, using characteristics that predict output differences with high probability.[179] For the Data Encryption Standard (DES), it breaks reduced-round versions up to 8 rounds in minutes on a personal computer and up to 15 rounds with extensive computation, though full 16-round DES resists practical attacks due to S-box designs that weaken differential probabilities.[178] This technique influenced modern cipher design, such as AES, where developers bound maximum differential probabilities below 2^{-6} per round to ensure security margins.[180] Linear cryptanalysis, developed by Mitsuru Matsui in 1993, is a known-plaintext attack that exploits linear approximations of the cipher holding with probability biased away from 1/2.[181] It constructs high-bias linear equations relating plaintext bits, ciphertext bits, and key bits across rounds, using the piling-up lemma to combine approximations: for n independent approximations with biases ε_i, the combined bias is 2^{n-1} ∏ ε_i.[180] Applied to DES, Matsui's algorithm recovers the full 16-round key using about 2^{43} known plaintext-ciphertext pairs, far fewer than brute force's 2^{56}, by iteratively guessing subkey bits and verifying via partial decryption.[181] Unlike differential methods, which focus on differences, linear analysis operates on parities and has prompted countermeasures like nonlinear components with low correlation biases in ciphers such as Serpent and Twofish.[180] Side-channel attacks target physical implementations rather than algorithmic weaknesses, extracting secrets from observable leaks like timing
variations, power consumption, or electromagnetic emissions during computation.[182] Paul Kocher demonstrated timing attacks in 1996, showing how variable execution times in modular exponentiation—such as in RSA—reveal bits of private keys by measuring response times across multiple operations.[183] Power analysis extends this: simple power analysis (SPA) visually inspects traces for operation patterns, while differential power analysis (DPA), introduced by Paul Kocher, Joshua Jaffe, and Benjamin Jun in 1999, statistically correlates hypothetical intermediate values with aggregated trace data to isolate key-dependent signals, succeeding with as few as hundreds of traces on devices like smart cards.[184] These attacks, effective against hardware and software realizations of algorithms like AES, underscore the need for constant-time implementations, masking, and noise injection, as algorithmic security alone proves insufficient against implementation-specific vulnerabilities.[185]
Quantum-Specific Threats: Shor's and Grover's Algorithms
Shor's algorithm, introduced by Peter Shor at the 35th Annual Symposium on Foundations of Computer Science in November 1994, enables a quantum computer to factor large composite integers and solve discrete logarithm problems in polynomial time.[186] The algorithm leverages quantum superposition and the quantum Fourier transform to identify the period of a function related to the number to be factored, achieving an exponential speedup over the best-known classical algorithms, which require sub-exponential time.[187] This capability directly threatens public-key cryptographic systems reliant on the computational difficulty of these problems, including RSA encryption—where security depends on the intractability of factoring the product of two large primes—and Diffie-Hellman key exchange, as well as elliptic curve variants like ECDH and ECDSA.[188] Implementing Shor's algorithm to break 2048-bit RSA keys would require a fault-tolerant quantum computer with millions of physical qubits, far beyond current experimental systems limited to hundreds of noisy qubits as of 2024.[189] In contrast to Shor's targeted attack on specific mathematical primitives, Grover's algorithm, proposed by Lov Grover in 1996, offers a generic quadratic speedup for unstructured search problems, reducing the work required to search a space of size N from O(N) evaluations classically to O(√N) on a quantum computer.[190] Applied to symmetric cryptography, it accelerates exhaustive key searches, effectively halving the security margin of block ciphers like AES; for instance, AES-128 provides only about 64 bits of security against Grover's attack, while AES-256 retains 128 bits.[191] This threat is less severe than Shor's, as it does not fundamentally break symmetric primitives but necessitates larger key sizes—doubling them restores classical-equivalent security—and affects hash functions by speeding up preimage and collision searches.[192] Practical deployment requires repeated oracle queries with quantum error correction, imposing significant
resource demands, though parallelization and circuit optimizations remain subjects of ongoing research.[193] Both algorithms underscore the need for quantum-resistant alternatives, with Shor's posing an existential risk to asymmetric schemes and Grover's prompting incremental hardening of symmetric ones.
Applications
Secure Network Communications (TLS/IPsec)
Transport Layer Security (TLS) provides cryptographic protection for communications between applications, operating at the transport layer to ensure confidentiality, integrity, and authentication of data exchanged over networks like the Internet. Originally developed as an upgrade to Netscape's Secure Sockets Layer (SSL) version 3.0, TLS version 1.0 was standardized by the IETF in RFC 2246 in January 1999.[194] Subsequent versions addressed vulnerabilities: TLS 1.1 in RFC 4346 (2006) improved resistance to cipher block chaining (CBC) attacks, while TLS 1.2 in RFC 5246 (2008) introduced support for stronger algorithms like AES and SHA-256.[195] The current standard, TLS 1.3 defined in RFC 8446 (August 2018), streamlines the handshake to a single round-trip using ephemeral Diffie-Hellman key exchange for forward secrecy, mandates authenticated encryption with associated data (AEAD) modes such as AES-256-GCM or ChaCha20-Poly1305, and eliminates insecure options like static RSA key transport and CBC ciphers.[196] [197] Key derivation employs HKDF based on HMAC-SHA-256, with certificates typically using RSA or ECDSA for server authentication.[196] IPsec secures Internet Protocol (IP) communications at the network layer, authenticating and encrypting IP packets to protect against eavesdropping, tampering, and replay attacks, making it suitable for site-to-site VPNs and remote access without application modifications. 
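The TLS 1.3 requirements described above can be expressed directly in application code. The sketch below uses Python's standard-library ssl module to pin the protocol to TLS 1.3; the connection step is commented out and the hostname is a placeholder, since it needs network access:

```python
# Sketch: a client-side TLS context that requires TLS 1.3. A context
# built this way will only negotiate RFC 8446 AEAD suites such as
# AES-256-GCM or ChaCha20-Poly1305.
import ssl

ctx = ssl.create_default_context()            # secure defaults + cert verification
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: server certs are validated
print(ctx.check_hostname)                     # True: hostname must match the cert

# To use it (network access required; "example.com" is a placeholder):
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())   # "TLSv1.3"
#         print(tls.cipher())    # negotiated AEAD suite
```

Pinning both minimum_version and maximum_version is one way an application can enforce the downgrade resistance the section attributes to TLS 1.3 deployments.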
Defined in the IETF's security architecture (RFC 4301, December 2005, updating earlier RFC 2401), IPsec comprises protocols including Authentication Header (AH) for integrity and origin authentication without confidentiality, and Encapsulating Security Payload (ESP) for confidentiality via symmetric encryption (e.g., AES in GCM mode), integrity (via HMAC), and optional authentication.[198] [199] [200] Internet Key Exchange (IKE) version 2, specified in RFC 7296 (October 2014), manages security associations and key negotiation using Diffie-Hellman for shared secrets, supporting pre-shared keys or public-key authentication. IPsec operates in transport mode for host-to-host protection or tunnel mode for gateway encapsulation, with cryptographic algorithms selected via suites like those in NIST SP 800-77 (revised 2020), emphasizing AES and SHA-2 for compliance.[201] [202] While TLS excels in application-layer security for protocols like HTTPS, requiring explicit integration but offering fine-grained control and easier deployment for web traffic, IPsec provides transparent, network-wide protection at the IP level, ideal for securing entire subnets but with higher configuration complexity and potential performance overhead from per-packet processing.[203] Both leverage symmetric cryptography for bulk data (e.g., AES) post-handshake, asymmetric methods for initial authentication, and hash-based integrity checks, but TLS 1.3 prioritizes speed and privacy via 0-RTT options (with risks mitigated by anti-replay), whereas IPsec's IKE enables mutual authentication and perfect forward secrecy through ephemeral keys.[196] Deployment data as of 2025 shows TLS 1.3 adopted in over 90% of web connections for its resistance to downgrade attacks, while IPsec remains prevalent in enterprise VPNs despite quantum threats prompting transitions to post-quantum variants.[204]
Data Protection and Storage
Cryptography protects data at rest—stored on disks, databases, or other media—by encrypting it to prevent unauthorized access in cases of theft, loss, or breach. Unlike data in transit, which requires resistance to interception, storage encryption must support efficient random access and sequential reads/writes without leaking patterns like file sizes or locations, often using tweakable modes to bind encryption to sector positions. The Advanced Encryption Standard (AES), specified in FIPS PUB 197 and approved by NIST for confidentiality of sensitive data, serves as the primary symmetric algorithm for this purpose due to its security margin against known attacks and hardware acceleration via AES-NI instructions in modern CPUs.[205] Full disk encryption (FDE) systems encrypt entire volumes transparently, integrating with operating systems to require authentication before decryption. These typically employ AES in XTS mode (XEX-based Tweaked Codebook with Ciphertext Stealing), standardized in IEEE 1619-2007 and recommended by NIST SP 800-38E for disk sectors, as it avoids vulnerabilities in chained modes like CBC, such as malleability or padding oracle attacks, while enabling parallel processing and direct sector access. XTS uses two keys: one for data encryption and a second that encrypts a tweak derived from the sector index, ensuring unique ciphertexts per position and mitigating copy-paste or replay threats across sectors. Performance overhead remains low—typically under 5% for I/O-bound operations on hardware with AES acceleration—due to kernel-level implementation and negligible CPU impact for sequential workloads.[206][207] Microsoft's BitLocker, introduced in Windows Vista (2007) and enhanced in subsequent versions, implements FDE using AES-128 or AES-256 in XTS mode for fixed drives, with keys protected by Trusted Platform Module (TPM) hardware or passphrase derivation via PBKDF2.
It supports multi-factor recovery via 48-digit keys stored in Active Directory or Microsoft accounts, addressing boot-time integrity via Secure Boot integration. Apple's FileVault, available since macOS 10.3 (2003) and rebuilt on Core Storage in 10.7 (2011), employs AES-XTS-128 with 256-bit keys, leveraging the Mac's T2 or Apple Silicon secure enclave for key storage and automatic encryption of new data. In Linux, dm-crypt—a kernel device-mapper target since version 2.6—pairs with the LUKS (Linux Unified Key Setup) format for FDE, supporting AES-XTS via the crypto API and keyslots for multiple passphrases, often used in distributions like Ubuntu for /home or root partitions.[208][209][210] Beyond FDE, granular approaches include file-level or field-level encryption in databases and applications. For instance, transparent data encryption (TDE) in systems like SQL Server encrypts database files at rest using AES, with master keys managed separately to avoid re-encrypting live data. Key management remains a core challenge: keys must be generated from approved random bit generators, rotated periodically, and stored in hardware security modules (HSMs) or key management services to prevent exposure, as compromised keys nullify encryption regardless of algorithm strength. NIST SP 800-57 guidance emphasizes separating key-encrypting keys (KEKs) from data-encrypting keys (DEKs) and auditing access, while avoiding hard-coded or weak derivation methods that could enable brute-force recovery.[211][212] Self-encrypting drives (SEDs) hardware-accelerate this via TCG Opal standards, performing AES encryption in controller firmware without host CPU overhead, though they require software for key provisioning to avoid backdoor risks from default manufacturer keys. Empirical benchmarks show SEDs reduce latency by offloading crypto, achieving near-native throughput for encrypted SSDs, but firmware vulnerabilities have prompted calls for verified implementations.
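The KEK/DEK separation described above can be sketched briefly. PBKDF2 below is the real standard-library primitive; the wrap step is a toy keystream stand-in for a real key-wrap mode such as AES Key Wrap, and the passphrase and iteration count are illustrative assumptions:

```python
# Sketch of KEK/DEK separation: a passphrase-derived key-encrypting key
# (KEK) protects a random data-encrypting key (DEK). Only PBKDF2 here
# is a real primitive; toy_wrap stands in for a real key-wrap mode.
import hashlib
import os

def derive_kek(passphrase: bytes, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; the iteration count is an illustrative choice.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, dklen=32)

def toy_wrap(kek: bytes, dek: bytes) -> bytes:
    # XOR with a KEK-derived keystream; involutive, so it also unwraps.
    stream = hashlib.sha256(kek + b"wrap").digest()
    return bytes(a ^ b for a, b in zip(dek, stream))

salt = os.urandom(16)
dek = os.urandom(32)                        # random per-volume data key
kek = derive_kek(b"correct horse battery", salt)

wrapped = toy_wrap(kek, dek)                # storable alongside the volume
unwrapped = toy_wrap(derive_kek(b"correct horse battery", salt), wrapped)
print(unwrapped == dek)   # True: passphrase plus salt recover the DEK
```

Because only the wrapped DEK depends on the passphrase, changing the passphrase means re-wrapping 32 bytes rather than re-encrypting the whole volume, which is the practical payoff of the KEK/DEK split.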
Overall, while storage encryption introduces minimal measurable performance degradation—often 1-3% in real-world tests on NVMe drives—it demands robust key hygiene to counter threats like physical extraction or insider access, as evidenced by breaches where unencrypted backups exposed terabytes of data.[213][214]
Blockchain, Cryptocurrencies, and Economic Incentives
Blockchain technology leverages cryptographic primitives such as hash functions and digital signatures to construct a distributed, append-only ledger resistant to tampering. Each block incorporates a hash of the preceding block, computed via algorithms like SHA-256, ensuring that any modification propagates discrepancies across the chain, thereby enforcing chronological integrity without central authority. Transactions within blocks are authenticated using asymmetric cryptography, typically elliptic curve digital signature algorithm (ECDSA) with the secp256k1 curve, where senders sign data with private keys to prove ownership while public keys enable verification by network participants. Merkle trees aggregate transaction hashes into a root hash per block, facilitating efficient proof-of-inclusion without downloading entire datasets.[215][216][217] Cryptocurrencies exemplify these mechanisms in practice, with Bitcoin—detailed in a whitepaper published October 31, 2008, by the pseudonymous Satoshi Nakamoto—pioneering a peer-to-peer electronic cash system. Bitcoin defines coins as chains of ECDSA signatures, where each transfer signs a hash of prior transaction details, preventing double-spending through network-wide validation. The network's genesis block was mined January 3, 2009, initiating a blockchain secured by proof-of-work (PoW), wherein participants (miners) expend computational resources to find a nonce yielding a block hash below a target difficulty, adjusted every 2016 blocks to maintain approximately 10-minute intervals. This process, reliant on SHA-256 double-hashing, not only timestamps transactions but also probabilistically selects the longest chain as canonical, resolving forks via economic majority.[215][218][219] Economic incentives underpin blockchain security by aligning participant self-interest with network integrity, particularly in PoW systems like Bitcoin's. 
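The hash-chaining and proof-of-work mechanics above can be sketched with SHA-256 from the standard library. The difficulty, serialization format, and transaction strings are arbitrary choices for the illustration, vastly simpler than Bitcoin's:

```python
# Toy hash-chained blocks with a SHA-256 proof-of-work, showing how
# tampering with one block invalidates every later link.
import hashlib
import json

DIFFICULTY = 12   # required leading zero bits; tiny compared to Bitcoin

def block_hash(block: dict) -> bytes:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).digest()

def mine(prev_hash: str, data: str) -> dict:
    # Search for a nonce whose block hash has DIFFICULTY leading zero bits.
    nonce = 0
    while True:
        block = {"prev": prev_hash, "data": data, "nonce": nonce}
        if int.from_bytes(block_hash(block), "big") >> (256 - DIFFICULTY) == 0:
            return block
        nonce += 1

def valid(chain) -> bool:
    for i, block in enumerate(chain):
        # Every block must meet the proof-of-work target...
        if int.from_bytes(block_hash(block), "big") >> (256 - DIFFICULTY) != 0:
            return False
        # ...and must commit to the hash of its predecessor.
        if i > 0 and block["prev"] != block_hash(chain[i - 1]).hex():
            return False
    return True

genesis = mine("00" * 32, "genesis")
chain = [genesis, mine(block_hash(genesis).hex(), "alice pays bob 1")]
ok_before = valid(chain)
print(ok_before)                 # True

chain[0]["data"] = "alice pays eve 1000"   # tamper with history
print(valid(chain))              # False: hashes no longer link up
```

Rewriting history requires redoing the proof-of-work for the tampered block and every successor, which is the computational cost the economic incentives discussed below are designed to make unprofitable.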
Miners receive block rewards—initially 50 BTC, halving every 210,000 blocks, reaching 3.125 BTC as of the 2024 halving—and transaction fees, with more than 900,000 blocks mined by October 2025. Honest mining yields expected returns proportional to invested hash power, whereas attacks like selfish mining or 51% dominance require controlling majority resources, costing more in electricity and hardware than potential gains from honest participation, assuming rational actors and positive coin value. This game-theoretic structure approximates a Nash equilibrium, deterring deviations as the cost of subverting the chain exceeds rewards, with empirical security evidenced by Bitcoin's uninterrupted operation since inception despite attempts.[220][221][222] Alternative consensus models, such as proof-of-stake (PoS) adopted by Ethereum following its September 2022 transition, shift incentives from computational expenditure to stake-weighted selection, where validators risk forfeiture (slashing) of locked cryptocurrency for malfeasance like proposing invalid blocks. Economic analyses indicate PoS may offer comparable or superior security for high-throughput chains by reducing energy demands—Bitcoin's PoW network consumed approximately 150 TWh annually in 2023—while tying validator commitment to skin-in-the-game, though PoS introduces risks like long-range attacks mitigated by checkpoints. Hybrid or stake-based systems thus extend cryptographic verification with incentive designs calibrated to scale, prioritizing verifiability over trust.[223][224][225]
Quantum-Resistant Developments
Quantum Key Distribution Experiments
The first experimental demonstration of quantum key distribution (QKD) occurred in October 1989, when Charles Bennett, Gilles Brassard, and colleagues implemented the BB84 protocol using polarization-encoded photons transmitted over a distance of 32.5 cm on an optical table, successfully sharing a 403-bit secret key.[226] This proof-of-concept validated the use of quantum measurements for detecting eavesdropping but was limited to laboratory conditions due to high photon loss and rudimentary detection technology.[227] In the early 1990s, experiments advanced to fiber-optic channels, with demonstrations over tens of kilometers using attenuated laser pulses and single-photon detectors, addressing attenuation challenges through error correction and privacy amplification protocols.[226] A notable milestone was the 1992 implementation by Bennett et al., extending the range while confirming security against individual attacks, though collective attacks remained theoretically unaddressed until later proofs.[228] By the mid-1990s, entanglement-based QKD, proposed by Artur Ekert in 1991, was experimentally realized, using Bell inequality violations to certify security independently of device assumptions.[226] Terrestrial free-space and submarine cable experiments emerged in the 2000s, achieving links over 100 km; for instance, a 2007 demonstration used active phase randomization to mitigate side-channel vulnerabilities in practical setups.[229] Underwater QKD was first tested in 2021 over 10 meters in a controlled tank using BB84 with decoy states, highlighting potential for maritime applications despite scattering losses.[230] Drone-based mobile QKD was demonstrated in 2024, enabling dynamic aerial links over hundreds of kilometers without fixed infrastructure.[231] Satellite-based QKD marked a breakthrough for global-scale distribution, with China's Micius satellite achieving entanglement distribution over 1,200 km in 2017 and intercontinental key exchange equivalent to 
7,600 km ground distance by 2018, overcoming atmospheric turbulence via adaptive optics.[232] Continuous-variable QKD (CV-QKD), using coherent states for compatibility with telecom infrastructure, reached 421 km over fiber in 2018 with rates exceeding 1 kbit/s after error correction.[233] Recent experiments (2020–2025) focus on scalability and integration: in 2024, Oak Ridge National Laboratory demonstrated QKD for network function virtualization over optical networks, generating secure keys at 1.7 kbit/s over 20 km.[234] A 2025 UK trial established a 100+ km ultra-secure link using twin-field QKD, achieving positive key rates without trusted nodes.[235] An October 2025 experiment set a record for QKD transmission distance in a hybrid classical-quantum setup, emphasizing composable security proofs for real-world deployment.[236] Device-independent QKD protocols, closing implementation loopholes, were experimentally validated in 2023 with violations of CHSH inequalities exceeding local bounds by 10 standard deviations.[237] These advances underscore QKD's transition from theory to field trials, though practical key rates remain below classical methods, limited by photon detection efficiencies below 50% and decoherence.[238]
Post-Quantum Algorithms and NIST Standards (2024)
In response to the anticipated threat posed by large-scale quantum computers capable of breaking widely used public-key algorithms like RSA and ECC via Shor's algorithm, post-quantum cryptography focuses on developing mathematical problems believed to resist both classical and quantum attacks. These include lattice-based problems (e.g., Learning With Errors or LWE), hash-based signatures, and code-based schemes, which rely on computational hardness assumptions not efficiently solvable by known quantum algorithms. NIST initiated its post-quantum cryptography standardization process in December 2016 with a public call for algorithm submissions, followed by multiple rounds of evaluation involving cryptanalysis from global experts. By July 2022, NIST selected four primary candidates for standardization: CRYSTALS-Kyber for key encapsulation, CRYSTALS-Dilithium and Falcon for digital signatures, and SPHINCS+ as a hash-based backup signature scheme.[239] On August 13, 2024, NIST published the first three Federal Information Processing Standards (FIPS) specifying these algorithms in finalized form, approved by the Secretary of Commerce and effective immediately for federal use. FIPS 203 defines ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism), a renamed and slightly modified version of Kyber, which uses module-LWE hardness for secure key exchange with parameter sets offering security levels comparable to AES-128, AES-192, and AES-256. FIPS 204 specifies ML-DSA (Module-Lattice-Based Digital Signature Algorithm) from Dilithium, employing Fiat-Shamir with Aborts over module lattices for efficient signatures resistant to forgery under quantum threats. FIPS 205 outlines SLH-DSA (Stateless Hash-Based Digital Signature Algorithm) from SPHINCS+, a stateless scheme based on Merkle trees and few-time signatures, providing diversity against lattice vulnerabilities. 
A fourth standard, FN-DSA, based on Falcon's NTRU lattice signatures, was anticipated for release by late 2024 to offer compact signatures for constrained environments.[72][240]
| Standard | Algorithm Basis | Primary Use | Security Levels | Key Features |
|---|---|---|---|---|
| FIPS 203 (ML-KEM) | Module-LWE lattices | Key encapsulation | 1, 3, 5 (equiv. AES-128/192/256) | IND-CCA2 secure; efficient for hybrid use with classical crypto[240] |
| FIPS 204 (ML-DSA) | Module-SIS/LWE lattices | Digital signatures | 2, 3, 5 | EUF-CMA secure; balances speed and size |
| FIPS 205 (SLH-DSA) | Hash functions (e.g., SHAKE, SHA2) | Digital signatures (backup) | 3, 4, 5 | Provably secure under collision resistance; larger signatures |
