Digital Signature Algorithm
from Wikipedia

The Digital Signature Algorithm (DSA) is a public-key cryptosystem and Federal Information Processing Standard for digital signatures, based on the mathematical concept of modular exponentiation and the discrete logarithm problem. In a digital signature system, each signer holds a key pair consisting of a private and a public key. A signing entity that has published its public key can generate a signature using its private key, and a verifier can confirm the source of a message by checking the signature against the published public key. DSA is a variant of the Schnorr and ElGamal signature schemes.[1]: 486 

The National Institute of Standards and Technology (NIST) proposed DSA for use in their Digital Signature Standard (DSS) in 1991, and adopted it as FIPS 186 in 1994.[2] Five revisions to the initial specification have been released; the newest is FIPS 186-5, from February 2023.[3] DSA is patented, but NIST has made the patent available worldwide royalty-free. FIPS 186-5 indicates that DSA is no longer approved for digital signature generation, but may be used to verify signatures generated prior to the implementation date of that standard.

Overview


The DSA works in the framework of public-key cryptosystems and is based on the algebraic properties of modular exponentiation, together with the discrete logarithm problem, which is considered to be computationally intractable. The algorithm uses a key pair consisting of a public key and a private key. The private key is used to generate a digital signature for a message, and such a signature can be verified by using the signer's corresponding public key. The digital signature provides message authentication (the receiver can verify the origin of the message), integrity (the receiver can verify that the message has not been modified since it was signed) and non-repudiation (the sender cannot falsely claim that they have not signed the message).

History


In 1982, the U.S. government solicited proposals for a public key signature standard. In August 1991 the National Institute of Standards and Technology (NIST) proposed DSA for use in their Digital Signature Standard (DSS). Initially there was significant criticism, especially from software companies that had already invested effort in developing digital signature software based on the RSA cryptosystem.[1]: 484  Nevertheless, NIST adopted DSA as a Federal standard (FIPS 186) in 1994. Five revisions to the initial specification have been released: FIPS 186-1 in 1998,[4] FIPS 186-2 in 2000,[5] FIPS 186-3 in 2009,[6] FIPS 186-4 in 2013,[3] and FIPS 186-5 in 2023.[7] FIPS 186-5 forbids signing with DSA, while allowing verification of signatures generated prior to the implementation date of the standard. DSA is to be replaced by newer signature schemes such as EdDSA.[8]

DSA is covered by U.S. patent 5,231,668, filed July 26, 1991 and now expired, and attributed to David W. Kravitz,[9] a former NSA employee. This patent was given to "The United States of America as represented by the Secretary of Commerce, Washington, D.C.", and NIST has made this patent available worldwide royalty-free.[10] Claus P. Schnorr claims that his U.S. patent 4,995,082 (also now expired) covered DSA; this claim is disputed.[11]

In 1993, Dave Banisar obtained confirmation, via a FOIA request, that the DSA algorithm was designed not by NIST but by the NSA.[12]

OpenSSH announced that DSA was going to be removed in 2025; support was entirely dropped in version 10.0.[13][14]

Operation


The DSA algorithm involves four operations: key generation (which creates the key pair), key distribution, signing and signature verification.

1. Key generation


Key generation has two phases. The first phase is a choice of algorithm parameters which may be shared between different users of the system, while the second phase computes a single key pair for one user.

Parameter generation

  • Choose an approved cryptographic hash function H with output length |H| bits. In the original DSS, H was always SHA-1, but the stronger SHA-2 hash functions are approved for use in the current DSS.[3][15] If |H| is greater than the modulus length N, only the leftmost N bits of the hash output are used.
  • Choose a key length L. The original DSS constrained L to be a multiple of 64 between 512 and 1024 inclusive. NIST 800-57 recommends lengths of 2048 (or 3072) for keys with security lifetimes extending beyond 2010 (or 2030).[16]
  • Choose the modulus length N such that N < L and N ≤ |H|. FIPS 186-4 specifies (L, N) to have one of the values: (1024, 160), (2048, 224), (2048, 256), or (3072, 256).[3]
  • Choose an N-bit prime q.
  • Choose an L-bit prime p such that p − 1 is a multiple of q.
  • Choose an integer h randomly from {2 … p − 2}.
  • Compute g := h^((p−1)/q) mod p. In the rare case that g = 1, try again with a different h. Commonly h = 2 is used. This modular exponentiation can be computed efficiently even if the values are large.

The algorithm parameters are (p, q, g). These may be shared between different users of the system.
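The steps above can be sketched in Python. This is a minimal illustration with deliberately tiny, insecure parameter sizes (real deployments use L of 2048 bits or more); the trial-division primality test and the bit lengths N = 8, L = 16 are illustrative choices, not part of the standard:

```python
import random

def is_prime(n: int) -> bool:
    """Trial division; adequate only for the toy sizes used here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def generate_parameters(N: int, L: int):
    """Pick an N-bit prime q, an L-bit prime p with q | p - 1, and g of order q."""
    while True:
        q = random.randrange(2 ** (N - 1), 2 ** N)
        if is_prime(q):
            break
    while True:
        # Choose p = m*q + 1 so that p - 1 is automatically a multiple of q.
        m = random.randrange((2 ** (L - 1)) // q + 1, (2 ** L) // q)
        p = m * q + 1
        if 2 ** (L - 1) < p < 2 ** L and is_prime(p):
            break
    h = 2
    while True:
        g = pow(h, (p - 1) // q, p)   # g = h^((p-1)/q) mod p
        if g > 1:                     # rare case g = 1: retry with another h
            break
        h += 1
    return p, q, g

p, q, g = generate_parameters(N=8, L=16)
assert (p - 1) % q == 0 and g > 1 and pow(g, q, p) == 1
```

The final assertion checks the defining property of the parameters: q divides p − 1 and g generates a subgroup of order q.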

Per-user keys


Given a set of parameters, the second phase computes the key pair for a single user:

  • Choose an integer x randomly from {1 … q − 1}.
  • Compute y := g^x mod p.

x is the private key and y is the public key.
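Continuing in Python, per-user key generation is a single modular exponentiation. The fixed parameters p = 607, q = 101 are a valid but toy pair (101 divides 606) chosen for illustration only:

```python
import secrets

p, q = 607, 101               # toy primes with q | p - 1 (insecure sizes)
g = pow(2, (p - 1) // q, p)   # g = 2^((p-1)/q) mod p, an element of order q

x = secrets.randbelow(q - 1) + 1   # private key, uniform in {1 .. q-1}
y = pow(g, x, p)                   # public key

assert pow(y, q, p) == 1 and y != 1   # y lies in the order-q subgroup
```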

2. Key distribution


The signer should publish the public key y. That is, they should send the key to the receiver via a reliable, but not necessarily secret, mechanism. The signer should keep the private key x secret.

3. Signing


A message m is signed as follows:

  • Choose an integer k randomly from {1 … q − 1}.
  • Compute r := (g^k mod p) mod q. In the unlikely case that r = 0, start again with a different random k.
  • Compute s := k^−1 (H(m) + x·r) mod q. In the unlikely case that s = 0, start again with a different random k.

The signature is (r, s).

The calculation of r and s amounts to creating a new per-message key. The modular exponentiation in computing r is the most computationally expensive part of the signing operation, but it may be computed before the message is known. Calculating the modular inverse k^−1 mod q is the second most expensive part, and it may also be computed before the message is known. It may be computed using the extended Euclidean algorithm or using Fermat's little theorem as k^(q−2) mod q.
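A sketch of the signing steps under the same toy parameters. The example private key, the message, and the truncation of SHA-256 to the bit length of q follow the leftmost-bits rule above, but all concrete values are illustrative:

```python
import hashlib
import secrets

p, q = 607, 101               # toy parameters (insecure sizes)
g = pow(2, (p - 1) // q, p)
x = 57                        # example private key in {1 .. q-1}
y = pow(g, x, p)

def H(message: bytes) -> int:
    """SHA-256, truncated to the leftmost N bits (N = bit length of q)."""
    z = int.from_bytes(hashlib.sha256(message).digest(), "big")
    return z >> (256 - q.bit_length())

def sign(message: bytes, x: int):
    while True:
        k = secrets.randbelow(q - 1) + 1          # fresh per-message secret
        r = pow(g, k, p) % q
        if r == 0:
            continue                               # unlikely: retry with new k
        s = (pow(k, -1, q) * (H(message) + x * r)) % q
        if s != 0:
            return r, s                            # otherwise retry with new k

r, s = sign(b"hello", x)
assert 0 < r < q and 0 < s < q
```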

4. Signature Verification


One can verify that (r, s) is a valid signature for a message m as follows:

  • Verify that 0 < r < q and 0 < s < q.
  • Compute w := s^−1 mod q.
  • Compute u1 := H(m)·w mod q.
  • Compute u2 := r·w mod q.
  • Compute v := (g^(u1) · y^(u2) mod p) mod q.
  • The signature is valid if and only if v = r.
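The verification steps can be sketched the same way; the helper functions and toy values mirror the signing example above and are illustrative only:

```python
import hashlib
import secrets

p, q = 607, 101               # toy parameters (insecure sizes)
g = pow(2, (p - 1) // q, p)
x = 57                        # example private key
y = pow(g, x, p)              # corresponding public key

def H(message: bytes) -> int:
    z = int.from_bytes(hashlib.sha256(message).digest(), "big")
    return z >> (256 - q.bit_length())

def sign(message: bytes):
    while True:
        k = secrets.randbelow(q - 1) + 1
        r = pow(g, k, p) % q
        s = (pow(k, -1, q) * (H(message) + x * r)) % q if r else 0
        if r and s:
            return r, s

def verify(message: bytes, r: int, s: int) -> bool:
    if not (0 < r < q and 0 < s < q):     # range check rejects malformed input
        return False
    w = pow(s, -1, q)
    u1 = (H(message) * w) % q
    u2 = (r * w) % q
    v = (pow(g, u1, p) * pow(y, u2, p) % p) % q
    return v == r

r, s = sign(b"attack at dawn")
assert verify(b"attack at dawn", r, s)
assert not verify(b"attack at dawn", 0, s)   # out-of-range r is rejected
```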

Correctness of the algorithm


The signature scheme is correct in the sense that the verifier will always accept genuine signatures. This can be shown as follows:

First, since g = h^((p−1)/q) mod p, it follows that g^q ≡ h^(p−1) ≡ 1 (mod p) by Fermat's little theorem. Since g > 1 and q is prime, g must have order q.

The signer computes

    s = k^−1 (H(m) + x·r) mod q

Thus

    k ≡ H(m)·s^−1 + x·r·s^−1
      ≡ H(m)·w + x·r·w (mod q)

Since g has order q (mod p) we have

    g^k ≡ g^(H(m)·w) · g^(x·r·w)
        ≡ g^(H(m)·w) · y^(r·w)
        ≡ g^(u1) · y^(u2) (mod p)

Finally, the correctness of DSA follows from

    r = (g^k mod p) mod q
      = (g^(u1) · y^(u2) mod p) mod q
      = v
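The derivation can be checked numerically. With the illustrative values below (toy parameters, arbitrary private key x, nonce k, and hash value z), the exponent u1 + x·u2 collapses to k exactly as the proof requires:

```python
# Numeric check of the correctness argument with tiny parameters:
# g^(u1) * y^(u2) mod p equals g^k mod p, hence v = r.
p, q = 607, 101                 # toy primes with q | p - 1
g = pow(2, (p - 1) // q, p)     # order-q element (g^q ≡ 1 mod p)
x, k, z = 57, 17, 88            # private key, nonce, message hash (illustrative)
y = pow(g, x, p)

r = pow(g, k, p) % q
s = (pow(k, -1, q) * (z + x * r)) % q

w = pow(s, -1, q)
u1, u2 = (z * w) % q, (r * w) % q

assert (u1 + x * u2) % q == k                         # exponent collapses to k
assert (pow(g, u1, p) * pow(y, u2, p)) % p == pow(g, k, p)
assert (pow(g, u1, p) * pow(y, u2, p) % p) % q == r   # v = r
```

With these particular values, r = 16 and s = 41, and all three identities hold.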

Sensitivity


With DSA, the entropy, secrecy, and uniqueness of the random signature value k are critical: violating any one of those three requirements can reveal the entire private key to an attacker.[17] Using the same value twice (even while keeping k secret), using a predictable value, or leaking even a few bits of k in each of several signatures is enough to reveal the private key x.[18]

This issue affects both DSA and the Elliptic Curve Digital Signature Algorithm (ECDSA) – in December 2010, the group fail0verflow announced the recovery of the ECDSA private key used by Sony to sign software for the PlayStation 3 game console. The attack was made possible because Sony failed to generate a new random k for each signature.[19]
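The nonce-reuse failure can be reproduced in a few lines. All values are toy illustrations; the recovery formulas are the standard algebraic rearrangement of the two signing equations (s1 − s2 ≡ k^−1 (z1 − z2) mod q yields k, and then x follows from either signature):

```python
p, q = 607, 101                 # toy parameters (q divides p - 1)
g = pow(2, (p - 1) // q, p)
x = 57                          # victim's private key (to be recovered)
k = 17                          # the nonce, wrongly reused for two messages
z1, z2 = 88, 30                 # hashes of two different messages, mod q

r = pow(g, k, p) % q            # identical r in both signatures (same k)
s1 = (pow(k, -1, q) * (z1 + x * r)) % q
s2 = (pow(k, -1, q) * (z2 + x * r)) % q

# Attacker sees (r, s1), (r, s2), z1, z2 and solves for k, then x.
k_rec = ((z1 - z2) * pow(s1 - s2, -1, q)) % q
x_rec = ((s1 * k_rec - z1) * pow(r, -1, q)) % q

assert (k_rec, x_rec) == (k, x)
```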

This issue can be prevented by deriving k deterministically from the private key and the message hash, as described by RFC 6979. This ensures that k is different for each message and unpredictable for attackers who do not know the private key x.

In addition, malicious implementations of DSA and ECDSA can be created where k is chosen in order to subliminally leak information via signatures. For example, an offline private key could be leaked from a perfectly isolated device that only released innocent-looking signatures.[20]

Implementations


Cryptographic libraries that provide support for DSA include OpenSSL, Bouncy Castle, Crypto++, and Python's cryptography package.

from Grokipedia
The Digital Signature Algorithm (DSA) is a public-key cryptographic algorithm developed by the National Institute of Standards and Technology (NIST) for generating and verifying digital signatures that ensure the authenticity and integrity of digital messages. It operates based on the algebraic difficulty of the discrete logarithm problem within a finite field, using modular exponentiation to produce signatures without encrypting the message itself. DSA was proposed by NIST in August 1991 as part of the Digital Signature Standard (DSS) and first specified in Federal Information Processing Standards (FIPS) Publication 186 in May 1994. DSA relies on a set of domain parameters shared publicly: a large prime modulus p (with bit length L typically 1024, 2048, or 3072), a prime q (with bit length N of 160, 224, or 256, where q divides p − 1), and a generator g of the cyclic subgroup of order q modulo p.

For signing, the signer selects a random per-message secret k (1 ≤ k < q) and holds a private key x (1 ≤ x < q), computing r = (g^k mod p) mod q and s = k^−1 (H(m) + x·r) mod q, where H(m) is the hash of the message m using an approved function like SHA-256; the signature is the pair (r, s). Verification involves the recipient using the public key y = g^x mod p to recompute a value v = ((g^(u1) · y^(u2)) mod p) mod q, where w = s^−1 mod q, u1 = (H(m) · w) mod q, and u2 = (r · w) mod q, accepting the signature if v = r. The security strength, estimated up to 128 bits for approved parameter sets, assumes the intractability of computing discrete logarithms and secure random generation of k.

Historically, DSA was created collaboratively by NIST and the National Security Agency (NSA) to provide a U.S. government-approved alternative to proprietary algorithms like RSA for non-secret applications, with NIST committing to make it available worldwide on a royalty-free basis despite potential patent claims. It formed the core of the DSS through revisions in FIPS 186-1 (1998), 186-2 (2000), 186-3 (2009), and 186-4 (2013), which expanded parameter options and integrated elliptic curve variants (ECDSA).
However, in FIPS 186-5 (published February 3, 2023), DSA was deprecated for generating new digital signatures due to its declining industry adoption and vulnerabilities highlighted in academic analyses, such as susceptibility to certain attacks if parameters are weakly generated; it remains permissible only for verifying pre-existing signatures compliant with prior standards. The current DSS now prioritizes more efficient and secure alternatives like ECDSA, RSA, and post-quantum algorithms.

Introduction

Overview

The Digital Signature Algorithm (DSA) is a public-key cryptographic standard specified in Federal Information Processing Standard (FIPS) 186 for generating and verifying digital signatures, providing authentication of the signer and integrity of the signed message. It relies on the computational difficulty of the discrete logarithm problem in finite fields to ensure security. Originally proposed by the National Institute of Standards and Technology (NIST) in 1991 as part of the Digital Signature Standard (DSS), DSA was designed for federal use in protecting unclassified information and has since been adopted in various applications.

At a high level, DSA operates through a key pair consisting of a private key known only to the signer and a corresponding public key available to verifiers. To sign a message, the signer hashes the message using an approved hash function and combines it with the private key and a random per-message value to produce a signature pair. Verification involves recomputing the hash and using the public key to check the signature's validity against the original message. Unlike RSA signatures, which are based on the difficulty of integer factorization and involve direct exponentiation of the message hash, DSA employs modular exponentiation over a finite field with carefully selected parameters such as large primes p and q, and a generator g. DSA typically pairs with hash functions from the SHA family, such as SHA-1 or members of SHA-2 (e.g., SHA-256), to process messages before signing.

Historical Development

The Digital Signature Algorithm (DSA) draws its foundational concepts from earlier discrete logarithm-based signature schemes, notably the ElGamal signature scheme introduced by Taher ElGamal in 1985, which provided a framework for public-key signatures relying on the computational difficulty of the discrete logarithm problem in finite fields. This work influenced subsequent developments in digital signatures by demonstrating how discrete logarithms could enable secure, verifiable authentication without revealing private keys. DSA adapted and refined these ideas to create a more efficient variant tailored for standardization.

DSA was created collaboratively by the National Institute of Standards and Technology (NIST) and the National Security Agency (NSA). In 1991, NIST, in collaboration with the NSA, proposed DSA as the core component of the Digital Signature Standard (DSS), aiming to establish a federal standard for digital signatures in applications like electronic mail and data interchange. NIST adopted DSA into Federal Information Processing Standard (FIPS) 186 on May 19, 1994, marking its formal integration into U.S. government cryptographic practices.

Subsequent revisions refined the algorithm to address evolving security needs: FIPS 186-1 in 1998 added RSA as an option and clarified aspects of DSA parameter generation; FIPS 186-2 in 2000 introduced elliptic curve variants (ECDSA) and approved SHA-1 as a hash function; FIPS 186-3 in 2009 added support for SHA-2 hashes and larger key sizes; and FIPS 186-4 in 2013 further expanded elliptic curve options while approving additional hashes such as SHA-256 and increasing DSA modulus sizes to 2048 or 3072 bits for enhanced security. Despite this, DSA has become less efficient for modern systems due to the need for significantly larger key sizes compared to elliptic curve cryptography (ECC) variants like ECDSA to achieve equivalent security levels.
As of 2025, under FIPS 186-5 (issued February 3, 2023), DSA is approved solely for legacy verification purposes, with ECDSA and other algorithms preferred for new digital signature generations to align with current performance and security priorities.

Cryptographic Foundations

As of FIPS 186-5 (published February 2023), DSA is deprecated for generating new digital signatures and approved only for verifying existing signatures generated under prior standards. The parameters and key generation below are as specified in the withdrawn FIPS 186-4.

System Parameters

The Digital Signature Algorithm (DSA) relies on a set of global system parameters that define the underlying finite field and subgroup for cryptographic operations. These parameters are fixed for a given domain or application and must be generated or selected with high assurance to ensure security. The primary parameters are the prime modulus p, the subgroup order q, and the generator g.

The prime modulus p is a large prime with bit length L (approved values: 1024, 2048, or 3072), chosen such that q divides p − 1, ensuring a cyclic subgroup of order q modulo p where the discrete logarithm problem is computationally hard. Generation methods in FIPS 186-4 produce p with appropriate factors of p − 1 to resist known attacks. Specifically, 2^(L−1) < p < 2^L, with generation involving probabilistic primality testing like the Miller-Rabin algorithm applied over multiple rounds to verify primality with overwhelming probability.

The subgroup order q is a prime number of 160 to 256 bits that exactly divides p − 1, forming a large prime factor of p − 1 to create a suitable cyclic subgroup for the discrete logarithm problem. Specifically, 2^(N−1) < q < 2^N for approved N values of 160 (with L = 1024), 224 (with L = 2048), or 256 (with L = 2048 or 3072). Like p, q undergoes primality testing via Miller-Rabin, and its selection ensures the subgroup provides adequate security without excessive computational cost. NIST recommends parameter sets from FIPS 186-4, such as (2048-bit p, 224-bit q) for 112-bit security strength and (3072-bit p, 256-bit q) for 128-bit security, aligning with symmetric key equivalents per SP 800-57.

The generator g is an integer between 1 and p − 1 that generates the subgroup of order q in the multiplicative group modulo p. It is computed by selecting a random base h (where 1 < h < p − 1) and setting g = h^((p−1)/q) mod p; if g = 1, a new h is chosen until g > 1. To verify that the order of g is exactly q, check that g^q ≡ 1 (mod p) and g ≢ 1 (mod p); since q is prime, these two conditions suffice. The generation process for all parameters uses a seed and hash function (e.g., SHA-256) in a deterministic manner, as specified in FIPS 186-4 Appendix A, to allow reproducible validation.
Approved parameter sets:

  (L, N)        Bit length of p (L)   Bit length of q (N)   Security strength (bits)
  (1024, 160)   1024                  160                   80
  (2048, 224)   2048                  224                   112
  (2048, 256)   2048                  256                   128
  (3072, 256)   3072                  256                   128
These sets provide scalable security, with larger parameters offering protection against advances in computing power.

Key Generation

The key generation process in the Digital Signature Algorithm (DSA) produces a pair of mathematically related keys: a private key for signing and a public key for verification, derived from the shared system parameters p, q, and g. The private key x is selected as a random integer in the interval [1, q − 1], where q has bit length N (approved values: 160, 224, or 256 bits), ensuring x has sufficient entropy to resist brute-force attacks. This selection must employ a cryptographically secure random number generator (CSPRNG), approved by standards such as those in NIST SP 800-90A, to provide uniform distribution and unpredictability, matching the security strength of the parameter set (e.g., at least 112 bits for N = 256).

The public key y is then computed as y = g^x mod p, where p has bit length L (1024, 2048, or 3072 bits), resulting in y being an integer in [2, p − 1]. This exponentiation leverages the discrete logarithm problem's hardness in the subgroup generated by g modulo p, ensuring that recovering x from y, g, and p is computationally infeasible for appropriately chosen parameters.

Upon generation, the key pair undergoes validation to confirm correctness: specifically, verify that y ≠ 1 (indicating x ≢ 0 (mod q)) and that the discrete logarithm log_g(y) = x remains intractable, which holds under the system's security assumptions without explicit computation of the logarithm. The private key x must be stored securely, using mechanisms like hardware security modules or encrypted storage per NIST guidelines, to prevent exposure that could compromise signatures. Conversely, the public key y is disseminated openly, typically embedded in digital certificates conforming to standards like X.509 for trust anchoring in public key infrastructures.

Core Operations

Signature Generation

The signature generation process in the Digital Signature Algorithm (DSA) produces a signature as a pair of integers (r, s) for a given message m, using the signer's private key x and the domain parameters p, q, and g. This step ensures the message's authenticity and integrity by binding it cryptographically to the private key, which corresponds to the public key y = g^x mod p. The process is probabilistic, relying on a fresh random value for each signature.

To begin, preprocess the message by computing its hash value h = HASH(m) using an approved hash function, such as SHA-256, which outputs a fixed-length digest. If the hash output length exceeds the bit length N of the prime q, truncate h to the leftmost N bits to obtain the value z used in subsequent computations. This hashing step reduces the message to a compact representation suitable for the modular arithmetic in DSA, ensuring efficiency while preserving the collision-resistance properties essential for security.

Next, generate a random k uniformly selected from the interval [1, q − 1], using an approved random bit generator to ensure unpredictability and high entropy. The value k serves as a per-signature secret nonce and must be kept confidential; it is discarded after use and never reused for another signature.

Compute r as follows:

    r ≡ (g^k mod p) mod q

If r = 0, discard k and select a new one, repeating the computation to avoid invalid signatures. Then, compute the modular inverse k^−1 of k modulo q, and use it to derive s:

    s ≡ k^−1 (z + x·r) mod q

If s = 0, discard k and select a new one, recomputing both r and s. The resulting signature is the pair (r, s), where both components are integers in [1, q − 1]. The requirement for a fresh k per signature is critical, as reusing the same k across multiple signatures enables an attacker to recover the private key x through algebraic manipulation of the signature equations.

Signature Verification

The signature verification process in the Digital Signature Algorithm (DSA) uses the signer's public key to validate the authenticity and integrity of a message without revealing the private key. This procedure ensures that the signature was generated by the holder of the corresponding private key and that the message has not been altered. The verification relies on the mathematical properties of the discrete logarithm problem in a finite field, confirming the signature components through modular computations.

To perform verification, the inputs consist of the message m, the signature pair (r, s), the public key y, and the domain parameters p, q, and g, where p is a large prime modulus, q is a prime divisor of p − 1, and g is a generator of the subgroup of order q modulo p. An approved hash function, such as those specified in FIPS 180, is also required to compute the message digest.

The verifier first checks that both r and s are integers in the interval [1, q − 1]; if either is outside this range, the signature is rejected as invalid. Next, compute the modular inverse w ≡ s^−1 (mod q), which exists since s is coprime to q. Then, derive u1 ≡ HASH(m)·w (mod q) and u2 ≡ r·w (mod q), where HASH(m) is the integer representation of the hash output (typically the leftmost N bits of the hash, with N being the bit length of q). The verification value v is then calculated as:

    v ≡ (g^(u1) · y^(u2) mod p) mod q

If v = r, the signature is valid, confirming the message's origin and integrity; otherwise, it is rejected. This step involves modular exponentiations, which are the primary computational bottleneck but can be optimized using algorithms like binary exponentiation for large p.

Security Analysis

Correctness Proof

The correctness of the Digital Signature Algorithm (DSA) is established by demonstrating that the verification equation holds true for a valid signature, ensuring that the computed value v equals the signature component r when the message and signature are authentic and unaltered. The proof substitutes the signing equations into the verification process under the assumption of correct operations modulo the primes p and q, where p is a large prime, q divides p − 1, and g is a generator of the subgroup of order q modulo p.

Consider a valid signature (r, s) generated for a message m with hash value z = HASH(m), private key x, and ephemeral secret k, where 1 ≤ k ≤ q − 1 and the inverse k^−1 mod q exists. The signing equations are r = (g^k mod p) mod q and s = k^−1 (z + x·r) mod q, with 0 < r < q and 0 < s < q. The public key is y = g^x mod p.

In verification, compute w = s^−1 mod q, u1 = z·w mod q, u2 = r·w mod q, and v = (g^(u1) · y^(u2) mod p) mod q. Substituting w = s^−1 yields u1 = z·s^−1 mod q and u2 = r·s^−1 mod q. Now substitute the signing equation for s into these: since s = k^−1 (z + x·r) mod q, it follows that s^−1 = k·(z + x·r)^−1 mod q. Thus u1 = z·k·(z + x·r)^−1 mod q and u2 = r·k·(z + x·r)^−1 mod q.

The exponent in the verification computation becomes

    u1 + x·u2 = [z·k·(z + x·r)^−1 + x·r·k·(z + x·r)^−1] mod q
              = k·(z + x·r)·(z + x·r)^−1 mod q
              = k mod q

Therefore g^(u1) · y^(u2) = g^(u1) · (g^x)^(u2) = g^(u1 + x·u2) = g^k mod p, so v = (g^k mod p) mod q = r. This confirms v ≡ r mod q, validating the signature.

The proof assumes that all inverses exist (i.e., k ≢ 0 mod q and s ≢ 0 mod q, enforced during signing by regenerating k if necessary) and that arithmetic is performed correctly modulo p and q. For invalid signatures, such as those with r = 0, s = 0, or altered message hashes, the range checks fail or the substitution does not yield v = r, leading to rejection. This ensures the algorithm's integrity under ideal conditions without computational errors.

Vulnerabilities and Sensitivity

The Digital Signature Algorithm (DSA) is theoretically secure under the discrete logarithm problem, but practical implementations are susceptible to several vulnerabilities, particularly those arising from the ephemeral nonce k used in signature generation. A critical weakness occurs when the same nonce k is reused across multiple signatures for the same private key but different messages. In this scenario, an attacker can recover the private key x using the signature equations. Specifically, for two signatures (r1, s1) on message hash h1 and (r2, s2) on h2, the relation

    x ≡ (s1·h2 − s2·h1) · (s2·r1 − s1·r2)^−1 (mod q)

holds, allowing direct computation of x modulo the subgroup order q. This attack requires only the public signatures and hashes, compromising the entire key pair.

If the nonce k is not fully random but exhibits predictability, such as through biased generation or partial bit leakage, more advanced attacks become feasible. Linear congruence or lattice-based methods, including solutions to the Hidden Number Problem, can exploit known or guessed bits of k from multiple signatures to recover the private key. A notable real-world example is the 2010 Sony incident, where a flawed random number generator in the PlayStation 3's ECDSA implementation (analogous to DSA) produced predictable nonces, enabling hackers to extract Sony's private signing key after analyzing a small number of signatures. Such predictability often stems from insufficient entropy in the RNG, amplifying the risk even without full reuse.

Side-channel attacks further expose DSA implementations to physical or remote observation. Timing attacks target variations in the modular exponentiation during the r = (g^k mod p) computation, where execution time leaks information about bits of k, potentially revealing bits of the private key x through statistical analysis of multiple signatures. Power analysis attacks, including differential power analysis, monitor energy consumption during nonce k generation and inversion, as the modular operations in k^−1 mod q exhibit data-dependent patterns that correlate with secret values. These vulnerabilities are exacerbated in non-constant-time implementations and have been demonstrated on hardware like smartcards.

Parameter choices also introduce weaknesses. DSA's security relies on the difficulty of the discrete logarithm problem in the subgroup of order q; small q values, such as the original 160-bit specification, are vulnerable to Pollard's rho, which solves the problem in approximately 2^80 operations for 160 bits – now within reach of well-resourced attackers. NIST has disallowed generation of DSA signatures with 160-bit q (providing only 80-bit security) after December 31, 2013, while verification of existing signatures remains allowed for legacy use, recommending transitions to at least 224-bit q paired with 2048-bit p for 112-bit security (though 112-bit security is planned for deprecation after 2030).

DSA is particularly sensitive to flaws in random number generation, as the nonce k must be uniformly random and unpredictable. The Dual_EC_DRBG generator, once standardized by NIST, contained a potential backdoor allowing prediction of outputs if certain constants were known, which could generate biased or foreseeable k values, facilitating key recovery attacks similar to nonce reuse. This concern led to its withdrawal in 2014, underscoring the need for vetted RNGs like those in SP 800-90A.

To mitigate these vulnerabilities, deterministic nonce generation per RFC 6979 derives k from the private key and message hash using HMAC, ensuring uniqueness without relying on RNGs and preventing reuse or predictability issues. Implementations should employ constant-time arithmetic to resist timing and power analysis, such as Montgomery ladders for modular exponentiation. Additionally, adopting larger parameters as per NIST guidelines – e.g., 256-bit q with 3072-bit p for 128-bit security – bolsters resistance to attacks like Pollard's rho. These strategies maintain DSA's utility while addressing practical risks.
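The deterministic-nonce idea can be sketched as follows. This is a deliberately simplified illustration, not the full RFC 6979 construction (which uses an HMAC-DRBG loop with rejection sampling); the key encoding and the modular reduction here are illustrative choices:

```python
import hashlib
import hmac

q = 101                          # toy subgroup order (insecure size)
x = 57                           # private key

msg_hash = hashlib.sha256(b"attack at dawn").digest()

# Derive k from HMAC(private key, message hash): no RNG involved,
# so the same message and key always yield the same k, and different
# messages yield unpredictable, effectively independent values.
mac = hmac.new(x.to_bytes(32, "big"), msg_hash, hashlib.sha256).digest()
k = int.from_bytes(mac, "big") % (q - 1) + 1   # deterministic, in [1, q-1]

# Repeating the derivation reproduces the identical nonce.
mac2 = hmac.new(x.to_bytes(32, "big"), msg_hash, hashlib.sha256).digest()
k2 = int.from_bytes(mac2, "big") % (q - 1) + 1
assert k == k2 and 1 <= k <= q - 1
```

The real RFC 6979 procedure additionally guarantees a bias-free distribution over [1, q − 1]; the simple modular reduction above is only adequate for illustration.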

Applications and Implementations

Software and Library Support

The Digital Signature Algorithm (DSA) is implemented in several widely used cryptographic libraries, providing APIs for key generation, signing, and verification compliant with standards such as FIPS 186-4. OpenSSL includes comprehensive DSA support through functions such as DSA_generate_key for key pair generation, DSA_sign for producing signatures, and DSA_verify for validation, with parameter handling aligned to FIPS 186-4 requirements for finite-field cryptography (FFC) parameter generation and validation. In the Bouncy Castle library, the DSAKeyPairGenerator class facilitates DSA key pair creation and supports initialization with custom parameters, including user-specified prime modulus (p), order (q), and generator (g), enabling flexible deployment in Java-based applications. The Crypto++ C++ library provides a full DSA implementation per FIPS 186 up to 3072-bit moduli, featuring classes like DSAPublicKey and DSAPrivateKey with built-in parameter validation via the Validate method to ensure primes meet length constraints. Python's cryptography library exposes DSA operations through its hazmat primitives layer, where generate_private_key(key_size) creates a key pair (with supported sizes of 1024, 2048, 3072, or 4096 bits) and the sign(data, algorithm) method on DSAPrivateKey instances produces DER-encoded signatures using hashes like SHA-256. In the Rust programming language, the RustCrypto project provides the 'dsa' crate, a pure Rust implementation of DSA compliant with FIPS 186-4, supporting key generation, signing, and verification. However, due to DSA's deprecation in FIPS 186-5 for generating new signatures and security concerns with insecure parameter sizes (e.g., 160-bit q providing only 80-bit security), its use is discouraged for new applications; refer to the Security Analysis section for details on vulnerabilities.

DSA finds practical use in protocols such as SSH via the ssh-dss key type for authentication, though it is deprecated and disabled by default in OpenSSH 7.0 and later due to security concerns. In legacy TLS implementations (up to version 1.2), DSA signatures appear in cipher suites like TLS_DHE_DSS_WITH_AES_128_CBC_SHA, but TLS 1.3 removes support for DSA entirely. For PGP, GnuPG supports DSA keys primarily for signing messages and subkeys, often paired with ElGamal for encryption, adhering to OpenPGP standards. Performance-wise, DSA with a 2048-bit prime p requires larger keys and is slower for signing and verification than equivalent-strength ECDSA (e.g., over a 224-bit curve).
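A brief usage sketch of the cryptography package's DSA API described above (requires the third-party cryptography library; per FIPS 186-5, new designs should prefer ECDSA or EdDSA):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

# Generate fresh DSA parameters and a 2048-bit key pair.
private_key = dsa.generate_private_key(key_size=2048)
public_key = private_key.public_key()

message = b"signed with DSA"
signature = private_key.sign(message, hashes.SHA256())  # DER-encoded (r, s)

# verify() returns None on success and raises InvalidSignature on failure.
public_key.verify(signature, message, hashes.SHA256())
```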

Standards and Recommendations

The Federal Information Processing Standard (FIPS) 186-5, published by the National Institute of Standards and Technology (NIST) in February 2023, specifies algorithms for digital signatures but removes the Digital Signature Algorithm (DSA) as an approved method for generating new signatures due to limited industry adoption and security analyses indicating better alternatives. Instead, FIPS 186-5 retains DSA solely for verifying existing signatures, while emphasizing migration to more efficient schemes like the Elliptic Curve Digital Signature Algorithm (ECDSA) or the Edwards-curve Digital Signature Algorithm (EdDSA); it supports hash functions including the SHA-2 and SHA-3 families for compatible algorithms, though DSA verification inherits prior pairings. For legacy DSA implementations, approved parameter sets from FIPS 186-4 (superseded but referenced for verification) define domain parameters based on finite-field cryptography, with specific bit lengths for the modulus p (L bits) and subgroup order q (N bits) to achieve target security levels. These sets ensure at least 112-bit security for ongoing verification, as smaller parameters like L = 1024, N = 160 (80-bit security) are legacy-only and discouraged. The table below summarizes the approved (L, N) pairs:
  Security strength (bits)   L (bits of p)   N (bits of q)
  112                        2048            224
  112                        2048            256
  128                        3072            256
DSA pairings with hash functions must use approved secure hashes per FIPS 180-4, such as SHA-224, SHA-256, SHA-384, or SHA-512, matched to the parameter set's security strength; SHA-1 has been disallowed for new DSA signatures since 2011 and fully deprecated for generation by December 31, 2013, per NIST SP 800-131A Revision 1, though it may appear in legacy verification contexts. Internationally, ISO/IEC 14888-3:2018 standardizes discrete logarithm-based digital signature mechanisms with appendix, explicitly including DSA alongside variants like EC-DSA, providing specifications for parameter generation and signature operations to ensure interoperability. As of 2025, NIST recommends DSA exclusively for legacy systems verifying pre-existing signatures, urging adoption of ECDSA or EdDSA for new deployments due to DSA's larger key sizes and performance drawbacks; for any continued use, parameters must meet a minimum L = 2048 bits to provide at least 112-bit security. The draft NIST SP 800-131A Revision 3 (October 2024) proposes disallowing DSA immediately upon finalization, with verification permitted indefinitely for legacy purposes, aligning with a broader transition timeline extending certain classical algorithms until 2030 in limited federal contexts.
