Timing attack
from Wikipedia

An example of a timing attack being performed on a web cache. The left graph denotes a timing attack successfully detecting a cached image whereas the right one shows an attack that fails to do the same.

In cryptography, a timing attack is a side-channel attack in which the attacker attempts to compromise a cryptosystem by analyzing the time taken to execute cryptographic algorithms. Every logical operation in a computer takes time to execute, and the time can differ based on the input; with precise measurements of the time for each operation, an attacker may be able to work backwards to the input.

Information can leak from a system through measurement of the time it takes to respond to certain queries. How much this information can help an attacker depends on many variables such as cryptographic system design, the CPU running the system, the algorithms used, assorted implementation details, timing attack countermeasures, and accuracy of the timing measurements. Any algorithm that has data-dependent timing variation is vulnerable to timing attacks. Removing timing-dependencies is difficult since varied execution time can occur at any level.

Vulnerability to timing attacks is often overlooked in the design phase and can be introduced unintentionally with compiler optimizations. Countermeasures include blinding and constant-time functions.

Constant-time challenges


Many cryptographic algorithms can be implemented (or masked by a proxy) in a way that reduces or eliminates data-dependent timing information; such an implementation is known as a constant-time algorithm. Consider a trivial timing-safe implementation:[1] imagine one in which every call to a subroutine always returns exactly after time T has elapsed, where T is the maximum time it takes to execute that routine on every possible authorized input. Such a hypothetical implementation would leak no information about the data supplied to that invocation (in reality, some non-data-dependent timing variation is unavoidable). The downside of this approach is that the time used for all executions becomes that of the worst-case performance of the function.[2] Where that cost is unacceptable, blinding can be applied instead to avoid vulnerability to timing attacks.

The data-dependency of timing may stem from one of the following:[3]

  • Non-local memory access, as the CPU may cache the data. Software run on a CPU with a data cache will exhibit data-dependent timing variations as a result of memory lookups into the cache.
  • Conditional jumps. Modern CPUs try to speculatively execute past conditional jumps by guessing. Guessing wrongly (not uncommon with essentially random secret data) entails a measurably large delay as the CPU tries to backtrack. Avoiding this leak requires writing branch-free code.
  • Some "complicated" mathematical operations, depending on the actual CPU hardware:
    • Integer division is almost always non-constant time. The CPU runs a microcode loop that takes a different code path when either the divisor or the dividend is small.
    • CPUs without a barrel shifter run shifts and rotations in a loop, one position at a time. As a result, the amount to shift must not be secret.
    • Older CPUs run multiplications in a way similar to division.
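The branch-free style these constraints demand can be illustrated with a minimal sketch; ct_select is a hypothetical helper, not a library function. A secret-dependent choice is made with bit masks instead of a conditional jump:

```c
#include <stdint.h>

/* Branch-free select: with cond in {0, 1}, returns a when cond == 1
 * and b when cond == 0, with no secret-dependent jump for the branch
 * predictor to mispredict. */
static uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)0 - cond;   /* all-ones if cond == 1, else all-zeros */
    return (a & mask) | (b & ~mask);
}
```

Both operands are always computed and combined, so the execution path does not depend on the secret condition.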

Examples


The execution time for the square-and-multiply algorithm used in modular exponentiation depends linearly on the number of '1' bits in the key. While the number of '1' bits alone is not nearly enough information to make finding the key easy, repeated executions with the same key and different inputs can be used to perform statistical correlation analysis of timing information to recover the key completely, even by a passive attacker. Observed timing measurements often include noise (from sources such as network latency, disk-drive access times that vary from access to access, and the error-correction techniques used to recover from transmission errors). Nevertheless, timing attacks are practical against a number of encryption algorithms, including RSA, ElGamal, and the Digital Signature Algorithm.
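The dependence described above can be seen in a toy square-and-multiply sketch (64-bit toy arithmetic for small moduli, not a real cryptosystem; modexp is an illustrative helper): the extra multiply executes only for '1' bits of the exponent, so total running time grows with the key's Hamming weight.

```c
#include <stdint.h>

/* Toy left-to-right square-and-multiply; valid only for small moduli
 * (the 64-bit intermediate products must not overflow). */
static uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1 % mod;
    base %= mod;
    for (int i = 63; i >= 0; i--) {
        result = (result * result) % mod;    /* squaring happens for every bit */
        if ((exp >> i) & 1)                  /* secret-bit-dependent branch */
            result = (result * base) % mod;  /* extra multiply only for '1' bits */
    }
    return result;
}
```

A constant-time implementation would perform the multiply unconditionally and discard the result for '0' bits.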

In 2003, Boneh and Brumley demonstrated a practical network-based timing attack on SSL-enabled web servers, based on a different vulnerability having to do with the use of RSA with Chinese remainder theorem optimizations. The actual network distance was small in their experiments, but the attack successfully recovered a server private key in a matter of hours. This demonstration led to the widespread deployment and use of blinding techniques in SSL implementations. In this context, blinding is intended to remove correlations between key and encryption time.[4][5]

Some versions of Unix use a relatively expensive implementation of the crypt library function for hashing an 8-character password into an 11-character string. On older hardware, this computation took a deliberately and measurably long time: as much as two or three seconds in some cases.[citation needed] The login program in early versions of Unix executed the crypt function only when the login name was recognized by the system. This leaked information through timing about the validity of the login name, even when the password was incorrect. An attacker could exploit such leaks by first applying brute-force to produce a list of login names known to be valid, then attempt to gain access by combining only these names with a large set of passwords known to be frequently used. Without any information on the validity of login names the time needed to execute such an approach would increase by orders of magnitude, effectively rendering it useless. Later versions of Unix have fixed this leak by always executing the crypt function, regardless of login name validity.[citation needed]
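The fix can be sketched as follows, with hypothetical stand-ins only: lookup_hash plays the password database and slow_hash plays crypt(3); neither is the historical Unix code. The hash is computed on every attempt, even for unknown login names, so timing no longer distinguishes valid from invalid names.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy "database" with one known user; "hash-of-letmein" stands in
 * for a stored crypt hash. */
static const char *lookup_hash(const char *name) {
    return strcmp(name, "alice") == 0 ? "hash-of-letmein" : NULL;
}

/* Stand-in for the deliberately expensive crypt(3) computation. */
static void slow_hash(const char *password, char *out, size_t outlen) {
    snprintf(out, outlen, "hash-of-%s", password);
}

static bool check_login(const char *name, const char *password) {
    char computed[64];
    const char *stored = lookup_hash(name);
    slow_hash(password, computed, sizeof computed);  /* runs unconditionally */
    /* (production code would also use a constant-time comparison here) */
    return stored != NULL && strcmp(stored, computed) == 0;
}
```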

Two otherwise securely isolated processes running on a single system with either cache memory or virtual memory can communicate by deliberately causing page faults and/or cache misses in one process, then monitoring the resulting changes in access times from the other. Likewise, if an application is trusted, but its paging/caching is affected by branching logic, it may be possible for a second application to determine the values of the data compared to the branch condition by monitoring access time changes; in extreme examples, this can allow recovery of cryptographic key bits.[6][7]

The 2017 Meltdown and Spectre attacks which forced CPU manufacturers (including Intel, AMD, ARM, and IBM) to redesign their CPUs both rely on timing attacks.[8] As of early 2018, almost every computer system in the world is affected by Spectre.[9][10][11]

In 2018, many internet servers were still vulnerable to slight variations of the original timing attack on RSA, two decades after the original vulnerability was discovered.[12]

String comparison algorithms


The following C code demonstrates a typical insecure string comparison which stops testing as soon as a character doesn't match. For example, when comparing "ABCDE" with "ABxDE" it will return after 3 loop iterations:

#include <stdbool.h>
#include <stddef.h>

bool insecure_string_compare(const void *a, const void *b, size_t length) {
  const char *ca = a, *cb = b;
  for (size_t i = 0; i < length; i++)
    if (ca[i] != cb[i])
      return false;  /* early exit: running time reveals where the first mismatch is */
  return true;
}

By comparison, the following version runs in constant-time by testing all characters and using a bitwise operation to accumulate the result:

bool constant_time_string_compare(const void *a, const void *b, size_t length) {
  const char *ca = a, *cb = b;
  bool result = true;
  for (size_t i = 0; i < length; i++)
    result &= ca[i] == cb[i];  /* no early exit: every byte is always examined */
  return result;
}

In the world of C library functions, the first function is analogous to memcmp(), while the latter is analogous to NetBSD's consttime_memequal() or OpenBSD's timingsafe_bcmp() and timingsafe_memcmp().[13] On other systems, the comparison function from cryptographic libraries such as OpenSSL and libsodium can be used.

from Grokipedia
A timing attack is a side-channel attack in cryptography that exploits measurable differences in the execution time of cryptographic operations to infer secret information, such as private keys, without directly accessing the system's internals. These variations often arise from implementation details like performance optimizations, conditional branching, cache hits, or processor-specific instruction timings, allowing attackers to statistically analyze response times to known inputs. First systematically described by Paul Kocher in 1996, timing attacks are computationally efficient and can succeed remotely over networks if the target system processes queries from the attacker. Kocher's seminal work demonstrated practical timing attacks against implementations of Diffie-Hellman, RSA encryption/decryption, and the Digital Signature Standard (DSS), showing how fixed exponents or private keys could be extracted entirely. These vulnerabilities extend beyond classical cryptography; recent research has uncovered timing attacks on post-quantum schemes like CRYSTALS-Kyber, where implementation flaws allow key recovery, as seen in the 2024 KyberSlash vulnerability. Hardware platforms are also affected, as with Apple's M-series chips in the 2024 "GoFetch" attack, which leverages data prefetching to leak keys from cryptographic implementations. Despite mitigations, timing attacks remain a persistent threat, highlighting the gap between theoretical security and real-world implementations.

Countermeasures against timing attacks focus on eliminating or masking temporal leaks while balancing performance and security. Constant-time algorithms ensure operations take identical durations regardless of input or secret values, achieved by avoiding data-dependent branches and using uniform instruction paths, though this reduces efficiency. Blinding randomizes computations by multiplying inputs with ephemeral values (e.g., in RSA, blinding the ciphertext with r^e mod n before decryption and unblinding afterward), preventing attackers from correlating timings to specific key bits, at a modest 2-10% overhead. Additional techniques include inserting random delays to add noise, though these are less reliable, as attackers can average out the variation with sufficient samples. Modern libraries like OpenSSL incorporate these countermeasures in hardened modes, but ongoing vigilance is required for emerging threats in quantum-resistant and hardware-accelerated environments.
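The blinding transform can be sketched with the textbook toy parameters n = 3233, e = 17, d = 2753. This is illustrative only: real RSA needs big-integer arithmetic, a CSPRNG, and a fresh random r per decryption; powmod and blinded_decrypt are hypothetical helpers.

```c
#include <stdint.h>

/* Square-and-multiply modular exponentiation for small moduli. */
static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1 % m;
    b %= m;
    while (e) {
        if (e & 1) r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

/* Blinded RSA decryption: exponentiate c * r^e mod n (a value the
 * attacker cannot correlate with the true ciphertext), then strip the
 * mask with r^{-1} mod n. r_inv must satisfy r * r_inv == 1 (mod n). */
static uint64_t blinded_decrypt(uint64_t c, uint64_t d, uint64_t e,
                                uint64_t n, uint64_t r, uint64_t r_inv) {
    uint64_t blinded = (c * powmod(r, e, n)) % n;  /* mask the input */
    uint64_t m = powmod(blinded, d, n);            /* timing sees randomized data */
    return (m * r_inv) % n;                        /* unmask the plaintext */
}
```

With r = 2 (so r_inv = 1617, since 2 * 1617 = 3234 = n + 1), decrypting the textbook ciphertext 2790 recovers the message 65, exactly as unblinded decryption would; only the timing behavior changes.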

Fundamentals

Definition and Principles

A timing attack is a type of side-channel attack in which an attacker measures the time taken by a system to execute cryptographic or other sensitive operations on various inputs, exploiting variations in these execution times to infer secret information, such as private keys. These discrepancies arise because the duration of computations can depend subtly on the secret values involved, allowing the attacker to deduce bits of the secret through repeated measurements. The core principle behind timing attacks is that operations like conditional branches, memory accesses, or arithmetic computations (such as modular exponentiation) may execute at different speeds based on the secret data and input. For instance, hardware optimizations, cache behaviors, or instruction timings can introduce measurable differences when intermediate results vary with the secret. In general, the observed time T for an operation can be modeled as T = f(s) + n, where f(s) is a function of the secret s and n represents noise from environmental factors or measurement variability; the attacker reconstructs s by collecting many values of T and applying statistical analysis to correlate timing patterns with candidate secret values. This approach was first formalized as a practical attack by Paul Kocher in 1996.

Timing attacks fall under the broader category of side-channel attacks, which leverage physical or implementation-specific leakage, such as timing, power consumption, or electromagnetic emissions, rather than black-box cryptanalysis, which considers only algorithmic inputs and outputs without regard to hardware or software realizations. Timing serves as a particularly reliable side channel because modern hardware exhibits predictable execution times for operations, enabling attackers to detect even microsecond-scale variations given sufficient samples, often thousands, under controlled conditions like remote network access. Examples include leaks in RSA decryption or AES encryption, though the specific mechanisms are explored elsewhere.

Historical Development

The concept of timing as a potential information leak in secure systems emerged early in the study of covert channels in multilevel secure environments. Researchers identified timing channels as mechanisms by which processes could inadvertently transmit information through variations in execution time, compromising isolation guarantees in trusted systems. A seminal contribution was Richard A. Kemmerer's shared resource matrix methodology, which provided a systematic approach to detecting both storage and timing channels by modeling shared system resources and their potential for unauthorized signaling.

The first practical demonstration of timing attacks specifically targeting cryptographic implementations occurred in 1996, when Paul Kocher published "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems." In this work, Kocher showed how precise measurements of the execution times of private-key operations could reveal the keys themselves, with experiments recovering 90% of the bits of 512-bit RSA keys after observing around 20,000 operations. This paper established timing analysis as a viable side-channel attack class, shifting focus from theoretical leaks to exploitable vulnerabilities in real-world cryptosystems.

Subsequent advances in the late 1990s and early 2000s extended timing attacks to remote scenarios and hardware-specific leaks. In 2003, David Brumley and Dan Boneh demonstrated practical remote timing attacks on OpenSSL's RSA implementation, recovering keys by analyzing response times over local networks, including between machines in buildings about half a mile apart, even under moderate network jitter. The 2000s saw proliferation of cache-timing variants, notably Daniel J. Bernstein's 2005 attack on AES, which exploited cache-access latencies in software implementations to recover full 128-bit keys using hundreds of millions of encryptions (approximately 400 million packets).

In the 2010s, timing attacks evolved alongside modern hardware architectures, incorporating speculative-execution flaws. The Spectre attacks, co-authored by Paul Kocher and others and disclosed in 2018, revealed how branch prediction and its timing side effects could leak kernel memory across security boundaries, affecting billions of devices and prompting widespread mitigations. Post-2010 research highlighted platform-specific vulnerabilities, such as timing leaks caused by microarchitectural behaviors, enabling cross-VM key extraction in cloud environments. Recent work up to 2025 has integrated machine learning to enhance timing analysis, automating pattern recovery from noisy measurements; for instance, a 2023 study introduced a machine-learning-assisted timing attack on garbled circuits, achieving key recovery with fewer traces by classifying execution profiles with neural networks. These developments underscore ongoing adaptation to new hardware, as seen in 2020s attacks such as the 2024 KyberSlash vulnerability, which exploited secret-dependent division timings in post-quantum implementations on platforms including ARM, enabling key recovery in under 4 minutes on the target device.

Attack Mechanisms

Basic Timing Analysis

A basic timing attack proceeds through three primary phases: measurement, modeling, and analysis. In the measurement phase, the attacker collects timing data for target operations either locally, by executing code on the same system, or remotely, by observing network latency or shared-resource contention. High-resolution timers, such as the rdtsc instruction on x86 processors, enable precise cycle-level measurements of execution times. To mitigate noise from system variability, attackers average multiple traces, typically requiring thousands of samples for reliable key recovery; 5,000 samples have achieved over 88% success in experiments.

The modeling phase represents the observed time T_i for the i-th measurement as T_i = μ + σ·f(k_i) + ε, where μ is the mean baseline execution time, σ captures variance due to conditional branches, f(k_i) is a function dependent on the secret key bit k_i, and ε accounts for random noise. This formulation assumes that execution time varies predictably with secret-dependent decisions, such as conditional multiplications. Kocher's original setup demonstrated this by timing private-key operations over repeated trials to isolate the key-influenced components.

During analysis, statistical techniques correlate timing variations with secret values. Hypothesis testing evaluates whether observed time differences align with key-bit hypotheses, while regression models fit timing data to predict key dependencies, often using variance reduction as a metric for correct guesses. For instance, in a modular exponentiation using the square-and-multiply algorithm, each key bit determines whether an additional multiplication occurs after squaring: if the bit is 1, compute R_b = (s_b · y) mod n; if 0, skip the multiply. The resulting time distribution over a sequence of inputs allows derivation of per-bit guess probabilities, such as P(correct) = Φ(√(j(b − c) / (2(w − b)))).