Ziggurat algorithm

from Wikipedia

The ziggurat algorithm is an algorithm for pseudo-random number sampling. Belonging to the class of rejection sampling algorithms, it relies on an underlying source of uniformly distributed random numbers, typically from a pseudo-random number generator, as well as precomputed tables. The algorithm is used to generate values from a monotonically decreasing probability distribution. It can also be applied to symmetric unimodal distributions, such as the normal distribution, by choosing a value from one half of the distribution and then randomly choosing which half the value is considered to have been drawn from. It was developed by George Marsaglia and others in the 1960s.

A typical value produced by the algorithm only requires the generation of one random floating-point value and one random table index, followed by one table lookup, one multiply operation and one comparison. Sometimes (2.5% of the time, in the case of a normal or exponential distribution when using typical table sizes)[citation needed] more computations are required. Nevertheless, the algorithm is computationally much faster[citation needed] than the two most commonly used methods of generating normally distributed random numbers, the Marsaglia polar method and the Box–Muller transform, which require at least one logarithm and one square root calculation for each pair of generated values. However, since the ziggurat algorithm is more complex to implement, it is best used when large quantities of random numbers are required.

The term ziggurat algorithm dates from Marsaglia's paper with Wai Wan Tsang in 2000; it is so named because it is conceptually based on covering the probability distribution with rectangular segments stacked in decreasing order of size, resulting in a figure that resembles a ziggurat.

The ziggurat algorithm used to generate sample values with a normal distribution. (Only positive values are shown for simplicity.) The pink dots are initially uniformly distributed random numbers. The desired distribution function is first segmented into equal areas A. One layer i is selected at random by the uniform source at the left. Then a random value from the top source is multiplied by the width of the chosen layer, and the result x is tested to see which region of the layer it falls into, with three possible outcomes: 1) (left, solid black region) the sample is clearly under the curve and may be output immediately; 2) (right, vertically striped region) the sample value may lie under the curve and must be tested further, in which case a random y value within the chosen layer is generated and compared to f(x); if less, the point is under the curve and the value x is output; 3) otherwise (the third case), the chosen point x is rejected and the algorithm is restarted from the beginning.

Theory of operation

The ziggurat algorithm is a rejection sampling algorithm; it randomly generates a point in a distribution slightly larger than the desired distribution, then tests whether the generated point is inside the desired distribution. If not, it tries again. Given a random point underneath a probability density curve, its x coordinate is a random number with the desired distribution.

The distribution the ziggurat algorithm chooses from is made up of n equal-area regions: n − 1 rectangles that cover the bulk of the desired distribution, on top of a non-rectangular base that includes the tail of the distribution.

Given a monotone decreasing probability density function f(x), defined for all x ≥ 0, the base of the ziggurat is defined as all points inside the distribution and below y1 = f(x1). This consists of a rectangular region from (0, 0) to (x1, y1), and the (typically infinite) tail of the distribution, where x > x1 (and y < y1).

This layer (call it layer 0) has area A. On top of this, add a rectangular layer of width x1 and height A/x1, so it also has area A. The top of this layer is at height y1 + A/x1, and intersects the density function at a point (x2, y2), where y2 = f(x2). This layer includes every point in the density function between y1 and y2, but (unlike the base layer) also includes points such as (x1, y2) which are not in the desired distribution.

Further layers are then stacked on top. To use a precomputed table of size n (n = 256 is typical), one chooses x1 such that xn = 0, meaning that the top box, layer n − 1, reaches the distribution's peak at (0, f(0)) exactly.

Layer i extends vertically in the range yi ≤ y ≤ yi+1, and can be divided into two regions horizontally: the (generally larger) portion in the range 0 ≤ x ≤ xi+1, which is entirely contained within the desired distribution, and the (small) portion in the range xi+1 ≤ x ≤ xi, which is only partially contained.

Ignoring for a moment the problem of layer 0, and given uniform random variables U0 and U1 ∈ [0, 1), the ziggurat algorithm can be described as:

  1. Choose a random layer 0 ≤ i < n.
  2. Let x = U0·xi.
  3. If x < xi+1, return x.
  4. Let y = yi + U1·(yi+1 − yi).
  5. Compute f(x). If y < f(x), return x.
  6. Otherwise, choose new random numbers and go back to step 1.

Step 1 amounts to choosing a low-resolution y coordinate. Step 3 tests if the x coordinate is clearly within the desired density function without knowing more about the y coordinate. If it is not, step 4 chooses a high-resolution y coordinate, and step 5 does the rejection test.

With closely spaced layers, the algorithm terminates at step 3 a very large fraction of the time. For the top layer n − 1, however, this test always fails, because xn = 0.

Layer 0 can also be divided into a central region and an edge, but the edge is an infinite tail. To use the same algorithm to check if the point is in the central region, generate a fictitious x0 = A/y1. This will generate points with x < x1 with the correct frequency, and in the rare case that layer 0 is selected and x ≥ x1, use a special fallback algorithm to select a point at random from the tail. Because the fallback algorithm is used less than one time in a thousand, speed is not essential.

Thus, the full ziggurat algorithm for one-sided distributions is:

  1. Choose a random layer 0 ≤ i < n.
  2. Let x = U0·xi.
  3. If x < xi+1, return x.
  4. If i = 0, generate a point from the tail using the fallback algorithm.
  5. Let y = yi + U1·(yi+1 − yi).
  6. Compute f(x). If y < f(x), return x.
  7. Otherwise, choose new random numbers and go back to step 1.

For a two-sided distribution, the result must be negated 50% of the time. This can often be done conveniently by choosing U0 ∈ (−1, 1) and, in step 3, testing if |x| < xi+1.
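The full one-sided algorithm above can be sketched for the exponential distribution f(x) = e^−x, whose tail fallback is simply x1 − ln(U). This is a minimal illustration, not a library implementation: the table construction bisects on x1 until yn = f(0) = 1, as described in the later sections, and all names are illustrative.

```python
import math
import random

def build_tables(n=256):
    """Precompute layer edges x[0..n], y[0..n] for the exponential
    ziggurat (unnormalized density f(x) = e^-x, so f(0) = 1)."""
    def layers(x1):
        # Run the equal-area recursion for a trial x1; return the tables
        # plus y_n, which should come out equal to f(0) = 1.
        y1 = math.exp(-x1)
        area = x1 * y1 + y1           # base rectangle + tail area e^-x1
        xs = [area / y1, x1]          # x[0] = A/y1 is the fictitious base width
        ys = [0.0, y1]
        x, y = x1, y1
        for i in range(1, n):
            y += area / x             # y_{i+1} = y_i + A/x_i
            if y >= 1.0 and i < n - 1:
                return None, None, y  # overshot f(0) too early: x1 too small
            x = -math.log(y) if y < 1.0 else 0.0   # x_{i+1} = f^-1(y_{i+1})
            xs.append(x)
            ys.append(y)
        return xs, ys, y

    lo, hi = 0.5, 20.0                # y_n decreases as x1 grows, so bisect
    for _ in range(100):
        mid = (lo + hi) / 2
        if layers(mid)[2] > 1.0:
            lo = mid                  # overshoot: x1 too small
        else:
            hi = mid                  # undershoot: x1 too large
    xs, ys, _ = layers((lo + hi) / 2)
    return xs, ys

def ziggurat_exp(xs, ys, rng=random):
    """Draw one Exp(1) variate, following steps 1-7 above."""
    n = len(xs) - 1
    while True:
        i = rng.randrange(n)                   # 1. choose a random layer
        x = rng.random() * xs[i]               # 2. x = U0 * x_i
        if x < xs[i + 1]:                      # 3. clearly inside: accept
            return x
        if i == 0:                             # 4. tail fallback (memoryless)
            return xs[1] - math.log(rng.random())
        y = ys[i] + rng.random() * (ys[i + 1] - ys[i])   # 5. exact y
        if y < math.exp(-x):                   # 6. exact rejection test
            return x
        # 7. otherwise retry with fresh random numbers
```

With n = 256 the recovered x1 is about 7.70, and nearly all calls return at step 3.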

Fallback algorithms for the tail

Because the ziggurat algorithm generates most outputs very rapidly but requires a fallback algorithm whenever x > x1, it is always more complex than a more direct implementation. The specific fallback algorithm depends on the distribution.

For an exponential distribution, the tail looks just like the body of the distribution. One way is to fall back to the most elementary algorithm E = −ln(U1) and let x = x1 − ln(U1). Another is to call the ziggurat algorithm recursively and add x1 to the result.

For a normal distribution, Marsaglia suggests a compact algorithm:

  1. Let x = −ln(U1)/x1.
  2. Let y = −ln(U2).
  3. If 2y > x², return x + x1.
  4. Otherwise, go back to step 1.

Since x1 ≈ 3.5 for typical table sizes, the test in step 3 is almost always successful. Since −ln(U1) is an exponentially distributed variate, an implementation of the exponential distribution may be used.
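Marsaglia's compact tail algorithm translates directly; this is a minimal sketch with illustrative names, assuming the standard normal tail beyond x1:

```python
import math
import random

def normal_tail(x1, rng=random):
    """Sample from the standard normal distribution conditioned on x > x1,
    using Marsaglia's compact tail algorithm (steps 1-4 above)."""
    while True:
        x = -math.log(rng.random()) / x1   # 1. exponential variate scaled by 1/x1
        y = -math.log(rng.random())        # 2. a second exponential variate
        if 2 * y > x * x:                  # 3. accept: x + x1 lies in the tail
            return x + x1
        # 4. otherwise try again
```

Since the test in step 3 almost always succeeds when x1 ≈ 3.5, the loop rarely repeats.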

Optimizations

The algorithm can be performed efficiently with precomputed tables of xi and yi = f(xi), but there are some modifications to make it even faster:

  • Nothing in the ziggurat algorithm depends on the probability distribution function being normalized (integral under the curve equal to 1), so removing normalizing constants can speed up the computation of f(x).
  • Most uniform random number generators are based on integer random number generators which return an integer in the range [0, 2³² − 1]. A table of 2⁻³²xi lets you use such numbers directly for U0.
  • When computing two-sided distributions using a two-sided U0 as described earlier, the random integer can be interpreted as a signed number in the range [−2³¹, 2³¹ − 1], and a scale factor of 2⁻³¹ can be used.
  • Rather than comparing U0·xi to xi+1 in step 3, it is possible to precompute xi+1/xi and compare U0 with that directly. If U0 comes from an integer random number generator, these limits may be premultiplied by 2³² (or 2³¹, as appropriate) so an integer comparison can be used.
  • With the above two changes, the table of unmodified xi values is no longer needed and may be deleted.
  • When generating IEEE 754 single-precision floating point values, which only have a 24-bit mantissa (including the implicit leading 1), the least-significant bits of a 32-bit integer random number are not used. These bits may be used to select the layer number. (See the references below for a detailed discussion of this.)
  • The first three steps may be put into an inline function, which can call an out-of-line implementation of the less frequently needed steps.
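Several of these optimizations can be combined in a short sketch of the integer fast path. The widths below are made-up illustrative numbers, not a real ziggurat table, and the 3/29 bit split assumes n = 8 layers purely to keep the example small:

```python
import random

# Hypothetical per-layer widths x_i (illustrative only); in a real generator
# these come from the ziggurat setup, with the last entry x_n = 0.
xs = [8.70, 7.70, 6.90, 6.35, 5.95, 5.60, 5.30, 5.05, 0.0]
n = len(xs) - 1                 # 8 layers, so the layer index fits in 3 bits

# Premultiplied comparison table: k[i] = floor(2^29 * x_{i+1} / x_i), turning
# the step-3 test "U0 * x_i < x_{i+1}" into a single integer compare.
k = [int((xs[i + 1] / xs[i]) * (1 << 29)) for i in range(n)]

def fast_path(j):
    """Try the ziggurat fast path on one 32-bit random integer j.
    Returns a sample, or None to signal falling through to the slow path."""
    i = j & (n - 1)             # low bits select the layer "for free"
    u = j >> 3                  # the remaining 29 bits act as U0 in [0, 1)
    if u < k[i]:                # integer version of U0 * x_i < x_{i+1}
        return u * xs[i] / (1 << 29)
    return None                 # slow path: exact test or tail fallback

# One attempt with a fresh 32-bit integer:
sample = fast_path(random.getrandbits(32))
```

Note that k for the top layer is 0 (since xn = 0), so the fast path never accepts there, matching the observation that the step-3 test always fails for layer n − 1.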

Generating the tables

It is possible to store the entire table precomputed, or just include the values n, y1, A, and an implementation of f⁻¹(y) in the source code, and compute the remaining values when initializing the random number generator.

As previously described, you can find xi = f⁻¹(yi) and yi+1 = yi + A/xi. Repeat n − 1 times for the layers of the ziggurat. At the end, you should have yn = f(0). There will be some round-off error, but it is a useful sanity test to see that it is acceptably small.

When actually filling in the table values, just assume that xn = 0 and yn = f(0), and accept the slight difference in layer n − 1's area as rounding error.
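As a concrete sketch, the recursion can be run with Marsaglia and Tsang's published constants for the normal case with n = 128 (x1 = 3.442619855899, A ≈ 9.9126e-3, unnormalized f(x) = e^−x²/2); the residual computed at the end is the sanity test described above. Variable names are illustrative.

```python
import math

# Published constants for the normal ziggurat with n = 128 layers
# (Marsaglia & Tsang 2000, unnormalized density f(x) = exp(-x^2/2)):
N  = 128
X1 = 3.442619855899        # right edge of the base layer
A  = 9.91256303526217e-3   # common layer area

def f(x):     return math.exp(-x * x / 2)
def f_inv(y): return math.sqrt(-2 * math.log(y))

xs = [X1]
ys = [f(X1)]
for i in range(1, N):
    y = ys[-1] + A / xs[-1]     # y_{i+1} = y_i + A / x_i
    if i < N - 1:
        xs.append(f_inv(y))     # x_{i+1} = f^-1(y_{i+1})
    else:
        xs.append(0.0)          # top of the ziggurat: just assume x_n = 0
    ys.append(y)

# Sanity check: y_n should land on f(0) = 1 up to rounding error.
residual = abs(ys[-1] - 1.0)
```

Running this, the residual is tiny, confirming that the published x1 and A are mutually consistent for n = 128.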

Finding x1 and A

Given an initial guess at x1, you need a way to compute the area t of the tail for which x > x1. For the exponential distribution, this is just e^−x1, while for the normal distribution, assuming you are using the unnormalized f(x) = e^−x²/2, it is √(π/2)·erfc(x1/√2). For more awkward distributions, numerical integration may be required.

With this in hand, from x1, you can find y1 = f(x1), the area t in the tail, and the area of the base layer A = x1y1 + t.

Then compute the series yi and xi as above. If yi > f(0) for any i < n, then the initial estimate x1 was too low, leading to too large an area A. If yn < f(0), then the initial estimate x1 was too high.

Given this, use a root-finding algorithm (such as the bisection method) to find the value x1 which produces yn as close to f(0) as possible. Alternatively, look for the value which makes the area of the topmost layer, xn−1(f(0) − yn−1), as close to the desired value A as possible. This saves one evaluation of f⁻¹ and is actually the condition of greatest interest.
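The root-finding step can be sketched for the normal distribution with n = 128, using math.erfc for the tail area; bisection then recovers the well-known x1 ≈ 3.4426. This is an illustrative sketch, not a library routine.

```python
import math

N = 128
SQRT_HALF_PI = math.sqrt(math.pi / 2)

def f(x):     return math.exp(-x * x / 2)     # unnormalized normal density
def f_inv(y): return math.sqrt(-2 * math.log(y))

def y_top(x1):
    """Run the layer recursion for a trial x1 and return y_n (want f(0) = 1)."""
    tail = SQRT_HALF_PI * math.erfc(x1 / math.sqrt(2))  # area beyond x1
    area = x1 * f(x1) + tail                            # base-layer area A
    x, y = x1, f(x1)
    for _ in range(1, N):
        y += area / x
        if y >= 1.0:
            return y            # overshot f(0) (or finished on the top layer)
        x = f_inv(y)
    return y

# y_top decreases as x1 grows, so bisect toward y_n = f(0) = 1.
lo, hi = 1.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if y_top(mid) > 1.0:
        lo = mid                # x1 too small: area too large, overshoots f(0)
    else:
        hi = mid                # x1 too large: y_n falls short of f(0)
x1 = (lo + hi) / 2
```

The recovered root matches the constant x1 = 3.442619855899 used in published normal-ziggurat tables.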

McFarland's variation

Christopher D. McFarland has proposed a further-optimized version.[1] This applies three algorithmic changes, at the expense of slightly larger tables.

First, the common case considers only the rectangular portions, from (0, yi−1) to (xi, yi). The odd-shaped regions to the right of these (mostly almost triangular, plus the tail) are handled separately. This simplifies and speeds up the algorithm's fast path.

Second, the exact area of the odd-shaped regions is used; they are not rounded up to include the entire rectangle to (xi−1, yi). This increases the probability that the fast path will be used.

One major consequence of this is that the number of layers is slightly less than n. Even though the area of the odd-shaped portions is taken exactly, the total adds up to more than one layer's worth. The area per layer is adjusted so that the number of rectangular layers is an integer. If the initially chosen layer index 0 ≤ i < n exceeds the number of rectangular layers, phase 2 proceeds.

If the value sought lies in any of the odd-shaped regions, the alias method is used to choose one, based on its true area. This is a small amount of additional work, and requires additional alias tables, but chooses one of the layers' right-hand sides.

The chosen odd-shaped region is rejection sampled, but if a sample is rejected, the algorithm does not return to the beginning. The true area of each odd-shaped region was used to choose a layer, so the rejection-sampling loop stays in that layer until a point is chosen.

Third, the almost-triangular shape of most odd-shaped portions is exploited, although this must be divided into three cases depending on the second derivative of the probability distribution function in the selected layer.

If the function is convex (as the exponential distribution is everywhere, and the normal distribution is for |x| > 1), then the function is strictly contained within the lower triangle. Two unit uniform deviates U1 and U2 are chosen, and before they are scaled to the rectangle enclosing the odd-shaped region, their sum is tested. If U1 + U2 > 1, the point is in the upper triangle and can be reflected to (1−U1, 1−U2). Then, if U1 + U2 < 1−ε, for some suitable tolerance ε, the point is definitely below the curve and can immediately be accepted. Only for points very close to the diagonal is it necessary to compute the distribution function f(x) to perform an exact rejection test. (The tolerance ε should in theory depend on the layer, but a single maximum value can be used on all layers with little loss.)

If the function is concave (as the normal distribution is for |x| < 1), it includes a small portion of the upper triangle so reflection is impossible, but points whose normalized coordinates satisfy U1 + U2 ≤ 1 can be immediately accepted, and points for which U1 + U2 > 1+ε can be immediately rejected.

In the one layer which straddles |x| = 1, the normal distribution has an inflection point, and the exact rejection test must be applied if 1−ε < U1 + U2 < 1+ε.

The tail is handled as in the original Ziggurat algorithm, and can be thought of as a fourth case for the shape of the odd-shaped region to the right.
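The convex-case reflection-and-tolerance test above can be sketched as follows. This is an illustrative sketch, not McFarland's actual code: the region bounds, the helper name, and the eps value are assumptions, and eps must bound the curve's (normalized) deviation from the box diagonal in the chosen layer.

```python
import math
import random

def sample_overhang_convex(x_lo, x_hi, y_lo, y_hi, f, eps, rng=random):
    """Sample an x from the odd-shaped region of one layer: the part of the
    box [x_lo, x_hi] x [y_lo, y_hi] lying under a convex, decreasing f.
    eps must bound the normalized gap between f and the box diagonal."""
    while True:
        u1, u2 = rng.random(), rng.random()
        if u1 + u2 > 1:                 # upper triangle: reflect the point
            u1, u2 = 1 - u1, 1 - u2     # instead of wasting the deviates
        x = x_lo + u1 * (x_hi - x_lo)
        if u1 + u2 < 1 - eps:           # safely below the near-diagonal curve
            return x
        y = y_lo + u2 * (y_hi - y_lo)   # close call: exact rejection test
        if y < f(x):
            return x
        # rejected: stay in this same region and retry, per the variation

# Example: one thin layer of the exponential density e^-x over [1.0, 1.1].
f = lambda v: math.exp(-v)
x = sample_overhang_convex(1.0, 1.1, f(1.1), f(1.0), f, eps=0.05)
```

For thin layers the curve hugs the diagonal, so the exact test in the last step is needed only for the small fraction of points with 1−ε ≤ U1 + U2 ≤ 1.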

References

from Grokipedia
The Ziggurat algorithm is a technique designed for efficiently generating random variates from univariate probability distributions with densities that decrease monotonically in the tails, such as the normal and exponential distributions. Introduced by George Marsaglia and Wai Wan Tsang in 2000, it approximates the target density using a stack of rectangular slabs (the "ziggurat") beneath the curve, enabling most samples to be accepted with a single uniform random variate and minimal computation. At its core, the algorithm relies on precomputed tables of widths w_i and heights k_i for a fixed number of slabs (typically 128 or 256), where a random index selects a slab, and a point within it is tested for acceptance under the density function. In approximately 99% of invocations, the sample is accepted directly without further rejection steps, making it exceptionally fast: capable of producing over 15 million normal variates per second on circa-2000 hardware. For the rare tail cases beyond the slabs, it falls back to specialized methods, such as Marsaglia's setup for normal tails involving logarithmic uniforms. The method's simplicity allows inline implementation in C or other languages, with setup routines generating the tables based on the desired distribution. The Ziggurat algorithm has become a standard in statistical computing libraries due to its speed and ease of adaptation, outperforming alternatives like the Box–Muller transform for normal generation in high-volume scenarios. Subsequent refinements, such as Jurgen Doornik's 2005 ZIGNOR variant, addressed minor randomness quality issues in the original by decoupling the uniform variates used for indexing and coordinates, while preserving the core efficiency. It remains influential in fields requiring massive random sampling, including Monte Carlo simulations.

Introduction

Definition and Purpose

The Ziggurat algorithm is a technique designed to generate pseudorandom variates from target probability density functions (PDFs), particularly those that are decreasing or unimodal, such as the Gaussian, exponential, or gamma distributions. In rejection sampling, uniform random numbers are used to sample points from a proposal distribution that envelopes the target density f(x), accepting only those points that fall below the target curve to ensure the correct distribution is produced. The core purpose of the Ziggurat method is to enable fast generation of such variates, especially where direct methods like inverse transform sampling are computationally inefficient due to the need for solving complex equations or handling unbounded supports. Developed by George Marsaglia and Wai Wan Tsang, it prioritizes speed for large-scale simulations requiring millions of samples. Conceptually, the algorithm constructs a "ziggurat" by stacking horizontal rectangles beneath the target curve, forming a stepped envelope that covers the PDF from its mode downward, with each layer representing an equal-area region for efficient sampling. This structure, named after the ancient Mesopotamian stepped pyramids, allows for rapid proposal generation by selecting a layer and sampling a point uniformly within it, minimizing the envelope's excess area over the target to reduce rejections. The layers are arranged such that the topmost layer aligns with the PDF's peak, and subsequent layers decrease in height while extending further along the x-axis, providing a tight bound particularly suited to heavy-tailed or unbounded distributions like the Gaussian. For a sampled point (u, x), where u is uniform on [0, 1] and x falls within a selected layer, the acceptance criterion is to retain x if u·f(0) < f(x), ensuring the generated variates follow the target distribution f. This setup yields high acceptance rates, often around 99%, which significantly lowers computational overhead compared to traditional inversion methods.
The algorithm was specifically engineered for high-speed performance in numerical simulations, achieving rates such as 15 million normal variates per second on a 400 MHz processor using optimized C implementations.

Historical Background

The Ziggurat algorithm traces its origins to rejection sampling techniques pioneered in the mid-20th century, with foundational work in the 1960s including the rectangle-wedge-tail method developed by George Marsaglia, Marshall D. MacLaren, and Thomas A. Bray for efficient generation of normal random variates. This approach laid the groundwork by using geometric coverings to accelerate sampling from unimodal densities, though it was computationally intensive for tails. The Ziggurat method emerged as a table-based refinement of such ideas, formalizing a layered rectangle structure to minimize rejection rates. In the early 1980s, Marsaglia and Wai Wan Tsang began developing the core concepts for sampling from decreasing densities, culminating in their 1984 publication, "A Fast, Easily Implemented Method for Sampling from Decreasing or Symmetric Unimodal Density Functions," which introduced an efficient acceptance-rejection framework using stacked rectangles. This work, initially focused on practical implementation for symmetric and decreasing distributions, represented an internal evolution toward streamlined random variate generation but remained less widely disseminated until later refinements. The algorithm received its name and broader recognition in the seminal 2000 paper by Marsaglia and Tsang, "The Ziggurat Method for Generating Random Variables," which presented the one-dimensional version tailored for the normal distribution and emphasized its speed through precomputed tables. This publication marked the evolution of the 1984 method into a publicly accessible, optimized technique, capable of generating millions of variates per second on contemporary hardware. Extensions in the same paper adapted the approach for the exponential distribution and other decreasing densities, enhancing its versatility. Further evolution came in 2005 with a comment on the implementation by Philip H. W. Leong and colleagues, who proposed improvements addressing potential issues with the uniform random number generator in prior versions. The algorithm's practical impact was solidified in the late 1990s through its adoption in MATLAB's randn function for normal random number generation, influencing widespread use in statistical software and simulations.

Theoretical Foundation

Rectangle Covering Approach

The rectangle covering approach forms the core of the Ziggurat algorithm, providing a geometric approximation to the target probability density function f(x) that facilitates efficient rejection sampling. The density f(x) is assumed to be positive and monotonically decreasing for |x| > 0, and integrable over the real line with total integral equal to 1. A canonical example is the standard normal density, φ(x) = (1/√(2π)) exp(−x²/2).