Hypergeometric distribution

from Wikipedia

Hypergeometric
Probability mass function (plot)
Cumulative distribution function (plot)

Parameters:      N ∈ {0, 1, 2, …} (population size)
                 K ∈ {0, 1, …, N} (number of success states)
                 n ∈ {0, 1, …, N} (number of draws)
Support:         k ∈ {max(0, n + K − N), …, min(n, K)}
PMF:             $\binom{K}{k}\binom{N-K}{n-k} \big/ \binom{N}{n}$
CDF:             expressible in terms of the generalized hypergeometric function ${}_3F_2$
Mean:            $nK/N$
Mode:            $\left\lfloor \frac{(n+1)(K+1)}{N+2} \right\rfloor$
Variance:        $n \frac{K}{N}\left(1 - \frac{K}{N}\right)\frac{N-n}{N-1}$
Skewness:        $\frac{(N-2K)(N-1)^{1/2}(N-2n)}{[nK(N-K)(N-n)]^{1/2}(N-2)}$
Excess kurtosis: $\frac{(N-1)N^2\bigl(N(N+1)-6K(N-K)-6n(N-n)\bigr)+6nK(N-K)(N-n)(5N-6)}{nK(N-K)(N-n)(N-2)(N-3)}$
MGF:             $\frac{\binom{N-K}{n}}{\binom{N}{n}}\,{}_2F_1(-n, -K; N-K-n+1; e^t)$
CF:              $\frac{\binom{N-K}{n}}{\binom{N}{n}}\,{}_2F_1(-n, -K; N-K-n+1; e^{it})$

In probability theory and statistics, the hypergeometric distribution is a discrete probability distribution that describes the probability of k successes (random draws for which the object drawn has a specified feature) in n draws, without replacement, from a finite population of size N that contains exactly K objects with that feature, wherein each draw is either a success or a failure. In contrast, the binomial distribution describes the probability of k successes in n draws with replacement.

Definitions


Probability mass function


The following conditions characterize the hypergeometric distribution:

  • The result of each draw (the elements of the population being sampled) can be classified into one of two mutually exclusive categories (e.g. Pass/Fail or Employed/Unemployed).
  • The probability of a success changes on each draw, as each draw decreases the population (sampling without replacement from a finite population).

A random variable X follows the hypergeometric distribution if its probability mass function (pmf) is given by[1]

    $p_X(k) = \Pr(X = k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}},$

where

  • N is the population size,
  • K is the number of success states in the population,
  • n is the number of draws (i.e. quantity drawn in each trial),
  • k is the number of observed successes,
  • $\binom{a}{b}$ is a binomial coefficient.

The pmf is positive when max(0, n + K − N) ≤ k ≤ min(n, K).

A random variable X distributed hypergeometrically with parameters N, K and n is written X ~ Hypergeometric(N, K, n) and has the probability mass function above.

Combinatorial identities


As required, we have

    $\sum_{k} \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}} = 1,$

which essentially follows from Vandermonde's identity from combinatorics, $\sum_{k} \binom{K}{k}\binom{N-K}{n-k} = \binom{N}{n}$.

Also note that

    $\frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}} = \frac{\binom{n}{k}\binom{N-n}{K-k}}{\binom{N}{K}}.$

This identity can be shown by expressing the binomial coefficients in terms of factorials and rearranging the latter. Additionally, it follows from the symmetry of the problem, described in two different but interchangeable ways.

For example, consider two rounds of drawing without replacement. In the first round, K out of N neutral marbles are drawn from an urn without replacement and coloured green. Then the colored marbles are put back. In the second round, n marbles are drawn without replacement and colored red. Then, the number of marbles with both colors on them (that is, the number of marbles that have been drawn twice) has the hypergeometric distribution. The symmetry in K and n stems from the fact that the two rounds are independent, and one could have started by drawing n balls and colouring them red first.

Note that we are interested in the probability of k successes in n draws without replacement, since the probability of success on each trial is not the same, as the size of the remaining population changes as we remove each marble. Keep in mind not to confuse this with the binomial distribution, which describes the probability of k successes in n draws with replacement.

Properties


Working example


The classical application of the hypergeometric distribution is sampling without replacement. Think of an urn with two colors of marbles, red and green. Define drawing a green marble as a success and drawing a red marble as a failure. Let N describe the number of all marbles in the urn (see contingency table below) and K describe the number of green marbles, then N − K corresponds to the number of red marbles. Now, standing next to the urn, you close your eyes and draw n marbles without replacement. Define X as a random variable whose outcome is k, the number of green marbles drawn in the experiment. This situation is illustrated by the following contingency table:

                drawn       not drawn          total
green marbles   k           K − k              K
red marbles     n − k       N + k − n − K      N − K
total           n           N − n              N

Indeed, we are interested in calculating the probability of drawing k green marbles in n draws, given that there are K green marbles out of a total of N marbles. For this example, assume that there are 5 green and 45 red marbles in the urn. Standing next to the urn, you close your eyes and draw 10 marbles without replacement. What is the probability that exactly 4 of the 10 are green?

This problem is summarized by the following contingency table:

                drawn       not drawn              total
green marbles   k = 4       K − k = 1              K = 5
red marbles     n − k = 6   N + k − n − K = 39     N − K = 45
total           n = 10      N − n = 40             N = 50

To find the probability of drawing k green marbles in exactly n draws out of N total marbles, we identify X as a hypergeometric random variable and use the formula

    $P(X = k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}.$

To intuitively explain the given formula, consider the two symmetric problems represented by the identity

    $\frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}} = \frac{\binom{n}{k}\binom{N-n}{K-k}}{\binom{N}{K}}:$

  1. Left-hand side: drawing a total of only n marbles out of the urn. We want to find the probability of drawing k green marbles out of K total green marbles, and n − k red marbles out of N − K red marbles, in these n rounds.
  2. Right-hand side: alternatively, drawing all N marbles out of the urn. We want to find the probability of drawing k green marbles in n draws out of the total N draws, and K − k green marbles in the remaining N − n draws.

Back to the calculations, we use the formula above to calculate the probability of drawing exactly k green marbles:

    $P(X = 4) = \frac{\binom{5}{4}\binom{45}{6}}{\binom{50}{10}} = \frac{5 \cdot 8145060}{10272278170} \approx 0.003964.$

Intuitively we would expect it to be even more unlikely that all 5 green marbles will be among the 10 drawn:

    $P(X = 5) = \frac{\binom{5}{5}\binom{45}{5}}{\binom{50}{10}} = \frac{1 \cdot 1221759}{10272278170} \approx 0.000119.$

As expected, the probability of drawing 5 green marbles is roughly 33 times less likely than that of drawing 4.
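
These numbers are easy to check numerically. Below is a minimal sketch with SciPy, whose hypergeom takes parameters in the order M = population size, n = success states, N = draws:

```python
from scipy.stats import hypergeom

# SciPy's convention: hypergeom(M, n, N) with M = population size,
# n = number of success states, N = number of draws.
rv = hypergeom(50, 5, 10)        # 50 marbles, 5 green, draw 10

print(rv.pmf(4))                 # ~0.003964, exactly 4 green drawn
print(rv.pmf(5))                 # ~0.000119, all 5 green drawn
print(rv.pmf(4) / rv.pmf(5))     # ~33.3
```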

Symmetries


Swapping the roles of green and red marbles:

    $p_X(k; N, K, n) = p_X(n - k; N, N - K, n)$

Swapping the roles of drawn and not drawn marbles:

    $p_X(k; N, K, n) = p_X(K - k; N, K, N - n)$

Swapping the roles of green and drawn marbles:

    $p_X(k; N, K, n) = p_X(k; N, n, K)$

These symmetries generate the dihedral group $D_4$.

Order of draws


The probability of drawing any set of green and red marbles (the hypergeometric distribution) depends only on the numbers of green and red marbles, not on the order in which they appear; i.e., it is an exchangeable distribution. As a result, the probability of drawing a green marble in the i-th draw is[2]

    $P(G_i) = \frac{K}{N}.$

This is an ex ante probability—that is, it is based on not knowing the results of the previous draws.

Tail bounds


Let X ~ Hypergeometric(N, K, n) and p = K/N. Then for 0 < t < K/N we can derive the following bounds:[3]

    $\Pr[X \le (p - t)n] \le e^{-n\,D(p-t \parallel p)} \le e^{-2t^2 n}$
    $\Pr[X \ge (p + t)n] \le e^{-n\,D(p+t \parallel p)} \le e^{-2t^2 n}$

where

    $D(a \parallel b) = a \log\frac{a}{b} + (1-a)\log\frac{1-a}{1-b}$

is the Kullback–Leibler divergence and it is used that $D(a \parallel b) \ge 2(a - b)^2$.[4]

Note: In order to derive the previous bounds, one has to start by observing that $X = \sum_{i=1}^{n} Y_i$, where the $Y_i$ are dependent Bernoulli random variables with a specific joint distribution. Because most theorems about bounds on sums of random variables concern independent sequences, one first considers a sequence $Z_i$ of independent random variables with the same marginal distribution and applies the theorems to $Z = \sum_{i=1}^{n} Z_i$. It is then proved by Hoeffding[3] that the results and bounds obtained via this process hold for $X$ as well.

If n is larger than N/2, it can be useful to apply symmetry to "invert" the bounds, which gives the following:[4][5]

    $\Pr[X \le (p - t)n] \le e^{-(N-n)\,D\left(p + \frac{tn}{N-n} \parallel p\right)} \le e^{-2t^2 n \frac{n}{N-n}}$
    $\Pr[X \ge (p + t)n] \le e^{-(N-n)\,D\left(p - \frac{tn}{N-n} \parallel p\right)} \le e^{-2t^2 n \frac{n}{N-n}}$

Statistical Inference


Hypergeometric test


The hypergeometric test uses the hypergeometric distribution to measure the statistical significance of having drawn a sample consisting of a specific number k of successes (out of n total draws) from a population of size N containing K successes. In a test for over-representation of successes in the sample, the hypergeometric p-value is calculated as the probability of randomly drawing k or more successes from the population in n total draws. In a test for under-representation, the p-value is the probability of randomly drawing k or fewer successes.

Biologist and statistician Ronald Fisher

The test based on the hypergeometric distribution (hypergeometric test) is identical to the corresponding one-tailed version of Fisher's exact test.[6] Reciprocally, the p-value of a two-sided Fisher's exact test can be calculated as the sum of two appropriate hypergeometric tests (for more information see[7]).

The test is often used to identify which sub-populations are over- or under-represented in a sample. This test has a wide range of applications. For example, a marketing group could use the test to understand their customer base by testing a set of known customers for over-representation of various demographic subgroups (e.g., women, people under 30).

Related distributions

Let X ~ Hypergeometric(N, K, n) and p = K/N.

  • If n = 1 then X has a Bernoulli distribution with parameter p.
  • Let Y have a binomial distribution with parameters n and p; this models the number of successes in the analogous sampling problem with replacement. If N and K are large compared to n, and p is not close to 0 or 1, then X and Y have similar distributions, i.e., $P(X \le k) \approx P(Y \le k)$.
  • If n is large, N and K are large compared to n, and p is not close to 0 or 1, then

        $P(X \le k) \approx \Phi\!\left(\frac{k - np}{\sqrt{np(1-p)}}\right),$

where Φ is the standard normal distribution function.

The following table describes four distributions related to the number of successes in a sequence of draws:

                            With replacements                  No replacements
Given number of draws       binomial distribution              hypergeometric distribution
Given number of failures    negative binomial distribution     negative hypergeometric distribution

Multivariate hypergeometric distribution

Multivariate hypergeometric distribution
Parameters:  c ∈ ℕ (number of colors)
             $(K_1, \dots, K_c)$ with $\sum_{i=1}^{c} K_i = N$
             n ∈ {0, 1, …, N} (number of draws)
Support:     $\{(k_1, \dots, k_c) : 0 \le k_i \le K_i,\ \sum_{i=1}^{c} k_i = n\}$
PMF:         $\frac{\prod_{i=1}^{c} \binom{K_i}{k_i}}{\binom{N}{n}}$
Mean:        $E[k_i] = \frac{n K_i}{N}$
Variance:    $\mathrm{Var}(k_i) = n \frac{K_i}{N}\left(1 - \frac{K_i}{N}\right)\frac{N-n}{N-1}$
             $\mathrm{Cov}(k_i, k_j) = -n \frac{K_i}{N}\frac{K_j}{N}\frac{N-n}{N-1}$ for $i \ne j$
The model of an urn with green and red marbles can be extended to the case where there are more than two colors of marbles. If there are K_i marbles of color i in the urn and you take n marbles at random without replacement, then the number of marbles of each color in the sample, (k_1, k_2, …, k_c), has the multivariate hypergeometric distribution:

    $P(X_1 = k_1, \dots, X_c = k_c) = \frac{\prod_{i=1}^{c}\binom{K_i}{k_i}}{\binom{N}{n}}.$

This has the same relationship to the multinomial distribution that the hypergeometric distribution has to the binomial distribution—the multinomial distribution is the "with-replacement" distribution and the multivariate hypergeometric is the "without-replacement" distribution.

The properties of this distribution are given in the adjacent table,[8] where c is the number of different colors and $N = \sum_{i=1}^{c} K_i$ is the total number of marbles in the urn.

Example


Suppose there are 5 black, 10 white, and 15 red marbles in an urn. If six marbles are chosen without replacement, the probability that exactly two of each color are chosen is

    $P(2, 2, 2) = \frac{\binom{5}{2}\binom{10}{2}\binom{15}{2}}{\binom{30}{6}} = \frac{10 \cdot 45 \cdot 105}{593775} \approx 0.0796.$
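
For a quick numerical check, SciPy's multivariate_hypergeom can evaluate the same probability; a minimal sketch, where x are the observed counts, m the per-color totals, and n the number of draws:

```python
from scipy.stats import multivariate_hypergeom

# 5 black, 10 white, 15 red marbles; draw 6 without replacement
p = multivariate_hypergeom.pmf(x=[2, 2, 2], m=[5, 10, 15], n=6)
print(p)   # ~0.0796, i.e. 47250/593775
```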

Occurrence and applications


Application to auditing elections

Samples used for election audits and resulting chance of missing a problem

Election audits typically test a sample of machine-counted precincts to see if recounts by hand or machine match the original counts. Mismatches result in either a report or a larger recount. The sampling rates are usually defined by law, not statistical design, so for a legally defined sample size n, what is the probability of missing a problem which is present in K precincts, such as a hack or bug? This is the probability that k = 0. Bugs are often obscure, and a hacker can minimize detection by affecting only a few precincts, which will still affect close elections, so a plausible scenario is for K to be on the order of 5% of N. Audits typically cover 1% to 10% of precincts (often 3%),[9][10][11] so they have a high chance of missing a problem. For example, if a problem is present in 5 of 100 precincts, a 3% sample has 86% probability that k = 0 so the problem would not be noticed, and only 14% probability of the problem appearing in the sample (positive k):

    $P(k = 0) = \frac{\binom{5}{0}\binom{95}{3}}{\binom{100}{3}} = \frac{95 \cdot 94 \cdot 93}{100 \cdot 99 \cdot 98} \approx 0.86.$

The sample would need 45 precincts in order to have probability under 5% that k = 0 in the sample, and thus have probability over 95% of finding the problem:

    $P(k = 0) = \frac{\binom{5}{0}\binom{95}{45}}{\binom{100}{45}} = \frac{55 \cdot 54 \cdot 53 \cdot 52 \cdot 51}{100 \cdot 99 \cdot 98 \cdot 97 \cdot 96} \approx 0.046.$
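
A short sketch of how this minimal sample size can be found numerically with SciPy's hypergeom, assuming the sampled precincts are drawn uniformly at random:

```python
from scipy.stats import hypergeom

# Smallest audit sample of N = 100 precincts (K = 5 bad) with P(k = 0) < 5%
N, K = 100, 5
n = next(n for n in range(1, N + 1) if hypergeom.pmf(0, N, K, n) < 0.05)
print(n, hypergeom.pmf(0, N, K, n))   # 45, ~0.046
```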

Application to Texas hold'em poker


In hold'em poker players make the best hand they can combining the two cards in their hand with the 5 cards (community cards) eventually turned up on the table. The deck has 52 cards and there are 13 of each suit. For this example assume a player has 2 clubs in the hand and there are 3 cards showing on the table, 2 of which are also clubs. The player would like to know the probability of one of the next 2 cards to be shown being a club to complete the flush.
(Note that the probability calculated in this example assumes no information is known about the cards in the other players' hands; however, experienced poker players may consider how the other players place their bets (check, call, raise, or fold) in considering the probability for each scenario. Strictly speaking, the approach to calculating success probabilities outlined here is accurate in a scenario where there is just one player at the table; in a multiplayer game this probability might be adjusted somewhat based on the betting play of the opponents.)

There are 4 clubs showing so there are 9 clubs still unseen. There are 5 cards showing (2 in the hand and 3 on the table) so there are 52 − 5 = 47 still unseen.

The probability that one of the next two cards turned is a club can be calculated using hypergeometric with k = 1, n = 2, K = 9 and N = 47:

    $P(X = 1) = \frac{\binom{9}{1}\binom{38}{1}}{\binom{47}{2}} = \frac{342}{1081} \approx 0.3164$ (about 31.64%).

The probability that both of the next two cards turned are clubs can be calculated using hypergeometric with k = 2, n = 2, K = 9 and N = 47:

    $P(X = 2) = \frac{\binom{9}{2}\binom{38}{0}}{\binom{47}{2}} = \frac{36}{1081} \approx 0.0333$ (about 3.33%).

The probability that neither of the next two cards turned are clubs can be calculated using hypergeometric with k = 0, n = 2, K = 9 and N = 47:

    $P(X = 0) = \frac{\binom{9}{0}\binom{38}{2}}{\binom{47}{2}} = \frac{703}{1081} \approx 0.6503$ (about 65.03%).
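
All three probabilities come from the same frozen distribution; a minimal SciPy sketch:

```python
from scipy.stats import hypergeom

# 9 clubs among 47 unseen cards; 2 cards to come
rv = hypergeom(47, 9, 2)
print(rv.pmf(1))   # ~0.3164, exactly one club
print(rv.pmf(2))   # ~0.0333, both clubs
print(rv.pmf(0))   # ~0.6503, no club
print(rv.sf(0))    # ~0.3497, at least one club (completes the flush)
```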

Application to Keno


The hypergeometric distribution is indispensable for calculating Keno odds. In Keno, 20 balls are randomly drawn from a collection of 80 numbered balls in a container, rather like American Bingo. Prior to each draw, a player selects a certain number of spots by marking a paper form supplied for this purpose. For example, a player might play a 6-spot by marking 6 numbers, each from a range of 1 through 80 inclusive. Then (after all players have taken their forms to a cashier and been given a duplicate of their marked form, and paid their wager) 20 balls are drawn. Some of the balls drawn may match some or all of the balls selected by the player. Generally speaking, the more hits (balls drawn that match player numbers selected) the greater the payoff.

For example, if a customer bets ("plays") $1 for a 6-spot (not an uncommon example) and hits 4 out of the 6, the casino would pay out $4. Payouts can vary from one casino to the next, but $4 is a typical value here. The probability of this event is:

    $P(\text{4 hits}) = \frac{\binom{20}{4}\binom{60}{2}}{\binom{80}{6}} = \frac{4845 \cdot 1770}{300500200} \approx 0.02854.$

Similarly, the chance for hitting 5 spots out of 6 selected is $\frac{\binom{20}{5}\binom{60}{1}}{\binom{80}{6}} \approx 0.003096$, while a typical payout might be $88. The payout for hitting all 6 would be around $1500 (probability ≈ 0.000128985 or 7752-to-1). The only other nonzero payout might be $1 for hitting 3 numbers (i.e., you get your bet back), which has a probability near 0.129819548.

Taking the sum of products of payouts times corresponding probabilities we get an expected return of 0.70986492 or roughly 71% for a 6-spot, for a house advantage of 29%. Other spots-played have a similar expected return. This very poor return (for the player) is usually explained by the large overhead (floor space, equipment, personnel) required for the game.
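
A minimal sketch reproducing this expected-return calculation, assuming the payout schedule quoted above:

```python
from scipy.stats import hypergeom

# 6-spot ticket: 80 balls, 20 drawn, 6 marked by the player
rv = hypergeom(80, 20, 6)
payouts = {3: 1, 4: 4, 5: 88, 6: 1500}   # typical payouts from the text
expected = sum(pay * rv.pmf(hits) for hits, pay in payouts.items())
print(expected)   # ~0.7099 per $1 wagered, i.e. ~29% house edge
```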

from Grokipedia
The hypergeometric distribution is a discrete probability distribution that models the probability of k successes in n draws without replacement from a finite population of size N that contains exactly K items of the success type.[1] This distribution arises in sampling scenarios where the population is finite and draws affect subsequent probabilities, distinguishing it from the binomial distribution, which assumes independence via replacement or an infinite population.[2] The probability mass function is $P(X = k) = \binom{K}{k}\binom{N-K}{n-k}\big/\binom{N}{n}$, where k ranges from max(0, n + K − N) to min(n, K), and the binomial coefficients $\binom{a}{b}$ count combinations of b items from a.[2] The expected value is nK/N, reflecting the proportion of successes in the population scaled by sample size, while the variance is $n\frac{K}{N}\left(1-\frac{K}{N}\right)\frac{N-n}{N-1}$, which accounts for the finite population correction that reduces variability compared to the binomial case.[3] For large N relative to n, the hypergeometric approximates the binomial distribution with success probability K/N.[4] Key applications include quality control inspections, where a batch of N items with K defectives is sampled without replacement, and exact tests for independence in contingency tables, such as Fisher's exact test in statistics.[5]

Definition

Probability Mass Function

The probability mass function (PMF) of the hypergeometric distribution specifies the probability $\Pr(X = k)$ that a random variable $X$, representing the number of observed successes in $n$ draws without replacement from a population of size $N$ with $K$ total successes, equals a specific integer $k$. This PMF is expressed as
$$p_X(k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}},$$
where $\binom{\cdot}{\cdot}$ denotes the binomial coefficient, defined for $k$ ranging over the integers satisfying $\max(0, n + K - N) \le k \le \min(n, K)$, and $p_X(k) = 0$ otherwise.[2][6] This formula derives from combinatorial counting principles: the denominator $\binom{N}{n}$ counts all possible ways to select $n$ items from the $N$ available, while the numerator $\binom{K}{k}\binom{N-K}{n-k}$ enumerates the favorable outcomes where exactly $k$ successes are selected from the $K$ successes and $n-k$ non-successes from the $N-K$ non-successes.[7] The ratio yields the exact probability under the uniform assumption over all subsets of size $n$.[8] The PMF is zero outside the specified support because it is impossible to observe more successes than are available in the population ($k > K$), more than are drawn ($k > n$), or negative successes; the lower bound ensures feasibility given the number of non-successes.[2] All probabilities sum to 1 over the support, confirming it as a valid PMF for the discrete uniform sampling model.[6]

Parameters and Support

The hypergeometric distribution is parameterized by three non-negative integers: the total population size $N$, the number of success states (or "marked" items) in the population $K$, and the number of draws (sample size) $n$.[9] These parameters must satisfy the constraints $0 \le K \le N$ and $0 \le n \le N$, ensuring the model reflects a finite population sampled without replacement where the number of successes cannot exceed the population totals.[9] The support of the random variable $X$ (the number of successes in the sample) consists of all integers $k$ in the range from $\max(0, n + K - N)$ to $\min(n, K)$, inclusive; probabilities are zero outside this interval due to the combinatorial impossibility of exceeding available successes or draws while accounting for the finite non-successes in the population.[10] This bounded support distinguishes the hypergeometric from distributions like the binomial, as it enforces dependence induced by without-replacement sampling.[9]

Mathematical Properties

Moments and Expectations

The expected value of the hypergeometric random variable $X$, the number of successes in a sample of size $n$ drawn without replacement from a population of size $N$ containing $K$ successes, is $\mathbb{E}[X] = n\frac{K}{N}$. This follows from expressing $X$ as the sum of $n$ indicator variables $I_j$ for the $j$-th draw being a success, where $\mathbb{E}[I_j] = \frac{K}{N}$ for each $j$ by symmetry, and applying linearity of expectation, $\mathbb{E}[X] = \sum_{j=1}^{n}\mathbb{E}[I_j] = n\frac{K}{N}$, which holds despite the dependence between draws.[2][11]

The variance is
$$\mathrm{Var}(X) = n\frac{K}{N}\left(1 - \frac{K}{N}\right)\frac{N-n}{N-1}.$$
To derive this, compute $\mathrm{Var}(X) = \sum_{j=1}^{n}\mathrm{Var}(I_j) + \sum_{j \ne \ell}\mathrm{Cov}(I_j, I_\ell)$, where $\mathrm{Var}(I_j) = \frac{K}{N}\left(1 - \frac{K}{N}\right)$ and $\mathrm{Cov}(I_j, I_\ell) = -\frac{K}{N}\left(1 - \frac{K}{N}\right)\frac{1}{N-1}$ for $j \ne \ell$; summing and simplifying yields the formula above. The factor $\frac{N-n}{N-1}$ reflects reduced variability from sampling without replacement relative to the binomial case.[2][11]

Higher moments exist in closed form but grow complex. The skewness is
$$\gamma_1 = \frac{(N-2K)(N-1)^{1/2}(N-2n)}{[nK(N-K)(N-n)]^{1/2}(N-2)},$$
which is positive when $K$ and $n$ are both less than $N/2$ or both greater than $N/2$, and vanishes when $K = N/2$ or $n = N/2$.[12] The excess kurtosis is
$$\gamma_2 = \frac{(N-1)N^2\bigl(N(N+1) - 6K(N-K) - 6n(N-n)\bigr) + 6nK(N-K)(N-n)(5N-6)}{nK(N-K)(N-n)(N-2)(N-3)},$$
which is negative (lighter tails than the normal distribution) for some parameter ranges and positive for others; exact computation for specific parameters requires evaluating these expressions directly.[12] Recursive relations, such as $\mathbb{E}[X^r] = \frac{nK}{N}\,\mathbb{E}[(Y+1)^{r-1}]$ where $Y \sim \mathrm{Hypergeometric}(N-1, K-1, n-1)$, facilitate numerical evaluation of raw moments.[11]
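
A quick numerical sanity check of the mean and variance formulas, using SciPy's hypergeom (parameter order M, n, N = population, successes, draws):

```python
from scipy.stats import hypergeom

N, K, n = 50, 5, 10
mean, var = hypergeom.stats(N, K, n, moments="mv")
print(mean, n * K / N)                                   # both 1.0
print(var, n * (K/N) * (1 - K/N) * (N - n) / (N - 1))    # both ~0.7347
```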

Combinatorial Identities and Symmetries

The summation of the probability mass function over its support equals unity, as $\sum_{k} \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}} = 1$, a direct consequence of Vandermonde's identity $\sum_{k}\binom{K}{k}\binom{N-K}{n-k} = \binom{N}{n}$.[13] This identity counts the total number of ways to choose $n$ items from $N$ by partitioning the choices into those including $k$ successes from $K$ and $n-k$ failures from $N-K$, for all feasible $k$. The hypergeometric distribution possesses a combinatorial symmetry interchanging the roles of the number of successes $K$ and the sample size $n$:
$$\frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}} = \frac{\binom{n}{k}\binom{N-n}{K-k}}{\binom{N}{K}}.$$
This equality holds because both expressions compute the probability of exactly $k$ overlaps between a fixed set of $n$ sample positions and a randomly selected set of $K$ success positions in the population of $N$, via complementary counting arguments.[14] The right-hand form interprets the scenario as the distribution of successes falling into a prespecified sample when successes are assigned randomly to the population, dual to the standard sampling view.

Tail Bounds and Inequalities

A fundamental tail inequality for the hypergeometric random variable $X \sim \mathrm{HG}(N, K, n)$ with mean $\mu = nK/N$ is Hoeffding's bound, which states that
$$\Pr(X \ge \mu + t) \le \exp\left(-\frac{2t^2}{n}\right)$$
for all $t > 0$. This result, derived for sums of bounded random variables including those from sampling without replacement, matches the corresponding bound for the binomial approximation and highlights that the negative associations in hypergeometric sampling preserve concentration comparable to independent trials.[15]

Serfling (1974) provided a refinement incorporating the finite population correction factor $f = (n-1)/N$, yielding the tighter upper tail bound
$$\Pr\left(X \ge \left(\frac{K}{N} + t\right)n\right) \le \exp\left(-\frac{2t^2 n}{1 - f}\right) = \exp\left(-2t^2 n \cdot \frac{N}{N - n + 1}\right)$$
for $t > 0$. This adjustment accounts for reduced variance in without-replacement sampling relative to the infinite population case, making it superior to Hoeffding's bound when $n$ is a substantial fraction of $N$.[15]

Exponential bounds with higher-order terms further sharpen these estimates. For instance, Bardenet and Maillard (2015) derived improved exponential inequalities for the upper tail, incorporating factors like $(1 - n/N)$ and quartic terms in the deviation, which outperform Serfling's bound in regimes where more than half the population is sampled. More recently, George (2024) unified existing inequalities and proposed refined confidence bounds derived from Serfling's form, such as $c = N\sqrt{-\frac{N-n+1}{2nN}\ln(\delta/2)}$ for the deviation ensuring $\Pr(|X - \mu| \ge c) \le \delta$ when $n \le N/2$.[15] Simple yet effective recent derivations include a Chernoff-style bound using Kullback–Leibler divergence:
$$\Pr(X \ge d) \le \exp\left[-K\,D\!\left(\frac{d}{K}\,\Big\|\,\frac{n}{N}\right)\right],$$
for integer $d \ge \mu + 1$, where $D(x \| y) = x\ln(x/y) + (1-x)\ln((1-x)/(1-y))$, which sensitizes the bound to the sampling fraction $n/N$ and excels when $n > K$. An alternative $\beta$-bound expresses the tail as $\Pr(X \ge d) \le I_{n/N}(d, K - d + 1)$, with $I_x(a, b)$ the regularized incomplete beta function, offering computational advantages and tighter performance over Serfling in symmetric regimes. These advancements, validated via simulations against benchmarks like Hoeffding and Chatterjee (2007), underscore ongoing refinements tailored to specific parameter ranges in hypergeometric tails.

Approximations and Limitations

Binomial Approximation Conditions

The hypergeometric distribution can be approximated by the binomial distribution with parameters $n$ and $p = K/N$ when the population size $N$ is sufficiently large relative to the sample size $n$, rendering the dependence between draws negligible and approximating sampling with replacement.[16][17] This holds because the hypergeometric probability mass function $\Pr(X = k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$ simplifies asymptotically to the binomial form $\binom{n}{k}p^k(1-p)^{n-k}$ as $N \to \infty$ with $n$ and $p$ fixed, since the ratios in the falling factorials approach independence.[18] A practical rule of thumb for the approximation's adequacy is $n/N < 0.05$, ensuring the relative error in probabilities remains small across the support.[19][17] Some sources relax this to $n/N < 0.10$, though accuracy diminishes for values near this threshold, particularly for tail probabilities or when $p$ is extreme (close to 0 or 1).[20][21] The means coincide exactly as $\mathbb{E}[X] = n(K/N)$, but the hypergeometric variance $np(1-p)\frac{N-n}{N-1}$ approaches the binomial variance $np(1-p)$ only when $\frac{N-n}{N-1} \approx 1$, reinforcing the $n \ll N$ requirement.[22] When these conditions are violated, the binomial approximation overstates the variance and fits poorly in finite samples, as verified in numerical comparisons.[22][20]
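
The convergence is easy to see numerically; a minimal sketch holding n and p = K/N fixed while N grows:

```python
from scipy.stats import hypergeom, binom

# Fixed sample size n = 10 and success fraction p = K/N = 0.2; as N grows,
# the hypergeometric pmf approaches the binomial pmf.
n, k, p = 10, 3, 0.2
for N in (50, 500, 5000):
    K = int(p * N)
    print(N, hypergeom.pmf(k, N, K, n), binom.pmf(k, n, p))
```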

Normal and Other Approximations

The hypergeometric random variable $X \sim \text{Hypergeometric}(N, K, n)$ with mean $\mu = n\frac{K}{N}$ and variance $\sigma^2 = n\frac{K}{N}\left(1 - \frac{K}{N}\right)\frac{N-n}{N-1}$ converges in distribution to a normal random variable with the same mean and variance as $N \to \infty$ and $n \to \infty$, provided $n^2/N \to 0$.[23] Under these conditions, the local limit theorem yields
$$P(X = k) \approx \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(k - \mu)^2}{2\sigma^2}\right).$$
Stronger uniform convergence bounds, such as those from the Berry–Esseen theorem adapted to the hypergeometric case, hold for a wide range of $K/N$ and $n/N$, with error rates on the order of $O\left(\frac{1}{\sqrt{\min(np,\, n(1-p))}}\right)$ where $p = \frac{K}{N}$.[24] A continuity correction enhances the approximation for tail probabilities: $P(X \le k) \approx \Phi\left(\frac{k + 0.5 - \mu}{\sigma}\right)$, where $\Phi$ is the standard normal cumulative distribution function; this adjustment accounts for the discreteness of $X$.[23] When $n/N$ approaches a constant $t \in (0, 1)$, the binomial-based variance $np(1-p)$ must be scaled by the finite population factor $(1 - t)$, and the normal density scales accordingly.[23] Empirical rules of thumb for practical use include requiring $\sigma^2 \ge 9$ or $np(1-p) \ge 10$ (adjusted for the hypergeometric variance) to ensure reasonable accuracy, though these are heuristic and depend on the specific parameter regime.[16]

For rare events where $\frac{K}{N} \to 0$ as $N \to \infty$ while $\lambda = n\frac{K}{N}$ remains fixed and finite, the hypergeometric distribution approximates a Poisson distribution with parameter $\lambda$, as the without-replacement sampling behaves similarly to independent rare trials.[25] This limit arises because the probability mass function $P(X = k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$ simplifies to $\frac{\lambda^k e^{-\lambda}}{k!}$ under the specified asymptotics, with dependencies between draws becoming negligible.[25] The approximation improves when $n$ is moderate relative to $N$ and $K$ is small, but degrades if depletion effects are significant (i.e., $n$ comparable to $K$). Bounds like the Stein–Chen method quantify the total variation distance between the distributions as $O\left(\frac{n^2}{N} + \frac{\lambda}{K}\right)$.[25]

Other approximations, such as Edgeworth expansions for higher-order corrections to the normal or saddlepoint approximations for tail probabilities, extend these limits but require more computational effort and are typically used when exact hypergeometric probabilities are intractable for large $N$.[26] These methods incorporate the skewness and kurtosis of the hypergeometric (e.g., skewness $\gamma_1 = \frac{(N-2K)(N-1)^{1/2}(N-2n)}{(N-2)(NKn(N-K)(N-n))^{1/2}}$) to refine the normal approximation beyond the central limit regime.[26]

Computational and Practical Limitations

Exact evaluation of the hypergeometric probability mass function $\Pr(X = k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$ requires computing binomial coefficients, whose values grow rapidly with increasing $N$, $K$, and $n$, often exceeding the dynamic range of double-precision floating-point numbers (approximately $10^{308}$) for $N > 1000$.[27][28] This overflow occurs because intermediate factorials or products in naive multiplicative formulas for $\binom{N}{k}$ become unrepresentable, yielding infinite or erroneous results.[29] To address numerical instability, modern implementations employ logarithmic transformations, computing $\log\Pr(X = k)$ via differences of log-gamma functions: $\log\binom{N}{k} = \log\Gamma(N+1) - \log\Gamma(k+1) - \log\Gamma(N-k+1)$, where $\log\Gamma$ is evaluated using asymptotic expansions or table lookups for large arguments to maintain precision up to relative errors of $10^{-15}$ or better.[30] Recursive ratio methods, multiplying successive terms via $\frac{\Pr(X = k+1)}{\Pr(X = k)} = \frac{(K-k)(n-k)}{(k+1)(N-K-n+k+1)}$, further avoid large intermediates by starting from a mode or boundary and iterating, though they still rely on log-space accumulation for exponentiation back to probabilities.[27] For cumulative distribution functions or tail probabilities, such as in Fisher's exact test for 2×2 contingency tables, exact computation demands summing over up to $\min(n, K)$ terms, each potentially requiring the above techniques; while single-term evaluation is $O(1)$ with precomputation, full p-values exhibit worst-case time complexity $O(N)$ due to the summation extent and binomial evaluations, becoming prohibitive for $N > 10^5$ without optimization.[31][32] In practice, for large-scale applications like genome-wide enrichment analyses (the human genome spans roughly $3 \times 10^9$ bases), exact tails involve thousands of terms with minuscule probabilities (~$10^{-100}$), leading to underflow, rounding error propagation in summation, and excessive runtime, necessitating Monte Carlo simulation or Poisson/binomial approximations despite their asymptotic validity only when $n \ll N$.[31][33]
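
A minimal log-space implementation along these lines, using only the standard library's lgamma (a sketch, not a production routine):

```python
from math import lgamma, exp

def log_binom(a, b):
    # log C(a, b) via log-gamma; stable even when C(a, b) overflows a double
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

def hypergeom_pmf(k, N, K, n):
    # Evaluate the hypergeometric pmf entirely in log space, then exponentiate
    return exp(log_binom(K, k) + log_binom(N - K, n - k) - log_binom(N, n))

print(hypergeom_pmf(4, 50, 5, 10))   # ~0.003964, matches the urn example
```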

Illustrative Examples

Basic Sampling Example

A prototypical scenario for the hypergeometric distribution involves drawing a fixed-size sample without replacement from a finite population divided into two mutually exclusive categories, such as "success" and "failure." Formally, let the population size be N, with K successes and N − K failures; a sample of n items is selected, where n ≤ N, and X denotes the number of successes observed in the sample, with X ranging from max(0, n + K − N) to min(n, K). The probability that X = k is $P(X = k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$, where $\binom{a}{b}$ is the binomial coefficient representing the number of ways to choose b items from a without regard to order.[2][34] To illustrate, consider an urn containing N = 10 balls, of which K = 4 are red (successes) and 6 are blue (failures); draw n = 3 balls without replacement. The possible values of X (number of red balls drawn) are k = 0, 1, 2, 3. The probabilities are computed as follows:

k    P(X = k)          Calculation
0    1/6 ≈ 0.1667      $\frac{\binom{4}{0}\binom{6}{3}}{\binom{10}{3}} = \frac{1 \cdot 20}{120}$
1    1/2 = 0.5         $\frac{\binom{4}{1}\binom{6}{2}}{\binom{10}{3}} = \frac{4 \cdot 15}{120}$
2    3/10 = 0.3        $\frac{\binom{4}{2}\binom{6}{1}}{\binom{10}{3}} = \frac{6 \cdot 6}{120}$
3    1/30 ≈ 0.0333     $\frac{\binom{4}{3}\binom{6}{0}}{\binom{10}{3}} = \frac{4 \cdot 1}{120}$

These values sum to 1, confirming the distribution's validity as a probability model.[35] The dependence between draws (due to no replacement) distinguishes this from the binomial distribution, where probabilities remain constant across trials.[2]
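
Because the counts are small, the table can be verified in exact rational arithmetic; a minimal sketch using Python's fractions and math.comb:

```python
from fractions import Fraction
from math import comb

N, K, n = 10, 4, 3
pmf = {k: Fraction(comb(K, k) * comb(N - K, n - k), comb(N, n)) for k in range(4)}
print(pmf)                 # {0: 1/6, 1: 1/2, 2: 3/10, 3: 1/30}
print(sum(pmf.values()))   # 1, the pmf sums to unity exactly
```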

Real-World Scenario Interpretation

In quality control processes, the hypergeometric distribution quantifies the probability of encountering a specific number of defective items when sampling without replacement from a finite production batch, accounting for the depletion of the population that alters successive draw probabilities unlike independent trials in binomial models.[36] For example, consider a factory producing 1,000 widgets where quality assurance reveals K=50 defectives prior to full shipment; inspectors then draw n=100 widgets randomly without replacement to evaluate the lot. The random variable X representing observed defectives follows Hypergeometric(N=1000, K=50, n=100), with P(X=k) = [C(50,k) * C(950,100-k)] / C(1000,100), enabling calculation of risks such as P(X ≥ 10) to inform acceptance thresholds that balance false positives and negatives in lot disposition.[37] This interpretation underscores the distribution's utility in finite-population scenarios where sampling fraction n/N exceeds typical binomial approximations (e.g., here ~10%), as dependencies inflate variance relative to np(1-p).[38] In electoral auditing, the hypergeometric distribution interprets the consistency between sampled ballots and aggregate tallies to detect irregularities in finite vote universes without replacement assumptions.[39] For instance, in a jurisdiction with N=10,000 ballots where K=6,000 validly favor Candidate A per official count, auditors might hand-recount n=500 randomly selected ballots, modeling X=observed A votes as Hypergeometric(N=10,000, K=6,000, n=500); deviations like P(X ≤ 240) could signal fraud probabilities under null hypotheses of accurate reporting, guiding risk-limiting audits that scale sample sizes inversely with desired error bounds.[40] Such applications highlight causal dependencies in vote pools, where early discrepancies propagate evidential weight, prioritizing empirical verification over approximations valid only for negligible sampling fractions.[41]
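
Both risk calculations reduce to one-line tail probabilities; a minimal sketch with SciPy's hypergeom:

```python
from scipy.stats import hypergeom

# Quality-control risk from the text: N = 1000 widgets, K = 50 defective, n = 100 sampled
print(hypergeom.sf(9, 1000, 50, 100))          # P(X >= 10)

# Election-audit check: N = 10,000 ballots, K = 6,000 for Candidate A, n = 500 recounted
print(hypergeom.cdf(240, 10_000, 6_000, 500))  # P(X <= 240) under accurate reporting
```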

Statistical Inference

Point and Interval Estimation

The method of moments provides a straightforward point estimator for the success proportion $p = K/N$, given by the sample proportion $\hat{p} = k/n$. This follows from equating the observed count $k$ to the theoretical expectation $\mathbb{E}[X] = np$, and the estimator is unbiased since $\mathbb{E}[\hat{p}] = p$. The corresponding estimator for $K$ is $\hat{K} = \hat{p}N = kN/n$, rounded to the nearest integer when $K$ must be integral.[42]

The maximum likelihood estimator (MLE) for $K$ maximizes the hypergeometric probability mass function $P(X = k \mid N, K, n)$ over the integer values of $K$ consistent with the observation, $k \le K \le N - n + k$. Computation involves finding the $K$ where the likelihood ratio satisfies $L(K+1)/L(K) \le 1 \le L(K)/L(K-1)$, with
$$\frac{L(K+1)}{L(K)} = \frac{(K+1)(N-K-n+k)}{(K+1-k)(N-K)}.$$
For large $N$ and $n$, the MLE approximates the method-of-moments estimator but incorporates discreteness effects, and is often computed numerically (see the sketch below). In related capture–recapture problems modeled by the hypergeometric distribution, bias-reduced variants such as the Chapman estimator $\hat{N} = \frac{(n+1)(K+1)}{k+1} - 1$ for the population size are used.[43][44]

Interval estimation for $p$ or $K$ accounts for the variance $\mathrm{Var}(X) = np(1-p)\frac{N-n}{N-1}$, which includes the finite population correction factor $\frac{N-n}{N-1}$. An approximate $1-\alpha$ confidence interval for $p$ is
$$\hat{p} \pm z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})(N-n)}{n(N-1)}},$$
where $z_{\alpha/2}$ is the $1 - \alpha/2$ quantile of the standard normal distribution; this performs well when $np(1-p) \ge 5$ and $N$ is not too small relative to $n$. For $K$, the interval is $N$ times the one for $p$, clipped to integers in $[0, N]$.[42]

Exact confidence intervals, preferred for small samples to achieve nominal coverage despite discreteness, are constructed by inverting hypergeometric tests: the $1-\alpha$ interval for $K$ comprises all integers $K'$ such that the two-sided p-value for testing $H_0: K = K'$ given the observed $k$ exceeds $\alpha$; a common choice takes twice the smaller of the tail probabilities $P(X \le k \mid K')$ and $P(X \ge k \mid K')$, capped at 1. Efficient algorithms using tail-probability recursions enable fast computation without full enumeration, yielding short intervals with guaranteed coverage at least $1 - \alpha$. These methods outperform approximations in finite samples and are implemented in statistical software.[45][46]
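
A minimal sketch of the numerical MLE search described above (brute force over the feasible range of K; the helper mle_K is illustrative, not a library function):

```python
from scipy.stats import hypergeom

def mle_K(N, n, k):
    """Brute-force MLE of K over its feasible range k .. N - (n - k)."""
    return max(range(k, N - (n - k) + 1),
               key=lambda K: hypergeom.pmf(k, N, K, n))

print(mle_K(N=1000, n=100, k=4))   # 40, agreeing with floor(k * (N + 1) / n)
```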

Hypothesis Testing with Fisher's Exact Test

Fisher's exact test utilizes the hypergeometric distribution to conduct precise hypothesis testing for independence between two dichotomous variables represented in a 2×2 contingency table, particularly suitable for small sample sizes where asymptotic approximations like the chi-squared test fail.[47] The test conditions on the observed row and column marginal totals, treating one cell entry—such as the count of successes in the first group—as a realization from a hypergeometric distribution with population size $N$ equal to the grand total, $K$ as the total successes in the population, and $n$ as the sample size from the first group.[48] Under the null hypothesis of independence (equivalent to an odds ratio of 1), this conditional distribution holds exactly, without reliance on large-sample assumptions.[49] The probability mass function for the hypergeometric random variable $X$ (representing the cell count) is given by $\Pr(X = k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$, where $k$ ranges from $\max(0, n + K - N)$ to $\min(n, K)$.[48]

To compute the p-value, all possible tables with the fixed margins are enumerated, each assigned a hypergeometric probability, and the p-value is the sum of probabilities for tables at least as extreme as the observed one. For a two-sided test, this typically includes tables with probabilities less than or equal to that of the observed table; one-sided variants sum over the tail in the direction of the alternative hypothesis.[48][49] This approach ensures the test maintains its nominal significance level exactly, even with sparse data where more than 20% of expected cell frequencies are below 5 or any are below 1, conditions under which the chi-squared test's approximation is unreliable.[47] For instance, in analyzing whether a treatment affects binary outcomes across two groups, the test evaluates evidence against the null by quantifying the rarity of the observed association under hypergeometric sampling.[49] Computational implementation often involves software like R's fisher.test() function, which handles enumeration directly for moderate sizes or simulation for larger ones.[48]
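
A minimal sketch showing the equivalence between the one-sided Fisher test and a hypergeometric tail, using SciPy (the 2×2 table here is hypothetical):

```python
from scipy.stats import fisher_exact, hypergeom

# Hypothetical 2x2 table: rows = group, columns = success/failure
table = [[8, 2], [1, 9]]

# Margins: N = 20 total, K = 9 successes overall, n = 10 in group 1, observed k = 8.
# The one-sided p-value is the hypergeometric upper tail P(X >= 8).
p_tail = hypergeom.sf(7, 20, 9, 10)
p_fisher = fisher_exact(table, alternative="greater")[1]
print(p_tail, p_fisher)   # both ~0.00274, the two computations agree
```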

Multivariate Hypergeometric Distribution

The multivariate hypergeometric distribution describes the joint probability distribution of counts obtained when drawing a fixed sample size $n$ without replacement from a finite population of size $N$ divided into $k \ge 2$ mutually exclusive categories, where category $i$ has $N_i$ items and $\sum_{i=1}^{k} N_i = N$. Let $X_i$ denote the count of items from category $i$ in the sample for $i = 1, \dots, k$; then the random vector $(X_1, \dots, X_k)$ follows this distribution, denoted $\text{MultiHyper}(N; N_1, \dots, N_k; n)$, with the constraint $\sum_{i=1}^{k} X_i = n$.[50] The probability mass function is given by
$$P(X_1 = x_1, \dots, X_k = x_k) = \frac{\prod_{i=1}^{k}\binom{N_i}{x_i}}{\binom{N}{n}}$$
for non-negative integers $x_i$ satisfying $\sum_{i=1}^{k} x_i = n$ and $0 \le x_i \le N_i$ for each $i$, and zero otherwise; here, $\binom{\cdot}{\cdot}$ denotes the binomial coefficient. This formulation arises directly from the uniform probability over all $\binom{N}{n}$ possible samples, with the numerator counting favorable outcomes for the specified counts.[50]

The marginal distribution of any single $X_i$ is univariate hypergeometric with parameters $N$, $N_i$, and $n$, reducing the multivariate case to the standard hypergeometric when $k = 2$.[50] The mean of $X_i$ is $E[X_i] = n\frac{N_i}{N}$, reflecting the proportional representation of category $i$ in the population. The variance is $\text{Var}(X_i) = n\frac{N_i}{N}\left(1 - \frac{N_i}{N}\right)\frac{N-n}{N-1}$, which is smaller than the binomial variance $np(1-p)$ (with $p = N_i/N$) due to the finite-population correction factor $(N-n)/(N-1)$.[50] For $i \ne j$, the covariance is $\text{Cov}(X_i, X_j) = -n\frac{N_i}{N}\frac{N_j}{N}\frac{N-n}{N-1}$, negative as expected from the fixed total sample size inducing dependence.[51] Higher-order moments, including central and noncentral forms, can be derived recursively or via generating functions, with explicit formulas available for practical computation.[52] The distribution models scenarios like randomized allocation or quality sampling across multiple defect types, where dependencies among categories must be accounted for explicitly.[53]

Negative and Noncentral Variants

The negative hypergeometric distribution models the number of failures preceding a predetermined number of successes in sampling without replacement from a finite population of size $N$ containing $K$ successes, where sampling continues until $r$ successes are obtained.[54] Let $Y$ denote the number of failures observed before the $r$-th success; then $Y$ follows a negative hypergeometric distribution with parameters $N$, $K$, and $r$, where $0 < r \le K$ and $Y$ ranges from 0 to $N - K$. This distribution arises in scenarios such as quality inspections where defects (failures) are counted until a fixed number of acceptable items (successes) are found, or in gaming contexts like drawing cards until a specific number of a suit is reached.[55] The probability mass function is given by
$$\Pr(Y = k) = \frac{\binom{K}{r-1}\binom{N-K}{k}}{\binom{N}{k+r-1}} \cdot \frac{K - r + 1}{N - k - r + 1},$$
for $k = 0, 1, \dots, N - K$, reflecting the hypergeometric probability of $r - 1$ successes in the first $k + r - 1$ draws multiplied by the conditional probability of a success on the next draw. The expected value is $\mathbb{E}[Y] = r\,\frac{N-K}{K+1}$, and the variance is $\operatorname{Var}(Y) = r\,\frac{(N-K)(N+1)(K+1-r)}{(K+1)^2(K+2)}$.[56]

Noncentral hypergeometric distributions extend the standard (central) hypergeometric by incorporating bias through an odds parameter $\omega$,[57] which modifies selection probabilities to reflect differential attractiveness or weights of population subgroups; when $\omega = 1$, the distribution reduces to the central case. Two principal variants exist: Fisher's noncentral hypergeometric distribution and Wallenius' noncentral hypergeometric distribution, differing in their modeling of bias.[58] Fisher's variant models the conditional distribution of independent but biased binomial or Poisson counts given their fixed sum, yielding the probability mass function
$$\Pr(X = k) = \frac{\binom{K}{k}\binom{N-K}{n-k}\,\omega^k}{\sum_{j}\binom{K}{j}\binom{N-K}{n-j}\,\omega^j}$$
for $k$ in the feasible range, where $N$ is the population size, $K$ the subgroup size, $n$ the number of draws, and $\omega$ the odds ratio favoring the first subgroup; it applies to independent sampling approximations, such as in genetic associations under linkage disequilibrium.[59] Wallenius' variant, in contrast, describes sequential sampling without replacement where draw probabilities are proportional to current subgroup weights (e.g., $\omega$ times size for the first subgroup), leading to a more complex form involving an integral:
$$\Pr(X = k) = \binom{K}{k}\binom{N-K}{n-k}\int_0^1 \left(1 - t^{\omega/D}\right)^{k}\left(1 - t^{1/D}\right)^{n-k}\,dt, \qquad D = \omega(K - k) + (N - K) - (n - k),$$
which captures competition effects in urn models with unequal ball weights, as in ecological resource allocation or biased urn experiments. Both variants lack closed-form moments in general, requiring numerical computation, and are implemented in statistical software for exact or approximate inference.[60]
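
SciPy ships the negative hypergeometric variant discussed above as nhypergeom; a minimal sketch checking the mean formula. Note SciPy's (M, n, r) convention counts draws of the n-type before the r-th draw of the remaining type, so the failures-before-successes variable corresponds to nhypergeom(N, N − K, r):

```python
from scipy.stats import nhypergeom

# Failures drawn before the r-th success when sampling without replacement
# from N items containing K successes.
N, K, r = 50, 5, 2
rv = nhypergeom(N, N - K, r)
print(rv.mean(), r * (N - K) / (K + 1))   # both 15.0
print(rv.var())                           # matches the variance formula above (~72.86)
```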

Applications

Quality Control and Industrial Sampling

The hypergeometric distribution is applied in quality control to model the exact probability of observing a specific number of defective items in a sample drawn without replacement from a finite production lot, ensuring precise assessment when the sample size is non-negligible relative to the lot size.[36] In this context, the population size $N$ represents the total lot size, $K$ denotes the number of defectives in the lot, $n$ is the sample size, and $k$ is the number of defectives observed in the sample; the probability mass function $P(X = k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$ computes the likelihood for $k$ ranging from $\max(0, n + K - N)$ to $\min(n, K)$.[61] This approach contrasts with binomial approximations, which assume independence via replacement or infinite populations; the hypergeometric provides superior accuracy for finite lots by accounting for the dependency introduced as items are depleted.[62]

In industrial acceptance sampling, plans specify lot size $N$, sample size $n$, and acceptance number $c$, where the lot is accepted if the sample yields at most $c$ defectives; the hypergeometric distribution calculates the operating characteristic (OC) curve, plotting acceptance probability against the true defective proportion $K/N$.[63] For instance, standards like ANSI/ASQ Z1.4 incorporate hypergeometric computations for OC curves in attribute sampling when lot sizes are specified, enabling manufacturers to evaluate risks of producer's and consumer's errors—such as accepting defective lots (Type II error) or rejecting good ones (Type I error).[64] This method optimizes inspection costs by balancing sample size against discrimination power; for a lot of 500 units with sample $n = 80$ and $c = 3$, the hypergeometric yields exact acceptance probabilities that deviate from binomial estimates by up to 5-10% when $n/N > 0.1$.[65] Applications extend to defect analysis in manufacturing, where hypergeometric-based p-charts with dynamic limits monitor fraction defectives over multiple lots, adapting for finite sampling without replacement to reduce false alarms.[66] As an illustration, for a production run of $N = 1000$ items with $K = 50$ defectives, sampling $n = 100$ yields $P(X \le 5) \approx 0.6$ under the hypergeometric model, informing lot disposition and process adjustments.[67] While the binomial suffices for large $N$, the hypergeometric's exactness prevents over- or under-estimation in high-value sectors like electronics, where misacceptance costs are high.[68]
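
A minimal sketch of the OC-curve computation for the N = 500, n = 80, c = 3 plan mentioned above, with the binomial approximation alongside for comparison:

```python
from scipy.stats import hypergeom, binom

N_lot, n_sample, c = 500, 80, 3
for K_def in (5, 10, 25, 50):                           # hypothesized defectives in the lot
    p_acc_hg = hypergeom.cdf(c, N_lot, K_def, n_sample) # exact acceptance probability
    p_acc_bin = binom.cdf(c, n_sample, K_def / N_lot)   # with-replacement approximation
    print(K_def, round(p_acc_hg, 4), round(p_acc_bin, 4))
```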

Genetics and Bioinformatics

In bioinformatics, the hypergeometric distribution is employed in gene set enrichment analysis to determine whether a predefined biological category, such as a Gene Ontology term or KEGG pathway, is statistically overrepresented in a list of genes identified through experiments like RNA sequencing or genome-wide association studies (GWAS). Here, the total number of genes in the reference set (e.g., the annotated genome) serves as the population size $N$, the number of genes annotated to the category is $K$, the number of genes in the experimental list (e.g., differentially expressed genes) is $n$, and the observed number in both is $k$. The one-sided p-value, calculated as the sum of hypergeometric probabilities for $k$ or greater, tests the null hypothesis of no enrichment beyond random expectation.[69] This method assumes independence under the null and is computationally efficient for large $N$, though it can be conservative for highly overlapping sets.[70] The hypergeometric test equates to the one-tailed Fisher's exact test in the context of 2×2 contingency tables, which compares observed overlaps against hypergeometric expectations.[71] In genetics, this application extends to assessing allelic associations in case-control studies, where rows represent disease status and columns represent genotypes or alleles, enabling exact inference without relying on large-sample approximations like the chi-squared test, particularly useful for rare variants or small cohorts.[71] Extensions, such as Bayesian variants, incorporate prior weights on genes to address biases from gene length or expression levels, improving accuracy in weighted enrichment analyses.[69]

In population genetics, the distribution models finite-population sampling of alleles or genotypes without replacement, as in Wright-Fisher models adapted for small, closed populations where genetic drift depletes allele frequencies non-binomially. For instance, it quantifies the probability of observing a specific number of success alleles in gamete pools drawn from diploid individuals, informing exact tests for Hardy-Weinberg deviations in structured populations.[38] Such uses highlight its role in causal inference for inheritance patterns under resource constraints, though approximations like the binomial suffice for large $N$.[38]
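
A minimal sketch of the enrichment p-value computation; all the counts here are hypothetical:

```python
from scipy.stats import hypergeom

# Hypothetical sizes: 20,000 annotated genes, 150 in the pathway,
# 500 differentially expressed, 12 of which fall in the pathway.
N, K, n, k = 20_000, 150, 500, 12
p_enrich = hypergeom.sf(k - 1, N, K, n)   # P(X >= k), one-sided enrichment p-value
print(p_enrich)                           # a small value indicates over-representation
```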

Games, Gambling, and Elections

The hypergeometric distribution arises in card games where hands are dealt without replacement from a finite deck, modeling the probability of obtaining a specific number of cards with desired properties. For example, in five-card poker from a standard 52-card deck containing 4 aces, the number of aces drawn follows a hypergeometric distribution with population size $N = 52$, number of success states $K = 4$, and sample size $n = 5$.[72] Similarly, in bridge, the number of cards of a particular suit in a 13-card hand is hypergeometric with $N = 52$, $K = 13$, and $n = 13$.[73] Deck-building games like Magic: The Gathering employ hypergeometric calculators to compute probabilities of drawing key cards, such as lands or specific spells, from shuffled decks without replacement, aiding in optimizing deck composition for competitive play.[74]

In gambling contexts involving sequential draws without replacement, the hypergeometric distribution quantifies outcomes in scenarios like urn games or card-based wagers. A casino game variant might involve spinning to determine the number of cards flipped from a deck, with payoffs based on jokers or matches drawn, directly following hypergeometric probabilities.[75] The negative hypergeometric distribution, a related variant, models waiting times until a fixed number of successes in such draws, as analyzed in gambling applications where players anticipate hitting a threshold, such as drawing a certain number of winning symbols from a finite pool.[54] These models highlight dependencies introduced by depletion of the draw pool, contrasting with binomial approximations valid only for large populations relative to sample size.

Election polling and auditing leverage the hypergeometric distribution to assess sample-based inferences from finite voter populations without replacement. In pre-election surveys, the number of supporters for a candidate in a sample mirrors hypergeometric sampling, enabling exact probability calculations for vote shares when the population size $N$ is known and comparable to the sample size $n$.[41] Post-election audits, such as risk-limiting audits (RLAs) for verifying machine tallies against paper ballots, use hypergeometric models to determine the probability that a sample confirms the reported winner, with parameters reflecting total ballots $N$, reported votes for the apparent winner $K$, and audited ballots $n$.[76] For instance, in a 200,000-vote election audit, hypergeometric distributions compute confidence intervals for vote discrepancies, ensuring statistical rigor in dispute resolution.[77] This approach accounts for finite population effects, providing tighter bounds than binomial methods in close races.
