Poisson limit theorem
from Wikipedia

Figure: Comparison of the Poisson distribution (black lines) and the binomial distribution with n = 10 (red circles), n = 20 (blue circles), n = 1000 (green circles). All distributions have a mean of 5. The horizontal axis shows the number of events k. As n gets larger, the Poisson distribution becomes an increasingly better approximation for the binomial distribution with the same mean.

In probability theory, the law of rare events or Poisson limit theorem states that the Poisson distribution may be used as an approximation to the binomial distribution, under certain conditions.[1] The theorem was named after Siméon Denis Poisson (1781–1840). A generalization of this theorem is Le Cam's theorem.

Theorem

Let $p_n$ be a sequence of real numbers in $[0, 1]$ such that the sequence $np_n$ converges to a finite limit $\lambda$. Then:

$$\lim_{n\to\infty} \binom{n}{k} p_n^k (1 - p_n)^{n-k} = e^{-\lambda} \frac{\lambda^k}{k!}.$$
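As an illustrative check (my own sketch, not part of the original article), the following Python snippet evaluates both sides of this limit at a fixed $k$ while $n$ grows with $np_n$ held at $\lambda = 5$; the values of $\lambda$ and $k$ are arbitrary choices:

```python
import math

lam, k = 5.0, 3  # fixed limit lambda and a fixed count k (arbitrary choices)

poisson = math.exp(-lam) * lam**k / math.factorial(k)

for n in (10, 20, 100, 1000, 100000):
    p = lam / n  # p_n = lambda / n, so n * p_n -> lambda
    binom = math.comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"n={n:>6}  binomial={binom:.6f}  poisson={poisson:.6f}  "
          f"|diff|={abs(binom - poisson):.2e}")
```

The absolute error shrinks roughly like $1/n$, consistent with the quantitative refinement given by Le Cam's theorem.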

First proof

Assume $\lambda > 0$ (the case $\lambda = 0$ is easier). Then

$$\begin{aligned}
\lim_{n\to\infty} \binom{n}{k} p_n^k (1-p_n)^{n-k}
&= \lim_{n\to\infty} \frac{n!}{k!\,(n-k)!} \left(\frac{\lambda}{n}(1+o(1))\right)^{k} \left(1 - \frac{\lambda}{n}(1+o(1))\right)^{n-k} \\
&= \lim_{n\to\infty} \frac{n(n-1)\cdots(n-k+1)}{n^k} \, \frac{\lambda^k}{k!} \, (1+o(1)) \left(1 - \frac{\lambda(1+o(1))}{n}\right)^{n-k}.
\end{aligned}$$

Since

$$\lim_{n\to\infty} \frac{n(n-1)\cdots(n-k+1)}{n^k} = 1 \qquad \text{and} \qquad \lim_{n\to\infty} \left(1 - \frac{\lambda(1+o(1))}{n}\right)^{n-k} = e^{-\lambda},$$

this leaves

$$\lim_{n\to\infty} \binom{n}{k} p_n^k (1-p_n)^{n-k} = \frac{\lambda^k e^{-\lambda}}{k!}.$$
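The two limits invoked above can also be sanity-checked numerically; the sketch below (my own illustration, with $\lambda = 5$ and $k = 3$ as arbitrary choices) prints both factors as $n$ grows:

```python
import math

lam, k = 5.0, 3
for n in (10, 100, 10_000, 1_000_000):
    falling = math.prod(n - i for i in range(k)) / n**k  # n(n-1)...(n-k+1) / n^k -> 1
    expo = (1 - lam / n) ** (n - k)                       # -> e^{-lambda}
    print(f"n={n:>8}  ratio={falling:.6f}  (1-lam/n)^(n-k)={expo:.6f}  "
          f"e^-lam={math.exp(-lam):.6f}")
```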

Alternative proof

Using Stirling's approximation, the binomial probability can be written as

$$\binom{n}{k} p^k (1-p)^{n-k} = \frac{n!}{k!\,(n-k)!} \, p^k (1-p)^{n-k} \approx \frac{\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}}{k!\,\sqrt{2\pi(n-k)}\left(\frac{n-k}{e}\right)^{n-k}} \, p^k (1-p)^{n-k}.$$

Letting $n \to \infty$ and $np = \lambda$ (so that $\sqrt{n/(n-k)} \to 1$):

$$\binom{n}{k} p^k (1-p)^{n-k} \approx \frac{n^n \, p^k (1-p)^{n-k} \, e^{-k}}{k!\,(n-k)^{n-k}} = \frac{\lambda^k \left(1-\frac{\lambda}{n}\right)^{n-k} e^{-k}}{k!\left(1-\frac{k}{n}\right)^{n-k}}.$$

As $n \to \infty$, $\left(1-\frac{x}{n}\right)^{n} \to e^{-x}$, so $\left(1-\frac{\lambda}{n}\right)^{n-k} \to e^{-\lambda}$ and $\left(1-\frac{k}{n}\right)^{n-k} \to e^{-k}$, and therefore

$$\binom{n}{k} p^k (1-p)^{n-k} \to \frac{\lambda^k e^{-\lambda}}{k!}.$$
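Because the factorials overflow floating point for large $n$, the following sketch (an illustration I have added, not part of the proof) applies Stirling's approximation in log space and confirms the resulting estimate approaches the Poisson pmf:

```python
import math

def log_stirling(m: int) -> float:
    # log of Stirling's approximation to m!: sqrt(2*pi*m) * (m/e)^m
    return 0.5 * math.log(2 * math.pi * m) + m * (math.log(m) - 1)

lam, k = 5.0, 3  # arbitrary choices for illustration
poisson = math.exp(-lam) * lam**k / math.factorial(k)
for n in (100, 1000, 100000):
    p = lam / n
    log_pmf = (log_stirling(n) - log_stirling(n - k) - math.log(math.factorial(k))
               + k * math.log(p) + (n - k) * math.log(1 - p))
    print(f"n={n:>6}  Stirling pmf={math.exp(log_pmf):.6f}  poisson={poisson:.6f}")
```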

Ordinary generating functions

It is also possible to demonstrate the theorem through the use of ordinary generating functions of the binomial distribution:

$$G_{\mathrm{bin}}(x; p, N) \equiv \sum_{k=0}^{N} \binom{N}{k} p^k (1-p)^{N-k} x^k = \bigl[1 + (x-1)p\bigr]^{N},$$

by virtue of the binomial theorem. Taking the limit $N \to \infty$ while keeping the product $pN = \lambda$ constant, it can be seen that

$$\lim_{N\to\infty} G_{\mathrm{bin}}(x; p, N) = \lim_{N\to\infty} \left[1 + \frac{\lambda(x-1)}{N}\right]^{N} = e^{\lambda(x-1)} = \sum_{k=0}^{\infty} \frac{e^{-\lambda}\lambda^k}{k!} x^k,$$

which is the OGF for the Poisson distribution. (The second equality holds due to the definition of the exponential function.)
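A quick numerical illustration of this OGF convergence (my own sketch; $\lambda = 5$ and the evaluation points $x$ are arbitrary):

```python
import math

lam = 5.0
for x in (0.0, 0.5, 1.5):
    target = math.exp(lam * (x - 1))        # Poisson OGF at x
    for N in (10, 100, 10_000):
        ogf = (1 + (x - 1) * lam / N) ** N  # binomial OGF with p = lam / N
        print(f"x={x:.1f}  N={N:>5}  binomial OGF={ogf:.6f}  "
              f"Poisson OGF={target:.6f}")
```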

from Grokipedia
The Poisson limit theorem, also known as the law of rare events, states that under certain conditions the binomial distribution converges in distribution to the Poisson distribution. Specifically, consider a sequence of independent Bernoulli random variables $X_{n,1}, \dots, X_{n,n}$, each with success probability $p_n = \lambda/n$, where $\lambda > 0$ is fixed, and let $S_n = \sum_{i=1}^{n} X_{n,i}$. As $n \to \infty$, the distribution of $S_n$ converges to a Poisson distribution with parameter $\lambda$, meaning $P(S_n = k) \to e^{-\lambda} \lambda^k / k!$ for each integer $k \geq 0$.

This theorem provides a foundational approximation in probability theory, particularly for modeling the number of occurrences of rare, independent events within a fixed interval, where the expected number of events $\lambda$ remains constant even as the opportunities for events proliferate but each becomes increasingly unlikely. It complements the central limit theorem by addressing scenarios where the variance $\lambda$ is finite and small, rather than growing with $n$, and is especially useful when $n$ is large and $p$ is small, such as in reliability analysis and related fields, for counting infrequent incidents like defects or arrivals.

The result was first derived by Siméon Denis Poisson in his 1837 work Recherches sur la probabilité des jugements en matière criminelle et en matière civile, where it emerged in the context of error analysis and rare judicial errors, though the modern interpretation as a limit theorem developed later. In 1898, Ladislaus von Bortkiewicz popularized its application to real-world rare events, such as horse-kick fatalities in Prussian cavalry, dubbing it the "law of small numbers" to emphasize the stability of small counts under Poisson-like behavior.

Proofs typically rely on the method of generating functions, where the probability generating function of the binomial, $(1 - p + ps)^n$, approaches $e^{\lambda(s-1)}$ as $n \to \infty$ with $np = \lambda$, or alternatively on Stirling's approximation for factorials in the direct probability computation. Extensions of the theorem appear in more general settings, such as for dependent events or compound processes, but the core version remains central to introductory probability.
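As a complementary illustration (not from the source text), the following Monte Carlo sketch draws $S_n$ as a sum of $n$ Bernoulli($\lambda/n$) variables and compares the empirical frequencies against the Poisson pmf; the sample sizes are arbitrary choices:

```python
import math
import random

random.seed(0)
lam, n, trials = 5.0, 500, 20_000
p = lam / n

counts = {}
for _ in range(trials):
    s = sum(1 for _ in range(n) if random.random() < p)  # one draw of S_n
    counts[s] = counts.get(s, 0) + 1

for k in range(11):
    empirical = counts.get(k, 0) / trials
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    print(f"k={k:>2}  empirical={empirical:.4f}  poisson={poisson:.4f}")
```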

Preliminaries

Binomial distribution

The binomial distribution models the number of successes in a fixed number $n$ of independent Bernoulli trials, where each trial has the same probability $p$ of success and $1-p$ of failure. This discrete probability distribution is fundamental in scenarios involving repeated independent experiments with binary outcomes, providing a framework for calculating probabilities of specific success counts. The probability mass function of a binomial random variable $K$ is given by

$$P(K = k) = \binom{n}{k} p^k (1-p)^{n-k},$$

where $k = 0, 1, \dots, n$ and $\binom{n}{k}$ denotes the binomial coefficient, representing the number of ways to choose $k$ successes out of $n$ trials. The expected value (mean) is $E[K] = np$, and the variance is $\operatorname{Var}(K) = np(1-p)$, both of which scale linearly with $n$ for fixed $p$. Associated with Jacob Bernoulli, after whom the underlying trials are named, the distribution was formally introduced in his posthumously published book Ars Conjectandi in 1713, marking a foundational contribution to probability theory. Common examples include modeling the number of heads in $n$ flips of a fair coin (where $p = 0.5$) or the count of defective items in a quality control sample of size $n$ from a production process with defect probability $p$. In the Poisson limit theorem, the binomial distribution serves as the starting point, being approximated by the Poisson distribution under certain conditions on $n$ and $p$.
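For a concrete instance of these formulas, the sketch below (my own worked example) computes the pmf, mean, and variance for the fair-coin case $n = 10$, $p = 0.5$:

```python
import math

n, p = 10, 0.5  # number of heads in 10 fair-coin flips

def binom_pmf(k: int) -> float:
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

mean = sum(k * binom_pmf(k) for k in range(n + 1))
var = sum((k - mean) ** 2 * binom_pmf(k) for k in range(n + 1))
print(f"P(K=5)   = {binom_pmf(5):.4f}")  # 252/1024 ~= 0.2461
print(f"mean     = {mean:.4f}  (np = {n * p})")
print(f"variance = {var:.4f}  (np(1-p) = {n * p * (1 - p)})")
```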

Poisson distribution

The Poisson distribution is a discrete probability distribution that models the number of events occurring within a fixed interval of time or space, under the assumptions of a constant average rate $\lambda > 0$ and independent occurrences of events. It arises naturally in scenarios where events are sporadic and the probability of more than one event in a very small interval is negligible. The probability mass function of a Poisson random variable $X$ is given by

$$P(X = k) = \frac{e^{-\lambda} \lambda^{k}}{k!}, \quad k = 0, 1, 2, \dots,$$

where $e$ is the base of the natural logarithm and $k!$ denotes the factorial of $k$. The expected value (mean) is $E[X] = \lambda$, and the variance is $\operatorname{Var}(X) = \lambda$, making the distribution equidispersed, with mean equal to variance. This distribution has infinite support over the non-negative integers and is commonly applied to count data, such as the number of defects in a manufactured item or the number of arrivals at a service facility. For instance, it describes the number of radioactive decays in a sample over a fixed period or the number of customer arrivals in a queue during a given hour, assuming the events occur independently at a constant rate. The Poisson distribution serves as a limiting case for the binomial distribution when the number of trials is large and the success probability is small, with $\lambda = np$.
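A small worked example (mine, not from the text) evaluating the pmf and numerically verifying equidispersion for $\lambda = 2$:

```python
import math

lam = 2.0

def poisson_pmf(k: int) -> float:
    return math.exp(-lam) * lam**k / math.factorial(k)

# Truncate the infinite support; the mass beyond k = 100 is negligible for lam = 2.
mean = sum(k * poisson_pmf(k) for k in range(100))
var = sum((k - mean) ** 2 * poisson_pmf(k) for k in range(100))
print(f"P(X=0)   = {poisson_pmf(0):.4f}")  # e^-2 ~= 0.1353
print(f"mean     = {mean:.4f}")            # ~= lam
print(f"variance = {var:.4f}")             # ~= lam (equidispersion)
```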

Theorem

Statement

The Poisson limit theorem, also known as the law of rare events, asserts that under suitable conditions the binomial distribution converges to the Poisson distribution as the number of trials increases indefinitely while the expected number of successes remains fixed. Formally, let $X_n$ follow a binomial distribution with parameters $n$ and $p_n$, where $n p_n = \lambda$ for a fixed $\lambda > 0$; as $n \to \infty$, we thus have $p_n \to 0$. Then $X_n$ converges in distribution to a Poisson distribution with parameter $\lambda$, denoted $X_n \xrightarrow{d} \mathrm{Poisson}(\lambda)$.
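This convergence can also be quantified. The sketch below (my own illustration, with $\lambda = 5$ and the truncation at $k = 60$ chosen for convenience) estimates the total variation distance between $\mathrm{Binomial}(n, \lambda/n)$ and $\mathrm{Poisson}(\lambda)$, which shrinks as $n$ grows, in line with the quantitative refinement given by Le Cam's theorem:

```python
import math

def binom_pmf(n: int, p: float, k: int) -> float:
    # computed in log space to stay stable for large n
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log(1 - p))
    return math.exp(log_pmf)

def poisson_pmf(lam: float, k: int) -> float:
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

lam = 5.0
for n in (10, 100, 1000, 10_000):
    p = lam / n
    kmax = min(n, 60)  # pmf mass beyond k = 60 is negligible for lam = 5
    tv = 0.5 * sum(abs(binom_pmf(n, p, k) - poisson_pmf(lam, k))
                   for k in range(kmax + 1))
    print(f"n={n:>6}  total variation distance ~= {tv:.5f}")
```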