Kolmogorov's zero–one law
from Wikipedia

In probability theory, Kolmogorov's zero–one law, named in honor of Andrey Nikolaevich Kolmogorov, specifies that a certain type of event, namely a tail event of independent σ-algebras, will either almost surely happen or almost surely not happen; that is, the probability of such an event occurring is zero or one.

Tail events are defined in terms of countably infinite families of σ-algebras. For illustrative purposes, we present here the special case in which each σ-algebra is generated by a random variable $X_k$ for $k \in \mathbb{N}$. Let $\mathcal{F}$ be the σ-algebra generated jointly by all of the $X_k$. Then, a tail event $F \in \mathcal{F}$ is an event whose occurrence cannot depend on the outcome of any finite subfamily of these random variables. (Note: $F$ belonging to $\mathcal{F}$ implies that membership in $F$ is uniquely determined by the values of the $X_k$, but the latter condition is strictly weaker and does not suffice to prove the zero–one law.) For example, the event that the sequence of the $X_k$ converges, and the event that its sum converges, are both tail events. If the $X_k$ are, for example, all Bernoulli-distributed, then the event that there are infinitely many $k$ such that $X_k = 1$ is a tail event. If each $X_k$ models the outcome of the $k$-th coin toss in a modeled, infinite sequence of coin tosses, then the event that a sequence of 100 consecutive heads occurs infinitely many times is a tail event in this model.

Tail events are precisely those events whose occurrence can still be determined if an arbitrarily large but finite initial segment of the $X_k$ is removed.

In many situations, it can be easy to apply Kolmogorov's zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determine which of these two extreme values is the correct one.
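As a finite-horizon illustration of the coin-toss tail event above, the following Python sketch gives a Monte Carlo estimate (with an illustrative run length of 5 rather than 100, an assumption made here purely to keep the simulation feasible) of the probability that a run of consecutive heads appears within the first $n$ tosses. The estimate climbs toward 1 as $n$ grows, consistent with the zero–one law's verdict that the infinite-horizon event has probability 1.

```python
import random

def has_run(tosses, run_len):
    """Return True if the 0/1 sequence contains run_len consecutive 1s."""
    streak = 0
    for t in tosses:
        streak = streak + 1 if t == 1 else 0
        if streak >= run_len:
            return True
    return False

def estimate(n_tosses, run_len=5, trials=400, seed=0):
    """Monte Carlo estimate of P(a run of run_len heads occurs in n_tosses fair tosses)."""
    rng = random.Random(seed)
    hits = sum(
        has_run((rng.randint(0, 1) for _ in range(n_tosses)), run_len)
        for _ in range(trials)
    )
    return hits / trials

for n in (50, 500, 5000):
    print(n, estimate(n))
```

For $n = 50$ the estimate sits well below 1, while for $n = 5000$ it is essentially 1; the zero–one law concerns the infinite-horizon limit of exactly this kind of event.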

Formulation


A more general statement of Kolmogorov's zero–one law holds for sequences of independent σ-algebras. Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $F_n$ be a sequence of σ-algebras contained in $\mathcal{F}$. Let

$$G_n = \sigma\!\left(\bigcup_{k=n}^{\infty} F_k\right)$$

be the smallest σ-algebra containing $F_n, F_{n+1}, \ldots$. Then the terminal σ-algebra of the $F_n$ is defined as $\mathcal{T} = \bigcap_{n=1}^{\infty} G_n$.

Kolmogorov's zero–one law asserts that, if the $F_n$ are stochastically independent, then for any event $F \in \mathcal{T}$, one has either $P(F) = 0$ or $P(F) = 1$.

The statement of the law in terms of random variables is obtained from the latter by taking each $F_n$ to be the σ-algebra generated by the random variable $X_n$. A tail event is then by definition an event which is measurable with respect to the σ-algebra generated by all $X_n$, but which is independent of any finite number of the $X_n$. That is, a tail event is precisely an element of the terminal σ-algebra $\mathcal{T}$.

Examples

  1. Let $(\Omega, \mathcal{B}, P)$ be a standard probability space, and let $T$ be an invertible, measure-preserving transformation. Then $T$ is called a Kolmogorov automorphism (K-automorphism, K-transform or K-shift) if there exists a sub-sigma algebra $K \subseteq \mathcal{B}$ such that the following three properties hold: $K \subseteq TK$; $\bigvee_{n=0}^{\infty} T^n K = \mathcal{B}$; and $\bigcap_{n=0}^{\infty} T^{-n} K = \{\Omega, \varnothing\}$. Here, the symbol $\vee$ is the join of sigma algebras, while $\cap$ is set intersection. The last equality should be understood as holding almost everywhere, that is, differing at most on a set of measure zero. A K-automorphism by construction necessarily obeys Kolmogorov's 0–1 law. It can be further shown that all Bernoulli automorphisms are K-automorphisms, but not vice versa.
  2. The presence of an infinite cluster in the context of percolation theory also obeys the 0-1 law.
  3. Let $X_1, X_2, \ldots$ be a sequence of independent random variables; then the event defined below is a tail event: $\left\{\sum_{n=1}^{\infty} X_n \text{ converges}\right\}$. Thus, by Kolmogorov's 0–1 law, it has either probability 0 or probability 1. Note that independence is required for the tail-event condition to hold. Without independence we can consider a sequence that is either $(0, 0, 0, \ldots)$ or $(1, 1, 1, \ldots)$ with probability $\tfrac{1}{2}$ each. In this case the sum converges with probability $\tfrac{1}{2}$.
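The convergence event in Example 3 can be probed numerically. A minimal sketch, assuming the illustrative choice $X_n = \varepsilon_n / n$ with independent random signs $\varepsilon_n = \pm 1$ (so that $\sum_n \operatorname{Var}(X_n) = \sum_n 1/n^2 < \infty$ and the series converges almost surely):

```python
import random

def partial_sum(n_terms, seed):
    """Partial sum of the random harmonic series sum_n epsilon_n / n, epsilon_n = +-1 i.i.d."""
    rng = random.Random(seed)
    return sum(rng.choice((-1.0, 1.0)) / n for n in range(1, n_terms + 1))

# Reusing the same seed regenerates the same sign sequence, so S_{2N} - S_N is
# exactly the tail contribution from terms N+1..2N, whose standard deviation is
# about 1/sqrt(2N); for large N the partial sums are numerically settled.
for seed in range(3):
    s_n = partial_sum(100_000, seed)
    s_2n = partial_sum(200_000, seed)
    print(seed, abs(s_2n - s_n))
```

The observed tail fluctuations are tiny, in line with almost-sure convergence of the series, the probability-1 side of the 0–1 dichotomy.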

from Grokipedia
Kolmogorov's zero–one law is a theorem in probability theory stating that, for a sequence of independent random variables, any event in the tail σ-algebra—defined as the intersection over all finite $n$ of the σ-algebras generated by the variables from the $(n+1)$th onward—has probability either 0 or 1 under the associated probability measure. This result highlights the triviality of tail events in independent systems, implying that such events are determined without dependence on any finite initial segment of the sequence. The theorem is formulated more precisely as follows: Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of independent random variables on a probability space $(\Omega, \mathcal{F}, P)$. The tail σ-algebra $\mathcal{T}$ is $\mathcal{T} = \bigcap_{n=1}^{\infty} \sigma(X_{n+1}, X_{n+2}, \dots)$, where $\sigma(\cdot)$ denotes the generated σ-algebra. Then, for any $A \in \mathcal{T}$, $P(A) \in \{0, 1\}$. The proof relies on the independence property, which ensures that the tail σ-algebra is independent of each finite σ-algebra $\sigma(X_1, \dots, X_n)$, leading to the conditional probability $P(A \mid \sigma(X_1, \dots, X_n)) = P(A)$ for all $n$, and thus $P(A)^2 = P(A)$ by the law of total probability, forcing $P(A)$ to be 0 or 1. Named after Andrey Nikolaevich Kolmogorov (1903–1987), the law was introduced in his seminal 1933 monograph Grundbegriffe der Wahrscheinlichkeitsrechnung (translated as Foundations of the Theory of Probability in 1956), where it appears in an appendix as a generalization of observations about limiting probabilities in infinite independent trials. This work built on earlier ideas in measure-theoretic probability, providing a rigorous foundation for analyzing asymptotic behaviors in stochastic processes.
The theorem's significance lies in its applications to central limit theorems, laws of large numbers, and percolation theory, where it establishes almost-sure convergence properties for tail-dependent quantities, such as the existence of infinite clusters in percolation models or the convergence of series of independent random variables. Extensions include versions for dependent variables or finitely additive measures, but the original law remains a cornerstone for independent sequences.

Introduction

Overview

Kolmogorov's zero–one law is a theorem in probability theory, asserting that for a sequence of independent random variables, any event belonging to the tail sigma-algebra possesses a probability of either 0 or 1. This result was established by Andrey Kolmogorov in 1933 as part of his seminal work laying the axiomatic foundations of probability, Grundbegriffe der Wahrscheinlichkeitsrechnung (translated as Foundations of the Theory of Probability). The law highlights the deterministic nature of certain "remote" events in infinite probabilistic systems generated by independent components. Intuitively, the zero–one law captures the idea that in an infinite sequence of independent random events or variables, the asymptotic or limiting behaviors—those depending on the entire tail of the sequence—are predetermined, occurring with certainty (probability 1) or impossibility (probability 0), rather than with some intermediate likelihood. This reflects how independence across infinitely many trials erodes uncertainty about global outcomes, making them non-random in the limit. The theorem's significance lies in its role as a bridge between finite-sample probability models and infinite-sequence analyses, providing a rigorous basis for understanding convergence phenomena. It underpins key results such as the strong law of large numbers and related convergence theorems, ensuring that limits in infinite settings are constants rather than random variables.

Historical Context

The development of Kolmogorov's zero–one law emerged amid the early 20th-century transition from classical probability, rooted in combinatorial counting and finite games of chance, to a measure-theoretic framework capable of handling infinite sequences of events. This shift addressed longstanding challenges in defining probabilities for infinite independent trials, such as those arising in infinite product spaces, where classical approaches faltered without rigorous axiomatization. Andrey Nikolaevich Kolmogorov played a pivotal role by formalizing probability as a measure on abstract spaces in his seminal 1933 monograph Grundbegriffe der Wahrscheinlichkeitsrechnung (Foundations of the Theory of Probability), where the zero–one law appears as an appendix theorem. This work established the modern axioms of probability—non-negativity, normalization to 1, and countable additivity—enabling precise treatment of limits and tail events in infinite processes. Preceding Kolmogorov's contribution, Émile Borel laid foundational ideas in 1909 through his exploration of denumerable probabilities and their arithmetic applications, particularly in the context of infinite sequences of independent events like coin tosses modeling normal numbers. Borel's analysis informally anticipated zero–one behavior for certain limiting events in infinite trials, highlighting that probabilities in such settings often polarize to 0 or 1, though without the full measure-theoretic rigor. These insights influenced the broader intellectual backdrop, including efforts to extend Bernoulli's law of large numbers to infinite dimensions, but Borel's countable additivity assumption for infinite products required the deeper abstraction Kolmogorov later provided. Kolmogorov's 1933 result integrated seamlessly into his axiomatic framework, resolving ambiguities in infinite product measures by proving that tail events—unaffected by any finite number of coordinates—must have probability 0 or 1 under independence.
Building on Kolmogorov's framework, Paul Lévy extended the zero–one principle in 1937 to symmetric or exchangeable sequences of random variables in his book Théorie de l'addition des variables aléatoires, where he demonstrated similar polarization for events invariant under finite permutations. This drew directly from Kolmogorov's measure-theoretic foundation, applying it to non-independent but symmetrically dependent cases and further solidifying the law's role in the study of stochastic processes. Together, these advancements marked a decisive evolution in probability theory, shifting focus from heuristic approximations to verifiable theorems on infinite-dimensional phenomena.

Mathematical Background

Probability Spaces and Measures

A probability space is formally defined as a triple $(\Omega, \mathcal{F}, P)$, where $\Omega$ is the sample space representing all possible outcomes of a random experiment, $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$ (known as events), and $P : \mathcal{F} \to [0,1]$ is a probability measure satisfying $P(\Omega) = 1$. This axiomatic framework, introduced by Kolmogorov, provides a measure-theoretic foundation for probability theory, ensuring that probabilities can be assigned consistently to events while capturing the structure of uncertainty in a rigorous manner. The measure $P$ adheres to key properties derived from the axioms: non-negativity ($P(A) \geq 0$ for all $A \in \mathcal{F}$), normalization ($P(\Omega) = 1$), and countable additivity (for any countable collection of pairwise disjoint events $\{A_i\}_{i=1}^{\infty} \subset \mathcal{F}$, $P\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)$). These ensure that $P$ behaves as a countably additive measure, allowing the extension of probabilities from finite to infinite unions without inconsistencies. In practice, probability spaces are often taken to be complete, meaning that if $N \subset A$ for some $A \in \mathcal{F}$ with $P(A) = 0$, then $N \in \mathcal{F}$ and $P(N) = 0$; this completion addresses subsets of null sets to maintain measurability. The measure $P$ assigns to each event $A \in \mathcal{F}$ a value $P(A) \in [0,1]$ interpreting the likelihood of $A$ occurring, with $P(A) = 0$ indicating impossibility and $P(A) = 1$ certainty. For scenarios involving infinite sequences of random variables, such as repeated independent trials, the underlying sample space is constructed as an infinite product space $\Omega = \prod_{n=1}^{\infty} \Omega_n$, equipped with the product $\sigma$-algebra $\mathcal{F} = \bigotimes_{n=1}^{\infty} \mathcal{F}_n$ generated by cylinder sets.
Kolmogorov's extension theorem guarantees the existence and uniqueness of a probability measure $P$ on $\mathcal{F}$ when given a consistent family of finite-dimensional distributions $\{\mu_n\}_{n=1}^{\infty}$, where consistency means that for any $n < m$ and Borel sets $B_n \subset \prod_{i=1}^{n} \Omega_i$, $\mu_m\left(B_n \times \prod_{i=n+1}^{m} \Omega_i\right) = \mu_n(B_n)$. This construction ensures that the infinite-dimensional measure aligns with the specified marginal distributions on finite projections, forming the canonical space for stochastic processes.
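A minimal sketch of the consistency condition, using a hypothetical i.i.d. biased-coin marginal and exact rational arithmetic: marginalizing the two-dimensional distribution of a product measure must recover the one-dimensional distribution.

```python
from itertools import product
from fractions import Fraction

# Hypothetical marginal for each coordinate: a biased coin.
p = {0: Fraction(1, 3), 1: Fraction(2, 3)}

def mu(n):
    """Finite-dimensional distribution of the first n coordinates of an i.i.d. product measure."""
    dist = {}
    for omega in product(p, repeat=n):
        prob = Fraction(1)
        for x in omega:
            prob *= p[x]
        dist[omega] = prob
    return dist

# Consistency: mu_2(B x Omega_2) == mu_1(B) for every one-dimensional set B.
mu1, mu2 = mu(1), mu(2)
for (x,), prob in mu1.items():
    assert sum(mu2[(x, y)] for y in p) == prob
print("consistent")
```

The same check extends to any $n < m$; this is exactly the hypothesis under which the extension theorem produces the infinite product measure.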

Sigma-Algebras and Independence

In probability theory, a sigma-algebra $\mathcal{F}$ on a sample space $\Omega$ is a collection of subsets of $\Omega$, known as events, that is closed under complementation and countable unions (and hence also countable intersections and differences), and includes $\Omega$ and the empty set itself. This structure ensures that the events form a Boolean algebra extended to countable operations, allowing the assignment of probabilities in a consistent manner across infinite sequences of events. A filtration $\{\mathcal{F}_n\}_{n=1}^{\infty}$ is an increasing sequence of sigma-algebras on $\Omega$, satisfying $\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots \subseteq \mathcal{F}$, where $\mathcal{F}$ is the overall sigma-algebra of the probability space. Filtrations model the progressive accumulation of information over time or stages in stochastic processes, with each $\mathcal{F}_n$ representing the information available up to stage $n$. Independence of sigma-algebras extends the notion of independent events to collections of events. Specifically, two sigma-algebras $\mathcal{F}_i$ and $\mathcal{F}_j$ (with $i \neq j$) are independent if $P(A \cap B) = P(A)P(B)$ for all $A \in \mathcal{F}_i$ and $B \in \mathcal{F}_j$. This property generalizes to countable collections of sigma-algebras $\{\mathcal{F}_k\}_{k=1}^{\infty}$, where any finite subcollection satisfies the independence condition for their events. If the sigma-algebras are generated by pi-systems (collections closed under finite intersections), the independence of the pi-systems implies the independence of the generated sigma-algebras.
The sigma-algebra generated by a finite collection of random variables $X_1, \dots, X_n$, denoted $\sigma(X_1, \dots, X_n)$, is the smallest sigma-algebra with respect to which each $X_k$ is measurable, consisting of all events of the form $\{\omega \in \Omega : (X_1(\omega), \dots, X_n(\omega)) \in B\}$ for Borel sets $B$ in the appropriate space. This generated sigma-algebra captures all information about the joint behavior of the variables up to that point.
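These definitions can be checked exactly on a toy space. A sketch, assuming two independent fair dice as the generating random variables: an event measurable with respect to $\sigma(X_1)$ and one measurable with respect to $\sigma(X_2)$ factorize under the uniform product measure.

```python
from fractions import Fraction
from itertools import product

# Sample space for two independent fair dice: Omega = {1..6}^2, uniform P.
omega = list(product(range(1, 7), repeat=2))
P = lambda event: Fraction(sum(1 for w in omega if event(w)), len(omega))

# A is sigma(X1)-measurable (depends only on the first coordinate),
# B is sigma(X2)-measurable (depends only on the second).
A = lambda w: w[0] % 2 == 0   # first die even
B = lambda w: w[1] >= 5       # second die shows 5 or 6

# Independence of the generated sigma-algebras: P(A ∩ B) = P(A) P(B).
assert P(lambda w: A(w) and B(w)) == P(A) * P(B)
print(P(A), P(B), P(lambda w: A(w) and B(w)))
```

Here the factorization $\tfrac12 \cdot \tfrac13 = \tfrac16$ holds exactly; the same identity, tested over generating events, is what the pi-system argument upgrades to full sigma-algebras.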

Core Theorem

Statement

Kolmogorov's zero-one law asserts that certain events in a probability space generated by an infinite sequence of independent components have probabilities that are either 0 or 1. Formally, consider a probability space $(\Omega, \mathcal{F}, P)$ equipped with a sequence of independent $\sigma$-algebras $\{\mathcal{F}_n\}_{n=1}^{\infty}$, meaning that for any finite collection of indices, the $\sigma$-algebras $\mathcal{F}_{n_1}, \dots, \mathcal{F}_{n_k}$ are independent with respect to $P$. The tail $\sigma$-algebra is defined via $\mathcal{T}_n = \sigma(\mathcal{F}_k : k \geq n)$ for each $n$, and the overall tail $\sigma$-algebra is $\mathcal{T} = \bigcap_{n=1}^{\infty} \mathcal{T}_n = \bigcap_{n=1}^{\infty} \sigma(\mathcal{F}_n, \mathcal{F}_{n+1}, \dots)$. The law states that for any event $A \in \mathcal{T}$, $P(A) \in \{0, 1\}$. This result holds under the condition that independence applies to the entire infinite sequence of $\sigma$-algebras, ensuring that events in the tail are unaffected by any finite initial segment. An equivalent formulation arises when the $\sigma$-algebras are generated by a sequence of independent random variables $X_1, X_2, \dots$, in which case the tail $\sigma$-algebra consists of events invariant under finite modifications to the sequence, and tail events are independent of each fixed $\mathcal{F}_n$, yielding $P(A \cap B) = P(A)P(B)$ for $B \in \mathcal{F}_n$ and thus $P(A) = P(A)^2$ for $A \in \mathcal{T}$, implying $P(A) \in \{0, 1\}$.

Tail Sigma-Algebra

The tail sigma-algebra is defined in the context of a probability space $(\Omega, \mathcal{F}, P)$ equipped with a sequence of independent sub-$\sigma$-algebras $\{\mathcal{F}_k\}_{k=1}^{\infty}$. For each integer $n \geq 1$, let $\mathcal{T}_n = \sigma\left(\bigcup_{k=n}^{\infty} \mathcal{F}_k\right)$ be the $\sigma$-algebra generated by all events depending only on the coordinates from the $n$th onward. The tail $\sigma$-algebra is then the decreasing intersection $\mathcal{T} = \bigcap_{n=1}^{\infty} \mathcal{T}_n$. Events in $\mathcal{T}$ possess key properties that highlight their focus on long-term behavior. Notably, $\mathcal{T}$ is invariant under finite perturbations: modifying the outcome on any finite collection of coordinates leaves membership in $\mathcal{T}$ unchanged, since for any fixed finite set, there exists an $n$ large enough that the perturbation precedes the tail starting at $n$. Additionally, $\mathcal{T}$ contains events such as the occurrence of infinitely many successes in an infinite sequence of independent coin flips, as this event relies solely on the infinite tail and ignores any finite initial segment. By Kolmogorov's zero-one law, the tail $\sigma$-algebra $\mathcal{T}$ is $P$-trivial, consisting only of events $A \in \mathcal{T}$ with $P(A) = 0$ or $P(A) = 1$ (modulo null sets). This triviality underscores that tail events, while potentially complex in structure, admit no intermediate probabilities under independence. Tail events in $\mathcal{T}$ capture the asymptotic behavior of infinite independent processes in a manner independent of any finite initial segments, a feature that aligns with exchangeable sequences, where the tail $\sigma$-algebra is contained within the exchangeable $\sigma$-algebra and shares its triviality properties under appropriate zero-one laws.

Proof and Derivation

Key Lemmas

A fundamental auxiliary result in the proof of Kolmogorov's zero-one law is the 0-1 lemma for single events, which states that if an event $A$ in a probability space $(\Omega, \mathcal{F}, P)$ is independent of its generated $\sigma$-algebra $\sigma(A)$, then $P(A) \in \{0, 1\}$. To see this, note that independence implies $P(A \cap A) = P(A)P(A)$, so $P(A) = P(A)^2$, or equivalently $P(A)(1 - P(A)) = 0$, yielding the binary outcome. This lemma captures the triviality arising from self-independence and forms the basis for extending the result to larger $\sigma$-algebras. Another preparatory lemma establishes the independence of the tail $\sigma$-algebra from finite collections of initial $\sigma$-algebras. Specifically, given a sequence of independent $\sigma$-algebras $\mathcal{F}_1, \mathcal{F}_2, \dots$, let $\sigma(\mathcal{F}_1, \dots, \mathcal{F}_n)$ denote the $\sigma$-algebra generated by the first $n$ of them, and let $\mathcal{T}$ be the tail $\sigma$-algebra, defined as the intersection over $n$ of the $\sigma$-algebras generated by $\mathcal{F}_{n+1}, \mathcal{F}_{n+2}, \dots$. Then, for any fixed $n$, $\mathcal{T}$ is independent of $\sigma(\mathcal{F}_1, \dots, \mathcal{F}_n)$. This follows from the independence of the sequence: the tail $\mathcal{T}$ is contained in each $\sigma(\mathcal{F}_{k+1}, \mathcal{F}_{k+2}, \dots)$ for $k \geq n$, and such $\sigma$-algebras are independent of the initial finite ones by grouping the independent components. Iterating over $n$ shows that $\mathcal{T}$ is independent of the entire $\sigma(\mathcal{F}_1, \mathcal{F}_2, \dots)$. To extend independence properties from generating sets to full $\sigma$-algebras, the Dynkin lemma (also known as the $\pi$-$\lambda$ theorem) is applied.

This lemma states that if two $\sigma$-algebras $\mathcal{G}$ and $\mathcal{H}$ have the property that every set in a $\pi$-system generating $\mathcal{G}$ is independent of every set in $\mathcal{H}$, and $\mathcal{H}$ is itself a $\sigma$-algebra, then $\mathcal{G}$ and $\mathcal{H}$ are independent. In the context of the zero-one law, this is used to upgrade finite-dimensional independence (from cylinder sets or generators) to independence between the tail $\mathcal{T}$ and the full initial $\sigma$-algebra, ensuring the result holds for all measurable events rather than just generators. The monotone class theorem provides another tool for verifying closure properties in the tail $\sigma$-algebra under limits. It asserts that the smallest monotone class containing an algebra $\mathcal{A}$ (closed under finite intersections and containing $\Omega$) coincides with the $\sigma$-algebra generated by $\mathcal{A}$. For tail events, this theorem confirms that $\mathcal{T}$ is closed under monotone limits of sequences of events, which is essential when establishing that tail probabilities remain 0 or 1 under limiting operations, such as countable unions or intersections defining more complex tail behaviors. This closure ensures the triviality result applies robustly to the entire structure of $\mathcal{T}$.

Main Proof

To prove Kolmogorov's zero-one law, consider a sequence of independent random variables $X_1, X_2, \dots$ on a probability space $(\Omega, \mathcal{F}, P)$. Let $\mathcal{F}_n = \sigma(X_1, \dots, X_n)$ for each $n \geq 1$, and define the tail sigma-algebra $\mathcal{T} = \bigcap_{n=1}^{\infty} \mathcal{T}_n$, where $\mathcal{T}_n = \sigma(X_{n+1}, X_{n+2}, \dots)$. The goal is to show that for every $A \in \mathcal{T}$, $P(A) = 0$ or $P(A) = 1$. First, establish that $\mathcal{T}$ is independent of each $\mathcal{F}_n$. Note that $\mathcal{T} \subseteq \mathcal{T}_n = \sigma(X_{n+1}, X_{n+2}, \dots)$ for every $n$. By the countable independence of the $X_i$, the sigma-algebras $\mathcal{F}_n$ and $\mathcal{T}_n$ are independent, as they are generated by disjoint collections of independent random variables. Since independence of sigma-algebras is preserved under sub-sigma-algebras, it follows that $\mathcal{T}$ and $\mathcal{F}_n$ are independent for each $n$. This uses the standard extension of independence from pi-systems to sigma-algebras generated by them. Next, let $\mathcal{F}_\infty = \sigma\left(\bigcup_{n=1}^{\infty} \mathcal{F}_n\right) = \sigma(X_1, X_2, \dots)$. The $\pi$-system of finite-dimensional cylinder sets generates $\mathcal{F}_\infty$, and each such cylinder set belongs to some $\mathcal{F}_n$. Since $\mathcal{T}$ is independent of every $\mathcal{F}_n$, every cylinder set is independent of every set in $\mathcal{T}$. By the Dynkin $\pi$-$\lambda$ theorem, $\mathcal{T}$ is independent of $\mathcal{F}_\infty$. Now, fix $A \in \mathcal{T}$. Since $\mathcal{T} \perp \mathcal{F}_\infty$ and $A \in \mathcal{T} \subseteq \mathcal{F}_\infty$, it follows that $A$ is independent of $\mathcal{F}_\infty$, and in particular independent of $\sigma(A) \subseteq \mathcal{F}_\infty$. Thus, $P(A \cap A) = P(A)P(A)$, or $P(A) = P(A)^2$.

Solving $P(A)^2 - P(A) = 0$ yields $P(A) \in \{0, 1\}$. Since $A$ was arbitrary, every event in $\mathcal{T}$ has probability 0 or 1. This argument applies in general probability spaces without assuming the absence of atoms or other restrictive conditions on the measure.
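The chain of implications in the proof above can be condensed as:

```latex
\mathcal{T} \subseteq \mathcal{T}_n
\;\Longrightarrow\; \mathcal{T} \perp \mathcal{F}_n \ \ \forall n
\;\Longrightarrow\; \mathcal{T} \perp \mathcal{F}_\infty \ \ (\pi\text{--}\lambda\ \text{theorem})
\;\Longrightarrow\; A \perp A \ \ \forall A \in \mathcal{T}
\;\Longrightarrow\; P(A) = P(A)^2
\;\Longrightarrow\; P(A) \in \{0, 1\}.
```

The crucial and least obvious step is the middle one, where independence from each finite $\mathcal{F}_n$ is upgraded to independence from the full $\mathcal{F}_\infty$; everything after it is elementary algebra.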

Examples and Illustrations

Infinite Sequence of Trials

One classic illustration of Kolmogorov's zero–one law arises in the context of an infinite sequence of independent Bernoulli trials, where each trial represents a success or failure with fixed probability. The sample space is $\Omega = \{0,1\}^{\mathbb{N}}$, the set of all infinite sequences of 0s and 1s, equipped with the product sigma-algebra and the probability measure induced by i.i.d. Bernoulli($p$) random variables $X_i(\omega) = \omega_i$ for $i \in \mathbb{N}$ and $p \in [0,1]$. The sigma-algebras $\mathcal{F}_n = \sigma(X_1, \dots, X_n)$ are generated by the first $n$ coordinates and form an independent filtration. Tail events, measurable with respect to the tail sigma-algebra $\bigcap_n \sigma(X_{n+1}, X_{n+2}, \dots)$, must have probability 0 or 1 under this setup by the zero-one law. A key tail event is $A = \{\omega \in \Omega : \sum_{i=1}^{\infty} X_i(\omega) = \infty\}$, the set of sequences containing infinitely many 1s. By the zero-one law, $P(A) \in \{0,1\}$. Specifically, $P(A) = 0$ if $p = 0$, as all $X_i = 0$ almost surely, and $P(A) = 1$ if $p > 0$. This follows from the second Borel-Cantelli lemma applied to the independent events $\{X_n = 1\}$, each with probability $p$, whose probabilities sum to infinity, implying $P(\limsup_n \{X_n = 1\}) = 1$. The event $\{\limsup_{n \to \infty} X_n = 1\}$, equivalent to $A$ and representing infinitely many occurrences of 1, similarly has probability 1 for $p > 0$ as a tail event, with the value determined via the divergent Borel-Cantelli sum. Thus, the zero-one law underscores that, for positive $p$, infinite independent trials almost surely yield infinitely many successes, a deterministic long-run behavior enforced by the independence structure.

Branching Processes

The Galton–Watson branching process models the evolution of a population over discrete generations, starting with a single individual ($Z_0 = 1$), where each individual independently produces a random number of offspring according to a fixed offspring distribution $\{p_k\}_{k=0}^{\infty}$, with $p_k = P(X = k)$ for offspring number $X$ and mean $m = \mathbb{E}[X]$. The population size at generation $n$, $Z_n$, is defined recursively by $Z_n = \sum_{i=1}^{Z_{n-1}} X_{n,i}$, where the $X_{n,i}$ are independent and identically distributed as $X$. The sigma-algebras $\mathcal{F}_n = \sigma(Z_0, Z_1, \dots, Z_n)$ form an increasing filtration generated by the process up to generation $n$, and the independence of the offspring variables ensures that sigma-algebras generated by different subtrees (branches from a common ancestor) are independent. The ultimate extinction event $E$ is the event that the population dies out, expressed as $E = \bigcup_{n=1}^{\infty} \{Z_n = 0\}$ (or equivalently, $\lim_{n \to \infty} Z_n = 0$). This event is measurable in the overall process $\sigma$-algebra $\mathcal{F}_\infty = \sigma\left(\bigcup_n \mathcal{F}_n\right)$. The independence across branches implies that the tail $\sigma$-algebra $\mathcal{T} = \bigcap_n \sigma(Z_{n+1}, Z_{n+2}, \dots)$ is trivial in the sense of Kolmogorov's zero-one law, meaning events in $\mathcal{T}$ have probabilities 0 or 1. This triviality arises from the exchangeability and independence in the branching structure, highlighting asymptotic behaviors independent of any finite number of generations. The extinction probability $P(E)$ equals 1 if $m \leq 1$ (assuming $P(X = 1) < 1$ in the critical case), while if $m > 1$, it equals the unique fixed point $\eta \in [0, 1)$ of the probability generating function $f(s) = \mathbb{E}[s^X] = \sum_{k=0}^{\infty} p_k s^k$, satisfying $\eta = f(\eta)$. This fixed point $\eta$ is the smallest nonnegative solution of the equation and can be found iteratively as $\eta = \lim_{n \to \infty} f_n(0)$, where $f_n = f \circ \cdots \circ f$ ($n$ times).

The independence of subtrees underpins the functional equation for $\eta$ and the triviality of the tail $\sigma$-algebra, illustrating how Kolmogorov's law captures deterministic long-term outcomes in such independent systems.
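The iterative computation $\eta = \lim_{n \to \infty} f_n(0)$ is easy to carry out. A sketch assuming a hypothetical Poisson($m$) offspring law, whose probability generating function is $f(s) = e^{m(s-1)}$:

```python
import math

def extinction_prob(f, iters=2000):
    """Smallest fixed point of the offspring p.g.f. f on [0, 1],
    computed as the limit of f_n(0) = f(f(...f(0)...))."""
    s = 0.0
    for _ in range(iters):
        s = f(s)
    return s

# Poisson(m) offspring: f(s) = exp(m * (s - 1)).
supercritical = extinction_prob(lambda s: math.exp(2.0 * (s - 1.0)))   # m = 2 > 1
subcritical   = extinction_prob(lambda s: math.exp(0.9 * (s - 1.0)))   # m = 0.9 <= 1

print(supercritical)  # approx 0.2032: the root of eta = e^{2(eta - 1)} in [0, 1)
print(subcritical)    # approx 1.0: subcritical processes die out almost surely
```

The iteration converges monotonically from 0 to the smallest fixed point, matching the dichotomy stated above: extinction probability strictly below 1 in the supercritical case, and 1 otherwise.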

Applications and Extensions

Ergodic Theory Connections

Kolmogorov's zero-one law establishes that the tail sigma-algebra of a sequence of independent random variables is trivial, meaning every tail event has probability 0 or 1. In the context of ergodic theory, this triviality directly implies ergodicity for stationary processes generated by such sequences. Specifically, for a stationary process under the shift transformation, the sigma-algebra of invariant events is contained within the tail sigma-algebra. Thus, the zero-one law ensures that invariant events also have probabilities of 0 or 1, satisfying the definition of an ergodic measure-preserving transformation, in which no non-trivial invariant sets exist. This connection is particularly evident in applications to shift spaces, where the law demonstrates that invariant events under the shift map have probability 0 or 1. Shift spaces, constructed from sequences of independent and identically distributed random variables, model measures preserved by the bilateral shift. The resulting triviality of the invariant sigma-algebra underpins the ergodicity of these spaces and serves as a foundation for the existence of unique ergodic measures, especially in systems with maximal entropy, where the invariant measure is uniquely determined by the shift dynamics. Kolmogorov extended these ideas to groups of automorphisms in ergodic theory, showing that factors of such systems inherit ergodic properties. In his work on measure-preserving groups, he demonstrated that the structure of invariant measures and the triviality analogous to tail events ensure that factors—projections onto subsystems—are themselves ergodic, preserving the zero-one behavior for invariant subsets. This extension broadened the law's impact beyond sequences to abstract dynamical systems defined by group actions. A representative example is the Bernoulli shift, which acts on the infinite product space of independent trials and is mixing, hence ergodic.

The shift's invariant events belong to the tail sigma-algebra, which is trivial by the zero-one law, confirming that the Bernoulli shift is a K-automorphism with trivial tail structure. This property highlights how the law certifies the strong mixing behavior essential for applications in ergodic theory and dynamical systems.

Measure-Theoretic Probability

In measure-theoretic probability, Kolmogorov's zero-one law plays a crucial role in the construction of infinite product measures, ensuring the consistency and well-definedness of limits in infinite-dimensional spaces. The Kolmogorov extension theorem facilitates the extension of finite-dimensional distributions to infinite products on standard Borel spaces. For independent sequences, the resulting measure satisfies Kolmogorov's zero-one law, ensuring that tail events—those measurable with respect to the tail σ-algebra—have probabilities of 0 or 1, thereby stabilizing the measure on spaces like the infinite product of probability spaces. This is particularly evident in the realization of Gaussian measures on separable Banach or Fréchet spaces, where a related zero-one law for Gaussian measures implies that measurable affine subspaces have probability 0 or 1 under the Gaussian measure, ensuring the existence and uniqueness of limits for cylinder sets and supporting the regularity of paths in function spaces such as $C[0,1]$ for the Wiener measure. For instance, sets invariant under translations in the Cameron-Martin space associated with a centered nondegenerate Gaussian measure $\gamma$ on a Banach space $X$ also satisfy this zero-one property, which underpins the almost sure constancy of invariant functions and the well-posedness of stochastic integrals in infinite dimensions. A key application arises in stochastic processes with independent increments, where the triviality of the tail σ-algebra validates the strong law of large numbers (SLLN). For a sequence of independent random variables with finite mean, the event that the Cesàro averages converge to the expected value is tail-measurable, so by the zero-one law its probability is 0 or 1; combined with the convergence guaranteed by the SLLN's other hypotheses, that probability is 1.

In the context of processes like Lévy processes, which have stationary independent increments, this extends the SLLN to functional forms, confirming that sample paths satisfy ergodic properties and asymptotic determinism almost surely, without requiring additional mixing conditions. In modern extensions to quantum probability and non-commutative settings, the zero-one law generalizes to tail von Neumann algebras, preserving the dichotomy for events in finite W*-algebras equipped with a trace. Specifically, in a non-commutative probability space defined by a von Neumann algebra and a faithful normal state, independence of subalgebras leads to tail algebras in which projections have trace 0 or 1, analogous to classical tail triviality. This framework supports non-commutative analogs of ergodicity and mixing properties, with conditional factorizability implying the zero-one behavior for tail events in operator algebras. The law's validity hinges on independence; without it, the tail σ-algebra need not be trivial, allowing probabilities strictly between 0 and 1 for tail events in dependent sequences. For negatively dependent processes, such as those with repulsive interactions, tail triviality can fail, as demonstrated by examples where infinite-volume limits exhibit non-extreme measures on the tail algebra. A simple counterexample is a sequence in which all variables equal a single Bernoulli random variable with parameter $p \in (0,1)$, rendering the tail σ-algebra non-trivial, with events of probability $p$.
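The dependent counterexample above can be simulated directly. A sketch, assuming the illustrative parameter $p = 0.3$: every coordinate of the sequence is set equal to one shared Bernoulli($p$) variable $Y$, so the tail event "$X_n = 1$ for infinitely many $n$" coincides with $\{Y = 1\}$ and has probability $p$, strictly between 0 and 1.

```python
import random

def tail_event_estimate(p, trials=20_000, seed=0):
    """Monte Carlo estimate of P(X_n = 1 infinitely often) when every X_n
    equals a single shared Bernoulli(p) variable Y (a dependent sequence)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = 1 if rng.random() < p else 0
        # Every coordinate equals y, so the tail event occurs iff y == 1.
        hits += y
    return hits / trials

print(tail_event_estimate(0.3))  # approx 0.3: neither 0 nor 1
```

The estimate hovers near $p$, confirming that without independence the zero-one dichotomy fails.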
