Elementary event
from Wikipedia

In probability theory, an elementary event, also called an atomic event or sample point, is an event which contains only a single outcome in the sample space.[1] Using set theory terminology, an elementary event is a singleton. Elementary events and their corresponding outcomes are often written interchangeably for simplicity, since such an event corresponds to precisely one outcome.

The following are examples of elementary events:

  • All sets {k}, where k is a natural number, if objects are being counted and the sample space is S = {1, 2, 3, …} (the natural numbers).
  • {HH}, {HT}, {TH}, and {TT} if a coin is tossed twice; S = {HH, HT, TH, TT}, where H stands for heads and T for tails.
  • All sets {x}, where x is a real number. Here X is a random variable with a normal distribution and S = (−∞, +∞). This example shows that, because the probability of each elementary event is zero, the probabilities assigned to elementary events do not determine a continuous probability distribution.
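The two-coin-toss example can be made concrete in a short Python sketch (variable names are illustrative) that enumerates the sample space and forms its elementary events as singletons:

```python
from itertools import product

# Sample space for two coin tosses: each outcome is an ordered pair like ('H', 'T').
sample_space = set(product("HT", repeat=2))

# Each elementary event is a singleton set containing exactly one outcome.
elementary_events = [frozenset({outcome}) for outcome in sample_space]

print(sorted(sample_space))    # the four outcomes HH, HT, TH, TT
print(len(elementary_events))  # 4 -- one elementary event per outcome
```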

Probability of an elementary event

Elementary events may occur with probabilities that are between zero and one (inclusive). In a discrete probability distribution whose sample space is finite, each elementary event is assigned a particular probability. In contrast, in a continuous distribution, individual elementary events must all have probability zero.
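This contrast can be sketched in Python (a minimal illustration with made-up numbers): a finite discrete distribution assigns each elementary event a positive probability summing to 1, while under the uniform distribution on [0, 1] the mass of an interval shrinking onto a single point vanishes:

```python
# Discrete case: a loaded die (illustrative probabilities) where every
# elementary event carries positive probability and the total is 1.
pmf = {1: 0.1, 2: 0.1, 3: 0.2, 4: 0.2, 5: 0.2, 6: 0.2}
assert abs(sum(pmf.values()) - 1.0) < 1e-12

# Continuous case: under the uniform distribution on [0, 1], the probability
# of an interval is its length, so P({x}) is the limit of 2*eps as eps -> 0.
def uniform_prob(a, b):
    """P([a, b]) for the uniform distribution on [0, 1]."""
    lo, hi = max(a, 0.0), min(b, 1.0)
    return max(hi - lo, 0.0)

for eps in (0.1, 0.01, 0.001):
    print(uniform_prob(0.5 - eps, 0.5 + eps))  # shrinks toward 0
```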

Some "mixed" distributions contain both stretches of continuous elementary events and some discrete elementary events; the discrete elementary events in such distributions can be called atoms or atomic events and can have non-zero probabilities.[2]

Under the measure-theoretic definition of a probability space, the probability of an elementary event need not even be defined. In particular, the set of events on which probability is defined may be some σ-algebra on the sample space and not necessarily the full power set.

from Grokipedia
In probability theory, an elementary event, also called an atomic event or sample point, is the fundamental unit of a probability space, defined as a singleton set containing exactly one outcome from the sample space, which represents all possible results of a random experiment. These events serve as the building blocks for constructing more complex events, which are subsets of the sample space comprising multiple elementary outcomes, allowing for the assignment of probabilities to broader scenarios in accordance with Kolmogorov's axioms. In discrete probability models, such as coin flips or dice rolls, each elementary event typically has an equal probability if the outcomes are equally likely, often denoted as 1/n where n is the number of sample points. The concept is essential for defining the sigma-algebra of events and ensuring that probabilities are well-defined, non-negative, and sum to 1 over the entire sample space.

Definition and Context

Core Definition

In probability theory, an elementary event is a singleton subset of the sample space Ω, which represents the universal set of all possible outcomes in a random experiment, and it corresponds to an indivisible outcome that cannot be decomposed into simpler events. Formally, if Ω is the sample space, then an elementary event is denoted as {ω} for some ω ∈ Ω. These events act as the atoms of the event space, serving as the basic building blocks from which all other events are formed by taking unions of such singletons. The concept of the elementary event gained prominence through Andrey Kolmogorov's axiomatic formulation of probability in 1933, where he explicitly identified these as the fundamental elements e of the set E (the space of elementary events), distinguishing them from composite random events that are subsets of E.

Relation to Sample Space

In probability theory, the sample space, denoted Ω, represents the universal set encompassing all possible outcomes of a random experiment, while elementary events correspond to the individual singleton subsets {ω} for each ω ∈ Ω. These elementary events serve as the atomic units, capturing the most basic, indivisible results of the experiment. The structure of events builds upon these elementary events through the formation of a sigma-algebra ℱ on Ω, which includes the empty set and Ω itself and is closed under countable unions, intersections, and complements. In finite sample spaces, ℱ is often the full power set of Ω, consisting of all possible subsets, each of which is a finite union of elementary events. For infinite sample spaces, particularly continuous ones, the sigma-algebra is typically generated by a basis such as the open intervals (the Borel sigma-algebra), which includes the elementary events as measurable sets, though they are assigned probability zero. Events are measurable sets in this sigma-algebra, not necessarily countable unions of singletons. Regardless of the cardinality of Ω, elementary events retain their indivisible nature, forming the foundational layer from which all composite events are derived. This relational structure underscores the prerequisite role of elementary events in establishing the event algebra of a probability space, enabling the subsequent definition of measurable sets and probability measures.
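For a finite sample space the power-set remark can be checked directly. The Python sketch below (an illustrative three-element space) builds the full power set, verifies that every nonempty event is a union of the elementary events it contains, and checks closure under complement:

```python
from itertools import chain, combinations

omega = frozenset({"a", "b", "c"})  # a small illustrative sample space

def power_set(s):
    """All subsets of s: for a finite space, the largest possible sigma-algebra."""
    items = list(s)
    subsets = chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))
    return {frozenset(c) for c in subsets}

events = power_set(omega)

# Every nonempty event is a union of the elementary events it contains.
assert all(ev == frozenset.union(*(frozenset({w}) for w in ev)) for ev in events if ev)

# Closure under complement, one of the sigma-algebra requirements.
assert all(omega - ev in events for ev in events)

print(len(events))  # 2**3 = 8
```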

Probability Assignment

Probability Measure on Elementary Events

In probability theory, the assignment of probabilities to elementary events forms the foundational layer of a probability space, where an elementary event is a singleton subset {ω} for some outcome ω in the sample space Ω. The probability measure P is defined such that it maps each elementary event to a value in [0, 1], ensuring that probabilities are non-negative and collectively normalize to unity across the sample space. This measure extends to more complex events through the principles of additivity, establishing a consistent framework for probabilistic reasoning.

The foundation for this stems from Kolmogorov's three axioms, which apply directly to elementary events in discrete settings. Specifically, the first axiom requires that P({ω}) ≥ 0 for every ω ∈ Ω, guaranteeing non-negativity as a core property of valid probabilities. The second enforces normalization by stipulating that the sum of probabilities over all elementary events equals 1, i.e., ∑_{ω ∈ Ω} P({ω}) = 1, which ensures the total probability is conserved. These properties collectively define a valid probability measure over the elementary events, serving as the building blocks of the probability space.

Furthermore, the third axiom, countable additivity, extends the measure to unions of disjoint elementary events. For a countable collection of disjoint elementary events {ω_i}, the probability of their union is the sum of their individual probabilities:

P(⋃_i {ω_i}) = ∑_i P({ω_i})

This additivity principle allows the measure on singletons to propagate to arbitrary events within the sigma-algebra generated by the elementary events, maintaining consistency in both finite and countably infinite discrete cases. Non-negativity and normalization remain invariant, preventing negative or super-unitary probabilities and upholding the integrity of the measure.
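These axioms are easy to verify mechanically for a small discrete measure. The Python sketch below (illustrative probabilities) extends P from singletons to arbitrary events by additivity and checks each axiom in turn:

```python
# A discrete measure on elementary events (illustrative values).
pmf = {"w1": 0.5, "w2": 0.3, "w3": 0.2}

def P(event):
    """Extend the measure from singletons to any event by additivity."""
    return sum(pmf[w] for w in event)

# Axiom 1: non-negativity of every elementary event.
assert all(p >= 0 for p in pmf.values())

# Axiom 2: normalization over the whole sample space.
assert abs(P(pmf.keys()) - 1.0) < 1e-12

# Axiom 3: additivity over disjoint events (here, unions of singletons).
assert abs(P({"w1", "w2"}) - (P({"w1"}) + P({"w2"}))) < 1e-12

print(P({"w1", "w3"}))  # 0.5 + 0.2
```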

Uniform vs. Non-Uniform Cases

In the uniform case, each elementary event ω ∈ Ω in a finite sample space Ω is assigned equal probability, such that P({ω}) = 1/|Ω|. This assumption holds for scenarios like fair coin tosses, where the sample space Ω = {heads, tails} yields P({heads}) = P({tails}) = 1/2, or standard dice rolls with Ω = {1, 2, …, 6} and each face equally likely at 1/6.

In contrast, the non-uniform case allows probabilities to vary across elementary events, provided they are non-negative and sum to 1 over Ω. These assignments are modeled using a probability mass function (PMF), which specifies P({ω}) for each ω. For instance, a biased coin might have P({heads}) = 0.7 and P({tails}) = 0.3, reflecting unequal likelihoods due to physical imperfections.

In the uniform case, for any event E formed as a union of elementary events, the probability simplifies to P(E) = |E|/|Ω|, where |E| counts the favorable elementary outcomes. Uniform assignments thus reduce probability computations to mere counting of outcomes, avoiding the need to sum disparate values from a PMF. However, many real-world scenarios, such as biased experiments or weighted sampling, necessitate non-uniform models to accurately capture varying likelihoods among elementary events.
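A brief Python sketch of the two cases (using exact fractions, with the 0.7/0.3 bias from the text) shows that both are valid assignments, but only the uniform one reduces to counting:

```python
from fractions import Fraction

# Uniform case: a fair coin.
fair = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}

# Non-uniform case: a biased coin with the 7/10 vs 3/10 split from the text.
biased = {"heads": Fraction(7, 10), "tails": Fraction(3, 10)}

for pmf in (fair, biased):
    # Both PMFs are valid: non-negative and summing to 1.
    assert all(p >= 0 for p in pmf.values())
    assert sum(pmf.values()) == 1

# Under the uniform PMF, P({heads}) is 1/|Omega|; under the biased PMF
# it must be read from the mass function instead of counted.
print(fair["heads"], biased["heads"])  # 1/2 7/10
```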

Examples and Illustrations

Discrete Sample Spaces

In discrete sample spaces, which are finite or countably infinite sets of possible outcomes, elementary events correspond to the individual singleton outcomes that form the basic building blocks of the probability model. These spaces allow for the explicit listing of all outcomes, making it straightforward to identify and work with elementary events as the indivisible units from which more complex events are constructed.

A classic example is the toss of a fair coin, where the sample space Ω = {H, T} consists of two outcomes: heads (H) or tails (T). Here, the elementary events are the singletons {H} and {T}, each representing an atomic outcome of the experiment. In a uniform probability assignment, the probability of each elementary event is P({H}) = P({T}) = 1/2. Another illustrative case is the roll of a fair six-sided die, with Ω = {1, 2, 3, 4, 5, 6}. The elementary events are {1}, {2}, …, {6}, each denoting the occurrence of a specific face. Under uniform probability, the measure assigned to each is P({i}) = 1/6 for i = 1, 2, …, 6.

The discrete nature of these sample spaces enables the direct enumeration of elementary events, which in turn facilitates the calculation of probabilities for compound events through simple summation of the probabilities of the constituent elementary events. This approach is particularly valuable in finite cases, as it provides a method to verify that the total probability sums to 1 across all elementary events.
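The die example can be enumerated directly. The Python sketch below builds the uniform measure on the six elementary events, checks that the total is exactly 1, and computes a compound event by summation:

```python
from fractions import Fraction

# Fair six-sided die: sample space and uniform measure on elementary events.
omega = set(range(1, 7))
pmf = {w: Fraction(1, 6) for w in omega}

def P(event):
    """Probability of a compound event: sum over its elementary events."""
    return sum(pmf[w] for w in event)

# Total probability across all elementary events is exactly 1.
assert P(omega) == 1

# Compound event "the roll is even" = {2} u {4} u {6}.
evens = {2, 4, 6}
print(P(evens))  # 1/2, matching |E|/|Omega| = 3/6
```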

Continuous Sample Spaces

In continuous sample spaces, the sample space Ω is uncountable, typically consisting of all real numbers within an interval or more complex sets, such as Ω = [0, 1] for a uniform distribution representing proportions or normalized times. Here, elementary events are singletons {x} for each x ∈ Ω, but unlike in discrete cases, these have measure zero under the Lebesgue measure. For the uniform distribution on [0, 1], the probability P({x}) = 0 for any specific x, as the total probability mass of 1 is distributed continuously across the interval, making the likelihood of exact points negligible.

Probability in such spaces is assigned via probability density functions rather than directly to singletons; meaningful events are intervals or sets with positive length, where P([a, b]) = ∫_a^b f(t) dt, with density f(t) = 1 in the uniform case. Elementary events serve as idealized building blocks, but their zero probability reflects the infinite divisibility of the space, ensuring the axioms of probability are satisfied without assigning positive mass to uncountably many points. A similar structure applies to non-uniform continuous distributions, such as the standard normal distribution with density f(x) = (1/√(2π)) e^(−x²/2).
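For the standard normal case, interval probabilities can be computed in Python from the error function in the standard library (a sketch; `Phi` and `normal_prob` are illustrative names), confirming that intervals carry positive mass while any single point carries none:

```python
import math

def Phi(x):
    """Standard normal CDF, written via the stdlib error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_prob(a, b):
    """P([a, b]) = integral of the standard normal density over [a, b]."""
    return Phi(b) - Phi(a)

# Intervals have positive probability (about 0.68 within one standard deviation)...
print(normal_prob(-1.0, 1.0))

# ...but the degenerate interval [x, x] -- an elementary event -- has mass zero.
assert normal_prob(0.3, 0.3) == 0.0
```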