Point process
from Wikipedia

In statistics and probability theory, a point process or point field is a set of a random number of mathematical points randomly located on a mathematical space such as the real line or Euclidean space.[1][2]

Point processes on the real line form an important special case that is particularly amenable to study,[3] because the points are ordered in a natural way, and the whole point process can be described completely by the (random) intervals between the points. These point processes are frequently used as models for random events in time, such as the arrival of customers in a queue (queueing theory), of impulses in a neuron (computational neuroscience), particles in a Geiger counter, location of radio stations in a telecommunication network[4] or of searches on the world-wide web.

General point processes on a Euclidean space can be used for spatial data analysis,[5][6] which is of interest in such diverse disciplines as forestry, plant ecology, epidemiology, geography, seismology, materials science, astronomy, telecommunications, computational neuroscience,[7] economics[8] and others.

Conventions

Since point processes were historically developed by different communities, there are different mathematical interpretations of a point process, such as a random counting measure or a random set,[9][10] and different notations. The notations are described in detail on the point process notation page.

Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process,[11][12] though it has been remarked that the difference between point processes and stochastic processes is not clear.[12] Others consider a point process as a stochastic process, where the process is indexed by sets of the underlying space[a] on which it is defined, such as the real line or $n$-dimensional Euclidean space.[15][16] Other stochastic processes such as renewal and counting processes are studied in the theory of point processes.[17][12] Sometimes the term "point process" is not preferred, as historically the word "process" denoted an evolution of some system in time, so a point process is also called a random point field.[18]

Mathematics

In mathematics, a point process is a random element whose values are "point patterns" on a set S. While in the exact mathematical definition a point pattern is specified as a locally finite counting measure, it is sufficient for more applied purposes to think of a point pattern as a countable subset of S that has no limit points.

Definition

To define general point processes, we start with a probability space $(\Omega, \mathcal{F}, P)$, and a measurable space $(S, \mathcal{S})$ where $S$ is a locally compact second countable Hausdorff space and $\mathcal{S}$ is its Borel σ-algebra. Consider now an integer-valued locally finite kernel $\xi$ from $(\Omega, \mathcal{F}, P)$ into $(S, \mathcal{S})$, that is, a mapping $\Omega \times \mathcal{S} \to \mathbb{Z}_+$ such that:

  1. For every $\omega \in \Omega$, $\xi(\omega, \cdot)$ is an (integer-valued) locally finite measure on $S$.
  2. For every $B \in \mathcal{S}$, $\xi(\cdot, B) : \Omega \to \mathbb{Z}_+$ is a random variable over $(\Omega, \mathcal{F}, P)$.

This kernel defines a random measure in the following way. We would like to think of $\xi$ as defining a mapping which maps $\omega \in \Omega$ to a measure $\xi_\omega \in \mathcal{M}(S)$ (namely, $\xi_\omega(\cdot) = \xi(\omega, \cdot)$), where $\mathcal{M}(S)$ is the set of all locally finite measures on $S$. Now, to make this mapping measurable, we need to define a $\sigma$-field over $\mathcal{M}(S)$. This $\sigma$-field is constructed as the minimal algebra so that all evaluation maps of the form $\pi_B : \mu \mapsto \mu(B)$, where $B \subseteq S$ is relatively compact, are measurable. Equipped with this $\sigma$-field, $\xi$ is then a random element, where for every $\omega \in \Omega$, $\xi_\omega$ is a locally finite measure over $S$.

Now, by a point process on we simply mean an integer-valued random measure (or equivalently, integer-valued kernel) constructed as above. The most common example for the state space S is the Euclidean space Rn or a subset thereof, where a particularly interesting special case is given by the real half-line [0,∞). However, point processes are not limited to these examples and may among other things also be used if the points are themselves compact subsets of Rn, in which case ξ is usually referred to as a particle process.

Despite the name, a point process is not necessarily a stochastic process in the classical sense, since $S$ might not be a subset of the real line, as the word "process" might suggest.

Representation

Every instance (or event) of a point process ξ can be represented as

$$\xi = \sum_{i=1}^{n} \delta_{X_i},$$

where $\delta$ denotes the Dirac measure, $n$ is an integer-valued random variable and $X_i$ are random elements of $S$. If the $X_i$'s are almost surely distinct (or equivalently, almost surely $\xi(\{x\}) \le 1$ for all $x \in S$), then the point process is known as simple.

Another different but useful representation of an event (an event in the event space, i.e. a series of points) is the counting notation, where each instance is represented as a counting function $N$, an integer-valued right-continuous step function:

$$N(t_1, t_2) := \xi\bigl((t_1, t_2]\bigr),$$

which is the number of events in the observation interval $(t_1, t_2]$. It is sometimes denoted by $N_{t_1, t_2}$, and $N_T$ or $N(T)$ mean $N(0, T)$.
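As a concrete illustration, the counting notation is easy to realize numerically: a realization of a temporal point process is just a sorted array of event times, and $N(t_1, t_2)$ counts the entries falling in $(t_1, t_2]$. The following is a minimal Python sketch; the event times and helper name are illustrative, not from the source.

```python
import numpy as np

# A realization of a temporal point process: sorted event times.
event_times = np.array([0.4, 1.1, 1.7, 2.5, 3.0, 4.2])  # illustrative data

def N(t1, t2, times=event_times):
    """Counting notation: number of events in the interval (t1, t2]."""
    return int(np.searchsorted(times, t2, side="right")
               - np.searchsorted(times, t1, side="right"))

print(N(0.0, 2.5))  # 4 events in (0, 2.5]
print(N(2.5, 4.2))  # 2 events in (2.5, 4.2]
```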

Expectation measure

The expectation measure $\mathrm{E}\xi$ (also known as mean measure) of a point process ξ is a measure on S that assigns to every Borel subset B of S the expected number of points of ξ in B. That is,

$$(\mathrm{E}\xi)(B) := \mathrm{E}\bigl[\xi(B)\bigr] \quad \text{for every Borel subset } B \text{ of } S.$$
Laplace functional

The Laplace functional of a point process $N$ is a map from the set of all positive-valued functions $f$ on the state space of $N$ to $[0, \infty)$, defined as follows:

$$\Psi_N(f) = \mathrm{E}\bigl[\exp(-N(f))\bigr], \qquad \text{where } N(f) = \int f \, dN.$$

It plays a role similar to that of the characteristic function for a random variable. One important theorem states that two point processes have the same law if their Laplace functionals are equal.

Moment measure

The $n$-th power of a point process, $\xi^n$, is defined on the product space $S^n$ as follows:

$$\xi^n(A_1 \times \cdots \times A_n) = \prod_{i=1}^{n} \xi(A_i).$$

By the monotone class theorem, this uniquely defines the product measure on $(S^n, \mathcal{B}(S^n))$. The expectation $\mathrm{E}\,\xi^n(\cdot)$ is called the $n$-th moment measure. The first moment measure is the mean measure.

Let $S = \mathbb{R}^d$. The joint intensities of a point process $\xi$ w.r.t. the Lebesgue measure are functions $\rho^{(k)} : (\mathbb{R}^d)^k \to [0, \infty)$ such that for any disjoint bounded Borel subsets $B_1, \ldots, B_k$,

$$\mathrm{E}\left[\prod_{i} \xi(B_i)\right] = \int_{B_1} \cdots \int_{B_k} \rho^{(k)}(x_1, \ldots, x_k) \, dx_1 \cdots dx_k.$$
Joint intensities do not always exist for point processes. Given that moments of a random variable determine the random variable in many cases, a similar result is to be expected for joint intensities. Indeed, this has been shown in many cases.[2]

Stationarity

A point process $\xi \subseteq \mathbb{R}^d$ is said to be stationary if $\xi + x = \{X_i + x : X_i \in \xi\}$ has the same distribution as $\xi$ for all $x \in \mathbb{R}^d$. For a stationary point process, the mean measure $\mathrm{E}\,\xi(\cdot) = \lambda \|\cdot\|$ for some constant $\lambda \ge 0$, where $\|\cdot\|$ stands for the Lebesgue measure. This $\lambda$ is called the intensity of the point process. A stationary point process on $\mathbb{R}^d$ has almost surely either 0 or an infinite number of points in total. For more on stationary point processes and random measures, refer to Chapter 12 of Daley & Vere-Jones.[2] Stationarity has been defined and studied for point processes in more general spaces than $\mathbb{R}^d$.

Transformations

A point process transformation is a function that maps a point process to another point process.

Examples

We shall see some examples of point processes in $\mathbb{R}^d$.

Poisson point process

The simplest and most ubiquitous example of a point process is the Poisson point process, which is a spatial generalisation of the Poisson process. A Poisson (counting) process on the line can be characterised by two properties: the numbers of points (or events) in disjoint intervals are independent and have a Poisson distribution. A Poisson point process can also be defined using these two properties. Namely, we say that a point process is a Poisson point process if the following two conditions hold:

1) $N(B_1), \ldots, N(B_n)$ are independent for disjoint subsets $B_1, \ldots, B_n$.

2) For any bounded subset $B$, $N(B)$ has a Poisson distribution with parameter $\lambda \|B\|$, where $\|\cdot\|$ denotes the Lebesgue measure.

The two conditions can be combined and written as follows: for any disjoint bounded subsets $B_1, \ldots, B_n$ and non-negative integers $k_1, \ldots, k_n$ we have that

$$\Pr[N(B_i) = k_i, \ 1 \le i \le n] = \prod_{i} e^{-\lambda \|B_i\|} \frac{(\lambda \|B_i\|)^{k_i}}{k_i!}.$$
The constant $\lambda$ is called the intensity of the Poisson point process. Note that the Poisson point process is characterised by the single parameter $\lambda$. It is a simple, stationary point process. To be more specific, one calls the above point process a homogeneous Poisson point process. An inhomogeneous Poisson process is defined as above, but with $\lambda \|B\|$ replaced by $\int_B \lambda(x) \, dx$, where $\lambda(\cdot)$ is a non-negative function on $\mathbb{R}^d$.
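A homogeneous Poisson point process on a bounded window is straightforward to simulate from the two defining properties: draw the total count as a Poisson variable with mean $\lambda$ times the window's area, then place that many points uniformly and independently. A minimal Python sketch; the function name and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def homogeneous_poisson(lam, width, height):
    """Sample a homogeneous Poisson point process on [0, width] x [0, height].

    The total count is Poisson(lam * area); given the count, the points
    are i.i.d. uniform on the window.
    """
    n = rng.poisson(lam * width * height)
    xs = rng.uniform(0.0, width, size=n)
    ys = rng.uniform(0.0, height, size=n)
    return np.column_stack([xs, ys])

points = homogeneous_poisson(lam=100.0, width=1.0, height=1.0)
print(len(points))  # on average 100 points
```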

Cox point process

A Cox process (named after Sir David Cox) is a generalisation of the Poisson point process, in that we use a random measure in place of $\lambda \|B\|$. More formally, let $\Lambda$ be a random measure. A Cox point process driven by the random measure $\Lambda$ is the point process with the following two properties:

  1. Given $\Lambda(\cdot)$, $N(A)$ is Poisson distributed with parameter $\Lambda(A)$ for any bounded subset $A$.
  2. For any finite collection of disjoint subsets $A_1, \ldots, A_n$, and conditioned on $\Lambda$, the counts $N(A_1), \ldots, N(A_n)$ are independent.

It is easy to see that Poisson point processes (homogeneous and inhomogeneous) follow as special cases of Cox point processes. The mean measure of a Cox point process is $\mathrm{E}[N(\cdot)] = \mathrm{E}[\Lambda(\cdot)]$, and thus in the special case of a Poisson point process, it is $\lambda \|\cdot\|$.

For a Cox point process, $\Lambda(\cdot)$ is called the intensity measure. Further, if $\Lambda(\cdot)$ has a (random) density (Radon–Nikodym derivative) $\lambda(\cdot)$, i.e.,

$$\Lambda(B) = \int_B \lambda(y) \, dy \quad \text{a.s. for all bounded } B,$$

then $\lambda(\cdot)$ is called the intensity field of the Cox point process. Stationarity of the intensity measure or intensity field implies the stationarity of the corresponding Cox point process.

There have been many specific classes of Cox point processes that have been studied in detail such as:

  • Log-Gaussian Cox point processes:[19] $\lambda(y) = \exp(X(y))$ for a Gaussian random field $X(\cdot)$
  • Shot noise Cox point processes:[20] $\lambda(y) = \sum_{X \in \Phi} h(X, y)$ for a Poisson point process $\Phi$ and kernel $h$
  • Generalised shot noise Cox point processes:[21] $\lambda(y) = \sum_{X \in \Phi} h(X, y)$ for a point process $\Phi$ and kernel $h$
  • Lévy based Cox point processes:[22] $\lambda(y) = \int h(x, y) \, L(dx)$ for a Lévy basis $L$ and kernel $h$, and
  • Permanental Cox point processes:[23] $\lambda(y) = X_1^2(y) + \cdots + X_k^2(y)$ for $k$ independent Gaussian random fields $X_i(\cdot)$'s
  • Sigmoidal Gaussian Cox point processes:[24] $\lambda(y) = \lambda^\star / (1 + \exp(-X(y)))$ for a Gaussian random field $X(\cdot)$ and random $\lambda^\star > 0$

By Jensen's inequality, one can verify that Cox point processes satisfy the following inequality: for all bounded Borel subsets $B$,

$$\operatorname{Var}\bigl(N(B)\bigr) \ge \operatorname{Var}\bigl(N_{\mathrm{po}}(B)\bigr),$$

where $N_{\mathrm{po}}$ stands for a Poisson point process with intensity measure $\mathrm{E}[\Lambda(\cdot)]$. Thus points are distributed with greater variability in a Cox point process than in a Poisson point process. This is sometimes called the clustering or attractive property of the Cox point process.
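The variance inequality can be checked numerically in the simplest Cox setting, a mixed Poisson process whose driving measure is $\Lambda(B) = L\|B\|$ for a random level $L$. The sketch below uses a gamma-distributed $L$ as an illustrative assumption (not prescribed by the text) and compares count variances against a Poisson process with the same mean measure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mixed-Poisson Cox process on the unit square: Lambda(B) = L * |B| with
# a random level L. The gamma law for L is an illustrative assumption.
def cox_counts(n_samples):
    levels = rng.gamma(shape=2.0, scale=50.0, size=n_samples)  # E[L] = 100
    return rng.poisson(levels)  # N | L ~ Poisson(L * area), area = 1

cox = cox_counts(100_000)
poi = rng.poisson(100.0, size=100_000)  # Poisson with the same mean measure
print(cox.mean(), poi.mean())  # both approx. 100
print(cox.var(), poi.var())    # Cox variance clearly exceeds Poisson variance
```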

Determinantal point processes

An important class of point processes, with applications to physics, random matrix theory, and combinatorics, is that of determinantal point processes.[25]

Hawkes (self-exciting) processes

A Hawkes process $N_t$, also known as a self-exciting counting process, is a simple point process whose conditional intensity can be expressed as

$$\lambda(t) = \mu(t) + \int_{-\infty}^{t} \nu(t - s) \, dN_s = \mu(t) + \sum_{T_i < t} \nu(t - T_i),$$

where $\nu : \mathbb{R}_+ \to \mathbb{R}_+$ is a kernel function which expresses the positive influence of past events $T_i$ on the current value of the intensity process $\lambda(t)$, $\mu(t)$ is a possibly non-stationary function representing the expected, predictable, or deterministic part of the intensity, and $T_i$ is the time of occurrence of the $i$-th event of the process.[26]
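A Hawkes process with a monotone decaying kernel can be simulated by Ogata's thinning method: between events the intensity only decreases, so its current value is a valid upper bound for proposing candidate points, which are then accepted with probability $\lambda(t)/\bar\lambda$. A minimal sketch, assuming a constant baseline $\mu$ and the exponential kernel $\nu(t) = \alpha\beta e^{-\beta t}$ (both illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def hawkes_thinning(mu, alpha, beta, horizon):
    """Sample event times of a Hawkes process on [0, horizon] by Ogata's
    thinning, assuming baseline mu and kernel nu(t) = alpha*beta*exp(-beta*t)
    (alpha < 1 for stability)."""
    events, t = [], 0.0
    while t < horizon:
        # Intensity decays between events, so its current value bounds it.
        lam_bar = mu + sum(alpha * beta * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)   # candidate point
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * beta * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:  # accept w.p. lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

times = hawkes_thinning(mu=1.0, alpha=0.5, beta=2.0, horizon=100.0)
print(len(times))  # expected count approx. mu*horizon/(1 - alpha) = 200
```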

Geometric processes

Given a sequence of non-negative random variables $\{X_k, k = 1, 2, \ldots\}$, if they are independent and the cdf of $X_k$ is given by $F(a^{k-1}x)$ for $k = 1, 2, \ldots$, where $a$ is a positive constant, then $\{X_k, k = 1, 2, \ldots\}$ is called a geometric process (GP).[27]

The geometric process has several extensions, including the α- series process[28] and the doubly geometric process.[29]
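Since the cdf of $X_k$ is $F(a^{k-1}x)$, the $k$-th term is distributed as $X_1 / a^{k-1}$, which gives a direct way to sample a geometric process. A minimal sketch, assuming an exponential first term (an illustrative choice, not required by the definition):

```python
import numpy as np

rng = np.random.default_rng(3)

def geometric_process(a, n, mean_first=1.0):
    """Sample n terms of a geometric process, assuming X_1 is exponential.

    Since X_k has cdf F(a^{k-1} x), X_k is distributed as X_1 / a^{k-1}.
    """
    base = rng.exponential(mean_first, size=n)  # i.i.d. copies of X_1
    scales = a ** (-np.arange(n, dtype=float))  # 1, 1/a, 1/a^2, ...
    return base * scales

print(geometric_process(a=1.1, n=5))  # a > 1: stochastically decreasing terms
```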

Point processes on the real half-line

Historically the first point processes that were studied had the real half line R+ = [0,∞) as their state space, which in this context is usually interpreted as time. These studies were motivated by the wish to model telecommunication systems,[30] in which the points represented events in time, such as calls to a telephone exchange.

Point processes on R+ are typically described by giving the sequence of their (random) inter-event times $(T_1, T_2, \ldots)$, from which the actual sequence $(X_1, X_2, \ldots)$ of event times can be obtained as

$$X_n = \sum_{j=1}^{n} T_j \quad \text{for } n \ge 1.$$
If the inter-event times are independent and identically distributed, the point process obtained is called a renewal process.
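In code, a renewal process is just the cumulative sum of i.i.d. inter-event draws; the gamma distribution below is an illustrative choice of inter-event law:

```python
import numpy as np

rng = np.random.default_rng(4)

# Renewal process: event times are cumulative sums of i.i.d. inter-event
# times. Gamma-distributed gaps (mean 1) are an illustrative choice.
gaps = rng.gamma(shape=2.0, scale=0.5, size=1000)  # T_1, T_2, ...
event_times = np.cumsum(gaps)                      # X_n = T_1 + ... + T_n

def N(t, times=event_times):
    """Counting function: number of events in (0, t]."""
    return int(np.searchsorted(times, t, side="right"))

print(N(100.0))  # approx. 100 events, by the elementary renewal theorem
```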

Intensity of a point process

The intensity λ(t | Ht) of a point process on the real half-line with respect to a filtration Ht is defined as

$$\lambda(t \mid H_t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t} \Pr\bigl(\text{one event occurs in } [t, t + \Delta t] \mid H_t\bigr).$$
Ht can denote the history of event-point times preceding time t but can also correspond to other filtrations (for example in the case of a Cox process).

In the $o$-notation, this can be written in a more compact form:

$$\Pr\bigl(\text{one event occurs in } [t, t + \Delta t] \mid H_t\bigr) = \lambda(t \mid H_t) \, \Delta t + o(\Delta t).$$
The compensator of a point process, also known as the dual-predictable projection, is the integrated conditional intensity function, defined by

$$\Lambda(s, u) = \int_s^u \lambda(t \mid H_t) \, dt.$$
Papangelou intensity function

The Papangelou intensity function of a point process $N$ in the $n$-dimensional Euclidean space $\mathbb{R}^n$ is defined as

$$\lambda_p(x) = \lim_{\delta \to 0} \frac{1}{\|B_\delta(x)\|} \Pr\bigl(\text{one event occurs in } B_\delta(x) \mid \sigma[N(\mathbb{R}^n \setminus B_\delta(x))]\bigr),$$

where $B_\delta(x)$ is the ball centered at $x$ of radius $\delta$, and $\sigma[N(\mathbb{R}^n \setminus B_\delta(x))]$ denotes the information of the point process $N$ outside the ball $B_\delta(x)$.

Likelihood function

The logarithmic likelihood of a parameterized simple point process conditional upon some observed data is written as

$$\ln \mathcal{L}\bigl(N(t)_{t \in [0, T]}\bigr) = \int_0^T (1 - \lambda(s)) \, ds + \int_0^T \ln \lambda(s) \, dN_s.$$[31]

Point processes in spatial statistics

The analysis of point pattern data in a compact subset S of Rn is a major object of study within spatial statistics. Such data appear in a broad range of disciplines,[32] amongst which are

  • forestry and plant ecology (positions of trees or plants in general)
  • epidemiology (home locations of infected patients)
  • zoology (burrows or nests of animals)
  • geography (positions of human settlements, towns or cities)
  • seismology (epicenters of earthquakes)
  • materials science (positions of defects in industrial materials)
  • astronomy (locations of stars or galaxies)
  • computational neuroscience (spikes of neurons).

The need to use point processes to model these kinds of data lies in their inherent spatial structure. Accordingly, a first question of interest is often whether the given data exhibit complete spatial randomness (i.e. are a realization of a spatial Poisson process) as opposed to exhibiting either spatial aggregation or spatial inhibition.
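A basic way to probe complete spatial randomness is a Monte Carlo test on a summary statistic such as the mean nearest-neighbour distance: simulate patterns with the same number of points placed uniformly in the window and see where the observed statistic falls. The following Python sketch is illustrative only; it ignores edge corrections, which a serious analysis would include:

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_nn_distance(pts):
    """Mean nearest-neighbour distance of a point pattern (n x 2 array)."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def csr_test(pts, n_sim=999):
    """Monte Carlo CSR test on the unit square: compare the observed mean
    nearest-neighbour distance with simulations from a binomial process
    (uniform points, same count). Edge effects are ignored in this sketch."""
    n, obs = len(pts), mean_nn_distance(pts)
    sims = np.array([mean_nn_distance(rng.uniform(size=(n, 2)))
                     for _ in range(n_sim)])
    p_low = (1 + np.sum(sims <= obs)) / (n_sim + 1)   # small: aggregation
    p_high = (1 + np.sum(sims >= obs)) / (n_sim + 1)  # small: inhibition
    return obs, p_low, p_high

pattern = rng.uniform(size=(50, 2))  # a CSR pattern; neither p-value small
print(csr_test(pattern, n_sim=199))
```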

In contrast, many datasets considered in classical multivariate statistics consist of independently generated datapoints that may be governed by one or several covariates (typically non-spatial).

Apart from the applications in spatial statistics, point processes are one of the fundamental objects in stochastic geometry. Research has also focussed extensively on various models built on point processes such as Voronoi tessellations, random geometric graphs, and Boolean models.

from Grokipedia
A point process is a stochastic model used to represent the locations or times of random discrete events occurring in a continuous space or time domain, typically characterized by the positions of points that indicate event occurrences. These processes are fundamental in probability theory and statistics for analyzing phenomena where events happen irregularly, such as arrivals in queues or particle positions in physics. Point processes can be broadly classified into temporal types, which focus on events unfolding over time (e.g., earthquake occurrences), spatial types, which describe point distributions in a plane or higher dimensions (e.g., tree locations in a forest), and marked types, which attach additional attributes to each point (e.g., magnitudes associated with seismic events). Common subtypes include the Poisson process, where events occur independently at a constant average rate $\lambda$ per unit time or area, leading to exponentially distributed inter-event intervals; renewal processes, defined by independent and identically distributed waiting times between events; and more complex variants like Cox processes, which feature a random intensity function, or Markov point processes that account for dependencies between points. Mathematically, a point process is often formalized through its counting measure $N(A)$, which tallies the number of points in a set $A$, or via the intensity function $\lambda(t)$ or $\lambda(x)$ that quantifies the expected density of points at a given time or location. The study of point processes originated in the early 20th century with foundational work on Poisson processes by researchers like A.K. Erlang, evolving into a rich field through seminal texts such as An Introduction to the Theory of Point Processes by D.J. Daley and D. Vere-Jones, which provides rigorous frameworks for both finite and infinite point configurations. Applications span diverse disciplines: in neuroscience, they model spike trains to infer firing rates; in seismology, for predicting aftershocks via marked spatial-temporal models; in ecology, to assess species distributions and clustering; and in finance, for modeling high-frequency trade arrivals or insurance claims. Advanced techniques, including simulation methods like spatial birth-death processes and estimation via likelihood maximization, enable practical inference even for non-homogeneous cases.

Conventions and Notation

Terminology

A point process is a random collection of points in a space, often used to model phenomena such as event times in temporal settings or spatial locations of objects or incidents. Point processes are classified as simple if they exhibit no multiple points at the same location with probability one, meaning the counting measure assigns at most one point to any singleton set. In contrast, general point processes allow for the possibility of multiple points coinciding at the same location. The ground process refers to the underlying unmarked point process, while a marked point process extends this by associating additional attributes, known as marks, with each point to capture extra information about the events. Ground intensity describes the rate or density of points in this base process, providing a measure of average point density that is explored further in subsequent sections. The term "point process" originated in the 1940s, first appearing in Conny Palm's 1943 dissertation on telephone traffic modeling as "Punkt-prozesse," and was later generalized in the 1950s and 1960s through foundational works by mathematicians such as A. Khinchin and D.R. Cox, establishing the modern probabilistic framework.

Mathematical Symbols and Assumptions

In point process theory, the underlying space $\mathcal{X}$ is typically a complete separable metric space equipped with its Borel $\sigma$-field $\mathcal{B}$, often taken as the real line $\mathbb{R}$ for temporal processes or the $d$-dimensional Euclidean space $\mathbb{R}^d$ for spatial processes. This space is assumed to be locally compact with a second countable topology to ensure measurability and facilitate the definition of compact subsets. The point process itself is denoted by $\Phi$, which is interpreted as a random counting measure $N$ on $(\mathcal{X}, \mathcal{B})$, where $N(A)$ denotes the number of points falling in a measurable set $A \subset \mathcal{X}$. Individual points are represented using Dirac measures $\delta_x$, defined such that $\delta_x(A) = 1$ if $x \in A$ and $0$ otherwise, allowing the process to be expressed as a sum of such measures over its points. Foundational assumptions include the requirement that $N$ is a locally finite measure, meaning it assigns finite mass to compact subsets of $\mathcal{X}$, which aligns with the measure's role in enumerating points. Point processes are classified as simple if they exhibit no multiple points, satisfying $\Pr\{N(\{x\}) = 0 \text{ or } 1 \text{ for all } x\} = 1$, ensuring at most one point per location; in contrast, multiple point processes permit $N(\{x\}) > 1$ with positive probability. These assumptions provide the rigorous framework for subsequent developments, such as stationarity, which assumes translation invariance but is treated as a derived property elsewhere.

Core Definitions and Representations

Formal Definition

A point process is formally defined as a random element in the space of counting measures on a measurable space $(\mathcal{X}, \mathcal{B})$, where $\mathcal{X}$ is typically a complete separable metric space equipped with its Borel $\sigma$-algebra $\mathcal{B}$. Specifically, let $\mathcal{M}(\mathcal{X})$ denote the space of non-negative integer-valued (counting) measures on $(\mathcal{X}, \mathcal{B})$, which are measures $\mu$ satisfying $\mu(B) \in \{0, 1, 2, \dots\} \cup \{\infty\}$ for all $B \in \mathcal{B}$, with $\mu(\emptyset) = 0$ and countable additivity over disjoint sets. A point process $\Phi$ is then a measurable mapping $\Phi : \Omega \to \mathcal{M}(\mathcal{X})$, where $(\Omega, \mathcal{F}, P)$ is an underlying probability space, and measurability is with respect to the $\sigma$-algebra on $\mathcal{M}(\mathcal{X})$ generated by the evaluation maps $\mu \mapsto \mu(B)$ for $B \in \mathcal{B}$. This axiomatic setup defines realizations of $\Phi$ as locally finite counting measures, with simplicity (distinct points) often assumed as an additional property; local finiteness means $\Phi(B) < \infty$ for all bounded $B \in \mathcal{B}$ (or compact sets if $\mathcal{X}$ is non-locally compact). The probability space $(\Omega, \mathcal{F}, P)$ provides the randomness, with $\Phi(\omega)$ for $\omega \in \Omega$ yielding a counting measure that counts the number of points in any measurable set, and the mapping $\Phi$ preserves the probabilistic structure through its induced distribution. An equivalent representation expresses the point process as a random sum of Dirac measures: $\Phi = \sum_{i=1}^{\infty} \delta_{X_i}$, where $\{X_i\}_{i=1}^{\infty}$ is an almost surely countable collection of random points in $\mathcal{X}$, and $\delta_x$ is the Dirac measure at $x \in \mathcal{X}$ defined by $\delta_x(B) = 1$ if $x \in B$ and $0$ otherwise. This sum is understood in the sense of vague convergence or as a random element in $\mathcal{M}(\mathcal{X})$, with the points $X_i$ being distinct almost surely for simple point processes. For any $B \in \mathcal{B}$, the count is then $\Phi(B) = \sum_{i=1}^{\infty} \mathbf{1}_{\{X_i \in B\}}$, where $\mathbf{1}$ is the indicator function. The point process $\Phi$ is uniquely determined by its distribution $P_\Phi = P \circ \Phi^{-1}$ on $\mathcal{M}(\mathcal{X})$, which fully characterizes the law of the random counting measure and factorizes all probabilistic statements about $\Phi$. This distribution induces finite-dimensional distributions on the counts $\Phi(B_1), \dots, \Phi(B_k)$ for disjoint sets $B_j \in \mathcal{B}$, ensuring consistency via the Kolmogorov extension theorem. Equivalent representations of the point process, such as through generating functionals, follow directly from this core definition.

Equivalent Representations

Point processes can be represented in various mathematically equivalent forms that facilitate different analytical approaches, such as likelihood inference, conditional analysis, and dependence quantification. These representations, including Janossy densities, Palm distributions, and correlation functions, all uniquely determine the underlying distribution $P_\Phi$ of the point process $\Phi$, building directly on its formal definition as a random counting measure. Janossy densities provide a representation through the joint densities of ordered point configurations, capturing the probability of exact point locations while accounting for the unordered nature of the process. Specifically, the Janossy density $j_n(x_1, \dots, x_n)$ for $n$ points is defined such that for disjoint small sets $B_1, \dots, B_n$ around $x_1, \dots, x_n$, it satisfies $j_n(x_1, \dots, x_n) = n! \, P(\Phi(B_1) = 1, \dots, \Phi(B_n) = 1)$, where the factorial $n!$ adjusts for the ordering of indistinguishable points. This form is symmetric in its arguments and absolutely continuous with respect to the product Lebesgue measure, enabling the specification of finite-dimensional distributions via integrals over regions. Palm distributions offer an equivalent conditional perspective, describing the distribution of the process given the presence of a point at a specific location, typically the origin for stationary cases. Formally, the Palm distribution $P^0_\Phi$ is the conditional law of $\Phi$ under the event that $\Phi(\{0\}) \geq 1$, providing insights into typical configurations around an observed point without delving into full conditioning formulas. This representation is particularly useful for ergodic and stationary processes, where it relates to reduced moment measures and regeneration properties. Correlation functions, often expressed in reduced form, quantify point dependencies through normalized probabilities of joint occurrences. The $k$-th order correlation function is given by $g^{(k)}(x_1, \dots, x_k) = \frac{1}{\lambda^k} P(\Phi(B_1) = 1, \dots, \Phi(B_k) = 1)$ for small disjoint balls $B_i$ around $x_i$ and intensity $\lambda$, serving as a reduced version of the product densities or factorial moment densities. For $k = 2$, the pair correlation function $g^{(2)}(x, y)$ highlights clustering (values > 1) or inhibition (values < 1) relative to independence. These representations are equivalent in that each fully specifies the distribution $P_\Phi$: Janossy densities determine all finite-dimensional probabilities, which in turn yield the factorial moment densities underlying correlation functions, while Palm distributions recover the unconditional law via inversion formulas like the Palm-Khinchin equations; conversely, starting from correlation functions or Palm measures allows reconstruction of the Janossy densities through integral relations, ensuring consistency across forms.

Fundamental Measures

Expectation Measure

The expectation measure of a point process $\Phi$, also known as the first-moment measure or intensity measure, is defined as the measure $\Lambda$ on the underlying space that assigns to each Borel set $A$ the expected number of points in that set, given by $\Lambda(A) = \mathbb{E}[\Phi(A)] = \mathbb{E}[N(A)]$, where $N(A)$ denotes the counting measure of points in $A$. This measure quantifies the average density of points and serves as a foundational tool for analyzing the overall scale and distribution of events in the process. For point processes defined on a space such as $\mathbb{R}^d$, $\Lambda$ is typically required to be locally finite, meaning $\Lambda(K) < \infty$ for every compact set $K$, ensuring the expected number of points remains finite over bounded regions. A key property of the expectation measure is its role in simple point processes, where, since multiplicities are impossible, it directly corresponds to the expected number of distinct point occurrences. In general, $\Lambda$ is countably additive and inherits sigma-finiteness from the process's local finiteness assumptions, allowing integration over measurable functions via Fubini's theorem. This structure enables the expectation measure to capture the linear growth of point counts, distinguishing it from higher-order measures that account for clustering or repulsion. Campbell's theorem provides a fundamental connection between the expectation measure and integrals over the point process, stating that for any non-negative measurable function $f$ (or integrable in the signed case), $$\mathbb{E}\left[\int f \, d\Phi\right] = \int f \, d\Lambda,$$ where the integrals are with respect to the random measure $\Phi$ and the expectation measure $\Lambda$, respectively. This result, which holds under local finiteness conditions, facilitates the computation of expected values for sums or shot-noise fields generated by the points, such as $\mathbb{E}\left[\sum_{x \in \Phi} f(x)\right] = \int f \, d\Lambda$. It underscores the expectation measure's utility in deriving means for linear statistics without needing the full distributional details of $\Phi$. The expectation measure also relates to higher-order factorial moment measures, which generalize it to products of counting variables adjusted for overlaps. Specifically, the first-order factorial moment measure is identical to $\Lambda$, while for $k \geq 2$, the $k$-th factorial moment measure $\Lambda^{(k)}$ on the product space $A_1 \times \cdots \times A_k$ satisfies $$\Lambda^{(k)}(A_1 \times \cdots \times A_k) = \mathbb{E}\bigl[\Phi(A_1) \cdots \Phi(A_k)\bigr] - \text{lower-order terms},$$ where the subtraction accounts for permutations and coincidences of points across the sets, ensuring $\Lambda^{(k)}$ measures the expected number of ordered $k$-tuples of distinct points. This relation, derived from the inclusion-exclusion principle in moment expansions, positions $\Lambda$ as the building block for characterizing dependencies in the process through its factorial hierarchy.
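Campbell's theorem is easy to verify by simulation in the Poisson case, where the intensity measure is $\Lambda(dx) = \lambda \, dx$: the Monte Carlo average of $\sum_{x \in \Phi} f(x)$ should match $\lambda \int f(x) \, dx$. A minimal sketch with an illustrative choice of $f$:

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo check of Campbell's theorem for a homogeneous Poisson
# process on [0, 1]^2 with intensity lam: E[sum of f over points]
# should equal lam * \int f(x) dx.
lam = 200.0
f = lambda pts: np.exp(-pts[:, 0] - pts[:, 1])  # f(x, y) = exp(-x - y)

def sample_sum():
    n = rng.poisson(lam)             # Poisson count on the unit square
    pts = rng.uniform(size=(n, 2))   # uniform locations given the count
    return f(pts).sum()

empirical = np.mean([sample_sum() for _ in range(20_000)])
exact = lam * (1 - np.exp(-1.0)) ** 2  # lam * (\int_0^1 e^{-x} dx)^2
print(empirical, exact)                # agreement up to Monte Carlo error
```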

Intensity Measure

The intensity measure of a point process $\Phi$ on a space $\mathbb{X}$ is defined as $\Lambda(A) = \mathbb{E}[\Phi(A)]$ for Borel sets $A \subseteq \mathbb{X}$. When $\Lambda$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{X}$, it admits a density $\lambda : \mathbb{X} \to [0, \infty)$, known as the first-order intensity function, such that $\Lambda(A) = \int_A \lambda(x) \, dx$. The first-order intensity $\lambda(x)$ is formally defined as the limit $\lambda(x) = \lim_{|B| \to 0} \frac{\mathbb{E}[\Phi(B)]}{|B|}$ whenever the limit exists, where $B$ is a Borel set containing $x$ and $|B|$ denotes its Lebesgue measure. This quantity captures the infinitesimal rate of point occurrence at $x$, analogous to a probability density for the locations of points. Existence of $\lambda(x)$ requires that the intensity measure $\Lambda$ be absolutely continuous with respect to Lebesgue measure on $\mathbb{X}$, ensuring the Radon-Nikodym derivative $\lambda$ is well-defined and locally integrable. Campbell's theorem characterizes the relation between sums over the point process and integrals against the intensity: for any non-negative measurable function $f : \mathbb{X} \to [0, \infty)$, $$\mathbb{E}\left[\sum_{X_i \in \Phi} f(X_i)\right] = \int_{\mathbb{X}} f(x) \, \lambda(x) \, dx,$$ when the intensity function exists. This holds for general point processes and facilitates computations of expectations for functionals of the process. For Poisson point processes, Slivnyak's theorem further implies that the reduced Palm distribution coincides with the original distribution, leading to additional characterizations via the Mecke equation. Point processes are classified as homogeneous if $\lambda(x)$ is constant (say, $\lambda(x) = \lambda > 0$), yielding $\Lambda(A) = \lambda |A|$ and uniform point density across $\mathbb{X}$; otherwise, they are non-homogeneous, with $\lambda(x)$ varying spatially or temporally to reflect inhomogeneous point clustering or sparsity.
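Given a first-order intensity function bounded by some $\lambda_{\max}$, a non-homogeneous process of the Poisson type can be sampled by thinning a homogeneous one: keep each point $x$ independently with probability $\lambda(x)/\lambda_{\max}$. A minimal sketch with an illustrative intensity:

```python
import numpy as np

rng = np.random.default_rng(7)

def inhomogeneous_poisson(lam, lam_max, width, height):
    """Sample an inhomogeneous Poisson process on [0, width] x [0, height]
    by thinning: draw a homogeneous process at rate lam_max, then keep each
    point x with probability lam(x) / lam_max. Assumes lam <= lam_max."""
    n = rng.poisson(lam_max * width * height)
    pts = rng.uniform([0, 0], [width, height], size=(n, 2))
    keep = rng.uniform(size=n) < lam(pts) / lam_max
    return pts[keep]

# Illustrative intensity: points concentrate near the left edge.
lam = lambda p: 200.0 * np.exp(-3.0 * p[:, 0])
pts = inhomogeneous_poisson(lam, lam_max=200.0, width=1.0, height=1.0)
print(len(pts))  # approx. 200 * (1 - e^{-3}) / 3, about 63 points on average
```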

Functional Characterizations

Laplace Functional

The Laplace functional of a point process $\Phi$ on a complete separable metric space $\mathcal{X}$ is defined as $$\psi_f = \mathbb{E}\left[\exp\left(-\int_{\mathcal{X}} f \, d\Phi\right)\right],$$ where $f : \mathcal{X} \to [0, \infty)$ is a non-negative measurable function. This functional provides a probabilistic characterization analogous to the Laplace transform for random variables, capturing the distribution of $\Phi$ through expectations of exponentially weighted integrals over the process. The family of all such Laplace functionals $\{\psi_f\}$, indexed by admissible $f$, uniquely determines the law $P_\Phi$ of the point process $\Phi$. This uniqueness follows from the fact that the functionals encode the complete finite-dimensional distributions of $\Phi$, allowing inversion to recover the probability measure. Key properties of the Laplace functional include continuity with respect to the vague topology on the space of test functions and monotonicity in $f$. Specifically, if $f_n \to f$ vaguely (i.e., $\int g \, df_n \to \int g \, df$ for continuous $g$ with compact support), then $\psi_{f_n} \to \psi_f$, assuming the process is locally finite. Additionally, if $0 \leq f \leq g$, then $\psi_f \geq \psi_g$, reflecting the non-increasing nature of the exponential due to the non-negativity of the integrand. These properties ensure the functional is well-behaved under limits and orderings of test functions. For marked point processes $\tilde{\Phi}$ on $\mathcal{X} \times \mathcal{M}$, the Laplace functional extends naturally to $$\psi_f = \mathbb{E}\left[\exp\left(-\iint f(x, m) \, d\tilde{\Phi}(x, m)\right)\right],$$ where $f : \mathcal{X} \times \mathcal{M} \to [0, \infty)$ is measurable, preserving the characterizing role for the joint distribution. The Taylor expansion of $\log \psi_{tf}$ around $t = 0$ yields the cumulant measures, which relate to the moment measures detailed subsequently.
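For a homogeneous Poisson process the Laplace functional has the closed form $\psi_f = \exp\bigl(-\lambda \int (1 - e^{-f(x)}) \, dx\bigr)$, which makes a convenient numerical check: estimate $\mathbb{E}[\exp(-\int f \, d\Phi)]$ by simulation and compare. A minimal sketch on $[0, 1]$ with an illustrative $f$:

```python
import numpy as np

rng = np.random.default_rng(8)

# Monte Carlo estimate of the Laplace functional of a homogeneous Poisson
# process on [0, 1], compared with the known closed form
#   psi_f = exp(-lam * \int_0^1 (1 - e^{-f(x)}) dx).
lam = 2.0
f = lambda x: 2.0 * x  # test function f(x) = 2x, an illustrative choice

def one_realization():
    n = rng.poisson(lam)
    x = rng.uniform(size=n)
    return np.exp(-np.sum(f(x)))  # exp(-\int f dPhi) for this realization

mc = np.mean([one_realization() for _ in range(50_000)])

# \int_0^1 (1 - e^{-2x}) dx = 1 - (1 - e^{-2}) / 2
exact = np.exp(-lam * (1.0 - (1.0 - np.exp(-2.0)) / 2.0))
print(mc, exact)  # agreement up to Monte Carlo error
```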

Moment Measures

Moment measures in point processes generalize the expectation measure to higher orders, capturing the expected configurations of multiple distinct points and thereby revealing dependencies and interactions within the process. The $k$-th order factorial moment measure, denoted $\mu^{(k)}$, quantifies the expected number of ordered $k$-tuples of distinct points falling into specified regions. Specifically, for Borel sets $A_1, \dots, A_k$ in the state space, it is defined as

$$\mu^{(k)}(A_1 \times \cdots \times A_k) = \mathbb{E}\left[\sum_{i_1 \neq \cdots \neq i_k} \mathbf{1}_{X_{i_1} \in A_1} \cdots \mathbf{1}_{X_{i_k} \in A_k}\right],$$

where the sum ranges over $k$-tuples of pairwise distinct indices.