Gaussian random field
from Wikipedia

In statistics, a Gaussian random field (GRF) is a random field involving Gaussian probability density functions of the variables. A one-dimensional GRF is also called a Gaussian process. An important special case of a GRF is the Gaussian free field.

With regard to applications of GRFs, the initial conditions of physical cosmology generated by quantum mechanical fluctuations during cosmic inflation are thought to be a GRF with a nearly scale invariant spectrum.[1]

Construction


One way of constructing a GRF is by assuming that the field is the sum of a large number of plane, cylindrical or spherical waves with uniformly distributed random phase. Where applicable, the central limit theorem dictates that at any point, the sum of these individual plane-wave contributions will exhibit a Gaussian distribution. This type of GRF is completely described by its power spectral density, and hence, through the Wiener–Khinchin theorem, by its two-point autocorrelation function, which is related to the power spectral density through a Fourier transformation.
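The wave-superposition construction described above can be sketched numerically. The snippet below is a minimal illustration, assuming 1-D plane waves with a flat wavenumber distribution (the function name and all parameters are illustrative, not from the original text): summing many cosines with uniformly random phases produces values that, by the central limit theorem, are approximately Gaussian at each point.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_phase_field(x, n_waves=2000, k_max=10.0):
    """Approximate a 1-D GRF as a superposition of plane waves with
    uniformly distributed random phases (illustrative flat spectrum)."""
    k = rng.uniform(0.0, k_max, size=n_waves)          # random wavenumbers
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_waves)  # random phases
    # Each cos(k x + phi) term has variance 1/2, so scaling the sum by
    # sqrt(2 / n_waves) gives unit pointwise variance.
    waves = np.cos(np.outer(x, k) + phi)               # shape (len(x), n_waves)
    return np.sqrt(2.0 / n_waves) * waves.sum(axis=1)

x = np.linspace(0.0, 1.0, 256)
f = random_phase_field(x)   # one realization; f(x) is approximately Gaussian
```

Repeating the draw at a fixed point and checking the empirical mean and variance against 0 and 1 is a quick sanity check on the CLT argument.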

Suppose f(x) is the value of a GRF at a point x in some D-dimensional space. If we make a vector of the values of f at N points, x1, ..., xN, in the D-dimensional space, then the vector (f(x1), ..., f(xN)) will always be distributed as a multivariate Gaussian.
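This finite-dimensional property gives a direct simulation recipe: evaluate the mean and covariance at the N chosen points and draw from the resulting multivariate Gaussian. A minimal sketch, assuming a zero mean and a squared-exponential covariance (both choices are illustrative, not specified by the text):

```python
import numpy as np

rng = np.random.default_rng(1)

def sq_exp_cov(x, length=0.2):
    """Squared-exponential covariance matrix (an assumed example kernel)."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

x = np.linspace(0.0, 1.0, 50)     # N = 50 points in 1-D space
K = sq_exp_cov(x)                 # N x N covariance matrix
mu = np.zeros_like(x)             # zero-mean field for simplicity

# By definition, (f(x1), ..., f(xN)) is multivariate Gaussian:
jitter = 1e-10 * np.eye(len(x))   # numerical safeguard for near-singular K
sample = rng.multivariate_normal(mu, K + jitter)
```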

from Grokipedia
A Gaussian random field (GRF), known in one dimension as a Gaussian process, is a random field defined over a continuous domain, typically a subset of ℝ^N, where the joint distribution of the field's values at any finite collection of points follows a multivariate normal distribution. This property ensures that the field is completely characterized by its mean function μ(t) = E[X(t)], which specifies the expected value at each point t, and its covariance function C(s, t) = Cov(X(s), X(t)), which encodes the spatial or temporal dependencies between points. GRFs are particularly valued for their mathematical tractability, as the multivariate normality allows explicit computation of probabilities, moments, and conditional distributions.

Key properties of GRFs include stationarity, where the mean is constant and the covariance depends only on the difference s − t, enabling translation-invariant models suitable for homogeneous phenomena; isotropy, a special case in which the covariance depends solely on the distance ‖s − t‖; and self-similarity, as seen in examples like fractional Brownian fields with Hurst index H ∈ (0, 1), which scale invariantly under suitable transformations. The covariance function must be positive definite to ensure valid probability distributions; common forms include the exponential exp(−‖s − t‖/ℓ), the Matérn class for tunable smoothness, and the squared exponential for infinitely differentiable sample paths. Sample paths of GRFs exhibit Hölder continuity under certain conditions on the covariance, with fractal-like level sets whose dimensions can be analyzed using tools from fractal geometry.

Historically, GRFs trace their origins to early 20th-century work on stochastic processes, with foundational contributions from A. N. Kolmogorov in 1941 on turbulence modeling and A. M. Yaglom in 1957 on multidimensional fields, building on spectral representations by N. Wiener and A. Khinchin in the 1930s. Modern developments, including local times and intersection properties, were advanced by researchers such as R. M. Dudley and Y. Xiao in the late 20th century.

GRFs find broad applications across disciplines: in spatial statistics they underpin geostatistical interpolation (kriging), modeling environmental variables such as soil properties or rainfall; in physics they simulate random potentials in disordered systems and solutions to stochastic partial differential equations (SPDEs) such as the Edwards–Wilkinson equation; and in machine learning they support Gaussian process regression and uncertainty quantification. In biology they describe protein conformations and genomic signals, while in engineering they support surrogate modeling for computer experiments and risk assessment. Their flexibility in capturing smooth or rough spatial correlations makes them indispensable for phenomena exhibiting Gaussian-like fluctuations.

Fundamentals

Definition

A Gaussian random field (GRF) on a domain D is defined as a stochastic process {Z(x) : x ∈ D} such that, for any n ≥ 1 and any finite set of points x_1, …, x_n ∈ D, the random vector (Z(x_1), …, Z(x_n)) follows a multivariate normal distribution. This finite-dimensional Gaussian structure serves as the building block for the entire field, extending the concept of multivariate Gaussians to continuous or infinite-dimensional parameter spaces.

The distribution of a GRF is fully specified by its mean function μ(x) = E[Z(x)] and covariance function K(x, y) = Cov(Z(x), Z(y)), where K must be a positive semi-definite kernel to ensure the validity of the multivariate normal distributions for all finite subsets. Without loss of generality, many GRFs are assumed to have zero mean, simplifying the focus to the covariance structure, though the general case allows for a non-constant mean.

In contrast to finite-dimensional multivariate Gaussians, which are defined directly on a fixed set of variables, a GRF operates over an uncountable domain and requires additional regularity conditions, such as mean-square continuity, where E[(Z(x) − Z(y))²] → 0 as x → y, to guarantee that the process defines a measurable random function rather than merely a collection of marginal distributions. These assumptions prevent pathological behaviors and enable practical constructions and analyses of the field.

The origins of GRFs trace back to the 1940s and 1950s, emerging from pioneering work on stochastic processes, notably by A. N. Kolmogorov, whose consistency (extension) theorem for random processes provided the theoretical groundwork for infinite-dimensional Gaussian constructions, and A. M. Yaglom, who developed foundational theories for stationary and multidimensional random fields.

Basic properties

A Gaussian random field (GRF), defined as a stochastic process where every finite collection of values follows a multivariate Gaussian distribution, exhibits several foundational statistical properties that stem directly from this Gaussian assumption.

The marginal distribution at any single point x in the domain is Gaussian, specifically Z(x) ~ N(μ(x), K(x, x)), where μ(x) denotes the mean function evaluated at x and K(x, x) is the variance given by the covariance function. For any finite set of points {x_1, …, x_n}, the joint distribution of (Z(x_1), …, Z(x_n)) is multivariate Gaussian, N(μ, K), with mean vector μ = (μ(x_1), …, μ(x_n))ᵀ and covariance matrix K whose entries are K(x_i, x_j). These finite-dimensional distributions fully characterize the GRF, ensuring consistency across all subsets of points.

Linearity is a key property of GRFs: any linear combination of values from the field remains Gaussian distributed. For instance, if Z is a GRF and a_1, …, a_n are constants, then

Σ_{i=1}^n a_i Z(x_i) ~ N( Σ_{i=1}^n a_i μ(x_i), Σ_{i=1}^n Σ_{j=1}^n a_i a_j K(x_i, x_j) ).

This extends to more general linear functionals, such as integrals over the domain, provided suitable measurability conditions hold to ensure the result is well defined and Gaussian.

For jointly Gaussian random variables, including those arising from a GRF, uncorrelatedness implies statistical independence. That is, if Cov(Z(x), Z(y)) = 0 for distinct points x and y, then Z(x) and Z(y) are independent; this property generalizes to any finite collection whose covariance matrix is block-diagonal.

The distribution of a GRF is completely specified by its mean function μ and covariance function K, which must be positive semi-definite to ensure valid Gaussian marginal and joint distributions. No additional information is required, as these two functions determine all probabilistic characteristics of the field, including higher-order moments via the Gaussian structure.
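The linearity property can be verified numerically: the variance of a linear combination Σ a_i Z(x_i) of a zero-mean field should equal aᵀKa. A short sketch, assuming an illustrative squared-exponential kernel and arbitrary coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.linspace(0.0, 1.0, 5)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.3) ** 2)  # assumed kernel
a = np.array([1.0, -2.0, 0.5, 0.0, 1.5])                   # coefficients

# Theory: sum_i a_i Z(x_i) ~ N(0, a^T K a) for a zero-mean field.
var_theory = a @ K @ a

# Empirical check from many joint draws of (Z(x_1), ..., Z(x_5)).
Z = rng.multivariate_normal(np.zeros(5), K + 1e-10 * np.eye(5), size=20000)
var_emp = (Z @ a).var()
```

With 20,000 draws the empirical variance should agree with aᵀKa to within a few percent.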

Mathematical formulation

Covariance structure

The covariance structure of a Gaussian random field (GRF) is primarily defined by its kernel K(x, y), which specifies the dependence between values of the field at any two points x and y in the domain. The kernel must be symmetric, meaning K(x, y) = K(y, x) for all x, y, and positive semi-definite, so that the field induces valid finite-dimensional Gaussian distributions at any collection of points. Continuity of K is often assumed to guarantee mean-square continuity of the field.

Positive semi-definiteness requires that for any finite set of points x_1, …, x_n in the domain and any real coefficients a_1, …, a_n, the quadratic form Σ_{i=1}^n Σ_{j=1}^n a_i a_j K(x_i, x_j) ≥ 0. This condition ensures that all principal minors of the resulting covariance matrix are non-negative, preserving the non-negativity of variances and the validity of joint distributions.

The full specification of a GRF also includes the mean function μ(x) = E[Z(x)], which can be arbitrary but is frequently set to zero for centered fields to focus on the covariance-induced variability. The joint distribution at any finite set of points is then fully determined by μ and K, with the kernel governing the correlations while the mean provides the expected location.

The reproducing kernel Hilbert space (RKHS) H_K is defined by K, with inner product satisfying ⟨K(·, x), K(·, y)⟩_{H_K} = K(x, y). It provides a functional-analytic perspective on the smoothness and regularity properties of the field, although sample paths typically lie outside H_K. The RKHS norm serves as a measure of function complexity, offering insights into the field's variability.

In operator terms, the covariance structure induces an integral operator C on suitable function spaces, defined by

(Cf)(x) = ∫ K(x, y) f(y) dy,

which is self-adjoint, positive semi-definite, and trace-class under appropriate conditions on the domain and kernel, capturing the field's second-moment properties in a linear functional setting.
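The positive semi-definiteness condition can be checked numerically for a candidate kernel by forming its Gram matrix at a sample of points and inspecting the eigenvalues. A minimal sketch, using the squared-exponential kernel as the (assumed) example:

```python
import numpy as np

def sq_exp(x, y, ell=0.5):
    """Squared-exponential kernel, a standard positive semi-definite choice."""
    return np.exp(-0.5 * ((x - y) / ell) ** 2)

x = np.linspace(-2.0, 2.0, 40)
K = sq_exp(x[:, None], x[None, :])   # Gram matrix at 40 points

# Symmetry, and positive semi-definiteness up to floating-point round-off:
symmetric = np.allclose(K, K.T)
min_eig = np.linalg.eigvalsh(K).min()
```

A genuinely invalid kernel would produce clearly negative eigenvalues here, signaling that no Gaussian field with that covariance exists.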

Stationarity and isotropy

In Gaussian random fields (GRFs), strict stationarity requires that the finite-dimensional distributions remain unchanged under spatial translations, implying a constant mean function and a covariance function K(x, y) that depends solely on the lag τ = x − y, so K(x, y) = C(τ). This property ensures the field's statistical characteristics are translation-invariant across the domain. For Gaussian fields, since joint distributions are fully determined by their means and covariances, strict stationarity is equivalent to the mean being constant and the covariance function depending only on the lag.

Wide-sense stationarity, a weaker condition often sufficient for second-order statistical analyses, posits a constant mean and a covariance function K(x, y) = C(x − y), without requiring full distributional invariance. In the Gaussian case, wide-sense stationarity implies strict stationarity because the distribution is completely specified by its first- and second-order moments. This equivalence simplifies analysis in many applications, as verifying the second-order structure alone suffices.

Isotropy imposes an additional symmetry on stationary GRFs, restricting the covariance to depend only on the Euclidean distance between points, K(x, y) = C(‖x − y‖), making the field invariant under rotations. This radial symmetry is prevalent in Euclidean-space models of phenomena without preferred directions, such as atmospheric or material properties.

For non-stationary GRFs, intrinsic stationarity generalizes the concept by assuming that the increments (differences between field values) form a stationary field, with the distribution of increments depending only on their separation; this is quantified via the variogram 2γ(h) = E[(Z(x + h) − Z(x))²], which is finite and depends solely on h. Such models are essential for processes like fractional Brownian motion, where the field itself lacks a well-defined variance but exhibits stationary increments.

Under wide-sense stationarity, Bochner's theorem characterizes the covariance function C(τ) as the Fourier transform of a non-negative power spectral density S(ω), i.e., C(τ) = ∫ e^{iω·τ} S(ω) dω, ensuring positive semi-definiteness. This spectral representation links the field's correlation structure to its frequency content, facilitating simulation and inference.
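The spectral representation suggests a standard simulation route: shape white noise by the square root of a chosen spectral density in the Fourier domain and transform back. The sketch below assumes a Gaussian-shaped spectral density on a periodic 1-D grid (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

n, dx = 512, 1.0 / 512
freqs = np.fft.fftfreq(n, d=dx)                 # grid frequencies

# Assumed Gaussian-shaped spectral density; by Bochner's theorem any
# non-negative, integrable S yields a valid stationary covariance.
ell = 0.05
S = np.exp(-0.5 * (2.0 * np.pi * freqs * ell) ** 2)

# Shape complex white noise by sqrt(S); the inverse FFT is then a
# (periodic) stationary Gaussian field on the grid, and taking the
# real part gives a real-valued realization.
white = rng.normal(size=n) + 1j * rng.normal(size=n)
field = np.sqrt(n) * np.real(np.fft.ifft(np.sqrt(S) * white))
```

Because the field is a linear transform of Gaussian noise, it is exactly Gaussian; its covariance is (approximately, up to periodization) the Fourier transform of S.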

Construction methods

Karhunen–Loève expansion

The Karhunen–Loève (KL) expansion provides a spectral decomposition for representing Gaussian random fields (GRFs) on a compact domain, enabling efficient construction and simulation through an orthogonal series with uncorrelated random coefficients. For a GRF Z(x) with mean function μ(x) and continuous covariance function K(x, y), the expansion takes the form

Z(x) = μ(x) + Σ_{k=1}^∞ √(λ_k) φ_k(x) ξ_k,

where the λ_k and φ_k are the eigenvalues and orthonormal eigenfunctions of the covariance operator associated with K, and the ξ_k are independent standard normal random variables.
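A discrete analogue of the KL expansion replaces the covariance operator by the covariance matrix on a grid and truncates the series. A minimal sketch, assuming an exponential covariance and an illustrative truncation order m:

```python
import numpy as np

rng = np.random.default_rng(4)

x = np.linspace(0.0, 1.0, 200)
K = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)   # exponential covariance

# Discrete KL: eigendecompose the covariance matrix and keep the m
# leading (largest-eigenvalue) terms of the expansion.
lam, phi = np.linalg.eigh(K)
order = np.argsort(lam)[::-1]
lam = np.clip(lam[order], 0.0, None)   # guard against round-off negatives
phi = phi[:, order]

m = 50                                    # truncation order
xi = rng.standard_normal(m)               # i.i.d. N(0, 1) coefficients
Z = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)  # truncated KL sample (zero mean)
```

Increasing m recovers more of the field's variance; the retained fraction is the ratio of the leading eigenvalue sum to the trace of K.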