Marginal likelihood
from Wikipedia

A marginal likelihood is a likelihood function that has been integrated over the parameter space. In Bayesian statistics, it represents the probability of generating the observed sample for all possible values of the parameters; it can be understood as the probability of the model itself and is therefore often referred to as model evidence or simply evidence.

Due to the integration over the parameter space, the marginal likelihood does not directly depend upon the parameters. If the focus is not on model comparison, the marginal likelihood is simply the normalizing constant that ensures that the posterior is a proper probability. It is related to the partition function in statistical mechanics.[1]

Concept


Given a set of independent identically distributed data points $\mathbf{X} = (x_1, \ldots, x_n)$, where $x_i \sim p(x \mid \theta)$ according to some probability distribution parameterized by $\theta$, where $\theta$ itself is a random variable described by a distribution, i.e. $\theta \sim p(\theta \mid \alpha)$, the marginal likelihood in general asks what the probability $p(\mathbf{X} \mid \alpha)$ is, where $\theta$ has been marginalized out (integrated out):

$$p(\mathbf{X} \mid \alpha) = \int_\theta p(\mathbf{X} \mid \theta) \, p(\theta \mid \alpha) \, d\theta.$$

The above definition is phrased in the context of Bayesian statistics, in which case $p(\theta \mid \alpha)$ is called the prior density and $p(\mathbf{X} \mid \theta)$ is the likelihood. Recognizing that the marginal likelihood is the normalizing constant of the Bayesian posterior density $p(\theta \mid \mathbf{X}, \alpha)$, one also has the alternative expression[2]

$$p(\mathbf{X} \mid \alpha) = \frac{p(\mathbf{X} \mid \theta) \, p(\theta \mid \alpha)}{p(\theta \mid \mathbf{X}, \alpha)},$$

which is an identity in $\theta$. The marginal likelihood quantifies the agreement between data and prior in a geometric sense made precise in de Carvalho et al. (2019). In classical (frequentist) statistics, the concept of marginal likelihood occurs instead in the context of a joint parameter $\theta = (\psi, \lambda)$, where $\psi$ is the actual parameter of interest and $\lambda$ is a non-interesting nuisance parameter. If there exists a probability distribution for $\lambda$, it is often desirable to consider the likelihood function only in terms of $\psi$, by marginalizing out $\lambda$:

$$\mathcal{L}(\psi; \mathbf{X}) = \int_\lambda \mathcal{L}(\psi, \lambda; \mathbf{X}) \, p(\lambda \mid \psi) \, d\lambda.$$

Unfortunately, marginal likelihoods are generally difficult to compute. Exact solutions are known for a small class of distributions, particularly when the marginalized-out parameter is the conjugate prior of the distribution of the data. In other cases, some kind of numerical integration method is needed, either a general method such as Gaussian integration or a Monte Carlo method, or a method specialized to statistical problems such as the Laplace approximation, Gibbs/Metropolis sampling, or the EM algorithm.
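
As a concrete illustration of the Monte Carlo approach, the sketch below estimates the marginal likelihood of a single Poisson count under a conjugate gamma prior by averaging the likelihood over draws from the prior, and checks the result against the exact negative binomial answer available through conjugacy. The model, hyperparameters, and sample size are illustrative assumptions, not values taken from the text.

```python
import numpy as np
from scipy import stats

# Naive Monte Carlo estimate of a marginal likelihood: draw the parameter
# from its prior and average the likelihood (illustrative Poisson-gamma model).
rng = np.random.default_rng(0)
y = 4                      # single observed count (assumed)
a, b = 2.0, 1.0            # gamma prior: shape a, rate b (assumed)

lam = rng.gamma(shape=a, scale=1.0 / b, size=200_000)   # draws from the prior
mc_estimate = stats.poisson.pmf(y, lam).mean()          # (1/N) * sum_i p(y | lambda_i)

# Conjugacy gives the exact answer: the marginal of y is negative binomial.
exact = stats.nbinom.pmf(y, n=a, p=b / (b + 1.0))

print(mc_estimate, exact)   # the two values agree to a few decimal places
```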

It is also possible to apply the above considerations to a single random variable (data point) $x$, rather than a set of observations. In a Bayesian context, this is equivalent to the prior predictive distribution of a data point.

Applications


Bayesian model comparison


In Bayesian model comparison, the marginalized variables are parameters for a particular type of model, and the remaining variable is the identity of the model itself. In this case, the marginalized likelihood is the probability of the data given the model type, not assuming any particular model parameters. Writing $\theta$ for the model parameters, the marginal likelihood for the model $M$ is

$$p(\mathbf{X} \mid M) = \int p(\mathbf{X} \mid \theta, M) \, p(\theta \mid M) \, d\theta.$$

It is in this context that the term model evidence is normally used. This quantity is important because the posterior odds ratio for a model $M_1$ against another model $M_2$ involves a ratio of marginal likelihoods, called the Bayes factor:

$$\frac{p(M_1 \mid \mathbf{X})}{p(M_2 \mid \mathbf{X})} = \frac{p(M_1)}{p(M_2)} \cdot \frac{p(\mathbf{X} \mid M_1)}{p(\mathbf{X} \mid M_2)},$$

which can be stated schematically as

posterior odds = prior odds × Bayes factor
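
A minimal numeric reading of this identity, with assumed prior odds and an assumed Bayes factor:

```python
# Hypothetical numbers only: with equal prior odds, the Bayes factor
# alone determines the posterior odds.
prior_odds = 0.5 / 0.5        # p(M1) / p(M2), assumed equal prior weight
bayes_factor = 6.0            # p(data | M1) / p(data | M2), assumed value
posterior_odds = prior_odds * bayes_factor
print(posterior_odds)         # 6.0 in favor of M1
```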

from Grokipedia
In Bayesian statistics, the marginal likelihood, also known as the Bayesian evidence or model evidence, is the probability of observing the data given a model, obtained by integrating the joint likelihood of the data and model parameters with respect to the prior distribution over the parameters. Mathematically, for data $D$ and model $M_k$ with parameters $\theta_k$, it is expressed as $p(D \mid M_k) = \int p(D \mid \theta_k, M_k) \, p(\theta_k \mid M_k) \, d\theta_k$. This integral marginalizes out the parameters, yielding a quantity that depends only on the data and the model structure, including the prior. The marginal likelihood plays a central role in Bayesian model comparison and selection, as it quantifies how well a model predicts the data while automatically incorporating a complexity penalty through the prior's influence on the integration. Specifically, the Bayes factor, which is the ratio of the marginal likelihoods of two competing models, serves as a measure of evidence in favor of one model over the other, updating prior odds to posterior odds without requiring nested models. It is essential for Bayesian model averaging, where posterior model probabilities are computed as $p(M_k \mid D) = \frac{p(D \mid M_k) \, p(M_k)}{\sum_l p(D \mid M_l) \, p(M_l)}$, allowing inference that accounts for model uncertainty. Applications span fields like hydrology, phylogenetics, and cosmology, where it aids in hypothesis testing and hyperparameter optimization. Computing the marginal likelihood exactly is often intractable for complex models due to the high-dimensional integral, necessitating approximations such as the Laplace approximation, thermodynamic integration, or Markov chain Monte Carlo (MCMC) estimators such as the harmonic mean estimator. These methods balance accuracy and computational feasibility, with thermodynamic integration noted for its consistency in environmental modeling contexts. The sensitivity to prior choices underscores its interpretive challenges, as the marginal likelihood encodes Occam's razor by favoring parsimonious models that fit the data well.

Definition and Basics

Formal Definition

The marginal likelihood, denoted as $p(y)$, is formally defined as the marginal probability of the observed data $y$, obtained by integrating the joint distribution of the data and the model parameters $\theta$ over the parameter space: $p(y) = \int p(y, \theta) \, d\theta = \int p(y \mid \theta) \, p(\theta) \, d\theta$, where $p(y \mid \theta)$ is the likelihood and $p(\theta)$ is the prior distribution over the parameters $\theta$. This represents the prior predictive distribution of the data, averaging the likelihood across all possible parameter values weighted by the prior. The process of marginalization in this context involves integrating out the parameters $\theta$, which are treated as nuisance parameters, to yield a probability for the data $y$ that is independent of any specific parameter values. This eliminates dependence on $\theta$ by averaging over its uncertainty as specified by the prior, resulting in a quantity that summarizes the model's predictive content for the observed data without conditioning on particular estimates of $\theta$. In contrast to the joint probability $p(y, \theta) = p(y \mid \theta) \, p(\theta)$, which depends on both the data and the parameters, the marginal likelihood $p(y)$ marginalizes away $\theta$ and thus provides the unconditional probability of the data under the model. This distinction underscores how marginalization transforms the parameter-dependent joint into a data-only marginal. The term "marginal likelihood" is used interchangeably with "marginal probability" or "evidence" to refer to $p(y)$, particularly in contexts emphasizing its role as a model-level summary.
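
When the parameter can take only finitely many values, the defining integral reduces to a weighted sum, which makes the marginalization easy to see end to end. The sketch below works through a two-hypothesis coin example (the parameter values, prior weights, and data are assumed for illustration) and also shows the marginal acting as the normalizing constant of the posterior.

```python
from scipy import stats

# Discrete marginalization: theta takes only two values, so the integral
# over the prior becomes a weighted sum (all numbers are illustrative).
thetas = [0.5, 0.8]          # candidate success probabilities
prior = [0.7, 0.3]           # prior mass on each value
y, n = 7, 10                 # observed: 7 successes in 10 trials

joint = [stats.binom.pmf(y, n, t) * p for t, p in zip(thetas, prior)]
marginal = sum(joint)                         # p(y): no longer depends on theta
posterior = [j / marginal for j in joint]     # Bayes' rule, normalized by p(y)

print(marginal, posterior)
```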

Bayesian Interpretation

In Bayesian statistics, the marginal likelihood represents the predictive probability of the observed data under a given model, obtained by averaging the likelihood over all possible parameter values weighted by their prior distribution. This integration marginalizes out the parameters, providing a coherent measure of how well the model explains the data while incorporating prior beliefs about parameter uncertainty. As such, it serves as the Bayesian evidence for the model, distinct from conditional measures that fix parameters at specific values. The marginal likelihood functions as a key indicator of model plausibility, where a higher value suggests the model offers a better overall fit to the data after accounting for the full range of parameter uncertainty encoded in the prior. Unlike point estimates, this averaging process naturally balances goodness-of-fit with the model's capacity to generalize, implicitly favoring parsimonious models that do not overfit by spreading probability mass too thinly across parameters. In model comparison, it enables direct assessment of competing hypotheses by quantifying the relative support each model receives from the data alone. In contrast to maximum likelihood estimation, which maximizes the likelihood at a single point estimate of the parameters and thus relies on plug-in predictions that ignore parameter uncertainty, the marginal likelihood integrates over the entire parameter space. This integration implicitly penalizes model complexity through the prior's influence, as overly flexible models tend to dilute the evidence by assigning low probability to the data under broad parameter ranges, whereas maximum likelihood can favor intricate models that fit noise without such restraint. Consequently, the marginal likelihood promotes more robust model comparison in finite samples by embedding parameter uncertainty directly into the assessment of fit. Historically, the marginal likelihood emerged as a central component in Bayesian model assessment through its role in computing Bayes factors, which compare the evidence for rival models and update prior odds to posterior odds in a coherent manner. This framework, formalized in seminal work on Bayes factors, provided a principled alternative to frequentist testing for evaluating scientific theories.

Mathematical Framework

Expression in Parametric Models

In parametric statistical models, the marginal likelihood integrates out the model parameters to obtain the probability of the observed data under the model specification. Consider a parametric model $M$ indexed by a parameter vector $\theta \in \Theta$, where $\Theta$ is the parameter space. The marginal likelihood is then expressed as $p(y \mid M) = \int_{\Theta} p(y \mid \theta, M) \, \pi(\theta \mid M) \, d\theta$, where $p(y \mid \theta, M)$ denotes the likelihood function and $\pi(\theta \mid M)$ is the prior distribution over the parameters given the model. This formulation arises naturally in Bayesian inference as the normalizing constant for the posterior distribution. When the parameter space $\Theta$ is discrete, the continuous integral is replaced by a summation: $p(y \mid M) = \sum_{\theta \in \Theta} p(y \mid \theta, M) \, \pi(\theta \mid M)$. This discrete form is applicable in scenarios such as finite mixture models with a discrete number of components or models with categorical parameters. The dimensionality of the integral or sum corresponds directly to the dimension of $\theta$, reflecting the number of parameters being marginalized out; higher dimensionality increases computational demands but maintains the core structure of the expression. A concrete example illustrates this in a simple univariate normal model, where the observed data $y$ follow $y \sim \mathcal{N}(\mu, \sigma^2)$ with known $\sigma^2$, and the prior on the mean is $\mu \sim \mathcal{N}(0, 1)$. The marginal likelihood involves the integral $p(y \mid M) = \int_{-\infty}^{\infty} \mathcal{N}(y \mid \mu, \sigma^2) \, \mathcal{N}(\mu \mid 0, 1) \, d\mu$. Completing the square in the exponent yields a closed-form normal distribution for the marginal: $p(y \mid M) = \mathcal{N}(y \mid 0, \sigma^2 + 1)$. However, in the more general univariate normal model with unknown variance and a conjugate normal-inverse-gamma prior on $(\mu, \sigma^2)$, the full marginal likelihood over both parameters results in a closed-form Student's $t$ distribution for $y$, highlighting how prior choices enable analytical tractability.
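
The closed form above is easy to verify numerically. The sketch below integrates the mean out with adaptive quadrature and compares the result with $\mathcal{N}(y \mid 0, \sigma^2 + 1)$; the particular values of $y$ and $\sigma^2$ are arbitrary assumptions for the check.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Numerical check of the normal-normal closed form: integrate the mean out
# and compare with N(y | 0, sigma2 + 1). Values are assumed for illustration.
y, sigma2 = 1.3, 0.5

def integrand(mu):
    likelihood = stats.norm.pdf(y, loc=mu, scale=np.sqrt(sigma2))
    prior = stats.norm.pdf(mu, loc=0.0, scale=1.0)
    return likelihood * prior

numeric, _ = quad(integrand, -np.inf, np.inf)                       # marginalize mu
closed_form = stats.norm.pdf(y, loc=0.0, scale=np.sqrt(sigma2 + 1.0))

print(numeric, closed_form)   # agree up to quadrature error
```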

Relation to Posterior and Likelihood

In Bayesian inference, the marginal likelihood plays a central role in Bayes' theorem, which expresses the posterior distribution as
$$p(\theta \mid y) = \frac{p(y \mid \theta) \, p(\theta)}{p(y)},$$
where $p(y \mid \theta)$ is the likelihood function, $p(\theta)$ is the prior distribution, and $p(y)$ is the marginal likelihood of the data $y$. This formulation updates prior beliefs about the parameters $\theta$ with observed data to yield the posterior $p(\theta \mid y)$.
The marginal likelihood $p(y)$ serves as the normalizing constant—or evidence—that ensures the posterior integrates to 1 over the parameter space, transforming the unnormalized product $p(y \mid \theta) \, p(\theta)$ into a proper probability distribution. Computationally, it is obtained by integrating the joint distribution of data and parameters: $p(y) = \int p(y \mid \theta) \, p(\theta) \, d\theta$. This integration averages the likelihood across all possible parameter values weighted by the prior, providing a data-dependent measure of model plausibility independent of specific $\theta$. Unlike the conditional likelihood $p(y \mid \theta)$, which conditions on fixed parameters and evaluates model fit for particular $\theta$, the marginal likelihood marginalizes the full likelihood over the prior distribution, incorporating parameter uncertainty from the outset. This contrasts with the profiled likelihood in frequentist settings, where parameters are eliminated by maximization rather than integration, leading to a point-estimate-based adjustment without prior weighting. The marginal likelihood thus quantifies the total predictive uncertainty in the data under the model, encompassing both the variability in the likelihood due to unknown parameters and the prior's influence on that variability, whereas the likelihood alone fixes $\theta$ and ignores such averaging. This broader measure supports coherent probabilistic reasoning in Bayesian frameworks.
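
Because Bayes' theorem holds at every $\theta$, it can be rearranged into $p(y) = p(y \mid \theta) \, p(\theta) / p(\theta \mid y)$, an identity that underlies several estimators discussed later. The sketch below checks this identity in a conjugate normal model whose posterior is known in closed form; the numbers are assumed for illustration.

```python
import numpy as np
from scipy import stats

# Check p(y) = p(y | theta) p(theta) / p(theta | y) at arbitrary theta,
# using a conjugate model: y ~ N(theta, sigma2), theta ~ N(0, 1).
y, sigma2 = 1.3, 0.5
post_var = sigma2 / (sigma2 + 1.0)        # closed-form posterior variance
post_mean = y / (sigma2 + 1.0)            # closed-form posterior mean

def evidence_via_identity(theta):
    likelihood = stats.norm.pdf(y, theta, np.sqrt(sigma2))
    prior = stats.norm.pdf(theta, 0.0, 1.0)
    posterior = stats.norm.pdf(theta, post_mean, np.sqrt(post_var))
    return likelihood * prior / posterior

# The same value is recovered no matter where the identity is evaluated,
# and it matches the direct marginal N(y | 0, sigma2 + 1).
print(evidence_via_identity(0.0), evidence_via_identity(2.0))
print(stats.norm.pdf(y, 0.0, np.sqrt(sigma2 + 1.0)))
```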

Computation Methods

Analytical Approaches

Analytical approaches to computing the marginal likelihood rely on cases where the integral over the parameter space can be evaluated exactly in closed form, which occurs primarily under the use of conjugate priors. Conjugate priors are distributions from the same family as the posterior, allowing the marginal likelihood to be derived as a closed-form expression without numerical integration. This solvability holds when the likelihood and prior combine to yield a posterior in the same parametric family, simplifying the integration to known functions like the beta or gamma integrals. A classic example is the beta-binomial model, where the binomial likelihood for coin flips is paired with a beta prior on the success probability $\theta$. The marginal likelihood is then given by $p(y \mid n, \alpha, \beta) = \binom{n}{y} \frac{B(\alpha + y, \beta + n - y)}{B(\alpha, \beta)}$, where $B$ denotes the beta function, representing the integral over $\theta$. This closed-form expression arises directly from the conjugacy, enabling exact evaluation for binary data. In Gaussian models, conjugate priors such as the normal-inverse-Wishart facilitate analytical marginal likelihoods, often resulting in a multivariate Student's $t$ distribution for the data. For a multivariate normal likelihood with unknown mean $\mu$ and precision $\Lambda$, the marginal distribution of the data is $p(D) = \pi^{-nd/2} \frac{\Gamma_d(\nu_n/2)}{\Gamma_d(\nu_0/2)} \frac{|\Lambda_0|^{\nu_0/2}}{|\Lambda_n|^{\nu_n/2}} \left( \frac{\kappa_0}{\kappa_n} \right)^{d/2}$, where $\nu_n = \nu_0 + n$, $\kappa_n = \kappa_0 + n$, $\Lambda_n = \Lambda_0 + S + \frac{\kappa_0 n}{\kappa_n} (\bar{x} - \mu_0)(\bar{x} - \mu_0)^T$, $S = \sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})^T$, $d$ is the dimensionality, and $n$ is the number of observations. This highlights the tractability in linear Gaussian settings. However, such analytical solutions are rare, particularly in high-dimensional settings or with non-conjugate priors, where the parameter integral becomes intractable and necessitates numerical methods. These limitations stem from the exponential growth in integration complexity as dimensionality increases, restricting exact computations to low-dimensional or specially structured models.
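
The beta-binomial formula above can be evaluated directly, and working on the log scale keeps it numerically stable for larger counts. The sketch below uses assumed example values for $n$, $y$, and the prior hyperparameters and cross-checks the result against SciPy's beta-binomial distribution.

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy import stats

# Closed-form beta-binomial marginal likelihood on the log scale.
n, y = 20, 14
alpha, beta = 1.0, 1.0      # Beta(1, 1) prior on the success probability (assumed)

log_binom = gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)   # log C(n, y)
log_marginal = log_binom + betaln(alpha + y, beta + n - y) - betaln(alpha, beta)
print(np.exp(log_marginal))

# Sanity check: the same quantity is the beta-binomial pmf.
print(stats.betabinom.pmf(y, n, alpha, beta))
```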

Numerical and Approximation Techniques

When exact analytical computation of the marginal likelihood is infeasible, such as in non-conjugate models with high-dimensional parameter spaces, numerical and approximation techniques become essential for estimation. These methods leverage sampling or asymptotic expansions to approximate the integral $\int p(y \mid \theta) \, p(\theta) \, d\theta$, balancing computational feasibility with accuracy. Monte Carlo-based approaches, in particular, provide unbiased or consistent estimators but often require careful tuning to manage variance, while deterministic methods offer faster but potentially biased approximations suitable for large-scale applications. Monte Carlo methods, including importance sampling, form a foundational class of estimators for the marginal likelihood. The importance sampling estimator approximates the integral as $\hat{p}(y) \approx \frac{1}{N} \sum_{i=1}^N \frac{p(y \mid \theta_i) \, p(\theta_i)}{q(\theta_i)}$, where $\{\theta_i\}_{i=1}^N$ are samples drawn from a proposal distribution $q(\theta)$ chosen to approximate the posterior $p(\theta \mid y)$. This estimator is consistent under mild conditions on $q$, though high variance arises if $q$ poorly covers the posterior support, necessitating techniques like adaptive proposals or multiple importance sampling to improve efficiency in complex models. Markov chain Monte Carlo (MCMC) methods extend these ideas by generating dependent samples from the posterior to estimate the marginal likelihood without direct prior sampling. Bridge sampling, for instance, combines samples from a proposal distribution $g(\theta)$ (often the prior or a convenient approximation to the posterior) with posterior samples through a bridge function $h(\theta)$, using the identity $p(y) = \frac{\mathbb{E}_{g}\left[h(\theta) \, p(y \mid \theta) \, p(\theta)\right]}{\mathbb{E}_{p(\theta \mid y)}\left[h(\theta) \, g(\theta)\right]}$, where $h$ is chosen to minimize estimation variance; this approach is particularly robust for multimodal posteriors. The harmonic mean estimator, another posterior-based method, approximates $p(y)$ as the reciprocal of the average inverse likelihood over posterior samples, $\hat{p}(y) \approx \left( \frac{1}{N} \sum_{i=1}^N \frac{1}{p(y \mid \theta_i)} \right)^{-1}$, but suffers from infinite variance in heavy-tailed cases, prompting stabilized variants. Deterministic approximations, such as the Laplace method, provide closed-form estimates by exploiting local Gaussian behavior around the posterior mode. This technique approximates the log-posterior as quadratic via its Hessian $H$ at the mode $\theta^*$, leading to $p(y) \approx p(y \mid \theta^*) \, p(\theta^*) \, (2\pi)^{d/2} |H|^{-1/2}$, where $d$ is the parameter dimension; the approximation improves asymptotically as $n \to \infty$ for large samples but can degrade in small-data or highly nonlinear settings. It is computationally efficient, requiring only optimization and a Hessian evaluation, and serves as a building block for higher-order corrections in moderate dimensions. Variational inference offers a scalable lower bound on the log-marginal likelihood through optimization, framing estimation as minimizing the Kullback-Leibler divergence between a tractable variational distribution $q(\theta)$ and the true posterior.
The evidence lower bound (ELBO) states $\log p(y) \geq \mathbb{E}_q [\log p(y, \theta)] - \mathbb{E}_q [\log q(\theta)]$, and is maximized by adjusting $q$ (often mean-field or structured forms) via stochastic gradients; this bound is tight when $q \approx p(\theta \mid y)$ and enables fast inference on massive datasets, though it underestimates the true value and requires careful family selection to avoid loose bounds. Recent advancements, including those post-2020, have enhanced these techniques for scalability in deep learning models, where parameter spaces exceed millions of dimensions. Annealed importance sampling (AIS) refines importance sampling by introducing intermediate distributions bridging prior and posterior, with differentiable variants enabling end-to-end optimization of annealing schedules for tighter estimates in generative models. Sequential Monte Carlo (SMC) methods, propagating particles through annealed sequences with resampling, have been adapted for deep Bayesian networks, achieving unbiased marginal likelihoods via thermodynamic integration while integrating neural proposals for efficiency in high-dimensional tasks like variational autoencoders. More recent methods, such as generalized stepping-stone sampling (2024), improve efficiency in specific domains like pulsar timing analysis. These developments prioritize variance reduction and GPU acceleration, facilitating model comparison in neural architectures.
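
To make two of these estimators concrete, the sketch below applies plain importance sampling (with a deliberately broad proposal) and the Laplace approximation to a beta-binomial model whose exact marginal likelihood is known, so the approximations can be compared against the truth. The model, prior, proposal, and sample sizes are assumed example choices, not prescriptions.

```python
import numpy as np
from scipy import stats
from scipy.special import betaln, gammaln

rng = np.random.default_rng(1)
n, y = 20, 14
alpha, beta = 1.0, 1.0      # uniform Beta(1, 1) prior (assumed)

def log_joint(theta):
    """log p(y | theta) + log p(theta): binomial likelihood, Beta(1,1) prior."""
    return stats.binom.logpmf(y, n, theta) + stats.beta.logpdf(theta, alpha, beta)

# --- Importance sampling with a broad Beta(2, 2) proposal --------------------
theta = rng.beta(2.0, 2.0, size=200_000)
log_w = log_joint(theta) - stats.beta.logpdf(theta, 2.0, 2.0)   # log importance weights
is_estimate = np.exp(log_w).mean()

# --- Laplace approximation around the posterior mode -------------------------
mode = y / n                                        # mode of Beta(y+1, n-y+1)
neg_hess = y / mode**2 + (n - y) / (1 - mode)**2    # -d^2 log joint / d theta^2 (flat prior)
laplace = np.exp(log_joint(mode)) * np.sqrt(2 * np.pi / neg_hess)

# --- Exact conjugate answer for comparison -----------------------------------
log_binom = gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
exact = np.exp(log_binom + betaln(alpha + y, beta + n - y) - betaln(alpha, beta))

print(is_estimate, laplace, exact)
```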

Applications in Statistics

Model Comparison and Selection

One key application of the marginal likelihood in Bayesian statistics is model comparison, where it serves as the basis for the Bayes factor, a measure of the relative evidence provided by the data for two competing models. The Bayes factor $BF_{12}$ comparing model $M_1$ to model $M_2$ is defined as the ratio of their marginal likelihoods:
$$BF_{12} = \frac{p(y \mid M_1)}{p(y \mid M_2)},$$
where $y$ denotes the observed data. This ratio quantifies how much more likely the data are under $M_1$ than under $M_2$, after integrating out model parameters via their priors, thereby providing a coherent framework for hypothesis testing and model selection without relying on arbitrary significance thresholds.
For nested models, where $M_1$ is a special case of $M_2$ (e.g., by imposing parameter restrictions), the Savage-Dickey density ratio offers a convenient way to compute the Bayes factor using posterior and prior densities evaluated at the boundary values of the restricted parameters. Specifically, the Bayes factor is given by
$$BF_{12} = \frac{p(\theta \mid y, M_2)}{\pi(\theta \mid M_2)},$$
evaluated at the values of $\theta$ (the restricted parameter) that define the nesting boundary, under the encompassing model $M_2$, assuming the prior under $M_2$ matches the prior under $M_1$ for the common parameters. This approach simplifies computation by avoiding full marginal likelihood estimation for both models, though it requires careful prior specification to ensure validity.
The marginal likelihood inherently implements Occam's razor in model comparison by favoring parsimonious models that adequately fit the data, as more complex models must allocate prior probability mass across larger parameter spaces, effectively penalizing overparameterization unless the data strongly support the added complexity. In practice, Bayes factors are interpreted using guidelines such as those proposed by Kass and Raftery, where values between 1 and 3 provide evidence barely worth mentioning, 3 to 20 indicate positive evidence, 20 to 150 strong evidence, and greater than 150 very strong evidence in favor of the numerator model (e.g., $BF_{12} > 10$ suggests substantial support for $M_1$). However, Bayes factors exhibit sensitivity to the choice of priors, which can lead to varying conclusions if priors are not chosen judiciously, and their computation becomes prohibitively expensive for high-dimensional or large-scale models, often necessitating approximations like those from MCMC methods.
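
For a nested pair of Gaussian models, both the direct Bayes factor and its Savage-Dickey form are available in closed form, which makes a compact consistency check. In the sketch below, $M_1$ fixes the mean at zero while $M_2$ places a normal prior on it; the data value and variances are assumed for illustration.

```python
import numpy as np
from scipy import stats

# Bayes factor for nested models, with a Savage-Dickey cross-check.
# M1: y ~ N(0, sigma2).   M2: y ~ N(mu, sigma2), mu ~ N(0, tau2).
y, sigma2, tau2 = 1.8, 1.0, 4.0           # assumed example values

marg_m1 = stats.norm.pdf(y, 0.0, np.sqrt(sigma2))           # mu fixed at 0
marg_m2 = stats.norm.pdf(y, 0.0, np.sqrt(sigma2 + tau2))    # mu integrated out
bf_12 = marg_m1 / marg_m2

# Savage-Dickey: posterior density of mu at 0 under M2 over its prior density at 0.
post_var = sigma2 * tau2 / (sigma2 + tau2)
post_mean = y * tau2 / (sigma2 + tau2)
sd_ratio = (stats.norm.pdf(0.0, post_mean, np.sqrt(post_var))
            / stats.norm.pdf(0.0, 0.0, np.sqrt(tau2)))

print(bf_12, sd_ratio)   # the two computations coincide
```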

Inference in Hierarchical Models

In Bayesian hierarchical models, the marginal likelihood facilitates inference by marginalizing over lower-level parameters to focus on hyperparameters. Consider a setup with observed data $y$, group-specific parameters $\theta$, and global hyperparameters $\psi$; the marginal likelihood is expressed as $p(y \mid \psi) = \int p(y \mid \theta, \psi) \, p(\theta \mid \psi) \, d\theta$. This integral integrates out the variability in $\theta$ across groups, yielding a likelihood for $\psi$ that captures the overall structure induced by the hierarchy. Such marginalization enables posterior inference on $\psi$, such as via Markov chain Monte Carlo (MCMC) on the reduced parameter space, which is particularly valuable in models with many groups where full joint inference would be computationally prohibitive. The marginal likelihood also underpins posterior predictive checks in hierarchical settings, where it serves as the normalizing constant for the posterior distribution. The posterior predictive distribution for new data $y^*$ is $p(y^* \mid y) = \int p(y^* \mid \theta) \, p(\theta \mid y) \, d\theta$, with $p(\theta \mid y) \propto p(y \mid \theta) \, p(\theta)$ and the marginal $p(y)$ ensuring proper normalization. This framework allows assessment of model fit by simulating replicated datasets from the posterior predictive distribution and comparing them to the observed $y$, accounting for uncertainty across levels. In empirical Bayes, the marginal likelihood is maximized to estimate hyperparameters directly from the data, as in $\hat{\psi} = \arg\max_{\psi} p(y \mid \psi)$. This approach yields data-driven priors that shrink individual estimates toward a global mean, improving stability in sparse or high-dimensional scenarios. In genomics, for instance, empirical Bayes based on marginal likelihood maximization has enabled efficient inference for thousands of gene expression measurements in a single experiment, providing posterior means with reduced variance compared to independent analyses. Relative to full Bayesian MCMC over all parameters and hyperparameters, this method often reduces effective dimensionality, accelerating computation while maintaining shrinkage benefits, though it approximates the full posterior. Despite these advantages, computing the marginal likelihood in multi-level hierarchical models poses significant challenges due to the need to evaluate high-dimensional integrals, which grow intractable with added depth. In such cases, variational methods approximate the posterior by optimizing a lower bound on the marginal, enabling scalable inference in complex structures like topic models or phylogenetic trees. When exact computation fails, numerical techniques from broader computational frameworks are typically invoked to estimate these integrals reliably.
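
The empirical Bayes step described here can be made concrete in the simplest normal hierarchy, where the group-level parameters integrate out analytically and the hyperparameter is fit by maximizing the resulting marginal. The sketch below simulates data under assumed values and recovers a data-driven shrinkage factor; everything in it is an illustrative assumption.

```python
import numpy as np
from scipy import stats, optimize

# Empirical Bayes in a normal hierarchy: y_i ~ N(theta_i, sigma2),
# theta_i ~ N(0, tau2). Integrating theta_i out gives y_i ~ N(0, sigma2 + tau2),
# so tau2 can be estimated by maximizing this marginal likelihood.
rng = np.random.default_rng(2)
sigma2, tau2_true, n_groups = 1.0, 2.5, 200          # assumed simulation settings
theta = rng.normal(0.0, np.sqrt(tau2_true), n_groups)
y = rng.normal(theta, np.sqrt(sigma2))

def neg_log_marginal(log_tau2):
    tau2 = np.exp(log_tau2)          # optimize on the log scale so tau2 stays positive
    return -stats.norm.logpdf(y, 0.0, np.sqrt(sigma2 + tau2)).sum()

res = optimize.minimize_scalar(neg_log_marginal, bounds=(-5.0, 5.0), method="bounded")
tau2_hat = np.exp(res.x)

# Data-driven shrinkage of each group estimate toward the global mean of zero.
shrunk = y * tau2_hat / (sigma2 + tau2_hat)
print(tau2_hat)
print(np.mean((shrunk - theta) ** 2), np.mean((y - theta) ** 2))   # shrinkage lowers error
```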

Historical Development

Origins in Bayesian Theory

The concept of marginal likelihood traces its roots to the foundational principles of Bayesian inference articulated by Thomas Bayes in his posthumously published 1763 essay, "An Essay towards solving a Problem in the Doctrine of Chances." In this work, Bayes explored inverse probability, seeking to determine the probability of a cause given observed effects, and implicitly relied on marginalization over possible parameter values to normalize the posterior distribution. This approach integrated prior beliefs with data to yield updated probabilities, laying the groundwork for the marginal likelihood as the predictive probability of the data under a model, though Bayes did not explicitly term it as such. The explicit formulation of marginal likelihood as a measure of evidence emerged in Harold Jeffreys' seminal 1939 book, Theory of Probability. Jeffreys advanced Bayesian hypothesis testing by defining the marginal likelihood—the integral of the likelihood over the prior distribution—as the evidential support for a hypothesis, independent of specific parameter values. This formulation enabled objective comparisons between competing scientific theories, using non-informative priors to avoid subjective bias, and positioned the marginal likelihood as central to Bayesian hypothesis testing in statistics. In the mid-20th century, the 1950s and 1960s saw further development through the introduction of Bayes factors, which directly incorporate marginal likelihoods to quantify relative model support. I. J. Good, in his 1950 monograph Probability and the Weighing of Evidence, coined the term "Bayes factor" as the ratio of marginal likelihoods under alternative hypotheses, providing a scale for weighing evidential strength in hypothesis testing. Building on this, D. V. Lindley's 1957 paper "A Statistical Paradox" demonstrated the use of marginal probabilities for Bayesian model comparison, illustrating how they could lead to counterintuitive results when comparing nested models and emphasizing their role in resolving ambiguities in hypothesis testing. Concurrently, Leonard J. Savage contributed to this era's neo-Bayesian revival by integrating marginal likelihood concepts with likelihood-based inference, as explored in his later writings on the foundations of statistical evidence. Prior to 1980, the practical application of marginal likelihood remained constrained by computational limitations, confining its use largely to analytically tractable cases involving conjugate priors, where closed-form expressions for the marginal could be derived. This focus on conjugate families, such as the beta-binomial or normal-normal models, allowed exact computation without numerical approximation, but restricted broader adoption in complex, non-conjugate scenarios.

Key Advancements and Modern Usage

The advent of Markov chain Monte Carlo (MCMC) methods in the 1980s and 1990s revolutionized Bayesian computation, enabling the estimation of marginal likelihoods in complex models where analytical integration was infeasible. A pivotal advancement was Chib's 1995 method, which computes the marginal likelihood using posterior samples from Gibbs sampling by rearranging Bayes' theorem to express it as the ratio of the likelihood times the prior to the posterior ordinate, evaluated at a high-density point. This approach, applicable to models with latent variables, provided a reliable estimator without requiring sampling schemes beyond the Gibbs output, facilitating model comparison in hierarchical settings. In the 2000s, asymptotic approximations gained prominence for their computational efficiency in large datasets, bridging Bayesian and frequentist paradigms. The Bayesian information criterion (BIC), originally proposed by Schwarz in 1978, serves as an asymptotic approximation of the log marginal likelihood, approximating it as $\log p(y \mid M) \approx \ell_{\max} - \frac{k}{2} \log n$, where $\ell_{\max}$ is the maximized log-likelihood, $k$ is the number of parameters, and $n$ is the sample size. This approximation, derived under regularity conditions for large $n$, links marginal likelihood estimation to penalization criteria like AIC, promoting its use in high-dimensional model selection despite its frequentist-style plug-in form. The 2010s and 2020s witnessed the integration of marginal likelihood estimation into machine learning frameworks, enhancing model selection and hyperparameter tuning. In Gaussian processes (GPs), the marginal likelihood—often maximized via its log form—guides kernel and noise hyperparameter selection, balancing fit and model complexity as detailed in foundational treatments of GP regression. For Bayesian neural networks, the marginal likelihood, or evidence, underpins variational approximations to assess model generalization, with techniques like the evidence lower bound (ELBO) providing scalable proxies amid the rise of deep learning. As of 2025, marginal likelihood computation plays a central role in scalable Bayesian inference, particularly through black-box variational inference (BBVI), which optimizes the ELBO to approximate the evidence in high-dimensional models without model-specific derivations. However, ongoing debates highlight its sensitivity to prior choices in large datasets, where even weakly informative priors can disproportionately influence estimates, prompting robustness analyses and alternative criteria like the widely applicable information criterion (WAIC). Influential software packages have democratized these methods: bridge sampling routines can estimate marginal likelihoods from Stan's MCMC output, while PyMC offers sequential Monte Carlo samplers whose runs yield marginal likelihood estimates, supporting automatic computation in applied workflows.
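
Since the BIC expression above is meant to approximate the log marginal likelihood, it can be checked against a model where the exact value is computable. The sketch below uses a conjugate normal model with a single mean parameter (data simulated under assumed settings) and compares the exact log marginal with $\ell_{\max} - \tfrac{k}{2} \log n$.

```python
import numpy as np
from scipy import stats

# Exact log marginal likelihood of a conjugate normal model versus its
# BIC-style approximation l_max - (k/2) log n. Settings are assumed.
rng = np.random.default_rng(3)
n, sigma2, tau2 = 200, 1.0, 1.0
y = rng.normal(0.3, np.sqrt(sigma2), n)

# Exact: integrating mu ~ N(0, tau2) out leaves y ~ N(0, sigma2*I + tau2*J).
cov = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
log_marginal = stats.multivariate_normal.logpdf(y, mean=np.zeros(n), cov=cov)

# BIC-style approximation: plug in the maximum likelihood estimate of mu (k = 1).
mu_hat = y.mean()
l_max = stats.norm.logpdf(y, mu_hat, np.sqrt(sigma2)).sum()
bic_approx = l_max - 0.5 * 1 * np.log(n)

print(log_marginal, bic_approx)   # close for moderate n; they differ by O(1) terms
```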
