Mutual information

from Wikipedia

Venn diagram showing additive and subtractive relationships of various information measures associated with correlated variables $X$ and $Y$.[1] The area contained by either circle is the joint entropy $H(X,Y)$. The circle on the left (red and violet) is the individual entropy $H(X)$, with the red being the conditional entropy $H(X \mid Y)$. The circle on the right (blue and violet) is $H(Y)$, with the blue being $H(Y \mid X)$. The violet is the mutual information $I(X;Y)$.

In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables. More specifically, it quantifies the "amount of information" (in units such as shannons (bits), nats or hartleys) obtained about one random variable by observing the other random variable. The concept of mutual information is intimately linked to that of entropy of a random variable, a fundamental notion in information theory that quantifies the expected "amount of information" held in a random variable.

Not limited to real-valued random variables and linear dependence like the correlation coefficient, MI is more general and determines how different the joint distribution of the pair $(X,Y)$ is from the product of the marginal distributions of $X$ and $Y$. MI is the expected value of the pointwise mutual information (PMI).

The quantity was defined and analyzed by Claude Shannon in his landmark paper "A Mathematical Theory of Communication", although he did not call it "mutual information". This term was coined later by Robert Fano.[2] Mutual information is also known as information gain.

Definition

Let $(X, Y)$ be a pair of random variables with values over the space $\mathcal{X} \times \mathcal{Y}$. If their joint distribution is $P_{(X,Y)}$ and the marginal distributions are $P_X$ and $P_Y$, the mutual information is defined as

$$I(X;Y) = D_{\mathrm{KL}}\bigl(P_{(X,Y)} \,\|\, P_X \otimes P_Y\bigr),$$

where $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence, and $P_X \otimes P_Y$ is the outer product distribution which assigns probability $P_X(x) \cdot P_Y(y)$ to each $(x,y)$.

Expressed in terms of the entropy $H(\cdot)$ and the conditional entropy $H(\cdot \mid \cdot)$ of the random variables $X$ and $Y$, one also has (see relation to conditional and joint entropy):

$$I(X;Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X).$$

Notice, as per a property of the Kullback–Leibler divergence, that $I(X;Y)$ is equal to zero precisely when the joint distribution coincides with the product of the marginals, i.e. when $X$ and $Y$ are independent (and hence observing $Y$ tells you nothing about $X$). $I(X;Y)$ is non-negative; it is a measure of the price for encoding $(X,Y)$ as a pair of independent random variables when in reality they are not.

If the natural logarithm is used, the unit of mutual information is the nat. If the log base 2 is used, the unit of mutual information is the shannon, also known as the bit. If the log base 10 is used, the unit of mutual information is the hartley, also known as the ban or the dit.
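
Because the choice of logarithm base only rescales the value, quantities in different units are related by constant factors; a short worked conversion:

$$I_{\text{bits}} = \frac{I_{\text{nats}}}{\ln 2} \approx 1.4427\, I_{\text{nats}}, \qquad I_{\text{hartleys}} = \frac{I_{\text{bits}}}{\log_2 10} \approx 0.3010\, I_{\text{bits}}.$$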

In terms of PMFs for discrete distributions

The mutual information of two jointly discrete random variables $X$ and $Y$ is calculated as a double sum:[3]: 20 

$$I(X;Y) = \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} p_{(X,Y)}(x,y) \log \frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)},$$

where $p_{(X,Y)}$ is the joint probability mass function of $X$ and $Y$, and $p_X$ and $p_Y$ are the marginal probability mass functions of $X$ and $Y$ respectively.
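
As a concrete illustration of the double sum, here is a minimal Python sketch (the function name and the example joint PMF are illustrative assumptions, not from the article) that computes $I(X;Y)$ from a joint probability mass table:

```python
import numpy as np

def mutual_information(p_xy, base=2.0):
    """Mutual information of a joint PMF given as a 2-D array (rows: x, columns: y)."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal p_X(x)
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal p_Y(y)
    mask = p_xy > 0                            # terms with p(x,y) = 0 contribute nothing
    ratio = p_xy[mask] / (p_x @ p_y)[mask]     # p(x,y) / (p(x) p(y))
    return float(np.sum(p_xy[mask] * np.log(ratio)) / np.log(base))

# A small dependent example: I(X;Y) is about 0.125 bits.
print(mutual_information([[0.4, 0.1],
                          [0.2, 0.3]]))
```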

In terms of PDFs for continuous distributions

In the case of jointly continuous random variables, the double sum is replaced by a double integral:[3]: 251 

$$I(X;Y) = \int_{\mathcal{Y}} \int_{\mathcal{X}} p_{(X,Y)}(x,y) \log \frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} \, dx \, dy,$$

where $p_{(X,Y)}$ is now the joint probability density function of $X$ and $Y$, and $p_X$ and $p_Y$ are the marginal probability density functions of $X$ and $Y$ respectively.
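
A quick numerical sanity check of this double integral (a sketch with an illustrative grid, not part of the original text) discretizes it for a standard bivariate Gaussian and compares the result with the closed form $-\tfrac{1}{2}\ln(1-\rho^2)$ discussed later in the article:

```python
import numpy as np

rho = 0.6
grid = np.linspace(-6.0, 6.0, 601)
dx = grid[1] - grid[0]
x, y = np.meshgrid(grid, grid, indexing="ij")

# Joint and marginal densities of a standard bivariate Gaussian with correlation rho.
f_xy = np.exp(-(x**2 - 2*rho*x*y + y**2) / (2*(1 - rho**2))) / (2*np.pi*np.sqrt(1 - rho**2))
f_x = np.exp(-x**2 / 2) / np.sqrt(2*np.pi)
f_y = np.exp(-y**2 / 2) / np.sqrt(2*np.pi)

integrand = np.where(f_xy > 0, f_xy * np.log(f_xy / (f_x * f_y)), 0.0)
print(integrand.sum() * dx * dx)            # ≈ 0.223 nats (Riemann sum)
print(-0.5 * np.log(1 - rho**2))            # ≈ 0.223 nats (closed form)
```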

Motivation

Intuitively, mutual information measures the information that $X$ and $Y$ share: it measures how much knowing one of these variables reduces uncertainty about the other. For example, if $X$ and $Y$ are independent, then knowing $X$ does not give any information about $Y$ and vice versa, so their mutual information is zero. At the other extreme, if $X$ is a deterministic function of $Y$ and $Y$ is a deterministic function of $X$ then all information conveyed by $X$ is shared with $Y$: knowing $X$ determines the value of $Y$ and vice versa. As a result, the mutual information is the same as the uncertainty contained in $Y$ (or $X$) alone, namely the entropy of $Y$ (or $X$). A very special case of this is when $X$ and $Y$ are the same random variable.

Mutual information is a measure of the inherent dependence expressed in the joint distribution of $X$ and $Y$ relative to the marginal distribution of $X$ and $Y$ under the assumption of independence. Mutual information therefore measures dependence in the following sense: $I(X;Y) = 0$ if and only if $X$ and $Y$ are independent random variables. This is easy to see in one direction: if $X$ and $Y$ are independent, then $p_{(X,Y)}(x,y) = p_X(x) \cdot p_Y(y)$, and therefore:

$$\log \frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} = \log 1 = 0.$$

Moreover, mutual information is nonnegative (i.e. $I(X;Y) \ge 0$; see below) and symmetric (i.e. $I(X;Y) = I(Y;X)$; see below).

Properties

Nonnegativity

Using Jensen's inequality on the definition of mutual information we can show that $I(X;Y)$ is non-negative, i.e.[3]: 28 

$$I(X;Y) \ge 0.$$

Symmetry

$I(X;Y) = I(Y;X)$. The proof is given by considering the relationship with entropy, as shown below.

Supermodularity under independence

If is independent of , then

.[4]

Relation to conditional and joint entropy

Mutual information can be equivalently expressed as:

$$\begin{aligned}
I(X;Y) &= H(X) - H(X \mid Y) \\
       &= H(Y) - H(Y \mid X) \\
       &= H(X) + H(Y) - H(X,Y) \\
       &= H(X,Y) - H(X \mid Y) - H(Y \mid X),
\end{aligned}$$

where $H(X)$ and $H(Y)$ are the marginal entropies, $H(X \mid Y)$ and $H(Y \mid X)$ are the conditional entropies, and $H(X,Y)$ is the joint entropy of $X$ and $Y$.

Notice the analogy to the union, difference, and intersection of two sets: in this respect, all the formulas given above are apparent from the Venn diagram reported at the beginning of the article.

In terms of a communication channel in which the output $Y$ is a noisy version of the input $X$, these relations are summarised in the figure:

The relationships between information theoretic quantities

Because $I(X;Y)$ is non-negative, consequently $H(X) \ge H(X \mid Y)$. Here we give the detailed deduction of $I(X;Y) = H(Y) - H(Y \mid X)$ for the case of jointly discrete random variables (a compact version is sketched below):
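
The following sketch reproduces the main steps, writing $p_{(X,Y)}$, $p_X$, $p_Y$ for the joint and marginal mass functions as above:

$$\begin{aligned}
I(X;Y) &= \sum_{x,y} p_{(X,Y)}(x,y)\,\log\frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} \\
&= \sum_{x,y} p_{(X,Y)}(x,y)\,\log\frac{p_{(X,Y)}(x,y)}{p_X(x)} \;-\; \sum_{x,y} p_{(X,Y)}(x,y)\,\log p_Y(y) \\
&= \sum_{x} p_X(x)\sum_{y} p_{Y\mid X=x}(y)\,\log p_{Y\mid X=x}(y) \;-\; \sum_{y} p_Y(y)\,\log p_Y(y) \\
&= -\,H(Y\mid X) + H(Y) \;=\; H(Y) - H(Y\mid X).
\end{aligned}$$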

The proofs of the other identities above are similar. The proof of the general case (not just discrete) is similar, with integrals replacing sums.

Intuitively, if entropy $H(Y)$ is regarded as a measure of uncertainty about a random variable, then $H(Y \mid X)$ is a measure of what $X$ does not say about $Y$. This is "the amount of uncertainty remaining about $Y$ after $X$ is known", and thus the right side of the second of these equalities can be read as "the amount of uncertainty in $Y$, minus the amount of uncertainty in $Y$ which remains after $X$ is known", which is equivalent to "the amount of uncertainty in $Y$ which is removed by knowing $X$". This corroborates the intuitive meaning of mutual information as the amount of information (that is, reduction in uncertainty) that knowing either variable provides about the other.

Note that in the discrete case $H(Y \mid Y) = 0$ and therefore $H(Y) = I(Y;Y)$. Thus $I(Y;Y) \ge I(X;Y)$, and one can formulate the basic principle that a variable contains at least as much information about itself as any other variable can provide.

Relation to Kullback–Leibler divergence

For jointly discrete or jointly continuous pairs $(X,Y)$, mutual information is the Kullback–Leibler divergence from the product of the marginal distributions, $p_X \cdot p_Y$, of the joint distribution $p_{(X,Y)}$, that is,

$$I(X;Y) = D_{\mathrm{KL}}\bigl(p_{(X,Y)} \,\|\, p_X \cdot p_Y\bigr).$$

Furthermore, let $p_{X \mid Y=y}(x) = p_{(X,Y)}(x,y) / p_Y(y)$ be the conditional mass or density function. Then, we have the identity

$$I(X;Y) = \mathbb{E}_Y\bigl[D_{\mathrm{KL}}\bigl(p_{X \mid Y} \,\|\, p_X\bigr)\bigr].$$

The proof for jointly discrete random variables is as follows:
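
The main steps of that proof can be sketched as follows:

$$\begin{aligned}
I(X;Y) &= \sum_{x,y} p_{(X,Y)}(x,y)\,\log\frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)}
        = \sum_{y} p_Y(y) \sum_{x} p_{X\mid Y=y}(x)\,\log\frac{p_{X\mid Y=y}(x)}{p_X(x)} \\
       &= \sum_{y} p_Y(y)\, D_{\mathrm{KL}}\bigl(p_{X\mid Y=y} \,\|\, p_X\bigr)
        = \mathbb{E}_Y\bigl[D_{\mathrm{KL}}\bigl(p_{X\mid Y} \,\|\, p_X\bigr)\bigr].
\end{aligned}$$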

Similarly this identity can be established for jointly continuous random variables.

Note that here the Kullback–Leibler divergence involves integration over the values of the random variable $X$ only, and the expression $D_{\mathrm{KL}}\bigl(p_{X \mid Y} \,\|\, p_X\bigr)$ still denotes a random variable because $Y$ is random. Thus mutual information can also be understood as the expectation over $Y$ of the Kullback–Leibler divergence of the conditional distribution $p_{X \mid Y}$ of $X$ given $Y$ from the univariate distribution $p_X$ of $X$: the more different the distributions $p_{X \mid Y}$ and $p_X$ are on average, the greater the information gain.

Bayesian estimation of mutual information

If samples from a joint distribution are available, a Bayesian approach can be used to estimate the mutual information of that distribution. The first work to do this, which also showed how to do Bayesian estimation of many other information-theoretic properties besides mutual information, appeared in [5]. Subsequent researchers have rederived[6] and extended[7] this analysis. See [8] for a recent paper based on a prior specifically tailored to the estimation of mutual information per se. More recently, an estimation method accounting for continuous and multivariate outputs was proposed in [9].

Independence assumptions

The Kullback–Leibler divergence formulation of the mutual information is predicated on the assumption that one is interested in comparing $p(x,y)$ to the fully factorized outer product $p(x) \cdot p(y)$. In many problems, such as non-negative matrix factorization, one is interested in less extreme factorizations; specifically, one wishes to compare $p(x,y)$ to a low-rank matrix approximation in some unknown variable $w$; that is, to what degree one might have

$$p(x,y) \approx \sum_{w} p'(x,w)\,p''(w,y).$$

Alternately, one might be interested in knowing how much more information $p(x,y)$ carries over its factorization. In such a case, the excess information that the full distribution $p(x,y)$ carries over the matrix factorization is given by the Kullback–Leibler divergence

$$\mathrm{IE} = \sum_{y} \sum_{x} p(x,y) \log \frac{p(x,y)}{\sum_{w} p'(x,w)\,p''(w,y)}.$$

The conventional definition of the mutual information is recovered in the extreme case that the process $W$ has only one value for $w$.

Variations

Several variations on mutual information have been proposed to suit various needs. Among these are normalized variants and generalizations to more than two variables.

Metric

Many applications require a metric, that is, a distance measure between pairs of points. The quantity

$$d(X,Y) = H(X,Y) - I(X;Y) = H(X \mid Y) + H(Y \mid X)$$

satisfies the properties of a metric (triangle inequality, non-negativity, indiscernibility and symmetry), where equality $X = Y$ is understood to mean that $X$ can be completely determined from $Y$.[10]

This distance metric is also known as the variation of information.

If $X, Y$ are discrete random variables then all the entropy terms are non-negative, so $0 \le d(X,Y) \le H(X,Y)$ and one can define a normalized distance

$$D(X,Y) = \frac{d(X,Y)}{H(X,Y)} \le 1.$$

Plugging in the definitions shows that

$$D(X,Y) = 1 - \frac{I(X;Y)}{H(X,Y)}.$$

This is known as the Rajski distance.[11] In a set-theoretic interpretation of information (see the figure for Conditional entropy), this is effectively the Jaccard distance between $X$ and $Y$.

Finally,

$$D'(X,Y) = 1 - \frac{I(X;Y)}{\max\{H(X),\, H(Y)\}}$$

is also a metric.
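
The following Python sketch (label sequences and helper names are illustrative assumptions, not from the article) computes the variation of information $d(X,Y)$ and the Rajski distance $D(X,Y)$ for two labelings of the same items:

```python
import numpy as np

def entropies(x, y):
    """Return (H(X,Y), H(X), H(Y)) in bits for two discrete label sequences."""
    x, y = np.asarray(x), np.asarray(y)
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1.0)
    joint /= joint.sum()
    H = lambda p: float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return H(joint.ravel()), H(joint.sum(axis=1)), H(joint.sum(axis=0))

H_xy, H_x, H_y = entropies([0, 0, 1, 1, 2, 2], [0, 0, 0, 1, 1, 1])
mi = H_x + H_y - H_xy
print(H_xy - mi)        # variation of information d(X,Y)
print(1 - mi / H_xy)    # Rajski distance D(X,Y)
```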

Conditional mutual information

Sometimes it is useful to express the mutual information of two random variables conditioned on a third.

For jointly discrete random variables this takes the form

$$I(X;Y \mid Z) = \sum_{z \in \mathcal{Z}} \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} p_Z(z)\, p_{(X,Y) \mid Z}(x,y \mid z) \log \frac{p_{(X,Y) \mid Z}(x,y \mid z)}{p_{X \mid Z}(x \mid z)\, p_{Y \mid Z}(y \mid z)},$$

which can be simplified as

$$I(X;Y \mid Z) = \sum_{z \in \mathcal{Z}} \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} p_{(X,Y,Z)}(x,y,z) \log \frac{p_Z(z)\, p_{(X,Y,Z)}(x,y,z)}{p_{(X,Z)}(x,z)\, p_{(Y,Z)}(y,z)}.$$

For jointly continuous random variables this takes the form

$$I(X;Y \mid Z) = \int_{\mathcal{Z}} \int_{\mathcal{Y}} \int_{\mathcal{X}} p_Z(z)\, p_{(X,Y) \mid Z}(x,y \mid z) \log \frac{p_{(X,Y) \mid Z}(x,y \mid z)}{p_{X \mid Z}(x \mid z)\, p_{Y \mid Z}(y \mid z)} \, dx \, dy \, dz,$$

which can be simplified as

$$I(X;Y \mid Z) = \int_{\mathcal{Z}} \int_{\mathcal{Y}} \int_{\mathcal{X}} p_{(X,Y,Z)}(x,y,z) \log \frac{p_Z(z)\, p_{(X,Y,Z)}(x,y,z)}{p_{(X,Z)}(x,z)\, p_{(Y,Z)}(y,z)} \, dx \, dy \, dz.$$

Conditioning on a third random variable may either increase or decrease the mutual information, but it is always true that

$$I(X;Y \mid Z) \ge 0$$

for discrete, jointly distributed random variables $X, Y, Z$. This result has been used as a basic building block for proving other inequalities in information theory.
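
A minimal Python sketch of the discrete formula (array layout and example values are illustrative) evaluates $I(X;Y \mid Z)$ from a three-way joint PMF:

```python
import numpy as np

def conditional_mutual_information(p_xyz, base=2.0):
    """I(X;Y|Z) from a joint PMF stored as a 3-D array indexed by (x, y, z)."""
    p_xyz = np.asarray(p_xyz, dtype=float)
    p_z = p_xyz.sum(axis=(0, 1))     # p(z)
    p_xz = p_xyz.sum(axis=1)         # p(x, z)
    p_yz = p_xyz.sum(axis=0)         # p(y, z)
    total = 0.0
    for x, y, z in np.ndindex(p_xyz.shape):
        p = p_xyz[x, y, z]
        if p > 0:
            total += p * np.log(p * p_z[z] / (p_xz[x, z] * p_yz[y, z]))
    return total / np.log(base)

# X = Y = Z for a fair bit: X and Y are conditionally independent given Z,
# so the conditional mutual information is 0 even though I(X;Y) = 1 bit.
p = np.zeros((2, 2, 2))
p[0, 0, 0] = p[1, 1, 1] = 0.5
print(conditional_mutual_information(p))   # 0.0
```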

Interaction information

Several generalizations of mutual information to more than two random variables have been proposed, such as total correlation (or multi-information) and dual total correlation. The expression and study of multivariate higher-degree mutual information was achieved in two seemingly independent works: McGill (1954),[12] who called these functions "interaction information", and Hu Kuo Ting (1962).[13] Interaction information is defined for one variable $X_1$ as follows:

$$I(X_1) = H(X_1),$$

and for $n > 1$ variables,

$$I(X_1; \ldots; X_n) = I(X_1; \ldots; X_{n-1}) - I(X_1; \ldots; X_{n-1} \mid X_n).$$

Some authors reverse the order of the terms on the right-hand side of the preceding equation, which changes the sign when the number of random variables is odd. (And in this case, the single-variable expression becomes the negative of the entropy.) Note that

$$I(X_1; \ldots; X_{n-1} \mid X_n) = \mathbb{E}_{X_n}\bigl[D_{\mathrm{KL}}\bigl(P_{(X_1,\ldots,X_{n-1}) \mid X_n} \,\|\, P_{X_1 \mid X_n} \otimes \cdots \otimes P_{X_{n-1} \mid X_n}\bigr)\bigr].$$

Multivariate statistical independence

The multivariate mutual information functions generalize the pairwise independence case, which states that $I(X_1; X_2) = 0$ if and only if $X_1$ and $X_2$ are independent, to arbitrarily many variables. $n$ variables are mutually independent if and only if the $2^n - n - 1$ mutual information functions vanish, $I(X_1; \ldots; X_k) = 0$ with $n \ge k \ge 2$ (theorem 2[14]). In this sense, these functions can be used as a refined statistical independence criterion.

Applications

For 3 variables, Brenner et al. applied multivariate mutual information to neural coding and called its negativity "synergy",[15] and Watkinson et al. applied it to genetic expression.[16] For arbitrary k variables, Tapia et al. applied multivariate mutual information to gene expression.[17][14] It can be zero, positive, or negative.[13] The positivity corresponds to relations generalizing the pairwise correlations, nullity corresponds to a refined notion of independence, and negativity detects high-dimensional "emergent" relations and clustered datapoints.[17]

One high-dimensional generalization scheme which maximizes the mutual information between the joint distribution and other target variables is found to be useful in feature selection.[18]

Mutual information is also used in the area of signal processing as a measure of similarity between two signals. For example, the FMI metric[19] is an image fusion performance measure that makes use of mutual information to measure the amount of information that the fused image contains about the source images. The Matlab code for this metric can be found at [20]. A Python package for computing all multivariate mutual information measures, conditional mutual information, joint entropies, total correlations, and information distances in a dataset of n variables is available.[21]

Directed information

Directed information, $I(X^n \to Y^n)$, measures the amount of information that flows from the process $X^n$ to $Y^n$, where $X^n$ denotes the vector $X_1, X_2, \ldots, X_n$ and $Y^n$ denotes $Y_1, Y_2, \ldots, Y_n$. The term directed information was coined by James Massey and is defined as

$$I(X^n \to Y^n) = \sum_{i=1}^{n} I(X^i; Y_i \mid Y^{i-1}).$$

Note that if $n = 1$, the directed information becomes the mutual information. Directed information has many applications in problems where causality plays an important role, such as the capacity of a channel with feedback.[22][23]

Normalized variants

Normalized variants of the mutual information are provided by the coefficients of constraint,[24] uncertainty coefficient[25] or proficiency:[26]

$$C_{XY} = \frac{I(X;Y)}{H(Y)} \qquad \text{and} \qquad C_{YX} = \frac{I(X;Y)}{H(X)}.$$

The two coefficients have a value ranging in [0, 1], but are not necessarily equal. This measure is not symmetric. If one desires a symmetric measure they can consider the following redundancy measure:

$$R = \frac{I(X;Y)}{H(X) + H(Y)},$$

which attains a minimum of zero when the variables are independent and a maximum value of

$$R_{\max} = \frac{\min\{H(X),\, H(Y)\}}{H(X) + H(Y)}$$

when one variable becomes completely redundant with the knowledge of the other. See also Redundancy (information theory).

Another symmetrical measure is the symmetric uncertainty (Witten & Frank 2005), given by

$$U(X,Y) = 2R = 2\,\frac{I(X;Y)}{H(X) + H(Y)},$$

which represents the harmonic mean of the two uncertainty coefficients $C_{XY}$ and $C_{YX}$.[25]

If we consider mutual information as a special case of the total correlation or dual total correlation, the normalized versions are, respectively,

$$\frac{I(X;Y)}{\min\{H(X),\, H(Y)\}}$$

and

$$\frac{I(X;Y)}{H(X,Y)}.$$

This normalized version, also known as the Information Quality Ratio (IQR), quantifies the amount of information of a variable based on another variable against total uncertainty:[27]

$$\mathrm{IQR}(X,Y) = \frac{I(X;Y)}{H(X,Y)} = \frac{H(X) + H(Y) - H(X,Y)}{H(X,Y)}.$$

There exists a normalization[28] which derives from first thinking of mutual information as an analogue to covariance (thus Shannon entropy is analogous to variance). Then the normalized mutual information is calculated akin to the Pearson correlation coefficient,

$$\frac{I(X;Y)}{\sqrt{H(X)\,H(Y)}}.$$
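
For comparison, the sketch below evaluates the normalizations listed above from a set of illustrative entropy values (the variable names and numbers are assumptions, not from the article):

```python
import numpy as np

H_x, H_y, H_xy = 1.0, 1.0, 1.469        # illustrative marginal and joint entropies (bits)
I = H_x + H_y - H_xy                    # mutual information, about 0.531 bits

print(I / H_y, I / H_x)                 # coefficients of constraint C_XY, C_YX
print(I / (H_x + H_y))                  # redundancy R
print(2 * I / (H_x + H_y))              # symmetric uncertainty
print(I / min(H_x, H_y), I / H_xy)      # total-correlation and dual-total-correlation (IQR) forms
print(I / np.sqrt(H_x * H_y))           # covariance-style normalization
```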

Weighted variants

In the traditional formulation of the mutual information,

$$I(X;Y) = \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)},$$

each event or object specified by $(x,y)$ is weighted by the corresponding probability $p(x,y)$. This assumes that all objects or events are equivalent apart from their probability of occurrence. However, in some applications it may be the case that certain objects or events are more significant than others, or that certain patterns of association are more semantically important than others.

For example, the deterministic mapping $\{(1,1),(2,2),(3,3)\}$ may be viewed as stronger than the deterministic mapping $\{(1,3),(2,1),(3,2)\}$, although these relationships would yield the same mutual information. This is because the mutual information is not sensitive at all to any inherent ordering in the variable values (Cronbach 1954, Coombs, Dawes & Tversky 1970, Lockhead 1970), and is therefore not sensitive at all to the form of the relational mapping between the associated variables. If it is desired that the former relation—showing agreement on all variable values—be judged stronger than the latter relation, then it is possible to use the following weighted mutual information (Guiasu 1977)

$$I_w(X;Y) = \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} w(x,y)\, p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)},$$

which places a weight $w(x,y)$ on the probability of each variable value co-occurrence, $p(x,y)$. This allows that certain probabilities may carry more or less significance than others, thereby allowing the quantification of relevant holistic or Prägnanz factors. In the above example, using larger relative weights for $w(1,1)$, $w(2,2)$, and $w(3,3)$ would have the effect of assessing greater informativeness for the relation $\{(1,1),(2,2),(3,3)\}$ than for the relation $\{(1,3),(2,1),(3,2)\}$, which may be desirable in some cases of pattern recognition, and the like. This weighted mutual information is a form of weighted KL-divergence, which is known to take negative values for some inputs,[29] and there are examples where the weighted mutual information also takes negative values.[30]

Adjusted mutual information

A probability distribution can be viewed as a partition of a set. One may then ask: if a set were partitioned randomly, what would the distribution of probabilities be? What would the expectation value of the mutual information be? The adjusted mutual information or AMI subtracts the expectation value of the MI, so that the AMI is zero when two different distributions are random, and one when two distributions are identical. The AMI is defined in analogy to the adjusted Rand index of two different partitions of a set.

Absolute mutual information

Using the ideas of Kolmogorov complexity, one can consider the mutual information of two sequences independent of any probability distribution:

$$I_K(X;Y) = K(X) - K(X \mid Y).$$

To establish that this quantity is symmetric up to a logarithmic factor ($I_K(X;Y) \approx I_K(Y;X)$) one requires the chain rule for Kolmogorov complexity (Li & Vitányi 1997). Approximations of this quantity via compression can be used to define a distance measure to perform a hierarchical clustering of sequences without having any domain knowledge of the sequences (Cilibrasi & Vitányi 2005).

Linear correlation

Unlike correlation coefficients, such as the product moment correlation coefficient, mutual information contains information about all dependence—linear and nonlinear—and not just linear dependence as the correlation coefficient measures. However, in the narrow case that the joint distribution for $X$ and $Y$ is a bivariate normal distribution (implying in particular that both marginal distributions are normally distributed), there is an exact relationship between $I(X;Y)$ and the correlation coefficient $\rho$ (Gel'fand & Yaglom 1957):

$$I(X;Y) = -\frac{1}{2} \log\left(1 - \rho^2\right).$$

The equation above can be derived for a bivariate Gaussian from the differential entropies of the marginal and joint distributions; a sketch of the computation is given below.
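
Under the bivariate normal assumption with common variance $\sigma^2$ and correlation $\rho$, the computation can be sketched as:

$$\begin{aligned}
H(X) = H(Y) &= \tfrac{1}{2}\log\bigl(2\pi e \sigma^2\bigr), \\
H(X,Y) &= \tfrac{1}{2}\log\bigl[(2\pi e)^2\,|\Sigma|\bigr] = \tfrac{1}{2}\log\bigl[(2\pi e)^2 \sigma^4 (1-\rho^2)\bigr], \\
I(X;Y) &= H(X) + H(Y) - H(X,Y) = -\tfrac{1}{2}\log\bigl(1-\rho^2\bigr).
\end{aligned}$$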

For discrete data

When $X$ and $Y$ are limited to be in a discrete number of states, observation data is summarized in a contingency table, with row variable $X$ (or $i$) and column variable $Y$ (or $j$). Mutual information is one of the measures of association or correlation between the row and column variables.

Other measures of association include Pearson's chi-squared test statistics, G-test statistics, etc. In fact, with the same log base, mutual information will be equal to the G-test log-likelihood statistic divided by $2N$, where $N$ is the sample size.
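
The identity with the G-test statistic is easy to verify numerically; in the sketch below the contingency table is illustrative and both quantities use natural logarithms:

```python
import numpy as np

obs = np.array([[30.0, 10.0],
                [ 5.0, 55.0]])            # observed counts
N = obs.sum()
p_xy = obs / N
p_x = p_xy.sum(axis=1, keepdims=True)
p_y = p_xy.sum(axis=0, keepdims=True)
mask = obs > 0

mi_nats = float((p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])).sum())
expected = (p_x @ p_y) * N                # expected counts under independence
g_stat = 2.0 * float((obs[mask] * np.log(obs[mask] / expected[mask])).sum())
print(mi_nats, g_stat / (2 * N))          # equal up to floating-point error
```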

Applications

In many applications, one wants to maximize mutual information (thus increasing dependencies), which is often equivalent to minimizing conditional entropy. Examples include:

  • In search engine technology, mutual information between phrases and contexts is used as a feature for k-means clustering to discover semantic clusters (concepts).[31] For example, the mutual information of a bigram might be calculated as:

$$MI(x,y) \approx \log \frac{f_{XY}/B}{(f_X/U)\,(f_Y/U)},$$

where $f_{XY}$ is the number of times the bigram xy appears in the corpus, $f_X$ is the number of times the unigram x appears in the corpus, $B$ is the total number of bigrams, and $U$ is the total number of unigrams.[31]
  • In telecommunications, the channel capacity is equal to the mutual information, maximized over all input distributions.
  • Discriminative training procedures for hidden Markov models have been proposed based on the maximum mutual information (MMI) criterion.
  • RNA secondary structure prediction from a multiple sequence alignment.
  • Phylogenetic profiling prediction from the pairwise presence and absence of functionally linked genes.
  • Mutual information has been used as a criterion for feature selection and feature transformations in machine learning. It can be used to characterize both the relevance and redundancy of variables, such as the minimum redundancy feature selection.
  • Mutual information is used in determining the similarity of two different clusterings of a dataset. As such, it provides some advantages over the traditional Rand index.
  • Mutual information of words is often used as a significance function for the computation of collocations in corpus linguistics. This has the added complexity that no word-instance is an instance of two different words; rather, one counts instances where 2 words occur adjacent or in close proximity; this slightly complicates the calculation, since the expected probability of one word occurring within $N$ words of another goes up with $N$.
  • Mutual information is used in medical imaging for image registration. Given a reference image (for example, a brain scan), and a second image which needs to be put into the same coordinate system as the reference image, this image is deformed until the mutual information between it and the reference image is maximized.
  • Detection of phase synchronization in time series analysis.
  • In the infomax method for neural-net and other machine learning, including the infomax-based Independent component analysis algorithm
  • Average mutual information in delay embedding theorem is used for determining the embedding delay parameter.
  • Mutual information between genes in expression microarray data is used by the ARACNE algorithm for reconstruction of gene networks.
  • In statistical mechanics, Loschmidt's paradox may be expressed in terms of mutual information.[32][33] Loschmidt noted that it must be impossible to determine a physical law which lacks time reversal symmetry (e.g. the second law of thermodynamics) only from physical laws which have this symmetry. He pointed out that the H-theorem of Boltzmann made the assumption that the velocities of particles in a gas were permanently uncorrelated, which removed the time symmetry inherent in the H-theorem. It can be shown that if a system is described by a probability density in phase space, then Liouville's theorem implies that the joint information (negative of the joint entropy) of the distribution remains constant in time. The joint information is equal to the mutual information plus the sum of all the marginal information (negative of the marginal entropies) for each particle coordinate. Boltzmann's assumption amounts to ignoring the mutual information in the calculation of entropy, which yields the thermodynamic entropy (divided by the Boltzmann constant).
  • In stochastic processes coupled to changing environments, mutual information can be used to disentangle internal and effective environmental dependencies.[34][35] This is particularly useful when a physical system undergoes changes in the parameters describing its dynamics, e.g., changes in temperature.
  • The mutual information is used to learn the structure of Bayesian networks/dynamic Bayesian networks, which is thought to explain the causal relationship between random variables, as exemplified by the GlobalMIT toolkit:[36] learning the globally optimal dynamic Bayesian network with the Mutual Information Test criterion.
  • The mutual information is used to quantify information transmitted during the updating procedure in the Gibbs sampling algorithm.[37]
  • Popular cost function in decision tree learning.
  • The mutual information is used in cosmology to test the influence of large-scale environments on galaxy properties in the Galaxy Zoo.
  • The mutual information was used in Solar Physics to derive the solar differential rotation profile, a travel-time deviation map for sunspots, and a time–distance diagram from quiet-Sun measurements[38]
  • Used in Invariant Information Clustering to automatically train neural network classifiers and image segmenters given no labelled data.[39]
  • In stochastic dynamical systems with multiple timescales, mutual information has been shown to capture the functional couplings between different temporal scales.[40] Importantly, it was shown that physical interactions may or may not give rise to mutual information, depending on the typical timescale of their dynamics.

from Grokipedia
Mutual information is a fundamental concept in information theory that quantifies the amount of information one random variable contains about another, serving as a measure of their statistical dependence.[1] Formally defined for two random variables $X$ and $Y$ as $I(X; Y) = H(X) - H(X \mid Y)$, where $H(X)$ denotes the entropy of $X$ and $H(X \mid Y)$ the conditional entropy of $X$ given $Y$, it represents the reduction in uncertainty about $X$ upon knowing $Y$.[1] Introduced by Claude E. Shannon in his seminal 1948 paper "A Mathematical Theory of Communication," mutual information provides a rigorous foundation for analyzing communication channels and data transmission efficiency.[1] This measure is symmetric, such that $I(X; Y) = I(Y; X) = H(X) + H(Y) - H(X, Y)$, where $H(X, Y)$ is the joint entropy, and it is always non-negative, achieving zero if and only if $X$ and $Y$ are statistically independent.[2] Mutual information generalizes notions of correlation beyond linear relationships, making it particularly valuable in scenarios involving nonlinear or complex dependencies.[3] Expressed in units of bits (or nats in natural logarithm base), it enables precise quantification of shared information content between variables.[4] Beyond its origins in communication theory, mutual information finds broad applications across disciplines. In statistics, it detects and evaluates dependencies between variables, offering a nonparametric alternative to traditional correlation metrics.[2] In machine learning, it supports feature selection by identifying relevant predictors that maximize information about the target variable while minimizing redundancy.[3] Fields like neuroscience employ it to assess neural coding efficiency and information flow in brain networks, while in genetics, it helps uncover associations in high-dimensional genomic data.[4] Despite its power, estimating mutual information from finite samples poses challenges due to its sensitivity to data distribution and the curse of dimensionality, spurring ongoing research into scalable approximation methods.[3]

Definition

Discrete random variables

Mutual information between two discrete random variables $X$ and $Y$ is a measure of the amount of information that one variable contains about the other, originally introduced by Shannon in the context of communication over noisy channels.[5] Formally, it is defined as the expected value under the joint distribution $P_{XY}$ of the pointwise mutual information, which is the logarithm of the ratio of the joint probability to the product of the marginal probabilities:
$$I(X;Y) = \mathbb{E}_{P_{XY}} \left[ \log \frac{dP_{XY}}{dP_X\, dP_Y} \right].$$
For discrete random variables with probability mass functions $p(x,y)$, $p(x)$, and $p(y)$, this expectation becomes a double summation over their supports:
$$I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)}.$$
[6][7] This summation formula follows directly from the definition of entropy and conditional entropy in information theory. Specifically, mutual information equals the marginal entropy minus the conditional entropy: $I(X;Y) = H(X) - H(X|Y)$, where $H(X) = -\sum_x p(x) \log p(x)$ is the entropy of $X$ and $H(X|Y) = \sum_y p(y) H(X|Y=y) = -\sum_{x,y} p(x,y) \log p(x|y)$ is the conditional entropy.[5] Substituting and simplifying yields the summation form, as the terms involving only marginals cancel out, leaving the log ratio weighted by the joint probabilities.[8] To illustrate computation, consider two binary random variables $X$ and $Y$ related through a binary symmetric channel with crossover probability $\epsilon = 0.1$, where $X \sim \text{Bernoulli}(0.5)$ and $Y = X \oplus Z$ with $Z \sim \text{Bernoulli}(0.1)$ independent of $X$. The joint probabilities are $p(0,0) = p(1,1) = 0.45$ and $p(0,1) = p(1,0) = 0.05$, with marginals $p(x) = p(y) = 0.5$. The mutual information simplifies to $I(X;Y) = H(Y) - H(Y|X) = 1 - h_2(0.1)$, where $h_2(p) = -p \log_2 p - (1-p) \log_2 (1-p)$ is the binary entropy function, yielding $I(X;Y) \approx 0.531$ bits.[9] The logarithm in the definition is typically taken base 2, measuring mutual information in bits, or natural logarithm for nats; the base scales the numerical value but preserves key properties like symmetry.[7] This connection to entropy highlights mutual information as the reduction in uncertainty about $X$ provided by observing $Y$.[6]

Continuous random variables

For continuous random variables $X$ and $Y$ defined on a common probability space with joint probability density function $f_{X,Y}(x,y)$ and marginal density functions $f_X(x)$ and $f_Y(y)$ with respect to Lebesgue measure, mutual information extends the discrete case by replacing sums with integrals over the densities.[8] This formulation assumes the joint distribution is absolutely continuous with respect to the product of the marginals, allowing the use of Radon–Nikodym derivatives to define the densities rigorously.[10] The mutual information $I(X;Y)$ is then given by
$$I(X;Y) = \iint f_{X,Y}(x,y) \log \frac{f_{X,Y}(x,y)}{f_X(x)\, f_Y(y)} \, dx \, dy,$$
where the logarithm is typically base 2 for bits or natural for nats, and the integral is taken over the support of the densities.[8] Unlike the discrete case, this expression can yield infinite values if the densities lead to divergences, such as when $X = Y$ almost surely, reflecting perfect dependence in continuous spaces where differential entropy is unbounded below.[11] The definition relies on differential entropy as a prerequisite, which generalizes discrete entropy to continuous variables but differs fundamentally due to the infinite divisibility of the real line.[8] For a continuous random variable $X$ with density $f_X(x)$, the differential entropy is
$$H(X) = -\int f_X(x) \log f_X(x) \, dx.$$
This quantity can be negative, unlike Shannon entropy, because it measures uncertainty relative to Lebesgue measure rather than a finite partition.[8] Mutual information for continuous variables can equivalently be expressed as $I(X;Y) = H(X) + H(Y) - H(X,Y)$, where $H(X,Y)$ is the joint differential entropy, preserving non-negativity despite the potential negativity of individual terms.[8] When the joint distribution is singular with respect to the product of the marginals—meaning it concentrates on a lower-dimensional manifold without a density in the full space—the standard integral form does not apply directly, and mutual information may be infinite to capture complete dependence.[12] In such cases, the rigorous definition invokes the Radon–Nikodym derivative of the joint measure with respect to the product measure, ensuring the expression $I(X;Y) = \int \log \frac{dP_{X,Y}}{d(P_X \times P_Y)} \, dP_{X,Y}$ holds where the derivative exists, with infinity otherwise.[10] This handles singular continuous distributions, like those uniform on a curve, by embedding them in the broader measure-theoretic framework without assuming full-dimensional densities.[12] A representative example is the bivariate Gaussian distribution, where $X$ and $Y$ have zero mean, unit variance, and correlation coefficient $\rho \in (-1,1)$. The mutual information admits a closed-form expression $I(X;Y) = -\frac{1}{2} \log(1 - \rho^2)$, measured in nats, which increases monotonically from 0 (at $\rho = 0$, independence) toward infinity as $|\rho|$ approaches 1 (perfect linear dependence).[13] This formula arises from computing the differential entropies: $H(X) = H(Y) = \frac{1}{2} \log(2\pi e)$ and $H(X,Y) = \log(2\pi e) + \frac{1}{2} \log(1 - \rho^2)$, highlighting how correlation reduces joint uncertainty beyond marginals.[13]

General measure-theoretic formulation

In the measure-theoretic framework, mutual information between two random variables $X$ and $Y$ is defined with respect to the $\sigma$-algebras $\mathcal{G}$ and $\mathcal{H}$ they generate on a probability space $(\Omega, \mathcal{F}, P)$. It quantifies the shared information as $I(X; Y) = \int_{\Omega} \log \left( \frac{dP_{XY}}{d(P_X \times P_Y)} \right) dP_{XY}$, where $P_{XY}$ is the joint probability measure induced by $X$ and $Y$, $P_X \times P_Y$ is the product measure of the marginals, and the logarithm argument is the Radon–Nikodym derivative assuming $P_{XY} \ll P_X \times P_Y$. This integral expression is equivalent to the Kullback–Leibler divergence $I(X; Y) = D_{\mathrm{KL}}(P_{XY} \| P_X \times P_Y)$, which measures how the joint distribution deviates from independence under the product measure.[8] The formulation extends to arbitrary probability spaces beyond standard Euclidean settings, encompassing atomic measures (discrete components with point masses) and diffuse measures (continuous components without atoms), as long as the absolute continuity condition holds to ensure the Radon–Nikodym derivative exists. The concept originated in Claude Shannon's 1948 paper, where mutual information was introduced as a foundational quantity in information theory for analyzing discrete communication channels.[5]

Interpretation

Relation to entropy

Mutual information between two random variables $X$ and $Y$, denoted $I(X; Y)$, measures the amount of information one variable contains about the other and is defined in terms of entropy as
$$I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X) = H(X) + H(Y) - H(X, Y),$$
where $H(X)$ is the entropy of $X$, $H(X \mid Y)$ is the conditional entropy of $X$ given $Y$, and $H(X, Y)$ is the joint entropy of $X$ and $Y$.[5] This formulation arises directly from the foundational concepts in information theory, where entropy quantifies uncertainty.[5] The term $H(X) - H(X \mid Y)$ specifically represents the reduction in uncertainty about $X$ when $Y$ is known. The entropy $H(X)$ measures the average information needed to specify $X$, while the conditional entropy $H(X \mid Y)$ captures the remaining uncertainty in $X$ after observing $Y$. Thus, their difference $I(X; Y)$ quantifies the information that $Y$ provides about $X$, symmetrically expressed as $H(Y) - H(Y \mid X)$. The joint form $H(X) + H(Y) - H(X, Y)$ follows from the chain rule for entropy, $H(X, Y) = H(X) + H(Y \mid X)$, highlighting how mutual information accounts for the overlap in the uncertainties of $X$ and $Y$.[5] A simple example illustrates this relation using two fair coin flips, $X$ and $Y$, each with outcomes heads or tails equally likely, so $H(X) = H(Y) = 1$ bit. If $X$ and $Y$ are independent, then $H(X \mid Y) = H(X) = 1$ bit, yielding $I(X; Y) = 0$ bits and indicating no reduction in uncertainty. In contrast, if $Y = X$ (perfect dependence), then $H(X \mid Y) = 0$ bits, so $I(X; Y) = 1$ bit, showing that observing $Y$ eliminates all uncertainty about $X$. This entropy difference demonstrates mutual information's role in quantifying shared uncertainty.[14] Mutual information serves as a measure of dependence or "correlation" between variables in information units (such as bits), but it differs from statistical correlation like the Pearson coefficient by capturing any form of statistical dependence, including nonlinear ones, without assuming linearity.[15]

Information-theoretic motivation

Mutual information emerged as a cornerstone of information theory through Claude Shannon's foundational work in his 1948 paper "A Mathematical Theory of Communication," where it was defined to capture the essence of how much uncertainty about one event is resolved by observing another. Shannon developed this concept to address the challenges of reliable communication in the presence of noise, framing mutual information $I(X;Y)$ as the precise measure of shared information between a source variable $X$ and a received variable $Y$. This quantification allowed for the first time a mathematical treatment of information as a tradable commodity, independent of semantics or physical representation.[5] In the context of noisy communication channels, mutual information provides the theoretical foundation for understanding the limits of information transmission. It represents the average amount of information about the input that is conveyed through the channel's output, serving as the key metric for determining how much reliable communication is possible without error. For instance, in a channel where noise corrupts the signal, $I(X;Y)$ quantifies the reduction in the sender's message uncertainty that the receiver can achieve, motivating its role as the building block for concepts like channel capacity. This perspective underscores mutual information's origin in solving practical engineering problems of the era, such as telegraphy and early telephony.[5] To illustrate, consider a thought experiment with two fair six-sided dice representing random variables $X$ and $Y$. If the dice are rolled independently, the outcome of one die offers no predictive power about the other; knowing $X = 3$ does not alter the probabilities for $Y$, resulting in $I(X;Y) = 0$. This zero mutual information directly corresponds to statistical independence between the variables, a property that holds symmetrically: if $X$ and $Y$ are independent, then $I(X;Y) = 0$.[16][6] This equivalence between zero mutual information and independence is a hallmark of the concept, providing a rigorous test for dependence in probabilistic systems. However, in continuous random variables, while the implication remains valid, the computation involves probability densities and can encounter subtleties such as potential divergences if the joint distribution lacks absolute continuity with respect to the product measure. Nonetheless, under standard assumptions, mutual information faithfully detects the absence of informational linkage in both discrete and continuous settings.[6]

Geometric interpretation

Mutual information admits a geometric interpretation as the Kullback–Leibler (KL) divergence between the joint probability distribution $P_{X,Y}$ and the product of the marginal distributions $P_X P_Y$, quantifying the deviation of the joint distribution from the independence assumption.[17] This formulation positions mutual information as a measure of "distance" in the space of probability distributions, where zero mutual information corresponds to the joint aligning perfectly with the independence surface.[17] Geometrically, this KL divergence can be visualized as the area under the curve of the log-ratio $\log \frac{P_{X,Y}(x,y)}{P_X(x) P_Y(y)}$, weighted by the joint density $P_{X,Y}(x,y)$, highlighting regions where dependence inflates or deflates probabilities relative to independence.[18] In the probability simplex or density space, mutual information thus traces the excess "volume" or separation from the independence manifold.[19] Mutual information emerges as a special case of f-divergences, a broader class of divergences defined by a convex function $f$, with the KL divergence (and hence mutual information) corresponding to $f(u) = u \log u$.[20] This connection underscores mutual information's role within the family of information measures that asymmetrically compare distributions, emphasizing its geometric asymmetry in capturing directional dependence.[20] Visualizations often employ contour plots of the joint density $P_{X,Y}(x,y)$ overlaid against the independence surface $P_X(x) P_Y(y)$; for independent variables, the contours align, but dependence introduces distortions, with the extent of mismatch reflecting higher mutual information.[21] Such plots reveal how correlation skews probability mass away from rectangular independence contours toward diagonal or clustered patterns in bivariate space.[21] In bivariate examples, mutual information increases with the strength of dependence, as seen in scatter plots where linear correlation tightens points along a line (elevating MI from near zero for scattered data to higher values for perfect alignment), while nonlinear dependencies like quadratic relations similarly boost MI beyond what linear measures capture.[22] For Gaussian variables, this scaling is explicit, with MI growing logarithmically with the absolute correlation coefficient.[23]

Basic Properties

Non-negativity and symmetry

Mutual information is always non-negative for any pair of random variables $X$ and $Y$, that is, $I(X;Y) \geq 0$.[24] This property follows from the fact that mutual information can be expressed as the Kullback–Leibler divergence between the joint distribution $P_{XY}$ and the product of the marginals $P_X \times P_Y$, i.e., $I(X;Y) = D(P_{XY} \| P_X \times P_Y)$, and the Kullback–Leibler divergence is non-negative.[24] To prove the non-negativity of the Kullback–Leibler divergence using Jensen's inequality, consider the discrete case where $X$ and $Y$ take values in finite sets. The divergence is given by
$$D(P_{XY} \| P_X \times P_Y) = \sum_{x,y} P_{XY}(x,y) \log \frac{P_{XY}(x,y)}{P_X(x)\, P_Y(y)}.$$
This can be rewritten as
$$D(P_{XY} \| P_X \times P_Y) = -\sum_{x,y} P_{XY}(x,y) \log \frac{P_X(x)\, P_Y(y)}{P_{XY}(x,y)} = E_{P_{XY}} \left[ -\log \frac{P_X(X)\, P_Y(Y)}{P_{XY}(X,Y)} \right].$$
The function $f(t) = -\log t$ is convex for $t > 0$. By Jensen's inequality applied to the expectation under $P_{XY}$,
$$E_{P_{XY}} \left[ f\left( \frac{P_X(X)\, P_Y(Y)}{P_{XY}(X,Y)} \right) \right] \geq f\left( E_{P_{XY}} \left[ \frac{P_X(X)\, P_Y(Y)}{P_{XY}(X,Y)} \right] \right).$$
The expectation on the right simplifies to $\sum_{x,y} P_X(x) P_Y(y) = 1$, so $f(1) = -\log 1 = 0$. Thus, $D(P_{XY} \| P_X \times P_Y) \geq 0$.[25] For continuous random variables, the proof is analogous, replacing sums with integrals and relying on the same convexity of $-\log$.[26] Mutual information is also symmetric, meaning $I(X;Y) = I(Y;X)$.[24] This follows directly from the definition, as the joint probability $P_{XY}(x,y) = P_{YX}(y,x)$ and the expression $\sum_{x,y} P_{XY}(x,y) \log \frac{P_{XY}(x,y)}{P_X(x) P_Y(y)}$ remains unchanged when $X$ and $Y$ are swapped.[5] The continuous case holds similarly. Equality in the non-negativity holds if and only if $X$ and $Y$ are independent, i.e., $I(X;Y) = 0$ precisely when $P_{XY}(x,y) = P_X(x) P_Y(y)$ for all $x,y$ (discrete case) or almost everywhere (continuous case).[24] This is because the Kullback–Leibler divergence equals zero if and only if the two distributions are identical, and Jensen's inequality achieves equality when the argument $\frac{P_X(x) P_Y(y)}{P_{XY}(x,y)}$ is constant almost surely, which occurs under independence.[27] To illustrate non-negativity, consider two binary random variables each uniform on $\{0,1\}$. If $X$ and $Y$ are independent, then $I(X;Y) = 0$. If instead $Y = X$ (perfect dependence), the joint distribution has $P_{XY}(0,0) = P_{XY}(1,1) = 1/2$ and $P_{XY}(0,1) = P_{XY}(1,0) = 0$, yielding $I(X;Y) = 1$ bit, which is positive.[28]

Additivity under independence

One key property of mutual information is its additivity for independent components. Specifically, if the joint distribution factors as $P_{X,Y,W,Z}(x,y,w,z) = P_{X,Y}(x,y)\, P_{W,Z}(w,z)$ (i.e., the pairs $(X,Y)$ and $(W,Z)$ are independent), then $I(X, W; Y, Z) = I(X; Y) + I(W; Z)$.[6] This additivity reflects the fact that mutual information between independent systems adds up without cross terms. It leverages the non-negativity of mutual information as a foundational bound where $I(X; Y) \geq 0$ and equality holds if and only if $X$ and $Y$ are independent. A related property is the data processing inequality, which states that mutual information cannot increase when one of the variables is further processed through a function. Formally, for any function $f$, $I(X; Y) \geq I(X; f(Y))$, with equality if $f$ is invertible.[29] This inequality implies that in a Markov chain $X \to Y \to Z$, the mutual information decreases or stays the same along the chain: $I(X; Y) \geq I(X; Z)$. For example, consider a simple binary Markov chain where $X$ is a fair coin flip, $Y = X$ with probability 0.9 and flipped with probability 0.1 (noisy channel), and $Z$ is $Y$ passed through another identical noisy channel; here, $I(X; Y) \approx 0.531$ bits while $I(X; Z) \approx 0.321$ bits, illustrating the non-increase. This additivity extends naturally to multiple independent pairs. If several pairs of variables are mutually independent in the same sense, the total mutual information is the sum over the individual pairs.[6]

Chain rule

The chain rule for mutual information expresses the total mutual information between a joint random variable consisting of a sequence $X_1, \dots, X_n$ and another random variable $Y$ as a sum of conditional mutual informations. Formally,
$$I(X_1, \dots, X_n; Y) = \sum_{i=1}^n I(X_i; Y \mid X_1, \dots, X_{i-1}),$$
where the conditioning set is empty for $i=1$. This identity derives from the definition of mutual information in terms of entropy and the chain rule for entropy. Mutual information satisfies $I(X_1, \dots, X_n; Y) = H(X_1, \dots, X_n) - H(X_1, \dots, X_n \mid Y)$. Applying the chain rule for entropy to the first term gives
$$H(X_1, \dots, X_n) = \sum_{i=1}^n H(X_i \mid X_1, \dots, X_{i-1}),$$
and similarly for the conditional entropy,
$$H(X_1, \dots, X_n \mid Y) = \sum_{i=1}^n H(X_i \mid X_1, \dots, X_{i-1}, Y).$$
Subtracting these expansions yields
$$I(X_1, \dots, X_n; Y) = \sum_{i=1}^n \bigl[ H(X_i \mid X_1, \dots, X_{i-1}) - H(X_i \mid X_1, \dots, X_{i-1}, Y) \bigr] = \sum_{i=1}^n I(X_i; Y \mid X_1, \dots, X_{i-1}),$$
since the conditional mutual information is the difference between these conditional entropies. In applications to sequential prediction, the chain rule decomposes the total information as the incremental contribution of each successive variable: the $i$-th term measures how much additional uncertainty about $Y$ is reduced by observing $X_i$ after the previous observations $X_1, \dots, X_{i-1}$. This perspective highlights the marginal benefit of incorporating variables one at a time, which is particularly useful in feature selection or predictive modeling where variables arrive in sequence.[14] For an illustration with three variables $X, Y, Z$, the rule specializes to $I(X, Y; Z) = I(X; Z) + I(Y; Z \mid X)$, where the first term captures the direct dependence between $X$ and $Z$, and the second quantifies the remaining dependence of $Y$ on $Z$ after accounting for $X$.

Advanced Properties

Relation to Kullback-Leibler divergence

Mutual information $ I(X; Y) $ between two random variables $ X $ and $ Y $ is equivalently defined as the Kullback-Leibler (KL) divergence between their joint probability distribution $ P_{XY} $ and the product of their marginal distributions $ P_X \times P_Y $:
$$I(X; Y) = D_{\mathrm{KL}}(P_{XY} \,\|\, P_X \times P_Y).$$
This representation underscores mutual information as a measure of the deviation from statistical independence, where $ D_{\mathrm{KL}}(P_{XY} | P_X \times P_Y) = 0 $ if and only if $ X $ and $ Y $ are independent. The identity holds in both discrete and continuous settings, with the KL divergence computed as a summation over joint support or an integral over the joint density, respectively. To derive this equivalence, substitute the definition of the KL divergence into the expression. For discrete random variables,
$$D_{\mathrm{KL}}(P_{XY} \,\|\, P_X \times P_Y) = \sum_{x, y} p_{XY}(x, y) \log \frac{p_{XY}(x, y)}{p_X(x)\, p_Y(y)},$$
where the logarithm argument simplifies to $ \frac{p_{Y|X}(y|x)}{p_Y(y)} $. Expanding yields
$$\sum_x p_X(x) \sum_y p_{Y|X}(y|x) \log \frac{p_{Y|X}(y|x)}{p_Y(y)} = \mathbb{E}_{X} \left[ D_{\mathrm{KL}}(P_{Y|X} \,\|\, P_Y) \right] = H(Y) - H(Y|X),$$
matching the entropic definition of mutual information; the continuous analog uses integrals and differential entropies. This proof demonstrates that mutual information is a special case of the KL divergence tailored to independence testing. Asymptotically, mutual information connects to hypothesis testing via Stein's lemma, which characterizes the optimal error exponent in distinguishing distributions. Specifically, for testing the null hypothesis of independence ($H_0: P_{XY} = P_X \times P_Y$) against dependence ($H_1: P_{XY}$), the best type II error probability decays exponentially with rate $I(X; Y) = D_{\mathrm{KL}}(P_{XY} \,\|\, P_X \times P_Y)$ as sample size increases, for fixed type I error. This log-likelihood ratio interpretation positions mutual information as the expected excess information needed to discriminate joint dependence from marginal independence in large samples.[30] For categorical data, this KL form facilitates direct computation. Consider binary variables $X, Y \in \{0,1\}$ with $P(X=0)=0.5$, $P(Y=0|X=0)=0.9$, $P(Y=0|X=1)=0.1$, yielding marginal $P(Y=0)=0.5$ and joint probabilities $p_{00}=0.45$, $p_{01}=0.05$, $p_{10}=0.05$, $p_{11}=0.45$. Then,
$$I(X; Y) = \sum_{x,y \in \{0,1\}} p_{xy} \log \frac{p_{xy}}{0.5 \cdot p_y} \approx 0.531 \text{ bits},$$
obtained by evaluating each term (e.g., $ 0.45 \log(0.45 / 0.25) \approx 0.382 $, and symmetrically for others); this equals $ H(Y) - H(Y|X) \approx 1 - 0.469 $. Such examples illustrate how KL computation quantifies dependence in finite discrete spaces.

Supermodularity

Mutual information exhibits submodularity as a set function when considering set-valued random variables. Specifically, for random variables $X$, $Y$, and $Z$ where $Y$ and $Z$ are sets, the mutual information satisfies the inequality
$$I(X; Y \cup Z) + I(X; Y \cap Z) \leq I(X; Y) + I(X; Z).$$
This property holds because mutual information can be expressed in terms of entropy: $I(X; W) = H(X) - H(X \mid W)$, where $H(X)$ is constant with respect to $W$. The inequality is thus equivalent to the supermodularity of the conditional entropy function $H(X \mid \cdot)$.[31] The proof proceeds via inclusion-exclusion principles on entropies, leveraging the non-negativity of mutual information. The submodularity of mutual information follows from the supermodularity of conditional entropy, with equality holding when $X$ is conditionally independent of the symmetric differences given the intersection. This derivation relies on the chain rule for entropy applied to the disjoint components $Y \setminus Z$, $Z \setminus Y$, and $Y \cap Z$.[31] In feature selection, the submodularity of mutual information implies diminishing returns when incrementally adding variables to a feature set, meaning the marginal information gain from a new variable decreases as the set grows. This property enables efficient greedy algorithms for selecting informative features, providing constant-factor approximation guarantees relative to the optimal subset in high-dimensional settings. For instance, in supervised learning tasks, maximizing $I(X; S)$ over feature sets $S$ benefits from this structure, avoiding exhaustive search while bounding suboptimality.[32] An application arises in graphical models, particularly Bayesian networks, where mutual information quantifies conditional dependencies between nodes. The submodular property supports optimization in structure learning by facilitating greedy edge additions that respect diminishing marginal contributions, leading to scalable inference with theoretical performance assurances in discovering network topologies from data.[33]

Bayesian estimation

Bayesian estimation of mutual information addresses the challenge of inferring the dependency between random variables from finite data samples by incorporating prior distributions on the underlying probability densities, thereby quantifying uncertainty in the estimate. Unlike frequentist approaches, Bayesian methods treat the mutual information $I(X;Y)$ as a random variable derived from the posterior distribution over densities, enabling robust inference even with limited data. This is particularly useful in scenarios where parametric assumptions fail, such as in high-dimensional or non-standard distributions.[34] A prominent non-parametric approach employs Dirichlet process priors to model the joint and marginal densities without specifying a parametric form, allowing flexible estimation of $I(X;Y)$ through posterior sampling. In this framework, the Dirichlet process mixture model generates densities by placing a prior on partitions of the sample space, with the base measure and concentration parameter tuned to the data; the mutual information is then computed as the expected value under the posterior, providing a full distribution rather than a point estimate. This method has demonstrated lower mean squared error compared to some frequentist nonparametric estimators in simulation studies, making it suitable for applications like feature selection in machine learning. Recent advances include flow-based variational Bayesian estimators for improved scalability in high dimensions (as of 2025).[34][35][36] Plug-in estimators within a Bayesian context approximate $I(X;Y)$ by first estimating the densities via kernel density estimation (KDE) and then applying Bayesian bias correction to account for finite-sample effects. KDE constructs smooth estimates of the joint density $p(x,y)$ and marginals $p(x)$, $p(y)$ using a kernel function (e.g., Gaussian) and bandwidth parameter, after which the plug-in formula $I(X;Y) \approx \sum p(x,y) \log \frac{p(x,y)}{p(x)p(y)}$ (discretized for computation) yields the estimate; priors on the bandwidth or kernel parameters can be incorporated to regularize the posterior. Bias correction, often via analytical adjustments like those derived from asymptotic expansions, mitigates underestimation in low-sample regimes, with Bayesian variants using Dirichlet priors on binned approximations of KDE outputs for discrete-like handling of continuous data. This approach excels for bivariate cases where direct computation is feasible.[37][38] In model selection tasks, Bayesian mutual information facilitates choosing among competing models by maximizing the expected information gain between parameters and observed data, often integrated into variational inference frameworks. For instance, variational methods approximate intractable posteriors by optimizing a lower bound that incorporates mutual information terms, such as in estimating dependencies for latent variable models; here, self-consistent Bayesian updates enhance data efficiency by iteratively refining the variational distribution to align with the true MI. This is applied in scenarios like structure learning in Bayesian networks, where MI scores with uniform Dirichlet priors guide edge selection, balancing fit and complexity. 
Recent advancements leverage this for experimental design, prioritizing queries that maximize MI for parameter identifiability.[39][38][40]

A key challenge in Bayesian estimation of mutual information is the curse of dimensionality, where the required sample size grows exponentially with the number of variables, leading to unreliable density estimates and posterior collapse in high dimensions. For bivariate data (d = 2), accurate estimation is achievable with thousands of samples using Dirichlet process or KDE methods, but in dimensions d > 10, even millions of samples can yield biased or high-variance results because the effective support of the density is sparsely sampled. Mitigation strategies include dimensionality reduction or restricted priors, but these trade flexibility for tractability.[41]
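As a rough illustration of the plug-in idea discussed above, the sketch below bins a bivariate continuous sample, smooths the histogram with a symmetric Dirichlet-style pseudocount (a crude stand-in for the Bayesian regularization described in the text), and evaluates the discretized plug-in formula. The bin count, the pseudocount value, and the name binned_mi are illustrative choices rather than prescriptions from the cited literature.

```python
import numpy as np

def binned_mi(x, y, bins=16, alpha=0.5):
    """Plug-in MI (nats) from a 2-D histogram with a symmetric pseudocount.

    alpha > 0 adds the same pseudocount to every bin, which regularizes the
    density estimate for small samples at the cost of some bias toward
    independence.
    """
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    counts = counts + alpha                     # posterior-mean-style smoothing
    pxy = counts / counts.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    return float(np.sum(pxy * np.log(pxy / (px * py))))

# Correlated Gaussian example: the exact MI is -0.5 * log(1 - rho**2) nats.
rng = np.random.default_rng(1)
rho = 0.8
sample = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
print(binned_mi(sample[:, 0], sample[:, 1]))    # estimate; should be near the exact value
print(-0.5 * np.log(1 - rho**2))                # exact value, about 0.51 nats
```

In higher dimensions the histogram becomes extremely sparse, which is exactly the curse-of-dimensionality failure mode described above.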

Variations

Conditional mutual information

Conditional mutual information is a measure in information theory that quantifies the mutual dependence between two random variables X and Y when a third random variable Z is known, extending the unconditional mutual information to scenarios with conditioning. It represents the expected reduction in uncertainty about X provided by Y, after accounting for the information already available from Z. Formally, for discrete random variables, the conditional mutual information I(X; Y | Z) is defined as
I(X; Y \mid Z) = H(X \mid Z) - H(X \mid Y, Z),
where H(· | ·) denotes conditional entropy. Equivalently, it can be expressed as the expectation over Z:
I(X; Y \mid Z) = \sum_z p(z) \, I(X; Y \mid Z = z),
with a similar integral form ∫ p(z) I(X; Y | Z = z) dz for continuous variables. This definition arises naturally from the properties of conditional entropy introduced in foundational work on information theory.[5]

Key properties of conditional mutual information include non-negativity, I(X; Y | Z) ≥ 0, with equality if and only if X and Y are conditionally independent given Z, i.e., p(x, y | z) = p(x | z) p(y | z) for all x, y, z with p(z) > 0. It is also symmetric in X and Y, so I(X; Y | Z) = I(Y; X | Z). Additionally, it satisfies a chain rule analogous to that for unconditional mutual information:
I(X; Y, Z) = I(X; Y) + I(X; Z \mid Y),
which decomposes the mutual information between X and the joint variable (Y, Z) into the direct dependence on Y plus the additional dependence on Z given Y. These properties hold for both discrete and continuous cases and facilitate derivations in multi-variable settings.

The interpretation of conditional mutual information is the amount of shared information between X and Y that remains after conditioning on Z, capturing dependencies not explained by Z alone. For instance, in a Markov chain X → Z → Y, the conditional mutual information I(X; Y | Z) = 0, indicating that Z fully mediates the dependence between X and Y, leaving no residual direct information flow. This property is central to applications in causal modeling and graphical models; the sketch below illustrates it numerically for a simple binary chain.
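A minimal numerical check of the Markov-chain example above, assuming binary variables and plug-in estimation from sample counts; the helper names plug_in_mi and cond_mutual_information and the chosen noise levels are illustrative.

```python
import numpy as np

def plug_in_mi(a, b):
    """Plug-in MI (nats) between two discrete integer arrays."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1)
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (pa * pb)[nz])))

def cond_mutual_information(a, b, c):
    """I(A; B | C) = sum_c p(c) I(A; B | C = c), estimated by plug-in."""
    total = 0.0
    for val in np.unique(c):
        mask = c == val
        total += mask.mean() * plug_in_mi(a[mask], b[mask])
    return total

# Simulate a binary Markov chain X -> Z -> Y with noisy copies.
rng = np.random.default_rng(0)
n = 100_000
x = rng.integers(0, 2, n)
z = np.where(rng.random(n) < 0.9, x, 1 - x)   # Z is a noisy copy of X
y = np.where(rng.random(n) < 0.9, z, 1 - z)   # Y depends on X only through Z

print(plug_in_mi(x, y))                  # clearly positive
print(cond_mutual_information(x, y, z))  # close to zero, up to finite-sample bias
```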

Normalized mutual information

Normalized mutual information (NMI) provides a bounded measure of dependence between two random variables by normalizing the mutual information to the range [0, 1], enabling straightforward interpretation and comparison across datasets with varying scales or cardinalities. A widely used symmetric variant is given by
\text{NMI}(X; Y) = \frac{I(X; Y)}{\sqrt{H(X)\, H(Y)}},
where I(X; Y) denotes the mutual information between X and Y, and H(X) and H(Y) are their respective entropies; this formulation ensures the measure is invariant to monotonic transformations of the variables and attains its maximum value of 1 when X and Y have equal entropy and are fully dependent (e.g., one is a bijection of the other). This version was originally proposed for evaluating multimodal image alignment, where it demonstrated robustness to changes in image overlap. Alternative normalizations include dividing by the minimum marginal entropy, NMI(X; Y) = I(X; Y) / min(H(X), H(Y)), which also bounds the value at 1 since I(X; Y) ≤ min(H(X), H(Y)), or the uncertainty coefficient, an asymmetric form U(X | Y) = I(X; Y) / H(X), which quantifies the fraction of uncertainty in X resolved by knowing Y.[42] The uncertainty coefficient originates from early applications in statistical computing for assessing variable associations.[42]

For discrete random variables, NMI is computed from estimated probabilities in a contingency table: the joint probabilities p(x, y) are the normalized counts of co-occurrences, the marginal probabilities p(x) and p(y) are row and column sums divided by the total count, and the entropies and mutual information follow the standard sums H(X) = -\sum_x p(x) \log p(x) and I(X; Y) = \sum_{x,y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}.[43] This approach is particularly effective for categorical data in practice, such as cluster labels; a sketch of the computation follows below.

The primary advantages of NMI lie in its scale invariance and bounded range, making it suitable for tasks like clustering evaluation, where it compares predicted partitions to ground-truth labels regardless of the number of clusters in each, with values near 1 indicating strong agreement and values near 0 indicating independence.[43] However, NMI is not a true metric, as it fails to satisfy the triangle inequality, and it remains sensitive to imbalances in the marginal entropies, which can diminish its value for strong dependencies involving high-entropy variables.[42]
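The following sketch carries out the contingency-table computation just described for two arrays of categorical labels. The helper name nmi_sqrt, the example labels, and the use of the square-root normalizer are choices made for this illustration; common libraries expose comparable functions with selectable normalizers.

```python
import numpy as np

def nmi_sqrt(labels_x, labels_y):
    """NMI with the sqrt(H(X) H(Y)) normalization, computed from a contingency table."""
    x_vals, x_idx = np.unique(labels_x, return_inverse=True)
    y_vals, y_idx = np.unique(labels_y, return_inverse=True)
    table = np.zeros((len(x_vals), len(y_vals)))
    np.add.at(table, (x_idx, y_idx), 1)          # co-occurrence counts

    pxy = table / table.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return float(mi / np.sqrt(hx * hy)) if hx > 0 and hy > 0 else 0.0

# Example: compare a predicted clustering against ground-truth classes.
truth = np.array(["a", "a", "a", "b", "b", "c", "c", "c"])
pred = np.array([0, 0, 1, 1, 1, 2, 2, 2])
print(nmi_sqrt(truth, pred))   # in [0, 1]; equals 1 only for a perfect match up to relabeling
```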

Directed and transfer entropy variants

Directed information extends the concept of mutual information to time series data by incorporating causality and feedback, quantifying the information flow from one process to another in a directed manner. Formally, for two stochastic processes X and Y, the directed information from X to Y is defined as
I(X_{1:n} \to Y_{1:n}) = \sum_{t=1}^n I(X_{1:t} ; Y_t \mid Y_{1:t-1}),
where I(· ; · | ·) denotes conditional mutual information, capturing how much the past and present of X inform the current Y_t beyond what the past of Y already provides. This measure was introduced to address limitations of standard mutual information in channels with feedback, providing a more appropriate framework for causal inference in sequential data.[44] Transfer entropy, a related asymmetric variant, specifically measures the directed information transfer from the past of one process to the future of another, conditional on the receiver's own past. It is given by
TE_{X \to Y} = I(X_{1:t-1} ; Y_t \mid Y_{1:t-1}),
averaged over time t, and serves as a model-free tool to detect effective connectivity in complex systems without assuming underlying dynamics. Proposed as a practical implementation for empirical data analysis, transfer entropy distinguishes driving influences from mere correlations by isolating predictive information beyond the target process's own autocorrelation.[45]

Both directed information and transfer entropy find applications in inferring causality, particularly through links to Granger causality, where non-zero values indicate that one time series contains information that improves prediction of the other beyond its own history. For instance, in econometric and neural data, directed information has been shown to align with Granger-noncausality conditions under Gaussian assumptions, enabling the construction of causality graphs for multivariate processes. Transfer entropy extends this to nonlinear settings, offering robustness in detecting asymmetric interactions in fields like climate modeling and finance.[46][47]

A representative example is unidirectional coupling in stochastic processes, such as a system where process X drives Y but not vice versa, modeled by Y_t = f(Y_{t-1}, X_{t-τ}) + ε_t with noise ε_t. Here, the transfer entropy TE_{X \to Y} yields a positive value reflecting the information flow, while TE_{Y \to X} approaches zero, demonstrating the measure's ability to uncover directional dependencies in simulated unidirectional versus bidirectional scenarios; a plug-in sketch of such an experiment appears below.[45]
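A small simulation along the lines of the example above, assuming binary processes, a one-step coupling delay, history length 1, and plug-in estimation of the conditional mutual information from triple counts; the name transfer_entropy and the coupling strength 0.9 are illustrative.

```python
import numpy as np

def transfer_entropy(source, target):
    """Plug-in TE (nats) from source to target with history length 1:
    I(target_t ; source_{t-1} | target_{t-1})."""
    s_past, t_past, t_now = source[:-1], target[:-1], target[1:]
    counts = np.zeros((2, 2, 2))                      # indexed (t_now, s_past, t_past)
    np.add.at(counts, (t_now, s_past, t_past), 1)
    p = counts / counts.sum()

    te = 0.0
    for y_past in (0, 1):
        p_cond = p[:, :, y_past]
        pz = p_cond.sum()
        if pz == 0:
            continue
        p_cond = p_cond / pz                          # joint of (t_now, s_past) given t_past
        p_now = p_cond.sum(axis=1, keepdims=True)
        p_src = p_cond.sum(axis=0, keepdims=True)
        nz = p_cond > 0
        te += pz * np.sum(p_cond[nz] * np.log(p_cond[nz] / (p_now * p_src)[nz]))
    return float(te)

# Unidirectional coupling: X drives Y with a one-step delay, but not vice versa.
rng = np.random.default_rng(0)
n = 200_000
x = rng.integers(0, 2, n)
y = np.empty(n, dtype=int)
y[0] = 0
y[1:] = np.where(rng.random(n - 1) < 0.9, x[:-1], 1 - x[:-1])   # Y_t is a noisy copy of X_{t-1}

print(transfer_entropy(x, y))   # clearly positive
print(transfer_entropy(y, x))   # near zero
```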

Applications

In statistics and machine learning

In statistics, mutual information serves as a key measure for feature selection by quantifying the dependency between features and the target variable while accounting for redundancies among features. The minimum redundancy maximum relevance (mRMR) algorithm, for instance, selects features that maximize mutual information with the target (relevance) while minimizing mutual information among the selected features themselves (redundancy), enabling efficient handling of high-dimensional datasets in classification tasks.[48] This approach has been shown to outperform traditional correlation-based methods in gene expression analysis and text categorization by preserving predictive power with fewer features.[49]

In machine learning, mutual information is integral to clustering evaluation, where adjusted mutual information (AMI) provides a chance-corrected measure for comparing predicted clusterings against ground-truth partitions. AMI is computed as the mutual information between cluster assignments minus its expected value under a random hypergeometric model of labelings, divided by the square root of the product of the entropies adjusted for chance; the result is at most 1, with 1 indicating perfect agreement, values near 0 indicating chance-level labeling, and negative values possible when agreement is worse than chance.[50] This measure is particularly useful in hierarchical clustering algorithms, such as those applied to bioinformatics datasets, as it remains robust to varying cluster sizes and numbers, unlike unadjusted variants.[51] A usage sketch is given at the end of this subsection.

For dimensionality reduction, the information bottleneck (IB) method employs mutual information to compress input data into a lower-dimensional representation that retains maximal relevant information about a target variable. The IB Lagrangian balances the compression term, which minimizes the mutual information I(X; Z) between input X and representation Z, against the preservation term, which maximizes the mutual information I(Z; Y) between Z and target Y, optimized via:
\min_{p(z \mid x)} \; I(X; Z) - \beta\, I(Z; Y),
where β > 0 trades off compression and relevance; solutions are found iteratively using generalized Blahut-Arimoto algorithms.[52] This framework has influenced manifold learning and has been applied in speech recognition to extract succinct features that improve downstream prediction accuracy.[53]

Recent advances since 2020 have integrated mutual information into deep generative models, particularly variational autoencoders (VAEs), to enhance latent-space disentanglement and generation quality. The variational mutual information maximizing (VMI) framework for VAEs maximizes mutual information between latent variables and data while constraining posterior collapse, leading to more informative encodings in image synthesis tasks.[54] Similarly, the VOLTA autoencoder integrates variational mutual information maximization within a Transformer-VAE structure to improve generative diversity in natural language generation tasks, enhancing metrics such as Self-BLEU and Distinct-n compared to standard VAEs.[55] These methods leverage MI estimation techniques, such as variational bounds, to address limitations in traditional VAEs' posterior approximations.
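As a brief usage sketch for the clustering-evaluation discussion above (not drawn from the cited works), scikit-learn's adjusted and normalized mutual information scores can be compared on an invented pair of labelings; the label arrays below are purely illustrative.

```python
from sklearn.metrics import adjusted_mutual_info_score, normalized_mutual_info_score

# Ground-truth classes and a predicted clustering (cluster IDs are arbitrary labels).
labels_true = [0, 0, 0, 1, 1, 1, 2, 2, 2]
labels_pred = [1, 1, 0, 0, 0, 0, 2, 2, 2]

# AMI corrects for chance agreement; plain NMI does not.
print(adjusted_mutual_info_score(labels_true, labels_pred))
print(normalized_mutual_info_score(labels_true, labels_pred))
```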

In communication theory

In communication theory, mutual information plays a central role in quantifying the reliable transmission of information over noisy channels, as established by Claude Shannon's foundational work. Shannon's noisy-channel coding theorem, published in 1948, demonstrates that reliable communication is possible at rates below the channel capacity, defined as the maximum mutual information between the input and output of the channel.[5] Specifically, for a discrete memoryless channel, the capacity C is given by
C = \max_{p(x)} I(X; Y),
where the maximum is taken over all possible input distributions p(x), and I(X; Y) measures the reduction in uncertainty about the input X provided by the output Y. This theorem shows that the error probability can be made arbitrarily small for rates R < C, but not for R > C, marking a profound shift from prior beliefs that noise fundamentally limited communication efficiency.[5]
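To make the capacity expression concrete, the sketch below runs a basic Blahut-Arimoto iteration for a discrete memoryless channel specified by its transition matrix; for a binary symmetric channel with crossover probability p, the result should approach 1 − H_b(p) bits. The function name blahut_arimoto, the iteration count, and the tolerance are illustrative assumptions.

```python
import numpy as np

def blahut_arimoto(W, iters=200, tol=1e-12):
    """Capacity (in bits) of a discrete memoryless channel.

    W[x, y] = P(Y = y | X = x); each row must sum to 1.
    Returns the capacity and the capacity-achieving input distribution.
    """
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)                 # start from the uniform input distribution
    for _ in range(iters):
        q = W * p[:, None]                        # unnormalized posterior P(x | y)
        q /= q.sum(axis=0, keepdims=True)
        # Update p(x) proportional to exp( sum_y W(y|x) log q(x|y) ).
        log_r = np.sum(W * np.log(q, where=q > 0, out=np.zeros_like(q)), axis=1)
        p_new = np.exp(log_r - log_r.max())
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    # Mutual information I(X; Y) under the optimized input distribution, in bits.
    py = p @ W
    ratio = np.divide(W, py[None, :], where=py[None, :] > 0, out=np.zeros_like(W))
    cap = np.sum(p[:, None] * W * np.log2(ratio, where=ratio > 0, out=np.zeros_like(ratio)))
    return float(cap), p

# Binary symmetric channel with crossover probability 0.1.
eps = 0.1
W = np.array([[1 - eps, eps],
              [eps, 1 - eps]])
cap, p_opt = blahut_arimoto(W)
print(cap)      # about 0.531 bits, i.e. 1 - H_b(0.1)
print(p_opt)    # about [0.5, 0.5]
```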
Mutual information also bounds the trade-off between data compression and fidelity in rate-distortion theory, another cornerstone of Shannon's contributions. In this framework, the rate-distortion function R(D) represents the minimum rate required to encode a source at distortion level D, expressed as the infimum of the mutual information I(X; X̂) over all conditional distributions p(x̂ | x) satisfying the expected-distortion constraint E[d(X, X̂)] ≤ D.[56] Shannon proved that this function is achievable, providing the theoretical limit for lossy compression schemes, where mutual information captures the essential information preserved in the reconstruction X̂. For example, when encoding a source under quadratic distortion, R(D) decreases as the allowable distortion D increases, reflecting the diminishing returns of additional bits for higher fidelity.[56]

In multi-user communication scenarios, mutual information extends to conditional forms that characterize the capacities of broadcast and multiple-access channels. For a broadcast channel, where a single transmitter sends to multiple receivers over correlated channels, the capacity region involves maximizing rates using expressions such as I(X; Y_1 | U) and I(X; Y_2 | U) for an auxiliary random variable U, enabling degraded message sets or superposition coding strategies.[57] Similarly, in a multiple-access channel with several transmitters sharing a common receiver, the capacity region is bounded by the individual rates R_1 ≤ I(X_1; Y | X_2) and R_2 ≤ I(X_2; Y | X_1) and the sum rate R_1 + R_2 ≤ I(X_1, X_2; Y), where conditional mutual information accounts for interference between users. These formulations, developed in the 1970s building on Shannon's foundations, guide practical systems such as cellular networks by optimizing resource allocation under multi-user constraints.[58]

In neuroscience and other fields

In neuroscience, mutual information serves as a robust measure for quantifying functional connectivity between neural signals, capturing nonlinear dependencies that linear correlation methods often miss. For instance, in electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) analyses, mutual information has been applied to construct brain networks that reveal physiologically relevant architectures, such as altered connectivity in conditions like post-stroke depression.[59] This approach highlights shared information between brain regions during cognitive tasks, providing insights into network dynamics beyond pairwise correlations. Directed variants of mutual information, such as transfer entropy, extend this to infer causal influences in neural circuits.[60]

In genetics, normalized mutual information quantifies linkage disequilibrium (LD), the non-random association of alleles at different loci, offering a multivariate extension of traditional pairwise measures such as D′. By extending mutual information theory, researchers have developed multilocus LD metrics that assess statistical dependencies across multiple single-nucleotide polymorphisms (SNPs), aiding tagging-SNP selection for genome-wide association studies.[61] The normalization ensures comparability across datasets, revealing epistatic interactions that influence disease susceptibility.

In physics, the quantum mutual information generalizes the classical quantity to measure correlations between subsystems of quantum states, and it underpins thermodynamic interpretations of information processing. It quantifies entanglement growth in quantum systems and appears in second-law-type bounds for open quantum systems coupled to reservoirs, linking information flow to entropy production. For example, in thermodynamic contexts, mutual information describes how correlations evolve under quantum scrambling, providing a bridge between classical information theory and quantum irreversibility.

Emerging applications in 2025 leverage mutual information to model interactions among climate variables, disentangling internal variability in global circulation models through network-based analyses. By computing mutual information between time series of meteorological and soil variables, researchers identify nonlinear dependencies that enhance drought prediction and bias correction in Earth system models, preserving inter-variable structures under climate-change scenarios.[62]

References
