Logarithmically concave function
from Wikipedia

In convex analysis, a non-negative function f : ℝⁿ → ℝ₊ is logarithmically concave (or log-concave for short) if its domain is a convex set, and if it satisfies the inequality

f(θx + (1 − θ)y) ≥ f(x)^θ f(y)^(1−θ)

for all x,y ∈ dom f and 0 < θ < 1. If f is strictly positive, this is equivalent to saying that the logarithm of the function, log ∘ f, is concave; that is,

log f(θx + (1 − θ)y) ≥ θ log f(x) + (1 − θ) log f(y)

for all x,y ∈ dom f and 0 < θ < 1.

Examples of log-concave functions are the 0-1 indicator functions of convex sets (which requires the more flexible definition), and the Gaussian function.

Similarly, a function is log-convex if it satisfies the reverse inequality

f(θx + (1 − θ)y) ≤ f(x)^θ f(y)^(1−θ)

for all x,y ∈ dom f and 0 < θ < 1.

Properties

  • A log-concave function is also quasi-concave. This follows from the fact that the logarithm is monotone, which implies that the superlevel sets of this function are convex.[1]
  • Every concave function that is nonnegative on its domain is log-concave. However, the reverse does not necessarily hold. An example is the Gaussian function f(x) = exp(−x²/2), which is log-concave since log f(x) = −x²/2 is a concave function of x. But f is not concave, since its second derivative is positive for |x| > 1:

f″(x) = e^(−x²/2)(x² − 1) > 0 for |x| > 1.
  • From the above two points, concavity ⇒ log-concavity ⇒ quasiconcavity.
  • A twice differentiable, nonnegative function with a convex domain is log-concave if and only if for all x satisfying f(x) > 0,

f(x)∇²f(x) ⪯ ∇f(x)∇f(x)^T,[1]

i.e. the Hessian of log f,

∇²(log f)(x) = (f(x)∇²f(x) − ∇f(x)∇f(x)^T) / f(x)²,

is negative semi-definite. For functions of one variable, this condition simplifies to

f(x)f″(x) ≤ (f′(x))².
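The one-variable condition can be sanity-checked numerically. The sketch below (an illustration, not a proof; the Gaussian example and helper names are chosen here) uses central finite differences to verify f(x)f″(x) ≤ (f′(x))² at a few sample points.

```python
import math

def f(x):
    # the Gaussian, a standard log-concave example
    return math.exp(-x * x / 2.0)

def d1(g, x, h=1e-5):
    # central finite-difference first derivative
    return (g(x + h) - g(x - h)) / (2 * h)

def d2(g, x, h=1e-4):
    # central finite-difference second derivative
    return (g(x + h) - 2 * g(x) + g(x - h)) / (h * h)

# one-variable log-concavity test: f(x) f''(x) <= f'(x)^2, up to numerical tolerance
for x in [-3.0, -1.0, 0.0, 0.5, 2.0, 4.0]:
    lhs = f(x) * d2(f, x)
    rhs = d1(f, x) ** 2
    assert lhs <= rhs + 1e-8, (x, lhs, rhs)
print("f(x) f''(x) <= f'(x)^2 held at all sampled points")
```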

Operations preserving log-concavity

  • Products: The product of log-concave functions is also log-concave. Indeed, if f and g are log-concave functions, then log f and log g are concave by definition. Therefore

log f(x) + log g(x) = log(f(x)g(x))

is concave, and hence f g is also log-concave.
  • Marginals: if f(x,y) : ℝ^(n+m) → ℝ is log-concave, then

g(x) = ∫ f(x,y) dy

is log-concave (see the Prékopa–Leindler inequality).
  • This implies that convolution preserves log-concavity, since h(x,y) = f(x − y)g(y) is log-concave if f and g are log-concave, and therefore

(f ∗ g)(x) = ∫ f(x − y)g(y) dy

is log-concave.
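The convolution property can be illustrated numerically. The sketch below (an illustration, not a proof; the sampled densities are chosen here) convolves discrete samples of two log-concave densities and checks the discrete log-concavity condition h[k]² ≥ h[k−1]h[k+1].

```python
import math

def convolve(a, b):
    # full discrete convolution of two nonnegative sequences
    h = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            h[i + j] += ai * bj
    return h

# discrete samples of two log-concave densities (Gaussian- and Laplace-shaped)
a = [math.exp(-((k - 5) ** 2) / 4.0) for k in range(11)]
b = [math.exp(-abs(k - 5) / 2.0) for k in range(11)]
h = convolve(a, b)

# discrete log-concavity at interior points: h[k]^2 >= h[k-1] * h[k+1]
assert all(h[k] ** 2 >= h[k - 1] * h[k + 1] - 1e-12 for k in range(1, len(h) - 1))
print("discrete convolution preserved log-concavity")
```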

Log-concave distributions


Log-concave distributions are necessary for a number of algorithms, e.g. adaptive rejection sampling. Every distribution with a log-concave density is a maximum entropy probability distribution with specified mean μ and deviation risk measure D.[2] As it happens, many common probability distributions are log-concave. Some examples:[3]

  • the normal distribution and multivariate normal distributions,
  • the exponential distribution,
  • the uniform distribution over any convex set,
  • the logistic distribution,
  • the Laplace distribution,
  • the gamma distribution, if the shape parameter is ≥ 1,
  • the beta distribution, if both shape parameters are ≥ 1,
  • the Weibull distribution, if the shape parameter is ≥ 1.

Note that all of the parameter restrictions have the same basic source: the exponent of a non-negative quantity must be non-negative in order for the function to be log-concave.

The following distributions are non-log-concave for all parameters:

  • the Student's t-distribution,
  • the Cauchy distribution,
  • the Pareto distribution,
  • the log-normal distribution,
  • the F-distribution.

Note that the cumulative distribution function (CDF) of every log-concave distribution is also log-concave. However, some non-log-concave distributions also have log-concave CDFs, for example the log-normal distribution.

The following are among the properties of log-concave distributions:

  • If a density is log-concave, so is its cumulative distribution function (CDF).
  • If a multivariate density is log-concave, so is the marginal density over any subset of variables.
  • The sum of two independent log-concave random variables is log-concave. This follows from the fact that the convolution of two log-concave functions is log-concave.
  • The product of two log-concave functions is log-concave. This means that joint densities formed by multiplying two probability densities (e.g. the normal-gamma distribution, which always has a shape parameter ≥ 1) will be log-concave. This property is heavily used in general-purpose Gibbs sampling programs such as BUGS and JAGS, which are thereby able to use adaptive rejection sampling over a wide variety of conditional distributions derived from the product of other distributions.
  • If a density is log-concave, so is its survival function.[3]
  • If a density is log-concave, it has a monotone hazard rate (MHR) and is a regular distribution, since the derivative of the logarithm of the survival function is the negative hazard rate:

d/dx log(1 − F(x)) = −f(x)/(1 − F(x)) = −h(x),

which is decreasing, as it is the derivative of a concave function.
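The monotone hazard rate property can be seen numerically for the standard normal, whose density is log-concave. This sketch (function names and the sample grid are chosen here) computes the hazard rate h(x) = f(x)/(1 − F(x)) via the complementary error function and checks that it is non-decreasing.

```python
import math

def pdf(x):
    # standard normal density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def sf(x):
    # survival function 1 - F(x), via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

xs = [i / 10.0 for i in range(-30, 31)]
hazards = [pdf(x) / sf(x) for x in xs]
assert all(h2 >= h1 for h1, h2 in zip(hazards, hazards[1:]))
print("hazard rate is monotone increasing on the sampled grid")
```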

from Grokipedia
A logarithmically concave function, also known as a log-concave function, is a nonnegative function $f$ defined on a convex subset of $\mathbb{R}^d$ such that $\log f$ is concave wherever $f > 0$. Equivalently, $f$ satisfies the functional inequality $f(\lambda x + (1-\lambda)y) \geq f(x)^\lambda f(y)^{1-\lambda}$ for all $x, y$ in the domain and $\lambda \in [0,1]$. Log-concave functions form a broad class that includes exponentials of concave functions and are central to convex analysis and probability theory. In probability, they characterize the densities of log-concave distributions, encompassing familiar examples such as the Gaussian, exponential, Laplace, and uniform distributions on convex bodies. These functions exhibit desirable properties like unimodality, sub-exponential tail decay, and preservation under key operations: log-concavity is maintained under affine transformations, marginalization of densities, convolution of measures, and weak limits. The study of log-concave functions originated in the mid-20th century with work by Pólya and Schoenberg on totally positive functions, but gained prominence through functional forms of the Brunn-Minkowski inequality by Prékopa (1971) and Leindler (1972), and extensions by Brascamp and Lieb (1976). They play a pivotal role in applications across fields, including high-dimensional probability, nonparametric density estimation in statistics, efficient sampling algorithms in optimization, and geometric inequalities in asymptotic geometric analysis.

Fundamentals

Definition

A logarithmically concave function, also known as a log-concave function, is a non-negative function $f: D \to [0, \infty)$ defined on a convex domain $D \subseteq \mathbb{R}^n$ that satisfies the functional inequality $f(\theta x + (1-\theta) y) \geq f(x)^\theta f(y)^{1-\theta}$ for all $x, y \in D$ and $\theta \in [0,1]$. This inequality generalizes the notion of concavity to multiplicative structures, ensuring that the function preserves a form of convexity on its logarithmic scale. The requirement that $D$ be convex is essential, as it guarantees that the argument $\theta x + (1-\theta) y$ remains within the domain, and it aligns with the geometric properties of such functions, whose superlevel sets $\{x \in D \mid f(x) \geq \alpha\}$ for $\alpha > 0$ are convex. For functions that are strictly positive on their domain (i.e., $f(x) > 0$ for all $x \in D$), the definition is equivalent to concavity of the logarithm: the function $\log f: D \to \mathbb{R}$ is concave. In cases where $f$ may attain zero values, the support $\{x \in D \mid f(x) > 0\}$ forms a convex set, and the inequality holds trivially when either $f(x) = 0$ or $f(y) = 0$, since the right-hand side is zero while the left-hand side is non-negative. This extension allows log-concave functions to model phenomena with bounded support, such as densities concentrated on convex regions. The study of logarithmically concave functions originated in the mid-20th century with work by Pólya and Schoenberg on totally positive functions. Samuel Karlin introduced log-concavity for densities in the context of totally positive kernels of order 2 in his 1968 monograph, while András Prékopa formalized and extended the notion for measures and functions through his development of the Prékopa-Leindler inequality in papers from 1971 and 1973.

Examples

A fundamental example of a logarithmically concave function is the indicator function of a convex set $C \subseteq \mathbb{R}^n$, defined as $f(x) = 1$ if $x \in C$ and $f(x) = 0$ otherwise. Using the convention $\log 0 = -\infty$, the function $\log f(x)$ equals 0 on $C$ and $-\infty$ outside, which is concave precisely because $C$ is convex. The Gaussian function provides another example: $f(x) = e^{-\|x\|^2/2}$ for $x \in \mathbb{R}^n$. Here, $\log f(x) = -\|x\|^2/2$, a negative quadratic that is strictly concave. In one dimension, the exponential function $f(x) = e^{-x}$ for $x \geq 0$ is logarithmically concave, as $\log f(x) = -x$ is linear and thus concave on its domain. Not all concave functions are logarithmically concave. For instance, non-constant linear functions on $\mathbb{R}$, such as $f(x) = x$, take negative values and thus cannot support a logarithm defined over their full domain, whereas positive constants are trivially log-concave.
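Both examples can be checked against the defining inequality directly. The sketch below (a numerical spot check, not a proof; the sample ranges are chosen here) tests $f(tx + (1-t)y) \geq f(x)^t f(y)^{1-t}$ at random triples for the Gaussian and for the indicator of $[-1, 1]$.

```python
import math
import random

def gauss(x):
    # one-dimensional Gaussian example
    return math.exp(-x * x / 2.0)

def ind(x):
    # indicator of the convex set [-1, 1]
    return 1.0 if -1.0 <= x <= 1.0 else 0.0

random.seed(0)
for f in (gauss, ind):
    for _ in range(1000):
        x, y = random.uniform(-3, 3), random.uniform(-3, 3)
        t = random.uniform(0.01, 0.99)  # avoid the 0**0 edge case
        assert f(t * x + (1 - t) * y) >= f(x) ** t * f(y) ** (1 - t) - 1e-12
print("defining inequality held for both examples at 1000 random triples each")
```

For the indicator, the inequality is trivial whenever either endpoint lies outside $[-1, 1]$ (the right-hand side is zero), and holds with equality when both endpoints are inside, since the interval is convex.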

Properties

General Properties

A logarithmically concave function $f: \mathbb{R}^n \to [0, \infty)$ is quasi-concave, meaning that its upper level sets $\{x : f(x) \geq t\}$ are convex for every $t > 0$. This follows from the fact that $\log f$ is concave wherever $f > 0$, and the superlevel sets of a positive function inherit convexity from the concavity of its logarithm. Every nonnegative concave function is logarithmically concave, since the logarithm of a concave function that is positive on a convex domain preserves concavity. However, the converse does not hold; for instance, the Gaussian function $f(x) = e^{-\|x\|^2/2}$ (up to normalization) is logarithmically concave but not concave, as it approaches zero at infinity while remaining positive. Logarithmically concave functions are unimodal, possessing at most one global maximum and being monotonically non-decreasing (or non-increasing) on either side of this point. This unimodality arises directly from the concavity of $\log f$, which ensures a unique peak without local maxima. Logarithmic concavity is preserved under invertible affine transformations: if $f$ is logarithmically concave, then so is $g(x) = f(Ax + b)$ for any invertible matrix $A$ and vector $b$. This invariance reflects the geometric robustness of the class under linear changes of variables. The support of a logarithmically concave function, defined as the set $\{x : f(x) > 0\}$, is convex. Outside this set, $f = 0$ and $\log f = -\infty$, and the concavity of $\log f$ (extended appropriately) requires the support to form a convex domain.

Differentiable Case

For twice continuously differentiable functions $f > 0$ defined on a convex open set in $\mathbb{R}^n$, logarithmic concavity is equivalent to negative semidefiniteness of the Hessian of $\log f$. Specifically, $f$ is logarithmically concave if and only if $\nabla^2 (\log f)(x) \preceq 0$ for all $x$ in the domain, where $\preceq 0$ denotes that the matrix is negative semidefinite. This condition ensures that $\log f$ is concave, capturing the global property through local second-order behavior. Equivalently, the condition can be expressed directly in terms of the derivatives of $f$:

$f(x) \nabla^2 f(x) - \nabla f(x) [\nabla f(x)]^T \preceq 0$

for all $x$ where $f(x) > 0$. To derive this from the definition, consider the second-order Taylor expansion of $\log f$ around a point $x$:

$\log f(y) = \log f(x) + \nabla (\log f)(x) \cdot (y - x) + \frac{1}{2} (y - x)^T \nabla^2 (\log f)(x) (y - x) + o(\|y - x\|^2).$

For $\log f$ to be concave, the quadratic term must satisfy $(y - x)^T \nabla^2 (\log f)(x) (y - x) \leq 0$ for all directions $y - x$, implying $\nabla^2 (\log f)(x) \preceq 0$. Substituting $\nabla (\log f) = \nabla f / f$ and $\nabla^2 (\log f) = (\nabla^2 f)/f - (\nabla f [\nabla f]^T)/f^2$ yields the form involving $f$ and its derivatives after multiplying through by $f^2 > 0$. The negative semidefiniteness of $\nabla^2 (\log f)$ means that $\log f$ has non-positive curvature in every direction: the function bends downward or is flat, with no upward inflections. This local curvature control strengthens the unimodality inherent in general logarithmically concave functions, providing a tool for analyzing optima and stability in optimization and statistics.
In higher dimensions, the Hessian condition connects to the Monge-Ampère equation, where the determinant of the Hessian measures volume changes under mappings; for $\log f$ concave, the determinant of $\nabla^2 (\log f)$ has sign $(-1)^n$, i.e., $(-1)^n \det(\nabla^2 (\log f)) \geq 0$, and this arises in optimal transport problems between log-concave densities, where the Brenier potential solves $\det(\nabla^2 u) = f / (g \circ \nabla u)$ with regularity enhanced by the log-concavity of $f$ and $g$.
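The Hessian criterion can be tested numerically in two dimensions. The sketch below (an illustration under an example chosen here, $f(x) = e^{-(x_1^2 + x_1 x_2 + x_2^2)}$) estimates the Hessian of $\log f$ by finite differences and checks negative semidefiniteness via the 2×2 criterion trace ≤ 0 and det ≥ 0.

```python
def logf(x1, x2):
    # log of f(x) = exp(-(x1^2 + x1*x2 + x2^2)), a Gaussian-like example
    return -(x1 * x1 + x1 * x2 + x2 * x2)

def hessian(g, x1, x2, h=1e-4):
    # central finite differences for the 2x2 Hessian of g
    d11 = (g(x1 + h, x2) - 2 * g(x1, x2) + g(x1 - h, x2)) / h**2
    d22 = (g(x1, x2 + h) - 2 * g(x1, x2) + g(x1, x2 - h)) / h**2
    d12 = (g(x1 + h, x2 + h) - g(x1 + h, x2 - h)
           - g(x1 - h, x2 + h) + g(x1 - h, x2 - h)) / (4 * h**2)
    return d11, d12, d22

# a 2x2 symmetric matrix is negative semidefinite iff trace <= 0 and det >= 0
for pt in [(0.0, 0.0), (1.0, -2.0), (0.3, 0.7)]:
    d11, d12, d22 = hessian(logf, *pt)
    trace, det = d11 + d22, d11 * d22 - d12 * d12
    assert trace <= 1e-6 and det >= -1e-6, (pt, trace, det)
print("Hessian of log f is negative semidefinite at all sampled points")
```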

Preservation under Operations

Algebraic Operations

Logarithmically concave functions are closed under pointwise multiplication. If $f$ and $g$ are logarithmically concave functions defined on a convex domain, then their pointwise product $h(x) = f(x) g(x)$ is also logarithmically concave. This follows directly from the definition, as $\log h(x) = \log f(x) + \log g(x)$, and the sum of two concave functions is concave. To verify using the functional inequality form of log-concavity, suppose $f(\lambda x + (1-\lambda) y) \geq f(x)^\lambda f(y)^{1-\lambda}$ and $g(\lambda x + (1-\lambda) y) \geq g(x)^\lambda g(y)^{1-\lambda}$ for $\lambda \in [0,1]$. Then

$h(\lambda x + (1-\lambda) y) = f(\lambda x + (1-\lambda) y) \, g(\lambda x + (1-\lambda) y) \geq \left[ f(x)^\lambda f(y)^{1-\lambda} \right] \left[ g(x)^\lambda g(y)^{1-\lambda} \right] = h(x)^\lambda h(y)^{1-\lambda},$

confirming that $h$ satisfies the log-concavity inequality. The class is also closed under positive powers: if $f$ is logarithmically concave and $\alpha > 0$, then $f^\alpha$ is logarithmically concave, since $\log(f^\alpha(x)) = \alpha \log f(x)$, and multiplying a concave function by a positive scalar preserves concavity. Finite sums, however, do not generally preserve log-concavity. As a brief counterexample, consider the indicator functions $f(x) = \mathbf{1}_{[-1,1]}(x)$ and $g(x) = \mathbf{1}_{[3,5]}(x)$, both logarithmically concave (as indicators of convex sets). Their sum $h(x) = f(x) + g(x)$ has support $[-1,1] \cup [3,5]$, which is not convex, violating the requirement that the support of a logarithmically concave function be convex.
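Both the closure under products and the failure for sums can be demonstrated with a randomized check. The sketch below (an illustration; the helper `logconcave_ok` and the example functions are defined here, and the negative result is found by random search) tests the log-concavity inequality for a product of two log-concave functions and for the sum of two disjoint indicators.

```python
import math
import random

def logconcave_ok(f, trials=2000, lo=-6.0, hi=6.0):
    # randomized search for a violation of f(t x + (1-t) y) >= f(x)^t f(y)^(1-t)
    random.seed(1)
    for _ in range(trials):
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        t = random.uniform(0.01, 0.99)
        if f(t * x + (1 - t) * y) < f(x) ** t * f(y) ** (1 - t) - 1e-12:
            return False
    return True

def indicator(a, b):
    return lambda x: 1.0 if a <= x <= b else 0.0

f = lambda x: math.exp(-x * x)      # Gaussian-type, log-concave
g = lambda x: math.exp(-abs(x))     # Laplace-type, log-concave
product = lambda x: f(x) * g(x)
s = lambda x: indicator(-1, 1)(x) + indicator(3, 5)(x)  # sum of disjoint indicators

assert logconcave_ok(product)
assert not logconcave_ok(s)  # e.g. x = 0, y = 4: midpoint 2 has s = 0 < 1
print("product passed; sum of disjoint indicators failed, as expected")
```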

Integral Operations

Logarithmically concave functions exhibit remarkable stability under various integral operations, which is crucial for their applications in analysis and probability. One fundamental result is that the convolution of two logarithmically concave functions is itself logarithmically concave. Specifically, if $f$ and $g$ are logarithmically concave densities on $\mathbb{R}^d$, then their convolution $h(z) = \int_{\mathbb{R}^d} f(x) g(z - x) \, dx$ is also logarithmically concave. This preservation can be established using the Brascamp-Lieb inequality, which provides a variational framework for such functional inequalities, or deduced from the marginalization property below, since $h(z)$ is the marginal over $x$ of the log-concave function $(x, z) \mapsto f(x) g(z - x)$. The result extends from the one-dimensional case, originally due to Schoenberg (1951), to higher dimensions via generalizations by Borell (1974, 1975). Marginalization, or integration over subsets of variables, also preserves log-concavity. If a joint function $p(x, y)$ is logarithmically concave on $\mathbb{R}^{m+n}$, then the marginal $q(y) = \int_{\mathbb{R}^m} p(x, y) \, dx$ is logarithmically concave on $\mathbb{R}^n$. This follows from Prékopa's theorem (1973), a direct consequence of the Prékopa-Leindler inequality, which is a functional analogue of the Brunn-Minkowski inequality. The Prékopa-Leindler inequality states that for nonnegative measurable functions $\phi, \psi, \xi: \mathbb{R}^d \to [0, \infty)$ and $\lambda \in (0,1)$, if $\xi(\lambda x + (1-\lambda) y) \geq \phi(x)^\lambda \psi(y)^{1-\lambda}$ for all $x, y \in \mathbb{R}^d$, then

$\int_{\mathbb{R}^d} \xi(z) \, dz \geq \left( \int_{\mathbb{R}^d} \phi(x) \, dx \right)^\lambda \left( \int_{\mathbb{R}^d} \psi(y) \, dy \right)^{1-\lambda}.$

Applying this inequality to suitable sections of the log-concave joint yields the preservation under marginals.
Conditioning on variables similarly maintains log-concavity. For a logarithmically concave joint density $p(x, y)$, the conditional density $p(x \mid y) = p(x, y) / \int p(x, y) \, dx$ is logarithmically concave in $x$ for almost every $y$. This property, again rooted in the Prékopa-Leindler framework, ensures that projections and conditionals of log-concave measures remain within the class. The Prékopa-Leindler inequality originated in works by Prékopa (1971, 1973) and Leindler (1972), establishing it as a cornerstone for integral preservation results in convex analysis.
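Prékopa's marginalization theorem can be illustrated numerically. The sketch below (an illustration under an example joint chosen here, $p(x, y) = e^{-(x^2 + xy + y^2)}$) integrates out $x$ by the trapezoidal rule and checks discrete log-concavity of the resulting marginal on a uniform grid.

```python
import math

def p(x, y):
    # a log-concave joint density (up to normalization)
    return math.exp(-(x * x + x * y + y * y))

def marginal(y, lo=-8.0, hi=8.0, n=400):
    # trapezoidal integration of p(., y) over x
    h = (hi - lo) / n
    total = 0.5 * (p(lo, y) + p(hi, y))
    total += sum(p(lo + i * h, y) for i in range(1, n))
    return total * h

ys = [i / 10.0 for i in range(-20, 21)]
q = [marginal(y) for y in ys]

# discrete log-concavity on a uniform grid: q[k]^2 >= q[k-1] * q[k+1]
assert all(q[k] ** 2 >= q[k - 1] * q[k + 1] - 1e-12 for k in range(1, len(q) - 1))
print("marginal is log-concave on the sampled grid")
```

For this joint the marginal is itself a Gaussian, $q(y) \propto e^{-3y^2/4}$, so the check confirms what the theorem predicts.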

Applications

In Probability and Statistics

In probability theory, a probability distribution is log-concave if its density function $p$ satisfies the condition that $\log p$ is a concave function on the support of the distribution. This property ensures that the density can be expressed as $p(x) = e^{-\phi(x)}$ where $\phi$ is a convex function. Prominent examples of log-concave distributions include the uniform distribution over a convex set, the multivariate normal distribution, and the (one-sided) exponential distribution. In contrast, distributions such as the Cauchy and Pareto, which exhibit heavy tails, are not log-concave. Log-concave densities possess several key properties relevant to stochastic modeling. They are unimodal, meaning they have a single peak, with level sets that form convex sets. The cumulative distribution function (CDF) of a log-concave density is also log-concave. Additionally, log-concave distributions exhibit a monotone increasing failure rate (IFR), which is valuable in reliability analysis and modeling, and they correspond to Pólya frequency functions of order 2, implying certain sign-consistency properties under expansions. Under specific constraints, such as a fixed mean and a fixed deviation measure, log-concave distributions achieve the maximum entropy, providing a principled way to select distributions that balance uncertainty and constraint satisfaction. In statistical computing, the log-concavity property facilitates efficient sampling algorithms, notably adaptive rejection sampling, which constructs piecewise linear envelopes of the log-density to draw samples from univariate log-concave densities with high acceptance rates. This method adapts the envelope based on previous samples, making it particularly effective for Gibbs sampling schemes involving log-concave conditional distributions.
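Adaptive rejection sampling works because tangent lines to a concave log-density always upper-bound it, so the exponentiated tangents form a valid piecewise-exponential envelope. The sketch below (a minimal check of that tangent-line property, using the standard normal log-density as its example) verifies the bound at random tangent and evaluation points.

```python
import random

def logf(x):
    # standard normal log-density, up to an additive constant
    return -x * x / 2.0

def dlogf(x):
    # derivative of the log-density
    return -x

random.seed(2)
for _ in range(1000):
    x0 = random.uniform(-4, 4)  # tangent point
    x = random.uniform(-4, 4)   # evaluation point
    tangent = logf(x0) + dlogf(x0) * (x - x0)
    assert logf(x) <= tangent + 1e-12
print("every tangent line upper-bounds log f, as concavity guarantees")
```

Exponentiating the pointwise minimum of several such tangents gives the piecewise-exponential envelope that ARS samples from and tightens as points accumulate.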

In Optimization

Log-concave functions are central to convex optimization, as the negative logarithm of a log-concave function is convex, enabling the reformulation of problems such as maximum likelihood estimation for log-concave densities as convex programs solvable via interior-point methods. This property facilitates efficient approximation and sampling in high-dimensional settings, where the convexity ensures global optimality and polynomial-time solvability under standard assumptions. In particular, interior-point methods leverage self-concordant barrier functions derived from log-concave structures to navigate the feasible region while maintaining strict feasibility. A prominent example is the use of log-concave barrier functions in semidefinite programming (SDP), where the standard barrier for the positive semidefinite cone, given by $\phi(X) = -\log \det X$ for $X \succ 0$, is convex because $\log \det X$ is concave on the domain of positive definite matrices. This barrier, introduced in early interior-point algorithms for SDP, ensures rapid convergence by following the central path, with the duality gap decreasing as $n/t$, where $n$ is the matrix dimension and $t$ is the barrier parameter. Such barriers extend naturally to more general convex cones, preserving the efficiency of primal-dual methods for large-scale SDP instances. In volume computation for convex bodies, log-concave densities enable Monte Carlo sampling algorithms to approximate volumes efficiently, relying on the Kannan-Lovász-Simonovits (KLS) conjecture for dimension-independent bounds on mixing times. The conjecture posits that the Cheeger constant of any isotropic log-concave density in $\mathbb{R}^n$ is at least a universal constant $c > 0$, implying that random walks like the ball walk mix in $O^*(n^2)$ steps, leading to volume estimators with relative error $\epsilon$ in $\tilde{O}(n^2 / \epsilon^2)$ time. Recent results approaching the conjectured bound have improved practical algorithms for high-dimensional integration over convex sets.
Log-concave utility functions also appear in equilibrium computation, where their concavity properties promote stability and uniqueness of equilibria in multi-agent economic models. For instance, homothetic quasi-concave utilities can be transformed monotonically to log-concave forms, ensuring the existence of market equilibria via convex programming formulations. This approach, as detailed in Boyd and Vandenberghe's framework for barrier methods, underscores the role of log-concavity in stabilizing optimization-based computations of equilibrium prices and allocations.
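The convexity of the SDP barrier $\phi(X) = -\log \det X$ (equivalently, the concavity of $\log \det$ on positive definite matrices) can be spot-checked numerically. The sketch below (an illustration restricted to 2×2 matrices; the helper names and random construction are chosen here) verifies midpoint convexity of $\phi$ on random positive definite pairs.

```python
import math
import random

def rand_pd():
    # A^T A + I is always symmetric positive definite
    a, b, c, d = (random.uniform(-1, 1) for _ in range(4))
    return ((a * a + c * c + 1, a * b + c * d),
            (a * b + c * d, b * b + d * d + 1))

def det2(m):
    # determinant of a 2x2 matrix given as a tuple of rows
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def phi(m):
    # the SDP barrier: -log det X, finite on positive definite matrices
    return -math.log(det2(m))

def midpoint(m1, m2):
    return tuple(tuple((x + y) / 2 for x, y in zip(r1, r2))
                 for r1, r2 in zip(m1, m2))

random.seed(3)
for _ in range(500):
    X, Y = rand_pd(), rand_pd()
    # midpoint convexity: phi((X + Y)/2) <= (phi(X) + phi(Y)) / 2
    assert phi(midpoint(X, Y)) <= (phi(X) + phi(Y)) / 2 + 1e-12
print("phi((X+Y)/2) <= (phi(X)+phi(Y))/2 for all sampled PD pairs")
```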