
Logarithmically convex function

from Wikipedia

In mathematics, a function f is logarithmically convex or superconvex[1] if $\log \circ f$, the composition of the logarithm with f, is itself a convex function.

Definition


Let X be a convex subset of a real vector space, and let $f : X \to \mathbb{R}$ be a function taking non-negative values. Then f is:

  • Logarithmically convex if $\log \circ f$ is convex, and
  • Strictly logarithmically convex if $\log \circ f$ is strictly convex.

Here we interpret $\log 0$ as $-\infty$.

Explicitly, f is logarithmically convex if and only if, for all $x_1, x_2 \in X$ and all $t \in [0, 1]$, the two following equivalent conditions hold:

$f(t x_1 + (1-t) x_2) \le f(x_1)^t f(x_2)^{1-t}$

$\log f(t x_1 + (1-t) x_2) \le t \log f(x_1) + (1-t) \log f(x_2)$

Similarly, f is strictly logarithmically convex if and only if, in the above two expressions, strict inequality holds for all t ∈ (0, 1).

The above definition permits f to be zero, but if f is logarithmically convex and vanishes anywhere in X, then it vanishes everywhere in the interior of X.
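The defining geometric-mean inequality lends itself to a quick numeric spot-check. The sketch below (an illustration added here, not part of the original article; the function and sample points are arbitrary choices) samples the inequality for $f(x) = e^{x^2}$, which is log-convex since $\log f(x) = x^2$ is convex:

```python
import math

# f(x) = exp(x^2) is log-convex because log f(x) = x^2 is convex.
def f(x):
    return math.exp(x * x)

def satisfies_log_convexity(f, x, y, t):
    # The defining inequality: f(t*x + (1-t)*y) <= f(x)**t * f(y)**(1-t)
    left = f(t * x + (1 - t) * y)
    right = f(x) ** t * f(y) ** (1 - t)
    return left <= right + 1e-12  # small tolerance for floating point

# Sample the inequality on a grid of points and convex weights.
points = [-2.0, -0.5, 0.0, 0.7, 1.5]
weights = [0.0, 0.25, 0.5, 0.75, 1.0]
ok = all(satisfies_log_convexity(f, x, y, t)
         for x in points for y in points for t in weights)
print(ok)  # True
```

Sampling on a grid is of course only evidence, not a proof; the proof is the convexity of $x \mapsto x^2$.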

Equivalent conditions


If f is a differentiable function defined on an interval $I \subseteq \mathbb{R}$, then f is logarithmically convex if and only if the following condition holds for all x and y in I:

$f(x) \ge f(y) \exp\left( \frac{f'(y)}{f(y)} (x - y) \right)$

This is equivalent to the condition that, whenever x and y are in I and x > y,

$\left( \frac{f(x)}{f(y)} \right)^{\frac{1}{x - y}} \ge \exp\left( \frac{f'(y)}{f(y)} \right)$
Moreover, f is strictly logarithmically convex if and only if these inequalities are always strict.

If f is twice differentiable, then it is logarithmically convex if and only if, for all x in I,

$f''(x) f(x) - f'(x)^2 \ge 0$

If the inequality is always strict, then f is strictly logarithmically convex. However, the converse is false: it is possible that f is strictly logarithmically convex and that, for some x, we have $f''(x) f(x) - f'(x)^2 = 0$. For example, if $f(x) = \exp(x^4)$, then f is strictly logarithmically convex, but $f''(0) f(0) - f'(0)^2 = 0$.
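The tightness phenomenon described above can be illustrated numerically for $f(x) = \exp(x^4)$, a standard example of a strictly log-convex function whose second-order expression vanishes at a point. The sketch below (an illustration, not part of the original article) uses the hand-derived closed form $f''(x) f(x) - f'(x)^2 = 12 x^2 e^{2x^4}$:

```python
import math

# f(x) = exp(x**4) is strictly log-convex (log f = x^4 is strictly convex),
# yet the second-order expression f''(x)*f(x) - f'(x)**2 vanishes at x = 0.
def f(x):
    return math.exp(x ** 4)

def H(x):
    # Hand-derived: f'(x) = 4x^3 f(x), f''(x) = (12x^2 + 16x^6) f(x),
    # so f''(x) f(x) - f'(x)^2 = 12 x^2 exp(2 x^4).
    return 12 * x ** 2 * math.exp(2 * x ** 4)

print(H(0.0))  # 0.0 -- the inequality is tight at x = 0

# The strict geometric-mean inequality still holds for x != y:
x, y, t = -1.0, 1.0, 0.5
left = f(t * x + (1 - t) * y)          # f(0) = 1
right = f(x) ** t * f(y) ** (1 - t)    # e^1
print(left < right)  # True, strictly
```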

Furthermore, $f$ is logarithmically convex if and only if $e^{\alpha x} f(x)$ is convex for all $\alpha \in \mathbb{R}$.[2][3]

Sufficient conditions


If $f_1, \ldots, f_n$ are logarithmically convex, and if $w_1, \ldots, w_n$ are non-negative real numbers, then $f_1^{w_1} \cdots f_n^{w_n}$ is logarithmically convex.

If $\{ f_i \}_{i \in I}$ is any family of logarithmically convex functions, then $g = \sup_{i \in I} f_i$ is logarithmically convex.

If $f : X \to I \subseteq \mathbb{R}$ is convex and $g : I \to \mathbb{R}_{\geq 0}$ is logarithmically convex and non-decreasing, then $g \circ f$ is logarithmically convex.
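The closure of log-convexity under products with non-negative powers can be spot-checked numerically. The sketch below (an illustration added here; the factors $e^{x^2}$ and $1/x$ and the weights are arbitrary choices) verifies midpoint convexity of $\log h$ for such a product on $(0, \infty)$:

```python
import math

# Closure rule: if f1, f2 are log-convex and w1, w2 >= 0, then
# h = f1**w1 * f2**w2 is log-convex.  Here f1(x) = exp(x^2) and
# f2(x) = 1/x on (0, inf) are both log-convex.
f1 = lambda x: math.exp(x * x)
f2 = lambda x: 1.0 / x
w1, w2 = 0.5, 2.0

def h(x):
    return f1(x) ** w1 * f2(x) ** w2

def midpoint_log_convex(g, x, y):
    # Midpoint form of convexity of log g.
    return math.log(g((x + y) / 2)) <= (math.log(g(x)) + math.log(g(y))) / 2 + 1e-12

pts = [0.1, 0.5, 1.0, 2.0, 5.0]
ok = all(midpoint_log_convex(h, x, y) for x in pts for y in pts)
print(ok)  # True
```

Here $\log h(x) = w_1 x^2 - w_2 \log x$ is a non-negative combination of convex functions, which is exactly the content of the rule.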

Properties


A logarithmically convex function f is a convex function since it is the composite of the increasing convex function $\exp$ and the function $\log \circ f$, which is by definition convex. However, being logarithmically convex is a strictly stronger property than being convex. For example, the squaring function $f(x) = x^2$ is convex, but its logarithm $\log f(x) = 2 \log |x|$ is not. Therefore the squaring function is not logarithmically convex.
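The failure of the squaring function can be exhibited with a single pair of points. The sketch below (an illustration added here; the points are an arbitrary choice) shows the geometric-mean inequality failing while ordinary convexity still holds:

```python
import math

# The squaring function is convex but not log-convex:
# log(x^2) = 2 log|x| is concave on (0, inf), so the
# geometric-mean inequality fails for suitable points.
sq = lambda x: x * x
x, y, t = 1.0, 9.0, 0.5

mid = sq(t * x + (1 - t) * y)            # sq(5) = 25
bound = sq(x) ** t * sq(y) ** (1 - t)    # sqrt(1 * 81) = 9
print(mid <= bound)                      # False: log-convexity fails

# Ordinary convexity still holds:
print(mid <= t * sq(x) + (1 - t) * sq(y))  # True: 25 <= 41
```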

Examples

  • $f(x) = \exp(|x|^p)$ is logarithmically convex when $p \ge 1$ and strictly logarithmically convex when $p > 1$.
  • $f(x) = \frac{1}{x^p}$ is strictly logarithmically convex on $(0, \infty)$ for all $p > 0$.
  • Euler's gamma function is strictly logarithmically convex when restricted to the positive real numbers. In fact, by the Bohr–Mollerup theorem, this property can be used to characterize Euler's gamma function among the possible extensions of the factorial function to real arguments.
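The log-convexity of the gamma function on the positive reals can be probed with the standard library's `math.lgamma`. The sketch below (a midpoint spot-check on arbitrary sample points, added here as an illustration, not a proof) checks that $\log \Gamma$ is midpoint-convex:

```python
import math

# Midpoint-convexity check for log(Gamma(x)) on (0, inf),
# using math.lgamma = log(|Gamma(x)|) from the standard library.
def midpoint_convex_loggamma(x, y):
    return math.lgamma((x + y) / 2) <= (math.lgamma(x) + math.lgamma(y)) / 2 + 1e-12

pts = [0.5, 1.0, 1.5, 2.0, 3.7, 10.0]
ok = all(midpoint_convex_loggamma(x, y) for x in pts for y in pts)
print(ok)  # True
```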

from Grokipedia
A logarithmically convex function, also called a log-convex function, is a function $f: I \to (0, \infty)$ defined on an interval $I \subseteq \mathbb{R}$ such that $\log f$ is convex on $I$. This is equivalent to the inequality $f(tx + (1-t)y) \leq f(x)^t f(y)^{1-t}$ holding for all $x, y \in I$ and $t \in [0,1]$.[1] Logarithmically convex functions form an important subclass of convex functions, as the property implies ordinary convexity via the arithmetic-geometric mean inequality, though the converse does not hold.[2] Key properties include closure under pointwise products and sums, meaning the product or sum of two log-convex functions is again log-convex.[3] They also satisfy various integral inequalities, such as bounds on averages that refine the Hermite-Hadamard inequality for convex functions.[1] For twice-differentiable functions, log-convexity is characterized by the second derivative satisfying $f(x) f''(x) \geq [f'(x)]^2$.[1] Prominent examples include the exponential function $f(x) = e^{kx}$ for any real $k$, which is log-convex (with $\log f$ affine), and the gamma function $\Gamma(x)$ for $x > 0$, whose log-convexity is a cornerstone of the Bohr-Mollerup theorem uniquely characterizing $\Gamma$.[4] Log-convexity arises in diverse applications, such as bounding remainders in Taylor expansions of the exponential function[1] and in functional analysis, notably in extensions of Orlicz space theory, whose spaces are defined using convex modular functions introduced by Orlicz in 1936. More generally, it features in inequalities like the log-Minkowski inequality extended to Orlicz settings and in studying monotonicity properties of special functions.

Definition and Characterizations

Formal Definition

A logarithmically convex function, also known as a log-convex function, is defined on a convex subset $X$ of a real vector space, where $f: X \to (0, \infty)$ maps to positive real numbers, ensuring the logarithm is well-defined. The function $f$ is logarithmically convex if $\log f: X \to \mathbb{R}$ is a convex function.[4][5] This convexity of $\log f$ implies the defining inequality: for all $x, y \in X$ and $t \in [0, 1]$,

$f(tx + (1-t)y) \leq f(x)^t f(y)^{1-t}.$

This follows directly from Jensen's inequality applied to $\log f$, since $\log f(tx + (1-t)y) \leq t \log f(x) + (1-t) \log f(y)$, and exponentiating both sides preserves the inequality due to the monotonicity of the exponential function.[4][6] The function $f$ is strictly logarithmically convex if the inequality is strict for all $t \in (0, 1)$ and $x \neq y$, which occurs precisely when $\log f$ is strictly convex. This framework motivates the study of logarithmically convex functions as those exhibiting convexity in a multiplicative sense, transforming additive convexity of $\log f$ into the geometric-mean-like bound above.

Equivalent Conditions

A logarithmically convex function $f > 0$ on an open interval $I \subseteq \mathbb{R}$ admits equivalent characterizations under smoothness assumptions, leveraging the fact that $f$ is logarithmically convex if and only if $g = \log f$ is convex on $I$.[7] For a once-differentiable $f$, this equivalence yields the first-order condition that $g$ lies above its tangents:

$\log f(x) \geq \log f(y) + \frac{f'(y)}{f(y)} (x - y)$

for all $x, y \in I$. To derive this from the definition, note that convexity of $g$ requires $g(x) \geq g(y) + g'(y)(x - y)$ for all $x, y \in I$, where $g'(y) = f'(y)/f(y)$; substituting $g = \log f$ directly gives the inequality, which is both necessary and sufficient for logarithmic convexity under the once-differentiable assumption.[7] For a twice-differentiable $f$, convexity of $g$ is equivalent to its second derivative being nonnegative everywhere:

$\frac{d^2}{dx^2} \log f(x) \geq 0$

for all $x \in I$. Expanding this yields

$f''(x) f(x) - [f'(x)]^2 \geq 0.$

The derivation follows by direct computation: $g'(x) = f'(x)/f(x)$ and $g''(x) = [f''(x) f(x) - [f'(x)]^2] / [f(x)]^2$, so $g''(x) \geq 0$ holds if and only if the numerator is nonnegative (since $f > 0$); this condition is necessary and sufficient for logarithmic convexity under twice differentiability.[7]
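The quotient identity for $(\log f)''$ is easy to validate numerically. The sketch below (an illustration added here; the choice of $f = \cosh$ and the test point are arbitrary) compares the closed form against a central finite difference of $\log f$:

```python
import math

# Verify (log f)'' = (f'' f - f'^2) / f^2 numerically for f = cosh,
# where f' = sinh and f'' = cosh, so the numerator is cosh^2 - sinh^2 = 1.
def g2_formula(x):
    f, fp, fpp = math.cosh(x), math.sinh(x), math.cosh(x)
    return (fpp * f - fp * fp) / (f * f)

def g2_finite_diff(x, h=1e-4):
    # Second-order central difference of g(u) = log(cosh(u)).
    g = lambda u: math.log(math.cosh(u))
    return (g(x + h) - 2 * g(x) + g(x - h)) / (h * h)

x = 0.8
print(abs(g2_formula(x) - g2_finite_diff(x)) < 1e-6)  # True
print(g2_formula(x) > 0)  # True: cosh is log-convex
```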

Properties

Fundamental Properties

A logarithmically convex function $f: I \to (0, \infty)$, where $I$ is a convex subset of $\mathbb{R}$, satisfies the fundamental inequality $f(\lambda x + (1-\lambda) y) \leq f(x)^\lambda f(y)^{1-\lambda}$ for all $x, y \in I$ and $\lambda \in [0, 1]$.[8] This quasi-multiplicative property arises directly from the convexity of $\log f$ and distinguishes log-convex functions by linking additive combinations of arguments to powered products of function values, with equality holding for affine $\log f$. Log-convex functions are closed under pointwise addition and multiplication: if $f$ and $g$ are log-convex, then so are $f + g$ and $f \cdot g$.[7] For twice continuously differentiable functions, log-convexity is equivalent to $f(x) f''(x) \geq [f'(x)]^2$ for all $x \in I$.[9] Log-convexity is preserved under pointwise limits of sequences of log-convex functions, provided the limit function remains positive on the domain.[8] Similarly, under suitable conditions, integrals of log-convex functions, such as those appearing in representations like the gamma function via the Rogers-Hölder inequality, retain log-convexity if the resulting function is positive.[8] These preservation results ensure that log-convex functions form a stable class under limiting operations common in analysis. If a log-convex function $f$ is increasing on its domain, then $\log f$ is both convex (by definition) and increasing.[8] This combined monotonicity strengthens the utility of log-convex functions in contexts requiring ordered behavior alongside convexity.

Constant functions $f(x) = c > 0$ are log-convex, as $\log f(x) = \log c$ is constant and thus convex.[8] Exponential functions $f(x) = e^{ax + b}$ with $a, b \in \mathbb{R}$ are also log-convex, since $\log f(x) = ax + b$ is affine and hence convex; these serve as basic building blocks for constructing more complex log-convex functions via products or limits.[8]

Relation to Convexity

A logarithmically convex function is always convex. To see this, suppose $f$ is positive and logarithmically convex on a convex set, so $\log f$ is convex. For $\theta \in [0,1]$ and $x, y$ in the domain, convexity of $\log f$ yields $\log f(\theta x + (1-\theta) y) \leq \theta \log f(x) + (1-\theta) \log f(y) = \log \left( f(x)^\theta f(y)^{1-\theta} \right)$, which implies $f(\theta x + (1-\theta) y) \leq f(x)^\theta f(y)^{1-\theta}$. By the weighted AM-GM inequality, $f(x)^\theta f(y)^{1-\theta} \leq \theta f(x) + (1-\theta) f(y)$, so $f(\theta x + (1-\theta) y) \leq \theta f(x) + (1-\theta) f(y)$, confirming the convexity of $f$.[10] Logarithmic convexity is a strictly stronger condition than convexity. For instance, the function $f(x) = x^2$ for $x > 0$ is convex, as its second derivative $f''(x) = 2 > 0$, but it is not logarithmically convex because $\log f(x) = 2 \log x$ has second derivative $-2/x^2 < 0$, making $\log f$ concave.[9] In one dimension, every logarithmically convex function on an interval is convex, but the converse does not hold; for example, a quadratic with positive leading coefficient is convex on $\mathbb{R}$ but not logarithmically convex, since the logarithm of a polynomial generally fails to be convex, in particular near the polynomial's zeros, where it diverges to $-\infty$.[9] The dual concept to logarithmic convexity is logarithmic concavity, where a positive function $f$ is logarithmically concave if $-\log f$ is convex (equivalently, $\log f$ is concave).[9]

Sufficient Conditions

Preservation under Operations

Logarithmically convex functions, also known as log-convex functions, form a class that is closed under certain operations, allowing the construction of new log-convex functions from existing ones. A key preservation property is that the pointwise product of log-convex functions remains log-convex. Specifically, if $f_i: \mathbb{R}^n \to (0, \infty)$ for $i = 1, \dots, k$ are log-convex, then so is their product raised to nonnegative powers, $h(x) = \prod_{i=1}^k f_i(x)^{a_i}$ where $a_i \geq 0$. This follows from the additivity of the logarithm: $\log h(x) = \sum_{i=1}^k a_i \log f_i(x)$, and nonnegative linear combinations of convex functions are convex.[7] Another operation that preserves log-convexity is the pointwise supremum. If $\{f_i: \mathbb{R}^n \to (0, \infty)\}_{i \in I}$ is a family of log-convex functions indexed by some set $I$, then the pointwise supremum $h(x) = \sup_{i \in I} f_i(x)$ is log-convex, provided the supremum is finite and positive on the domain. For families with non-negative weights, such as a weighted supremum $h(x) = \sup_{i} \lambda_i f_i(x)$ with $\lambda_i \geq 0$, log-convexity is similarly preserved, as scaling by a positive constant is itself a log-convexity-preserving product. The reason is that $\log h(x) = \sup_{i \in I} \log f_i(x)$, and the pointwise supremum of convex functions is convex.[7] Log-convexity is also preserved under specific compositions. If $f: \mathbb{R}^m \to (0, \infty)$ is log-convex and increasing, and $g: \mathbb{R}^n \to \mathbb{R}^m$ is convex with compatible domains, then the composition $h(x) = f(g(x))$ is log-convex.

To see this, note that $\log h(x) = (\log f)(g(x))$, where $\log f$ is convex (by definition of log-convexity) and increasing (since $f$ and $\log$ are increasing), and the composition of a convex increasing function with a convex function yields a convex function. This rule extends the standard convexity preservation under composition to the logarithmic scale.[7] These preservation properties arise fundamentally from the behavior of the logarithm itself, which transforms products into sums, suprema into suprema, and suitable compositions into convex compositions, thereby maintaining the convexity of the logged versions.[7]
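The supremum rule can be spot-checked with an upper envelope of exponentials. The sketch below (an illustration added here; the slopes and sample points are arbitrary choices) verifies midpoint log-convexity of $h(x) = \max_a e^{ax}$ over a finite family:

```python
import math

# The upper envelope of the log-convex functions exp(a*x) is again log-convex:
# log h(x) = max_a (a*x) is a pointwise max of linear functions, hence convex.
slopes = [-2.0, -0.5, 1.0, 3.0]

def h(x):
    return max(math.exp(a * x) for a in slopes)

def midpoint_log_convex(x, y):
    return math.log(h((x + y) / 2)) <= (math.log(h(x)) + math.log(h(y))) / 2 + 1e-12

pts = [-3.0, -1.0, 0.0, 0.5, 2.0]
ok = all(midpoint_log_convex(x, y) for x in pts for y in pts)
print(ok)  # True
```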

Other Criteria

For a positive function $f$ that is once differentiable on an interval, a sufficient condition for log-convexity is that the logarithmic derivative $\frac{f'}{f}$ is non-decreasing. This condition ensures that $\log f$ has a non-decreasing derivative, implying the convexity of $\log f$ by the characterization of differentiable convex functions via non-decreasing first derivatives. Another sufficient condition arises from integral representations: a positive function $f$ on $(0, \infty)$ is log-convex if it can be expressed as the Laplace transform of a positive measure, i.e., $f(s) = \int_0^\infty e^{-su} \, d\mu(u)$ for some positive Borel measure $\mu$.[11] This representation guarantees log-convexity because $\log f(s)$ is, up to normalization, the cumulant generating function of the distribution induced by $\mu$, which is convex. A characterization due to Paul Montel (1928) states that a positive function $f$ is log-convex if and only if $e^{\alpha x} f(x)$ is convex for every real $\alpha$.[12]
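Montel's criterion can be sampled numerically for a concrete log-convex function. The sketch below (an illustration added here; the choice $f = \cosh$ and the grids of slopes and points are arbitrary) checks midpoint convexity of $x \mapsto e^{\alpha x} \cosh(x)$ for several values of $\alpha$:

```python
import math

# Montel's criterion: for log-convex f, x -> exp(alpha*x) * f(x)
# should be convex for every real alpha.  Here f = cosh, which is
# log-convex since (log cosh)'' = 1/cosh^2 > 0.
f = math.cosh

def midpoint_convex(g, x, y):
    return g((x + y) / 2) <= (g(x) + g(y)) / 2 + 1e-9

pts = [-2.0, -0.7, 0.0, 1.3, 2.5]
ok = all(
    midpoint_convex(lambda u: math.exp(alpha * u) * f(u), x, y)
    for alpha in [-3.0, -1.0, 0.0, 0.5, 2.0, 4.0]
    for x in pts for y in pts
)
print(ok)  # True
```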

Examples

Classical Examples

One classical example of a logarithmically convex function is the exponential function $ f(x) = e^{c x} $, where $ c $ is a real constant. To verify, note that $ \log f(x) = c x $, which is a linear function and hence convex on $ \mathbb{R} $. Another standard example is the power function $ f(x) = x^{-p} $ for $ x > 0 $ and $ p > 0 $. Here, $ \log f(x) = -p \log x $. Since $ \log x $ (natural logarithm) is concave on $ (0, \infty) $, $ -\log x $ is convex, and multiplication by the positive constant $ p $ preserves convexity, making $ \log f(x) $ convex on $ (0, \infty) $.[13] Functions resembling Gaussians also provide classical instances, such as $ f(x) = e^{|x|^p} $ for $ p \geq 1 $. In this case, $ \log f(x) = |x|^p $, which is convex on $ \mathbb{R} $ because $ p $-norms with $ p \geq 1 $ are convex (verifiable by the second derivative test: for $ x > 0 $, the second derivative of $ x^p $ is $ p(p-1)x^{p-2} \geq 0 $, and symmetry extends this to the absolute value). Thus, $ f(x) $ is logarithmically convex on $ \mathbb{R} $.

Examples from Special Functions

The gamma function $\Gamma(x)$ is logarithmically convex on $(0, \infty)$. This property is established as part of the Bohr–Mollerup theorem, which characterizes $\Gamma(x)$ as the unique positive function satisfying $\Gamma(1) = 1$, the functional equation $\Gamma(x+1) = x \Gamma(x)$, and logarithmic convexity on $(0, \infty)$.[14] Alternatively, logarithmic convexity follows from the fact that the second derivative of $\log \Gamma(x)$ is the trigamma function $\psi_1(x) = \frac{\Gamma''(x) \Gamma(x) - [\Gamma'(x)]^2}{\Gamma(x)^2} > 0$ for $x > 0$. These characterizations highlight the non-trivial nature of the result, as direct verification for $\Gamma(x)$ relies on its integral representation $\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} \, dt$ and inequalities like Hölder's, rather than algebraic manipulation as for polynomials. The beta function $B(p, q) = \int_0^1 t^{p-1} (1-t)^{q-1} \, dt$ for $p, q > 0$ is logarithmically convex in $p$ for fixed $q > 0$, and symmetrically in $q$ for fixed $p > 0$. This follows from the integral representation, where the kernel $t^{p-1}$ ensures the convexity of $\log B(p, q)$ via differentiation under the integral sign and positivity of the resulting expressions.[15] Such proofs underscore the role of integral forms in establishing log-convexity for these functions, contrasting with elementary cases. For entire functions, the growth rate function $\log M(r) = \log \max_{|z|=r} |f(z)|$ is convex in $\log r$ for $r > 0$. This result, due to Hadamard via the three circles theorem, arises from the subharmonicity of $\log |f(z)|$ and properties of the maximum modulus for functions analytic in the plane.[16] Verification involves Phragmén–Lindelöf principles or series expansions, making it non-trivial compared to finite-degree polynomials where growth is explicitly polynomial.
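The log-convexity of the beta function in its first argument can be spot-checked via the identity $\log B(p, q) = \log\Gamma(p) + \log\Gamma(q) - \log\Gamma(p+q)$ and the standard library's `math.lgamma`. The sketch below (an illustration added here; the fixed $q$ and sample grid are arbitrary choices) checks midpoint convexity in $p$:

```python
import math

# log B(p, q) via the gamma-function identity; lgamma is log|Gamma|.
def log_beta(p, q):
    return math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)

q = 2.5  # fixed second argument (arbitrary choice)

def midpoint_convex_in_p(p1, p2):
    return log_beta((p1 + p2) / 2, q) <= (log_beta(p1, q) + log_beta(p2, q)) / 2 + 1e-12

ps = [0.3, 0.8, 1.0, 2.0, 4.5]
ok = all(midpoint_convex_in_p(p1, p2) for p1 in ps for p2 in ps)
print(ok)  # True
```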

Applications

In Analysis and Optimization

In convex analysis, logarithmically convex functions play a key role in ensuring the existence and uniqueness of minimizers in variational problems. Specifically, if a positive function $f$ is strictly logarithmically convex, then $\log f$ is strictly convex, which implies that $f$ attains a unique global minimizer over convex sets where it is coercive, as the minimizer of $f$ coincides with that of $\log f$ due to the monotonicity of the exponential function. This property is particularly useful in problems involving entropy maximization or likelihood functions, where log-convexity guarantees a single optimal solution without additional regularity assumptions. In optimization, logarithmically convex barrier functions are central to interior-point methods, facilitating polynomial-time convergence for convex programs. For instance, the universal barrier function introduced by Nesterov and Nemirovskii, defined as the negative logarithm of the characteristic function of a convex cone, is self-concordant, enabling efficient Newton steps while staying within the interior of the feasible set.[17] In semidefinite programming, the log-determinant barrier $-\log \det X$ for positive semidefinite matrices $X$ provides a smooth approximation to the feasible region and ensures theoretical convergence rates of $O(\sqrt{\nu} \log(1/\epsilon))$, where $\nu$ is the barrier parameter and $\epsilon$ is the desired accuracy.[18] Log-convexity also underpins uniqueness theorems in functional analysis, such as the Bohr–Mollerup theorem, which characterizes the gamma function as the unique positive, logarithmically convex solution to the functional equation $f(x+1) = x f(x)$ with $f(1) = 1$.[19] This result relies on the log-convexity condition to exclude other interpolants of the factorial, leveraging properties like the three-point inequality for $\log f$ to establish uniqueness.

Furthermore, log-convexity enhances classical inequalities when integrated with convexity properties. For example, for a log-convex function $f > 0$, the inequality $f(\theta x + (1-\theta) y) \leq f(x)^\theta f(y)^{1-\theta}$ provides a weighted geometric mean upper bound, which strengthens the arithmetic mean-geometric mean (AM-GM) inequality in product forms and refines Jensen-type bounds for composite functions.[20] Such refinements are applied in bounding error terms in approximation theory and optimization duals.[20]

In Probability and Statistics

Log-convex densities play a role in modeling heavy-tailed distributions in probability and statistics, where extreme values occur with higher probability than in light-tailed cases. For example, the Pareto distribution possesses a log-convex density function, which facilitates the analysis of phenomena like income inequality or insurance claims with power-law tails. Unlike log-concave densities, which guarantee unimodality and promote concentration around a mode, log-convex densities tend to allocate more probability mass toward the boundaries of their support, enabling representations of bimodal or dispersed behaviors in statistical models.[21] The moment generating function $ M(t) = \mathbb{E}[e^{tX}] $ of any random variable $ X $ is logarithmically convex as a function of $ t $, a property arising from its representation as a convex combination of exponential terms or via Hölder's inequality applied to the expectation. This implies that the cumulant generating function $ K(t) = \log M(t) $ is convex, providing a foundation for deriving moment inequalities and applying large deviation principles in statistical inference, such as bounding tail probabilities.[22] In reliability engineering, a logarithmically convex survival function $ \bar{F}(t) $ characterizes distributions with decreasing failure rates (DFR), suitable for systems exhibiting improving reliability over time, such as certain repairable components. Specifically, if the density $ f $ is log-convex and vanishes at the upper endpoint of its support, the survival function inherits log-convexity, ensuring the hazard rate $ h(t) = f(t)/\bar{F}(t) $ is nonincreasing.[21][23] For statistical estimation, the log-convexity of the moment generating function in exponential families ensures the convexity of the cumulant generating function, which underpins the concavity of the log-likelihood and thus the convexity of the negative log-likelihood as a loss function in maximum likelihood estimation. This property guarantees a unique global maximum for the likelihood in natural parameter space, simplifying optimization for families like the gamma or inverse Gaussian distributions with appropriate parameterizations.[24]
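The convexity of the cumulant generating function can be checked numerically for a simple distribution. The sketch below (an illustration added here; the Bernoulli parameter and sample grid are arbitrary choices) tests midpoint convexity of $K(t) = \log M(t)$ for a Bernoulli($p$) variable, whose MGF is $M(t) = 1 - p + p e^t$:

```python
import math

# Cumulant generating function of a Bernoulli(p) random variable:
# K(t) = log M(t) with M(t) = (1 - p) + p * exp(t).
p = 0.3

def K(t):
    return math.log(1 - p + p * math.exp(t))

def midpoint_convex(s, t):
    return K((s + t) / 2) <= (K(s) + K(t)) / 2 + 1e-12

ts = [-2.0, -0.5, 0.0, 1.0, 3.0]
ok = all(midpoint_convex(s, t) for s in ts for t in ts)
print(ok)  # True
```

In fact $K''(t)$ equals the variance of the exponentially tilted distribution, which is why $K$ is convex for every random variable with a finite MGF.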