Logarithmically convex function
from Wikipedia

In mathematics, a function f is logarithmically convex or superconvex[1] if ${\log} \circ f$, the composition of the logarithm with f, is itself a convex function.

Definition


Let X be a convex subset of a real vector space, and let $f : X \to \mathbb{R}$ be a function taking non-negative values. Then f is:

  • Logarithmically convex if ${\log} \circ f$ is convex, and
  • Strictly logarithmically convex if ${\log} \circ f$ is strictly convex.

Here we interpret $\log 0$ as $-\infty$.

Explicitly, f is logarithmically convex if and only if, for all $x_1, x_2 \in X$ and all $t \in [0, 1]$, the two following equivalent conditions hold:

$\log f(tx_1 + (1-t)x_2) \le t \log f(x_1) + (1-t) \log f(x_2),$

$f(tx_1 + (1-t)x_2) \le f(x_1)^t f(x_2)^{1-t}.$

Similarly, f is strictly logarithmically convex if and only if, in the above two expressions, strict inequality holds for all $t \in (0, 1)$.

The above definition permits f to be zero, but if f is logarithmically convex and vanishes anywhere in X, then it vanishes everywhere in the interior of X.
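The two equivalent conditions above can be spot-checked numerically. The following sketch uses an illustrative function of our choosing, $f(x) = e^{x^2}$ (log-convex since $\log f(x) = x^2$ is convex), not an example from the text:

```python
import math

# Illustrative log-convex function: f(x) = exp(x**2), so log f(x) = x**2 is convex.
def f(x):
    return math.exp(x ** 2)

# Test both equivalent conditions on a small grid of points and weights.
for x1 in [-1.0, 0.0, 2.0]:
    for x2 in [-0.5, 1.0, 3.0]:
        for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
            m = t * x1 + (1 - t) * x2
            # log f(t x1 + (1-t) x2) <= t log f(x1) + (1-t) log f(x2)
            assert math.log(f(m)) <= t * math.log(f(x1)) + (1 - t) * math.log(f(x2)) + 1e-12
            # f(t x1 + (1-t) x2) <= f(x1)**t * f(x2)**(1-t)
            assert f(m) <= f(x1) ** t * f(x2) ** (1 - t) * (1 + 1e-12)
```

Both inequalities hold with equality at the endpoints $t = 0$ and $t = 1$, which is why a small tolerance is used.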

Equivalent conditions


If f is a differentiable function defined on an interval $I \subseteq \mathbb{R}$, then f is logarithmically convex if and only if the following condition holds for all x and y in I:

$f(x) \ge f(y) \exp\!\left(\frac{f'(y)}{f(y)}(x - y)\right).$

This is equivalent to the condition that, whenever x and y are in I and x > y,

$\left(\frac{f(x)}{f(y)}\right)^{\frac{1}{x-y}} \ge \exp\!\left(\frac{f'(y)}{f(y)}\right).$

Moreover, f is strictly logarithmically convex if and only if these inequalities are always strict.

If f is twice differentiable, then it is logarithmically convex if and only if, for all x in I,

$f''(x) f(x) \ge f'(x)^2.$

If the inequality is always strict, then f is strictly logarithmically convex. However, the converse is false: it is possible that f is strictly logarithmically convex and that, for some x, we have $f''(x) f(x) = f'(x)^2$. For example, if $f(x) = \exp(x^4)$, then f is strictly logarithmically convex, but $f''(0) f(0) = f'(0)^2 = 0$.
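The borderline example $f(x) = \exp(x^4)$ can be verified numerically with hand-derived derivative formulas (a sketch; the closed forms below were computed by hand and are not from the text):

```python
import math

# f(x) = exp(x**4): strictly log-convex, yet the second-order
# inequality becomes an equality at x = 0.
def f(x):
    return math.exp(x ** 4)

def df(x):   # f'(x) = 4 x**3 exp(x**4)
    return 4 * x ** 3 * math.exp(x ** 4)

def d2f(x):  # f''(x) = (12 x**2 + 16 x**6) exp(x**4)
    return (12 * x ** 2 + 16 * x ** 6) * math.exp(x ** 4)

# f''(x) f(x) - f'(x)**2 simplifies to 12 x**2 exp(2 x**4) >= 0,
# with equality exactly at x = 0.
assert d2f(0.0) * f(0.0) - df(0.0) ** 2 == 0.0
for x in [-1.5, -0.3, 0.7, 2.0]:
    lhs = d2f(x) * f(x) - df(x) ** 2
    assert lhs > 0
    assert abs(lhs - 12 * x ** 2 * math.exp(2 * x ** 4)) <= 1e-6 * lhs
```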

Furthermore, $f$ is logarithmically convex if and only if $e^{\alpha x} f(x)$ is convex for all $\alpha \in \mathbb{R}$.[2][3]

Sufficient conditions


If $f_1, \ldots, f_n$ are logarithmically convex, and if $w_1, \ldots, w_n$ are non-negative real numbers, then $f_1^{w_1} \cdots f_n^{w_n}$ is logarithmically convex.

If $\{f_i\}_{i \in I}$ is any family of logarithmically convex functions, then $g = \sup_{i \in I} f_i$ is logarithmically convex.

If $f : X \to I \subseteq \mathbb{R}$ is convex and $g : I \to \mathbb{R}_{\ge 0}$ is logarithmically convex and non-decreasing, then $g \circ f$ is logarithmically convex.
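The weighted-product rule can be illustrated numerically. The functions below are our own choices, $f_1(x) = e^{x^2}$ and $f_2(x) = 1/x$ on $(0, \infty)$, both log-convex:

```python
import math

def f1(x):
    return math.exp(x ** 2)   # log f1 = x**2, convex

def f2(x):
    return 1.0 / x            # log f2 = -log x, convex on (0, inf)

w1, w2 = 0.5, 2.0             # non-negative weights

def g(x):
    return f1(x) ** w1 * f2(x) ** w2

# Midpoint convexity test for log g on sample pairs in (0, inf).
for x, y in [(0.2, 1.0), (0.5, 3.0), (1.0, 4.0)]:
    mid = 0.5 * (x + y)
    assert math.log(g(mid)) <= 0.5 * (math.log(g(x)) + math.log(g(y))) + 1e-12
```

The check passes because $\log g = w_1 \log f_1 + w_2 \log f_2$ is a non-negative combination of convex functions.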

Properties


A logarithmically convex function f is a convex function since it is the composite of the increasing convex function $\exp$ and the function ${\log} \circ f$, which is by definition convex. However, being logarithmically convex is a strictly stronger property than being convex. For example, the squaring function $f(x) = x^2$ is convex, but its logarithm $\log f(x) = 2 \log |x|$ is not. Therefore the squaring function is not logarithmically convex.
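The failure of convexity for $\log(x^2)$ can be seen with a single midpoint (a numerical sketch):

```python
import math

# log(x**2) = 2 log|x| is concave on (0, inf), so midpoint convexity fails.
def log_sq(x):
    return math.log(x ** 2)

x, y = 1.0, 9.0
mid = 0.5 * (x + y)                   # 5.0
lhs = log_sq(mid)                     # log 25
rhs = 0.5 * (log_sq(x) + log_sq(y))   # (log 1 + log 81) / 2 = log 9
assert lhs > rhs                      # convexity would require lhs <= rhs
```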

Examples

  • $f(x) = \exp(|x|^p)$ is logarithmically convex when $p \ge 1$ and strictly logarithmically convex when $p > 1$.
  • $f(x) = x^{-p}$ is strictly logarithmically convex on $(0, \infty)$ for all $p > 0$.
  • Euler's gamma function is strictly logarithmically convex when restricted to the positive real numbers. In fact, by the Bohr–Mollerup theorem, this property can be used to characterize Euler's gamma function among the possible extensions of the factorial function to real arguments.
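The log-convexity of the gamma function can be spot-checked numerically with Python's `math.lgamma`, which computes $\log \Gamma(x)$ directly (a sanity check, not a proof):

```python
import math

# Convexity of log Gamma is tested at a few sample points and weights.
for x, y in [(0.5, 4.0), (1.0, 10.0), (2.5, 3.5)]:
    for t in [0.25, 0.5, 0.75]:
        mid = t * x + (1 - t) * y
        assert math.lgamma(mid) <= t * math.lgamma(x) + (1 - t) * math.lgamma(y) + 1e-12
```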

from Grokipedia
A logarithmically convex function, also called a log-convex function, is a function $f: I \to (0, \infty)$ defined on an interval $I \subseteq \mathbb{R}$ such that $\log f$ is convex on $I$. This is equivalent to the inequality $f(tx + (1-t)y) \leq f(x)^t f(y)^{1-t}$ holding for all $x, y \in I$ and $t \in [0, 1]$. Logarithmically convex functions form an important subclass of convex functions, as the property implies ordinary convexity via the arithmetic-geometric mean inequality, though the converse does not hold. Key properties include closure under pointwise products and sums, meaning the product or sum of two log-convex functions is again log-convex. They also satisfy various integral inequalities, such as bounds on averages that refine the Hermite-Hadamard inequality for convex functions. For twice-differentiable functions, log-convexity is characterized by the second derivative satisfying $f(x) f''(x) \geq [f'(x)]^2$. Prominent examples include the exponential function $f(x) = e^{kx}$ for any real $k$, which is log-convex, and the gamma function $\Gamma(x)$ for $x > 0$, whose log-convexity is a cornerstone of the Bohr-Mollerup theorem uniquely characterizing $\Gamma$. Log-convexity arises in diverse applications, such as bounding remainders in Taylor expansions, and in functional analysis, notably in extensions of Orlicz space theory, which is built on convex modular functions introduced by Orlicz in 1936. More generally, it features in inequalities like the log-Minkowski inequality extended to Orlicz settings and in the study of monotonicity properties of special functions.

Definition and Characterizations

Formal Definition

A logarithmically convex function, also known as a log-convex function, is defined on a convex subset $X$ of a real vector space, where $f: X \to (0, \infty)$ maps to positive real numbers, ensuring that the logarithm is well-defined. The function $f$ is logarithmically convex if $\log f: X \to \mathbb{R}$ is a convex function. This convexity of $\log f$ implies the defining inequality: for all $x, y \in X$ and $t \in [0, 1]$,

$f(tx + (1-t)y) \leq f(x)^t f(y)^{1-t}.$

This follows directly from the definition of convexity applied to $\log f$, since $\log f(tx + (1-t)y) \leq t \log f(x) + (1-t) \log f(y)$, and exponentiating both sides preserves the inequality due to the monotonicity of the exponential function. The function $f$ is strictly logarithmically convex if the inequality is strict for all $t \in (0, 1)$ and $x \neq y$, which occurs precisely when $\log f$ is strictly convex. This framework motivates the study of logarithmically convex functions as those exhibiting convexity in a multiplicative sense, transforming the additive convexity of $\log f$ into the geometric-mean-like bound above.

Equivalent Conditions

A logarithmically convex function $f > 0$ on an open interval $I \subseteq \mathbb{R}$ admits equivalent characterizations under smoothness assumptions, leveraging the fact that $f$ is logarithmically convex if and only if $g = \log f$ is convex on $I$. For a once-differentiable $f$, this equivalence yields the first-order condition that $g$ lies above its tangents:

$\log f(x) \geq \log f(y) + \frac{f'(y)}{f(y)} (x - y)$ for all $x, y \in I$.

To derive this from the definition, note that convexity of $g$ requires $g(x) \geq g(y) + g'(y)(x - y)$ for all $x, y \in I$, where $g'(y) = f'(y)/f(y)$; substituting $g = \log f$ directly gives the inequality, which is both necessary and sufficient for logarithmic convexity under the once-differentiable assumption. For a twice-differentiable $f$, convexity of $g$ is equivalent to its second derivative being nonnegative everywhere:

$\frac{d^2}{dx^2} \log f(x) \geq 0$ for all $x \in I$.

Expanding this yields

$f''(x) f(x) - [f'(x)]^2 \geq 0.$

The derivation follows by direct computation: $g'(x) = f'(x)/f(x)$ and $g''(x) = \bigl[f''(x) f(x) - [f'(x)]^2\bigr] / [f(x)]^2$, so $g''(x) \geq 0$ holds if and only if the numerator is nonnegative (since $f > 0$); this condition is necessary and sufficient for logarithmic convexity under twice differentiability.
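For the illustrative choice $f(x) = e^{x^2}$ (our example, not from the text), the first-order tangent condition reduces to $(x - y)^2 \ge 0$ and can be checked directly:

```python
# For f(x) = exp(x**2): log f(x) = x**2 and f'(y)/f(y) = 2*y, so the
# tangent condition  log f(x) >= log f(y) + (f'(y)/f(y))*(x - y)
# reduces to x**2 - y**2 - 2*y*(x - y) = (x - y)**2 >= 0.
def logf(x):
    return x ** 2

def dlogf(y):
    return 2 * y

for x in [-2.0, -0.5, 0.0, 1.5, 3.0]:
    for y in [-1.0, 0.5, 2.0]:
        assert logf(x) >= logf(y) + dlogf(y) * (x - y)
```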

Properties

Fundamental Properties

A logarithmically convex function $f: I \to (0, \infty)$, where $I$ is an interval of $\mathbb{R}$, satisfies the fundamental inequality $f(\lambda x + (1-\lambda) y) \leq f(x)^\lambda f(y)^{1-\lambda}$ for all $x, y \in I$ and $\lambda \in [0, 1]$. This quasi-multiplicative property arises directly from the convexity of $\log f$ and distinguishes log-convex functions by linking additive combinations of arguments to powered products of function values, with equality holding when $\log f$ is affine. Log-convex functions are closed under pointwise addition and multiplication: if $f$ and $g$ are log-convex, then so are $f + g$ and $f \cdot g$. For twice continuously differentiable functions, log-convexity is equivalent to $f(x) f''(x) \geq [f'(x)]^2$ for all $x \in I$. Log-convexity is preserved under pointwise limits of sequences of log-convex functions, provided the limit function remains positive on the domain. Similarly, under suitable conditions, integrals of log-convex functions, such as those appearing in integral representations of special functions, retain log-convexity via the Rogers-Hölder inequality, provided the resulting function is positive. These preservation results ensure that log-convex functions form a stable class under limiting operations common in analysis. If a log-convex function $f$ is increasing on its domain, then $\log f$ is both convex (by definition) and increasing. This combined monotonicity strengthens the utility of log-convex functions in contexts requiring ordered behavior alongside convexity. Constant functions $f(x) = c > 0$ are log-convex, as $\log f(x) = \log c$ is constant and thus convex. Exponential functions $f(x) = e^{ax + b}$ with $a, b \in \mathbb{R}$ are also log-convex, since $\log f(x) = ax + b$ is affine and hence convex; these serve as basic building blocks for constructing more complex log-convex functions via products or limits.

Relation to Convexity

A logarithmically convex function is always convex. To see this, suppose $f$ is positive and logarithmically convex on a convex set, so $\log f$ is convex. For $\theta \in [0, 1]$ and $x, y$ in the domain, convexity applied to $\log f$ yields $\log f(\theta x + (1-\theta) y) \leq \theta \log f(x) + (1-\theta) \log f(y) = \log\!\left(f(x)^\theta f(y)^{1-\theta}\right)$, which implies $f(\theta x + (1-\theta) y) \leq f(x)^\theta f(y)^{1-\theta}$. By the AM-GM inequality, $f(x)^\theta f(y)^{1-\theta} \leq \theta f(x) + (1-\theta) f(y)$, so $f(\theta x + (1-\theta) y) \leq \theta f(x) + (1-\theta) f(y)$, confirming the convexity of $f$. Logarithmic convexity is a strictly stronger condition than convexity. For instance, the function $f(x) = x^2$ for $x > 0$ is convex, as its second derivative $f''(x) = 2 > 0$, but it is not logarithmically convex, because $\log f(x) = 2 \log x$ has second derivative $-2/x^2 < 0$, making $\log f$ concave. In one dimension, every logarithmically convex function on an interval is convex, but the converse does not hold; for example, any quadratic with positive leading coefficient is convex but, as just shown, not logarithmically convex on the positive reals. The dual concept to logarithmic convexity is logarithmic concavity, where a positive function $f$ is logarithmically concave if $-\log f$ is convex (equivalently, $\log f$ is concave).
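The weighted AM-GM step in the argument above can be verified numerically (a spot check on sample values):

```python
# Weighted AM-GM: a**t * b**(1-t) <= t*a + (1-t)*b for a, b > 0 and t in [0, 1].
for a in [0.3, 1.0, 5.0]:
    for b in [0.8, 2.0, 7.5]:
        for t in [0.0, 0.3, 0.5, 0.9, 1.0]:
            assert a ** t * b ** (1 - t) <= t * a + (1 - t) * b + 1e-12
```

Equality holds at the endpoints $t = 0$ and $t = 1$ (and whenever $a = b$), hence the small tolerance.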

Sufficient Conditions

Preservation under Operations

Logarithmically convex functions, also known as log-convex functions, form a class that is closed under certain operations, allowing the construction of new log-convex functions from existing ones. A key preservation property is that the pointwise product of log-convex functions remains log-convex. Specifically, if $f_i: \mathbb{R}^n \to (0, \infty)$ for $i = 1, \dots, k$ are log-convex, then so is their product raised to nonnegative powers, $h(x) = \prod_{i=1}^k f_i(x)^{a_i}$ where $a_i \geq 0$. This follows from the additivity of the logarithm: $\log h(x) = \sum_{i=1}^k a_i \log f_i(x)$, and nonnegative linear combinations of convex functions are convex. Another operation that preserves log-convexity is the pointwise supremum. If $\{f_i: \mathbb{R}^n \to (0, \infty)\}_{i \in I}$ is a family of log-convex functions indexed by some set $I$, then the pointwise supremum $h(x) = \sup_{i \in I} f_i(x)$ is log-convex, provided the supremum is finite and positive on the domain. For families with positive weights, such as a weighted supremum $h(x) = \sup_i \lambda_i f_i(x)$ with $\lambda_i > 0$, log-convexity is similarly preserved, since scaling by $\lambda_i$ merely adds the constant $\log \lambda_i$ in the log domain before taking the sup. The reason is that $\log h(x) = \sup_{i \in I} \log f_i(x)$, and the pointwise supremum of convex functions is convex. Log-convexity is also preserved under specific compositions. If $f: \mathbb{R}^m \to (0, \infty)$ is log-convex and increasing, and $g: \mathbb{R}^n \to \mathbb{R}^m$ is convex with compatible domains, then the composition $h(x) = f(g(x))$ is log-convex. To see this, note that $\log h(x) = (\log f)(g(x))$, where $\log f$ is convex (by definition of log-convexity) and increasing (since $f$ is increasing and the logarithm is increasing), and the composition of a convex increasing function with a convex function yields a convex function. This rule extends the standard convexity preservation under composition to the logarithmic scale. These preservation properties arise fundamentally from the behavior of the logarithm itself, which transforms products into sums, suprema into suprema, and suitable compositions into convex compositions, thereby maintaining the convexity of the logged versions.

Other Criteria

For a positive function $f$ that is once differentiable on an interval, a sufficient condition for log-convexity is that the logarithmic derivative $\frac{f'}{f}$ is non-decreasing. This condition ensures that $\log f$ has a non-decreasing derivative, implying the convexity of $\log f$ by the characterization of convex functions via non-decreasing first derivatives. Another sufficient condition arises from integral representations: a positive function $f$ on $(0, \infty)$ is log-convex if it can be expressed as the Laplace transform of a positive measure, i.e., $f(s) = \int_0^\infty e^{-su} \, d\mu(u)$ for some positive Borel measure $\mu$. This representation guarantees log-convexity because $\log f(s)$ is, up to normalization, the cumulant generating function of the distribution induced by $\mu$, which is convex. A characterization due to Paul Montel (1928) states that a positive function $f$ is log-convex if and only if $e^{\alpha x} f(x)$ is convex for every real $\alpha$.
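Montel's criterion can be spot-checked for an illustrative log-convex function of our choosing, $f(x) = 1/x$ on $(0, \infty)$: for every sampled $\alpha$, the function $x \mapsto e^{\alpha x}/x$ should pass a midpoint convexity test.

```python
import math

def f(x):
    return 1.0 / x   # log f = -log x is convex on (0, inf), so f is log-convex

# For each alpha, x -> exp(alpha*x) * f(x) should be convex (Montel's criterion).
for alpha in [-3.0, -1.0, 0.0, 1.0, 3.0]:
    def g(x, a=alpha):
        return math.exp(a * x) * f(x)
    for x, y in [(0.2, 1.0), (0.5, 2.0), (1.5, 4.0)]:
        mid = 0.5 * (x + y)
        assert g(mid) <= 0.5 * (g(x) + g(y)) + 1e-9
```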

Examples

Classical Examples

One classical example of a logarithmically convex function is the exponential function $f(x) = e^{cx}$, where $c$ is a real constant. To verify, note that $\log f(x) = cx$, which is a linear function and hence convex on $\mathbb{R}$. Another standard example is the power function $f(x) = x^{-p}$ for $x > 0$ and $p > 0$. Here, $\log f(x) = -p \log x$. Since $\log x$ (the natural logarithm) is concave on $(0, \infty)$, $-\log x$ is convex, and multiplication by the positive constant $p$ preserves convexity, making $\log f(x)$ convex on $(0, \infty)$. Functions resembling Gaussians also provide classical instances, such as $f(x) = e^{|x|^p}$ for $p \geq 1$. In this case, $\log f(x) = |x|^p$, which is convex on $\mathbb{R}$ because $p$-th powers of the absolute value with $p \geq 1$ are convex (verifiable by the second derivative test: for $x > 0$, the second derivative of $x^p$ is $p(p-1)x^{p-2} \geq 0$, and symmetry extends this to the whole real line). Thus, $f(x)$ is logarithmically convex on $\mathbb{R}$.

Examples from Special Functions

The gamma function $\Gamma(x)$ is logarithmically convex on $(0, \infty)$. This property is established as part of the Bohr–Mollerup theorem, which characterizes $\Gamma(x)$ as the unique positive function satisfying $\Gamma(1) = 1$, the functional equation $\Gamma(x+1) = x \Gamma(x)$, and logarithmic convexity on $(0, \infty)$. Alternatively, logarithmic convexity follows from the fact that the second derivative of $\log \Gamma(x)$ is the trigamma function $\psi_1(x) = \frac{\Gamma''(x) \Gamma(x) - [\Gamma'(x)]^2}{\Gamma(x)^2} > 0$ for $x > 0$. These characterizations highlight the non-trivial nature of the result, as direct verification for $\Gamma(x)$ relies on its integral representation $\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} \, dt$ and inequalities like Hölder's, rather than algebraic manipulation as for polynomials. The beta function $B(p, q) = \int_0^1 t^{p-1} (1-t)^{q-1} \, dt$ for $p, q > 0$ is logarithmically convex in $p$ for fixed $q > 0$, and symmetrically in $q$ for fixed $p > 0$. This follows from the integral representation, where the kernel $t^{p-1}$ ensures the convexity of $\log B(p, q)$ via differentiation under the integral sign and positivity of the resulting expressions. Such proofs underscore the role of integral forms in establishing log-convexity for these functions, contrasting with elementary cases. For entire functions, the growth-rate function $\log M(r) = \log \max_{|z|=r} |f(z)|$ is convex in $\log r$ for $r > 0$. This result, due to Hadamard via the three circles theorem, arises from the subharmonicity of $\log |f(z)|$ and properties of the maximum modulus for functions analytic in the plane. Verification involves Phragmén–Lindelöf principles or series expansions, making it non-trivial compared to finite-degree polynomials, where growth is explicitly polynomial.
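The log-convexity of $B(p, q)$ in $p$ can be spot-checked numerically through the identity $B(p, q) = \Gamma(p)\Gamma(q)/\Gamma(p+q)$ and Python's `math.lgamma` (a sanity check, not a proof):

```python
import math

# log B(p, q) = log Gamma(p) + log Gamma(q) - log Gamma(p + q)
def log_beta(p, q):
    return math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)

q = 2.5
for p1, p2 in [(0.5, 3.0), (1.0, 6.0), (2.0, 9.0)]:
    mid = 0.5 * (p1 + p2)
    # Midpoint convexity of log B(., q) at the sampled pairs.
    assert log_beta(mid, q) <= 0.5 * (log_beta(p1, q) + log_beta(p2, q)) + 1e-12
```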

Applications

In Analysis and Optimization

In analysis, logarithmically convex functions play a key role in ensuring the existence and uniqueness of minimizers in variational problems. Specifically, if a positive function $f$ is strictly logarithmically convex, then $\log f$ is strictly convex, which implies that $f$ attains a unique global minimizer over convex sets where it is coercive, as the minimizer of $f$ coincides with that of $\log f$ due to the monotonicity of the logarithm. This property is particularly useful in problems involving likelihood maximization, where log-convexity guarantees a single optimal solution without additional regularity assumptions. In optimization, logarithmically convex barrier functions are central to interior-point methods, facilitating polynomial-time convergence for convex programs. For instance, the universal barrier function introduced by Nesterov and Nemirovskii, defined as the negative logarithm of the characteristic function of a convex cone, is self-concordant, enabling efficient Newton steps while staying within the interior of the feasible set. In semidefinite programming, the log-determinant barrier $-\log \det X$ for positive definite matrices $X$ provides a smooth approximation to the feasible region and yields theoretical convergence rates of $O(\sqrt{\nu} \log(1/\epsilon))$ iterations, where $\nu$ is the barrier parameter.
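A minimal one-dimensional sketch of the barrier idea described above (our simplification, not the semidefinite case): minimizing $cx$ subject to $x > 0$ via the barrier objective $cx - (1/t)\log x$, whose minimizer $x^*(t) = 1/(tc)$ approaches the constrained boundary as the barrier parameter $t$ grows.

```python
import math

c = 2.0   # linear objective: minimize c*x subject to x > 0 (illustrative)

def barrier_obj(x, t):
    # barrier objective: c*x - (1/t) * log(x); the log term keeps x > 0
    return c * x - (1.0 / t) * math.log(x)

minimizers = []
for t in [1.0, 10.0, 100.0]:
    x_star = 1.0 / (t * c)          # stationary point: c - 1/(t*x) = 0
    grad = c - 1.0 / (t * x_star)   # gradient of the barrier objective
    assert abs(grad) < 1e-12
    minimizers.append(x_star)

# As t grows, the barrier minimizers decrease toward the constrained optimum x -> 0.
assert minimizers == sorted(minimizers, reverse=True)
```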