Support function

from Wikipedia

In mathematics, the support function $h_A$ of a non-empty closed convex set $A$ in $\mathbb{R}^n$ describes the (signed) distances of supporting hyperplanes of $A$ from the origin. The support function is a convex function on $\mathbb{R}^n$. Any non-empty closed convex set $A$ is uniquely determined by $h_A$. Furthermore, the support function, as a function of the set $A$, is compatible with many natural geometric operations, like scaling, translation, rotation and Minkowski addition. Due to these properties, the support function is one of the most central basic concepts in convex geometry.

Definition


The support function of a non-empty closed convex set $A$ in $\mathbb{R}^n$ is given by

$$h_A(x) = \sup\{\, \langle a, x\rangle : a \in A \,\}, \quad x \in \mathbb{R}^n;$$

see [1] [2] [3]. Its interpretation is most intuitive when $x$ is a unit vector: by definition, $A$ is contained in the closed half space

$$H^-(x) = \{\, y \in \mathbb{R}^n : \langle y, x\rangle \le h_A(x) \,\}$$

and there is at least one point of $A$ in the boundary

$$H(x) = \{\, y \in \mathbb{R}^n : \langle y, x\rangle = h_A(x) \,\}$$

of this half space. The hyperplane $H(x)$ is therefore called a supporting hyperplane with exterior (or outer) unit normal vector $x$. The word exterior is important here, as the orientation of $x$ plays a role: the set $H(x)$ is in general different from $H(-x)$. Now $h_A(x)$ is the (signed) distance of $H(x)$ from the origin.

Examples


The support function of a singleton $A = \{a\}$ is $h_A(x) = \langle a, x\rangle$.

The support function of the Euclidean unit ball $B$ is $h_B(x) = \|x\|_2$, where $\|\cdot\|_2$ is the 2-norm.

If $A$ is a line segment through the origin with endpoints $-a$ and $a$, then $h_A(x) = |\langle a, x\rangle|$.
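As a quick numerical illustration of these examples, the sketch below (an ad hoc helper, not part of the article) evaluates the supremum over a finite sample of each set and compares against the closed forms above:

```python
import numpy as np

def support(points, x):
    # h_A(x) = sup_{a in A} <a, x>, evaluated over a finite sample of A
    return max(float(np.dot(a, x)) for a in points)

a = np.array([2.0, -1.0])
x = np.array([0.6, 0.8])  # a unit vector

# Singleton {a}: h(x) = <a, x> = 2*0.6 - 1*0.8 = 0.4
print(support([a], x))        # 0.4

# Segment with endpoints -a and a: h(x) = |<a, x>| = 0.4
print(support([a, -a], x))    # 0.4

# Unit ball, sampled on the unit circle: h(x) ≈ ||x||_2 = 1
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(round(support(circle, x), 4))  # 1.0
```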

Properties


As a function of x


The support function of a compact nonempty convex set is real valued and continuous, but if the set is closed and unbounded, its support function is extended real valued (it takes the value $+\infty$). As any nonempty closed convex set is the intersection of its supporting half spaces, the function $h_A$ determines $A$ uniquely. This can be used to describe certain geometric properties of convex sets analytically. For instance, a set $A$ is point symmetric with respect to the origin if and only if $h_A$ is an even function.

In general, the support function is not differentiable. However, directional derivatives exist and yield support functions of support sets. If $A$ is compact and convex, and $h_A'(u; x)$ denotes the directional derivative of $h_A$ at $u \ne 0$ in direction $x$, we have

$$h_A'(u; x) = h_{A \cap H(u)}(x), \quad x \in \mathbb{R}^n.$$

Here $H(u)$ is the supporting hyperplane of $A$ with exterior normal vector $u$, defined above. If $A \cap H(u)$ is a singleton $\{y\}$, say, it follows that the support function is differentiable at $u$ and its gradient coincides with $y$. Conversely, if $h_A$ is differentiable at $u$, then $A \cap H(u)$ is a singleton. Hence $h_A$ is differentiable at all points $u \ne 0$ if and only if $A$ is strictly convex (the boundary of $A$ does not contain any line segments).

More generally, when $A$ is convex and closed then for any $x \ne 0$,

$$A \cap H(x) = \partial h_A(x),$$

where $\partial h_A(x)$ denotes the set of subgradients of $h_A$ at $x$.

It follows directly from its definition that the support function is positive homogeneous:

$$h_A(\lambda x) = \lambda\, h_A(x), \quad \lambda \ge 0,\ x \in \mathbb{R}^n,$$

and subadditive:

$$h_A(x + y) \le h_A(x) + h_A(y), \quad x, y \in \mathbb{R}^n.$$

It follows that $h_A$ is a convex function. It is crucial in convex geometry that these properties characterize support functions: any positive homogeneous, convex, real valued function on $\mathbb{R}^n$ is the support function of a nonempty compact convex set. Several proofs are known,[3] one using the fact that the Legendre transform of a positive homogeneous, convex, real valued function is the (convex) indicator function of a compact convex set.
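These properties are easy to sanity-check numerically. The sketch below (a toy verification, not from the article's sources) tests positive homogeneity and subadditivity for the support function of a random finite point set:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))  # a finite set; its support function equals that of its convex hull

def h(x):
    # h_A(x) = max_{a in A} <a, x> for the finite set A
    return float(np.max(A @ x))

x, y = rng.normal(size=3), rng.normal(size=3)
lam = 2.7

# Positive homogeneity: h(λx) = λ h(x) for λ ≥ 0
assert abs(h(lam * x) - lam * h(x)) < 1e-9

# Subadditivity: h(x + y) ≤ h(x) + h(y)
assert h(x + y) <= h(x) + h(y) + 1e-12

print("sublinearity checks passed")
```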

Many authors restrict the support function to the Euclidean unit sphere and consider it as a function on $S^{n-1}$. The homogeneity property shows that this restriction determines the support function on $\mathbb{R}^n$, as defined above.

As a function of A


The support functions of a dilated or translated set are closely related to the original set $A$:

$$h_{\lambda A}(x) = \lambda\, h_A(x), \quad \lambda \ge 0,$$

and

$$h_{A + b}(x) = h_A(x) + \langle b, x\rangle.$$

The latter generalises to

$$h_{A + B}(x) = h_A(x) + h_B(x),$$

where $A + B$ denotes the Minkowski sum:

$$A + B := \{\, a + b \mid a \in A,\ b \in B \,\}.$$

The Hausdorff distance $d_H(A, B)$ of two nonempty compact convex sets $A$ and $B$ can be expressed in terms of support functions,

$$d_H(A, B) = \| h_A - h_B \|_\infty,$$

where, on the right hand side, the uniform norm on the unit sphere is used.
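As an illustrative check (a toy computation, not from the article): the squares $[-1,1]^2$ and $[-2,2]^2$ have Hausdorff distance $\sqrt{2}$, attained at the corners, and the support-function formula reproduces it:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 200001)
u = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # unit directions

# h(u) = r * (|u1| + |u2|) is the support function of the square [-r, r]^2
def h_square(u, r):
    return r * np.abs(u).sum(axis=1)

# d_H(A, B) = sup_{||u|| = 1} |h_A(u) - h_B(u)|
d = np.max(np.abs(h_square(u, 1.0) - h_square(u, 2.0)))
print(round(float(d), 4))  # 1.4142, i.e. sqrt(2)
```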

The properties of the support function as a function of the set $A$ are sometimes summarized in saying that the mapping $A \mapsto h_A$ maps the family of non-empty compact convex sets to the cone of all real-valued continuous functions on the sphere whose positive homogeneous extension is convex. Abusing terminology slightly, this mapping is sometimes called linear, as it respects Minkowski addition, although it is not defined on a linear space, but rather on an (abstract) convex cone of nonempty compact convex sets. The mapping $A \mapsto h_A$ is an isometry between this cone, endowed with the Hausdorff metric, and a subcone of the family of continuous functions on $S^{n-1}$ with the uniform norm.

Variants


In contrast to the above, support functions are sometimes defined on the boundary of $A$ rather than on $S^{n-1}$, under the assumption that there exists a unique exterior unit normal at each boundary point. Convexity is not needed for the definition. For an oriented regular surface $M$ with a unit normal vector $N$ defined everywhere on its surface, the support function is then defined by

$$x \mapsto \langle x, N(x)\rangle, \quad x \in M.$$

In other words, for any $x \in M$, this support function gives the signed distance of the unique hyperplane that touches $M$ at $x$.

from Grokipedia
In convex analysis, the support function of a nonempty convex set $K$ in a Euclidean space is defined as $h_K(x) = \sup_{y \in K} \langle y, x \rangle$, where $\langle \cdot, \cdot \rangle$ denotes the inner product and $x$ is a direction vector.[1] This function, always convex and positively homogeneous of degree one, provides a dual representation of the set $K$ as the intersection of supporting half-spaces and uniquely determines compact convex sets.[2] Introduced by Hermann Minkowski at the end of the 19th century, the support function has become a foundational tool for characterizing geometric properties of convex bodies.[2]

The support function exhibits several key properties that make it indispensable in mathematical analysis. It is lower semicontinuous and subadditive, with the effective domain forming a convex cone.[3] For instance, the support function of the unit ball corresponds to the dual norm, while for a convex cone it aligns with indicator functions in optimization contexts.[1] Moreover, the subdifferential of the support function at a point $p$ consists exactly of the points in $K$ that achieve the supremum, linking it directly to maximization problems.[3] These attributes allow the support function to test set inclusions: $K \subseteq L$ if and only if $h_K(x) \leq h_L(x)$ for all $x$.[2]

Beyond pure geometry, support functions find broad applications in optimization and economics. In convex optimization, they facilitate dual formulations and the analysis of feasible sets, such as second-order cones: for the cone $\{(x,t) : t \geq \|x\|_2\}$, the support function at $(u,v)$ equals $0$ when $\|u\|_2 \leq -v$ and $+\infty$ otherwise.[1] In production theory, the support function serves as the profit function $\pi_A(p) = \sup_{y \in A} p \cdot y$ over a technology set $A$, enabling theorems like Hotelling's lemma for comparative statics under differentiability.[3] Support functions also underpin the separating hyperplane theorem, ensuring that for a closed convex set $A$ and a point outside it, a hyperplane exists with the support function bounding the separation.[3] Extensions to non-compact sets and abstract spaces further broaden their utility in modern research.[2]

Fundamentals

Definition

In convex analysis, the support function is fundamentally associated with convex sets. A subset $A$ of a real vector space is convex if, for any $x, y \in A$ and $\lambda \in [0, 1]$, the convex combination $\lambda x + (1 - \lambda) y \in A$.[1] Consider a real normed vector space $X$, whose dual space $X^*$ consists of all continuous linear functionals $x^*: X \to \mathbb{R}$, equipped with the duality pairing $\langle x^*, x \rangle = x^*(x)$.[4] For a non-empty subset $A \subseteq X$, the support function $h_A: X^* \to \mathbb{R} \cup \{+\infty\}$ is defined by

$$h_A(x^*) = \sup_{x \in A} \langle x^*, x \rangle.$$

[2] This extended-real-valued function encodes the maximal projection of points in $A$ onto the direction specified by $x^*$. The support function $h_A$ is finite-valued if $A$ is bounded, as the supremum is then bounded by the dual norm of $x^*$ times $\sup_{x \in A} \|x\|$; otherwise, it may attain $+\infty$ when $A$ is unbounded in the direction of $x^*$.[2] In the specific setting of Euclidean space $\mathbb{R}^n$ with the standard dot product, the dual space identifies with $\mathbb{R}^n$ itself, and the support function takes the form

$$h_A(x) = \sup_{y \in A} x \cdot y, \quad x \in \mathbb{R}^n.$$

[5] The level sets where this supremum is attained correspond to supporting hyperplanes of $A$.[5]

Geometric Interpretation

The support function $h_A(x)$ of a nonempty convex set $A$ in a Euclidean space geometrically represents the maximum projection of points in $A$ onto the direction given by the unit vector $x / \|x\|$, scaled by the norm $\|x\|$.[6] Specifically, for $x \neq 0$, $h_A(x) = \sup_{y \in A} \langle x, y \rangle$ attains its value at points on the boundary of $A$ that lie farthest in the direction of $x$.[7] This maximum is intimately connected to supporting hyperplanes: the hyperplane $\{ y \mid \langle x, y \rangle = h_A(x) \}$ supports $A$ at the points where the supremum is achieved, meaning $A$ lies entirely on one side of it.[6] If $\|x\| = 1$ and $h_A(x) < \infty$, then $h_A(x)$ equals the signed distance from the origin to this supporting hyperplane with outward normal $x$. If $h_A(x) = +\infty$, the set is unbounded in the direction $x$, and there is no supporting hyperplane in that direction.[6][7]

In two dimensions, for a convex body $A$ containing the origin in its interior, the support function $h_A(\theta)$ (where $\theta$ parameterizes the unit circle) gives the perpendicular distance from the origin to the supporting line perpendicular to the direction $\theta$, tracing the boundary of $A$ as a polar plot of these distances.[6] Every closed convex set $A$ can be recovered from its support function as

$$A = \bigcap_{x} \{\, y \mid \langle x, y \rangle \leq h_A(x) \,\},$$

the intersection of all its supporting half-spaces (directions with $h_A(x) = +\infty$ impose no constraint).[6][3]
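To make the picture concrete, a small illustrative sketch: for the unit disc, the supremum in direction $x$ is attained at $x/\|x\|$, and every sampled point of the disc satisfies the supporting half-space inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([3.0, 4.0])

# Sample the unit disc uniformly
t = rng.uniform(0.0, 2.0 * np.pi, 200000)
r = np.sqrt(rng.uniform(0.0, 1.0, 200000))
disc = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)

h = np.linalg.norm(x)   # h(x) = ||x||_2 = 5 for the unit disc
contact = x / h         # the supremum is attained at x / ||x||

# Every point of the disc lies in the half-space <y, x> <= h(x):
print(np.max(disc @ x) <= h + 1e-9)      # True
# The sampled maximum approaches h(x) = 5, near the contact point:
print(round(float(np.max(disc @ x)), 1))
```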

Examples

Basic Convex Sets

The support function provides a concrete way to compute the maximum projection of points in a convex set onto a given direction, offering insight into the set's extent in that direction.

Consider the simplest case in one dimension: the closed interval $[-1, 1] \subset \mathbb{R}$, which is convex as the convex hull of its endpoints. The support function is $h_{[-1,1]}(x) = \sup_{y \in [-1,1]} x y$. To evaluate this, note that the supremum occurs at one of the endpoints depending on the sign of $x$: if $x > 0$, the maximum is at $y = 1$, yielding $x$; if $x < 0$, the maximum is at $y = -1$, yielding $-x$; if $x = 0$, it is $0$. Thus, $h_{[-1,1]}(x) = |x|$, which corresponds to the Euclidean norm on $\mathbb{R}$.[8]

In a general normed vector space $X$ with norm $\|\cdot\|$, the closed unit ball $B = \{ y \in X : \|y\| \leq 1 \}$ is a fundamental convex set, symmetric about the origin. Its support function is $h_B(x) = \sup_{y \in B} \langle x, y \rangle = \|x\|_*$, where $\|\cdot\|_*$ denotes the dual norm defined by $\|x\|_* = \sup_{\|y\| \leq 1} \langle x, y \rangle$; the equality holds by the very definition of the dual norm. For the Euclidean space $\mathbb{R}^n$ with the $\ell_2$-norm, the dual norm is also the $\ell_2$-norm, so the support function of the unit ball simplifies to $h_B(x) = \|x\|_2$. More generally, for a ball of radius $r > 0$ centered at the origin, $B(0, r) = \{ y \in \mathbb{R}^n : \|y\|_2 \leq r \}$, scaling yields $h_{B(0,r)}(x) = r \|x\|_2$; if centered at $x_c$, it becomes $h_{B(x_c, r)}(x) = \langle x, x_c \rangle + r \|x\|_2$.[8]

The standard $(n-1)$-simplex in $\mathbb{R}^n$, defined as $\Delta^{n-1} = \{ y \in \mathbb{R}^n : y \geq 0, \sum_{i=1}^n y_i = 1 \}$, is the convex hull of the standard basis vectors $e_1, \dots, e_n$ and represents the set of probability distributions over $n$ outcomes. Its support function is $h_{\Delta^{n-1}}(x) = \max_{i=1,\dots,n} x_i$, achieved at the vertex $e_i$ where $x_i$ is maximized. This follows from the definition, as the supremum $\sup_{y \in \Delta^{n-1}} \langle x, y \rangle$ is attained at a vertex by convexity, and $\langle x, e_i \rangle = x_i$. For the relaxed simplex $\{ y \in \mathbb{R}^n : y \geq 0, \sum_{i=1}^n y_i \leq 1 \}$, the convex hull of the origin and the standard basis vectors, the support function is $h(x) = \max\left( 0, \max_{i=1,\dots,n} x_i \right)$, achieved at the origin if $\max_i x_i \leq 0$ and at $e_j$ for $j = \arg\max_i x_i$ otherwise.[8]

Half-spaces illustrate how the support function behaves for unbounded convex sets. Consider $H = \{ y \in \mathbb{R}^n : \langle a, y \rangle \leq b \}$ with $a \neq 0$ and $b \in \mathbb{R}$. The support function $h_H(x) = \sup_{y \in H} \langle x, y \rangle$ is finite only if $x$ lies in the polar cone of the recession cone of $H$, which is the ray generated by $a$. Specifically, $h_H(x) = +\infty$ unless $x = \lambda a$ for some $\lambda \geq 0$, in which case $h_H(\lambda a) = \lambda b$. To see this, normalize so $\|a\|_2 = 1$; then for $x = \lambda a$ with $\lambda \geq 0$, the maximum is attained on the bounding hyperplane, yielding $\lambda b$, while any component of $x$ orthogonal to $a$, or a negative multiple of $a$, would allow the supremum to diverge along the unbounded directions of $H$. This reflects the half-space's infinite extent opposite to the normal $a$.[8]
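The half-space case can be checked with an LP solver. The sketch below (illustrative only; it uses SciPy's `linprog`, maximizing by minimizing the negated objective) confirms that $h_H(\lambda a) = \lambda b$ while other directions are unbounded:

```python
import numpy as np
from scipy.optimize import linprog

# H = { y in R^2 : <a, y> <= b }
a = np.array([1.0, 0.0])
b = 3.0

def h_H(x):
    # h_H(x) = sup { <x, y> : <a, y> <= b }, computed as an LP
    res = linprog(-x, A_ub=a.reshape(1, -1), b_ub=[b],
                  bounds=[(None, None)] * 2)
    return np.inf if res.status == 3 else -res.fun  # status 3: unbounded LP

print(h_H(2.0 * a))               # 6.0 = lambda * b with lambda = 2
print(h_H(np.array([0.0, 1.0])))  # inf: direction not on the ray of a
```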

Polytopes and Polyhedra

Polytopes are compact convex sets that admit a finite vertex representation, allowing for an explicit expression of their support function. Specifically, a polytope $P \subset \mathbb{R}^d$ can be expressed as the convex hull of a finite set of vertices $\{v_1, \dots, v_m\}$, so $P = \operatorname{conv}\{v_1, \dots, v_m\}$. The support function then simplifies to

$$h_P(x) = \max_{i=1,\dots,m} \langle x, v_i \rangle,$$

since the supremum over $P$ is attained at one of the extreme points, which coincide with the vertices for polytopes.[9] This representation highlights the piecewise linear nature of $h_P$, with linearity over each cone of the normal fan induced by the facets of $P$.[10]

In the dual facet representation, a polytope $P$ is defined by a finite number of inequalities, $P = \{ y \in \mathbb{R}^d \mid A y \leq b \}$, where $A$ is a matrix with rows corresponding to facet normals $a_j$ and $b$ to the offsets. The support function $h_P(x)$ is the optimal value of the linear program $\sup \{ \langle x, y \rangle \mid A y \leq b \}$, which by strong duality equals

$$h_P(x) = \min \left\{ \sum_{j=1}^k \lambda_j b_j \;\middle|\; \sum_{j=1}^k \lambda_j a_j = x, \; \lambda_j \geq 0 \right\}.$$

This formulation connects the support function directly to linear programming, where the minimizers $\lambda$ correspond to barycentric coordinates on the dual polytope.[8]

A concrete example is the cube $C = [-1,1]^3 \subset \mathbb{R}^3$, whose vertices are all points with coordinates $\pm 1$. The support function is $h_C(x) = |x_1| + |x_2| + |x_3|$, obtained by maximizing $\langle x, v \rangle$ over these vertices, where the optimum aligns each coordinate of $v$ with the sign of the corresponding $x_i$. This equals the $\ell_1$-norm of $x$, reflecting the duality between the $\ell_\infty$-ball (the cube) and the $\ell_1$-norm.[8] In contrast, for the non-symmetric cube $[0,1]^3$, the support function is $h(x) = \sum_{i=1}^3 \max(x_i, 0)$, computed similarly via maximization over its eight vertices.[9]

For unbounded polyhedra, such as $P = \{ y \in \mathbb{R}^d \mid A y \leq b \}$ with a nontrivial recession cone, the support function $h_P(x)$ is infinite in any direction $x$ that has positive inner product with some recession direction, i.e., for $x$ outside the polar cone of the recession cone. In the linear programming formulation, this occurs when the dual problem is infeasible, yielding $h_P(x) = +\infty$; otherwise, it remains finite and given by the dual optimum. The recession cone must therefore be accounted for to determine the directions in which $h_P$ is finite.[8]

Computationally, evaluating the support function of a polytope in vertex representation reduces to a maximum over the finite set of vertices, which is efficient for modest dimensions but scales with the number of vertices. In facet representation, it involves solving the dual LP, often preferable when the number of facets is smaller than the number of vertices. These evaluations underpin algorithms for polytope approximation and optimization.[11]
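A minimal check of the vertex formula (an illustrative sketch): enumerating the vertices of $[-1,1]^3$ and maximizing the inner product reproduces the $\ell_1$-norm, and the non-symmetric cube $[0,1]^3$ gives the sum of positive parts:

```python
import itertools
import numpy as np

# Vertices of the cube [-1, 1]^3
V = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))

def h_P(x):
    # h_P(x) = max_i <x, v_i> over the vertex list
    return float(np.max(V @ x))

x = np.array([2.0, -0.5, 3.0])
print(h_P(x))                # 5.5 = |2| + |-0.5| + |3|, the l1-norm

# Non-symmetric cube [0, 1]^3: h(x) = sum_i max(x_i, 0)
W = np.array(list(itertools.product([0.0, 1.0], repeat=3)))
print(float(np.max(W @ x)))  # 5.0 = 2 + 0 + 3
```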

Properties

Dependence on Direction

The support function $h_A(\mathbf{x}) = \sup_{\mathbf{a} \in A} \langle \mathbf{x}, \mathbf{a} \rangle$ of a nonempty set $A \subseteq \mathbb{R}^n$ is convex as a function of the direction vector $\mathbf{x}$. Specifically, for any $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ and $\lambda \in [0,1]$,

$$h_A(\lambda \mathbf{x} + (1-\lambda) \mathbf{y}) \leq \lambda h_A(\mathbf{x}) + (1-\lambda) h_A(\mathbf{y}).$$

This follows from the fact that $h_A$ is the pointwise supremum of the family of affine (hence convex) functions $\mathbf{x} \mapsto \langle \mathbf{x}, \mathbf{a} \rangle$ for $\mathbf{a} \in A$, and the pointwise supremum of convex functions is convex. The support function is also positively homogeneous: for any $\mathbf{x} \in \mathbb{R}^n$ and $\lambda \geq 0$,

$$h_A(\lambda \mathbf{x}) = \lambda h_A(\mathbf{x}).$$

To see this, note that

$$h_A(\lambda \mathbf{x}) = \sup_{\mathbf{a} \in A} \langle \lambda \mathbf{x}, \mathbf{a} \rangle = \lambda \sup_{\mathbf{a} \in A} \langle \mathbf{x}, \mathbf{a} \rangle = \lambda h_A(\mathbf{x}),$$

as the supremum scales linearly with $\lambda \geq 0$. Positive homogeneity and convexity together imply subadditivity: for any $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$,

$$h_A(\mathbf{x} + \mathbf{y}) \leq h_A(\mathbf{x}) + h_A(\mathbf{y}).$$

This holds because

$$h_A(\mathbf{x} + \mathbf{y}) = \sup_{\mathbf{a} \in A} \left( \langle \mathbf{x}, \mathbf{a} \rangle + \langle \mathbf{y}, \mathbf{a} \rangle \right) \leq \sup_{\mathbf{a} \in A} \langle \mathbf{x}, \mathbf{a} \rangle + \sup_{\mathbf{a} \in A} \langle \mathbf{y}, \mathbf{a} \rangle = h_A(\mathbf{x}) + h_A(\mathbf{y}),$$

using the monotonicity of the supremum. These properties make $h_A$ a sublinear function.

If $A$ is compact, then $h_A$ is Lipschitz continuous with respect to the Euclidean norm on $\mathbb{R}^n$: for any $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$,

$$|h_A(\mathbf{x}) - h_A(\mathbf{y})| \leq \left( \sup_{\mathbf{a} \in A} \|\mathbf{a}\|_2 \right) \|\mathbf{x} - \mathbf{y}\|_2.$$

This bound arises because

$$\left| \sup_{\mathbf{a} \in A} \langle \mathbf{x}, \mathbf{a} \rangle - \sup_{\mathbf{a} \in A} \langle \mathbf{y}, \mathbf{a} \rangle \right| \leq \sup_{\mathbf{a} \in A} |\langle \mathbf{x} - \mathbf{y}, \mathbf{a} \rangle| \leq \|\mathbf{x} - \mathbf{y}\|_2 \sup_{\mathbf{a} \in A} \|\mathbf{a}\|_2,$$

leveraging the compactness of $A$ to ensure the suprema are attained and finite. The epigraph of $h_A$, defined as

$$\operatorname{epi} h_A = \{ (\mathbf{x}, t) \in \mathbb{R}^n \times \mathbb{R} \mid t \geq h_A(\mathbf{x}) \} = \{ (\mathbf{x}, t) \in \mathbb{R}^n \times \mathbb{R} \mid t \geq \langle \mathbf{x}, \mathbf{a} \rangle \ \forall \mathbf{a} \in A \},$$

is a closed convex cone. This representation underscores the convex duality inherent in the support function.
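The Lipschitz bound is straightforward to verify numerically; the following toy sketch samples random direction pairs for a finite (hence compact) point set:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 4))                  # finite set, hence compact
L = float(np.max(np.linalg.norm(A, axis=1)))   # sup_a ||a||_2

def h(x):
    # h_A(x) = max_{a in A} <x, a>
    return float(np.max(A @ x))

for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    # |h(x) - h(y)| <= L * ||x - y||_2
    assert abs(h(x) - h(y)) <= L * np.linalg.norm(x - y) + 1e-9

print("Lipschitz bound holds on all sampled pairs")
```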

Dependence on Set

The support function of a convex set exhibits specific transformation rules under various geometric operations on the set, reflecting its role as a complete functional descriptor of the set's extent in different directions. For scaling, the support function is positively homogeneous in the set argument: if $\lambda \geq 0$, then $h_{\lambda A}(x) = \lambda h_A(x)$ for all $x$.[12] This follows directly from the definition, as $\sup_{y \in \lambda A} \langle x, y \rangle = \lambda \sup_{z \in A} \langle x, z \rangle$. Under translation by a vector $b$, the support function shifts additively: $h_{A + b}(x) = h_A(x) + \langle x, b \rangle$.[12] This arises because $\sup_{y \in A + b} \langle x, y \rangle = \sup_{z \in A} \langle x, z + b \rangle = h_A(x) + \langle x, b \rangle$.

The support function is additive with respect to the Minkowski sum of sets: $h_{A + B}(x) = h_A(x) + h_B(x)$ for convex sets $A$ and $B$.[12] To see this, note that for any $a \in A$ and $b \in B$, $\langle x, a + b \rangle = \langle x, a \rangle + \langle x, b \rangle \leq h_A(x) + h_B(x)$, with equality achievable in the limit by choosing near-supremizers independently, so $\sup_{(a,b) \in A \times B} \langle x, a + b \rangle = h_A(x) + h_B(x)$.

For the intersection of convex sets with $A \cap B$ nonempty, the support function satisfies $h_{A \cap B}(x) \leq \min\{h_A(x), h_B(x)\}$, since $A \cap B \subseteq A$ and $A \cap B \subseteq B$ imply the supremum over the smaller set is at most that over each.[13] Equality can fail in general; it holds under additional conditions, for instance when the points attaining the smaller of the two support values already lie in $A \cap B$.[14]

The support function is invariant under taking the convex hull: $h_{\operatorname{conv} A}(x) = h_A(x)$.[13] This is because a linear functional $\langle x, \cdot \rangle$ has the same supremum over a set as over its convex hull, every convex combination being bounded by the largest value on $A$. For a closed convex set $A$ containing the origin, the support function of the polar set $A^\circ = \{ y \mid \langle y, z \rangle \leq 1 \ \forall z \in A \}$ coincides with the gauge function of $A$: $h_{A^\circ}(x) = \gamma_A(x)$, where $\gamma_A(x) = \inf \{ \lambda > 0 \mid x \in \lambda A \}$.[13] This duality relation, established via polarity, links the two functionals bidirectionally under the given assumptions.[13] The scaling behavior above aligns with the positive homogeneity of the support function in its directional argument.[12]
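These transformation rules can be sanity-checked on finite point sets (a toy sketch; for finite sets the Minkowski sum is just the set of all pairwise sums):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(20, 2))
B = rng.normal(size=(30, 2))
x = rng.normal(size=2)

h = lambda S, x: float(np.max(S @ x))  # support function of a finite set

# Minkowski sum of finite sets: all pairwise sums a + b
AB = (A[:, None, :] + B[None, :, :]).reshape(-1, 2)
assert np.isclose(h(AB, x), h(A, x) + h(B, x))

# Scaling and translation rules
lam, b = 1.8, np.array([0.5, -2.0])
assert np.isclose(h(lam * A, x), lam * h(A, x))
assert np.isclose(h(A + b, x), h(A, x) + float(x @ b))

print("Minkowski, scaling and translation rules verified")
```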

Applications

Convex Optimization

The support function of a convex set $A$ plays a central role in convex optimization through its relation to the Fenchel conjugate of the indicator function $\delta_A$, defined as $\delta_A(x) = 0$ if $x \in A$ and $+\infty$ otherwise. Specifically, the support function $h_A(y) = \sup_{x \in A} \langle y, x \rangle$ is the Fenchel conjugate $\delta_A^*(y)$.[8]

In Lagrangian duality for the problem $\min_{x} f(x)$ subject to $x \in A$, where $f$ is convex, the dual problem can be formulated using the conjugate of $f$ and the support function of $A$: the dual maximizes $-f^*(-y) - h_A(y)$, providing a lower bound on the primal value via weak duality.[8]

Linear maximization over $A$ also drives projection-type algorithms. The oracle $\arg\max_{x \in A} \langle d, x \rangle$ identifies boundary points that guide the search direction in methods like conditional gradient (Frank-Wolfe) descent, where repeated oracle evaluations approximate, for example, the Euclidean projection $\arg\min_{x \in A} \|p - x\|^2$.[15]

In robust optimization, support functions model uncertainty sets $U$, reformulating worst-case constraints tractably. For instance, a constraint $\langle a + P u, x \rangle \leq b$ required for all $u$ in a convex uncertainty set $U$ is equivalent to $\langle a, x \rangle + h_U(P^\top x) \leq b$, where $h_U(v) = \sup_{u \in U} \langle v, u \rangle$ captures the worst-case contribution of the uncertainty.[16]

Numerical methods for convex optimization leverage support function oracles, which provide $h_A(y)$ and $\arg\max_{x \in A} \langle y, x \rangle$, in cutting-plane approaches to approximate feasible sets or solve linear programs over $A$. These oracles generate supporting hyperplanes that refine polyhedral approximations of $A$, enabling efficient convergence in ellipsoid or center-of-gravity methods for set containment or optimization.
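As an illustration of oracle-driven methods, the following minimal Frank-Wolfe sketch (illustrative only, not a production implementation) approximates the Euclidean projection of a point onto the probability simplex using only the linear maximization oracle, which for the simplex simply returns a vertex $e_i$:

```python
import numpy as np

def lmo_simplex(d):
    # Linear maximization oracle: argmax_{x in simplex} <d, x> is the vertex e_i with largest d_i
    v = np.zeros_like(d)
    v[np.argmax(d)] = 1.0
    return v

def project_simplex_fw(p, iters=2000):
    # Frank-Wolfe on f(x) = 0.5 * ||x - p||^2 over the probability simplex
    x = lmo_simplex(np.zeros_like(p))  # start at a vertex
    for k in range(iters):
        grad = x - p
        s = lmo_simplex(-grad)         # oracle call on the linearized problem
        gamma = 2.0 / (k + 2.0)        # standard diminishing step size
        x = x + gamma * (s - x)
    return x

p = np.array([0.9, 0.5, 0.1])
x = project_simplex_fw(p)
print(np.round(x, 2))  # approaches the true projection (0.7, 0.3, 0.0)
```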

Duality and Separation

The support function plays a central role in separation theorems for convex sets. For a nonempty closed convex set $A$ in a Euclidean space and a point $x \notin A$, there exists a direction $y \neq 0$ such that the hyperplane $\{z \mid \langle y, z \rangle = \langle y, x \rangle\}$ strictly separates $x$ from $A$, meaning $h_A(y) < \langle y, x \rangle$.[17] This strict separation follows from the proper separation theorem applied to the sets $\{x\}$ and $A$, where the supporting hyperplane is determined by the supremum over $A$ in the direction $y$.

In convex duality, the support function provides a representation of the polar set and facilitates the bipolar theorem. The polar of a convex set $A$ containing the origin is given by $A^\circ = \{ y \mid h_A(y) \leq 1 \}$, and for any closed convex set $A$ containing the origin, the bipolar theorem states that $A = (A^\circ)^\circ$.[6] This duality recovers $A$ as the intersection of half-spaces $\{ z \mid \langle y, z \rangle \leq 1 \ \forall y \in A^\circ \}$, highlighting the support function's role in dual representations of convex sets.

Set inclusion between closed convex sets can be characterized entirely through their support functions: $A \subseteq B$ if and only if $h_A(x) \leq h_B(x)$ for all directions $x$.[18] The forward direction holds by the definition of the support function as a supremum, while the converse follows from the fact that a closed convex set is the intersection of its supporting half-spaces $\{ z \mid \langle x, z \rangle \leq h(x) \}$, so pointwise smaller support functions yield a smaller intersection.

Support functions, being sublinear (convex and positively homogeneous), enable extensions of linear functionals via the Hahn-Banach theorem. If a linear functional $f$ is defined on a subspace and bounded above by the support function $h_A$ of a convex set $A$, then $f$ extends to a linear functional $g$ on the entire space satisfying $g \leq h_A$ everywhere.[19] This preserves the supremum bounds and is crucial for maintaining convexity constraints in dual problems.

Finally, support functions quantify approximations between convex sets via the Hausdorff distance. For compact convex sets $A$ and $B$, the Hausdorff distance is given by

$$d_H(A, B) = \sup_{\|x\|=1} |h_A(x) - h_B(x)|,$$

measuring the uniform deviation of their support functions over the unit sphere.[20] This equivalence arises because the Hausdorff metric on compact convex sets coincides with the supremum norm on their support functions restricted to the unit sphere.

Variants

Infinite-Dimensional Spaces

In a Banach space $X$, the support function of a nonempty convex subset $A \subseteq X$ is defined as

$$h_A(x^*) = \sup_{x \in A} \langle x^*, x \rangle, \quad x^* \in X^*,$$

where $X^*$ denotes the continuous dual space of $X$ and $\langle \cdot, \cdot \rangle$ is the duality pairing between $X$ and $X^*$.[21] This extends the finite-dimensional notion, but $A$ is usually taken to be weakly closed and convex to ensure the support function is well-behaved, such as being lower semicontinuous in the appropriate topology. If $A$ is bounded, $h_A$ is Lipschitz continuous on $X^*$ with constant equal to the radius of $A$, reflecting the equicontinuity of the family of linear functionals induced by elements of $A$.[1]

In infinite-dimensional spaces, compactness presents significant challenges compared to the finite-dimensional case, where closed bounded convex sets are compact. Bounded closed convex subsets of $X$ are not necessarily weakly compact, so the supremum defining $h_A(x^*)$ may not be attained for some $x^* \in X^*$; that is, there may be no $x \in A$ such that $\langle x^*, x \rangle = h_A(x^*)$. To address this, one often considers the weak* closure of $A$ in the bidual $X^{**}$, as the support function of $A$ coincides with that of its weak* closed convex hull in $X^{**}$, ensuring the representation captures the "completion" of $A$ under the canonical embedding $X \hookrightarrow X^{**}$.[21]

A concrete example arises in the Hilbert space $L^2([0,1])$, where the closed unit ball $B = \{ f \in L^2([0,1]) : \|f\|_{L^2} \leq 1 \}$ has support function $h_B(g) = \|g\|_{L^2}$ for $g \in (L^2)^* \cong L^2([0,1])$, by the Riesz representation theorem and the Cauchy-Schwarz inequality. This illustrates how the support function recovers the dual norm in self-dual spaces like Hilbert spaces.[1]

The role of reflexivity is pivotal: in a reflexive Banach space, where $X^{**} = X$ and the canonical embedding is surjective, bounded closed convex sets are weakly compact, so the supremum defining $h_A$ is attained. In non-reflexive spaces, such as $c_0$ or $L^1([0,1])$, one must instead employ the weak* topology on $X^{**}$ to analyze $h_A$, as attainment may fail. In functional analysis, support functions further characterize weak compactness: by James' theorem and its generalizations, a weakly closed bounded convex set $A \subseteq X$ is weakly compact if and only if every $x^* \in X^*$ attains its supremum on $A$, i.e., $\sup_{x \in A} \langle x^*, x \rangle$ is achieved.[22][23]

Non-Convex Extensions

The support function can be formally defined for a non-convex set $A$ in the same manner as for convex sets: $h_A(x) = \sup_{y \in A} \langle x, y \rangle$. However, unlike the convex case, where the function fully characterizes the set, for non-convex $A$ the support function equals that of the convex hull, $h_A(x) = h_{\operatorname{conv} A}(x)$, thereby losing information about the non-convex structure.[24] This equivalence arises because the supremum of a linear functional over a set and over its convex hull coincide, limiting the standard support function's utility in capturing irregularities in non-convex geometries.[25]

To address non-convexity more directly, extensions incorporate generalized notions that preserve directional information without convexification. One such variant is the lower generalized support function for a non-convex constraint set $S$, defined as

$$\hat{\sigma}_S(\lambda) = \liminf_{\tilde{\lambda} \to \lambda} \inf_u \{ \tilde{\lambda}^T u \mid \tilde{\lambda} \in N_S(u) \},$$

where $N_S(u)$ denotes the limiting normal cone at $u$. This reduces to the standard support function $\sigma_S(\lambda)$ when $S$ is convex, but provides a tighter lower bound for non-convex $S$, enabling second-order optimality conditions in non-convex optimization problems under assumptions like directional metric subregularity.[26] Similarly, for set-constrained problems $\min f(x)$ subject to $g(x) \in \Lambda$ with non-convex $\Lambda$, an adapted form $\hat{\sigma}_{S,A}(\lambda)$ restricts attention to points in a subset $A$, facilitating analysis of local minima.[26]

In nonsmooth non-convex optimization, the Clarke subdifferential $\partial^C f(x)$ of a locally Lipschitz function $f$ plays a key role, with its support function relating directly to directional behavior:

$$\sigma_{\partial^C f(x)}(d) = \max_{\xi \in \partial^C f(x)} \langle \xi, d \rangle = f^o(x; d),$$

where $f^o(x; d)$ is the Clarke directional derivative. This connection extends the support function concept to subdifferentials of non-convex functions, allowing separation theorems and stationarity conditions via convex approximations of the subdifferential set.[27] For instance, in finite dimensions, the Clarke subdifferential at the origin of the difference of support functions can separate non-convex sets, generalizing hyperplane separation.[28]

A complementary variant is the directed distance function to a non-convex set $A$, defined as $d_A(x; u) = \inf \{ t > 0 \mid x + t u \in A \}$, which replaces the supremum with an infimum to measure penetration in a specific direction $u$. This is particularly useful in variational analysis for handling proximal mappings and error bounds in non-convex settings, differing from the support function by focusing on minimal rather than maximal extent.[29]

These non-convex extensions emerged in the framework of variational analysis during the late 1980s and 1990s, with foundational contributions from R. Tyrrell Rockafellar and collaborators, building on Clarke's nonsmooth calculus to unify geometric and analytic tools for irregular sets.[29]
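The convexification effect is easy to demonstrate numerically (a toy sketch): in every direction, no random convex combination of the points of a non-convex set ever exceeds the supremum over the points themselves, consistent with $h_A = h_{\operatorname{conv} A}$:

```python
import numpy as np

rng = np.random.default_rng(4)
# A non-convex set: the tips of a "plus" shape plus an interior point
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [0.2, 0.2]])

# Random points of conv(A): convex combinations of the points of A
w = rng.dirichlet(np.ones(len(A)), size=5000)
hullpts = w @ A

for _ in range(100):
    x = rng.normal(size=2)
    # sup over sampled conv(A) never exceeds sup over A itself
    assert np.max(hullpts @ x) <= np.max(A @ x) + 1e-9

print("support values over conv(A) samples never exceed those over A")
```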
