Subgradient method
Subgradient methods are convex optimization methods which use subderivatives. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when applied even to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of gradient descent.
Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions. However, Newton's method fails to converge on problems that have non-differentiable kinks.
In recent years, some interior-point methods have been suggested for convex minimization problems, but subgradient projection methods and related bundle methods of descent remain competitive. For convex minimization problems with a very large number of dimensions, subgradient projection methods are suitable, because they require little storage.
Subgradient projection methods are often applied to large-scale problems with decomposition techniques. Such decomposition methods often allow a simple distributed method for a problem.
Classical subgradient rules
Let $f : \mathbb{R}^n \to \mathbb{R}$ be a convex function with domain $\mathbb{R}^n$. A classical subgradient method iterates
$$x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)},$$
where $g^{(k)}$ denotes any subgradient of $f$ at $x^{(k)}$, and $x^{(k)}$ is the $k$-th iterate of $x$. If $f$ is differentiable, then its only subgradient is the gradient vector $\nabla f$ itself. It may happen that $-g^{(k)}$ is not a descent direction for $f$ at $x^{(k)}$. We therefore maintain a list $f_{\rm best}$ that keeps track of the lowest objective function value found so far, i.e.
$$f_{\rm best}^{(k)} = \min\left\{ f_{\rm best}^{(k-1)}, f\left( x^{(k)} \right) \right\}.$$
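To make the iteration concrete, here is a minimal Python sketch (not part of the original article; the objective $f(x) = \lVert Ax - b \rVert_1$, the random data, and the diminishing step size $\alpha_k = 1/(k+1)$ are illustrative assumptions), using the fact that $A^\top \operatorname{sign}(Ax - b)$ is a subgradient of this $f$:

import numpy as np

def subgradient_method(A, b, x0, iters=500):
    # Classical subgradient method for f(x) = ||Ax - b||_1.
    x = x0.copy()
    f = lambda v: np.sum(np.abs(A @ v - b))
    f_best, x_best = f(x), x.copy()
    for k in range(iters):
        g = A.T @ np.sign(A @ x - b)   # a subgradient of f at x
        alpha = 1.0 / (k + 1)          # nonsummable diminishing step size
        x = x - alpha * g
        if f(x) < f_best:              # keep the best iterate so far, since
            f_best, x_best = f(x), x.copy()  # -g need not be a descent direction
    return x_best, f_best

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x_best, f_best = subgradient_method(A, b, np.zeros(5))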
Step size rules
Many different types of step-size rules are used by subgradient methods. This article notes five classical step-size rules for which convergence proofs are known:
- Constant step size, $\alpha_k = \alpha$.
- Constant step length, $\alpha_k = \gamma / \lVert g^{(k)} \rVert_2$, which gives $\lVert x^{(k+1)} - x^{(k)} \rVert_2 = \gamma$.
- Square summable but not summable step size, i.e. any step sizes satisfying $\alpha_k \geq 0$, $\sum_{k=1}^\infty \alpha_k^2 < \infty$, $\sum_{k=1}^\infty \alpha_k = \infty$.
- Nonsummable diminishing, i.e. any step sizes satisfying $\alpha_k \geq 0$, $\lim_{k \to \infty} \alpha_k = 0$, $\sum_{k=1}^\infty \alpha_k = \infty$.
- Nonsummable diminishing step lengths, i.e. $\alpha_k = \gamma_k / \lVert g^{(k)} \rVert_2$, where $\gamma_k \geq 0$, $\lim_{k \to \infty} \gamma_k = 0$, $\sum_{k=1}^\infty \gamma_k = \infty$.
For all five rules, the step-sizes are determined "off-line", before the method is iterated; the step-sizes do not depend on preceding iterations. This "off-line" property of subgradient methods differs from the "on-line" step-size rules used for descent methods for differentiable functions: many methods for minimizing differentiable functions satisfy Wolfe's sufficient conditions for convergence, where step-sizes typically depend on the current point and the current search-direction. An extensive discussion of step-size rules for subgradient methods, including incremental versions, is given in the books by Bertsekas[1] and by Bertsekas, Nedic, and Ozdaglar.[2]
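For illustration, the five rules can be written as small Python functions of the iteration counter $k$ (and, for the step-length rules, of the current subgradient norm); the particular constants and the sequences $1/(k+1)$ and $1/\sqrt{k+1}$ are example choices satisfying the stated conditions, not part of the rules themselves:

import math

def constant_step_size(k, g_norm, alpha=0.1):
    return alpha                             # alpha_k = alpha

def constant_step_length(k, g_norm, gamma=0.1):
    return gamma / g_norm                    # gives ||x_{k+1} - x_k|| = gamma

def square_summable_not_summable(k, g_norm, a=1.0):
    return a / (k + 1)                       # sum alpha_k diverges, sum alpha_k^2 converges

def nonsummable_diminishing(k, g_norm, a=1.0):
    return a / math.sqrt(k + 1)              # alpha_k -> 0 and sum alpha_k diverges

def nonsummable_diminishing_length(k, g_norm, a=1.0):
    return (a / math.sqrt(k + 1)) / g_norm   # gamma_k = a / sqrt(k + 1)

All five depend only on the iteration counter (and a norm used for scaling), never on the current point or search direction, which is the "off-line" property described above.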
Convergence results
For constant step-length and scaled subgradients having Euclidean norm equal to one, the subgradient method converges to an arbitrarily close approximation to the minimum value, that is,
$$\lim_{k \to \infty} f_{\rm best}^{(k)} - f^* < \epsilon$$
by a result of Shor.[3]
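A sketch of the standard argument behind such results (assuming, as is common in this analysis but not stated in this article, that $\lVert g^{(k)} \rVert_2 \leq G$ for all $k$ and $\lVert x^{(1)} - x^* \rVert_2 \leq R$ for some minimizer $x^*$): expanding $\lVert x^{(k+1)} - x^* \rVert_2^2$ and applying the subgradient inequality gives
$$\lVert x^{(k+1)} - x^* \rVert_2^2 \leq \lVert x^{(k)} - x^* \rVert_2^2 - 2 \alpha_k \left( f(x^{(k)}) - f^* \right) + \alpha_k^2 \lVert g^{(k)} \rVert_2^2,$$
and summing over the first $k$ iterations and rearranging yields
$$f_{\rm best}^{(k)} - f^* \leq \frac{R^2 + G^2 \sum_{i=1}^k \alpha_i^2}{2 \sum_{i=1}^k \alpha_i}.$$
With unit-norm subgradients and constant step length $\alpha_i = \gamma$, the right-hand side tends to $\gamma / 2$ as $k \to \infty$, which can be made smaller than any $\epsilon$ by choosing $\gamma$ small.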
These classical subgradient methods have poor performance and are no longer recommended for general use.[4][5] However, they are still used widely in specialized applications because they are simple and they can be easily adapted to take advantage of the special structure of the problem at hand.
Subgradient-projection and bundle methods
During the 1970s, Claude Lemaréchal and Phil Wolfe proposed "bundle methods" of descent for problems of convex minimization.[6] The meaning of the term "bundle methods" has changed significantly since that time. Modern versions and full convergence analysis were provided by Kiwiel.[7] Contemporary bundle methods often use "level control" rules for choosing step-sizes, developing techniques from the "subgradient-projection" method of Boris T. Polyak (1969). However, there are problems on which bundle methods offer little advantage over subgradient-projection methods.[4][5]
Constrained optimization
Projected subgradient
One extension of the subgradient method is the projected subgradient method, which solves the constrained optimization problem

- minimize $f(x)$ subject to $x \in \mathcal{C}$

where $\mathcal{C}$ is a convex set. The projected subgradient method uses the iteration
$$x^{(k+1)} = P\left( x^{(k)} - \alpha_k g^{(k)} \right)$$
where $P$ is projection on $\mathcal{C}$ and $g^{(k)}$ is any subgradient of $f$ at $x^{(k)}$.
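As an illustration, a minimal Python sketch of the projected subgradient iteration (the objective $f(x) = \lVert x - c \rVert_1$, the Euclidean unit ball as $\mathcal{C}$, and the step sizes are assumptions made for this example; only the projection step distinguishes it from the unconstrained method):

import numpy as np

def project_unit_ball(x):
    # Euclidean projection onto C = {x : ||x||_2 <= 1}.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def projected_subgradient(c, x0, iters=500):
    # Minimize f(x) = ||x - c||_1 subject to ||x||_2 <= 1.
    x = project_unit_ball(x0)
    f = lambda v: np.sum(np.abs(v - c))
    x_best, f_best = x.copy(), f(x)
    for k in range(iters):
        g = np.sign(x - c)                      # a subgradient of f at x
        x = project_unit_ball(x - g / (k + 1))  # subgradient step, then project
        if f(x) < f_best:
            x_best, f_best = x.copy(), f(x)
    return x_best, f_best

x_best, f_best = projected_subgradient(c=np.array([2.0, -0.5]), x0=np.zeros(2))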
General constraints
The subgradient method can be extended to solve the inequality constrained problem

- minimize $f(x)$ subject to $f_i(x) \leq 0, \quad i = 1, \ldots, m$

where $f_i$ are convex. The algorithm takes the same form as the unconstrained case
$$x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}$$
where $\alpha_k > 0$ is a step size, and $g^{(k)}$ is a subgradient of the objective or one of the constraint functions at $x^{(k)}$. Take
$$g^{(k)} = \begin{cases} \partial f(x^{(k)}) & \text{if } f_i(x^{(k)}) \leq 0 \text{ for all } i = 1, \ldots, m \\ \partial f_j(x^{(k)}) & \text{for some } j \text{ such that } f_j(x^{(k)}) > 0, \end{cases}$$
where $\partial f$ denotes the subdifferential of $f$. If the current point is feasible, the algorithm uses an objective subgradient; if the current point is infeasible, the algorithm chooses a subgradient of any violated constraint.
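A minimal Python sketch of this feasible/infeasible switching rule (the particular problem, minimizing $c^\top x$ subject to $\lVert x \rVert_2^2 - 1 \leq 0$, and the step sizes are assumptions for the example):

import numpy as np

def constrained_subgradient(c, iters=2000):
    # Minimize f(x) = c^T x subject to f_1(x) = ||x||_2^2 - 1 <= 0.
    # The minimizer is x* = -c / ||c||_2, with f* = -||c||_2.
    x = np.zeros_like(c)
    x_best, f_best = None, np.inf
    for k in range(iters):
        if x @ x - 1.0 <= 0:    # feasible: use an objective subgradient
            g = c               # here f is linear, so g = c
            if c @ x < f_best:  # record the best feasible iterate
                x_best, f_best = x.copy(), c @ x
        else:                   # infeasible: use a subgradient of the
            g = 2.0 * x         # violated constraint f_1
        x = x - g / (k + 1)
    return x_best, f_best

x_best, f_best = constrained_subgradient(np.array([1.0, 2.0]))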
See also
- Stochastic gradient descent – Optimization algorithm
References
- ^ Bertsekas, Dimitri P. (2015). Convex Optimization Algorithms (Second ed.). Belmont, MA.: Athena Scientific. ISBN 978-1-886529-28-1.
- ^ Bertsekas, Dimitri P.; Nedic, Angelia; Ozdaglar, Asuman (2003). Convex Analysis and Optimization (Second ed.). Belmont, MA.: Athena Scientific. ISBN 1-886529-45-0.
- ^ The approximate convergence of the constant step-size (scaled) subgradient method is stated as Exercise 6.3.14(a) in Bertsekas (page 636): Bertsekas, Dimitri P. (1999). Nonlinear Programming (Second ed.). Cambridge, MA.: Athena Scientific. ISBN 1-886529-00-0. On page 636, Bertsekas attributes this result to Shor: Shor, Naum Z. (1985). Minimization Methods for Non-differentiable Functions. Springer-Verlag. ISBN 0-387-12763-1.
- ^ a b Lemaréchal, Claude (2001). "Lagrangian relaxation". In Michael Jünger and Denis Naddef (ed.). Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000. Lecture Notes in Computer Science. Vol. 2241. Berlin: Springer-Verlag. pp. 112–156. doi:10.1007/3-540-45586-8_4. ISBN 3-540-42877-1. MR 1900016. S2CID 9048698.
- ^ a b Kiwiel, Krzysztof C.; Larsson, Torbjörn; Lindberg, P. O. (August 2007). "Lagrangian relaxation via ballstep subgradient methods" (PDF). Mathematics of Operations Research. 32 (3): 669–686. doi:10.1287/moor.1070.0261. MR 2348241.
- ^ Bertsekas, Dimitri P. (1999). Nonlinear Programming (Second ed.). Cambridge, MA.: Athena Scientific. ISBN 1-886529-00-0.
- ^ Kiwiel, Krzysztof (1985). Methods of Descent for Nondifferentiable Optimization. Berlin: Springer Verlag. p. 362. ISBN 978-3540156420. MR 0797754.
Further reading
- Bertsekas, Dimitri P. (1999). Nonlinear Programming. Belmont, MA.: Athena Scientific. ISBN 1-886529-00-0.
- Bertsekas, Dimitri P.; Nedic, Angelia; Ozdaglar, Asuman (2003). Convex Analysis and Optimization (Second ed.). Belmont, MA.: Athena Scientific. ISBN 1-886529-45-0.
- Bertsekas, Dimitri P. (2015). Convex Optimization Algorithms. Belmont, MA.: Athena Scientific. ISBN 978-1-886529-28-1.
- Shor, Naum Z. (1985). Minimization Methods for Non-differentiable Functions. Springer-Verlag. ISBN 0-387-12763-1.
- Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton, NJ: Princeton University Press. pp. xii+454. ISBN 978-0691119151. MR 2199043.
Fundamentals
Subgradients
In convex optimization, the subgradient generalizes the notion of the gradient to nonsmooth convex functions. For a convex function $f : \mathbb{R}^n \to \mathbb{R}$ and a point $x_0$, a vector $g \in \mathbb{R}^n$ is a subgradient of $f$ at $x_0$ if it satisfies the subgradient inequality
$$f(x) \geq f(x_0) + g^\top (x - x_0) \quad \text{for all } x.$$
This inequality provides a linear lower bound on $f$ that supports the graph of the function from below at $x_0$. The set of all such subgradients at $x_0$, denoted by the subdifferential $\partial f(x_0)$, is nonempty for any convex $f$ at every point in the relative interior of its domain.[2] Geometrically, a subgradient corresponds to a supporting hyperplane to the epigraph of $f$ at the point $(x_0, f(x_0))$, where the epigraph is a convex set. The subgradient inequality ensures that this hyperplane lies below the epigraph everywhere, touching it at $(x_0, f(x_0))$, thus generalizing the supporting role of the gradient for differentiable convex functions.[2] A classic example is the absolute value function $f(x) = |x|$ on $\mathbb{R}$, which is convex but nondifferentiable at $x = 0$. Here, $\partial f(x) = \{\operatorname{sign}(x)\}$ for $x \neq 0$, while $\partial f(0) = [-1, 1]$, reflecting the range of possible slopes at the kink. Another example is the indicator function of a nonempty convex set $C$, defined as $\delta_C(x) = 0$ if $x \in C$ and $+\infty$ otherwise; its subdifferential at $x \in C$ is the normal cone $N_C(x)$.[2][3] Key properties of the subdifferential follow from the convexity of $f$: $\partial f(x)$ is a nonempty, convex, and closed set. If $f$ is Lipschitz continuous near $x$, then $\partial f(x)$ is compact and hence bounded. Moreover, the subdifferential relates to one-sided derivatives; for instance, the directional derivative satisfies $f'(x; d) = \sup_{g \in \partial f(x)} g^\top d$.[2] The concept of subgradients traces its origins to the work of Lev Pontryagin in the 1940s on variational problems in optimal control, where similar supporting hyperplane ideas emerged, and was formally developed within convex analysis by R. Tyrrell Rockafellar in his 1970 monograph.
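To make the definition concrete, the following Python sketch (an illustration added here, with arbitrary example data) numerically checks the subgradient inequality for the pointwise maximum of affine functions $f(x) = \max_i (a_i^\top x + b_i)$, for which the coefficient vector $a_i$ of any maximizing piece is a valid subgradient:

import numpy as np

# f(x) = max_i (a_i^T x + b_i): convex and piecewise affine, nonsmooth
# wherever two or more pieces attain the maximum simultaneously.
A = np.array([[1.0, 0.0], [-1.0, 2.0], [0.0, -1.0]])
b = np.array([0.0, 1.0, -0.5])

def f(x):
    return np.max(A @ x + b)

def a_subgradient(x):
    # The vector a_i of any active (maximizing) piece is a subgradient at x.
    return A[np.argmax(A @ x + b)]

# Verify f(y) >= f(x0) + g^T (y - x0) at many random points y
# (with a tiny tolerance for floating-point rounding).
rng = np.random.default_rng(0)
x0 = np.array([0.3, -0.7])
g = a_subgradient(x0)
for _ in range(1000):
    y = 3.0 * rng.standard_normal(2)
    assert f(y) >= f(x0) + g @ (y - x0) - 1e-9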
Basic Algorithm
The subgradient method is an iterative algorithm for minimizing a convex, possibly nondifferentiable function $f : \mathbb{R}^n \to \mathbb{R}$ in an unconstrained setting. It generalizes the gradient descent method by using a subgradient, i.e. a vector $g$ satisfying $f(y) \geq f(x) + g^\top (y - x)$ for all $y$, in place of the gradient at points where $f$ is not differentiable.[4] This approach was originally developed by Naum Z. Shor in the early 1960s as a means to handle nondifferentiable optimization problems, such as transportation tasks.[5] Unlike gradient descent, the subgradient method does not guarantee a decrease in function value at each step, but it leverages the convexity of $f$ to drive iterates toward a minimizer.[1] The core procedure initializes an arbitrary point $x_0$ and generates a sequence of iterates via the update rule $x_{k+1} = x_k - \alpha_k g_k$, where $g_k \in \partial f(x_k)$ is a selected subgradient and $\alpha_k > 0$ is a step size. To monitor progress, the algorithm tracks the best (lowest-function-value) iterate encountered so far, as the sequence $f(x_k)$ may not be monotonically decreasing.[1] The following pseudocode outlines the basic iteration:

pick x_0 ∈ ℝⁿ
for k = 0, 1, 2, ...
select g_k ∈ ∂f(x_k)
choose step size α_k > 0
set x_{k+1} = x_k - α_k g_k
(optionally track x̄_k = arg min_{i=0,...,k} f(x_i))