Geometric programming
from Wikipedia

A geometric program (GP) is an optimization problem of the form

$$\begin{align*}
\text{minimize} \quad & f_0(x) \\
\text{subject to} \quad & f_i(x) \leq 1, \quad i = 1, \dots, m, \\
& g_i(x) = 1, \quad i = 1, \dots, p,
\end{align*}$$

where $f_0, \dots, f_m$ are posynomials and $g_1, \dots, g_p$ are monomials. In the context of geometric programming (unlike standard mathematics), a monomial is a function from $\mathbb{R}_{++}^n$ to $\mathbb{R}$ defined as

$$x \mapsto c\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n},$$

where $c > 0$ and $a_i \in \mathbb{R}$. A posynomial is any sum of monomials.[1][2]

Geometric programming is closely related to convex optimization: any GP can be made convex by means of a change of variables.[2] GPs have numerous applications, including component sizing in IC design,[3][4] aircraft design,[5] maximum likelihood estimation for logistic regression in statistics, and parameter tuning of positive linear systems in control theory.[6]

Convex form


Geometric programs are not in general convex optimization problems, but they can be transformed to convex problems by a change of variables and a transformation of the objective and constraint functions. In particular, after performing the change of variables $y_i = \log x_i$ and taking the logarithm of the objective and constraint functions, the functions $f_i$, i.e., the posynomials, are transformed into log-sum-exp functions, which are convex, and the functions $g_i$, i.e., the monomials, become affine. Hence, this transformation transforms every GP into an equivalent convex program.[2] In fact, this log-log transformation can be used to convert a larger class of problems, known as log-log convex programming (LLCP), into an equivalent convex form.[7]
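A minimal numerical sketch of this change of variables (not from the article; it assumes NumPy and SciPy and invents a toy GP) shows what a generic solver sees after the transformation. The toy problem minimizes the posynomial $x/y + y/x$ subject to the monomial equality $xy = 1$; substituting $x = e^u$, $y = e^v$ gives a convex log-sum-exp objective and the affine constraint $u + v = 0$.

import numpy as np
from scipy.optimize import minimize

def log_objective(z):
    u, v = z
    # log of the posynomial x/y + y/x in the log domain: log(e^(u-v) + e^(v-u))
    return np.logaddexp(u - v, v - u)

# monomial equality x*y = 1 becomes the affine equality u + v = 0
constraint = {"type": "eq", "fun": lambda z: z[0] + z[1]}

result = minimize(log_objective, x0=np.array([1.0, -0.5]), constraints=[constraint])
x_opt, y_opt = np.exp(result.x)            # map back to the positive variables
print(x_opt, y_opt, np.exp(result.fun))    # expect x = y = 1, objective value 2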

Software


Several software packages exist to assist with formulating and solving geometric programs.

  • MOSEK is a commercial solver capable of solving geometric programs as well as other non-linear optimization problems.
  • CVXOPT is an open-source solver for convex optimization problems.
  • GPkit is a Python package for cleanly defining and manipulating geometric programming models; a number of example GP models written with this package are available in its documentation.
  • GGPLAB is a MATLAB toolbox for specifying and solving geometric programs (GPs) and generalized geometric programs (GGPs).
  • CVXPY is a Python-embedded modeling language for specifying and solving convex optimization problems, including GPs, GGPs, and LLCPs[7] (a minimal usage sketch follows below).
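As a hedged illustration of the CVXPY route, the following sketch solves a small box-design GP (maximize volume under wall-area, floor-area, and aspect-ratio limits) in CVXPY's log-log convex mode via `solve(gp=True)`; the numeric limits are invented for the example, and a default conic solver supporting exponential cones is assumed to be installed.

import cvxpy as cp

# box dimensions as positive variables, as required for GP/DGP modeling
h, w, d = cp.Variable(pos=True), cp.Variable(pos=True), cp.Variable(pos=True)

A_wall, A_flr = 100.0, 10.0      # illustrative area limits (hypothetical)
alpha, beta = 0.5, 2.0           # illustrative aspect-ratio bounds (hypothetical)

constraints = [
    2 * (h * w + h * d) <= A_wall,   # posynomial wall-area constraint
    w * d <= A_flr,                  # monomial floor-area constraint
    alpha <= h / w, h / w <= beta,   # monomial aspect-ratio bounds
]
prob = cp.Problem(cp.Maximize(h * w * d), constraints)
prob.solve(gp=True)                  # gp=True invokes the log-log transformation
print(h.value, w.value, d.value, prob.value)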

from Grokipedia
Geometric programming (GP) is a class of nonlinear mathematical optimization problems characterized by an objective function that is a posynomial to be minimized, subject to constraints consisting of monomial equalities and posynomial inequalities, with all variables required to be positive. A monomial is a function of the form $c\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}$ where $c > 0$ and the exponents $a_i$ are real numbers, while a posynomial is a sum of such monomials. These problems are generally nonconvex but can be transformed into convex optimization problems via a change of variables, such as taking logarithms, enabling efficient and reliable solution using standard convex optimization methods even for large-scale instances.

The origins of geometric programming trace back to the early 1960s, when Clarence Zener published an article applying arithmetic-geometric mean inequality concepts to optimization in engineering design. The field was formalized by R. J. Duffin, E. L. Peterson, and C. Zener, who developed the theoretical foundations and published the seminal book Geometric Programming: Theory and Application in 1967, establishing GP as a distinct branch of nonlinear programming. This work introduced key duality results, existence theorems, and solution algorithms based on the arithmetic-geometric inequality, making GP particularly suited for problems involving multiplicative structures common in physical and engineering models.

Geometric programming has found wide application in engineering disciplines due to its natural fit for modeling systems with power laws, ratios, and scaling behaviors. Notable uses include optimal design of integrated circuits, such as gate sizing and floorplanning; power control and resource allocation in wireless communication networks; doping profiles in semiconductor devices; and multidisciplinary design optimization in aerospace engineering, like aircraft performance trade-offs. More recent extensions allow GP to approximate non-posynomial functions through techniques like successive convex approximation or maximum-of-posynomials formulations, broadening its applicability while preserving computational tractability.

Introduction

Definition

Geometric programming (GP) is a mathematical optimization technique for solving problems in which the objective function and inequality constraints are posynomials, while equality constraints are monomials, with all variables restricted to positive real numbers. The general form of a GP is to minimize the posynomial objective $f_0(x)$ subject to the posynomial inequality constraints $f_i(x) \leq 1$ for $i = 1, \dots, m$ and the monomial equality constraints $h_j(x) = 1$ for $j = 1, \dots, p$, where $x > 0$.

Although GPs appear non-convex due to their polynomial structure, they possess a hidden convexity that allows transformation into convex problems, enabling efficient global optimization using standard convex solvers. This tractability makes GPs valuable for engineering applications involving non-convex, multiplicative models, such as system design and resource allocation. A simple example arises in digital circuit design, where the goal is to minimize total power consumption $P = \sum_i \alpha_i x_i$, with $x_i$ denoting gate scaling factors and $\alpha_i > 0$ constants, subject to posynomial constraints ensuring circuit delay does not exceed a target ($D \leq D_{\max}$) and total area remains below a limit ($A \leq A_{\max}$).
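For concreteness, a hypothetical two-gate instance of this power-minimization GP might read as follows (the coefficients $\alpha_k$, $c_k$, $a_k$ and the posynomial delay model are illustrative assumptions, not data from any specific process):

$$\begin{align*}
\text{minimize}   \quad & \alpha_1 x_1 + \alpha_2 x_2 \\
\text{subject to} \quad & \frac{c_1}{x_1} + \frac{c_2\, x_1}{x_2} \le D_{\max}, \\
                        & a_1 x_1 + a_2 x_2 \le A_{\max}, \qquad x_1, x_2 > 0.
\end{align*}$$

Dividing each constraint by its limit puts the problem in the normalized form $f_i(x) \leq 1$ described above.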

Historical Development

Geometric programming originated in the early 1960s as a method for optimizing posynomial functions in engineering design problems, particularly for minimizing total cost in equipment design. Clarence Zener introduced the foundational approach in 1961, leveraging the geometric-arithmetic mean inequality to solve posynomial optimization problems efficiently. This work was expanded through collaborations, leading to the seminal 1967 book Geometric Programming: Theory and Application by Richard J. Duffin, Elmor L. Peterson, and Clarence Zener, which formalized the theory, duality results, and algorithms for posynomial programs.

In the 1970s, the field gained practical traction with Zener's 1971 book Engineering Design by Geometric Programming, which applied the technique to real-world design and cost optimization challenges, emphasizing its utility in engineering contexts. Subsequent works by researchers like Charles S. Beightler and Donald T. Phillips further developed applications and computational methods, solidifying geometric programming as a tool for nonlinear optimization in industrial settings.

A major resurgence occurred in the 2000s, with Stephen Boyd and Lieven Vandenberghe's 2004 book Convex Optimization highlighting the connection between geometric programs and convex optimization via logarithmic transformations, making the framework more accessible to modern optimization practitioners. This was complemented by their 2007 tutorial paper, which provided comprehensive modeling guidelines and examples, boosting adoption in diverse fields.

Extensions in the 2010s broadened geometric programming to signomial programs, allowing negative coefficients in posynomials while maintaining tractability through approximation methods like majorization-minimization algorithms. Key contributions include Hunter and Lange's 2010 work on MM algorithms for signomial programming, enabling solutions to more general non-convex problems.

By the 2020s, geometric programming integrated with machine learning, notably in hardware-aware neural architecture search, where mixed-integer geometric programs optimize network topologies under resource constraints. Concurrently, robust geometric programming addressed uncertainties by incorporating worst-case scenarios over polyhedral or ellipsoidal uncertainty sets, with advancements like tractable approximations enhancing reliability in uncertain environments. These developments, up to 2025, underscore geometric programming's evolving role in optimization under complexity and variability.

Mathematical Foundations

Monomials and Posynomials

In geometric programming, a monomial is defined as a function $f: \mathbb{R}_{++}^n \to \mathbb{R}$ of the form
$$f(\mathbf{x}) = c \prod_{i=1}^n x_i^{a_i},$$
where $c > 0$ is a positive constant coefficient, each $a_i \in \mathbb{R}$ is a real exponent, and $\mathbf{x} = (x_1, \dots, x_n)$ with all $x_i > 0$. This structure ensures the function is positive and homogeneous, forming the basic building block for more complex expressions in the field. For example, $f(x_1, x_2) = 2.3\, x_1^2 x_2^{-0.15}$ is a monomial with coefficient 2.3 and exponents 2 and −0.15.

Monomials possess several key algebraic properties that make them suitable for optimization. They are closed under multiplication, as the product of two monomials yields another monomial whose exponents are the sums of the corresponding exponents from each factor. Similarly, monomials are closed under inversion (yielding a monomial with reciprocal coefficient and negated exponents) and under raising to non-negative powers (raising a monomial to a power $\alpha \geq 0$ results in another monomial with coefficient $c^\alpha$ and scaled exponents $\alpha a_i$). Additionally, monomials are log-log affine: $\log f(\mathbf{x})$ is affine in the variables $\log x_i$, which facilitates their use in convex reformulations.

A posynomial extends the monomial by allowing a finite sum of such terms:
$$f(\mathbf{x}) = \sum_{k=1}^m c_k \prod_{i=1}^n x_i^{a_{ik}},$$
where each $c_k > 0$ and $a_{ik} \in \mathbb{R}$, again defined for $x_i > 0$. Posynomials inherit positivity from their monomial components and are prevalent in engineering models due to their ability to represent physical relationships like costs or delays. For instance, in VLSI circuit design, the propagation delay through an inverter can be approximated as the posynomial $d(W, L) = a \frac{L}{W} + b W$, where $W$ is the transistor width, $L$ is the length, and $a, b > 0$ are constants derived from process parameters. Another simple example is $f(x, y) = 0.23 + \frac{x}{y}$, combining a constant monomial and a non-constant one.

Posynomials exhibit closure under addition (by definition, as sums of monomials) and under multiplication or division by monomials, producing another posynomial. They are also closed under multiplication with other posynomials, since expanding the product yields a sum of monomials, each with a positive coefficient; likewise, they remain posynomials under non-negative integer powers. Regarding compositions, substituting a posynomial for a variable of another posynomial typically fails to yield a posynomial when the outer function has fractional or negative exponents, whereas substituting a monomial into a posynomial results in a posynomial, preserving the structure for hierarchical modeling. These properties enable posynomials to approximate broader classes of functions in applications while maintaining tractability for geometric programs.
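The closure properties above can be made concrete with a small sketch (an illustration, not any library's API) that represents a monomial by its coefficient and exponent vector; multiplication adds exponents, inversion negates them, and powers scale them.

from dataclasses import dataclass
import numpy as np

@dataclass
class Monomial:
    c: float            # positive coefficient
    a: np.ndarray       # real exponent vector

    def __mul__(self, other):
        # product of monomials: coefficients multiply, exponents add
        return Monomial(self.c * other.c, self.a + other.a)

    def inv(self):
        # inverse: reciprocal coefficient, negated exponents
        return Monomial(1.0 / self.c, -self.a)

    def power(self, alpha):
        # monomial raised to a power alpha >= 0: exponents scale by alpha
        return Monomial(self.c ** alpha, alpha * self.a)

    def __call__(self, x):
        return self.c * np.prod(np.asarray(x, dtype=float) ** self.a)

f = Monomial(2.3, np.array([2.0, -0.15]))   # 2.3 * x1^2 * x2^-0.15
g = Monomial(0.5, np.array([1.0, 1.0]))
assert np.isclose((f * g)([1.5, 2.0]), f([1.5, 2.0]) * g([1.5, 2.0]))
assert np.isclose(f.inv()([1.5, 2.0]) * f([1.5, 2.0]), 1.0)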

Signomials and Generalizations

A signomial is a function that extends the concept of a posynomial by allowing monomial coefficients to be negative, expressed as $f(x) = \sum_{k=1}^K c_k \prod_{i=1}^n x_i^{a_{ik}}$, where $c_k \in \mathbb{R}$ (possibly negative), $x_i > 0$, and $a_{ik} \in \mathbb{R}$. This formulation arises in signomial programs, which minimize or constrain signomials, unlike standard geometric programs that require posynomials with positive coefficients. Signomials introduce non-convexity due to the negative terms, making global optimization intractable in general and limiting efficient solutions to local optima. These challenges stem from the loss of the logarithmic convexity property that enables convex reformulation in standard geometric programming.

To address signomials within geometric programming frameworks, successive convex approximation (SCA) iteratively approximates them as posynomials using local monomial fits, such as first-order Taylor expansions around current iterates, followed by solving the resulting geometric program and updating with trust regions. A logarithmic change of variables can aid in handling the structure, but SCA is essential for convergence to local optima.

Generalizations of geometric programming incorporating signomials include max-min geometric programs, which maximize the minimum of posynomials (e.g., for fair resource allocation, by introducing an auxiliary variable for the minimum value); mixed-integer geometric programs, which add discrete constraints solvable via heuristics like branch-and-bound; and robust geometric programs, which account for uncertainties using ellipsoidal sets $U = \{ \bar{u} + P \rho \mid \| \rho \|_2 \leq 1 \}$, approximated tractably via piecewise-linear bounds on log-sum-exp functions. For example, in resource allocation problems like wireless power control, signomials model competing terms such as desired signal gains (positive) versus interference (effectively negative), optimized via SCA to balance total power and signal-to-interference ratios. These extensions play a role in formulating generalized geometric programs with signomial objectives or constraints.
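The local monomial fit at the heart of such SCA schemes can be sketched as follows (an illustration with hypothetical data). A posynomial $f(x) = \sum_k u_k(x)$ with terms $u_k(x) = c_k \prod_i x_i^{A_{ki}}$ is replaced near a point $x_0$ by the monomial $\prod_k (u_k(x)/w_k)^{w_k}$ with weights $w_k = u_k(x_0)/f(x_0)$; by the arithmetic-geometric mean inequality this monomial underestimates $f$ everywhere and matches it at $x_0$.

import numpy as np

def condense(c, A, x0):
    """Best local monomial fit of the posynomial (c, A) at x0.

    Returns (coef, a) such that coef * prod_i x_i**a[i] lower-bounds the
    posynomial sum_k c[k] * prod_i x_i**A[k, i] and is tight at x0.
    """
    u = c * np.prod(x0 ** A, axis=1)   # monomial term values u_k(x0)
    w = u / u.sum()                    # weights w_k = u_k(x0) / f(x0)
    coef = np.prod((c / w) ** w)       # coefficient prod_k (c_k / w_k)^{w_k}
    a = w @ A                          # exponents a_i = sum_k w_k * A[k, i]
    return coef, a

# sanity check on the two-term posynomial f(x) = x1 + 2*x2^2
c = np.array([1.0, 2.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
x0 = np.array([1.5, 0.7])
coef, a = condense(c, A, x0)
assert np.isclose(coef * np.prod(x0 ** a), (c * np.prod(x0 ** A, axis=1)).sum())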

Problem Formulation

Standard Geometric Programs

A standard geometric program (GP) is formulated as the optimization problem of minimizing a posynomial objective function subject to posynomial inequality constraints and monomial equality constraints, with all variables positive. Specifically, it takes the form

$$\begin{align*}
\text{minimize} \quad & f_0(\mathbf{x}) \\
\text{subject to} \quad & f_i(\mathbf{x}) \leq 1, \quad i=1,\dots,m, \\
& h_j(\mathbf{x}) = 1, \quad j=1,\dots,p, \\
& x_k > 0, \quad k=1,\dots,n,
\end{align*}$$

where $f_0, f_1, \dots, f_m$ are posynomials and $h_1, \dots, h_p$ are monomials. This canonical form, introduced in the foundational work on geometric programming, ensures the problem can be transformed into a convex optimization problem via a logarithmic change of variables.

The normalization of inequality constraints to $\leq 1$ and equality constraints to $= 1$ exploits the homogeneity and scaling properties of monomials and posynomials. Monomials are homogeneous functions, satisfying $g(\alpha \mathbf{x}) = \alpha^d g(\mathbf{x})$ for some degree $d$, which allows arbitrary scaling of the right-hand side of a monomial equality or inequality to 1 without loss of generality. For posynomial inequalities of the form $f(\mathbf{x}) \leq b$ where $b > 0$, normalization to $\leq 1$ is achieved by dividing through by $b$, preserving the posynomial structure on the left and leveraging the fact that posynomials are closed under positive scaling. This standardization simplifies implementation in solvers and aligns with the logarithmic transformation that converts the problem to convex form.

Feasibility of standard GPs depends on the interplay between inequality and equality constraints. Posynomial inequalities $f_i(\mathbf{x}) \leq 1$ are often satisfiable for suitable choices of the positive variables, since posynomials can be driven toward 0 or $\infty$ along rays in the positive orthant, providing flexibility in the degrees of freedom. However, monomial equality constraints $h_j(\mathbf{x}) = 1$ fix the scaling of variables along certain directions, effectively reducing the dimensionality by constraining the homogeneous degrees and potentially rendering the problem infeasible if the equalities conflict with the inequalities.

A representative application arises in optimizing transistor sizing within integrated circuits, where delays and areas are modeled as posynomials. For instance, consider minimizing the circuit delay $D$, approximated via the Elmore delay model as a posynomial $D(\mathbf{w}) = \sum_i R_i(\mathbf{w}) C_i(\mathbf{w})$ (with $\mathbf{w}$ denoting transistor widths), subject to posynomial constraints on total area $A(\mathbf{w}) \leq A_{\max}$ and power consumption $P(\mathbf{w}) \leq P_{\max}$, along with monomial equalities fixing certain widths relative to others. This formulation captures the trade-offs in circuit design, where increasing widths reduces resistance but increases parasitic capacitance and area.

Standard GPs are computationally tractable, solvable in polynomial time using interior-point methods after reformulation as convex programs via the logarithmic transformation. This enables efficient global optimization for problems with thousands of variables and constraints on modern hardware.
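For concreteness, the normalization step can be written out for a small hypothetical pair of constraints, dividing each side by its right-hand constant:

$$\begin{align*}
x_1 + x_1 x_2^{2} \le 5
&\;\Longleftrightarrow\;
0.2\,x_1 + 0.2\,x_1 x_2^{2} \le 1,
\\
3\,x_1 x_2^{-1} = 6
&\;\Longleftrightarrow\;
0.5\,x_1 x_2^{-1} = 1.
\end{align*}$$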

Generalized Geometric Programs

Generalized geometric programs (GGPs) extend the standard geometric programming framework to accommodate more flexible constraint structures, including signomial inequalities of the form $f_i(x) \leq 1$, where the $f_i$ are signomials, and monomial equalities $g_j(x) = 1$, with the $g_j$ monomials. These programs may also incorporate operations such as maximum or minimum functions within the objective or constraints, producing generalized posynomials that can be reformulated into standard geometric programs via auxiliary variables. Unlike standard GPs restricted to posynomials, GGPs allow negative coefficients in signomials, enabling modeling of subtractive or comparative terms but introducing non-convexity.

Non-posynomial equalities in GGPs are typically handled through reformulations using auxiliary variables or successive approximations, such as relaxing the equality to an inequality and iteratively tightening bounds based on monotonicity assumptions. For instance, a non-monomial equality $h(x) = 1$ can be approximated locally by a monomial fit around a current iterate, ensuring feasibility within a trust region. Mixed forms of GGPs integrate additional constraint types, such as linear inequalities alongside posynomial or signomial terms, resulting in mixed linear geometric programs that remain convex after logarithmic transformation. Multi-objective GGPs, involving multiple signomial or posynomial objectives, are often addressed via weighted-sum scalarization to generate Pareto-optimal solutions, balancing trade-offs like performance versus resource use.

A representative example arises in low-power VLSI design, where signomial programming optimizes gate sizing to trade off circuit speed against energy consumption and area, subject to signomial delay and power constraints approximated sequentially. Here, the objective might minimize total power while constraining maximum delay via signomials that include both additive and subtractive terms for leakage and dynamic effects. GGPs sacrifice the global optimality guarantees of standard GPs due to the non-convex nature of signomials, relying instead on local solvers like interior-point methods applied to successive convex approximations, which may converge to local minima.
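As a sketch of the auxiliary-variable device mentioned above, a maximum of posynomials inside a constraint can be replaced by ordinary posynomial constraints with a new positive variable $t$:

$$\max\{f_1(\mathbf{x}),\, f_2(\mathbf{x})\} \le 1
\quad\Longleftrightarrow\quad
f_1(\mathbf{x}) \le t,\;\; f_2(\mathbf{x}) \le t,\;\; t \le 1.$$

Each condition $f_i(\mathbf{x}) \le t$ rewrites as $f_i(\mathbf{x})/t \le 1$, and a posynomial divided by the monomial $t$ is still a posynomial, so the reformulated problem is a standard GP.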

Convex Optimization Framework

Logarithmic Transformation

The logarithmic transformation is a change of variables that converts a standard geometric program (GP) into an equivalent convex optimization problem. Specifically, for variables $x_i > 0$, substitute $y_i = \log x_i$ (or equivalently, $x_i = e^{y_i}$), which maps the positive orthant to the full real space. This substitution converts the non-convex structure of GPs into a convex one by exploiting the properties of logarithms and exponentials.

Under this substitution, a monomial $g(x) = c \prod_{i=1}^n x_i^{a_i}$, where $c > 0$ and $a_i \in \mathbb{R}$, becomes $\log g(e^y) = \log c + \sum_{i=1}^n a_i y_i$, which is an affine function in $y$. Thus monomials, originally multiplicative in $x$, linearize in the logarithmic domain. For a posynomial $f(x) = \sum_{k=1}^m c_k \prod_{i=1}^n x_i^{a_{ik}}$, the transformation yields $\log f(e^y) = \log \left( \sum_{k=1}^m \exp\left(\alpha_k + \sum_{i=1}^n a_{ik} y_i\right) \right)$, where $\alpha_k = \log c_k$. This log-sum-exp form is convex in $y$, as it is the composition of the convex log-sum-exp function with affine functions.

A standard GP, minimizing a posynomial objective subject to posynomial inequality constraints and monomial equality constraints, transforms fully into a convex problem in $y$. The objective becomes $\min \log f_0(e^y)$, where $f_0$ is the objective posynomial. Inequality constraints $f_l(e^y) \leq 1$ map to $\log f_l(e^y) \leq 0$. Equality constraints $g_p(e^y) = 1$ become affine equalities $\sum_{i=1}^n a_{ip} y_i = -\log c_p$. The resulting problem is convex, with a convex objective and constraints. This transformation preserves feasibility, optimality, and duality properties of the original GP, as the mapping is bijective and monotonic. Solutions in $y$ map back to the optimum $x^*$ via $x_i^* = e^{y_i^*}$, recovering the global optimum of the non-convex GP.

As an example, consider minimizing the posynomial $f(x) = x + 1/x$ over $x > 0$. Substituting $y = \log x$ gives $\log f(e^y) = \log\left( e^{y} + e^{-y} \right)$. The problem is to minimize this convex function over $y \in \mathbb{R}$. The solution is $y^* = 0$, mapping to $x^* = 1$, with optimal value 2.
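A quick numerical check of this example (assuming NumPy and SciPy) minimizes the transformed function directly and maps the result back:

import numpy as np
from scipy.optimize import minimize_scalar

# minimize log(e^y + e^-y), the log-domain image of f(x) = x + 1/x
res = minimize_scalar(lambda y: np.logaddexp(y, -y))
x_star = np.exp(res.x)        # map back: x = e^y
f_star = np.exp(res.fun)      # optimal posynomial value
assert np.isclose(x_star, 1.0, atol=1e-4) and np.isclose(f_star, 2.0)
print(x_star, f_star)         # expect x* = 1, f* = 2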

Convex Form and Duality

After the logarithmic transformation $y_i = \log x_i$ (with $x_i > 0$), a standard geometric program is recast as a convex optimization problem. The objective function becomes $\tilde{f}_0(y) = \log \left( \sum_{k=1}^{K_0} c_{0k} \exp(a_{0k}^T y) \right)$, where each $c_{0k} > 0$ and $a_{0k} \in \mathbb{R}^n$; this is the convex log-sum-exp function composed with affine functions of $y$, ensuring convexity. Inequality constraints transform to $\tilde{f}_i(y) = \log \left( \sum_{k=1}^{K_i} c_{ik} \exp(a_{ik}^T y) \right) \leq 0$ for $i = 1, \dots, m$, each also convex due to the log-sum-exp structure. Equality constraints, if present as monomials, become linear: $A_i y = b_i$. This convex form allows efficient global solution via interior-point methods, with the log-sum-exp functions providing smooth, differentiable approximations to max functions.

The Lagrangian dual provides a theoretical framework for analyzing geometric programs, yielding a convex maximization problem that lower-bounds the primal optimum. The Lagrangian is $L(x, \lambda, \nu) = f_0(x) + \sum_{i=1}^m \lambda_i (f_i(x) - 1) + \sum_{j=1}^p \nu_j (h_j(x) - 1)$, where $\lambda \succeq 0$ are multipliers for the inequality constraints and $\nu$ is unrestricted for the equalities. The dual function is $g(\lambda, \nu) = \inf_{x > 0} L(x, \lambda, \nu)$, which is concave in $(\lambda, \nu)$ and can be computed explicitly for geometric programs, often resulting in an entropy-like form such as $g(\lambda) = \sum_k \lambda_k \log(\lambda_k / d_k) - \sum_k \lambda_k$ for normalized terms, where the $d_k$ involve the coefficients. The dual problem is then $\max_{\lambda \succeq 0, \nu} g(\lambda, \nu)$, whose optimal value relates to the primal through the duality gap, which vanishes at optimality. Strong duality holds under Slater's condition (existence of a strictly feasible point where $f_i(x) < 1$ for all $i$), ensuring that the primal and dual optimal values coincide and that global optima are attained efficiently.

Optimality in geometric programs is characterized by Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient due to convexity. Primal feasibility requires $f_i(x^*) \leq 1$ and $h_j(x^*) = 1$; dual feasibility mandates $\lambda^* \succeq 0$. Complementary slackness enforces $\lambda_i^* (f_i(x^*) - 1) = 0$ for each $i$. Stationarity demands $\nabla_x L(x^*, \lambda^*, \nu^*) = 0$, which for posynomials translates to weighted degree-matching: the exponents of active monomial terms sum to zero under the dual weights, reflecting resource-allocation interpretations. These conditions link primal solutions to dual sensitivities: if the inequality constraints are perturbed to $f_i(x) \leq u_i$, the optimal value $p^*(u)$ satisfies $\lambda_i^* = -\frac{\partial p^*}{\partial u_i}\big|_{u=\mathbf{1}}$, so each multiplier measures how much the optimum improves as its constraint is relaxed.
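This sensitivity statement can be sketched, under strong duality, as a first-order expansion of the perturbed optimal value:

$$p^\star(u) \;\approx\; p^\star(\mathbf{1}) \;-\; \sum_{i=1}^m \lambda_i^\star\,(u_i - 1),
\qquad
\lambda_i^\star = -\left.\frac{\partial p^\star}{\partial u_i}\right|_{u=\mathbf{1}},$$

where $p^\star(u)$ denotes the optimal value with the inequality constraints relaxed to $f_i(x) \leq u_i$; the multipliers thus act as shadow prices on the normalized constraints.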