Geometric programming
A geometric program (GP) is an optimization problem of the form

\[
\begin{array}{ll}
\text{minimize} & f_0(x) \\
\text{subject to} & f_i(x) \le 1, \quad i = 1, \ldots, m \\
& g_i(x) = 1, \quad i = 1, \ldots, p
\end{array}
\]

where $f_0, \ldots, f_m$ are posynomials and $g_1, \ldots, g_p$ are monomials. In the context of geometric programming (unlike standard mathematics), a monomial is a function from $\mathbb{R}^n_{>0}$ to $\mathbb{R}$ defined as

\[
x \mapsto c\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}
\]

where $c > 0$ and $a_i \in \mathbb{R}$. A posynomial is any sum of monomials.[1][2]
Geometric programming is closely related to convex optimization: any GP can be made convex by means of a change of variables.[2] GPs have numerous applications, including component sizing in IC design,[3][4] aircraft design,[5] maximum likelihood estimation for logistic regression in statistics, and parameter tuning of positive linear systems in control theory.[6]
Convex form
Geometric programs are not in general convex optimization problems, but they can be transformed to convex problems by a change of variables and a transformation of the objective and constraint functions. In particular, after performing the change of variables $y_i = \log x_i$ and taking the log of the objective and constraint functions, the functions $\log f_i(e^y)$, i.e., the log-transformed posynomials, become log-sum-exp functions, which are convex, and the functions $\log g_i(e^y)$, i.e., the log-transformed monomials, become affine. Hence, this transformation turns every GP into an equivalent convex program.[2] In fact, this log-log transformation can be used to convert a larger class of problems, known as log-log convex programming (LLCP), into an equivalent convex form.[7]
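To see this transformation in miniature, the following sketch (the function names and the sample posynomial are invented for illustration) numerically checks that the log of a monomial is affine in $y = \log x$, while the log of a posynomial satisfies the midpoint convexity inequality:

```python
import numpy as np

def monomial_log(y, c, a):
    # log of the monomial c * prod(x_i^a_i) at x = exp(y): affine in y
    return np.log(c) + a @ y

def posynomial_log(y, cs, A):
    # log of a posynomial with coefficients cs and exponent rows A:
    # the log-sum-exp form, convex in y
    return np.log(np.sum(cs * np.exp(A @ y)))

A = np.array([[1.0, -0.5], [2.0, 0.3]])  # exponents of two monomial terms
cs = np.array([1.5, 0.2])                # positive coefficients

rng = np.random.default_rng(0)
y1, y2 = rng.normal(size=2), rng.normal(size=2)
mid = 0.5 * (y1 + y2)

# Monomial: midpoint value equals the average of endpoint values (affine).
m = lambda y: monomial_log(y, 1.5, A[0])
assert np.isclose(m(mid), 0.5 * (m(y1) + m(y2)))

# Posynomial: midpoint value is at most the average of endpoint values (convex).
p = lambda y: posynomial_log(y, cs, A)
assert p(mid) <= 0.5 * (p(y1) + p(y2)) + 1e-12
```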
Software
Several software packages exist to assist with formulating and solving geometric programs.
- MOSEK is a commercial solver capable of solving geometric programs as well as other non-linear optimization problems.
- CVXOPT is an open-source solver for convex optimization problems.
- GPkit is a Python package for cleanly defining and manipulating geometric programming models; a number of example GP models written with this package are available in its documentation.
- GGPLAB is a MATLAB toolbox for specifying and solving geometric programs (GPs) and generalized geometric programs (GGPs).
- CVXPY is a Python-embedded modeling language for specifying and solving convex optimization problems, including GPs, GGPs, and LLCPs.[7] A minimal usage sketch follows this list.
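As an illustration of how such packages are used, here is a minimal sketch of solving a small GP with CVXPY's `gp=True` mode (the objective and constraints are chosen only for illustration, and one of the solvers shipped with CVXPY is assumed):

```python
import cvxpy as cp

x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)

objective = cp.Minimize(x / y)            # monomial objective
constraints = [
    2 <= x, x <= 3,                       # bounds are valid GP constraints
    x**2 + 3 * y / z <= cp.sqrt(y),       # posynomial <= monomial
    x / y == z**2,                        # monomial equality
]
problem = cp.Problem(objective, constraints)
problem.solve(gp=True)                    # solve as a geometric program
print(problem.value, x.value, y.value, z.value)
```

Passing `gp=True` tells CVXPY to apply the log-log transformation internally, so the user writes the model directly in posynomial form.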
References
[edit]- ^ Richard J. Duffin; Elmor L. Peterson; Clarence Zener (1967). Geometric Programming. John Wiley and Sons. p. 278. ISBN 0-471-22370-0.
- ^ a b c S. Boyd, S. J. Kim, L. Vandenberghe, and A. Hassibi. A Tutorial on Geometric Programming. Retrieved 20 October 2019.
- ^ M. Hershenson, S. Boyd, and T. Lee. Optimal Design of a CMOS Op-amp via Geometric Programming. Retrieved 8 January 2019.
- ^ S. Boyd, S. J. Kim, D. Patil, and M. Horowitz. Digital Circuit Optimization via Geometric Programming. Retrieved 20 October 2019.
- ^ W. Hoburg and P. Abbeel. Geometric programming for aircraft design optimization. AIAA Journal 52.11 (2014): 2414-2426.
- ^ Ogura, Masaki; Kishida, Masako; Lam, James (2020). "Geometric Programming for Optimal Positive Linear Systems". IEEE Transactions on Automatic Control. 65 (11): 4648–4663. arXiv:1904.12976. Bibcode:2020ITAC...65.4648O. doi:10.1109/TAC.2019.2960697. ISSN 0018-9286. S2CID 140222942.
- ^ a b A. Agrawal, S. Diamond, and S. Boyd. Disciplined Geometric Programming. Retrieved 8 January 2019.
Geometric programming
Introduction
Definition
Geometric programming (GP) is a mathematical optimization technique for solving problems in which the objective function and inequality constraints are posynomials, while equality constraints are monomials, with all variables restricted to positive real numbers.[7] The general form of a GP is to minimize the posynomial objective $f_0(x)$ subject to the posynomial inequality constraints $f_i(x) \le 1$ for $i = 1, \ldots, m$ and the monomial equality constraints $g_j(x) = 1$ for $j = 1, \ldots, p$, where $x \in \mathbb{R}^n_{>0}$.[7] Although GPs appear non-convex due to their polynomial structure, they possess a hidden convexity that allows transformation into convex problems, enabling efficient global optimization using standard convex solvers.[7] This tractability makes GPs valuable for engineering applications involving non-convex, multiplicative models, such as system design and resource allocation.[8] A simple example arises in digital circuit design, where the goal is to minimize total power consumption $P(x) = \sum_i c_i x_i$, with $x_i$ denoting gate scaling factors and $c_i$ constants, subject to posynomial constraints ensuring circuit delay does not exceed a target $D^{\max}$ and total area remains below a limit $A^{\max}$.[7]

Historical Development
Geometric programming originated in the early 1960s as a method for optimizing posynomial functions in engineering design problems, particularly for minimizing total cost in equipment design. Clarence Zener introduced the foundational approach in 1961, leveraging the geometric-arithmetic mean inequality to solve posynomial optimization problems efficiently.[9] This work was expanded through collaborations, leading to the seminal 1967 book Geometric Programming: Theory and Application by Richard J. Duffin, Elmor L. Peterson, and Clarence Zener, which formalized the theory, duality results, and algorithms for posynomial programs.[10] In the 1970s, the field gained practical traction with Zener's 1971 book Engineering Design by Geometric Programming, which applied the technique to real-world design and cost optimization challenges, emphasizing its utility in engineering contexts.[11] Subsequent works by researchers like Charles S. Beightler and Donald T. Phillips further developed applications and computational methods, solidifying geometric programming as a tool for nonlinear optimization in industrial settings.[12]

A major resurgence occurred in the 2000s, with Stephen Boyd and Lieven Vandenberghe's 2004 book Convex Optimization highlighting the connection between geometric programs and convex optimization via logarithmic transformations, making the framework more accessible to modern optimization practitioners. This was complemented by their 2007 tutorial paper, which provided comprehensive modeling guidelines and examples, boosting adoption in diverse fields.[4]

Extensions in the 2010s broadened geometric programming to signomial programs, allowing negative coefficients in posynomials while maintaining tractability through approximation methods like majorization-minimization algorithms.[13] Key contributions include Hunter and Lange's 2010 work on MM algorithms for signomial programming, enabling solutions to more general non-convex problems.[14]

By the 2020s, geometric programming integrated with machine learning, notably in hardware-aware neural architecture search, where mixed-integer geometric programs optimize network topologies under resource constraints.[15] Concurrently, robust geometric programming addressed uncertainties by incorporating worst-case scenarios over polyhedral or ellipsoidal uncertainty sets, with advancements like tractable approximations enhancing reliability in uncertain environments.[16] These developments, up to 2025, underscore geometric programming's evolving role in optimization under complexity and variability.[17]

Mathematical Foundations
Monomials and Posynomials
In geometric programming, a monomial is defined as a function of the form $f(x) = c\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}$, where $c > 0$ is a positive constant coefficient, each $a_i$ is a real exponent, and $x \in \mathbb{R}^n$ with all $x_i > 0$.[4] This structure ensures the function is positive and homogeneous (of degree $\sum_i a_i$), forming the basic building block for more complex expressions in the field.[4] For example, $f(x_1, x_2) = 2.3\, x_1^2 x_2^{-0.15}$ is a monomial with coefficient 2.3 and exponents 2 and -0.15.[4]

Monomials possess several key algebraic properties that make them suitable for optimization. They are closed under multiplication, as the product of two monomials yields another monomial whose exponents are the sums of the corresponding exponents from each.[4] Similarly, monomials are closed under inversion (yielding a monomial with negated exponents) and under powers (raising a monomial to a power $\gamma$ results in another monomial with scaled exponents $\gamma a_i$).[4] Additionally, monomials are log-log affine: $\log f$ is an affine function of the variables $y_i = \log x_i$, which facilitates their use in convex reformulations.[4]

A posynomial extends the monomial by allowing a finite sum of such terms: $f(x) = \sum_{k=1}^K c_k\, x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}}$, where each $c_k > 0$ and $a_{ik} \in \mathbb{R}$, again defined for $x_i > 0$.[4] Posynomials inherit positivity from their monomial components and are prevalent in engineering models due to their ability to represent physical relationships like costs or delays.[4] For instance, in VLSI circuit design, the propagation delay through an inverter can be approximated as a posynomial in the transistor width $W$ and length $L$, with constants derived from process parameters.[4] Another simple example is $1 + x_1^2 x_2^{0.5}$, combining a constant monomial and a non-constant one.[4]

Posynomials exhibit closure under addition (by definition, as sums of monomials) and under multiplication or division by monomials, producing another posynomial. They are also closed under multiplication with other posynomials, since expanding the product yields a sum of monomials with positive coefficients, though the number of terms grows with the cross terms.[4] They likewise support non-negative integer powers while remaining posynomials.[4] Regarding compositions, substituting a posynomial for a variable of another posynomial generally fails to yield a posynomial when fractional or negative exponents are involved, whereas substituting a monomial into a posynomial results in a posynomial, preserving the structure for hierarchical modeling.[4] These properties enable posynomials to approximate broader classes of functions in applications while maintaining tractability for geometric programs.[4]
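These closure rules are mechanical enough to check in code. Below is a hedged sketch (the data structures and names are invented for illustration) that represents a monomial as a coefficient plus an exponent vector and a posynomial as a list of monomials, with multiplication implemented by the exponent-addition rule described above:

```python
import numpy as np

class Monomial:
    """c * x1**a1 * ... * xn**an, with c > 0."""
    def __init__(self, c, a):
        assert c > 0, "monomial coefficients must be positive"
        self.c, self.a = float(c), np.asarray(a, dtype=float)

    def __mul__(self, other):
        # closure under multiplication: coefficients multiply, exponents add
        return Monomial(self.c * other.c, self.a + other.a)

    def __pow__(self, gamma):
        # closure under powers: exponents scale by gamma
        return Monomial(self.c ** gamma, gamma * self.a)

    def __call__(self, x):
        return self.c * np.prod(np.asarray(x, dtype=float) ** self.a)

def posynomial(terms):
    """A posynomial is a list of monomials; evaluate by summing the terms."""
    return lambda x: sum(m(x) for m in terms)

m1 = Monomial(2.3, [2.0, -0.15])
m2 = Monomial(0.5, [1.0, 1.0])
f = posynomial([m1, m2])
x = [1.5, 2.0]
# The product of monomials is again a monomial, and evaluation agrees:
assert np.isclose((m1 * m2)(x), m1(x) * m2(x))
print(f(x))
```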
Signomials and Generalizations

A signomial is a function that extends the concept of a posynomial by allowing monomial coefficients to be negative, expressed as $f(x) = \sum_{k=1}^K c_k\, x_1^{a_{1k}} \cdots x_n^{a_{nk}}$, where $c_k \in \mathbb{R}$ (possibly negative), $a_{ik} \in \mathbb{R}$, and $x_i > 0$.[7] This formulation arises in signomial programs, which minimize or constrain signomials, unlike standard geometric programs that require posynomials with positive coefficients.[7] Signomials introduce non-convexity due to the negative terms, making global optimization intractable in general and limiting efficient solutions to local optima.[7] These challenges stem from the loss of the logarithmic convexity property that enables convex reformulation in standard geometric programming.[7]

To address signomials within geometric programming frameworks, successive convex approximation (SCA) iteratively approximates them as posynomials using local monomial fits, such as first-order Taylor expansions around current iterates, followed by solving the resulting geometric program and updating with trust regions.[7] A logarithmic change of variables can aid in handling the structure, but SCA is essential for convergence to local optima.[7]

Generalizations of geometric programming incorporating signomials include max-min geometric programs, which maximize the minimum of posynomials (e.g., for fair resource allocation, by introducing an auxiliary variable for the minimum value), mixed-integer geometric programs that add discrete constraints solvable via heuristics like branch-and-bound, and robust geometric programs that account for uncertainties in the problem data using ellipsoidal uncertainty sets, approximated tractably via piecewise-linear bounds on log-sum-exp functions.[7][18] For example, in resource allocation problems like wireless power control, signomials model competing terms such as desired signal gains (positive) versus interference (effectively negative), optimized via SCA to balance total power and signal-to-interference ratios.[7] These extensions play a role in formulating generalized geometric programs with signomial objectives or constraints.[7]
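A core primitive in these successive approximations is the local monomial fit of a posynomial. The following sketch (the function name and example data are invented for illustration; the formula is the standard weighted geometric-mean condensation) computes a monomial that agrees with a posynomial's value and gradient at a given point:

```python
import numpy as np

def monomial_fit(cs, A, x0):
    """Best local monomial approximation of the posynomial
    f(x) = sum_k cs[k] * prod_i x_i**A[k, i] around a point x0 > 0.

    Returns (c_hat, alpha) so that f_hat(x) = c_hat * prod_i x_i**alpha[i]
    matches f's value and gradient at x0.
    """
    x0 = np.asarray(x0, dtype=float)
    terms = cs * np.prod(x0 ** A, axis=1)  # value of each monomial term at x0
    f0 = terms.sum()
    weights = terms / f0                   # normalized term weights at x0
    alpha = weights @ A                    # weighted average of exponent rows
    c_hat = f0 / np.prod(x0 ** alpha)
    return c_hat, alpha

# Example: fit f(x, y) = x*y + 2*sqrt(x) around the point (1, 1).
cs = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0], [0.5, 0.0]])
c_hat, alpha = monomial_fit(cs, A, [1.0, 1.0])
print(c_hat, alpha)  # f_hat(1, 1) = 3.0, matching f(1, 1) = 3.0
```

By the arithmetic-geometric mean inequality this fit underestimates the posynomial everywhere, which is what makes the resulting approximate constraints conservative.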
Problem Formulation

Standard Geometric Programs
A standard geometric program (GP) is formulated as the optimization problem of minimizing a posynomial objective function subject to posynomial inequality constraints and monomial equality constraints, with all variables positive. Specifically, it takes the form

\[
\begin{array}{ll}
\text{minimize} & f_0(x) \\
\text{subject to} & f_i(x) \le 1, \quad i = 1, \ldots, m \\
& g_j(x) = 1, \quad j = 1, \ldots, p
\end{array}
\]

where $f_0, \ldots, f_m$ are posynomials and $g_1, \ldots, g_p$ are monomials.[4] This canonical form, introduced in the foundational work on geometric programming, ensures the problem can be transformed into a convex optimization problem via a logarithmic change of variables.

The normalization of inequality constraints to $f_i(x) \le 1$ and equality constraints to $g_j(x) = 1$ exploits the homogeneity and scaling properties of monomials and posynomials. Monomials are homogeneous functions, satisfying $g(tx) = t^d g(x)$ for some degree $d$, which allows arbitrary scaling of the right-hand side of a monomial equality or inequality to 1 without loss of generality. For posynomial inequalities of the form $f(x) \le \sigma$ where $\sigma > 0$, normalization to $f(x)/\sigma \le 1$ is achieved by dividing through by $\sigma$, preserving the posynomial structure on the left and leveraging the fact that posynomials are closed under positive scaling; the same device applies when the right-hand side is a monomial, since a posynomial divided by a monomial is again a posynomial. This standardization simplifies implementation in solvers and aligns with the logarithmic transformation that converts the problem to convex form.[4]

Feasibility of standard GPs depends on the interplay between inequality and equality constraints. Posynomial inequalities are generally feasible for sufficiently large or small positive values of the variables, as posynomials can approach 0 or grow without bound near the boundaries of the positive orthant, providing flexibility in degrees of freedom. However, monomial equality constraints fix the scaling of variables along certain directions, effectively reducing the dimensionality by constraining the homogeneous degrees and potentially rendering the problem infeasible if the equalities conflict with the inequalities.[4]

A representative application arises in optimizing transistor sizing within integrated circuits, where delays and areas are modeled as posynomials. For instance, consider minimizing the circuit delay $D(w)$, approximated via the Elmore delay model as a posynomial (with $w$ denoting the vector of transistor widths), subject to posynomial constraints on total area $A(w) \le A^{\max}$ and power consumption $P(w) \le P^{\max}$, along with monomial equalities fixing certain widths relative to others. This formulation captures the trade-offs in circuit design, where increasing widths reduces resistance but increases parasitic capacitance and area.[8][4]

Standard GPs are computationally tractable, solvable in polynomial time using interior-point methods after reformulation as convex programs via the logarithmic transformation. This enables efficient global optimization for problems with thousands of variables and constraints on modern hardware.[4]

Generalized Geometric Programs
Generalized geometric programs (GGPs) extend the standard geometric programming framework to accommodate more flexible constraint structures, including signomial inequalities of the form $s_i(x) \le 0$, where the $s_i$ are signomials, and monomial equalities $g_j(x) = 1$, with the $g_j$ monomials.[7] These programs may also incorporate operations such as maximum or minimum functions within the objective or constraints, transforming them into generalized posynomials that can be reformulated into standard geometric programs via auxiliary variables, as in the worked reformulation below.[7] Unlike standard GPs restricted to posynomials, GGPs allow negative coefficients in signomials, enabling modeling of subtractive or comparative terms but introducing non-convexity.[7]

Non-posynomial equalities in GGPs are typically handled through reformulations using auxiliary variables or successive approximations, such as relaxing the equality to an inequality and iteratively tightening bounds based on monotonicity assumptions.[7] For instance, a non-monomial equality $h(x) = 1$ can be approximated locally by a monomial fit $\hat h(x) = 1$ around a current iterate, ensuring feasibility within a trust region.[7]

Mixed forms of GGPs integrate additional constraint types, such as linear inequalities alongside posynomial or signomial terms, resulting in mixed linear geometric programs that remain convex after logarithmic transformation.[7] Multi-objective GGPs, involving multiple signomial or posynomial objectives, are often addressed via weighted-sum scalarization to generate Pareto-optimal solutions, balancing trade-offs like performance versus resource use.[7]

A representative example arises in low-power VLSI design, where signomial programming optimizes gate sizing to trade off circuit speed against energy consumption and area, subject to signomial delay and power constraints approximated sequentially.[7] Here, the objective might minimize total power while constraining maximum delay via signomials that include both additive and subtractive terms for leakage and dynamic effects.[7] GGPs sacrifice the global optimality guarantees of standard GPs due to the non-convex nature of signomials, relying instead on local solvers like interior-point methods applied to successive convex approximations, which may converge to local minima.[7]
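As a concrete instance of the auxiliary-variable device mentioned above (a standard reformulation, not taken verbatim from any cited source), a constraint bounding the maximum of two posynomials $f_1, f_2$ by a monomial $g$ can be rewritten with a new positive variable $t$:

\[
\max\{f_1(x),\, f_2(x)\} \le g(x)
\quad\Longleftrightarrow\quad
\frac{f_1(x)}{t} \le 1, \qquad \frac{f_2(x)}{t} \le 1, \qquad \frac{t}{g(x)} \le 1,
\]

where each rewritten constraint is a posynomial (or monomial) bounded by 1 in the variables $(x, t)$, so the enlarged problem is again a standard GP.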
Convex Optimization Framework

Logarithmic Transformation
The logarithmic transformation is a change of variables that converts a standard geometric program (GP) into an equivalent convex optimization problem. Specifically, for variables $x_i > 0$, substitute $y_i = \log x_i$ (or equivalently, $x_i = e^{y_i}$), which maps the positive orthant to the full real space. This substitution converts the non-convex structure of GPs into a convex one by exploiting the properties of logarithms and exponentials.[19]

Under this substitution, a monomial $g(x) = c\, x_1^{a_1} \cdots x_n^{a_n}$, where $c > 0$ and $a \in \mathbb{R}^n$, becomes $\log g(e^y) = \log c + a^T y$, which is an affine function of $y$. Thus monomials, originally multiplicative in $x$, linearize in the logarithmic domain.[19][4]

For a posynomial $f(x) = \sum_{k=1}^K c_k\, x_1^{a_{1k}} \cdots x_n^{a_{nk}}$, the transformation yields $\log f(e^y) = \log \sum_{k=1}^K e^{a_k^T y + b_k}$, where $b_k = \log c_k$. This log-sum-exp form is convex in $y$, as it is the composition of the convex log-sum-exp function with affine functions.[19][4]

A standard GP, minimizing a posynomial objective subject to posynomial inequality constraints and monomial equality constraints, transforms fully into a convex problem in $y$. The objective becomes $\log f_0(e^y)$, where $f_0$ is the objective posynomial. Inequality constraints map to $\log f_i(e^y) \le 0$. Equality constraints become affine equalities $a_j^T y + b_j = 0$. The resulting problem is convex, with a convex objective and constraints.[19][4]

This transformation preserves feasibility, optimality, and duality properties of the original GP, as the mapping is bijective and monotonic. Solutions in $y$ map back to optimal $x$ via $x_i = e^{y_i}$, recovering the global optimum of the non-convex GP.[19][4]

As an example, consider minimizing the posynomial $f(x) = x + x^{-1}$ over $x > 0$. Substituting $x = e^y$ gives $\log f(e^y) = \log(e^y + e^{-y})$. The problem is to minimize this convex function over $y \in \mathbb{R}$. The solution is $y^\star = 0$, mapping to $x^\star = 1$, with optimal value 2.[19]
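The example above is easy to verify numerically. The following sketch (the solver choice and tolerances are incidental) minimizes the transformed convex function with SciPy and maps the result back to the original variable:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Transformed objective: log f(e^y) = log(e^y + e^-y), convex in y
F = lambda y: np.logaddexp(y, -y)

res = minimize_scalar(F)           # unconstrained 1-D convex minimization
y_star = res.x
x_star = np.exp(y_star)            # map back: x = e^y
print(x_star, np.exp(res.fun))     # ~1.0 and ~2.0, the optimum of x + 1/x
```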
Convex Form and Duality

After the logarithmic transformation (with $y_i = \log x_i$), a standard geometric program is recast as a convex optimization problem. The objective function becomes $\tilde f_0(y) = \log \sum_{k=1}^{K_0} e^{a_{0k}^T y + b_{0k}}$, where each $a_{0k} \in \mathbb{R}^n$ and $b_{0k} = \log c_{0k}$, a composition of the log-sum-exp function with affine functions, ensuring convexity.[19] Inequality constraints transform to $\tilde f_i(y) = \log \sum_{k=1}^{K_i} e^{a_{ik}^T y + b_{ik}} \le 0$ for $i = 1, \ldots, m$, each also convex due to the log-sum-exp structure. Equality constraints, if present as monomials, become linear: $a_j^T y + b_j = 0$. This convex form allows efficient global solution via interior-point methods, with the log-sum-exp functions providing smooth, differentiable approximations to max functions.[19]

The Lagrangian dual provides a theoretical framework for analyzing geometric programs, yielding a concave maximization problem that lower-bounds the primal optimum. The Lagrangian is $L(y, \lambda, \nu) = \tilde f_0(y) + \sum_{i=1}^m \lambda_i \tilde f_i(y) + \sum_{j=1}^p \nu_j (a_j^T y + b_j)$, where the $\lambda_i \ge 0$ are multipliers for the inequality constraints and the $\nu_j$ are unrestricted for the equalities. The dual function is $g(\lambda, \nu) = \inf_y L(y, \lambda, \nu)$, which is concave in $(\lambda, \nu)$ and can be computed explicitly for geometric programs, often resulting in an entropy-like form such as $\sum_k \delta_k \log(c_k / \delta_k)$ for normalized term weights $\delta_k$, where the $c_k$ are the posynomial coefficients. The dual problem is then to maximize $g(\lambda, \nu)$ subject to $\lambda \ge 0$, whose optimal value bounds the primal-dual gap, which vanishes at optimality. Strong duality holds under Slater's condition (existence of a strictly feasible point with $\tilde f_i(y) < 0$ for all $i$), ensuring the primal and dual optimal values coincide and attaining global optima efficiently.[19]

Optimality in geometric programs is characterized by Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient due to convexity. Primal feasibility requires $\tilde f_i(y) \le 0$ and $a_j^T y + b_j = 0$; dual feasibility mandates $\lambda_i \ge 0$. Complementary slackness enforces $\lambda_i \tilde f_i(y) = 0$ for each $i$. Stationarity demands $\nabla \tilde f_0(y) + \sum_{i=1}^m \lambda_i \nabla \tilde f_i(y) + \sum_{j=1}^p \nu_j a_j = 0$, which for posynomials translates to weighted degree-matching: the exponents of active monomial terms sum to zero under dual weights, reflecting resource allocation interpretations. These conditions link primal solutions to dual sensitivities, such as $\lambda_i^* = -\partial p^* / \partial u_i$, where $p^*$ is the optimal value and $u_i$ is a perturbation of the $i$-th constraint.[19]

For signomial programs, which generalize geometric programs by allowing negative coefficients in constraints (e.g., $f_i(x) - h_i(x) \le 0$ with $f_i$ and $h_i$ posynomials), the problem is nonconvex, and exact duality does not hold. Instead, approximate duality is achieved via successive convex approximations, where each such constraint is handled by locally approximating a posynomial with a monomial fit around an iterate $x^{(t)}$, such as $\hat h(x) = h(x^{(t)}) \prod_{l=1}^n (x_l / x_l^{(t)})^{\alpha_l}$ with $\alpha_l = \frac{x_l^{(t)}}{h(x^{(t)})} \frac{\partial h}{\partial x_l}(x^{(t)})$, yielding a sequence of solvable geometric programs. This iterative condensation converges to a local optimum in practice, though global guarantees require additional techniques like global search.[4]

Solution Methods
Algorithms for Solving GPs
Geometric programs (GPs) in standard form can be solved globally and efficiently using interior-point methods, which exploit the convexity of the logarithmically transformed problem to apply path-following algorithms based on self-concordant barrier functions.[4] The log-sum-exp barrier function, arising from the posynomial constraints after transformation, is self-concordant, enabling polynomial-time convergence with high reliability and no need for initial point selection or parameter tuning.[4] These methods typically solve problems with thousands of variables and constraints in seconds on modern hardware, making them suitable for practical engineering applications.[4]

Primal-dual interior-point methods further enhance efficiency by simultaneously solving the primal GP and its dual, formulated via convex duality, through Newton steps on the Karush-Kuhn-Tucker (KKT) conditions.[4] These algorithms generate iterates that approximate the central path, with each step reducing a potential function that measures proximity to optimality. For a GP with a self-concordant barrier of parameter $\nu$, the methods achieve $\epsilon$-optimality in $O(\sqrt{\nu}\,\log(1/\epsilon))$ iterations, each iteration involving the solution of a linear system via Newton's method.

For generalized geometric programs (GGPs), which include signomial terms with negative coefficients, successive approximation algorithms convert the nonconvex problem into a sequence of convex posynomial approximations.[4] At each iteration, signomials are replaced by tractable surrogates obtained from local monomial fits (first-order expansions in the log domain) around the current point, ensuring the surrogate constraint is convex and conservative; the resulting GP is solved via interior-point methods, and the process iterates until convergence.[4] This approach yields local optima for GGPs, with global optimality guaranteed under additional conditions like unimodality.[20] A sketch of such a loop appears below.

Gradient-based methods, such as projected gradient descent or barrier methods, are applicable for large-scale GPs after logarithmic transformation, projecting iterates onto the feasible set defined by exponential cone constraints.[21] These techniques are particularly useful when interior-point methods become computationally expensive due to high dimensionality, offering scalable updates via gradient computations of the smooth convex objective.[21]

As an illustrative example, consider optimizing a CMOS operational amplifier circuit, modeled as a GP minimizing power subject to posynomial constraints on gain, bandwidth, and area.[22] Using barrier methods within an interior-point framework, the design variables (transistor widths and lengths) are iteratively adjusted by solving the central path equations, achieving a 20% power reduction compared to hand-designed circuits while meeting specifications.[22]
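To make the successive-approximation recipe concrete, here is a hedged sketch of an SCA loop for a toy signomial constraint, using CVXPY's GP mode as the inner solver (the problem data, starting point, and stopping rule are invented for illustration; real implementations add trust regions and feasibility safeguards):

```python
import cvxpy as cp

# Toy signomial program:
#   minimize 1/(x*y)  s.t.  x + y <= 10  and  x + y >= 6.
# The lower bound "x + y >= 6" is not GP-compatible, so we replace the
# posynomial x + y by its local monomial underestimate at each iterate;
# since the fit underestimates x + y, every approximate solution is feasible.

x, y = cp.Variable(pos=True), cp.Variable(pos=True)
x0, y0 = 1.0, 4.0                      # arbitrary feasible starting point

for it in range(10):
    s = x0 + y0
    w1, w2 = x0 / s, y0 / s            # normalized term weights at the iterate
    # Best local monomial fit of x + y around (x0, y0):
    h_hat = s * (x / x0) ** w1 * (y / y0) ** w2
    prob = cp.Problem(
        cp.Minimize(x ** -1 * y ** -1),
        [x + y <= 10, 6 <= h_hat],     # both are valid GP constraints
    )
    prob.solve(gp=True)
    if abs(x.value - x0) + abs(y.value - y0) < 1e-6:
        break                          # iterates have stopped moving
    x0, y0 = x.value, y.value

print(x0, y0, 1 / (x0 * y0))           # converges near x = y = 5
```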
Software Implementations

Software implementations for geometric programming (GP) have evolved to support both standard and generalized forms, leveraging convex optimization frameworks to handle posynomials and signomials efficiently. Open-source tools dominate due to their accessibility and integration with modern programming ecosystems, while commercial solvers offer robust scalability for industrial applications. These tools typically allow users to model problems using high-level syntax for monomials and posynomials, automatically transforming them into convex forms solvable via interior-point methods or other algorithms.

In Python, CVXPY provides comprehensive support for disciplined geometric programming (DGP), enabling users to formulate standard GPs directly with posynomial objectives and constraints, and to approximate generalized GPs (GGPs) using techniques like signomial relaxations. It interfaces with solvers such as ECOS, SCS, and MOSEK, supporting problems with thousands of variables on standard hardware. CVXPY's ease of use stems from its NumPy-like syntax for defining posynomials, making it suitable for rapid prototyping in engineering and machine learning contexts. As of August 2024, version 1.5.3 includes improved sparse array support and DGP capabilities, but no native GPU solvers; approximations for GGPs are supported via interfaces to solvers like SCS.[23]

Another Python-based option is GPkit, designed specifically for GP modeling in engineering disciplines such as aerospace and mechanical design, where it excels in handling hierarchical and multi-objective formulations. GPkit supports both standard GPs and approximations for signomials, with built-in features for sensitivity analysis and uncertainty propagation. Its most recent release, version 1.1.0 (January 2022), supports problems with hundreds of variables and integrates with visualization tools like Matplotlib for design trade-offs; as of January 2026, there have been no major updates since.[24]

Among commercial solvers, MOSEK offers direct support for geometric programming through its geometric cone programming (GCP) module, which models GPs as conic problems solvable with high precision and efficiency, handling instances with up to millions of terms in recent versions such as 10.1.x (as of 2024). MOSEK's interior-point solver is particularly effective for large-scale GPs, providing warm starts and parallel processing, and it integrates seamlessly with CVXPY and MATLAB. MOSEK's OptServer provides remote optimization capabilities via a REST API for distributed computing environments.[25]

Gurobi provides indirect support for GPs by reformulating them as mixed-integer second-order cone programs (MISOCPs), allowing solution of both continuous and discrete GP variants with strong branching and cutting-plane techniques. This approach is useful for combinatorial aspects in design optimization, though it requires manual transformation; Gurobi Optimizer version 11.0 (as of 2024) includes improvements in SOCP and mixed-integer performance, enabling solutions to GP-derived problems with over 10,000 variables.[26]

In MATLAB, the legacy CVX toolbox remains widely used for basic GPs, supporting posynomial formulations via a disciplined convex programming (DCP) framework and interfacing with solvers like SeDuMi and SDPT3. While not actively maintained since 2012, it handles standard GPs efficiently for educational and small-scale applications, with syntax that abstracts the logarithmic transformations.
GGPLAB, an older MATLAB toolbox from 2004, specializes in GGPs and signomial problems, using successive convex approximations and local search heuristics to find solutions. It is less scalable for very large instances compared to modern tools but remains valuable for research into non-convex extensions, supporting up to a few hundred variables with customizable approximation parameters.

| Tool | Language | GP Support | GGP Handling | Scalability | Key Features |
|---|---|---|---|---|---|
| CVXPY | Python | Standard (DGP) | Approximations | Medium-large (1000s vars) | Easy posynomial syntax, improved sparse support and DGP capabilities, ML integration via cvxpylayers[23] |
| GPkit | Python | Standard & approx. | Approximations | Medium (100s vars) | Engineering focus, sensitivity analysis (no major updates as of January 2026)[24] |
| MOSEK | Multi | Standard (GCP) | Via cones | Large (millions terms) | High precision, OptServer for remote solving[25] |
| Gurobi | Multi | Reformulation (MISOCP) | Reformulation | Large (10,000+ vars) | Integer support, fast branching (v11.0, 2024)[26] |
| CVX | MATLAB | Standard (DCP) | Limited | Small-medium | Educational, legacy compatibility |
| GGPLAB | MATLAB | Standard | Approximations for signomials | Small (100s vars) | Research-oriented heuristics |
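To show what the GPkit row above looks like in practice, here is a hedged sketch assuming GPkit's documented Variable/Model/solve interface (the sizing problem and the printed values are invented for illustration):

```python
from gpkit import Variable, Model

# Hypothetical sizing problem: minimize the cost x + 2*y
# subject to the monomial constraint x*y >= 10.
x = Variable("x")
y = Variable("y")
m = Model(x + 2 * y, [x * y >= 10])
sol = m.solve(verbosity=0)
print(sol.table())   # optimal cost ~8.94 at x ~4.47, y ~2.24
```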
