Linear complementarity problem
In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses the well-known quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968.[1][2][3]
Formulation
Given a real matrix M and vector q, the linear complementarity problem LCP(q, M) seeks vectors z and w which satisfy the following constraints:
- $w = Mz + q$
- $w \geq 0$, $z \geq 0$ (that is, each component of these two vectors is non-negative)
- $z^T w = 0$, or equivalently $\sum_i w_i z_i = 0$. This is the complementarity condition, since it implies that, for all $i$, at most one of $w_i$ and $z_i$ can be positive.
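These three conditions are straightforward to check numerically. A minimal sketch in Python (the 2×2 positive-definite instance here is illustrative, not taken from the article):

```python
import numpy as np

def is_lcp_solution(z, M, q, tol=1e-9):
    """Check the LCP(q, M) conditions for a candidate vector z."""
    w = M @ z + q                    # slack vector w = Mz + q
    return (np.all(w >= -tol) and    # w >= 0
            np.all(z >= -tol) and    # z >= 0
            abs(z @ w) <= tol)       # complementarity z^T w = 0

# Illustrative positive-definite instance (assumed for this example).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
print(is_lcp_solution(np.array([4/3, 7/3]), M, q))  # True: here w = Mz + q = 0
print(is_lcp_solution(np.zeros(2), M, q))           # False: w = q has negative entries
```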
A sufficient condition for existence and uniqueness of a solution to this problem is that M be symmetric positive-definite. If M is such that LCP(q, M) has a solution for every q, then M is a Q-matrix. If M is such that LCP(q, M) has a unique solution for every q, then M is a P-matrix. Both of these characterizations are sufficient and necessary.[4]
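The P-matrix property can be tested directly from its definition by enumerating principal minors, a sketch that is practical only for small dimensions:

```python
import numpy as np
from itertools import combinations

def is_p_matrix(M, tol=0.0):
    """Return True if every principal minor of M is positive."""
    n = M.shape[0]
    return all(np.linalg.det(M[np.ix_(idx, idx)]) > tol
               for k in range(1, n + 1)
               for idx in combinations(range(n), k))

print(is_p_matrix(np.array([[2.0, 1.0], [1.0, 2.0]])))   # True: positive definite
print(is_p_matrix(np.array([[0.0, 1.0], [-1.0, 0.0]])))  # False: zero 1x1 minors
```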
The vector w is a slack variable,[5] and so is generally discarded after z is found. As such, the problem can also be formulated as:
- $Mz + q \geq 0$
- $z \geq 0$
- $z^T(Mz + q) = 0$ (the complementarity condition)
Convex quadratic-minimization: Minimum conditions
Finding a solution to the linear complementarity problem is associated with minimizing the quadratic function

$f(z) = z^T(Mz + q)$

subject to the constraints

- $Mz + q \geq 0$
- $z \geq 0$

These constraints ensure that f is always non-negative. The minimum of f is 0 at z if and only if z solves the linear complementarity problem.
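A quick numerical check of this correspondence, using an assumed positive-definite instance:

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # illustrative instance
q = np.array([-5.0, -6.0])

def f(z):
    """Quadratic objective f(z) = z^T (Mz + q)."""
    return z @ (M @ z + q)

z_star = np.array([4/3, 7/3])   # solves this LCP: M z* + q = 0
z_feas = np.array([3.0, 3.0])   # feasible but not complementary: M z + q = (4, 3)
print(abs(f(z_star)) < 1e-9)    # True: the minimum 0 is attained at the LCP solution
print(f(z_feas) > 0)            # True: f is positive at non-solutions of the feasible set
```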
If M is positive definite, any algorithm for solving (strictly) convex QPs can solve the LCP. Specially designed basis-exchange pivoting algorithms, such as Lemke's algorithm and a variant of the simplex algorithm of Dantzig, have been used for decades. Besides having polynomial time complexity, interior-point methods are also effective in practice.
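Beyond pivoting and interior-point methods, simple projection-type iterations also work on well-behaved instances. A hedged sketch for a symmetric positive-definite M (illustrative data; the fixed step size 0.25 is chosen by hand for this instance):

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite (assumed)
q = np.array([-5.0, -6.0])

z = np.zeros(2)
for _ in range(500):
    # Projected-gradient step: project z - gamma*(Mz + q) back onto the orthant.
    z = np.maximum(0.0, z - 0.25 * (M @ z + q))

w = M @ z + q
print(np.allclose(z, [4/3, 7/3]))        # True: converged to the LCP solution
print(abs(z @ w) < 1e-9)                 # True: complementarity holds
```

For this M (eigenvalues 1 and 3) the map is a contraction with factor 0.75, so the iteration converges; in general the step size must be tuned to the spectrum of M.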
Also, a quadratic-programming problem stated as minimize $f(x) = c^T x + \tfrac{1}{2} x^T Q x$ subject to $Ax \geq b$ as well as $x \geq 0$, with Q symmetric, is the same as solving the LCP with

$q = \begin{bmatrix} c \\ -b \end{bmatrix}, \qquad M = \begin{bmatrix} Q & -A^T \\ A & 0 \end{bmatrix}$
This is because the Karush–Kuhn–Tucker conditions of the QP problem can be written as:

$v = Qx - A^T\lambda + c$
$s = Ax - b$
$x, \lambda, v, s \geq 0$
$x^T v = \lambda^T s = 0$

with v the Lagrange multipliers on the non-negativity constraints, λ the multipliers on the inequality constraints, and s the slack variables for the inequality constraints. The fourth condition derives from the complementarity of each group of variables (x, s) with its set of KKT vectors (optimal Lagrange multipliers) being (v, λ). In that case,

$z = \begin{bmatrix} x \\ \lambda \end{bmatrix}, \qquad w = \begin{bmatrix} v \\ s \end{bmatrix}$
If the non-negativity constraint on the x is relaxed, the dimensionality of the LCP problem can be reduced to the number of the inequalities, as long as Q is non-singular (which is guaranteed if it is positive definite). The multipliers v are no longer present, and the first KKT condition can be rewritten as:

$Qx = A^T\lambda - c$

or:

$x = Q^{-1}(A^T\lambda - c)$

Pre-multiplying the two sides by A and subtracting b we obtain:

$Ax - b = AQ^{-1}(A^T\lambda - c) - b$

The left side, due to the second KKT condition, is s. Substituting and reordering:

$s = (AQ^{-1}A^T)\lambda + (-AQ^{-1}c - b)$

Calling now

$M := AQ^{-1}A^T, \qquad q := -AQ^{-1}c - b$

we have an LCP, due to the relation of complementarity between the slack variables s and their Lagrange multipliers λ. Once we solve it, we may obtain the value of x from λ through the first KKT condition.
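A small numerical sketch of this reduction (data invented for illustration; the single inequality makes the resulting LCP scalar, so it can be solved in closed form):

```python
import numpy as np

# Illustrative QP: minimize 0.5 x^T Q x + c^T x  subject to  A x >= b  (x free).
Q = np.eye(2)
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([3.0])

Qinv = np.linalg.inv(Q)
M = A @ Qinv @ A.T           # reduced LCP matrix  M = A Q^{-1} A^T
q = -A @ Qinv @ c - b        # reduced LCP vector  q = -A Q^{-1} c - b

# One inequality -> scalar LCP in the multiplier lam: s = M lam + q, lam*s = 0.
lam = np.array([max(0.0, -q[0] / M[0, 0])])
x = Qinv @ (A.T @ lam - c)   # recover x from the first KKT condition
print(x)                     # [1.5 1.5]
```

Here the unconstrained minimizer (1, 1) violates x1 + x2 ≥ 3, the constraint becomes active with multiplier λ = 1/2, and the recovered optimum is x = (1.5, 1.5).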
Finally, it is also possible to handle additional equality constraints:

$A_{eq}x = b_{eq}$

This introduces a vector of Lagrange multipliers μ, with the same dimension as $b_{eq}$. Collecting the stationarity and equality conditions into the block system $\begin{bmatrix} Q & -A_{eq}^T \\ A_{eq} & 0 \end{bmatrix} \begin{bmatrix} x \\ \mu \end{bmatrix} = \begin{bmatrix} A^T\lambda - c \\ b_{eq} \end{bmatrix}$, it is easy to verify that the M and q for the LCP system are now expressed as:

$M := \begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} Q & -A_{eq}^T \\ A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} A^T \\ 0 \end{bmatrix}, \qquad q := \begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} Q & -A_{eq}^T \\ A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} -c \\ b_{eq} \end{bmatrix} - b$

From λ we can now recover the values of both x and the Lagrange multiplier of equalities μ:

$\begin{bmatrix} x \\ \mu \end{bmatrix} = \begin{bmatrix} Q & -A_{eq}^T \\ A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} A^T\lambda - c \\ b_{eq} \end{bmatrix}$
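The same recipe extends mechanically to code. A sketch with one inequality and one equality constraint (all data invented for illustration):

```python
import numpy as np

# Illustrative QP: min 0.5|x|^2 with one inequality (x1 >= 1.5), one equality (x1 + x2 = 2).
Q = np.eye(2); c = np.zeros(2)
A = np.array([[1.0, 0.0]]); b = np.array([1.5])
Aeq = np.array([[1.0, 1.0]]); beq = np.array([2.0])

n, me = 2, 1
K = np.block([[Q, -Aeq.T], [Aeq, np.zeros((me, me))]])   # KKT block matrix
A0 = np.hstack([A, np.zeros((1, me))])                   # [A 0]
M = A0 @ np.linalg.solve(K, np.vstack([A.T, np.zeros((me, 1))]))
q = A0 @ np.linalg.solve(K, np.concatenate([-c, beq])) - b

lam = np.array([max(0.0, -q[0] / M[0, 0])])              # scalar LCP again
xmu = np.linalg.solve(K, np.concatenate([A.T @ lam - c, beq]))
x, mu = xmu[:n], xmu[n:]                                 # recover x and mu from lambda
print(x, mu)       # [1.5 0.5] [0.5]
```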
In fact, most QP solvers work on the LCP formulation, including the interior point method, principal / complementarity pivoting, and active set methods.[1][2] LCP problems can also be solved by the criss-cross algorithm;[6][7][8][9] conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix.[8][9] A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive.[8][9][10] Such LCPs can be solved when they are formulated abstractly using oriented-matroid theory.[11][12][13]
See also
- Complementarity theory
- Physics engine – impulse/constraint type physics engines for games use this approach.
- Contact dynamics – contact dynamics with the nonsmooth approach.
- Bimatrix games can be reduced to LCP.
Notes
- ^ a b Murty (1988).
- ^ a b Cottle, Pang & Stone (1992).
- ^ Cottle & Dantzig (1968).
- ^ Murty (1972).
- ^ Taylor (2015), p. 172.
- ^ Fukuda & Namiki (1994).
- ^ Fukuda & Terlaky (1997).
- ^ a b c den Hertog, Roos & Terlaky (1993).
- ^ a b c Csizmadia & Illés (2006).
- ^ Cottle, Pang & Venkateswaran (1989).
- ^ Todd (1985).
- ^ Terlaky & Zhang (1993).
- ^ Björner et al. (1999).
References
- Björner, Anders; Las Vergnas, Michel; Sturmfels, Bernd; White, Neil; Ziegler, Günter (1999). "10 Linear programming". Oriented Matroids. Cambridge University Press. pp. 417–479. doi:10.1017/CBO9780511586507. ISBN 978-0-521-77750-6. MR 1744046.
- Cottle, R. W.; Dantzig, G. B. (1968). "Complementary pivot theory of mathematical programming". Linear Algebra and Its Applications. 1: 103–125. doi:10.1016/0024-3795(68)90052-9.
- Cottle, Richard W.; Pang, Jong-Shi; Stone, Richard E. (1992). The linear complementarity problem. Computer Science and Scientific Computing. Boston, MA: Academic Press, Inc. pp. xxiv+762 pp. ISBN 978-0-12-192350-1. MR 1150683.
- Cottle, R. W.; Pang, J.-S.; Venkateswaran, V. (March–April 1989). "Sufficient matrices and the linear complementarity problem". Linear Algebra and Its Applications. 114–115: 231–249. doi:10.1016/0024-3795(89)90463-1. MR 0986877.
- Csizmadia, Zsolt; Illés, Tibor (2006). "New criss-cross type algorithms for linear complementarity problems with sufficient matrices" (PDF). Optimization Methods and Software. 21 (2): 247–266. doi:10.1080/10556780500095009. S2CID 24418835.
- Fukuda, Komei; Namiki, Makoto (March 1994). "On extremal behaviors of Murty's least index method". Mathematical Programming. 64 (1): 365–370. doi:10.1007/BF01582581. MR 1286455. S2CID 21476636.
- Fukuda, Komei; Terlaky, Tamás (1997). Thomas M. Liebling; Dominique de Werra (eds.). "Criss-cross methods: A fresh view on pivot algorithms". Mathematical Programming, Series B. Papers from the 16th International Symposium on Mathematical Programming held in Lausanne, 1997. 79 (1–3): 369–395. CiteSeerX 10.1.1.36.9373. doi:10.1007/BF02614325. MR 1464775. S2CID 2794181. Postscript preprint.
- den Hertog, D.; Roos, C.; Terlaky, T. (1 July 1993). "The linear complementarity problem, sufficient matrices, and the criss-cross method" (PDF). Linear Algebra and Its Applications. 187: 1–14. doi:10.1016/0024-3795(93)90124-7.
- Murty, Katta G. (January 1972). "On the number of solutions to the complementarity problem and spanning properties of complementary cones" (PDF). Linear Algebra and Its Applications. 5 (1): 65–108. doi:10.1016/0024-3795(72)90019-5. hdl:2027.42/34188.
- Murty, K. G. (1988). Linear complementarity, linear and nonlinear programming. Sigma Series in Applied Mathematics. Vol. 3. Berlin: Heldermann Verlag. ISBN 978-3-88538-403-8. MR 0949214. Updated and free PDF version at Katta G. Murty's website. Archived from the original on 2010-04-01.
- Taylor, Joshua Adam (2015). Convex Optimization of Power Systems. Cambridge University Press. ISBN 9781107076877.
- Terlaky, Tamás; Zhang, Shu Zhong (1993). "Pivot rules for linear programming: A Survey on recent theoretical developments". Annals of Operations Research. Degeneracy in optimization problems. 46–47 (1): 203–233. CiteSeerX 10.1.1.36.7658. doi:10.1007/BF02096264. ISSN 0254-5330. MR 1260019. S2CID 6058077.
- Todd, Michael J. (1985). "Linear and quadratic programming in oriented matroids". Journal of Combinatorial Theory. Series B. 39 (2): 105–133. doi:10.1016/0095-8956(85)90042-5. MR 0811116.
Further reading
- R. Chandrasekaran. "Bimatrix games" (PDF). pp. 5–7. Retrieved 18 December 2015.
Definition and Formulation
Standard Definition
The linear complementarity problem (LCP) is a fundamental problem in mathematical optimization, defined for an $n \times n$ matrix $M$ and a vector $q \in \mathbb{R}^n$ as the task of finding vectors $z, w \in \mathbb{R}^n$ satisfying the following conditions:

$w = Mz + q, \qquad w \geq 0, \qquad z \geq 0, \qquad z^T w = 0$

This formulation, often denoted as $\mathrm{LCP}(q, M)$, requires $z$ and $w$ to be nonnegative while their componentwise product sums to zero under the complementarity condition $z^T w = 0$.[2] The nonnegativity constraints ensure feasibility in the nonnegative orthant, and the linear relation ties $w$ directly to $z$ via the affine transformation involving $M$ and $q$.[3]

The complementarity condition implies that for each index $i$, at least one of $z_i$ or $w_i$ must be zero (though both may vanish simultaneously). This pairwise orthogonality captures the essence of complementary slackness, a concept borrowed from optimization duality, where variables and their dual counterparts cannot both be strictly positive. In vector form, it enforces $z_i w_i = 0$ for all $i$, partitioning the indices into complementary sets where either $z_i > 0$ and $w_i = 0$, or $w_i > 0$ and $z_i = 0$, or both are zero.[2] Such structure arises naturally in modeling equilibrium points and constrained systems.[4]

The LCP, building on geometric insights from Samelson, Thrall, and Wesler (1958),[5] emerged in the optimization literature of the 1960s. Lemke's 1965 paper introduced complementary pivot methods as a unifying framework for solving bimatrix games and quadratic programs.[4] This standard primal form serves as the cornerstone for subsequent theoretical and computational developments in the field.[2]

Variants and Generalizations
The linear complementarity problem (LCP) admits a geometric interpretation in terms of complementary cones and affine subspaces. The solution set consists of points $z$ in the nonnegative orthant that satisfy the affine equation $w = Mz + q$, where $w$ and $z$ belong to complementary cones generated by columns drawn from the identity matrix and from $-M$. These cones are polyhedral sets defined by index subsets $\alpha \subseteq \{1, \ldots, n\}$, with the full cone spanning the directions where complementarity holds strictly for indices in $\alpha$ and its complement. This structure highlights how solutions correspond to intersections of the affine space with the boundaries of these cones, providing insight into the topology of the feasible set.[6]

A key variant is the mixed linear complementarity problem (MLCP), which relaxes the nonnegativity constraints on some variables to allow them to be free (unrestricted in sign) while maintaining partial complementarity conditions. Formally, given matrices $A$, $B$, $C$, $D$ and vectors $a$, $b$, the MLCP seeks $u$ and $v$ such that

$Au + Cv + a = 0, \qquad v \geq 0, \qquad Du + Bv + b \geq 0, \qquad v^T(Du + Bv + b) = 0$

Here, $u$ is unrestricted, and complementarity applies only to the $v$ components, making the MLCP suitable for modeling systems with equality constraints or free variables, such as certain traffic equilibria or economic models. Solvability often relies on properties of the Schur complement $B - DA^{-1}C$ being a P-matrix when $A$ is nonsingular.[1]

The nonlinear complementarity problem (NCP) generalizes the LCP by replacing the linear mapping $z \mapsto Mz + q$ with a nonlinear function $F: \mathbb{R}^n \to \mathbb{R}^n$. It requires finding $z \geq 0$ such that $F(z) \geq 0$ and $z^T F(z) = 0$, capturing broader phenomena like nonconvex optimization or equilibrium problems where linearity assumptions fail. This extension preserves the complementarity structure but introduces challenges in existence and computation due to the nonlinearity of $F$, often addressed via smoothing or continuation methods.[1][7]

Bilevel or hierarchical LCPs extend the framework to Stackelberg games, where an upper-level problem optimizes over parameters that define a lower-level LCP solved by a follower.
In this setup, the leader maximizes their objective subject to the follower's LCP equilibrium, formulating a nested structure that models leader-follower dynamics in competitive settings like supply chains or policy design. The resulting problem is a mathematical program with complementarity constraints (MPCC), inheriting nonconvexity from the lower level.[8] Small 2×2 instances are commonly used to illustrate these formulations, including matrices that fail to be Q-matrices and instances exhibiting solution multiplicity in related parametric variants.[1]

Mathematical Properties
Solvability Conditions
The linear complementarity problem (LCP) defined by a matrix $M$ and vector $q$ is feasible if there exist vectors $z \geq 0$, $w \geq 0$ with $w = Mz + q$. This condition describes the nonempty intersection of the affine space $\{(z, w) : w = Mz + q\}$ with the nonnegative orthant, forming a polyhedron known as the feasible set of the LCP. Solvability of the LCP requires, in addition, that some point in this polyhedron satisfies the complementarity condition $z_i w_i = 0$ for all $i$.

A key solvability result holds for symmetric positive semi-definite matrices. If $M$ is symmetric positive semi-definite (meaning $x^T M x \geq 0$ for all $x$) and the LCP is feasible, then it has at least one solution.[1] This follows from the equivalence between such LCPs and convex quadratic programs, which are solvable when the objective is bounded below on the feasible set due to the convexity of the feasible set and objective function. The solution set is convex in this case.[1]

For copositive matrices, solvability depends on the vector $q$. A matrix $M$ is copositive if $x^T M x \geq 0$ for all $x \geq 0$; under this property, if the LCP is feasible, then it is solvable. More precisely, for matrices that are copositive plus (copositive and satisfying that $x^T M x = 0$ with $x \geq 0$ implies $(M + M^T)x = 0$), feasibility guarantees the existence of a complementary solution, as established in early analyses of complementary pivot methods. For example, for copositive $M$ with $q \geq 0$, the nonnegative orthant provides a feasible point ($z = 0$, $w = q$) that is itself complementary, ensuring solvability.[1]

Uniqueness of solutions relates to strict complementarity, where a solution satisfies $z_i + w_i > 0$ for all $i$. If an LCP admits a strictly complementary solution and the matrix belongs to a class ensuring the solution set is a singleton (such as positive definite matrices), then the solution is unique. This property arises because strict complementarity partitions the variables into complementary sets without ambiguity, and the underlying polyhedral structure yields a vertex solution under these conditions.
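Solvability and solution multiplicity can be explored by brute-force enumeration of complementary index sets, which is feasible only for tiny dimensions. A sketch using an assumed 2×2 instance with M = −I, chosen because it admits several solutions:

```python
import numpy as np
from itertools import product

def all_lcp_solutions(M, q, tol=1e-9):
    """Enumerate complementary solutions of a small LCP(q, M)."""
    n = len(q)
    sols = []
    for mask in product([False, True], repeat=n):
        idx = [i for i in range(n) if mask[i]]   # indices with w_i = 0, z_i basic
        z = np.zeros(n)
        if idx:
            sub = M[np.ix_(idx, idx)]
            if abs(np.linalg.det(sub)) < tol:
                continue                          # singular principal block: skip
            z[idx] = np.linalg.solve(sub, -q[idx])
        w = M @ z + q
        if np.all(z >= -tol) and np.all(w >= -tol):
            sols.append(z)
    return sols

M = -np.eye(2)                        # illustrative matrix with multiple solutions
q = np.array([1.0, 1.0])
print(len(all_lcp_solutions(M, q)))   # 4 complementary solutions for this instance
```

For comparison, replacing M with the identity (a P-matrix) yields exactly one solution for this q.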
The LCP also emerges as the Karush-Kuhn-Tucker (KKT) conditions for nonlinear programming problems. Specifically, for a convex nonlinear program with inequality constraints, the KKT system can be formulated as an LCP in the primal variables and dual multipliers, where solvability of the LCP corresponds to the existence of optimal Lagrange multipliers satisfying stationarity, primal feasibility, dual feasibility, and complementarity slackness. This connection underscores that feasible LCPs arising from convex programs are solvable, mirroring the guaranteed existence of KKT points under constraint qualifications.

Matrix Classes and Positivity
In the theory of the linear complementarity problem (LCP), certain classes of matrices guarantee solvability or uniqueness of solutions for specific or all right-hand-side vectors $q$. These classifications are pivotal for understanding the structural properties that ensure the existence and uniqueness of solutions to $\mathrm{LCP}(q, M)$, where $M$ is an $n \times n$ matrix and $q \in \mathbb{R}^n$.

A matrix $M$ belongs to class P, also known as a P-matrix, if all its principal minors are positive. This property ensures that $\mathrm{LCP}(q, M)$ has a unique solution for every $q \in \mathbb{R}^n$.[1] The P-matrix characterization implies that Lemke's algorithm always succeeds in finding the unique solution, highlighting its role in guaranteeing global uniqueness across all $q$.[1]

The broader class Q consists of matrices $M$ for which $\mathrm{LCP}(q, M)$ is solvable for every $q$. P-matrices form a subclass of Q, but Q includes additional matrices where solvability holds without necessarily requiring uniqueness.[1] Symmetric positive semi-definite (PSD) matrices belong to the subclass $Q_0$ (solvable if feasible), not Q.[1]

Copositive matrices and their strict variant play a key role in positivity-related solvability, particularly for nonnegative $q$. A matrix $M$ is copositive if $x^T M x \geq 0$ for all nonnegative vectors $x$, and strictly copositive if $x^T M x > 0$ for all $x \geq 0$ with $x \neq 0$. For copositive $M$, $\mathrm{LCP}(q, M)$ has a solution whenever $q \geq 0$, while strictly copositive matrices extend this guarantee more robustly, often implying bounded solution sets.[1]

An illustrative example is a skew-symmetric matrix $M = -M^T$, satisfying $x^T M x = 0$ for all $x$. Such matrices are copositive, so $\mathrm{LCP}(q, M)$ is solvable for every $q \geq 0$ (e.g., via the trivial solution $z = 0$, $w = q$).[1]

Connections to Optimization
Relation to Quadratic Programming
The linear complementarity problem (LCP) arises naturally as the set of Karush-Kuhn-Tucker (KKT) optimality conditions for a convex quadratic program (QP). Consider the standard QP of minimizing the objective function $f(x) = \tfrac{1}{2}x^T Q x + c^T x$ subject to the nonnegativity constraint $x \geq 0$, where $Q$ is a positive semidefinite (PSD) matrix and $c \in \mathbb{R}^n$. The KKT conditions for this problem require stationarity ($Qx + c - w = 0$, with multiplier vector $w$), primal feasibility ($x \geq 0$), dual feasibility ($w \geq 0$), and complementary slackness ($x^T w = 0$), which together form the LCP with $M = Q$ and $q = c$. This equivalence implies that solving the LCP provides the optimal solution to the QP when $Q$ is PSD, ensuring the QP is convex and thus has a global minimum. The PSD property of $Q$ guarantees the existence of a solution to the corresponding LCP whenever the problem is feasible, linking the convexity of the QP directly to LCP solvability.

For illustration, consider the simple scalar QP of minimizing $\tfrac{1}{2}x^2 - x$ subject to $x \geq 0$. The KKT conditions yield the LCP with $M = 1$ and $q = -1$, solved by $z = 1$ and $w = 0$, confirming the QP optimum at $x = 1$.

The LCP also captures the saddle-point conditions between a primal QP and its dual, unifying the two through complementarity: joint optimality of the primal QP and its dual equates to solving an LCP built from the primal feasibility, dual feasibility, and complementarity conditions (with the inverse of $Q$ replaced by its pseudoinverse if $Q$ is singular). The connection between LCP and QP was formalized in the 1960s, with Cottle and Dantzig introducing the LCP framework in 1968 as a unifying pivot-based approach for solving QPs and related problems.[1]

Link to Variational Inequalities
The linear complementarity problem (LCP) can be reformulated as a variational inequality (VI) over the nonnegative orthant. Specifically, given a matrix $M$ and vector $q$, the VI seeks a vector $z \in K$ such that

$(y - z)^T (Mz + q) \geq 0 \quad \text{for all } y \in K,$

where $K = \mathbb{R}^n_+$ is the nonnegative orthant. This VI formulation is equivalent to the standard LCP, which requires finding $z \geq 0$ such that $w = Mz + q \geq 0$ and $z^T w = 0$. The equivalence holds because, for the convex cone $K$, the VI condition implies that $z$ solves the LCP: choosing $y = z + e_i$ ensures $(Mz + q)_i \geq 0$ for each $i$, and choosing $y = 0$ and $y = 2z$ forces $z^T(Mz + q) = 0$, which together yield the complementarity and nonnegativity requirements.

When $M$ is positive semi-definite, the mapping $z \mapsto Mz + q$ is monotone, meaning $(Mz - My)^T(z - y) \geq 0$ for all $z, y$. For such monotone VIs over the orthant, solvability is guaranteed under mild conditions, such as the recession cone of the feasible set being pointed, providing a theoretical foundation for existence results in LCPs with P-matrices or copositive matrices. More broadly, the LCP represents a special case of a nonlinear VI where the operator is affine, i.e., $F(z) = Mz + q$. This connection embeds the LCP within the general theory of VIs, allowing the application of VI-specific techniques, such as Minty formulations or proximal point methods, to LCP solution.

A practical illustration arises in traffic network equilibrium models, where the problem of finding user-optimal flows can be cast as a VI over flow variables in the nonnegative orthant. For linear link cost functions, this VI reduces directly to an LCP, enabling the use of LCP solvers for computing equilibria in transportation systems.

Solution Algorithms
Pivot-Based Methods
Pivot-based methods for solving the linear complementarity problem (LCP) rely on combinatorial pivoting techniques analogous to those in the simplex method for linear programming, but adapted to enforce the complementarity condition $z^T w = 0$, where $w = Mz + q$ and $w, z \geq 0$. These methods generate a sequence of complementary bases by pivoting on pairs of variables $(z_i, w_i)$ such that at most one is positive in each basis, traversing a path in the solution space until a feasible complementary solution is found or cycling is detected.

Lemke's Algorithm
Lemke's complementary pivoting algorithm, introduced in 1965, is a foundational pivot-based method that augments the LCP with an artificial variable $z_0$ and a covering direction $d$ (typically the all-ones vector) to form the nearly complementary system: find $z \geq 0$, $w \geq 0$, $z_0 \geq 0$ such that $w = Mz + q + dz_0$ and $z^T w = 0$. The algorithm starts from the initial point where $z = 0$, $z_0 = \max_i(-q_i)$, and $w = q + dz_0 \geq 0$ (if $q$ has no negative entries, $z = 0$, $w = q$ is already a solution and no pivoting is needed). It then performs a sequence of complementary pivots, selecting an entering variable from the pair corresponding to the variable that just left the current basis, and uses a ratio test to determine the leaving variable, ensuring non-negativity. Pivoting continues until $z_0 = 0$ (yielding an LCP solution) or a secondary ray is reached (indicating that the method fails to find a solution). The steps of Lemke's algorithm are as follows:
- Initialization: Set $z = 0$ and choose $z_0$ large enough that $w = q + dz_0 \geq 0$. The initial basis consists of $z_0$ and the $w_i$'s that are positive.
- Pivoting Phase: Identify the complementary pair $(z_i, w_i)$ in which one variable (say $w_i$) has just left the basis and the other ($z_i$) is non-basic. Enter $z_i$ into the basis by pivoting on its column in the tableau representation of the system, using the minimum ratio test over the eligible rows to select the leaving variable so that all basic variables remain non-negative.
- Termination Check: If $z_0 = 0$ and complementarity holds, return $(z, w)$ as the solution. If the entering variable can increase without bound (an unbounded secondary ray), declare that no solution is found. Otherwise, repeat the pivoting phase.
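The steps above can be sketched in code. A compact tableau implementation (illustrative only, tested on small dense problems; production solvers add anti-cycling rules and numerical safeguards):

```python
import numpy as np

def lemke(M, q, max_iter=100):
    """Lemke's complementary pivoting for LCP(q, M) (sketch).

    Returns z with w = Mz + q >= 0, z >= 0, z^T w = 0,
    or None on secondary-ray termination.
    """
    M, q = np.asarray(M, float), np.asarray(q, float)
    n = len(q)
    if np.all(q >= 0):
        return np.zeros(n)                 # trivial solution: z = 0, w = q
    # Tableau columns: w_1..w_n | z_1..z_n | artificial z0 | RHS.
    T = np.hstack([np.eye(n), -M, -np.ones((n, 1)), q.reshape(n, 1)])
    basis = list(range(n))                 # the w_i start out basic
    r = int(np.argmin(T[:, -1]))           # most negative q leaves first...
    enter = 2 * n                          # ...and the artificial z0 enters
    for _ in range(max_iter):
        T[r] /= T[r, enter]                # pivot on (r, enter)
        for i in range(n):
            if i != r:
                T[i] -= T[i, enter] * T[r]
        leaving, basis[r] = basis[r], enter
        if leaving == 2 * n:               # z0 left the basis: solution found
            z = np.zeros(n)
            for row, col in enumerate(basis):
                if n <= col < 2 * n:
                    z[col - n] = T[row, -1]
            return z
        # The complement of the variable that just left enters next.
        enter = leaving + n if leaving < n else leaving - n
        rows = [i for i in range(n) if T[i, enter] > 1e-12]
        if not rows:
            return None                    # secondary ray: no solution found
        r = min(rows, key=lambda i: T[i, -1] / T[i, enter])
    return None

# Illustrative positive-definite instance (unique solution guaranteed).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
z = lemke(M, q)
print(np.round(z, 6))                      # [1.333333 2.333333]
```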
