Reduced cost
In linear programming, reduced cost, or opportunity cost, is the amount by which an objective function coefficient would have to improve (increase for a maximization problem, decrease for a minimization problem) before it would be possible for the corresponding variable to assume a positive value in the optimal solution. It is the cost of increasing a variable by a small amount, i.e., the first derivative of the objective from a given point on the polyhedron that constrains the problem. When the point is a vertex of the polyhedron, the variable with the most extreme reduced cost (most negative for minimization, most positive for maximization) is sometimes referred to as the steepest edge.
Given a system minimize $c^{T}x$ subject to $Ax = b$, $x \ge 0$, the reduced cost vector can be computed as $d = c - A^{T}y$, where $y$ is the dual cost vector.
It follows directly that, for a minimization problem, any non-basic variable at its lower bound with a strictly negative reduced cost is eligible to enter the basis, while every basic variable must have a reduced cost of exactly 0. For a maximization problem, the non-basic variables at their lower bounds that are eligible to enter the basis have strictly positive reduced costs.
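The vector formula and the eligibility rule above can be sketched in a few lines of NumPy. The data below (a two-constraint system whose slack columns form the current basis) is purely illustrative:

```python
import numpy as np

# Illustrative data for: minimize c^T x  subject to  Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.array([-40.0, -30.0, 0.0, 0.0])
basis = [2, 3]                        # slack columns form the current basis

B = A[:, basis]
y = np.linalg.solve(B.T, c[basis])    # dual cost vector from B^T y = c_B
d = c - A.T @ y                       # reduced cost vector d = c - A^T y

print(d)  # [-40. -30.   0.   0.]: basic entries are 0, negative entries may enter
```

With the slack basis the duals are zero, so the reduced costs equal the original cost coefficients, and both structural variables are eligible to enter.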
Interpretation
For the case where x and y are optimal, the reduced costs can help explain why variables attain the values they do. For each variable, the sum of its objective coefficient and the dual-weighted constraint coefficients gives the reduced cost, showing which constraints force the variable up or down. For non-basic variables, the distance of the reduced cost from zero gives the minimal change in the objective coefficient needed to change the solution vector x.
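That sensitivity reading can be checked numerically. This is a hedged sketch on made-up data (minimization form): the non-basic slack s1 has reduced cost 20, so its objective coefficient must improve by more than 20 before it becomes attractive to enter the basis.

```python
import numpy as np

# Toy problem in minimization form: min -40*x1 - 30*x2 with slacks s1, s2.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.array([-40.0, -30.0, 0.0, 0.0])
basis = [0, 1]                                 # optimal basis {x1, x2}

y = np.linalg.solve(A[:, basis].T, c[basis])   # duals at this basis
d = c - A.T @ y                                # reduced costs: [0, 0, 20, 10]

# Improving s1's coefficient by less than its reduced cost changes nothing;
# improving it by more than 20 would make s1 eligible to enter the basis.
assert d[2] - 19.0 > 0    # still unattractive after an improvement of 19
assert d[2] - 21.0 < 0    # attractive after an improvement of 21
```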
In pivot strategy
In principle, a good pivot strategy would be to select whichever variable has the greatest reduced cost. However, the steepest edge might ultimately not be the most attractive, as the edge might be very short, affording only a small improvement in the objective function value. From a computational point of view, another problem is that an inner product must be computed for every variable in the system to find the steepest edge, making the computational cost too high in many cases. The Devex algorithm attempts to overcome the latter problem by estimating the reduced costs rather than calculating them at every pivot step, exploiting the fact that a pivot step often does not alter the reduced costs of all variables dramatically.
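One cheaper alternative to full pricing is partial pricing: scan the nonbasic variables in a fixed cyclic order and stop at the first one that prices out favorably. The sketch below is illustrative only (the function name and data layout are not from any particular solver) and uses the minimization convention, where a negative reduced cost marks an entering candidate:

```python
import numpy as np

def partial_pricing(A, c, y, nonbasic, start=0, tol=1e-9):
    # Scan nonbasic columns in a fixed cyclic order beginning at `start` and
    # return the first variable with a negative reduced cost (minimization);
    # return None if every candidate prices out non-negative (optimal basis).
    n = len(nonbasic)
    for k in range(n):
        j = nonbasic[(start + k) % n]
        if c[j] - y @ A[:, j] < -tol:   # reduced cost d_j = c_j - y^T A_j
            return j
    return None

# With an all-slack basis the duals are zero, so x1 (cost -40) prices out first.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.array([-40.0, -30.0, 0.0, 0.0])
y = np.zeros(2)
print(partial_pricing(A, c, y, nonbasic=[0, 1]))   # 0
```

Unlike Dantzig's rule, this avoids computing an inner product for every column on every pivot, at the cost of possibly picking a less steep edge.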
In linear programming
NOTE: This is a direct quote from the web site linked below: "Associated with each variable is a reduced cost value. However, the reduced cost value is only non-zero when the optimal value of a variable is zero. A somewhat intuitive way to think about the reduced cost variable is to think of it as indicating how much the cost of the activity represented by the variable must be reduced before any of that activity will be done. More precisely,
... the reduced cost value indicates how much the objective function coefficient on the corresponding variable must be improved before the value of the variable will be positive in the optimal solution.
In the case of a minimization problem, "improved" means "reduced". So, in the case of a cost-minimization problem, where the objective function coefficients represent the per-unit cost of the activities represented by the variables, the "reduced cost" coefficients indicate how much each cost coefficient would have to be reduced before the activity represented by the corresponding variable would be cost-effective. In the case of a maximization problem, "improved" means "increased". In this case, the objective function coefficient might represent, for example, the net profit per unit of the activity, and the reduced cost value indicates how much the profitability of the activity would have to be increased in order for the activity to occur in the optimal solution. The units of the reduced-cost values are the same as the units of the corresponding objective function coefficients.
If the optimal value of a variable is positive (not zero), then the reduced cost is always zero. If the optimal value of a variable is zero and the reduced cost corresponding to the variable is also zero, then there is at least one other corner that is also in the optimal solution. The value of this variable will be positive at one of the other optimal corners."[1]
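The last point in the quote, that a zero reduced cost on a non-basic variable at the optimum signals another optimal corner, can be illustrated numerically. This is a hedged sketch with a made-up one-constraint problem:

```python
import numpy as np

# Toy problem: maximize x1 + x2 subject to x1 + x2 <= 1 (slack s), written
# in minimization form: min -x1 - x2  s.t.  x1 + x2 + s = 1.
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([-1.0, -1.0, 0.0])
basis = [0]                      # optimal basis with x1 = 1, x2 = s = 0

y = np.linalg.solve(A[:, basis].T, c[basis])
d = c - A.T @ y
print(d)   # [0. 0. 1.]: non-basic x2 has reduced cost 0, so (0, 1) is also optimal
```

Both corners (1, 0) and (0, 1) achieve the same objective value, exactly as the quoted passage predicts.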
References
- ^ "Interpreting LP Solutions - Reduced Cost". Courses.psu.edu. Retrieved 2013-08-08.
Fundamentals
Definition
In linear programming, the reduced cost of a variable represents the change in the objective function value per unit increase in that variable when it is introduced into the solution at its current non-basic level. Specifically, it quantifies the amount by which the objective coefficient of a non-basic variable must be altered for that variable to become basic and enter the solution at a positive value, thereby improving the objective function. This measure is computed within the framework of the simplex method, where variables are classified as basic (included in the current solution) or non-basic (set to zero). The interpretation of reduced cost varies depending on whether the problem is a maximization or minimization, and the sign convention used. In maximization problems, a positive reduced cost for a non-basic variable indicates the potential increase in the objective value (e.g., profit) if that variable were allowed to enter the basis, signaling that the current solution is suboptimal. Conversely, in minimization problems, a negative reduced cost signifies the potential decrease in the objective value (e.g., cost) upon introducing the variable, again pointing to suboptimality if such a value exists.[1] For basic variables in an optimal solution, the reduced cost is always zero, as these variables are already at levels that do not allow further improvement in the objective function without violating constraints. Non-basic variables, however, may have non-zero reduced costs that guide the optimization process toward feasibility and optimality. Consider a simple maximization problem with a linear objective, linear constraints, and non-negativity, and suppose the current basic feasible solution leaves one variable non-basic at zero. If the reduced cost of that variable is -1, introducing it would decrease the objective by 1 unit per unit increase, confirming optimality, as no better solution exists by pivoting it into the basis.[1]
Mathematical Formulation
In linear programming, the reduced cost is formulated within the standard primal problem of minimizing the objective $c^{T}x$ subject to the equality constraints $Ax = b$ and non-negativity $x \ge 0$, where $A$ is the coefficient matrix, $b$ is the right-hand-side vector, $c$ is the cost vector, and $x$ are the decision variables.[4] The associated dual problem is to maximize $b^{T}y$ subject to $A^{T}y \le c$, where $y$ represents the dual variables or shadow prices.[4] The reduced cost for the $j$-th variable at an optimal solution is defined as $d_j = c_j - y^{T}A_j$, where $A_j$ is the $j$-th column of $A$ and $y$ is the optimal dual solution.[4] In vector notation, the full reduced cost vector is $d = c - A^{T}y$.[4] By strong duality, $d_j \ge 0$ for all $j$ corresponding to nonbasic variables at optimality in the minimization problem, ensuring no further improvement is possible.[4] Within the simplex method, the reduced costs are derived from the current basic feasible solution, where the basis matrix $B$ consists of the columns of $A$ for the basic variables, and $c_B$ are the corresponding costs. The shadow prices are $y^{T} = c_B^{T}B^{-1}$, so $d_j = c_j - c_B^{T}B^{-1}A_j$ for nonbasic $j$.[5] In the simplex tableau, these reduced costs appear as the coefficients in the objective row (row 0) for the nonbasic variables, specifically as $\bar{c}_j = c_j - z_j$ where $z_j = c_B^{T}B^{-1}A_j$.[5] For maximization problems, the reduced cost convention aligns similarly but with the objective sense reversed; a positive reduced cost for a nonbasic variable indicates that increasing it would improve (increase) the objective value, signaling a direction for basis improvement.[6] This formulation ensures that the optimality conditions tie directly to dual feasibility, $A^{T}y \le c$ (for minimization) or $A^{T}y \ge c$ (for maximization).[4]
Role in the Simplex Method
Entering Variable Selection
In the simplex method applied to maximization problems, the entering variable is selected from the nonbasic variables by identifying the one with the most positive reduced cost, as this indicates the largest potential increase in the objective function value per unit increase in that variable.[7] The reduced costs are computed for all nonbasic variables using the current basis, typically via the formula $\bar{c}_j = c_j - y^{T}A_j$, where $y$ are the simplex multipliers (dual variables) and $A_j$ is the column for variable $j$; in tableau form, the negatives of the reduced costs appear in the objective row.[8] The optimal pivot strategy, often called Dantzig's rule, involves calculating all reduced costs and choosing the most positive one to enter the basis, which theoretically accelerates convergence by maximizing improvement per iteration.[7] However, this full pricing step becomes computationally expensive in large problems with many variables, as it requires updating and evaluating reduced costs for potentially thousands of nonbasics, leading to high per-iteration time complexity. In practice, approximations mitigate this expense, such as partial pricing (scanning nonbasics in a fixed order until a positive reduced cost is found) or advanced heuristics like the Devex rule, which prioritizes variables likely to yield steep improvements based on historical pivots.[9] Consider a simple maximization example: maximize $z = 40x_1 + 30x_2$ subject to $x_1 + x_2 \le 12$, $2x_1 + x_2 \le 16$, $x_1, x_2 \ge 0$. Introducing slacks $s_1, s_2$, the initial tableau is:

| Basic | $x_1$ | $x_2$ | $s_1$ | $s_2$ | RHS |
|---|---|---|---|---|---|
| $s_1$ | 1 | 1 | 1 | 0 | 12 |
| $s_2$ | 2 | 1 | 0 | 1 | 16 |
| $z$ | 40 | 30 | 0 | 0 | 0 |
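The pricing step on this example can be reproduced in a few lines. This is a sketch of Dantzig's rule only, without the subsequent ratio test:

```python
import numpy as np

# maximize 40*x1 + 30*x2 with slacks s1, s2 basic initially.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.array([40.0, 30.0, 0.0, 0.0])
basis = [2, 3]

y = np.linalg.solve(A[:, basis].T, c[basis])   # duals are zero at the slack basis
d = c - A.T @ y                                # reduced costs: [40, 30, 0, 0]
entering = int(np.argmax(d))                   # Dantzig's rule: most positive

print(entering)   # 0 -> x1 enters, matching the objective row of the tableau
```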
Optimality Conditions
In the simplex method applied to maximization problems in linear programming, a basic feasible solution is deemed optimal when all reduced costs for non-basic variables in the final tableau are less than or equal to zero. This condition indicates that increasing any non-basic variable would not improve (and might worsen) the objective function value, where the reduced cost for each non-basic column is $\bar{c}_j = c_j - y^{T}A_j$, $y$ are the simplex multipliers (dual variables), and $A_j$ is the constraint column for variable $j$. For minimization problems, the optimality criterion reverses: all reduced costs must be greater than or equal to zero to ensure no further decrease in the objective is possible.[13] A reduced cost of exactly zero for a non-basic variable signals the presence of multiple optimal solutions, as introducing that variable into the basis at a positive level would yield an alternative basic feasible solution with the same objective value. This occurs because the variable lies on the boundary of the dual feasible region, allowing degeneracy in the sense of non-unique bases without changing the optimum. In contrast, strictly negative reduced costs (for maximization) in a final tableau confirm a unique optimum, as any deviation would degrade the solution.[14] Reduced costs play a key role in confirming overall optimality by ensuring dual feasibility alongside primal feasibility, with complementary slackness providing the linkage. Specifically, the non-positivity of reduced costs in the maximization convention (interpreted as $c_j - y^{T}A_j \le 0$) satisfies the dual constraints, while complementary slackness holds because basic variables (with positive values) have zero reduced costs, and non-basic variables (at zero) permit nonzero reduced costs without violation. This pairwise condition, that a primal variable is positive only if its corresponding reduced cost is zero and vice versa, guarantees that the primal and dual solutions align at the optimum.[15] To illustrate the transition to optimality, consider a maximization problem solved via the simplex method. In an initial or intermediate suboptimal tableau, the objective row (showing reduced costs) might appear as:

| Basis | x1 | x2 | x3 | s1 | s2 | RHS |
|---|---|---|---|---|---|---|
| s1 | 1 | 1 | 0 | 1 | 0 | 4 |
| s2 | 2 | 1 | 1 | 0 | 1 | 5 |
| z | 3 | 1 | 4 | 0 | 0 | 0 |
After pivoting x3 (the most positive entry in the objective row) into the basis, every reduced cost in the objective row becomes non-positive and the tableau is optimal:
| Basis | x1 | x2 | x3 | s1 | s2 | RHS |
|---|---|---|---|---|---|---|
| x3 | 1/2 | 1/2 | 1 | 1/2 | 0 | 2.5 |
| s2 | 3/2 | 1/2 | 0 | -1/2 | 1 | 1.5 |
| z | -1 | -2 | 0 | -1 | 0 | 10 |
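As a small check, the optimality test described above can be applied directly to the objective row of this final tableau:

```python
# Reduced-cost entries for x1, x2, x3, s1, s2 read from the final objective row.
z_row = [-1.0, -2.0, 0.0, -1.0, 0.0]

# Maximization optimality condition: every reduced cost must be <= 0.
is_optimal = all(rc <= 0.0 for rc in z_row)
print(is_optimal)   # True
```

The zero entries belong to the basic variables (and to s2), consistent with the rule that basic variables always carry zero reduced cost at the optimum.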
