Reduced cost
from Wikipedia

In linear programming, reduced cost, or opportunity cost, is the amount by which an objective function coefficient would have to improve (increase for a maximization problem, decrease for a minimization problem) before it would be possible for the corresponding variable to assume a positive value in the optimal solution. It is the cost of increasing a variable by a small amount, i.e., the first derivative at a given point on the polyhedron that constrains the problem. When the point is a vertex of the polyhedron, the variable with the most extreme cost (most negative for minimization, most positive for maximization) is sometimes referred to as the steepest edge.

Given a system $\min_{\mathbf{x}} \mathbf{c}^T \mathbf{x}$ subject to $A\mathbf{x} = \mathbf{b}$, $\mathbf{x} \geq \mathbf{0}$, the reduced cost vector can be computed as $\mathbf{c} - A^T \mathbf{y}$, where $\mathbf{y}$ is the dual cost vector.

It follows directly that for a minimization problem, any non-basic variable at its lower bound with strictly negative reduced cost is eligible to enter the basis, while every basic variable must have a reduced cost of exactly 0. For a maximization problem, the non-basic variables at their lower bounds that are eligible to enter the basis have strictly positive reduced costs.
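As a quick numerical sketch of the formula above (the data is invented for illustration), the reduced cost vector follows from a single matrix-vector product:

```python
import numpy as np

# Hypothetical data: minimize c^T x subject to A x = b, x >= 0.
c = np.array([3.0, 2.0, 0.0])      # objective coefficients
A = np.array([[1.0, 1.0, 1.0]])    # one equality constraint
y = np.array([2.0])                # dual cost vector (shadow price), assumed given

d = c - A.T @ y                    # reduced cost vector
print(d)                           # [ 1.  0. -2.]
```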

Interpretation


For the case where x and y are optimal, the reduced costs can help explain why the variables attain the values they do: the reduced cost of each variable aggregates the contributions of the constraints that force the variable up or down. For non-basic variables, its distance from zero gives the minimal change in the objective coefficient needed to change the solution vector x.

In pivot strategy


In principle, a good pivot strategy would be to select whichever variable has the greatest reduced cost. However, the steepest edge may ultimately not be the most attractive choice, as the edge might be very short, affording only a small improvement in the objective function value. From a computational point of view, another problem is that computing the steepest edge requires an inner product for every variable in the system, making the computational cost too high in many cases. The Devex algorithm attempts to overcome the latter problem by estimating the reduced costs rather than recalculating them at every pivot step, exploiting the fact that a pivot step usually does not alter the reduced costs of all variables dramatically.

In linear programming


NOTE: This is a direct quote from the web site linked below: "Associated with each variable is a reduced cost value. However, the reduced cost value is only non-zero when the optimal value of a variable is zero. A somewhat intuitive way to think about the reduced cost variable is to think of it as indicating how much the cost of the activity represented by the variable must be reduced before any of that activity will be done. More precisely,

... the reduced cost value indicates how much the objective function coefficient on the corresponding variable must be improved before the value of the variable will be positive in the optimal solution.

In the case of a minimization problem, "improved" means "reduced". So, in the case of a cost-minimization problem, where the objective function coefficients represent the per-unit cost of the activities represented by the variables, the "reduced cost" coefficients indicate how much each cost coefficient would have to be reduced before the activity represented by the corresponding variable would be cost-effective. In the case of a maximization problem, "improved" means "increased". In this case, where, for example, the objective function coefficient might represent the net profit per unit of the activity. The reduced cost value indicates how much the profitability of the activity would have to be increased in order for the activity to occur in the optimal solution. The units of the reduced-cost values are the same as the units of the corresponding objective function coefficients.

If the optimal value of a variable is positive (not zero), then the reduced cost is always zero. If the optimal value of a variable is zero and the reduced cost corresponding to the variable is also zero, then there is at least one other corner that is also in the optimal solution. The value of this variable will be positive at one of the other optimal corners."[1]

References

from Grokipedia
In linear programming, the reduced cost of a decision variable, often denoted $\bar{c}_j$, measures the change in the objective function value per unit increase in that variable from its current bound (typically zero for non-basic variables), while keeping all other variables fixed at their optimal values. It is computed as $\bar{c}_j = c_j - \sum_{i=1}^m y_i a_{ij}$, where $c_j$ is the original objective coefficient of variable $x_j$, the $y_i$ are the shadow prices (dual variables) of the constraints, and the $a_{ij}$ are the constraint coefficients. For basic variables in the optimal solution, the reduced cost is always zero, as they are already at positive levels contributing to optimality. Reduced costs play a central role within the simplex method, helping analysts assess how perturbations in objective coefficients affect the optimal solution. Specifically, for a non-basic variable in a maximization problem, a negative reduced cost indicates the amount by which the coefficient must increase to make the variable enter the basis and potentially improve the objective; conversely, in minimization problems, a positive reduced cost shows the deterioration in the objective if the variable is forced into the solution. The concept also equates to the shadow price of the variable's nonnegativity constraint, providing insight into the value of relaxing bounds on decision variables. In practice, reduced costs are output by solvers such as Excel's Solver or specialized optimization software, aiding decision-makers by quantifying the opportunity cost of excluding a variable from the active solution. For instance, if a product's reduced cost is -2 in a profit-maximization model, its selling price must rise by at least $2 per unit to justify production. Zero reduced costs for non-basic variables signal multiple optimal solutions, allowing flexibility in implementation without changing the objective value.

Fundamentals

Definition

In linear programming, the reduced cost of a variable represents the change in the objective function value per unit increase in that variable when it is introduced into the solution from its current non-basic level. Specifically, it quantifies the amount by which the objective coefficient of a non-basic variable must be altered for that variable to become basic and enter the solution at a positive value, thereby improving the objective function. This measure is computed within the framework of the simplex method, where variables are classified as basic (included in the current solution) or non-basic (set to zero). The interpretation of reduced cost varies depending on whether the problem is a maximization or a minimization, and on the sign convention used. In maximization problems, a positive reduced cost for a non-basic variable indicates the potential increase in the objective value (e.g., profit) if that variable were allowed to enter the basis, signaling that the current solution is suboptimal. Conversely, in minimization problems, a negative reduced cost signifies the potential decrease in the objective value (e.g., cost) upon introducing the variable, again pointing to suboptimality if such a value exists. For basic variables in an optimal solution, the reduced cost is always zero, as these variables are already at levels that do not allow further improvement in the objective function without violating constraints. Non-basic variables, however, may have non-zero reduced costs that guide the optimization process toward feasibility and optimality. Consider a simple maximization problem with objective function $z = 3x_1 + 2x_2$, subject to constraints such as $x_1 + x_2 \leq 4$ and non-negativity. Suppose the current basic feasible solution has $x_1 = 4$, $x_2 = 0$, yielding $z = 12$. If the reduced cost for $x_2$ is $-1$, introducing $x_2$ would decrease $z$ by 1 unit per unit of $x_2$, confirming optimality, as no better solution exists by pivoting $x_2$ into the basis.
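The small example above can be checked directly; this sketch recovers the reduced cost of $x_2$ from the basis (the slack variable and the basis choice are spelled out explicitly):

```python
import numpy as np

# max z = 3*x1 + 2*x2 s.t. x1 + x2 <= 4; add slack s: x1 + x2 + s = 4.
c = np.array([3.0, 2.0, 0.0])      # coefficients of x1, x2, s
A = np.array([[1.0, 1.0, 1.0]])
basis = [0]                        # x1 is basic at the optimum (x1 = 4, z = 12)

y = np.linalg.solve(A[:, basis].T, c[basis])  # y^T = c_B^T B^{-1}
reduced = c - A.T @ y
print(reduced[1])                  # -1.0: raising x2 lowers z, so the point is optimal
```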

Mathematical Formulation

In linear programming, the reduced cost is formulated within the standard primal problem of minimizing the objective $\mathbf{c}^T \mathbf{x}$ subject to the equality constraints $A\mathbf{x} = \mathbf{b}$ and non-negativity $\mathbf{x} \geq \mathbf{0}$, where $A \in \mathbb{R}^{m \times n}$ is the coefficient matrix, $\mathbf{b} \in \mathbb{R}^m$ is the right-hand-side vector, $\mathbf{c} \in \mathbb{R}^n$ is the cost vector, and $\mathbf{x} \in \mathbb{R}^n$ are the decision variables. The associated dual problem is to maximize $\mathbf{b}^T \mathbf{y}$ subject to $A^T \mathbf{y} \leq \mathbf{c}$, where $\mathbf{y} \in \mathbb{R}^m$ represents the dual variables or shadow prices. The reduced cost of the $j$-th variable $x_j$ at an optimal solution is defined as $\bar{c}_j = c_j - \mathbf{a}_j^T \mathbf{y}$, where $\mathbf{a}_j$ is the $j$-th column of $A$ and $\mathbf{y}$ is the optimal dual solution. In vector notation, the full reduced cost vector is $\bar{\mathbf{c}} = \mathbf{c} - A^T \mathbf{y}$. By dual feasibility, $\bar{c}_j \geq 0$ for all $j$ corresponding to nonbasic variables at optimality in the minimization problem, ensuring no further improvement is possible. Within the simplex method, the reduced costs are derived from the current basic feasible solution, where the basis matrix $B$ consists of the columns of $A$ for the basic variables, and $\mathbf{c}_B$ are the corresponding costs. The shadow prices are $\mathbf{y}^T = \mathbf{c}_B^T B^{-1}$, so $\bar{c}_j = c_j - \mathbf{c}_B^T B^{-1} \mathbf{a}_j$ for nonbasic $j$. In the simplex tableau, these reduced costs appear as the coefficients in the objective row (row 0) for the nonbasic variables, specifically as $c_j - z_j$ where $z_j = \mathbf{c}_B^T B^{-1} \mathbf{a}_j$.
For maximization problems, the convention is similar but with the objective sense reversed: a positive reduced cost for a nonbasic variable indicates that increasing it would improve (increase) the objective value, signaling a direction for a basis change. This formulation ties the optimality conditions directly to dual feasibility, $A^T \mathbf{y} \leq \mathbf{c}$ for minimization or $A^T \mathbf{y} \geq \mathbf{c}$ for maximization.
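A minimization sketch of the same formulas, with invented data, confirms that at an optimal basis every reduced cost is non-negative:

```python
import numpy as np

# min 2*x1 + 3*x2 s.t. x1 + x2 >= 4, i.e. x1 + x2 - s = 4 with x, s >= 0.
c = np.array([2.0, 3.0, 0.0])
A = np.array([[1.0, 1.0, -1.0]])
basis = [0]                          # optimal basis: x1 = 4

y = np.linalg.solve(A[:, basis].T, c[basis])  # shadow prices y^T = c_B^T B^{-1}
reduced = c - A.T @ y                         # reduced cost vector
print(reduced)                                # [0. 1. 2.] -> all >= 0, so optimal
```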

Role in the Simplex Method

Entering Variable Selection

In the simplex method applied to maximization problems, the entering variable is selected from the nonbasic variables by identifying the one with the most positive reduced cost, as this indicates the largest potential increase in the objective function value per unit increase in that variable. The reduced costs are computed for all nonbasic variables using the current basis, typically via the formula $\bar{c}_j = c_j - \pi^T A_j$, where $\pi$ are the simplex multipliers (dual variables) and $A_j$ is the column for variable $j$; in tableau form, the negatives of the reduced costs appear in the objective row. The classical pivot strategy, often called Dantzig's rule, involves calculating all reduced costs and choosing the most positive one to enter the basis, which theoretically accelerates convergence by maximizing the improvement per iteration. However, this full pricing step becomes computationally expensive in large problems with many variables, as it requires updating and evaluating reduced costs for potentially thousands of nonbasic variables, leading to a high per-iteration cost. In practice, approximations mitigate this expense, such as partial pricing (scanning the nonbasics in a fixed order until a positive reduced cost is found) or advanced heuristics such as the Devex rule, which prioritizes variables likely to yield steep improvements based on historical pivots. Consider a simple maximization example: maximize $Z = 40x_1 + 30x_2$ subject to $x_1 + x_2 \leq 12$, $2x_1 + x_2 \leq 16$, $x_1, x_2 \geq 0$. Introducing slacks $s_1, s_2 \geq 0$, the initial tableau is:
| Basic | $x_1$ | $x_2$ | $s_1$ | $s_2$ | RHS |
|-------|-------|-------|-------|-------|-----|
| $s_1$ | 1 | 1 | 1 | 0 | 12 |
| $s_2$ | 2 | 1 | 0 | 1 | 16 |
| $Z$ | 40 | 30 | 0 | 0 | 0 |
The reduced costs appear in the $Z$ row: 40 for $x_1$ and 30 for $x_2$. The most positive is 40, so $x_1$ is selected as the entering variable. The minimum ratio test on the pivot column (quotients $12/1 = 12$ and $16/2 = 8$) identifies $s_2$ as leaving, with the pivot element 2. After pivoting, the new tableau has $x_1$ basic, updated reduced costs (now 10 for $x_2$), and $Z = 320$. To avoid cycling in degenerate linear programs, where the method might loop infinitely without progress, anti-cycling rules incorporate reduced costs into variable selection. Bland's rule, for instance, chooses the entering variable as the eligible nonbasic variable (positive reduced cost) with the smallest index, and the leaving variable similarly among minimum-ratio ties, ensuring the method terminates in a finite number of steps without cycles. This rule maintains computational simplicity while guaranteeing non-cycling behavior, though it may require more iterations than aggressive strategies like Dantzig's.
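The pivot arithmetic in this example can be verified mechanically; the sketch below recomputes the reduced costs before and after the pivot from the basis formula $\bar{c} = \mathbf{c} - A^T (B^{-T} \mathbf{c}_B)$:

```python
import numpy as np

# max Z = 40*x1 + 30*x2 s.t. x1 + x2 <= 12, 2*x1 + x2 <= 16 (slacks s1, s2).
c = np.array([40.0, 30.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([12.0, 16.0])

def reduced_costs(basis):
    y = np.linalg.solve(A[:, basis].T, c[basis])   # simplex multipliers
    return c - A.T @ y

print(reduced_costs([2, 3])[:2])   # [40. 30.] -> x1 enters (most positive)

basis = [2, 0]                     # after the pivot: basis {s1, x1}
x_B = np.linalg.solve(A[:, basis], b)
print(reduced_costs(basis)[1])     # 10.0 -> updated reduced cost of x2
print(c[basis] @ x_B)              # 320.0 -> new objective value
```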

Optimality Conditions

In the simplex method applied to maximization problems in linear programming, a basic feasible solution is optimal when all reduced costs of the non-basic variables in the final tableau are less than or equal to zero. This condition indicates that increasing any non-basic variable would not improve (and might worsen) the objective value, since $\bar{c}_j = c_j - \pi^T A_j \leq 0$ for each non-basic column $j$, where $\pi$ are the simplex multipliers (dual variables) and $A_j$ is the constraint column for variable $j$. For minimization problems, the optimality criterion reverses: all reduced costs must be greater than or equal to zero to ensure no further decrease in the objective is possible. A reduced cost of exactly zero for a non-basic variable signals the presence of multiple optimal solutions, as introducing that variable into the basis at a positive level would yield an alternative solution with the same objective value. This occurs because the variable lies on the boundary of the dual feasible region, allowing non-unique bases without changing the optimum. In contrast, strictly negative reduced costs (for maximization) in the final tableau confirm a unique optimum, as any deviation would degrade the solution. Reduced costs confirm overall optimality by ensuring dual feasibility alongside primal feasibility, with complementary slackness providing the linkage. Specifically, the sign condition on the reduced costs (in the maximization convention, $\pi^T A_j - c_j \geq 0$) satisfies the dual constraints, while complementary slackness holds because basic variables (with positive values) have zero reduced costs, and non-basic variables (at zero) may carry non-zero reduced costs without violation. This pairwise condition, that a primal variable is positive only if its reduced cost is zero and vice versa, guarantees that the primal and dual solutions align at the optimum.
To illustrate the transition to optimality, consider a maximization problem solved via the simplex method. In an initial or intermediate suboptimal tableau, the objective row (showing reduced costs) might appear as:
| Basis | $x_1$ | $x_2$ | $x_3$ | $s_1$ | $s_2$ | RHS |
|-------|-------|-------|-------|-------|-------|-----|
| $s_1$ | 1 | 1 | 0 | 1 | 0 | 4 |
| $s_2$ | 2 | 1 | 1 | 0 | 1 | 5 |
| $z$ | 3 | 1 | 4 | 0 | 0 | 0 |
Here, the positive reduced costs (3 for $x_1$, 1 for $x_2$, 4 for $x_3$) indicate the solution is not optimal, prompting selection of an entering variable (e.g., $x_3$). After pivoting operations, the final optimal tableau could be:
| Basis | $x_1$ | $x_2$ | $x_3$ | $s_1$ | $s_2$ | RHS |
|-------|-------|-------|-------|-------|-------|-----|
| $x_3$ | 1/2 | 1/2 | 1 | 1/2 | 0 | 2.5 |
| $s_2$ | 3/2 | 1/2 | 0 | -1/2 | 1 | 1.5 |
| $z$ | -1 | -2 | 0 | -1 | 0 | 10 |
Now all reduced costs for non-basic variables ($-1$ for $x_1$, $-2$ for $x_2$, $-1$ for $s_1$) are non-positive, confirming optimality with $z = 10$. If a non-basic reduced cost were zero here, pivoting on it would reveal an alternate optimum.
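These sign conditions are easy to encode; a minimal helper (maximization convention), applied to the final tableau above:

```python
TOL = 1e-9

def is_optimal_max(nonbasic_reduced_costs):
    # Maximization: optimal iff every non-basic reduced cost is <= 0.
    return all(rc <= TOL for rc in nonbasic_reduced_costs)

def has_alternate_optimum(nonbasic_reduced_costs):
    # A zero reduced cost on a non-basic variable signals multiple optima.
    return is_optimal_max(nonbasic_reduced_costs) and any(
        abs(rc) <= TOL for rc in nonbasic_reduced_costs)

rcs = [-1, -2, -1]                  # non-basic reduced costs from the final tableau
print(is_optimal_max(rcs))          # True
print(has_alternate_optimum(rcs))   # False: strictly negative -> unique optimum
```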

Interpretations

Economic Interpretation

In economic terms, the reduced cost of a non-basic variable represents the marginal value associated with its objective coefficient, specifically indicating how much that coefficient must improve (for a maximization problem) before the variable becomes economically viable to introduce into the optimal solution. For instance, a negative reduced cost signifies that increasing the variable by one unit would decrease the objective value by the absolute amount of the reduced cost, reflecting the net penalty or shortfall in the variable's contribution relative to resource costs. This interpretation arises from the reduced cost's role as the shadow price on the non-negativity constraint of that variable, quantifying the imputed cost of forcing the activity into production. In the context of resource allocation, particularly for non-basic variables held at zero in the optimal solution, the reduced cost measures the minimum improvement required to justify allocating resources to that activity. It embodies the threshold at which the marginal benefit of the activity equals the opportunity cost of the resources it consumes, ensuring that only activities with non-positive reduced costs (in maximization) are excluded without regret. This guides decision-makers in evaluating whether scarce resources, such as labor or materials, should be diverted to an underutilized option, preventing inefficient expansions. A practical example occurs in production planning, where reduced costs determine the profit threshold for introducing a new product line. Consider a furniture manufacturer optimizing table and bench production subject to wood and labor constraints; if the reduced cost for tables is -$0.30, the profit per table must rise by at least $0.30 for table production to be worthwhile, as this would offset the resource costs and bring tables into the optimal mix without reducing overall profit. Such analysis helps firms assess market-driven changes, such as price adjustments, when deciding whether to expand product offerings.
Non-zero reduced costs connect directly to opportunity costs by quantifying the foregone benefit of not engaging the variable, equivalent to the net loss in objective value from holding it at zero. In economic terms, this represents the implicit penalty of exclusion, derived from the difference between the variable's direct contribution and the shadow-priced value of its resource demands, thereby highlighting the trade-offs in constrained environments.

Sensitivity Analysis

In linear programming sensitivity analysis, reduced costs quantify the robustness of an optimal solution to perturbations in the objective function coefficients, in particular by indicating how far a non-basic variable's coefficient can change before the variable becomes eligible to enter the basis. Specifically, for a maximization problem, a non-basic variable $x_j$ remains non-basic as long as the change in its coefficient $c_j$ does not raise its reduced cost $\bar{c}_j$ above zero, preserving the optimality of the current basis. This leverages the economic interpretation of reduced costs as the marginal improvement needed for a variable to contribute positively to the objective. The allowable range for the coefficient $c_j$ of a non-basic variable follows directly from its reduced cost: in a maximization problem, $c_j$ can increase by up to $-\bar{c}_j$ (where $\bar{c}_j < 0$) without altering the basis, while decreases are typically unbounded, as they only further deter the variable from entering the basis. For instance, if $\bar{c}_j = -1.5$, the coefficient can increase by at most 1.5 units before $x_j$ enters the basis upon re-optimization. These ranges are computed by ensuring that the updated reduced costs of all non-basic variables remain non-positive after the perturbation. Solvers like Excel's Solver provide sensitivity reports that present these ranges numerically; consider a product mix problem maximizing profit $4x_1 + 6x_2$ subject to constraints, where the optimal basis has $x_1$ basic and $x_2$ non-basic with $\bar{c}_2 = -2$. The report would show that $c_2$ (the coefficient of $x_2$) has an allowable increase of 2 (to 8) and an unlimited allowable decrease, meaning the basis remains optimal for $c_2 \leq 8$.
A key limitation is that reduced costs inform sensitivity only for non-basic variables' objective coefficients; for basic variables, changes affect the dual prices and thus the reduced costs of multiple variables, often requiring a full re-optimization or tableau adjustment to determine ranges rather than a simple bound.
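The allowable-increase computation for a non-basic variable, as in the example above, can be sketched in a few lines:

```python
def allowable_increase_max(reduced_cost):
    # Maximization, non-basic variable: its objective coefficient can rise by
    # -reduced_cost before the variable becomes eligible to enter the basis;
    # any decrease leaves the basis unchanged.
    if reduced_cost > 0:
        raise ValueError("non-basic reduced cost must be <= 0 at a maximization optimum")
    return -reduced_cost

# Example from the text: c2 = 6 with reduced cost -2 -> basis optimal while c2 <= 8.
c2 = 6.0
print(c2 + allowable_increase_max(-2.0))   # 8.0
```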

Advanced Applications

In Duality Theory

In linear programming, reduced costs are integral to duality theory. Consider the primal problem formulated as a maximization: $\max \mathbf{c}^T \mathbf{x}$ subject to $A\mathbf{x} \leq \mathbf{b}$, $\mathbf{x} \geq \mathbf{0}$. The associated dual is $\min \mathbf{y}^T \mathbf{b}$ subject to $\mathbf{y}^T A \geq \mathbf{c}^T$, $\mathbf{y} \geq \mathbf{0}$, where $\mathbf{y}$ is the vector of dual variables (shadow prices). The reduced cost of the $j$-th primal variable, consistent with the standard definition $\bar{c}_j = c_j - \mathbf{y}^T A_j$, must satisfy $\bar{c}_j \leq 0$ for all $j$ at primal optimality in a maximization problem. This condition is equivalent to the dual feasibility requirement $\mathbf{y}^T A_j \geq c_j$, as the dual constraint slacks are $-\bar{c}_j \geq 0$. Complementary slackness links the optimal solutions of the primal and dual, as per the strong duality theorem. For optimal $\mathbf{x}^*$ and $\mathbf{y}^*$, $x_j^* (\mathbf{y}^{*T} A_j - c_j) = 0$ for each $j$, meaning $\bar{c}_j = 0$ whenever $x_j^* > 0$. Thus, positive primal variables correspond to tight dual constraints (zero reduced cost), while non-zero reduced costs (negative in maximization) imply that the primal variable is at its bound (zero). This ensures equality of the primal and dual objectives: $\mathbf{c}^T \mathbf{x}^* = \mathbf{y}^{*T} \mathbf{b}$. From the dual viewpoint, the quantities $-\bar{c}_j$ measure the slack in the dual constraints, indicating by how much $\mathbf{y}^T A_j$ exceeds $c_j$. A strictly negative $\bar{c}_j$ reflects excess in the dual constraint for that variable, highlighting the margin of dual feasibility. This symmetry underscores the connection between primal optimality (non-positive reduced costs) and dual feasibility.
To illustrate, consider the primal: $\max 6x_1 + 14x_2 + 13x_3$ subject to $\tfrac{1}{2}x_1 + 2x_2 + x_3 \leq 24$, $x_1 + 2x_2 + 4x_3 \leq 60$, $x_1, x_2, x_3 \geq 0$. The optimal solution is $x_1^* = 36$, $x_2^* = 0$, $x_3^* = 6$, with objective 294. The dual is $\min 24y_1 + 60y_2$ subject to $\tfrac{1}{2}y_1 + y_2 \geq 6$, $2y_1 + 2y_2 \geq 14$, $y_1 + 4y_2 \geq 13$, $y_1, y_2 \geq 0$, with optimum $y_1^* = 11$, $y_2^* = 0.5$ and objective 294. The reduced costs are $\bar{c}_1 = 6 - (11 \cdot \tfrac{1}{2} + 0.5 \cdot 1) = 0$, $\bar{c}_2 = 14 - (11 \cdot 2 + 0.5 \cdot 2) = -9$, $\bar{c}_3 = 13 - (11 \cdot 1 + 0.5 \cdot 4) = 0$. Here $\bar{c}_2 < 0$ corresponds to $x_2^* = 0$, and the zero reduced costs align with the positive $x_1^*$ and $x_3^*$, confirming complementary slackness. The first and third dual constraints hold with equality (zero slack), matching the positive primal variables, while both primal constraints are binding, matching the positive dual variables.
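The numbers in this example can be verified in a few lines; the sketch below checks the reduced costs, complementary slackness, and strong duality:

```python
import numpy as np

# Primal: max 6*x1 + 14*x2 + 13*x3
#         s.t. 0.5*x1 + 2*x2 + x3 <= 24,  x1 + 2*x2 + 4*x3 <= 60.
c = np.array([6.0, 14.0, 13.0])
A = np.array([[0.5, 2.0, 1.0],
              [1.0, 2.0, 4.0]])
b = np.array([24.0, 60.0])
x = np.array([36.0, 0.0, 6.0])     # optimal primal solution
y = np.array([11.0, 0.5])          # optimal dual solution

reduced = c - A.T @ y
print(reduced)                     # [ 0. -9.  0.]
assert np.allclose(x * reduced, 0.0)   # complementary slackness
assert np.isclose(c @ x, b @ y)        # strong duality: both objectives equal 294
```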

Extensions to Other Optimization Problems

In integer programming, reduced costs from the linear programming relaxation are used in branch-and-bound algorithms for variable fixing and bound tightening. Reduced cost fixing leverages complementary slackness at optimal dual solutions to eliminate variables that cannot improve the objective beyond the current primal bound, strengthening the relaxation and reducing the search space. Modern mixed-integer programming solvers like CPLEX incorporate enhanced reduced cost fixing techniques, often during presolving, to improve efficiency on benchmarks like MIPLIB. In network flow problems, reduced costs are adapted using node potentials in algorithms such as the successive shortest path method for minimum-cost flows. Node potentials $\pi(v)$ define reduced edge costs $\tilde{c}(u \to v) = c(u \to v) + \pi(u) - \pi(v)$, which preserve shortest path distances while ensuring non-negativity, so that Dijkstra's algorithm can be used for efficient computation rather than Bellman-Ford. Potentials are updated after each flow augmentation using the distances from the latest shortest-path computation, maintaining non-negative reduced costs and yielding a time complexity of $O(B E \log V)$, where $B$ is the total flow value. This extends the economic interpretation of reduced costs to capacitated networks, aiding optimality verification without negative cycles. In interior-point methods for linear programming, primal-dual frameworks produce analogs of reduced costs via dual multipliers and barrier terms, which measure deviations from optimality and guide steps toward the optimum. Primal-dual interior-point algorithms compute Newton directions incorporating scaled dual variables, similar to reduced costs, to diminish the duality gap and follow the central path parameterized by the barrier parameter $\mu$. These quantities provide sensitivity insights akin to simplex reduced costs, assessing the impact of perturbations across barrier subproblems. In contrast to the simplex method, interior-point approaches trace smooth paths through the interior of the feasible region, with the barrier enforcing strict feasibility and self-concordance enabling convergence guarantees.
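The node-potential transformation is mechanical; here is a small sketch on an invented four-node network, where the potentials are the shortest-path distances from the source (computed by hand for this tiny graph):

```python
# Reduced edge costs c~(u->v) = c(u->v) + pi(u) - pi(v), with pi the
# shortest-path distances from source s on this hypothetical network.
edges = {('s', 'a'): 2, ('s', 'b'): 5, ('a', 'b'): 1, ('b', 't'): 2}
pi = {'s': 0, 'a': 2, 'b': 3, 't': 5}   # dist(a)=2, dist(b)=min(5, 2+1)=3, dist(t)=5

reduced = {(u, v): w + pi[u] - pi[v] for (u, v), w in edges.items()}
print(reduced)   # all values >= 0; edges on shortest paths get reduced cost 0
```

With distances as potentials, every reduced cost is non-negative, which is exactly the property that lets later augmentation passes run Dijkstra instead of Bellman-Ford.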
