Linear inequality
from Wikipedia

In mathematics, a linear inequality is an inequality that involves a linear function. A linear inequality contains one of the symbols of inequality:[1]

  • < less than
  • > greater than
  • ≤ less than or equal to
  • ≥ greater than or equal to
  • ≠ not equal to

A linear inequality looks exactly like a linear equation, with the inequality sign replacing the equality sign.

Linear inequalities of real numbers


Two-dimensional linear inequalities

Figure: graph of the linear inequality x + 3y < 9

Two-dimensional linear inequalities are expressions in two variables of the form

ax + by < c  or  ax + by ≤ c,

where the inequalities may either be strict or not. The solution set of such an inequality can be graphically represented by a half-plane (all the points on one "side" of a fixed line) in the Euclidean plane.[2] The line that determines the half-planes (ax + by = c) is not included in the solution set when the inequality is strict. A simple procedure to determine which half-plane is in the solution set is to calculate the value of ax + by at a point (x0, y0) which is not on the line and observe whether or not the inequality is satisfied.

For example,[3] to draw the solution set of x + 3y < 9, one first draws the line with equation x + 3y = 9 as a dotted line, to indicate that the line is not included in the solution set since the inequality is strict. Then, pick a convenient point not on the line, such as (0,0). Since 0 + 3(0) = 0 < 9, this point is in the solution set, so the half-plane containing this point (the half-plane "below" the line) is the solution set of this linear inequality.
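The test-point procedure described above is easy to mechanize. The following is a minimal Python sketch (not part of the original article); the function name half_plane_test and the example points are illustrative choices.

```python
# A small sketch (not from the article) of the test-point procedure described
# above: evaluate ax + by at a point off the line and compare with c.
def half_plane_test(a, b, c, point, strict=True):
    """Return True if `point` lies in the solution set of ax + by < c (or <= c)."""
    x0, y0 = point
    value = a * x0 + b * y0
    return value < c if strict else value <= c

# x + 3y < 9, tested at the origin (which is not on the line x + 3y = 9):
print(half_plane_test(1, 3, 9, (0, 0)))   # True, so the origin's side is shaded
print(half_plane_test(1, 3, 9, (9, 3)))   # False, the other side is excluded
```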

Linear inequalities in general dimensions


In Rn, linear inequalities are the expressions that may be written in the form

f(x1, x2, ..., xn) < b  or  f(x1, x2, ..., xn) ≤ b,

where f is a linear form (also called a linear functional), and b a constant real number.

More concretely, this may be written out as

a1x1 + a2x2 + ... + anxn < b

or

a1x1 + a2x2 + ... + anxn ≤ b.

Here x1, x2, ..., xn are called the unknowns, and a1, a2, ..., an are called the coefficients.

Alternatively, these may be written as

g(x) < 0  or  g(x) ≤ 0,

where g is an affine function.[4] That is,

a1x1 + a2x2 + ... + anxn − b < 0

or

a1x1 + a2x2 + ... + anxn − b ≤ 0.

Note that any inequality containing a "greater than" or a "greater than or equal" sign can be rewritten with a "less than" or "less than or equal" sign, so there is no need to define linear inequalities using those signs.

Systems of linear inequalities


A system of linear inequalities is a set of linear inequalities in the same variables:

a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
  ⋮
am1x1 + am2x2 + ... + amnxn ≤ bm

Here x1, ..., xn are the unknowns, a11, ..., amn are the coefficients of the system, and b1, ..., bm are the constant terms.

This can be concisely written as the matrix inequality

Ax ≤ b,
where A is an m×n matrix of constants, x is an n×1 column vector of variables, b is an m×1 column vector of constants and the inequality relation is understood row-by-row.
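As an illustration of this matrix form, the short Python/NumPy sketch below (not from the article; the example system and the helper name satisfies are assumptions for demonstration) checks Ax ≤ b row by row, optionally treating some rows as strict.

```python
# A minimal sketch of checking the matrix inequality A x <= b row by row.
import numpy as np

A = np.array([[1.0, 3.0],    # x + 3y <= 9
              [-1.0, 0.0],   # -x     <= 0  (i.e. x >= 0)
              [0.0, -1.0]])  # -y     <= 0  (i.e. y >= 0)
b = np.array([9.0, 0.0, 0.0])

def satisfies(A, b, x, strict=None):
    """Return True if x satisfies every row of A x <= b.

    `strict` may be a boolean mask marking rows that are strict (<)
    rather than non-strict (<=); by default all rows are non-strict.
    """
    lhs = A @ x
    if strict is None:
        strict = np.zeros(len(b), dtype=bool)
    return np.all(np.where(strict, lhs < b, lhs <= b))

print(satisfies(A, b, np.array([1.0, 2.0])))  # True: 1 + 6 <= 9, 1 >= 0, 2 >= 0
print(satisfies(A, b, np.array([4.0, 2.0])))  # False: 4 + 6 > 9
```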

In the above systems both strict and non-strict inequalities may be used.

  • Not all systems of linear inequalities have solutions.

Variables can be eliminated from systems of linear inequalities using Fourier–Motzkin elimination.[5]

Applications


Polyhedra


The set of solutions of a real linear inequality constitutes a half-space of the n-dimensional real space, one of the two defined by the corresponding linear equation.

The set of solutions of a system of linear inequalities corresponds to the intersection of the half-spaces defined by individual inequalities. It is a convex set, since the half-spaces are convex sets, and the intersection of a set of convex sets is also convex. In the non-degenerate cases this convex set is a convex polyhedron (possibly unbounded, e.g., a half-space, a slab between two parallel half-spaces or a polyhedral cone). It may also be empty or a convex polyhedron of lower dimension confined to an affine subspace of the n-dimensional space Rn.

Linear programming


A linear programming problem seeks to optimize (find a maximum or minimum value) a function (called the objective function) subject to a number of constraints on the variables which, in general, are linear inequalities.[6] The list of constraints is a system of linear inequalities.

Generalization


The above definition requires well-defined operations of addition, multiplication and comparison; therefore, the notion of a linear inequality may be extended to ordered rings, and in particular to ordered fields.

from Grokipedia
A linear inequality is a mathematical statement that relates two linear expressions using one of the inequality symbols: less than (<), greater than (>), less than or equal to (≤), or greater than or equal to (≥). Unlike linear equations, which seek exact equality, linear inequalities describe ranges of values satisfying the relation, forming the basis for modeling constraints in various fields.

Linear inequalities can involve one or more variables and are solved using methods analogous to those for linear equations, with the key adjustment of reversing the inequality direction when multiplying or dividing both sides by a negative number. For a single linear inequality in one variable, such as $ax + b < c$ where $a \neq 0$, the solution is an interval on the real number line, which can be represented using interval notation or graphed on a number line. In two variables, linear inequalities like $ax + by \leq c$ are graphed as half-planes bounded by the line $ax + by = c$, with the solution set consisting of all points on one side of the boundary, shaded to indicate the feasible region. Systems of linear inequalities, comprising multiple such constraints, define polyhedral regions in the plane or higher dimensions, which are central to linear programming, a technique for optimizing linear objective functions subject to these constraints.

Applications span economics, engineering, and operations research, including inventory control, production planning, pricing strategies, and resource allocation, where businesses and scientists use these models to identify optimal solutions within real-world limitations. The study of linear inequalities also underpins advanced topics in optimization and control theory, with foundational results like Farkas's lemma providing conditions for the solvability of such systems.

Fundamentals of Linear Inequalities

Definition and Notation

A linear inequality in $n$ real variables $x_1, \dots, x_n$ is an inequality of the form $a_1 x_1 + \dots + a_n x_n \leq b$, where the coefficients $a_1, \dots, a_n, b \in \mathbb{R}$ and not all $a_i = 0$. Equivalent forms use the relations $<$, $\geq$, or $>$ in place of $\leq$. This expression involves a linear combination of the variables compared to a constant, distinguishing it from nonlinear inequalities that include higher-degree terms or products of variables. The distinction between strict and non-strict inequalities affects the nature of the solution set: non-strict forms $\leq$ and $\geq$ define closed half-spaces, which include the bounding hyperplane, while strict forms $<$ and $>$ define open half-spaces, excluding the boundary. In Euclidean space $\mathbb{R}^n$, a half-space is the set of all points satisfying such an inequality, partitioned from its complement by the hyperplane where equality holds.

Standard notation employs vector and matrix forms for conciseness, especially in higher dimensions. A single inequality is written as $\mathbf{a}^T \mathbf{x} \leq b$, where $\mathbf{a} \in \mathbb{R}^n$ is the coefficient vector and $\mathbf{x} \in \mathbb{R}^n$ the vector of variables. For a system of $m$ such inequalities, the compact affine form is $A \mathbf{x} \leq \mathbf{b}$, with $A$ an $m \times n$ matrix and $\mathbf{b} \in \mathbb{R}^m$. This notation highlights the affine nature of the inequalities, as they can be rewritten as $\mathbf{a}^T (\mathbf{x} - \mathbf{x}_0) \leq 0$ for some point $\mathbf{x}_0$.

The origins of linear inequalities trace to 18th-century mathematics, where Euler and contemporaries applied them in studying geometric constraints and systems of equations. The first systematic treatment appeared in Joseph Fourier's 1827 work on solving systems of linear inequalities, which introduced an elimination method later refined by Theodore Motzkin in 1936 and known as Fourier-Motzkin elimination.

Basic Properties and Operations

Linear inequalities over the real numbers satisfy several fundamental properties that allow for algebraic manipulation while preserving the truth of the statement. For any real numbers $a$, $b$, and $c$, if $a < b$, then $a + c < b + c$ and $a - c < b - c$, meaning addition or subtraction of the same constant on both sides preserves the direction of the inequality. Similarly, if $a < b$ and $c > 0$, then $ac < bc$ and $\frac{a}{c} < \frac{b}{c}$, so multiplication or division by a positive scalar also preserves the inequality direction. However, if $c < 0$, the inequality reverses: $ac > bc$ and $\frac{a}{c} > \frac{b}{c}$. These properties extend to combining inequalities through transitivity: if $a \leq b$ and $b \leq c$, then $a \leq c$.

Equivalence under scaling holds for positive multipliers; multiplying both sides of a linear inequality by a positive constant yields an equivalent inequality with the same solution set. For instance, starting from $2x + 3 \leq 7$, subtracting 3 from both sides gives $2x \leq 4$, and dividing by 2 (positive) yields $x \leq 2$, preserving the solution. In contrast, for $-3x + 1 \geq 5$, subtracting 1 gives $-3x \geq 4$, and dividing by $-3$ (negative) reverses the inequality to $x \leq -\frac{4}{3}$.

Degenerate cases arise when the coefficient of the variable is zero, reducing the inequality to a constant comparison independent of the variable. For example, $0 \cdot x + b \leq d$ simplifies to $b \leq d$; if $b \leq d$, the inequality holds for all real $x$, while if $b > d$, it holds for no $x$. These properties form the basis for manipulating linear inequalities without altering their validity over the reals.
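The manipulation rules above can be checked mechanically. The following Python sketch is illustrative only; exact rational arithmetic via fractions.Fraction is a convenience choice, not something the source prescribes.

```python
# A brief sketch checking the manipulation rules above with exact arithmetic.
from fractions import Fraction

a, b, c = Fraction(1), Fraction(3), Fraction(-2)
assert a < b and a + c < b + c            # adding c preserves the direction
assert a < b and a * Fraction(2) < b * 2  # multiplying by a positive preserves it
assert a < b and a * c > b * c            # multiplying by a negative reverses it

# Worked example from the text: 2x + 3 <= 7  ->  x <= 2
bound = (Fraction(7) - 3) / 2
print("x <=", bound)   # x <= 2

# -3x + 1 >= 5  ->  -3x >= 4  ->  x <= -4/3 (division by -3 flips >= to <=)
bound = (Fraction(5) - 1) / Fraction(-3)
print("x <=", bound)   # x <= -4/3
```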

Linear Inequalities in Low Dimensions

Inequalities in One Variable

A linear inequality in one variable takes the form $ax + b < c$, $ax + b \leq c$, $ax + b > c$, or $ax + b \geq c$, where $a \neq 0$ and $a, b, c$ are real constants. The solution consists of all values of $x$ that satisfy the inequality. To solve such an inequality, perform algebraic operations to isolate the variable on one side, treating it similarly to solving a linear equation but reversing the inequality direction when multiplying or dividing both sides by a negative number. For instance, consider $2x - 5 \leq 7$: add 5 to both sides to get $2x \leq 12$, then divide by 2 to obtain $x \leq 6$. If the coefficient of $x$ is negative, as in $-3x + 4 > 10$, subtract 4 to yield $-3x > 6$, then divide by $-3$ and reverse the sign: $x < -2$.

The solution set is an interval on the real number line, such as $(-\infty, k)$, $(-\infty, k]$, $[k, \infty)$, or $(k, \infty)$, depending on whether the inequality is strict or inclusive. For the earlier example $x \leq 6$, the interval is $(-\infty, 6]$. In practical contexts, linear inequalities model constraints like budgets. For example, if items cost $5 each and the budget is $100, the inequality $5x \leq 100$ simplifies to $x \leq 20$, indicating at most 20 items can be bought.

Absolute value inequalities in linear form, such as $|x - a| \leq b$ with $b > 0$, translate to the compound inequality $-b \leq x - a \leq b$, or equivalently $a - b \leq x \leq a + b$. Solving $|x - 3| \leq 4$ yields $-1 \leq x \leq 7$, an interval of length 8 centered at 3. For strict inequalities like $|x - a| < b$, the solution is the open interval $(a - b, a + b)$.
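A one-variable solver following the steps described above might look like the Python sketch below; the helper names solve_leq and solve_abs_leq are hypothetical, and the routine only covers the "≤" and absolute-value cases for brevity.

```python
# A hypothetical helper (not from the source) that solves a*x + b <= c over the
# reals and returns the solution interval, flipping the relation when a < 0.
def solve_leq(a, b, c):
    """Solve a*x + b <= c for x (a != 0); return the interval as a string."""
    if a == 0:
        raise ValueError("coefficient a must be nonzero")
    bound = (c - b) / a
    # Dividing by a negative a reverses <= into >=.
    return f"(-inf, {bound}]" if a > 0 else f"[{bound}, inf)"

print(solve_leq(2, -5, 7))    # 2x - 5 <= 7   ->  (-inf, 6.0]
print(solve_leq(-3, 4, -10))  # -3x + 4 <= -10 -> [4.666..., inf)

# |x - a| <= r unpacks into the compound inequality a - r <= x <= a + r.
def solve_abs_leq(a, r):
    return (a - r, a + r)

print(solve_abs_leq(3, 4))    # (-1, 7): the interval [-1, 7]
```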

Graphical Solutions in Two Dimensions

To graphically represent a linear inequality in two variables, such as $ax + by \leq c$, where $a$, $b$, and $c$ are constants and not both $a$ and $b$ are zero, first plot the corresponding boundary line given by the equation $ax + by = c$. This line divides the Cartesian plane into two regions known as half-planes. The solution set, or feasible region, consists of one of these half-planes, determined by which side satisfies the original inequality.

The graphing process begins by finding the intercepts of the boundary line: set $y = 0$ to solve for the x-intercept and set $x = 0$ to solve for the y-intercept, then plot these points and connect them with a straight line. Use a solid line if the inequality includes equality (≤ or ≥), indicating that points on the boundary are part of the solution set; use a dashed line for strict inequalities (< or >), excluding the boundary. To identify the correct half-plane for shading, select a test point not on the line, such as the origin (0,0), and substitute its coordinates into the inequality. If the inequality holds true for the test point, shade the half-plane containing that point; if false, shade the opposite half-plane. This method works reliably unless the line passes through the test point, in which case choose another point, like (1,1). For vertical or horizontal lines, which are parallel to the axes, the process remains the same but simplifies due to the line's orientation. A vertical line like $x = k$ divides the plane into left and right half-planes, while a horizontal line like $y = k$ divides it into upper and lower half-planes; testing a point determines the shading direction accordingly.

Consider the example $x + y \leq 4$. The boundary line is $x + y = 4$, with intercepts at (4,0) and (0,4), plotted as a solid line. Testing (0,0): $0 + 0 \leq 4$ is true, so shade the half-plane below and including the line, forming the feasible region. For the strict inequality $x + y < 4$, use a dashed line and shade the same half-plane but exclude the boundary. Another example is $2x - 3y \geq 6$. The boundary $2x - 3y = 6$ has intercepts at (3,0) and (0,-2), plotted solid. Testing (0,0): $2(0) - 3(0) \geq 6$ is false, so shade the opposite half-plane, below and including the line. The feasible region for a single linear inequality is always an unbounded half-plane, providing a visual representation of all points (x, y) satisfying the condition. This graphical approach emphasizes the geometric interpretation of linear inequalities as dividing the plane into solution and non-solution areas.
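For readers who want to reproduce such plots, the following sketch (assuming NumPy and Matplotlib are available; the grid ranges are arbitrary choices) shades the feasible half-plane of x + y ≤ 4 and draws its solid boundary.

```python
# A plotting sketch (not from the source) for the example x + y <= 4 above.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 6, 400)
y = np.linspace(-1, 6, 400)
X, Y = np.meshgrid(x, y)

feasible = (X + Y <= 4).astype(float)      # 1 inside the half-plane, 0 outside
plt.contourf(X, Y, feasible, levels=[0.5, 1.5], alpha=0.3)  # shade where feasible
plt.plot(x, 4 - x, linewidth=2)            # solid boundary line x + y = 4
plt.scatter([0], [0], marker="o")          # the test point (0, 0)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Feasible region of x + y <= 4")
plt.show()
```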

Linear Inequalities in Higher Dimensions

General Form and Solution Sets

In the context of n-dimensional real space, a linear inequality takes the general form $\mathbf{a} \cdot \mathbf{x} \leq b$, where $\mathbf{x} \in \mathbb{R}^n$ is the variable vector, $\mathbf{a} \in \mathbb{R}^n$ is a fixed nonzero coefficient vector (the normal vector), and $b \in \mathbb{R}$ is a constant scalar determining the offset. This form generalizes the concept from lower dimensions and represents a constraint that defines one side of a separating boundary in $\mathbb{R}^n$. The solution set of this inequality, denoted $H = \{\mathbf{x} \in \mathbb{R}^n \mid \mathbf{a} \cdot \mathbf{x} \leq b\}$, is known as a closed half-space in $\mathbb{R}^n$.

Half-spaces possess key geometric properties: they are convex, meaning that the line segment connecting any two points in $H$ lies entirely within $H$, as established by the fact that if $\mathbf{a} \cdot \mathbf{y} \leq b$ and $\mathbf{a} \cdot \mathbf{z} \leq b$, then for $0 \leq t \leq 1$, $\mathbf{a} \cdot (t\mathbf{y} + (1-t)\mathbf{z}) = t(\mathbf{a} \cdot \mathbf{y}) + (1-t)(\mathbf{a} \cdot \mathbf{z}) \leq tb + (1-t)b = b$. Additionally, half-spaces are unbounded, extending infinitely in directions orthogonal to $\mathbf{a}$ or away from the boundary; only if a zero coefficient vector were allowed could the inequality become inconsistent (the empty set) or tautological (the entire space).

In specific dimensions, these solution sets take familiar forms that illustrate the abstraction. For $n = 1$, where $\mathbf{x} = x \in \mathbb{R}$, the half-space is a closed ray of the form $(-\infty, b/a]$ if $a > 0$, or $[b/a, \infty)$ if $a < 0$. For $n = 2$, the solution set is a closed half-plane, bounded by a line and extending infinitely in one direction, which can be visualized using the graphical techniques discussed above. The boundary of the half-space is the hyperplane defined by the equality $\mathbf{a} \cdot \mathbf{x} = b$, which forms an $(n-1)$-dimensional affine subspace of $\mathbb{R}^n$. This hyperplane acts as the separating surface, with the half-space consisting of all points on one side, including the boundary itself.
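The convexity argument above can be illustrated numerically. In the sketch below (the normal vector, offset, and test points are made-up values), every convex combination of two points of the half-space is verified to remain in it.

```python
# A numerical illustration (not from the source) of the convexity argument:
# convex combinations of two points in the half-space a.x <= b stay inside it.
import numpy as np

a = np.array([2.0, -1.0, 3.0])   # example normal vector in R^3 (assumed values)
b = 5.0
y = np.array([1.0, 0.0, 1.0])    # a.y = 5  <= 5
z = np.array([0.0, 2.0, 0.0])    # a.z = -2 <= 5
assert a @ y <= b and a @ z <= b

for t in np.linspace(0.0, 1.0, 11):
    p = t * y + (1 - t) * z
    # a.p = t(a.y) + (1-t)(a.z) <= t*b + (1-t)*b = b
    assert a @ p <= b + 1e-12
print("all convex combinations stay in the half-space")
```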

Systems of Linear Inequalities

A system of linear inequalities consists of a finite collection of inequalities of the form $Ax \leq b$, where $A$ is an $m \times n$ matrix with real entries, $x \in \mathbb{R}^n$ is the vector of variables, and $b \in \mathbb{R}^m$ is a vector of constants. Each row of the system defines a half-space in $\mathbb{R}^n$, and the inequalities collectively constrain the feasible region for $x$. The solution set of the system, denoted $P = \{x \in \mathbb{R}^n \mid Ax \leq b\}$, is the intersection of these $m$ half-spaces, forming a polyhedron in $\mathbb{R}^n$. This set $P$ is feasible if it is non-empty, meaning there exists at least one $x$ satisfying all inequalities; otherwise, the system is infeasible and $P = \emptyset$.

One method for determining the feasibility of such a system is the Fourier-Motzkin elimination algorithm, which eliminates variables sequentially to reduce the problem to a single inequality. The steps are as follows: (1) select a variable $x_k$ to eliminate; (2) classify the inequalities based on the coefficient of $x_k$: those with positive coefficients provide upper bounds on $x_k$, those with negative coefficients provide lower bounds, and those with zero coefficients are retained unchanged; (3) for each pair consisting of a lower bound $x_k \geq L_i$ and an upper bound $x_k \leq U_j$ (where $L_i$ and $U_j$ are affine functions of the remaining variables), generate the new inequality $L_i \leq U_j$, which eliminates $x_k$; (4) combine these new inequalities with the unchanged ones to form a system in $n - 1$ variables; (5) repeat until only one variable remains, then check whether its bounds are consistent (i.e., the greatest lower bound is at most the least upper bound). Because the pairwise combinations can roughly square the number of inequalities with each eliminated variable, the algorithm has exponential worst-case complexity.

Redundancy in a system arises when an inequality does not affect the solution set, meaning $P$ remains unchanged if that inequality is removed. To identify a redundant inequality $a_i^T x \leq b_i$ (the $i$-th row of $Ax \leq b$), solve the problem $\max \{ a_i^T x \mid A_j x \leq b_j, \, j \neq i \}$; the inequality is redundant if the optimal value is at most $b_i$. This check can be applied to each inequality, though computational efficiency improves when integrated with simplex-based solvers for the overall system.
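A compact, unoptimized Python sketch of Fourier-Motzkin elimination as outlined above is given below; it decides feasibility of Ax ≤ b using exact rational arithmetic, and the helper names are illustrative.

```python
# A sketch of Fourier-Motzkin elimination for A x <= b (illustration only).
from fractions import Fraction

def eliminate(rows, k):
    """Eliminate variable k from rows (coeffs, rhs) meaning coeffs . x <= rhs."""
    upper, lower, keep = [], [], []
    for coeffs, rhs in rows:
        c = coeffs[k]
        if c > 0:
            upper.append(([a / c for a in coeffs], rhs / c))    # x_k <= U_j(other vars)
        elif c < 0:
            lower.append(([a / -c for a in coeffs], rhs / -c))  # x_k >= L_i(other vars)
        else:
            keep.append((coeffs, rhs))
    # Pair every lower bound with every upper bound: L_i <= U_j becomes a new row.
    for lc, lr in lower:
        for uc, ur in upper:
            new = [u + l for u, l in zip(uc, lc)]
            new[k] = Fraction(0)           # the coefficient of x_k cancels to zero
            keep.append((new, ur + lr))
    return keep

def feasible(A, b):
    """Decide whether A x <= b has a real solution via Fourier-Motzkin elimination."""
    rows = [([Fraction(v) for v in row], Fraction(r)) for row, r in zip(A, b)]
    for k in range(len(A[0])):
        rows = eliminate(rows, k)
    # Only constant rows 0 <= rhs remain; all of them must hold.
    return all(rhs >= 0 for _, rhs in rows)

# x >= 0, y >= 0, x + y <= 1 is feasible; adding x + y <= -1 makes it infeasible.
A = [[-1, 0], [0, -1], [1, 1]]
print(feasible(A, [0, 0, 1]))                  # True
print(feasible(A + [[1, 1]], [0, 0, 1, -1]))   # False
```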

Applications of Linear Inequalities

Polyhedra and Convex Sets

The solution set of a finite system of linear inequalities in $\mathbb{R}^n$ defines a polyhedron, which is the intersection of the corresponding half-spaces [web:6](https://www.math.ucdavis.edu/~deloera/TEACHING/MATH168/notesconvexity.pdf). Polyhedra may be bounded, in which case they are called polytopes, or unbounded; a polytope can equivalently be expressed as the convex hull of a finite set of points [web:6]. Polyhedra admit two primary representations: the H-representation, given by a finite collection of linear inequalities $Ax \leq b$ where $A$ is an $m \times n$ matrix and $b \in \mathbb{R}^m$, and the V-representation, consisting of the convex hull of finitely many vertices along with rays and lines for unbounded directions [web:5](https://people.inf.ethz.ch/fukudak/Doc_pub/polyfaq220115c.pdf). Each half-space defined by a linear inequality is convex, and the intersection of any collection of convex sets is convex, so every polyhedron is a convex set [web:24](https://www.damtp.cam.ac.uk/user/hf323/L22-III-OPT/lecture2.pdf). For bounded polyhedra, the Minkowski-Weyl theorem establishes the precise equivalence between the H- and V-representations: a set is a polytope if and only if it is the convex hull of finitely many points, and conversely, every such polytope can be expressed as the solution set of a finite system of linear inequalities [web:12](https://people.inf.ethz.ch/fukudak/polyfaq/node14.html).

A representative example in three dimensions is the standard 3-simplex (a tetrahedron), defined by the inequalities $x \geq 0$, $y \geq 0$, $z \geq 0$, and $x + y + z \leq 1$, whose vertices are the origin and the points $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$ [web:6]. Similarly, the cube $[-1,1]^3$ in $\mathbb{R}^3$ is the intersection of the six inequalities $\pm x \leq 1$, $\pm y \leq 1$, and $\pm z \leq 1$, forming a polytope with eight vertices at the points where three pairwise perpendicular faces meet [web:6].

In the H-representation of a polyhedron, each inequality corresponds to a hyperplane whose intersection with the polyhedron forms a facet (a maximal face of codimension one); vertices arise as the intersection points of at least $n$ linearly independent inequalities that are tight, while edges connect pairs of such vertices lying on the same facet [web:35](https://people.inf.ethz.ch/fukudak/lect/pclect/notes2013/ch5-14.pdf). This structure reflects a form of duality, where the facets of the primal polytope (defined by inequalities) correspond to vertices of the polar dual, and vice versa, preserving the combinatorial incidence of edges [web:35].
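The H-representation of the standard 3-simplex mentioned above can be checked directly. In the NumPy sketch below (illustrative only), each listed vertex satisfies the system and makes at least three inequalities tight, while the centroid satisfies all of them strictly.

```python
# A quick check of the 3-simplex H-representation:
# x >= 0, y >= 0, z >= 0, x + y + z <= 1, written as A v <= b.
import numpy as np

A = np.array([[-1, 0, 0],
              [0, -1, 0],
              [0, 0, -1],
              [1, 1, 1]], dtype=float)
b = np.array([0, 0, 0, 1], dtype=float)

vertices = [np.zeros(3), np.eye(3)[0], np.eye(3)[1], np.eye(3)[2]]
for v in vertices:
    tight = np.isclose(A @ v, b)          # which inequalities hold with equality
    assert np.all(A @ v <= b + 1e-12)     # every vertex lies in the polyhedron
    assert tight.sum() >= 3               # a vertex makes at least n = 3 rows tight
print("all four vertices satisfy the system and are tight in >= 3 inequalities")

centroid = np.full(3, 0.25)               # interior point: no inequality is tight
print(np.all(A @ centroid < b))           # True
```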

Linear Programming and Optimization

Linear programming represents a fundamental application of linear inequalities to optimization problems, where the goal is to minimize or maximize a linear objective function subject to a system of linear constraints. The feasible solutions form the intersection of the half-spaces defined by the inequalities, resulting in a convex polyhedron known as the feasible region. Optimal solutions, if they exist, occur at vertices of this polyhedron.

The standard form of a linear program is to minimize $\mathbf{c}^\top \mathbf{x}$ (or equivalently, maximize $-\mathbf{c}^\top \mathbf{x}$) subject to $\mathbf{A} \mathbf{x} \leq \mathbf{b}$ and $\mathbf{x} \geq \mathbf{0}$, where $\mathbf{c} \in \mathbb{R}^n$, $\mathbf{A} \in \mathbb{R}^{m \times n}$, $\mathbf{b} \in \mathbb{R}^m$, and $\mathbf{x} \in \mathbb{R}^n$. Here the feasible region is a polyhedron; the problem is feasible if this region is nonempty, and an optimum exists when the objective is additionally bounded below for minimization (or above for maximization). This formulation captures a wide range of decision problems by modeling constraints as linear inequalities and objectives as linear combinations of decision variables.

The simplex method, introduced by George Dantzig in 1947, provides an efficient method to solve linear programs by iteratively pivoting between basic feasible solutions—corresponding to vertices of the feasible polyhedron—along edges that improve the objective value. Starting from an initial basic feasible solution, the algorithm selects a variable to enter the basis (pivoting in) based on reduced costs and updates the basis by solving a linear system, effectively traversing the boundary of the feasible region toward the optimum. Degeneracy arises when multiple bases yield the same vertex, potentially causing cycling, though anti-cycling rules like Bland's rule mitigate this issue. In practice, the simplex method exhibits excellent average-case performance despite its exponential worst-case complexity.

Central to linear programming theory is the duality theorem, which associates each primal problem $\min \{\mathbf{c}^\top \mathbf{x} \mid \mathbf{A} \mathbf{x} \geq \mathbf{b}, \mathbf{x} \geq \mathbf{0}\}$ with a dual $\max \{\mathbf{b}^\top \mathbf{y} \mid \mathbf{A}^\top \mathbf{y} \leq \mathbf{c}, \mathbf{y} \geq \mathbf{0}\}$. The weak duality theorem guarantees that any feasible primal objective value is at least the corresponding dual value, so every feasible dual solution provides a lower bound on the primal minimum. Strong duality holds under feasibility and boundedness (or by Slater's condition for strict inequalities), equating the optimal primal and dual values and providing complementary slackness conditions that characterize optimality: for optimal $\mathbf{x}^*$ and $\mathbf{y}^*$, $(\mathbf{b} - \mathbf{A} \mathbf{x}^*)^\top \mathbf{y}^* = 0$ and $(\mathbf{c} - \mathbf{A}^\top \mathbf{y}^*)^\top \mathbf{x}^* = 0$. This framework, originating from John von Neumann's work on duality around 1947, enables sensitivity analysis and economic interpretations of shadow prices.

Linear programming finds extensive applications in resource allocation, such as optimizing production mixes under limited inputs like labor and materials, and in transportation problems, where it minimizes shipping costs from multiple sources to destinations subject to constraints. For instance, the transportation problem can be modeled as $\min \sum_{i,j} c_{ij} x_{ij}$ subject to row and column sum inequalities representing supplies and demands, with $x_{ij} \geq 0$.
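In practice such problems are handed to a solver. The sketch below (assuming SciPy is installed; the production-mix numbers are invented for illustration) expresses a small maximization in the inequality form above and solves it with scipy.optimize.linprog.

```python
# A small linear program in inequality form, solved with SciPy (illustrative data).
from scipy.optimize import linprog

# maximize 3x + 5y  <=>  minimize -(3x + 5y)
c = [-3.0, -5.0]
# subject to  x + 2y <= 14,  3x - y <= 0,  x - y <= 2,  x >= 0, y >= 0
A_ub = [[1.0, 2.0],
        [3.0, -1.0],
        [1.0, -1.0]]
b_ub = [14.0, 0.0, 2.0]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x, -result.fun)   # optimal vertex and the maximized objective value
```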
Regarding computational complexity, while the simplex method is not polynomial-time in the worst case, Leonid Khachiyan's ellipsoid method from 1979 demonstrated that linear programming is solvable in polynomial time by iteratively shrinking an ellipsoid containing a feasible point using subgradient information from violated constraints. This theoretical breakthrough, running in $O(n^6 L)$ time where $L$ is the bit length of the input data, paved the way for interior-point methods that are now competitive in practice.

Generalizations and Extensions

Inequalities over Ordered Fields

An ordered field is a field $F$ equipped with a total order $\leq$ that is compatible with the field operations. Specifically, the order is defined via a subset $P \subseteq F$ of positive elements satisfying: (1) $0 \notin P$; (2) for every $a \in F$, exactly one of $a \in P$, $a = 0$, or $-a \in P$ holds (trichotomy); (3) $P$ is closed under addition; and (4) $P$ is closed under multiplication. The order is then given by $a \leq b$ if and only if $b - a \in P \cup \{0\}$. Classic examples include the rational numbers $\mathbb{Q}$ and the real numbers $\mathbb{R}$, both of which satisfy these properties.

In an ordered field $F$, a linear inequality takes the form $a_1 x_1 + \cdots + a_n x_n \leq b$, where $a_i, b \in F$ and the $x_i$ are variables ranging over $F$. The solution set consists of all points $x = (x_1, \dots, x_n) \in F^n$ satisfying the inequality, which defines a half-space in $F^n$. This generalizes the real case, where such sets are convex regions bounded by hyperplanes, but in $F^n$ the structure of these sets depends on the field's order rather than on any metric or topological completeness.

Key properties of inequalities in ordered fields follow from the order axioms. Addition preserves inequalities: if $a \leq b$ and $c \leq d$, then $a + c \leq b + d$. Multiplication by positives preserves direction: if $a \leq b$ and $0 \leq c$, then $ac \leq bc$; multiplication by negatives reverses it. These ensure that manipulations of linear inequalities, such as adding terms or scaling coefficients, remain valid without altering the solution set's order relations.

However, not all ordered fields share the completeness of $\mathbb{R}$, where every nonempty set bounded above has a least upper bound. For instance, $\mathbb{Q}$ lacks this property, as the set $\{ q \in \mathbb{Q} \mid q^2 < 2 \}$ is bounded above but has no least upper bound in $\mathbb{Q}$. This incompleteness affects solution sets of inequalities in $\mathbb{Q}^n$, potentially leading to "gaps" not present in $\mathbb{R}^n$.

Beyond $\mathbb{Q}$ and $\mathbb{R}$, ordered fields include constructions like formal series fields. For an ordered field $K$ and an ordered abelian group $G$, the field $K((G))$ of formal series $\sum_{g \in G} a_g t^g$ (with $a_g \in K$ and only finitely many negative powers) can be ordered by declaring a series positive if its leading coefficient (that of the lowest $g$ with $a_g \neq 0$) is positive in $K$. Linear inequalities in such fields yield solution half-spaces in $K((G))^n$, often non-Archimedean, illustrating extensions beyond familiar number systems.
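Because $\mathbb{Q}$ is an ordered field, the same manipulations can be carried out with exact rational arithmetic; the Python sketch below (illustrative, using fractions.Fraction as a stand-in for elements of $\mathbb{Q}$) solves a rational linear inequality and checks the order axioms used above.

```python
# Linear inequalities with rational data can be handled entirely inside Q.
from fractions import Fraction as Q

a, b, c = Q(-3, 2), Q(1, 4), Q(5)        # inequality (-3/2)x + 1/4 <= 5 over Q
bound = (c - b) / a                       # division by a negative element of Q
print("solution over Q: x >=", bound)     # direction reverses: x >= -19/6

# The order axioms used above hold in Q exactly as in R:
x, y, t = Q(1, 3), Q(1, 2), Q(-2, 7)
assert x < y and x + t < y + t            # addition preserves the order
assert x < y and x * t > y * t            # multiplication by a negative reverses it
```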

Linear Inequalities in Abstract Settings

In abstract algebraic structures beyond the real numbers, linear inequalities generalize to settings where order is defined via cones, lattices, or semirings, allowing the study of solution sets in non-traditional domains. These frameworks extend classical notions by incorporating partial orders compatible with operations or lattice meet/join structures, enabling applications in optimization, logic, and combinatorics without relying on metric properties.

Partially ordered vector spaces provide a foundational abstract setting for linear inequalities, where the order is induced by a cone. In a real vector space $V$, a nonempty cone $K \subseteq V$ defines a partial order by $x \leq_K y$ if and only if $y - x \in K$, making $V$ a partially ordered vector space with $K$ as the positive cone. Linear inequalities in this context take the form $\langle a_i, x \rangle \leq b_i$ for $i = 1, \dots, m$, where the solution set is an order ideal if it is downward closed under the partial order, meaning that if $x$ satisfies the inequalities and $y \leq_K x$, then $y$ does too. Order ideals play a central role in decomposing such spaces and analyzing positive operators, as they correspond to subspaces stable under the order. Seminal work by Kantorovich established that such spaces admit extensions of positive linear functionals, analogous to the Hahn-Banach theorem in ordered settings.

Integer linear inequalities arise in the study of Diophantine approximations and polyhedra, where solutions are restricted to lattice points in $\mathbb{Z}^n$. The set $\{x \in \mathbb{Z}^n \mid A x \leq b\}$, with $A \in \mathbb{Q}^{m \times n}$ and $b \in \mathbb{Q}^m$, is the set of lattice points of a rational polyhedron, whose vertices and facets are governed by totally unimodular matrices or Gomory-Chvátal closures for integrality. These structures capture Diophantine approximations, such as bounding $\|A x - b\|_\infty$ for integer $x$, with applications to simultaneous approximation theorems like Dirichlet's, where the existence of good approximations follows from pigeonhole principles on polyhedral tilings. Schrijver's comprehensive theory shows that rational polyhedral cones can be described by a Hilbert basis of their extreme rays, ensuring finite generation under certain boundedness conditions.

In lattice theory, linear inequalities manifest through order relations in distributive lattices and Boolean algebras, where the partial order is the lattice order defined by meets ($\wedge$) and joins ($\vee$). Distributive lattices satisfy $a \wedge (b \vee c) = (a \wedge b) \vee (a \wedge c)$, allowing inequalities like $x \leq y$ to be interpreted as $x = x \wedge y$, with solution sets forming down-sets or filters. Birkhoff's representation theorem establishes that every finite distributive lattice is isomorphic to the lattice of order ideals of a poset, where inequalities define the ideals as downward-closed subsets. In Boolean algebras, which are complemented distributive lattices, inequalities correspond to subset inclusions, with linear-like constraints arising in Dedekind's theory of free distributive lattices, bounding their size by the number of join-irreducibles.

Tropical geometry reframes linear inequalities in max-plus algebras, where addition is replaced by maximum and multiplication by addition, turning constraints into min/max operations over the tropical semiring $(\mathbb{R} \cup \{-\infty\}, \max, +)$. A tropical linear inequality $\max_j (a_{ij} + x_j) \leq b_i$ defines a tropical polytope as the solution set, whose vertices are determined by tight constraints forming a tropical basis. In this setting, systems of such inequalities model piecewise-linear functions and amoebas of algebraic varieties in the limit, with duality theorems ensuring strong optimality conditions analogous to linear programming duality. Mikhalkin's foundational work links these to enumerative invariants, where the number of solutions to tropical systems counts complex curve intersections via stable intersections.
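A tropical (max-plus) inequality can be evaluated directly by replacing sums with maxima; the toy NumPy sketch below (the matrix, right-hand side, and test points are invented) checks whether a point satisfies max_j(a_ij + x_j) ≤ b_i for every row.

```python
# A toy sketch of checking a tropical (max-plus) linear inequality with NumPy.
import numpy as np

A = np.array([[0.0, 2.0],
              [1.0, -1.0]])
b = np.array([3.0, 2.0])

def tropical_satisfies(A, b, x):
    """True if max_j (A[i, j] + x[j]) <= b[i] for every row i."""
    lhs = np.max(A + x, axis=1)   # tropical "matrix-vector product": row maxima
    return np.all(lhs <= b)

print(tropical_satisfies(A, b, np.array([1.0, 1.0])))   # True: row maxima are 3 and 2
print(tropical_satisfies(A, b, np.array([3.0, 0.0])))   # False: row 2 gives 4 > 2
```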
