APX
from Wikipedia

In computational complexity theory, the class APX (an abbreviation of "approximable") is the set of NP optimization problems that allow polynomial-time approximation algorithms with approximation ratio bounded by a constant (or constant-factor approximation algorithms for short). In simple terms, problems in this class have efficient algorithms that can find an answer within some fixed multiplicative factor of the optimal answer.

An approximation algorithm is called a c-approximation algorithm for input size n if it can be proven that the solution the algorithm finds is at most a multiplicative factor of c times worse than the optimal solution. Here, c is called the approximation ratio. Problems in APX are those with algorithms for which the approximation ratio c is a constant. The approximation ratio is conventionally stated greater than 1. In the case of minimization problems, c is the found solution's score divided by the optimum solution's score, while for maximization problems the reverse is the case. For maximization problems, where an inferior solution has a smaller score, c is sometimes stated as less than 1; in such cases, the reciprocal of c is the ratio of the score of the found solution to the score of the optimum solution.
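A worked instance of these conventions (the numbers below are illustrative, not taken from any particular problem):

```latex
% Minimization instance with optimum score 10 and returned solution of score 15;
% maximization instance with optimum 90 and returned solution of value 60.
\text{minimization: } c = \frac{\text{ALG}}{\text{OPT}} = \frac{15}{10} = 1.5,
\qquad
\text{maximization: } c = \frac{\text{OPT}}{\text{ALG}} = \frac{90}{60} = 1.5
\ \Bigl(\text{equivalently } \tfrac{\text{ALG}}{\text{OPT}} = \tfrac{2}{3} < 1\Bigr).
```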

A problem is said to have a polynomial-time approximation scheme (PTAS) if for every multiplicative factor of the optimum worse than 1 there is a polynomial-time algorithm to solve the problem to within that factor. Unless P = NP there exist problems that are in APX but without a PTAS, so the class of problems with a PTAS is strictly contained in APX. One example of a problem with a PTAS is the knapsack problem.

APX-hardness and APX-completeness


A problem is said to be APX-hard if there is a PTAS reduction from every problem in APX to that problem, and to be APX-complete if the problem is APX-hard and also in APX. As a consequence of P ≠ NP ⇒ PTAS ≠ APX, if P ≠ NP is assumed, no APX-hard problem has a PTAS. In practice, reducing one problem to another to demonstrate APX-completeness is often done using other reduction schemes, such as L-reductions, which imply PTAS reductions.

Examples


One of the simplest APX-complete problems is MAX-3SAT-3, a variation of the Boolean satisfiability problem. In this problem, we have a Boolean formula in conjunctive normal form where each variable appears at most 3 times, and we wish to know the maximum number of clauses that can be simultaneously satisfied by a single assignment of true/false values to the variables.
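As an illustration of how a constant-factor guarantee for such a problem can be realized, the following sketch (not part of the original article) applies the method of conditional expectations to a MAX-SAT instance given as DIMACS-style clause lists: it fixes variables one at a time so that the final assignment satisfies at least as many clauses as a uniformly random assignment does in expectation — at least half of all clauses, and at least 7/8 of the clauses containing exactly three distinct literals. Function names are illustrative.

```python
def expected_satisfied(clauses, assignment):
    """Expected number of satisfied clauses when the variables in
    `assignment` are fixed and all remaining variables are set
    uniformly at random."""
    total = 0.0
    for clause in clauses:
        p_all_false = 1.0
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == want:   # literal already true
                    p_all_false = 0.0
                    break
                # literal already false: contributes factor 1
            else:
                p_all_false *= 0.5            # unset literal is false w.p. 1/2
        total += 1.0 - p_all_false
    return total

def greedy_max_sat(clauses):
    """Method of conditional expectations: fix each variable to the value
    that keeps the conditional expectation as high as possible."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    assignment = {}
    for var in variables:
        assignment[var] = True
        exp_true = expected_satisfied(clauses, assignment)
        assignment[var] = False
        exp_false = expected_satisfied(clauses, assignment)
        assignment[var] = exp_true >= exp_false
    satisfied = sum(
        any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses
    )
    return assignment, satisfied

# Example: (x1 or x2 or not x3) and (not x1 or x3) and (not x2)
print(greedy_max_sat([[1, 2, -3], [-1, 3], [-2]]))
```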

Other APX-complete problems include minimum vertex cover, minimum dominating set in bounded-degree graphs, the metric traveling salesman problem, and maximum independent set in bounded-degree graphs.


PTAS


PTAS (polynomial-time approximation scheme) consists of problems that can be approximated to within any constant factor greater than 1 in time that is polynomial in the input size, though the degree of that polynomial may depend on the chosen factor. This class is a subset of APX.

APX-intermediate


Unless P = NP, there exist problems in APX that are neither in PTAS nor APX-complete. Such problems can be thought of as having a hardness between PTAS problems and APX-complete problems, and may be called APX-intermediate. The bin packing problem is thought to be APX-intermediate. Despite not having a known PTAS, the bin packing problem has several "asymptotic PTAS" algorithms, which behave like a PTAS when the optimum solution is large, so intuitively it may be easier than problems that are APX-hard.

One other example of a potentially APX-intermediate problem is min edge coloring.

f(n)-APX


One can also define a family of complexity classes f(n)-APX, where f(n)-APX contains problems with a polynomial-time approximation algorithm with an f(n) approximation ratio. One can analogously define f(n)-APX-complete classes; some such classes contain well-known optimization problems. Log-APX-completeness and poly-APX-completeness are defined in terms of AP-reductions rather than PTAS-reductions; this is because PTAS-reductions are not strong enough to preserve membership in Log-APX and Poly-APX, even though they suffice for APX.

Log-APX-complete, consisting of the hardest problems that can be approximated efficiently to within a factor logarithmic in the input size, includes min dominating set when degree is unbounded.

Poly-APX-complete, consisting of the hardest problems that can be approximated efficiently to within a factor polynomial in the input size, includes max independent set in the general case.

There also exist problems that are exp-APX-complete, where the approximation ratio is exponential in the input size. This may occur when the approximation is dependent on the value of numbers within the problem instance; these numbers may be expressed in space logarithmic in their value, hence the exponential factor.

from Grokipedia
In computational complexity theory, the class APX (an abbreviation for "approximable") consists of all NP optimization problems that admit a polynomial-time approximation algorithm guaranteeing a solution within a constant factor of the optimal value. Formally, APX is a subclass of NPO, the class of NP optimization problems whose solutions can be verified in polynomial time, where for every problem in APX there exists a constant c > 1 such that the algorithm's output ALG(I) satisfies OPT(I)/ALG(I) ≤ c for maximization problems (or ALG(I)/OPT(I) ≤ c for minimization problems) on all instances I. This class captures problems that are computationally tractable to approximate to a bounded relative error, distinguishing them from harder problems requiring more sophisticated schemes like polynomial-time approximation schemes (PTAS). Introduced by Christos Papadimitriou and Mihalis Yannakakis in 1991 in their foundational work on optimization complexity classes, APX builds on earlier concepts like MAX-SNP and provides a framework for studying the approximability of NP-hard problems. Key examples include the minimum vertex cover problem, which has a 2-approximation algorithm and is APX-complete, and the traveling salesman problem with the triangle inequality, approximable to within a factor of 3/2. APX-completeness, established via L-reductions that preserve approximation ratios, highlights problems like MAX-3SAT to which all APX problems can be reduced while maintaining constant-factor guarantees. These reductions underscore the class's role in proving inapproximability results, such as the non-existence of polynomial-time approximation schemes for APX-complete problems unless P = NP. The study of APX has influenced broader areas of algorithm design, including applications in scheduling and network design, where constant-factor approximations enable practical solutions to intractable optimization tasks. Related classes, such as PTAS (problems with (1 ± ε)-approximations for any ε > 0 in polynomial time) and Log-APX (logarithmic-factor approximations), further refine the hierarchy of approximability. Ongoing research explores the structure of APX, including self-improvability and gaps between subclasses, with implications for quantum approximation algorithms.

Definition and Basics

Formal Definition

The class APX, introduced by Papadimitriou and Yannakakis, encompasses all NP optimization problems—those in the class NPO—that can be approximated to within a constant factor in polynomial time. Specifically, APX includes optimization problems for which there exists a polynomial-time algorithm that produces a solution whose value is guaranteed to be no worse than a fixed multiple of the optimal value, regardless of input size. This constant-factor approximability distinguishes APX from classes requiring more precise or instance-dependent approximations. Formally, consider a problem $\Pi \in$ NPO. Then $\Pi \in$ APX if there exist a constant $\rho \geq 1$ and a polynomial-time algorithm $A$ such that for every instance $x$ of $\Pi$, $A(x)$ is a feasible solution satisfying the following:
  • For maximization problems: $\frac{1}{\rho} \cdot \text{opt}(\Pi, x) \leq A(x) \leq \text{opt}(\Pi, x)$ (equivalently, $A(x) \geq \text{opt}(\Pi, x) / \rho$).
  • For minimization problems: $\text{opt}(\Pi, x) \leq A(x) \leq \rho \cdot \text{opt}(\Pi, x)$.
    Here, $\text{opt}(\Pi, x)$ denotes the optimal objective value for instance $x$, and the ratio $\rho$ (the approximation ratio) remains bounded by a constant independent of $|x|$. This definition ensures that the relative error is controlled uniformly across all instances.
The distinction between decision and optimization versions is central to APX membership. The decision variant of a problem $\Pi \in$ APX—asking whether $\text{opt}(\Pi, x) \geq k$ for a given threshold $k$ (or $\leq k$ for minimization)—lies in NP, as verifying a yes-instance requires checking a certificate in polynomial time. However, APX focuses on the optimization version's approximability: even if the decision problem is NP-complete, the class captures whether a good-enough solution (within a constant factor) can be found efficiently, without solving the optimization problem exactly. This framework highlights the trade-off between computational feasibility and solution quality in NP optimization.
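As a concrete witness of membership under this definition, the classic maximal-matching heuristic for minimum vertex cover runs in polynomial time with $\rho = 2$: it takes both endpoints of each matched edge, and any cover must contain at least one endpoint of every matched edge. A minimal sketch (the graph is given as an edge list; the function name is illustrative):

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: repeatedly pick an uncovered edge and add
    both of its endpoints to the cover.  The matched edges are pairwise
    disjoint and each forces at least one endpoint into any optimal cover,
    so the returned cover has size at most 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Example: a 4-cycle plus a chord; an optimal cover is {1, 3} (size 2),
# while the heuristic may return all four vertices (size 4 = 2 * OPT).
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]))
```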

Relation to NPO and NP Optimization Problems

The class NPO encompasses all NP optimization problems, consisting of instances that can be recognized in polynomial time, feasible solutions that are verifiable in polynomial time, and an objective function that is computable in polynomial time, with the goal of finding a solution that maximizes or minimizes this function. APX forms a proper subset of NPO, comprising those optimization problems for which there exists a polynomial-time algorithm that approximates the optimal solution within a constant factor, though exact solutions may not be achievable in polynomial time unless P = NP. This class was motivated historically by the need to study the approximability of NP optimization problems beyond exact solvability; Papadimitriou and Yannakakis introduced foundational concepts in 1991 through their work on MAX NP and related subclasses, laying the groundwork for analyzing approximation properties in a structured complexity framework. APX thus distinguishes itself from P-optimizable problems—those in the class PO where optimal solutions can be computed exactly in polynomial time—highlighting the inapproximability barriers for certain NPO problems under the assumption P ≠ NP, where constant-factor approximations remain feasible while exact optimization is intractable.

Approximation Reductions

L-Reduction

The L-reduction is a polynomial-time reduction that preserves the approximability properties of optimization problems, serving as the standard tool for establishing hardness results within the APX complexity class. Introduced by Papadimitriou and Yannakakis, it transforms an instance of one optimization problem Π to an instance of another problem Σ while bounding both the scaling of optimal values and the propagation of approximation errors. Formally, an L-reduction from Π to Σ consists of two polynomial-time computable functions $f$ and $g$, and constants $\alpha, \beta > 0$ (independent of the instance) satisfying: (1) $y = f(x)$ is an instance of Σ for any instance $x$ of Π; (2) for any solution $y'$ to $y$, $g(x, y')$ is a solution to $x$; (3) $\mathrm{opt}(\Sigma, y) \leq \alpha \cdot \mathrm{opt}(\Pi, x)$; (4) $|\mathrm{opt}(\Pi, x) - c_\Pi(g(x, y'))| \leq \beta \cdot |\mathrm{opt}(\Sigma, y) - c_\Sigma(y')|$, where $c$ denotes the objective function value of a solution. The constant $\alpha$ thus bounds the scaling of optima between instances, while $\beta$ bounds the error propagation from approximate solutions. A key property of L-reductions is their preservation of constant-factor approximability: a $(1 + \delta)$-approximation for Σ yields an approximation for Π whose ratio approaches 1 as $\delta$ does (for minimization problems, $1 + \alpha\beta\delta$), so if Π cannot be approximated within some constant factor, neither can Σ within a correspondingly rescaled factor. This compositionality allows chaining multiple L-reductions while maintaining bounded approximation ratios, enabling systematic proofs of inapproximability in APX. Unlike standard Karp reductions (many-one) or Cook reductions (Turing), which suffice for exact solvability in NP but do not inherently control approximation ratios, L-reductions explicitly account for the relative scaling of optima and error terms, preventing the loss of approximability guarantees during transformations. L-reductions are frequently employed to demonstrate APX-hardness by reducing from known hard problems while preserving constant-factor barriers.
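Spelled out for the case where both Π and Σ are minimization problems, the definition above yields the following standard calculation, showing that a (1 + δ)-approximate solution for Σ maps back to a (1 + αβδ)-approximate solution for Π:

```latex
% Assume y = f(x) and y' is a (1+\delta)-approximate solution to y,
% i.e. c_\Sigma(y') \le (1+\delta)\,\mathrm{opt}(\Sigma, y).
\begin{align*}
c_\Pi(g(x, y')) - \mathrm{opt}(\Pi, x)
  &\le \beta \bigl( c_\Sigma(y') - \mathrm{opt}(\Sigma, y) \bigr)  && \text{(condition 4)} \\
  &\le \beta\, \delta\, \mathrm{opt}(\Sigma, y)                    && \text{(choice of } y'\text{)} \\
  &\le \alpha\, \beta\, \delta\, \mathrm{opt}(\Pi, x)              && \text{(condition 3)} \\
\Longrightarrow\quad
c_\Pi(g(x, y')) &\le (1 + \alpha\beta\delta)\, \mathrm{opt}(\Pi, x).
\end{align*}
```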

Other Approximation-Preserving Reductions

In addition to L-reductions, which serve as the standard tool for preserving constant-factor approximations within APX, several alternative reductions have been developed to handle specialized scenarios in approximation complexity. These include AP-reductions, E-reductions, P-reductions, and gap-preserving reductions, each tailored to preserve approximability properties under varying conditions. While L-reductions maintain a fixed constant bound on the approximation ratio, these alternatives often allow for input-dependent factors or focus on gap amplification for hardness proofs, making them useful in extensions beyond core APX membership. AP-reductions generalize the preservation of constant-factor approximations by ensuring that a solution with approximation ratio $r > 1$ for the target problem maps back to a solution with ratio at most $1 + \alpha (r - 1)$ for the original, where $\alpha$ is a fixed constant. Formally, for minimization problems $\Pi_1$ and $\Pi_2$, an AP-reduction consists of polynomial-time functions $f$ and $g$, and a constant $\alpha$, such that if $\rho_2(f(x), y) \leq r$, then $\rho_1(x, g(x, y)) \leq 1 + \alpha (r - 1)$. This structure preserves membership in APX: if $\Pi_2 \in$ APX, then $\Pi_1 \in$ APX. Introduced in foundational work on approximation hierarchies, AP-reductions are employed in particular to establish completeness results for subclasses like Log-APX, where logarithmic approximation factors are considered. Unlike L-reductions, which bound both the additive and multiplicative errors tightly, AP-reductions permit a looser multiplicative scaling, facilitating reductions between problems with differing error tolerances. E-reductions and P-reductions extend preservation to scenarios involving polynomially bounded errors or approximation schemes. An E-reduction, akin to an L-reduction but with the scaling parameter $\alpha$ replaced by a polynomial $p(|x|)$ in the input size, ensures that approximations are preserved up to a polynomial factor, implying L-reduction-like hardness when the polynomial is constant. This makes E-reductions suitable for proving completeness in classes like APX-PB (APX with polynomially bounded objective values). P-reductions, in contrast, focus on preserving polynomial-time approximation schemes (PTAS); they map instances such that a $(1+\epsilon)$-approximation for the target yields a $(1+\delta(\epsilon))$-approximation for the original, with $\delta$ depending on $\epsilon > 0$. These reductions are crucial for PTAS-completeness, as they transfer the existence of a PTAS between problems, though they do not directly apply to constant-factor APX without additional constraints. Both types imply stronger preservation under specific conditions but are less stringent than L-reductions for fixed constants. Gap-preserving reductions play a pivotal role in inapproximability results, especially those derived from the PCP theorem, by mapping instances to "gap instances" that distinguish near-optimal solutions (ratio close to 1) from far-from-optimal ones (ratio bounded away from 1). Formally, a gap-preserving reduction from an $(\alpha, \beta)$-gap problem $\Pi_1$ (where yes-instances have optimum at least a factor $\alpha$ better than no-instances at $\beta$) to $\Pi_2$ ensures the target instance is a $(c\alpha, c\beta)$-gap instance for some constant $c$, preserving the hardness gap.
Originating from PCP-based techniques, these amplify small gaps from NP-complete problems such as SAT into larger ones for optimization problems, proving that no polynomial-time algorithm can approximate within certain factors unless P = NP. For instance, the PCP theorem yields a gap-preserving reduction from SAT to MAX-3SAT, establishing inapproximability up to a constant factor. While powerful for hardness in APX and beyond, gap-preserving reductions are less common for direct APX membership proofs, as they emphasize distinguishing thresholds rather than exact ratio transfer, and are typically used in extensions like NPO or for PCP-derived lower bounds. These reductions, though versatile, have limitations in APX contexts: AP- and gap-preserving variants may introduce input-dependent factors that dilute constant-ratio guarantees, unlike L-reductions' uniform preservation, and E- and P-reductions often require polynomial overheads unsuitable for tight APX bounds. They shine in broader settings, such as PTAS analysis or inapproximability via probabilistically checkable proofs, but their application in core APX remains secondary to L-reductions.
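One detail worth making explicit, under the AP-reduction notation above: AP-reductions compose, with the constants multiplying, so chains of AP-reductions still preserve APX membership. A short sketch:

```latex
% Suppose an AP-reduction from \Pi_1 to \Pi_2 with constant \alpha_1 and
% one from \Pi_2 to \Pi_3 with constant \alpha_2.
\begin{align*}
\rho_3 \le r
  \;\Longrightarrow\; \rho_2 \le 1 + \alpha_2 (r - 1)
  \;\Longrightarrow\; \rho_1 \le 1 + \alpha_1 \bigl( (1 + \alpha_2 (r - 1)) - 1 \bigr)
  = 1 + \alpha_1 \alpha_2 (r - 1),
\end{align*}
% so the composed mapping is again an AP-reduction, with constant \alpha_1 \alpha_2.
```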

Hardness and Completeness

APX-Hardness

A problem $\Pi$ in the class of NP optimization problems is defined as APX-hard if every problem in APX L-reduces to $\Pi$. This means that for any problem $\sigma$ in APX, there exists an L-reduction from $\sigma$ to $\Pi$ that preserves optimal solution values and approximation guarantees up to constant factors. The notion of APX-hardness was introduced by Papadimitriou and Yannakakis as part of their framework for classifying optimization problems based on approximability. To prove that a given problem $\Pi$ is APX-hard, researchers construct chains of L-reductions starting from known hard problems in APX, often tracing back to complete problems in the subclass MAX-SNP. The core proof technique relies on showing that a $\rho$-approximation algorithm for $\Pi$ (for some constant $\rho > 1$) would imply the existence of a corresponding constant-factor approximation algorithm for every problem in APX via these reductions. APX-hardness carries significant implications for approximation algorithms: if $\Pi$ is APX-hard, then it cannot admit a polynomial-time approximation scheme (PTAS) unless APX $\subseteq$ PTAS, a collapse that would imply P = NP. This hardness notion thus establishes fundamental inapproximability thresholds, indicating that constant-factor approximations are the best achievable in polynomial time for such problems under standard complexity assumptions. The study of APX-hardness evolved from earlier work on the MAX-SNP class, a historical precursor introduced by Papadimitriou and Yannakakis, where hardness results for constant-factor approximations were first systematically explored through similar reduction techniques.

APX-Completeness

A problem is APX-complete if it belongs to the complexity class APX and is APX-hard, meaning that every problem in APX can be reduced to it via an L-reduction, which preserves constant-factor approximability. This definition mirrors the notion of NP-completeness, where hardness is established through polynomial-time reductions that maintain the approximability properties of the source problem. APX-complete problems cannot be self-improvable—meaning they do not admit algorithms that iteratively refine approximations to arbitrarily better ratios—unless P = NP, which underscores the inherent hardness of these problems. The existence of APX-complete problems reveals a structured "difficulty spectrum" within the realm of approximation algorithms, analogous to the role of NP-complete problems among decision problems, by identifying the maximally hard instances under constant-factor approximations. This completeness notion implies that developing a polynomial-time approximation scheme (PTAS) for any APX-complete problem would yield a PTAS for all problems in APX. An open question concerns whether APX admits complete problems under stronger, more restrictive reductions such as PTAS-reductions, which preserve the existence of approximation schemes; current proofs of APX-completeness rely primarily on L-reductions, which are less stringent.

Examples of APX-Complete Problems

One example of an APX-complete problem is the Minimum Vertex Cover problem, which seeks the smallest set of vertices in an undirected graph that touches all edges. This minimization problem is APX-complete, as established via an L-reduction from 3-SAT that preserves approximation factors within constants. A simple 2-approximation algorithm exists based on maximal matching, but stronger inapproximability results show it cannot be approximated within a factor of 1.3606 unless P = NP, derived from PCP techniques; ruling out factors approaching 2 is known only under stronger conjectures such as the Unique Games Conjecture. A brief sketch of the L-reduction from 3-SAT to Vertex Cover involves constructing a graph where variable gadgets consist of two vertices connected by an edge (representing the true and false literals), and clause gadgets are triangles with additional edges linking to the appropriate literal vertices in the variable gadgets; the reduction sets the target cover to the number of variables plus twice the number of clauses, ensuring that a satisfying assignment corresponds to a cover of that size, while unsatisfiable instances require larger covers, with the cost difference bounded by a constant factor so that approximability is preserved.

Another prominent APX-complete problem is Maximum 3-Satisfiability (MAX-3SAT), a maximization variant of 3-SAT that aims to satisfy the maximum number of clauses in a 3-CNF formula. It is APX-complete under L-reductions from related problems. A 7/8-approximation is achievable: for clauses with exactly three distinct literals, a uniformly random assignment satisfies 7/8 of them in expectation, and this can be derandomized by the method of conditional expectations. Håstad's results using long-code PCPs prove MAX-3SAT is inapproximable within 7/8 + ε for any ε > 0 unless P = NP.

The Metric Traveling Salesman Problem (Metric TSP), which requires finding a minimum-length Hamiltonian cycle in a complete weighted graph with distances satisfying the triangle inequality, is also APX-complete. Christofides' algorithm provides a 3/2-approximation by combining a minimum spanning tree with a minimum-weight perfect matching on the odd-degree vertices, guaranteeing a tour at most 1.5 times the optimum. Inapproximability within factors better than 123/122 is known unless P = NP, though basic APX-hardness stems from early L-reductions.
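For concreteness, the following sketch of Christofides' algorithm (assuming the networkx library; not an optimized implementation) follows the outline above: minimum spanning tree, minimum-weight perfect matching on the odd-degree vertices (obtained here by running maximum-weight matching on negated weights), Euler tour, and shortcutting.

```python
import networkx as nx

def christofides_tour(G):
    """Sketch of the Christofides 3/2-approximation for metric TSP.
    G must be a complete undirected graph whose 'weight' attributes
    satisfy the triangle inequality."""
    # 1. Minimum spanning tree.
    T = nx.minimum_spanning_tree(G, weight="weight")

    # 2. Minimum-weight perfect matching on the odd-degree vertices of T.
    #    Negating the weights turns a maximum-cardinality, maximum-weight
    #    matching into a minimum-weight perfect matching on this subgraph.
    odd = [v for v, d in T.degree() if d % 2 == 1]
    H = nx.Graph()
    for i, u in enumerate(odd):
        for v in odd[i + 1:]:
            H.add_edge(u, v, weight=-G[u][v]["weight"])
    M = nx.max_weight_matching(H, maxcardinality=True)

    # 3. Tree plus matching has all even degrees and is connected, so it is
    #    Eulerian; take an Euler tour and shortcut repeated vertices.
    multi = nx.MultiGraph(T)
    multi.add_edges_from(M)
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(multi):
        if u not in seen:
            seen.add(u)
            tour.append(u)
    tour.append(tour[0])  # close the cycle
    return tour
```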

PTAS

The polynomial-time approximation scheme (PTAS) is a complexity class within the broader framework of NPO (NP optimization problems), consisting of those optimization problems for which, given any $\epsilon > 0$, there exists a polynomial-time algorithm that approximates the optimal solution within a factor of $1 + \epsilon$ for minimization problems (or $1 - \epsilon$ for maximization problems). The running time of such algorithms is polynomial in the input size $n$, but may depend exponentially on $1/\epsilon$, allowing the approximation quality to improve arbitrarily at the cost of increased computation. A stronger variant is the fully polynomial-time approximation scheme (FPTAS), where the running time is polynomial in both $n$ and $1/\epsilon$. This distinction highlights that while PTAS algorithms can achieve near-optimal solutions for any desired precision, their efficiency in handling small $\epsilon$ may vary significantly across problems, whereas FPTAS provides uniform polynomial dependence on the precision parameter. PTAS is a subclass of APX, as setting $\epsilon$ to a fixed constant yields a constant-factor approximation, which suffices for APX membership. However, unless P = NP, APX-complete problems do not admit a PTAS, implying that APX \ PTAS is nonempty under this assumption. Problems in PTAS but not in P are considered easier to approximate than APX-complete ones, as the former allow arbitrarily good polynomial-time approximations while the latter resist improvements beyond certain constant thresholds unless P = NP. For instance, the 0-1 knapsack problem admits an FPTAS, enabling solutions within $(1 - \epsilon)$ of optimal in time polynomial in $n$ and $1/\epsilon$.
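A minimal sketch of the profit-scaling FPTAS for 0-1 knapsack mentioned above (the dictionary-based dynamic program and the function name are illustrative choices, not an optimized implementation): values are rounded down to multiples of a scaling factor of roughly ε·v_max/n, so the table has polynomially many profit levels and the returned solution loses at most an ε fraction of the optimum.

```python
def knapsack_fptas(values, weights, capacity, eps):
    """FPTAS sketch for 0-1 knapsack: scale profits down, then do dynamic
    programming over scaled profits.  Assumes each item fits on its own
    (infeasible items can be dropped beforehand); rounding loses at most
    n * k <= eps * v_max <= eps * OPT in total value."""
    n = len(values)
    vmax = max(values)
    k = max(1, int(eps * vmax / n))          # scaling factor
    scaled = [v // k for v in values]

    # dp maps a scaled profit to (minimum weight achieving it, items used).
    dp = {0: (0, ())}
    for i in range(n):
        new_dp = dict(dp)
        for p, (w, items) in dp.items():
            cand_p, cand_w = p + scaled[i], w + weights[i]
            if cand_w <= capacity and (cand_p not in new_dp or cand_w < new_dp[cand_p][0]):
                new_dp[cand_p] = (cand_w, items + (i,))
        dp = new_dp

    best_p = max(dp)                          # all stored entries are feasible
    return dp[best_p][1]

# Example usage (illustrative numbers): returns the index pair (1, 2) here.
print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.25))
```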

APX-Intermediate Problems

APX-intermediate problems are those NP optimization problems that belong to the complexity class APX—meaning they admit a polynomial-time constant-factor approximation algorithm—but are neither contained in PTAS (the class of problems admitting a polynomial-time approximation scheme) nor APX-hard (meaning no APX-complete problem reduces to them via an approximation-preserving reduction, unless APX = PTAS). This notion parallels NP-intermediate problems in decision complexity, capturing problems that sit strictly between the "easier" approximable cases in PTAS and the hardest cases in APX. The existence of APX-intermediate problems follows from a diagonalization argument analogous to Ladner's theorem for NP, assuming APX ≠ PTAS (which is implied by P ≠ NP): given a problem in APX \ PTAS, one can diagonalize against enumerations of approximation schemes and approximation-preserving reductions to produce infinitely many distinct APX-intermediate problems. While such artificial constructions guarantee the non-emptiness of the class under standard assumptions, natural examples were scarce until explicit constructions were provided.

A prominent natural example of an APX-intermediate problem is Minimum Bin Packing, where the goal is to pack a set of items of given sizes into the minimum number of bins of fixed capacity. This problem admits a polynomial-time approximation algorithm with ratio 3/2 + ε for any ε > 0 but is not in PTAS unless P = NP, as achieving a (1 + ε)-approximation for sufficiently small ε would solve the NP-hard partition problem exactly. Moreover, Minimum Bin Packing is not APX-complete unless the polynomial hierarchy collapses, establishing its intermediate status. Regarding graph coloring generalizations, the problem of approximating the chromatic number for graphs that are k-colorable (with fixed k ≥ 3) allows polynomial-time algorithms using O(n^{1 - ε}) colors for some ε > 0, but no constant-factor approximation algorithm is known; since the best known approximation ratios grow with n, this variant is not known to lie in APX and thus does not qualify as intermediate within it.

The significance of APX-intermediate problems lies in demonstrating the intricate hierarchy within APX, beyond a simple dichotomy between PTAS and APX-complete problems. This reveals that approximation complexity can exhibit fine-grained structure similar to decision classes, motivating further study of approximation-preserving reductions and approximability thresholds. Few natural APX-intermediate problems are known, underscoring an incompleteness in the classification of optimization problems and spurring ongoing research into identifying more examples and refining diagonalization techniques for approximation classes, as discussed in recent surveys and preprints.
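Bin packing's membership in APX is already visible from the simplest heuristic: First Fit never leaves two bins at most half full, so it uses at most 2·OPT bins. The sketch below illustrates that weaker constant-factor guarantee; it is not the 3/2-type algorithm or the asymptotic PTAS discussed above, and the function name is illustrative.

```python
def first_fit(sizes, bin_capacity=1.0):
    """First Fit heuristic for bin packing: place each item into the first
    open bin with enough room, opening a new bin only when none has.
    At most one bin can end up at most half full, so the number of bins
    used is at most 2 * OPT."""
    remaining = []     # remaining capacity of each open bin
    packing = []       # items assigned to each bin
    for size in sizes:
        for i, free in enumerate(remaining):
            if size <= free + 1e-12:
                remaining[i] -= size
                packing[i].append(size)
                break
        else:
            remaining.append(bin_capacity - size)
            packing.append([size])
    return packing

# Example: optimal packing uses 3 bins (0.4 + 0.6 each);
# First Fit uses 4 bins on this ordering.
print(first_fit([0.4, 0.4, 0.4, 0.6, 0.6, 0.6]))
```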

f(n)-APX and Poly-APX

The class f(n)-APX generalizes the complexity class APX to encompass NPO problems that admit a polynomial-time approximation algorithm with performance ratio bounded by some function f(n) that may grow without bound as the input size n grows. This framework captures optimization problems for which constant-factor approximations are infeasible, but for which algorithms achieving ratios bounded by a slowly growing function of the input size exist. Poly-APX forms a specific subclass of f(n)-APX, restricted to cases where f(n) is a polynomial function, such as O(n^k) for some constant k ≥ 1. Under the assumption that P ≠ NP, the inclusion chain PTAS ⊂ APX ⊂ Poly-APX ⊂ NPO holds strictly, positioning Poly-APX as an intermediate layer between constant-ratio approximability and the full scope of NP optimization problems. A representative example is the graph coloring problem, which lies outside APX (unless P = NP) but belongs to Poly-APX; it admits a polynomial-time O(n / log n)-approximation algorithm via a modified greedy approach that iteratively colors subsets of vertices. Formally, an NPO problem Π belongs to f(n)-APX if there exists a polynomial-time algorithm A such that for every instance x of Π, the ratio between A(x) and OPT(x) is at most f(n) (for minimization problems, A(x)/OPT(x) ≤ f(n)), where OPT(x) denotes the optimal value and n = |x| is the input length. Approximation-preserving reductions for f(n)-APX and Poly-APX extend those for APX by ensuring the transformation maintains the bounding function f(n), such as through E-reductions that relate the performance ratios of the source and target problems. These classes facilitate the study of broader inapproximability thresholds; for instance, graph coloring is NP-hard to approximate within a factor of n^{1-ε} for any ε > 0 (unless NP ⊆ ZPP), yet its membership in Poly-APX highlights the gap between polynomial-factor hardness and achievability.
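For illustration only (this is not the O(n / log n)-approximation cited above): the simplest greedy colorer below runs in polynomial time and uses at most Δ(G) + 1 ≤ n colors, so its ratio to the chromatic number is trivially bounded by a polynomial in n, which is the kind of guarantee that places a problem in Poly-APX. The adjacency-dict representation and function name are illustrative.

```python
def greedy_coloring(adj):
    """Color vertices in a fixed order, giving each the smallest color not
    used by an already-colored neighbor.  Uses at most max_degree + 1 <= n
    colors, so the ratio to the chromatic number is at most n."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Example: a 5-cycle (chromatic number 3); greedy uses 3 colors here.
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(greedy_coloring(cycle5))
```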