Polynomial-time approximation scheme
In computer science (particularly algorithmics), a polynomial-time approximation scheme (PTAS) is a type of approximation algorithm for optimization problems (most often, NP-hard optimization problems).
A PTAS is an algorithm which takes an instance of an optimization problem and a parameter ε > 0 and produces a solution that is within a factor 1 + ε of being optimal (or 1 – ε for maximization problems). For example, for the Euclidean traveling salesman problem, a PTAS would produce a tour with length at most (1 + ε)L, with L being the length of the shortest tour.[1]
The running time of a PTAS is required to be polynomial in the problem size for every fixed ε, but can be different for different ε. Thus an algorithm running in time O(n^(1/ε)) or even O(n^(exp(1/ε))) counts as a PTAS.
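To make the definition concrete, here is a minimal sketch of a PTAS in this spirit: the classic enumerate-and-greedy scheme for the 0-1 knapsack problem (due to Sahni), which tries every item subset of size at most k = ⌈1/ε⌉ and completes each greedily by profit density. The function name and structure are illustrative, not taken from any library; the point is the n^(k+1) runtime shape, polynomial in n for every fixed ε.

```python
import math
from itertools import combinations

def knapsack_ptas(profits, weights, capacity, eps):
    """Sahni-style PTAS sketch: try every item subset of size at most
    k = ceil(1/eps), complete each greedily by profit density, and keep
    the best feasible solution.  Runtime is O(n^(k+1)) -- polynomial in
    n for every fixed eps, but the exponent grows as eps shrinks."""
    n = len(profits)
    k = min(n, math.ceil(1 / eps))
    # greedy completion order: decreasing profit density
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    best_profit, best_set = 0, set()
    for size in range(k + 1):
        for base in combinations(range(n), size):
            w = sum(weights[i] for i in base)
            if w > capacity:
                continue
            p = sum(profits[i] for i in base)
            chosen = set(base)
            for i in order:  # fill remaining capacity greedily
                if i not in chosen and w + weights[i] <= capacity:
                    chosen.add(i)
                    w += weights[i]
                    p += profits[i]
            if p > best_profit:
                best_profit, best_set = p, chosen
    return best_profit, best_set
```

For ε = 1/2 the scheme enumerates subsets of size at most 2, so the exponent in the runtime is fixed once ε is fixed, exactly as the definition requires.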
Variants
Deterministic
A practical problem with PTAS algorithms is that the exponent of the polynomial could increase dramatically as ε shrinks, for example if the runtime is O(n^((1/ε)!)). One way of addressing this is to define the efficient polynomial-time approximation scheme or EPTAS, in which the running time is required to be O(n^c) for a constant c independent of ε. This ensures that an increase in problem size has the same relative effect on runtime regardless of what ε is being used; however, the constant under the big-O can still depend on ε arbitrarily. In other words, an EPTAS runs in FPT time where the parameter is ε.
Even more restrictive, and useful in practice, is the fully polynomial-time approximation scheme or FPTAS, which requires the algorithm to be polynomial in both the problem size n and 1/ε.
Unless P = NP, it holds that FPTAS ⊊ PTAS ⊊ APX.[2] Consequently, under this assumption, APX-hard problems do not have PTASs.
Another deterministic variant of the PTAS is the quasi-polynomial-time approximation scheme or QPTAS. A QPTAS has time complexity npolylog(n) for each fixed ε > 0. Furthermore, a PTAS can run in FPT time for some parameterization of the problem, which leads to a parameterized approximation scheme.
Randomized
Some problems which do not have a PTAS may admit a randomized algorithm with similar properties, a polynomial-time randomized approximation scheme or PRAS. A PRAS is an algorithm which takes an instance of an optimization or counting problem and a parameter ε > 0 and, in polynomial time, produces a solution that has a high probability of being within a factor ε of optimal. Conventionally, "high probability" means probability greater than 3/4, though as with most probabilistic complexity classes the definition is robust to variations in this exact value (the bare minimum requirement is generally greater than 1/2). Like a PTAS, a PRAS must have running time polynomial in n, but not necessarily in ε; with further restrictions on the running time in ε, one can define an efficient polynomial-time randomized approximation scheme or EPRAS similar to the EPTAS, and a fully polynomial-time randomized approximation scheme or FPRAS similar to the FPTAS.[3]
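The robustness of the success threshold can be illustrated with the standard median-amplification trick: any randomized estimator that is correct with probability strictly above 1/2 can be boosted past 3/4 (or any constant) by taking the median of independent runs, since by a Chernoff bound the probability that a majority of runs fail drops exponentially in the number of runs. The sketch below uses a toy estimator with illustrative names, not any particular PRAS.

```python
import random
import statistics

def amplify(estimator, trials):
    """Median amplification: run an estimator whose success probability
    exceeds 1/2 independently `trials` times and return the median.
    By a Chernoff bound, the probability that a majority of runs fail
    (and hence that the median is wrong) drops exponentially in trials."""
    return statistics.median(estimator() for _ in range(trials))

# Toy estimator: returns the true answer 42 with probability 0.6,
# otherwise an arbitrary wrong value.
def noisy_estimate():
    return 42 if random.random() < 0.6 else random.choice([0, 1000])

random.seed(0)
print(amplify(noisy_estimate, 101))
```

With 101 trials the single-run success probability of 0.6 is already amplified to well above 3/4, which is why the exact constant in the PRAS definition is immaterial.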
As a complexity class
The term PTAS may also be used to refer to the class of optimization problems that have a PTAS. PTAS is a subset of APX, and unless P = NP, it is a strict subset.[2]
Membership in PTAS can be shown using a PTAS reduction, L-reduction, or P-reduction, all of which preserve PTAS membership, and these may also be used to demonstrate PTAS-completeness. On the other hand, showing non-membership in PTAS (namely, the nonexistence of a PTAS), may be done by showing that the problem is APX-hard, after which the existence of a PTAS would show P = NP. APX-hardness is commonly shown via PTAS reduction or AP-reduction.
See also
- Parameterized approximation scheme, an approximation scheme that runs in FPT time
References
[edit]- ^ Sanjeev Arora, Polynomial-time Approximation Schemes for Euclidean TSP and other Geometric Problems, Journal of the ACM 45(5) 753–782, 1998.
- ^ a b Jansen, Thomas (1998), "Introduction to the Theory of Complexity and Approximation Algorithms", in Mayr, Ernst W.; Prömel, Hans Jürgen; Steger, Angelika (eds.), Lectures on Proof Verification and Approximation Algorithms, Lecture Notes in Computer Science, vol. 1367, Springer, pp. 5–28, doi:10.1007/BFb0053011, ISBN 9783540642015. See discussion following Definition 1.30 on p. 20.
- ^ Vazirani, Vijay V. (2003). Approximation Algorithms. Berlin: Springer. pp. 294–295. ISBN 3-540-65367-8.
External links
- Complexity Zoo: PTAS, EPTAS.
- Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski, and Gerhard Woeginger, A compendium of NP optimization problems – list which NP optimization problems have PTAS.
Polynomial-time approximation scheme
Background Concepts
Approximation Algorithms
Approximation algorithms provide a practical approach to solving optimization problems where finding an exact optimal solution is computationally intractable, typically because the problem is NP-hard. These algorithms run in polynomial time and produce solutions that are provably close to the optimum, offering a balance between efficiency and solution quality.[6] They emerged as a response to the theoretical limitations highlighted by the P vs. NP question, enabling the tackling of real-world applications in areas like scheduling, network design, and resource allocation.[7]

The quality of an approximation algorithm is measured by its approximation ratio ρ, where ρ ≥ 1, which bounds how far the algorithm's solution deviates from the optimal one. For a maximization problem on input instance I, the algorithm A satisfies ALG(I) ≥ OPT(I) / ρ, ensuring the approximate solution is at least a 1/ρ fraction of the optimum; for minimization, ALG(I) ≤ ρ · OPT(I).[6] This ratio is typically multiplicative (relative), comparing the solutions proportionally, though additive (absolute) ratios exist, where |ALG(I) - OPT(I)| ≤ k for some constant k, independent of the optimum's scale.[8] Relative ratios are preferred for problems where solution values can vary widely in magnitude, as they provide scale-invariant guarantees.[6]

The development of approximation algorithms was spurred by Richard Karp's 1972 demonstration that numerous combinatorial optimization problems, such as the traveling salesman problem and set cover, are NP-complete, implying no efficient exact algorithms exist unless P = NP.[9] Early examples include greedy heuristics, like the one for the set cover problem introduced by David S. Johnson in 1973, which selects sets to cover uncovered elements while minimizing incremental cost, achieving an O(log n) approximation ratio.
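The greedy set-cover heuristic just described can be sketched in a few lines (unweighted variant, with an illustrative function name):

```python
def greedy_set_cover(universe, sets):
    """Johnson-style greedy heuristic for unweighted set cover:
    repeatedly pick the set covering the most still-uncovered elements.
    The number of sets chosen is at most H_n <= ln n + 1 times optimal."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # pick the set with maximum marginal coverage
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("instance is infeasible: some element is uncoverable")
        cover.append(best)
        uncovered -= best
    return cover
```

Each iteration is polynomial in the instance size, and the H_n guarantee is exactly the benchmark that Chvátal's 1979 analysis showed to be tight for this algorithm.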
Vasek Chvátal refined this analysis in 1979, proving the greedy algorithm's ratio matches the harmonic number H_n ≈ ln n + 1, establishing a foundational benchmark for approximation guarantees.[10] In the 1970s and 1980s, the field matured as researchers sought systematic ways to cope with NP-hardness, with Michael Garey and David S. Johnson's 1979 book Computers and Intractability providing a comprehensive survey of approximation techniques and performance bounds for over 100 NP-complete problems.[11] This period saw the formalization of approximation classes and the recognition of approximation as a core tool in algorithm design, shifting focus from exact solvability to quantifiable near-optimality.[7]
NP-Hard Optimization Problems
Optimization problems in computational complexity theory seek to find the best solution among a set of feasible solutions according to a specified objective function, such as minimizing cost or maximizing value.[12] Unlike decision problems, which require a yes/no answer to whether a solution exists satisfying a given criterion (e.g., for the traveling salesman problem, does a tour of length at most k exist?), optimization versions demand constructing the actual optimal solution, like the shortest possible tour. This distinction highlights that solving the decision version does not immediately yield the optimization solution, though polynomial-time solvability of the decision problem often facilitates binary search approaches for the optimization counterpart.[11]

NP-hard optimization problems are those whose optimization versions are at least as difficult as the hardest problems in NP, meaning no polynomial-time algorithm exists for finding an optimal solution unless P = NP. The foundation of NP-hardness traces to Cook's theorem from 1971, which established that the Boolean satisfiability problem (SAT) is NP-complete by showing any problem in NP reduces to SAT in polynomial time via Turing machine simulations. Subsequent reductions from SAT or other NP-complete problems extend this hardness to optimization contexts; for instance, Karp's 1972 work demonstrated polynomial-time reductions among 21 combinatorial problems, proving many optimization problems NP-hard by linking their decision variants to known NP-complete cases.

Canonical examples of NP-hard optimization problems include the vertex cover problem, where the goal is to find the smallest set of vertices that touches every edge in a graph; the 0-1 knapsack problem, which maximizes value packed into a knapsack without exceeding weight capacity; and the traveling salesman problem (TSP), seeking the shortest Hamiltonian cycle in a graph.
Vertex cover's NP-hardness follows from a reduction from 3-SAT in Karp's framework, while knapsack's decision version reduces from partition, and TSP from Hamiltonian cycle, all preserving computational difficulty.[11] The implications of NP-hardness are profound: absent a collapse of P to NP, no exact polynomial-time algorithms exist for these problems, rendering exhaustive search infeasible for large instances and emphasizing the search (optimization) version's greater challenge over mere verification in decision forms.

To analyze approximation feasibility across such problems, resource-bounded reductions like L-reductions are employed; introduced by Papadimitriou and Yannakakis in 1991, these transformations preserve approximation ratios by ensuring the optimal values and solution distances scale linearly between problems.[13]
Core Definitions
Polynomial-Time Approximation Scheme (PTAS)
A polynomial-time approximation scheme (PTAS) for a minimization (or maximization) optimization problem Π is a family of algorithms {A_ε}_{ε>0} such that, for every fixed ε > 0, the algorithm A_ε takes an instance of Π of size n as input and, in time polynomial in n, outputs a solution whose cost is at most (1 + ε) times (or at least (1 - ε) times) the optimal cost.[13] This guarantee holds for any desired precision ε > 0, allowing arbitrarily close approximations to the optimum, though the running time may degrade as ε decreases.[13] The running time of A_ε is typically expressed as T(n, 1/ε) = n^{f(1/ε)}, where f is some arbitrary computable function independent of n.[13] This form highlights the polynomial dependence on the input size n, but allows superpolynomial (even exponential) dependence on the precision parameter 1/ε, distinguishing PTAS from stronger approximation guarantees. Unlike fixed-ratio approximation algorithms, which provide a constant factor (e.g., 2-approximation) independent of ε, a PTAS achieves any approximation ratio 1 + ε (or 1 - ε) in polynomial time, making it a flexible scheme for problems where exact solutions are intractable.[13]

Early constructions of PTAS relied on dynamic programming techniques tailored to geometric problems, as pioneered by Papadimitriou and Yannakakis in the 1980s.[13] For instance, Hochbaum and Maass developed one of the first such schemes in 1985 for nonconvex covering problems in the plane, using a shifting strategy to partition the instance and apply exact dynamic programming on subproblems.

The existence of a PTAS for a problem implies it cannot be APX-hard unless P = NP. This follows from the definition of APX-hardness, established via L-reductions that preserve approximation ratios: if an APX-hard problem had a PTAS, one could chain such reductions to solve an NP-hard problem exactly in polynomial time, collapsing P and NP.[13]
Fully Polynomial-Time Approximation Scheme (FPTAS)
A fully polynomial-time approximation scheme (FPTAS) refines the concept of a polynomial-time approximation scheme (PTAS) by providing stronger runtime guarantees, ensuring that the approximation algorithm runs in time polynomial in both the input size n and the inverse 1/ε of the approximation parameter. Specifically, for a maximization problem, an FPTAS is a family of algorithms that, for any ε > 0, computes a solution with value at least (1 - ε) times the optimal value in running time polynomial in n and 1/ε. This fixed-degree polynomial dependence on 1/ε distinguishes FPTAS from general PTAS, where the runtime may be exponential in 1/ε.

The runtime of an FPTAS can be expressed as O(n^c · g(1/ε)) for some constant c and function g that is polynomial in 1/ε, ensuring scalability even for small ε. In contrast, a PTAS allows runtimes like n^{O(1/ε)}, which become impractical for high precision. This polynomial scaling in 1/ε makes FPTAS particularly suitable for problems where near-optimal solutions are needed with controlled computational cost.

The existence of an FPTAS for an optimization problem implies that the decision version of the problem admits a pseudo-polynomial time algorithm, which runs in time polynomial in n and the numeric values of the input (but exponential in the input length if values are large). This characterization holds for weakly NP-hard problems, as opposed to strongly NP-hard ones, which lack pseudo-polynomial solvability and thus cannot admit an FPTAS unless P = NP. Seminal work by Ibarra and Kim (1975) introduced the first FPTAS for the 0-1 knapsack problem, demonstrating these techniques on a classic weakly NP-hard optimization task.[13]

Construction of an FPTAS often relies on scaling techniques combined with dynamic programming to handle the pseudo-polynomial nature of the problem.
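A minimal sketch of such a scaling-plus-DP scheme for the 0-1 knapsack problem follows; it is an illustrative implementation of the textbook construction (profits rounded down via K = ε · max profit / n, then an exact min-weight-per-scaled-profit dynamic program), not code from the cited sources.

```python
import math

def knapsack_fptas(profits, weights, capacity, eps):
    """Scaling-and-rounding FPTAS sketch for 0-1 knapsack.

    Profits are rounded down to multiples of K = eps * max(profits) / n,
    then an exact dynamic program finds, for every achievable scaled
    profit q, the minimum weight needed.  Total time is O(n^3 / eps).
    The returned value (best_q * K) is a lower bound on the chosen
    set's true profit and is at least (1 - eps) times the optimum.
    """
    n = len(profits)
    K = eps * max(profits) / n
    scaled = [math.floor(p / K) for p in profits]
    total = sum(scaled)
    INF = float("inf")
    # min_weight[q] = least weight achieving scaled profit exactly q
    min_weight = [0] + [INF] * total
    for s, w in zip(scaled, weights):
        for q in range(total, s - 1, -1):  # downward: each item used once
            if min_weight[q - s] + w < min_weight[q]:
                min_weight[q] = min_weight[q - s] + w
    best_q = max(q for q in range(total + 1) if min_weight[q] <= capacity)
    return best_q * K
```

The table has O(n²/ε) profit entries and each of the n items updates it once, giving the O(n³/ε) bound; the rounding loses at most K per item, i.e. at most ε · max(profits) ≤ ε · OPT in total.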
For the 0-1 knapsack problem, where items have profits p_i and weights w_i, and the goal is to maximize total profit without exceeding capacity W, an upper bound P on the optimal profit is first computed (e.g., via a greedy heuristic). Profits are then scaled and rounded down to p_i' = ⌊p_i / K⌋ with K = εP/n, so each p_i' is at most n/ε. A dynamic programming table computes, for each possible scaled profit sum q, the minimum weight needed to achieve exactly q. Updating the table for each of the n items takes O(n²/ε) time per item, resulting in overall time complexity O(n³/ε). This rounding introduces at most εP error in the approximation while preserving polynomial time.[14]
Properties and Classifications
As a Complexity Class
The class PTAS comprises the set of all NP optimization problems that admit a polynomial-time approximation scheme, meaning for every fixed ε > 0, there exists a polynomial-time algorithm achieving an approximation ratio of 1 + ε.[15] Similarly, the class FPTAS consists of those NP optimization problems admitting a fully polynomial-time approximation scheme, where the running time is polynomial in both the input size and 1/ε.[15]

Formally, these classes can be characterized using gap languages over promise problems. For a maximization problem Π and a threshold k, define the ε-gap promise problem with yes-instances {(I, k) : OPT(I) ≥ k} and no-instances {(I, k) : OPT(I) < k/(1 + ε)}, where the input is promised to be in one of these sets, and the task is to distinguish between them. Then Π admits a PTAS if, for every ε > 0, the corresponding gap problem is solvable in polynomial time; the class PTAS is the union over all such problems Π. Analogously, Π admits an FPTAS if the running time for solving the gap problem is polynomial in |I| and 1/ε.[15]

The class PTAS is not closed under standard polynomial-time many-one reductions, as such reductions may not preserve the existence of a PTAS, but it is closed under approximation-preserving (AP) reductions, which map solutions while controlling the approximation factor via a linear function of ε. In contrast, FPTAS is closed under polynomial-time parsimonious reductions, which preserve the optimal solution value exactly.[16][6]

PTAS properly contains the class of P-optimizable problems (those solvable exactly in polynomial time) but is properly contained in APX (the class of problems admitting constant-factor approximations in polynomial time), assuming P ≠ NP. The class lacks complete problems under standard reductions, as membership depends sensitively on problem structure rather than on a universal notion of hardness.
The PCP theorem by Arora, Lund, Motwani, Sudan, and Szegedy (1998) establishes that certain problems like MAX-3SAT are APX-complete under AP-reductions and thus not in PTAS unless P = NP, highlighting the boundaries of these classes.[15] Relativized separations exist where PTAS ≠ APX, even relative to oracles satisfying P = NP in some cases, underscoring that non-closure properties persist beyond unrelativized assumptions.[16]
Relationships to Other Approximation Classes
The polynomial-time approximation scheme (PTAS) occupies a central position in the hierarchy of approximation complexity classes for NP optimization problems, as formalized by Papadimitriou and Yannakakis.[13] For maximization problems, this hierarchy establishes the chain of inclusions P ⊆ FPTAS ⊆ PTAS ⊆ APX ⊆ NPO, where P denotes the problems solvable exactly in polynomial time, FPTAS the class of problems admitting fully polynomial-time approximation schemes, APX the class of problems approximable within a constant factor greater than 1 in polynomial time, and NPO the class of all NP optimization problems; assuming P ≠ NP, the later inclusions are strict.[13] Problems in P admit a trivial FPTAS by exact solution, implying P = FPTAS in this context.[17]

The class APX captures problems with fixed-ratio approximations independent of any error parameter ε > 0, such as a constant-factor guarantee like 1.5 or 2, whereas PTAS requires approximations arbitrarily close to 1 for any ε > 0, albeit with running time polynomial only in the input size and potentially exponential in 1/ε.[13] Assuming P ≠ NP, APX properly contains PTAS, since no APX-complete problem admits a PTAS; otherwise, such a scheme would collapse the hierarchy.[17] PTAS-hardness delineates problems unlikely to admit a PTAS unless P = NP, often shown via approximation-preserving reductions from known hard problems.
For instance, MAX-3SAT-6—a bounded variant of maximum 3-satisfiability aiming to satisfy at least 6/7 of the clauses—is PTAS-hard, as established by inapproximability results demonstrating it cannot be approximated better than 7/8 + ε for any ε > 0.[18] Håstad's reductions further confirm that MAX-3SAT itself resists approximation within 7/8 + ε, implying no PTAS exists unless P = NP.[19]

Within this framework, MAX-SNP forms a syntactic subclass of APX, defined via bounded alternating existential-universal quantifiers and containing natural problems like MAX-3SAT and vertex cover; reductions preserve membership in MAX-SNP, aiding completeness proofs.[13] The Papadimitriou-Yannakakis hierarchy also encompasses related classes like BI-CRPTAS, which generalize PTAS to bi-criteria settings where one objective is approximated within (1 + ε) at the expense of relaxing another constraint by a bounded factor, useful for resource-constrained optimizations.[13]

Recent developments reinforce PTAS placements in the hierarchy, particularly for geometric problems in APX; for example, Arora's 1998 PTAS for the Euclidean traveling salesman problem in fixed dimensions achieves (1 + ε)-approximation in polynomial time, highlighting how PTAS can resolve approximability for specific subclasses unless P = NP.[3] Up to 2025, no fundamental separations or collapses in the PTAS-APX relation have emerged, maintaining the assumed inclusions under standard conjectures.[17]
Variants and Extensions
Deterministic PTAS
A deterministic polynomial-time approximation scheme (PTAS) refers to a family of deterministic algorithms that, for any fixed ε > 0, compute a solution within a (1 + ε) factor of the optimal value for an NP-hard optimization problem, running in time polynomial in the input size n but possibly exponential in 1/ε. Unlike randomized variants, these algorithms provide worst-case guarantees without any probability of failure or variance in performance.[20]

Key techniques for constructing deterministic PTAS often rely on dynamic programming combined with input rounding or simplification to bound the state space. For instance, in the bin packing problem, large items (those exceeding ε in size) are handled by enumerating feasible packings into a small number of bins using dynamic programming over configurations, while small items are grouped by size classes and packed greedily or via further DP; this approach yields an asymptotic PTAS with approximation ratio 1 + ε + 1/OPT, where OPT is the optimal number of bins. The seminal asymptotic PTAS for bin packing, achieving linear time O(n log(1/ε)), was developed by Fernández de la Vega and Lueker in 1981 through this grouping and DP method, reducing the problem to solving a bounded knapsack-like subproblem exactly.

Another foundational technique is Baker's shifting method from the 1980s, which applies to scheduling and planar graph problems by iteratively removing layers of vertices to create subgraphs of bounded treewidth (O(1/ε) layers), solvable via deterministic dynamic programming on tree decompositions; this guarantees a (1 + ε)-approximation by combining solutions from shifted instances and selecting the best. The formal presentation of Baker's technique appeared in 1994, enabling PTAS for problems like maximum independent set on planar graphs.
Runtime analysis for these deterministic PTAS typically shows a dependence of n^{O(1/ε)} or 2^{O(1/ε)} poly(n), arising from enumerating polytopes or configurations in the DP table whose dimension or size scales with 1/ε.

For example, in the partition problem—to divide a set of positive integers into two subsets with sums as equal as possible—a simple deterministic PTAS uses scaling and dynamic programming as follows:
- Compute the total sum S = ∑ a_i.
- Set δ = ε S / (2n).
- Round each a_i down to the nearest multiple of δ, obtaining b_i = ⌊a_i / δ⌋ δ; the total rounding error is at most n δ = ε S / 2.
- Use standard subset sum DP to find a subset of the b_i with sum closest to S/2, solvable in O(n^2 / ε) time since the target is O(n / ε) in scaled units.
- The resulting partition has difference at most ε S from optimal, yielding a (1 + ε)-approximation to the minimum difference (or equivalently to the maximum achievable subset sum ≤ S/2).
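The steps above can be implemented directly; the following is an illustrative sketch (working in scaled integer units, with subset reconstruction added for completeness), not a reference implementation.

```python
def partition_ptas(items, eps):
    """Approximate two-way partition by the scaling scheme above:
    round values to multiples of delta = eps*S/(2n), then run a
    subset-sum dynamic program in the scaled units, where the
    table has only O(n/eps) relevant states per item."""
    n = len(items)
    S = sum(items)
    delta = eps * S / (2 * n)
    scaled = [int(a // delta) for a in items]
    total = sum(scaled)
    # reachable[s] = True if some subset of the scaled values sums to s;
    # choice[s] remembers which item first made s reachable.
    reachable = [False] * (total + 1)
    reachable[0] = True
    choice = [None] * (total + 1)
    for idx, c in enumerate(scaled):
        for s in range(total, c - 1, -1):  # downward: each item used once
            if not reachable[s] and reachable[s - c]:
                reachable[s] = True
                choice[s] = idx
    # pick the reachable scaled sum closest to half the scaled total
    best = min((s for s in range(total + 1) if reachable[s]),
               key=lambda s: abs(2 * s - total))
    # recover one subset achieving that scaled sum
    subset, s = set(), best
    while s:
        idx = choice[s]
        subset.add(idx)
        s -= scaled[idx]
    side = sum(items[i] for i in subset)
    return subset, abs(S - 2 * side)
```

By the rounding-error argument in the steps above, the returned difference exceeds the optimal difference by at most ε·S, and the DP runs over O(n/ε)-sized scaled sums, giving the stated O(n²/ε) time.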
