Strong NP-completeness
In computational complexity, strong NP-completeness is a property of computational problems that is a special case of NP-completeness. A general computational problem may have numerical parameters. For example, the input to the bin packing problem is a list of objects of specific sizes and a size for the bins that must contain the objects; these object sizes and bin size are numerical parameters.
A problem is said to be strongly NP-complete (NP-complete in the strong sense) if it remains NP-complete even when all of its numerical parameters are bounded by a polynomial in the length of the input.[1] A problem is said to be strongly NP-hard if a strongly NP-complete problem has a pseudo-polynomial reduction to it. This pseudo-polynomial reduction is more restrictive than the usual polynomial-time reduction used in NP-hardness proofs. In particular, a pseudo-polynomial reduction may not produce a numerical parameter that is not polynomially bounded by the size and value of the numbers in the input.[2]
Normally numerical parameters to a problem are given in positional notation, so a problem of input size n might contain parameters whose size is exponential in n. If we redefine the problem to have the parameters given in unary notation, then the parameters must be bounded by the input size. Thus strong NP-completeness or NP-hardness may also be defined as the NP-completeness or NP-hardness of this unary version of the problem.
For example, bin packing is strongly NP-complete while the 0-1 Knapsack problem is only weakly NP-complete. Thus the version of bin packing where the object and bin sizes are integers bounded by a polynomial remains NP-complete, while the corresponding version of the Knapsack problem can be solved in pseudo-polynomial time by dynamic programming.
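A minimal sketch of that dynamic-programming approach (illustrative Python with hypothetical names) makes the pseudo-polynomial behaviour visible: the running time grows with the capacity value W, not with the number of bits needed to write W.

```python
def knapsack_max_value(weights, values, capacity):
    """Pseudo-polynomial DP for 0-1 Knapsack: O(n * capacity) time.

    Polynomial in the *value* of `capacity`, hence exponential in the
    number of bits of `capacity` under binary encoding.
    """
    # best[c] = maximum total value achievable with total weight <= c
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Example: small numbers make this fast; a capacity like 10**12 would not.
print(knapsack_max_value([3, 4, 5], [30, 50, 60], capacity=8))  # -> 90
```

The inner loop runs capacity + 1 times, so adding bits to the capacity multiplies the running time, which is why the algorithm is not polynomial under binary encoding.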
From a theoretical perspective, any strongly NP-hard optimization problem with a polynomially bounded objective function cannot have a fully polynomial-time approximation scheme (FPTAS) unless P = NP.[3][4] The converse fails, however: for example, assuming P ≠ NP, the knapsack problem with two constraints is not strongly NP-hard, yet it has no FPTAS even when the optimal objective is polynomially bounded.[5]
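The standard argument behind this fact can be sketched in a few lines; the notation (OPT, Max, q) is chosen here for illustration and follows the usual convention rather than any single source.

```latex
% Sketch: assume the objective is integral and bounded by a polynomial q,
% i.e. OPT(I) <= q(|I|, Max(I)). Run the FPTAS with accuracy
\[
  \varepsilon = \frac{1}{q\bigl(|I|, \operatorname{Max}(I)\bigr) + 1},
\]
% so the returned value is at least
\[
  (1-\varepsilon)\,\mathrm{OPT}(I) \;>\; \mathrm{OPT}(I) - 1,
\]
% which, for integer objective values, forces the output to be exactly optimal.
% The running time is polynomial in |I| and 1/\varepsilon = q(|I|, Max(I)) + 1,
% i.e. pseudo-polynomial; for a strongly NP-hard problem this is impossible
% unless P = NP.
```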
Some strongly NP-complete problems may still be easy to solve on average, but it's more likely that difficult instances will be encountered in practice.
Strong and weak NP-hardness vs. strong and weak polynomial-time algorithms
Assuming P ≠ NP, the following are true for computational problems on integers:[6]
- If a problem is weakly NP-hard, then it does not have a weakly polynomial time algorithm (polynomial in the number of integers and the number of bits in the largest integer), but it may have a pseudopolynomial time algorithm (polynomial in the number of integers and the magnitude of the largest integer). An example is the partition problem (see the sketch after this list). Both weak NP-hardness and weak polynomial time correspond to encoding the input integers in binary coding.
- If a problem is strongly NP-hard, then it does not even have a pseudo-polynomial time algorithm. It also does not have a fully polynomial-time approximation scheme. An example is the 3-partition problem. Both strong NP-hardness and pseudo-polynomial time correspond to encoding the input integers in unary coding.
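As a sketch of the pseudopolynomial algorithm mentioned for the partition problem above (illustrative Python with hypothetical names; assumes positive integer inputs):

```python
def can_partition(nums):
    """Pseudopolynomial DP for Partition: O(n * S) time, S = sum(nums).

    Polynomial in the magnitude of the numbers (their unary size), but
    exponential in their binary length when the values are huge.
    """
    total = sum(nums)
    if total % 2:
        return False
    target = total // 2
    reachable = [True] + [False] * target   # reachable[s]: some subset sums to s
    for x in nums:
        for s in range(target, x - 1, -1):  # downward: each number used once
            reachable[s] = reachable[s] or reachable[s - x]
    return reachable[target]

print(can_partition([3, 1, 1, 2, 2, 1]))  # True: {3, 2} vs {1, 1, 2, 1}
```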
References
- ^ Garey, M. R.; Johnson, D. S. (July 1978). "'Strong' NP-Completeness Results: Motivation, Examples, and Implications". Journal of the Association for Computing Machinery. 25 (3). New York, NY: ACM: 499–508. doi:10.1145/322077.322090. ISSN 0004-5411. MR 0478747. S2CID 18371269.
- ^ Hetland, Magnus Lie. "Can strong NP-hardness really be shown using plain polytime reductions?". Retrieved 29 May 2025.
- ^ Vazirani, Vijay V. (2003). Approximation Algorithms. Berlin: Springer. pp. 294–295. ISBN 3-540-65367-8. MR 1851303.
- ^ Garey, M. R.; Johnson, D. S. (1979). Victor Klee (ed.). Computers and Intractability: A Guide to the Theory of NP-Completeness. A Series of Books in the Mathematical Sciences. San Francisco, Calif.: W. H. Freeman and Co. pp. x+338. ISBN 0-7167-1045-5. MR 0519066.
- ^ Kellerer, H.; Pferschy, U.; Pisinger, D. (2004). Knapsack Problems. Springer.
- ^ Demaine, Erik. "Algorithmic Lower Bounds: Fun with Hardness Proofs, Lecture 2".
Strong NP-completeness
Background Concepts
NP and NP-Completeness
The complexity class NP, for "nondeterministic polynomial time," encompasses all decision problems for which a "yes" instance can be verified in polynomial time by a deterministic Turing machine, or equivalently, solved in polynomial time by a nondeterministic Turing machine.[2] This class formalizes problems where solutions can be checked efficiently, even if finding them may be difficult.[2] NP-completeness identifies the hardest problems within NP: a problem is NP-complete if it belongs to NP and every other problem in NP can be reduced to it in polynomial time via a many-one (Karp) reduction.[2] The concept was introduced by Stephen Cook in 1971, who proved that the Boolean satisfiability problem (SAT) is NP-complete, establishing it as the first such problem.[2] Independently, Leonid Levin developed similar ideas around the same time, publishing in 1973 on universal search problems that align with NP-completeness. A fundamental property of NP-complete problems is that if any one of them admits a polynomial-time algorithm, then the entire class NP collapses to P, the class of problems solvable in polynomial time on a deterministic Turing machine, implying P = NP.[2] This equivalence underscores the central role of reductions in proving hardness within NP.[2]

Input Encoding in Decision Problems
In decision problems within computational complexity theory, the input size, denoted n, is formally defined as the total length of the encoded instance, measured in bits or symbols required to represent the input over a fixed finite alphabet such as {0, 1}.[3] This encoding ensures that the complexity of algorithms is analyzed relative to the actual information content of the input, independent of the specific representation scheme, as long as it is polynomial-time computable and invertible.[4] For numerical decision problems, binary encoding is the standard convention, where an integer x is represented using a number of bits proportional to log x, leading to compact inputs whose size grows slowly with the magnitude of the numbers.[1] In contrast, unary encoding represents a number x using x instances of a single symbol (e.g., a string of x ones), making the input size directly proportional to the value of the number rather than its logarithm.[1] This distinction is crucial for problems involving large integers, as unary encoding expands the input size exponentially compared to binary for large values, while binary keeps it concise.[5] Numerical problems, such as those in optimization or partitioning, default to binary encoding unless specified otherwise, as it reflects realistic computational scenarios where data storage and transmission prioritize efficiency.[1]

The choice of encoding significantly impacts time complexity analysis. An algorithm that runs in time polynomial in the unary input size, effectively treating the magnitude of numbers as part of the input length, may require exponential time relative to the binary input size when numbers are large, since processing large binary numbers can amplify costs.[1] For instance, pseudopolynomial-time algorithms, which are efficient for small number magnitudes but scale poorly with large ones, appear polynomial under unary encoding but not under binary, highlighting how encoding influences the perceived tractability of NP-complete problems.[5]

A representative example is the Partition problem, where the input consists of a set of positive integers, and the goal is to decide if they can be divided into two subsets with equal sums. Under binary encoding, the problem is NP-complete, but it admits a dynamic programming algorithm running in O(nS) time, where n is the number of integers and S is the sum of the integers; this is pseudopolynomial because S can be exponentially larger than the input size.[1] If the integers are bounded by a polynomial in n (small magnitudes), the algorithm becomes truly polynomial in the binary input size, but for unbounded large numbers, the complexity remains intractable, demonstrating how encoding and number sizes alter the problem's effective hardness.[1]
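The gap between the two encodings described above can be seen directly; the snippet below is purely illustrative and uses hypothetical helper names.

```python
def binary_length(x: int) -> int:
    """Bits needed to write x in binary (the standard input-size convention)."""
    return x.bit_length()

def unary_length(x: int) -> int:
    """Symbols needed to write x in unary: proportional to x itself."""
    return x

for x in (13, 1_000, 10**9):
    print(x, binary_length(x), unary_length(x))
# 13     -> 4 bits  vs 13 symbols
# 1000   -> 10 bits vs 1,000 symbols
# 10**9  -> 30 bits vs 1,000,000,000 symbols
```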
Definitions of Strong and Weak NP-Completeness

Weak NP-Completeness
Weak NP-completeness refers to a subclass of NP-complete problems where the computational hardness depends critically on the use of large numeric values in the input, under the standard binary encoding scheme. Specifically, a decision problem is weakly NP-complete if it is NP-complete when inputs are encoded in binary but possesses a pseudo-polynomial time algorithm, one that runs in time polynomial in both the input length and the magnitudes of the numeric parameters, and hence potentially exponential in the bit length of those parameters.[6] This distinction arises because binary encoding permits numbers whose values can be exponentially larger than the input size, amplifying the effective search space for certain instances.[1]

A key characteristic of weakly NP-complete problems is that their intractability diminishes when restricted to instances featuring numbers bounded polynomially by the input length. In such cases, the pseudo-polynomial algorithm reduces to a true polynomial-time solution, as the numeric values no longer introduce exponential factors relative to the input size.[7] Formally, a problem admits a pseudo-polynomial time algorithm if there exists a running time bound of O(n^a · M^b), where n denotes the length of the input, M is the maximum absolute value of any number in the input, and a and b are fixed constants independent of the instance.[8] This condition ensures solvability in polynomial time whenever M is at most polynomial in n, highlighting how the hardness is "artificial" in the sense of encoding rather than inherent structural complexity.

The terminology of "weak" NP-completeness was introduced by Garey and Johnson in their seminal 1979 compendium on NP-completeness, where it was used to categorize optimization problems like the knapsack problem that satisfy these criteria.[6] This classification built on earlier work distinguishing strong from weak hardness based on unary versus binary reductions, providing a framework for understanding problems solvable efficiently in practice for moderate-sized numbers despite formal NP-completeness.[1]
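Written out symbolically, the pseudo-polynomial running-time bound from this subsection (the constant names a and b are our choice) and its specialization to number-bounded instances are:

```latex
% Pseudo-polynomial running time, with explicit constants a, b:
\[
  T(I) \;=\; O\!\bigl(|I|^{a} \cdot \operatorname{Max}(I)^{b}\bigr),
\]
% where |I| is the encoded input length and Max(I) the largest number in I.
% If the instance satisfies Max(I) <= p(|I|) for some polynomial p, then
\[
  T(I) \;=\; O\!\bigl(|I|^{a} \cdot p(|I|)^{b}\bigr),
\]
% which is polynomial in |I| alone, i.e. a true polynomial-time bound on
% number-bounded instances.
```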
Strong NP-Completeness

Strong NP-completeness refines the notion of NP-completeness by addressing the role of numerical magnitudes in input instances, ensuring that the computational hardness does not depend on the exploitation of large numbers. Formally, a decision problem Π is strongly NP-complete if it belongs to NP and remains NP-hard even when restricted to instances where all numerical parameters are bounded by a polynomial in the input length; specifically, there exists a polynomial p such that the restricted problem Π_p, defined by requiring Max(I) ≤ p(|I|) for every instance I of Π, is NP-hard.[1] This restriction, often written Max(I) ≤ |I|^c for some constant c and input length |I|, prevents reductions or algorithms from relying on the exponential growth of number sizes to achieve tractability.[1]

A defining characteristic of strongly NP-complete problems is the absence of pseudo-polynomial time algorithms, meaning that no solution exists that runs in time polynomial in both the input length and the magnitudes of the numbers, unless P = NP.[1] In contrast to weakly NP-complete problems, where pseudo-polynomial algorithms may provide practical solutions for instances with small numbers, strong NP-completeness implies that such shortcuts are unavailable regardless of the numerical bounds, establishing a more robust form of intractability.[1]

The formalization of strong NP-completeness relies on bounded reductions, particularly pseudo-polynomial transformations, which preserve the polynomial bounds on numerical parameters during reductions between problems. A pseudo-polynomial transformation from Π to Π′ is a function f, computable in time polynomial in both |I| and Max(I), such that the output instance f(I) satisfies |I| ≤ q1(|f(I)|) and Max′(f(I)) ≤ q2(|I|, Max(I)) for some polynomials q1 and q2.[1] These transformations ensure that if Π is strongly NP-hard, then so is Π′, without introducing numbers that exceed polynomial bounds relative to the input size. This approach contrasts with standard Karp reductions, which are polynomial-time transformations based solely on the input length and may inadvertently create instances with exponentially large numbers, potentially masking true hardness by allowing pseudo-polynomial solvability.[1] Reductions that bound the numerical magnitudes in this way provide a stricter criterion that highlights inherent combinatorial intractability independent of number representations.[1]
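The conditions on such a transformation can be stated compactly, in the spirit of the Garey and Johnson formulation; the symbols below (f, q_1, q_2, Y_Π) are our notational choices.

```latex
% Requirements on a pseudo-polynomial transformation f from \Pi to \Pi',
% for every instance I of \Pi (q_1, q_2 are fixed polynomials):
\begin{align*}
  &\text{(1)}\quad I \in Y_{\Pi} \iff f(I) \in Y_{\Pi'}
      &&\text{(answers are preserved)}\\
  &\text{(2)}\quad f(I)\ \text{computable in time}\ \operatorname{poly}\bigl(|I|, \operatorname{Max}(I)\bigr)
      &&\text{(pseudo-polynomial time)}\\
  &\text{(3)}\quad |I| \;\le\; q_1\bigl(|f(I)|\bigr)
      &&\text{(the instance does not shrink superpolynomially)}\\
  &\text{(4)}\quad \operatorname{Max}'\bigl(f(I)\bigr) \;\le\; q_2\bigl(|I|, \operatorname{Max}(I)\bigr)
      &&\text{(numbers stay polynomially bounded)}
\end{align*}
```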
Distinctions and Equivalences

Strong vs. Weak NP-Hardness
The distinction between strong and weak NP-hardness lies primarily in the nature of the reductions used to establish hardness. In weak NP-hardness, polynomial-time reductions may exponentially increase the magnitude of numerical values in the instance, allowing for pseudo-polynomial-time transformations that depend on both the input length and the largest number's value.[1] In contrast, strong NP-hardness requires reductions, termed strongly polynomial-time reductions, that keep the numerical values polynomially bounded relative to the original instance size, ensuring no such exponential blowup occurs.[1]

Proving strong NP-hardness typically demands more rigorous techniques, such as direct combinatorial constructions or reductions from established strongly NP-hard problems like 3-SAT, which avoid inflating numerical sizes to preserve the bounded restriction.[1] These proofs often involve restricting the problem to instances where the maximum number is at most a polynomial in the input length, demonstrating that even this bounded version remains NP-hard.[1] Weak NP-hardness proofs, however, can leverage ordinary polynomial-time reductions that permit number growth, which simplifies demonstrations but limits the strength of the intractability claim.

A fundamental theorem states that every strongly NP-hard problem is NP-hard in the ordinary sense, since the bounded restriction being NP-hard implies the full problem is at least as hard; the converse does not hold, as some NP-hard problems rely on numerical blowups that vanish under bounded restrictions.[1] Furthermore, weakly NP-hard problems become solvable in polynomial time when numerical inputs are encoded in unary, as this encoding makes the input length proportional to the numerical values, transforming pseudo-polynomial algorithms into true polynomial-time ones.[9]

In practice, strong NP-hardness signals an inherent combinatorial difficulty that persists regardless of encoding schemes or reliance on arithmetic operations with large numbers, underscoring challenges that cannot be mitigated by pseudo-polynomial methods unless P = NP.[1] This contrasts with weak NP-hardness, where intractability may stem more from the efficient encoding of large numbers in binary than from structural complexity.[1]

Relation to Polynomial-Time Algorithms
Weak NP-complete problems are distinguished by the existence of pseudo-polynomial time algorithms for their solution. A pseudo-polynomial time algorithm runs in time that is polynomial in both the length of the input encoding and the numeric values encoded within it, but this can become exponential in the total bit length when the numbers are exponentially large. For instance, the knapsack problem, a classic weakly NP-complete decision problem, can be solved using dynamic programming in O(nW) time, where n is the number of items and W is the capacity value, demonstrating this behavior. In contrast, strongly NP-complete problems do not admit pseudo-polynomial time algorithms unless P = NP. If such a problem is solvable in polynomial time, the algorithm must be strongly polynomial, meaning its running time depends only on the input length (number of bits), independent of the magnitudes of the encoded numbers. This follows because strong NP-completeness is defined via reductions that preserve both input size and number magnitudes, ensuring that pseudo-polynomial solvability would imply broader tractability conflicting with NP-hardness assumptions.[8] Under the assumption that P ≠ NP, a decision problem admits a polynomial-time algorithm if and only if it is not NP-hard.[1] This highlights the theoretical boundary: problems remaining hard even under unary number encodings (where magnitudes equal bit lengths) cannot be efficiently solved without resolving the P vs. NP question. This distinction extends to optimization variants. Weakly NP-hard optimization problems often admit fully polynomial-time approximation schemes (FPTAS), which provide (1 + ε)-approximations in time polynomial in the input length and 1/ε, as seen in the knapsack problem. However, strongly NP-hard optimization problems with polynomially bounded objectives do not admit FPTAS unless P = NP, because an FPTAS would imply a pseudo-polynomial exact algorithm, contradicting the strong hardness.[10]Examples and Applications
Examples and Applications

Strongly NP-Complete Problems
One canonical example of a strongly NP-complete problem is 3-SAT, the decision problem of determining whether a Boolean formula in conjunctive normal form, with each clause consisting of exactly three literals, is satisfiable.[1] This problem is strongly NP-complete because its instances involve only polynomially bounded parameters (such as the number of variables and clauses), precluding pseudo-polynomial-time algorithms unless P = NP.[1]

Another fundamental example is the Vertex Cover problem, which asks whether there exists a set of at most k vertices in an undirected graph such that every edge is incident to at least one vertex in the set.[11] Vertex Cover is strongly NP-complete, as it can be reduced from 3-SAT using transformations that avoid introducing large numerical values, ensuring the hardness persists under bounded inputs.[1]

The Hamiltonian Cycle problem, which determines whether an undirected graph contains a cycle that visits each vertex exactly once, provides yet another illustration of strong NP-completeness.[11] Proven NP-complete by reduction from Vertex Cover, it is strongly NP-complete because the problem structure relies solely on graph connectivity without dependence on large numbers, making it intractable even in unary encoding.[1]

In the domain of packing problems, the decision version of Bin Packing (given a set of n items with sizes bounded by a polynomial in n, a bin capacity B, and an integer m, does there exist a packing into at most m bins?) is strongly NP-complete.[1] This hardness arises from a reduction from the strongly NP-complete 3-Partition problem, in which item sizes are polynomially bounded, eliminating opportunities for pseudo-polynomial solutions.[1]

Most natural problems in graph theory and propositional logic, such as those above, are classified as strongly NP-complete unless their formulations explicitly involve unbounded numerical parameters that enable pseudo-polynomial algorithms.[1]

Weakly NP-Complete Problems
Weakly NP-complete problems are those NP-complete problems that admit pseudo-polynomial time algorithms, meaning their hardness arises primarily from large numerical values in the input rather than from structural complexity alone. These problems, often termed "number problems," were classified as such by Garey and Johnson in their seminal work, highlighting their solvability in time polynomial in the magnitudes of the numbers involved but exponential in the input length when encoded in binary. A key characteristic is that they become solvable in polynomial time if the numerical inputs are encoded in unary or if the numbers are bounded by a polynomial in the input size.

The Subset Sum problem exemplifies weak NP-completeness: given a set of n positive integers and a target sum W, determine if there is a subset that sums exactly to W. NP-complete under binary encoding via polynomial-time reductions from other NP-complete problems, it is weakly so because a dynamic programming algorithm solves it in O(nW) time by building a table of achievable sums up to W. This pseudo-polynomial runtime exploits the numeric values directly, confirming its weak classification.

The 0-1 Knapsack decision problem is another canonical example: given n items, each with weight w_i and value v_i, a knapsack capacity C, and a target value V, decide if there is a subset of items with total weight at most C and total value at least V. Like Subset Sum, it is NP-complete in binary encoding because the capacity and value constraints permit large numbers, yet weakly so, as dynamic programming resolves it in O(nC) time, analogous to the subset sum approach but tracking both weight and value bounds.

The Partition problem involves deciding whether a multiset of n positive integers can be divided into two subsets with equal sums. This is the special case of Subset Sum where the target is half the total sum, and it inherits weak NP-completeness, being solvable via dynamic programming in pseudo-polynomial time O(nS), where S is the total sum. Garey and Johnson catalog it among the number problems amenable to such techniques, underscoring how bounded or unary-encoded inputs render it polynomial-time solvable.
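A minimal sketch of the table-building algorithm described above, together with Partition as its special case (illustrative Python with hypothetical names; assumes positive integers):

```python
def subset_sum(nums, target):
    """O(n * target) DP over achievable sums, as described above."""
    achievable = [True] + [False] * target
    for x in nums:
        for s in range(target, x - 1, -1):  # downward: use each number once
            if achievable[s - x]:
                achievable[s] = True
    return achievable[target]

def partition(nums):
    """Partition as the special case of Subset Sum with target = S/2."""
    total = sum(nums)
    return total % 2 == 0 and subset_sum(nums, total // 2)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5
print(partition([1, 5, 11, 5]))             # True: {11} vs {1, 5, 5}
```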
Implications for Algorithm Design

The classification of NP-complete problems as weak or strong has profound implications for designing algorithms that address them in practice. For weakly NP-complete problems, such as the 0-1 knapsack problem, dynamic programming techniques enable exact solutions in pseudo-polynomial time, which is feasible when the numerical parameters (e.g., weights or capacities) are not excessively large.[12] These algorithms, running in time proportional to the product of the input size and the magnitude of the numbers, provide practical efficiency for real-world instances where the numbers fit within reasonable bounds, as in resource allocation scenarios with limited capacities.[12] Moreover, the optimization versions of weakly NP-complete problems often admit fully polynomial-time approximation schemes (FPTAS), which deliver (1 - ε)-approximate solutions in time polynomial in both the input size and 1/ε, allowing scalable approximations without sacrificing much accuracy.[13]

In contrast, strongly NP-complete problems resist such approaches. Unless P = NP, no FPTAS exists for strongly NP-hard optimization problems with polynomially bounded objective functions, as the existence of an FPTAS would imply a pseudo-polynomial algorithm, contradicting their strong intractability even under unary encoding of numbers.[13][1] Instead, algorithm designers turn to polynomial-time approximation algorithms with fixed ratios, such as the 2-approximation for the vertex cover problem using maximal matching, which guarantees solutions within twice the optimum and runs in linear time (a sketch appears at the end of this section).[14] These constant-factor approximations are crucial for applications like network design, where exact solutions are unattainable but bounded errors are tolerable. For exact solutions, branch-and-bound or integer programming solvers are employed, but their performance can degrade exponentially with instance size.[1]

In parameterized complexity, strong NP-completeness frequently signals W[1]-hardness when the problem is parameterized by a natural solution size, such as the clique size in the clique problem, precluding fixed-parameter tractable (FPT) algorithms unless FPT = W[1].[15][16] This hardness guides designers to seek alternative parameters, like treewidth, where FPT algorithms may exist via dynamic programming on tree decompositions. In software implementation, a key practical strategy is to inspect input numbers at runtime: if they are small (unary-like), invoke pseudo-polynomial methods for weak cases; otherwise, fall back to approximations or heuristics for strong ones, optimizing for typical data distributions.[1]

Modern extensions highlight further challenges; for instance, strongly NP-complete problems like 3-SAT resist polynomial-time solutions even on quantum computers, with no known quantum algorithm achieving such efficiency despite Grover's search providing only a quadratic speedup for unstructured search.[17] This underscores the need for hybrid classical-quantum heuristics, such as quantum approximate optimization algorithms (QAOA), in future designs for optimization tasks.[18]
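As a concrete instance of the maximal-matching 2-approximation for vertex cover mentioned in this section (illustrative Python with hypothetical names; the graph is given as an edge list):

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: returns a cover at most 2x the optimum.

    Every matched edge forces at least one of its endpoints into any
    optimal cover; adding both endpoints gives the factor of 2.
    Runs in O(|E|) time with constant-time set lookups.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge not yet covered
            cover.add(u)                       # take both endpoints
            cover.add(v)
    return cover

# A 4-cycle plus a chord: the optimum is 2, the approximation returns at most 4.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]))
```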
References

- Garey, M. R.; Johnson, D. S. (1978). "'Strong' NP-Completeness Results: Motivation, Examples, and Implications". Journal of the Association for Computing Machinery. 25 (3): 499–508.
