from Wikipedia

In computational complexity theory, the complexity class #P (pronounced "sharp P" or, sometimes "number P" or "hash P") is the set of the counting problems associated with the decision problems in the set NP. More formally, #P is the class of function problems of the form "compute f(x)", where f is the number of accepting paths of a nondeterministic Turing machine running in polynomial time. Unlike most well-known complexity classes, it is not a class of decision problems but a class of function problems. The most difficult, representative problems of this class are #P-complete.

Relation to decision problems

An NP decision problem is often of the form "Are there any solutions that satisfy certain constraints?" For example:

  • Are there any subsets of a list of integers that add up to zero? (subset sum problem)
  • Are there any Hamiltonian cycles in a given graph with cost less than 100? (traveling salesman problem)
  • Are there any variable assignments that satisfy a given CNF formula? (Boolean satisfiability problem)
  • Are there any roots of a univariate real polynomial that are positive?

The corresponding #P function problems ask "how many" rather than "are there any". For example:

  • How many subsets of a list of integers add up to zero?
  • How many Hamiltonian cycles in a given graph have cost less than 100?
  • How many variable assignments satisfy a given CNF formula?
  • How many roots of a univariate real polynomial are positive?

Clearly, a #P problem must be at least as hard as the corresponding NP problem. If it's easy to count answers, then it must be easy to tell whether there are any answers—just count them and see whether the count is greater than zero. Some of these problems, such as root finding, are easy enough to be in FP, while others are #P-complete.
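The relationship between counting and deciding can be made concrete with a small brute-force sketch (exponential time, for illustration only); the function and variable names below are illustrative, not part of any standard library:

```python
from itertools import combinations

def count_zero_subsets(nums):
    """#P-style counting: how many non-empty subsets sum to zero?"""
    count = 0
    for r in range(1, len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == 0:
                count += 1
    return count

def has_zero_subset(nums):
    """NP-style decision: reduces to checking whether the count is positive."""
    return count_zero_subsets(nums) > 0

nums = [2, -2, 4]
print(count_zero_subsets(nums))  # 1: only the subset {2, -2}
print(has_zero_subset(nums))     # True
```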

One consequence of Toda's theorem is that a polynomial-time machine with a #P oracle (P^{#P}) can solve all problems in PH, the entire polynomial hierarchy. In fact, the polynomial-time machine only needs to make one #P query to solve any problem in PH. This is an indication of the extreme difficulty of solving #P-complete problems exactly.

Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For more information on this, see #P-complete.

The closest decision problem class to #P is PP, which asks whether a majority (more than half) of the computation paths accept. This finds the most significant bit in the #P problem answer. The decision problem class ⊕P (pronounced "Parity-P") instead asks for the least significant bit of the #P answer.
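Read as bits of the binary count, these two decision classes can be illustrated with a trivial sketch. It assumes, for illustration, a machine whose computation tree has exactly 2^m paths, so "more than half accept" corresponds (informally) to the top bit of the count:

```python
def pp_accepts(accepting_paths: int, total_path_bits: int) -> bool:
    """PP-style question: do more than half of the 2**total_path_bits paths accept?"""
    return accepting_paths > 2 ** (total_path_bits - 1)

def parity_p_accepts(accepting_paths: int) -> bool:
    """Parity-P-style question: is the number of accepting paths odd (least significant bit)?"""
    return accepting_paths % 2 == 1

# With m = 4 nondeterministic bits there are 16 paths; suppose 9 of them accept.
print(pp_accepts(9, 4))       # True: 9 > 8
print(parity_p_accepts(9))    # True: 9 is odd
```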

Formal definitions

#P is formally defined as follows:

#P is the set of all functions $f\colon \{0,1\}^* \to \mathbb{N}$ such that there is a polynomial-time nondeterministic Turing machine $M$ such that for all $x \in \{0,1\}^*$, $f(x)$ equals the number of accepting branches in $M$'s computation graph on $x$.[1]

#P can also be equivalently defined in terms of a verifier. A decision problem is in NP if there exists a polynomial-time checkable certificate for a given problem instance; that is, NP asks whether there exists a proof of membership for the input that can be checked for correctness in polynomial time. The class #P asks how many certificates exist for a problem instance that can be checked for correctness in polynomial time.[1] In this context, #P is defined as follows:

#P is the set of functions $f\colon \{0,1\}^* \to \mathbb{N}$ such that there exists a polynomial $p\colon \mathbb{N} \to \mathbb{N}$ and a polynomial-time deterministic Turing machine $V$, called the verifier, such that for every $x \in \{0,1\}^*$, $f(x) = \big|\{\, y \in \{0,1\}^{p(|x|)} : V(x, y) = 1 \,\}\big|$.[2] (In other words, $f(x)$ equals the size of the set containing all of the polynomial-size certificates for $x$.)
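A minimal brute-force sketch of this verifier-style definition, using #SAT as the example and counting certificates $y$ with $V(x, y) = 1$ by exhaustive enumeration (the helper names and clause encoding are illustrative):

```python
from itertools import product

def verifier(cnf, assignment):
    """Polynomial-time verifier V(x, y): does the certificate (assignment) satisfy the CNF?

    cnf is a list of clauses; each clause is a list of non-zero ints,
    where literal i means variable i is true and -i means it is false.
    """
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in cnf)

def sharp_sat(cnf, num_vars):
    """f(x) = |{ y : V(x, y) = 1 }|, counted by brute force over all certificates."""
    return sum(verifier(cnf, y) for y in product([False, True], repeat=num_vars))

# (x1 OR x2) AND (NOT x1 OR x3): count satisfying assignments over x1..x3.
print(sharp_sat([[1, 2], [-1, 3]], 3))  # 4
```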

History

The complexity class #P was first defined by Leslie Valiant in a 1979 article on the computation of the permanent of a square matrix, in which he proved that the permanent is #P-complete.[3]

Larry Stockmeyer proved that for every #P problem $f$ there exists a randomized algorithm using an oracle for SAT which, given an instance $a$ of $f$ and $\varepsilon > 0$, returns with high probability a number $x$ such that $(1-\varepsilon) f(a) \leq x \leq (1+\varepsilon) f(a)$.[4] The runtime of the algorithm is polynomial in the size of $a$ and in $1/\varepsilon$. The algorithm is based on the leftover hash lemma.

from Grokipedia
♯P (pronounced "sharp P" or "hash P") is a function complexity class in computational complexity theory that captures the counting versions of decision problems in NP. Formally, a function $f\colon \{0,1\}^* \to \mathbb{N}$ is in ♯P if there exists a nondeterministic Turing machine $M$ running in polynomial time such that for every input $x$, $f(x)$ equals the number of accepting paths of $M$ on $x$. The class was introduced by Leslie G. Valiant in 1979 in the context of studying the computational complexity of the permanent function. Within ♯P, the subclass of ♯P-complete problems represents the hardest counting tasks under parsimonious or Turing reductions, and these are believed to be intractable even if P = NP, since counting solutions is generally more difficult than merely deciding existence. Notable ♯P-complete problems include computing the permanent of a 0-1 matrix (PERMANENT), counting the satisfying assignments of a Boolean formula (#SAT), and counting the perfect matchings of a bipartite graph (#PM). Valiant demonstrated the ♯P-completeness of PERMANENT using arithmetic reductions, establishing it as a foundational result that highlighted the intractability of many natural enumeration problems.

♯P is closely related to other complexity classes: the decision version of a ♯P problem (asking whether the count is positive) defines a language in NP, while the majority version (asking whether more than half of the computation paths accept) characterizes PP, the class of languages decidable by probabilistic polynomial-time machines with unbounded two-sided error. A landmark result by Seinosuke Toda, published in 1991, shows that the entire polynomial hierarchy PH is contained in $\mathrm{P}^{\sharp\mathrm{P}}$; if ♯P functions could be computed in polynomial time, the polynomial hierarchy would collapse, underscoring the power of counting oracles. Despite extensive study, no polynomial-time algorithms are known for ♯P-complete problems, and they serve as benchmarks for approximation algorithms and parameterized complexity approaches in areas like combinatorics and statistical physics.

Fundamentals

Formal Definition

♯P is a complexity class consisting of counting functions that compute the number of accepting computation paths of nondeterministic Turing machines running in polynomial time. Specifically, it includes all functions $f\colon \{0,1\}^* \to \mathbb{N}$, where $\mathbb{N}$ denotes the non-negative integers, such that for each input $x$, $f(x)$ is the number of accepting paths for $x$. A nondeterministic Turing machine (NTM) underlying a ♯P function operates in polynomial time, meaning there exists a polynomial $p$ such that all computation paths on input $x$ have length at most $p(|x|)$. The machine has a finite set of states $Q$, and nondeterminism allows multiple possible transitions from each configuration, forming a computation tree whose leaves indicate acceptance or rejection. The function value $f(x)$ is precisely the number of root-to-leaf paths in this tree that end in an accepting state. Formally,
$$\#\mathrm{P} = \{\, f \mid \exists M \in \mathrm{NPTM},\ \exists p,\ \forall x \in \{0,1\}^*,\ f(x) = \#_M(x) \,\},$$
where $\mathrm{NPTM}$ is the class of nondeterministic polynomial-time Turing machines and $\#_M(x)$ denotes the number of accepting paths of $M$ on $x$. This definition captures the inherent nondeterminism by enumerating all possible computation branches. The output of a function in ♯P is a non-negative integer represented in binary, with the maximum possible value bounded by $2^{p(|x|)}$, reflecting the exponential number of potential paths in the computation tree. This class relates to decision problems in NP, where the existence of at least one accepting path defines membership.
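The path-counting semantics can be sketched by modeling the nondeterministic computation as a branching function and counting accepting root-to-leaf paths recursively; this is an illustrative toy model (not a faithful Turing-machine simulator), and all names are hypothetical:

```python
def count_accepting_paths(step, config, budget):
    """Count accepting root-to-leaf paths of a nondeterministic computation.

    step(config) returns either "accept", "reject", or a list of successor
    configurations (the nondeterministic choices).  budget bounds the recursion
    depth, mirroring the polynomial path-length bound p(|x|).
    """
    outcome = step(config)
    if outcome == "accept":
        return 1
    if outcome == "reject" or budget == 0:
        return 0
    return sum(count_accepting_paths(step, nxt, budget - 1) for nxt in outcome)

# Toy machine: guess 3 bits, accept iff at least two of them are 1.
def guess_bits(bits):
    if len(bits) == 3:
        return "accept" if sum(bits) >= 2 else "reject"
    return [bits + (0,), bits + (1,)]

print(count_accepting_paths(guess_bits, (), budget=4))  # 4 accepting paths: 110, 101, 011, 111
```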

Connection to NP

The class ♯P is intimately connected to the decision class NP through a natural correspondence between decision problems and counting problems. Specifically, for every language $L \in \mathrm{NP}$ there exist a polynomial-time verifier $V$ and a polynomial $p$ such that the counting function $f_L(x) = |\{\, y : |y| \leq p(|x|),\ V(x,y) = 1 \,\}|$ belongs to ♯P, where $f_L(x)$ counts the accepting witness strings for input $x$. This correspondence runs in both directions: every ♯P function arises as the counting version of some NP language, and conversely, for any $f \in \sharp\mathrm{P}$, the associated decision problem $\{\, x : f(x) > 0 \,\}$ is in NP. Under this correspondence, $f_L$ satisfies a key property with respect to $L$: if $x \notin L$, then $f_L(x) = 0$, while if $x \in L$, then $f_L(x) \geq 1$. This links the existence of solutions (captured by NP) directly to their enumeration (captured by ♯P). However, ♯P extends beyond this: computing the exact count can remain hard even when the underlying decision problem is trivial, for example when every input is known to have either zero or a fixed positive number of witnesses.

To establish hardness within ♯P, reductions must preserve the number of solutions, unlike the many-one reductions used for NP, which only preserve membership. A standard tool is the parsimonious reduction, a polynomial-time transformation that maps instances so that the number of accepting witnesses of the original instance equals the number for the reduced instance. Such reductions, which exactly preserve solution counts, suffice to show ♯P-completeness for many natural problems derived from NP-complete ones. More generally, counting reductions may scale or shift the solution count by an efficiently computable factor, but parsimonious ones provide the tightest preservation.

Finally, the decision version of a ♯P function $f$, namely the problem of determining whether $f(x) \geq 1$, lies in NP and is NP-complete whenever $f$ is ♯P-complete under parsimonious reductions from counting versions of NP-complete problems. The more general threshold problem $\{\, (x, k) : f(x) \geq k \,\}$ (with $k$ given in binary) captures the hardness of ♯P but belongs to the broader class PP; the special case $k = 1$ reduces directly to the underlying NP problem.
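As an illustration of a count-preserving correspondence (the idea behind parsimonious reductions), the complement map $S \mapsto V \setminus S$ is a bijection between independent sets and vertex covers of a graph, so the two counting problems have exactly the same answer. A brute-force check, with illustrative names and a tiny example graph:

```python
from itertools import combinations

def is_independent_set(edges, subset):
    return not any(u in subset and v in subset for u, v in edges)

def is_vertex_cover(edges, subset):
    return all(u in subset or v in subset for u, v in edges)

def count_solutions(vertices, edges, predicate):
    count = 0
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            if predicate(edges, set(subset)):
                count += 1
    return count

# A 4-cycle: the complement bijection S <-> V \ S preserves the number of solutions.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(count_solutions(V, E, is_independent_set))  # equals the vertex-cover count below
print(count_solutions(V, E, is_vertex_cover))
```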

Properties and Completeness

Basic Properties

♯P is closed under addition and multiplication. For functions $f, g \in \sharp\mathrm{P}$, the function $f + g$ can be computed by a nondeterministic Turing machine that first branches nondeterministically and then simulates either the machine for $f$ or the machine for $g$ on the input, so the accepting paths of the two machines are combined and their counts add. The product $f \cdot g$ can be computed by running the two machines one after the other, so accepting paths pair up and the counts multiply. (Whether ♯P is closed under subtraction, even in the truncated form $\max(f - g, 0)$, is a long-standing open question.) Each ♯P function corresponds to a polynomial-time verifiable relation, in the sense of FNP, whose witnesses it counts. Additionally, ♯P ⊆ FPSPACE, as the number of accepting paths can be computed by a polynomial-space recursive procedure that tallies the counts without storing all paths explicitly. Trivially, every ♯P function is computable in $\mathrm{FP}^{\sharp\mathrm{P}}$, since a polynomial-time machine with a ♯P oracle can query the count directly.

Most interesting functions in ♯P are not sparse, meaning they take non-zero values on exponentially many inputs. For ♯P-complete functions under parsimonious reductions, the support sets (the inputs where the function is positive) form NP-complete languages and thus cannot be sparse unless P = NP, by Mahaney's theorem.

♯P exhibits strong hardness properties relative to decision classes. If any ♯P-complete function is computable in polynomial time, then P = NP, since deciding whether the count is positive answers the corresponding NP decision problem; the converse implication, from P = NP to ♯P = FP, is not known to hold. Moreover, a polynomial-time algorithm for a ♯P-complete problem would collapse the polynomial hierarchy to P: Toda's theorem places PH inside $\mathrm{P}^{\sharp\mathrm{P}}$, and ♯P ⊆ FP would then give PH ⊆ P.
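A sketch of the branching construction behind closure under addition, with a "machine" modeled simply as the list of its paths' outcomes (an assumption made purely for illustration; real machines would be simulated, not enumerated):

```python
def combine_by_branching(machine_f, machine_g):
    """Closure under addition: branch nondeterministically into M_f or M_g.

    A 'machine' here is a function mapping an input to the list of its
    computation paths' outcomes (True = accepting).  Prefixing one extra
    nondeterministic choice concatenates the path lists, so the
    accepting-path counts add.
    """
    def combined(x):
        return machine_f(x) + machine_g(x)  # list concatenation = union of paths
    return combined

def count_accepting(machine, x):
    return sum(machine(x))

# Toy machines: f "guesses" a divisor of x, g "guesses" a position of a 1-bit in x.
def f_machine(x):
    return [x % d == 0 for d in range(1, x + 1)]

def g_machine(x):
    return [b == "1" for b in bin(x)[2:]]

h_machine = combine_by_branching(f_machine, g_machine)
assert count_accepting(h_machine, 12) == count_accepting(f_machine, 12) + count_accepting(g_machine, 12)
print(count_accepting(h_machine, 12))  # 6 divisors + 2 one-bits = 8
```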

♯P-Complete Problems

A function $f$ is $\#\mathrm{P}$-complete if $f \in \#\mathrm{P}$ and every function $g \in \#\mathrm{P}$ is polynomial-time Turing-reducible to $f$, meaning there exists a polynomial-time oracle machine that computes $g$ using polynomially many oracle queries to $f$. Canonical examples of $\#\mathrm{P}$-complete problems include #SAT, which asks for the number of satisfying truth assignments of a given formula in conjunctive normal form (CNF). #SAT is $\#\mathrm{P}$-complete under parsimonious reductions, reductions that preserve the exact number of solutions, obtained from a counting version of the Cook–Levin construction (equivalently, from #CircuitSAT, counting the satisfying assignments of a Boolean circuit). Similarly, #3SAT, the restriction of #SAT to 3-CNF formulas, is $\#\mathrm{P}$-complete via parsimonious reductions from #SAT.

Another fundamental $\#\mathrm{P}$-complete problem is computing the permanent of a 0-1 matrix $A \in \{0,1\}^{n \times n}$, defined as
$$\mathrm{per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} A_{i,\sigma(i)},$$
where $S_n$ is the set of all permutations of $\{1, \dots, n\}$. Valiant established this hardness using Turing reductions from #SAT, which transform a formula into a collection of matrix instances whose permanents encode the number of satisfying assignments. Approximation is also delicate: for matrices allowed to contain negative entries, approximating the permanent within any constant factor remains $\#\mathrm{P}$-hard, although the permanent of a nonnegative matrix admits a fully polynomial randomized approximation scheme.

Other notable $\#\mathrm{P}$-complete problems include #HamiltonianCycles, which counts the Hamiltonian cycles of a directed graph and is shown complete via parsimonious reductions from #3SAT that map each satisfying assignment to exactly one cycle, and #PerfectMatchings, which counts the perfect matchings of a bipartite graph and is interreducible with the permanent, since the two quantities coincide for 0-1 matrices. These examples play a key role in demonstrating hardness for graph-theoretic counting problems, as reductions from them establish $\#\mathrm{P}$-completeness for variants such as counting cycles or matchings in restricted graph classes.
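The identity between the permanent of a 0-1 adjacency matrix and the number of perfect matchings can be checked directly with a brute-force sketch (factorial time, for illustration only):

```python
from itertools import permutations

def permanent(a):
    """Brute-force permanent: sum over all permutations of products of entries."""
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= a[i][sigma[i]]
        total += prod
    return total

# 0-1 adjacency matrix of a bipartite graph: rows = left vertices, columns = right vertices.
# Each permutation sigma with all A[i][sigma(i)] = 1 corresponds to exactly one perfect matching.
A = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
]
print(permanent(A))  # 2 perfect matchings: rows (1,2,3) -> columns (1,2,3) and (2,3,1)
```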

Probabilistic and Decision Classes

The probabilistic polynomial-time class PP consists of all decision problems for which there exists a probabilistic polynomial-time Turing machine that accepts yes-instances with probability strictly greater than 1/2 and rejects no-instances with probability at least 1/2. This class admits an equivalent characterization in terms of #P functions: a language $L$ belongs to PP if there is a polynomial-time nondeterministic machine whose number of accepting paths on input $x$, a function $f(x) \in \#\mathrm{P}$, satisfies $f(x) > 2^{p(|x|)-1}$ for yes-instances $x \in L$ and $f(x) \leq 2^{p(|x|)-1}$ for no-instances, where $p$ is a polynomial bounding the number of nondeterministic choices. The canonical PP-complete problem is MAJSAT, which asks whether a given Boolean formula is satisfied by more than half of all possible assignments; its PP-completeness follows from reductions combining probabilistic acceptance thresholds with nondeterministic path counting.

Known inclusions place BPP inside PP, since bounded-error algorithms already meet the majority threshold that defines PP, while PP itself is contained in PSPACE, as the acceptance probability can be computed exactly by enumerating all $2^{p(n)}$ computation paths in polynomial space. Moreover, the function class #P is contained in $\mathrm{FP}^{\mathrm{PP}}$, the class of functions computable in polynomial time with access to a PP oracle; this follows from a binary search over the possible values of a #P function $f(x)$, where each query asks whether the number of accepting paths meets a given threshold, using $O(\log 2^{p(n)}) = O(p(n))$ oracle calls. These relations show how probabilistic decision-making via majority thresholds captures much of the hardness of exact counting, while #P functions still require oracle access beyond pure probabilistic power for exact computation.

Threshold-based variants extend this connection to approximate counting, formalized within promise classes such as PromisePP. There, the input is promised to satisfy either $f(x) \leq (1-\epsilon)T$ or $f(x) \geq (1+\epsilon)T$ for some threshold $T$, fixed $\epsilon > 0$, and #P function $f$, and the task is to decide which case holds. Such approximate decisions align with probabilistic classes, since they can be resolved with a constant number of PP oracle queries by thresholding within the promised gap. Exact #P computation, however, is believed to be harder than these threshold decisions: for some #P-complete functions, approximating the value even within very large factors is as hard as computing it exactly, so exact counting is not expected to lie in PP or the polynomial hierarchy.

Oracle results further underscore the pivotal role of #P relative to probabilistic and decision hierarchies. Toda's theorem establishes that the entire polynomial hierarchy PH is contained in $\mathrm{P}^{\#\mathrm{P}}$, meaning every problem in PH can be solved in polynomial time given oracle access to a #P function; the oracle allows the alternating existential and universal quantifiers of PH to be simulated via counting arguments over nondeterministic paths. In contrast, probabilistic assumptions such as BPP = P are not known to collapse #P to FP, so the hardness of exact counting can persist even under derandomization, unlike a collapse such as P = PP, which would force #P = FP. Similarly, under the assumption RP ≠ NP, randomized polynomial time does not even capture NP, and in particular cannot provide the efficient sampling and approximate counting that a randomized algorithm for NP would yield for self-reducible problems; exact counting remains harder still.
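The binary-search argument for #P ⊆ FP^PP can be sketched as follows, with the PP-style threshold oracle simulated here by a plain comparison (the oracle, the hidden count, and the path bound are illustrative assumptions):

```python
def exact_count_via_threshold_oracle(threshold_oracle, max_count):
    """Recover an exact #P value using only threshold ("at least k?") queries.

    threshold_oracle(k) answers whether the true count is >= k; the loop makes
    O(log max_count) queries, i.e. polynomially many when max_count = 2**p(n).
    """
    lo, hi = 0, max_count            # invariant: lo <= true count <= hi
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if threshold_oracle(mid):    # is the count >= mid ?
            lo = mid
        else:
            hi = mid - 1
    return lo

# Simulate the oracle for a hidden count of 1234 out of at most 2**20 paths.
hidden_count = 1234
oracle = lambda k: hidden_count >= k
print(exact_count_via_threshold_oracle(oracle, 2 ** 20))  # 1234
```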

Counting Hierarchies and Variants

The counting hierarchy, often denoted CH, extends the polynomial hierarchy by iterating counting oracles, providing a framework for analyzing problems that require nested counting. It is defined recursively by $C_0\mathrm{P} = \mathrm{P}$ and $C_{k+1}\mathrm{P} = \mathrm{PP}^{C_k\mathrm{P}}$ for $k \geq 0$, where the superscript denotes Turing oracle access, with $\mathrm{CH} = \bigcup_k C_k\mathrm{P}$; thus $C_1\mathrm{P} = \mathrm{PP}$, and higher levels add further counting power (note that $\mathrm{P}^{\#\mathrm{P}} = \mathrm{P}^{\mathrm{PP}}$, so ♯P oracles and PP oracles are interchangeable in this context). The hierarchy contains the polynomial hierarchy, since by Toda's theorem $\mathrm{PH} \subseteq \mathrm{P}^{\#\mathrm{P}} \subseteq \mathrm{CH}$, and is itself contained in PSPACE.

Approximate counting variants of ♯P focus on efficiently estimating the number of solutions within a relative error, rather than computing it exactly, leading to classes defined by approximation schemes. A fully polynomial randomized approximation scheme (FPRAS) provides a $(1+\epsilon)$-approximation with high probability in time polynomial in $n$ and $1/\epsilon$. For instance, counting the satisfying assignments of a disjunctive normal form (DNF) formula admits an FPRAS via Monte Carlo sampling over the clauses' solution sets (the Karp–Luby estimator). In contrast, counting for 3-SAT (#3SAT) is hard to approximate: there is no FPRAS unless NP = RP, because approximate counting for a self-reducible NP-complete problem would let a randomized algorithm decide it. These results highlight a dichotomy in the approximability of ♯P-complete problems.

Other variants of ♯P address restrictions on computational resources or on function totality. SpanL consists of functions that count the distinct outputs of a nondeterministic logarithmic-space transducer, capturing counting with logarithmic-space nondeterminism, and is contained within TotP. TotP, a subclass of ♯P, consists of functions that count all computation paths of a polynomial-time nondeterministic machine (minus one); it is closed under parsimonious reductions and contains the self-reducible counting problems whose decision versions are solvable in polynomial time, such as #2SAT. Promise-♯P extends ♯P to partial functions by relaxing totality via promises on inputs (for example, guaranteeing that the solution count is positive or bounded away from zero), facilitating the study of approximate or gap versions in randomized settings.

For parallel computation models, $\mathrm{NC}^{\#\mathrm{P}}$ captures problems solvable in polylogarithmic parallel time with access to a ♯P oracle, allowing non-adaptive (parallel) queries, in contrast with the adaptive, sequential queries of Turing reductions in the standard ♯P hierarchies. This setting is used to study the parallelizability of counting-augmented computations, where multiple oracle calls can be made simultaneously, potentially reducing circuit depth while keeping size polynomial. Such variants underscore the trade-offs between parallelism and counting power.
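A compact sketch of the Karp–Luby Monte Carlo estimator for #DNF mentioned above (an FPRAS when the number of samples is chosen appropriately); the term encoding and the fixed sample size are illustrative choices, and the output is a randomized estimate rather than an exact count:

```python
import random

def satisfies(assignment, term):
    """Does the assignment (list of bools, 1-indexed variables) make every literal of the term true?"""
    return all(assignment[abs(lit) - 1] == (lit > 0) for lit in term)

def karp_luby_dnf_count(terms, num_vars, samples=20000):
    """Estimate the number of satisfying assignments of a DNF formula.

    terms: list of terms, each a list of non-zero ints (i means variable i true, -i false).
    Each term C_i alone is satisfied by 2**(num_vars - |C_i|) assignments; we sample a
    (term, assignment) pair with matching weights and count how often the sampled term is
    the *first* term satisfied, giving an unbiased estimate of the union's size.
    """
    weights = [2 ** (num_vars - len(t)) for t in terms]
    total_weight = sum(weights)
    hits = 0
    for _ in range(samples):
        i = random.choices(range(len(terms)), weights=weights)[0]
        assignment = [random.random() < 0.5 for _ in range(num_vars)]
        for lit in terms[i]:                       # force the chosen term to hold
            assignment[abs(lit) - 1] = lit > 0
        first = next(j for j, t in enumerate(terms) if satisfies(assignment, t))
        if first == i:
            hits += 1
    return total_weight * hits / samples

# (x1 AND x2) OR (NOT x1 AND x3) over three variables; the exact count is 4.
print(karp_luby_dnf_count([[1, 2], [-1, 3]], 3))
```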

Applications and Implications

In Algorithm Design

Exact algorithms for ♯P-complete problems often rely on dynamic programming techniques that exploit the structure of the input to achieve exponential but practical running times for moderate instance sizes. For instance, the number of Hamiltonian paths in a graph can be computed with a dynamic program over subsets of vertices, where the state tracks the set of visited vertices and the current endpoint, giving a running time of $O(2^n n^2)$. Similarly, the permanent of an $n \times n$ matrix, which counts the perfect matchings of a bipartite graph, can be computed exactly using Ryser's inclusion-exclusion formula in $O(n 2^n)$ time, by summing signed products of row sums over all subsets of columns.

Approximation techniques are crucial for scaling to larger instances where exact methods are infeasible. Markov chain Monte Carlo (MCMC) methods provide fully polynomial randomized approximation schemes (FPRAS) for problems such as counting proper graph colorings, by constructing a Markov chain on the proper colorings of a graph and using samples from its stationary distribution to estimate the total number within relative error $\epsilon$ in time polynomial in $n$ and $1/\epsilon$, provided the number of colors is sufficiently large relative to the maximum degree. For counting variants of the knapsack problem, such as the number of subsets whose total weight stays below a target value, deterministic FPTAS exist that run in time $O(n^3 \epsilon^{-1} \log(n/\epsilon))$ or better, combining dynamic programming with scaling to control the approximation error.

Heuristics and specialized exact solvers make specific ♯P problems such as #SAT, counting the satisfying assignments of a Boolean formula, practical on many instances. SharpSAT is a prominent solver that extends DPLL-based SAT techniques with component caching and implicit Boolean constraint propagation to compute exact model counts, often outperforming earlier methods on industrial benchmarks by orders of magnitude. For counting matchings, branch-and-bound methods systematically explore the space of edge selections, pruning branches using upper and lower bounds derived from partial matchings, which is particularly effective on sparse graphs.

In parameterized complexity, many ♯P problems admit fixed-parameter tractable (FPT) algorithms when parameterized by the solution size $k$, enabling efficient counting for small $k$. For example, the vertex covers of size at most $k$ in a graph can be counted in roughly $O(2^k \cdot \mathrm{poly}(n))$ time by bounded-depth branching on uncovered edges.
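A sketch of Ryser's inclusion-exclusion formula mentioned above, which replaces the sum over $n!$ permutations with a signed sum, over the $2^n$ column subsets, of products of restricted row sums; the example matrix matches the brute-force permanent sketch earlier, so the two outputs can be compared:

```python
from itertools import combinations

def ryser_permanent(a):
    """Ryser's inclusion-exclusion formula: O(n * 2^n) arithmetic operations instead of O(n * n!)."""
    n = len(a)
    total = 0
    for size in range(n + 1):
        for cols in combinations(range(n), size):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)   # row sum restricted to the chosen columns
            total += (-1) ** size * prod
    return (-1) ** n * total

A = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
]
print(ryser_permanent(A))  # 2, matching the brute-force permanent sketch above
```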

In Cryptography and Quantum Computing

The #P-completeness of counting the lattice vectors of a given norm highlights the hardness of exact counting in lattices, which remains challenging even on quantum computers, as no known quantum algorithm achieves a significant speedup for this #P-complete task. This relates to the security of lattice-based cryptography, though the primary hardness assumptions there involve problems such as the Short Integer Solution (SIS) problem. In zero-knowledge proofs, #P-hardness assumptions enable constructions for verifying counting problems without revealing the underlying data, extending classical zero-knowledge protocols to #P languages such as those based on approximating permanents. For instance, perfect zero-knowledge probabilistically checkable proofs (PZK-PCPs) have been developed for all #P languages, leveraging #P-completeness (for example, of #SAT) for soundness while ensuring computational hiding. These are applied in cryptographic protocols requiring verifiable computation of counts, such as zero-knowledge succinct arguments for counting-based statements in blockchain applications. Recent advances further base quantum-secure zero-knowledge on #P-hardness, using problems such as approximating permanents of complex Gaussian matrices, which are #P-hard and resist known quantum speedups. In quantum settings, #P-hard problems have been used to construct one-way functions and pseudorandom generators via worst-to-average-case reductions, suggesting a route to quantum-resistant security.

Historical Development

Origins and Introduction

The class ♯P was introduced by Leslie Valiant in his 1979 paper, where he defined it as the set of functions that count the number of accepting paths of nondeterministic Turing machines running in polynomial time, thereby extending the decision-oriented class NP to counting problems. This formalization provided a framework for analyzing the computational hardness of enumeration tasks, paralleling the role of NP for decision problems. Valiant's motivation stemmed from challenges in combinatorics, particularly the need to capture the inherent difficulty of counting solutions to problems whose decision versions are in NP, with the computation of the permanent of a matrix serving as a prime exemplar due to its central role in combinatorial enumeration. The permanent, unlike the determinant, lacks efficient algebraic simplifications and arises naturally in counting perfect matchings and other structures, highlighting a gap in complexity models that focused primarily on yes/no questions. This development built on the foundational work of Stephen Cook and Leonid Levin in the early 1970s, who established NP-completeness for decision problems, prompting a shift toward function classes that address counting. Valiant demonstrated that the permanent function, even for (0,1)-matrices, belongs to ♯P and is complete for the class under polynomial-time Turing reductions, thus underscoring its prototypical hardness.

Key Milestones and Results

In the 1980s, significant progress was made in understanding the computational resources required for #P functions. A foundational observation established that #P is contained in FPSPACE, the class of functions computable in polynomial space, by enumerating the nondeterministic paths one at a time while maintaining a counter. This inclusion, noted soon after the definition of #P, highlighted the space-bounded nature of counting computations. Additionally, the Valiant–Vazirani theorem provided a randomized polynomial-time reduction from SAT to Unique-SAT, showing that distinguishing instances with zero satisfying assignments from those with exactly one is NP-hard under randomized reductions, which connects to counting via self-reducibility techniques.

The 1990s brought breakthroughs linking #P to broader complexity hierarchies. Toda's theorem proved that the polynomial hierarchy is contained in BP⋅⊕P, which in turn is contained in P^{#P}, establishing that a #P oracle suffices to simulate the entire polynomial hierarchy and underscoring the power of counting relative to decision problems. Building on earlier work on approximate counting and rapidly mixing Markov chains, Jerrum and Sinclair developed randomized approximation schemes for specific #P-complete functions in the late 1980s and early 1990s, culminating in a fully polynomial randomized approximation scheme (FPRAS) for the permanent of nonnegative matrices by Jerrum, Sinclair, and Vigoda in the early 2000s, providing an efficient estimator within a (1 ± ε) factor in polynomial time.

In the 2000s and later, advances focused on approximation hardness and connections to other paradigms. Researchers established strong inapproximability results for various counting and optimization problems, with NP-hardness of approximation within factors such as 2^{log^{1-ε} n} obtained via gap-producing reductions from PCP theorems, influencing inapproximability bounds for #P problems. Quantum implications emerged through oracle separations, such as oracles relative to which BQP is not contained in the polynomial hierarchy, together with results suggesting that efficient quantum computation of #P-complete functions would collapse the polynomial hierarchy, further motivating studies of the relationships among quantum, counting, and decision classes. Ongoing open problems include derandomizing approximate counting for #P-complete problems such as #SAT, which remains unresolved and is tied to proving superpolynomial circuit lower bounds, since progress would imply breakthroughs in derandomization and circuit complexity.