PSPACE-complete

from Wikipedia

In computational complexity theory, a decision problem is PSPACE-complete if it can be solved using an amount of memory that is polynomial in the input length (polynomial space) and if every other problem that can be solved in polynomial space can be transformed to it in polynomial time. The problems that are PSPACE-complete can be thought of as the hardest problems in PSPACE, the class of decision problems solvable in polynomial space, because a solution to any one such problem could easily be used to solve any other problem in PSPACE.

Problems known to be PSPACE-complete include determining properties of regular expressions and context-sensitive grammars, determining the truth of quantified Boolean formulas, step-by-step changes between solutions of combinatorial optimization problems, and many puzzles and games.

Theory

A problem is defined to be PSPACE-complete if it can be solved using a polynomial amount of memory (it belongs to PSPACE) and every problem in PSPACE can be transformed in polynomial time into an equivalent instance of the given problem.[1]

The PSPACE-complete problems are widely suspected to be outside the more famous complexity classes P (polynomial time) and NP (non-deterministic polynomial time), but that is not known.[2] It is known that they lie outside of the class NC, a class of problems with highly efficient parallel algorithms, because problems in NC can be solved in an amount of space polynomial in the logarithm of the input size, and the class of problems solvable in such a small amount of space is strictly contained in PSPACE by the space hierarchy theorem.

The transformations that are usually considered in defining PSPACE-completeness are polynomial-time many-one reductions, transformations that take a single instance of a problem of one type into an equivalent single instance of a problem of a different type. However, it is also possible to define completeness using Turing reductions, in which one problem can be solved in a polynomial number of calls to a subroutine for the other problem. It is not known whether these two types of reductions lead to different classes of PSPACE-complete problems.[3] Other types of reductions, such as many-one reductions that always increase the length of the transformed input, have also been considered.[4]

A version of the Berman–Hartmanis conjecture for PSPACE-complete sets states that all such sets look alike, in the sense that they can all be transformed into each other by polynomial-time bijections.[5]

Examples

Formal languages

Given a regular expression, determining whether it generates every string over its alphabet is PSPACE-complete.[6]
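To make the problem concrete, the following sketch tests a regular expression against every string over its alphabet up to a fixed length. It is only a bounded illustration, not a decision procedure for universality, and the function name and encoding are ad hoc choices rather than anything from the cited literature.

```python
import itertools
import re

def universal_up_to(pattern: str, alphabet: str, max_len: int) -> bool:
    """Check whether `pattern` matches every string over `alphabet` of length
    at most `max_len`.  A bounded check like this cannot decide universality
    in general; the real problem is PSPACE-complete."""
    compiled = re.compile(pattern)
    for length in range(max_len + 1):
        for chars in itertools.product(alphabet, repeat=length):
            if not compiled.fullmatch("".join(chars)):
                return False  # found a string the expression does not generate
    return True

# (0|1)* generates every binary string; (0|1)*1 misses the empty string and
# every string ending in 0.
print(universal_up_to("(0|1)*", "01", 6))   # True
print(universal_up_to("(0|1)*1", "01", 6))  # False
```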

The first known PSPACE-complete problem was the word problem for deterministic context-sensitive grammars. In the word problem for context-sensitive grammars, one is given a set of grammatical transformations which can increase, but cannot decrease, the length of a sentence, and wishes to determine if a given sentence could be produced by these transformations. The technical condition of "determinism" (implying roughly that each transformation makes it obvious that it was used) ensures that this process can be solved in polynomial space, and Kuroda (1964) showed that every (possibly non-deterministic) program computable in linear space could be converted into the parsing of a context-sensitive grammar, in a way which preserves determinism.[7] In 1970, Savitch's theorem showed that PSPACE is closed under nondeterminism, implying that even non-deterministic context-sensitive grammars are in PSPACE.[1]
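A rough way to see why non-contraction matters: because the productions never shrink a sentential form, any derivation of a sentence w only ever passes through strings of length at most |w|. The sketch below searches that bounded space directly with a brute-force frontier search (the helper names and the toy grammar are illustrative assumptions); the nondeterministic linear-space procedure behind the PSPACE bound would instead keep just one sentential form at a time.

```python
from collections import deque

def derivable(start, productions, target):
    """Brute-force word problem for a non-contracting (context-sensitive)
    grammar.  Productions are (lhs, rhs) pairs with len(rhs) >= len(lhs), so
    every sentential form on a derivation of `target` has length at most
    len(target).  This search stores the whole frontier and so may use
    exponential space; it only illustrates the bounded search space."""
    n = len(target)
    seen = {start}
    queue = deque([start])
    while queue:
        form = queue.popleft()
        if form == target:
            return True
        for lhs, rhs in productions:
            pos = form.find(lhs)
            while pos != -1:
                new_form = form[:pos] + rhs + form[pos + len(lhs):]
                if len(new_form) <= n and new_form not in seen:
                    seen.add(new_form)
                    queue.append(new_form)
                pos = form.find(lhs, pos + 1)
    return False

# Toy non-contracting grammar for { a^k b^k c^k : k >= 1 }, a standard
# context-sensitive example, with uppercase nonterminals.
prods = [("S", "abc"), ("S", "aSBc"), ("cB", "Bc"), ("bB", "bb")]
print(derivable("S", prods, "aabbcc"))  # True
print(derivable("S", prods, "aabbc"))   # False
```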

Logic

A standard PSPACE-complete problem, used in many other PSPACE-completeness results, is the quantified Boolean formula problem, a generalization of the Boolean satisfiability problem. The quantified Boolean formula problem takes as input a Boolean expression with all of its variables quantified either universally or existentially, for example ∀x ∃y ((x ∨ y) ∧ (¬x ∨ ¬y)). The output of the problem is the value of the quantified expression. Finding this value is PSPACE-complete.[1]
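Membership of this problem in PSPACE follows from evaluating the formula by recursion on the quantifier prefix, keeping only the current partial assignment. The sketch below illustrates that recursion for formulas in conjunctive normal form; the encoding and function name are ad hoc choices for this example, not a standard interface.

```python
def eval_qbf(quantifiers, clauses, assignment=None):
    """Evaluate a fully quantified CNF Boolean formula by recursion over the
    quantifier prefix.  `quantifiers` is a list of ('forall'|'exists', var)
    pairs; `clauses` is a list of lists of signed variable names ('x', '-x').
    The recursion depth equals the number of variables and only one partial
    assignment is stored, so the space used is polynomial even though the
    running time may be exponential."""
    if assignment is None:
        assignment = {}
    if not quantifiers:
        # A literal 'x' is true when x is True; '-x' is true when x is False.
        return all(
            any(assignment[lit.lstrip('-')] != lit.startswith('-') for lit in clause)
            for clause in clauses
        )
    (kind, var), rest = quantifiers[0], quantifiers[1:]
    results = (eval_qbf(rest, clauses, {**assignment, var: val}) for val in (True, False))
    return any(results) if kind == 'exists' else all(results)

# forall x exists y : (x or y) and (not x or not y)  -- true (take y = not x)
print(eval_qbf([('forall', 'x'), ('exists', 'y')],
               [['x', 'y'], ['-x', '-y']]))  # True
```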

Reconfiguration

Reconfiguration problems concern the connectivity of a state space of solutions to a combinatorial problem. For instance, testing whether two 4-colorings of a graph can be connected to each other by moves that change the color of one vertex at a time, maintaining at each step a valid 4-coloring, is PSPACE-complete,[8] even though the same problem for 3-colorings can be solved in polynomial time.[9] Another family of reconfiguration problems, used similarly to quantified Boolean formulas as the basis for PSPACE-completeness proofs of many other problems in this area, involves nondeterministic constraint logic, in which the states are orientations of a constraint graph subject to certain constraints on how many edges must be oriented inwards at each vertex, and in which the moves from state to state reverse the orientation of a single edge.[10]
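As a concrete (if exponential) picture of such a state space, the following sketch connects two proper k-colorings by single-vertex recolorings using breadth-first search. It is not the technique used in the completeness results, just an illustration of the allowed moves, and the function names are ad hoc.

```python
from collections import deque

def proper(coloring, edges):
    """A coloring (tuple of colors indexed by vertex) is proper when no edge
    has both endpoints the same color."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def reachable(start, goal, edges, k):
    """Breadth-first search over the space of proper k-colorings, where one
    move recolors a single vertex and every intermediate coloring must stay
    proper.  This brute force uses exponential time and space; it only makes
    the reconfiguration state space concrete."""
    if not (proper(start, edges) and proper(goal, edges)):
        return False
    seen = {start}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        if current == goal:
            return True
        for v in range(len(current)):
            for c in range(k):
                nxt = current[:v] + (c,) + current[v + 1:]
                if nxt not in seen and proper(nxt, edges):
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# A triangle: with 3 colors every proper coloring is "frozen" (no single
# vertex can change color), so distinct colorings are never connected,
# while with 4 colors any two proper colorings can be reached.
triangle = [(0, 1), (1, 2), (0, 2)]
print(reachable((0, 1, 2), (1, 0, 2), triangle, 3))  # False
print(reachable((0, 1, 2), (1, 0, 2), triangle, 4))  # True
```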

Puzzles and games

The quantified Boolean formula problem can be interpreted as a game by two players, a verifier and a falsifier. The players make moves that fill in values for the quantified variables, in the order they are nested, with the verifier filling in existentially quantified variables and the falsifier filling in universally quantified variables; the game is won by the verifier if the filled-in formula becomes true, and by the falsifier otherwise. A quantified formula is true if and only if the verifier has a winning strategy. Similarly, the problem of determining the winner or loser of many other combinatorial games turns out to be PSPACE-complete. Examples of games that are PSPACE-complete (when generalized so that they can be played on an n × n board) are the games Hex and Reversi. Some other generalized games, such as chess, checkers (draughts), and Go, are EXPTIME-complete because a game between two perfect players can be very long, so they are unlikely to be in PSPACE. But they become PSPACE-complete if a polynomial bound on the number of moves is enforced.[11]
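The generic upper-bound argument for polynomially bounded games is a depth-first search that keeps only the current line of play on the recursion stack. The sketch below shows the pattern on a toy subtraction game rather than any of the games named above; the game and function names are illustrative assumptions.

```python
def player_to_move_wins(position, moves):
    """Backward-induction game solver under the normal play convention: a
    player with no legal move loses.  Only the current line of play sits on
    the recursion stack, so when positions have polynomial size and every
    play lasts polynomially many moves, the search runs in polynomial space
    (though typically exponential time) -- the pattern behind many PSPACE
    upper bounds for games."""
    return any(not player_to_move_wins(nxt, moves) for nxt in moves(position))

# Toy subtraction game: remove 1 or 2 tokens from a pile; the player unable
# to move loses.  The first player wins exactly when n is not divisible by 3.
def subtraction_moves(n):
    return [n - take for take in (1, 2) if n - take >= 0]

for n in range(7):
    print(n, player_to_move_wins(n, subtraction_moves))
```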

It is also possible for puzzles played by a single player to be PSPACE-complete. These often can be interpreted as reconfiguration problems,[10] and include the solitaire games Rush Hour, Mahjong, Atomix and Sokoban, and the mechanical computer Turing Tumble.[11]

PSPACE-completeness is based on complexity as a function of the input size n, in the limit as n grows without bound. Puzzles or games with a bounded number of positions, such as chess on a conventional board, cannot be PSPACE-complete, because they could be solved in constant time and space using a very large lookup table. To formulate PSPACE-complete versions of these games, they must be modified in a way that makes their number of positions unbounded, such as by playing them on an n × n board instead. In some cases, such as for chess, these extensions are artificial.

References

from Grokipedia
In computational complexity theory, a decision problem is PSPACE-complete if it belongs to the complexity class PSPACE and is PSPACE-hard, meaning every problem in PSPACE can be efficiently reduced to it via a polynomial-time many-one reduction. PSPACE consists of all decision problems solvable by a deterministic Turing machine using only a polynomial amount of space (i.e., $O(n^c)$ space for some constant $c > 0$, where $n$ is the input length) and an unlimited amount of time. The notion of PSPACE-completeness was formalized through the identification of complete problems, with the canonical example being the true quantified Boolean formula problem (TQBF), which asks whether a fully quantified Boolean formula (universal and existential quantifiers over Boolean variables followed by a quantifier-free formula) is true. Albert R. Meyer and Larry J. Stockmeyer proved in 1973 that TQBF is PSPACE-complete, establishing it as a benchmark for the class by showing both membership in PSPACE (via recursive evaluation using polynomial space) and PSPACE-hardness (via reductions from other PSPACE problems). TQBF generalizes the NP-complete Boolean satisfiability problem (SAT) by incorporating quantifiers, reflecting the added expressive power of alternating existential and universal choices in nondeterministic computation.

PSPACE-complete problems arise in various domains, notably in the analysis of two-player games with perfect information, where determining whether the first player has a winning strategy on an $n \times n$ board is often PSPACE-complete; examples include Hex and generalized Reversi. By Savitch's theorem, the nondeterministic polynomial-space class NPSPACE equals PSPACE, implying that PSPACE-complete problems capture the full computational power of nondeterministic polynomial-space machines, which can be simulated deterministically using only quadratically more space. This equivalence underscores the robustness of PSPACE and highlights open questions, such as whether every problem in PSPACE can be solved in polynomial time (i.e., whether P = PSPACE), a question central to the hierarchy of complexity classes.

Background

Space Complexity Basics

Space complexity measures the minimum amount of memory, or working tape space, required by a Turing machine to decide a language, as a function of the input length $n$. Formally, for a deterministic Turing machine $M$, the space complexity $S_M(n)$ is the maximum number of tape cells visited by $M$ on any input of length $n$, assuming $M$ halts on all inputs. This resource captures the memory needs beyond the read-only input tape, distinguishing it from time complexity, which counts computation steps.

A key distinction exists between deterministic and nondeterministic space complexity. In the deterministic case, the machine follows a unique computation path, and the space bound is the maximum used over all inputs of length $n$. For nondeterministic Turing machines, space complexity is defined similarly but takes the maximum space used over all computation branches on inputs of length $n$; an input is accepted if there exists at least one accepting branch within the space bound. Savitch's theorem establishes that nondeterministic space classes are no more powerful than their deterministic counterparts up to a quadratic factor: if a language is in $\mathsf{NSPACE}(s(n))$, then it is in $\mathsf{DSPACE}(s(n)^2)$, for $s(n) \geq \log n$.

Simple examples illustrate space classes. The class $\mathsf{L}$, deterministic logarithmic space, consists of languages decidable by deterministic Turing machines using $O(\log n)$ space; this suffices for problems such as undirected graph connectivity. Linear space allows the machine to use space proportional to the input size; its nondeterministic version, $\mathsf{NSPACE}(O(n))$, characterizes exactly the context-sensitive languages, recognized by linear bounded automata.

To analyze space-bounded computations, the configuration graph model is used: a configuration comprises the machine's state, head positions, and tape contents, forming a graph whose nodes are configurations (at most exponentially many in the space bound) and whose edges are single computation steps; reachability in this graph determines acceptance. Space often proves more restrictive than time because bounded memory limits the number of distinct configurations, roughly $n \cdot |Q| \cdot |\Gamma|^{S(n)}$ for state set $Q$ and tape alphabet $\Gamma$, capping the length of a halting computation at exponential in $S(n)$. Nontrivial computations require $S(n) \geq \log n$, since logarithmic space is needed just to record a position of the input head among $n$ cells; below that bound, constant-space machines recognize only the regular languages, essentially scanning the input once without indexing.
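Spelled out, the counting argument runs roughly as follows (a sketch; the extra factor $S(n)$ accounts for the work-head position, which the rough count above folds into the constant):

```latex
% Configurations of a machine with state set Q, work alphabet \Gamma, input
% length n, and space bound S(n) >= \log n are determined by the state, the
% input-head position, the work-head position, and the work-tape contents:
\[
  N(n) \;\le\; |Q| \cdot n \cdot S(n) \cdot |\Gamma|^{S(n)} \;=\; 2^{O(S(n))}.
\]
% A halting computation never repeats a configuration, so it makes at most
% N(n) steps: space S(n) implies time 2^{O(S(n))}, and in particular every
% polynomial-space machine halts within exponential time.
```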

The PSPACE Class

The PSPACE complexity class consists of all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of space, specifically $O(n^k)$ space for some constant $k \geq 1$, where $n$ is the input size. Formally, $\mathsf{PSPACE} = \bigcup_{k \geq 1} \mathsf{DSPACE}(n^k)$, where $\mathsf{DSPACE}(f(n))$ denotes the class of languages recognized by deterministic Turing machines using at most $f(n)$ space. This class captures problems where the working memory is bounded polynomially, allowing potentially exponential running time since the machine can pass through exponentially many configurations on its limited tape.

An analogous class, $\mathsf{NPSPACE}$, is defined for nondeterministic Turing machines using polynomial space: $\mathsf{NPSPACE} = \bigcup_{k \geq 1} \mathsf{NSPACE}(n^k)$. Savitch's theorem establishes that $\mathsf{NSPACE}(n^k) \subseteq \mathsf{DSPACE}(n^{2k})$ for every $k$, implying $\mathsf{PSPACE} = \mathsf{NPSPACE}$. The theorem's high-level intuition relies on simulating nondeterminism by checking reachability in the machine's configuration graph, whose nodes represent possible states, head positions, and tape contents; for polynomial space, this graph has at most exponentially many nodes, and a deterministic recursive procedure can verify the existence of paths between configurations using space only quadratically larger than the original space bound. Another equivalent characterization is via alternating Turing machines: $\mathsf{PSPACE}$ equals the class $\mathsf{AP}$ of problems solvable by an alternating Turing machine in polynomial time, where alternation allows existential and universal quantification over moves.

$\mathsf{PSPACE}$ also contains problems that are not known to be complete for the class, such as the decision version of integer factorization: deciding whether a given integer has a nontrivial factor below a given bound lies in $\mathsf{NP} \cap \mathsf{coNP}$, and hence in $\mathsf{PSPACE}$, but is not known to be PSPACE-complete.
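The midpoint recursion at the heart of Savitch's theorem can be sketched as follows. Here the configuration graph is passed in explicitly, with hypothetical names chosen for this example, whereas an actual space-bounded simulation would generate configurations on the fly from the machine's transition function.

```python
def reach(u, v, steps, nodes, successors):
    """Savitch-style reachability in a configuration graph: is v reachable
    from u in at most `steps` edges?  The recursion guesses a midpoint and
    solves both halves; the depth is O(log steps) and each level stores only
    one midpoint, which is the source of the quadratic space bound in
    Savitch's theorem (time, by contrast, may be exponential)."""
    if steps == 0:
        return u == v
    if steps == 1:
        return u == v or v in successors(u)
    half = steps // 2
    return any(
        reach(u, mid, half, nodes, successors)
        and reach(mid, v, steps - half, nodes, successors)
        for mid in nodes
    )

# Tiny explicit "configuration graph": 0 -> 1 -> 2 -> 3, plus 1 -> 3.
graph = {0: [1], 1: [2, 3], 2: [3], 3: []}
nodes = list(graph)
print(reach(0, 3, len(nodes), nodes, lambda c: graph[c]))  # True
print(reach(3, 0, len(nodes), nodes, lambda c: graph[c]))  # False
```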