Decision problem
from Wikipedia
A decision problem has only two possible outputs (YES or NO) on any input.

In computability theory and computational complexity theory, a decision problem is a computational problem that can be posed as a yes–no question on a set of input values. An example of a decision problem is deciding whether a given natural number is prime. Another example is the problem, "given two numbers x and y, does x evenly divide y?"

A decision procedure for a decision problem is an algorithmic method that answers the yes-no question on all inputs, and a decision problem is called decidable if there is a decision procedure for it. For example, the decision problem "given two numbers x and y, does x evenly divide y?" is decidable since there is a decision procedure called long division that gives the steps for determining whether x evenly divides y and the correct answer, YES or NO, accordingly. Some of the most important problems in mathematics are undecidable, e.g. the halting problem.
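As an illustration (not from the article), the divisibility decision procedure can be sketched in Python; the helper name `divides` is invented for this sketch:

```python
def divides(x: int, y: int) -> bool:
    """Decision procedure: does x evenly divide y?

    Returns the YES/NO answer as True/False. Every input pair gets
    a definite answer in finite time, so the problem is decidable.
    """
    if x == 0:
        return y == 0  # only 0 is divisible by 0
    return y % x == 0
```

Long division would also work; any effective procedure that always halts with the correct answer witnesses decidability.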

The field of computational complexity theory categorizes decidable decision problems by how difficult they are to solve. "Difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a certain problem. On the other hand, the field of recursion theory categorizes undecidable decision problems by Turing degree, which is a measure of the noncomputability inherent in any solution.

Definition

[edit]

A decision problem is the formal language of all inputs for which the output (the answer to the yes-no question on a given input) is YES.[notes 1]

  • These inputs can be natural numbers, but can also be values of some other kind, like binary strings or strings over some other alphabet.
  • For example, if every input can be encoded as a string over an alphabet Σ, then a decision problem is a subset of Σ*, the set of all finite strings over Σ.[notes 1]
  • For another example, using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers. Therefore, the decision procedure of a decision problem is to compute the characteristic function of a subset of the natural numbers.
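As a minimal sketch of this last point (the set of even naturals is used here purely as an example subset):

```python
def characteristic(n: int) -> int:
    """Characteristic function of an example subset of the naturals
    (here, the even numbers): 1 if n belongs to the set, 0 otherwise.

    A decision procedure for the corresponding decision problem is
    exactly an algorithm computing this function.
    """
    return 1 if n % 2 == 0 else 0
```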

Examples

A classic example of a decidable decision problem is the set of prime numbers. It is possible to effectively decide whether a given natural number is prime by testing every possible nontrivial factor. Although much more efficient procedures of primality testing are known, the existence of any effective procedure is enough to establish decidability.
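The trial-division procedure described above can be written directly; this is a sketch of the naive method, not an efficient primality test:

```python
def is_prime(n: int) -> bool:
    """Decide primality by testing every possible nontrivial factor.

    Slow, but it always halts with the correct YES/NO answer,
    which is all that decidability requires.
    """
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True
```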

Decidability

  • A decision problem is decidable or effectively solvable if the set of inputs for which the answer is YES is a recursive set.[notes 2]
  • A decision problem is partially decidable, semidecidable, solvable, or provable if the set of inputs for which the answer is YES is a recursively enumerable set.

Problems that are not decidable are undecidable, which means it is not possible to create an algorithm (efficient or not) that solves them. The halting problem is an important undecidable decision problem; for more examples, see list of undecidable problems.

Complete problems

Decision problems can be ordered according to many-one reducibility and related to feasible reductions such as polynomial-time reductions. A decision problem P is said to be complete for a set of decision problems S if P is a member of S and every problem in S can be reduced to P. Complete decision problems are used in computational complexity theory to characterize complexity classes of decision problems. For example, the Boolean satisfiability problem is complete for the class NP of decision problems under polynomial-time reducibility.

Function problems

Decision problems are closely related to function problems, which can have answers that are more complex than a simple YES or NO. For example, the function problem corresponding to the decision problem "given two numbers x and y, does x evenly divide y?" is "given two numbers x and y, what is x divided by y?".

A function problem consists of a partial function f; the informal "problem" is to compute the values of f on the inputs for which it is defined.

Every function problem can be turned into a decision problem; the decision problem is just the graph of the associated function. (The graph of a function f is the set of pairs (x,y) such that f(x) = y.) If this decision problem were effectively solvable then the function problem would be as well. This reduction does not respect computational complexity, however. For example, it is possible for the graph of a function to be decidable in polynomial time (in which case running time is computed as a function of the pair (x,y)) when the function is not computable in polynomial time (in which case running time is computed as a function of x alone). The function f(x) = 2x has this property.
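For the example f(x) = 2^x, membership in the graph can be sketched as follows; note that the check is polynomial in the size of the pair (x, y), while computing f from x alone produces an output whose length is exponential in the length of x:

```python
def in_graph(x: int, y: int) -> bool:
    """Decide membership of (x, y) in the graph of f(x) = 2**x.

    The candidate y is part of the input, so verifying y == 2**x is
    cheap relative to the input size; producing 2**x from x alone is not,
    since the output has about x bits. Assumes x is a natural number.
    """
    return x >= 0 and y == 1 << x  # 1 << x computes 2**x
```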

Every decision problem can be converted into the function problem of computing the characteristic function of the set associated to the decision problem. If this function is computable then the associated decision problem is decidable. However, this reduction is more liberal than the standard reduction used in computational complexity (sometimes called polynomial-time many-one reduction); for example, the complexity of the characteristic functions of an NP-complete problem and its co-NP-complete complement is exactly the same even though the underlying decision problems may not be considered equivalent in some typical models of computation.

Optimization problems

Unlike decision problems, for which there is only one correct answer for each input, optimization problems are concerned with finding the best answer to a particular input. Optimization problems arise naturally in many applications, such as the traveling salesman problem and many questions in linear programming.

Function and optimization problems are often transformed into decision problems by considering the question of whether the output is equal to or less than or equal to a given value. This allows the complexity of the corresponding decision problem to be studied; and in many cases the original function or optimization problem can be solved by solving its corresponding decision problem. For example, in the traveling salesman problem, the optimization problem is to produce a tour with minimal weight. The associated decision problem is: for each N, to decide whether the graph has any tour with weight less than N. By repeatedly answering the decision problem, it is possible to find the minimal weight of a tour.
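The "repeatedly answering the decision problem" step is a binary search over thresholds. A sketch follows, with a toy oracle standing in for a real TSP decider; for simplicity the threshold question here is "weight at most N" (a slight variation on the "less than N" phrasing above), and all names are invented for this sketch:

```python
def minimal_value(decide, lo: int, hi: int) -> int:
    """Binary-search the smallest N in [lo, hi] for which decide(N) is True,
    where decide(N) answers 'is there a tour of weight at most N?'.

    Assumes decide is monotone: once True, it stays True for larger N.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if decide(mid):
            hi = mid      # a tour of weight <= mid exists; try smaller
        else:
            lo = mid + 1  # no such tour; the minimum is larger
    return lo

# Toy oracle: pretend the (unknown) minimal tour weight is 42.
assert minimal_value(lambda n: n >= 42, 0, 1000) == 42
```

With an actual decision oracle, this recovers the optimal weight using only logarithmically many YES/NO queries.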

Because the theory of decision problems is very well developed, research in complexity theory has typically focused on decision problems. Optimization problems themselves are still of interest in computability theory, as well as in fields such as operations research.

from Grokipedia
In computability theory, a decision problem is a computational question that requires a yes-or-no answer, typically phrased as whether a given input satisfies a specified property or belongs to a particular set, and is formally analyzed to determine whether an algorithm can solve it for all possible inputs. These problems form the foundation of studying what can and cannot be computed algorithmically, distinguishing between decidable problems (solvable by a Turing machine that always halts with the correct answer) and undecidable ones (for which no such algorithm exists). The origins of decision problems trace back to David Hilbert's 1928 lecture, where he posed the Entscheidungsproblem (decision problem) as a challenge to find a mechanical procedure for verifying the provability of any statement in first-order predicate logic, as part of his broader program to formalize mathematics. In 1936, Alan Turing resolved this negatively by introducing the Turing machine (a formal model of computation consisting of an infinite tape, a reading/writing head, and a finite set of states) and using a diagonalization argument to prove that no general decision procedure exists for the Entscheidungsproblem, as it would imply solving the undecidable halting problem. Independently, Alonzo Church developed the lambda calculus and showed the same result, establishing the Church-Turing thesis that equates effective calculability with Turing-machine solvability. Key examples of decision problems include the halting problem, which asks whether a Turing machine will halt (terminate) on a given input, proven undecidable by Turing through a self-referential contradiction; this demonstrates inherent limits on algorithmic prediction of program behavior. Other notable undecidable problems are Hilbert's tenth problem (determining the solvability of Diophantine equations, resolved negatively in 1970 by Yuri Matiyasevich building on earlier work) and the word problem for groups (whether two words in a finitely presented group represent the same element). Decision problems also underpin computational complexity theory, where solvable ones are classified by resources like time (e.g., P vs. NP) and space, influencing fields across computer science and mathematics.

Fundamentals

Definition

In computability theory and computational complexity theory, a decision problem is a question posed within a formal system, where the input consists of a finite string over a finite alphabet, and the expected output is a binary answer of "yes" or "no" indicating whether the input satisfies a specified property. This formulation ensures that decision problems are precisely defined, allowing for the study of algorithmic solvability without ambiguity in the response format. Formally, a decision problem D is represented as a subset of Σ*, the set of all finite strings over a finite alphabet Σ, where membership in D corresponds to the "yes" instances of the problem, and non-membership to the "no" instances. This set-theoretic view aligns decision problems with the recognition of formal languages, facilitating analysis via models like Turing machines, which accept or reject inputs to decide membership. Unlike general computational problems, which may produce multi-valued outputs, continuous results, or require constructing solutions, decision problems are restricted to binary outcomes, enabling focused investigations into decidability and resource bounds. The concept was introduced in the 1930s by Alonzo Church and Alan Turing as part of their work on the Entscheidungsproblem, Hilbert's challenge to determine the provability of mathematical statements algorithmically. Understanding decision problems presupposes familiarity with formal languages, where inputs are encoded as strings, and with basic computability notions, such as algorithmic procedures for verification.

Examples

One prominent example of a decision problem is the halting problem, which asks whether a given Turing machine will halt (i.e., terminate) on a specified input. In this formulation, the input consists of a description of the machine and its initial tape contents, and the expected output is yes if the machine eventually stops and no otherwise. Another classic instance arises in number theory: primality testing, which determines whether a given positive integer n > 1 is a prime number. Here, the input is the integer n, and the answer is yes if n has no positive divisors other than 1 and itself, and no otherwise; while decidable, this problem posed significant computational challenges historically due to the need for efficient algorithms. In mathematical logic, the validity problem for first-order logic asks whether a given formula is true in every possible model (every structure interpreting the formula's symbols). The input is the formula, and the output is yes if it is valid (true in all interpretations) and no if there exists a counter-model. This formulation, known as the Entscheidungsproblem, exemplifies decision problems in formal systems. From graph theory, the Hamiltonian cycle problem inquires whether an undirected graph contains a cycle that visits each vertex exactly once and returns to the starting vertex. The input is the graph's vertex and edge sets, with yes if such a cycle exists and no otherwise. These examples, drawn from computability, arithmetic, logic, and graph theory, demonstrate the breadth of decision problems as yes/no questions about specific instances.
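The Hamiltonian cycle question can be decided (slowly) by brute force over vertex orderings; a sketch, with names invented for this example:

```python
from itertools import permutations

def has_hamiltonian_cycle(vertices, edges) -> bool:
    """Brute-force decision procedure: does the undirected graph have a
    cycle visiting every vertex exactly once and returning to the start?

    vertices: iterable of vertex labels; edges: iterable of (u, v) pairs.
    """
    edge_set = {frozenset(e) for e in edges}
    vs = list(vertices)
    if len(vs) < 3:  # no simple cycle exists on fewer than 3 vertices
        return False
    start = vs[0]
    for perm in permutations(vs[1:]):
        cycle = [start, *perm, start]
        # check that every consecutive pair in the cycle is an edge
        if all(frozenset(p) in edge_set for p in zip(cycle, cycle[1:])):
            return True
    return False
```

The exhaustive search takes factorial time, but it always halts, so the problem is decidable; whether it admits a polynomial-time decider is the P vs. NP question.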

Decidability

Decidable Problems

A decision problem is decidable if there exists a Turing machine that, given any input, halts in finite time and outputs the correct yes or no answer for every possible instance of the problem. This means the algorithm terminates with acceptance for yes-instances and rejection for no-instances, without looping indefinitely. The Church-Turing thesis provides the foundational justification for using Turing machines as the standard model of computation in this context; it informally asserts that any function that can be effectively computed by a human following an algorithm can be computed by a Turing machine. Under this thesis, decidability captures all problems solvable by mechanical procedures. To prove that a decision problem is decidable, one common method is direct construction of an algorithm, such as the AKS primality test, which determines whether a given integer greater than 1 is prime in deterministic polynomial time. Another approach is reduction: if a problem can be transformed in polynomial time into another known decidable problem, then it is also decidable, preserving the halting guarantee. A classic example is the acceptance problem for finite automata: given a deterministic finite automaton and a string, it is decidable whether the automaton accepts the string, since the finite state space allows simulation in linear time relative to the input length, and regular languages are precisely those recognized by such automata. By contrast, Rice's theorem states that every non-trivial property of the languages recognized by Turing machines is undecidable; trivial properties (those that either all or no such languages satisfy) are decidable, since the answer is independent of the specific machine. All decidable problems are computable, meaning they correspond to total recursive functions that halt on every input, though the runtime can range from constant to highly inefficient, depending on the problem. In contrast to undecidable problems like the halting problem, decidable ones guarantee termination and correctness for all inputs.
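The DFA acceptance problem mentioned above can be decided by direct simulation, one transition per input symbol. A sketch, with the transition table represented as a dictionary (a representation chosen for this example):

```python
def dfa_accepts(delta, start, accepting, s) -> bool:
    """Decide the acceptance problem for a DFA by simulation,
    in time linear in the length of the input string.

    delta: dict mapping (state, symbol) -> next state.
    """
    state = start
    for ch in s:
        state = delta[(state, ch)]
    return state in accepting

# Example DFA over {0, 1} accepting strings with an even number of 1s.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
```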

Undecidable Problems

A decision problem is undecidable if no Turing machine exists that halts on every input and correctly outputs yes or no according to whether the input satisfies the problem's condition; on some inputs, any candidate machine must either give an incorrect answer or run indefinitely. This notion captures the fundamental limits of algorithmic solvability, distinguishing undecidability from mere computational inefficiency. The halting problem exemplifies undecidability: given a description of a Turing machine and an input, determine whether the machine halts on that input. Alan Turing proved its undecidability in 1936 using a diagonalization argument. Suppose a hypothetical halting decider H(M, i) exists that decides whether machine M halts on input i. Construct a machine D that, on input e (the encoding of a machine M_e), simulates H(M_e, e): if H says yes, D loops forever; if H says no, D halts. Now apply D to its own encoding d: if H(D, d) says yes (meaning D halts on d), then D loops, a contradiction; if H says no, then D halts, another contradiction. Thus, no such H exists. Rice's theorem, proved by Henry Gordon Rice in 1953, provides a broad class of undecidable problems. It states that for any non-trivial property P of the languages recognized by Turing machines (meaning P holds for some but not all such languages), the problem of deciding whether a given Turing machine recognizes a language in P is undecidable. The proof reduces the halting problem to membership in P by constructing machines that append irrelevant computations to known halting or non-halting behaviors while preserving the property. This theorem implies that properties like "does the machine accept any string?" or "is the recognized language regular?" are undecidable, except for trivial cases like always accepting the empty language. Alonzo Church independently showed in 1936 that the Entscheidungsproblem (determining whether a formula is provable from given axioms) is undecidable, using the lambda calculus to encode computations and reducing it to his earlier result on the unsolvability of an analogous number-theoretic problem.
Similarly, Emil Post demonstrated in 1946 that the Post correspondence problem (PCP) is undecidable: given pairs of strings over an alphabet, decide whether there exists a sequence of indices such that the concatenations from the top and bottom rows match. The proof reduces the halting problem to PCP by encoding machine simulations into string matches, where a solution corresponds to a halting computation. These undecidability results impose strict limits on program verification and static analysis in software engineering, as no general algorithm can prove arbitrary properties of programs, requiring instead domain-specific approximations, bounded checks, or manual proofs to mitigate risks in critical systems.

Complexity Aspects

Complexity Classes

Decidable decision problems are classified into complexity classes based on the computational resources required to solve them, primarily time and space bounds on Turing machines. The class DTIME(f(n)) consists of all decision problems solvable by a deterministic Turing machine in at most O(f(n)) time steps on inputs of length n. Similarly, DSPACE(f(n)) includes problems solvable using O(f(n)) tape cells. These classes form the foundation of resource-bounded computation, distinguishing problems by efficiency rather than mere decidability. Key complexity classes for polynomial resources include P, the union over k ≥ 1 of DTIME(n^k), capturing problems solvable in polynomial time. The class NP comprises problems whose "yes" instances have polynomial-time verifiable certificates, or equivalently those solvable by nondeterministic Turing machines in polynomial time: the union over k ≥ 1 of NTIME(n^k). PSPACE is the union over k ≥ 1 of DSPACE(n^k), encompassing problems solvable with polynomial space, potentially requiring exponential time. These classes highlight trade-offs, such as P ⊆ NP ⊆ PSPACE, with open questions like whether P = NP. Hierarchy theorems establish strict separations within these frameworks. The time hierarchy theorem proves that, for suitable time-constructible f(n), DTIME(f(n)) is strictly contained in DTIME(f(n)^2), demonstrating that more time allows solving strictly harder problems. The analogous space hierarchy theorem shows that DSPACE(f(n)) is strictly contained in DSPACE(f(n) log f(n)) for space-constructible f(n). These results imply infinite strict inclusions, such as P being strictly contained in EXP.
Oracle machines extend Turing machines with access to an oracle for a fixed language, enabling relative computations between classes. The seminal relativization results of Baker, Gill, and Solovay construct oracles A and B such that P^A = NP^A relative to A, but P^B ≠ NP^B relative to B, indicating that relativizing proof techniques cannot resolve P vs. NP. Such separations underscore the limitations of diagonalization methods for proving class equalities or separations. Modern extensions incorporate alternative computational models, such as quantum computers. The class BQP includes decision problems solvable with bounded error by a quantum Turing machine in polynomial time, potentially offering speedups over classical classes for certain problems like factoring. Undecidability marks the boundary beyond which no algorithm can reach, as undecidable problems evade all resource bounds.

Complete Problems

In computational complexity theory, a decision problem is said to be complete for a complexity class if it belongs to the class and every problem in that class can be reduced to it, making it a "hardest" problem within the class that serves as a benchmark for studying the class's properties. This notion of completeness was formalized by Stephen Cook in 1971, who showed, using polynomial-time reductions, that the Boolean satisfiability problem (SAT) is NP-complete. Such reductions preserve membership: an instance of a problem A reduces to an instance of B if solving B allows solving A in polynomial time, establishing equivalence in hardness. The canonical NP-complete problem is SAT, which asks whether there exists an assignment of truth values to variables in a Boolean formula that makes the formula true. Cook proved SAT's NP-completeness by showing that any NP verification problem reduces to SAT via a polynomial-time construction that encodes the verifying machine's computation path as a formula. A restricted variant, 3-SAT, limits clauses to exactly three literals and is also NP-complete; Richard Karp demonstrated this in 1972 by providing a polynomial-time reduction from general SAT to 3-SAT, which introduces auxiliary variables to break larger clauses into conjunctions of three-literal clauses without altering satisfiability. This makes 3-SAT a standard example due to its simplicity and frequent use in algorithms and proofs. Karp reductions, also known as polynomial-time many-one reductions, extend Cook's framework: Karp showed that numerous combinatorial problems, such as the traveling salesman problem and the clique problem, are NP-complete through chains of such reductions from SAT, highlighting the broad applicability of completeness in unifying disparate hard problems.
For other classes, completeness follows analogous definitions: the quantified Boolean formula (QBF) problem, which determines the truth value of a Boolean formula with alternating quantifiers over all variables, is PSPACE-complete, as shown by Larry Stockmeyer and Albert Meyer in 1973 via reductions that encode polynomial-space computations. Similarly, EXP-complete problems include the acceptance problem for deterministic Turing machines bounded by exponential time, where hardness arises from encoding computations that require exponential resources. The implications of completeness are profound: if any NP-complete problem like SAT lies in P, then the entire class NP equals P, collapsing the polynomial hierarchy and resolving the P vs. NP question in the affirmative, a central open question in the field since Cook's work. This property holds generally for complete problems in other classes, such as PSPACE or EXP, where solvability in a lower class would imply class equalities.
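SAT itself can be decided by exhaustively trying all truth assignments, which is exponential in the number of variables but makes the yes/no semantics concrete. A sketch, using DIMACS-style integer literals (a representation chosen for this example):

```python
from itertools import product

def sat(clauses) -> bool:
    """Brute-force SAT decider for a CNF formula.

    clauses: list of clauses; each clause is a list of nonzero ints,
    where k means variable k and -k its negation (DIMACS-style).
    """
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        # a literal is satisfied when the variable's value matches its sign
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False
```

Whether this exponential search can be replaced by a polynomial-time algorithm is exactly the P vs. NP question, since SAT is NP-complete.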

Function Problems

In computability theory, a function problem consists of specifying a function f that maps inputs from a domain (typically strings or natural numbers) to outputs (such as strings, numbers, or tuples), where the task is to compute f(x) for a given input x. A function problem is computable if there exists a Turing machine that, on input x, halts and produces the correct output f(x) whenever x is in the domain of f. This definition extends the notion of decision problems, which are special cases of function problems where the output is restricted to a binary value (yes or no, often encoded as 1 or 0). Function problems may be partial, meaning f(x) is defined only for some inputs, or total, meaning f is defined for all inputs in the domain. For partial functions, the corresponding Turing machine must halt on inputs where f is defined but may loop indefinitely otherwise. Total computable functions correspond to those computable by Turing machines that always halt, forming the class of total recursive functions. Classic examples illustrate the distinction. In integer factorization, the function problem requires outputting the prime factors of a given integer n > 1, which is computable via exhaustive search but practically challenging for large n. For the traveling salesman problem, the function version outputs the length of the shortest tour visiting all cities in a given weighted graph, again computable but resource-intensive. These examples highlight how function problems demand producing explicit outputs, unlike their decision counterparts (e.g., "does n have a factor less than k?"). The theoretical foundation for computable function problems lies in recursive function theory. Stephen Kleene formalized general recursive functions in the 1930s as those obtainable from basic functions (zero, successor, and projections) via composition, primitive recursion, and minimization, showing equivalence to Turing computability.
Primitive recursive functions, as introduced by Kurt Gödel, form a proper subclass, defined similarly but without minimization, ensuring total computability and capturing many intuitive arithmetic operations. In modern applications, particularly cryptography, function problems appear as one-way functions: total computable functions f that are easy to evaluate but computationally infeasible to invert on average (i.e., finding x given f(x) is hard). The assumed existence of such functions underpins modern cryptography, enabling secure systems such as encryption and authentication protocols.
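The factorization function problem is computable by exhaustive trial division, as the passage above notes; a sketch (efficient for small n only):

```python
def factorize(n: int) -> list:
    """Function problem: output the prime factorization of n > 1
    as a sorted list of prime factors with multiplicity.

    Computable via trial division, but infeasible for large n;
    the decision counterpart would only answer a yes/no question.
    """
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:        # leftover factor is prime
        factors.append(n)
    return factors
```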

Optimization Problems

Optimization problems seek to find the best solution from a set of feasible solutions according to a specified objective function, such as minimizing cost or maximizing efficiency, in contrast to decision problems, which only require a yes/no answer about the existence of a solution satisfying certain conditions. These problems arise in fields like operations research and combinatorial optimization, where the goal is to optimize a measure over a combinatorial structure. In computational complexity theory, optimization problems are frequently studied via their corresponding decision versions to classify hardness, as decision problems provide a cleaner framework for defining classes like NP. The decision version of an optimization problem typically asks whether there exists a feasible solution whose objective value meets a given threshold, such as "is there a solution with cost at most k?". Solving the decision problem in polynomial time often allows solving the optimization problem in polynomial time via binary search over possible thresholds, establishing their equivalence in complexity up to polynomial factors. A classic example is the traveling salesman problem (TSP), where the optimization variant requires finding the shortest Hamiltonian cycle in a weighted graph, while the decision variant asks whether a cycle of length at most k exists; the latter was proven NP-complete by reduction from the Hamiltonian cycle problem. Similarly, the knapsack problem maximizes the total value of selected items under a weight constraint, with its decision version (does a subset exist with value at least v and weight at most w?) also NP-complete. When the decision version is NP-complete, the optimization problem is NP-hard, implying that no efficient exact algorithm exists unless P = NP, motivating approximation algorithms and heuristics for practical instances.
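The knapsack relationship can be made concrete: a brute-force decider for the threshold question, plus a driver that recovers the optimum by repeated YES/NO queries. All names are invented for this sketch, and the search is exponential; it illustrates the reduction, not a practical solver:

```python
from itertools import combinations

def knapsack_decision(items, max_weight, target_value) -> bool:
    """Decision version: is there a subset of (value, weight) items with
    total value >= target_value and total weight <= max_weight?"""
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if (sum(w for _, w in combo) <= max_weight
                    and sum(v for v, _ in combo) >= target_value):
                return True
    return False

def knapsack_optimum(items, max_weight) -> int:
    """Recover the optimization answer from the decision version:
    the largest target value for which the decider still says YES."""
    best = 0
    while knapsack_decision(items, max_weight, best + 1):
        best += 1
    return best
```

A binary search over target values would reduce the number of decision queries from linear to logarithmic, as in the threshold-search idea described above.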
