Nondeterministic algorithm
In computer science and computer programming, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm.
Different models of computation give rise to different reasons that an algorithm may be non-deterministic, and different ways to evaluate its performance or correctness:
- A concurrent algorithm can perform differently on different runs due to a race condition. This can happen even with a single-threaded algorithm when it interacts with resources external to it. In general, such an algorithm is considered to perform correctly only when all possible runs produce the desired results.
- A probabilistic algorithm's behavior depends on a random number generator called by the algorithm. These are subdivided into Las Vegas algorithms, for which (like concurrent algorithms) all runs must produce correct output, and Monte Carlo algorithms which are allowed to fail or produce incorrect results with low probability. The performance of such an algorithm is often measured probabilistically, for instance using an analysis of its expected time.
- In computational complexity theory, nondeterminism is often modeled using an explicit mechanism for making a nondeterministic choice, such as in a nondeterministic Turing machine. For these models, a nondeterministic algorithm is considered to perform correctly when, for each input, there exists a run that produces the desired result, even when other runs produce incorrect results. This existential power makes nondeterministic algorithms of this sort more efficient than known deterministic algorithms for many problems. The P versus NP problem encapsulates this conjectured greater efficiency available to nondeterministic algorithms. Algorithms of this sort are used to define complexity classes based on nondeterministic time and nondeterministic space complexity. They may be simulated using nondeterministic programming, a method for specifying nondeterministic algorithms and searching for the choices that lead to a correct run, often using a backtracking search.
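The Las Vegas/Monte Carlo distinction above can be made concrete with two small sketches, assuming Python as the illustration language: a Las Vegas routine that probes random positions until it finds a correct answer (output always correct, running time random), and a Monte Carlo polynomial-equality test via random evaluation (fast, but with a small probability of a wrong "equal" verdict). The function names and the choice of polynomial test are illustrative, not from the source.

```python
import random

def las_vegas_find(xs, target):
    """Las Vegas: the returned index is always correct, but the
    number of random probes varies from run to run."""
    assert target in xs  # precondition so the loop terminates
    while True:
        i = random.randrange(len(xs))
        if xs[i] == target:
            return i  # every run that halts produces a correct result

def monte_carlo_equal(p, q, trials=20):
    """Monte Carlo: decide whether two polynomials (coefficient lists)
    are equal by evaluating at random points.  A mismatch is conclusive;
    agreement on all trials means "equal" with small error probability."""
    def ev(coeffs, x):
        return sum(a * x**k for k, a in enumerate(coeffs))
    for _ in range(trials):
        x = random.randrange(10**6)
        if ev(p, x) != ev(q, x):
            return False  # witness of inequality: never wrong
    return True  # probably equal (may err with low probability)

xs = [3, 1, 4, 1, 5]
print(xs[las_vegas_find(xs, 4)])           # always prints 4
print(monte_carlo_equal([1, 2], [1, 2]))   # True
```

The asymmetry matches the definitions: the Las Vegas routine trades certainty of runtime for certainty of output, while the Monte Carlo test bounds its runtime and accepts a small, quantifiable error probability.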
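The nondeterministic-programming idea mentioned above — replacing each nondeterministic choice with a backtracking search over the alternatives — can be sketched for subset sum, a classic NP problem. This is an illustrative simulation, not a method given in the source; the positive-numbers pruning is an added assumption.

```python
def nondet_subset_sum(nums, target):
    """Simulate a nondeterministic algorithm by backtracking: each
    GUESS (include or exclude an element) becomes two branches, and
    the simulation accepts if ANY branch reaches the target sum."""
    def run(i, total):
        if total == target:
            return True                 # this run accepts
        if i == len(nums) or total > target:
            return False                # this run rejects (nums assumed positive)
        # nondeterministic choice, explored sequentially by backtracking:
        return run(i + 1, total + nums[i]) or run(i + 1, total)
    return run(0, 0)

print(nondet_subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(nondet_subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

Note the existential acceptance criterion in the `or`: the deterministic simulation pays for the branches the nondeterministic machine gets "for free," which is exactly the gap the P versus NP problem asks about.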
The notion of nondeterminism was introduced by Robert W. Floyd in 1967.[1]
References
- Robert W. Floyd (October 1967). "Nondeterministic Algorithms". Journal of the ACM. 14 (4): 636–644. doi:10.1145/321420.321422. S2CID 1990464.
Further reading
- Cormen, Thomas H. (2009). Introduction to Algorithms (3rd ed.). MIT Press. ISBN 978-0-262-03384-8.
- "Nondeterministic algorithm". National Institute of Standards and Technology. Retrieved July 7, 2013.
- "Non-deterministic Algorithms". New York University Computer Science. Retrieved July 7, 2013.
Fundamentals
Definition
A nondeterministic algorithm is a theoretical model of computation defined as a decision procedure in which, at designated branching points, the process can arbitrarily select among multiple possible next steps without a predefined rule for the choice. This allows the algorithm, conceptually, to explore several execution paths in parallel, in contrast with deterministic algorithms, which produce a single, predictable sequence of operations for any given input. The nondeterministic choice is often modeled as an additional primitive operation, akin to a "choice" function that instantaneously picks an element from a set of options.[6]

The motivation for introducing nondeterministic algorithms stems from the need to conceptualize solutions to problems where deterministic methods face efficiency barriers because extensive possibilities must be evaluated sequentially. By permitting conceptual parallel exploration of branches, nondeterminism provides a framework for analyzing computational limits, particularly for search and optimization tasks in which many candidate paths must be considered. This approach highlights scenarios where verifying a path is feasible even if exhaustively searching for one is not, thereby aiding the theoretical study of algorithmic feasibility.[7]

Key terminology includes nondeterministic choices (or branching points), the loci where the algorithm diverges into alternatives; the computation tree, representing the full structure of all feasible execution paths originating from the initial state; and the guessing mechanism, an oracle-like ability to select promising branches without computational cost or specification. Intuitively, this can be visualized through a simple text-based state transition diagram, where nondeterminism introduces multiplicity:

q0 (initial state)
|
+-- nondet choice --> q1 (path 1)
| |
| +-- deterministic steps --> accept/reject
|
+-- nondet choice --> q2 (path 2)
|
+-- deterministic steps --> accept/reject
In this diagram, from state $ q_0 $, the algorithm branches to either $ q_1 $ or $ q_2 $, with subsequent steps following deterministically until a decision is reached; acceptance occurs if any branch leads to an accepting state.[1][7]
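The existential acceptance rule the diagram illustrates — accept iff some branch accepts — can be written directly as a tiny recursive sketch, assuming Python and a hypothetical transition relation matching the diagram (the state names and table are illustrative, not from the source).

```python
# Hypothetical transition relation for the diagram above: from q0 the
# machine nondeterministically moves to q1 or q2, after which each
# branch proceeds deterministically to accept or reject.
TRANSITIONS = {
    "q0": ["q1", "q2"],   # nondeterministic choice point
    "q1": ["reject"],     # path 1: deterministic steps to reject
    "q2": ["accept"],     # path 2: deterministic steps to accept
}

def accepts(state="q0"):
    """Existential acceptance: the machine accepts iff at least one
    successor branch eventually reaches an accepting state."""
    if state == "accept":
        return True
    if state == "reject":
        return False
    return any(accepts(nxt) for nxt in TRANSITIONS[state])

print(accepts())  # True, because the q2 branch accepts
```

The `any(...)` call is the whole content of nondeterministic acceptance: a single accepting branch outweighs every rejecting one.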
Historical origins
The concept of nondeterminism in computational models originated with Alan Turing's 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem," where he described "choice machines" as variants of his deterministic automatic machines, allowing multiple possible next configurations at ambiguous points to model partial determination in computation.[8] These choice machines facilitated discussions on the limits of computability, including undecidability proofs, by incorporating external "guessing" at choice points without altering the class of computable functions.[8]

In the late 1950s, the idea gained formal traction through Michael O. Rabin and Dana Scott's 1959 paper "Finite Automata and Their Decision Problems," which introduced nondeterministic finite automata as a tool for simplifying the description of regular languages, proving their equivalence in power to deterministic finite automata despite the branching behavior.[9] This work, conducted in the context of recursive function theory and decision problems, marked the inception of nondeterminism as a distinct computational paradigm, emphasizing its utility in automata theory for handling multiple transition paths efficiently.[9]

The 1960s saw nondeterminism extend from finite-state models to more powerful Turing-complete frameworks, with researchers exploring nondeterministic Turing machines to analyze computational resources in recursive functions and algorithm design.[10] This evolution addressed limitations in deterministic models for expressing complex decision processes, bridging automata theory's finite nondeterminism to full Turing machine generality without expanding beyond the recursively enumerable languages.[10]

A pivotal milestone occurred in 1971 with Stephen A. Cook's paper "The Complexity of Theorem-Proving Procedures," which formalized nondeterministic Turing machines in the study of time-bounded computation, defining the class NP as problems solvable by such machines in polynomial time and establishing the foundations of NP-completeness.[4] This formulation shifted focus toward complexity theory, highlighting nondeterminism's role in capturing "guessing" verifiers for hard problems.[4]

Operational principles
Execution model
In the execution model of nondeterministic algorithms, computation proceeds through a branching process at designated choice points, where the algorithm theoretically spawns multiple parallel paths simultaneously, each corresponding to a possible transition or decision. This nondeterminism allows the algorithm to explore all feasible options without committing to a single path, mimicking an oracle that "guesses" the correct choice at each step. Unlike deterministic algorithms, which follow a unique sequence, this model enables the simultaneous evaluation of alternatives, facilitating efficient verification of solutions to problems where finding them directly is challenging.[11]

The full computation unfolds as a tree structure, known as the computation tree, rooted at the initial configuration with the input. Each internal node represents a state with nondeterministic choices, branching into child nodes for each possible next configuration, while leaves denote terminal outcomes of acceptance or rejection. The algorithm accepts an input if at least one complete path from root to leaf reaches an accepting state, regardless of the outcomes of other paths; it rejects only if all paths lead to rejection. This tree-based exploration captures the inherent parallelism of nondeterminism: the total number of paths may grow exponentially, but the model abstracts away the need to traverse them sequentially.[12]

Time complexity in this model is measured by the length of the longest computation path (or, equivalently, the maximum time bound within which an accepting path must exist if the input is accepted), rather than by the total size of the computation tree or the sum of all path lengths. This definition yields an apparent "exponential speedup" for verification tasks, as the nondeterministic machine can certify a yes-instance in time proportional to the verifying path, without exploring unproductive branches in the theoretical analysis. For instance, in polynomial-time nondeterministic computation, acceptance requires an accepting path of length at most a polynomial in the input size.[11]

A representative pseudocode example illustrates this model for a simple verification problem, such as checking whether a guessed value satisfies a condition (e.g., finding a factor of a number n):

NONDETERMINISTIC_FACTOR(n):
    GUESS d in {2, ..., sqrt(n)}   // nondeterministic choice: branch over possible divisors
    IF n mod d == 0 THEN
        ACCEPT                     // verify and accept if true
    ELSE
        REJECT                     // reject this path
// The algorithm accepts if any path accepts
Here, the "GUESS" operation spawns a branch for each possible d, with verification occurring deterministically along each path; acceptance occurs if at least one branch succeeds.[11]
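A deterministic simulation of this pseudocode can be sketched in Python, assuming the standard trick of replacing the GUESS by a loop over every branch; the function name and loop structure are illustrative, not from the source.

```python
import math

def nondet_factor(n):
    """Deterministic simulation of NONDETERMINISTIC_FACTOR: the GUESS
    over d in {2, ..., floor(sqrt(n))} becomes a loop, and the whole
    simulation accepts iff any simulated branch accepts."""
    for d in range(2, math.isqrt(n) + 1):  # each d is one branch
        if n % d == 0:
            return True   # this branch ACCEPTs, so the machine accepts
    return False          # every branch REJECTed

print(nondet_factor(91))  # True  (7 divides 91)
print(nondet_factor(97))  # False (97 is prime)
```

The contrast in cost is the point of the model: the nondeterministic machine is charged only for the single verifying branch (one division), while the simulation pays for all of them.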