Nondeterministic programming
A nondeterministic programming language is a language which can specify, at certain points in the program (called "choice points"), various alternatives for program flow. Unlike an if-then statement, the method of choice between these alternatives is not directly specified by the programmer; the program must decide at run time between the alternatives, via some general method applied to all choice points. A programmer specifies a limited number of alternatives, but the program must later choose between them. ("Choose" is, in fact, a typical name for the nondeterministic operator.) A hierarchy of choice points may be formed, with higher-level choices leading to branches that contain lower-level choices within them.
One method of choice is embodied in backtracking systems (such as Amb,[1][circular reference] or unification in Prolog), in which some alternatives may "fail," causing the program to backtrack and try other alternatives. If all alternatives fail at a particular choice point, then an entire branch fails, and the program will backtrack further, to an older choice point. One complication is that, because any choice is tentative and may be remade, the system must be able to restore old program states by undoing side-effects caused by partially executing a branch that eventually failed.
Another method of choice is reinforcement learning, embodied in systems such as Alisp.[2] In such systems, rather than backtracking, the system keeps track of some measure of success and learns which choices often lead to success, and in which situations (both internal program state and environmental input may affect the choice). These systems are suitable for applications to robotics and other domains in which backtracking would involve attempting to undo actions performed in a dynamic environment, which may be difficult or impractical.
References
- ^ "Structure and Interpretation of Computer Programs".
- ^ David Andre; Stuart J. Russell (July 2002). "State abstraction for programmable reinforcement learning agents". Eighteenth National Conference on Artificial Intelligence: 119–125. ISBN 978-0-262-51129-2.
Nondeterministic programming
In Scheme systems such as the one described in Structure and Interpretation of Computer Programs, the amb form selects among alternatives (e.g., (amb 1 2 3)), and the system uses depth-first search with chronological backtracking to find values satisfying predicates like (prime? (+ i j)).[1] This abstraction hides search details, facilitating applications in parsing, theorem proving, and constraint satisfaction, though it demands failure-handling primitives like require and an-element-of to prune invalid paths.[1]
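As a rough Python analogue (a sketch, not SICP's continuation-based implementation; the names amb_pairs and is_prime are invented for illustration), the depth-first search with chronological backtracking behind amb can be modeled with nested loops, where resuming a loop plays the role of backtracking to the most recent choice point:

```python
def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def amb_pairs(i_choices, j_choices):
    """Depth-first search over all (i, j) choices, yielding those whose
    sum is prime -- each yield is one successful amb outcome; a failed
    predicate simply lets the inner loop advance (backtracking)."""
    for i in i_choices:          # first choice point
        for j in j_choices:      # second choice point
            if is_prime(i + j):  # the require/predicate step
                yield (i, j)

print(next(amb_pairs([1, 2, 3], [4, 5, 6])))  # → (1, 4)
```

Each further value drawn from the generator corresponds to forcing another round of backtracking, just as retrying an amb expression does.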
In concurrent and parallel programming, nondeterminism emerges from the unpredictable scheduling of threads or processes, potentially causing varying memory update orders and results across runs due to race conditions or interleavings.[4] Programmers mitigate this through synchronization tools like mutexes for mutual exclusion or transactional memory for atomic operations, ensuring deterministic behavior where needed, such as in parallel algorithms like Gaussian elimination.[4] Theoretical foundations, including denotational semantics and algebraic specifications, further analyze nondeterministic behaviors to verify correctness in these settings.[5]
Definition and Fundamentals
Core Concepts
Nondeterministic programming is a computational paradigm that permits programs to generate multiple valid outputs or execution paths in response to the same input, by incorporating nondeterministic branches at designated choice points. This approach models scenarios involving inherent uncertainty, multiple solutions, or parallel exploration, allowing programmers to specify what constitutes a valid result without dictating the precise sequence of decisions. A significant development in nondeterministic programming was Edsger W. Dijkstra's 1975 introduction of guarded commands, which emphasizes abstract specification over rigid control flow, facilitating more concise expressions of problems like search and optimization.[6][7] Choice points serve as the foundational elements of nondeterminism, marking locations in the program where execution splits into multiple possible continuations, each representing a distinct alternative. These branches are treated as conceptually parallel, enabling the program to consider all viable options without committing to a single deterministic path. The absence of a prescribed order among alternatives underscores the paradigm's flexibility, as any successful branch suffices to yield a correct outcome.[7] A simple abstract model of a nondeterministic choice operator involves selecting an arbitrary element from a predefined set of options to advance the computation, without favoring any particular order or probability distribution. For instance, given a collection of potential values, the operator nondeterministically picks one, evaluates the ensuing path, and accepts it if valid. This construct captures the paradigm's core by abstracting away the mechanics of selection, focusing instead on the multiplicity of possibilities. 
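A minimal Python sketch of such a choice operator (choose and the example predicate are hypothetical names; the left-to-right search order is an artifact of the deterministic simulation, not part of the abstract model):

```python
def choose(options, valid):
    """Nondeterministic choice modeled as search: try each option and
    return the first whose continuation is valid. Which option 'wins'
    is an implementation detail, not part of the abstract model."""
    for option in options:
        if valid(option):
            return option
    raise ValueError("all alternatives failed")

# Pick any divisor of 12 greater than 3; the model does not say which.
print(choose([2, 3, 4, 5, 6], lambda n: 12 % n == 0 and n > 3))  # → 4
```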
In nondeterministic programming, failure denotes the invalidation of a particular execution path—such as when conditions are unmet or goals unachievable—prompting the discard of that branch and the pursuit of alternatives. Successful solutions emerge from this exploratory process, where exhaustive or guided traversal ensures all viable paths are considered until a valid result is obtained or proven absent. This treatment of failure as a navigational cue, rather than an endpoint, aligns with the paradigm's goal-directed nature.[8]
Distinction from Deterministic Programming
Deterministic programming refers to computational approaches where, given the same input and execution environment, a program invariably produces the same output by following a uniquely defined sequence of operations and fixed execution paths.[9] This predictability stems from the absence of ambiguity in control flow, ensuring reproducibility essential for debugging, testing, and verification in most conventional software development.[10] In contrast, nondeterministic programming introduces choices or branches where multiple execution paths are possible for the same input, potentially yielding varying outputs across runs without relying on randomness.[9] Key differences include predictability, where deterministic programs guarantee consistent results while nondeterministic ones may explore alternatives, sacrificing reproducibility for broader solution spaces; efficiency in exploration, as nondeterminism conceptually allows parallel-like shortcutting of exhaustive searches by selecting promising paths without committing upfront; and modeling of real-world uncertainty, enabling representations of ambiguous scenarios like multiple valid solutions in constraint satisfaction.[10] These distinctions highlight nondeterminism's utility in paradigms requiring search or inference, though it complicates reasoning about program behavior.[9] Historically, nondeterminism emerged in computing as a response to the limitations of deterministic models in addressing computationally intensive problems, particularly the recognition that many practical tasks—such as those later classified as NP-complete—could be efficiently verifiable but exhaustively hard to solve deterministically.[10] Introduced formally in automata theory by Rabin and Scott in 1959 and extended to algorithms by Floyd in 1967, it provided a theoretical framework for handling complexity classes like NP, where nondeterministic machines could "guess" solutions in polynomial time.[11] This conceptual shift influenced 
programming paradigms, allowing developers to abstract away exhaustive enumeration for problems intractable under deterministic constraints.[10] A simple illustrative scenario is sorting a list of elements. In deterministic programming, an algorithm like merge sort follows a fixed divide-and-conquer path, repeatedly merging sublists in a predefined manner to produce the sorted output reliably, with time complexity O(n log n).[12] Nondeterministically, the program might instead generate permutations by inserting elements into arbitrarily chosen positions (e.g., via choice operators in functional logic languages), exploring all possible arrangements until identifying the sorted one, conceptually bypassing sequential checks but potentially enumerating up to n! paths in the worst case.[13] This approach underscores nondeterminism's power for solution discovery over guaranteed efficiency. Probabilistic algorithms, which incorporate randomness for approximation, are related but distinct from nondeterministic algorithms, focusing on expected performance rather than non-random choice among alternatives.[14]
Theoretical Background
Nondeterministic Automata and Machines
Nondeterministic finite automata (NFAs) serve as a foundational model in automata theory, extending deterministic finite automata by allowing multiple possible transitions from a given state on a given input symbol. Formally, an NFA is defined as a 5-tuple (Q, Σ, q₀, F, δ), where Q is a finite set of states, Σ is the input alphabet, q₀ ∈ Q is the initial state, F ⊆ Q is the set of accepting states, and the transition function δ : Q × Σ → P(Q) maps each state-symbol pair to a subset of states, potentially empty or containing multiple elements. This nondeterminism permits the automaton to "branch" into several possible next states simultaneously for the same input, contrasting with the single successor in deterministic models. The concept was introduced by Rabin and Scott in their seminal work on finite automata, where they demonstrated that NFAs can recognize the same class of languages as deterministic finite automata.[11] The equivalence between NFAs and deterministic finite automata (DFAs) is established through the powerset construction, which transforms an NFA into an equivalent DFA by treating subsets of the NFA's states as single states in the DFA. In this construction, the DFA's state set is the power set P(Q), and its transition function is defined as δ′(S, a) = ⋃_{q ∈ S} δ(q, a) for a state set S ⊆ Q and symbol a ∈ Σ. The initial state is {q₀}, and accepting states are those subsets intersecting F. Although this construction can yield up to 2^|Q| states, it proves that nondeterminism does not increase expressive power for regular languages, only potentially affecting construction efficiency. Rabin and Scott outlined this result in their 1959 paper, highlighting its implications for decision problems in automata theory.[11] Nondeterministic Turing machines (NTMs) generalize Turing machines to incorporate nondeterminism, providing a model for computations where multiple transition choices are possible at each step.
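The powerset construction just described can be sketched in Python, representing DFA states as frozensets of NFA states (the encoding of δ as a dict keyed by (state, symbol) pairs is an assumption of this sketch, not a standard API):

```python
from collections import deque

def subset_construction(nfa_delta, start, accepting):
    """Powerset construction: DFA states are frozensets of NFA states.
    nfa_delta maps (state, symbol) -> set of successor states; missing
    keys mean the empty set of successors."""
    symbols = {sym for (_, sym) in nfa_delta}
    dfa_start = frozenset([start])
    dfa_delta, dfa_accepting = {}, set()
    queue, seen = deque([dfa_start]), {dfa_start}
    while queue:
        S = queue.popleft()
        if S & set(accepting):          # subset intersects F -> accepting
            dfa_accepting.add(S)
        for sym in symbols:
            # Union of NFA transitions over every state in the subset.
            T = frozenset(q2 for q in S for q2 in nfa_delta.get((q, sym), ()))
            dfa_delta[(S, sym)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return dfa_start, dfa_delta, dfa_accepting

# NFA for strings over {0,1} ending in 1: q0 loops on 0/1, guesses the
# final 1 by also moving to q1.
delta = {("q0", "0"): {"q0"}, ("q0", "1"): {"q0", "q1"}}
start, dd, acc = subset_construction(delta, "q0", {"q1"})
```

Only subsets reachable from {q₀} are generated, so the 2^|Q| bound is a worst case rather than a guaranteed cost.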
An NTM is defined similarly to a standard Turing machine as a 7-tuple (Q, Σ, Γ, δ, q₀, q_accept, q_reject), but with the transition function δ : Q × Γ → P(Q × Γ × {L, R}), allowing zero or more possible moves (write symbol, move direction, next state) for each state-tape symbol pair. A computation on input w begins in the initial configuration with state q₀ and the tape containing w; it accepts if there exists at least one finite computation path reaching an accepting state, and rejects otherwise. This "exists" semantics captures nondeterministic choice, enabling succinct descriptions of certain decision problems. The formalization of NTMs in the context of time-bounded computations was advanced by Cook in his 1971 paper defining complexity classes based on such machines.[15] Simulating nondeterminism on a deterministic Turing machine involves systematically exploring the computation tree of all possible paths, typically via breadth-first search to bound the depth. If an NTM runs in time T(n) on inputs of length n, a branching factor of at most b per step leads to at most b^T(n) paths of length T(n), requiring a deterministic simulator to check each for acceptance. Thus, the simulation incurs an exponential time overhead, formalized as running in time 2^O(T(n)) on a deterministic Turing machine. This overhead underscores the theoretical power of nondeterminism while showing its computability equivalence to determinism. Arora and Barak detail this simulation in their comprehensive treatment of complexity models, emphasizing its role in relating nondeterministic and deterministic resources.[16] These automata and machine models underpin key complexity classes, such as NP, where problems are verifiable in polynomial time via nondeterministic computation.[15]
Role in Computational Complexity
Nondeterministic programming draws from the theoretical foundations of nondeterministic Turing machines (NTMs), which underpin key complexity classes by allowing multiple computational paths in parallel, effectively modeling "guessing" mechanisms for solution verification. The class NP, or nondeterministic polynomial time, consists of decision problems where a proposed solution can be verified in polynomial time by an NTM.[15] This definition, introduced by Stephen Cook, captures problems solvable by NTMs in polynomial time, where acceptance occurs if at least one computation path halts in an accepting state within that bound.[15] A central open question in computational complexity is whether P = NP, where P denotes the class of problems solvable in polynomial time by deterministic Turing machines. If P equals NP, then nondeterminism provides no additional power beyond what deterministic computation can achieve efficiently; otherwise, NP problems may require exponential time deterministically despite efficient nondeterministic verification. 
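Polynomial-time verification can be made concrete with a small certificate checker for Boolean satisfiability (a sketch; the DIMACS-style clause encoding and the name verify_sat are conventions assumed here, not a standard library API):

```python
def verify_sat(clauses, assignment):
    """Polynomial-time SAT certificate check: a nondeterministic machine
    'guesses' the assignment; verifying it takes one pass over the
    clauses. Clauses are lists of nonzero ints (DIMACS-style: -3 means
    'not x3'); assignment maps variable number -> bool."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
print(verify_sat(clauses, {1: True, 2: False, 3: True}))  # → True
```

Finding a satisfying assignment deterministically may take exponential time, but checking a proposed one, as above, is linear in the formula size — exactly the asymmetry that defines NP.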
Nondeterminism facilitates this by simulating an oracle that guesses correct solutions, enabling polynomial-time checks that would otherwise demand exhaustive search.[15] Related classes include co-NP, comprising the complements of NP languages—problems where "no" instances have short verifiable proofs—and NPSPACE, the nondeterministic analog of PSPACE, defined as the union of NSPACE(n^k) for constant k, which equals PSPACE by Savitch's theorem.[17][18] A canonical example is the Boolean satisfiability problem (SAT), proven NP-complete by Cook, meaning every NP problem reduces to SAT in polynomial time, highlighting nondeterminism's role in capturing the hardest problems within the class.[15] In programming contexts, nondeterministic models from complexity theory justify the use of heuristics and approximation algorithms for NP-hard problems, as deterministic exact solutions are often computationally intractable, even if nondeterministic verification is efficient. This theoretical insight encourages practical strategies like branch-and-bound or genetic algorithms to navigate search spaces, acknowledging that while NP problems admit polynomial verification, solving them deterministically may incur exponential costs without resolving P vs. NP.[19]
Mechanisms of Implementation
Backtracking Techniques
Backtracking is a core technique for implementing nondeterministic programming on deterministic machines, enabling the exploration of multiple alternative paths in a search space by making choices, testing constraints, and retracting invalid decisions to pursue other options. This method simulates nondeterminism through depth-first search with pruning, systematically generating and abandoning partial solutions until a valid one is found or all possibilities are exhausted. It is widely applied in constraint satisfaction problems, where nondeterministic choices correspond to selecting values for variables subject to interdependencies.[20] The backtracking algorithm proceeds recursively: at each step, it selects a variable, tries possible values (nondeterministic choices), checks if the partial assignment satisfies constraints, and either extends the solution or backtracks by undoing the choice and trying the next alternative. Failure occurs when no value works for a variable, prompting return to the previous choice point; success is reached when all variables are assigned without violations. This process builds a search tree where branches represent choice sequences, and pruning eliminates subtrees leading to inevitable failure, reducing computational overhead compared to brute-force enumeration. Essential components include a state representation, typically an array or structure holding partial assignments; a choice function that enumerates candidate values for the current variable from its domain; a constraint verification function that tests local consistency with prior assignments; and extension/backtrack operations to add or remove assignments while maintaining the state. These elements allow backtracking to mimic nondeterministic execution by deferring commitment to choices until constraints force resolution. 
A representative example is the N-Queens problem, which requires placing N queens on an N×N chessboard such that no two share the same row, column, or diagonal, illustrating backtracking's role in nondeterministic search for combinatorial configurations. The algorithm proceeds row by row, nondeterministically selecting a column for the current queen and backtracking if it conflicts with prior placements. The following pseudocode outlines a basic implementation:

function isSafe(board, row, col, N):
    for i from 0 to row-1:
        if board[i] == col or abs(board[i] - col) == row - i:
            return false
    return true

function solveNQueens(board, row, N):
    if row == N:
        print board  // a solution found
        return true
    for col from 0 to N-1:
        if isSafe(board, row, col, N):
            board[row] = col
            if solveNQueens(board, row + 1, N):
                return true
            board[row] = -1  // backtrack
    return false
The driver initializes a board of size N with -1 values and calls solveNQueens(board, 0, N) to start placement from row 0.[21]
To mitigate exponential growth in the search space, optimizations like forward checking and constraint propagation are integrated into backtracking. Forward checking, after assigning a value to a variable, immediately removes inconsistent values from the domains of unassigned future variables, pruning infeasible branches early without full propagation. Constraint propagation extends this by iteratively enforcing consistency across all constraints in the network, such as arc consistency, to further reduce domains and detect inconsistencies proactively. These techniques significantly improve efficiency, as demonstrated in empirical studies on constraint problems where they reduce backtracks by orders of magnitude compared to naive backtracking.[22]
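A compact illustration of forward checking on a binary not-equal constraint network (3-coloring a triangle graph; the function name fc_solve and the CSP encoding are choices of this sketch, not a standard API):

```python
def fc_solve(domains, neq_pairs, assignment=None):
    """Backtracking with forward checking on a binary 'not-equal' CSP:
    after each assignment, prune that value from the domains of
    connected unassigned variables and fail early on an empty domain."""
    assignment = dict(assignment or {})
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        # Forward checking: drop `value` from neighbors' domains.
        pruned = dict(domains)
        ok = True
        for a, b in neq_pairs:
            other = b if a == var else (a if b == var else None)
            if other and other not in assignment:
                pruned[other] = [w for w in pruned[other] if w != value]
                ok = ok and bool(pruned[other])
        if ok:  # no domain wiped out -> commit tentatively and recurse
            result = fc_solve(pruned, neq_pairs, {**assignment, var: value})
            if result:
                return result
        # otherwise backtrack: try the next value for var
    return None

# 3-coloring of a triangle: A, B, C pairwise different.
colors = {"A": ["r", "g", "b"], "B": ["r", "g", "b"], "C": ["r", "g", "b"]}
solution = fc_solve(colors, [("A", "B"), ("B", "C"), ("A", "C")])
```

With only two colors available, forward checking detects the wipeout of C's domain immediately after assigning B, one level earlier than naive backtracking would fail.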
Choice and Failure Operators
In nondeterministic programming, the choose operator enables the selection of a value from a finite set of alternatives without specifying which one, thereby generating multiple possible execution streams that explore different computational paths. This operator abstracts the notion of nondeterminism by simulating foresight, where it preferentially selects options that lead to successful outcomes, avoiding paths that would otherwise terminate in failure. For instance, in implementations using continuations or macros, choose binds a variable to one element from a list, such as (choose '(1 2 3)), and maintains a stack of alternatives for later exploration if needed.[23][24]
The fail operator complements choose by aborting the current execution path upon detecting an invalid state, prompting backtracking to the most recent choice point to retry with an alternative value. It acts as a constraint enforcer, signaling that the present branch of computation cannot yield a valid result and thus pruning the search space. In practice, fail triggers the restoration of the previous context, allowing the system to resume from an earlier choose invocation with the next option in its set. This mechanism ensures that only viable paths are pursued, effectively filtering out inconsistencies during evaluation.[23]
A prominent realization of these concepts is the ambiguous choice operator, or amb, introduced by John McCarthy and later elaborated in Scheme implementations, where it nondeterministically picks from a list of expressions and fails upon encountering inconsistency. Unlike a simple random selector, amb systematically explores alternatives in a depth-first manner, using failure to backtrack and select the next option until a consistent solution emerges or all paths are exhausted. For example, (amb 1 2 3) evaluates to one of the values, with subsequent amb calls building compound choices that represent combinations of possibilities.[1][25]
Semantically, programs employing these operators can be modeled as trees of possibilities, where each node represents a choice point branching into multiple streams, and a computation succeeds if at least one complete path from root to leaf satisfies all constraints without invoking fail. This tree structure captures the exploratory nature of nondeterminism, with backtracking realizing the traversal by undoing bindings and retrying alternatives upon failure, thus equating program success to the existence of a valid path in the possibility space.[1][25][23]
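The tree-of-possibilities semantics can be sketched in Python with a recursive generator (this amb is a hypothetical minimal analogue; real implementations typically use continuations, as in the Scheme systems cited above):

```python
def amb(*choice_lists):
    """Minimal amb: lazily enumerate every combination of choices as a
    tree of possibilities, one level per choice point. A caller that
    rejects a combination simply advances the generator -- that is the
    'fail and backtrack to the most recent choice point' step."""
    if not choice_lists:
        yield ()          # leaf: one complete path through the tree
        return
    first, rest = choice_lists[0], choice_lists[1:]
    for value in first:            # current choice point
        for tail in amb(*rest):    # subtree of deeper choice points
            yield (value,) + tail

# Find x, y, z with x + y = z; the filter plays the role of fail.
solutions = [(x, y, z)
             for x, y, z in amb([1, 2], [3, 4], [4, 5, 6])
             if x + y == z]
```

A computation succeeds exactly when the filtered list is nonempty, i.e., when at least one root-to-leaf path in the choice tree survives all constraints.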
Programming Languages and Examples
Logic Programming Paradigms
Logic programming paradigms embody nondeterminism through declarative specifications of facts and rules, where computation arises from logical inference rather than explicit control flow. Prolog, a foundational logic programming language, exemplifies this by employing unification to match queries against clauses and backtracking to explore alternative resolutions when a path fails, thereby generating all possible solutions to a given query.[26] This nondeterministic behavior allows Prolog to naturally handle search problems by producing multiple bindings for variables in response to a single query, simulating choice points without imperative loops.[27] Developed in the early 1970s by Alain Colmerauer and Philippe Roussel at the University of Aix-Marseille in France, Prolog originated from efforts to process natural languages using formal logic, with the first implementation completed in 1972 by Roussel in ALGOL-W.[28] Its design has profoundly influenced artificial intelligence, particularly in expert systems and knowledge representation, and database query languages, inspiring deductive systems like Datalog for rule-based querying over relational data. Key features enhancing its nondeterministic capabilities include definite clause grammars (DCGs), which extend Prolog's syntax for parsing sequences by integrating nondeterministic choice into grammar rules, and the cut operator (!), which prunes the search tree by committing to the first successful unification and preventing backtracking over prior choices.[29][30] A representative example of Prolog's nondeterminism in action is a Sudoku solver, where empty cells are filled through backtracking over possible values, generating valid solutions via repeated unification attempts. 
The following simplified code uses pure backtracking to solve a 9x9 Sudoku puzzle, with nondeterminism arising from the member/2 predicate selecting values from 1 to 9 for empty cells, failing and retrying until constraints on rows, columns, and 3x3 blocks are satisfied (helper predicates like find_empty/2, set_cell/5, all_filled/1, row_valid/1, column_valid/1, and box_valid/1 are omitted for brevity but implement finding the next empty position, updating the board, and checking for duplicates using memberchk/2):
% Solve Sudoku for a board (list of lists), outputting the solution
solve_sudoku(Board, Solution) :-
    find_empty(Board, Pos),
    fill(Pos, Board, NewBoard),
    solve_sudoku(NewBoard, Solution).
solve_sudoku(Board, Solution) :-
    all_filled(Board),
    Solution = Board.

% Try value in position
fill((Row, Col), Board, NewBoard) :-
    member(Val, [1,2,3,4,5,6,7,8,9]),
    set_cell(Row, Col, Val, Board, TempBoard),
    valid(TempBoard),
    NewBoard = TempBoard.

% Constraints: no duplicates in row, column, box
valid(Board) :-
    row_valid(Board),
    column_valid(Board),
    box_valid(Board).

% Example partial board query: solve_sudoku([[_,_,_,_,_,_,_,_,_], ...], Solution).
% Pressing ; at the query prompt yields alternative solutions if the puzzle
% has them, demonstrating nondeterministic outputs.
Goal-Directed and Generator-Based Languages
Goal-directed evaluation in the Icon programming language enables nondeterministic computation by allowing expressions to act as generators that produce sequences of values on demand, with automatic backtracking upon failure to explore alternative paths until a successful result is found or all options are exhausted.[32] This mechanism integrates seamlessly into the language's expression-based syntax, where generators suspend execution to yield partial results and resume to produce subsequent ones, supporting iterative search without explicit recursion or loops in many cases. Developed in the late 1970s by Ralph and Madge Griswold, Icon's approach contrasts with traditional deterministic languages by treating failure as a signal to retry with alternative values, making it suitable for problems involving multiple solutions, such as string matching or combinatorial generation. A representative example of Icon's nondeterministic generators is the genfactors procedure, which yields the prime factors of a given integer by leveraging the built-in prime() generator to test divisors incrementally. The code is as follows:
procedure genfactors(i, j)   #: generate prime factors of integer
   local p
   i := integer(i) | runerr(101, i)
   /j := i
   every p := prime() do {
      if p > j | p * p > i then break
      while i % p = 0 do {
         suspend p
         i /:= p
         }
      if i = 1 then break
      }
   if i > 1 then suspend i
end
Here, prime() nondeterministically generates successive prime numbers, and suspend p yields each factor found while dividing out multiples, allowing the procedure to produce the complete factorization (e.g., for 24, it yields 2, 2, 2, 3) through successive invocations without recomputing prior results.[33] This generator-based backtracking exemplifies how Icon handles nondeterminism procedurally, producing all valid solutions lazily.
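The same suspend-and-resume behavior maps directly onto Python generators; the following sketch (gen_factors is an illustrative name, not Icon code, and it uses trial division rather than a prime() generator) yields prime factors lazily the way genfactors does:

```python
def gen_factors(n):
    """Generator analogue of Icon's genfactors: lazily yield the prime
    factors of n, suspending after each factor the way Icon's suspend
    does, so callers resume the search instead of recomputing it."""
    p = 2
    while p * p <= n:
        while n % p == 0:
            yield p          # 'suspend' one factor
            n //= p
        p += 1
    if n > 1:
        yield n              # leftover prime factor

print(list(gen_factors(24)))  # → [2, 2, 2, 3]
```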
The Curry programming language extends functional logic paradigms by integrating Haskell-like higher-order functions and lazy evaluation with Prolog-style nondeterministic search, enabling declarative specifications of multiple solutions via narrowing and residuation.[34] In Curry, nondeterminism is encapsulated in operations like the choice operator (?), which selects among alternatives and backtracks on failure, allowing functions to return sets of results for the same input.[35] This combination supports equational reasoning in functional code while incorporating logical variables and unification for search, as detailed in foundational work by Michael Hanus and collaborators.[36]
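In Python terms, an operation like Curry's choice operator can be loosely approximated by a generator over alternatives (a rough analogue only; the names choice and insert are illustrative, not Curry's API, and real Curry evaluates via narrowing rather than enumeration):

```python
def choice(*alternatives):
    """Loose analogue of Curry's (?): an operation that may return any
    of its alternatives, modeled here as a generator over them."""
    yield from alternatives

def insert(x, xs):
    """Nondeterministically insert x at any position of xs, yielding one
    result per choice -- a set of results for a single input."""
    for i in choice(*range(len(xs) + 1)):
        yield xs[:i] + [x] + xs[i:]

print(list(insert(0, [1, 2])))  # → [[0, 1, 2], [1, 0, 2], [1, 2, 0]]
```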
Oz, implemented in the Mozart system, incorporates declarative nondeterminism through concurrent constraint programming, where search is expressed via choice statements and finite domain constraints that resolve multiple bindings automatically without explicit control flow. This approach emphasizes single-threaded exploration of solution spaces in a multiparadigm setting, as explored in Van Roy and Haridi's comprehensive treatment of Oz's models.
