SLD resolution
SLD resolution (Selective Linear Definite clause resolution) is the basic inference rule used in logic programming. It is a refinement of resolution, which is both sound and refutation complete for Horn clauses.
The SLD inference rule
Given a goal clause, represented as the negation of a problem to be solved:
¬L_1 ∨ ⋯ ∨ ¬L_i ∨ ⋯ ∨ ¬L_n
with selected literal ¬L_i, and an input definite clause:
L ∨ ¬K_1 ∨ ⋯ ∨ ¬K_m
whose positive literal (atom) L unifies with the atom L_i of the selected literal ¬L_i, SLD resolution derives another goal clause, in which the selected literal is replaced by the negative literals of the input clause and the unifying substitution θ is applied:
(¬L_1 ∨ ⋯ ∨ ¬K_1 ∨ ⋯ ∨ ¬K_m ∨ ⋯ ∨ ¬L_n)θ
In the simplest case, in propositional logic, the atoms L_i and L are identical, and the unifying substitution θ is vacuous. However, in the more general case, the unifying substitution is necessary to make the two literals identical.
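As a small illustration (an example supplied here, not taken from the article), let the goal clause be ¬p(X) ∨ ¬q(X) with selected literal ¬p(X), and let the input definite clause be p(a) ∨ ¬r(a); the two atoms unify with θ = {X/a}:
$$\bigl(\neg p(X) \lor \neg q(X)\bigr) \ \text{ and } \ \bigl(p(a) \lor \neg r(a)\bigr) \ \xrightarrow{\ \theta = \{X/a\}\ } \ \neg r(a) \lor \neg q(a)$$
The resolvent replaces the selected literal ¬p(X) by the body literal ¬r(a) and applies θ to the remaining literal ¬q(X).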
The origin of the name "SLD"
The name "SLD resolution" was given by Maarten van Emden for the unnamed inference rule introduced by Robert Kowalski.[1] Its name is derived from SL resolution,[2] which is both sound and refutation complete for the unrestricted clausal form of logic. "SLD" stands for "SL resolution with Definite clauses".
In both SL and SLD, "L" stands for the fact that a resolution proof can be restricted to a linear sequence of clauses:
C_1, C_2, ⋯, C_l
where the "top clause" C_1 is an input clause, and every other clause C_{i+1} is a resolvent one of whose parents is the previous clause C_i. The proof is a refutation if the last clause C_l is the empty clause.
In SLD, all of the clauses in the sequence C_1, C_2, ⋯, C_l are goal clauses, and the other parent is an input clause. In SL resolution, the other parent is either an input clause or an ancestor clause earlier in the sequence.
In both SL and SLD, "S" stands for the fact that the only literal resolved upon in any clause is one that is uniquely selected by a selection rule or selection function. In SL resolution, the selected literal is restricted to one which has been most recently introduced into the clause. In the simplest case, such a last-in-first-out selection function can be specified by the order in which literals are written, as in Prolog. However, the selection function in SLD resolution is more general than in SL resolution and in Prolog. There is no restriction on the literal that can be selected.
The computational interpretation of SLD resolution
In clausal logic, an SLD refutation demonstrates that the input set of clauses is unsatisfiable. In logic programming, however, an SLD refutation also has a computational interpretation. The top clause C_1 can be interpreted as the denial of a conjunction of subgoals L_1 ∧ ⋯ ∧ L_n. The derivation of clause C_{i+1} from C_i is the derivation, by means of backward reasoning, of a new set of sub-goals using an input clause as a goal-reduction procedure. The unifying substitution both passes input from the selected subgoal to the body of the procedure and simultaneously passes output from the head of the procedure to the remaining unselected subgoals. The empty clause is simply an empty set of subgoals, which signals that the initial conjunction of subgoals in the top clause has been solved.
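This goal-reduction reading can be made concrete with a so-called vanilla meta-interpreter. The following Prolog sketch is illustrative only; it assumes the program's clauses are visible to clause/2 (for example because they are declared dynamic) and is not the article's own formulation.
% solve/1: a minimal meta-interpreter sketch of SLD goal reduction.
solve(true) :- !.                 % the empty set of subgoals: success
solve((A, B)) :- !,               % a conjunction of subgoals: reduce the leftmost first
    solve(A),
    solve(B).
solve(Goal) :-
    clause(Goal, Body),           % select an input clause whose head unifies with Goal
    solve(Body).                  % replace the selected subgoal by the clause body
Running solve(G) on a query G reproduces Prolog's depth-first, left-to-right exploration of the SLD tree, with the unifying substitutions applied implicitly by clause/2.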
SLD resolution strategies
SLD resolution implicitly defines a search tree of alternative computations, in which the initial goal clause is associated with the root of the tree. For every node in the tree and for every definite clause in the program whose positive literal unifies with the selected literal in the goal clause associated with the node, there is a child node associated with the goal clause obtained by SLD resolution.
A leaf node, which has no children, is a success node if its associated goal clause is the empty clause. It is a failure node if its associated goal clause is non-empty but its selected literal unifies with no positive literal of definite clauses in the program.
SLD resolution is non-deterministic in the sense that it does not determine the search strategy for exploring the search tree. Prolog searches the tree depth-first, one branch at a time, using backtracking when it encounters a failure node. Depth-first search is very efficient in its use of computing resources, but is incomplete if the search space contains infinite branches and the search strategy searches these in preference to finite branches: the computation does not terminate. Other search strategies, including breadth-first, best-first, and branch-and-bound search are also possible. Moreover, the search can be carried out sequentially, one node at a time, or in parallel, many nodes simultaneously.
SLD resolution is also non-deterministic in the sense, mentioned earlier, that the selection rule is not determined by the inference rule, but is determined by a separate decision procedure, which can be sensitive to the dynamics of the program execution process.
The SLD resolution search space is an or-tree, in which different branches represent alternative computations. In the case of propositional logic programs, SLD can be generalised so that the search space is an and-or tree, whose nodes are labelled by single literals, representing subgoals, and nodes are joined either by conjunction or by disjunction. In the general case, where conjoint subgoals share variables, the and-or tree representation is more complicated.
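To make the strategy discussion concrete, the following Prolog sketch explores the SLD tree breadth-first. It is an illustration under the assumption that the program's clauses are accessible via clause/2, and it only tests whether a refutation exists: answer bindings are not reported, because findall/3 copies the goals it collects.
% bfs_solve/1: breadth-first exploration of the SLD tree.
% Each node is a goal represented as a list of subgoals; the frontier is
% processed level by level, so a finite success branch is found even when
% other branches are infinite.
bfs_solve(Goal) :-
    bfs([[Goal]]).                       % initial frontier: the root goal
bfs([[] | _]).                           % empty goal at the front: refutation found
bfs([[A | Rest] | Goals]) :-
    findall(NewGoal,
            ( clause(A, Body),
              body_to_list(Body, Bs),
              append(Bs, Rest, NewGoal) ),
            Children),                   % all one-step resolvents of this goal
    append(Goals, Children, Frontier),   % enqueue the children behind the other goals
    bfs(Frontier).
body_to_list(true, []) :- !.
body_to_list((A, B), [A | Bs]) :- !, body_to_list(B, Bs).
body_to_list(A, [A]).
If the frontier is ever exhausted, bfs_solve/1 fails, corresponding to a finitely failed SLD tree.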
Example
Given the logic program in the Prolog language:
q :- p.
p.
and the top-level goal:
q.
the search space consists of a single branch, in which q is reduced to p which is reduced to the empty set of subgoals, signalling a successful computation. In this case, the program is so simple that there is no role for the selection function and no need for any search.
In clausal logic, the program is represented by the set of clauses:
q ∨ ¬p
p
and the top-level goal is represented by the goal clause with a single negative literal:
¬q
The search space consists of the single refutation:
¬q, ¬p, false
where false represents the empty clause.
If the following clause were added to the program:
q :- r.
then there would be an additional branch in the search space, whose leaf node r is a failure node. In Prolog, if this clause were added to the front of the original program, then Prolog would use the order in which the clauses are written to determine the order in which the branches of the search space are investigated. Prolog would try this new branch first, fail, and then backtrack to investigate the single branch of the original program and succeed.
If the clause
p :- p.
were now added to the program, then the search tree would contain an infinite branch. If this clause were tried first, then Prolog would go into an infinite loop and not find the successful branch.
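Putting these additions together, one possible ordering of the extended program can be written out in Prolog; the listing below is illustrative (the dynamic declaration for r is an added detail so that the call to r simply fails instead of raising an unknown-procedure error), and its comments restate the behaviour described above.
:- dynamic r/0.   % r has no clauses, so calling it fails
q :- r.           % failure branch: its leaf node r cannot be reduced further
q :- p.           % the original branch
p :- p.           % placed before the fact, this clause creates an infinite branch
p.
% ?- q.
% Prolog tries q :- r. first and fails, backtracks to q :- p., and then keeps
% reducing p to p via p :- p., never reaching the fact p: an infinite loop.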
SLDNF
SLDNF[3] is an extension of SLD resolution to deal with negation as failure. In SLDNF, goal clauses can contain negation as failure literals, say of the form not(p), which can be selected only if they contain no variables. When such a variable-free literal is selected, a subproof (or subcomputation) is attempted to determine whether there is an SLDNF refutation starting from the corresponding unnegated literal p as top clause. The selected subgoal not(p) succeeds if the subproof fails, and it fails if the subproof succeeds.
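In Prolog, negation as failure is written \+ . A small illustrative program (supplied here, not taken from the article) shows the intended behaviour and why the negated literal should be variable-free when it is selected.
flies(X) :- bird(X), \+ penguin(X).   % X is bound by bird(X) before \+ is selected
bird(tweety).
bird(pingu).
penguin(pingu).
% ?- flies(tweety).   succeeds: the subproof for penguin(tweety) fails finitely
% ?- flies(pingu).    fails: the subproof for penguin(pingu) succeeds
% Selecting \+ penguin(X) with X still unbound would be unsound, which is why
% SLDNF requires the negated literal to contain no variables when selected.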
References
1. ^ Robert Kowalski, Predicate Logic as a Programming Language, Memo 70, Department of Artificial Intelligence, University of Edinburgh, 1973. Also in Proceedings IFIP Congress, Stockholm, North Holland Publishing Co., 1974, pp. 569–574.
2. ^ Robert Kowalski and Donald Kuehner, Linear Resolution with Selection Function, Artificial Intelligence, Vol. 2, 1971, pp. 227–260.
3. ^ Krzysztof Apt and Maarten van Emden, Contributions to the Theory of Logic Programming, Journal of the Association for Computing Machinery, 1982.
- Jean Gallier, SLD-Resolution and Logic Programming, chapter 9 of Logic for Computer Science: Foundations of Automatic Theorem Proving, 2003 online revision (free to download); originally published by Wiley, 1986.
- John C. Shepherdson, SLDNF-Resolution with Equality, Journal of Automated Reasoning 8: 297–306, 1992; defines semantics with respect to which SLDNF-resolution with equality is sound and complete.
External links
- [1] Definition from the Free On-Line Dictionary of Computing
SLD resolution
Foundations
Definition and Properties
SLD resolution is the core inference mechanism employed in first-order logic programming with definite clause programs, which are finite sets of definite Horn clauses. A Horn clause is defined as a disjunction of literals containing at most one positive literal, while a definite Horn clause (or definite clause) is a Horn clause with exactly one positive literal as the head and a (possibly empty) conjunction of positive literals in the body.[7] SLD resolution serves as a refinement of the general resolution principle, specifically a special case of SL-resolution adapted for definite clauses, utilizing backward chaining to derive refutations from an initial goal clause expressed as a conjunction of atoms in the form ← A_1, ..., A_n, where each A_i is an atom.[7][8] Developed in the 1970s as a foundational element of logic programming, SLD resolution enables the operational execution of definite programs by systematically reducing goals through resolution steps, thereby supporting the procedural interpretation of logical specifications.[9] It is inherently restricted to definite clauses, ensuring that clause bodies consist solely of positive literals, which aligns with the positive logic underlying definite programs and facilitates computability.[7]
SLD resolution possesses key properties of soundness and refutation-completeness with respect to Horn clauses. Soundness guarantees that any derived fact or answer substitution obtained via an SLD-refutation is a logical consequence of the program, preserving semantic correctness. Refutation-completeness ensures that for any unsatisfiable set of Horn clauses (meaning any provable theorem) there exists an SLD-refutation demonstrating the inconsistency, thereby allowing the procedure to establish all derivable theorems within definite logic programs.[7]
Historical Development
SLD resolution emerged in the early 1970s as a refinement of earlier resolution-based theorem proving techniques, building on the foundational work of J. Alan Robinson, who introduced the general resolution principle in 1965 to enable machine-oriented automated deduction from first-order logic formulas. This general method, while complete, was computationally inefficient for practical applications, prompting researchers to develop specialized variants like linear input resolution to focus on ordered derivations and improve efficiency in programming contexts.[4]
A pivotal step toward SLD resolution came from efforts to provide a procedural interpretation of logical clauses, particularly Horn clauses, for computational purposes. In 1969, Robert Kowalski and Patrick J. Hayes explored semantic trees in automatic theorem proving, introducing ideas of procedural attachment that linked logical inference to executable procedures, laying groundwork for interpreting resolution steps as program execution. Building on this, Kowalski and Donald Kuehner formalized SL resolution in 1971 as a selective linear variant of resolution, incorporating a selection function to restrict inferences to relevant literals and enhance focus in top-down derivations from definite clauses.[4]
The inference rule underlying SLD resolution was introduced by Robert Kowalski in 1974, and the name "SLD resolution," standing for Selective Linear Definite clause resolution, was given by Maarten H. van Emden in a 1974 memo at the University of Edinburgh.[6] This development coincided with Alain Colmerauer's creation of Prolog in 1972, during which Kowalski's visit to Marseille influenced the adoption of a top-down resolution strategy based on SLD principles for natural language processing tasks.[10] The first implementation of SLD resolution occurred in the initial Prolog interpreter written in 1972 by Colmerauer and Philippe Roussel using Algol-W on an IBM 360, demonstrating its viability as an operational semantics for logic programs.[11] By 1973, refinements to Prolog incorporated SLD resolution more robustly, and David H.D. Warren's DEC-10 Prolog implementation in the mid-1970s further entrenched it as the theoretical foundation for logic programming languages, enabling efficient backtracking and unification in practical systems. This evolution marked a shift from general theorem proving to declarative programming, where SLD resolution provided a sound and complete mechanism for computing answers from logic programs.
Inference Mechanism
The SLD Inference Rule
The SLD inference rule, central to SLD resolution in logic programming, enables the systematic reduction of goals using definite clauses from a program P, which consists of Horn clauses of the form A ← B_1, ..., B_n, where A and each B_i are atomic formulas and n ≥ 0. A goal is represented as a conjunction ← L_1, ..., L_k, where the L_j are literals (atomic formulas in this context, as negation is not handled). The rule selects a literal L_j from the current goal and resolves it against a head of a clause in P.[7]
Formally, the rule is stated as follows: Given goal ← L_1, ..., L_j, ..., L_k and clause A ← B_1, ..., B_n (with variables renamed apart to avoid clashes), if a most general unifier θ of L_j and A exists, then the derived goal is ← (L_1, ..., L_{j-1}, B_1, ..., B_n, L_{j+1}, ..., L_k)θ, where the empty body (n = 0) simply removes L_j after substitution. This derivation step replaces the selected literal with the clause body under the most general unifier, soundly reducing the goal. The process relies on unification to match the selected literal with the clause head.[7]
An SLD derivation is a finite or infinite sequence of goals G_0, G_1, G_2, ... starting from an initial goal G_0, where each G_{i+1} is obtained from G_i by applying the SLD rule with some selection of literal and choice of clause. The linear nature of the rule ensures that only one new clause is introduced per step, focusing the search along a single path of goal reductions. Success occurs via an SLD refutation: a finite derivation ending in the empty goal (denoted □), with the computed answer substitution being the composition θ_1θ_2⋯θ_m restricted to the variables of G_0; failure arises if no unifiable clause exists for any remaining literal before reaching the empty goal.[7]
This inference mechanism, introduced by Kowalski to interpret predicate logic procedurally, underpins the operational semantics of languages like Prolog, ensuring goal-directed computation through repeated resolution steps.[5]
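The rule can be mirrored operationally. The following Prolog sketch (illustrative, assuming the program's clauses are visible to clause/2) performs a single SLD step on a goal represented as a list of atoms, always selecting the leftmost literal.
% sld_step(+Goal, -NewGoal): one SLD resolution step on a goal given as a
% list of atoms, using the leftmost selection rule.
sld_step([A | Rest], NewGoal) :-
    clause(A, Body),               % a program clause whose head unifies with A
    body_to_list(Body, Bs),
    append(Bs, Rest, NewGoal).     % replace A by the clause body; the unifier is applied
body_to_list(true, []) :- !.
body_to_list((A, B), [A | Bs]) :- !, body_to_list(B, Bs).
body_to_list(A, [A]).
An SLD derivation is then a sequence of sld_step/2 applications; it is a refutation when it reaches the empty goal [].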
Unification Process
In unification, the process seeks a substitution θ that renders two terms s and t identical, such that sθ = tθ. A most general unifier (MGU) is the least restrictive such substitution, meaning any other unifier σ satisfies σ = θλ for some substitution λ. This concept ensures that unification preserves generality in logical inferences, avoiding overly specific bindings that could limit further derivations.
The unification algorithm, originally formulated for resolution-based systems, operates by iteratively resolving differences between terms. It begins by initializing an empty substitution and identifying the disagreement set, the pairs of corresponding atomic subterms that differ. For each disagreement pair involving a variable x and a non-variable term t, if x does not occur in t (via the occurs check, which prevents cycles like x = f(x)), the algorithm binds x to t and applies this substitution to the terms, updating the disagreement set. The process repeats until no disagreements remain or a conflict arises (e.g., distinct constants). This yields the MGU if successful, and the occurs check is crucial for terminating in finite time and avoiding infinite structures.[12]
A representative example illustrates the process: unifying p(X, Y) and p(a, f(b)) proceeds by disagreeing on the first argument (X vs. a) and the second (Y vs. f(b)). Binding X to a resolves the first disagreement, then binding Y to f(b) resolves the second, with no occurs check violation, producing the MGU {X/a, Y/f(b)}. Such steps operate over the Herbrand universe, the set of all ground terms constructible from the function symbols in the language, ensuring unification aligns with the Herbrand base for semantic completeness. Disagreement sets guide efficient scanning, often implemented via pointer-based methods in practice.
In SLD resolution, unification is applied at each inference step to match a selected goal literal with a program clause head, yielding an MGU that instantiates the resolvent. Substitutions from successive steps are composed as θ_1θ_2, where θ_1 applies first followed by θ_2, accumulating bindings to form the overall computed answer while maintaining consistency. This composition, along with the occurs check, underpins the soundness of SLD derivations in definite logic programs.[13]
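In Prolog, =/2 computes a most general unifier but, for efficiency, omits the occurs check; the ISO builtin unify_with_occurs_check/2 includes it. The queries below are illustrative.
% ?- p(X, Y) = p(a, f(b)).
%    X = a, Y = f(b).          % the MGU {X/a, Y/f(b)} from the example above
% ?- unify_with_occurs_check(X, f(X)).
%    false.                    % the occurs check rejects the cyclic binding X = f(X)
% ?- X = f(X).
%    without the check, most Prolog systems build a cyclic (rational) term;
%    the exact behaviour is implementation-defined.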
Operational Aspects
Computational Interpretation
In logic programming, SLD resolution interprets definite clause programs computationally via backward chaining, where an initial goal representing a query is reduced step by step until it is either satisfied or fails, simulating a proof search from the query backward to the program's clauses as axioms.[9] This process aligns the declarative semantics of Horn clauses with a procedural mechanism for theorem proving and query answering, as originally proposed in the context of predicate logic as a programming language.[9]
The execution model of SLD resolution starts with an initial goal clause G_0 = ← A_1, ..., A_n, typically a conjunction of atoms to be derived from the program P. At each step, a literal A_i is selected from the current goal G_k, and a clause A ← B_1, ..., B_m from P is chosen such that there exists a most general unifier θ for A_i and A; the next goal is then G_{k+1} = ← (A_1, ..., A_{i-1}, B_1, ..., B_m, A_{i+1}, ..., A_n)θ.[14] This reduction continues until the goal becomes the empty clause (denoted □), indicating success and a refutation, or until no further resolution is possible, indicating failure. Along a successful derivation, substitutions θ_1, ..., θ_k are accumulated, and the computed answer substitution is their composition θ_1θ_2⋯θ_k, which binds variables in the original query to values satisfying it relative to P.[15]
The search space of all possible SLD derivations from G_0 forms the SLD tree, with nodes representing goals and edges corresponding to resolution steps under a fixed selection rule for literals.[3] SLD resolution is refutation-complete for definite programs, meaning that if the goal G_0 has a refutation from P (i.e., the queried conjunction is a logical consequence of P), then the SLD tree contains at least one finite successful branch leading to □, ensuring that an exhaustive fair search will find a proof if one exists.[16] This completeness property underpins the reliability of SLD as a query evaluation mechanism in systems like Prolog, though practical implementations often employ specific search strategies to navigate the tree efficiently.[17]
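As a small worked instance of this composition (supplied here for illustration), suppose the first resolution step yields θ_1 and the second yields θ_2:
$$\theta_1 = \{X/f(Y)\}, \qquad \theta_2 = \{Y/a\}, \qquad \theta_1\theta_2 = \{X/f(a),\ Y/a\}$$
If the initial goal G_0 mentions only X, the computed answer substitution reported is {X/f(a)}.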
Resolution Strategies
In SLD resolution, the search process explores the SLD derivation tree, where nodes represent goals or subgoals, and branches arise from the selection of literals within a goal and the choice of applicable clauses from the program. This tree structure encapsulates all possible derivations for a given query, with paths leading to success (empty goal), failure, or infinite loops depending on the program's properties. The efficiency and completeness of resolution depend on the strategy used to traverse this tree, as well as the selection functions that guide literal and clause choices.[7][3][1]
Common strategies include depth-first search, which is the default in systems like Prolog and employs chronological backtracking to explore branches sequentially. In this approach, the search commits to the deepest path first, selecting the leftmost literal in the goal (per Prolog's standard selection function) and trying clauses in top-down program order before backtracking upon failure. While space-efficient, depth-first search risks non-termination on programs with infinite branches, such as left-recursive rules, potentially missing solutions or looping indefinitely.[7][3][1]
Breadth-first search, by contrast, explores all goals at a given depth before proceeding deeper, ensuring completeness by systematically examining the entire finite SLD tree without favoring any path prematurely. However, it is memory-intensive, as it requires storing all pending subgoals, making it impractical for large programs. Iterative deepening combines elements of both by performing successive depth-first searches with increasing depth bounds, achieving completeness akin to breadth-first while maintaining the space efficiency of depth-first; this strategy is asymptotically optimal in time and space for brute-force searches in tree-structured spaces.[7][3][1]
Selection functions play a crucial role in these strategies, determining which literal in the current goal is resolved next (typically the leftmost for simplicity and efficiency in Prolog) and which clauses are considered, often in the order they appear in the program. For completeness, strategies must be fair, meaning every finite branch in the SLD tree is eventually explored, avoiding biases that could omit solutions; unfair selections, like those in standard depth-first, may fail this in infinite trees despite soundness.[7][3][1]
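Iterative deepening can be sketched as a depth-bounded variant of the vanilla meta-interpreter. The sketch below is an illustration: it assumes clause/2 access to the program and SWI-Prolog's between/3 with an infinite upper bound, and it re-finds shallow solutions at larger bounds, which a production version would suppress.
% id_solve/1: iterative deepening over the SLD tree.
id_solve(Goal) :-
    between(0, inf, Depth),            % depth bounds 0, 1, 2, ...
    bounded_solve(Goal, Depth).
% bounded_solve/2: depth-first search that fails once the bound is exhausted.
bounded_solve(true, _) :- !.
bounded_solve((A, B), D) :- !,
    bounded_solve(A, D),
    bounded_solve(B, D).
bounded_solve(Goal, D) :-
    D > 0,
    D1 is D - 1,
    clause(Goal, Body),
    bounded_solve(Body, D1).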
Extensions and Variants
SLDNF Resolution
SLDNF resolution extends the SLD resolution mechanism to accommodate negation as failure in normal logic programs, where clauses may include negative literals of the form not(P), with P being a conjunction of atoms. In this framework, positive literals are resolved using standard SLD steps, while a negative literal not(P) succeeds if the subgoal P fails to derive, meaning no refutation exists for P under SLD resolution. This operational treatment of negation allows logic programming systems like Prolog to handle incomplete information by inferring the absence of facts when exhaustive search yields no proof.[18]
The SLDNF inference rule mirrors SLD for positive goals but introduces branching for negation. Specifically, when a negative literal not(A) is selected (provided A is ground, to avoid floundering), the resolution attempts an SLD derivation for A; the negation succeeds (and the overall goal proceeds) only if this derivation finitely fails, yielding an empty refutation tree, whereas it fails if any successful derivation for A is found. For non-ground negative literals, SLDNF requires them to be ground before selection to prevent floundering, where the computation halts without committing to success or failure. This rule operationalizes negation as a search for failure rather than classical negation, enabling practical computation in top-down evaluators.
SLDNF resolution is sound with respect to Clark's completion semantics, correctly deriving only logical consequences of the program's completion, where predicate definitions are interpreted as if-and-only-if equivalences and negation as failure aligns with the closed world assumption. However, it is not complete for arbitrary normal programs due to issues like floundering and loops in negation; completeness holds for stratified programs, where negation dependencies form a layered structure without recursion through negation, ensuring the least Herbrand model coincides with the perfect model. In such cases, SLDNF captures the stable model semantics for ground queries without floundering, alternating between success and failure sets in a fixed-point manner. These properties make SLDNF the foundational operational semantics for negation in logic programming, though it requires careful program design to avoid incompleteness.[18][19]
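A minimal meta-interpreter sketch of this rule (illustrative, assuming clause/2 access to the program and using ground/1 to approximate the groundness requirement) extends SLD goal reduction with one clause for negative literals.
solve(true) :- !.
solve((A, B)) :- !, solve(A), solve(B).
solve(\+ A) :- !,
    ground(A),                 % only ground negative literals may be selected;
                               % a real system would delay or report floundering
    \+ solve(A).               % succeed exactly when the subproof for A fails
solve(A) :-
    clause(A, Body),
    solve(Body).
If the subproof for A does not terminate, neither does the negated call, so this sketch captures finite failure only when the underlying SLD tree for A is finite.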
Modern Extensions
Modern extensions of SLD resolution have addressed limitations in handling negation and recursion in logic programs, particularly the incompleteness of SLDNF for non-stratified negation, by integrating advanced semantics and inference mechanisms.[20] The stable model semantics, introduced by Gelfond and Lifschitz in 1988, provides a declarative foundation for non-monotonic reasoning in logic programs, enabling SLD-like procedures to compute stable models as fixed points of a reduct transformation, which extends the least model semantics of definite programs to disjunctive and normal programs.[21] This integration allows resolution-based solvers to identify multiple stable models, supporting answer set programming (ASP) where SLD resolution variants, such as credulous resolution, perform goal-directed search over answer sets without full grounding.[22]
In disjunctive logic programs, which generalize normal programs by allowing disjunctions in rule heads, SLD-like resolution has been extended through procedures such as near-Horn Prologs. These methods incorporate case analysis to handle disjunctive heads, mimicking SLD's linear derivation while ensuring completeness under the well-founded semantics for disjunctive programs.[23] For instance, the D-SLS resolution strategy builds on SLD by delaying loops and managing unfounded sets, providing a top-down evaluation that avoids the inefficiencies of bottom-up approaches in disjunctive settings.[24]
To mitigate infinite loops and redundant computations in recursive queries, tabling techniques via SLG resolution have become a cornerstone extension, implemented in systems like XSB Prolog. SLG resolution, a linear variant of SLS resolution, memoizes subgoals and answers during SLD-style derivations, computing the well-founded model for general logic programs by suspending and resuming calls as needed.[25] This approach ensures termination for a broader class of programs, including those with negation, and supports advanced features like call-answer subsumption for efficiency.[26]
Constraint Handling Rules (CHR) represent another extension, augmenting SLD resolution with multi-headed guarded rules for constraint simplification and propagation in logic programming. CHR programs execute via committed-choice semantics that interleave with Prolog's SLD resolution, allowing dynamic constraint stores to handle domains like numerical constraints or custom solvers without altering the core unification process.[27] This integration enables CHR to serve as a host language for implementing constraint logic programming extensions, where rules fire on the current store during SLD derivations.[28]
Recent developments have hybridized SLD resolution with probabilistic and machine learning paradigms to tackle uncertainty and scalability.
In probabilistic logic programming, systems like ProbLog extend SLD resolution to compute marginal probabilities over logic programs by annotating facts with probabilities and using SLD derivations to enumerate explanations, as in the ProbLog inference engine which lifts SLD trees into probabilistic queries.[29] For neural theorem proving, post-2020 works integrate deep learning with SLD, such as DeepProofLog, which employs stochastic SLD resolution to define distributions over proofs, guiding neural networks in learning resolution strategies for automated reasoning tasks.[30] These hybrids enhance SLD's applicability in uncertain domains like knowledge graph completion, where neural guidance prunes search spaces while preserving logical soundness.[31]
Applications and Examples
Illustrative Examples
To illustrate the mechanics of SLD resolution, consider a basic logic program consisting of the definite clause parent(X, Y) ← mother(X, Y). and the unit clause mother(ann, bob). The goal ← parent(ann, bob). is processed by selecting the literal parent(ann, bob) and unifying it with the clause head parent(X, Y), yielding the substitution θ = {X/ann, Y/bob}.[3] The resolvent is then ← mother(ann, bob)., obtained by applying θ to the clause body.[3] This new goal unifies directly with the fact mother(ann, bob)., resulting in the empty goal and a successful derivation with computed answer substitution { }.[3]
A step-by-step trace of this SLD derivation forms a linear path in the SLD tree:
- Initial goal: G_0 = ← parent(ann, bob).
- Select literal parent(ann, bob) and resolve with clause parent(X, Y) ← mother(X, Y). using θ_1 = {X/ann, Y/bob}, yielding G_1 = ← mother(ann, bob).
- Select literal mother(ann, bob) and resolve with unit clause mother(ann, bob). using θ_2 = { }, yielding the empty goal G_2 = ← .
In contrast, the failing goal ← parent(ann, sue). selects parent(ann, sue) and unifies it with the clause head parent(X, Y) using θ = {X/ann, Y/sue}, producing G_1 = ← mother(ann, sue). No program clause unifies with mother(ann, sue)., so the derivation terminates in failure with no computed substitution.[3]
To demonstrate resolution strategies, particularly depth-first search with backtracking, extend the program to include an additional clause parent(X, Y) ← father(X, Y). along with facts mother(ann, bob). and father(ann, sue). For the goal ← parent(ann, X)., depth-first traversal first selects parent(ann, X) and resolves with parent(X, Y) ← mother(X, Y)., whose variables are renamed apart to parent(X1, Y1) ← mother(X1, Y1), using θ_1 = {X1/ann, Y1/X}, yielding G_1 = ← mother(ann, X). This unifies with mother(ann, bob). using θ_2 = {X/bob}, succeeding with partial answer {X/bob}.[3] Backtracking then returns to the selection point, trying the alternative clause parent(X, Y) ← father(X, Y). (renamed apart to parent(X2, Y2) ← father(X2, Y2)) with θ_3 = {X2/ann, Y2/X}, yielding G_2 = ← father(ann, X). Unification with father(ann, sue). gives θ_4 = {X/sue}, producing another success with {X/sue}.[3] This process explores the SLD tree depth-first, enumerating all solutions via systematic backtracking over clause choices, as reproduced in the Prolog listing below.[3]
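The same behaviour can be reproduced directly at a Prolog prompt; the listing below restates the extended example program and the two answers obtained by backtracking.
parent(X, Y) :- mother(X, Y).
parent(X, Y) :- father(X, Y).
mother(ann, bob).
father(ann, sue).
% ?- parent(ann, X).
% X = bob ;     % first success, via the mother/2 clause
% X = sue.      % second success after backtracking, via the father/2 clause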
