Resolution (logic)
In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the (complement of the) Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem.
The resolution rule can be traced back to Davis and Putnam (1960);[1] however, their algorithm required trying all ground instances of the given formula. This source of combinatorial explosion was eliminated in 1965 by John Alan Robinson's syntactical unification algorithm, which allowed one to instantiate the formula during the proof "on demand" just as far as needed to keep refutation completeness.[2]
The clause produced by a resolution rule is sometimes called a resolvent.
Resolution in propositional logic
Resolution rule
The resolution rule in propositional logic is a single valid inference rule that produces a new clause implied by two clauses containing complementary literals. A literal is a propositional variable or the negation of a propositional variable. Two literals are said to be complements if one is the negation of the other (in the following, ¬c is taken to be the complement to c). The resulting clause contains all the literals that do not have complements. Formally:

  a₁ ∨ ... ∨ aₙ ∨ c        b₁ ∨ ... ∨ bₘ ∨ ¬c
  ─────────────────────────────────────────────
       a₁ ∨ ... ∨ aₙ ∨ b₁ ∨ ... ∨ bₘ

where
- all aᵢ, bⱼ, and c are literals,
- the dividing line stands for "entails".

The above may also be written as:

  (a₁ ∨ ... ∨ aₙ ∨ c), (b₁ ∨ ... ∨ bₘ ∨ ¬c) ⊢ a₁ ∨ ... ∨ aₙ ∨ b₁ ∨ ... ∨ bₘ

We have the following terminology:
- The clauses a₁ ∨ ... ∨ aₙ ∨ c and b₁ ∨ ... ∨ bₘ ∨ ¬c are the inference's premises,
- a₁ ∨ ... ∨ aₙ ∨ b₁ ∨ ... ∨ bₘ (the resolvent of the premises) is its conclusion,
- the literal c is the left resolved literal,
- the literal ¬c is the right resolved literal,
- c is the resolved atom or pivot.
The clause produced by the resolution rule is called the resolvent of the two input clauses. It is the principle of consensus applied to clauses rather than terms.[3]
When the two clauses contain more than one pair of complementary literals, the resolution rule can be applied (independently) for each such pair; however, the result is always a tautology.
Modus ponens can be seen as a special case of resolution (of a one-literal clause and a two-literal clause):

  p → q,  p
  ──────────
      q

is equivalent to

  ¬p ∨ q,  p
  ──────────
      q
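As a quick illustration, the modus ponens instance above is a single resolution step. In this sketch, clauses are modeled as Python sets of string literals with a leading `~` marking negation; this encoding is an assumption made purely for illustration:

```python
def resolve_on(c1, c2, atom):
    """Resolve two clauses (sets of literals) on the given atom.
    c1 must contain the atom, c2 its negation '~' + atom."""
    assert atom in c1 and '~' + atom in c2
    return (c1 - {atom}) | (c2 - {'~' + atom})

# Modus ponens: from p and p -> q (in clausal form: ~p ∨ q), derive q.
print(resolve_on({'p'}, {'~p', 'q'}, 'p'))   # {'q'}
```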
A resolution technique
When coupled with a complete search algorithm, the resolution rule yields a sound and complete algorithm for deciding the satisfiability of a propositional formula, and, by extension, the validity of a sentence under a set of axioms.
This resolution technique uses proof by contradiction and is based on the fact that any sentence in propositional logic can be transformed into an equivalent sentence in conjunctive normal form.[4] The steps are as follows.
- All sentences in the knowledge base and the negation of the sentence to be proved (the conjecture) are conjunctively connected.
- The resulting sentence is transformed into a conjunctive normal form with the conjuncts viewed as elements in a set, S, of clauses.[4]
- For example, (a ∨ b) ∧ (¬a ∨ c) gives rise to the set S = {a ∨ b, ¬a ∨ c}.
- The resolution rule is applied to all possible pairs of clauses that contain complementary literals. After each application of the resolution rule, the resulting sentence is simplified by removing repeated literals. If the clause contains complementary literals, it is discarded (as a tautology). If not, and if it is not yet present in the clause set S, it is added to S, and is considered for further resolution inferences.
- If after applying a resolution rule the empty clause is derived, the original formula is unsatisfiable (or contradictory), and hence it can be concluded that the initial conjecture follows from the axioms.
- If, on the other hand, the empty clause cannot be derived, and the resolution rule cannot be applied to derive any more new clauses, the conjecture is not a theorem of the original knowledge base.
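The steps above can be sketched as a saturation loop. In this minimal sketch, clauses are Python frozensets of string literals with `~` marking negation; the representation is an assumption for illustration, not part of the original presentation:

```python
from itertools import combinations

def complement(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    """All non-tautological resolvents of two clauses (frozensets of literals)."""
    out = []
    for lit in c1:
        if complement(lit) in c2:
            r = (c1 - {lit}) | (c2 - {complement(lit)})
            if not any(complement(l) in r for l in r):   # discard tautologies
                out.append(frozenset(r))
    return out

def unsatisfiable(clauses):
    """Saturate the clause set S; True iff the empty clause is derived."""
    s = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(s, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True      # empty clause: refutation found
                new.add(r)
        if new <= s:
            return False             # saturation reached: satisfiable
        s |= new

# {a ∨ b, ¬a ∨ b, ¬b} is unsatisfiable; {a ∨ b} alone is satisfiable.
print(unsatisfiable([{'a', 'b'}, {'~a', 'b'}, {'~b'}]))   # True
print(unsatisfiable([{'a', 'b'}]))                        # False
```

This is the naive version of the procedure; practical provers add the refinements discussed below.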
One instance of this algorithm is the original Davis–Putnam algorithm, which was later refined into the DPLL algorithm, removing the need for explicit representation of the resolvents.
This description of the resolution technique uses a set S as the underlying data structure to represent resolution derivations. Lists, trees, and directed acyclic graphs are other possible and common alternatives. Tree representations are more faithful to the fact that the resolution rule is binary. Together with a sequent notation for clauses, a tree representation also makes it easy to see how the resolution rule is related to a special case of the cut rule, restricted to atomic cut-formulas. However, tree representations are not as compact as set or list representations, because they explicitly show redundant subderivations of clauses that are used more than once in the derivation of the empty clause. Graph representations can be as compact in the number of clauses as list representations and they also store structural information regarding which clauses were resolved to derive each resolvent.
A simple example
A standard instance of the rule resolves a ∨ b and ¬a ∨ c into b ∨ c:

  a ∨ b        ¬a ∨ c
  ────────────────────
        b ∨ c

In plain language: Suppose a is false. In order for the premise a ∨ b to be true, b must be true. Alternatively, suppose a is true. In order for the premise ¬a ∨ c to be true, c must be true. Therefore, regardless of the falsehood or veracity of a, if both premises hold, then the conclusion b ∨ c is true.
Resolution in first-order logic
The resolution rule can be generalized to first-order logic to:[5]

  Γ₁ ∨ L₁        Γ₂ ∨ ¬L₂
  ──────────────────────── φ
        (Γ₁ ∨ Γ₂)φ

where φ is a most general unifier of L₁ and L₂, and Γ₁ and Γ₂ have no common variables.
Example
The clauses P(x) ∨ Q(x) and ¬P(b) can apply this rule with [x/b] as unifier:

  P(x) ∨ Q(x)        ¬P(b)
  ───────────────────────── [x/b]
           Q(b)

Here x is a variable and b is a constant.
Here we see that
- The clauses P(x) ∨ Q(x) and ¬P(b) are the inference's premises
- Q(b) (the resolvent of the premises) is its conclusion.
- The literal P(x) is the left resolved literal,
- The literal ¬P(b) is the right resolved literal,
- P(b) is the resolved atom or pivot.
- [x/b] is the most general unifier of the resolved literals.
Informal explanation
In first-order logic, resolution condenses the traditional syllogisms of logical inference down to a single rule.
To understand how resolution works, consider the following example syllogism of term logic:
- All Greeks are Europeans.
- Homer is a Greek.
- Therefore, Homer is a European.
Or, more generally:
- ∀X P(X) → Q(X)
- P(a)
- Therefore, Q(a)
To recast the reasoning using the resolution technique, first the clauses must be converted to conjunctive normal form (CNF). In this form, all quantification becomes implicit: universal quantifiers on variables (X, Y, ...) are simply omitted as understood, while existentially-quantified variables are replaced by Skolem functions.
- ¬P(X) ∨ Q(X)
- P(a)
- Therefore, Q(a)
So the question is, how does the resolution technique derive the last clause from the first two? The rule is simple:
- Find two clauses containing the same predicate, where it is negated in one clause but not in the other.
- Perform a unification on the two predicates. (If the unification fails, you made a bad choice of predicates. Go back to the previous step and try again.)
- If any unbound variables which were bound in the unified predicates also occur in other predicates in the two clauses, replace them with their bound values (terms) there as well.
- Discard the unified predicates, and combine the remaining ones from the two clauses into a new clause, also joined by the "∨" operator.
To apply this rule to the above example, we find the predicate P occurs in negated form
- ¬P(X)
in the first clause, and in non-negated form
- P(a)
in the second clause. X is an unbound variable, while a is a bound value (term). Unifying the two produces the substitution
- X ↦ a
Discarding the unified predicates, and applying this substitution to the remaining predicates (just Q(X), in this case), produces the conclusion:
- Q(a)
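The four steps just listed can be traced concretely in code. In this small hand-worked sketch, literals are encoded as (sign, predicate, args) triples with uppercase strings for variables; the encoding is an assumption made for illustration. It discards the unified pair and applies the substitution X ↦ a to the remaining literals:

```python
# Clauses for the syllogism: ¬P(X) ∨ Q(X) and P(a).
c1 = [('-', 'P', ('X',)), ('+', 'Q', ('X',))]
c2 = [('+', 'P', ('a',))]

theta = {'X': 'a'}                               # unifier of P(X) and P(a)

def apply_subst(theta, lit):
    """Replace bound variables in a literal's argument tuple."""
    sign, pred, args = lit
    return (sign, pred, tuple(theta.get(t, t) for t in args))

# Discard the unified predicates, apply the substitution to what remains,
# and combine the leftovers of both clauses into the resolvent.
resolvent = [apply_subst(theta, l) for l in c1 if l != ('-', 'P', ('X',))] \
          + [apply_subst(theta, l) for l in c2 if l != ('+', 'P', ('a',))]
print(resolvent)   # [('+', 'Q', ('a',))], i.e. the conclusion Q(a)
```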
For another example, consider the syllogistic form
- All Cretans are islanders.
- All islanders are liars.
- Therefore all Cretans are liars.
Or more generally,
- ∀X P(X) → Q(X)
- ∀X Q(X) → R(X)
- Therefore, ∀X P(X) → R(X)
In CNF, the antecedents become:
- ¬P(X) ∨ Q(X)
- ¬Q(Y) ∨ R(Y)
(The variable in the second clause was renamed to make it clear that variables in different clauses are distinct.)
Now, unifying Q(X) in the first clause with ¬Q(Y) in the second clause means that X and Y become the same variable anyway. Substituting this into the remaining clauses and combining them gives the conclusion:
- ¬P(X) ∨ R(X)
Factoring
The resolution rule, as defined by Robinson, also incorporated factoring, which unifies two literals in the same clause, before or during the application of resolution as defined above. The resulting inference rule is refutation-complete,[6] in that a set of clauses is unsatisfiable if and only if there exists a derivation of the empty clause using only resolution, enhanced by factoring.
An example for an unsatisfiable clause set for which factoring is needed to derive the empty clause is:

  { P(u) ∨ P(f(u)),  ¬P(v) ∨ ¬P(f(w)) }

Since each clause consists of two literals, so does each possible resolvent. Therefore, by resolution without factoring, the empty clause can never be obtained. Using factoring, it can be obtained e.g. as follows:[7]

1. Factor ¬P(v) ∨ ¬P(f(w)) with v ↦ f(w), giving ¬P(f(w)).
2. Resolve P(u) ∨ P(f(u)) with ¬P(f(w)) on P(f(u)), using w ↦ u, giving P(u).
3. Resolve P(u) with a fresh copy ¬P(f(w′)) of the factored clause, using u ↦ f(w′), giving the empty clause.
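Factoring itself is mechanical once a unifier is known. In this sketch, literals are encoded as (sign, predicate, argument) triples with uppercase strings for variables, an encoding assumed purely for illustration; the clause chosen here is a simple generic instance, not the set above:

```python
# P(X) ∨ P(Y) ∨ Q(X): the two P-literals unify with mgu {Y ↦ X}.
clause = [('+', 'P', 'X'), ('+', 'P', 'Y'), ('+', 'Q', 'X')]
theta = {'Y': 'X'}

factored = []
for sign, pred, arg in clause:
    lit = (sign, pred, theta.get(arg, arg))   # apply the substitution
    if lit not in factored:                   # collapse duplicate literals
        factored.append(lit)
print(factored)   # [('+', 'P', 'X'), ('+', 'Q', 'X')], i.e. P(X) ∨ Q(X)
```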
Non-clausal resolution
Generalizations of the above resolution rule have been devised that do not require the originating formulas to be in clausal normal form.[8][9][10][11][12][13]
These techniques are useful mainly in interactive theorem proving where it is important to preserve human readability of intermediate result formulas. Besides, they avoid combinatorial explosion during transformation to clause-form,[10]: 98 and sometimes save resolution steps.[13]: 425
Non-clausal resolution in propositional logic
For propositional logic, Murray[9]: 18 and Manna and Waldinger[10]: 98 use the rule

  F[G],  H[G]  ⊢  F[⊤] ∨ H[⊥],

where G denotes an arbitrary formula, F[G] denotes a formula F containing G as a subformula, and F[⊤] is built by replacing in F[G] every occurrence of G by ⊤; likewise, H[⊥] is built by replacing every occurrence of G in H[G] by ⊥. The resolvent is intended to be simplified using rules like G ∧ ⊤ ⟺ G, etc. In order to prevent generating useless trivial resolvents, the rule shall be applied only when G has at least one "negative" and "positive"[14] occurrence in F and H, respectively. Murray has shown that this rule is complete if augmented by appropriate logical transformation rules.[10]: 103
Traugott uses the rule

  F[G⁺, G⁻],  H[G]  ⊢  F[H[⊤], ¬H[⊥]],

where the exponents of G indicate the polarity of its occurrences. While H[⊤] and H[⊥] are built as before, the formula F[H[⊤], ¬H[⊥]] is obtained by replacing each positive and each negative occurrence of G in F with H[⊤] and ¬H[⊥], respectively. Similar to Murray's approach, appropriate simplifying transformations are to be applied to the resolvent. Traugott proved his rule to be complete, provided ∧, ∨, →, ¬ are the only connectives used in formulas.[12]: 398–400
Traugott's resolvent is stronger than Murray's.[12]: 395 Moreover, it does not introduce new binary junctors, thus avoiding a tendency towards clausal form in repeated resolution. However, formulas may grow longer when a small G is replaced multiple times with a larger H[⊤] and/or ¬H[⊥].[12]: 398
Propositional non-clausal resolution example
[edit]As an example, starting from the user-given assumptions
the Murray rule can be used as follows to infer a contradiction:[15]
For the same purpose, the Traugott rule can be used as follows:[12]: 397
From a comparison of both deductions, the following issues can be seen:
- Traugott's rule may yield a sharper resolvent: compare (5) and (10), which both resolve (1) and (2) on the same formula.
- Murray's rule introduced 3 new disjunction symbols: one each in (5), (6), and (7), while Traugott's rule did not introduce any new symbol; in this sense, Traugott's intermediate formulas resemble the user's style more closely than Murray's.
- Due to the latter issue, Traugott's rule can take advantage of the implication in assumption (4), using a non-atomic formula as G in step (12). Using Murray's rule, a semantically equivalent formula was obtained as (7); however, it could not be used as G due to its syntactic form.
Non-clausal resolution in first-order logic
For first-order predicate logic, Murray's rule is generalized to allow distinct, but unifiable, subformulas G₁ and G₂ of F and H, respectively. If φ is the most general unifier of G₁ and G₂, then the generalized resolvent is Fφ[⊤] ∨ Hφ[⊥]. While the rule remains sound if a more special substitution φ is used, no such rule applications are needed to achieve completeness.[citation needed]
Traugott's rule is generalized to allow several pairwise distinct subformulas G₁, ..., Gₘ of F and Gₘ₊₁, ..., Gₙ of H, as long as G₁, ..., Gₙ have a common most general unifier, say φ. The generalized resolvent is obtained after applying φ to the parent formulas, thus making the propositional version applicable. Traugott's completeness proof relies on the assumption that this fully general rule is used;[12]: 401 it is not clear whether his rule would remain complete if restricted to a single subformula from each parent.[16]
Paramodulation
Paramodulation is a related technique for reasoning on sets of clauses where the predicate symbol is equality. It generates all "equal" versions of clauses, except reflexive identities. The paramodulation operation takes a positive "from" clause, which must contain an equality literal. It then searches an "into" clause with a subterm that unifies with one side of the equality. The subterm is then replaced by the other side of the equality. The general aim of paramodulation is to reduce the system to atoms, reducing the size of the terms when substituting.[17]
Implementations
See also
- Condensed detachment — an earlier version of resolution
- Inductive logic programming
- Inverse resolution
- Logic programming
- Method of analytic tableaux
- SLD resolution
Notes
- ^ Davis, Martin; Putnam, Hilary (1960). "A Computing Procedure for Quantification Theory". J. ACM. 7 (3): 201–215. doi:10.1145/321033.321034. S2CID 31888376. Here: p. 210, "III. Rule for Eliminating Atomic Formulas".
- ^ Robinson 1965
- ^ D.E. Knuth, The Art of Computer Programming 4A: Combinatorial Algorithms, part 1, p. 539
- ^ a b Leitsch 1997, p. 11 "Before applying the inference method itself, we transform the formulas to quantifier-free conjunctive normal form."
- ^ Arís, Enrique P.; González, Juan L.; Rubio, Fernando M. (2005). Lógica Computacional. Ediciones Paraninfo, S.A. ISBN 9788497321822.
- ^ Russell, Stuart J.; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall. p. 350. ISBN 978-0-13-604259-4.
- ^ Duffy, David A. (1991). Principles of Automated Theorem Proving. Wiley. ISBN 978-0-471-92784-6. See p. 77. The example here is slightly modified to demonstrate a not-trivial factoring substitution. For clarity, the factoring step, (5), is shown separately. In step (6), the fresh variable was introduced to enable unifiability of (5) and (6), needed for (7).
- ^ Wilkins, D. (1973). QUEST: A Non-Clausal Theorem Proving System (Master's Thesis). University of Essex.
- ^ a b Murray, Neil V. (February 1979). A Proof Procedure for Quantifier-Free Non-Clausal First Order Logic (Technical report). Electrical Engineering and Computer Science, Syracuse University. 39. (Cited from Manna, Waldinger, 1980 as: "A Proof Procedure for Non-Clausal First-Order Logic", 1978)
- ^ a b c d Manna, Zohar; Waldinger, Richard (January 1980). "A Deductive Approach to Program Synthesis". ACM Transactions on Programming Languages and Systems. 2: 90–121. doi:10.1145/357084.357090. S2CID 14770735.
- ^ Murray, N.V. (1982). "Completely Non-Clausal Theorem Proving". Artificial Intelligence. 18: 67–85. doi:10.1016/0004-3702(82)90011-x.
- ^ a b c d e f Traugott, J. (1986). "Nested Resolution". 8th International Conference on Automated Deduction. CADE 1986. LNCS. Vol. 230. Springer. pp. 394–403. doi:10.1007/3-540-16780-3_106. ISBN 978-3-540-39861-5.
- ^ a b Schmerl, U.R. (1988). "Resolution on Formula-Trees". Acta Informatica. 25 (4): 425–438. doi:10.1007/bf02737109. S2CID 32702782. Summary
- ^ These notions, called "polarities", refer to the number of explicit or implicit negations found above G. For example, G occurs positive in G and in H ∨ G, negative in ¬G and in G → H, and in both polarities in G ↔ H.
- ^ "" is used to indicate simplification after resolution.
- ^ Here, "" denotes syntactical term equality modulo renaming
- ^ Nieuwenhuis, Robert; Rubio, Alberto (2001). "7. Paramodulation-Based Theorem Proving" (PDF). In Robinson, Alan J.A.; Voronkov, Andrei (eds.). Handbook of Automated Reasoning. Elsevier. pp. 371–444. ISBN 978-0-08-053279-0.
References
- Robinson, J. Alan (1965). "A Machine-Oriented Logic Based on the Resolution Principle". Journal of the ACM. 12 (1): 23–41. doi:10.1145/321250.321253. S2CID 14389185.
- Leitsch, Alexander (1997). The Resolution Calculus. Texts in Theoretical Computer Science. An EATCS Series. Springer. ISBN 978-3-642-60605-2.
- Gallier, Jean H. (1986). Logic for Computer Science: Foundations of Automatic Theorem Proving. Harper & Row.
- Chang, Chin-Liang; Lee, Richard Char-Tung (1987). Symbolic Logic and Mechanical Theorem Proving. Academic Press. ISBN 0-12-170350-9.
External links
- Alex Sakharov. "Resolution Principle". MathWorld.
- Alex Sakharov. "Resolution". MathWorld.
Propositional Logic
Resolution Rule
In propositional logic, clauses are represented as disjunctions of literals, where a literal is either a propositional atom p or its negation ¬p.[1] The resolution rule is a binary inference rule that takes two such clauses containing complementary literals—a literal p and its negation ¬p—and derives a single resolvent clause by removing the complementary pair and combining the remaining disjunctions.[1][5] Formally, given clauses A ∨ p and B ∨ ¬p, where p is a literal and A and B are disjunctions of literals (possibly empty), the rule infers the resolvent A ∨ B:

  A ∨ p        B ∨ ¬p
  ────────────────────
        A ∨ B

This resolvent is unique and obtained by resolving away the complementary literals p and ¬p, ensuring no duplication of literals in the resulting clause.[1][5] Semantically, the resolution rule is sound: if the two parent clauses are satisfied under some interpretation, then the resolvent is also satisfied. Equivalently, the rule preserves unsatisfiability in that if a set of clauses containing the parents is unsatisfiable, then the set augmented with the resolvent remains unsatisfiable.[1][5] This follows from the validity of the inference, as assuming the resolvent false leads to a contradiction in at least one parent under any truth assignment.[1]

Proof Procedure
The resolution proof procedure in propositional logic operates as a refutation system, aimed at demonstrating the unsatisfiability of a set of clauses derived from a knowledge base and the negation of a query formula. To establish that a knowledge base KB entails a formula φ (denoted KB ⊨ φ), the procedure first negates φ and incorporates ¬φ into KB, then seeks to derive a contradiction in the form of the empty clause. This method leverages the resolution rule to systematically generate new clauses until either the empty clause is produced—proving unsatisfiability and thus entailment—or the set of clauses reaches saturation, meaning no further resolutions are possible, indicating satisfiability.[6][7]

The procedure begins with converting the formulas in KB ∪ {¬φ} to clausal normal form, which is a conjunction of disjunctions of literals (clauses). This involves multiple steps: first, transform each formula to negation normal form (NNF) by eliminating implications (α → β becomes ¬α ∨ β), eliminating equivalences (α ↔ β becomes (¬α ∨ β) ∧ (¬β ∨ α)), and pushing negations inward using De Morgan's laws (e.g., ¬(α ∧ β) becomes ¬α ∨ ¬β) while removing double negations (¬¬α becomes α); second, distribute disjunctions over conjunctions to achieve conjunctive normal form (CNF), such as converting α ∨ (β ∧ γ) to (α ∨ β) ∧ (α ∨ γ). If the conversion yields the empty clause directly, the set is unsatisfiable. The resulting clauses form the initial set S, from which the iterative resolution process proceeds: repeatedly select pairs of clauses containing complementary literals (a literal and its negation), apply the resolution rule to derive a new clause by combining the remaining literals, and add it to S until the empty clause appears or saturation is reached.[5][7]

Resolution derivations are structured as trees or directed acyclic graphs (DAGs), where leaf nodes represent initial clauses from S, internal nodes denote resolvents derived from their parent clauses via resolution, and the root is the empty clause if a refutation exists. Each derived clause expands the proof space, with the search space encompassing all possible resolvents generatable from S; this space is finite for propositional logic but can grow exponentially in the number of literals, necessitating efficient implementation in automated theorem provers. The procedure typically employs binary resolution, where each inference step resolves exactly two clauses, though strategies like linear resolution—which chains resolutions sequentially from a single path of ancestors—can prune the search space for more directed proofs.[5][7]

Example
To illustrate resolution in propositional logic, consider a refutation proof to show that a knowledge base consisting of P, P → Q, and Q → R entails R.[5] After adding the negated goal ¬R, the clausal normal form is:
- Clause 1: P
- Clause 2: ¬P ∨ Q
- Clause 3: ¬Q ∨ R
- Clause 4: ¬R

1. Resolve Clause 1 (P) and Clause 2 (¬P ∨ Q). The complementary literals are P and ¬P. Resolvent (Clause 5): Q.
2. Resolve Clause 5 (Q) and Clause 3 (¬Q ∨ R). The complementary literals are Q and ¬Q. Resolvent (Clause 6): R.
3. Resolve Clause 6 (R) and Clause 4 (¬R). The complementary literals are R and ¬R. Resolvent: the empty clause.

The derivation can be drawn as a tree:
- Root: Clauses 1, 2, 3, 4
- Resolve 1 and 2 → Clause 5
- Resolve 5 and 3 → Clause 6
- Resolve 6 and 4 → empty clause
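The clausal-form conversion used before such refutations can also be sketched in code. This minimal version for propositional formulas performs the NNF and distribution steps described earlier; the nested-tuple encoding of formulas is an assumption made for illustration:

```python
def nnf(f):
    """Negation normal form for formulas built from atoms (strings),
    ('not', a), ('and', a, b), ('or', a, b), and ('imp', a, b)."""
    if isinstance(f, str):
        return f
    op = f[0]
    if op == 'imp':                                  # a -> b  becomes  ¬a ∨ b
        return nnf(('or', ('not', f[1]), f[2]))
    if op in ('and', 'or'):
        return (op, nnf(f[1]), nnf(f[2]))
    g = f[1]                                         # op == 'not'
    if isinstance(g, str):
        return f                                     # negated atom: already NNF
    if g[0] == 'not':                                # ¬¬a  becomes  a
        return nnf(g[1])
    if g[0] == 'imp':                                # ¬(a -> b)  becomes  a ∧ ¬b
        return nnf(('and', g[1], ('not', g[2])))
    dual = 'or' if g[0] == 'and' else 'and'          # De Morgan
    return (dual, nnf(('not', g[1])), nnf(('not', g[2])))

def cnf(f):
    """Distribute ∨ over ∧ in a formula already in NNF."""
    if isinstance(f, str) or f[0] == 'not':
        return f
    if f[0] == 'and':
        return ('and', cnf(f[1]), cnf(f[2]))
    a, b = cnf(f[1]), cnf(f[2])                      # f = a ∨ b
    if isinstance(a, tuple) and a[0] == 'and':
        return ('and', cnf(('or', a[1], b)), cnf(('or', a[2], b)))
    if isinstance(b, tuple) and b[0] == 'and':
        return ('and', cnf(('or', a, b[1])), cnf(('or', a, b[2])))
    return ('or', a, b)

# p -> (q ∧ r) becomes (¬p ∨ q) ∧ (¬p ∨ r):
print(cnf(nnf(('imp', 'p', ('and', 'q', 'r')))))
```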
First-Order Logic
Clausal Normal Form
In first-order logic, a clause is defined as a disjunction of literals, where each literal is either an atomic formula or the negation of an atomic formula, and the variables in the clause are implicitly universally quantified.[8] This representation allows clauses to serve as the fundamental units for resolution-based inference, treating variables as universally closed to ensure the formula holds for all interpretations of those variables.[9] The universal closure of a formula, obtained by prefixing universal quantifiers over all free variables, is essential prior to transformation, as it guarantees the resulting clauses capture the full semantic content without existential commitments beyond Skolemization.[10]

To transform a first-order formula into clausal normal form (CNF), which is a conjunction of clauses, the process follows a sequence of syntactic equivalences. First, negate the formula if proving unsatisfiability (for refutation), eliminate implications by replacing α → β with ¬α ∨ β, and remove equivalences by rewriting α ↔ β as (α → β) ∧ (β → α).[11] Next, apply negation normal form by pushing negations inward using De Morgan's laws and quantifier dualities, such as ¬∀x φ ≡ ∃x ¬φ and ¬∃x φ ≡ ∀x ¬φ.[8] Then, achieve prenex normal form by moving all quantifiers to the front, renaming variables to avoid clashes during the process.[10] Following prenex form, Skolemize to eliminate existential quantifiers: replace each existentially quantified variable with a Skolem constant (if no enclosing universals) or a Skolem function depending on the preceding universal variables, ensuring the formula's satisfiability is preserved.[11] Drop the remaining universal quantifiers, as variables are understood to be universally bound. Finally, convert to CNF by distributing disjunctions over conjunctions, yielding a set of clauses such as {¬P(x) ∨ Q(x), ¬Q(y) ∨ R(y)}.[8] In propositional logic, this reduces to the special case without quantifiers or Skolem functions.[9]

The Herbrand interpretation plays a crucial role in the semantics of clausal normal form by providing a ground-based framework for evaluating satisfiability. For a set of clauses over a signature, the Herbrand universe consists of all ground terms constructible from the constants and functions, while the Herbrand base is the set of all ground atoms over this universe; an interpretation is then a subset of the base assigning truth values to ground atoms.[12] This grounds the variable-containing clauses by considering all possible instantiations, enabling resolution's completeness via Herbrand's theorem, which equates unsatisfiability of the clauses to the absence of a Herbrand model.[13]

Unification
In first-order logic, unification is the process of finding a substitution that makes two terms or literals syntactically identical.[14] This substitution replaces variables with terms, enabling the matching of expressions that differ only in their variable bindings. Unification is crucial for handling variables in resolution-based inference, allowing generalization over specific instances.[14]

A substitution σ is a most general unifier (mgu) of two terms s and t if applying σ makes sσ = tσ, and every other unifier θ of s and t satisfies θ = σλ for some substitution λ.[14] The mgu is unique up to variable renaming and represents the most general way to unify the terms without introducing unnecessary specializations. For example, consider unifying the terms f(x, b) and f(a, y), where a and b are constants; the mgu is {x ↦ a, y ↦ b}, as it substitutes x with a and y with b to yield f(a, b) for both.[14] If the second term were instead a constant c (arity 0), unification would fail due to the arity mismatch between f and c.[15]

The unification algorithm, introduced by Robinson, operates on a set of disagreement pairs and builds the mgu incrementally through three main operations: decomposition, elimination, and occur-check.[14] It can be described recursively as follows:
- Base cases: If both terms are identical constants or the same variable, they unify with the empty substitution. If one is a variable not occurring in the other term, substitute the other term for the variable.
- Decomposition: If both terms are applications of the same function symbol with the same arity, recursively unify corresponding arguments.
- Elimination: Apply the current substitution to simplify terms by replacing variables with their bindings.
- Occur-check: If a variable is to be unified with a term containing , fail to prevent infinite structures (e.g., ).
In pseudocode, unify(s, t), returning the mgu or failure, is:

function unify(s, t):
    if s == t: return {}
    if is_variable(s):
        if occurs(s, t): return fail
        return {s: t}
    if is_variable(t):
        if occurs(t, s): return fail
        return {t: s}
    if functor(s) != functor(t) or arity(s) != arity(t): return fail
    σ = {}
    for i in 1 to arity(s):
        σ_i = unify(apply(σ, arg_i(s)), apply(σ, arg_i(t)))
        if σ_i == fail: return fail
        σ = compose(σ, σ_i)
    return σ
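A runnable version of this algorithm can be sketched in Python, under an assumed term representation (compound terms as tuples (functor, arg₁, ..., argₙ), variables as capitalized strings, constants as lowercase strings); this encoding is illustrative, not canonical:

```python
def is_variable(term):
    return isinstance(term, str) and term[:1].isupper()   # e.g. 'X', 'Y'

def occurs(var, term):
    """Occur-check: does var appear anywhere inside term?"""
    if var == term:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, arg) for arg in term[1:])
    return False

def substitute(sigma, term):
    """Apply substitution sigma to term."""
    if is_variable(term):
        return substitute(sigma, sigma[term]) if term in sigma else term
    if isinstance(term, tuple):
        return (term[0],) + tuple(substitute(sigma, a) for a in term[1:])
    return term

def unify(s, t):
    """Return the mgu of s and t as a dict, or None on failure."""
    if s == t:
        return {}
    if is_variable(s):
        return None if occurs(s, t) else {s: t}
    if is_variable(t):
        return None if occurs(t, s) else {t: s}
    if not (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return None                      # functor or arity mismatch
    sigma = {}
    for a, b in zip(s[1:], t[1:]):       # decompose argument by argument
        step = unify(substitute(sigma, a), substitute(sigma, b))
        if step is None:
            return None
        sigma = {v: substitute(step, u) for v, u in sigma.items()}
        sigma.update(step)               # compose the partial substitutions
    return sigma

# Unify f(X, g(Y)) with f(g(z), g(a)): X ↦ g(z), Y ↦ a.
print(unify(('f', 'X', ('g', 'Y')), ('f', ('g', 'z'), ('g', 'a'))))
print(unify('X', ('f', 'X')))            # None: fails the occur-check
```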
Factoring and Resolution
In first-order resolution, factoring is an inference rule that eliminates redundancy in a clause by unifying and collapsing repeated literals of the same polarity. Specifically, given a clause L₁ ∨ L₂ ∨ C, where L₁ and L₂ are literals of the same polarity that unify with most general unifier (mgu) σ, the factored clause (L₁ ∨ C)σ is derived. This step ensures that clauses contain at most one instance of each unifiable literal before resolution, preventing the generation of trivial or duplicate resolvents. For example, the clause P(x) ∨ P(y) ∨ Q(x) factors to P(x) ∨ Q(x) under the substitution {y ↦ x}.[3]

The resolution rule in first-order logic extends the propositional version by incorporating unification to handle variables. Given two clauses L₁ ∨ C₁ and ¬L₂ ∨ C₂, where L₁ and L₂ are literals that unify with mgu σ, the resolvent clause (C₁ ∨ C₂)σ is inferred, provided factoring has been applied to the parent clauses first to standardize the literals. This binary inference combines the remaining disjuncts after substituting variables to make the resolved literals complementary, enabling refutation proofs through repeated applications until the empty clause is derived. Unification, as a prerequisite, finds the most general substitution that equates terms, distinguishing first-order resolution from its propositional counterpart.[2]

Ordered resolution refines the basic rule to improve efficiency by restricting inferences based on a well-founded term ordering and literal selection. In this variant, resolution occurs only if the resolved literal in the first premise is maximal with respect to the ordering among all atoms in that clause, and no selected atoms appear in its non-resolved part; similarly, the resolved literal must be selected (or maximal if none are) in the second premise. Literal selection is defined by a function that assigns to each clause a (possibly empty) multiset of its negative literals, guiding the prover to resolve primarily on these to reduce the search space while preserving completeness.[3]

Unlike propositional resolution, which operates on ground clauses without variables and thus requires no unification or factoring, first-order resolution demands these mechanisms to manage variable instantiations and avoid redundant inferences from non-ground clauses. Factoring addresses literal repetition arising from variable differences, while unification enables generalization across terms, making the method complete for first-order unsatisfiability.[2]

Example
To illustrate resolution in first-order logic, consider a refutation proof for the goal Q(a) given a knowledge base consisting of P(a) and ∀x (P(x) → Q(x)). After adding the negated goal, the clausal normal form is:
- Clause 1: P(a)
- Clause 2: ¬P(x) ∨ Q(x)
- Clause 3: ¬Q(a)

1. Resolve Clause 3 (¬Q(a)) and Clause 2 (¬P(x) ∨ Q(x)). The complementary literals are ¬Q(a) and Q(x). The most general unifier (MGU) is {x ↦ a}. Applying it to the remaining literal in Clause 2 yields ¬P(a). Resolvent (Clause 4): ¬P(a).
2. Resolve Clause 4 (¬P(a)) and Clause 1 (P(a)). The complementary literals are ¬P(a) and P(a). The MGU is the empty substitution. Applying it yields no remaining literals. Resolvent: the empty clause.
