List of rules of inference
This is a list of rules of inference, logical laws that relate to mathematical formulae.
Introduction
Rules of inference are syntactic transformation rules which one can use to infer a conclusion from a premise, creating an argument. A set of rules is complete if it can be used to infer every valid conclusion, and sound if it never infers an invalid conclusion. A sound and complete set of rules need not include every rule in the following list: many of the rules are redundant and can be proven from the others.
Discharge rules permit inference from a subderivation based on a temporary assumption. Below, the notation

φ ⊢ ψ

indicates such a subderivation from the temporary assumption φ to ψ.
Rules for propositional calculus
Rules for negations
- Reductio ad absurdum (related to the law of excluded middle): if assuming ¬φ leads to a contradiction, infer φ.
Rules for conditionals
- Modus ponens (a type of conditional elimination): from φ → ψ and φ, infer ψ.
- Modus tollens (a type of conditional elimination): from φ → ψ and ¬ψ, infer ¬φ.
Rules for conjunctions
- Conjunction introduction: from φ and ψ, infer φ ∧ ψ.
- Conjunction elimination (simplification): from φ ∧ ψ, infer φ (or, symmetrically, ψ).
Rules for disjunctions
- Disjunction introduction (addition): from φ, infer φ ∨ ψ.
- Disjunctive syllogism: from φ ∨ ψ and ¬φ, infer ψ.
- Disjunction elimination (case analysis): from φ ∨ ψ, a derivation of χ from φ, and a derivation of χ from ψ, infer χ.
Rules for biconditionals
- Biconditional introduction: from φ → ψ and ψ → φ, infer φ ↔ ψ.
- Biconditional elimination: from φ ↔ ψ, infer φ → ψ (or ψ → φ).
Rules of classical predicate calculus
In the following rules, φ[t/x] is exactly like φ except for having the term t wherever φ has the free variable x.
- Universal generalization: from φ[y/x], infer ∀x φ.
  Restriction 1: y is a variable which does not occur in φ.
  Restriction 2: y is not mentioned in any hypothesis or undischarged assumption.
- Universal instantiation: from ∀x φ, infer φ[t/x].
  Restriction: No free occurrence of x in φ falls within the scope of a quantifier quantifying a variable occurring in t.
- Existential generalization: from φ[t/x], infer ∃x φ.
  Restriction: No free occurrence of x in φ falls within the scope of a quantifier quantifying a variable occurring in t.
- Existential elimination: from ∃x φ and a derivation of ψ from the temporary assumption φ[y/x], infer ψ.
  Restriction 1: y is a variable which does not occur in φ.
  Restriction 2: There is no occurrence, free or bound, of y in ψ.
  Restriction 3: y is not mentioned in any hypothesis or undischarged assumption.
Rules of substructural logic
The following are structural rules, which govern the management of premises rather than the logical connectives; they may be restricted or omitted in substructural logics, such as linear logic.
- Rule of weakening (or monotonicity of entailment) (related to the no-deleting theorem): from Γ ⊢ ψ, infer Γ, φ ⊢ ψ.
- Rule of contraction (or idempotency of entailment) (related to the no-cloning theorem): from Γ, φ, φ ⊢ ψ, infer Γ, φ ⊢ ψ.
Table: Rules of Inference
The rules above can be summed up in the following table.[1] The "Tautology" column shows how to interpret the notation of a given rule.
All rules use the basic logical operators. A complete table of "logic operators" is shown by a truth table, giving definitions of all 16 possible truth functions of two Boolean variables (p, q):
| p | q | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| T | T | F | F | F | F | F | F | F | F | T | T | T | T | T | T | T | T |
| T | F | F | F | F | F | T | T | T | T | F | F | F | F | T | T | T | T |
| F | T | F | F | T | T | F | F | T | T | F | F | T | T | F | F | T | T |
| F | F | F | T | F | T | F | T | F | T | F | T | F | T | F | T | F | T |
where T = true and F = false. The columns are the logical operators:
- 0, false, Contradiction;
- 1, NOR, Logical NOR (Peirce's arrow);
- 2, Converse nonimplication;
- 3, ¬p, Negation;
- 4, Material nonimplication;
- 5, ¬q, Negation;
- 6, XOR, Exclusive disjunction;
- 7, NAND, Logical NAND (Sheffer stroke);
- 8, AND, Logical conjunction;
- 9, XNOR, If and only if, Logical biconditional;
- 10, q, Projection function;
- 11, if/then, Material conditional;
- 12, p, Projection function;
- 13, then/if, Converse implication;
- 14, OR, Logical disjunction;
- 15, true, Tautology.
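The sixteen columns can be generated programmatically: read the operator number in binary, and bit 2p + q (with T = 1, F = 0) gives the output for row (p, q). A minimal Python sketch, where the function name `op` is illustrative:

```python
def op(n, p, q):
    """Truth function number n (0-15) applied to Booleans p, q.
    Bit k of n, with k = 2*p + q, is the output for that row."""
    return bool((n >> (2 * p + q)) & 1)

# Column 8 is AND, column 14 is OR, column 11 is IF/THEN:
for p in (True, False):
    for q in (True, False):
        assert op(8, p, q) == (p and q)
        assert op(14, p, q) == (p or q)
        assert op(11, p, q) == ((not p) or q)
print("columns 8, 14 and 11 match AND, OR and IF/THEN")
```

The same indexing reproduces every column of the table above, e.g. `op(1, p, q)` is NOR and `op(6, p, q)` is XOR.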
Each logic operator can be used in an assertion about variables and operations, showing a basic rule of inference. Examples:
- The column-14 operator (OR) shows the Addition rule: when p = T (the hypothesis selects the first two lines of the table), we see (at column 14) that p∨q = T.
- We can also see that, under the same premise, other conclusions are valid: columns 12, 14 and 15 are all T.
- The column-8 operator (AND) shows the Simplification rule: when p∧q = T (first line of the table), we see that p = T.
- With this premise, we also conclude that q = T, p∨q = T, etc., as shown by columns 9–15.
- The column-11 operator (IF/THEN) shows the Modus ponens rule: when p→q = T and p = T, only one line of the truth table (the first) satisfies these two conditions, and on that line q is also true. Therefore, whenever p→q is true and p is true, q must also be true.
Machines and well-trained people use this table-lookup approach to perform basic inferences and to check whether other inferences (from the same premises) can be obtained.[citation needed]
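The table-lookup procedure can be automated. A minimal Python sketch (the helper name `valid` is illustrative): a rule is declared valid iff no truth assignment makes every premise true and the conclusion false.

```python
from itertools import product

def valid(premises, conclusion):
    """A rule is valid iff no truth assignment makes every premise
    true while the conclusion is false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Addition (column 14): p entails p or q
print(valid([lambda p, q: p], lambda p, q: p or q))              # True
# Simplification (column 8): p and q entails p
print(valid([lambda p, q: p and q], lambda p, q: p))             # True
# Modus ponens (column 11): p -> q, p entails q
print(valid([lambda p, q: (not p) or q, lambda p, q: p],
            lambda p, q: q))                                     # True
# Affirming the consequent is invalid: p -> q, q does not entail p
print(valid([lambda p, q: (not p) or q, lambda p, q: q],
            lambda p, q: p))                                     # False
```

The last call shows how the same enumeration also detects fallacies: the assignment p = F, q = T makes both premises true and the conclusion false.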
Example 1
Consider the following assumptions: "If it rains today, then we will not go on a canoe today. If we do not go on a canoe trip today, then we will go on a canoe trip tomorrow. Therefore (the mathematical symbol for "therefore" is ∴), if it rains today, we will go on a canoe trip tomorrow." To make use of the rules of inference in the above table we let p be the proposition "It rains today", q be "We will not go on a canoe today", and r be "We will go on a canoe trip tomorrow". Then this argument is of the form:

p → q, q → r ∴ p → r
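With p read as "it rains today", q as "we will not go on a canoe today", and r as "we will go on a canoe trip tomorrow" (the letter assignments are illustrative), the argument has the hypothetical-syllogism form p → q, q → r ∴ p → r, which can be verified by enumerating all eight truth assignments:

```python
from itertools import product

implies = lambda a, b: (not a) or b

# Hypothetical syllogism: p -> q, q -> r entails p -> r.
ok = all(implies(p, r)
         for p, q, r in product([True, False], repeat=3)
         if implies(p, q) and implies(q, r))
print(ok)  # True: the argument form is valid
```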
Example 2
Consider a more complex set of assumptions: "It is not sunny today and it is colder than yesterday", "We will go swimming only if it is sunny", "If we do not go swimming, then we will have a barbecue", and "If we will have a barbecue, then we will be home by sunset" lead to the conclusion "We will be home by sunset." Proof by rules of inference: Let p be the proposition "It is sunny today", q the proposition "It is colder than yesterday", r the proposition "We will go swimming", s the proposition "We will have a barbecue", and t the proposition "We will be home by sunset". Then the hypotheses become ¬p ∧ q, r → p, ¬r → s, and s → t. Using our intuition we conjecture that the conclusion might be t. Using the Rules of Inference table we can prove the conjecture easily:

| Step | Reason |
|---|---|
| 1. ¬p ∧ q | Hypothesis |
| 2. ¬p | Simplification using Step 1 |
| 3. r → p | Hypothesis |
| 4. ¬r | Modus tollens using Steps 2 and 3 |
| 5. ¬r → s | Hypothesis |
| 6. s | Modus ponens using Steps 4 and 5 |
| 7. s → t | Hypothesis |
| 8. t | Modus ponens using Steps 6 and 7 |
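The same conclusion can be checked semantically: with propositions p (sunny), q (colder), r (swimming), s (barbecue), and t (home by sunset), as in the reading above (the letters are illustrative), every truth assignment satisfying all four hypotheses must make t true. A brute-force Python check over all 32 assignments:

```python
from itertools import product

implies = lambda a, b: (not a) or b

# Hypotheses: not-p and q,  r -> p,  not-r -> s,  s -> t.  Conclusion: t.
entailed = all(t
               for p, q, r, s, t in product([True, False], repeat=5)
               if (not p and q) and implies(r, p)
               and implies(not r, s) and implies(s, t))
print(entailed)  # True
```

In fact only one assignment satisfies all hypotheses (p = F, q = T, r = F, s = T, t = T), mirroring the chain of the eight proof steps.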
References
- ^ Kenneth H. Rosen: Discrete Mathematics and its Applications, Fifth Edition, p. 58.
Fundamentals
Definition and Scope
A rule of inference is a valid argument form that permits the derivation of new propositions from given premises within a formal logical system, serving as a foundational mechanism for deductive reasoning.[5] These rules function as templates or schemas that ensure the resulting conclusion preserves the truth of the premises when applied correctly.[6] To apply them, one must first understand basic elements such as propositions, which are declarative statements that can be true or false; logical connectives, including negation (¬), conjunction (∧), disjunction (∨), implication (→), and biconditional (↔); and, in more expressive systems, quantifiers like the universal (∀) and existential (∃) that bind variables to predicates.[7] Rules of inference operate on the syntactic level, providing schema-based derivations through formal proofs in a deductive system, where a conclusion is mechanically obtainable from premises via specified steps.[8] This syntactic approach differs from semantic entailment, which evaluates whether the truth of the premises necessitates the truth of the conclusion across all possible interpretations or models of the language, without reference to proof procedures.[5] In well-designed logical systems, soundness ensures that syntactic derivations yield semantically valid results, while completeness guarantees that all semantic entailments can be captured syntactically.[5] The scope of rules of inference encompasses both classical logics—such as propositional and first-order predicate logic—and non-classical variants, including intuitionistic logic, which rejects certain classical principles like the law of excluded middle, and substructural logics that omit or restrict structural rules like contraction or weakening.[9][10] Beyond theoretical foundations, these rules are essential in practical applications like automated theorem proving, where computational systems apply inference rules systematically to verify mathematical statements.[11] They 
also underpin AI reasoning mechanisms, enabling knowledge-based systems to perform valid deductions in domains such as expert systems and automated planning.[12]
Historical Context
The origins of rules of inference trace back to the 4th century BCE, when Aristotle developed syllogisms as foundational patterns for deductive reasoning in ancient Greek philosophy. In his Prior Analytics, Aristotle systematically classified valid syllogistic moods, such as the first-figure Barbara syllogism ("All men are mortal; Socrates is a man; therefore, Socrates is mortal"), demonstrating how conclusions follow necessarily from categorical premises sharing a common term. This work represented the earliest comprehensive effort to identify and validate inference structures, influencing logical thought for over two millennia. In the late 19th and early 20th centuries, the formalization of logic advanced significantly through axiomatic systems pioneered by Gottlob Frege, Bertrand Russell, and David Hilbert, shifting inference rules toward symbolic and rigorous mathematical foundations. Frege's Begriffsschrift (1879) introduced a symbolic notation and inference principles, including a generalized modus ponens, to express and derive truths in predicate logic without reliance on natural language ambiguities. Russell, collaborating with Alfred North Whitehead in Principia Mathematica (1910–1913), employed axiomatic methods with primitive rules like modus ponens and universal generalization to reduce mathematics to logic, incorporating type theory to avoid paradoxes. Concurrently, Hilbert's axiomatic approach, as outlined in Grundzüge der theoretischen Logik (1928, with Wilhelm Ackermann), emphasized finite inference rules and consistency proofs, formalizing propositional and predicate logics through schemas like implication introduction. A pivotal shift occurred in the 1930s with Gerhard Gentzen's introduction of natural deduction systems, which prioritized inference rules over extensive axiomatic lists to mirror intuitive reasoning processes. 
In his Untersuchungen über das logische Schließen (1934–1935), Gentzen formulated introduction and elimination rules for logical connectives, such as those for implication (from A and A → B infer B) and conjunction, proving properties like cut-elimination to ensure derivations used only necessary subformulas. This framework emphasized the constructive nature of proofs, influencing subsequent developments in proof theory. Post-1950s, rules of inference extended into computer science via the Curry–Howard correspondence, which established an isomorphism between logical proofs and computational programs. Haskell Curry's foundational work on combinatory logic in the 1930s–1960s, culminating in Foundations of Mathematical Logic (1963), linked deduction to functional abstraction, while William Howard's 1969 manuscript (published 1980) formalized the "propositions as types" principle, mapping natural deduction rules to typed lambda calculus terms. This correspondence highlighted how inference rules underpin programming languages, enabling type checking as proof validation. During this era, classical propositional and predicate logics were fully formalized as axiomatic and natural deduction systems.[13]
Propositional Logic
Negation Rules
In propositional logic, negation (¬) is a unary connective that inverts the truth value of a proposition, and its associated inference rules facilitate deriving conclusions involving falsity or contradiction within classical systems. These rules are sound and complete in classical propositional logic, meaning they preserve truth and can derive all valid inferences from axioms or assumptions. Central to negation rules is the treatment of contradiction (⊥) as deriving any proposition, reflecting the explosion principle where inconsistency implies everything. Double negation elimination allows inferring a proposition P from its double negation ¬¬P, formally ¬¬P ⊢ P. This rule is valid in classical logic because ¬¬P is logically equivalent to P, as verified by truth tables where both expressions share the same truth value across all assignments: if P is true, ¬¬P is true; if P is false, ¬¬P is false. The equivalence holds due to the bivalence of classical logic, where propositions are strictly true or false, enabling the elimination without loss of validity. Ex falso quodlibet, or the principle of explosion, states that from a contradiction ⊥, any proposition P can be inferred: ⊥ ⊢ P. This rule captures the idea that falsehood implies anything; a system that proves a contradiction thereby proves every proposition, so only consistency keeps a system from becoming trivial. It is a cornerstone of classical deduction, derivable from the disjunction introduction rule combined with disjunctive syllogism, and is rejected in paraconsistent logics to handle inconsistencies non-explosively. Negation introduction, known as reductio ad absurdum, permits inferring ¬P when assuming P leads to a contradiction: if P ⊢ ⊥, then ⊢ ¬P. This indirect proof method assumes the opposite of the desired negation and derives absurdity to affirm the negation, valid under classical assumptions where contradiction entails falsehood.
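Double negation elimination, and the De Morgan equivalences discussed in this section, can be confirmed by brute-force enumeration of truth values; a minimal Python check:

```python
# Verify classical negation equivalences by enumerating all assignments.
dn_ok = all((not (not p)) == p for p in (True, False))          # double negation
dm_ok = all((not (p and q)) == ((not p) or (not q))             # De Morgan
            for p in (True, False) for q in (True, False))
print(dn_ok and dm_ok)  # True
```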
The law of excluded middle, P ∨ ¬P, underpins this and other negation rules by asserting every proposition is true or false, ensuring no "middle" ground and enabling exhaustive case analysis in proofs. De Morgan's laws provide equivalences for negating compound propositions: ¬(P ∧ Q) ⊢ ¬P ∨ ¬Q, and its converse ¬P ∨ ¬Q ⊢ ¬(P ∧ Q), with the dual for disjunction ¬(P ∨ Q) ⊢ ¬P ∧ ¬Q. These bidirectional rules allow pushing negation inward, distributing it over conjunction and disjunction, and are tautologies in classical logic confirmed by truth tables showing identical truth values for equivalent forms.
Implication Rules
Implication rules in propositional logic govern inferences involving the conditional connective →, which represents hypothetical reasoning where the truth of the antecedent implies the truth of the consequent. These rules enable the derivation of new propositions from established conditionals and facts, forming the backbone of deductive arguments in classical logic. They are sound, meaning they preserve truth: if the premises are true, the conclusion must be true.[14]
Modus Ponens
Modus ponens, also known as the rule of detachment, allows the inference of the consequent q from a conditional p → q and the antecedent p. Formally, given p → q and p, conclude q. This rule is justified semantically by its corresponding tautology ((p → q) ∧ p) → q, which holds in all truth assignments.[14] To verify via truth table, consider the premises p → q and p; the only case where both are true occurs when p is true and q is true, ensuring the conclusion is true. The full truth table is:

| p | q | p → q |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |

No row shows the premises true and the conclusion false, confirming validity.[15] For example, if "If it rains, the ground is wet" (p → q) and "It rains" (p), then "The ground is wet" (q).[14]
Modus Tollens
Modus tollens permits inferring the negation of the antecedent, ¬p, from a conditional p → q and the negation of the consequent, ¬q. Formally, given p → q and ¬q, conclude ¬p. This is a valid inference form, supported by the tautology ((p → q) ∧ ¬q) → ¬p.[14] The rule relies on the contrapositive equivalence p → q ≡ ¬q → ¬p, established by truth table equivalence: both expressions share the same truth value under every assignment.[14] Applying modus ponens to the contrapositive yields the result. For instance, if "If it rains, the ground is wet" (p → q) and "The ground is not wet" (¬q), then "It does not rain" (¬p).[14]
Hypothetical Syllogism
Hypothetical syllogism, or the chain rule, infers a transitive conditional from two linked conditionals. Formally, given p → q and q → r, conclude p → r. This follows from the tautology ((p → q) ∧ (q → r)) → (p → r), which is true under all interpretations since the antecedent's truth forces the consequent via substitution.[14] Semantically, if p implies q and q implies r, then p cannot be true without r being true, preserving the implication. An example is: "If it rains, clouds form" (p → q); "If clouds form, it is overcast" (q → r); therefore, "If it rains, it is overcast" (p → r).[14]
Constructive Dilemma
Constructive dilemma combines conditionals with a disjunction to yield a disjunctive conclusion. Formally, given (p → q) ∧ (r → s) and p ∨ r, conclude q ∨ s. This is a valid rule of inference, derivable from disjunction introduction and modus ponens applied to cases.[14] The justification stems from exhaustive case analysis: if p holds, then q by the first conditional; if r holds, then s by the second, yielding the disjunction in either scenario. For example: "If it rains, we stay in, and if it is sunny, we go out" ((p → q) ∧ (r → s)); "It is raining or sunny" (p ∨ r); therefore, "We stay in or go out" (q ∨ s).[14]
Absorption
Absorption strengthens a conditional by incorporating the antecedent into the consequent. Formally, given p → q, conclude p → (p ∧ q). This is justified by the tautology (p → q) → (p → (p ∧ q)), which holds because assuming p → q true and p true forces q true, making p ∧ q true.[16] The rule reflects that the original implication already commits to q under p, so conjoining p adds no new falsifying possibility. An illustrative case: from "If one studies, one passes" (p → q), infer "If one studies, one studies and passes" (p → (p ∧ q)).[16]
Conjunction Rules
In propositional logic, conjunction rules facilitate the combination of propositions using the connective ∧ (logical "and") and the subsequent manipulation of such combinations. These rules form the core of inference involving conjunctive statements, enabling the synthesis of multiple premises into a single assertion and the extraction of individual components for further reasoning. They are fundamental in natural deduction systems, where they ensure soundness and completeness relative to classical truth-functional semantics. The conjunction introduction rule (∧I), a basic synthesis rule, permits inferring the conjunction of two propositions from their separate establishment. Formally, given premises p and q, one may conclude p ∧ q. This rule reflects the intuitive notion that both propositions must hold for their joint assertion to be valid, and it was originally formulated as part of Gerhard Gentzen's natural deduction framework in 1934.[17] Conjunction elimination (∧E), also known as simplification, allows extracting either conjunct from a given conjunction as a standalone premise. This yields two symmetric inferences: from p ∧ q, one may derive p, or symmetrically q. These elimination rules enable the decomposition of complex premises, streamlining proofs by isolating components, and originate from the same natural deduction system developed by Gentzen.[17] De Morgan's law for conjunction establishes a key equivalence relating negation, conjunction, and disjunction: the negation of a disjunction is logically equivalent to the conjunction of the negations, ¬(p ∨ q) ≡ ¬p ∧ ¬q. As an inference rule, this permits deriving ¬p ∧ ¬q from ¬(p ∨ q), with the converse holding as well.
These laws, which highlight the duality between conjunction and disjunction under negation, were first articulated by Augustus De Morgan in his 1847 treatise on formal logic.[18] The exportation rule addresses nested implications involving conjunctions, allowing the restructuring of a conditional with a conjunctive antecedent into an iterated implication: ((p ∧ q) → r) ≡ (p → (q → r)). This equivalence supports the redistribution of conjuncts in hypothetical reasoning, preserving logical validity. It is a standard derived rule in propositional logic systems, valid under classical semantics.[19] Under truth-functional semantics, the conjunction connective ∧ is defined such that p ∧ q is true precisely when both p and q are true, and false otherwise; this binary truth condition underpins the validity of all conjunction rules in classical propositional logic.[7] De Morgan's laws further illustrate the interplay between conjunction and disjunction through negation.
Disjunction Rules
Disjunction rules in propositional logic govern the use of the disjunction connective (∨), which asserts that at least one of the disjuncts is true. These rules facilitate the introduction of disjunctions into proofs, their elimination through case analysis or negation, and transformations involving negation, enabling the construction of valid arguments by handling alternatives.[20] Disjunction introduction allows the derivation of a disjunction from one of its disjuncts. Formally, from premise p, one may infer p ∨ q for any formula q; symmetrically, from q, infer p ∨ q. This rule reflects the inclusive nature of disjunction, where adding an alternative does not alter the truth if the original disjunct holds.[21] Disjunctive syllogism provides a method for eliminating one disjunct via negation. Given premises p ∨ q and ¬p, the conclusion q follows, as the falsity of p forces q to be true to satisfy the disjunction. This rule is a derived inference in natural deduction systems and is fundamental for resolving binary choices in arguments.[22] The destructive dilemma extends disjunctive reasoning to hypothetical scenarios. With premises (p → q) ∧ (r → s) and ¬q ∨ ¬s, one infers ¬p ∨ ¬r, effectively dismantling the implications by denying their consequents and disjoining the negated antecedents. This rule is particularly useful in refuting paired hypotheses through alternative denials.[23] De Morgan's law for disjunction relates negation to conjunction and disjunction. Specifically, from ¬(p ∨ q), one derives ¬p ∧ ¬q; conversely, from ¬p ∧ ¬q, infer ¬(p ∨ q). These equivalences, treated as bidirectional inference rules, allow pushing negation inward, transforming negated disjunctions into conjunctions of negations and vice versa.[24] Case analysis, or disjunction elimination, supports proof strategies by resolving disjunctions through exhaustive consideration. Assuming p ∨ q, if r is derived from p alone and also from q alone, then r follows unconditionally.
This rule underpins dilemma resolution in proofs, contrasting with conjunction's joint affirmation by emphasizing alternatives.[25]
Biconditional Rules
In propositional logic, the biconditional connective, denoted ↔, expresses logical equivalence between two propositions P and Q, asserting that P is true if and only if Q is true. This symmetric relation captures mutual implication, distinguishing it from the unidirectional implication (→). Rules of inference for the biconditional enable the derivation of equivalences from established implications and the extraction of implications from equivalences, forming a core component of natural deduction systems.[26] The biconditional introduction rule allows the inference of P ↔ Q from the conjunction of the two directional implications: (P → Q) ∧ (Q → P) ⊢ P ↔ Q. This rule formalizes the construction of an equivalence by verifying both directions of implication, ensuring the propositions hold under identical conditions. Complementing this, the biconditional elimination rule permits the extraction of either implication from the biconditional: P ↔ Q ⊢ P → Q and P ↔ Q ⊢ Q → P. These elimination steps decompose the equivalence into its constituent conditionals, facilitating further deductions in proofs.[26] The biconditional can also be defined in terms of other connectives through logical equivalence: P ↔ Q ≡ (P ∧ Q) ∨ (¬P ∧ ¬Q). This formulation highlights the biconditional as true precisely when both propositions share the same truth value (either both true or both false), providing a truth-functional basis without relying on primitive ↔. In natural deduction, conditional proof supports biconditional derivations by structuring subproofs: to establish P ↔ Q, one assumes P and derives Q (yielding P → Q), then assumes Q and derives P (yielding Q → P), after which biconditional introduction applies. This method leverages assumption-based reasoning to build the required implications incrementally.[27][28] Validation of these rules aligns with the biconditional's truth table, which confirms ↔ as the equality of truth values:

| P | Q | P ↔ Q |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | T |
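The truth table can be checked programmatically. This Python sketch confirms the equivalence P ↔ Q ≡ (P ∧ Q) ∨ (¬P ∧ ¬Q) and that a true biconditional yields both directional implications:

```python
from itertools import product

iff = lambda a, b: a == b            # P <-> Q read as equality of truth values
implies = lambda a, b: (not a) or b

# Definition in terms of other connectives holds in every row:
defn_ok = all(iff(p, q) == ((p and q) or ((not p) and (not q)))
              for p, q in product([True, False], repeat=2))
# Biconditional elimination: whenever the biconditional is true,
# both implications are true:
elim_ok = all(implies(p, q) and implies(q, p)
              for p, q in product([True, False], repeat=2) if iff(p, q))
print(defn_ok and elim_ok)  # True
```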
Predicate Logic
Quantifier Introduction and Elimination
In first-order predicate logic, the rules of inference for quantifiers allow the manipulation of universal (∀) and existential (∃) quantifiers to extend reasoning from specific instances to general statements or vice versa, building upon propositional logic by incorporating variables and domains. These rules ensure soundness and completeness in natural deduction systems, where universal introduction and elimination handle generalizations and instantiations for ∀, while existential introduction and elimination manage assertions of existence and their consequences for ∃.[30][31] Universal instantiation, also known as universal elimination, permits deriving a specific instance from a universally quantified statement. Formally, from the premise ∀x P(x), one infers P(c), where c is any term in the language that is free for substitution in place of x to avoid variable capture. This rule applies without restrictions on c beyond ensuring the substitution yields a well-formed formula, allowing the universal claim to be applied to particular objects in the domain. For example, from ∀x (Human(x) → Mortal(x)), one may infer Mortal(Socrates) by substituting the constant Socrates for x.[32][31][30] Existential generalization, or existential introduction, reverses this process for existence claims by introducing ∃ from a specific instance. Formally, from P(c), one infers ∃x P(x), where c is a term substitutable for x without causing binding issues. This rule asserts that if a property holds for some particular object, then there exists at least one object with that property, applicable to any valid term c. For instance, from Mortal(Socrates), one derives ∃x Mortal(x). The substitution must preserve the formula's structure, ensuring no unintended variable bindings occur.[32][31] Universal introduction, or universal generalization, enables inferring a universal quantifier from a statement about an arbitrary variable. 
Formally, assuming a set of premises Γ where the variable x is not free in any formula in Γ, and deriving P(x) from Γ, one infers ∀x P(x). The key restriction is that x must be treated as arbitrary, meaning it does not appear free in undischarged assumptions, preventing dependence on specific values. This rule formalizes induction over the domain, as in deriving ∀x (Human(x) → Mortal(x)) from a proof of Human(x) → Mortal(x) where x is arbitrary and unbound in premises. Variable restrictions ensure x remains a placeholder without prior bindings.[32][30][31] Existential elimination, also called existential instantiation, extracts consequences from an existence claim by introducing a fresh witness. Formally, given ∃x P(x) and a subproof assuming P(y) (where y is a fresh variable not occurring free in Q, the undischarged assumptions, P(x), or ∃x P(x)), deriving Q, one infers Q. The freshness of y prevents capture errors and ensures the derivation does not rely on a specific identity for the existent object. For example, from ∃x (Human(x) ∧ Mortal(x)), assume Human(y) ∧ Mortal(y) and derive ∃z Mortal(z) (e.g., by existential generalization on Mortal(y)), then infer ∃z Mortal(z), with y chosen anew to avoid conflicts. This scoped use of y maintains the rule's validity across derivations.[32][30][31][33] Central to all quantifier rules are scope and binding conventions to prevent variable capture, where a substituted term's free variables become unintentionally bound by an outer quantifier. In formal systems, substitutions require terms to be "free for" the variable, meaning no free variable in the term is captured by a quantifier in the formula's scope. For instance, substituting a term containing a free variable z into P(x) yielding ∀z Q(x, z) would capture z if not checked, invalidating the inference; thus, rules mandate renaming bound variables or using fresh names to resolve such conflicts. 
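On a finite domain, universal instantiation and existential generalization can be illustrated model-theoretically; the domain and predicate names in the following sketch are invented for illustration:

```python
# Finite-domain sketch of universal instantiation and existential
# generalization (domain and predicates are illustrative).
domain = ["socrates", "plato", "aristotle"]
human = {x: True for x in domain}    # Human(x)
mortal = {x: True for x in domain}   # Mortal(x)

# Universal claim: for all x, Human(x) -> Mortal(x)
universal = all((not human[x]) or mortal[x] for x in domain)
# Universal instantiation at the constant "socrates":
instance = (not human["socrates"]) or mortal["socrates"]
# Existential generalization from the witness Mortal(socrates):
witness = mortal["socrates"]
existential = any(mortal[x] for x in domain)

assert (not universal) or instance   # the universal claim yields the instance
assert (not witness) or existential  # the witness yields the existential claim
print("quantifier rules hold on this finite model")
```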
These safeguards ensure the rules preserve logical equivalence and avoid paradoxes in predicate logic derivations.[30][31]
Predicate-Specific Inference Rules
In predicate logic, equality is treated as a special binary predicate symbol, subject to specific inference rules that ensure its reflexive and substitutive properties. The equality introduction rule, also known as reflexivity, states that for any term t, one may infer t = t without premises. This rule establishes the foundational property that every entity is identical to itself, forming the basis for all subsequent equalities in derivations.[34] Complementing this is the equality elimination rule, or substitution rule, which permits replacing one side of an established equality with the other within any predicate or term. Formally, given premises t = s and a formula φ(t), where φ is any predicate containing the term t, one infers φ(s). This rule supports the indiscernibility of identicals by allowing consistent substitution across atomic formulas, preserving truth in interpretations. For instance, if a = b and P(a), then P(b). In systems without function symbols, substitution applies directly to variables or constants; with functions, it extends to nested terms while respecting binding scopes.[34][35] Leibniz's law provides a schematic generalization of equality elimination, capturing the principle of the indiscernibility of identicals across all predicates. It is expressed as the axiom schema x = y → (P(x) → P(y)), where P ranges over all predicates definable in the language. This schema ensures that identical terms share all properties definable by the predicates, serving as a foundational axiom in first-order theories with equality. Derived rules from this schema allow substitution in complex formulas, such as replacing x with y in quantified contexts under the equality x = y, provided no variable capture occurs.
Leibniz's law underpins the congruence properties of equality, making it essential for proving theorems in mathematics and formal verification.[36][37] In classical predicate logic with fixed domains, a Barcan-like formula holds tautologically under the assumption of uniform domains, as the bound variables x and y are merely placeholders with identical scopes. This validity arises because classical interpretations maintain constant domains across the entire structure, rendering variable renaming inconsequential and ensuring the implication is always true regardless of the predicate P. While the full Barcan formula originates in modal extensions to handle varying domains, where it corresponds to the condition that accessible worlds do not introduce new individuals, its classical counterpart reinforces the stability of quantification over predicates without modal operators.[38] Rules involving function symbols and term substitution are central to Herbrand semantics, which interprets first-order logic via ground terms without reference to external structures. In a Herbrand universe, the set of all ground terms is generated by applying function symbols to constants and to previously generated terms, forming a countable domain where each function of arity n maps n-tuples of terms to new terms. The substitution rule in this context allows replacing variables in formulas with terms from the Herbrand universe, enabling the instantiation of universal quantifiers to ground instances for theorem proving. For example, given a unary function f and constant c, f(c) denotes a unique ground term, and substitutions like [x := f(c)] transform formulas while preserving satisfiability in Herbrand models. These rules facilitate resolution-based deduction by unifying terms through most general unifiers, ensuring completeness for unsatisfiability checks in clausal form. Quantifiers enable the application of predicates over these terms, but the substitution mechanics are predicate-independent.[39]
Non-Classical Logics
Substructural Logic Rules
Substructural logics constitute a class of non-classical logical systems that modify or omit certain structural rules of classical sequent calculus, such as weakening, contraction, and exchange, to model resource-sensitive reasoning or relevance constraints.[10] These restrictions prevent unrestricted manipulation of assumptions, ensuring that premises are used in a controlled manner, which contrasts with classical logic where such rules allow free addition, duplication, or reordering of hypotheses.[10] The weakening rule, which permits deriving Γ, A ⊢ B from Γ ⊢ B by adding an irrelevant assumption A, is rejected in many substructural logics to enforce relevance between premises and conclusions.[10] In relevant logic, this restriction ensures that every premise contributes to the derivation; for instance, classical theorems like A → (B → A) fail because B bears no relevance to A. Anderson and Belnap formalized this in their foundational work, emphasizing that entailment requires shared content between antecedent and consequent.
Similarly, in linear logic, weakening's absence treats logical premises as consumable resources, preventing their gratuitous introduction to reflect finite computational or physical constraints.[40] Contraction, the rule allowing passage from Γ, A, A ⊢ B to Γ, A ⊢ B by merging duplicate assumptions, is also often absent, prohibiting automatic reuse of premises without explicit licensing.[10] Linear logic exemplifies this by disallowing premise duplication, which avoids paradoxes arising from unrestricted reuse, such as those in self-referential contexts; instead, the modality !A ("of course A") introduces reusability for specific formulas, enabling controlled resource renewal in proofs.[40] Girard introduced this mechanism to refine classical logic while preserving its expressive power for concurrent processes.[40] The exchange rule, or permutation, permits reordering premises in multi-premise inferences, such as passing from Γ, A, B ⊢ C to Γ, B, A ⊢ C.[10] While present in many substructural systems, it is omitted in non-commutative variants such as the Lambek calculus, where premise order models syntactic composition in linguistics, so A • B does not imply B • A.[10] In relevant logic, implication introduction imposes a relevance condition: to derive A → B from the assumption A, the derivation of B must actually use A, preventing irrelevant discharges that weaken inferential connections. This rule, central to systems like R, maintains logical discipline by requiring premise involvement, as detailed in Anderson and Belnap's analysis of entailment.
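The resource reading of premises in linear logic can be made concrete with a small sketch, in which a multiset of available premises is consumed by a resource-sensitive modus ponens. The multiset encoding, the string notation `"A -o B"` for linear implication, and the function name `linear_mp` are all illustrative assumptions, not a standard API:

```python
from collections import Counter

def linear_mp(premises: Counter, a: str, b: str) -> Counter:
    """Resource-sensitive modus ponens: consume one copy of A and one
    copy of the linear implication 'A -o B', produce one copy of B.
    Without contraction, a premise needed twice must be supplied twice;
    without weakening, unused premises are not silently discarded."""
    imp = f"{a} -o {b}"
    if premises[a] < 1 or premises[imp] < 1:
        raise ValueError(f"missing resource: need one {a!r} and one {imp!r}")
    out = premises.copy()
    out[a] -= 1     # the antecedent is consumed, not reusable
    out[imp] -= 1   # the implication itself is also consumed
    out[b] += 1
    return +out     # unary '+' drops entries whose count reached zero

state = Counter({"A": 1, "A -o B": 1})
state = linear_mp(state, "A", "B")
print(state)  # Counter({'B': 1}) -- both input resources were consumed
```

A second application of `linear_mp` on the resulting state raises `ValueError`, since no copy of A remains and there is no contraction rule to duplicate one.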
Cut-elimination theorems in substructural sequent calculi adapt Gentzen's classical result by accounting for restricted structural rules, often yielding analytic proofs that make no use of the cut rule.[10] In linear logic, cut-elimination holds despite the absence of weakening and contraction, facilitating normalization in resource-aware proofs.[40] For relevant logics, sequent systems such as those in FL (the full Lambek calculus) demonstrate cut-admissibility under relevance constraints, with algebraic studies confirming elimination in contraction-free settings.[41]

Intuitionistic Logic Rules
Intuitionistic logic, developed as a formal system for constructive mathematics, employs rules of inference that emphasize the existence of explicit proofs or constructions for propositions, diverging from classical logic by rejecting non-constructive principles.[42] The Brouwer-Heyting-Kolmogorov (BHK) interpretation provides the foundational semantic understanding, where a proof of a proposition corresponds to a mental construction: for an implication A → B, it is a method transforming any proof of A into a proof of B; for a negation ¬A, it is a construction leading to a contradiction from any supposed proof of A; and for a disjunction A ∨ B, it requires deciding which disjunct holds via a proof of A or a proof of B.[42] This interpretation, articulated by L. E. J. Brouwer in his early works on intuitionism, formalized by Arend Heyting in 1930, and refined by Andrey Kolmogorov in his problem-solving calculus, ensures that inferences are valid only if they preserve constructivity.[42] The rules for implication introduction and elimination in intuitionistic logic mirror those in classical logic, maintaining compatibility for basic hypothetical reasoning while adhering to constructive constraints. Implication introduction allows deriving A → B by assuming A and deriving B within that scope, discharging the assumption upon conclusion. Implication elimination, or modus ponens, permits inferring B from A and A → B. However, intuitionistic logic excludes Peirce's law, ((A → B) → A) → A, which is classically valid but non-constructive, as it does not provide an effective method for deriving A without assuming the antecedent's proof conditions.[43] Negation is treated as an implication to falsehood: ¬A is defined as A → ⊥, where ⊥ represents absurdity or contradiction. Negation introduction proceeds by assuming A and deriving ⊥, yielding ¬A; negation elimination from A and ¬A yields ⊥.
This definition precludes double negation elimination, as ¬¬A → A does not hold in general: a construction showing the inconsistency of ¬A (i.e., a proof that ¬A leads to ⊥) does not constructively establish A itself, only that A is not impossible.[43][42] Disjunction elimination requires exhaustive constructive cases: from A ∨ B, a proof of C assuming A, and a proof of C assuming B, one infers C. Under the BHK interpretation, this rule demands an effective decision procedure to select the appropriate case based on the proof of the disjunction, ensuring the inference yields a uniform construction for C.[43] Intuitionistic logic rejects the law of excluded middle, A ∨ ¬A, as a general theorem, since it would assert that for any proposition, either it or its negation is constructively provable, a claim Brouwer deemed unfounded without a prior resolution of the underlying problem. This rejection aligns with the constructive ethos, avoiding assumptions of decidability for all statements.[42]

Applications and Examples
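Under the Curry-Howard reading of the BHK interpretation, a proof of A → B is a function from proofs of A to proofs of B, and a proof of ¬A is a function from proofs of A to absurdity. The following sketch shows that double negation introduction (A → ¬¬A) is constructive, while its converse is not; the encodings and function names are illustrative assumptions, not a standard library:

```python
# Curry-Howard/BHK sketch: a proof of A -> B is a function sending proofs
# of A to proofs of B; a proof of ¬A is a function sending proofs of A
# to absurdity. (Illustrative encoding, not an established API.)

def modus_ponens(proof_a, method_a_to_b):
    """Implication elimination: apply the method to the proof of A."""
    return method_a_to_b(proof_a)

def double_negation_intro(proof_a):
    """A -> ¬¬A is constructive: given a proof of A, build a function
    that refutes any claimed proof of ¬A by feeding it the proof of A."""
    return lambda not_a: not_a(proof_a)

# With the value 42 standing in for a proof of A, ¬¬A is inhabited:
nn_a = double_negation_intro(42)
# Any purported refutation not_a is turned against itself: nn_a(not_a)
# evaluates not_a(42). By contrast, no general program of type
# ¬¬A -> A exists; nn_a only consumes refutations and cannot be forced
# to yield a proof of A uniformly, mirroring the intuitionistic failure
# of double negation elimination.
```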
Summary Table of Key Rules
The following table provides a concise overview of selected key rules of inference in propositional, predicate, and non-classical logics, presented in natural deduction style for clarity.[44]

| Rule Name | Schema | Logic Type | Brief Description |
|---|---|---|---|
| Modus Ponens (Implication Elimination) | From A and A → B, infer B | Propositional | Infers the consequent from the antecedent and an implication.[44] |
| Conjunction Introduction | From A and B, infer A ∧ B | Propositional | Forms a conjunction from its separate conjuncts.[44] |
| Conjunction Elimination | From A ∧ B, infer A (or B) | Propositional | Extracts a conjunct from a conjunction.[44] |
| Disjunction Introduction | From A, infer A ∨ B (or B ∨ A) | Propositional | Introduces a disjunction by adding a disjunct.[44] |
| Disjunction Elimination | From A ∨ B, a subproof A ⊢ C, and a subproof B ⊢ C, infer C | Propositional | Infers a common conclusion from a disjunction and subproofs for each case.[44] |
| Negation Introduction | Subproof: from A derive ⊥; infer ¬A | Propositional | Infers the negation of a hypothesis leading to contradiction (reductio ad absurdum).[44] |
| Ex Falso Quodlibet | From ⊥, infer any A | Propositional | Derives any formula from a contradiction.[44] |
| Universal Instantiation | From ∀x φ(x), infer φ(t) | Predicate | Applies a universal quantifier to a specific term.[44] |
| Universal Generalization | Subproof (arbitrary a): from φ(a), infer ∀x φ(x) | Predicate | Generalizes an instance to a universal quantification under suitable conditions.[44] |
| Existential Instantiation | From ∃x φ(x) and a subproof φ(a) ⊢ C (a fresh), infer C | Predicate | Extracts a witness from an existential to derive a conclusion.[44] |
| Existential Generalization | From φ(t), infer ∃x φ(x) | Predicate | Forms an existential from a specific instance.[44] |
| Intuitionistic Implication Introduction | Subproof: from A derive B (constructive discharge); infer A → B | Non-classical (Intuitionistic) | Infers an implication only via a constructive proof from antecedent to consequent.[9] |
| Intuitionistic Negation Elimination | From A and ¬A, infer any C | Non-classical (Intuitionistic) | Derives any conclusion from premises entailing a formula and its negation, preserving constructivity.[9] |
| Weakening (Structural) | Γ ⊢ B implies Γ, A ⊢ B | Non-classical (Substructural, e.g., Relevance) | Adds irrelevant premises (rejected in relevance logics to enforce premise relevance).[10] |
| Contraction (Structural) | Γ, A, A ⊢ B implies Γ, A ⊢ B | Non-classical (Substructural, e.g., Linear) | Reuses a premise multiple times (absent in linear logic to model resource sensitivity).[10] |
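As a worked example of the propositional rules tabulated above, the following sketch encodes formulas as nested tuples and checks rule applications mechanically; the encoding and function names are illustrative choices, not an established proof-assistant API:

```python
# Formulas as nested tuples: an implication A -> B is ("->", A, B),
# a conjunction A ∧ B is ("and", A, B), and atoms are plain strings.

def modus_ponens(premise, implication):
    """Implication elimination: from A and ("->", A, B), return B."""
    op, a, b = implication
    assert op == "->" and premise == a, "modus ponens does not apply"
    return b

def and_intro(a, b):
    """Conjunction introduction: from A and B, form ("and", A, B)."""
    return ("and", a, b)

def and_elim_left(conj):
    """Conjunction elimination: from ("and", A, B), return A."""
    op, a, _ = conj
    assert op == "and", "conjunction elimination needs a conjunction"
    return a

# A small derivation: from P and P -> Q, derive Q, then Q ∧ P, then Q.
p, imp = "P", ("->", "P", "Q")
q = modus_ponens(p, imp)             # Q by implication elimination
conj = and_intro(q, p)               # ("and", "Q", "P") by conjunction introduction
assert and_elim_left(conj) == "Q"    # Q by conjunction elimination
```

Each function fails loudly when its rule does not apply, mirroring the side conditions attached to the schemata in the table.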
