Metalogic
Metalogic is the metatheory of logic. Whereas logic studies how logical systems can be used to construct valid and sound arguments, metalogic studies the properties of logical systems.[1] Logic concerns the truths that may be derived using a logical system; metalogic concerns the truths that may be derived about the languages and systems that are used to express truths.[2]
The basic objects of metalogical study are formal languages, formal systems, and their interpretations. The study of interpretation of formal systems is the branch of mathematical logic that is known as model theory, and the study of deductive systems is the branch that is known as proof theory.
Overview
Formal language
A formal language is an organized set of symbols whose expressions are precisely defined by the shapes and positions of the symbols alone. Such a language can therefore be defined without reference to the meanings of its expressions; it can exist before any interpretation is assigned to it—that is, before it has any meaning. First-order logic is expressed in some formal language. A formal grammar determines which symbols and sets of symbols are formulas in a formal language.
A formal language can be formally defined as a set A of strings (finite sequences) on a fixed alphabet α. Some authors, including Rudolf Carnap, define the language as the ordered pair <α, A>.[3] Carnap also requires that each element of α must occur in at least one string in A.
Formation rules
Formation rules (also called a formal grammar) are a precise description of the well-formed formulas of a formal language: they determine exactly which strings over the alphabet of the language count as well-formed formulas. Formation rules do not, however, describe the semantics of those formulas (i.e., what they mean).
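As an illustration, formation rules can be written as a recursive recognizer that accepts exactly the well-formed strings. The toy propositional grammar below (atoms p, q, r; an ASCII negation sign ~; parenthesized binary connectives) is a hypothetical example chosen for this sketch, not a fixed standard:

```python
# A minimal sketch of formation rules for a small propositional language,
# written as a recursive recognizer. The grammar here is illustrative.

ATOMS = {"p", "q", "r"}
BINARY = {"&", "|", "->"}

def is_wff(tokens):
    """Return True iff the token list is a well-formed formula."""
    rest = _parse(tokens)
    return rest == []          # well-formed iff all tokens were consumed

def _parse(tokens):
    """Consume one formula from the front; return leftover tokens or None."""
    if not tokens:
        return None
    head, rest = tokens[0], tokens[1:]
    if head in ATOMS:                      # base case: an atom is a WFF
        return rest
    if head == "~":                        # ~A is a WFF if A is
        return _parse(rest)
    if head == "(":                        # (A op B) for a binary connective
        rest = _parse(rest)
        if rest is None or not rest or rest[0] not in BINARY:
            return None
        rest = _parse(rest[1:])
        if rest is None or not rest or rest[0] != ")":
            return None
        return rest[1:]
    return None

print(is_wff(["(", "p", "->", "q", ")"]))   # True
print(is_wff(["->", "p"]))                  # False: ill-formed
```

Because the rules say nothing about meaning, the recognizer never consults truth values; it checks only the shape and position of symbols.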
Formal systems
A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules (also called inference rules) or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions.
A formal system can be formally defined as an ordered triple <α, A, d>, where α is the alphabet, A is the set of sentences of the language, and d is the relation of direct derivability. This relation is understood in a comprehensive sense such that the primitive sentences of the formal system are taken as directly derivable from the empty set of sentences. Direct derivability is a relation between a sentence and a finite, possibly empty set of sentences. Axioms are so chosen that every first-place member of d is a member of A and every second-place member is a finite subset of A.
A formal system can also be defined with only the relation d; A and α can then be omitted in the definitions of interpreted formal language and interpreted formal system. However, this method can be more difficult to understand and use.[3]
Formal proofs
A formal proof is a sequence of well-formed formulas of a formal language, the last of which is a theorem of a formal system. The theorem is a syntactic consequence of all the well-formed formulas that precede it in the proof. For a well-formed formula to qualify as part of a proof, it must result from applying a rule of the deductive apparatus of some formal system to the previous well-formed formulas in the proof sequence.
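This definition makes proofs mechanically checkable: each line must be licensed by an axiom or by a rule applied to earlier lines. A minimal sketch, assuming a toy system whose only rule is modus ponens and whose formulas are nested tuples of the form ("->", A, B):

```python
# Check a formal proof line by line: each line must be an axiom or follow
# from earlier lines by modus ponens (from A and ("->", A, B), infer B).
# The axiom set below is a toy example for illustration.

def follows_by_mp(candidate, earlier):
    """True if `candidate` follows by modus ponens from two earlier lines."""
    for maj in earlier:
        if isinstance(maj, tuple) and maj[0] == "->" and maj[2] == candidate:
            if maj[1] in earlier:          # the antecedent appears earlier
                return True
    return False

def is_proof(lines, axioms):
    """True if every line is an axiom or follows by MP from earlier lines."""
    for i, line in enumerate(lines):
        if line in axioms:
            continue
        if follows_by_mp(line, lines[:i]):
            continue
        return False
    return True

# A two-step toy proof: axioms p and p -> q, then q by modus ponens.
axioms = {"p", ("->", "p", "q")}
proof = ["p", ("->", "p", "q"), "q"]
print(is_proof(proof, axioms))   # True
```

Note that the checker never asks what the formulas mean; qualification as a proof is a purely syntactic matter.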
Interpretations
An interpretation of a formal system is the assignment of meanings to the symbols, and truth values to the sentences, of the formal system. The study of interpretations is called formal semantics. Giving an interpretation is synonymous with constructing a model.
Important distinctions
Metalanguage–object language
In metalogic, formal languages are sometimes called object languages. The language used to make statements about an object language is called a metalanguage. This distinction is a key difference between logic and metalogic. While logic deals with proofs in a formal system, expressed in some formal language, metalogic deals with proofs about a formal system which are expressed in a metalanguage about some object language.
Syntax–semantics
In metalogic, 'syntax' has to do with formal languages or formal systems without regard to any interpretation of them, whereas 'semantics' has to do with interpretations of formal languages. The term 'syntactic' has a slightly wider scope than 'proof-theoretic', since it may be applied to properties of formal languages without any deductive systems, as well as to formal systems. 'Semantic' is synonymous with 'model-theoretic'.
Use–mention
In metalogic, the words use and mention, in both their noun and verb forms, take on a technical sense in order to mark an important distinction.[2] The use–mention distinction (sometimes referred to as the words-as-words distinction) is the distinction between using a word (or phrase) and mentioning it. An expression is usually indicated as being mentioned rather than used by enclosing it in quotation marks, printing it in italics, or setting it by itself on a line. Enclosing an expression in quotation marks gives us the name of the expression, for example:
"Metalogic" is the name of this article.
This article is about metalogic.
Type–token
The type–token distinction is a distinction in metalogic that separates an abstract concept from the objects that are particular instances of the concept. For example, the particular bicycle in your garage is a token of the type of thing known as "the bicycle." Whereas the bicycle in your garage is in a particular place at a particular time, the same is not true of "the bicycle" as used in the sentence "The bicycle has become more popular recently." This distinction is used to clarify the meaning of symbols of formal languages.
History
Metalogical questions have been asked since the time of Aristotle.[4] However, it was only with the rise of formal languages in the late 19th and early 20th centuries that investigations into the foundations of logic began to flourish. In 1904, David Hilbert observed that, in investigating the foundations of mathematics, logical notions are presupposed, and that a simultaneous account of metalogical and metamathematical principles was therefore required. Today, metalogic and metamathematics are largely synonymous, and both have been substantially subsumed by mathematical logic in academia. A possible alternative, less mathematical model may be found in the writings of Charles Sanders Peirce and other semioticians.
Results
Results in metalogic consist of such things as formal proofs demonstrating the consistency, completeness, and decidability of particular formal systems.
Major results in metalogic include:
- Proof of the uncountability of the power set of the natural numbers (Cantor's theorem 1891)
- Löwenheim–Skolem theorem (Leopold Löwenheim 1915 and Thoralf Skolem 1919)
- Proof of the consistency of truth-functional propositional logic (Emil Post 1920)
- Proof of the semantic completeness of truth-functional propositional logic (Paul Bernays 1918),[5] (Emil Post 1920)[2]
- Proof of the syntactic completeness of truth-functional propositional logic (Emil Post 1920)[2]
- Proof of the decidability of truth-functional propositional logic (Emil Post 1920)[2]
- Proof of the consistency of first-order monadic predicate logic (Leopold Löwenheim 1915)
- Proof of the semantic completeness of first-order monadic predicate logic (Leopold Löwenheim 1915)
- Proof of the decidability of first-order monadic predicate logic (Leopold Löwenheim 1915)
- Proof of the consistency of first-order predicate logic (David Hilbert and Wilhelm Ackermann 1928)
- Proof of the semantic completeness of first-order predicate logic (Gödel's completeness theorem 1930)
- Proof of the cut-elimination theorem for the sequent calculus (Gentzen's Hauptsatz 1934)
- Proof of the undecidability of first-order predicate logic (Church's theorem 1936)
- Gödel's first incompleteness theorem 1931
- Gödel's second incompleteness theorem 1931
- Tarski's undefinability theorem (Gödel and Tarski in the 1930s)
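The decidability of truth-functional propositional logic (Post 1920) can be made concrete with the truth-table method: enumerate every assignment of truth values to the variables and evaluate the formula under each. A minimal sketch, with formulas encoded as nested tuples (an illustrative encoding, not a standard one):

```python
# Decide validity of a propositional formula by exhausting all 2^n truth
# assignments. Formulas: a string is a variable, ("~", f) is negation,
# and ("&", f, g), ("|", f, g), ("->", f, g) are binary compounds.

from itertools import product

def variables(f):
    """Collect the propositional variables occurring in formula f."""
    if isinstance(f, str):
        return {f}
    if f[0] == "~":
        return variables(f[1])
    return variables(f[1]) | variables(f[2])   # binary connective

def evaluate(f, v):
    """Truth value of f under assignment v (a dict: variable -> bool)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == "~":
        return not evaluate(f[1], v)
    a, b = evaluate(f[1], v), evaluate(f[2], v)
    return {"&": a and b, "|": a or b, "->": (not a) or b}[f[0]]

def is_tautology(f):
    """Decide validity by checking every truth assignment."""
    vs = sorted(variables(f))
    return all(evaluate(f, dict(zip(vs, row)))
               for row in product([True, False], repeat=len(vs)))

print(is_tautology(("->", "p", "p")))          # True
print(is_tautology(("->", "p", "q")))          # False
print(is_tautology(("|", "p", ("~", "p"))))    # True (excluded middle)
```

The procedure always terminates, which is exactly what decidability demands; no comparable procedure exists for full first-order predicate logic, by Church's theorem above.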
References
- ^ Harry Gensler, Introduction to Logic, Routledge, 2001, p. 336.
- ^ a b c d e Hunter, Geoffrey (1996) [1971]. Metalogic: An Introduction to the Metatheory of Standard First-Order Logic. University of California Press (published 1973). ISBN 9780520023567. OCLC 36312727.
- ^ a b Rudolf Carnap (1958) Introduction to Symbolic Logic and its Applications, p. 102.
- ^ Smith, Robin (2022), "Aristotle's Logic", in Zalta, Edward N.; Nodelman, Uri (eds.), The Stanford Encyclopedia of Philosophy (Winter 2022 ed.), Metaphysics Research Lab, Stanford University, retrieved 2023-08-28
- ^ Hao Wang, Reflections on Kurt Gödel
External links
- Media related to Metalogic at Wikimedia Commons
- Dragalin, A.G. (2001) [1994], "Meta-logic", Encyclopedia of Mathematics, EMS Press
Introduction
Definition and Scope
Metalogic is the branch of mathematical logic that examines the properties of formal logical systems, focusing on their syntactic structures, semantic interpretations, and deductive mechanisms. It investigates key attributes such as completeness (whether all valid statements are provable), consistency (absence of contradictions), and decidability (whether there exists an algorithm to determine truth or validity). This field provides a rigorous framework for understanding the foundations and limitations of logical inference without participating in the inferences themselves.[3][4]

The scope of metalogic primarily includes proof theory, which analyzes the deductive validity and structure of proofs within formal systems; model theory, which explores semantic validity through interpretations and structures; and recursion theory, which studies the computability of logical procedures and decision problems. These areas form the metatheoretical core of metalogic, emphasizing abstract analysis over practical application in reasoning. By treating logical systems as objects of study, metalogic reveals their inherent capabilities and boundaries, such as the extent to which syntax aligns with semantics.

In relation to logic proper, metalogic operates at a higher level of abstraction, using metalanguages, often more expressive than the object languages they describe, to articulate properties of logical systems. This hierarchical approach allows metalogic to scrutinize the rules, axioms, and consequences of logics without embedding them in everyday discourse. The term "metalogic" was first used in the early 1930s by Alfred Tarski and Jan Łukasiewicz of the Lwów–Warsaw school, with Tarski employing it in his 1931 paper "On Definable Sets of Real Numbers I" to describe investigations into definability and foundational rigor.[3][4]

Branches of Metalogic
Metalogic encompasses several interrelated branches that investigate the properties of formal logical systems from different perspectives. The primary subfields are proof theory, model theory, and recursion theory (also known as computability theory). These branches address syntactic, semantic, and algorithmic aspects of logic, respectively, providing tools to analyze derivability, interpretation, and decidability within formal systems.[5][6][7]

Proof theory examines the structure and properties of proofs in formal systems, focusing on concepts such as derivability, consistency, and normalization. It studies how theorems are derived from axioms using inference rules, emphasizing the syntactic manipulation of formulas without reference to external interpretations. Key concerns include the consistency of systems—ensuring no contradictions can be derived—and normalization, which involves reducing proofs to canonical forms to reveal their essential logical content. This branch originated with efforts to formalize mathematical proofs and has evolved to include sequent calculi and natural deduction systems for analyzing proof complexity.[5]

Model theory explores the semantic interpretations of logical languages through models, which are structures that assign meanings to symbols and satisfy formulas. It investigates validity (truth in all models) and satisfiability (truth in at least one model), using tools like the Löwenheim–Skolem theorem to show that countable theories have models of various cardinalities. Central to this field is the relationship between a theory's axioms and the class of models they describe, enabling the classification of structures up to elementary equivalence. Model theory thus provides a framework for understanding how logical sentences constrain possible worlds or interpretations.[6]

Recursion theory analyzes the computability and decidability of problems arising in logical systems, often linking them to models like Turing machines. It classifies sets and functions as recursive (computable) or recursively enumerable, addressing questions such as whether the theorems of a system form a decidable set. This branch connects logic to theoretical computer science by studying degrees of unsolvability and the halting problem, revealing inherent limitations in algorithmic verification of logical properties. Recursion theory formalizes the intuitive notion of mechanical computation, showing, for instance, that certain predicate problems in arithmetic are undecidable.[8][7]

These branches interconnect to yield deeper metatheoretic insights: proof theory aligns with the syntactic derivation of formulas, model theory with their semantic evaluation, and recursion theory with the algorithmic feasibility of both. For example, the compactness theorem—a fundamental result stating that a theory is satisfiable if every finite subset is—bridges proof theory and model theory by relying on the completeness of first-order logic for its syntactic proof, while its semantic version uses model constructions; recursion theory further informs its computability implications, as checking finite subsets is decidable but the full theorem is not. Such interconnections enable analyses of meta-properties like the independence of axioms or the existence of non-standard models.[5][6][9]

Applications of these branches extend beyond pure logic. In proof theory, techniques underpin automated theorem proving, where systems like resolution or tableau methods derive conclusions mechanically, aiding verification in software and hardware design. Model theory supports database query optimization by modeling relational structures, while recursion theory informs complexity classifications in algorithms, such as determining the undecidability of certain program equivalence problems. These uses highlight metalogic's role in computational and foundational advancements.[5][6][7]

Foundational Concepts
Formal Languages
In metalogic, a formal language provides the symbolic foundation for logical systems by specifying a collection of finite strings constructed from a fixed set of primitive symbols, without assigning any interpretive meaning to those symbols.[10] These languages enable the precise expression of mathematical and logical statements through syntactic combinations alone.[10]

The alphabet of a formal language consists of primitive symbols divided into three main categories: logical symbols, non-logical symbols, and auxiliary symbols.[10] Logical symbols include connectives such as negation (¬), conjunction (∧), disjunction (∨), and implication (→), as well as quantifiers like the universal quantifier (∀) and existential quantifier (∃).[10] Non-logical symbols comprise individual constants (e.g., a, b), individual variables (e.g., x, y), predicate symbols (e.g., R for binary predicates denoting relations), and function symbols (e.g., f for unary functions).[10] Auxiliary symbols, such as the left and right parentheses ( and ), serve to indicate grouping and structure.[10]

The vocabulary of the language arises from combining these symbols to form basic building blocks: terms and atomic formulas.[10] Terms are constructed recursively starting from individual constants and variables, with function symbols applied to other terms (e.g., f(x) or f(a)).[10] Atomic formulas are then obtained by applying predicate symbols to appropriate numbers of terms (e.g., R(t1, t2) for a binary predicate) or by asserting equality between terms (e.g., t1 = t2).[10]

A representative example is the first-order language with equality for arithmetic, which includes non-logical symbols such as the constant 0, the unary successor function S, binary addition +, binary multiplication ×, and the binary equality predicate =, alongside the standard logical and auxiliary symbols. This setup allows for the symbolic representation of arithmetic expressions like S(0) + S(0) = S(S(0)), focusing solely on their structural composition. The full set of valid expressions in such languages, known as well-formed formulas, emerges from applying specific formation rules to these elements, as detailed in subsequent discussions of syntax.[10]

Syntax and Formation Rules
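The purely syntactic character of such a language can be sketched in code: terms of a first-order arithmetic language (constant 0, unary successor S, binary addition) are just nested symbol structures with no meaning attached. The tuple encoding below is an illustrative choice:

```python
# Terms of a first-order arithmetic language as nested tuples: the constant
# "0", unary successor S, and binary +. Building expressions this way
# captures pure syntax; no arithmetic is ever performed.

def S(t):            # apply the successor symbol to a term
    return ("S", t)

def plus(t, u):      # apply the addition symbol to two terms
    return ("+", t, u)

def render(t):
    """Write a term in the usual notation."""
    if t == "0":
        return "0"
    if t[0] == "S":
        return f"S({render(t[1])})"
    return f"({render(t[1])} {t[0]} {render(t[2])})"   # binary symbol

one = S("0")                      # the term S(0)
print(render(plus(one, one)))     # (S(0) + S(0))
```

Nothing in these structures says that S(0) denotes the number one; assigning such meanings is the job of an interpretation, not of the language itself.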
In metalogic, syntax concerns the formal rules governing the construction of expressions in a logical language, ensuring that only structurally valid strings are recognized as meaningful components of the system. These rules operate independently of any interpretive meaning, focusing solely on the arrangement of symbols from the language's alphabet to produce well-formed expressions. This syntactic framework is essential for metatheoretic investigations, such as analyzing the consistency or completeness of formal systems, by providing a precise basis for identifying valid logical structures.[2][11]

Formation rules define terms and formulas recursively, starting from basic elements and building complex expressions through inductive steps. For terms, the rules typically include: variables (such as x) and constants (such as c) are base terms; if t1, ..., tn are terms and f is an n-place function symbol, then f(t1, ..., tn) is a term; and nothing else qualifies as a term. This recursive structure ensures that all terms are generated systematically from simpler ones, preventing arbitrary symbol combinations. In first-order logic, for instance, a term like f(x, c) arises by applying the function symbol f to the variable x and the constant c.[2][11]

Formulas are similarly defined recursively, beginning with atomic formulas such as P(t1, ..., tn), where P is a predicate symbol and the ti are terms, or equality statements t1 = t2. Inductive rules then specify: if φ is a formula, then ¬φ is a formula; if φ and ψ are formulas, then (φ ∧ ψ), (φ ∨ ψ), (φ → ψ), and (φ ↔ ψ) are formulas; and if φ is a formula and x is a variable, then ∀x φ and ∃x φ are formulas, with closure under these operations. These rules guarantee that every formula can be decomposed into atomic components via repeated application of connectives and quantifiers in reverse.[2][11]

Well-formed formulas (WFFs), also known as formulas of the language, are precisely those strings generated by the formation rules, distinguishing them from ill-formed strings that violate syntactic constraints and thus lack any formal status. For example, in propositional logic, atomic propositions p, q, r serve as base WFFs; if φ and ψ are WFFs, then ¬φ, (φ ∧ ψ), (φ ∨ ψ), (φ → ψ), and similar compounds are WFFs; and only such constructions qualify. The string (p → q) is a WFF under these rules, as it applies the implication connective to atomic propositions, whereas a string such as → p is ill-formed due to incomplete or mismatched structure, ensuring unambiguous parsing of scope and association. This distinction is crucial for syntactic validity, as ill-formed expressions cannot participate in proofs or derivations.[2][11]

Syntax can be formalized using phrase-structure grammars or Backus-Naur Form (BNF) notation to specify the hierarchical generation of expressions. In BNF for propositional logic, for instance:
<formula> ::= <atom> | "¬" <formula> | "(" <formula> "→" <formula> ")"
<atom> ::= "p" | "q" | "r" (or other propositional variables)
Productions for <formula> expand to produce valid strings, mirroring the inductive construction process. Such grammatical representations facilitate automated parsing and syntactic analysis in metalogic.[2]
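Read generatively, such a grammar can also enumerate its well-formed strings. A minimal sketch, using ASCII ~ and -> in place of the logical symbols and a cap on connective nesting (formulas may repeat across depth levels in this naive enumeration):

```python
# Enumerate formulas of the grammar
#   <formula> ::= <atom> | "~" <formula> | "(" <formula> "->" <formula> ")"
# up to a given nesting depth, mirroring the productions generatively.

def formulas(depth):
    """All formulas with at most `depth` levels of connective nesting."""
    atoms = ["p", "q", "r"]
    if depth == 0:
        return atoms                       # base production: <atom>
    smaller = formulas(depth - 1)
    out = list(smaller)                    # a shallower formula still counts
    out += ["~" + f for f in smaller]      # production "~" <formula>
    out += [f"({a}->{b})"                  # production "(" ... "->" ... ")"
            for a in smaller for b in smaller]
    return out

fs = formulas(1)
print(len(fs))      # 15: 3 atoms + 3 negations + 9 implications
print(fs[:6])       # ['p', 'q', 'r', '~p', '~q', '~r']
```

This makes concrete the point that the set of WFFs is effectively enumerable, one of the metatheoretic properties mentioned below.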
In metalogic, these syntactic elements enable the objective study of expression validity through structural criteria alone, allowing properties like decidability of WFF recognition or the enumeration of all formulas to be examined without reference to truth or interpretation. This independence supports key metatheorems, such as those on the compactness of logical languages, by treating syntax as a purely combinatorial object amenable to mathematical induction and arithmetization.[2][11]
Formal Systems and Deductive Apparatus
A formal system in metalogic is defined as a triple ⟨L, A, R⟩, where L is a formal language consisting of well-formed formulas, A is a set of axioms (specific sentences or axiom schemas assumed to be true without proof), and R is a set of inference rules that specify how to derive new formulas from existing ones.[12] This structure provides the deductive apparatus for generating theorems mechanically within the system, independent of any interpretive semantics.[13]

Axioms serve as the foundational starting points of the system, often expressed as axiom schemas to cover a range of instances. For example, in classical propositional logic, a common axiom schema is φ → (ψ → φ), which captures a basic implication pattern.[14] Inference rules, by contrast, are finitary and effective procedures that transform premises into conclusions; they must involve only finitely many premises and be decidable in principle. A standard example is modus ponens, which allows inference of ψ from φ and φ → ψ. In first-order logic, more general rules like universal generalization permit inferring ∀x φ from φ under appropriate conditions.[15]

Different formal systems vary in the balance between axioms and rules. Hilbert-style systems emphasize a large set of axioms with few inference rules, such as modus ponens and generalization, as developed in the axiomatic approach of Hilbert and Bernays for propositional and predicate logic.[16] Natural deduction systems, introduced by Gentzen, reverse this by using few or no axioms but many introduction and elimination rules for connectives, mimicking informal reasoning patterns. Sequent calculus, also from Gentzen, represents deductions as sequents (e.g., Γ ⊢ Δ) with structural rules for managing assumptions and conclusions, facilitating proofs of cut-elimination.

A key syntactic property of formal systems is consistency, defined as the absence of any derivable contradiction: there exists no formula φ such that both φ and ¬φ are theorems of the system.[17] This notion is purely deductive, relying on the axioms and rules without reference to models or interpretations, and serves as a prerequisite for the system's reliability in avoiding triviality.[17]

Proofs and Theorems
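The mechanical character of the deductive apparatus can be sketched by closing a set of axioms under a single rule, modus ponens. The toy axioms and tuple encoding below are illustrative assumptions, not a real logical system:

```python
# A formal system as (axioms, one rule): repeatedly apply modus ponens
# (from A and ("->", A, B), infer B) until no new theorems appear.
# Formulas are nested tuples; the axioms below are toy examples.

def mp_closure(axioms, rounds=10):
    """Theorems derivable from `axioms` by bounded forward chaining with MP."""
    theorems = set(axioms)
    for _ in range(rounds):
        new = set()
        for f in theorems:
            if isinstance(f, tuple) and f[0] == "->" and f[1] in theorems:
                new.add(f[2])              # from A and A -> B, infer B
        if new <= theorems:
            break                          # fixpoint: nothing new derivable
        theorems |= new
    return theorems

axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}
derived = mp_closure(axioms)
print("q" in derived, "r" in derived)      # True True
```

Because derivation never consults an interpretation, consistency in the syntactic sense above is a property of this closure process alone: the system is consistent as long as no formula and its negation both end up in the theorem set.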
In formal systems of logic, a proof is defined as a finite sequence of formulas where each formula is either an axiom of the system, an assumption from a given set, or follows from preceding formulas in the sequence via an application of the system's inference rules.[18] This syntactic construction ensures that proofs are mechanically verifiable objects within the deductive apparatus, independent of any interpretive meaning.[19] For instance, in Hilbert-style systems, the sole inference rule is often modus ponens, which allows deriving ψ from φ and φ → ψ if both appear earlier in the sequence.[20]

A theorem is a special case of a provable formula, namely one that can be derived without any assumptions, denoted by the turnstile symbol as ⊢ φ, indicating that φ follows solely from the axioms of the system.[21] Theorems represent the unconditionally derivable truths of the formal system, forming its core body of established results.[22] In contrast, a derivation generalizes this to proofs from a nonempty set of assumptions Γ, written Γ ⊢ φ, where each formula in the sequence must justify its inclusion either as an element of Γ, an axiom, or a consequence of prior steps.[21] This notation captures conditional provability, allowing the exploration of logical consequences under hypothetical premises.[22]

A representative example of such a derivation appears in propositional logic, where one proves p → p using a Hilbert-style system for implicational logic with the axiom schemas (K) φ → (ψ → φ) and (S) (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ)), along with modus ponens. The proof proceeds as a finite sequence (using p in place of the schematic letters for specificity):
1. (p → ((p → p) → p)) → ((p → (p → p)) → (p → p)) (axiom S with φ = p, ψ = p → p, χ = p)
2. p → ((p → p) → p) (axiom K with φ = p, ψ = p → p)
3. (p → (p → p)) → (p → p) (from 1 and 2 by modus ponens)
4. p → (p → p) (axiom K with φ = p, ψ = p)
5. p → p (from 3 and 4 by modus ponens)
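The standard Hilbert-style derivation of p → p from the axiom schemas K: φ → (ψ → φ) and S: (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ)) can be checked mechanically. A minimal sketch, with formulas as nested tuples ("->", A, B):

```python
# Verify the five-step derivation of p -> p in an implicational Hilbert
# system: instantiate the K and S schemas, then chain two applications of
# modus ponens. The assert in mp() enforces that each step is legitimate.

def K(a, b):                 # K schema: a -> (b -> a)
    return ("->", a, ("->", b, a))

def S(a, b, c):              # S schema: (a -> (b -> c)) -> ((a -> b) -> (a -> c))
    return ("->", ("->", a, ("->", b, c)),
                  ("->", ("->", a, b), ("->", a, c)))

def mp(major, minor):
    """Modus ponens: from minor and ("->", minor, x), return x."""
    assert major[0] == "->" and major[1] == minor
    return major[2]

p = "p"
step1 = S(p, ("->", p, p), p)          # axiom S with the instances above
step2 = K(p, ("->", p, p))             # axiom K instance
step3 = mp(step1, step2)               # (p -> (p -> p)) -> (p -> p)
step4 = K(p, p)                        # p -> (p -> p)
step5 = mp(step3, step4)               # p -> p
print(step5)                           # ('->', 'p', 'p')
```

Each step either instantiates a schema or applies modus ponens to earlier steps, which is exactly the definition of a proof given above; the final value is the theorem ⊢ p → p.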
