Well-formed formula
In mathematical logic, propositional logic and predicate logic, a well-formed formula, abbreviated WFF or wff, often simply formula, is a finite sequence of symbols from a given alphabet that is part of a formal language.[1]
The abbreviation wff is pronounced "woof", or sometimes "wiff", "weff", or "whiff".[12]
A formal language can be identified with the set of formulas in the language. A formula is a syntactic object that can be given a semantic meaning by means of an interpretation. Two key uses of formulas are in propositional logic and predicate logic.
Introduction
A key use of formulas is in propositional logic and predicate logic such as first-order logic. In those contexts, a formula is a string of symbols φ for which it makes sense to ask "is φ true?", once any free variables in φ have been instantiated. In formal logic, proofs can be represented by sequences of formulas with certain properties, and the final formula in the sequence is what is proven.
Although the term "formula" may be used for written marks (for instance, on a piece of paper or chalkboard), it is more precisely understood as the sequence of symbols being expressed, with the marks being a token instance of formula. This distinction between the vague notion of "property" and the inductively defined notion of well-formed formula has roots in Weyl's 1910 paper "Über die Definitionen der mathematischen Grundbegriffe".[13] Thus the same formula may be written more than once, and a formula might in principle be so long that it cannot be written at all within the physical universe.
Formulas themselves are syntactic objects. They are given meanings by interpretations. For example, in a propositional formula, each propositional variable may be interpreted as a concrete proposition, so that the overall formula expresses a relationship between these propositions. A formula need not be interpreted, however, to be considered solely as a formula.
Propositional calculus
The formulas of propositional calculus, also called propositional formulas,[14] are expressions such as (A ∧ (B ∨ C)). Their definition begins with the arbitrary choice of a set V of propositional variables. The alphabet consists of the letters in V along with the symbols for the propositional connectives and parentheses "(" and ")", all of which are assumed not to be in V. The formulas will be certain expressions (that is, strings of symbols) over this alphabet.
The formulas are inductively defined as follows:
- Each propositional variable is, on its own, a formula.
- If φ is a formula, then ¬φ is a formula.
- If φ and ψ are formulas, and • is any binary connective, then (φ • ψ) is a formula. Here • could be (but is not limited to) the usual operators ∨, ∧, →, or ↔.
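The inductive definition above translates directly into a recursive data representation and membership test. The following is a minimal sketch in Python (not part of the article), in which a propositional variable is represented as ("var", name), a negation as ("not", φ), and a binary compound as ("bin", •, φ, ψ); the function name is_wff and the tuple tags are illustrative choices, not standard notation.

def is_wff(phi):
    """True iff phi is built by the three formation rules above."""
    if phi[0] == "var":                                   # a propositional variable on its own
        return isinstance(phi[1], str)
    if phi[0] == "not":                                   # ¬φ
        return is_wff(phi[1])
    if phi[0] == "bin":                                   # (φ • ψ) for a binary connective •
        return phi[1] in {"∧", "∨", "→", "↔"} and is_wff(phi[2]) and is_wff(phi[3])
    return False

# ((p → q) ∧ ¬q), built bottom-up from the variables p and q
example = ("bin", "∧", ("bin", "→", ("var", "p"), ("var", "q")), ("not", ("var", "q")))
print(is_wff(example))   # True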
This definition can also be written as a formal grammar in Backus–Naur form, provided the set of variables is finite:
<alpha set> ::= p | q | r | s | t | u | ... (the arbitrary finite set of propositional variables)
<form> ::= <alpha set> | ¬<form> | (<form>∧<form>) | (<form>∨<form>) | (<form>→<form>) | (<form>↔<form>)
Using this grammar, the sequence of symbols
- (((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s))
is a formula, because it is grammatically correct. The sequence of symbols
- ((p → q)→(qq))p))
is not a formula, because it does not conform to the grammar.
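Grammar conformance can also be checked mechanically. Below is a minimal recursive-descent recognizer in Python (a sketch, not part of the article), assuming the finite variable set {p, q, r, s, t, u} from the grammar above and single-character tokens; the helper names is_formula and parse_form are illustrative.

VARS = set("pqrstu")
BINARY = {"∧", "∨", "→", "↔"}

def is_formula(s):
    """True iff the whole string s conforms to the <form> grammar."""
    ok, rest = parse_form(s.replace(" ", ""))
    return ok and rest == ""

def parse_form(s):
    """Try to read one <form> from the front of s; return (success, remainder)."""
    if not s:
        return False, s
    if s[0] in VARS:                          # <alpha set>
        return True, s[1:]
    if s[0] == "¬":                           # ¬<form>
        return parse_form(s[1:])
    if s[0] == "(":                           # (<form> <connective> <form>)
        ok, rest = parse_form(s[1:])
        if ok and rest and rest[0] in BINARY:
            ok, rest = parse_form(rest[1:])
            if ok and rest and rest[0] == ")":
                return True, rest[1:]
    return False, s

print(is_formula("(((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s))"))   # True
print(is_formula("((p → q)→(qq))p))"))                    # False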
A complex formula may be difficult to read, owing, for example, to the proliferation of parentheses. To alleviate this, precedence rules (akin to the standard mathematical order of operations) are assumed among the operators, making some operators more binding than others. For example, assume the precedence (from most binding to least binding) 1. ¬, 2. →, 3. ∧, 4. ∨. Then the formula
- (((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s))
may be abbreviated as
- p → q ∧ r → s ∨ ¬q ∧ ¬s
This is, however, only a convention used to simplify the written representation of a formula. If the precedence were assumed, for example, to be left-to-right associative, in the following order: 1. ¬, 2. ∧, 3. ∨, 4. →, then the same formula above (without parentheses) would be rewritten as
- (p → (q ∧ r)) → (s ∨ (¬q ∧ ¬s))
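Both readings can be reproduced mechanically with a precedence-climbing parser that takes the precedence table as a parameter. The sketch below (in Python, not part of the article; the function names and numeric precedence values are illustrative) restores the parentheses implied by each convention, treating ¬ as binding more tightly than any binary connective and the binary connectives as left-associative.

BINARY = {"∧", "∨", "→", "↔"}

def tokenize(s):
    return [c for c in s if not c.isspace()]

def parse(tokens, prec):
    """Parse an unparenthesized formula into a fully parenthesized string.
    prec maps each binary connective to a number; larger means binds more tightly."""
    pos = 0

    def parse_atom():
        nonlocal pos
        tok = tokens[pos]
        if tok == "¬":                         # negation binds most tightly
            pos += 1
            return "¬" + parse_atom()
        if tok == "(":                         # explicit grouping
            pos += 1
            inner = parse_expr(0)
            pos += 1                           # skip the closing ")"
            return "(" + inner + ")"
        pos += 1                               # a propositional variable
        return tok

    def parse_expr(min_prec):
        nonlocal pos
        left = parse_atom()
        while pos < len(tokens) and tokens[pos] in BINARY and prec[tokens[pos]] >= min_prec:
            op = tokens[pos]
            pos += 1
            right = parse_expr(prec[op] + 1)   # +1 gives left associativity
            left = "(" + left + " " + op + " " + right + ")"
        return left

    return parse_expr(0)

s = "p → q ∧ r → s ∨ ¬q ∧ ¬s"
print(parse(tokenize(s), {"→": 3, "∧": 2, "∨": 1}))   # (((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s))
print(parse(tokenize(s), {"∧": 3, "∨": 2, "→": 1}))   # ((p → (q ∧ r)) → (s ∨ (¬q ∧ ¬s)))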
Predicate logic
The definition of a formula in first-order logic is relative to the signature of the theory at hand. This signature specifies the constant symbols, predicate symbols, and function symbols of the theory, along with the arities of the function and predicate symbols.
The definition of a formula comes in several parts. First, the set of terms is defined recursively. Terms, informally, are expressions that represent objects from the domain of discourse.
- Any variable is a term.
- Any constant symbol from the signature is a term.
- An expression of the form f(t1,...,tn), where f is an n-ary function symbol and t1,...,tn are terms, is again a term.
The next step is to define the atomic formulas.
- If t1 and t2 are terms, then t1 = t2 is an atomic formula.
- If R is an n-ary predicate symbol, and t1,...,tn are terms, then R(t1,...,tn) is an atomic formula.
Finally, the set of formulas is defined to be the smallest set containing the set of atomic formulas such that the following holds:
- ¬φ is a formula when φ is a formula;
- (φ ∧ ψ) and (φ ∨ ψ) are formulas when φ and ψ are formulas;
- ∃x φ is a formula when x is a variable and φ is a formula;
- ∀x φ is a formula when x is a variable and φ is a formula (alternatively, ∀x φ could be defined as an abbreviation for ¬∃x ¬φ).
If a formula has no occurrences of ∃x or ∀x, for any variable x, then it is called quantifier-free. An existential formula is a formula starting with a sequence of existential quantifiers followed by a quantifier-free formula.
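As an illustration of these definitions, the sketch below (Python, not part of the article) represents terms and formulas as tagged tuples and checks the quantifier-free and existential properties just described; the tuple tags and helper names are illustrative conventions, not standard ones.

# Terms: ("var", x), ("const", c), or ("func", f, (t1, ..., tn)).
# Formulas: ("atom", R, (t1, ..., tn)), ("not", phi), ("bin", op, phi, psi),
# ("forall", x, phi), or ("exists", x, phi).

def quantifier_free(phi):
    """True iff phi contains no occurrence of ∀x or ∃x for any variable x."""
    tag = phi[0]
    if tag == "atom":
        return True
    if tag == "not":
        return quantifier_free(phi[1])
    if tag == "bin":
        return quantifier_free(phi[2]) and quantifier_free(phi[3])
    return False                              # "forall" or "exists"

def is_existential(phi):
    """True iff phi is a string of existential quantifiers followed by a quantifier-free formula."""
    while phi[0] == "exists":
        phi = phi[2]
    return quantifier_free(phi)

# x = f(y) is quantifier-free; ∃y x = f(y) is an existential formula.
eq = ("atom", "=", (("var", "x"), ("func", "f", (("var", "y"),))))
print(quantifier_free(eq))                    # True
print(is_existential(("exists", "y", eq)))    # True
print(quantifier_free(("exists", "y", eq)))   # False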
Atomic and open formulas
An atomic formula is a formula that contains no logical connectives or quantifiers, or equivalently a formula that has no strict subformulas. The precise form of atomic formulas depends on the formal system under consideration; for propositional logic, for example, the atomic formulas are the propositional variables. For predicate logic, the atoms are predicate symbols together with their arguments, each argument being a term.
According to some terminology, an open formula is formed by combining atomic formulas using only logical connectives, to the exclusion of quantifiers.[15] This is not to be confused with a formula which is not closed.
Closed formulas
A closed formula, also ground formula or sentence, is a formula in which there are no free occurrences of any variable. If A is a formula of a first-order language in which the variables v1, …, vn have free occurrences, then A preceded by ∀v1 ⋯ ∀vn is a universal closure of A.
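A short sketch (Python, not part of the article, reusing the tagged-tuple representation from the sketch above) shows how free variables and a universal closure can be computed; the helper names are illustrative.

def term_vars(t):
    """Variables occurring in a term."""
    if t[0] == "var":
        return {t[1]}
    if t[0] == "const":
        return set()
    return set().union(*(term_vars(a) for a in t[2]))      # function application

def free_vars(phi):
    """Variables with a free occurrence in the formula phi."""
    tag = phi[0]
    if tag == "atom":
        return set().union(*(term_vars(a) for a in phi[2]))
    if tag == "not":
        return free_vars(phi[1])
    if tag == "bin":
        return free_vars(phi[2]) | free_vars(phi[3])
    return free_vars(phi[2]) - {phi[1]}                     # quantifiers bind their variable

def universal_closure(phi):
    """Prefix ∀v for every variable v occurring free in phi."""
    for v in sorted(free_vars(phi)):
        phi = ("forall", v, phi)
    return phi

atom = ("atom", "R", (("var", "x"), ("var", "y")))
print(free_vars(atom))              # {'x', 'y'} (order may vary)
print(universal_closure(atom))      # ∀y ∀x R(x, y), as nested tuples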
Properties applicable to formulas
- A formula A in a language is valid if it is true for every interpretation of that language.
- A formula A in a language is satisfiable if it is true for some interpretation of that language.
- A formula A of the language of arithmetic is decidable if it represents a decidable set, i.e. if there is an effective method which, given a substitution of the free variables of A, says that either the resulting instance of A is provable or its negation is.
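For the propositional case, validity and satisfiability can be checked exhaustively over all interpretations (truth assignments). The sketch below (Python, not part of the article, reusing the earlier tagged-tuple representation of propositional formulas) is a propositional analogue of these definitions; the function names are illustrative.

from itertools import product

def eval_formula(phi, val):
    """Truth value of phi under the assignment val (a dict from variable names to booleans)."""
    tag = phi[0]
    if tag == "var":
        return val[phi[1]]
    if tag == "not":
        return not eval_formula(phi[1], val)
    a, b = eval_formula(phi[2], val), eval_formula(phi[3], val)
    return {"∧": a and b, "∨": a or b, "→": (not a) or b, "↔": a == b}[phi[1]]

def variables(phi):
    if phi[0] == "var":
        return {phi[1]}
    if phi[0] == "not":
        return variables(phi[1])
    return variables(phi[2]) | variables(phi[3])

def classify(phi):
    vs = sorted(variables(phi))
    rows = product([True, False], repeat=len(vs))
    truths = [eval_formula(phi, dict(zip(vs, row))) for row in rows]
    if all(truths):
        return "valid"
    return "satisfiable" if any(truths) else "unsatisfiable"

p, q = ("var", "p"), ("var", "q")
print(classify(("bin", "∨", p, ("not", p))))   # valid
print(classify(("bin", "∧", p, ("not", p))))   # unsatisfiable
print(classify(("bin", "→", p, q)))            # satisfiable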
Usage of the terminology
In earlier works on mathematical logic (e.g. by Church[16]), formulas referred to any strings of symbols and among these strings, well-formed formulas were the strings that followed the formation rules of (correct) formulas.
Several authors simply say formula.[17][18][19][20] Modern usage (especially in the context of computer science, with mathematical software such as model checkers, automated theorem provers, and interactive theorem provers) tends to retain only the algebraic notion of a formula and to treat the question of well-formedness, i.e. of the concrete string representation of formulas (using this or that symbol for connectives and quantifiers, using this or that parenthesizing convention, using Polish or infix notation, etc.), as a mere notational problem.
The expression "well-formed formulas" (WFF) also crept into popular culture. WFF is part of an esoteric pun used in the name of the academic game "WFF 'N PROOF: The Game of Modern Logic", by Layman Allen,[21] developed while he was at Yale Law School (he was later a professor at the University of Michigan). The suite of games is designed to teach the principles of symbolic logic to children (in Polish notation).[22] Its name is an echo of whiffenpoof, a nonsense word used as a cheer at Yale University made popular in The Whiffenpoof Song and The Whiffenpoofs.[23]
Notes
- ^ Formulas are a standard topic in introductory logic, and are covered by all introductory textbooks, including Enderton (2001), Gamut (1990), and Kleene (1967)
- ^ Gensler, Harry (2002-09-11). Introduction to Logic. Routledge. p. 35. ISBN 978-1-134-58880-0.
- ^ Hall, Cordelia; O'Donnell, John (2013-04-17). Discrete Mathematics Using a Computer. Springer Science & Business Media. p. 44. ISBN 978-1-4471-3657-6.
- ^ Agler, David W. (2013). Symbolic Logic: Syntax, Semantics, and Proof. Rowman & Littlefield. p. 41. ISBN 978-1-4422-1742-3.
- ^ Simpson, R. L. (2008-03-17). Essentials of Symbolic Logic - Third Edition. Broadview Press. p. 14. ISBN 978-1-77048-495-5.
- ^ Laderoute, Karl (2022-10-24). A Pocket Guide to Formal Logic. Broadview Press. p. 59. ISBN 978-1-77048-868-7.
- ^ Maurer, Stephen B.; Ralston, Anthony (2005-01-21). Discrete Algorithmic Mathematics, Third Edition. CRC Press. p. 625. ISBN 978-1-56881-166-6.
- ^ Martin, Robert M. (2002-05-06). The Philosopher's Dictionary - Third Edition. Broadview Press. p. 323. ISBN 978-1-77048-215-9.
- ^ Date, Christopher (2008-10-14). The Relational Database Dictionary, Extended Edition. Apress. p. 211. ISBN 978-1-4302-1042-9.
- ^ Date, C. J. (2015-12-21). The New Relational Database Dictionary: Terms, Concepts, and Examples. "O'Reilly Media, Inc.". p. 241. ISBN 978-1-4919-5171-2.
- ^ Simpson, R. L. (1998-12-10). Essentials of Symbolic Logic. Broadview Press. p. 12. ISBN 978-1-55111-250-3.
- ^ All sources supported "woof". The sources cited for "wiff", "weff", and "whiff" gave these pronunciations as alternatives to "woof". The Gensler source gives "wood" and "woofer" as examples of how to pronounce the vowel in "woof".
- ^ W. Dean, S. Walsh, The Prehistory of the Subsystems of Second-order Arithmetic (2016), p.6
- ^ First-order logic and automated theorem proving, Melvin Fitting, Springer, 1996 [1]
- ^ Handbook of the history of logic, (Vol 5, Logic from Russell to Church), Tarski's logic by Keith Simmons, D. Gabbay and J. Woods Eds, p568 [2].
- ^ Alonzo Church, [1996] (1944), Introduction to mathematical logic, page 49
- ^ Hilbert, David; Ackermann, Wilhelm (1950) [1937], Principles of Mathematical Logic, New York: Chelsea
- ^ Hodges, Wilfrid (1997), A shorter model theory, Cambridge University Press, ISBN 978-0-521-58713-6
- ^ Barwise, Jon, ed. (1982), Handbook of Mathematical Logic, Studies in Logic and the Foundations of Mathematics, Amsterdam: North-Holland, ISBN 978-0-444-86388-1
- ^ Cori, Rene; Lascar, Daniel (2000), Mathematical Logic: A Course with Exercises, Oxford University Press, ISBN 978-0-19-850048-3
- ^ Ehrenberg 2002
- ^ More technically, propositional logic using the Fitch-style calculus.
- ^ Allen (1965) acknowledges the pun.
References
- Allen, Layman E. (1965), "Toward Autotelic Learning of Mathematical Logic by the WFF 'N PROOF Games", Mathematical Learning: Report of a Conference Sponsored by the Committee on Intellective Processes Research of the Social Science Research Council, Monographs of the Society for Research in Child Development, 30 (1): 29–41
- Boolos, George; Burgess, John; Jeffrey, Richard (2002), Computability and Logic (4th ed.), Cambridge University Press, ISBN 978-0-521-00758-0
- Ehrenberg, Rachel (Spring 2002). "He's Positively Logical". Michigan Today. University of Michigan. Archived from the original on 2009-02-08. Retrieved 2007-08-19.
- Enderton, Herbert (2001), A mathematical introduction to logic (2nd ed.), Boston, MA: Academic Press, ISBN 978-0-12-238452-3
- Gamut, L.T.F. (1990), Logic, Language, and Meaning, Volume 1: Introduction to Logic, University Of Chicago Press, ISBN 0-226-28085-3
- Hodges, Wilfrid (2001), "Classical Logic I: First-Order Logic", in Goble, Lou (ed.), The Blackwell Guide to Philosophical Logic, Blackwell, ISBN 978-0-631-20692-7
- Hofstadter, Douglas (1980), Gödel, Escher, Bach: An Eternal Golden Braid, Penguin Books, ISBN 978-0-14-005579-5
- Kleene, Stephen Cole (2002) [1967], Mathematical logic, New York: Dover Publications, ISBN 978-0-486-42533-7, MR 1950307
- Rautenberg, Wolfgang (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York: Springer Science+Business Media, doi:10.1007/978-1-4419-1221-3, ISBN 978-1-4419-1220-6
External links
- Well-Formed Formula for First Order Predicate Logic - includes a short Java quiz.
- Well-Formed Formula at ProvenMath
Well-formed formula
Fundamentals
Definition and Motivation
A well-formed formula (wff), also known as a formula, is a finite sequence of symbols drawn from the alphabet of a formal language that satisfies the inductive syntactic rules prescribed for constructing valid expressions in that language. These rules define the smallest set of strings qualifying as wffs, excluding any ill-formed sequences that violate the formation criteria, thereby establishing a precise grammatical structure for logical expressions.[7][5]
The motivation for wffs stems from the need to formalize reasoning processes in a way that eliminates the ambiguities inherent in natural language, ensuring that logical expressions can be parsed and evaluated unambiguously to support rigorous deduction. For example, without syntactic constraints, a string like "C ∨ E & I" might be interpreted as either (C ∨ E) & I or C ∨ (E & I), potentially leading to conflicting meanings in arguments or proofs; wffs resolve this by mandating parentheses and connective precedence to enforce a unique structure. This precision is vital for applications in mathematical proofs, automated theorem proving, and computational logic systems, where ill-formed expressions could invalidate derivations or computations.[8][7]
In deductive systems, wffs constitute the foundational units for axioms, theorems, and inference steps, enabling the construction of proofs through rule-based derivations that maintain syntactic validity throughout. Systems like Hilbert-style axiomatic frameworks and natural deduction rely on wffs to define what counts as a legitimate premise or conclusion, thereby guaranteeing the soundness of logical inferences. Wffs thus play a central role in both propositional and predicate logics as the core elements for expressing and verifying complex statements.[5][9]
Syntax in Formal Languages
In formal languages, well-formed formulas (wffs) are defined through a formal grammar consisting of a signature (an alphabet of symbols including logical connectives, parentheses, and atomic elements) and a set of inductive rules that specify how to construct valid expressions. This grammar ensures that only strings adhering to the syntactic structure are considered wffs, distinguishing them from ill-formed sequences that might resemble logical expressions but violate the rules. The inductive approach builds wffs recursively, starting from basic components and applying operations to generate more complex structures, providing a precise mechanism for syntax in logical systems.[5]
The recursive definition of wffs typically includes a base case where atomic formulas serve as the foundational elements, followed by inductive steps that combine existing wffs using connectives. For instance, if φ and ψ are wffs, then expressions such as ¬φ, (φ ∧ ψ), and similar compounds formed with other connectives are also wffs. This structure can be formalized using Backus-Naur Form (BNF) notation for clarity:
<formula> ::= <atom> | ( <formula> <connective> <formula> ) | (¬ <formula>)
Here, <atom> represents the base atomic formulas, <connective> denotes binary operators like ∧ or ∨, and the notation captures the recursive nesting essential to the grammar. Such definitions ensure that all wffs are generated systematically, with parentheses enforcing operator precedence and scope.[10]
A key property of this syntactic framework is the uniqueness of parsing: every wff corresponds to a unique syntactic tree, which decomposes the expression unambiguously into its atomic and connective components. This is achieved through algorithms that iteratively reduce the formula by matching balanced parentheses and applying the reverse of the inductive rules, guaranteeing no structural ambiguity in interpretation. For example, the tree structure reveals the hierarchical application of connectives, facilitating consistent analysis across formal systems. This parsing uniqueness underpins the reliability of logical derivations, and the general syntax specializes to propositional connectives and predicate quantifiers as particular instances.[5]
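The decomposition described here can be made concrete with a small routine that locates the main connective of a fully parenthesized formula by tracking parenthesis depth. The following Python sketch is not from the source; the function name main_connective and the returned tuple shapes are illustrative.

BINARY = {"∧", "∨", "→", "↔"}

def main_connective(s):
    """Decompose s into (left, op, right) for a binary compound, ("¬", sub) for a
    negation, or (s,) for an atomic formula."""
    s = s.strip()
    if s.startswith("¬"):
        return ("¬", s[1:].strip())
    if not s.startswith("("):
        return (s,)                            # atomic formula
    depth = 0
    for i, c in enumerate(s):
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
        elif c in BINARY and depth == 1:       # the unique connective at depth 1
            return (s[1:i].strip(), c, s[i + 1:-1].strip())
    raise ValueError("not a well-formed formula: " + s)

print(main_connective("((p → q) ∧ (r → s))"))   # ('(p → q)', '∧', '(r → s)')
print(main_connective("¬(p ∨ q)"))              # ('¬', '(p ∨ q)')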
Propositional Logic
Atomic Propositions
In propositional logic, atomic propositions, also referred to as atomic formulas or proposition letters, are the basic, indivisible units that form the foundation of all well-formed formulas (wffs). These are simple symbols, typically denoted by uppercase letters such as P, Q, or R, or sometimes lowercase letters with subscripts such as p1, p2, …, representing declarative statements without any internal logical structure.[11][12] Atomic propositions stand for elementary assertions that are either true or false but cannot be decomposed further within the logical system, such as "It is raining" or "5 is a prime number."[13] Their truth value is primitive and does not depend on other propositions, making them the starting point for logical analysis.[4]
According to the syntax of propositional logic, every single atomic proposition qualifies as a well-formed formula by the base rule of formation. For example, P is a wff, while the concatenation PQ is not, as it violates the requirement for a connective to link multiple atoms properly.[12] These atoms are then combined using logical connectives to construct more complex wffs, enabling the expression of intricate relationships.[11]
Connectives and Construction Rules
In propositional logic, well-formed formulas (WFFs) beyond atomic propositions are constructed by applying logical connectives recursively to existing WFFs, ensuring syntactic validity through precise rules.[13] The standard connectives include negation (¬), conjunction (∧), disjunction (∨), implication (→), and biconditional (↔), each combining one or two subformulas to form more complex expressions.[5] These operations allow for the building of compound propositions that represent logical relationships, such as "not P" for negation or "P and Q" for conjunction.[14] The construction of WFFs follows inductive rules, where the set of WFFs is closed under the application of connectives:
- If φ is a WFF, then ¬φ is a WFF (negation).[13]
- If φ and ψ are WFFs, then (φ ∧ ψ) is a WFF (conjunction).[5]
- If φ and ψ are WFFs, then (φ ∨ ψ) is a WFF (disjunction).[14]
- If φ and ψ are WFFs, then (φ → ψ) is a WFF (implication).[13]
- If φ and ψ are WFFs, then (φ ↔ ψ) is a WFF (biconditional).[5]
For example, the WFF (¬P ∨ Q) can be built up from the atomic propositions P and Q by applying these rules step by step:
- P is a WFF (atomic proposition).[14]
- ¬P is a WFF, by the negation rule applied to P.[13]
- Q is a WFF (atomic proposition).[14]
- (¬P ∨ Q) is a WFF, by the disjunction rule applied to ¬P and Q.[5]
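The same derivation can be mirrored programmatically; a minimal Python sketch (not from the source, using the tagged-tuple representation from the earlier sketches) builds the compound formula bottom-up:

P = ("var", "P")                   # P is a WFF (atomic proposition)
not_P = ("not", P)                 # ¬P is a WFF, by the negation rule
Q = ("var", "Q")                   # Q is a WFF (atomic proposition)
disj = ("bin", "∨", not_P, Q)      # (¬P ∨ Q) is a WFF, by the disjunction rule
print(disj)                        # ('bin', '∨', ('not', ('var', 'P')), ('var', 'Q'))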
Predicate Logic
Terms and Predicates
In predicate logic, terms serve as the fundamental expressions that denote objects or elements within the domain of discourse. The syntax for terms is defined recursively: individual variables, such as x or y, are terms; individual constants, such as a or b, are terms; and if t1, ..., tn are terms and f is an n-ary function symbol, then f(t1, ..., tn) is a term.[15][16] This recursive structure allows terms to build complex expressions representing composite objects, such as f(x, a), where the binary function symbol f is applied to the variable x and the constant a.[15]
Predicates, or more precisely predicate symbols, represent relations or properties over these terms and form the basis of atomic formulas in the syntax. A predicate symbol P of arity n combines with n terms t1, ..., tn to yield an atomic formula P(t1, ..., tn), which asserts that the relation holds for those terms.[16] The arity specifies the number of arguments required: for instance, a unary predicate applies to a single term such as the variable x, while a binary predicate relates two terms such as the variables x and y.[16] Every such atomic formula constitutes a well-formed formula (wff) at the basic level.[15]
Representative examples illustrate this syntax: the atomic formula R(a, b) uses the binary predicate symbol R with the constants a and b as terms, expressing a relation between specific individuals; similarly, P(x, c) uses a binary predicate symbol with the variable x and the constant c.[16] These atomic components form the foundation upon which more complex wffs are constructed using connectives and quantifiers.[15]
Quantifiers and Formation Rules
In predicate logic, well-formed formulas (wffs) are extended beyond atomic predicates by incorporating quantifiers, which allow for the expression of generality over variables. The universal quantifier, denoted ∀, and the existential quantifier, denoted ∃, operate on a variable and a subformula to form new wffs. Specifically, if φ is a wff and x is a variable, then ∀x φ and ∃x φ are wffs, where the quantifier binds the variable x within the scope of φ.[17][18]
These quantified formulas integrate with the connectives from propositional logic through recursive application, enabling complex structures that combine quantification and logical operations. For instance, if P(x) and Q(x) are atomic formulas, then (∀x P(x)) → Q(x) is a wff, formed by applying the implication connective to the quantified subformula. This recursive construction ensures that any valid propositional combination of wffs, including those with quantifiers, yields a new wff.[17][19]
The scope of a quantifier is the subformula immediately following it, typically delimited by parentheses, within which the quantified variable is bound and its occurrences are interpreted relative to the quantifier. Variables within this scope are bound by the quantifier, distinguishing them from free variables that remain unbound; this binding is crucial for determining whether a formula is open or closed. For example, in the wff ∀x (Dog(x) → Mammal(x)), the universal quantifier binds x throughout the implication, expressing that every entity satisfying Dog also satisfies Mammal.[18][20]
The syntactic structure of such a quantified wff can be represented by a parse tree, which illustrates the hierarchical binding and scope:
            ∀x
             |
  (Dog(x) → Mammal(x))
         /        \
    Dog(x)      Mammal(x)
       |            |
       x            x