Well-formed formula
from Wikipedia

In mathematical logic, propositional logic and predicate logic, a well-formed formula, abbreviated WFF or wff, often simply formula, is a finite sequence of symbols from a given alphabet that is part of a formal language.[1]

The abbreviation wff is pronounced "woof", or sometimes "wiff", "weff", or "whiff".[12]

A formal language can be identified with the set of formulas in the language. A formula is a syntactic object that can be given a semantic meaning by means of an interpretation. Two key uses of formulas are in propositional logic and predicate logic.

Introduction


A key use of formulas is in propositional logic and predicate logic such as first-order logic. In those contexts, a formula is a string of symbols φ for which it makes sense to ask "is φ true?", once any free variables in φ have been instantiated. In formal logic, proofs can be represented by sequences of formulas with certain properties, and the final formula in the sequence is what is proven.

Although the term "formula" may be used for written marks (for instance, on a piece of paper or chalkboard), it is more precisely understood as the sequence of symbols being expressed, with the marks being a token instance of formula. This distinction between the vague notion of "property" and the inductively defined notion of well-formed formula has roots in Weyl's 1910 paper "Über die Definitionen der mathematischen Grundbegriffe".[13] Thus the same formula may be written more than once, and a formula might in principle be so long that it cannot be written at all within the physical universe.

Formulas themselves are syntactic objects. They are given meanings by interpretations. For example, in a propositional formula, each propositional variable may be interpreted as a concrete proposition, so that the overall formula expresses a relationship between these propositions. A formula need not be interpreted, however, to be considered solely as a formula.

Propositional calculus


The formulas of propositional calculus, also called propositional formulas,[14] are expressions such as (A ∧ (B ∨ C)). Their definition begins with the arbitrary choice of a set V of propositional variables. The alphabet consists of the letters in V along with the symbols for the propositional connectives and parentheses "(" and ")", all of which are assumed to not be in V. The formulas will be certain expressions (that is, strings of symbols) over this alphabet.

The formulas are inductively defined as follows:

  • Each propositional variable is, on its own, a formula.
  • If φ is a formula, then ¬φ is a formula.
  • If φ and ψ are formulas, and • is any binary connective, then (φ • ψ) is a formula. Here • could be (but is not limited to) the usual operators ∨, ∧, →, or ↔.

This definition can also be written as a formal grammar in Backus–Naur form, provided the set of variables is finite:

<alpha set> ::= p | q | r | s | t | u | ... (the arbitrary finite set of propositional variables)
<form> ::= <alpha set> | ¬<form> | (<form>∧<form>) | (<form>∨<form>) | (<form>→<form>) | (<form>↔<form>)

Using this grammar, the sequence of symbols

(((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s))

is a formula, because it is grammatically correct. The sequence of symbols

((p → q)→(q → q))p))

is not a formula, because it does not conform to the grammar.
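To make the grammar concrete, the two strings above can be checked mechanically. The following is a minimal recursive-descent recognizer for the grammar (an illustrative sketch, not part of the article; it assumes single-character variables from the alpha set and the four connectives shown):

```python
# Recursive-descent recognizer for the BNF grammar above (a sketch).
# <form> ::= <alpha set> | ¬<form> | (<form>•<form>) with • ∈ {∧, ∨, →, ↔}

VARS = set("pqrstu")       # the arbitrary finite set of propositional variables
BINARY = set("∧∨→↔")       # binary connectives

def is_formula(s: str) -> bool:
    """Return True iff s is a well-formed formula of the grammar."""
    ok, rest = _form(s)
    return ok and rest == ""

def _form(s):
    """Try to consume one <form> from the front of s; return (ok, remainder)."""
    if not s:
        return False, s
    if s[0] in VARS:                       # <form> ::= <alpha set>
        return True, s[1:]
    if s[0] == "¬":                        # <form> ::= ¬<form>
        return _form(s[1:])
    if s[0] == "(":                        # <form> ::= (<form>•<form>)
        ok, rest = _form(s[1:])
        if ok and rest and rest[0] in BINARY:
            ok, rest = _form(rest[1:])
            if ok and rest and rest[0] == ")":
                return True, rest[1:]
    return False, s

print(is_formula("(((p→q)∧(r→s))∨(¬q∧¬s))"))  # True: grammatically correct
print(is_formula("((p→q)→(q→q))p))"))          # False: does not conform
```

The recognizer succeeds exactly when the whole string is consumed by one application of the formation rules, mirroring the inductive definition.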

A complex formula may be difficult to read, owing to, for example, the proliferation of parentheses. To alleviate this, precedence rules (akin to the standard mathematical order of operations) are assumed among the operators, making some operators more binding than others. For example, assume the precedence (from most binding to least binding) 1. ¬   2. →   3. ∧   4. ∨. Then the formula

(((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s))

may be abbreviated as

p → q ∧ r → s ∨ ¬q ∧ ¬s

This is, however, only a convention used to simplify the written representation of a formula. If the precedence were instead assumed to be left-right associative, in the following order: 1. ¬   2. ∧   3. ∨   4. →, then the same formula above (without parentheses) would be rewritten as

(p → (q ∧ r)) → (s ∨ (¬q ∧ ¬s))
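The effect of a precedence convention can be made concrete with a small precedence-climbing routine that restores the omitted parentheses (an illustrative sketch; the function name and numeric precedence table are my own encoding of the orders given above, with a larger number meaning a more binding operator):

```python
def to_parenthesized(tokens, prec):
    """Precedence-climbing parser: return a fully parenthesized string.
    tokens: list of single-character symbols; prec: binary operator -> level,
    where a larger level binds tighter. ¬ always binds tightest."""
    pos = [0]

    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None

    def next_tok():
        t = tokens[pos[0]]
        pos[0] += 1
        return t

    def atom():
        t = next_tok()
        if t == "¬":
            return "¬" + atom()     # negation applies to the next atom
        return t                    # a propositional variable

    def expr(min_prec):
        left = atom()
        while peek() in prec and prec[peek()] >= min_prec:
            op = next_tok()
            right = expr(prec[op] + 1)      # left-associative grouping
            left = f"({left}{op}{right})"
        return left

    return expr(0)

tokens = list("p→q∧r→s∨¬q∧¬s")
# Precedence (most binding first): ¬, →, ∧, ∨
print(to_parenthesized(tokens, {"→": 3, "∧": 2, "∨": 1}))
# Precedence (most binding first): ¬, ∧, ∨, →
print(to_parenthesized(tokens, {"∧": 3, "∨": 2, "→": 1}))
```

Under the first table the abbreviated string expands to (((p→q)∧(r→s))∨(¬q∧¬s)); under the second it expands to ((p→(q∧r))→(s∨(¬q∧¬s))), matching the two readings discussed above.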

Predicate logic


The definition of a formula in first-order logic is relative to the signature of the theory at hand. This signature specifies the constant symbols, predicate symbols, and function symbols of the theory at hand, along with the arities of the function and predicate symbols.

The definition of a formula comes in several parts. First, the set of terms is defined recursively. Terms, informally, are expressions that represent objects from the domain of discourse.

  1. Any variable is a term.
  2. Any constant symbol from the signature is a term.
  3. An expression of the form f(t1,...,tn), where f is an n-ary function symbol and t1,...,tn are terms, is again a term.

The next step is to define the atomic formulas.

  1. If t1 and t2 are terms, then t1=t2 is an atomic formula.
  2. If R is an n-ary predicate symbol and t1,...,tn are terms, then R(t1,...,tn) is an atomic formula.

Finally, the set of formulas is defined to be the smallest set containing the set of atomic formulas such that the following holds:

  1. ¬φ is a formula when φ is a formula;
  2. (φ ∧ ψ) and (φ ∨ ψ) are formulas when φ and ψ are formulas;
  3. ∃x φ is a formula when x is a variable and φ is a formula;
  4. ∀x φ is a formula when x is a variable and φ is a formula (alternatively, ∀x φ could be defined as an abbreviation for ¬∃x ¬φ).

If a formula has no occurrences of ∃x or ∀x, for any variable x, then it is called quantifier-free. An existential formula is a formula starting with a string of existential quantifiers followed by a quantifier-free formula.
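The clauses above translate directly into a datatype with a recursive traversal. The following Python sketch (all class and function names are illustrative choices, not from the article) mirrors the term and formula clauses and implements the quantifier-free test:

```python
from dataclasses import dataclass
from typing import Tuple

# Terms (clauses 1-3): variables, constants, function applications.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    name: str

@dataclass(frozen=True)
class Func:
    name: str
    args: Tuple          # f(t1, ..., tn)

# Formulas: atomic (t1=t2, R(t1,...,tn)), connectives, quantifiers.
@dataclass(frozen=True)
class Eq:
    left: object
    right: object        # t1 = t2

@dataclass(frozen=True)
class Pred:
    name: str
    args: Tuple          # R(t1, ..., tn)

@dataclass(frozen=True)
class Not:
    sub: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Exists:
    var: str
    body: object

@dataclass(frozen=True)
class Forall:
    var: str
    body: object

def quantifier_free(phi) -> bool:
    """A formula is quantifier-free iff it contains no ∃x or ∀x."""
    if isinstance(phi, (Eq, Pred)):
        return True
    if isinstance(phi, Not):
        return quantifier_free(phi.sub)
    if isinstance(phi, And):
        return quantifier_free(phi.left) and quantifier_free(phi.right)
    return False         # Exists / Forall

atom = Pred("R", (Var("x"), Const("a")))
print(quantifier_free(atom))               # True
print(quantifier_free(Exists("x", atom)))  # False
```

Only ¬ and ∧ are shown among the connectives; the remaining ones follow the same pattern.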

Atomic and open formulas


An atomic formula is a formula that contains no logical connectives or quantifiers, or equivalently a formula that has no strict subformulas. The precise form of atomic formulas depends on the formal system under consideration; for propositional logic, for example, the atomic formulas are the propositional variables. For predicate logic, the atoms are predicate symbols together with their arguments, each argument being a term.

According to some terminology, an open formula is formed by combining atomic formulas using only logical connectives, to the exclusion of quantifiers.[15] This is not to be confused with a formula which is not closed.

Closed formulas


A closed formula, also ground formula or sentence, is a formula in which there are no free occurrences of any variable. If A is a formula of a first-order language in which the variables v1, …, vn have free occurrences, then A preceded by ∀v1 ⋯ ∀vn is a universal closure of A.

Properties applicable to formulas

  • A formula A in a language is valid if it is true for every interpretation of that language.
  • A formula A in a language is satisfiable if it is true for some interpretation of that language.
  • A formula A of the language of arithmetic is decidable if it represents a decidable set, i.e. if there is an effective method which, given a substitution of the free variables of A, says that either the resulting instance of A is provable or its negation is.

Usage of the terminology


In earlier works on mathematical logic (e.g. by Church[16]), formulas referred to any strings of symbols and among these strings, well-formed formulas were the strings that followed the formation rules of (correct) formulas.

Several authors simply say formula.[17][18][19][20] Modern usages (especially in the context of computer science, with mathematical software such as model checkers, automated theorem provers, and interactive theorem provers) tend to retain only the algebraic concept of a formula and to treat the question of well-formedness, i.e. of the concrete string representation of formulas (using this or that symbol for connectives and quantifiers, using this or that parenthesizing convention, using Polish or infix notation, etc.), as a mere notational problem.

The expression "well-formed formulas" (WFF) also crept into popular culture. WFF is part of an esoteric pun used in the name of the academic game "WFF 'N PROOF: The Game of Modern Logic", by Layman Allen,[21] developed while he was at Yale Law School (he was later a professor at the University of Michigan). The suite of games is designed to teach the principles of symbolic logic to children (in Polish notation).[22] Its name is an echo of whiffenpoof, a nonsense word used as a cheer at Yale University made popular in The Whiffenpoof Song and The Whiffenpoofs.[23]

from Grokipedia
A well-formed formula (WFF), also known as a wff, is a syntactically correct string in a formal logical language that adheres to recursive construction rules, allowing it to represent a meaningful proposition when interpreted semantically. In propositional logic, WFFs are built inductively from atomic propositions—such as sentence letters like p, q, or r—using logical connectives including negation (¬Φ), conjunction ((Φ ∧ Ψ)), disjunction ((Φ ∨ Ψ)), implication ((Φ → Ψ)), and biconditional ((Φ ↔ Ψ)), with nothing beyond these rules qualifying as a WFF. For example, expressions like ¬p or (p ∧ q) are WFFs, while unbalanced strings such as p ∧ or ¬ ¬p q are not, as they violate the syntactic structure.

In predicate logic, the syntax extends to include terms (variables, constants, and function applications), predicates applied to terms forming atomic formulas (e.g., P(x, y), meaning "x is related to y by P"), and quantifiers, both universal (∀x Φ) and existential (∃x Φ), combined with the propositional connectives. Constants like true and false, propositional variables, and atomic formulas such as B(x) serve as base WFFs, enabling more expressive statements like ∀x B(x). Ill-formed examples include x B(x), which lacks its quantifier, and B(R(x)), which applies a predicate to a predicate; such strings fail to produce interpretable propositions. This recursive structure ensures precision in distinguishing valid logical expressions from arbitrary symbol sequences.

Well-formed formulas form the syntactic foundation of formal logic, defining the acceptable expressions for semantic evaluation, proof construction, and deductive systems. By providing a rigorous grammar, WFFs enable inductive proofs about logical properties and support applications in computer science, such as automated theorem provers and model checkers. Their role underscores the distinction between syntax (form) and semantics (meaning), preventing ambiguity in mathematical and philosophical reasoning.

Fundamentals

Definition and Motivation

A well-formed formula (wff), also known simply as a formula, is a finite sequence of symbols drawn from the alphabet of a formal language that satisfies the inductive syntactic rules prescribed for constructing valid expressions in that language. These rules define the smallest set of strings qualifying as wffs, excluding any ill-formed sequences that violate the formation criteria, thereby establishing a precise grammatical structure for logical expressions.

The motivation for wffs stems from the need to formalize reasoning processes in a way that eliminates the ambiguities inherent in natural language, ensuring that logical expressions can be parsed and evaluated unambiguously to support rigorous deduction. For example, without syntactic constraints, a string like "C ∨ E & I" might be interpreted as either (C ∨ E) & I or C ∨ (E & I), potentially leading to conflicting meanings in arguments or proofs; wffs resolve this by mandating parentheses and connective precedence to enforce a unique structure. This precision is vital for applications in mathematical proofs, automated reasoning, and computational logic systems, where ill-formed expressions could invalidate derivations or computations.

In deductive systems, wffs constitute the foundational units for axioms, theorems, and inference steps, enabling the construction of proofs through rule-based derivations that maintain syntactic validity throughout. Systems like Hilbert-style axiomatic frameworks and natural deduction rely on wffs to define what counts as a legitimate premise or conclusion, thereby guaranteeing the validity of logical inferences. Wffs thus play a central role in both propositional and predicate logics as the core elements for expressing and verifying complex statements.

Syntax in Formal Languages

In formal languages, well-formed formulas (wffs) are defined through a formal grammar consisting of an alphabet—a set of symbols including logical connectives, parentheses, and atomic elements—and a set of inductive rules that specify how to construct valid expressions. This ensures that only strings adhering to the syntactic structure are considered wffs, distinguishing them from ill-formed sequences that might resemble logical expressions but violate the rules. The inductive approach builds wffs recursively, starting from basic components and applying operations to generate more complex structures, providing a precise mechanism for generating valid expressions in logical systems.

The recursive definition of wffs typically includes a base case, where atomic formulas serve as the foundational elements, followed by inductive steps that combine existing wffs using connectives. For instance, if A and B are wffs, then expressions such as (A ∧ B), (A ∨ B), and similar compounds formed with other connectives are also wffs. This structure can be formalized using Backus–Naur Form (BNF) notation for clarity:

<formula> ::= <atom> | ( <formula> <connective> <formula> ) | (¬ <formula>)


Here, <atom> represents the base atomic formulas, <connective> denotes binary operators like ∧ or ∨, and the notation captures the recursive nesting essential to the definition. Such definitions ensure that all wffs are generated systematically, with parentheses enforcing operator precedence and scope.

A key property of this syntactic framework is the uniqueness of readability: every wff corresponds to a unique syntactic parse tree, which decomposes the expression unambiguously into its atomic and connective components. This is achieved through algorithms that iteratively reduce the formula by matching balanced parentheses and applying the inductive rules in reverse, guaranteeing no structural ambiguity in interpretation. For example, the parse tree reveals the hierarchical application of connectives, facilitating consistent analysis across formal systems. This property underpins the reliability of logical derivations, with propositional connectives and predicate quantifiers arising as specific instances of the general syntax.
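A parser for this grammar makes the unique-readability property tangible: the sketch below (illustrative; the atom set and the nested-tuple encoding of trees are my own choices) returns the single parse tree determined by a fully parenthesized formula.

```python
# Parser for the BNF grammar above, producing the unique parse tree
# as nested tuples:  'p'  |  ('¬', tree)  |  (op, left, right).
ATOMS = set("pqr")
CONNECTIVES = set("∧∨→↔")

def parse(s):
    tree, rest = _parse(s)
    if rest:
        raise ValueError(f"trailing input: {rest!r}")
    return tree

def _parse(s):
    if s and s[0] in ATOMS:                 # <formula> ::= <atom>
        return s[0], s[1:]
    if s and s[0] == "(":
        if s[1:2] == "¬":                   # <formula> ::= (¬ <formula>)
            sub, rest = _parse(s[2:])
            if rest[:1] != ")":
                raise ValueError("expected ')'")
            return ("¬", sub), rest[1:]
        left, rest = _parse(s[1:])          # ( <formula> <connective> <formula> )
        if not rest or rest[0] not in CONNECTIVES:
            raise ValueError("expected a connective")
        op = rest[0]
        right, rest = _parse(rest[1:])
        if rest[:1] != ")":
            raise ValueError("expected ')'")
        return (op, left, right), rest[1:]
    raise ValueError(f"unexpected input: {s!r}")

print(parse("((¬p)∨q)"))   # ('∨', ('¬', 'p'), 'q')
```

Because each production either consumes an atom or a matched pair of parentheses, no input string can be decomposed in two different ways, which is exactly the unique-readability guarantee.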

Propositional Logic

Atomic Propositions

In propositional logic, atomic propositions, also referred to as atomic formulas or proposition letters, are the basic, indivisible units that form the foundation of all well-formed formulas (wffs). These are simple symbols, typically denoted by uppercase letters such as P, Q, R, or sometimes lowercase letters with subscripts like p1, p2, representing declarative statements without any internal logical structure. Atomic propositions stand for elementary assertions that are either true or false but cannot be decomposed further within the logical system, such as "It is raining" or "5 is a prime number." Their interpretation is primitive and does not depend on other propositions, making them the starting point for logical analysis. According to the formation rules of propositional logic, every single atomic proposition qualifies as a well-formed formula by the base rule of formation. For example, P is a wff, while the concatenation PQ is not, as it violates the requirement for a connective to link multiple atoms properly. These atoms are then combined using logical connectives to construct more complex wffs, enabling the expression of intricate relationships.

Connectives and Construction Rules

In propositional logic, well-formed formulas (WFFs) beyond atomic propositions are constructed by applying logical connectives recursively to existing WFFs, ensuring syntactic validity through precise rules. The standard connectives include negation (¬), conjunction (∧), disjunction (∨), implication (→), and biconditional (↔), each combining one or two subformulas to form more complex expressions. These operations allow for the building of compound propositions that represent logical relationships, such as "not P" for negation or "P and Q" for conjunction. The construction of WFFs follows inductive rules, where the set of WFFs is closed under the application of connectives:
  • If A is a WFF, then ¬A is a WFF (negation).
  • If A and B are WFFs, then (A ∧ B) is a WFF (conjunction).
  • If A and B are WFFs, then (A ∨ B) is a WFF (disjunction).
  • If A and B are WFFs, then (A → B) is a WFF (implication).
  • If A and B are WFFs, then (A ↔ B) is a WFF (biconditional).
No other strings qualify as WFFs unless derived solely from these rules applied to atomic propositions. Full parenthesization is required for all compound WFFs to eliminate ambiguity in operator precedence and association, as logical connectives lack inherent priority without explicit grouping. For instance, ((P ∧ Q) → R) differs from (P ∧ (Q → R)): the former means "if P and Q, then R," while the latter means "P, and (if Q, then R)." To illustrate the recursive construction, consider building the WFF (¬P ∨ Q) from atomic propositions P and Q:
  1. P is a WFF (atomic proposition).
  2. ¬P is a WFF, by the negation rule applied to P.
  3. Q is a WFF (atomic proposition).
  4. (¬P ∨ Q) is a WFF, by the disjunction rule applied to ¬P and Q.
This process demonstrates closure under the rules, ensuring every compound WFF is unambiguously formed. Similar derivations extend to more complex formulas, such as ((P → Q) ∧ R), by iteratively applying the rules to prior results. These propositional constructions provide the foundation for extending WFFs in predicate logic through quantifiers.

Predicate Logic

Terms and Predicates

In predicate logic, terms serve as the fundamental expressions that denote objects or elements within the domain of discourse. The syntax for terms is defined recursively: individual variables, such as x or y, are terms; individual constants, such as a or b, are terms; and if t1, …, tn are terms and f is an n-ary function symbol, then f(t1, …, tn) is a term. This recursive structure allows terms to build complex expressions representing composite objects, such as f(x, a), where the function symbol f is applied to the variable x and the constant a.

Predicates, or more precisely predicate symbols, represent relations or properties over these terms and form the basis of atomic formulas in the syntax. A predicate symbol P of arity n combines with n terms t1, …, tn to yield an atomic formula P(t1, …, tn), which asserts that the relation holds for those terms. The arity specifies the number of arguments required: for instance, a unary predicate Q(x) applies to a single term like the variable x, while a binary predicate R(x, y) relates two terms such as the variables x and y. Every such atomic formula constitutes a well-formed formula (wff) at the basic level.

Representative examples illustrate this syntax: the atomic formula Loves(John, Mary) uses the binary predicate Loves with the constants John and Mary as terms, expressing a relation between specific individuals; similarly, Equals(x, a) employs the binary predicate Equals (often denoted =) with the variable x and the constant a. These atomic components form the foundation upon which more complex wffs are constructed using connectives and quantifiers.

Quantifiers and Formation Rules

In predicate logic, well-formed formulas (wffs) are extended beyond atomic predicates by incorporating quantifiers, which allow for the expression of generality over variables. The universal quantifier, denoted ∀, and the existential quantifier, denoted ∃, operate on a variable and a subformula to form new wffs. Specifically, if A is a wff and x is a variable, then ∀x A and ∃x A are wffs, where the quantifier binds the variable x within the scope of A. These quantified formulas integrate with the connectives from propositional logic through recursive application, enabling complex structures that combine quantification and logical operations. For instance, if P(x) and Q(x) are atomic formulas, then ∀x (P(x) → Q(x)) is a wff, formed by applying the implication connective within the quantified subformula. This recursive construction ensures that any valid propositional combination of wffs, including those with quantifiers, yields a new wff.

The scope of a quantifier is the subformula immediately following it, typically delimited by parentheses, within which the quantified variable is bound and its occurrences are interpreted relative to the quantifier. Variables within this scope are bound by the quantifier, distinguishing them from free variables that remain unbound; this binding is crucial for determining whether a formula is open or closed. For example, in the wff ∀x (Dog(x) → Mammal(x)), the universal quantifier binds x throughout the implication, expressing that every entity satisfying Dog(x) also satisfies Mammal(x). The syntactic structure of such a quantified wff can be represented by a syntax tree, which illustrates the hierarchical binding and scope:

           ∀x
            |
  (Dog(x) → Mammal(x))
      /          \
  Dog(x)      Mammal(x)
     |            |
     x            x

In this tree, the root is the universal quantifier, with its scope as the child node containing the implication; the variable x appears as leaves bound within the predicate nodes, confirming the formula's well-formedness.

Formula Classification

Open Formulas

In predicate logic, an open formula, also known as an open well-formed formula (wff), is defined as a well-formed formula that contains at least one free variable, meaning a variable occurrence not bound by any quantifier. This distinguishes open formulas from closed formulas, which have no free variables and thus represent complete sentences. For instance, the formula P(x), where P is a unary predicate and x is a free variable, exemplifies an open formula, as it expresses a property that depends on the value assigned to x. Open formulas function as predicates with placeholders for variables, allowing them to serve as templates for more specific statements through substitution.

A key property is the substitution rule, which permits replacing a free variable in an open formula with a term t (such as a constant or another variable), provided the substitution avoids variable capture—ensuring that bound variables remain unaffected—to yield a new well-formed formula. This operation is fundamental in formal proofs and derivations, enabling the instantiation of general predicates into specific instances without altering the formula's structure.

Examples of open formulas often involve partial quantification, where some variables are bound while others remain free. Consider ∀y P(x, y), where x is free and y is bound by the universal quantifier; this formula is open because it predicates a relation dependent on the unbound x. Similarly, greater(x, y) is open with both x and y free, while ∀y greater(x, y) remains open due to the free x. Open formulas play a crucial role in axiom schemas, such as those of first-order theories, where they form the basis for universal closure: for an open formula φ with free variables x1, …, xm, the universal closure ∀x1 … ∀xm φ generates a closed sentence by binding all free variables.
This mechanism allows schemas to capture infinite families of theorems efficiently, as seen in systems like Peano arithmetic.
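The free-variable, substitution, and closure notions above can be sketched in code (an illustrative encoding of formulas as nested tuples; the representation and the simple capture check are my own, and the check merely rejects, rather than renames, a capturing substitution):

```python
# Formulas as nested tuples:
#   ("pred", name, v1, v2, ...)  atomic formula over variable arguments
#   ("not", f), ("and", f, g)    connectives
#   ("forall", x, f), ("exists", x, f)  quantifiers

def free_vars(phi):
    """Set of variables with free occurrences in phi."""
    tag = phi[0]
    if tag == "pred":
        return set(phi[2:])                 # all arguments are variables here
    if tag == "not":
        return free_vars(phi[1])
    if tag == "and":
        return free_vars(phi[1]) | free_vars(phi[2])
    if tag in ("forall", "exists"):
        return free_vars(phi[2]) - {phi[1]}  # quantifier binds its variable
    raise ValueError(tag)

def substitute(phi, x, t):
    """Replace free occurrences of variable x by term t in phi."""
    tag = phi[0]
    if tag == "pred":
        return (tag, phi[1]) + tuple(t if v == x else v for v in phi[2:])
    if tag == "not":
        return (tag, substitute(phi[1], x, t))
    if tag == "and":
        return (tag, substitute(phi[1], x, t), substitute(phi[2], x, t))
    if tag in ("forall", "exists"):
        if phi[1] == x:
            return phi                       # x is bound here; nothing is free
        if phi[1] == t:
            raise ValueError("substitution would capture " + str(t))
        return (tag, phi[1], substitute(phi[2], x, t))
    raise ValueError(tag)

def universal_closure(phi):
    """Prefix ∀ for every free variable, yielding a closed sentence."""
    for v in sorted(free_vars(phi), reverse=True):
        phi = ("forall", v, phi)
    return phi

# ∀y P(x, y): x is free, y is bound.
open_wff = ("forall", "y", ("pred", "P", "x", "y"))
print(free_vars(open_wff))              # {'x'}
print(free_vars(universal_closure(open_wff)))  # set(): now a sentence
```

Substituting a constant for the free x, as in substitute(open_wff, "x", "a"), yields ∀y P(a, y), while substituting y itself is rejected because the bound y would capture it.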

Closed Formulas

In predicate logic, a closed formula, also known as a sentence, is a well-formed formula in which all occurrences of variables are bound by quantifiers, leaving no free variables. This property allows the formula to express a complete proposition that possesses a definite truth value under any interpretation. Closed formulas are formed by binding the free variables of an open formula through the application of universal or existential quantifiers. For an open formula φ with free variables v1, …, vn, the universal closure is the sentence ∀v1 … ∀vn φ, while analogous existential closures can be constructed by replacing universal quantifiers with existential ones.

In propositional logic, which contains no variables or quantifiers, every well-formed formula is closed by default. A representative example is P → P, which is always true regardless of the truth value assigned to the atomic proposition P. In predicate logic, examples include ∀x P(x), asserting that the unary predicate P applies to every element in the domain, and ∃y (Q(y) ∧ R(y)), indicating the existence of at least one element satisfying both unary predicates Q and R. Closed formulas are essential in theorem stating and formal deduction within logical systems, as they represent unambiguous assertions that can be proven or refuted independently of variable substitutions, forming the basis for axiomatic theories and completeness results in mathematical logic.

Properties and Analysis

Syntactic Properties

Well-formed formulas (wffs) in both propositional and predicate logic possess key syntactic properties that ensure their structural integrity and unambiguous interpretation. A fundamental property is the uniqueness of the parse tree for each wff, meaning that every valid formula corresponds to exactly one hierarchical structure derived from the formation rules. This uniqueness theorem holds because the inductive definition of wffs—starting from atomic propositions or predicates and building via connectives and quantifiers—guarantees that no two distinct derivations yield the same string, preventing ambiguity in interpretation.

The decidability of well-formedness follows directly from this structure, as there exist effective algorithms to verify whether a given string constitutes a wff. For propositional logic, a parsing algorithm can recursively check for balanced parentheses, proper connective placement, and adherence to formation rules, either constructing the unique parse tree or rejecting invalid inputs. Similar recursive procedures apply in predicate logic, scanning for valid terms, atomic formulas, and quantifier scopes to confirm syntactic correctness. These methods, such as recursive descent parsing, terminate for finite strings and provide a mechanical test for membership in the set of wffs.

Syntactically, distinct wffs remain inequivalent unless their symbol sequences and parse trees are identical, emphasizing strict adherence to parenthesization and operator precedence without assuming associativity or other conventions. For instance, the propositional formulas (P ∧ Q) ∧ R and P ∧ (Q ∧ R) are syntactically distinct, each with its own unique parse tree, despite potential semantic similarities in other contexts. This property underscores the precision of formal syntax in logic, applying equally to open formulas with free variables and closed formulas without them.

Semantic Implications

In first-order logic, the semantic evaluation of well-formed formulas (WFFs) relies on Tarski's framework, where an interpretation consists of a non-empty domain and assignments to predicates, functions, and constants, determining satisfaction or truth relative to that interpretation. For closed WFFs, which contain no free variables, truth is absolute within a given interpretation: a closed WFF φ is true in a model M if M ⊨ φ, meaning the recursive satisfaction conditions hold for all variable assignments (which do not affect the outcome, owing to the absence of free variables). Satisfaction is defined inductively—for atomic predicates via membership in the predicate's extension, for negations and connectives via Boolean operations, and for quantifiers by checking all domain elements. Consequently, closed WFFs express propositions with definite truth values, enabling global assessments of validity across models.

Open WFFs, by contrast, are evaluated relative to a variable assignment that maps free variables to domain elements: an open WFF ψ(x) is satisfied in a model M under an assignment β if M, β ⊨ ψ(x), as when the element β(x) belongs to the extension of the predicate P for ψ(x) = P(x). Satisfaction extends inductively, with quantifiers relativized to modified assignments that vary the bound variable over the domain while fixing the free ones. This relativization underscores how open WFFs represent predicates or properties contingent on specific interpretations of their free variables.

These semantic notions underpin key results in logic, such as Herbrand's theorem, which links the validity of WFFs to propositional tautologies over Herbrand bases, facilitating reductions from first-order to propositional reasoning. For instance, a universal closed WFF ∀x P(x) is true in a model M with domain D if and only if P(a) holds for every a ∈ D, illustrating how WFF semantics enables precise model-theoretic analysis and completeness proofs in which semantic entailment aligns with syntactic derivation.
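Tarski's recursive satisfaction conditions can be sketched as a direct evaluator over a finite model (illustrative only: the nested-tuple encoding of formulas and the model format are my own, and just a few connectives are covered):

```python
# A model is (domain, predicate extensions); an assignment maps
# variable names to domain elements. Formulas are nested tuples:
#   ("pred", name, v1, ...), ("not", f), ("implies", f, g), ("forall", x, f)

def satisfies(model, phi, assignment):
    """Return True iff the model satisfies phi under the assignment."""
    domain, preds = model
    tag = phi[0]
    if tag == "pred":                       # membership in the extension
        name, args = phi[1], phi[2:]
        values = tuple(assignment[v] for v in args)
        return values in preds[name]
    if tag == "not":
        return not satisfies(model, phi[1], assignment)
    if tag == "implies":                    # Boolean operation on subformulas
        return (not satisfies(model, phi[1], assignment)
                or satisfies(model, phi[2], assignment))
    if tag == "forall":                     # check all domain elements
        x, body = phi[1], phi[2]
        return all(satisfies(model, body, {**assignment, x: d})
                   for d in domain)
    raise ValueError(tag)

# Domain {1, 2}; Dog = {1}, Mammal = {1, 2}.
model = ({1, 2}, {"Dog": {(1,)}, "Mammal": {(1,), (2,)}})
sentence = ("forall", "x", ("implies", ("pred", "Dog", "x"),
                            ("pred", "Mammal", "x")))
print(satisfies(model, sentence, {}))  # True: every Dog is a Mammal here
```

Because the sentence is closed, the empty assignment suffices; the quantifier clause supplies every binding for x itself, mirroring the relativized assignments described above.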

Historical and Terminological Context

Origins and Development

The concept of a well-formed formula (wff) emerged from efforts to formalize mathematical reasoning in the late nineteenth century, with Gottlob Frege's Begriffsschrift (1879) laying foundational groundwork by introducing a symbolic notation that rigorously defined the syntax of logical expressions, distinguishing valid constructions from ill-formed ones to avoid ambiguities in proofs. This innovation marked a shift toward precise syntactic rules, enabling the representation of judgments and inferences in a two-dimensional script that prefigured modern formal systems.

Building on Frege's ideas, Alfred North Whitehead and Bertrand Russell's Principia Mathematica (1910–1913) advanced the formalization of wffs within a type-theoretic framework, where expressions were required to adhere to strict formation rules to ensure type consistency and avoid paradoxes such as Russell's. Their work systematized the notion of well-formed expressions in symbolic logic, influencing the development of both propositional and predicate logics by providing a blueprint for syntactic validity in axiomatic systems.

In the 1920s, David Hilbert's epsilon calculus extended predicate logic with quantifier-like operators, incorporating formation rules for wffs that integrated choice functions into the syntax to support his program for the foundations of mathematics. This development emphasized the role of wffs in finitary proofs and consistency investigations. The 1930s saw further milestones with Alonzo Church's lambda calculus, which defined wffs through abstraction and application rules, facilitating recursive function definitions while refining syntactic notions in functional logics. Concurrently, Alfred Tarski's work on truth and satisfaction provided a semantic foundation, linking the syntax of wffs in formal languages to their model-theoretic interpretations and resolving paradoxes in self-referential statements.

Post-World War II advancements in the 1960s, particularly J. A. Robinson's resolution principle, highlighted the practical importance of wff parsing in automated theorem proving, where clauses—specialized wffs—were manipulated to derive contradictions efficiently in resolution-based systems. This refinement underscored the enduring evolution of wff concepts toward computational verification of logical validity.

Variations in Usage

In modal logic, the concept of a well-formed formula (wff) extends the syntax of classical propositional and predicate logics by incorporating modal operators such as necessity (□A) and possibility (◇A), where A is itself a wff, allowing expressions to capture notions of necessity and possibility across possible worlds. Similarly, temporal logics build on this foundation by adding temporal operators like "always in the future" (Gφ) or "eventually" (Fφ), where φ is a wff of propositional logic, to define wffs that model time-dependent properties in sequences of states.

In computer science, particularly formal verification, wffs are adapted to specify constraints in satisfiability modulo theories (SMT) solvers, where formulas must be well-formed in a many-sorted first-order logic, ensuring each expression has a unique sort (type) to support decidable solving over theories like arithmetic or bit-vectors. In programming languages, the term "well-formed" applies to type-safe expressions, where a program is well-formed if it passes static type checking, guaranteeing that well-typed terms do not lead to runtime type errors during execution, as formalized in type safety theorems combining preservation and progress properties.

Terminological variations appear in proof assistants: Coq treats logical formulas as well-typed terms of sort Prop, emphasizing well-formedness through automatic checks for correctness in definitions and proofs, though some literature refers to valid proof steps or contexts as "legal" to denote compliance with the system's rules. In non-classical logics such as intuitionistic logic, wffs follow the same syntactic formation rules as in classical logic—built from atoms, connectives, and quantifiers—but exclude principles like the law of excluded middle (A ∨ ¬A) as a valid theorem, highlighting a semantic rather than purely syntactic variation in their usage and validation.
