Recursive definition
In mathematics and computer science, a recursive definition, or inductive definition, is used to define the elements in a set in terms of other elements in the set (Aczel 1977:740ff). Some examples of recursively definable objects include factorials, natural numbers, Fibonacci numbers, and the Cantor ternary set.
A recursive definition of a function defines values of the function for some inputs in terms of the values of the same function for other (usually smaller) inputs. For example, the factorial function n! is defined by the rules

0! = 1
(n + 1)! = (n + 1) · n!
This definition is valid for each natural number n, because the recursion eventually reaches the base case of 0. The definition may also be thought of as giving a procedure for computing the value of the function n!, starting from n = 0 and proceeding onwards with n = 1, 2, 3 etc.
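Translated directly into code, the two rules become a base case and a recursive call. A minimal Python sketch (the function name is illustrative):

def factorial(n: int) -> int:
    # n! by its recursive definition: 0! = 1 and (n + 1)! = (n + 1) * n!
    if n == 0:  # base case
        return 1
    return n * factorial(n - 1)  # recurse on a smaller input

print([factorial(n) for n in range(6)])  # [1, 1, 2, 6, 24, 120]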
The recursion theorem states that such a definition indeed defines a function that is unique. The proof uses mathematical induction.[1]
An inductive definition of a set describes the elements in a set in terms of other elements in the set. For example, one definition of the set of natural numbers is:
1. 0 is in ℕ.
2. If an element n is in ℕ, then n + 1 is in ℕ.
3. ℕ is the smallest set satisfying (1) and (2).
There are many sets that satisfy (1) and (2) – for example, the set {0, 1, 1.649, 2, 2.649, 3, 3.649, …} satisfies the definition. However, condition (3) specifies the set of natural numbers by removing the sets with extraneous members.
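Condition (3) also has a computational reading: every natural number is reachable from the basis by finitely many applications of clause (2), and nothing else ever appears. A small Python sketch of this closure process (the cutoff is artificial, since the set itself is infinite):

def closure_up_to(limit: int) -> set:
    # Close {0} under n -> n + 1; extraneous members such as 1.649
    # never appear, because nothing outside the two clauses is added.
    members = {0}  # clause (1): the basis
    n = 0
    while n + 1 <= limit:  # clause (2), applied repeatedly
        n += 1
        members.add(n)
    return members

print(sorted(closure_up_to(5)))  # [0, 1, 2, 3, 4, 5]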
Properties of recursively defined functions and sets can often be proved by an induction principle that follows the recursive definition. For example, the definition of the natural numbers presented here directly implies the principle of mathematical induction for natural numbers: if a property holds of the natural number 0 (or 1), and the property holds of n + 1 whenever it holds of n, then the property holds of all natural numbers (Aczel 1977:742).
Form of recursive definitions
Most recursive definitions have two foundations: a base case (basis) and an inductive clause.
The difference between a circular definition and a recursive definition is that a recursive definition must always have base cases, cases that satisfy the definition without being defined in terms of the definition itself, and that all other instances in the inductive clauses must be "smaller" in some sense (i.e., closer to those base cases that terminate the recursion) — a rule also known as "recur only with a simpler case".[2]
In contrast, a circular definition may have no base case, and even may define the value of a function in terms of that value itself — rather than on other values of the function. Such a situation would lead to an infinite regress.
That recursive definitions are valid – meaning that a recursive definition identifies a unique function – is a theorem of set theory known as the recursion theorem, the proof of which is non-trivial.[3] Where the domain of the function is the natural numbers, sufficient conditions for the definition to be valid are that the value of f(0) (i.e., the base case) is given, and that for n > 0, an algorithm is given for determining f(n) in terms of n and f(0), f(1), …, f(n − 1) (i.e., the inductive clause).
More generally, recursive definitions of functions can be made whenever the domain is a well-ordered set, using the principle of transfinite recursion. The formal criteria for what constitutes a valid recursive definition are more complex for the general case. An outline of the general proof and the criteria can be found in James Munkres' Topology. However, a specific case (domain is restricted to the positive integers instead of any well-ordered set) of the general recursive definition will be given below.[4]
Principle of recursive definition
Let A be a set and let a0 be an element of A. If ρ is a function that assigns, to each function f mapping a nonempty section {1, …, n} of the positive integers into A, an element of A, then there exists a unique function h : ℤ⁺ → A such that

h(1) = a0
h(i) = ρ(h|{1, …, i − 1}) for i > 1.
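Computationally, the principle says the pair (a0, ρ) determines h completely: h(1) = a0, and each later value is ρ applied to the restriction of h to the earlier positions. A Python sketch, with rho chosen arbitrarily for illustration:

def define_recursively(a0, rho, n):
    # Build the list [h(1), ..., h(n)] where h(1) = a0 and
    # h(i) = rho(h restricted to {1, ..., i - 1}) for i > 1.
    h = [a0]
    for _ in range(2, n + 1):
        h.append(rho(h))  # h(i) depends only on previously defined values
    return h

# Example: rho sums all earlier values, giving 1, 1, 2, 4, 8, 16, ...
print(define_recursively(1, sum, 6))  # [1, 1, 2, 4, 8, 16]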
Examples of recursive definitions
Elementary functions
Addition is defined recursively based on counting as

0 + a = a,
(1 + n) + a = 1 + (n + a).
Multiplication is defined recursively as

0 · a = 0,
(1 + n) · a = a + n · a.
Exponentiation is defined recursively as

a^0 = 1,
a^(1 + n) = a · a^n.
Binomial coefficients can be defined recursively as

C(a, 0) = 1, C(0, 1 + n) = 0,
C(1 + a, 1 + n) = C(a, n) + C(a, 1 + n).
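Each of these definitions transcribes almost verbatim into recursive code. A Python sketch over non-negative integers (function names are mine, and no attempt is made at efficiency):

def add(n, a):
    # 0 + a = a; (1 + n) + a = 1 + (n + a)
    return a if n == 0 else 1 + add(n - 1, a)

def mul(n, a):
    # 0 · a = 0; (1 + n) · a = a + n · a
    return 0 if n == 0 else a + mul(n - 1, a)

def power(a, n):
    # a^0 = 1; a^(1 + n) = a · a^n
    return 1 if n == 0 else a * power(a, n - 1)

def binom(a, n):
    # C(a, 0) = 1; C(0, 1 + n) = 0; C(1 + a, 1 + n) = C(a, n) + C(a, 1 + n)
    if n == 0:
        return 1
    if a == 0:
        return 0
    return binom(a - 1, n - 1) + binom(a - 1, n)

print(add(2, 3), mul(3, 4), power(2, 5), binom(5, 2))  # 5 12 32 10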
Prime numbers
The set of prime numbers can be defined as the unique set of positive integers satisfying
- 2 is a prime number,
- any other positive integer is a prime number if and only if it is not divisible by any prime number smaller than itself.
The primality of the integer 2 is the base case; checking the primality of any larger integer X by this definition requires knowing the primality of every integer between 2 and X, which is well defined by this definition. That last point can be proved by induction on X, for which it is essential that the second clause says "if and only if"; if it had just said "if", the primality of, for instance, the number 4 would not be clear, and the further application of the second clause would be impossible.
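The definition can be run directly as a (very inefficient) checker that recursively consults the primality of smaller integers, mirroring the induction on X described above. A Python sketch:

def is_prime(x: int) -> bool:
    if x == 2:  # base case
        return True
    if x < 2:
        return False
    # "not divisible by any prime number smaller than itself"
    return all(x % p != 0 for p in range(2, x) if is_prime(p))

print([x for x in range(2, 20) if is_prime(x)])  # [2, 3, 5, 7, 11, 13, 17, 19]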
Non-negative even numbers
The even numbers can be defined as consisting of
- 0 is in the set E of non-negative evens (basis clause),
- For any element x in the set E, x + 2 is in E (inductive clause),
- Nothing is in E unless it is obtained from the basis and inductive clauses (extremal clause).
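The three clauses map onto a generator that emits 0 and then repeatedly adds 2; the extremal clause corresponds to the fact that nothing outside this process is ever produced. A short Python sketch:

from itertools import islice

def evens():
    x = 0  # basis clause: 0 is in E
    while True:
        yield x
        x += 2  # inductive clause: if x is in E, so is x + 2

print(list(islice(evens(), 6)))  # [0, 2, 4, 6, 8, 10]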
Well-formed formula
The notion of a well-formed formula (wff) in propositional logic is defined recursively as the smallest set satisfying the three rules:
- p is a wff if p is a propositional variable.
- ¬ p is a wff if p is a wff.
- (p • q) is a wff if p and q are wffs and • is one of the logical connectives ∨, ∧, →, or ↔.
The definition can be used to determine whether any particular string of symbols is a wff:
- (p ∧ q) is a wff, because the propositional variables p and q are wffs and ∧ is a logical connective.
- ¬ (p ∧ q) is a wff, because (p ∧ q) is a wff.
- (¬ p ∧ ¬ q) is a wff, because ¬ p and ¬ q are wffs and ∧ is a logical connective.
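The three rules also yield a recursive recognizer. A simplified Python sketch that represents formulas as nested tuples rather than raw strings (this encoding, and the fixed variable set, are illustrative choices, not part of the definition):

CONNECTIVES = {"∨", "∧", "→", "↔"}
VARIABLES = {"p", "q", "r"}

def is_wff(phi) -> bool:
    # A wff is a variable, a pair ("¬", wff), or a triple (wff, connective, wff).
    if isinstance(phi, str):  # rule 1: propositional variable
        return phi in VARIABLES
    if len(phi) == 2 and phi[0] == "¬":  # rule 2: negation
        return is_wff(phi[1])
    if len(phi) == 3 and phi[1] in CONNECTIVES:  # rule 3: binary connective
        return is_wff(phi[0]) and is_wff(phi[2])
    return False

print(is_wff(("p", "∧", "q")))  # True
print(is_wff(("¬", ("p", "∧", "q"))))  # True
print(is_wff((("¬", "p"), "∧", ("¬", "q"))))  # True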
Recursive definitions as logic programs
Logic programs can be understood as sets of recursive definitions.[5][6] For example, the recursive definition of the even numbers can be written as the logic program:
even(0).
even(s(s(X))) :- even(X).
Here :- represents if, and s(X) represents the successor of X, namely X+1, as in Peano arithmetic.
The logic programming language Prolog uses backward reasoning to solve goals and answer queries. For example, given the query ?- even(s(s(0))) it produces the answer true. Given the query ?- even(s(0)) it produces the answer false.
The program can be used not only to check whether a query is true, but also to generate answers that are true. For example:
?- even(X).
X = 0
X = s(s(0))
X = s(s(s(s(0))))
X = s(s(s(s(s(s(0))))))
.....
Logic programs significantly extend recursive definitions by including the use of negative conditions, implemented by negation as failure, as in the definition:
even(0).
even(s(X)) :- not(even(X)).
Notes
1. Henkin, Leon (1960). "On Mathematical Induction". The American Mathematical Monthly. 67 (4): 323–338. doi:10.2307/2308975. ISSN 0002-9890. JSTOR 2308975.
2. "All About Recursion". www.cis.upenn.edu. Archived from the original on 2019-08-02. Retrieved 2019-10-24.
3. For a proof of the recursion theorem, see "On Mathematical Induction" (1960) by Leon Henkin.
4. Munkres, James (1975). Topology, a First Course (1st ed.). New Jersey: Prentice-Hall. p. 68, exercises 10 and 12. ISBN 0-13-925495-1.
5. Denecker, M.; Ternovska, E. (2008). "A Logic of Nonmonotone Inductive Definitions". ACM Transactions on Computational Logic. 9 (2): 14:1–14:52.
6. Warren, D. S.; Denecker, M. (2023). "A Better Logical Semantics for Prolog". Prolog: The Next 50 Years. Cham: Springer Nature Switzerland. pp. 82–92.
References
- Halmos, Paul (1960). Naive Set Theory. Van Nostrand. OCLC 802530334.
- Aczel, Peter (1977). "An Introduction to Inductive Definitions". In Barwise, J. (ed.). Handbook of Mathematical Logic. Studies in Logic and the Foundations of Mathematics. Vol. 90. North-Holland. pp. 739–782. doi:10.1016/S0049-237X(08)71120-0. ISBN 0-444-86388-5.
- Hein, James L. (2010). Discrete Structures, Logic, and Computability. Jones & Bartlett. ISBN 978-0-7637-7206-2. OCLC 636352297.
Fundamentals
Definition
A recursive definition provides a way to specify mathematical objects, such as sets, sequences, or functions, by referring to the object itself in a structured manner that begins with simple initial elements and builds complexity through repeated application of rules. This approach allows for the precise characterization of infinite structures without enumerating every element explicitly, relying instead on a foundational simplicity that expands iteratively.[4]

Formally, a recursive definition—also known as an inductive definition—comprises a base clause that identifies the initial or simplest objects belonging to the defined collection, and one or more inductive clauses that describe how to generate additional objects from those already established. The base clause ensures a starting point, while the inductive clauses provide the mechanism for extension, guaranteeing that every object in the collection can be derived through finite applications of these rules. This structure is fundamental in areas like set theory and logic, where it enables the definition of complex entities like the natural numbers or well-formed expressions.[4][3]

Recursive definitions differ from circular definitions, which merely restate the term being defined without providing new information and thus lead to unresolved infinite regress, such as equating a concept solely to itself. In contrast, recursive definitions are acyclic and well-founded: they progress strictly from simpler cases to more complex ones, anchored by the base clause to prevent endless looping and to ensure every instance terminates in a finite derivation. This foundational progression aligns with principles like mathematical induction, which justifies the completeness of such definitions.[5][6]

In general, for a function f on a well-ordered domain, the recursive form specifies f(x) directly when x belongs to a base set, and otherwise sets f(x) = g(x, f(y1), …, f(yk)) for some auxiliary function g and arguments y1, …, yk that are strictly simpler than x under the given ordering. This ensures that computation or construction always reduces to the base case through a finite chain of dependencies, avoiding ambiguity or non-termination.[3][1]
Principle of Recursive Definition

The principle of recursive definition ensures that, in an appropriate mathematical domain, a specification consisting of base clauses and inductive rules uniquely determines an object satisfying those clauses. In set theory, this is formalized by the recursion theorem, which states that for a well-founded and set-like relation R on a class A, and a class function G, there exists a unique class function F such that F(x) = G(x, F ↾ pred_R(x)) for all x in A, where pred_R(x) denotes the set of R-predecessors of x.[7] This theorem applies particularly when the base case specifies values for minimal elements under R, with inductive steps building upon prior values.

The applicability of this principle requires the domain to be equipped with a well-founded relation, ensuring no infinite descending chains exist and thereby preventing circular or ill-defined dependencies in the recursion.[7] A relation R is well-founded if every nonempty subclass of A has an R-minimal element, and it is set-like if the predecessor set pred_R(x) is a set for each x.[7] These conditions, satisfied by structures like the ordinals under membership or arbitrary well-ordered sets, guarantee that the recursive process terminates and yields a coherent definition.

The uniqueness of F follows from transfinite induction: supposing two functions F and F′ both satisfy the recursion equation, their agreement on predecessors implies equality at any minimal point of difference, leading to a contradiction otherwise.[7] For existence, one constructs F via transfinite iteration: define approximations on the R-closed subsets of A by assigning values to minimal elements using G and previously defined portions, proceeding up to the ordinal rank of the relation.[7] Alternatively, in domains forming complete partial orders (such as power sets under inclusion), the recursive definition corresponds to the least fixed point of the monotonic operator induced by G, obtained as the supremum of the iterated applications starting from the bottom element.[8]
Base Case and Inductive Step

In a recursive definition, the base case establishes the foundational values or memberships for the simplest or initial elements of the domain, preventing infinite regression and providing a starting point for the definition. For instance, in defining a function f over the non-negative integers, the base case might specify f(0) = 1, assigning a concrete value to the minimal input without relying on prior computations. This component ensures that the definition is grounded and applicable to the most elementary instances, such as the empty set in structural definitions or the zeroth term in sequences.[8]

The inductive step, also known as the recursive step, provides rules for extending the definition to more complex elements by expressing them in terms of previously defined simpler ones. This step outlines how to construct or compute values for larger inputs using outputs from smaller ones, such as f(n) = n · f(n − 1) for n > 0, where the value at n depends directly on the value at n − 1. It enables the progressive buildup of the entire structure, ensuring that every element in the domain can be reached through finite applications of the rules.[8][9]

The base case and inductive step are interdependent: the base anchors the process to avoid circularity, and the inductive step drives expansion toward greater complexity. Without a solid base, the inductive step cannot initiate computation; conversely, an inductive step disconnected from the base fails to propagate values. This synergy mirrors the principle of recursive definition, where the base halts descent and the step ascends systematically.[10][1]

Common pitfalls in formulating these components include incomplete base cases, which can leave values undefined for certain elements and render the definition inapplicable or lead to non-terminating computations. For example, omitting the base case might cause an attempt to evaluate the function to loop indefinitely without resolution. Similarly, overbroad inductive steps can introduce ambiguity by allowing multiple interpretations or inconsistent derivations for the same element, undermining the definition's uniqueness and clarity.[8][11]
Mathematical Examples

Arithmetic Operations
In the axiomatic foundation of the natural numbers, as established by the Peano axioms, the basic arithmetic operations are defined recursively using the successor function S, which maps each natural number n to its immediate successor S(n) = n + 1. This approach ensures that the operations are constructed from primitive notions without circularity, relying solely on the axioms governing the natural numbers, zero, and induction.[8]

Addition is defined recursively as follows: for any natural numbers a and b,

a + 0 = a,
a + S(b) = S(a + b).

This definition, introduced by Richard Dedekind, specifies the base case, where adding zero leaves the first argument unchanged, and the inductive step, where adding the successor of b is equivalent to taking the successor of the sum with b.[12][8]

Multiplication builds upon addition in a similar recursive manner: for any natural numbers a and b,

a · 0 = 0,
a · S(b) = a · b + a.

Here, the base case sets the product with zero to zero, while the inductive step treats multiplication by a successor as adding a to the product with b, effectively defining multiplication as repeated addition. Dedekind formalized this in his foundational work on the nature of numbers.[12][8]

Exponentiation extends this pattern by iterating multiplication: for any natural numbers a and b,

a^0 = 1,
a^S(b) = a^b · a.

The base case establishes that any number raised to the power of zero is one, and the inductive step computes the power at the successor as the product of a and the power at b, representing exponentiation as repeated multiplication. This recursive structure, also due to Dedekind, underpins the hierarchy of arithmetic operations within Peano arithmetic.[12][8]

These definitions exemplify the principle of recursive definition by providing a base case and an inductive rule that references the operation on smaller inputs, allowing all values to be derived systematically from the Peano axioms without presupposing the operations themselves. Giuseppe Peano incorporated and axiomatized these recursive constructions in his 1889 treatise, solidifying their role in formal arithmetic.[13][8]
Sequences and Sets

Recursive definitions are commonly used to define sequences in mathematics, where each term depends on previous terms. One classic example is the factorial sequence, which counts the number of permutations of n distinct objects. The factorial of a non-negative integer n, denoted n!, is defined recursively as follows: the base case is 0! = 1, and for n ≥ 1, the inductive step is n! = n · (n − 1)!.[2] This definition generates the sequence 1, 1, 2, 6, 24, 120, and so on, building each value from the prior one through multiplication.[14]

Another prominent sequence defined recursively is the Fibonacci sequence, which appears in various natural phenomena such as plant branching patterns. It is specified by the base cases F(0) = 0 and F(1) = 1, with the recursive rule F(n) = F(n − 1) + F(n − 2) for n ≥ 2.[2] This produces the terms 0, 1, 1, 2, 3, 5, 8, 13, 21, and continues indefinitely, where each term is the sum of the two preceding ones.[15] The recursive structure highlights the sequence's self-similar growth properties.[8]

Recursive definitions also apply to sets, generating their members through base elements and rules that incorporate existing members. For the set of even non-negative integers, denoted E, the base case is 0 ∈ E, and the inductive step states that if x ∈ E, then x + 2 ∈ E.[1] This construction ensures E = {0, 2, 4, 6, …}, capturing all non-negative even numbers without including odds, as the recursive addition of 2 preserves parity from the base.[16]

The set of prime numbers, P, can similarly be defined inductively as the smallest set of integers greater than 1 satisfying: 2 ∈ P (base case), and for any integer n > 2, n ∈ P if n is not divisible by any prime p ∈ P with p < n.[17] This recursive characterization builds P by checking divisibility against previously identified primes, ensuring each new member is the smallest integer not factored by smaller ones in the set.[8] Such definitions emphasize the generative nature of recursion in set theory, distinguishing primes from composites through successive verification.
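The Fibonacci definition above transcribes directly into code; memoization is added so that the two overlapping recursive calls do not repeat work (a sketch, with the cache as an implementation choice, not part of the definition):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # F(0) = 0 and F(1) = 1 (base cases); F(n) = F(n - 1) + F(n - 2) for n >= 2.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(9)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21]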
Structural Definitions

Recursive definitions excel at modeling hierarchical or self-similar structures in mathematics, where complex objects are built by nesting simpler instances of the same type, enabling precise descriptions of entities like trees and ordinals that defy linear enumeration. This approach contrasts with sequential definitions by emphasizing composition over iteration, capturing the intrinsic nesting that defines such structures.[18]

A prominent example is the binary tree, a fundamental hierarchical structure used to represent ordered data with branching. Formally, a binary tree is defined recursively as follows:
- It is either empty (null), or
- It consists of a root node containing a data element and two pointers: one to a left binary subtree and one to a right binary subtree, where each subtree is itself a binary tree.
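The two clauses correspond directly to a recursive data type, and functions over it recurse on the same structure. A minimal Python sketch (the names Node and size are mine):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    # A binary tree is either None (empty) or a root with two subtrees.
    value: int
    left: Optional["Node"] = None   # left subtree, itself a binary tree
    right: Optional["Node"] = None  # right subtree, itself a binary tree

def size(tree: Optional[Node]) -> int:
    # Node count, by recursion on the structure of the tree.
    if tree is None:  # base case: the empty tree
        return 0
    return 1 + size(tree.left) + size(tree.right)  # inductive step

t = Node(1, Node(2), Node(3, Node(4)))
print(size(t))  # 4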
Ordinals can likewise be defined recursively, following the von Neumann construction:
- 0 = ∅ (the empty set is an ordinal),
- For any ordinal α, the successor is α + 1 = α ∪ {α}.
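For the finite ordinals, this construction can even be mimicked literally with nested frozensets. A toy Python sketch under the encoding above:

zero = frozenset()  # 0 is the empty set

def successor(alpha: frozenset) -> frozenset:
    # α + 1 = α ∪ {α}
    return alpha | frozenset([alpha])

one = successor(zero)  # {0}
two = successor(one)   # {0, {0}}
print(len(two))  # 2: each finite ordinal contains exactly the smaller ordinals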
Logical Applications
Well-Formed Formulas
In propositional logic, well-formed formulas (wffs) are syntactically valid expressions defined recursively to capture the structure of the language precisely. The base case consists of atomic formulas, which are the propositional variables such as p, q, or r.[22] These atomic elements form the foundation upon which more complex formulas are built through recursive application of logical connectives. The inductive step specifies that if φ and ψ are well-formed formulas, then so are the negation ¬φ, the conjunction (φ ∧ ψ), the disjunction (φ ∨ ψ), and the implication (φ → ψ).[22] No other strings qualify as wffs except those generated by finite applications of these rules.[23] The inclusion of parentheses in compound expressions ensures balanced nesting and resolves ambiguities in operator precedence, allowing recursive construction of arbitrarily complex yet unambiguous syntactic objects.[24]

This recursive approach extends naturally to predicate logic, where the language incorporates quantifiers and relations over objects. Atomic formulas here are formed by applying an n-ary predicate symbol P to exactly n terms t1, …, tn, respecting the predicate's arity to maintain syntactic validity; for example, if R is binary, R(t1, t2) is well-formed.[25] The connectives operate recursively on these and other wffs as in the propositional case.[26] Additionally, if φ is a well-formed formula and x is a variable, then the universal quantification ∀x φ and the existential quantification ∃x φ are well-formed formulas.[25] Parentheses continue to enforce proper recursive nesting around quantified and connected subformulas, ensuring that the entire syntax of predicate logic is defined without ambiguity and supports the formal expression of properties and relations.[25]
Predicate Logic Constructs

In first-order logic, terms are fundamental syntactic objects that denote elements of the domain of discourse, defined recursively to ensure structured and unambiguous formation. The base cases consist of all variables and constant symbols in the language. For an n-ary function symbol f and terms t1, …, tn, the expression f(t1, …, tn) qualifies as a term, allowing complex terms to be built inductively from simpler ones. Nothing beyond these rules counts as a term, preventing ill-formed expressions and guaranteeing that every term can be decomposed into its atomic components by repeatedly undoing the formation steps.[27]

Atomic formulas extend this recursive structure to express basic assertions about terms, forming the atoms from which compound formulas are assembled. Specifically, for an n-ary predicate symbol P and terms t1, …, tn, the expression P(t1, …, tn) is an atomic formula, capturing relational properties among domain elements. Equality provides another primitive: if t1 and t2 are terms, then t1 = t2 is an atomic formula. This recursive reliance on terms ensures atomic formulas precisely delineate the simplest propositions, such as membership or ordering relations in mathematical structures.[27]

Axiomatic systems in first-order logic employ recursive definitions to generate theorems from axioms via iterative application of inference rules, yielding the deductive closure of the axiom set. Modus ponens exemplifies this: given theorems φ and φ → ψ, the rule infers ψ as a new theorem, with the process repeating indefinitely on derived results. Other rules, like generalization, further expand this closure, ensuring all logically entailed statements are systematically producible. This recursive mechanism underpins proof in Hilbert-style axiomatic systems.[28]

The Herbrand universe embodies recursive term generation in a variable-free context, comprising all ground terms derivable from the language's constants and function symbols. Its construction begins with the constant symbols—if none exist, an arbitrary constant is adjoined—and closes under function application: for terms t1, …, tn already in the universe and an n-ary function symbol f, the term f(t1, …, tn) is added. This exhaustively enumerates the "individuals" available for interpretation in Herbrand models, which are central to reducing first-order reasoning to ground-level reasoning in automated theorem proving.[29]
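The closure construction of the Herbrand universe is easy to animate level by level. A Python sketch for a toy signature with one constant a and one unary function symbol f (the signature is illustrative):

from itertools import product

def herbrand(constants, functions, depth):
    # constants: e.g. ["a"]; functions: symbol -> arity, e.g. {"f": 1}.
    # Returns all ground terms of nesting depth <= depth.
    universe = set(constants)  # start with the constant symbols
    for _ in range(depth):
        new_terms = set()
        for fn, arity in functions.items():
            # close under function application over the current universe
            for args in product(sorted(universe), repeat=arity):
                new_terms.add(fn + "(" + ", ".join(args) + ")")
        universe |= new_terms
    return universe

print(sorted(herbrand(["a"], {"f": 1}, 2)))  # ['a', 'f(a)', 'f(f(a))']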
Computational Uses

Recursive Algorithms
In computer science, a recursive function is defined as one that solves a problem by calling itself with a smaller or simpler instance of the same problem, typically until a base case is reached where no further recursion occurs.[30] This approach mirrors the inductive step in mathematical recursion, allowing complex computations to be broken down into repeated applications of simpler operations. For instance, the quicksort algorithm partitions an array around a pivot element and recursively sorts the subarrays on either side, with the base case being an empty or single-element array that requires no sorting.[31]

Tail recursion occurs when the recursive call is the final operation in the function, with no subsequent computation performed on the returned value, enabling compilers or interpreters to optimize the recursion into an iterative loop and avoid stack overflow from deep call chains. This optimization is particularly valuable in functional programming languages like Scheme, where tail calls are guaranteed to be efficient, transforming what would otherwise be linear space usage into constant space.

Mutual recursion involves two or more functions that invoke each other in a cyclic manner to solve interdependent subproblems, such as determining whether a number is even or odd by alternating calls between helper functions. A classic example is the pair of functions isEven(n) and isOdd(n), where isEven(0) returns true as the base case, isEven(n) otherwise calls isOdd(n-1), and isOdd(n) symmetrically calls isEven(n-1), with isOdd(0) returning false; this mutual structure elegantly captures parity without direct iteration.
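Rendering the mutually recursive pair in Python is straightforward; note that Python, unlike Scheme, does not optimize tail calls, so this sketch consumes stack proportional to n:

def is_even(n: int) -> bool:
    if n == 0:  # base case: 0 is even
        return True
    return is_odd(n - 1)  # defer to the partner function on a smaller input

def is_odd(n: int) -> bool:
    if n == 0:  # base case: 0 is not odd
        return False
    return is_even(n - 1)

print(is_even(10), is_odd(7))  # True True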
The time and space complexity of recursive algorithms is analyzed using recursion trees or the master theorem, where time complexity reflects the total work across all recursive levels and space complexity accounts for the maximum depth of the call stack. For linear recursion, such as computing a factorial by successive multiplication, the time complexity is O(n) due to n recursive calls each performing constant work, while space complexity is also O(n) from the stack frames; in contrast, balanced divide-and-conquer recursion like quicksort averages O(n log n) time but can reach O(n^2) in the worst case. These analyses highlight how recursion depth influences resource usage, often favoring tail-recursive variants for efficiency in practice.
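As an illustration of these trade-offs, a minimal quicksort sketch (list-based rather than in-place, for brevity):

def quicksort(xs: list) -> list:
    # Average O(n log n) time; O(n^2) when pivots split the input badly.
    if len(xs) <= 1:  # base case: empty or single-element list
        return xs
    pivot, *rest = xs
    left = [x for x in rest if x < pivot]    # elements below the pivot
    right = [x for x in rest if x >= pivot]  # elements at or above the pivot
    return quicksort(left) + [pivot] + quicksort(right)  # recurse on each side

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]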
Logic Programs
In logic programming, recursive definitions are foundational to expressing computations declaratively through Horn clauses, which are logical formulas consisting of a single positive literal (the head) and a conjunction of literals (the body), enabling the definition of predicates via recursive rules.[32] This approach, pioneered in the development of languages like Prolog, treats programs as sets of such clauses where recursion arises naturally in defining relations or functions, contrasting with imperative programming by focusing on what holds true rather than how to compute it step by step.[32] For instance, the parity of natural numbers can be defined recursively using the clause even(0). as a base case and even(s(X)) :- odd(X). along with odd(s(X)) :- even(X)., where s denotes the successor function, allowing the system to infer evenness or oddness by unfolding the recursion.[33]
Definite clause grammars (DCGs) extend this recursive framework for language parsing and analysis, representing context-free grammars as Horn clauses whose hidden arguments track the portion of the input remaining to be consumed.[34] In a DCG, non-terminals are defined recursively; for example, a grammar for arithmetic expressions might include a rule such as expr(X + Z) --> term(X), ['+'], expr(Z). together with base rules for term, enabling the parser to build parse trees by recursively descending into substructures while consuming the input.[34] This recursive structure directly mirrors the inductive definitions of formal languages, facilitating both recognition and generation tasks in a unified logical setting.[34]
The evaluation of these recursive definitions in logic programs relies on SLD (Selective Linear Definite clause) resolution, an inference mechanism that systematically derives facts from clauses by selecting literals for resolution in a linear fashion, ensuring completeness for Horn clause programs under fair computation strategies.[35] SLD resolution unfolds recursive predicates by repeatedly applying rules until base cases are reached or failure occurs, with backtracking to explore alternatives, thus computing the least Herbrand model of the program.[33] A classic example is the ancestor relation, defined by the base clause ancestor(X, Y) :- parent(X, Y). and the recursive clause ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z)., where querying ancestor(A, B) triggers SLD resolution to traverse the family tree recursively until terminating at direct parents.[32] This mechanism guarantees that recursive definitions are both sound and complete in their declarative interpretation, provided the recursion is well-founded to avoid infinite loops.[35]