General recursive function
from Wikipedia

In mathematical logic and computer science, a general recursive function, partial recursive function, or μ-recursive function is a partial function from natural numbers to natural numbers that is "computable" in an intuitive sense – as well as in a formal one. If the function is total, it is also called a total recursive function (sometimes shortened to recursive function).[1] In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines[2][4] (this is one of the theorems that supports the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every total recursive function is a primitive recursive function—the most famous example is the Ackermann function.

Other equivalent classes of functions are the functions of lambda calculus and the functions that can be computed by Markov algorithms.

The subset of all total recursive functions with values in {0,1} is known in computational complexity theory as the complexity class R.

Definition


The μ-recursive functions (or general recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number. They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and the minimization operator μ.

The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimisation) is the class of primitive recursive functions. While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimisation of the successor function is undefined. The primitive recursive functions are a subset of the total recursive functions, which are a subset of the partial recursive functions. For example, the Ackermann function can be proven to be total recursive, and to be non-primitive.

Primitive or "basic" functions:

  1. Constant functions $C_n^k$: For each natural number $n$ and every $k$:
     $C_n^k(x_1, \ldots, x_k) \;\overset{\text{def}}{=}\; n$
     Alternative definitions use instead a zero function as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator.
  2. Successor function $S$: $S(x) \;\overset{\text{def}}{=}\; x + 1$
  3. Projection function $P_i^k$ (also called the identity function): For all natural numbers $i, k$ such that $1 \le i \le k$:
     $P_i^k(x_1, \ldots, x_k) \;\overset{\text{def}}{=}\; x_i$

Operators (the domain of a function defined by an operator is the set of the values of the arguments such that every function application that must be done during the computation provides a well-defined result):

  1. Composition operator $\circ$ (also called the substitution operator): Given an m-ary function $h(x_1, \ldots, x_m)$ and m k-ary functions $g_1(x_1, \ldots, x_k), \ldots, g_m(x_1, \ldots, x_k)$:
     $h \circ (g_1, \ldots, g_m) \;\overset{\text{def}}{=}\; f, \quad \text{where} \quad f(x_1, \ldots, x_k) = h(g_1(x_1, \ldots, x_k), \ldots, g_m(x_1, \ldots, x_k)).$
     This means that $f(x_1, \ldots, x_k)$ is defined only if $g_1(x_1, \ldots, x_k), \ldots, g_m(x_1, \ldots, x_k)$, and $h(g_1(x_1, \ldots, x_k), \ldots, g_m(x_1, \ldots, x_k))$ are all defined.
  2. Primitive recursion operator ρ: Given the k-ary function $g(x_1, \ldots, x_k)$ and the (k+2)-ary function $h(y, z, x_1, \ldots, x_k)$:
     $\rho(g, h) \;\overset{\text{def}}{=}\; f$, where the (k+1)-ary function $f$ is given by
     $f(0, x_1, \ldots, x_k) = g(x_1, \ldots, x_k)$
     $f(S(y), x_1, \ldots, x_k) = h(y, f(y, x_1, \ldots, x_k), x_1, \ldots, x_k).$
     This means that $f(y, x_1, \ldots, x_k)$ is defined only if $g(x_1, \ldots, x_k)$ and $h(z, f(z, x_1, \ldots, x_k), x_1, \ldots, x_k)$ are defined for all $z < y$.
  3. Minimization operator μ: Given a (k+1)-ary function $f(y, x_1, \ldots, x_k)$, the k-ary function $\mu(f)$ is defined by:
     $\mu(f)(x_1, \ldots, x_k) = z \;\iff\; f(z, x_1, \ldots, x_k) = 0 \text{ and } f(i, x_1, \ldots, x_k) > 0 \text{ for each } i < z.$

Intuitively, minimisation seeks—beginning the search from 0 and proceeding upwards—the smallest argument that causes the function to return zero; if there is no such argument, or if one encounters an argument for which f is not defined, then the search never terminates, and $\mu(f)$ is not defined for the argument $(x_1, \ldots, x_k)$.
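The three operators can be mirrored directly in executable code. The following Python sketch (the names are my own, not standard notation or a library API) models the closure operators with ordinary functions on natural numbers; an endless while-loop stands in for "undefined":

```python
def compose(h, *gs):
    """Composition: f(x1..xk) = h(g1(x1..xk), ..., gm(x1..xk))."""
    return lambda *xs: h(*(g(*xs) for g in gs))

def prim_rec(g, h):
    """Primitive recursion: f(0, x..) = g(x..); f(y+1, x..) = h(y, f(y, x..), x..)."""
    def f(y, *xs):
        acc = g(*xs)
        for i in range(y):
            acc = h(i, acc, *xs)
        return acc
    return f

def mu(f):
    """Minimization: least z with f(z, x..) == 0; loops forever if none exists."""
    def g(*xs):
        z = 0
        while f(z, *xs) != 0:
            z += 1
        return z
    return g

S = lambda x: x + 1                         # successor
P = lambda i: (lambda *xs: xs[i - 1])       # projection onto the i-th argument (1-based)

# add(y, x) = y + x via primitive recursion: add(0, x) = x, add(y+1, x) = S(add(y, x))
add = prim_rec(P(1), lambda y, acc, x: S(acc))
assert add(3, 4) == 7
```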

While some textbooks use the μ-operator as defined here,[5][6] others[7][8] demand that the μ-operator be applied to total functions f only. Although this restricts the μ-operator as compared to the definition given here, the class of μ-recursive functions remains the same, which follows from Kleene's Normal Form Theorem (see below).[5][6] The only difference is that it becomes undecidable whether a specific function definition defines a μ-recursive function, as it is undecidable whether a computable (i.e. μ-recursive) function is total.[7]

The strong equality relation $\simeq$ can be used to compare partial μ-recursive functions. This is defined for all partial functions f and g so that

$f(x_1, \ldots, x_k) \simeq g(x_1, \ldots, x_k)$

holds if and only if for any choice of arguments either both functions are defined and their values are equal or both functions are undefined.

Examples


Examples not involving the minimization operator can be found at Primitive recursive function#Examples.

The following examples are intended just to demonstrate the use of the minimization operator; they could also be defined without it, albeit in a more complicated way, since they are all primitive recursive.

  • The integer square root of x can be defined as the least z such that $(z+1)^2 > x$. Using the minimization operator, a general recursive definition is $\operatorname{isqrt}(x) = \mu z\,[\operatorname{Not}(\operatorname{Gt}(\operatorname{Mul}(S(z), S(z)), x))]$, where Not, Gt, and Mul are logical negation, greater-than, and multiplication,[9] respectively, and S is the successor function. In fact, $\operatorname{Not}(\operatorname{Gt}(\operatorname{Mul}(S(z), S(z)), x))$ is 0 if, and only if, $(z+1) \cdot (z+1) > x$ holds. Hence $\operatorname{isqrt}(x)$ is the least z such that $(z+1) \cdot (z+1) > x$ holds. The negation junctor Not is needed since Gt encodes truth by 1, while μ searches for 0. A short executable sketch of this definition is given below.
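As a quick illustration, the same definition can be run directly in Python; the mu helper below is my own stand-in for the minimization operator, and the predicate encodes Not(Gt(Mul(S(z), S(z)), x)) by returning 0 exactly when (z+1)² > x:

```python
def mu(f):
    """Least z (searching upward from 0) with f(z, *args) == 0."""
    def g(*args):
        z = 0
        while f(z, *args) != 0:
            z += 1
        return z
    return g

# The predicate returns 0 ("true" for the mu-search) exactly when (z+1)^2 > x.
isqrt = mu(lambda z, x: 0 if (z + 1) * (z + 1) > x else 1)

assert [isqrt(x) for x in range(10)] == [0, 1, 1, 1, 2, 2, 2, 2, 2, 3]
```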

The following examples define general recursive functions that are not primitive recursive; hence they cannot avoid using the minimization operator.

Total recursive function


A general recursive function is called a total recursive function if it is defined for every input, or, equivalently, if it can be computed by a total Turing machine. There is no way to computably tell whether a given general recursive function is total; see the halting problem.

Equivalence with other models of computability


In the equivalence of models of computability, a parallel is drawn between Turing machines that do not terminate for certain inputs and an undefined result for that input in the corresponding partial recursive function. The unbounded search operator is not definable by the rules of primitive recursion as those do not provide a mechanism for "infinite loops" (undefined values).

Normal form theorem


A normal form theorem due to Kleene says that for each k there are primitive recursive functions $U(y)$ and $T(y, e, x_1, \ldots, x_k)$ such that for any μ-recursive function $f$ with k free variables there is an e such that

$f(x_1, \ldots, x_k) \simeq U(\mu y\, T(y, e, x_1, \ldots, x_k))$.

The number e is called an index or Gödel number for the function f.[10]: 52–53  A consequence of this result is that any μ-recursive function can be defined using a single instance of the μ operator applied to a (total) primitive recursive function.
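The structure of the normal form can be illustrated with a toy model (not Kleene's actual coding): below, a step-bounded simulator plays the role of the T-predicate and an extraction function plays the role of U, so that φ_e(x) is recovered by applying U to the least witness found by an unbounded μ-search. The program table and step counts are invented for this sketch, and unlike the real theorem this U also receives e and x for simplicity.

```python
def simulate(e, x, budget):
    """Toy stand-in for a universal interpreter: run 'program' e on input x
    for at most `budget` steps; return (halted, output)."""
    if e == 0:                       # program 0: successor, one step per increment
        needed, value = x + 1, x + 1
    elif e == 1:                     # program 1: doubling, one step per increment
        needed, value = 2 * x, 2 * x
    else:                            # any other index: never halts
        return False, None
    return (budget >= needed), (value if budget >= needed else None)

def T(e, x, y):
    """Toy T-predicate: y witnesses a halting computation of e on x."""
    return simulate(e, x, y)[0]

def U(e, x, y):
    """Toy output extraction from the witnessed computation."""
    return simulate(e, x, y)[1]

def phi(e, x):
    """phi_e(x) = U(mu y . T(e, x, y)); diverges if e never halts on x."""
    y = 0
    while not T(e, x, y):
        y += 1
    return U(e, x, y)

assert phi(0, 4) == 5 and phi(1, 4) == 8
```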

Minsky observes that the $U$ defined above is in essence the μ-recursive equivalent of the universal Turing machine:

To construct U is to write down the definition of a general-recursive function U(n, x) that correctly interprets the number n and computes the appropriate function of x. To construct U directly would involve essentially the same amount of effort, and essentially the same ideas, as we have invested in constructing the universal Turing machine.[11]

Symbolism


A number of different symbolisms are used in the literature. An advantage of using a symbolism is that a derivation of a function obtained by "nesting" the operators one inside the other is easier to write in a compact form. In the following, the string of parameters x1, ..., xn is abbreviated as x:

  • Constant function: Kleene uses "$C^n_q(\mathbf{x}) = q$" and Boolos–Burgess–Jeffrey (2002) (B-B-J) use the abbreviation "$\operatorname{const}_n(\mathbf{x}) = n$":
    e.g. $C^7_{13}(r, s, t, u, v, w, x) = 13$
    e.g. $\operatorname{const}_{13}(r, s, t, u, v, w, x) = 13$
  • Successor function: Kleene uses x' and S for "Successor". As "successor" is considered to be primitive, most texts use the apostrophe as follows:
S(a) = a + 1 =def a′, where 1 =def 0′, 2 =def 0′′, etc.
  • Identity function: Kleene (1952) uses "$U^n_i$" to indicate the identity function over the variables $x_i$; B-B-J use the identity function $\operatorname{id}^n_i$ over the variables $x_1$ to $x_n$:
    $U^n_i(\mathbf{x}) = \operatorname{id}^n_i(\mathbf{x}) = x_i$
    e.g. $U^7_3 = \operatorname{id}^7_3(r, s, t, u, v, w, x) = t$
  • Composition (Substitution) operator: Kleene uses a bold-face $\mathbf{S}^m_n$ (not to be confused with his S for "successor"!). The superscript "m" refers to the mth function "$f_m$", whereas the subscript "n" refers to the nth variable "$x_n$":
    If we are given h(x) = g(f1(x), ..., fm(x)), then
    h(x) = $\mathbf{S}^m_n(g, f_1, \ldots, f_m)$
    In a similar manner, but without the sub- and superscripts, B-B-J write:
    h(x) = Cn[g, f1, ..., fm](x)
  • Primitive Recursion: Kleene uses the symbol "$R^n(\text{base step}, \text{induction step})$", where n indicates the number of variables; B-B-J use "$\operatorname{Pr}(\text{base step}, \text{induction step})(\mathbf{x})$". Given:
  • base step: h( 0, x ) = f( x ), and
  • induction step: h( y+1, x ) = g( y, h(y, x), x )
Example: primitive recursion definition of a + b:
  • base step: f( 0, a ) = a = $U^1_1(a)$
  • induction step: f( b′, a ) = ( f( b, a ) )′ = g( b, f( b, a ), a ) = g( b, c, a ) = c′ = $S(U^3_2( b, c, a ))$
$R^2\{\, U^1_1(a),\ S[\, U^3_2( b, c, a ) \,] \,\}$
$\operatorname{Pr}\{\, U^1_1(a),\ S[\, U^3_2( b, c, a ) \,] \,\}$

Example: Kleene gives an example of how to perform the recursive derivation of f(b, a) = b + a (notice the reversal of variables a and b). He starts with three initial functions:

  1. $S(a) = a'$
  2. $U^1_1(a) = a$
  3. $U^3_2( b, c, a ) = c$
  4. $g(b, c, a) = S(U^3_2( b, c, a )) = c'$
  5. base step: $h( 0, a ) = U^1_1(a)$
     induction step: $h( b', a ) = g( b, h( b, a ), a )$

He arrives at:

$a + b = R^2[\, U^1_1,\ S^3_1(S, U^3_2) \,]$
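Kleene's derivation translates almost symbol-for-symbol into code. The following Python sketch (combinator names chosen to echo Kleene's symbols; not a standard library) reproduces a + b = R²[U¹₁, S³₁(S, U³₂)]:

```python
S = lambda a: a + 1                        # successor: S(a) = a'
U11 = lambda a: a                          # U^1_1(a) = a
U32 = lambda b, c, a: c                    # U^3_2(b, c, a) = c

def S31(g, h):                             # substitution S^3_1: feed the 3-ary h into the 1-ary g
    return lambda b, c, a: g(h(b, c, a))

def R2(base, step):                        # primitive recursion R^2 on the first variable
    def f(b, a):
        acc = base(a)                      # h(0, a) = U^1_1(a)
        for y in range(b):                 # h(y', a) = g(y, h(y, a), a)
            acc = step(y, acc, a)
        return acc
    return f

g = S31(S, U32)                            # g(b, c, a) = S(U^3_2(b, c, a)) = c'
add = R2(U11, g)                           # f(b, a) = b + a
assert add(3, 4) == 7
```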

from Grokipedia
In computability theory, a general recursive function (also called a μ-recursive function or partial recursive function) is a partial function from the natural numbers to the natural numbers that can be generated from the basic functions of zero, successor, and projection by means of the operations of composition, primitive recursion, and minimization (the μ-operator). This class of functions provides a precise formalization of algorithmic computability, encompassing all functions that can be effectively calculated by a finite mechanical procedure. Introduced by Kurt Gödel in his 1934 lectures on undecidable propositions, the concept was formalized and expanded by Alonzo Church and Stephen C. Kleene in the mid-1930s as part of efforts to resolve foundational questions in mathematics, such as the Entscheidungsproblem posed by David Hilbert. The general recursive functions are foundational to the Church-Turing thesis, which conjectures that they coincide exactly with the intuitively computable functions, a hypothesis first proposed by Church in 1936 and independently supported by Alan Turing's analysis of mechanical computation the same year.

Kleene's 1936 paper provided a rigorous schema for their construction, proving closure under the defining operations and establishing their role in representing arithmetic predicates. Key properties include the existence of a universal general recursive function capable of simulating any other in the class (Kleene's normal form theorem) and their equivalence to functions computable by Turing machines, as demonstrated by Turing in 1937. These functions distinguish computable from non-computable problems, such as the Entscheidungsproblem, which Gödel and Church showed to be undecidable in formal systems. Historically, the development bridged earlier notions of primitive recursive functions (limited to total functions, as in Skolem 1923 and Gödel 1931) with broader models of partial computability, influencing modern computability theory.

Background and Prerequisites

Historical Context

The concept of general recursive functions emerged in the 1930s amid efforts to formalize the notion of effective calculability in mathematical logic, particularly in response to David Hilbert's program for securing the foundations of mathematics, which included the Entscheidungsproblem, a challenge to determine whether there exists an algorithm for deciding the validity of any mathematical statement in first-order logic. Kurt Gödel played a pivotal role, initially introducing primitive recursive functions in his 1931 paper on the incompleteness theorems, where he used them to arithmetize syntax and demonstrate limitations in formal systems. These functions served as a precursor, capturing a broad class of computable operations but proving insufficient for all effectively calculable ones, such as the Ackermann function. The extension to general recursive functions was spurred by correspondence between Gödel and Jacques Herbrand in 1931, where Herbrand suggested incorporating a minimization operator to handle partial functions, allowing definitions that search for the least value satisfying a condition. Gödel adopted this idea, terming the resulting class "general recursive" in his 1934 lectures at the Institute for Advanced Study, though the formal publication of the definition came later through Stephen Kleene's 1936 paper. This development addressed gaps in primitive recursion by enabling the computation of all total recursive functions while also accommodating partial ones, providing a precise model of mechanical procedures.

Alonzo Church contributed significantly during this period, developing the lambda calculus in 1932–1933 as an alternative formalization of functions and computation. In 1936, Church proved the Entscheidungsproblem undecidable using lambda-definability, proposing his thesis that effectively calculable functions coincide with lambda-definable ones, a claim soon extended to include general recursive functions after equivalence proofs. This negative resolution, corroborated by Alan Turing's 1936 work on computable numbers, solidified the Church-Turing thesis, establishing general recursive functions as a foundational benchmark for computability and underscoring the limits of algorithmic decidability in logic.

Primitive Recursive Functions

Primitive recursive functions form a fundamental class in computability theory, introduced by Kurt Gödel in his 1931 paper on formally undecidable propositions. These functions are defined as the smallest class of total functions from natural numbers to natural numbers that includes certain basic functions and is closed under composition and primitive recursion. The basic functions are the constant zero function $Z(\mathbf{x}) = 0$, which ignores any input; the successor function $S(x) = x + 1$; and the projection functions $\pi_i^k(x_1, \dots, x_k) = x_i$ for $1 \leq i \leq k$, which select the $i$-th argument.

Composition allows combining previously defined primitive recursive functions: if $g$ is $j$-ary and $h_1, \dots, h_j$ are $k$-ary primitive recursive functions, then the function $f(\mathbf{x}) = g(h_1(\mathbf{x}), \dots, h_j(\mathbf{x}))$ is also primitive recursive. Primitive recursion provides the schema for defining new functions from existing ones. Specifically, a $(k+1)$-ary function $h(n, y_1, \dots, y_k)$ is primitive recursive if there exist primitive recursive functions $g(y_1, \dots, y_k)$ ($k$-ary) and $f(n, z, y_1, \dots, y_k)$ ($(k+2)$-ary) such that:

\begin{align*} h(0, y_1, \dots, y_k) &= g(y_1, \dots, y_k), \\ h(n+1, y_1, \dots, y_k) &= f(n, h(n, y_1, \dots, y_k), y_1, \dots, y_k). \end{align*}

This recursive step builds the function iteratively from the base case at 0 up to any input $n$.

Common arithmetic operations illustrate primitive recursive functions. Addition $+(m, n)$ is defined by $+(0, n) = n$ and $+(m+1, n) = S(+(m, n))$, using the successor $S$ and recursion. Multiplication $\times(m, n)$ follows with $\times(0, n) = 0$ and $\times(m+1, n) = +(\times(m, n), n)$, composing addition and recursion. Exponentiation $\exp(m, n)$ uses $\exp(0, n) = 1$ (via a constant function) and $\exp(m+1, n) = \times(\exp(m, n), n)$, again via composition and recursion.

All primitive recursive functions are total, meaning they are defined and yield a unique output for every input of natural numbers. This totality is established by induction on the definition of the function: base functions are total by inspection, compositions of total functions are total, and primitive recursion defines the value at each step uniquely from prior total values, ensuring coverage for all inputs. They are also computable, as each can be evaluated algorithmically by unfolding the definitions a finite number of times until reaching base cases, corresponding to a step-by-step procedure executable by a Turing machine.
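The schema above maps directly onto ordinary code. The sketch below (helper names are mine, not standard) iterates the step function exactly as the two recursion equations prescribe, and then stacks addition, multiplication, and exponentiation on top of one another:

```python
def primitive_recursion(g, f):
    """Return h with h(0, *ys) = g(*ys) and h(n+1, *ys) = f(n, h(n, *ys), *ys)."""
    def h(n, *ys):
        acc = g(*ys)
        for i in range(n):
            acc = f(i, acc, *ys)
        return acc
    return h

successor = lambda x: x + 1

# add(m, n): add(0, n) = n, add(m+1, n) = S(add(m, n))
add = primitive_recursion(lambda n: n, lambda m, acc, n: successor(acc))
# mul(m, n): mul(0, n) = 0, mul(m+1, n) = add(mul(m, n), n)
mul = primitive_recursion(lambda n: 0, lambda m, acc, n: add(acc, n))
# exp(m, n): exp(0, n) = 1, exp(m+1, n) = mul(exp(m, n), n)  (computes n**m)
exp = primitive_recursion(lambda n: 1, lambda m, acc, n: mul(acc, n))

assert (add(3, 4), mul(3, 4), exp(3, 4)) == (7, 12, 64)
```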

Definition and Construction

Base Functions and Composition

The foundation of general recursive functions, as formalized in computability theory, begins with a set of basic functions that serve as the initial building blocks. These base functions are simple and intuitive, providing the elementary operations from which more complex functions are constructed through closure operations. The three primary base functions are the constant zero function, the successor function, and the projection functions.

The constant zero function, often denoted as $Z$, is a function of arbitrary arity that returns 0 for any input tuple $(x_1, \dots, x_n)$, where $n \geq 0$. Formally, $Z(x_1, \dots, x_n) = 0$. This function introduces the constant value 0 into the system, essential for initializing computations and representing absence or null values in recursive definitions. It was introduced by Kurt Gödel in his 1931 work on formal systems, where it forms one of the foundational elements for defining recursive predicates.

The successor function, denoted $S$, is a unary function that increments its argument by 1: $S(x) = x + 1$. This operation allows the generation of all natural numbers starting from zero, enabling the construction of arithmetic and counting mechanisms within recursive functions. Like the zero function, it originates from Gödel's framework, providing the means to build positive integers iteratively.

Projection functions, denoted $P_i^k$ for $1 \leq i \leq k$, are k-ary functions that select and return the i-th argument from an input tuple $(x_1, \dots, x_k)$: $P_i^k(x_1, \dots, x_k) = x_i$. These functions facilitate the manipulation and referencing of specific variables in multi-argument expressions, crucial for composing functions that depend on subsets of inputs. Gödel incorporated projections in his 1931 framework to handle variable selection in his recursive definitions, ensuring flexibility in function arguments.

A key operation for building more complex functions from these bases is composition, which combines functions by substituting outputs of inner functions as inputs to an outer function. Given an m-ary function $g(y_1, \dots, y_m)$ and m n-ary functions $h_1, \dots, h_m$, the composition yields an n-ary function $f$ defined as $f(x_1, \dots, x_n) = g\bigl(h_1(x_1, \dots, x_n), \dots, h_m(x_1, \dots, x_n)\bigr)$. This operation, without involving recursion, allows the nesting of computations, enabling the creation of functions such as fixed increments built from repeated successor applications. Composition was part of Gödel's 1931 schema for recursive functions and later formalized by Stephen Kleene in his 1936 treatment of general recursive functions, where it ensures closure under substitution.

The class of primitive recursive functions, a subclass of the general recursive functions, is precisely the smallest class containing the base functions and closed under composition and primitive recursion. This closure property guarantees that any function obtained by repeated compositions of base functions remains within the primitive recursive class, providing a robust foundation for computable arithmetic operations. Kleene's 1936 analysis confirmed this closure, linking it to the broader theory of computability.
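A compact sketch of these base functions and the composition operator in Python follows (the names Z, S, P, and compose are my own); the final lines build small derived functions purely by composition:

```python
def Z(*xs):
    """Constant zero function of arbitrary arity."""
    return 0

def S(x):
    """Successor function: S(x) = x + 1."""
    return x + 1

def P(i):
    """Projection P_i^k: return the i-th of the supplied arguments (1-based)."""
    return lambda *xs: xs[i - 1]

def compose(g, *hs):
    """f(x1..xn) = g(h1(x1..xn), ..., hm(x1..xn))."""
    return lambda *xs: g(*(h(*xs) for h in hs))

# A binary function returning (second argument) + 2, by composing S twice with a projection.
plus_two_of_second = compose(S, compose(S, P(2)))
assert plus_two_of_second(10, 5) == 7

# The 3-ary constant function with value 1, built from Z and S alone.
one_of_three = compose(S, Z)
assert one_of_three(8, 9, 10) == 1
```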

Primitive Recursion and Minimization

Primitive recursion is a scheme for defining functions by specifying their value at 0 and how to compute the value at the successor of an argument from previous values. Given a k-ary base function $f$ and a (k+2)-ary step function $g$, the (k+1)-ary function $h$ is defined by primitive recursion as:

$h(x_1, \dots, x_k, 0) = f(x_1, \dots, x_k),$
$h(x_1, \dots, x_k, y+1) = g(x_1, \dots, x_k, y, h(x_1, \dots, x_k, y)).$

This allows the construction of total functions like addition and multiplication from the base functions.

General recursive functions, also known as μ-recursive functions, extend the class of primitive recursive functions by incorporating the minimization operator, while remaining closed under composition and primitive recursion. This construction allows for the definition of a broader set of computable functions, including those that involve unbounded search. The foundational work establishing this closure is due to Stephen Kleene, who formalized the class in terms of effective procedures on natural numbers.

The minimization operator, denoted μy [f(x₁, ..., xₙ, y) = 0], applied to a function f, yields a new function that returns the smallest y such that f(x₁, ..., xₙ, y) equals zero, for given inputs x₁ through xₙ. If no such y exists, the resulting function is undefined for those inputs. This operator formalizes an unbounded search, enabling the definition of functions that require searching over potentially infinite domains until a condition is met. Unlike primitive recursive functions, which are total and always defined for all inputs, the inclusion of minimization introduces partial functions that may diverge or remain undefined for certain arguments. This partiality arises precisely when the search for y fails to terminate.

Formally, a function φ is general recursive if it belongs to the smallest class of functions containing the base functions (such as the zero function, successor function, and projection functions) and closed under composition, primitive recursion, and the μ-operator, obtained through finitely many applications of these operations. This inductive definition ensures that general recursive functions capture all effectively computable functions on the natural numbers.
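The partiality introduced by minimization is easy to see in a small sketch (assuming a plain hand-written unbounded-search helper): the function below is defined only for x ≤ 5, and for larger x the search simply never terminates.

```python
def mu(f, *xs):
    """Unbounded search: least y with f(*xs, y) == 0 (may never return)."""
    y = 0
    while f(*xs, y) != 0:
        y += 1
    return y

def least_complement(x):
    """f(x) = mu y [x + y == 5]: defined only when x <= 5."""
    return mu(lambda x, y: 0 if x + y == 5 else 1, x)

assert least_complement(2) == 3
# least_complement(7) would loop forever: no y satisfies 7 + y == 5,
# so the function is undefined (partial) at 7.
```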

Classifications and Properties

Total Recursive Functions

A total recursive function, also known as a recursive function in the strict sense, is a general recursive function that is defined for every input in its domain, mapping from tuples of natural numbers to natural numbers without any undefined values. This totality ensures that the computation always terminates and produces an output for all possible arguments, distinguishing it from partial cases where minimization might lead to non-halting behaviors. All primitive recursive functions are total recursive, as they are constructed via composition and primitive recursion, which guarantee defined outputs for all inputs; however, the converse does not hold, since the class of total recursive functions is strictly larger. A classic example is the Ackermann-Péter function, which grows faster than any primitive recursive function and thus lies outside the primitive recursive class, yet remains total recursive due to its computability via general recursion schemes. Total recursive functions coincide with the class of functions computable by Turing machines that halt on every input, embodying the total computable functions in the sense of Church's thesis. This equivalence underscores their role as the effectively calculable total functions, closed under composition, primitive recursion, and certain forms of minimization that preserve totality.

Partial Recursive Functions

Partial recursive functions, also referred to as μ-recursive functions, form the class of all computable partial functions on the natural numbers. They are obtained as the smallest class of partial functions containing the basic functions—such as the zero function, successor function, and projection functions—and closed under the operations of composition and primitive recursion, with the addition of the unbounded minimization operator (μ-operator). The μ-operator applied to a total function g allows defining a new partial function $f(\vec{x}) = \mu y\,[g(\vec{x}, y) = 0]$, which is defined for input $\vec{x}$ if there exists a natural number y such that $g(\vec{x}, y) = 0$, and in that case takes the value of the least such y; otherwise, $f(\vec{x})$ is undefined. This operator introduces the possibility of non-termination, distinguishing partial recursive functions from the strictly total primitive recursive functions.

The domain of a partial recursive function $\varphi: \mathbb{N}^k \rightharpoonup \mathbb{N}$ is the set $\operatorname{dom}(\varphi) = \{\vec{x} \in \mathbb{N}^k \mid \varphi(\vec{x}) \downarrow\}$, where the notation ↓ indicates that the computation halts and yields a defined value. Unlike total functions, which are defined everywhere, the domains of partial recursive functions may be proper subsets of $\mathbb{N}^k$, corresponding precisely to the recursively enumerable (r.e.) subsets of $\mathbb{N}^k$. For any partial recursive function, membership in its domain can be tested by searching for a terminating computation, though this search may not halt for inputs outside the domain. The total recursive functions form the subclass of partial recursive functions whose domains are the entire $\mathbb{N}^k$.

A key property of partial recursive functions is that their domains admit effective enumerations via total recursive means. Specifically, for every partial recursive function $\varphi_e$ (where e is an index in a standard enumeration of the class), there exists a primitive recursive predicate $T_e(\vec{x}, z)$, known as an instance of Kleene's T-predicate, such that $\vec{x} \in \operatorname{dom}(\varphi_e) \iff \exists z\, T_e(\vec{x}, z)$. Here, z encodes a valid halting computation of $\varphi_e$ on $\vec{x}$, and a primitive recursive extraction function $U(z)$ recovers the output value when defined. The universal T-predicate $T(e, \vec{x}, z)$ is itself primitive recursive and applies uniformly across all indices e, enabling the representation of any partial recursive computation in normal form: $\varphi_e(\vec{x}) \simeq U(\mu z\, T(e, \vec{x}, z))$. This structure underscores the computability of domains while highlighting the undecidability of the halting problem for general inputs.

Examples

Primitive Recursive Examples

Primitive recursive functions are constructed from basic functions using composition and primitive recursion, without the need for minimization, allowing for the definition of familiar arithmetic operations on natural numbers. These examples demonstrate how fundamental operations like addition, multiplication, and exponentiation can be built step by step, showcasing the closure properties of the primitive recursive class.

The addition function, denoted as $\operatorname{add}(x, y)$, is primitive recursive and serves as a foundational example. It is defined by the recursion scheme with base case $\operatorname{add}(x, 0) = x$ and recursive step $\operatorname{add}(x, y+1) = S(\operatorname{add}(x, y))$, where $S$ is the successor function, one of the initial functions. This construction iteratively applies the successor $y$ times to $x$, ensuring the function is total and computable by finite iteration.

Building on addition, the multiplication function $\operatorname{mul}(x, y)$ is also primitive recursive. It uses the recursion scheme: base case $\operatorname{mul}(x, 0) = 0$ (the constant zero function, an initial function) and recursive step $\operatorname{mul}(x, y+1) = \operatorname{add}(\operatorname{mul}(x, y), x)$. This repeatedly adds $x$ to itself $y$ times, leveraging the previously established primitive recursive addition via composition.

Exponentiation $\operatorname{exp}(x, y)$ extends this hierarchy and remains primitive recursive. Defined recursively as base case $\operatorname{exp}(x, 0) = 1$ (using the constant function) and recursive step $\operatorname{exp}(x, y+1) = \operatorname{mul}(\operatorname{exp}(x, y), x)$, it computes $x^y$ by iteratively multiplying $x$ into the accumulator $y$ times, composing with the primitive recursive multiplication.

More generally, bounded summation and bounded product operations are primitive recursive when the summand or factor is itself primitive recursive. The bounded sum $\sum_{z=0}^{y} g(x, z)$, where $g$ is primitive recursive, can be defined recursively: the base case for $y = 0$ is $g(x, 0)$, and the step is $\sum_{z=0}^{y+1} g(x, z) = \operatorname{add}\left(\sum_{z=0}^{y} g(x, z), g(x, y+1)\right)$, using addition and composition. Similarly, the bounded product $\prod_{z=0}^{y} g(x, z)$ follows an analogous scheme with multiplication in the recursive step, ensuring these operations stay within the primitive recursive class by bounding the recursion depth.
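The bounded-sum and bounded-product schemes can be sketched as follows (helper names are mine; g is assumed to be an already-defined primitive recursive function of two arguments):

```python
def bounded_sum(g):
    """Return F(x, y) = sum of g(x, z) for z = 0, ..., y."""
    def F(x, y):
        acc = g(x, 0)                  # base case: the sum up to z = 0
        for z in range(y):
            acc = acc + g(x, z + 1)    # step: add the next term
        return acc
    return F

def bounded_prod(g):
    """Return F(x, y) = product of g(x, z) for z = 0, ..., y."""
    def F(x, y):
        acc = g(x, 0)
        for z in range(y):
            acc = acc * g(x, z + 1)
        return acc
    return F

square = lambda x, z: z * z
assert bounded_sum(square)(0, 3) == 0 + 1 + 4 + 9          # = 14
assert bounded_prod(lambda x, z: z + 1)(0, 3) == 24        # = 4!
```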

Non-Primitive Recursive Examples

One prominent example of a total general recursive function that is not primitive recursive is the Ackermann function, which demonstrates the limitations of primitive recursion through its extremely rapid growth rate. The commonly used two-argument version, a simplification of the original three-argument form, is defined recursively for nonnegative integers $m$ and $n$ as follows:

\begin{align*} A(0, n) &= n + 1, \\ A(m + 1, 0) &= A(m, 1), \\ A(m + 1, n + 1) &= A(m, A(m + 1, n)). \end{align*}

This definition was introduced in its three-variable form by Wilhelm Ackermann in 1928 to provide an explicit total computable function outside the class of primitive recursive functions. It was later refined to the two-argument version by Rózsa Péter, highlighting functions computable via general recursion but not primitive recursion. The function's values escalate dramatically—for instance, $A(4, 2) = 2^{65536} - 3$—surpassing the growth of functions like exponentiation or iterated exponentiation.

To establish that the Ackermann function is not primitive recursive, a diagonalization argument is employed. Primitive recursive functions form a hierarchy indexed by the depth of nesting in their recursive definitions, where each level is dominated by a specific growth rate (from addition and multiplication up to higher levels for iterated exponentials). The diagonal sequence $A(k, k)$ grows faster than the bounding function at any fixed hierarchy level $k$, ensuring that no primitive recursive function can majorize the entire sequence.

The μ-operator plays a crucial role in defining general recursive functions that are not primitive recursive, particularly through unbounded minimization. An illustrative partial recursive function relying on the μ-operator is the halting predicate for Turing machines, which determines whether a machine with Gödel number $e$ halts on input $x$. Define $h(e, x) = \mu t\,[U(t_e(x, t)) = 0]$, where $t_e(x, t)$ simulates $t$ steps of machine $e$ on $x$, and $U$ extracts the output if halted (both primitive recursive via Kleene's T-predicate). The function $h$ is partial, defined only if the machine halts, and undecidable in general, underscoring the power and limitations of general recursion.
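The two-argument recurrence can be run directly, although even tiny inputs already strain a naive implementation; the sketch below memoizes calls and raises Python's recursion limit to evaluate a few small values (A(4, 2) is far out of reach of direct evaluation).

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Two-argument Ackermann-Peter function: total, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

assert ackermann(2, 3) == 9
assert ackermann(3, 3) == 61
```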

Equivalence to Other Models

Turing Machines

A Turing machine consists of a finite set of states, including an initial state and halting states; an infinite tape divided into cells, each holding a symbol from a finite alphabet that includes a blank symbol and the input alphabet as a subset; a read-write head that scans one cell at a time; and a partial transition function that, given the current state and the symbol under the head, specifies a symbol to write, a direction (left or right) to move the head, and a next state. Computation starts with the input encoded as a string of symbols on the tape (with the rest blank), the head on the leftmost input symbol, and the machine in the initial state; it proceeds by repeatedly applying the transition function until entering a halting state, at which point the tape contents represent the output if defined. This model formalizes mechanical computation as a discrete, step-by-step process.

The equivalence between general recursive functions (also known as μ-recursive functions) and Turing computability relies on mutual simulation. To show that every μ-recursive function is Turing-computable, natural numbers are encoded on the tape, typically in unary (using a string of marks) or in a coded form for indices. A universal Turing machine can then interpret an index e encoding the function's definition via base functions, composition, primitive recursion, and the μ-operator, simulating the computation on input x. Primitive recursion is implemented by bounded iteration over the tape, tracking parameters and accumulating results; the μ-operator is simulated by an unbounded search that enumerates candidate values until the predicate holds, using dovetailing to handle potential non-halting. The step relation T(e, x, y, s) captures whether the universal machine, starting with index e and input x, reaches output y after exactly s steps, allowing formalization of halting and computation. Conversely, any Turing machine computation can be encoded as a μ-recursive function by representing configurations (state, head position, tape contents) as numbers via coding functions, with transitions defined recursively to simulate steps until halting.

This bidirectional simulation establishes the central equivalence theorem: a partial function on the natural numbers is general recursive if and only if it is computable by a Turing machine. The result was demonstrated through foundational works showing that Turing's "computable" functions coincide with Kleene's general recursive functions, via proofs of inclusion in both directions. The Church-Turing thesis posits as a hypothesis that every function effectively computable in the intuitive sense is general recursive (equivalently, Turing-computable), providing a unifying foundation for the field despite lacking formal proof, as it bridges informal notions of effective calculability with precise models.
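A minimal Turing machine interpreter makes the model concrete. The sketch below (the machine encoding, state names, and tape convention are my own toy choices) runs a two-rule machine that computes the successor of a unary-encoded number by walking right past the input and appending one mark:

```python
def run_tm(transitions, tape_str, state="start", blank="_", halt="halt"):
    """Run a Turing machine given as {(state, symbol): (write, move, next_state)}."""
    tape = dict(enumerate(tape_str))          # sparse tape: cell index -> symbol
    head = 0
    while state != halt:
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)

successor_tm = {
    ("start", "1"): ("1", "R", "start"),      # skip over the unary input
    ("start", "_"): ("1", "R", "halt"),       # append one extra mark, then halt
}

assert run_tm(successor_tm, "111") == "1111"  # 3 + 1 in unary
```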

Lambda Calculus and Other Systems

The untyped lambda calculus, introduced by Alonzo Church, serves as a foundational model equivalent to general recursive functions. In this system, terms are built from variables, abstractions (λx.M), and applications (M N), with computation proceeding via beta-reduction, the process of substituting the argument of an application into the body of a lambda abstraction, subject to variable capture avoidance. Natural numbers are encoded as Church numerals, where the numeral for n applies a function f exactly n times to an argument x; for instance, 0 is λf.λx.x, 1 is λf.λx.f x, and successor is λn.λf.λx.f (n f x). These encodings allow arithmetic operations, such as addition and multiplication, to be defined combinatorially within lambda terms.

The equivalence between general recursive functions and lambda-definable functions was established by Stephen Kleene, who demonstrated that every general recursive function can be encoded as a lambda term, and conversely, every lambda-definable function is general recursive. Kleene's translation maps the base functions (zero, successor, projection) and operations (composition, primitive recursion, minimization) to corresponding lambda terms, preserving computability. The minimization operator, which introduces partiality by searching for the least zero of a function, corresponds in lambda calculus to the use of fixed-point combinators, such as the Y combinator defined as Y = λf.(λx.f (x x)) (λx.f (x x)), which enables the definition of recursive functions by finding fixed points without explicit self-reference.

Beyond the lambda calculus, general recursive functions are equivalent to computations in other models, including the counter (register) machines introduced by Marvin Minsky. A counter machine consists of a finite set of registers holding non-negative integers and instructions for incrementing, decrementing (if positive), or transferring control, with Minsky showing that such machines can simulate the μ-operator, thus computing all partial recursive functions. Similarly, Emil Post's canonical systems, or production systems, provide another equivalent formulation, where strings over an alphabet are transformed via rule applications, and Post established that these systems generate exactly the recursively enumerable sets when considering normal forms. These equivalences underscore the universality of general recursive functions across diverse computational paradigms.
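Church numerals and a fixed-point combinator can be imitated with Python lambdas; because Python is strictly evaluated, the sketch uses the Z combinator (the eta-expanded, call-by-value variant of the Y combinator described above). All names here are illustrative, not standard.

```python
# Church numerals: n encodes "apply f n times to x".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two, three = succ(succ(zero)), succ(succ(succ(zero)))
assert to_int(add(two)(three)) == 5
assert to_int(mul(two)(three)) == 6

# Z combinator: a strict-language fixed-point combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
factorial = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
assert factorial(5) == 120
```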

Key Theorems

Normal Form Theorem

Kleene's normal form theorem provides a canonical representation for every partial recursive function in terms of primitive recursive functions and the minimization operator. Specifically, for every partial recursive function $\phi_e^{(k)}(\bar{x})$ of $k$ arguments with index $e$, there exist a primitive recursive predicate $T^{(k)}(e, \bar{x}, y)$ and a primitive recursive function $u(y)$ such that $\phi_e^{(k)}(\bar{x}) \simeq u(\mu y\, T^{(k)}(e, \bar{x}, y))$, where $\simeq$ denotes convergence to the same value if the right-hand side is defined, and $\mu y$ is the minimization operator finding the least $y$ satisfying the predicate.

The predicate $T^{(k)}(e, \bar{x}, y)$, known as the Kleene $T$-predicate, encodes whether $y$ represents a valid computation history for the $e$-th partial recursive function on input $\bar{x}$. It is primitive recursive, hence total and computable for all inputs, and satisfies adequacy: if $\phi_e^{(k)}(\bar{x})$ converges to a value, then there exists a unique minimal $y$ such that $T^{(k)}(e, \bar{x}, y)$ holds, and for all larger $y'$, $T^{(k)}(e, \bar{x}, y')$ also holds. The function $u(y)$ unravels the code $y$ to extract the output value, and is likewise primitive recursive and total. These components ensure that the normal form captures exactly the partial recursive functions without introducing extraneous computations.

The proof relies on an effective indexing, which assigns indices $e$ to all partial recursive definitions via a scheme that encodes the structure of recursive schemata (composition, primitive recursion, and minimization). Computations are then arithmetized by coding entire sequences of function applications as $y$, allowing $T^{(k)}$ to verify step-by-step validity using primitive recursive checks on the codes. This construction demonstrates that every partial recursive function arises from a fixed universal form, independent of the specific index $e$.

As corollaries, the theorem yields the existence of a universal partial recursive function $\varphi(x, \bar{y}) = u(\mu z\, T^{(k)}(x, \bar{y}, z))$ that simulates the $x$-th function on $\bar{y}$, enumerating all partial recursive functions. It also implies the $s$-$m$-$n$ theorem, which provides a primitive recursive way to adjust indices for arguments, such as a function $s^{m,n}(i, \bar{w})$ satisfying $\phi_{s^{m,n}(i, \bar{w})}^{(n)}(\bar{z}) = \phi_i^{(m+n)}(\bar{w}, \bar{z})$. These results underscore the uniformity of partial recursive functions and their equivalence to other models of computation, such as Turing machines.

Fixed-Point Theorem

Kleene's fixed-point theorem, a cornerstone of recursion theory, guarantees the existence of self-referential functions within the class of partial recursive functions. Specifically, for any partial recursive function $f(x, y)$, there exists a partial recursive function $g(x)$ (often total when the fixed point is defined everywhere) such that $g(x) = f(x, g(x))$ for all $x$ in the domain where the right-hand side is defined. This result, first established by Stephen Kleene, enables the construction of functions that apply an operator to themselves, capturing essential aspects of recursion and self-reference.

The proof relies on the s-m-n theorem (parametrization theorem) and the universal function from Kleene's normal form theorem. Let $e$ be an index such that $\phi_e(x, y) = f(x, y)$. By the s-m-n theorem, there exists a primitive recursive function $s^1$ such that $\phi_{s^1_e(z)}(w) = \phi_e(z, w)$. To achieve self-reference, construct an index for the function that incorporates its own index in the application of $f$. Let $k$ be an index for the partial recursive function $\psi(z, x) = \phi_z(x, \phi_{s^1_z(z)}(x))$. Then, taking $g = \phi_k$, it follows that $g(x) = \phi_k(x, g(x)) = f(x, g(x))$, as the diagonal application aligns the indices appropriately. The explicit construction of the fixed point is given by the equation

$\phi_{s^1_e(e)}(x) = \phi_e(x, \phi_{s^1_e(e)}(x)),$

where the left side denotes the self-applied function and the right side embeds the recursive call, ensuring the fixed-point property holds by the properties of the universal function underlying the normal form.

This theorem plays a pivotal role in applications involving diagonalization and self-reference. It underpins the diagonalization lemma, which is central to Gödel's incompleteness theorems, allowing the construction of self-referential sentences in formal systems that assert their own unprovability. Additionally, as the recursion theorem itself, it facilitates the effective generation of programs that can inspect and modify their own descriptions, with implications for quines and undecidability results in computability theory.

Notation and Formalism

Standard Symbols and Conventions

In the theory of general recursive functions, partial recursive functions are commonly indexed by natural numbers using the notation $\phi_e$, where $e \in \mathbb{N}$ serves as a Gödel number encoding a program or formal description of the function. This indexing allows for the enumeration of all partial recursive functions, facilitating proofs of universality and completeness in computability theory. The domain of such a function, denoted $\operatorname{Dom}(\phi_e)$, consists of the set of inputs for which $\phi_e$ is defined and converges to a value, reflecting the partial nature of general recursive functions.

Arithmetic operations essential to the construction of recursive functions include the pairing function, often symbolized as $x \oplus y$, which bijectively encodes pairs of natural numbers into single natural numbers to enable coding of higher-arity functions. The signum function $\operatorname{sg}(n)$ is defined as 0 if $n = 0$ and 1 otherwise, providing a primitive recursive test for positivity that underpins many bounded operations. Standard primitive recursive predicates include equality, whose characteristic function returns 1 if the arguments are equal and 0 otherwise, and the less-than relation, which returns 1 if the first argument is strictly less than the second and 0 otherwise; both are constructed using basic recursive operations such as truncated subtraction and the signum function.
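The coding helpers named above have short concrete realizations; the sketch below uses the Cantor pairing function as the pair encoder and builds the characteristic functions of = and < from sg and truncated subtraction (all names are mine):

```python
def pair(x, y):
    """Cantor pairing function: a bijection from N x N onto N."""
    return (x + y) * (x + y + 1) // 2 + y

def sg(n):
    """Signum: 0 if n == 0, else 1."""
    return 0 if n == 0 else 1

def monus(x, y):
    """Truncated subtraction: x - y, floored at 0 (primitive recursive)."""
    return x - y if x >= y else 0

def chi_eq(x, y):
    """Characteristic function of equality: 1 if x == y, else 0."""
    return 1 - sg(monus(x, y) + monus(y, x))

def chi_lt(x, y):
    """Characteristic function of x < y: 1 if x < y, else 0."""
    return sg(monus(y, x))

assert pair(0, 0) == 0 and pair(1, 2) == 8
assert (chi_eq(3, 3), chi_eq(3, 4)) == (1, 0)
assert (chi_lt(2, 5), chi_lt(5, 2)) == (1, 0)
```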

Formal Language Representation

The formal syntax for general recursive functions, also known as μ-recursive functions, is constructed as a term language over the natural numbers $\mathbb{N} = \{0, 1, 2, \dots\}$, starting from basic terms and applying closure under composition, primitive recursion, and minimization. The base terms include the constant zero function $Z^n(\vec{x}) = 0$