Hyperoperation
In mathematics, the hyperoperation sequence[nb 1] is an infinite sequence of arithmetic operations (called hyperoperations in this context)[1][11][13] that starts with a unary operation (the successor function with n = 0). The sequence continues with the binary operations of addition (n = 1), multiplication (n = 2), and exponentiation (n = 3).
After that, the sequence proceeds with further binary operations extending beyond exponentiation, using right-associativity. For the operations beyond exponentiation, the nth member of this sequence is named by Reuben Goodstein after the Greek prefix of n suffixed with -ation (such as tetration (n = 4), pentation (n = 5), hexation (n = 6), etc.)[5] and can be written using n − 2 arrows in Knuth's up-arrow notation. Each hyperoperation may be understood recursively in terms of the previous one by:
a[n]b = a[n − 1](a[n](b − 1)), for n ≥ 1, b ≥ 1.
It may also be defined according to the recursion rule part of the definition, as in Knuth's up-arrow version of the Ackermann function:
a ↑n b = a ↑n−1 (a ↑n (b − 1)).
This notation can be used to express numbers far larger than those scientific notation can handle, such as Skewes's number and googolplexplex, but there are some numbers which even it cannot easily express, such as Graham's number and TREE(3).[14]
This recursion rule is common to many variants of hyperoperations.
Definition
Definition, most common
The hyperoperation sequence is the sequence of binary operations Hn : ℕ × ℕ → ℕ (indexed by n ≥ 0), defined recursively as follows:
Hn(a, b) =
- b + 1, if n = 0
- a, if n = 1 and b = 0
- 0, if n = 2 and b = 0
- 1, if n ≥ 3 and b = 0
- Hn−1(a, Hn(a, b − 1)), otherwise
(Note that for n = 0, the binary operation essentially reduces to a unary operation (successor function) by ignoring the first argument.)
For n = 0, 1, 2, 3, this definition reproduces the basic arithmetic operations of successor (which is a unary operation), addition, multiplication, and exponentiation, respectively, as
H0(a, b) = b + 1,
H1(a, b) = a + b,
H2(a, b) = a · b,
H3(a, b) = a^b.
The operations for n ≥ 3 can be written in Knuth's up-arrow notation.
So what will be the next operation after exponentiation? We defined multiplication so that H2(a, 3) = a + a + a, and defined exponentiation so that H3(a, 3) = a · a · a, so it seems logical to define the next operation, tetration, so that H4(a, 3) = a^(a^a), a tower of three 'a'. Analogously, the pentation of (a, 3) will be tetration(a, tetration(a, a)), with three "a" in it.
Knuth's notation could be extended to negative indices ≥ −2 in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing:
Hn(a, b) = a ↑n−2 b, for n ≥ 0.
The hyperoperations can thus be seen as an answer to the question "what's next" in the sequence: successor, addition, multiplication, exponentiation, and so on. Noting that
a + b = 1 + (a + (b − 1)),
a · b = a + (a · (b − 1)),
a^b = a · (a^(b − 1)),
the relationship between the basic arithmetic operations is illustrated, allowing the higher operations to be defined naturally as above. The parameters of the hyperoperation hierarchy are sometimes referred to by their analogous exponentiation term;[15] so a is the base, b is the exponent (or hyperexponent),[12] and n is the rank (or grade).[6] Moreover, a[n]b is read as "the bth n-ation of a"; e.g. 7[4]9 is read as "the 9th tetration of 7", and 456[123]789 is read as "the 789th 123-ation of 456".
In common terms, the hyperoperations are ways of compounding numbers that increase in growth based on the iteration of the previous hyperoperation. The concepts of successor, addition, multiplication and exponentiation are all hyperoperations; the successor operation (producing x + 1 from x) is the most primitive, the addition operator specifies the number of times 1 is to be added to itself to produce a final value, multiplication specifies the number of times a number is to be added to itself, and exponentiation refers to the number of times a number is to be multiplied by itself.
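The recursive definition above can be transcribed directly; the following is a minimal Python sketch (the function name is ours, and the naive recursion is practical only for very small arguments):

def hyperoperation(n, a, b):
    # Hn(a, b), following the piecewise definition above
    if n == 0:
        return b + 1                                   # successor
    if b == 0:
        return a if n == 1 else 0 if n == 2 else 1     # base cases for addition, multiplication, n >= 3
    return hyperoperation(n - 1, a, hyperoperation(n, a, b - 1))

# H1(3, 4) = 7, H2(3, 4) = 12, H3(3, 4) = 81, H4(2, 3) = 16
print([hyperoperation(n, 3, 4) for n in (1, 2, 3)], hyperoperation(4, 2, 3))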
Definition, using iteration
Define iteration of a function f of two variables as
f^1(a, x) = f(a, x),
f^(b+1)(a, x) = f(a, f^b(a, x)).
The hyperoperation sequence can be defined in terms of iteration, as follows. For all integers n ≥ 0 define
H0(a, b) = b + 1,
H1(a, 0) = a,  H2(a, 0) = 0,  Hn(a, 0) = 1 for n ≥ 3,
Hn+1(a, b) = Hn^b(a, Hn+1(a, 0)) for b ≥ 1, where Hn^b denotes the b-fold iteration of Hn in its second argument.
As iteration is associative, the last line can be replaced by
f^(b+1)(a, x) = f^b(a, f(a, x)).
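Under the base values assumed in the reconstruction above, this reading translates into the following Python sketch, in which Hn+1(a, b) is obtained by applying x ↦ Hn(a, x) exactly b times to the level-dependent initial value:

def hyper_iter(n, a, b):
    # compute Hn(a, b) by looping on b instead of recursing on it
    if n == 0:
        return b + 1
    x = a if n == 1 else 0 if n == 2 else 1   # initial value Hn(a, 0)
    for _ in range(b):
        x = hyper_iter(n - 1, a, x)           # recursion depth grows with n only, not with b
    return x

print(hyper_iter(3, 2, 10))  # 1024
print(hyper_iter(4, 2, 3))   # 16

Because the loop replaces recursion on b, the depth of recursion is bounded by n; this is the point made in the Remarks under Computation below.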
Computation
The definitions of the hyperoperation sequence can naturally be transposed to term rewriting systems (TRS).
TRS based on definition sub 1.1
The basic definition of the hyperoperation sequence corresponds with the reduction rules r1–r5, one for each clause of the recursive definition.
To compute Hn(a, b) one can use a stack, which initially contains the three arguments n, a and b.
Then, repeatedly until no longer possible, three elements are popped and replaced according to the rules[nb 2]
Schematically, starting from the initial configuration:
WHILE stackLength <> 1
{
POP 3 elements;
PUSH 1 or 5 elements according to the rules r1, r2, r3, r4, r5;
}
Example
Compute .[16]
The reduction sequence is[nb 2][17]
When implemented using a stack, the successive stack configurations on this input represent the corresponding equations.
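The stack procedure above can be sketched in Python. This is an illustration only: the stack layout (a call Hn(a, b) pushed as the three numbers n, a, b, with b on top) is our assumption, and the rules below simply mirror the clauses of the recursive definition rather than reproducing the exact rule set r1–r5; as in the schema, each step pops three elements and pushes one or five.

def hyper_stack(n, a, b):
    # evaluate Hn(a, b) with an explicit stack instead of recursion
    stack = [n, a, b]
    while len(stack) > 1:
        b = stack.pop(); a = stack.pop(); n = stack.pop()
        if n == 0:
            stack.append(b + 1)                                 # successor clause
        elif b == 0:
            stack.append(a if n == 1 else 0 if n == 2 else 1)   # base cases
        else:
            # Hn(a, b) -> Hn-1(a, Hn(a, b - 1)): push the outer frame, then the inner call on top
            stack.extend([n - 1, a, n, a, b - 1])
    return stack[0]

print(hyper_stack(2, 3, 3))  # 9, i.e. 3 · 3, reduced entirely to successor steps

Because every operation is ultimately reduced to successor steps, this is practical only for very small arguments.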
TRS based on definition sub 1.2
The definition using iteration leads to a different set of reduction rules, r6–r11.
As iteration is associative, rule r11 can be replaced by an alternative rule r12 that unfolds the iteration from the other end.
As in the previous section, the computation of Hn(a, b) can be implemented using a stack.
Initially the stack contains four elements.
Then, until termination, four elements are popped and replaced according to the rules[nb 2]
Schematically, starting from the initial configuration:
WHILE stackLength <> 1
{
POP 4 elements;
PUSH 1 or 7 elements according to the rules r6, r7, r8, r9, r10, r11;
}
Example
Compute .
On this input, the successive stack configurations are
The corresponding equalities are
When reduction rule r11 is replaced by rule r12, the stack is transformed according to
The successive stack configurations will then be
The corresponding equalities are
Remarks
- The value of 0⁰ is a special case. See below.[nb 3]
- The computation of Hn(a, b) according to the rules {r6–r10, r11} is heavily recursive. The culprit is the order in which iteration is executed: the outermost iteration step disappears only after the whole sequence is unfolded. For instance, one such computation converges to 65536 in 2863311767 steps, and the maximum depth of recursion[18] is 65534.
- The computation according to the rules {r6–r10, r12} is more efficient in that respect. Implementing the iteration this way mimics the repeated execution of a procedure H.[19] The depth of recursion, (n+1), matches the loop nesting. Meyer & Ritchie (1967) formalized this correspondence. The same computation according to the rules {r6–r10, r12} also needs 2863311767 steps to converge on 65536, but the maximum depth of recursion is only 5, as tetration is the 5th operator in the hyperoperation sequence.
- The considerations above concern the recursion depth only. Either way of iterating leads to the same number of reduction steps, involving the same rules (when the rules r11 and r12 are considered "the same"). As the example shows, the reduction converges in 9 steps: 1 × r7, 3 × r8, 1 × r9, 2 × r10, 2 × r11/r12. The modus iterandi only affects the order in which the reduction rules are applied.
Examples
Below is a list of the first seven (0th to 6th) hyperoperations (0⁰ is defined as 1).
| n | Operation, Hn(a, b) | Definition | Names | Domain |
|---|---|---|---|---|
| 0 | 1 + b | 1 + b | Increment, successor, zeration, hyper0 | Arbitrary |
| 1 | a + b | a + (1 + 1 + ... + 1), with b copies of 1 | Addition, hyper1 | |
| 2 | a · b | a + a + ... + a, with b copies of a | Multiplication, hyper2 | |
| 3 | a^b or a ↑ b | a · a · ... · a, with b copies of a | Exponentiation, hyper3 | b real, with some multivalued extensions to complex numbers |
| 4 | a ↑↑ b | a^(a^(...^a)), a tower of b copies of a | Tetration, hyper4 | a ≥ 0 or an integer, b an integer ≥ −1 [nb 4] (with some proposed extensions) |
| 5 | a ↑↑↑ b | a ↑↑ (a ↑↑ (... ↑↑ a)), with b copies of a | Pentation, hyper5 | a, b integers ≥ −1 [nb 4] |
| 6 | a ↑↑↑↑ b | a ↑↑↑ (a ↑↑↑ (... ↑↑↑ a)), with b copies of a | Hexation, hyper6 | |
Special cases
Hn(0, b) =
- b + 1, when n = 0
- b, when n = 1
- 0, when n = 2
- 1, when n = 3 and b = 0 [nb 3]
- 0, when n = 3 and b > 0 [nb 3]
- 1, when n > 3 and b is even (including 0)
- 0, when n > 3 and b is odd
Hn(1, b) =
- b, when n = 2
- 1, when n ≥ 3
Hn(a, 0) =
- 0, when n = 2
- 1, when n = 0, or n ≥ 3
- a, when n = 1
Hn(a, 1) =
- 2, when n = 0
- a + 1, when n = 1
- a, when n ≥ 2
Hn(a, a) =
- Hn+1(a, 2), when n ≥ 1
Hn(a, −1) =[nb 4]
- 0, when n = 0, or n ≥ 4
- a − 1, when n = 1
- −a, when n = 2
- 1/a , when n = 3
Hn(2, 2) =
- 3, when n = 0
- 4, when n ≥ 1 (easily demonstrated by induction on n; see the verification sketch below).
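The identities above can be spot-checked mechanically. A minimal sketch (restating the naive recursion so the snippet is self-contained):

def H(n, a, b):
    if n == 0: return b + 1
    if b == 0: return a if n == 1 else 0 if n == 2 else 1
    return H(n - 1, a, H(n, a, b - 1))

assert all(H(n, 1, 7) == 1 for n in (3, 4, 5))               # Hn(1, b) = 1 for n >= 3
assert all(H(n, 6, 1) == 6 for n in (2, 3, 4))               # Hn(a, 1) = a for n >= 2
assert all(H(n, 3, 3) == H(n + 1, 3, 2) for n in (1, 2, 3))  # Hn(a, a) = Hn+1(a, 2)
assert all(H(n, 2, 2) == 4 for n in (1, 2, 3, 4, 5))         # Hn(2, 2) = 4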
History
One of the earliest discussions of hyperoperations was that of Albert Bennett in 1914, who developed some of the theory of commutative hyperoperations (see below).[6] About 12 years later, Wilhelm Ackermann defined the function φ(a, b, n), which somewhat resembles the hyperoperation sequence.[20]
In his 1947 paper,[5] Reuben Goodstein introduced the specific sequence of operations that are now called hyperoperations, and also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation (because they correspond to the indices 4, 5, etc.). As a three-argument function, e.g. G(n, a, b) = Hn(a, b), the hyperoperation sequence as a whole is seen to be a version of the original Ackermann function (recursive but not primitive recursive) as modified by Goodstein to incorporate the primitive successor function together with the other three basic operations of arithmetic (addition, multiplication, exponentiation), and to make a more seamless extension of these beyond exponentiation.
The original three-argument Ackermann function φ uses the same recursion rule as does Goodstein's version of it (i.e., the hyperoperation sequence), but differs from it in two ways. First, φ(a, b, n) defines a sequence of operations starting from addition (n = 0) rather than the successor function, then multiplication (n = 1), exponentiation (n = 2), etc. Secondly, the initial conditions for φ result in φ(a, b, 3) = a ↑↑ (b + 1), thus differing from the hyperoperations beyond exponentiation.[7][21][22] The significance of the b + 1 in the previous expression is that φ(a, b, 3) is a power tower in which b counts the number of operators (exponentiations), rather than counting the number of operands ("a"s) as does the b in a ↑↑ b, and so on for the higher-level operations. (See the Ackermann function article for details.)
Notations
This is a list of notations that have been used for hyperoperations.
| Name | Notation equivalent to Hn(a, b) | Comment |
|---|---|---|
| Knuth's up-arrow notation | a ↑n−2 b | Used by Knuth[23] (for n ≥ 3), and found in several reference books.[24][25] |
| Hilbert's notation | | Used by David Hilbert.[26] |
| Goodstein's notation | G(n, a, b) | Used by Reuben Goodstein.[5] |
| Original Ackermann function | | Used by Wilhelm Ackermann (for n ≥ 1).[20] |
| Ackermann–Péter function | | This corresponds to hyperoperations for base 2 (a = 2). |
| Nambiar's notation | | Used by Nambiar (for n ≥ 1).[27] |
| Superscript notation | | Used by Robert Munafo.[21] |
| Subscript notation (for lower hyperoperations) | | Used for lower hyperoperations by Robert Munafo.[21] |
| Operator notation (for "extended operations") | | Used for lower hyperoperations by John Doner and Alfred Tarski (for n ≥ 1).[28] |
| Square bracket notation | a[n]b | Used in many online forums; convenient for ASCII. |
| Conway chained arrow notation | a → b → (n − 2) | Used by John Horton Conway (for n ≥ 3). |
Variant starting from a
In 1928, Wilhelm Ackermann defined a 3-argument function φ(a, b, n) which gradually evolved into a 2-argument function known as the Ackermann function. The original Ackermann function was less similar to modern hyperoperations, because his initial conditions start with φ(a, 0, n) = a for all n > 2. Also he assigned addition to n = 0, multiplication to n = 1 and exponentiation to n = 2, so the initial conditions produce very different operations for tetration and beyond. (A small verification sketch follows the table below.)
| n | Operation | Comment |
|---|---|---|
| 0 | a + b | |
| 1 | a · b | |
| 2 | a^b | |
| 3 | a ↑↑ (b + 1) | An offset form of tetration. The iteration of this operation is different from the iteration of tetration. |
| 4 | | Not to be confused with pentation. |
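The original three-argument recursion can be transcribed for small arguments; here is a sketch (the symbol phi and the base values follow the standard presentation of Ackermann's 1928 definition) checking the tetration offset described above:

def phi(a, b, n):
    # Ackermann's original recursion: phi(a, b, 0) = a + b, phi(a, b, n) = phi(a, phi(a, b - 1, n), n - 1)
    if n == 0:
        return a + b
    if b == 0:
        return 0 if n == 1 else 1 if n == 2 else a   # initial conditions; phi(a, 0, n) = a for n > 2
    return phi(a, phi(a, b - 1, n), n - 1)

def tet(a, b):
    # ordinary tetration a[4]b
    return 1 if b == 0 else a ** tet(a, b - 1)

# phi(a, b, 3) equals a[4](b + 1): tetration offset by one
assert all(phi(2, b, 3) == tet(2, b + 1) for b in range(3))
assert phi(3, 1, 3) == tet(3, 2) == 27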
Another initial condition that has been used, due to Rózsa Péter (in which the base is held constant), does not form a hyperoperation hierarchy.
Variant starting from 0
In 1984, C. W. Clenshaw and F. W. J. Olver began the discussion of using hyperoperations to prevent computer floating-point overflows.[29] Since then, many other authors[30][31][32] have renewed interest in the application of hyperoperations to floating-point representation. (Hn(a, b) is defined for b = −1 as well.) While discussing tetration, Clenshaw et al. assumed an initial condition of 0 at b = 0, which makes yet another hyperoperation hierarchy. Just like in the previous variant, the fourth operation is very similar to tetration, but offset by one.
| n | Operation | Comment |
|---|---|---|
| 0 | | |
| 1 | a + b | |
| 2 | a · b | |
| 3 | a^b | |
| 4 | a ↑↑ (b − 1) | An offset form of tetration. The iteration of this operation is much different from the iteration of tetration. |
| 5 | | Not to be confused with pentation. |
Lower hyperoperations
An alternative for these hyperoperations is obtained by evaluation from left to right.[9] Since
a + b = (a + (b − 1)) + 1,
a · b = (a · (b − 1)) + a,
a^b = (a^(b − 1)) · a,
define (with ° or subscript)
a °(n+1) b = (a °(n+1) (b − 1)) °(n) a for b ≥ 2,
with
a °(n) 1 = a for n ≥ 2, a °(1) b = a + b, and a °(0) b = a + 1.
(A numerical sketch follows the table below.)
This was extended to ordinal numbers by Doner and Tarski,[33] by transfinite recursion.
It follows from Definition 1(i), Corollary 2(ii), and Theorem 9 that, for a ≥ 2 and b ≥ 1:[original research?]
But this suffers a kind of collapse, failing to form the "power tower" traditionally expected of hyperoperators:[34][nb 5] a °(4) b = a^(a^(b − 1)).
If α ≥ 2 and γ ≥ 2:[28][Corollary 33(i)][nb 5]
| n | Operation | Comment |
|---|---|---|
| 0 | a + 1 | Increment, successor, zeration |
| 1 | a + b | |
| 2 | a · b | |
| 3 | a^b | |
| 4 | a^(a^(b − 1)) | Not to be confused with tetration. |
| 5 | | Not to be confused with pentation. Similar to tetration. |
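A sketch of the left-to-right evaluation in Python (the base cases are those assumed in the reconstruction above, with b ≥ 1), illustrating the collapse noted in the text:

def lower(n, a, b):
    # left-associative ("lower") hyperoperation a (n) b, for b >= 1
    if n == 0:
        return a + 1
    if n == 1:
        return a + b
    if b == 1:
        return a
    return lower(n - 1, lower(n, a, b - 1), a)   # a (n) b = (a (n) (b - 1)) (n - 1) a

# the collapse: a (4) b = a ** (a ** (b - 1))
assert all(lower(4, 3, b) == 3 ** (3 ** (b - 1)) for b in (1, 2, 3))
print(lower(4, 3, 3))  # 19683 = 3 ** 9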
Commutative hyperoperations
Commutative hyperoperations were considered by Albert Bennett as early as 1914,[6] which is possibly the earliest remark about any hyperoperation sequence. Commutative hyperoperations are defined by the recursion rule
Fn+1(a, b) = exp(Fn(ln(a), ln(b))),
which is symmetric in a and b, meaning all of these hyperoperations are commutative. This sequence does not contain exponentiation, and so does not form a hyperoperation hierarchy. (A numerical sketch follows the table below.)
| n | Operation | Comment |
|---|---|---|
| 0 | ln(e^a + e^b) | Smooth maximum (LogSumExp) |
| 1 | a + b | |
| 2 | a · b = e^(ln(a) + ln(b)) | This is due to the properties of the logarithm. |
| 3 | a^ln(b) = e^(ln(a) · ln(b)) | In a finite field, this is the Diffie–Hellman key exchange operation. |
| 4 | e^(e^(ln(ln(a)) · ln(ln(b)))) | Not to be confused with tetration. |
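A numerical sketch of Bennett's recursion in Python (floating point, so values are approximate; the level-0 operation is the log-sum-exp shown in the table):

import math

def bennett(n, a, b):
    # commutative hyperoperations: F0 = LogSumExp, F1 = addition, Fn+1(a, b) = exp(Fn(ln a, ln b))
    if n == 0:
        return math.log(math.exp(a) + math.exp(b))
    if n == 1:
        return a + b
    return math.exp(bennett(n - 1, math.log(a), math.log(b)))

print(round(bennett(2, 3.0, 4.0), 6))                            # 12.0 (multiplication)
print(round(bennett(3, 2.0, 8.0), 3))                            # exp(ln 2 · ln 8), about 4.227
print(math.isclose(bennett(3, 5.0, 7.0), bennett(3, 7.0, 5.0)))  # commutative: True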
Numeration systems based on the hyperoperation sequence
R. L. Goodstein[5] used the sequence of hyperoperators to create systems of numeration for the nonnegative integers. The so-called complete hereditary representation of integer n, at level k and base b, can be expressed as follows using only the first k hyperoperators and using as digits only 0, 1, ..., b − 1, together with the base b itself:
- For 0 ≤ n ≤ b − 1, n is represented simply by the corresponding digit.
- For n > b − 1, the representation of n is found recursively, first representing n in the form
- b [k] xk [k − 1] xk − 1 [k - 2] ... [2] x2 [1] x1
- where xk, ..., x1 are the largest integers satisfying (in turn)
- b [k] xk ≤ n
- b [k] xk [k − 1] xk − 1 ≤ n
- ...
- b [k] xk [k − 1] xk − 1 [k - 2] ... [2] x2 [1] x1 ≤ n
- Any xi exceeding b − 1 is then re-expressed in the same manner, and so on, repeating this procedure until the resulting form contains only the digits 0, 1, ..., b − 1, together with the base b.
Unnecessary parentheses can be avoided by giving higher-level operators higher precedence in the order of evaluation; thus,
- level-1 representations have the form b [1] X, with X also of this form;
- level-2 representations have the form b [2] X [1] Y, with X,Y also of this form;
- level-3 representations have the form b [3] X [2] Y [1] Z, with X,Y,Z also of this form;
- level-4 representations have the form b [4] X [3] Y [2] Z [1] W, with X,Y,Z,W also of this form;
and so on.
In this type of base-b hereditary representation, the base itself appears in the expressions, as well as "digits" from the set {0, 1, ..., b − 1}. This compares to ordinary base-2 representation when the latter is written out in terms of the base b; e.g., in ordinary base-2 notation, 6 = (110)2 = 2 [3] 2 [2] 1 [1] 2 [3] 1 [2] 1 [1] 2 [3] 0 [2] 0, whereas the level-3 base-2 hereditary representation is 6 = 2 [3] (2 [3] 1 [2] 1 [1] 0) [2] 1 [1] (2 [3] 1 [2] 1 [1] 0). The hereditary representations can be abbreviated by omitting any instances of [1] 0, [2] 1, [3] 1, [4] 1, etc.; for example, the above level-3 base-2 representation of 6 abbreviates to 2 [3] 2 [1] 2.
Examples: The unique base-2 representations of the number 266, at levels 1, 2, 3, 4, and 5 are as follows (a small program for generating such representations is sketched after the list):
- Level 1: 266 = 2 [1] 2 [1] 2 [1] ... [1] 2 (with 133 2s)
- Level 2: 266 = 2 [2] (2 [2] (2 [2] (2 [2] 2 [2] 2 [2] 2 [2] 2 [1] 1)) [1] 1)
- Level 3: 266 = 2 [3] 2 [3] (2 [1] 1) [1] 2 [3] (2 [1] 1) [1] 2
- Level 4: 266 = 2 [4] (2 [1] 1) [3] 2 [1] 2 [4] 2 [2] 2 [1] 2
- Level 5: 266 = 2 [5] 2 [4] 2 [1] 2 [5] 2 [2] 2 [1] 2
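The greedy procedure above can be sketched in Python. This is an illustration only (function names are ours, and it assumes a base b ≥ 2); it prints slightly more parentheses than the abbreviated forms shown in the examples, but the values agree, and it is practical only for small inputs.

def hyper(lvl, a, b):
    # Hlvl(a, b) for lvl >= 1, with closed forms for the first three levels
    if lvl == 1: return a + b
    if lvl == 2: return a * b
    if lvl == 3: return a ** b
    return 1 if b == 0 else hyper(lvl - 1, a, hyper(lvl, a, b - 1))

def wrap(s):
    return s if s.isdigit() else "(" + s + ")"

def hereditary(n, b, k):
    # complete hereditary representation of n at level k and base b, as a string
    if n < b:
        return str(n)
    acc, parts = b, [str(b)]
    for lvl in range(k, 1, -1):
        x = 1
        while hyper(lvl, acc, x + 1) <= n:   # largest x with (value so far) [lvl] x <= n
            x += 1
        if not (x == 1 and lvl >= 2):        # abbreviation: omit "[lvl] 1" for lvl >= 2
            parts.append(f"[{lvl}] {wrap(hereditary(x, b, k))}")
        acc = hyper(lvl, acc, x)
    rest = n - acc                           # what remains is added at level 1
    if rest:
        parts.append("[1] " + wrap(hereditary(rest, b, k)))
    return " ".join(parts)

print(hereditary(266, 2, 3))  # 2 [3] (2 [3] (2 [1] 1)) [1] (2 [3] (2 [1] 1) [1] 2)
print(hereditary(266, 2, 5))  # 2 [5] 2 [4] 2 [1] (2 [5] 2 [2] 2 [1] 2)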
Notes
- ^ Sequences similar to the hyperoperation sequence have historically been referred to by many names, including: the Ackermann function[1] (3-argument), the Ackermann hierarchy,[2] the Grzegorczyk hierarchy[3][4] (which is more general), Goodstein's version of the Ackermann function,[5] operation of the nth grade,[6] z-fold iterated exponentiation of x with y,[7] arrow operations,[8] reihenalgebra[9] and hyper-n.[1][9][10][11][12]
- ^ a b c This implements the leftmost-innermost (one-step) strategy.
- ^ a b c For more details, see Powers of zero or Zero to the power of zero.
- ^ a b c Let x = a[n](−1). By the recursive formula, a[n]0 = a[n − 1](a[n](−1)) ⇒ 1 = a[n − 1]x. One solution is x = 0, because a[n − 1]0 = 1 by definition when n ≥ 4. This solution is unique because a[n − 1]b > 1 for all a > 1, b > 0 (proof by recursion).
- ^ a b Ordinal addition is not commutative; see ordinal arithmetic for more information
References
- ^ a b c Geisler 2003.
- ^ Friedman 2001.
- ^ Campagnola, Moore & Félix Costa 2002.
- ^ Wirz 1999.
- ^ a b c d e Goodstein 1947.
- ^ a b c d Bennett 1915.
- ^ a b Black 2009.
- ^ Littlewood 1948.
- ^ a b c Müller 1993.
- ^ Munafo 1999a.
- ^ a b Robbins 2005.
- ^ a b Galidakis 2003.
- ^ Rubtsov & Romerio 2005.
- ^ Townsend 2016.
- ^ Romerio 2008.
- ^ Bezem, Klop & De Vrijer 2003.
- ^ In each step the underlined redex is rewritten.
- ^ The maximum depth of recursion refers to the number of levels of activation of a procedure which exist during the deepest call of the procedure. Cornelius & Kirby (1975)
- ^ LOOP n TIMES DO H.
- ^ a b Ackermann 1928.
- ^ a b c Munafo 1999b.
- ^ Cowles & Bailey 1988.
- ^ Knuth 1976.
- ^ Zwillinger 2002.
- ^ Weisstein 2003.
- ^ Hilbert 1926.
- ^ Nambiar 1995.
- ^ a b Doner & Tarski 1969.
- ^ Clenshaw & Olver 1984.
- ^ Holmes 1997.
- ^ Zimmermann 1997.
- ^ Pinkiewicz, Holmes & Jamil 2000.
- ^ Doner & Tarski 1969, Definition 1.
- ^ Doner & Tarski 1969, Theorem 3(iii).
Bibliography
- Ackermann, Wilhelm (1928). "Zum Hilbertschen Aufbau der reellen Zahlen". Mathematische Annalen. 99: 118–133. doi:10.1007/BF01459088. S2CID 123431274.
- Bennett, Albert A. (December 1915). "Note on an Operation of the Third Grade". Annals of Mathematics. Second Series. 17 (2): 74–75. doi:10.2307/2007124. JSTOR 2007124.
- Bezem, Marc; Klop, Jan Willem; De Vrijer, Roel (2003). "First-order term rewriting systems". Term Rewriting Systems by "Terese". Cambridge University Press. pp. 38–39. ISBN 0-521-39115-6.
- Black, Paul E. (16 March 2009). "Ackermann's function". Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology (NIST). Retrieved 29 August 2021.
- Campagnola, Manuel Lameiras; Moore, Cristopher; Félix Costa, José (December 2002). "Transfinite Ordinals in Recursive Number Theory". Journal of Complexity. 18 (4): 977–1000. doi:10.1006/jcom.2002.0655.
- Clenshaw, C.W.; Olver, F.W.J. (April 1984). "Beyond floating point". Journal of the ACM. 31 (2): 319–328. doi:10.1145/62.322429. S2CID 5132225.
- Cornelius, B.J.; Kirby, G.H. (1975). "Depth of recursion and the ackermann function". BIT Numerical Mathematics. 15 (2): 144–150. doi:10.1007/BF01932687. S2CID 120532578.
- Cowles, J.; Bailey, T. (30 September 1988). "Several Versions of Ackermann's Function". Dept. of Computer Science, University of Wyoming, Laramie, WY. Retrieved 29 August 2021.
- Doner, John; Tarski, Alfred (1969). "An extended arithmetic of ordinal numbers". Fundamenta Mathematicae. 65: 95–127. doi:10.4064/fm-65-1-95-127.
- Friedman, Harvey M. (July 2001). "Long Finite Sequences". Journal of Combinatorial Theory. Series A. 95 (1): 102–144. doi:10.1006/jcta.2000.3154.
- Galidakis, I. N. (2003). "Mathematics". Archived from the original on 20 April 2009. Retrieved 17 April 2009.
- Geisler, Daniel (2003). "What lies beyond exponentiation?". Retrieved 17 April 2009.
- Goodstein, Reuben Louis (December 1947). "Transfinite Ordinals in Recursive Number Theory" (PDF). Journal of Symbolic Logic. 12 (4): 123–129. doi:10.2307/2266486. JSTOR 2266486. S2CID 1318943.
- Hilbert, David (1926). "Über das Unendliche". Mathematische Annalen. 95: 161–190. doi:10.1007/BF01206605. S2CID 121888793.
- Holmes, W. N. (March 1997). "Composite Arithmetic: Proposal for a New Standard". Computer. 30 (3): 65–73. doi:10.1109/2.573666. Retrieved 21 April 2009.
- Knuth, Donald Ervin (December 1976). "Mathematics and Computer Science: Coping with Finiteness". Science. 194 (4271): 1235–1242. Bibcode:1976Sci...194.1235K. doi:10.1126/science.194.4271.1235. PMID 17797067. S2CID 1690489. Retrieved 21 April 2009.
- Littlewood, J. E. (July 1948). "Large Numbers". Mathematical Gazette. 32 (300): 163–171. doi:10.2307/3609933. JSTOR 3609933. S2CID 250442130.
- Meyer, Albert R.; Ritchie, Dennis MacAlistair (1967). The complexity of loop programs. ACM '67: Proceedings of the 1967 22nd national conference. doi:10.1145/800196.806014.
- Müller, Markus (1993). "Reihenalgebra" (PDF). Archived from the original (PDF) on 2 December 2013. Retrieved 6 November 2021.
- Munafo, Robert (1999a). "Versions of Ackermann's Function". Large Numbers at MROB. Retrieved 28 August 2021.
- Munafo, Robert (1999b). "Inventing New Operators and Functions". Large Numbers at MROB. Retrieved 28 August 2021.
- Nambiar, K. K. (1995). "Ackermann Functions and Transfinite Ordinals". Applied Mathematics Letters. 8 (6): 51–53. doi:10.1016/0893-9659(95)00084-4.
- Perstein, Millard H. (1 June 1962). "Algorithm 93: General order arithmetic". Communications of the ACM. 5 (6). New York City: Association for Computing Machinery: 344. doi:10.1145/367766.368160. ISSN 0001-0782.
- Pinkiewicz, T.; Holmes, N.; Jamil, T. (2000). "Design of a composite arithmetic unit for rational numbers". Proceedings of the IEEE Southeast Con 2000. 'Preparing for the New Millennium' (Cat. No.00CH37105). Proceedings of the IEEE. pp. 245–252. doi:10.1109/SECON.2000.845571. ISBN 0-7803-6312-4. S2CID 7738926.
- Robbins, A. J. (November 2005). "Home of Tetration". Archived from the original on 13 June 2015. Retrieved 17 April 2009.
- Romerio, G. F. (21 January 2008). "Hyperoperations Terminology". Tetration Forum. Retrieved 21 April 2009.
- Rubtsov, C. A.; Romerio, G. F. (December 2005). "Ackermann's Function and New Arithmetical Operation". Retrieved 17 April 2009.
- Townsend, Adam (12 May 2016). "Names for large numbers". Chalkdust magazine.
- Weisstein, Eric W. (2003). CRC concise encyclopedia of mathematics, 2nd Edition. CRC Press. pp. 127–128. ISBN 1-58488-347-2.
- Wirz, Marc (1999). "Characterizing the Grzegorczyk hierarchy by safe recursion" (PDF). Bern: Institut für Informatik und angewandte Mathematik. CiteSeerX 10.1.1.42.3374. S2CID 117417812.
- Zimmermann, R. (1997). "Computer Arithmetic: Principles, Architectures, and VLSI Design" (PDF). Lecture notes, Integrated Systems Laboratory, ETH Zürich. Archived from the original (PDF) on 17 August 2013. Retrieved 17 April 2009.
- Zwillinger, Daniel (2002). CRC standard mathematical tables and formulae, 31st Edition. CRC Press. p. 4. ISBN 1-58488-291-3.
Hyperoperation
The hyperoperation sequence Hn(a, b) is defined by the following base and recursive cases:
- H0(a, b) = b + 1 (successor),
- for b = 0, Hn(a, 0) is a (n = 1), 0 (n = 2), or 1 (n ≥ 3);
- for b > 0, Hn(a, b) = Hn−1(a, Hn(a, b − 1)).
This yields H1(a, b) = a + b (addition as repeated successor), H2(a, b) = a · b (multiplication as repeated addition), H3(a, b) = a^b (exponentiation as repeated multiplication), H4(a, b) as tetration (a power tower of b copies of a), and so on for higher ranks. Goodstein's original formulation emphasized their role in embedding transfinite ordinals into primitive recursive functions, highlighting their foundational importance in proof theory and recursion.[1]
Definitions
Recursive Definition
The hyperoperation Hn(a, b), where n denotes the level of the operation, a the base, and b the height, generalizes the hierarchy of arithmetic operations into an infinite sequence defined over non-negative integers.[4] This sequence commences at level 0 with the successor function, H0(a, b) = b + 1, which increments b regardless of a and establishes a foundation for handling zero in natural number arithmetic. At level 1 it yields addition, H1(a, b) = a + b; level 2 gives multiplication, H2(a, b) = a · b; and level 3 produces exponentiation, H3(a, b) = a^b.[4] Higher levels follow the core recursive schema Hn(a, b) = Hn−1(a, Hn(a, b − 1)), with level-specific base cases Hn(a, 0) = a, 0 or 1 for n = 1, n = 2 and n ≥ 3 respectively.[4] This formulation derives each subsequent operation by repeated application of the prior level: for instance, multiplication arises from b additions of a to 0, and exponentiation from b multiplications of a starting from 1, with the recursion enforcing right-associativity to align with standard conventions like iterated exponentiation. The successor's role at level 0 ensures the entire hierarchy remains well-defined for all non-negative integers, bridging unary and binary operations coherently.[4]
Iterative Definition
The iterative definition of hyperoperations provides a constructive approach by defining each successive operation as the repeated application of the prior operation a finite number of times, enabling a bottom-up progression from basic successor functions to increasingly complex binary operations. This method emphasizes computability through explicit repetition, contrasting with more abstract recursive formulations. The general form defines the hyperoperation Hn+1(a, b) as the b-fold iteration of the function x ↦ Hn(a, x), applied to an initial base value that varies by level to ensure consistency with arithmetic identities.[5][6] For multiplication, H2(a, b) arises from iterating addition starting from 0, performing b additions of a; the base case H2(a, 0) = 0 reflects multiplication by zero. Exponentiation builds similarly by iterating multiplication starting from 1, with b multiplications of a; here the base case is H3(a, 0) = 1, aligning with the convention that any nonzero number raised to the power 0 equals 1. Tetration extends this by iterating exponentiation, effectively stacking b copies of a in a right-associative power tower, starting from a single a and applying the operation b − 1 times to build the height. The base case H4(a, 0) = 1 maintains consistency for higher levels.[6][5] Base cases in the iterative definition adjust by operation level to preserve structural properties: for addition, H1(a, 0) = a, returning the first argument as the additive identity; for multiplication and higher, they shift to 0 or 1 as appropriate to match established arithmetic behaviors. This level-dependent handling ensures the sequence integrates seamlessly with natural number foundations.[5] This iterative construction mirrors the progression in Peano arithmetic, where the successor function iteratively generates natural numbers, addition iterates the successor on the second argument, and multiplication iterates addition, all within finite steps to guarantee primitive recursive computability without invoking direct self-reference.[7]
Hyperoperation Sequence
Addition and Multiplication
In the sequence of hyperoperations, the first binary operation, denoted H1, corresponds to addition, defined as H1(a, b) = a + b for natural numbers a and b. Note: indexing conventions vary; here we follow the standard where H1 is addition, consistent with Goodstein and common literature. Within the framework of Peano arithmetic, addition is constructed as the repeated application of the successor function b times to a, where the successor yields the next natural number.[8] This iterative view aligns addition with the foundational structure of the natural numbers, starting from 0 and building through successors.[9] Addition exhibits key algebraic properties that underpin its role in arithmetic. It is commutative, satisfying a + b = b + a for all natural numbers a and b, a result provable by induction on b.[8] Similarly, addition is associative: (a + b) + c = a + (b + c), which follows from inductive arguments establishing compatibility with the recursive definition.[8] These properties ensure that the natural numbers form a commutative monoid under addition with identity 0.[9] The second hyperoperation, multiplication, is denoted H2 and arises as the repeated addition of a, b times. For instance, a · 3 = a + a + a, illustrating multiplication as b copies of a summed together.[9] Formally, it is defined recursively: a · 0 = 0 and a · (b + 1) = a · b + a, enabling computation via induction.[8] Multiplication inherits and extends the structure of addition with its own properties. It is commutative: a · b = b · a, provable by induction using the recursive definition and commutativity of addition.[8] Associativity holds: (a · b) · c = a · (b · c), again via inductive verification.[10] Crucially, multiplication distributes over addition from the right: (a + b) · c = a · c + b · c, and symmetrically from the left, establishing a semiring structure on the natural numbers.[10] Together, addition and multiplication serve as the foundational binary operations in the Peano axioms for natural numbers, where addition embodies repeated succession and multiplication repeated addition, forming the arithmetic core extended by higher hyperoperations.[9]
Exponentiation and Tetration
In the hyperoperation sequence, the third level, denoted H3, corresponds to exponentiation, where a^b represents the result of multiplying a by itself b times.[5] This operation builds directly on multiplication as its iterated form, with a^0 = 1 and a^(b + 1) = a · a^b.[6] Unlike addition and multiplication, exponentiation is neither commutative nor associative; specifically, it employs right-associativity, meaning expressions like a^b^c are evaluated as a^(b^c) rather than (a^b)^c.[11] This convention aligns with the recursive structure of hyperoperations and prevents ambiguity in chained exponents.[5] The fourth hyperoperation, H4, is tetration, introduced by Reuben Goodstein in 1947 as the natural extension of iterated exponentiation.[6] Denoted a ↑↑ b, it stacks b copies of a in an exponentiation tower, evaluated right-associatively: for example, 2 ↑↑ 3 = 2^(2^2) = 16.[12] The recursive definition is a ↑↑ 0 = 1 and a ↑↑ (b + 1) = a^(a ↑↑ b), producing finite but extraordinarily large values even for modest inputs.[5] A notable illustration is 3 ↑↑ 3 = 3^(3^3) = 3^27 = 7,625,597,484,987, highlighting tetration's role in generating numbers central to notations for extremely large quantities.[12] Regarding growth rates, exponentiation yields exponential increase, which manifests as linear behavior on a logarithmic scale, for instance log(a^b) = b · log(a).[6] In contrast, tetration escalates to superexponential growth through power towers, where each additional iteration vastly amplifies the result, far outpacing polynomials or single exponentials.[5] This rapid escalation underscores the non-commutativity and towering structure inherent to these mid-level hyperoperations, distinguishing them from the more gradual progressions of addition and multiplication.[11]
Pentation and Higher Operations
Pentation, the fifth operation in the hyperoperation sequence and denoted H5 (or a ↑↑↑ b), consists of iterated tetrations of the base a.[13] This builds directly on tetration as its foundational iteration, extending the hierarchy to produce numbers of unprecedented magnitude. For example, 2 ↑↑↑ 3 = 2 ↑↑ (2 ↑↑ 2) = 2 ↑↑ 4 = 65536, a power tower of four 2's. For n ≥ 4, the general hyperoperation Hn applies the previous level (Hn−1) iteratively b times to a, resulting in growth rates reminiscent of the Ackermann function, where each increment in n dramatically outpaces all preceding levels in rapidity. These operations exhibit strict right-associativity and are non-commutative for n ≥ 3, as Hn(a, b) ≠ Hn(b, a) in general.[5] Moreover, for fixed a ≥ 2, the sequence Hn(a, b) diverges to infinity as b increases, with the rate accelerating hyper-exponentially at each step.[13] While each fixed-level hyperoperation is primitive recursive, the growth rates increase so rapidly with n that diagonalizations over the hierarchy, such as the Ackermann function, escape the bounds of primitive recursion. This rapid ascent underscores their role in exploring the limits of recursive function theory and ordinal arithmetic.[5]
Notations
Knuth's Up-Arrow Notation
Knuth's up-arrow notation provides a concise symbolic representation for hyperoperations, extending beyond standard arithmetic to denote extremely large integers through iterated exponentiation. Introduced by Donald Knuth in 1976, this notation uses a sequence of upward-pointing arrows between two operands a and b (where a and b are positive integers) to specify the level of the hyperoperation.[14] The notation begins with a single arrow, where a ↑ b = a^b, equivalent to the standard power function or the third hyperoperation H3(a, b). Adding arrows increases the operation's hierarchy: a ↑↑ b denotes tetration, the repeated exponentiation of a to height b, corresponding to H4(a, b); for instance, 2 ↑↑ 3 = 2^(2^2) = 16 and 3 ↑↑ 2 = 3^3 = 27. A triple arrow represents pentation, a ↑↑↑ b = H5(a, b), which iterates tetration b times. In general, a ↑k b (with k arrows) defines the hyperoperation of rank k + 2, denoted Hk+2(a, b), where evaluation builds nested structures from higher to lower levels.[15][14] Operations in this notation are right-associative, meaning expressions are evaluated from right to left to form power towers. For example, 2 ↑ 2 ↑ 3 = 2 ↑ (2 ↑ 3) = 2^8 = 256. This associativity ensures a unique interpretation for chains of arrows, avoiding ambiguity in complex expressions.[15] The primary advantage of Knuth's up-arrow notation lies in its compactness for expressing numbers of immense scale that would otherwise require verbose recursive definitions, making it invaluable for theoretical discussions of growth rates and large integers. It is extensively employed in googology, the mathematical study of extraordinarily large numbers, to define and compare constructs like Graham's number, which uses up-arrows to bound Ramsey theory results.[14][16]
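A minimal Python sketch of the recursion behind the notation (assuming positive integer arguments; feasible only for tiny values):

def up_arrow(a, k, b):
    # a ↑^k b with k >= 1 arrows; equals H(k+2)(a, b) in hyperoperation indexing
    if k == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, k - 1, up_arrow(a, k, b - 1))

print(up_arrow(2, 2, 3))  # 2 ↑↑ 3 = 16
print(up_arrow(3, 2, 2))  # 3 ↑↑ 2 = 27
print(up_arrow(2, 3, 3))  # 2 ↑↑↑ 3 = 65536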
Conway's Chain Arrow Notation
Conway's chained arrow notation, introduced by mathematicians John Horton Conway and Richard K. Guy in their 1996 book The Book of Numbers, provides a compact way to express hyperoperations beyond binary forms, particularly suited for combinatorial problems involving extremely large numbers.[17][18] This notation uses sequences of positive integers connected by right-pointing arrows (→), allowing chains of arbitrary length to denote iterated operations that grow far faster than standard exponentiation.[19] The notation begins with the simplest chain: for two terms, a → b = a^b, which corresponds to standard exponentiation.[19] A three-term chain a → b → n represents the n-arrow hyperoperation applied to a and b, equivalent to a ↑n b in Knuth's up-arrow notation, where the number of arrows indicates the level of iteration (e.g., one arrow for exponentiation, two for tetration).[17] Longer chains, such as a → b → c → d, extend this further by nesting operations recursively.[19] Evaluation proceeds right-associatively, reducing the chain length one step at a time according to specific rules: a chain ending in 1 evaluates to the value of the chain with the trailing 1 removed; otherwise, it iterates the preceding subchain the specified number of times.[19] For instance, 2 → 3 → 2 = 2 ↑↑ 3 = 16, demonstrating the rapid growth even at low levels.[19] Unlike notations limited to fixed binary operations, Conway's system handles longer chains and higher-order iterations naturally, making it ideal for expressing bounds in combinatorial contexts like Ramsey theory.[18] It is used to define extremely large numbers, such as the Graham-Conway number, which vastly exceeds Graham's number.[19][20] Knuth's up-arrow notation appears as a special case for chains of length two or three.[17]
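The reduction rules can be written as a short Python sketch (the standard chain rules, usable only for the very smallest cases):

def conway(chain):
    # evaluate a Conway chain given as a list of positive integers
    if len(chain) == 1:
        return chain[0]
    if len(chain) == 2:
        return chain[0] ** chain[1]
    *x, p, q = chain
    if q == 1:                 # a chain ending in 1 drops the trailing 1
        return conway(x + [p])
    if p == 1:                 # X -> 1 -> q reduces to X
        return conway(x)
    return conway(x + [conway(x + [p - 1, q]), q - 1])

print(conway([2, 3, 2]))     # 2 -> 3 -> 2 = 2 ↑↑ 3 = 16
print(conway([3, 3, 2]))     # 3 -> 3 -> 2 = 3 ↑↑ 3 = 7625597484987
print(conway([2, 2, 2, 2]))  # chains of 2s collapse quickly: 4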
Computation
Recursion Schemes
Hyperoperations are defined recursively, with higher levels built by iterating lower-level operations. The standard right-recursive scheme iterates on the second argument b: Hn(a, b) = Hn−1(a, Hn(a, b − 1)), with base cases depending on n: H0(a, b) = b + 1, H1(a, 0) = a, H2(a, 0) = 0, and Hn(a, 0) = 1 for n ≥ 3. A left-recursive variant (the lower hyperoperations) iterates on the first argument instead, with appropriate base cases at low levels. These schemes terminate after finitely many steps for each fixed level and argument, though the overall number of steps grows tower-like with n.
Algorithmic Implementation
Hyperoperations can be computed for small arguments using recursive functions mirroring the definitions, optionally with memoization to avoid redundant calculations. The following Python function implements the standard scheme (with H0: successor, H1: addition, H2: multiplication, H3: exponentiation, etc.):

def hyper(a, n, b):
    if n == 0:
        return b + 1
    if b == 0:
        if n == 1:
            return a
        elif n == 2:
            return 0
        else:
            return 1
    if n == 1:
        return a + b
    if n == 2:
        return a * b
    if n == 3:
        return a ** b  # exponentiation (Python's built-in ** on arbitrary-precision integers)
    return hyper(a, n - 1, hyper(a, n, b - 1))
Python's arbitrary-precision int supports this in principle, but time and memory limits are reached quickly once n ≥ 4. Libraries like GMP enable larger computations in other languages. The Ackermann function, which diagonalizes over the hyperoperations, exemplifies non-primitive-recursive computation via similar recursion.[21]
