Iterated logarithm
In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1.[1] The simplest formal definition is the result of this recurrence relation: log* n := 0 if n ≤ 1, and log* n := 1 + log*(log n) if n > 1.
In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base 2) instead of the natural logarithm (with base e). Mathematically, the iterated logarithm is well defined for any base greater than e^(1/e) ≈ 1.444, not only for base 2 and base e. The "super-logarithm" function is "essentially equivalent" to the iterated logarithm with the same base (although differing in minor details of rounding) and forms an inverse to the operation of tetration.[2]
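As a concrete sketch of this recurrence (an illustration, not part of the cited definition), a direct base-2 transcription in Python might look like:

    import math

    def log_star_2(n):
        # Base-2 iterated logarithm, following the recurrence directly:
        # 0 if n <= 1, otherwise 1 + log*(log2 n).
        if n <= 1:
            return 0
        return 1 + log_star_2(math.log2(n))

    print(log_star_2(16))    # 3, since 16 -> 4 -> 2 -> 1
    print(log_star_2(100))   # 4, since 100 -> 6.64 -> 2.73 -> 1.45 -> 0.54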
Analysis of algorithms
The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as:
- Finding the Delaunay triangulation of a set of points knowing the Euclidean minimum spanning tree: randomized O(n log* n) time.[3]
- Fürer's algorithm for integer multiplication: O(n log n · 2^(O(lg* n))).
- Finding an approximate maximum (element at least as large as the median): lg* n − 1 ± 3 parallel operations.[4]
- Richard Cole and Uzi Vishkin's distributed algorithm for 3-coloring an n-cycle: O(log* n) synchronous communication rounds.[5]
The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself, or repeats of it. This is because tetration grows much faster than iterated exponentiation, so its inverse, the iterated logarithm, grows much slower than repeated logarithms such as log log n or log log log n.
For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5.
| x | lg* x |
|---|---|
| (−∞, 1] | 0 |
| (1, 2] | 1 |
| (2, 4] | 2 |
| (4, 16] | 3 |
| (16, 65536] | 4 |
| (65536, 2^65536] | 5 |
Higher bases give smaller iterated logarithms.
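As an illustration of the table above, the following Python sketch (not from the cited sources) checks these values exactly, using integer bit lengths so that even 2^65536 can be handled without floating-point overflow:

    def log_star_2_exact(n):
        # Base-2 iterated logarithm for positive integers.
        # n.bit_length() - 1 equals floor(log2 n), which coincides with
        # log2 n exactly when n is a power of two, as in every step here.
        count = 0
        while n > 1:
            n = n.bit_length() - 1
            count += 1
        return count

    for n in [1, 2, 4, 16, 65536, 2**65536]:
        print(log_star_2_exact(n))   # 0, 1, 2, 3, 4, 5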
Other applications
The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times someone must replace the number by the sum of its digits before reaching its digital root, is O(log* n).
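For instance, a minimal Python sketch of additive persistence (the helper name additive_persistence is an illustrative choice, not taken from the sources):

    def additive_persistence(n):
        # Count how many times n must be replaced by its digit sum
        # before a single digit (its digital root) is reached.
        count = 0
        while n >= 10:
            n = sum(int(d) for d in str(n))
            count += 1
        return count

    print(additive_persistence(199))  # 3, since 199 -> 19 -> 10 -> 1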
In computational complexity theory, Santhanam[6] shows that the computational resources DTIME — computation time for a deterministic Turing machine — and NTIME — computation time for a non-deterministic Turing machine — are distinct up to n√(log* n).
See also
- Inverse Ackermann function, an even more slowly growing function also used in computational complexity theory
References
- ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. "The iterated logarithm function, in Section 3.2: Standard notations and common functions". Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. pp. 58–59. ISBN 0-262-03384-4.
- ^ Furuya, Isamu; Kida, Takuya (2019). "Compaction of Church numerals". Algorithms. 12 (8): 159. doi:10.3390/a12080159. hdl:2115/75613. MR 3998658.
- ^ Devillers, Olivier (March 1992). "Randomization yields simple algorithms for difficult problems" (PDF). International Journal of Computational Geometry & Applications. 2 (1): 97–111. arXiv:cs/9810007. doi:10.1142/S021819599200007X. MR 1159844. S2CID 60203.
- ^ Alon, Noga; Azar, Yossi (April 1989). "Finding an approximate maximum" (PDF). SIAM Journal on Computing. 18 (2): 258–267. doi:10.1137/0218017. MR 0986665.
- ^ Cole, Richard; Vishkin, Uzi (July 1986). "Deterministic coin tossing with applications to optimal parallel list ranking" (PDF). Information and Control. 70 (1): 32–53. doi:10.1016/S0019-9958(86)80023-7. MR 0853994.
- ^ Santhanam, Rahul (2001). "On separators, segregators and time versus space" (PDF). Proceedings of the 16th Annual IEEE Conference on Computational Complexity, Chicago, Illinois, USA, June 18-21, 2001. IEEE Computer Society. pp. 286–294. doi:10.1109/CCC.2001.933895. ISBN 0-7695-1053-1.
Iterated logarithm
Definition and notation
Formal definition
The iterated logarithm of a positive real number n, commonly denoted log* n and read as "log star of n," counts the number of times the logarithm function must be iteratively applied to n before the result is less than or equal to 1.[5] This process begins with n and repeatedly takes the logarithm until the value drops to or below 1, with the total count of applications yielding log* n.[5] The standard recursive formulation in computer science uses the binary logarithm and is given by: log* n := 0 if n ≤ 1, and log* n := 1 + log*(log2 n) if n > 1.[5] For n ≤ 0, the function is typically undefined, as the logarithm is not defined for non-positive arguments in the real numbers.[6] When n ≤ 1, log* n = 0 by the base case, since no applications are needed. For 1 < n ≤ 2, log2 n ≤ 1, so log* n = 1.[5] This definition generalizes to an arbitrary base b, denoted log*_b n, by replacing the binary logarithm with log_b in the recursion: log*_b n := 0 if n ≤ 1, and log*_b n := 1 + log*_b(log_b n) if n > 1.[6] The iteration proceeds similarly, counting the applications of log_b until the result is at most 1, with the same handling for base cases and domain restrictions.[6]
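As an illustration of the generalized recursion, here is a minimal Python sketch with the base b as a parameter (the function name log_star_base is an illustrative choice, not taken from the cited sources):

    import math

    def log_star_base(n, b=2.0):
        # Iterated logarithm of n to base b: 0 if n <= 1, otherwise
        # one more than the iterated logarithm of log_b(n).
        if n <= 1:
            return 0
        return 1 + log_star_base(math.log(n, b), b)

    print(log_star_base(100, 2))        # 4: 100 -> 6.64 -> 2.73 -> 1.45 -> 0.54
    print(log_star_base(100, math.e))   # 3: 100 -> 4.61 -> 1.53 -> 0.42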
Notations and conventions

The iterated logarithm is commonly denoted as log* n (read as "log star of n"), where the asterisk indicates iteration of the logarithm function until the result is at most 1. In computer science contexts, the variant lg* n is frequently used to specify the base-2 logarithm, reflecting the binary nature of computational analyses. In mathematical analysis, log* n often defaults to the natural logarithm (base e).[7] The choice of base follows contextual conventions: base 2 is the default in computer science literature due to its alignment with binary representations and algorithmic efficiencies, while base e predominates in theoretical analysis for its compatibility with continuous functions and limits. When the base must be explicitly stated, the notation log*_b n is used, where b is the base.[7] These notations emerged in the 1970s within computer science, notably introduced by Robert Tarjan in his analysis of union-find algorithms, where the iterated logarithm provided tight bounds on operation complexities.[8] Earlier mathematical texts occasionally used variations such as repeated explicit logarithms (e.g., log log log n), but the compact form standardized in algorithmic contexts post-Tarjan. The function is typically defined for positive real numbers n, with log* n = 0 when n ≤ 1, as no iterations are needed to reach the threshold. For non-integer values, the definition extends naturally via the real logarithm, provided n > 0. Negative values and complex extensions are not standard, as the logarithm is undefined for non-positive reals in this context, restricting applications to positive inputs.[7]

Properties
Computational values
The iterated logarithm log* n, assuming the binary logarithm base, can be computed by repeatedly applying the logarithm until the result is at most 1, counting the number of applications required. For example, consider n = 16: log2 16 = 4, log2 4 = 2, log2 2 = 1, requiring three applications, so log* 16 = 3.[9] The following table illustrates log* n for select values, particularly powers of 2 expressed in Knuth's up-arrow notation, highlighting the extremely slow growth:

| n | Expression in up-arrow notation | log* n |
|---|---|---|
| 1 | - | 0 |
| 2 | - | 1 |
| 4 | 2↑↑2 | 2 |
| 16 | 2↑↑3 | 3 |
| 65,536 | 2↑↑4 | 4 |
| 2^65536 | 2↑↑5 | 5 |
import math

def log_star_2(n):
    # Base-2 iterated logarithm: count how many times log2 must be
    # applied before the value drops to 1 or below.
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count