Iterated logarithm
from Wikipedia
Figure 1. Demonstrating log* 4 = 2 for the base-e iterated logarithm. The value of the iterated logarithm can be found by "zig-zagging" on the curve y = log_b(x) from the input n to the interval [0, 1]. In this case, b = e. The zig-zagging entails starting from the point (n, 0) and iteratively moving to (n, log_b(n)), to (0, log_b(n)), to (log_b(n), 0).

In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1.[1] The simplest formal definition is the result of this recurrence relation:

$$\log^* n := \begin{cases} 0 & \text{if } n \leq 1, \\ 1 + \log^*(\log n) & \text{if } n > 1. \end{cases}$$

In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base 2) instead of the natural logarithm (with base e). Mathematically, the iterated logarithm is well defined for any base greater than $e^{1/e} \approx 1.444$, not only for base 2 and base e. The "super-logarithm" function is "essentially equivalent" to the base-b iterated logarithm (although differing in minor details of rounding) and forms an inverse to the operation of tetration.[2]

Analysis of algorithms

The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as:

- Finding the Delaunay triangulation of a set of points knowing the Euclidean minimum spanning tree, in randomized O(n log* n) time.
- Fürer's algorithm for integer multiplication, with time complexity n log n 2^O(lg* n).
- Richard Cole and Uzi Vishkin's distributed algorithm for 3-coloring an n-cycle, in O(log* n) synchronous communication rounds.

The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself, or repeats of it. This is because tetration, $^{y}b = \underbrace{b^{b^{\cdot^{\cdot^{b}}}}}_{y\text{ copies of }b}$, grows much faster as a function of its height $y$ than any fixed iterate of the exponential, so correspondingly the inverse grows much slower than any fixed iterate of the logarithm: for every constant $k \geq 1$, $\log_b^* x = o(\log_b^{(k)} x)$, where $\log_b^{(k)}$ denotes $k$ applications of $\log_b$.

For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5.

The base-2 iterated logarithm

x                     lg* x
(−∞, 1]               0
(1, 2]                1
(2, 4]                2
(4, 16]               3
(16, 65536]           4
(65536, 2^65536]      5

Higher bases give smaller iterated logarithms.
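
As a sanity check on the table above and on the effect of the base, here is a minimal Python sketch (the helper `iterations` is ours, not a standard library function; base 2 uses `math.log2` to avoid floating-point drift on exact powers of two):

import math

def iterations(n, base):
    # Count applications of log_base until the value drops to 1 or below.
    count = 0
    while n > 1:
        n = math.log2(n) if base == 2 else math.log(n, base)
        count += 1
    return count

print(iterations(65536, 2))       # 4: 65536 -> 16 -> 4 -> 2 -> 1
# A higher base never needs more iterations, and often needs fewer:
print(iterations(65536, math.e))  # 3
print(iterations(65536, 16))      # 2: log_16(65536) = 4, log_16(4) = 0.5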

Other applications

The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times one must replace the number by the sum of its digits before reaching its digital root, is O(log* n).
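
As a concrete illustration of additive persistence, a minimal Python sketch (function names are ours):

def digit_sum(n):
    # Sum of the decimal digits of a nonnegative integer.
    return sum(int(d) for d in str(n))

def additive_persistence(n):
    # Number of digit-sum steps needed to reach a single digit (the digital root).
    steps = 0
    while n >= 10:
        n = digit_sum(n)
        steps += 1
    return steps

print(additive_persistence(199))  # 3 steps: 199 -> 19 -> 10 -> 1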

In computational complexity theory, Santhanam[6] shows that the computational resources DTIME (computation time for a deterministic Turing machine) and NTIME (computation time for a non-deterministic Turing machine) are distinct up to $n\sqrt{\log^* n}$.

from Grokipedia
The iterated logarithm, commonly denoted $\log^*_b x$ (or simply $\log^* x$ for base 2 or $e$), is a function in mathematics and computer science that quantifies the minimum number of iterations required to apply the base-$b$ logarithm to a positive $x > 1$ until the result is less than or equal to 1. Formally, it is defined recursively: $\log^*_b x = 0$ if $x \leq 1$, and $\log^*_b x = 1 + \log^*_b(\log_b x)$ otherwise, making it a discrete inverse of tetration, or a super-logarithm, in the hierarchy of fast-growing functions. This function exhibits extraordinarily slow growth, increasing by 1 only after the argument surpasses an exponential tower of 2's whose height equals the previous value; for example, $\log^*_2 2 = 1$, $\log^*_2 4 = 2$, $\log^*_2 16 = 3$, $\log^*_2 65536 = 4$, and $\log^*_2 2^{65536} = 5$. Such sluggish growth renders $\log^* n$ effectively constant (at most 5) for all practical computational inputs up to $n \approx 10^{10^{10^{10^{10}}}}$, distinguishing it from standard logarithms or polylogarithms in asymptotic analysis. In computer science, the iterated logarithm prominently features in the amortized analysis of the union-find data structure (disjoint-set union), where optimizations like union by rank and path compression achieve nearly linear performance bounded by the inverse Ackermann function $\alpha(n)$, which grows even more slowly than $\log^* n$, enabling efficient handling of connectivity problems in graphs and other applications. In mathematics, it arises in analysis and number theory, including in bounds at very fine asymptotic scales, though it is distinct from the probabilistic law of the iterated logarithm, which describes fluctuation limits in random processes.

Definition and notation

Formal definition

The iterated logarithm of a positive number $n$, commonly denoted $\log^* n$ and read as "log star of $n$," counts the number of times the logarithm function must be iteratively applied to $n$ before the result is less than or equal to 1. This process begins with $n$ and repeatedly takes the logarithm until the value drops to or below 1, with the total count of applications yielding $\log^* n$. The standard recursive formulation in computer science uses the binary logarithm $\lg n = \log_2 n$ and is given by:

$$\lg^* n = \begin{cases} 0 & \text{if } n \leq 1, \\ 1 + \lg^*(\lg n) & \text{if } n > 1. \end{cases}$$

For $n \leq 0$, the function is typically undefined, as the logarithm is not defined for non-positive arguments in the real numbers. When $0 < n \leq 1$, $\lg^* n = 0$ by the base case, since no applications are needed. For $1 < n \leq 2$, $\lg n \leq 1$, so $\lg^* n = 1 + \lg^*(\lg n) = 1 + 0 = 1$. This definition generalizes to an arbitrary base $b > 1$, denoted $\log_b^* n$, by replacing the binary logarithm with $\log_b n$ in the recurrence:

$$\log_b^* n = \begin{cases} 0 & \text{if } n \leq 1, \\ 1 + \log_b^*(\log_b n) & \text{if } n > 1. \end{cases}$$

The iteration proceeds similarly, counting the applications of $\log_b$ until the result is at most 1, with the same handling for base cases and domain restrictions.
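
The case definition above translates directly into code. A minimal recursive Python sketch (the name `log_star` is ours), written for an arbitrary base $b > 1$:

import math

def log_star(n, b=2.0):
    # Direct transcription of the case definition:
    # 0 if n <= 1, else 1 + log_star(log_b(n)).
    if n <= 1:
        return 0
    log = math.log2(n) if b == 2 else math.log(n, b)
    return 1 + log_star(log, b)

print(log_star(1))   # 0 (base case: no applications needed)
print(log_star(2))   # 1, since lg 2 = 1 <= 1
print(log_star(16))  # 3: 16 -> 4 -> 2 -> 1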

Notations and conventions

The iterated logarithm is commonly denoted as $\log^* n$ (read as "log star of $n$"), where the asterisk indicates iteration of the logarithm function until the result is at most 1. In computer-science contexts, the variant $\lg^* n$ is frequently used to specify the base-2 logarithm, reflecting the binary nature of computational analyses. In mathematics, $\log^* n$ often defaults to the natural logarithm (base $e$). The choice of base follows contextual conventions: base 2 is the default in the algorithms literature due to its alignment with binary representations and algorithmic efficiencies, while base $e$ predominates in theoretical analysis for its compatibility with continuous functions and limits. When the base must be explicitly stated, the notation $\log_b^* n$ is used, where $b > 1$ is the base.

These notations emerged in the 1970s within the analysis of algorithms, notably in Robert Tarjan's analysis of union-find algorithms, where the iterated logarithm provided tight bounds on operation complexities. Earlier mathematical texts occasionally used variations such as repeated explicit logs (e.g., $\log \log n$), but the compact $\log^* n$ form standardized in algorithmic contexts post-Tarjan. The function is typically defined for positive real numbers $n > 0$, with $\log^* n = 0$ when $n \leq 1$, as no iterations are needed to reach the threshold. For non-integer values, the definition extends naturally via the real logarithm, provided $n > 0$. Negative values and complex extensions are not standard, as the logarithm is undefined for non-positive reals in this context, restricting applications to positive inputs.

Properties

Computational values

The iterated logarithm $\log^*_2 n$, assuming the binary logarithm base, can be computed by repeatedly applying the logarithm until the result is at most 1, counting the number of applications required. For example, consider $n = 16$: $\log_2 16 = 4 > 1$, $\log_2 4 = 2 > 1$, $\log_2 2 = 1 \leq 1$, requiring three applications, so $\log^*_2 16 = 3$. The following table illustrates $\log^*_2 n$ for select values, particularly powers of 2 expressed in Knuth's up-arrow notation, highlighting the extremely slow growth:
n                 Expression in up-arrow notation    $\log^*_2 n$
1                 -                                  0
2                 -                                  1
4                 $2 \uparrow\uparrow 2$             2
16                $2 \uparrow\uparrow 3$             3
65,536            $2 \uparrow\uparrow 4$             4
$2^{65{,}536}$    $2 \uparrow\uparrow 5$             5
A practical routine to compute $\log^*_2 n$ uses an iterative loop, as shown in the following Python sketch:

from math import log2

def log_star_2(n):
    # Count how many times the binary logarithm must be applied
    # before the value drops to 1 or below.
    if n <= 1:
        return 0
    count = 0
    while n > 1:
        n = log2(n)
        count += 1
    return count


This procedure is computationally trivial, with the loop iterating at most five times for any $n$ encountered in real-world applications, owing to the function's minimal growth. For all practical purposes, including values of $n$ on the scale of the particle count of the observable universe (approximately $10^{80}$, or roughly $2^{266}$), $\log^*_2 n \leq 5$.

Asymptotic growth

The iterated logarithm exhibits an extraordinarily slow rate of asymptotic growth, rendering it effectively constant for all computationally relevant values of $n$. It increases by 1 only when $n$ surpasses an exponential tower whose height corresponds to the prior value, such as reaching 5 for $n > 65536 = 2 \uparrow\uparrow 4$ and remaining 5 until $n = 2^{65536} = 2 \uparrow\uparrow 5$. This behavior implies that $\log^* n = \Theta(\log^* \log n)$, as applying the logarithm once more decreases the iteration count by exactly 1 for $n > 1$. A concrete illustration of this sluggishness is the bound $\log^* n \leq 5$ for all $n \leq 2^{65536}$, beyond which it reaches 6 but stays there for numbers vastly exceeding any physical scale. In general, $\log^* n \leq \frac{\log \log n}{\log \log \log n} + O(1)$, offering a simplified upper estimate that underscores its sub-polylogarithmic pace while remaining unbounded as $n \to \infty$.

Compared to other logarithmic functions, the iterated logarithm grows asymptotically slower than any fixed number of iterations of the plain logarithm; specifically, for any constant $k \geq 1$, $\log^* n = o(\log^{(k)} n)$, where $\log^{(k)} n$ denotes the $k$-fold iterated logarithm. Thus, it advances far more gradually than $\log \log n$, which in turn lags behind $\log n$. The function is non-decreasing for $n > 1$, though its stepwise nature means it plateaus over exponentially expanding intervals before incrementing. For real arguments greater than 0, extensions using floor or ceiling operations preserve monotonicity while approximating a continuous variant suitable for analysis.
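
The $\Theta(\log^* \log n)$ relation follows in one line from the defining recurrence (a quick derivation, stated here for base 2):

$$\lg^* n = 1 + \lg^*(\lg n) \text{ for } n > 1 \quad\Longrightarrow\quad \lg^*(\lg n) = \lg^* n - 1 = \Theta(\lg^* n).$$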

Applications in algorithms

Union-find data structure

The union-find data structure, also known as the disjoint-set union structure, maintains a collection of disjoint sets supporting two primary operations: find, which determines the representative (root) of the set containing a given element, and union, which merges two sets into one. In a naive implementation using singly linked lists, each operation takes O(n) time in the worst case, where n is the number of elements, due to potential linear traversals along degenerate chains or trees formed during repeated unions. To improve efficiency, the structure employs two heuristics: union by rank, which attaches the shorter tree to the root of the taller one during merges (using approximate tree heights as ranks), and path compression, which flattens the tree by setting each node along the find path directly to the root. These optimizations ensure that the amortized time for a sequence of m operations on n elements is O(α(m, n)), where α is a functional inverse of the Ackermann function. This bound was established in Robert Tarjan's seminal 1975 analysis, which provided the first tight worst-case amortized characterization for the structure, demonstrating that α(m, n) grows slower than any iterated logarithm and serves as an effectively constant factor for all practical purposes. The inverse Ackermann function grows even more slowly than the iterated logarithm.

Tarjan's proof uses an amortized analysis via a potential function that accounts for the "level" of each node based on ranks and path lengths, charging the cost of operations to changes in this potential. Specifically, union by rank limits rank increases to O(log n) levels, while path compression reduces path lengths; each find operation traverses at most α(n) "effective" levels before compression, as higher levels are rarely created due to the slow growth of the Ackermann hierarchy. The total potential change over m operations bounds the amortized cost per operation by O(α(n)), with the inverse Ackermann emerging as the maximum depth traversable in the rank structure. Modern simplifications, such as those using subtree-size potentials, confirm this by showing that path halvings or compressions occur at most O(α(n)) times per node across all operations. In practice, α(n) ≤ 4 for n up to $2^{2^{65536}}$ (a number vastly exceeding the atoms in the observable universe), making union-find operations effectively constant time even for astronomical scales.
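
Both heuristics fit in a few lines. The following Python sketch is a standard textbook formulation (not Tarjan's original presentation):

class UnionFind:
    # Disjoint-set forest with union by rank and path compression.
    # Amortized cost per operation is O(alpha(n)), effectively constant.

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n           # rank upper-bounds tree height

    def find(self, x):
        # Path compression: point every node on the path directly at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shorter tree under the taller one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

uf = UnionFind(10)
uf.union(1, 2)
uf.union(2, 3)
print(uf.find(1) == uf.find(3))  # True: 1 and 3 are now connected

Note that path compression never updates ranks, which is why ranks only upper-bound tree heights rather than track them exactly.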

Other data structures

The iterated logarithm plays a key role in analyzing decremental dynamic connectivity algorithms for sparse graphs, such as planar graphs, where updates involve only edge deletions. In such settings, an algorithm processes n updates in O(n log* n) total time while supporting connectivity queries in O(log* n) time, achieving nearly linear performance by recursively applying graph reductions and r-divisions with r = log² n. This bound improves upon prior O(n log n) results and leverages the slow growth of log* n to handle sparse structures efficiently without full recomputation.

In sorting and searching data structures, the iterated logarithm emerges in advanced variants that build upon basic O(log log n) operations. The van Emde Boas tree supports predecessor searches over a universe of size u in O(log log u) time using a recursive structure of square-root subuniverses. Parallel algorithms in the PRAM model use the iterated logarithm to bound the number of rounds in work-depth tradeoffs for problems like prefix sums, sorting, and related primitives. On a Sum-CRCW PRAM, prefix sums can be computed in O(log* n) time with optimal work O(n), by iteratively reducing the problem size through pointer doubling or coloring techniques that halve active elements per phase until convergence in log* n steps. Similarly, linear-time integer sorting achieves O(log* n) parallel time by partitioning keys into exponentially decreasing ranges, with log* n iterations to resolve dependencies in sparse input distributions. These bounds highlight log* n's utility in parallel processing, where it minimizes depth without excessive processors.

In geometric data structures, the iterated logarithm appears in update and query bounds for incremental constructions, such as orthogonal point enclosure in 3D, which supports insertions and reports intersections with axis-aligned rectangles in O(log n + k) query time using O(n log* n) space via shallow cuttings and interval trees. For incremental Delaunay triangulations in higher dimensions or dynamic variants, log* n factors arise in amortized analyses of conflict graphs or hull maintenance, yielding near-linear total time O(n log* n) for n point insertions while preserving the empty-circle property. These structures exploit log* n's near-constancy to enable practical scalability in sparse geometric inputs, such as scattered point sets.
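
As a toy contrast between the two growth rates mentioned above (not an implementation of any of the cited algorithms), repeated square roots model how a van Emde Boas recursion shrinks its universe, while repeated logarithms model log*:

import math

def sqrt_iterations(u):
    # Repeated square roots halve the exponent each step (vEB-style
    # universe shrinking): about log2(log2(u)) steps before u drops below 2.
    count = 0
    while u >= 2:
        u = math.sqrt(u)
        count += 1
    return count

def log_iterations(n):
    # Repeated base-2 logarithms before n drops to 1 or below: log* n.
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

u = 2.0 ** 64
print(sqrt_iterations(u))  # 7: exponent 64 -> 32 -> 16 -> 8 -> 4 -> 2 -> 1 -> 0.5
print(log_iterations(u))   # 5: 2^64 -> 64 -> 6 -> 2.58.. -> 1.37.. -> 0.45..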

Inverse Ackermann function

The inverse Ackermann function, commonly denoted $\alpha(n)$, is defined as the smallest nonnegative $k$ such that $A(k) \geq n$, where $A(k)$ denotes the $k$-th value on the diagonal of the Ackermann function, often taken as $A(k, k)$. This definition arises in the analysis of algorithms to bound the growth of certain algorithmic complexities, where the full Ackermann function is too unwieldy for direct computation. In the context of the Ackermann hierarchy, the iterated logarithm $\log^* n$ corresponds to the inverse at the third level, roughly inverting the exponentiation stage $A(3, n) \approx 2^{n}$. The inverse Ackermann function inverts higher levels of the hierarchy, such as tetration and beyond, resulting in even slower growth; for instance, a computable variant defines a sequence $\alpha_k(n)$ where $\alpha_2(n) = \lceil \log_2 n \rceil$, $\alpha_3(n) = \log^* n$, and $\alpha(n) = \min \{ k \mid \alpha_k(n) \leq 3 \}$. Consequently, $\alpha(n) < \log^* n + 3$ for all $n \geq 1$, and $\alpha(n) \leq 4$ for all practical values of $n$ encountered in computing (e.g., up to $n = 2^{2^{65536}}$). This slower growth makes $\alpha(n)$ valuable for providing tighter asymptotic bounds in algorithm analysis compared to $\log^* n$. For example, in the union-find data structure with union by rank and path compression, the amortized time per operation is $\Theta(\alpha(n))$, which is stricter than the looser $O(\log^* n)$ bound.
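
To make the hierarchy concrete, here is a minimal Python sketch of the computable variant just described (the names `alpha_k` and `inverse_ackermann` are ours; the threshold 3 follows the definition above, and other variants choose slightly different conventions):

from math import ceil, log2

def alpha_k(k, n):
    # Level-k inverse in the hierarchy: alpha_2(n) = ceil(log2 n);
    # alpha_{k+1}(n) counts applications of alpha_k until the value is <= 1,
    # so alpha_3 coincides with log* (up to rounding).
    if n <= 1:
        return 0
    if k == 2:
        return ceil(log2(n))
    count = 0
    while n > 1:
        n = alpha_k(k - 1, n)
        count += 1
    return count

def inverse_ackermann(n):
    # alpha(n) = smallest k with alpha_k(n) <= 3, per the variant above.
    k = 2
    while alpha_k(k, n) > 3:
        k += 1
    return k

print(alpha_k(3, 65536))        # 4, matching log*_2(65536)
print(inverse_ackermann(65536)) # 4 under this variant's conventions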

Generalizations

The iterated logarithm function, typically defined for a fixed base, can be generalized to different bases $b > 1$, where the number of iterations required to reduce the argument below a threshold varies by at most a small constant depending on the ratio of the bases. For instance, the values of $\log^*_2 n$ and $\log^*_e n$ differ by at most 3 for all $n \geq 1$, since changing the base corresponds to multiplying each logarithm by the constant factor $\log_2 e$, which shifts the iteration count by a bounded amount.

A further extension involves applying the iterated logarithm componentwise to vectors in higher-dimensional spaces, often in the context of multivariate processes or higher-dimensional analytic estimates. In such settings, the function is computed using a norm, as $\log^*(\| \mathbf{v} \|)$, to measure the magnitude or growth in multi-dimensional analysis, appearing in generalizations of central limit theorems or discrepancy estimates for vector-valued sequences. For example, in multivariate laws of the iterated logarithm for empirical processes, iterated logarithms scale the fluctuations in vector norms.

Fractional or continuous versions of the iterated logarithm arise through functional iteration theory, where non-integer iterates are constructed by solving equations like the Schröder functional equation for the exponential function, the inverse of the logarithm. This allows defining $\log^{\alpha*} n$ for real $\alpha > 0$, interpolating between integer iterations, and is useful for analyzing continuous dynamical systems or solving equations involving partial iterations. Such constructions rely on embedding the logarithm into a flow of functions via its Abel function.

These generalizations find applications in number theory, where multiple iterated logarithms refine bounds on prime gaps; for instance, the largest gap between primes up to $x$ exceeds $c\,\frac{\log x \,\log \log x \,\log \log \log \log x}{\log \log \log x}$ for some constant $c > 0$, capturing finer asymptotic scales. In physics, iterated logarithms can model successive scaling transformations in renormalization-group analyses, leading to logarithmic corrections to correlation lengths, while certain aggregation and growth processes exhibit distributions with iterated-logarithmic tails.