Regular number

from Wikipedia
A Hasse diagram of divisibility relationships among the regular numbers up to 400. The vertical scale is logarithmic.[1]

Regular numbers are numbers that evenly divide powers of 60 (or, equivalently, powers of 30). Equivalently, they are the numbers whose only prime divisors are 2, 3, and 5. As an example, 60^2 = 3600 = 48 × 75, so as divisors of a power of 60 both 48 and 75 are regular.

These numbers arise in several areas of mathematics and its applications, and have different names coming from their different areas of study.

  • In number theory, these numbers are called 5-smooth, because they can be characterized as having only 2, 3, or 5 as their prime factors. This is a specific case of the more general k-smooth numbers, the numbers that have no prime factor greater than k.
  • In the study of Babylonian mathematics, the divisors of powers of 60 are called regular numbers or regular sexagesimal numbers, and are of great importance in this area because of the sexagesimal (base 60) number system that the Babylonians used for writing their numbers, and that was central to Babylonian mathematics.
  • In music theory, regular numbers occur in the ratios of tones in five-limit just intonation. In connection with music theory and related theories of architecture, these numbers have been called the harmonic whole numbers.
  • In computer science, regular numbers are often called Hamming numbers, after Richard Hamming, who proposed the problem of finding computer algorithms for generating these numbers in ascending order. This problem has been used as a test case for functional programming.

Number theory


Formally, a regular number is an integer of the form 2^i·3^j·5^k, for nonnegative integers i, j, and k. Such a number is a divisor of 60^max(⌈i/2⌉, j, k). The regular numbers are also called 5-smooth, indicating that their greatest prime factor is at most 5.[2] More generally, a k-smooth number is a number whose greatest prime factor is at most k.[3]

The first few regular numbers are[2]

1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36, 40, 45, 48, 50, 54, 60, ... (sequence A051037 in the OEIS)

Several other sequences at the On-Line Encyclopedia of Integer Sequences have definitions involving 5-smooth numbers.[4]

Although the regular numbers appear dense within the range from 1 to 60, they are quite sparse among the larger integers. A regular number n = 2^i·3^j·5^k is less than or equal to a threshold N if and only if the point (i, j, k) belongs to the tetrahedron bounded by the coordinate planes and the plane i log 2 + j log 3 + k log 5 ≤ log N, as can be seen by taking logarithms of both sides of the inequality 2^i·3^j·5^k ≤ N. Therefore, the number of regular numbers that are at most N can be estimated as the volume of this tetrahedron, which is (log_2 N)(log_3 N)(log_5 N)/6. Even more precisely, using big O notation, the number of regular numbers up to N is (log(N√30))^3 / (6 log 2 · log 3 · log 5) + O(log N), and it has been conjectured that the error term of this approximation is actually O(log log N).[2] A similar formula for the number of 3-smooth numbers up to N is given by Srinivasa Ramanujan in his first letter to G. H. Hardy.[5]
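The tetrahedron estimate above can be checked numerically. The following sketch (the helper name `count_regular` is illustrative, not a standard routine) enumerates exponent triples directly and compares the exact count against the volume approximation:

```python
import math

def count_regular(limit):
    """Count regular numbers <= limit by enumerating exponents of 2, 3, 5."""
    count = 0
    p2 = 1
    while p2 <= limit:
        p23 = p2
        while p23 <= limit:
            p = p23
            while p <= limit:
                count += 1   # p = 2^a * 3^b * 5^c <= limit
                p *= 5
            p23 *= 3
        p2 *= 2
    return count

N = 10**6
exact = count_regular(N)
# Tetrahedron-volume estimate: (log_2 N)(log_3 N)(log_5 N)/6
estimate = math.log(N, 2) * math.log(N, 3) * math.log(N, 5) / 6
print(exact, round(estimate, 1))
```

The lower-order boundary terms are why the plain volume undercounts; the corrected formula with N√30 in the text tightens the estimate.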

Babylonian mathematics

AO 6456, a table of reciprocals of regular numbers from Seleucid Uruk, copied from an unknown earlier source

In the Babylonian sexagesimal notation, the reciprocal of a regular number has a finite representation. If n divides 60^k, then the sexagesimal representation of 1/n is just that for 60^k/n, shifted by some number of places. This allows for easy division by these numbers: to divide by n, multiply by 1/n, then shift.[6]

For instance, consider division by the regular number 54 = 2·3^3. 54 is a divisor of 60^3, and 60^3/54 = 4000, so dividing by 54 in sexagesimal can be accomplished by multiplying by 4000 and shifting three places. In sexagesimal 4000 = 1×3600 + 6×60 + 40×1, or (as listed by Joyce) 1:6:40. Thus, 1/54, in sexagesimal, is 1/60 + 6/60^2 + 40/60^3, also denoted 1:6:40 as Babylonian notational conventions did not specify the power of the starting digit. Conversely 1/4000 = 54/60^3, so division by 1:6:40 = 4000 can be accomplished by instead multiplying by 54 and shifting three sexagesimal places.
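The shift-and-multiply arithmetic above is easy to reproduce. This sketch (the helper name `sexagesimal_reciprocal` is hypothetical) extracts the base-60 digits of 1/n, which terminate exactly when n is regular:

```python
def sexagesimal_reciprocal(n, places=10):
    """Base-60 digits of 1/n; for regular n the expansion terminates."""
    digits = []
    r = 1
    for _ in range(places):
        r *= 60            # shift one sexagesimal place
        digits.append(r // n)
        r %= n
        if r == 0:         # expansion has terminated
            break
    return digits

print(sexagesimal_reciprocal(54))  # [1, 6, 40], i.e. 1:6:40
```

For a non-regular n such as 7, the digits would instead repeat forever, which is why the loop caps the number of places.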

The Babylonians used tables of reciprocals of regular numbers, some of which still survive.[7] These tables existed relatively unchanged throughout Babylonian times.[6] One tablet from Seleucid times, by someone named Inaqibıt-Anu, contains the reciprocals of 136 of the 231 six-place regular numbers whose first place is 1 or 2, listed in order. It also includes reciprocals of some numbers of more than six places, such as 3^23 (2 1 4 8 3 0 7 in sexagesimal), whose reciprocal has 17 sexagesimal digits. Noting the difficulty of both calculating these numbers and sorting them, Donald Knuth in 1972 hailed Inaqibıt-Anu as "the first man in history to solve a computational problem that takes longer than one second of time on a modern electronic computer!" (Two tables are also known giving approximations of reciprocals of non-regular numbers, one of which gives reciprocals for all the numbers from 56 to 80.)[8][9]

Although the primary reason for preferring regular numbers to other numbers involves the finiteness of their reciprocals, some Babylonian calculations other than reciprocals also involved regular numbers. For instance, tables of regular squares have been found[6] and the broken tablet Plimpton 322 has been interpreted by Neugebauer as listing Pythagorean triples generated by pairs p, q that are both regular and less than 60.[10] Fowler and Robson discuss the calculation of square roots, such as how the Babylonians found an approximation to the square root of 2, perhaps using regular number approximations of fractions such as 17/12.[9]

Music theory


In music theory, the just intonation of the diatonic scale involves regular numbers: the pitches in a single octave of this scale have frequencies proportional to the numbers in the sequence 24, 27, 30, 32, 36, 40, 45, 48 of nearly consecutive regular numbers.[11] Thus, for an instrument with this tuning, all pitches are regular-number harmonics of a single fundamental frequency. This scale is called a 5-limit tuning, meaning that the interval between any two pitches can be described as a product 2^i·3^j·5^k of powers of the prime numbers up to 5, or equivalently as a ratio of regular numbers.[12]

5-limit musical scales other than the familiar diatonic scale of Western music have also been used, both in traditional musics of other cultures and in modern experimental music: Honingh & Bod (2005) list 31 different 5-limit scales, drawn from a larger database of musical scales. Each of these 31 scales shares with diatonic just intonation the property that all intervals are ratios of regular numbers.[12] Euler's tonnetz provides a convenient graphical representation of the pitches in any 5-limit tuning, by factoring out the octave relationships (powers of two) so that the remaining values form a planar grid.[12] Some music theorists have stated more generally that regular numbers are fundamental to tonal music itself, and that pitch ratios based on primes larger than 5 cannot be consonant.[13] However the equal temperament of modern pianos is not a 5-limit tuning,[14] and some modern composers have experimented with tunings based on primes larger than five.[15]

In connection with the application of regular numbers to music theory, it is of interest to find pairs of regular numbers that differ by one. There are exactly ten such pairs and each such pair defines a superparticular ratio that is meaningful as a musical interval. These intervals are 2/1 (the octave), 3/2 (the perfect fifth), 4/3 (the perfect fourth), 5/4 (the just major third), 6/5 (the just minor third), 9/8 (the just major tone), 10/9 (the just minor tone), 16/15 (the just diatonic semitone), 25/24 (the just chromatic semitone), and 81/80 (the syntonic comma).[16]
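The ten pairs can be recovered by brute force from the definitions above. A minimal sketch, assuming a search bound of 1000 suffices (it does, since the largest pair is 80, 81):

```python
def regular_numbers(limit):
    """All regular numbers <= limit, in ascending order."""
    nums = []
    p2 = 1
    while p2 <= limit:
        p23 = p2
        while p23 <= limit:
            p = p23
            while p <= limit:
                nums.append(p)
                p *= 5
            p23 *= 3
        p2 *= 2
    return sorted(nums)

regs = set(regular_numbers(1000))
pairs = sorted((n, n + 1) for n in regs if n + 1 in regs)
print(pairs)
# Each pair (n, n+1) gives the superparticular interval (n+1)/n listed above.
```

That there are only finitely many such pairs, and that a finite search bound captures them all, is a consequence of Størmer's theorem on smooth consecutive integers.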

In the Renaissance theory of universal harmony, musical ratios were used in other applications, including the architecture of buildings. In connection with the analysis of these shared musical and architectural ratios, for instance in the architecture of Palladio, the regular numbers have also been called the harmonic whole numbers.[17]

Algorithms


Algorithms for calculating the regular numbers in ascending order were popularized by Edsger Dijkstra. Dijkstra (1976, 1981) attributes to Hamming the problem of building the infinite ascending sequence of all 5-smooth numbers; this problem is now known as Hamming's problem, and the numbers so generated are also called the Hamming numbers. Dijkstra's ideas to compute these numbers are the following:

  • The sequence of Hamming numbers begins with the number 1.
  • The remaining values in the sequence are of the form 2h, 3h, and 5h, where h is any Hamming number.
  • Therefore, the sequence H may be generated by outputting the value 1, and then merging the sequences 2H, 3H, and 5H.

This algorithm is often used to demonstrate the power of a lazy functional programming language, because (implicitly) concurrent efficient implementations, using a constant number of arithmetic operations per generated value, are easily constructed as described above. Similarly efficient strict functional or imperative sequential implementations are also possible whereas explicitly concurrent generative solutions might be non-trivial.[18]
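Dijkstra's merge formulation can be sketched in Python by translating the lazy self-referential definition directly, using heapq.merge to combine the three scaled copies of the stream. This recursive style is illustrative of the functional idiom rather than efficient for large indices:

```python
import heapq
from itertools import islice

def hamming():
    """Output 1, then lazily merge the streams 2*H, 3*H, 5*H,
    where H is the Hamming sequence itself (Dijkstra's scheme)."""
    def stream():
        yield 1
        merged = heapq.merge((2 * h for h in hamming()),
                             (3 * h for h in hamming()),
                             (5 * h for h in hamming()))
        last = 1
        for v in merged:
            if v != last:   # duplicates such as 6 = 2*3 = 3*2 are adjacent
                last = v
                yield v
    return stream()

print(list(islice(hamming(), 12)))  # [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16]
```

Because each scaled stream is sorted, the merge is sorted and duplicates land next to each other, so a single comparison with the previous value suffices to deduplicate.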

In the Python programming language, lazy functional code for generating regular numbers is used as one of the built-in tests for correctness of the language's implementation.[19]

A related problem, discussed by Knuth (1972), is to list all k-digit sexagesimal numbers in ascending order (see #Babylonian mathematics above). In algorithmic terms, this is equivalent to generating (in order) the subsequence of the infinite sequence of regular numbers with exactly k sexagesimal digits, that is, those lying between 60^(k−1) and 60^k.[8] See Gingerich (1965) for an early description of computer code that generates these numbers out of order and then sorts them;[20] Knuth describes an ad hoc algorithm, which he attributes to Bruins (1970), for generating the six-digit numbers more quickly, but one that does not generalize in a straightforward way to larger values of k.[8] Eppstein (2007) describes an algorithm for computing tables of this type in linear time for arbitrary values of k.[21]
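A simple way to reproduce such tables, sketched here without the linear-time machinery of Eppstein's algorithm, is to enumerate all regular numbers below the upper bound and keep those in the desired range:

```python
def regular_in_range(lo, hi):
    """Regular numbers n with lo <= n < hi, in ascending order."""
    out = []
    p2 = 1
    while p2 < hi:
        p23 = p2
        while p23 < hi:
            p = p23
            while p < hi:
                if p >= lo:
                    out.append(p)
                p *= 5
            p23 *= 3
        p2 *= 2
    return sorted(out)

# Regular numbers with exactly two sexagesimal digits: 60 <= n < 3600
two_place = regular_in_range(60, 3600)
print(len(two_place), two_place[:5])
```

The cost here is proportional to the count of all regular numbers below the upper bound, not just those in the output range, which is what the specialized algorithms improve on.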

Other applications


Heninger, Rains & Sloane (2006) show that, when n is a regular number divisible by 8, the generating function of an n-dimensional extremal even unimodular lattice is an nth power of a polynomial.[22]

As with other classes of smooth numbers, regular numbers are important as problem sizes in computer programs for performing the fast Fourier transform, a technique for analyzing the dominant frequencies of signals in time-varying data. For instance, the method of Temperton (1992) requires that the transform length be a regular number.[23]
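FFT libraries expose helpers for exactly this purpose (for example, SciPy's scipy.fft.next_fast_len). A minimal sketch of the underlying idea, rounding a target length up to the nearest regular number, might look like the following (the function name `next_regular` is hypothetical):

```python
def next_regular(target):
    """Smallest regular number >= target, e.g. for padding an FFT length."""
    if target <= 6:
        return max(target, 1)   # 1..6 are all regular
    best = float('inf')
    p5 = 1
    while p5 < best:
        p35 = p5
        while p35 < best:
            if p35 >= target:
                candidate = p35
            else:
                # round target / p35 up to the next power of two
                q = -(-target // p35)                   # ceiling division
                candidate = p35 * (1 << (q - 1).bit_length())
            best = min(best, candidate)
            p35 *= 3
        p5 *= 5
    return best

print(next_regular(97))  # 100
print(next_regular(7))   # 8
```

Iterating over powers of 5 and 3 and completing each candidate with a power of two keeps the search small, since only O((log n)^2) combinations need to be examined.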

Book VIII of Plato's Republic involves an allegory of marriage centered on the highly regular number 60^4 = 12,960,000 and its divisors (see Plato's number). Later scholars have invoked both Babylonian mathematics and music theory in an attempt to explain this passage.[24]

Certain species of bamboo release large numbers of seeds in synchrony (a process called masting) at intervals that have been estimated as regular numbers of years, with different intervals for different species, including examples with intervals of 10, 15, 16, 30, 32, 48, 60, and 120 years.[25] It has been hypothesized that the biological mechanism for timing and synchronizing this process lends itself to smooth numbers, and in particular in this case to 5-smooth numbers. Although the estimated masting intervals for some other species of bamboo are not regular numbers of years, this may be explainable as measurement error.[25]

from Grokipedia
In number theory, a regular number is a positive integer whose prime factorization consists solely of the primes 2, 3, and 5, expressible in the form 2^a × 3^b × 5^c where a, b, and c are non-negative integers.[1] These numbers are equivalently defined as 5-smooth numbers, meaning no prime factor exceeds 5. Regular numbers hold historical significance in ancient Babylonian mathematics, where the sexagesimal (base-60) numeral system was employed; since 60 factors as 2^2 × 3 × 5, the reciprocals of regular numbers terminate in finite sexagesimal expansions, facilitating division without long division.[1] Babylonian scribes compiled extensive reciprocal tables limited to regular numbers up to 3600 or higher, using them for practical computations in astronomy, land measurement, and commerce, while non-regular numbers (those with other prime factors like 7 or 11) required approximations or irregular methods.[2] In modern contexts, regular numbers are studied within the broader class of smooth numbers, which appear in analytic number theory, particularly in estimates of the distribution of integers with small prime factors, such as in the context of the Dickman-de Bruijn function that quantifies their density.
The infinite sequence of regular numbers in ascending order—beginning 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, ...—is generated multiplicatively and has applications in cryptography for assessing the smoothness of integers in factorization algorithms like the quadratic sieve.[3] In computer science, regular numbers are commonly termed Hamming numbers after Richard Hamming, who posed the challenge of efficiently generating the sequence using computer algorithms in the mid-20th century, highlighting issues in priority queue management and lazy evaluation.[4] This problem has become a classic benchmark for functional programming languages, demonstrating techniques like stream processing to avoid duplicates and ensure ordered output without exhaustive search.

Definition and Properties

Definition

Regular numbers are positive integers whose only prime factors are 2, 3, and 5, expressible in the form 2^a × 3^b × 5^c, where a, b, and c are non-negative integers.[5][6] These numbers form a subset of the natural numbers and are characterized by their limited prime factorization, making them particularly amenable to certain computational and historical contexts. Equivalently, regular numbers are the positive divisors of 60^k for some positive integer k, since 60 = 2^2 × 3 × 5 and increasing k allows for arbitrarily high exponents in the prime factorization.[7] Examples of regular numbers include 1 (2^0 × 3^0 × 5^0), 2, 3, 4 (2^2), 5, 6 (2 × 3), 8 (2^3), 9 (3^2), 10 (2 × 5), 12 (2^2 × 3), 15 (3 × 5), 16 (2^4), 18 (2 × 3^2), 20 (2^2 × 5), 24 (2^3 × 3), 25 (5^2), 27 (3^3), and 30 (2 × 3 × 5).[5] In number theory, regular numbers are synonymous with 5-smooth numbers, denoting integers with no prime factors exceeding 5.[5][8] In computer science, they are commonly referred to as Hamming numbers, a term originating from a problem posed by Richard Hamming on efficiently generating such numbers, which Edsger Dijkstra attributed to him in the 1976 book A Discipline of Programming.[9][10] Regular numbers hold historical significance in the Babylonian sexagesimal system, where their reciprocals have finite representations.[2]

Basic Properties

Regular numbers, being positive integers of the form 2^a 3^b 5^c with integers a, b, c ≥ 0, form a multiplicative semigroup under integer multiplication. The product of any two such numbers is again a regular number, as the exponents add while the prime factors remain restricted to 2, 3, and 5.[11] The density of regular numbers up to a real number x > 1 is given by the counting function Ψ(x, 5), which enumerates the nonnegative integer solutions (a, b, c) to 2^a 3^b 5^c ≤ x. Because log 2, log 3, and log 5 are linearly independent over the rationals, each regular number has a unique such representation, and Ψ(x, 5) equals the number of lattice points in the tetrahedron defined by a log 2 + b log 3 + c log 5 ≤ log x with a, b, c ≥ 0. The asymptotic approximation is Ψ(x, 5) ~ (log x)^3 / (6 log 2 · log 3 · log 5), the volume of this tetrahedron: its axis intercepts are log x / log p for p = 2, 3, 5, and the factor 1/6 = 1/3! is the volume of the standard simplex. Lower-order terms are O((log x)^2), so the leading term dominates for large x.[11] The regular numbers form a strictly increasing sequence h_1 = 1 < h_2 < h_3 < ⋯, ordered by magnitude. The sequence can be generated by starting with 1 and repeatedly appending the smallest value not yet listed among the multiples 2h_i, 3h_i, and 5h_i of earlier terms; unique factorization guarantees that each value arises from a single triple of exponents. This process yields the sequence 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, ...[11]

A subset of the regular numbers corresponds to the highly composite numbers built from the primes 2, 3, and 5. Specifically, those with non-increasing exponents a ≥ b ≥ c ≥ 0 in 2^a 3^b 5^c satisfy the condition, characteristic of highly composite numbers, that exponents do not increase with increasing primes. Examples include 1 (a = b = c = 0), 2 (1, 0, 0), 6 (1, 1, 0), 12 (2, 1, 0), and 60 (2, 1, 1). These form a proper subsequence of all regular numbers.[12] The nth regular number h_n is the smallest positive integer such that Ψ(h_n, 5) = n. Determining h_n thus requires inverting the counting function, typically by solving (log h_n)^3 / (6 log 2 · log 3 · log 5) ≈ n asymptotically, yielding log h_n ~ (6n log 2 · log 3 · log 5)^(1/3), though exact computation involves enumerating the lattice points up to the boundary. This inversion highlights the connection between the geometric volume and the ordering of the sequence.[11] These foundational properties underpin applications of regular numbers in number theory, such as addressing factorization challenges.[11]

Historical Development

Babylonian Mathematics

The Babylonians utilized a sexagesimal (base-60) positional numeral system in their arithmetic from the Old Babylonian period, approximately 2000–1600 BCE, which inherently supported fractional computations through the use of reciprocals. Regular numbers—natural numbers whose prime factors are exclusively 2, 3, and 5—played a central role as denominators for unit fractions, as their reciprocals possess finite expansions in sexagesimal notation, permitting exact representations without recurring or infinite digits. Scribes compiled extensive tables of these reciprocals to streamline division, a fundamental operation in their mathematical practices, transforming it into multiplication by the precomputed inverse. Clay tablets from this era, such as those cataloged in collections like the Yale Babylonian Collection, demonstrate the systematic documentation of reciprocals for regular numbers up to 81, covering all one- and two-digit values in the system. A prominent example is the Plimpton 322 tablet, dated to around 1800 BCE and housed at Columbia University, which employs reciprocals of regular numbers in generating tables of Pythagorean triples for astronomical and geometric purposes. In this artifact, parameters like the reciprocals of 48 and 60 (both regular) facilitate the derivation of side ratios in right triangles, underscoring their utility in precise mensuration tasks. Specific reciprocals illustrate the system's elegance: the inverse of 2 is expressed as 0;30 (equivalent to 30/60 or 1/2), that of 3 as 0;20 (20/60 or 1/3), and that of 5 as 0;12 (12/60 or 1/5), all terminating after a single sexagesimal place. These finite forms enabled scribes to perform exact arithmetic on fractions that would require infinite series in base-10 systems, such as 1/7, by approximating with nearby regular denominators when necessary. 
The adoption of regular numbers in the sexagesimal framework offered a significant advantage over other ancient numeral systems, like the Egyptian base-10, by allowing divisions by non-divisors of 60 to be managed through finite approximations or exact reciprocals, thus enhancing accuracy in fields like land measurement and celestial tracking.[13] This approach, rooted in the factorization of 60 as 2^2 × 3 × 5, minimized computational errors and supported complex problem-solving evident in surviving cuneiform records.

Modern Recognition

The rediscovery of regular numbers in Western mathematics gained momentum during the Renaissance, as scholars engaged with ancient Greek texts that transmitted Babylonian sexagesimal methods. Translations such as George of Trebizond's 1451 Latin version of Ptolemy's Almagest exposed mathematicians to sexagesimal computations, where regular numbers—those factorable solely by 2, 3, and 5—enabled finite fractional representations essential for astronomical tables. John Dee, a leading 16th-century figure, highlighted the value of these numerical systems in his preface to the 1570 English translation of Euclid's Elements, advocating their application in astronomy and navigation amid ongoing debates over decimal versus sexagesimal divisions.[14] In the 19th century, breakthroughs in cuneiform decipherment by Henry Rawlinson and Edward Hincks in the 1840s–1850s unlocked original Babylonian tablets, revealing extensive tables of reciprocals and multiplications centered on regular numbers. This direct evidence integrated the concept into European number theory, identifying them as 5-smooth numbers whose limited prime factors facilitated precise calculations. Their relevance emerged in Diophantine approximation studies, as exemplified by Joseph Liouville's 1844 work on transcendental numbers, where rationals with smooth denominators demonstrated exceptionally close approximations to irrationals, influencing subsequent theorems by Charles Hermite and others on the distribution and utility of such forms. The 20th century brought formalization through Otto Neugebauer's seminal analyses of Babylonian mathematics, where he coined "regular numbers" to describe their role in producing terminating sexagesimal expansions and enabling efficient reciprocal computations. Neugebauer's Quellen und Studien zur Geschichte der Mathematik (1935–1939) and The Exact Sciences in Antiquity (1957) established their foundational importance, drawing parallels to modern number-theoretic properties. 
Concurrently, in the mid-20th century, Richard Hamming posed the computational challenge of efficiently generating the ordered sequence of these numbers without redundancy or overflow. Edsger Dijkstra's structured solution in A Discipline of Programming (1976) popularized it as a benchmark for algorithmic elegance, with the numbers thereafter known as Hamming numbers in computer science; the problem persists in programming contests like the International Olympiad in Informatics.

Mathematical Applications

Number Theory

In number theory, regular numbers, also known as 5-smooth numbers, represent the extremal case within the broader theory of smooth numbers, where all prime factors are bounded by a fixed value; specifically, they consist solely of the primes 2, 3, and 5, making them the "smoothest" integers under this bound of 5.[11] This classification arises from the general study of y-smooth numbers, whose prime factors do not exceed y, and regular numbers exemplify the simplest multiplicative structure in this hierarchy, facilitating analysis in Diophantine problems and algorithmic efficiency.[11] The Dirichlet series associated with the indicator function of regular numbers connects directly to the Riemann zeta function through restricted Euler products. Formally, the generating function is given by
ζ(s, 5) = Σ_{n regular} n^(−s) = ∏_{p ∈ {2,3,5}} (1 − p^(−s))^(−1) = ζ(s) · ∏_{p > 5} (1 − p^(−s)),
for Re(s) > 1, where the partial product over the first three primes yields the sum over regular numbers, and the remaining product over larger primes isolates the 5-smooth contribution from ζ(s). Partial sums of this series, such as the summatory function Ψ(x, 5) counting regular numbers up to x, inherit asymptotic behavior from zeta's analytic properties. Regular numbers play a targeted role in integer factorization, particularly as candidate divisors in trial division for integers suspected to have only small prime factors. In such procedures, one tests divisibility by all regular numbers up to the square root of the target, leveraging their complete enumeration via powers of 2, 3, and 5 to efficiently identify factors when the number is itself 5-smooth or has a smooth cofactor.[15] This approach optimizes early stages of factorization algorithms, as the sparseness of regular numbers allows rapid sieving compared to general trial division over all integers.[16] For the general smooth-number counting function, analytic estimates give Ψ(x, y) ≈ x · ρ(log x / log y), where ρ is the Dickman-de Bruijn function; for the fixed bound y = 5, the lattice-point count described above yields the sharper polylogarithmic asymptotic.[17] Such estimates underpin refinements in analytic number theory, linking the sparse distribution of regular numbers to broader prime factorization patterns.[17]
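The Euler-product identity can be sanity-checked numerically. This sketch truncates the sum over regular numbers at a large bound and compares it with the three-prime product at s = 2; both should approach (4/3)(9/8)(25/24) = 1.5625:

```python
def zeta_regular(s, limit=10**9):
    """Partial sum of 1/n^s over regular numbers n <= limit."""
    total = 0.0
    p2 = 1
    while p2 <= limit:
        p23 = p2
        while p23 <= limit:
            p = p23
            while p <= limit:
                total += p ** -s
                p *= 5
            p23 *= 3
        p2 *= 2
    return total

s = 2
euler_product = 1.0
for p in (2, 3, 5):
    euler_product *= 1 / (1 - p ** -s)

print(zeta_regular(s), euler_product)  # both ≈ 1.5625
```

The truncation error is tiny because the discarded terms are at most 1/n^2 for n > 10^9, so the partial sum already agrees with the product to many decimal places.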

Music Theory

In music theory, regular numbers (products of the prime factors 2, 3, and 5) form the basis of consonant intervals derived from the harmonic series, where overtones produce simple frequency ratios that align with human perception of harmony. The octave corresponds to the ratio 2:1, the perfect fifth to 3:2, and the major third to 5:4, all of which are ratios of regular numbers and appear prominently in the lower partials of a vibrating string or air column. These ratios ensure minimal beating between simultaneously sounded tones, promoting a sense of purity and stability in chords, as opposed to intervals involving higher primes that introduce greater complexity and dissonance.[18][19] Just intonation scales are constructed as products of powers of 2, 3, and 5, limiting the prime factors to these three to maintain simple ratios and avoid the dissonant intervals arising from higher primes like 7 or 11, which yield more intricate fractions such as 7:4 or 11:8. This 5-limit approach allows for scales like the major scale with ratios 1:1, 9:8, 5:4, 4:3, 3:2, 5:3, 15:8, and 2:1, where each step preserves harmonic consonance without accumulating irregularities from extraneous factors. By restricting to regular numbers, musicians achieve intervals that closely match the natural overtones, enhancing the acoustic coherence of polyphonic music.[20][19] Historically, Pythagorean tuning relied solely on powers of 2 and 3 to generate a diatonic scale through stacked perfect fifths, producing pure octaves and fifths but imperfect thirds like the major third at 81:64. This system, while consonant in two-voice counterpoint, revealed limitations in triadic harmony, prompting extensions in meantone temperaments that incorporated the prime 5 to refine major and minor thirds to 5:4 and 6:5, respectively.
Such adjustments, popularized in Renaissance keyboard practice, improved chordal sweetness at the expense of slight fifth tempering.[21][19] The mathematical foundation for these applications lies in the simplicity of regular number ratios, whose small numerators and denominators foster acoustic consonance through low beat frequencies. A key irregularity in Pythagorean tuning, the syntonic comma of 81:80 (the discrepancy between the Pythagorean major third 81:64 and the just major third 5:4), arises from excluding 5, leading to a detuning of about 21.5 cents that disrupts triads. Including 5 resolves this comma by aligning ratios more closely with the harmonic series, enabling more stable harmonic progressions in just intonation systems.[20][21]
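The comma arithmetic above is easy to verify in cents, where an interval's size is 1200 times the base-2 logarithm of its frequency ratio. A short sketch:

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 * log2(ratio))."""
    return 1200 * math.log2(ratio)

pythagorean_third = 81 / 64
just_third = 5 / 4
comma = pythagorean_third / just_third     # 81/80, the syntonic comma

print(round(cents(pythagorean_third), 1))  # 407.8
print(round(cents(just_third), 1))         # 386.3
print(round(cents(comma), 1))              # 21.5
```

The difference of about 21.5 cents is the detuning the text describes, roughly a fifth of an equal-tempered semitone (100 cents).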

Computational Aspects

Algorithms for Generation

One straightforward method for generating regular numbers is to enumerate all non-negative integer exponents a, b, and c such that 2^a × 3^b × 5^c does not exceed a predefined upper bound, collect these products, sort them in ascending order, and remove any duplicates. Appropriate bounds for the exponents follow from logarithms: a ≤ log_2 L, b ≤ log_3 L, and c ≤ log_5 L, where L is the limit, leading to approximately O((log L)^3) candidate numbers to generate and sort. To generate the ordered sequence of regular numbers efficiently without an explicit limit (known as Hamming's problem, posed by Richard Hamming for computing the 1000th such number), a more sophisticated approach uses a priority queue (min-heap) to produce terms incrementally while avoiding duplicates. This method, popularized by Edsger W. Dijkstra in his manuscript EWD 792, leverages the recursive property that every regular number greater than 1 is obtained by multiplying a smaller regular number by 2, 3, or 5. It initializes the queue with 1 and iteratively extracts the minimum value, generates its multiples by 2, 3, and 5, and inserts those not previously seen back into the queue; a set of seen values handles duplicate removal. The resulting sequence is strictly increasing by construction.[22] The following code illustrates a heap-based implementation in Python for generating the first n regular numbers:
import heapq

def generate_regular_numbers(n):
    """Return the first n regular (Hamming) numbers in ascending order."""
    if n == 0:
        return []
    heap = [1]         # min-heap of candidates not yet output
    seen = set([1])    # every value ever pushed, to suppress duplicates
    result = []
    while len(result) < n:
        x = heapq.heappop(heap)   # smallest candidate is the next term
        result.append(x)
        for factor in [2, 3, 5]:
            y = x * factor
            if y not in seen:
                seen.add(y)
                heapq.heappush(heap, y)
    return result
This algorithm ensures each regular number is generated exactly once and has a time complexity of O(n log n) for the first n terms, as each of the O(n) insertions and extractions from the heap takes O(log n) time; this is superior to the O(n^2) complexity of a naive brute-force enumeration without optimized bounds or merging.[23] An alternative dynamic programming formulation maintains separate indices for multiples of 2, 3, and 5 into the list of previously generated regular numbers, combining them as in a multi-way merge. Starting with an array containing 1, indices i2 = i3 = i5 = 0 track the next candidates: the next term is the minimum of h[i2] × 2, h[i3] × 3, and h[i5] × 5, where h is the growing array of regular numbers; every index whose multiple equals the selected value is then advanced, which prevents duplicates. This achieves O(n) time overall, as each term requires constant-time operations.[9]
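The three-pointer formulation can be sketched as follows; note that each pointer is advanced whenever its multiple equals the selected value, which is what suppresses duplicates such as 6 = 2 × 3 = 3 × 2:

```python
def hamming_dp(n):
    """First n regular numbers via the three-pointer merge."""
    h = [1]
    i2 = i3 = i5 = 0
    while len(h) < n:
        nxt = min(2 * h[i2], 3 * h[i3], 5 * h[i5])
        h.append(nxt)
        # advance every pointer that produced nxt, skipping duplicates
        if nxt == 2 * h[i2]: i2 += 1
        if nxt == 3 * h[i3]: i3 += 1
        if nxt == 5 * h[i5]: i5 += 1
    return h

print(hamming_dp(10))  # [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]
```

Each iteration does a constant amount of work (one min of three products and up to three pointer increments), which is the source of the O(n) bound.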

Implementations and Efficiency

Practical implementations of regular number generation commonly leverage priority queues to produce the sequence in ascending order while minimizing computational overhead. In Python, the heapq module facilitates this by maintaining a min-heap of candidate numbers derived from multiplying existing regular numbers by the primes 2, 3, and 5; duplicates are avoided either through a set of seen values or by merging sorted iterator streams.[24] A comparable approach in C++ employs std::priority_queue configured as a min-heap (for example, with the std::greater comparator), paired with std::set for duplicate elimination, enabling scalable generation without redundant computation.[24] A key optimization is lazy evaluation: generating the sequence on demand via generators in Python or iterators in C++ defers computation until values are requested and conserves memory by avoiding full precomputation.[4] Performance benchmarks demonstrate that heap-based methods can generate the first 1,000,000 regular numbers in under a second on modern hardware, making them suitable for large-scale applications.[24] Extensions to k-smooth numbers replace the prime set {2, 3, 5} with the set of all primes up to k (for example, adding 7 for 7-smooth numbers), then apply the same priority-queue merging to incorporate multiples of the additional factors.[25] Common pitfalls include integer overflow at large values, mitigated by arbitrary-precision arithmetic such as Python's built-in int or libraries like GMP in C++, and duplicated output, which requires rigorous set checks or monotonic merging, since naive multiplication reaches each composite by several routes.[24]

Other Uses

Timekeeping and Calendars

The division of time into units based on regular numbers has deep historical roots in systems designed for precise astronomical tracking. The modern sexagesimal structure of timekeeping—60 seconds per minute and 60 minutes per hour—originates from the ancient Babylonian adoption of a base-60 numeral system, where 60 factors as 2^2 × 3 × 5, a regular number with abundant divisors that permitted fractional divisions without complex remainders. This facilitated accurate observations of celestial bodies, as calculations in sexagesimal arithmetic allowed for straightforward subdivisions of angles and intervals in astronomy.[26][27] A full 24-hour day equates to 86,400 seconds, which factors as 2^7 × 3^3 × 5^2, another regular number whose extensive divisors (96 in total) support even partitioning for scheduling and scientific computations. This numerical property enhances the practicality of time divisions in observational contexts, enabling clean multiples and submultiples that align with periodic astronomical phenomena without introducing awkward fractions.[28][29] In calendar systems, regular numbers similarly underpin cyclical structures. The ancient Maya utilized a vigesimal (base-20) system in their Long Count and ritual calendars, with 20 factoring as 2^2 × 5, allowing for hierarchical counts that integrated solar, lunar, and ritual periods into a cohesive framework.
Modern timekeeping incorporates approximations to handle irregularities, such as leap seconds added to Coordinated Universal Time (UTC) to reconcile the atomic second's regularity with the Earth's variable rotation, ensuring clocks remain aligned with solar observations while retaining the benefits of regular number-based units.[30][31] These applications underscore the advantages of regular numbers in timekeeping: their prime factors limited to 2, 3, and 5 yield highly divisible units ideal for fraction-free divisions in astronomical and calendrical contexts, promoting precision across diverse cultures.[26]
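The divisibility claims above are easy to verify computationally; the short script below (illustrative only) confirms that 86,400 has no prime factors other than 2, 3, and 5 and has exactly 96 divisors:

```python
def divisor_count(n):
    """Count the divisors of n by trial division up to sqrt(n)."""
    count = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            # d and n // d form a divisor pair (counted once if equal)
            count += 1 if d == n // d else 2
        d += 1
    return count

def is_5_smooth(n):
    """True if n has no prime factor other than 2, 3, or 5."""
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

# 86,400 = 2^7 * 3^3 * 5^2, so its divisor count is (7+1)(3+1)(2+1) = 96.
```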

Signal Processing

In digital signal processing for audio, sampling rates that are 5-smooth numbers—products of powers of 2, 3, and 5—facilitate efficient computation in tasks like rate conversion and buffering due to their divisibility by small integers, aligning well with hardware and algorithmic requirements.[32] For instance, the standard 48 kHz rate factors as 2^7 × 3 × 5^3, making it preferable over 44.1 kHz (whose factorization includes 7) for minimizing computational overhead in polyphase resampling filters and block processing. This property extends to higher rates like 192 kHz used in Blu-ray audio, which factors as 2^9 × 3 × 5^3 and supports lossless high-resolution playback with optimized digital workflows.[33] Filter design in audio synthesis, particularly for just-intonation systems built on 2-3-5 prime ratios, uses these ratios to generate tones with simple frequency relationships like 3:2 or 5:4.[34][35] Modern FFT implementations in signal processing libraries favor 5-smooth lengths for input arrays to optimize performance via mixed-radix algorithms, which decompose the transform into efficient stages using factors of 2, 3, and 5. For example, functions like SciPy's next_fast_len select the smallest 5-smooth number greater than or equal to a target length, yielding faster execution on real-time audio data than lengths that are prime or contain large prime factors. This approach is rooted in libraries like FFTPACK and FFTW, where small-prime factorizations minimize multiplications and enable adaptive planning for spectral analysis in audio applications.[36][37]
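The idea behind helpers such as next_fast_len can be illustrated with a simple search for the smallest 5-smooth number at or above a target length (a sketch for exposition, not SciPy's actual implementation):

```python
def next_5_smooth(target):
    """Return the smallest 5-smooth number >= target."""
    if target <= 1:
        return 1
    # Start from the next power of two, which is always 5-smooth.
    best = 1
    while best < target:
        best *= 2
    # Try every candidate 5^c * 3^b below the current best, completed
    # with the smallest power of 2 that reaches the target.
    p5 = 1
    while p5 < best:
        p35 = p5
        while p35 < best:
            q = p35
            while q < target:
                q *= 2
            if q < best:
                best = q
            p35 *= 3
        p5 *= 5
    return best
```

For a signal of length 17, padding to 18 = 2 × 3^2 allows a mixed-radix FFT built entirely from radix-2 and radix-3 stages.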

References
