Horner's method
In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians.[1] After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials.
The algorithm is based on Horner's rule, in which a polynomial is written in nested form:

a_0 + a_1 x + a_2 x^2 + ... + a_n x^n = a_0 + x(a_1 + x(a_2 + ... + x(a_{n-1} + x a_n))).

This allows the evaluation of a polynomial of degree n with only n multiplications and n additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations.[2]
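In code, the nested form becomes a single accumulation loop. A minimal Python sketch (the function name `horner` and the highest-degree-first coefficient ordering are our conventions, not from the article):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x; coeffs are ordered from the
    highest-degree term down to the constant term."""
    result = 0
    for a in coeffs:
        result = result * x + a  # one multiplication, one addition per term
    return result
```

For instance, `horner([2, -6, 2, -1], 3)` evaluates 2x^3 − 6x^2 + 2x − 1 at x = 3 and returns 5, matching the synthetic-division example later in this section.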
Alternatively, Horner's method and Horner–Ruffini method also refers to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by application of Horner's rule. It was widely used until computers came into general use around 1970.
Polynomial evaluation and long division
Given the polynomial

p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n,

where a_0, ..., a_n are constant coefficients, the problem is to evaluate the polynomial at a specific value x_0.

For this, a new sequence of constants is defined recursively as follows:

b_n := a_n
b_{n-1} := a_{n-1} + b_n x_0
...
b_1 := a_1 + b_2 x_0
b_0 := a_0 + b_1 x_0      (1)

Then b_0 is the value of p(x_0).
To see why this works, the polynomial can be written in the form

p(x) = a_0 + x(a_1 + x(a_2 + ... + x(a_{n-1} + a_n x))).

Thus, by iteratively substituting the b_i into the expression,

p(x_0) = a_0 + x_0(a_1 + x_0(a_2 + ... + x_0(a_{n-1} + b_n x_0)))
       = a_0 + x_0(a_1 + x_0(a_2 + ... + x_0 b_{n-1}))
       ...
       = a_0 + x_0 b_1
       = b_0.

Now, it can be proven that

p(x) = (b_1 + b_2 x + b_3 x^2 + ... + b_n x^{n-1})(x - x_0) + b_0.      (2)

This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of p(x) / (x - x_0), with b_0 (which is equal to p(x_0)) being the division's remainder, as is demonstrated by the examples below. If x_0 is a root of p(x), then b_0 = 0 (meaning the remainder is 0), which means you can factor p(x) as (x - x_0)(b_1 + b_2 x + ... + b_n x^{n-1}).
To find the consecutive b-values, you start by determining b_n, which is simply equal to a_n. You then work your way down to the other b's, using the recursive formula b_{n-1} = a_{n-1} + b_n x_0, till you arrive at b_0.
Examples
Evaluate f(x) = 2x^3 − 6x^2 + 2x − 1 for x = 3.
We use synthetic division as follows:
x0│ x3 x2 x1 x0
3 │ 2 −6 2 −1
│ 6 0 6
└────────────────────────
2 0 2 5
The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the x-value (3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of f(x) on division by x − 3 is 5.
But by the polynomial remainder theorem, we know that the remainder is f(3). Thus, f(3) = 5.
In this example, if a_3 = 2, a_2 = −6, a_1 = 2, a_0 = −1, we can see that b_3 = 2, b_2 = 0, b_1 = 2, b_0 = 5, the entries in the third row. So, synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method.
As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial, the quotient of f(x) on division by x − 3. The remainder is 5. This makes Horner's method useful for polynomial long division.
Divide x^3 − 6x^2 + 11x − 6 by x − 2:
2 │ 1 −6 11 −6
│ 2 −8 6
└────────────────────────
1 −4 3 0
The quotient is x^2 − 4x + 3.
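The quotient-and-remainder computation can be sketched as a single loop, assuming the highest-degree-first coefficient ordering used in the tables (`synthetic_division` is our name for the helper):

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial (coefficients highest degree first) by (x - r).
    Returns (quotient_coeffs, remainder); the remainder equals p(r)."""
    out = []
    acc = 0
    for a in coeffs:
        acc = acc * r + a   # same update as Horner evaluation
        out.append(acc)
    remainder = out.pop()   # final accumulated value is p(r)
    return out, remainder
```

For the division just shown, `synthetic_division([1, -6, 11, -6], 2)` returns `([1, -4, 3], 0)`, i.e. quotient x^2 − 4x + 3 and remainder 0.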
Let f_1(x) = 4x^4 − 6x^3 + 3x − 5 and f_2(x) = 2x − 1. Divide f_1(x) by f_2(x) using Horner's method.
0.5 │ 4 −6 0 3 −5
│ 2 −2 −1 1
└───────────────────────
2 −2 −1 1 −4
The third row is the sum of the first two rows, divided by 2 (the leading coefficient of f_2(x)); the final entry, −4 = −5 + 1, is left undivided and is the remainder f_1(0.5). Each entry in the second row is the product of 1 with the third-row entry immediately to the left. The answer is

f_1(x)/f_2(x) = 2x^3 − 2x^2 − x + 1 − 4/(2x − 1).
Efficiency
Evaluation using the monomial form of a degree-n polynomial requires at most n additions and (n^2 + n)/2 multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to n additions and 2n − 1 multiplications by evaluating the powers of x by iteration.
If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately n times the number of bits of x: the evaluated polynomial has approximate magnitude x^n, and one must also store x^n itself. By contrast, Horner's method requires only n additions and n multiplications, and its storage requirements are only n times the number of bits of x. Alternatively, Horner's method can be computed with n fused multiply–adds. Horner's method can also be extended to evaluate the first k derivatives of the polynomial with kn additions and multiplications.[3]
Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal.[4] Victor Pan proved in 1966 that the number of multiplications is minimal.[5] However, when x is a matrix, Horner's method is not optimal.
This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree-n polynomial can be evaluated using only ⌊n/2⌋ + 2 multiplications and n additions.[6]
Parallel evaluation
A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation.
If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows:

p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n
     = (a_0 + a_2 x^2 + a_4 x^4 + ...) + x (a_1 + a_3 x^2 + a_5 x^4 + ...),

where the even- and odd-indexed sums can each be evaluated by Horner's method in the variable x^2.
More generally, the summation can be broken into k parts:

p(x) = Σ_{j=0}^{k−1} x^j Σ_{i=0}^{⌊n/k⌋} a_{ik+j} x^{ik},

where the inner summations may be evaluated using separate parallel instances of Horner's method (in the variable x^k). This requires slightly more operations than the basic Horner's method, but allows k-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math. Another use of breaking a polynomial down this way is to calculate steps of the inner summations in an alternating fashion to take advantage of instruction-level parallelism.
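A sketch of the two-part (k = 2) split in Python, with the even and odd inner sums evaluated by ordinary Horner's method in x^2; in a real SIMD or multicore setting the two inner evaluations would run in separate lanes or threads (function names are ours):

```python
def horner(coeffs, x):
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

def horner_two_way(coeffs, x):
    """Evaluate p(x) = p_even(x^2) + x * p_odd(x^2); the two inner Horner
    runs are independent, so they could execute in parallel.
    coeffs are ordered highest degree first."""
    n = len(coeffs) - 1  # degree
    # Split coefficients by the parity of the exponent they multiply.
    even = [a for i, a in enumerate(coeffs) if (n - i) % 2 == 0]
    odd = [a for i, a in enumerate(coeffs) if (n - i) % 2 == 1]
    x2 = x * x
    return horner(even, x2) + x * horner(odd, x2)
```

Both functions agree on every input; the split version simply regroups the same multiplications and additions.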
Application to floating-point multiplication and division
Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) a_i ∈ {0, 1} and x = 2. Then, x (or x to some power) is repeatedly factored out. In this binary numeral system (base 2), x = 2, so powers of 2 are repeatedly factored out.
Example
For example, to find the product of two numbers, (0.15625) and m:

(0.15625) m = (0.00101_2) m = (2^{−3} + 2^{−5}) m = (2^{−3}) m + (2^{−5}) m = 2^{−3} (m + (2^{−2}) m).
Method
To find the product of two binary numbers d and m:
- A register holding the intermediate result is initialized to d.
- Begin with the least significant (rightmost) non-zero bit in m.
- Count (to the left) the number of bit positions to the next most significant non-zero bit. If there are no more-significant bits, then take the value of the current bit position.
- Using that value, perform a left-shift operation by that number of bits on the register holding the intermediate result.
- If all the non-zero bits were counted, then the intermediate result register now holds the final result. Otherwise, add d to the intermediate result, and continue in step 2 with the next most significant bit in m.
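The steps above can be condensed into a Horner-style loop over the bits of m. This sketch processes the bits from the most significant end rather than hopping between set bits as the method describes, but it performs the same shift-and-add work (non-negative integer operands assumed for simplicity):

```python
def multiply_shift_add(d, m):
    """Multiply non-negative integers using only shifts and adds:
    Horner's rule with x = 2 over the bits of m."""
    result = 0
    for bit in bin(m)[2:]:   # bits of m, most significant first
        result <<= 1         # multiply the running total by 2 (left shift)
        if bit == '1':
            result += d      # add d where the corresponding bit is set
    return result
```

On a microcontroller without a multiplier, the shift and the add each map to a single cheap instruction.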
Derivation
In general, for a binary number with bit values (d_3 d_2 d_1 d_0), the product is

(d_3 2^3 + d_2 2^2 + d_1 2^1 + d_0 2^0) m = d_3 2^3 m + d_2 2^2 m + d_1 2^1 m + d_0 2^0 m.

At this stage in the algorithm, it is required that terms with zero-valued coefficients are dropped, so that only binary coefficients equal to one are counted, thus the problem of multiplication or division by zero is not an issue, despite this implication in the factored equation:

d_3 2^3 m (1 + (d_2/d_3) 2^{−1} (1 + (d_1/d_2) 2^{−1} (1 + (d_0/d_1) 2^{−1}))).

The denominators all equal one (or the term is absent), so this reduces to

2^3 m (d_3 + 2^{−1} (d_2 + 2^{−1} (d_1 + 2^{−1} d_0))),

or equivalently (as consistent with the "method" described above)

d_0 m + 2 (d_1 m + 2 (d_2 m + 2 (d_3 m))).
In binary (base-2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. The factor (2^{−1}) is a right arithmetic shift, a (2^0) results in no operation (since 2^0 = 1 is the multiplicative identity element), and a (2^1) results in a left arithmetic shift. The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction.
The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy, however it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space.[7]
Other applications
Horner's method can be used to convert between different positional numeral systems – in which case x is the base of the number system, and the a_i coefficients are the digits of the base-x representation of a given number – and can also be used if x is a matrix, in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known.[8]
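For the base-conversion use, a short sketch (digits given most significant first; `digits_to_int` is our name):

```python
def digits_to_int(digits, base):
    """Horner's rule with x = base: the digits (most significant first)
    are the coefficients of the base-x representation."""
    value = 0
    for d in digits:
        value = value * base + d
    return value
```

For example, `digits_to_int([1, 0, 1, 1], 2)` interprets binary 1011 and returns 11.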
Polynomial root finding
Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial p_n(x) of degree n with zeros z_1 < z_2 < ... < z_n, make some initial guess x_0 such that x_0 > z_n. Now iterate the following two steps:
- Using Newton's method, find the largest zero z_n of p_n(x) using the guess x_0.
- Using Horner's method, divide out (x − z_n) to obtain p_{n−1}(x). Return to step 1 but use the polynomial p_{n−1}(x) and the initial guess z_n.
These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method but using the full polynomial rather than the reduced polynomials.[9]
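The two-step iteration above can be sketched as follows, under the article's assumption that all roots are real; the helper names and the fixed iteration and tolerance constants are ours:

```python
def horner(coeffs, x):
    r = 0
    for a in coeffs:
        r = r * x + a
    return r

def deflate(coeffs, root):
    """Synthetic division by (x - root); the near-zero remainder is dropped."""
    q, acc = [], 0
    for a in coeffs:
        acc = acc * root + a
        q.append(acc)
    q.pop()  # last accumulated value is the remainder p(root)
    return q

def newton(coeffs, x0, iters=100, tol=1e-12):
    """Newton's method, using Horner to evaluate p and p'."""
    n = len(coeffs) - 1
    deriv = [a * (n - i) for i, a in enumerate(coeffs[:-1])]
    x = x0
    for _ in range(iters):
        dfx = horner(deriv, x)
        if dfx == 0:
            break
        x_new = x - horner(coeffs, x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def real_roots(coeffs, guess):
    """Alternate Newton's method and deflation, starting from a guess
    above the largest root (all roots assumed real)."""
    roots = []
    while len(coeffs) > 1:
        r = newton(coeffs, float(guess))
        roots.append(r)
        coeffs = deflate(coeffs, r)  # divide out (x - r) with Horner
        guess = r                    # next guess: the root just found
    return roots
```

As the article notes, the roots obtained this way can be polished afterwards by running Newton's method against the original (undeflated) polynomial.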
Example
Consider the polynomial

p_6(x) = (x + 8)(x + 5)(x + 3)(x − 2)(x − 3)(x − 7),

which can be expanded to

p_6(x) = x^6 + 4x^5 − 72x^4 − 214x^3 + 1127x^2 + 1602x − 5040.

From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method the first zero of 7 is found as shown in black in the figure to the right. Next p_6(x) is divided by (x − 7) to obtain

p_5(x) = x^5 + 11x^4 + 5x^3 − 179x^2 − 126x + 720,

which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial, which corresponds to the second largest zero of the original polynomial, is found at 3 and is circled in red. The degree-5 polynomial is now divided by (x − 3) to obtain

p_4(x) = x^4 + 14x^3 + 47x^2 − 38x − 240,

which is shown in yellow. The zero for this polynomial is found at 2, again using Newton's method, and is circled in yellow. Horner's method is now used to obtain

p_3(x) = x^3 + 16x^2 + 79x + 120,

which is shown in green and found to have a zero at −3. This polynomial is further reduced to

p_2(x) = x^2 + 13x + 40,

which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing p_2(x) and solving the linear equation. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.
Divided difference of a polynomial
Horner's method can be modified to compute the divided difference (p(y) − p(x))/(y − x). Given the polynomial (as before)

p(x) = a_0 + a_1 x + ... + a_n x^n,

proceed as follows:[10]

b_n = a_n,                  d_n = b_n,
b_{n−1} = a_{n−1} + b_n x,  d_{n−1} = b_{n−1} + d_n y,
...
b_1 = a_1 + b_2 x,          d_1 = b_1 + d_2 y,
b_0 = a_0 + b_1 x.

At completion, we have

p(x) = b_0,
(p(y) − p(x))/(y − x) = d_1,
p(y) = b_0 + (y − x) d_1.

This computation of the divided difference is subject to less round-off error than evaluating p(x) and p(y) separately, particularly when x ≈ y.
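A sketch of the recurrence in Python (coefficients highest degree first, degree at least 1; the pairing of the b and d updates follows the scheme above):

```python
def divided_difference(coeffs, x, y):
    """One-pass computation of p(x) and (p(y) - p(x)) / (y - x).
    coeffs are highest degree first; assumes degree >= 1."""
    b = coeffs[0]           # b_n = a_n
    d = b                   # d_n = b_n
    for a in coeffs[1:-1]:  # a_{n-1} down to a_1
        b = a + b * x       # b_k = a_k + b_{k+1} x
        d = b + d * y       # d_k = b_k + d_{k+1} y
    b = coeffs[-1] + b * x  # b_0 = a_0 + b_1 x = p(x)
    return b, d             # (p(x), divided difference d_1)
```

For p(x) = x^2 − 1 with x = 1 and y = 3, this returns p(1) = 0 and the divided difference (p(3) − p(1))/(3 − 1) = 4.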
Derivative of a polynomial
Substituting y = x in this method gives d_1 = p′(x), the derivative of p(x). Evaluating a polynomial and its derivative at a point is useful for root-finding via Newton's method.
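Setting y = x collapses the divided-difference recurrence into a simultaneous value-and-derivative evaluation; a compact sketch:

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) in a single pass (coeffs highest first):
    the derivative accumulator is updated with the previous value of p."""
    p = coeffs[0]
    dp = 0
    for a in coeffs[1:]:
        dp = dp * x + p  # d/dx of (p*x + a) is dp*x + p
        p = p * x + a
    return p, dp
```

This pair (p(x), p′(x)) is exactly what one Newton step x − p(x)/p′(x) needs.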
History
(Figure: Qin Jiushao's method applied to the equation −x^4 + 763200x^2 − 40642560000 = 0; result: x = 840.[11])
Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation",[12] was read before the Royal Society of London at its meeting on July 1, 1819, with a sequel in 1823.[12] Horner's paper in Part II of Philosophical Transactions of the Royal Society of London for 1819 was warmly and expansively welcomed by a reviewer in the issue of The Monthly Review: or, Literary Journal for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in The Monthly Review for September, 1821, concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller[13] showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method" and that in consequence the priority for this method should go to Holdred (1820).
Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonnycastle's book on algebra, though he neglected the work of Paolo Ruffini.
Although Horner is credited with making the method accessible and practical, it was known long before Horner. In reverse chronological order, Horner's method was already known to:
- Paolo Ruffini in 1809 (see Ruffini's rule)[14][15]
- Isaac Newton in 1669[16][17]
- the Chinese mathematician Zhu Shijie in the 14th century[15]
- the Chinese mathematician Qin Jiushao in his Mathematical Treatise in Nine Sections in the 13th century
- the Persian mathematician Sharaf al-Dīn al-Ṭūsī in the 12th century (the first to use that method in a general case of cubic equation)[18]
- the Chinese mathematician Jia Xian in the 11th century (Song dynasty)
- The Nine Chapters on the Mathematical Art, a Chinese work of the Han dynasty (202 BC – 220 AD) edited by Liu Hui (fl. 3rd century).[19]
Qin Jiushao, in his Shu Shu Jiu Zhang (Mathematical Treatise in Nine Sections; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. Yoshio Mikami in Development of Mathematics in China and Japan (Leipzig 1913) wrote:
"... who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe ... We of course don't intend in any way to ascribe Horner's invention to a Chinese origin, but the lapse of time sufficiently makes it not altogether impossible that the Europeans could have known of the Chinese method in a direct or indirect way."[20]
Ulrich Libbrecht concluded: It is obvious that this procedure is a Chinese invention ... the method was not known in India. He said, Fibonacci probably learned of it from Arabs, who perhaps borrowed from the Chinese.[21] The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in Jiu Zhang Suan Shu, while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book Jigu Suanjing.
See also
- Clenshaw algorithm to evaluate polynomials in Chebyshev form
- De Boor's algorithm to evaluate splines in B-spline form
- De Casteljau's algorithm to evaluate polynomials in Bézier form
- Estrin's scheme to facilitate parallelization on modern computer architectures
- Lill's method to approximate roots graphically
- Ruffini's rule and synthetic division to divide a polynomial by a binomial of the form x − r
Notes
- ^ 600 years earlier, by the Chinese mathematician Qin Jiushao and 700 years earlier, by the Persian mathematician Sharaf al-Dīn al-Ṭūsī
- ^ Pan 1966
- ^ Pankiewicz 1968.
- ^ Ostrowski 1954.
- ^ Pan 1966.
- ^ Knuth 1997.
- ^ Kripasagar 2008, p. 62.
- ^ Higham 2002, Section 5.4.
- ^ Kress 1991, p. 112.
- ^ Fateman & Kahan 2000
- ^ Libbrecht 2005, pp. 181–191.
- ^ a b Horner 1819.
- ^ Fuller 1999, pp. 29–51.
- ^ Cajori 1911.
- ^ a b O'Connor, John J.; Robertson, Edmund F., "Horner's method", MacTutor History of Mathematics Archive, University of St Andrews
- ^ Analysis Per Quantitatum Series, Fluxiones ac Differentias: Cum Enumeratione Linearum Tertii Ordinis, Londini. Ex Officina Pearsoniana. Anno MDCCXI, p. 10, 4th paragraph.
- ^ Newton's collected papers, the edition 1779, in a footnote, vol. I, p. 270-271
- ^ Berggren 1990, pp. 304–309.
- ^ Temple 1986, p. 142.
- ^ Mikami 1913, p. 77
- ^ Libbrecht 2005, p. 208.
References
- Berggren, J. L. (1990). "Innovation and Tradition in Sharaf al-Din al-Tusi's Muadalat". Journal of the American Oriental Society. 110 (2): 304–309. doi:10.2307/604533. JSTOR 604533.
- Cajori, Florian (1911). "Horner's method of approximation anticipated by Ruffini". Bulletin of the American Mathematical Society. 17 (8): 409–414. doi:10.1090/s0002-9904-1911-02072-9. Archived from the original on 2017-09-04. Retrieved 2012-03-04. Read before the Southwestern Section of the American Mathematical Society on November 26, 1910.
- Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009). Introduction to Algorithms (3rd ed.). MIT Press.
- Fateman, R. J.; Kahan, W. (2000). Improving exact integrals from symbolic algebra systems (PDF) (Report). PAM. University of California, Berkeley: Center for Pure and Applied Mathematics. Archived from the original (PDF) on 2017-08-14. Retrieved 2018-05-17.
- Fuller, A. T. (1999). "Horner versus Holdred: An Episode in the History of Root Computation". Historia Mathematica. 26: 29–51. doi:10.1006/hmat.1998.2214.
- Higham, Nicholas (2002). Accuracy and Stability of Numerical Algorithms. SIAM. ISBN 978-0-89871-521-7.
- Holdred, T. (1820). A New Method of Solving Equations with Ease and Expedition; by which the True Value of the Unknown Quantity is Found Without Previous Reduction. With a Supplement, Containing Two Other Methods of Solving Equations, Derived from the Same Principle (PDF). Richard Watts. Archived from the original (PDF) on 2014-01-06. Retrieved 2012-12-10.
- Holdred's method is in the supplement following page numbered 45 (which is the 52nd page of the pdf version).
- Horner, William George (July 1819). "A new method of solving numerical equations of all orders, by continuous approximation". Philosophical Transactions. 109. Royal Society of London: 308–335. doi:10.1098/rstl.1819.0023. JSTOR 107508. S2CID 186210512.
- Directly available online via the link, but also reprinted with appraisal in D.E. Smith: A Source Book in Mathematics, McGraw-Hill, 1929; Dover reprint, 2 vols, 1959.
- Knuth, Donald (1997). The Art of Computer Programming. Vol. 2: Seminumerical Algorithms (3rd ed.). Addison-Wesley. pp. 486–488 in section 4.6.4. ISBN 978-0-201-89684-8.
- Kress, Rainer (1991). Numerical Analysis. Springer.
- Kripasagar, Venkat (March 2008). "Efficient Micro Mathematics – Multiplication and Division Techniques for MCUs". Circuit Cellar Magazine (212).
- Libbrecht, Ulrich (2005). "Chapter 13". Chinese Mathematics in the Thirteenth Century (2nd ed.). Dover. ISBN 978-0-486-44619-6. Archived from the original on 2017-06-06. Retrieved 2016-08-23.
- Mikami, Yoshio (1913). "Chapter 11. Ch'in Chiu-Shao". The Development of Mathematics in China and Japan (1st ed.). Chelsea Publishing Co reprint. pp. 74–77.
- Ostrowski, Alexander M. (1954). "On two problems in abstract algebra connected with Horner's rule". Studies in Mathematics and Mechanics presented to Richard von Mises. Academic Press. pp. 40–48. ISBN 978-1-4832-3272-0. Archived from the original on 2019-04-15. Retrieved 2016-08-23.
- Pan, V. Ya. (1966). "On means of calculating values of polynomials". Russian Math. Surveys. 21: 105–136. doi:10.1070/rm1966v021n01abeh004147. S2CID 250869179.
- Pankiewicz, W. (1968). "Algorithm 337: calculation of a polynomial and its derivative values by Horner scheme". Communications of the ACM. 11 (9). ACM: 633. doi:10.1145/364063.364089. S2CID 52859619.
- Spiegel, Murray R. (1956). Schaum's Outline of Theory and Problems of College Algebra. McGraw-Hill. ISBN 9780070602267.
- Temple, Robert (1986). The Genius of China: 3,000 Years of Science, Discovery, and Invention. Simon and Schuster. ISBN 978-0-671-62028-8.
- Whittaker, E.T.; Robinson, G. (1924). The Calculus of Observations. London: Blackie.
- Wylie, Alexander (1897). "Jottings on the Science of Chinese Arithmetic". Chinese Researches. Shanghai. pp. 159–194.
- Reprinted from issues of The North China Herald (1852).
External links
- "Horner scheme", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
- Qiu Jin-Shao, Shu Shu Jiu Zhang (Cong Shu Ji Cheng ed.)
Horner's method

Polynomial Evaluation

Basic Formulation
Horner's method, also known as the Horner scheme or synthetic division in its tabular form, is an algorithm for efficiently evaluating polynomials by rewriting them in a nested multiplication form that minimizes the number of arithmetic operations required.[1] Developed by the English mathematician William George Horner and published in 1819, it addresses the inefficiency of the naive approach to polynomial evaluation, which computes each power of x separately and results in approximately n(n + 1)/2 multiplications and n additions for a degree-n polynomial.[6] In contrast, Horner's method achieves the evaluation using exactly n multiplications and n additions, making it optimal in terms of arithmetic operations for sequential computation.[1]

Consider a polynomial of degree n expressed in the standard form

p(x) = a_n x^n + a_{n−1} x^{n−1} + ... + a_1 x + a_0,

where a_0, ..., a_n are the coefficients with a_n ≠ 0. Horner's method reformulates this as a nested expression by factoring out successive powers of x:

p(x) = a_0 + x (a_1 + x (a_2 + ... + x (a_{n−1} + x a_n) ... )).

This nested structure reveals the polynomial as a sequence of linear evaluations, where each inner parenthesis represents a lower-degree polynomial evaluated and then scaled and shifted by the outer terms.[7] Equivalently, the method can be written in a fully parenthesized iterative form:

p(x) = ((...(a_n x + a_{n−1}) x + a_{n−2}) x + ...) x + a_0.

This representation highlights the recursive nature, starting from the leading coefficient and iteratively incorporating the remaining terms.[1] The algorithm proceeds as follows: initialize the result to a_n; then, for each k from n − 1 down to 0, update the result by result ← result · x + a_k. The final value equals p(x).[6] In pseudocode, this is implemented efficiently with a single loop:

result = a_n
for k = n-1 downto 0:
    result = x * result + a_k
return result
Evaluation Examples
To illustrate the generality of Horner's method, consider the evaluation of the linear polynomial p(x) = 3x + 1 at x = 4. The coefficients are arranged as 3 (for x) and 1 (constant term). Starting with the leading coefficient 3, multiply by 4 to get 12, then add the constant term 1, yielding 13. This matches the direct computation 3 · 4 + 1 = 13, requiring one multiplication and one addition in both cases, demonstrating the method's simplicity for low degrees.[8]

For a quadratic polynomial, evaluate p(x) = 2x^2 + 3x − 1 at x = 2. The coefficients are 2, 3, −1. In Horner's method, bring down the leading coefficient 2. Multiply by 2 to obtain 4, add to the next coefficient 3 to get 7. Then multiply 7 by 2 to get 14, and add the constant −1 to yield 13. Thus, p(2) = 13. In contrast, direct expansion computes x^2 (one multiplication), then 2x^2 (one multiplication), 3x (one multiplication), and adds the terms (two additions), totaling three multiplications and two additions, highlighting Horner's reduction to two multiplications and two additions.[9][8]

A cubic example is p(x) = x^3 − 4x^2 + 5x − 2 evaluated at x = 1, with coefficients 1, −4, 5, −2. Horner's method uses a synthetic division table for clarity:

1 │ 1 −4  5 −2
  │    1 −3  2
  └──────────────
    1 −3  2  0
Computational Efficiency
Horner's method achieves significant computational efficiency in polynomial evaluation by minimizing the number of arithmetic operations required. For a polynomial of degree n, expressed as p(x) = a_n x^n + ... + a_1 x + a_0, the method performs exactly n multiplications and n additions through its nested multiplication scheme.[1] This contrasts sharply with naive direct evaluation approaches, where computing each power x^k independently without reuse can demand up to n(n − 1)/2 multiplications for the powers alone, plus an additional n multiplications for scaling by coefficients and n additions for summation, leading to quadratic complexity in the worst case.[10] Even optimized naive variants that incrementally compute powers require approximately 2n − 1 multiplications and n additions, still exceeding Horner's linear tally.[11]

Beyond operation counts, Horner's method enhances numerical stability in finite-precision arithmetic by constraining the growth of intermediate values. In the nested form ((...(a_n x + a_{n−1}) x + ...) x + a_0), each intermediate result represents a partial polynomial evaluation, typically remaining closer in magnitude to the final value than the potentially explosive terms like a_n x^n in direct summation.[12] This reduction in intermediate swell limits the propagation and accumulation of rounding errors, as each multiplication and addition introduces bounded relative perturbations that do not amplify as severely as in methods producing large transient values.[12] Consequently, the computed result is often as accurate as if evaluated in higher precision before rounding to working precision, particularly when scaling is applied to keep values within the representable range.[1]

In terms of memory usage, Horner's method is highly space-efficient, requiring only O(1) additional storage beyond the array of coefficients, as it iteratively updates a single accumulator variable.[1] This contrasts with certain recursive or tree-based evaluation strategies that may necessitate auxiliary space for stacking partial results or temporary powers.
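The operation counts can be checked empirically with a small counting wrapper (the `CountingFloat` helper and the particular naive strategy shown are illustrative, not from the article):

```python
class CountingFloat:
    """Float wrapper that counts multiplications (illustrative helper)."""
    mults = 0

    def __init__(self, v):
        self.v = v

    def __mul__(self, other):
        CountingFloat.mults += 1
        return CountingFloat(self.v * other.v)

    def __add__(self, other):
        return CountingFloat(self.v + other.v)

def horner(coeffs, x):
    # n multiplications for a degree-n polynomial (coeffs highest first)
    acc = coeffs[0]
    for a in coeffs[1:]:
        acc = acc * x + a
    return acc

def naive(coeffs, x):
    # Builds each power x^k by repeated multiplication: n(n+1)/2 mults total
    n = len(coeffs) - 1
    total = coeffs[-1]
    for i, a in enumerate(coeffs[:-1]):
        power = x
        for _ in range(n - i - 1):
            power = power * x
        total = total + a * power
    return total
```

For degree 10, `horner` performs 10 multiplications where this naive scheme performs 55 (= 10 · 11 / 2), while both return the same value.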
The overall floating-point operation (flop) count stands at 2n, comprising the n multiplications and n additions, establishing it as asymptotically optimal for sequential, non-vectorized polynomial evaluation on standard architectures.[1]

Parallel Variants
Parallel variants of Horner's method adapt the algorithm for concurrent execution in multi-processor or vectorized environments, utilizing tree-based reductions to partition the polynomial coefficients into subsets for parallel subpolynomial evaluation, followed by combination through nested multiplications and additions. This approach maintains the numerical stability of the original scheme while reducing the computational depth. In Estrin's scheme, a seminal divide-and-conquer parallelization, the coefficients of a degree-n polynomial (padded to the nearest power of 2 if needed) are recursively split into even- and odd-powered subpolynomials, enabling parallel computation at each level of a binary tree. Subpolynomials are evaluated concurrently, with results combined via expressions like p_hi(x) · x^m + p_lo(x), where p_hi and p_lo are the parallel subresults and m is the split point; this recursion continues until base cases of degree 0 or 1 are reached. The method assigns subchains of coefficients to separate processors, computes partial Horner-like results in parallel (e.g., linear combinations a_{2j+1} x + a_{2j}), and folds them upward through multiplications by successive powers of x.[13] The time complexity of this parallel variant is O(log n) with O(n) processors, as the tree depth is logarithmic and each level performs constant-time operations per processor, compared to the O(n) sequential time of standard Horner's method.
This logarithmic depth facilitates efficient load balancing and minimizes synchronization overhead in parallel architectures.[13] A variant using parallel prefix computation reframes Horner's method as a prefix scan over the coefficients, where each step applies the affine update t ↦ t · x + a_k; the update itself is not associative, but composition of such affine maps is, so Ladner–Fischer or Brent–Kung prefix networks can be used to achieve O(log n) time with O(n) processors while preserving total work.[11] The following pseudocode illustrates Estrin's scheme for parallel evaluation, assuming the polynomial degree is padded to a power of 2 and parallel loops are used (e.g., via OpenMP):

function estrin_eval(coeffs[0..n-1], x):
npow2 = next_power_of_two(n)
pad_coeffs = coeffs padded with zeros to length npow2 (zeros prepended; coeffs ordered highest degree first)
num_levels = log2(npow2)
# Initialize powers of x^2
powers[1] = x
for i = 2 to num_levels:
powers[i] = powers[i-1] ^ 2
# coeff_matrix[level][j] stores intermediate results
coeff_matrix[0][j] = pad_coeffs[j] for j = 0 to npow2-1
for level = 1 to num_levels:
parallel for j = 0 to (npow2 / 2^level) - 1:
idx1 = 2*j
idx2 = 2*j + 1
coeff_matrix[level][j] = coeff_matrix[level-1][idx1] * powers[level] + coeff_matrix[level-1][idx2]
return coeff_matrix[num_levels][0]
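A runnable Python rendering of the pseudocode above (sequential, with the level-by-level combination that the parallel loop would distribute; padding prepends zero high-order coefficients):

```python
def estrin(coeffs, x):
    """Estrin-style evaluation. coeffs are ordered highest degree first
    and are padded to a power-of-two length by prepending zeros; adjacent
    (hi, lo) pairs are combined level by level while x is squared."""
    n = len(coeffs)
    size = 1 << max(1, (n - 1).bit_length())  # next power of two >= n (and >= 2)
    level = [0] * (size - n) + list(coeffs)   # zero high-order coefficients
    power = x                                 # x^(2^0) at the first level
    while len(level) > 1:
        # Combine adjacent (hi, lo) pairs: hi * x^(2^k) + lo.
        level = [level[i] * power + level[i + 1] for i in range(0, len(level), 2)]
        power = power * power                 # square for the next level
    return level[0]
```

Each pass over `level` is embarrassingly parallel, which is exactly where the pseudocode's `parallel for` would apply.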
Synthetic Division and Root Finding
Connection to Long Division
Horner's method serves as a compact algorithmic representation of synthetic division, a technique for dividing a polynomial by a linear factor (x − r), where the process yields both the quotient and the remainder efficiently. This method, originally described by Paolo Ruffini in 1804 as Ruffini's rule,[14] predates its popularization by William George Horner in 1819, who applied it to polynomial evaluation; the two are essentially equivalent in their step-by-step computation for such divisions. Synthetic division streamlines the long division algorithm by eliminating the need to write out repeated subtractions and multiplications of the divisor, focusing instead on operations with the value r.[1]

The tabular process of synthetic division begins by arranging the coefficients of p(x) in descending order of powers in a row. The value r is placed to the left of this row. The leading coefficient is brought down unchanged to form the first entry in the bottom row. This value is then multiplied by r and added to the next coefficient above, producing the next bottom-row entry. This multiplication-by-r-and-addition step is repeated across all coefficients until the final addition yields the remainder.[15] The entries in the bottom row, excluding the last (the remainder), form the coefficients of the quotient polynomial, which has one degree less than p(x).[1]

In this division, the remainder directly equals p(r), aligning with the remainder theorem, while the quotient represents the deflated polynomial after removing the factor (x − r).[1] If r is a root of p(x), the remainder is zero, and the quotient is the reduced polynomial for further analysis.[15]

For a concrete illustration, consider the cubic polynomial p(x) = x^3 + x^2 − 4x + 3 divided by x − 2. The synthetic division table is as follows:

2 │ 1  1 −4  3
  │    2  6  4
  └──────────────
    1  3  2  7

The bottom row gives the quotient coefficients 1, 3, 2 and the remainder 7, so p(x) = (x − 2)(x^2 + 3x + 2) + 7, with p(2) = 7.
Root Isolation Procedure
Horner's method plays a key role in root isolation for polynomials by enabling efficient evaluation to detect sign changes, which indicate potential root locations within intervals. The procedure begins by evaluating the polynomial at the endpoints of a chosen interval using Horner's nested scheme, which minimizes computational operations while providing accurate sign determinations. If the signs differ, a root exists in the interval by the intermediate value theorem; the process can be refined by bisection or further subdivision, with each evaluation leveraging Horner's method for speed and stability. For more robust isolation, especially with multiple roots, a Sturm sequence can be constructed, where polynomial remainders are computed via synthetic division akin to Horner's, and sign variations at interval points are counted to isolate distinct real roots. This approach ensures the number of sign changes equals the number of roots in the interval, with Horner's evaluations applied to each sequence member for efficiency.[16] Once an approximate root is identified, deflation reduces the polynomial degree by factoring out the linear term . Horner's synthetic division performs this by iteratively multiplying coefficients by and accumulating remainders, yielding the quotient polynomial's coefficients directly. The process is: where , for down to 0, and is the remainder (ideally zero for an exact root). This deflation isolates the remaining roots in the lower-degree quotient, allowing iterative application until the polynomial is fully factored. Stability is enhanced by ordering deflations: forward for small- roots and backward for large- to minimize error accumulation in coefficients.[1][17] For complex roots, which occur in conjugate pairs for real-coefficient polynomials, Bairstow's method extends Horner's synthetic division to quadratic factors . 
It iteratively refines initial estimates of the coefficients $u$ and $v$ by applying Newton–Raphson corrections to the remainders produced by a double synthetic division, treating the quadratic as the divisor. Horner's scheme supplies the polynomial and derivative evaluations needed during the iterations, ensuring efficient convergence to the quadratic factor, after which deflation proceeds as in the linear case. This variant is particularly useful for higher-degree polynomials whose real roots are sparse.[18]

In iterative root-finding methods such as Newton–Raphson, Horner's stability reduces error propagation by avoiding catastrophic cancellation in the evaluations. The nested form incurs smaller rounding errors than naive power summation, providing reliable function and derivative values for the updates $x_{k+1} = x_k - p(x_k)/p'(x_k)$. Deflation errors, if present, can amplify in subsequent iterations, but polishing each approximate root against the original polynomial (again using Horner's method) corrects the perturbations and preserves convergence. This stability is crucial for ill-conditioned polynomials, where small coefficient changes can shift roots significantly; the forward and backward deflation variants mitigate error propagation along the deflation chain.[1][17]

Root Finding Example
To illustrate the application of Horner's method in root finding, consider the cubic polynomial $p(x) = x^3 - 6x^2 + 11x - 6$. This polynomial has exact integer roots at $x = 1$, $x = 2$, and $x = 3$, making it suitable for demonstrating the deflation technique in which synthetic division (Horner's method) factors out linear terms sequentially. In practice, an initial guess or evaluation at test points, such as the candidate rational roots supplied by the rational root theorem ($\pm 1, \pm 2, \pm 3, \pm 6$), helps isolate root intervals or confirm exact roots.[19]

Begin by evaluating $p$ at integer points to locate sign changes or zero values. Computing $p(0) = -6$ and $p(1) = 0$ reveals an exact root at $x = 1$. To deflate the polynomial and obtain the quadratic factor, apply Horner's method via synthetic division with the root 1: the quotient is $x^2 - 5x + 6$ and the remainder is 0, confirming $x = 1$ as a root, so $p(x) = (x - 1)(x^2 - 5x + 6)$.[19]

Next, find the roots of the quadratic quotient $q(x) = x^2 - 5x + 6$ by evaluating at test points: $q(1) = 2$ and $q(2) = 0$, revealing an exact root at $x = 2$. Deflating by synthetic division with 2 gives the quotient $x - 3$ with remainder 0, so $q(x) = (x - 2)(x - 3)$. Thus $p(x) = (x - 1)(x - 2)(x - 3)$, and the roots are $1$, $2$, and $3$.[19]

This example uses exact integer roots for pedagogical simplicity, allowing verification without approximation errors; in real-world scenarios with non-rational roots, Horner's method facilitates iterative refinement (e.g., via Newton's method) by efficiently evaluating the polynomial and its derivatives during deflation, though numerical approximations and error bounds are typically required.[20]
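The deflation workflow above can be sketched in Python. The function names, the descending-coefficient convention, and the brute-force integer-candidate search are illustrative choices for this sketch, not part of the source method:

```python
def horner_divide(coeffs, r):
    """Synthetic division of p(x) (coefficients in descending order)
    by (x - r). Returns (quotient coefficients, remainder = p(r))."""
    out = []
    acc = 0
    for a in coeffs:
        acc = acc * r + a
        out.append(acc)
    return out[:-1], out[-1]

def integer_roots_by_deflation(coeffs):
    """Recover integer roots by testing candidates up to the constant
    term's magnitude (in the spirit of the rational root theorem) and
    deflating on each exact hit. Assumes a nonzero constant term."""
    roots = []
    while len(coeffs) > 1:
        bound = abs(coeffs[-1])
        for r in range(-bound, bound + 1):
            quotient, remainder = horner_divide(coeffs, r)
            if r != 0 and remainder == 0:
                roots.append(r)
                coeffs = quotient   # continue on the deflated polynomial
                break
        else:
            break                    # no integer root remains
    return roots

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(integer_roots_by_deflation([1, -6, 11, -6]))  # [1, 2, 3]
```

Each deflation step reuses the same Horner loop that evaluates the polynomial, so testing a candidate and dividing by it when it succeeds costs a single pass over the coefficients.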
Numerical Applications

Floating-Point Implementation
In floating-point arithmetic, the naive evaluation of polynomials (computing successive powers of $x$ and scaling them by the coefficients) often generates excessively large intermediate values, leading to overflow when $|x| > 1$ and the degree is high, or underflow when $|x| < 1$. For instance, evaluating a degree-100 polynomial naively requires computing $x^{100}$, which for even moderately large $|x|$ exceeds typical floating-point ranges such as IEEE 754 double precision (up to about $1.8 \times 10^{308}$). Horner's method addresses this by rewriting the polynomial in nested form, $p(x) = a_0 + x(a_1 + x(a_2 + \cdots + x(a_{n-1} + x\,a_n)))$, which performs only multiplications by $x$ and additions, keeping intermediate results closer in magnitude to the final value and thus bounding the exponent range to prevent such overflows or underflows.[12]

To implement Horner's method effectively in floating-point systems, especially for ill-conditioned polynomials where small changes in the coefficients amplify errors, select the nesting order that minimizes the dynamic range of the intermediates; for example, when $|x| > 1$ it can be preferable to evaluate the reversed polynomial at $1/x$ and rescale by $x^n$, avoiding premature overflow. Additional scaling can be applied by factoring powers of the floating-point base (e.g., 2 or 10) out of the coefficients to keep all values within a safe exponent interval before applying the nesting, then rescaling the result. Such guidelines ensure robustness without increasing the operation count beyond the standard $n$ multiplications and $n$ additions for degree $n$.[12]

A simple implementation in Python for evaluating a polynomial with coefficients $a = [1, -2, 3, \dots, 11]$ (in ascending order, so $a_i$ is the coefficient of $x^i$) at $x = 1.5$ uses a loop to nest the operations:

```python
def horner_eval(coeffs, x):
    """Evaluate a polynomial with coefficients in ascending order."""
    result = 0.0
    for coef in reversed(coeffs):
        result = result * x + coef
    return result

# Example: p(x) = 1 - 2x + 3x^2 - ... + 11x^10
a = [1, -2, 3, -4, 5, -6, 7, -8, 9, -10, 11]
x = 1.5
p_value = horner_eval(a, x)
print(p_value)  # 394.5888671875
```
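The overflow advantage described above can be made concrete. In this sketch (the monomial and the helper name are illustrative), the polynomial's value fits comfortably in double precision, but the power-first route overflows because it forms $x^{100}$ before any scaling:

```python
def horner(coeffs_desc, x):
    """Evaluate a polynomial (coefficients in descending order) by Horner's rule."""
    acc = 0.0
    for a in coeffs_desc:
        acc = acc * x + a
    return acc

# p(x) = 1e-200 * x**100: its value at x = 1e4 is about 1e200, well inside
# the double-precision range, yet the naive route must first form
# x**100 = 1e400, which exceeds it.
coeffs = [1e-200] + [0.0] * 100
x = 1e4
print(horner(coeffs, x))          # about 1e200 -- no overflow
try:
    naive = 1e-200 * x ** 100     # x**100 alone exceeds the double range
    print(naive)
except OverflowError:
    print("naive power-first evaluation overflows")
```

Horner's intermediates here grow monotonically from $10^{-200}$ up to the final value, so no step ever leaves the representable range.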
Derivation for Arithmetic Operations
Horner's method, originally formulated for efficient polynomial evaluation, can be extended to perform arithmetic operations such as multiplication and division by a constant through modifications to its nested structure, thereby sharing computational steps and reducing the number of operations compared to naive approaches.[1]

Consider a polynomial $p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0$. To compute $c \cdot p(x)$ for a constant $c$, the naive method scales each coefficient by $c$ (requiring $n + 1$ multiplications) and then evaluates the scaled polynomial using standard Horner's method (another $n$ multiplications by $x$ and $n$ additions), for a total of $2n + 1$ multiplications and $n$ additions. In contrast, the efficient approach leverages the nested form: $c \cdot p(x) = c\left((\cdots(a_n x + a_{n-1})x + \cdots)x + a_0\right)$, which requires only $n$ multiplications by $x$ and $n$ additions to compute $p(x)$, plus one final multiplication by $c$, totaling $n + 1$ multiplications and $n$ additions. This derivation factors the constant outside the nesting, sharing the computations for the powers of $x$ across all terms.[1]

Similarly, for division by a constant $c$, compute $p(x)/c$. The naive approach scales each coefficient by $1/c$ ($n + 1$ divisions) before Horner's evaluation ($n$ multiplications and $n$ additions), yielding $n + 1$ divisions, $n$ multiplications, and $n$ additions. The optimized method evaluates $p(x)$ using Horner's nesting ($n$ multiplications and $n$ additions) and then performs a single division by $c$, resulting in $n$ multiplications, $n$ additions, and one division. This shares the nested computations and defers the division to the end, minimizing expensive operations. In cases where exact division is not possible (e.g., in integer or modular arithmetic), a remainder adjustment can be applied after evaluation by writing $p(x) = cq + r$ with $0 \le r < c$, so that the quotient $q = (p(x) - r)/c$ is exact; in floating-point contexts the scaling suffices without an explicit remainder.[1]

For scaled evaluation, where the argument itself is multiplied by a constant as in $p(cx)$, the derivation integrates the scaling directly into the Horner nesting to avoid computing separate powers of $c$.
Start with $p(cx) = \sum_{i=0}^{n} a_i c^i x^i$; naively this requires $n - 1$ multiplications for the powers of $x$, another $n - 1$ for the powers of $c$, and additional scalings per term, for roughly $4n$ multiplications in total. The nested form yields $p(cx) = (\cdots(a_n t + a_{n-1})t + \cdots)t + a_0$ with $t = cx$, which requires only $n$ multiplications (each by $t$) and $n$ additions, plus one multiplication to form $t$, since the scaling by successive powers of $c$ is embedded in the repeated multiplication. Division of the argument by a constant, $p(x/c)$, follows analogously by nesting with $t = x/c$, again using $n$ multiplications (or divisions) and $n$ additions.

In floating-point arithmetic, when $|c| > 1$ or $|x| > 1$ produces a large argument, numerical stability is maintained by reversing the polynomial coefficients and evaluating at the reciprocal of the scaled argument, adjusting the result by the appropriate power: $p(t) = t^n\,\tilde{p}(1/t)$, where $\tilde{p}$ is the reversed polynomial, computed via Horner's method on $1/t$. This backward recursion incorporates the constant at each step while avoiding overflow.[1]
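Both tricks can be sketched briefly; the function names and the small test polynomial are illustrative:

```python
def horner(coeffs_desc, x):
    """Evaluate a polynomial (coefficients in descending order) by Horner's rule."""
    acc = 0.0
    for a in coeffs_desc:
        acc = acc * x + a
    return acc

def horner_scaled_arg(coeffs_desc, c, x):
    """Evaluate p(c*x) without forming the powers c**i separately:
    the nesting simply multiplies by t = c*x at every step."""
    return horner(coeffs_desc, c * x)

def horner_reversed(coeffs_desc, x):
    """Evaluate p(x) for large |x| as x**n * q(1/x), where q has the
    reversed coefficients; intermediates stay close to 1 in magnitude."""
    n = len(coeffs_desc) - 1
    return x ** n * horner(coeffs_desc[::-1], 1.0 / x)

# p(x) = 2x^2 + 3x + 5
p = [2.0, 3.0, 5.0]
print(horner_scaled_arg(p, 10.0, 0.5))   # p(5) = 70.0
print(horner_reversed(p, 4.0))           # p(4) = 49.0
```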
Divided Difference Computation

Divided differences form the basis for Newton interpolation, where they act as coefficients in the divided-difference form of the interpolating polynomial. The zeroth-order divided difference is defined as $f[x_i] = f(x_i)$, the function evaluated at the point $x_i$. Higher-order divided differences are computed recursively: for $k \ge 1$, $f[x_i, \dots, x_{i+k}] = \dfrac{f[x_{i+1}, \dots, x_{i+k}] - f[x_i, \dots, x_{i+k-1}]}{x_{i+k} - x_i}$. This recursion extends the first-order case $f[x_i, x_{i+1}] = \dfrac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i}$.[23][24]

Horner's method adapts to divided-difference computation by interpreting the divided-difference table as a sequence of synthetic divisions performed on coefficients derived from the function values at the interpolation points $x_0, \dots, x_n$. In this framework, the process begins by treating the function values as initial "coefficients" and applies synthetic division iteratively to extract higher-order differences, mirroring the deflation used in polynomial root-finding but focused on interpolation coefficients.[24]

The algorithm constructs the divided-difference table by initializing the zeroth column with $f[x_i]$ for $i = 0$ to $n$, then filling subsequent columns in a forward manner using the recursive formula above. Building the table requires $O(n^2)$ operations for $n + 1$ points but benefits from Horner's nested structure in subsequent evaluations, allowing optimized passes over the data. The leading diagonal entries $f[x_0, \dots, x_j]$ for $j = 0$ to $n$ directly yield the Newton coefficients.[23][24] This adaptation, typically implemented via the table for numerical stability, simplifies computing the differences compared to direct recursion without tabulation.[25][24]

By providing these divided differences, Horner's method enables the direct assembly of the Newton interpolation polynomial $p_n(x) = f[x_0] + f[x_0, x_1](x - x_0) + \cdots + f[x_0, \dots, x_n](x - x_0)\cdots(x - x_{n-1})$, avoiding a full Vandermonde matrix solve and reducing overhead in interpolation tasks.[24]
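The table construction and the Horner-style evaluation of the Newton form can be sketched as follows (function names and the quadratic test case are illustrative):

```python
def divided_differences(xs, ys):
    """Build the divided-difference table column by column; return the
    Newton coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]
    (the table's leading diagonal)."""
    n = len(xs)
    col = list(ys)                       # zeroth-order differences
    coeffs = [col[0]]
    for k in range(1, n):
        col = [(col[i + 1] - col[i]) / (xs[i + k] - xs[i])
               for i in range(n - k)]    # k-th order column
        coeffs.append(col[0])
    return coeffs

def newton_eval(coeffs, xs, x):
    """Evaluate the Newton form with Horner-style nesting:
    ((c_n (x - x_{n-1}) + c_{n-1})(x - x_{n-2}) + ...) + c_0."""
    acc = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        acc = acc * (x - xs[k]) + coeffs[k]
    return acc

# Interpolate f(x) = x^2 at nodes 0, 1, 3; the quadratic is reproduced exactly.
xs, ys = [0.0, 1.0, 3.0], [0.0, 1.0, 9.0]
c = divided_differences(xs, ys)          # [0.0, 1.0, 1.0]
print(newton_eval(c, xs, 2.0))           # 4.0
```

The evaluation loop is the same nesting as ordinary Horner's rule, except that the fixed multiplier $x$ is replaced at each level by the shifted factor $(x - x_k)$.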
Additional Uses

Horner's method extends beyond basic polynomial evaluation and root-finding to facilitate efficient computation in automatic differentiation. By nesting the polynomial structure, it enables simultaneous evaluation of the function and its derivatives in a single forward pass, akin to forward-mode automatic differentiation. This approach computes Taylor coefficients recursively, reducing the number of operations required for derivative estimation. The method's historical roots trace back to the Ch'in–Horner algorithm, recognized as an early precursor of modern automatic-differentiation techniques for generating polynomial Taylor series.[26]

In control systems, Horner's method supports stability analysis by enabling synthetic division to test potential roots of characteristic polynomials, aiding the verification of system stability criteria. For instance, it is employed to deflate polynomials during root isolation, which complements the Routh–Hurwitz criterion by efficiently handling cases involving known or suspected roots on the imaginary axis or in special array configurations. This application is particularly useful in feedback control design, where rapid polynomial manipulation helps assess Hurwitz stability without full factorization. Standard control-engineering texts highlight its role in root determination for linear time-invariant systems.[27]

Within computer algebra systems, Horner's method underpins polynomial factoring and greatest common divisor (GCD) computations by providing an efficient mechanism for the repeated divisions of the Euclidean algorithm. It allows modular evaluation and deflation during subresultant remainder sequences, minimizing arithmetic overhead in primitive remainder sequence calculations. This is essential for symbolic manipulation in systems such as Maple or Magma, where Horner's nesting optimizes the representation and reduction of polynomials over finite fields.
Influential works on algorithmic algebra emphasize its centrality to these operations, ensuring numerical stability and computational efficiency.[1][28]

In modern machine learning, Horner's method finds application in evaluating polynomial activations and kernels within neural networks, offering a compact way to compute high-degree polynomials with fewer operations. It is particularly valuable for learnable polynomial functions in deep networks, where it supports efficient forward propagation and derivative computation for activations such as rectified power units (RePUs). GPU implementations leverage its sequential structure for batched evaluations in graph neural networks or kernel methods, enhancing scalability for large-scale training. Recent advances in encrypted inference and approximation theory underscore its role in optimizing polynomial-based models.[29][30][31]
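The single-pass evaluation of a polynomial and its derivative, mentioned above in connection with forward-mode automatic differentiation, can be sketched as follows (the function name is illustrative):

```python
def horner_with_derivative(coeffs_desc, x):
    """One Horner-style pass returning (p(x), p'(x)).

    The derivative accumulator applies the product rule to the nested
    form: if p_new = acc*x + a, then p_new' = dacc*x + acc. This mirrors
    forward-mode automatic differentiation with a (value, derivative) pair."""
    acc, dacc = 0.0, 0.0
    for a in coeffs_desc:
        dacc = dacc * x + acc   # derivative update must use the old value
        acc = acc * x + a
    return acc, dacc

# p(x) = x^3 - 6x^2 + 11x - 6, p'(x) = 3x^2 - 12x + 11
print(horner_with_derivative([1.0, -6.0, 11.0, -6.0], 2.0))  # (0.0, -1.0)
```

The pair returned here is exactly what a Newton–Raphson step $x_{k+1} = x_k - p(x_k)/p'(x_k)$ consumes, at the cost of roughly twice a single Horner evaluation.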
Historical Context

Origins and Development
The origins of Horner's method extend to ancient and medieval mathematics, with early forms of nested evaluation appearing in the works of the 11th-century Persian scholar al-Nasawī, the 12th-century Arab mathematician al-Samaw’al al-Maghribi for root extraction, and the 13th-century Chinese algebraist Zhu Shijie in his polynomial methods.[3][5] In the early 19th century, the Italian mathematician Paolo Ruffini developed the method further as part of his work on polynomial factorization and root determination. In his 1804 dissertation Sopra la determinazione delle radici nelle equazioni numeriche di qualunque grado, Ruffini outlined an efficient algorithm for dividing polynomials by linear factors, leveraging the factor theorem to simplify computations without full long division, which laid foundational concepts for later root-isolation techniques in European algebra.[32] William George Horner, an English schoolmaster and mathematician, advanced these ideas significantly in 1819 in his paper "A new method of solving numerical equations of all orders, by continuous approximation," published in the Philosophical Transactions of the Royal Society. Horner's approach introduced a nested multiplication scheme for polynomial evaluation, enabling iterative approximation of roots by successive substitution, which proved particularly effective for high-degree equations where traditional methods were cumbersome.[33] Horner's initial applications emphasized practical numerical computation, targeting the solution of algebraic equations encountered in astronomy and physics, such as those arising in the construction of trigonometric and logarithmic tables.
This focus on approximation addressed the limitations of exact symbolic methods for real-world calculations, making the technique accessible for manual computation by practitioners.[3] By the mid-19th century, Horner's method had gained widespread adoption in British algebra textbooks, where it was termed "Horner's process" for efficient polynomial division and root finding. Mathematicians such as Augustus De Morgan promoted its use in educational contexts, integrating it into standard curricula and ensuring its place as a staple tool of English algebraic instruction through the end of the century.[3]

Naming and Recognition
Horner's method is named after the British mathematician William George Horner, who described the algorithm in his 1819 paper "A new method of solving numerical equations of all orders, by continuous approximation," published in the Philosophical Transactions of the Royal Society.[33] The method gained prominence in the English-speaking world through Horner's publication, which emphasized its practical utility for approximating roots of polynomials via successive approximations, leading to its widespread adoption in British and American mathematical education. However, controversies over priority arose soon after, including Theophilus Holdred's independent publication of a similar method in 1820, sparking debates on originality.[3] The name "Horner's method" was specifically applied by Augustus De Morgan in his writings during the mid-19th century, reflecting Horner's role in popularizing the technique among English readers despite its earlier appearances elsewhere.[3] Alternative names for the method include synthetic division, reflecting its use in polynomial division, and the Ruffini–Horner method, acknowledging the contributions of the Italian mathematician Paolo Ruffini, who anticipated the algorithm in his 1804 work Sopra la determinazione delle radici nelle equazioni numeriche di qualunque grado.[32] It is also referred to as nested multiplication or an application of the factor theorem in some contexts.[34] By the 1830s, the method appeared in standard algebra textbooks in England and the United States, such as those by De Morgan himself, establishing its place in curricula for solving polynomial equations efficiently.[3] Throughout the 19th and early 20th centuries, it held a prominent position in English and American educational texts, often presented as a novel computational tool. 
In the 20th century, historical analyses sparked debates on priority, with Florian Cajori's 1911 article in the Bulletin of the American Mathematical Society arguing that Ruffini's earlier description warranted recognition as the originator, influencing later attributions to both figures.[32] These discussions highlighted the method's independent rediscoveries, including even earlier traces in medieval mathematics, but the nomenclature persisted because of Horner's influential exposition.[3]

References
- https://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/horner-william-george
