Horner's method
from Wikipedia

In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians.[1] After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials.

The algorithm is based on Horner's rule, in which a polynomial is written in nested form:

p(x) = a_0 + x(a_1 + x(a_2 + ⋯ + x(a_{n−1} + x a_n)⋯))

This allows the evaluation of a polynomial of degree n with only n multiplications and n additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations.[2]

Alternatively, the names Horner's method and Horner–Ruffini method also refer to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by application of Horner's rule. It was widely used until computers came into general use around 1970.

Polynomial evaluation and long division


Given the polynomial

p(x) = a_0 + a_1 x + a_2 x^2 + ⋯ + a_n x^n,

where a_0, …, a_n are constant coefficients, the problem is to evaluate the polynomial at a specific value x_0 of x.

For this, a new sequence of constants is defined recursively as follows:

b_n = a_n
b_{n−1} = a_{n−1} + b_n x_0
⋮
b_0 = a_0 + b_1 x_0

Then b_0 is the value of p(x_0).

To see why this works, the polynomial can be written in the form

p(x) = a_0 + x(a_1 + x(a_2 + ⋯ + x(a_{n−1} + x a_n)⋯))

Thus, by iteratively substituting the b_i into the expression,

p(x_0) = a_0 + x_0(a_1 + x_0(⋯ + x_0(a_{n−1} + b_n x_0)⋯))
       = a_0 + x_0(a_1 + x_0(⋯ + b_{n−1} x_0 ⋯))
       ⋮
       = a_0 + b_1 x_0
       = b_0.

Now, it can be proven that

p(x) = (b_1 + b_2 x + b_3 x^2 + ⋯ + b_n x^{n−1})(x − x_0) + b_0

This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of p(x) / (x − x_0), with b_0 (which is equal to p(x_0)) being the division's remainder, as is demonstrated by the examples below. If x_0 is a root of p(x), then b_0 = 0 (meaning the remainder is 0), which means you can factor p(x) as (x − x_0)(b_1 + b_2 x + ⋯ + b_n x^{n−1}).

To find the consecutive b-values, start by determining b_n, which is simply equal to a_n. Then work recursively using the formula b_i = a_i + b_{i+1} x_0 until you arrive at b_0.
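The recursion above maps directly to a short loop. A minimal Python sketch (function name and the descending coefficient ordering are illustrative choices, not part of the original scheme):

```python
def horner(coeffs, x0):
    """Evaluate a polynomial at x0 using Horner's rule.

    coeffs holds the coefficients in descending order of powers:
    [a_n, a_{n-1}, ..., a_0].
    """
    result = 0
    for a in coeffs:
        # One multiplication and one addition per coefficient.
        result = result * x0 + a
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3
print(horner([2, -6, 2, -1], 3))  # → 5
```

Each loop iteration performs exactly one multiply and one add, matching the operation count claimed above.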

Examples


Evaluate f(x) = 2x^3 − 6x^2 + 2x − 1 for x = 3.

We use synthetic division as follows:

 x0 │  x^3   x^2   x^1   x^0
 3 │   2    −6     2    −1
   │         6     0     6
   └────────────────────────
       2     0     2     5

The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the x-value (3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of f(x) on division by x − 3 is 5.

But by the polynomial remainder theorem, we know that the remainder is f(3). Thus, f(3) = 5.

In this example, if a_3 = 2, a_2 = −6, a_1 = 2, a_0 = −1, we can see that b_3 = 2, b_2 = 0, b_1 = 2, b_0 = 5, the entries in the third row. So, synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method.

As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial 2x^2 + 2, the quotient of f(x) on division by x − 3. The remainder is 5. This makes Horner's method useful for polynomial long division.
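Keeping all of the b-values, instead of only the last one, yields both the quotient coefficients and the remainder in a single pass. A small Python sketch of this synthetic division (names and list layout are illustrative):

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial by (x - r) using Horner's method.

    coeffs holds coefficients in descending order [a_n, ..., a_0].
    Returns (quotient_coefficients, remainder).
    """
    table = [coeffs[0]]            # bring down the leading coefficient
    for a in coeffs[1:]:
        # each bottom-row entry: coefficient + r * previous bottom entry
        table.append(a + table[-1] * r)
    return table[:-1], table[-1]   # last entry is the remainder

# f(x) = 2x^3 - 6x^2 + 2x - 1 divided by (x - 3)
q, rem = synthetic_division([2, -6, 2, -1], 3)
print(q, rem)  # → [2, 0, 2] 5
```

The bottom row of the tableau above is exactly `table`; dropping its final entry gives the quotient 2x^2 + 0x + 2.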

Divide x^3 − 6x^2 + 11x − 6 by x − 2:

 2 │   1    −6    11    −6
   │         2    −8     6
   └────────────────────────
       1    −4     3     0

The quotient is x^2 − 4x + 3, with remainder 0.

Let f_1(x) = 4x^4 − 6x^3 + 3x − 5 and f_2(x) = 2x − 1. Divide f_1(x) by f_2(x) using Horner's method.

  0.5 │ 4  −6   0   3  −5
      │     2  −2  −1   1
      └───────────────────────
        2  −2  −1   1  −4

The third row is the sum of the first two rows, divided by 2 (the final entry, the remainder, is left as the plain sum). Each entry in the second row is the product of 1 with the third-row entry to the left. The answer is

f_1(x) / f_2(x) = 2x^3 − 2x^2 − x + 1 − 4/(2x − 1).

Efficiency


Evaluation using the monomial form of a degree-n polynomial requires at most n additions and (n^2 + n)/2 multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to n additions and 2n − 1 multiplications by evaluating the powers of x by iteration.

If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately 2n times the number of bits of x: the evaluated polynomial has approximate magnitude x^n, and one must also store x^n itself. By contrast, Horner's method requires only n additions and n multiplications, and its storage requirements are only n times the number of bits of x. Alternatively, Horner's method can be computed with n fused multiply–adds. Horner's method can also be extended to evaluate the first k derivatives of the polynomial with kn additions and multiplications.[3]

Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal.[4] Victor Pan proved in 1966 that the number of multiplications is minimal.[5] However, when x is a matrix, Horner's method is not optimal.

This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree-n polynomial can be evaluated using only n/2 + 2 multiplications and n additions.[6]

Parallel evaluation


A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation.

If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows:

p(x) = (a_0 + a_2 x^2 + a_4 x^4 + ⋯) + x(a_1 + a_3 x^2 + a_5 x^4 + ⋯)

where the even- and odd-indexed sums are polynomials in x^2 that can each be evaluated by Horner's method in parallel.

More generally, the summation can be broken into k parts:

p(x) = Σ_{j=0}^{k−1} x^j Σ_i a_{ki+j} x^{ki}

where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows k-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math[citation needed]. Another use of breaking a polynomial down this way is to calculate steps of the inner summations in an alternating fashion to take advantage of instruction-level parallelism.
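As an illustration of this k-way splitting, the following Python sketch evaluates the k inner sums with ordinary Horner steps in x^k. In a real implementation the inner sums would be mapped to SIMD lanes or threads, which plain Python does not do; the point here is only that the split produces the same value:

```python
def horner(coeffs, x):
    # coeffs in descending order [a_n, ..., a_0]
    r = 0
    for a in coeffs:
        r = r * x + a
    return r

def split_eval(coeffs, x, k=2):
    """Evaluate a polynomial by splitting it into k interleaved
    subpolynomials in x^k; the k inner Horner evaluations are
    independent of each other. Illustrative sketch, not a tuned kernel."""
    xk = x ** k
    ascending = coeffs[::-1]          # [a_0, a_1, ..., a_n]
    total = 0
    for j in range(k):
        # subpolynomial multiplying x^j: a_j, a_{j+k}, a_{j+2k}, ...
        sub_desc = ascending[j::k][::-1]
        total += horner(sub_desc, xk) * x ** j
    return total

# 2x^3 - 6x^2 + 2x - 1 at x = 3: same value as plain Horner
print(split_eval([2, -6, 2, -1], 3))  # → 5
```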

Application to floating-point multiplication and division


Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) a_i = 1 and x = 2. Then, x (or x to some power) is repeatedly factored out. In this binary numeral system (base 2), x = 2, so powers of 2 are repeatedly factored out.

Example


For example, to find the product of two numbers, (0.15625) and m:

(0.15625)m = (0.00101_2)m = (2^−3 + 2^−5)m = (2^−3)m + (2^−5)m = 2^−3(m + (2^−2)m) = 2^−3(m + 2^−2(m))

Method


To find the product of two binary numbers d and m:

  1. A register holding the intermediate result is initialized to d.
  2. Begin with the least significant (rightmost) non-zero bit in m.
    1. Count (to the left) the number of bit positions to the next most significant non-zero bit. If there are no more-significant bits, then take the value of the current bit position.
    2. Using that value, perform a left-shift operation by that number of bits on the register holding the intermediate result.
  3. If all the non-zero bits were counted, then the intermediate result register now holds the final result. Otherwise, add d to the intermediate result, and continue in step 2 with the next most significant bit in m.
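The shift-and-add idea can be sketched for non-negative integers as follows. Note that this sketch scans the multiplier from its most significant bit downward with single-bit shifts, a common variant of the same Horner-style factoring, rather than the LSB-first, variable-shift stepping described in the steps above:

```python
def shift_add_multiply(d, m):
    """Multiply two non-negative integers using only shifts and adds,
    treating m's bits as the coefficients of a polynomial in 2.
    Sketch for integer operands; the worked example in the text uses
    a fractional multiplier instead."""
    result = 0
    # Scan m from its most significant bit down to bit 0.
    for bit in range(m.bit_length() - 1, -1, -1):
        result <<= 1             # multiply the intermediate result by 2
        if (m >> bit) & 1:
            result += d          # add d wherever m has a 1 bit
    return result

print(shift_add_multiply(13, 11))  # → 143
```

On a microcontroller, the shift and the conditional add each map to a single instruction, which is the source of the code-size and speed advantage discussed below.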

Derivation


In general, for a binary number with bit values (d_3 d_2 d_1 d_0), the product is

(d_3 2^3 + d_2 2^2 + d_1 2^1 + d_0 2^0) m = d_3 2^3 m + d_2 2^2 m + d_1 2^1 m + d_0 2^0 m.

At this stage in the algorithm, it is required that terms with zero-valued coefficients are dropped, so that only binary coefficients equal to one are counted; thus the problem of multiplication or division by zero is not an issue, despite this implication in the factored equation:

d_0 2^0 m (1 + (d_1 2^1)/(d_0 2^0) (1 + (d_2 2^2)/(d_1 2^1) (1 + (d_3 2^3)/(d_2 2^2))))

The denominators all equal one (or the corresponding term is absent), so this reduces to

d_0 m + 2(d_1 m + 2(d_2 m + 2(d_3 m)))

or equivalently (as consistent with the "method" described above)

2^3 (d_3 m + 2^−1 (d_2 m + 2^−1 (d_1 m + 2^−1 (d_0 m)))).

In binary (base-2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. The factor 2^−1 is a right arithmetic shift, a factor of 2^0 results in no operation (since 2^0 = 1 is the multiplicative identity element), and a factor of 2^1 results in a left arithmetic shift. The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction.

The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy; however, it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space.[7]

Other applications


Horner's method can be used to convert between different positional numeral systems – in which case x is the base of the number system, and the ai coefficients are the digits of the base-x representation of a given number – and can also be used if x is a matrix, in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known.[8]

Polynomial root finding


Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial p_n(x) of degree n with zeros z_n < z_{n−1} < ⋯ < z_1, make some initial guess x_0 such that x_0 > z_1. Now iterate the following two steps:

  1. Using Newton's method, find the largest zero z_1 of p_n(x) using the guess x_0.
  2. Using Horner's method, divide out (x − z_1) to obtain p_{n−1}(x). Return to step 1 but use the polynomial p_{n−1}(x) and the initial guess z_1.

These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method but using the full polynomial rather than the reduced polynomials.[9]
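The two-step loop above can be sketched in Python. This is a simplified illustration that assumes all roots are real and that Newton's method converges from the given guess; function names are illustrative, and the root-polishing step mentioned above is omitted:

```python
def horner_deriv(coeffs, x):
    """One pass over descending coefficients yields p(x) and p'(x)."""
    p, dp = 0.0, 0.0
    for a in coeffs:
        dp = dp * x + p   # derivative accumulator uses the old p
        p = p * x + a
    return p, dp

def deflate(coeffs, r):
    """Horner/synthetic division by (x - r); the remainder is dropped."""
    q = [coeffs[0]]
    for a in coeffs[1:-1]:
        q.append(a + q[-1] * r)
    return q

def real_roots(coeffs, guess, iters=50):
    """Newton's method plus Horner deflation (all-real-roots sketch)."""
    roots = []
    while len(coeffs) > 2:
        x = guess
        for _ in range(iters):
            p, dp = horner_deriv(coeffs, x)
            if dp == 0:
                break
            x -= p / dp
        roots.append(x)
        coeffs = deflate(coeffs, x)   # divide out (x - root)
        guess = x                     # reuse the root as the next guess
    roots.append(-coeffs[1] / coeffs[0])  # remaining linear factor
    return roots

# x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3
print(sorted(round(r, 6) for r in real_roots([1, -6, 11, -6], 4.0)))
```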

Example

Polynomial root finding using Horner's method

Consider the polynomial

p_6(x) = (x + 8)(x + 5)(x + 3)(x − 2)(x − 3)(x − 7)

which can be expanded to

p_6(x) = x^6 + 4x^5 − 72x^4 − 214x^3 + 1127x^2 + 1602x − 5040.

From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method the first zero of 7 is found as shown in black in the figure to the right. Next p_6(x) is divided by (x − 7) to obtain p_5(x) = x^5 + 11x^4 + 5x^3 − 179x^2 − 126x + 720, which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial, which corresponds to the second largest zero of the original polynomial, is found at 3 and is circled in red. The degree-5 polynomial is now divided by (x − 3) to obtain p_4(x) = x^4 + 14x^3 + 47x^2 − 38x − 240, which is shown in yellow. The zero for this polynomial is found at 2, again using Newton's method, and is circled in yellow. Horner's method is now used to obtain p_3(x) = x^3 + 16x^2 + 79x + 120, which is shown in green and found to have a zero at −3. This polynomial is further reduced to p_2(x) = x^2 + 13x + 40, which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing p_2(x) and solving the linear equation x + 8 = 0. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.

Divided difference of a polynomial


Horner's method can be modified to compute the divided difference (p(y) − p(x)) / (y − x). Given the polynomial p(x) = a_0 + a_1 x + ⋯ + a_n x^n (as before), proceed as follows:[10]

b_n = a_n,                 d_n = b_n,
b_{n−1} = a_{n−1} + b_n x,   d_{n−1} = b_{n−1} + d_n y,
⋮
b_1 = a_1 + b_2 x,          d_1 = b_1 + d_2 y,
b_0 = a_0 + b_1 x.

At completion, we have

p(x) = b_0,
(p(y) − p(x)) / (y − x) = d_1,
p(y) = b_0 + (y − x) d_1.

This computation of the divided difference is subject to less round-off error than evaluating p(x) and p(y) separately, particularly when x ≈ y.
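A Python sketch of this scheme, with the two recurrences interleaved in a single loop over descending coefficients (names are illustrative):

```python
def divided_difference(coeffs, x, y):
    """Compute p(x) and (p(y) - p(x)) / (y - x) in one Horner-style pass.

    coeffs holds the coefficients in descending order [a_n, ..., a_0].
    """
    b = coeffs[0]   # running b_i, starts at b_n = a_n
    d = b           # running d_i, starts at d_n = b_n
    for a in coeffs[1:-1]:
        b = a + b * x        # b_i   = a_i + b_{i+1} x
        d = b + d * y        # d_i   = b_i + d_{i+1} y
    b = coeffs[-1] + b * x   # b_0   = a_0 + b_1 x = p(x)
    return b, d              # d is d_1, the divided difference

# p(t) = t^2: p(2) = 4 and (p(3) - p(2)) / (3 - 2) = 5
px, dd = divided_difference([1, 0, 0], 2.0, 3.0)
print(px, dd)
```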

Derivative of a polynomial


Substituting y = x in this method gives d_1 = p′(x), the derivative of p(x). Evaluating a polynomial and its derivative at a point is useful for root-finding via Newton's method.
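Setting y = x collapses the d-recurrence into a derivative accumulator; a minimal Python sketch (the function name is illustrative):

```python
def horner_with_derivative(coeffs, x):
    """Return (p(x), p'(x)) in a single pass over descending
    coefficients [a_n, ..., a_0]: the divided-difference scheme
    with y = x."""
    p = coeffs[0]
    dp = 0.0
    for a in coeffs[1:]:
        dp = dp * x + p   # update derivative with the previous p
        p = p * x + a     # ordinary Horner step
    return p, dp

# p(t) = t^3 - 4t^2 + 5t - 2, p'(t) = 3t^2 - 8t + 5, at t = 3
print(horner_with_derivative([1, -4, 5, -2], 3.0))  # → (4.0, 8.0)
```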

History

Qin Jiushao's algorithm for solving a quadratic polynomial equation; result: x = 840.[11]

Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation",[12] was read before the Royal Society of London, at its meeting on July 1, 1819, with a sequel in 1823.[12] Horner's paper in Part II of Philosophical Transactions of the Royal Society of London for 1819 was warmly and expansively welcomed by a reviewer in the issue of The Monthly Review: or, Literary Journal for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in The Monthly Review for September, 1821, concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller[13] showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method" and that in consequence the priority for this method should go to Holdred (1820).

Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonnycastle's book on algebra, though he neglected the work of Paolo Ruffini.

Although Horner is credited with making the method accessible and practical, it was known long before Horner. In reverse chronological order, Horner's method was already known to:

Qin Jiushao, in his Shu Shu Jiu Zhang (Mathematical Treatise in Nine Sections; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. Yoshio Mikami in Development of Mathematics in China and Japan (Leipzig 1913) wrote:

"... who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe ... We of course don't intend in any way to ascribe Horner's invention to a Chinese origin, but the lapse of time sufficiently makes it not altogether impossible that the Europeans could have known of the Chinese method in a direct or indirect way."[20]

Ulrich Libbrecht concluded: It is obvious that this procedure is a Chinese invention ... the method was not known in India. He said, Fibonacci probably learned of it from Arabs, who perhaps borrowed from the Chinese.[21] The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in Jiu Zhang Suan Shu, while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book Jigu Suanjing.

from Grokipedia
Horner's method is an efficient algorithm in mathematics and computer science for evaluating a polynomial p(x) = a_n x^n + a_{n−1} x^{n−1} + ⋯ + a_1 x + a_0 at a specific value x = c, as well as for deflating polynomials by dividing them by linear factors (x − c), using only n multiplications and n additions for a degree-n polynomial. The method rewrites the polynomial in a nested, factored form, p(c) = a_0 + c(a_1 + c(a_2 + ⋯ + c(a_n))⋯), which is computed iteratively through a recursive relation b_{k+1} = c·b_k + a_k starting from b_0 = a_n, minimizing arithmetic operations and often improving numerical stability in floating-point computations compared to direct expansion. Named after the British mathematician and educator William George Horner (1786–1837), the method was formally described by him in a paper titled "A new method of solving numerical equations of all orders, by continuous approximation," presented to the Royal Society on July 1, 1819, and published in the Philosophical Transactions. However, the technique predates Horner; it was anticipated by the Italian mathematician Paolo Ruffini in the late 18th century for root-finding, and similar nested evaluation schemes appear in earlier works, including those of the 12th-century Arab mathematician al-Samaw'al al-Maghribi for root extraction and the 13th-century Chinese algebraist Zhu Shijie, with roots traceable to the 11th-century scholar al-Nasawi. Horner's presentation popularized the algorithm in Europe, though controversies arose over priority, such as Theophilus Holdred's independent publication in 1820, leading to debates on originality. In practice, Horner's method serves as synthetic division when deflating polynomials, producing both the quotient and remainder efficiently, and extends to computing derivatives or divided differences with minimal additional operations, requiring 2n arithmetic steps for the polynomial and its first derivative.
Its importance lies in reducing computational cost and error propagation, making it a cornerstone of numerical analysis, computer algebra, and embedded systems, where it ensures faithful rounding under floating-point standards when certain conditions on coefficients and evaluation points are met. The algorithm's stability is particularly favorable for |c| < 1 in forward recursion or |c| > 1 in reverse, influencing its application in root-finding and polynomial factoring across scientific and engineering fields.

Polynomial Evaluation

Basic Formulation

Horner's method, also known as the Horner scheme or, in its tabular form, synthetic division, is an algorithm for efficiently evaluating polynomials by rewriting them in a nested form that minimizes the number of arithmetic operations required. Developed by the English mathematician William George Horner and published in 1819, it addresses the inefficiency of the naive approach to polynomial evaluation, which computes each power of x separately and results in approximately O(n^2) multiplications and additions for a degree-n polynomial. In contrast, Horner's method achieves the evaluation using exactly n multiplications and n additions, making it optimal in terms of arithmetic operations for sequential computation.

Consider a polynomial p(x) of degree n expressed in the standard form:

p(x) = a_n x^n + a_{n−1} x^{n−1} + ⋯ + a_1 x + a_0,

where a_n, a_{n−1}, …, a_0 are the coefficients with a_n ≠ 0. Horner's method reformulates this as a nested expression by factoring out successive powers of x:

p(x) = a_0 + x(a_1 + x(a_2 + ⋯ + x(a_{n−1} + x a_n)⋯)).

This nested structure reveals the polynomial as a sequence of linear evaluations, where each inner parenthesis represents a lower-degree polynomial evaluated and then scaled and shifted by the outer terms. Equivalently, the method can be written in a fully parenthesized iterative form:

p(x) = (⋯((a_n x + a_{n−1})x + a_{n−2})x + ⋯ + a_1)x + a_0.

This representation highlights the recursive nature, starting from the leading coefficient and iteratively incorporating the remaining terms. The algorithm for Horner's method proceeds as follows: initialize the result b_n = a_n. Then, for each k from n−1 down to 0, update the result by b_k = a_k + x·b_{k+1}. The final value b_0 equals p(x). In pseudocode, this is implemented efficiently with a single loop:

result = a_n
for k = n-1 downto 0:
    result = x * result + a_k
return result


This step-by-step process ensures that only one multiplication and one addition are performed per coefficient after the leading one, directly corresponding to the nested form and avoiding the redundant power computations inherent in the naive method.

Evaluation Examples

To illustrate the generality of Horner's method, consider the evaluation of a linear polynomial p(x) = 3x + 1 at x = 4. The coefficients are arranged as 3 (for x^1) and 1 (constant term). Starting with the leading coefficient 3, multiply by 4 to get 12, then add the constant term 1, yielding 13. This matches the direct computation 3·4 + 1 = 13, requiring one multiplication and one addition in both cases, demonstrating the method's simplicity for low degrees.

For a quadratic polynomial, evaluate p(x) = 2x^2 + 3x − 1 at x = 2. The coefficients are 2, 3, −1. In Horner's method, bring down the leading 2. Multiply by 2 to obtain 4, add to the next coefficient 3 to get 7. Then multiply 7 by 2 to get 14, and add the constant −1 to yield 13. Thus, p(2) = 13. In contrast, direct expansion computes 2^2 = 4 (one multiplication), then 2·4 = 8 (one multiplication), 3·2 = 6 (one multiplication), and adds 8 + 6 − 1 = 13 (two additions), totaling three multiplications and two additions, highlighting Horner's reduction to two multiplications and two additions.

A cubic example is p(x) = x^3 − 4x^2 + 5x − 2 evaluated at x = 1, with coefficients 1, −4, 5, −2. Horner's method uses a synthetic division table for clarity:
 1 │   1    −4     5    −2
   │         1    −3     2
   └────────────────────────
       1    −3     2     0
Bring down 1. Multiply by 1 to get 1, add to −4 to obtain −3. Multiply −3 by 1 to get −3, add to 5 to get 2. Multiply 2 by 1 to get 2, add to −2 to yield 0. Thus, p(1) = 0. Direct expansion requires computing 1^3 = 1 (two multiplications for the power), 1·1 = 1 (one for the coefficient), (−4)·1^2 = −4 (one multiplication for the power, one for the coefficient), 5·1 = 5 (one multiplication), then three additions/subtractions for 1 − 4 + 5 − 2 = 0, totaling five multiplications and three additions, whereas Horner's uses three multiplications and three additions.

Computational Efficiency

Horner's method achieves significant computational efficiency in polynomial evaluation by minimizing the number of arithmetic operations required. For a polynomial of degree n, expressed as p(x) = a_n x^n + a_{n−1} x^{n−1} + ⋯ + a_1 x + a_0, the method performs exactly n multiplications and n additions through its nested scheme. This contrasts sharply with naive approaches, where computing each power x^i independently without reuse can demand up to n(n+1)/2 multiplications for the powers alone, plus an additional n multiplications for scaling by coefficients and n − 1 additions for summation, leading to quadratic complexity in the worst case. Even optimized naive variants that incrementally compute powers require approximately 2n − 1 multiplications and n additions, still exceeding Horner's linear tally.

Beyond operation counts, Horner's method enhances numerical stability in finite-precision arithmetic by constraining the growth of intermediate values. In the nested form p(x) = a_0 + x(a_1 + x(a_2 + ⋯ + x a_n ⋯)), each intermediate result represents a partial evaluation, typically remaining closer in magnitude to the final value than the potentially explosive terms like a_n x^n in direct expansion. This reduction in intermediate swell limits the propagation and accumulation of rounding errors, as each multiplication and addition introduces bounded relative perturbations that do not amplify as severely as in methods producing large transient values. Consequently, the computed result is often as accurate as if evaluated in higher precision before rounding to working precision, particularly when scaling is applied to keep values within the representable range.

In terms of memory usage, Horner's method is highly space-efficient, requiring only O(1) additional storage beyond the array of n + 1 coefficients, as it iteratively updates a single accumulator variable. This contrasts with certain recursive or tree-based evaluation strategies that may necessitate O(n) auxiliary space for stacking partial results or temporary powers. The overall floating-point operation (flop) count stands at 2n, comprising the n multiplications and n additions, establishing it as asymptotically optimal for sequential, non-vectorized polynomial evaluation on standard architectures.

Parallel Variants

Parallel variants of Horner's method adapt the algorithm for concurrent execution in multi-processor or vectorized environments, utilizing tree-based reductions to partition the polynomial coefficients into subsets for parallel subpolynomial evaluation, followed by combination through nested multiplications and additions. This approach maintains the numerical stability of the original scheme while reducing the computational depth.

In Estrin's scheme, a seminal divide-and-conquer parallelization, the coefficients of a degree-n polynomial (padded to the nearest power of 2 if needed) are recursively split into even- and odd-powered subpolynomials, enabling parallel computation at each level of a binary tree. Subpolynomials are evaluated concurrently, with results combined via expressions like p(x) = q(x^2)·x + r(x^2), where q and r are the parallel subresults; this recursion continues until base cases of degree 0 or 1 are reached. The method assigns subchains of coefficients to separate processors, computes partial Horner-like results in parallel (e.g., linear combinations c_{2i} + c_{2i+1} x), and folds them upward through multiplications by successive powers of x^2.

The time complexity of this parallel variant is O(log n) with O(n) processors, as the recursion depth is logarithmic and each level performs constant-time operations per processor, compared to the O(n) sequential time of standard Horner's method. This logarithmic depth facilitates efficient load balancing and minimizes synchronization overhead in parallel architectures. A variant using parallel prefix computation reframes Horner's method as a prefix scan over coefficients, where each step applies a non-associative operation (multiply by x, then add the next coefficient); this can be parallelized using Ladner–Fischer or Brent–Kung prefix networks to achieve O(log n) time with O(n / log n) processors while preserving O(n) total work.
The following pseudocode illustrates Estrin's scheme for parallel evaluation, assuming the polynomial degree is padded to a power of 2 and parallel loops are available (e.g., via a parallel-for construct):

function estrin_eval(coeffs[0..n-1], x):
    npow2 = next_power_of_two(n)
    pad_coeffs = coeffs padded with zeros to length npow2
    num_levels = log2(npow2)
    # Initialize powers of x^2
    powers[1] = x
    for i = 2 to num_levels:
        powers[i] = powers[i-1] ^ 2
    # coeff_matrix[level][j] stores intermediate results
    coeff_matrix[0][j] = pad_coeffs[j] for j = 0 to npow2-1
    for level = 1 to num_levels:
        parallel for j = 0 to (npow2 / 2^level) - 1:
            idx1 = 2*j
            idx2 = 2*j + 1
            coeff_matrix[level][j] = coeff_matrix[level-1][idx1] * powers[level] + coeff_matrix[level-1][idx2]
    return coeff_matrix[num_levels][0]

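A runnable Python sketch of the same idea follows. It pairs ascending-order coefficients bottom-up rather than keeping the explicit level matrix of the pseudocode; the per-level combinations are independent, which is what a parallel implementation would exploit (plain Python runs them sequentially):

```python
def estrin(coeffs_ascending, x):
    """Estrin-style evaluation: combine neighboring coefficients with one
    multiply-add per pair, square the point, and repeat until one value
    remains. coeffs_ascending is [a_0, a_1, ..., a_n].
    Illustrative sketch, not a tuned parallel implementation."""
    c = list(coeffs_ascending)
    p = x
    while len(c) > 1:
        if len(c) % 2:
            c.append(0)                 # pad to even length
        # Each pair (c[2i], c[2i+1]) -> c[2i] + c[2i+1] * p;
        # all pairs at a level are independent of one another.
        c = [c[i] + c[i + 1] * p for i in range(0, len(c), 2)]
        p = p * p                       # next level combines in x^2, x^4, ...
    return c[0]

# 2x^3 - 6x^2 + 2x - 1 at x = 3, ascending coefficients
print(estrin([-1, 2, -6, 2], 3))  # → 5
```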

Synthetic Division and Root Finding

Connection to Long Division

Horner's method serves as a compact algorithmic representation of synthetic division, a technique for dividing a polynomial p(x) by a linear factor (x − c), where the process yields both the quotient and the remainder efficiently. This method, originally described by Paolo Ruffini in 1804 as Ruffini's rule, predates its popularization by William George Horner in 1819, who applied it to root-finding; the two are essentially equivalent in their step-by-step computation for such divisions. Synthetic division streamlines the long-division process by eliminating the need to write out repeated subtractions and multiplications of the divisor, focusing instead on operations with the value c.

The tabular process of synthetic division begins by arranging the coefficients of p(x) in descending order of powers in a row. The value c is placed to the left of this row. The leading coefficient is brought down unchanged to form the first entry in the bottom row. This value is then multiplied by c and added to the next coefficient above, producing the next bottom-row entry. This multiplication-by-c-and-addition step is repeated across all coefficients until the final addition yields the remainder. The entries in the bottom row, excluding the last (the remainder), form the coefficients of the quotient, which has one degree less than p(x). In this division, the remainder directly equals p(c), aligning with the remainder theorem, while the quotient represents the deflated polynomial after removing the factor (x − c). If c is a root of p(x), the remainder is zero, and the quotient is the reduced polynomial for further analysis.

For a concrete illustration, consider the cubic polynomial p(x) = x^3 + x^2 − 4x + 3 divided by (x − 2). The synthetic division table is as follows:

 2 │   1     1    −4     3
   │         2     6     4
   └────────────────────────
       1     3     2     7

Here, the bottom row gives the quotient x^2 + 3x + 2 and remainder 7, confirming p(2) = 7.

Root Isolation Procedure

Horner's method plays a key role in root isolation for polynomials by enabling efficient evaluation to detect sign changes, which indicate potential root locations within intervals. The procedure begins by evaluating the polynomial at the endpoints of a chosen interval using Horner's nested scheme, which minimizes computational operations while providing accurate sign determinations. If the signs differ, a root exists in the interval by the intermediate value theorem; the process can be refined by bisection or further subdivision, with each evaluation leveraging Horner's method for speed and stability. For more robust isolation, especially with multiple roots, a Sturm sequence can be constructed, where polynomial remainders are computed via divisions akin to Horner's, and sign variations at interval points are counted to isolate distinct real roots. This approach ensures the number of sign changes equals the number of roots in the interval, with Horner's evaluations applied to each sequence member for efficiency.

Once an approximate root r is identified, deflation reduces the degree by factoring out the linear term (x − r). Horner's synthetic division performs this by iteratively multiplying coefficients by r and accumulating remainders, yielding the deflated polynomial's coefficients directly. The process is:

 r │   a_n    a_{n−1}   ⋯   a_1     a_0
   │          b_n r     ⋯   b_2 r   b_1 r
   └──────────────────────────────────────
       b_n    b_{n−1}   ⋯   b_1     b_0

where b_n = a_n, b_k = a_k + b_{k+1} r for k = n−1 down to 0, and b_0 is the remainder (ideally zero for an exact root). This deflation isolates the remaining roots in the lower-degree quotient, allowing iterative application until the polynomial is fully factored. Stability is enhanced by ordering deflations: forward deflation for small |r| and backward deflation for large |r|, to minimize error accumulation in the coefficients.

For complex roots, which occur in conjugate pairs for real-coefficient polynomials, Bairstow's method extends Horner's synthetic division to quadratic factors x^2 − sx − t. It iteratively refines initial guesses s and t using Newton–Raphson on the remainders from double synthetic division, treating the quadratic as a divisor. Horner's scheme computes the necessary polynomial and derivative evaluations during iterations, ensuring efficient convergence to the quadratic factor, after which deflation proceeds similarly to the linear case. This variant is particularly useful for higher-degree polynomials where real roots are sparse.

In iterative root-finding methods like Newton–Raphson, Horner's stability reduces error by avoiding catastrophic cancellation in evaluations. The method's nested form minimizes rounding errors compared to naive power expansion, providing reliable function and derivative values for the updates x_{k+1} = x_k − f(x_k)/f′(x_k). Deflation errors, if present, can amplify in subsequent iterations, but polishing the approximate root against the original polynomial, again using Horner, corrects perturbations, preserving convergence. This stability is crucial for ill-conditioned polynomials, where small coefficient changes can shift roots significantly, but Horner's forward or backward variants mitigate error growth in the deflation chain.

Root Finding Example

To illustrate the application of Horner's method in root finding, consider the cubic polynomial p(x) = x^3 - 6x^2 + 11x - 6. This polynomial has exact integer roots at x = 1, x = 2, and x = 3, making it suitable for demonstrating the deflation technique, in which synthetic division (Horner's method) factors out linear terms sequentially. In practice, an initial guess or evaluation at test points, such as the candidate rational roots from the rational root theorem (\pm 1, \pm 2, \pm 3, \pm 6), helps isolate root intervals or confirm exact roots. Begin by evaluating p(x) at integer points to locate sign changes or zero values: p(0) = -6 < 0 and p(1) = 1 - 6 + 11 - 6 = 0, indicating an exact root at x = 1. To deflate the polynomial and obtain the quadratic factor, apply Horner's method via synthetic division with the root 1:

\begin{array}{r|rrrr} 1 & 1 & -6 & 11 & -6 \\ & & 1 & -5 & 6 \\ \hline & 1 & -5 & 6 & 0 \\ \end{array}

The quotient is x^2 - 5x + 6 and the remainder is 0, confirming x = 1 as a root, so p(x) = (x - 1)(x^2 - 5x + 6). Next, find the roots of the quadratic quotient q(x) = x^2 - 5x + 6 by evaluating at test points: q(0) = 6 > 0 and q(2) = 4 - 10 + 6 = 0, revealing an exact root at x = 2. Deflate q(x) using synthetic division with 2:

\begin{array}{r|rrr} 2 & 1 & -5 & 6 \\ & & 2 & -6 \\ \hline & 1 & -3 & 0 \\ \end{array}

The quotient is x - 3, with remainder 0, so q(x) = (x - 2)(x - 3). Thus p(x) = (x - 1)(x - 2)(x - 3), and the roots are x = 1, x = 2, x = 3.
This example uses exact integer roots for pedagogical simplicity, allowing verification without approximation errors. In real-world scenarios with non-rational roots, Horner's method facilitates iterative refinement (e.g., via Newton's method) by efficiently evaluating the polynomial and its derivatives at each step, though numerical approximations and error bounds are then typically required.
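The two synthetic divisions in this example can be reproduced with a short Python sketch; the coefficient ordering (highest degree first) and function name are illustrative choices:

```python
def synthetic_division(coeffs, r):
    """Divide p(x) by (x - r) using Horner's scheme.

    coeffs ordered [a_n, ..., a_0]; returns (quotient_coeffs, remainder).
    """
    values = [coeffs[0]]                 # b_n = a_n
    for a in coeffs[1:]:
        values.append(values[-1] * r + a)  # b_k = a_k + b_{k+1} * r
    return values[:-1], values[-1]       # last value is the remainder b_0

# Deflate p(x) = x^3 - 6x^2 + 11x - 6 at the root x = 1
q, rem = synthetic_division([1, -6, 11, -6], 1)
# q == [1, -5, 6], i.e. x^2 - 5x + 6, with remainder 0
q2, rem2 = synthetic_division(q, 2)
# q2 == [1, -3], i.e. x - 3, with remainder 0
```

Each call reproduces one row of the tableaux above; chaining the calls performs the full deflation p(x) = (x - 1)(x - 2)(x - 3).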

Numerical Applications

Floating-Point Implementation

In floating-point arithmetic, the naive evaluation of polynomials (computing successive powers of x and scaling them by coefficients) often generates excessively large intermediate values, leading to overflow when |x| > 1 and the degree is high, or underflow when |x| < 1. For instance, evaluating a degree-100 polynomial at x = 100 in the naive approach requires computing x^{100} = 10^{200}, far exceeding typical floating-point ranges such as IEEE 754 double precision (up to about 10^{308}). Horner's method addresses this by rewriting the polynomial in nested form, p(x) = a_0 + x(a_1 + x(a_2 + \cdots + x a_n \cdots)), which performs only multiplications by x and additions, keeping intermediate results closer in magnitude to the final value and thus bounding the exponent range to prevent such overflows or underflows. To implement Horner's method effectively in floating-point systems, especially for ill-conditioned polynomials where small changes in coefficients amplify errors, select the nesting order that minimizes the dynamic range of the intermediates; for example, evaluate from the highest-degree coefficient when |x| > 1 to avoid early underflow. Additional scaling can be applied by factoring powers of the floating-point base (e.g., 2 or 10) out of the coefficients to keep all values within a safe exponent interval, such as [10^{-3}, 10^3], before applying the nesting; this rewrites the polynomial as p(x) = \beta^r q(x), where \beta is the base and r is chosen to normalize the intermediates, with the result rescaled at the end. These guidelines ensure robustness without increasing the operation count beyond the standard n multiplications and n additions for degree n. A simple implementation in Python for evaluating a degree-10 polynomial p(x) = \sum_{i=0}^{10} a_i x^i with coefficients a = [1, -2, 3, -4, 5, -6, 7, -8, 9, -10, 11] (ordered from a_0 to a_{10}) at x = 1.5 uses a loop to nest the operations:

```python
def horner_eval(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) using Horner's nested scheme."""
    result = 0.0
    for coef in reversed(coeffs):
        result = result * x + coef
    return result

# Example
a = [1, -2, 3, -4, 5, -6, 7, -8, 9, -10, 11]  # a[i] is the coefficient of x**i
x = 1.5
p_value = horner_eval(a, x)
print(p_value)  # 394.5888671875
```

This implementation translates directly to other languages and preserves the nesting that maintains numerical control. Horner's method exhibits backward stability in floating-point arithmetic, meaning the computed result \hat{p}(x) equals the exact evaluation of a polynomial with slightly perturbed coefficients \tilde{a}_i, where the perturbations satisfy |\tilde{a}_i - a_i| \leq \gamma_n |a_i| with \gamma_n = nu/(1 - nu) \approx nu for small unit roundoff u (e.g., u \approx 2^{-53} in double precision) and degree n. This follows from modeling each floating-point operation as fl(a \oplus b) = (a \oplus b)(1 + \delta) with |\delta| \leq u and propagating through the n steps, which accumulates at most O(n) error factors per coefficient; the forward error is then controlled by the polynomial's condition number \kappa(x) = \sum_i |a_i x^i| / |p(x)|, yielding |\hat{p}(x) - p(x)| / |p(x)| \leq \kappa(x) \cdot O(nu). This analysis, originally established by Wilkinson, confirms Horner's advantage over naive methods for stability.
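As an illustrative sketch of this analysis (the helper names are assumptions, not a standard API), the condition number \kappa(x) can itself be accumulated with a Horner-style loop over absolute values. Evaluating the expanded form of (x - 1)^3 near its triple root shows how \kappa explodes even though the evaluation itself is backward stable:

```python
def horner(coeffs, x):
    """Horner evaluation; coeffs ordered [a_n, ..., a_0]."""
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

def condition_number(coeffs, x):
    """kappa(x) = sum(|a_i x^i|) / |p(x)|: large values signal that even
    a backward-stable evaluation loses relative accuracy."""
    num = 0.0
    for a in coeffs:
        num = num * abs(x) + abs(a)   # Horner on |a_i| at |x|
    return num / abs(horner(coeffs, x))

# (x - 1)^3 expanded: x^3 - 3x^2 + 3x - 1, evaluated near the triple root
kappa = condition_number([1.0, -3.0, 3.0, -1.0], 1.001)
# kappa is on the order of 10^9 here, so roughly 9 digits are at risk
```

The numerator is itself a polynomial in |x| with coefficients |a_i|, so the same nested loop computes it with no extra machinery.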

Derivation for Arithmetic Operations

Horner's method, originally formulated for efficient polynomial evaluation, can be extended to perform arithmetic operations such as multiplication and division by a constant through modifications to its nested structure, thereby sharing computational steps and reducing the number of operations compared to naive approaches. Consider a polynomial p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0. To compute q(x) = k \cdot p(x) for a constant k, the naive method scales each coefficient a_i by k (n + 1 multiplications, one per coefficient) and then evaluates the scaled polynomial using standard Horner's method (another n multiplications by x and n additions), for a total of about 2n multiplications and n additions. In contrast, the efficient approach leverages the nested form p(x) = a_0 + x(a_1 + x(a_2 + \cdots + x a_n \cdots)): writing q(x) = k \cdot (a_0 + x(a_1 + x(a_2 + \cdots + x a_n \cdots))) requires only the n multiplications by x and n additions to compute p(x), plus one final multiplication by k, totaling n + 1 multiplications and n additions. This derivation factors the constant k outside the nesting, sharing the computation of the nested powers of x across all terms. Similarly, for division by a constant d, compute r(x) = p(x)/d = (1/d) \cdot p(x). The naive approach scales each coefficient by 1/d before Horner's evaluation, costing n + 1 divisions on top of the n multiplications and n additions. The optimized method evaluates p(x) using Horner's nesting (n multiplications and n additions) and then performs a single division by d. This shares the nested computations and defers the division to the end, minimizing the number of expensive operations.
In cases where exact division is not possible (e.g., in integer or modular arithmetic), a remainder adjustment can be applied after evaluation by computing p(x) = d \cdot q(x) + r with r = p(x) \bmod d; in floating-point contexts the scaling alone suffices, without an explicit remainder. For scaled evaluation, where the argument itself is multiplied by a constant k as in p(kx), the derivation integrates the scaling directly into Horner's nesting to avoid computing separate powers of k. Start with p(y) = \sum_{i=0}^n a_i y^i where y = kx; naively this requires n multiplications for the powers of k, n for the powers of x, and additional scalings per term, exceeding 2n multiplications. The nested form yields

p(kx) = a_0 + kx(a_1 + kx(a_2 + \cdots + kx(a_n) \cdots))

which requires only n multiplications (each by kx, computed once) and n additions, since the scaling by successive powers of k is embedded in the repeated multiplication by kx. Division by a constant d in the argument, p(x/d), follows analogously by nesting with x/d:

p\left(\frac{x}{d}\right) = a_0 + \frac{x}{d}\left(a_1 + \frac{x}{d}(a_2 + \cdots + \frac{x}{d} a_n \cdots)\right)

again using n multiplications (or divisions) by x/d and n additions. In floating-point arithmetic, when |k| > 1 or |d| < 1 produces large arguments, numerical stability can be maintained by reversing the polynomial coefficients and evaluating at the reciprocal of the scaled argument, adjusting the result by the appropriate power: p(kx) = (kx)^n \cdot q(1/(kx)), where q is the reversed polynomial evaluated via Horner's method at 1/(kx). This backward recursion incorporates a division by the constant kx at each step while avoiding overflow.
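Both tricks can be sketched in a few lines of Python (function names are illustrative assumptions): nesting with the combined factor kx for p(kx), and the reversed-coefficient recursion q(1/z) for large arguments:

```python
def horner_scaled(coeffs, k, x):
    """Evaluate p(kx) by nesting with the combined factor kx:
    n multiplications and n additions, no separate powers of k.
    coeffs ordered [a_n, ..., a_0]."""
    kx = k * x
    result = 0.0
    for a in coeffs:
        result = result * kx + a
    return result

def horner_reversed(coeffs, z):
    """Evaluate p(z) as z^n * q(1/z), where q has the reversed
    coefficients: useful when |z| is large enough that the forward
    recursion would overflow."""
    n = len(coeffs) - 1
    result = 0.0
    for a in reversed(coeffs):      # q's coefficients are p's reversed
        result = result / z + a     # one division by z per step
    return result * z ** n

# p(x) = 2x^2 + 3x + 5: p(2*3) = p(6) = 95, and p(10) = 235
v1 = horner_scaled([2.0, 3.0, 5.0], 2.0, 3.0)
v2 = horner_reversed([2.0, 3.0, 5.0], 10.0)
```

Note that horner_reversed still multiplies by z^n at the end, so it defers, rather than eliminates, the large magnitude; the point is that the intermediate sums stay small.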

Divided Difference Computation

Divided differences form the basis for Newton interpolation, where they act as the coefficients of the divided-difference form of the interpolating polynomial. The zeroth-order divided difference is defined as f[x_0] = f(x_0), the function evaluated at the point x_0. Higher-order divided differences are computed recursively: for k \geq 1,

f[x_0, x_1, \dots, x_k] = \frac{f[x_1, \dots, x_k] - f[x_0, \dots, x_{k-1}]}{x_k - x_0}.

This recursion generalizes the first-order case f[x_0, x_1] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}. Horner's method adapts to divided-difference computation by interpreting the construction of the divided-difference table as a sequence of synthetic divisions performed on coefficients derived from the function values at the interpolation points x_0, x_1, \dots, x_n. In this framework, the function values are treated as initial "coefficients," and the recursion is applied iteratively to extract higher-order differences, mirroring the deflation process in polynomial root-finding. The algorithm constructs the table by initializing the zeroth column with f(x_i) for i = 0 to n, then filling subsequent columns in a forward manner: each entry is f[x_i, \dots, x_{i+j}] = \frac{f[x_{i+1}, \dots, x_{i+j}] - f[x_i, \dots, x_{i+j-1}]}{x_{i+j} - x_i}. Building the table requires O(n^2) operations for n + 1 points, but the resulting Newton form benefits from Horner's nested structure in subsequent evaluations, allowing a single optimized pass over the coefficients. The leading diagonal entries f[x_0, \dots, x_k] for k = 0 to n directly yield the Newton coefficients. This tabular adaptation makes computing the differences far cheaper than direct expansion without tabulation.
With these coefficients in hand, Horner's method enables direct assembly and nested evaluation of the Newton interpolating polynomial p(x) = f[x_0] + f[x_0, x_1](x - x_0) + \cdots + f[x_0, \dots, x_n] \prod_{j=0}^{n-1} (x - x_j), avoiding a full linear-system solve (e.g., with a Vandermonde matrix) and reducing overhead in interpolation tasks.
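A compact Python sketch of the table construction and the Horner-like nested evaluation of the Newton form (function names and in-place layout are illustrative choices):

```python
def divided_differences(xs, ys):
    """Build the divided-difference table column by column and return the
    leading-diagonal entries f[x_0], f[x_0,x_1], ..., the Newton coefficients."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        # overwrite in place from the bottom up so lower-order
        # entries are still available when needed
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(coeffs, xs, x):
    """Horner-like nested evaluation of the Newton form:
    p(x) = c_0 + (x - x_0)(c_1 + (x - x_1)(c_2 + ...))."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

# Interpolate f(x) = x^2 at x = 0, 1, 3
c = divided_differences([0.0, 1.0, 3.0], [0.0, 1.0, 9.0])
# c == [0.0, 1.0, 1.0]; the nested evaluation gives p(2) == 4
```

The evaluation loop is exactly Horner's recurrence with the single factor x replaced by the varying factors (x - x_k).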

Additional Uses

Horner's method extends beyond basic evaluation and root-finding to facilitate efficient computation in numerical differentiation. By nesting the polynomial structure, it enables simultaneous evaluation of the function and its derivatives in a single forward pass, akin to forward-mode automatic differentiation. This approach computes Taylor coefficients recursively, reducing the number of operations required for derivative estimation. The method's historical roots trace back to the Ch'in-Horner algorithm, recognized as an early precursor of modern techniques for generating Taylor series. In control systems, Horner's method supports stability analysis by enabling rapid evaluation tests of candidate roots of characteristic polynomials, aiding the verification of system stability criteria. For instance, it is employed to deflate polynomials during root isolation, which complements the Routh-Hurwitz criterion by efficiently handling cases involving known or suspected roots on the imaginary axis or in special array configurations. This application is particularly useful in feedback control design, where rapid polynomial manipulation helps assess Hurwitz stability without full factorization; standard texts highlight its role in root determination for linear time-invariant systems. Within computer algebra systems, Horner's method underpins polynomial factoring and greatest common divisor (GCD) computations by providing an efficient mechanism for the repeated divisions in the Euclidean algorithm. It supports modular evaluation and reduction during subresultant sequence computations, minimizing arithmetic overhead in polynomial remainder sequence calculations. This is essential for symbolic manipulation, where Horner's nesting optimizes the representation and reduction of polynomials over finite fields; influential works on algorithmic algebra emphasize its centrality to these operations. In modern machine learning, Horner's method finds application in evaluating polynomial activations and kernels within neural networks, offering a compact way to compute high-degree polynomials with a reduced operation count.
It is particularly valuable for learnable polynomial activation functions in deep networks, where it supports efficient forward propagation and derivative computation for activations such as rectified power units (RePUs). GPU implementations leverage its sequential structure for batched evaluations in graph neural networks and kernel methods, enhancing scalability for large-scale training, and recent work on encrypted inference and approximation theory underscores its role in optimizing polynomial-based models.

Historical Context

Origins and Development

The origins of Horner's method extend to ancient and medieval mathematics, with early forms of nested evaluation appearing in the works of the 11th-century Persian scholar al-Nasawi, the 12th-century Arab mathematician al-Samaw'al al-Maghribi for root extraction, and the 13th-century Chinese algebraist Zhu Shijie. In the early 19th century, the Italian mathematician Paolo Ruffini developed the method further as part of his work on polynomial factorization and root determination. In his 1804 dissertation Sopra la determinazione delle radici nelle equazioni numeriche di qualunque grado, Ruffini outlined an efficient algorithm for dividing polynomials by linear factors, simplifying the computations without full long division, which laid foundational concepts for later root isolation techniques in European algebra. William George Horner, an English schoolmaster and mathematician, advanced these ideas significantly in 1819 through his paper "A new method of solving numerical equations of all orders, by continuous approximation," published in the Philosophical Transactions of the Royal Society. Horner's approach introduced a nested multiplication scheme for polynomial evaluation, enabling iterative approximation of roots via successive substitutions, which proved particularly effective for high-degree equations where traditional methods were cumbersome. Horner's initial applications emphasized practical numerical computation, targeting the solution of algebraic equations encountered in astronomy and physics, such as those arising in the construction of trigonometric and logarithmic tables. This focus on approximation addressed the limitations of exact methods for real-world calculations, making the technique accessible for manual computation by practitioners. By the mid-19th century, Horner's method gained widespread adoption in British algebra textbooks, where it was termed "Horner's process" for efficient division and root finding.
Mathematicians such as Augustus De Morgan promoted its use in educational contexts, integrating it into standard curricula and ensuring its role as a staple of English algebraic instruction through the end of the century.

Naming and Recognition

Horner's method is named after the British mathematician William George Horner, who described the algorithm in his 1819 paper "A new method of solving numerical equations of all orders, by continuous approximation," published in the Philosophical Transactions of the Royal Society. The method gained prominence in the English-speaking world through Horner's publication, which emphasized its practical utility for approximating roots of polynomials via successive approximations, leading to its widespread adoption in British and American mathematical education. Controversies over priority arose soon after, however, including Theophilus Holdred's independent publication of a similar method in 1820, sparking debates on originality. The name "Horner's method" was specifically applied by Augustus De Morgan in his writings during the mid-19th century, reflecting Horner's role in popularizing the technique among English readers despite its earlier appearances elsewhere. Alternative names include synthetic division, reflecting its use in polynomial division, and the Ruffini-Horner method, acknowledging the contributions of the Italian mathematician Paolo Ruffini, who anticipated the algorithm in his 1804 work Sopra la determinazione delle radici nelle equazioni numeriche di qualunque grado; it is also referred to simply as nested multiplication. By the 1830s, the method appeared in standard algebra textbooks in Britain and the United States, such as those by De Morgan himself, establishing its place in curricula for solving equations efficiently. Throughout the 19th and early 20th centuries, it held a prominent position in English and American educational texts, often presented as a novel computational tool. In the 20th century, historical analyses sparked debates on priority, with Florian Cajori's 1911 article in the Bulletin of the American Mathematical Society arguing that Ruffini's earlier description warranted recognition as the originator, influencing later attributions to both figures.
These discussions highlighted the method's independent rediscoveries, including even earlier traces in medieval mathematics, but the nomenclature persisted due to Horner's influential exposition.

References

  1. https://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/horner-william-george