Finite difference coefficient
from Wikipedia

In mathematics, a finite difference can be used to approximate a derivative to an arbitrary order of accuracy. A finite difference can be central, forward, or backward.

Central finite difference


This table contains the coefficients of the central differences, for several orders of accuracy and with uniform grid spacing:[1]

Derivative 1:
  Accuracy 2 (offsets −1…1): −1/2, 0, 1/2
  Accuracy 4 (offsets −2…2): 1/12, −2/3, 0, 2/3, −1/12
  Accuracy 6 (offsets −3…3): −1/60, 3/20, −3/4, 0, 3/4, −3/20, 1/60
  Accuracy 8 (offsets −4…4): 1/280, −4/105, 1/5, −4/5, 0, 4/5, −1/5, 4/105, −1/280
Derivative 2:
  Accuracy 2 (offsets −1…1): 1, −2, 1
  Accuracy 4 (offsets −2…2): −1/12, 4/3, −5/2, 4/3, −1/12
  Accuracy 6 (offsets −3…3): 1/90, −3/20, 3/2, −49/18, 3/2, −3/20, 1/90
  Accuracy 8 (offsets −4…4): −1/560, 8/315, −1/5, 8/5, −205/72, 8/5, −1/5, 8/315, −1/560
Derivative 3:
  Accuracy 2 (offsets −2…2): −1/2, 1, 0, −1, 1/2
  Accuracy 4 (offsets −3…3): 1/8, −1, 13/8, 0, −13/8, 1, −1/8
  Accuracy 6 (offsets −4…4): −7/240, 3/10, −169/120, 61/30, 0, −61/30, 169/120, −3/10, 7/240
Derivative 4:
  Accuracy 2 (offsets −2…2): 1, −4, 6, −4, 1
  Accuracy 4 (offsets −3…3): −1/6, 2, −13/2, 28/3, −13/2, 2, −1/6
  Accuracy 6 (offsets −4…4): 7/240, −2/5, 169/60, −122/15, 91/8, −122/15, 169/60, −2/5, 7/240
Derivative 5:
  Accuracy 2 (offsets −3…3): −1/2, 2, −5/2, 0, 5/2, −2, 1/2
  Accuracy 4 (offsets −4…4): 1/6, −3/2, 13/3, −29/6, 0, 29/6, −13/3, 3/2, −1/6
  Accuracy 6 (offsets −5…5): −13/288, 19/36, −87/32, 13/2, −323/48, 0, 323/48, −13/2, 87/32, −19/36, 13/288
Derivative 6:
  Accuracy 2 (offsets −3…3): 1, −6, 15, −20, 15, −6, 1
  Accuracy 4 (offsets −4…4): −1/4, 3, −13, 29, −75/2, 29, −13, 3, −1/4
  Accuracy 6 (offsets −5…5): 13/240, −19/24, 87/16, −39/2, 323/8, −1023/20, 323/8, −39/2, 87/16, −19/24, 13/240

For example, the third derivative with a second-order accuracy is

f'''(x_0) \approx \frac{-\tfrac{1}{2} f_{-2} + f_{-1} - f_{1} + \tfrac{1}{2} f_{2}}{h^3},

where h represents a uniform grid spacing between each finite difference interval, and f_n = f(x_0 + n h).

For the d-th derivative with accuracy a, there are 2p + 1 = 2 \lfloor (d+1)/2 \rfloor - 1 + a central coefficients a_{-p}, a_{-p+1}, \dots, a_p. These are given by the solution of the linear equation system

\sum_{n=-p}^{p} n^{j} a_n = \begin{cases} d! & \text{if } j = d, \\ 0 & \text{otherwise,} \end{cases} \qquad j = 0, 1, \dots, 2p,

where the only non-zero value on the right-hand side, d!, appears in the (d+1)-th row.

An open-source implementation for calculating finite difference coefficients of arbitrary derivatives and accuracy orders in one dimension is available.[2]
Given that the left-hand side matrix is a transposed Vandermonde matrix, a rearrangement reveals that the coefficients are essentially computed by fitting a polynomial of degree 2p to a window of 2p + 1 points and differentiating it. Consequently, the coefficients can also be computed as the d-th order derivative of a fully determined Savitzky–Golay filter with polynomial degree 2p and a window size of 2p + 1. For this, open-source implementations are also available.[3] There are two possible definitions, which differ in the ordering of the coefficients: a filter for filtering via discrete convolution, or via a matrix–vector product. The coefficients given in the table above correspond to the latter definition.
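The linear system above can be checked numerically. The sketch below (illustrative code, not the cited open-source implementation) solves the Vandermonde system with exact rational arithmetic via Gauss–Jordan elimination and reproduces rows of the central-difference table:

```python
from fractions import Fraction

def central_coefficients(d, p):
    """Solve sum_n n**j * a_n = d! * [j == d] (j = 0..2p) for the central
    coefficients a_{-p}, ..., a_p of the d-th derivative (divide by h**d to use)."""
    offsets = list(range(-p, p + 1))
    m = len(offsets)
    # Transposed Vandermonde matrix: row j holds n**j for each offset n.
    rows = [[Fraction(n) ** j for n in offsets] for j in range(m)]
    rhs = [Fraction(0)] * m
    fact = Fraction(1)
    for i in range(2, d + 1):
        fact *= i
    rhs[d] = fact  # the only non-zero entry is d! in row d
    # Gauss-Jordan elimination over the rationals (exact, no rounding).
    for col in range(m):
        piv = next(r for r in range(col, m) if rows[r][col] != 0)
        rows[col], rows[piv] = rows[piv], rows[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        inv = Fraction(1) / rows[col][col]
        rows[col] = [v * inv for v in rows[col]]
        rhs[col] = rhs[col] * inv
        for r in range(m):
            if r != col and rows[r][col] != 0:
                f = rows[r][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[col])]
                rhs[r] = rhs[r] - f * rhs[col]
    return rhs

# First derivative, fourth-order accuracy (p = 2): matches the table row.
print(central_coefficients(1, 2))  # [1/12, -2/3, 0, 2/3, -1/12]
```

Because the arithmetic is exact, the output fractions can be compared directly against the tabulated values.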

The theory of Lagrange polynomials provides explicit formulas for the finite difference coefficients.[4] For the first six derivatives, closed-form expressions for the central coefficients exist in terms of generalized harmonic numbers.

Forward finite difference


This table contains the coefficients of the forward differences, for several orders of accuracy and with uniform grid spacing:[1]

Derivative 1:
  Accuracy 1 (offsets 0…1): −1, 1
  Accuracy 2 (offsets 0…2): −3/2, 2, −1/2
  Accuracy 3 (offsets 0…3): −11/6, 3, −3/2, 1/3
  Accuracy 4 (offsets 0…4): −25/12, 4, −3, 4/3, −1/4
  Accuracy 5 (offsets 0…5): −137/60, 5, −5, 10/3, −5/4, 1/5
  Accuracy 6 (offsets 0…6): −49/20, 6, −15/2, 20/3, −15/4, 6/5, −1/6
Derivative 2:
  Accuracy 1 (offsets 0…2): 1, −2, 1
  Accuracy 2 (offsets 0…3): 2, −5, 4, −1
  Accuracy 3 (offsets 0…4): 35/12, −26/3, 19/2, −14/3, 11/12
  Accuracy 4 (offsets 0…5): 15/4, −77/6, 107/6, −13, 61/12, −5/6
  Accuracy 5 (offsets 0…6): 203/45, −87/5, 117/4, −254/9, 33/2, −27/5, 137/180
  Accuracy 6 (offsets 0…7): 469/90, −223/10, 879/20, −949/18, 41, −201/10, 1019/180, −7/10
Derivative 3:
  Accuracy 1 (offsets 0…3): −1, 3, −3, 1
  Accuracy 2 (offsets 0…4): −5/2, 9, −12, 7, −3/2
  Accuracy 3 (offsets 0…5): −17/4, 71/4, −59/2, 49/2, −41/4, 7/4
  Accuracy 4 (offsets 0…6): −49/8, 29, −461/8, 62, −307/8, 13, −15/8
  Accuracy 5 (offsets 0…7): −967/120, 638/15, −3929/40, 389/3, −2545/24, 268/5, −1849/120, 29/15
  Accuracy 6 (offsets 0…8): −801/80, 349/6, −18353/120, 2391/10, −1457/6, 4891/30, −561/8, 527/30, −469/240
Derivative 4:
  Accuracy 1 (offsets 0…4): 1, −4, 6, −4, 1
  Accuracy 2 (offsets 0…5): 3, −14, 26, −24, 11, −2
  Accuracy 3 (offsets 0…6): 35/6, −31, 137/2, −242/3, 107/2, −19, 17/6
  Accuracy 4 (offsets 0…7): 28/3, −111/2, 142, −1219/6, 176, −185/2, 82/3, −7/2
  Accuracy 5 (offsets 0…8): 1069/80, −1316/15, 15289/60, −2144/5, 10993/24, −4772/15, 2803/20, −536/15, 967/240

For example, the first derivative with a third-order accuracy and the second derivative with a second-order accuracy are

f'(x_0) \approx \frac{-\tfrac{11}{6} f_0 + 3 f_1 - \tfrac{3}{2} f_2 + \tfrac{1}{3} f_3}{h},
f''(x_0) \approx \frac{2 f_0 - 5 f_1 + 4 f_2 - f_3}{h^2},

while the corresponding backward approximations are given by

f'(x_0) \approx \frac{\tfrac{11}{6} f_0 - 3 f_{-1} + \tfrac{3}{2} f_{-2} - \tfrac{1}{3} f_{-3}}{h},
f''(x_0) \approx \frac{2 f_0 - 5 f_{-1} + 4 f_{-2} - f_{-3}}{h^2},

where f_n = f(x_0 + n h).

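The order of accuracy of such formulas can be verified empirically: halving h should shrink the error of a third-order formula by roughly a factor of 2³ = 8. A minimal sketch using the third-order forward first-derivative coefficients (−11/6, 3, −3/2, 1/3) from the table above:

```python
import math

def fwd1_o3(f, x, h):
    # Third-order forward first derivative, coefficients -11/6, 3, -3/2, 1/3.
    return (-11*f(x)/6 + 3*f(x + h) - 1.5*f(x + 2*h) + f(x + 3*h)/3) / h

x, exact = 0.5, math.cos(0.5)          # d/dx sin(x) = cos(x)
e1 = abs(fwd1_o3(math.sin, x, 0.10) - exact)
e2 = abs(fwd1_o3(math.sin, x, 0.05) - exact)
print(e1 / e2)  # close to 8, consistent with an O(h^3) truncation error
```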
Backward finite difference


To get the coefficients of the backward approximations from those of the forward ones, give all odd derivatives listed in the table in the previous section the opposite sign, whereas for even derivatives the signs stay the same. The following table illustrates this:[5]

Derivative 1:
  Accuracy 1 (offsets −1…0): −1, 1
  Accuracy 2 (offsets −2…0): 1/2, −2, 3/2
  Accuracy 3 (offsets −3…0): −1/3, 3/2, −3, 11/6
Derivative 2:
  Accuracy 1 (offsets −2…0): 1, −2, 1
  Accuracy 2 (offsets −3…0): −1, 4, −5, 2
Derivative 3:
  Accuracy 1 (offsets −3…0): −1, 3, −3, 1
  Accuracy 2 (offsets −4…0): 3/2, −7, 12, −9, 5/2
Derivative 4:
  Accuracy 1 (offsets −4…0): 1, −4, 6, −4, 1
  Accuracy 2 (offsets −5…0): −2, 11, −24, 26, −14, 3

Arbitrary stencil points


For arbitrary stencil points s_1, \dots, s_N (in units of the grid spacing h) and any derivative of order d up to one less than the number of stencil points, the finite difference coefficients a_1, \dots, a_N can be obtained by solving the linear equations [6]

\sum_{n=1}^{N} s_n^{\,m} a_n = d!\, \delta_{m,d}, \qquad m = 0, 1, \dots, N-1,

where \delta_{m,d} is the Kronecker delta, equal to one if m = d, and zero otherwise.

For example, for the stencil s = (−2, −1, 0, 1, 2) and order of differentiation d = 1, solving this system yields the fourth-order central coefficients (1/12, −2/3, 0, 2/3, −1/12) listed in the table of central differences above.

The order of accuracy of the approximation takes the usual form O(h^{N-d}), where N is the number of stencil points and d the order of differentiation (or better in the case of central finite differences).[citation needed]
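This procedure can be sketched in a few lines (assuming NumPy; the function name is illustrative): build the Vandermonde matrix from the stencil offsets, set the right-hand side to d! times the Kronecker delta, and solve:

```python
import numpy as np
from math import factorial

def fd_coefficients(stencil, d):
    """Weights for the d-th derivative on arbitrary offsets (in units of h);
    divide the result by h**d when applying it."""
    s = np.asarray(stencil, dtype=float)
    N = len(s)
    A = np.vander(s, N, increasing=True).T   # A[m, n] = s_n ** m
    b = np.zeros(N)
    b[d] = factorial(d)                      # d! * Kronecker delta in row d
    return np.linalg.solve(A, b)

print(fd_coefficients([-1, 0, 1], 2))   # second derivative: 1, -2, 1
print(fd_coefficients([0, 1, 2], 1))    # one-sided first derivative: -3/2, 2, -1/2
```

Both results match the corresponding rows of the central and forward tables.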

from Grokipedia
Finite difference coefficients are numerical weights used in finite difference methods to approximate derivatives of a function by linearly combining its values at discrete points on a grid, enabling the solution of differential equations through discretization. These coefficients are derived to achieve a specified order of accuracy, typically by ensuring the approximation is exact for polynomials up to a certain degree, and they form the basis of stencils such as forward, backward, or centered differences. The derivation of finite difference coefficients often relies on Taylor series expansions, where the function's expansion around a point is truncated and matched to the difference formula to cancel lower-order error terms. For instance, the centered difference approximation for the first derivative, f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}, uses coefficients [-1/2, 0, 1/2]/h and achieves second-order accuracy with an error of O(h^2). Higher-order coefficients can be obtained by solving linear systems involving Vandermonde matrices or by using interpolation on more grid points, such as the fourth-order first-derivative approximation with coefficients [1/12, -8/12, 0, 8/12, -1/12]/h. These methods extend to second and higher derivatives, like the standard second-derivative approximation f''(x) \approx \frac{f(x-h) - 2f(x) + f(x+h)}{h^2} with coefficients [1, -2, 1]/h^2 and O(h^2) error. In practice, finite difference coefficients are applied in one-dimensional and multi-dimensional problems, including the numerical solution of partial differential equations such as the heat equation or wave equation, where they discretize spatial operators on uniform or nonuniform grids. For boundary value problems, one-sided coefficients handle endpoints, such as the second-order backward difference for the first derivative: f'(x) \approx \frac{3f(x) - 4f(x-h) + f(x-2h)}{2h}.
Advanced variants, including compact or spectral-like schemes, optimize these coefficients for improved stability and accuracy in time-dependent simulations. Computational tools, such as library functions for automated coefficient generation, facilitate their use in engineering and scientific computing.

Introduction

Definition and Purpose

Finite difference coefficients refer to the specific numerical weights applied to function evaluations at discrete points to approximate the derivatives of a function in numerical analysis. These coefficients transform sampled data into an estimate of the continuous derivative, leveraging the underlying structure of the function without requiring its explicit analytical form. The primary purpose of finite difference coefficients is to enable accurate approximations of derivatives, such as the first derivative f'(x), from discrete data points when the exact functional form is unavailable, complex, or only known numerically. This approach is fundamental in solving differential equations, simulating physical systems, and performing sensitivity analyses in fields like engineering and physics, where direct differentiation is impractical. In standard notation, a uniform grid consists of points x_i = x + i h for integers i within a stencil, where h is the step size. The coefficients c_i are chosen such that \sum c_i f(x_i) \approx h^k f^{(k)}(x), providing the k-th order derivative scaled appropriately by h. For instance, a basic forward difference might employ coefficients [-1, 1] on a two-point stencil to yield \sum c_i f(x_i)/h \approx f'(x), illustrating how these weights balance the contributions from nearby points to mimic the derivative's behavior. Common implementations include forward, backward, and central finite differences, each selecting stencil points asymmetrically or symmetrically around x to optimize accuracy or stability.
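As a minimal illustration of the weighted-sum form \sum c_i f(x_i)/h \approx f'(x), the two-point forward stencil with coefficients (-1, 1) can be evaluated directly (illustrative values; any smooth f works):

```python
# Two-point forward difference: coefficients (-1, 1) on the stencil {x, x + h}.
f = lambda t: t**3
x, h = 2.0, 1e-5
approx = (-1*f(x) + 1*f(x + h)) / h   # sum_i c_i f(x_i) / h
print(approx)   # close to the exact derivative f'(2) = 3 * 2**2 = 12
```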

Historical Development

The development of finite difference coefficients has roots in the late 17th century with Newton's work on interpolation using divided differences. It was further advanced in the early 18th century by Brook Taylor's treatise Methodus Incrementorum Directa et Inversa, which introduced the calculus of finite differences. Key foundations were established later in the 18th century through advancements in difference calculus, notably by Leonhard Euler in his 1755 work Institutiones calculi differentialis, where he explored the calculus of finite differences and introduced notation such as Δy to denote increments. Euler's contributions emphasized the analogy between finite differences and differential operations, providing a framework for later numerical approximations. Joseph-Louis Lagrange further enriched this area in the late 18th century through his papers on interpolation methods from 1783 to 1793, developing formulas that facilitated the use of finite differences in polynomial approximation and difference equations. In the 19th century, George Boole systematized these ideas in his 1860 A Treatise on the Calculus of Finite Differences, which detailed applications of finite differences to interpolation, summation, and the solution of difference equations, building directly on Eulerian and Lagrangian foundations. The 20th century marked the transition of finite difference methods, including coefficient derivations, to practical numerical solutions for partial differential equations (PDEs). This was pioneered by Lewis Fry Richardson in his 1911 paper on approximate arithmetical solutions of physical problems, where he applied finite differences to two-dimensional stress analysis. During the 1920s and 1930s, further refinements came from researchers like Richard Vynne Southwell, who extended Richardson's approaches to elasticity problems, and the 1928 collaboration of Richard Courant, Kurt Friedrichs, and Hans Lewy, which introduced stability conditions essential for reliable PDE solvers.
Mid-century advancements in the 1940s and 1950s, influenced by the emergence of digital computers, saw significant progress, with contributions from figures such as John von Neumann in developing numerical methods for PDEs using finite differences. These efforts laid the groundwork for finite differences in computational fluid dynamics and broader scientific computing. In the post-1970s era, the proliferation of digital computing integrated coefficient generation into software libraries, enabling automated generation and application in simulations; for instance, MATLAB, developed by Cleve Moler in the late 1970s and first released commercially in 1984, incorporated functions like diff for difference-based differentiation, reflecting the method's maturation in computational tools.

Basic Approximations

Forward Finite Difference Coefficients

Forward finite difference coefficients are employed in numerical methods to approximate derivatives using a one-sided stencil that includes the evaluation point x and subsequent points x + h, x + 2h, \dots, x + nh, where h > 0 is the step size. This approach is particularly useful for estimating derivatives at boundaries or where data is only available in the forward direction. The general form for the first derivative approximation is f'(x) \approx \sum_{k=0}^{n} c_k f(x + k h) / h, where the coefficients c_k are chosen to achieve a desired order of accuracy by canceling lower-order terms in the Taylor expansion. For the first-order approximation of the first derivative, the coefficients are [-1, 1], yielding f'(x) \approx \frac{f(x + h) - f(x)}{h}, with a truncation error of O(h). This simple two-point formula arises directly from the definition of the derivative as a limit and provides a basic forward estimate suitable for introductory applications. To derive the error, the Taylor expansion of f(x + h) around x is f(x + h) = f(x) + h f'(x) + \frac{h^2}{2} f''(\xi) for some \xi \in (x, x + h), leading to the leading error term \frac{h}{2} f''(\xi). A second-order accurate example for the first derivative uses a three-point stencil with coefficients [-3, 4, -1], giving f'(x) \approx \frac{-3 f(x) + 4 f(x + h) - f(x + 2h)}{2h}, with truncation error O(h^2). These coefficients are determined by solving a system that matches the Taylor expansion up to the second order, ensuring the linear and quadratic terms are exactly reproduced while the cubic term contributes to the error. The leading error term from the Taylor expansion is \frac{h^2}{3} f'''(\eta) for some \eta in the interval.
The primary advantage of forward finite difference coefficients lies in their simplicity and applicability at boundary points in simulations or data sets, where points to the left of x may be unavailable, allowing straightforward implementation without requiring symmetric data. Compared to central differences, forward schemes generally offer lower accuracy for the same stencil size due to their one-sided nature.
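The difference between the first- and second-order forward schemes can be seen numerically: halving h roughly halves the O(h) error but quarters the O(h^2) error. A small sketch:

```python
import math

def fwd_o1(f, x, h):
    # Two-point forward difference: error O(h)
    return (f(x + h) - f(x)) / h

def fwd_o2(f, x, h):
    # Three-point forward difference, coefficients -3, 4, -1: error O(h^2)
    return (-3*f(x) + 4*f(x + h) - f(x + 2*h)) / (2*h)

x, exact = 1.0, math.exp(1.0)          # d/dx exp(x) = exp(x)
for h in (1e-2, 5e-3):
    print(h, abs(fwd_o1(math.exp, x, h) - exact), abs(fwd_o2(math.exp, x, h) - exact))
```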

Backward Finite Difference Coefficients

Backward finite difference coefficients approximate derivatives at a point x by employing function evaluations at x and preceding points x - h, x - 2h, \dots, x - nh, where h > 0 is the uniform step size. This configuration is asymmetric, relying solely on points to the left of or at the evaluation point, making it ideal for scenarios where data to the right is unavailable. The first-order approximation for the first derivative uses the two-point stencil with points x and x - h, yielding the formula f'(x) \approx \frac{f(x) - f(x - h)}{h}, corresponding to coefficients \frac{1}{h} for f(x) and -\frac{1}{h} for f(x - h), or equivalently [1, -1]/h when ordered as f(x), f(x - h). The truncation error for this approximation is O(h). For the second derivative, a representative first-order accurate backward formula employs the three-point stencil with points x - 2h, x - h, and x, given by f''(x) \approx \frac{f(x - 2h) - 2f(x - h) + f(x)}{h^2}, with coefficients [1, -2, 1]/h^2 ordered from f(x - 2h) to f(x). Although this formula exhibits symmetry akin to central differences, it fits the backward framework when applied at boundaries using only leftward points, and its error is also O(h). Error characteristics of backward finite differences mirror those of forward differences in magnitude but are particularly advantageous at the right endpoint of a computational domain, where forward stencils would extrapolate beyond available data. They are commonly applied in initial value problems, such as time-marching simulations, when future values cannot be accessed without solving the system implicitly. These coefficients relate to forward difference counterparts through a sign change in the step size h.
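The sign-change relation to the forward formulas can be illustrated with the two-point stencils (a minimal sketch):

```python
# The backward formula is the forward formula with h replaced by -h.
f = lambda t: t**2
x, h = 1.0, 1e-4
fwd = (f(x + h) - f(x)) / h        # forward:  f'(x) + O(h)
bwd = (f(x) - f(x - h)) / h        # backward: f'(x) + O(h)
print(fwd, bwd)  # both near f'(1) = 2, with errors of opposite sign
```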

Central Finite Difference Coefficients

Central finite difference coefficients approximate derivatives using function values at points symmetrically placed around the evaluation point x, such as x - h, x, and x + h for the basic second-order schemes, where h is the step size. This symmetry leverages the Taylor expansions from both sides to cancel lower-order error terms, resulting in approximations with even-powered leading errors. For the first derivative, the central finite difference uses the coefficients [-1/2, 0, 1/2] applied to the function values at [x - h, x, x + h], scaled by 1/h, yielding f'(x) \approx \frac{f(x + h) - f(x - h)}{2h}, with a truncation error of O(h^2). For the second derivative, the coefficients [1, -2, 1], scaled by 1/h^2, give f''(x) \approx \frac{f(x - h) - 2f(x) + f(x + h)}{h^2}, also with an error of O(h^2). These formulations arise from equating the linear combination of Taylor expansions to the desired derivative, ensuring the constant and linear terms match while higher terms contribute to the error. Higher-order central approximations extend these patterns using wider symmetric stencils. For odd-order derivatives like the first, the coefficients are antisymmetric (c_{-i} = -c_i, c_0 = 0); for even-order derivatives like the second, they are symmetric (c_{-i} = c_i). A fourth-order accurate first-derivative approximation, for instance, employs a five-point stencil with coefficients [1/12, -2/3, 0, 2/3, -1/12] at points [x - 2h, x - h, x, x + h, x + 2h], scaled by 1/h, or equivalently f'(x) \approx \frac{f(x - 2h) - 8f(x - h) + 8f(x + h) - f(x + 2h)}{12h}, with error O(h^4). This symmetry ensures that error terms involve only even powers of h, enhancing accuracy by naturally suppressing odd-powered contributions that appear in asymmetric schemes.
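A quick numerical check of the fourth-order behavior: halving h should shrink the error by roughly 2^4 = 16. A minimal sketch:

```python
import math

def central4(f, x, h):
    # Five-point central first derivative, coefficients 1/12, -2/3, 0, 2/3, -1/12.
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h)

x, exact = 0.3, math.cos(0.3)          # d/dx sin(x) = cos(x)
e1 = abs(central4(math.sin, x, 0.10) - exact)
e2 = abs(central4(math.sin, x, 0.05) - exact)
print(e1 / e2)  # close to 16, consistent with O(h^4) convergence
```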

General Formulations

Arbitrary Stencil Configurations

In finite difference methods, arbitrary stencil configurations allow for the approximation of derivatives using sets of points x_0, x_1, \dots, x_m that are not necessarily equally spaced or symmetrically arranged around the evaluation point \bar{x}, typically taken as one of the stencil points such as x_0 = \bar{x}. This flexibility is particularly useful in applications involving irregular grids, adaptive meshes, or boundaries where uniform spacing is impractical. The general approach involves expressing the k-th derivative f^{(k)}(\bar{x}) as a linear combination \sum_{i=0}^m c_i f(x_i) \approx f^{(k)}(\bar{x}), where the coefficients c_i are determined to achieve a desired order of accuracy by matching the Taylor series expansion up to the appropriate terms. To find the coefficients, a Vandermonde system is solved based on the condition that the approximation is exact for polynomials up to degree m. Specifically, for an (m+1)-point stencil, the system is given by \sum_{i=0}^m c_i (x_i - \bar{x})^{j} = \begin{cases} 0 & \text{if } j = 0, \dots, k-1, \\ k! & \text{if } j = k, \\ 0 & \text{if } j = k+1, \dots, m, \end{cases} for j = 0, 1, \dots, m.
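The system above applies verbatim to unevenly spaced points. A small sketch (assuming NumPy; the function name is illustrative) solves it for a nonuniform three-point stencil:

```python
import numpy as np
from math import factorial

def nonuniform_coeffs(points, xbar, k):
    """Weights c_i satisfying sum_i c_i (x_i - xbar)**j = k! * delta_{j,k}."""
    d = np.asarray(points, dtype=float) - xbar
    A = np.vander(d, len(d), increasing=True).T   # row j holds (x_i - xbar)**j
    b = np.zeros(len(d))
    b[k] = factorial(k)
    return np.linalg.solve(A, b)

# First derivative at xbar = 0 from unevenly spaced samples at -0.5, 0, 1:
print(nonuniform_coeffs([-0.5, 0.0, 1.0], 0.0, 1))   # [-4/3, 1, 1/3]
```

The resulting weights are exact for all polynomials up to degree 2, which can be checked by applying them to 1, x, and x^2.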