Finite difference coefficient
In mathematics, to approximate a derivative to an arbitrary order of accuracy, it is possible to use the finite difference. A finite difference can be central, forward or backward.
Central finite difference
This table contains the coefficients of the central differences, for several orders of accuracy and with uniform grid spacing:[1]
| Derivative | Accuracy | −5 | −4 | −3 | −2 | −1 | 0 | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 2 |  |  |  |  | −1/2 | 0 | 1/2 |  |  |  |  |
|  | 4 |  |  |  | 1/12 | −2/3 | 0 | 2/3 | −1/12 |  |  |  |
|  | 6 |  |  | −1/60 | 3/20 | −3/4 | 0 | 3/4 | −3/20 | 1/60 |  |  |
|  | 8 |  | 1/280 | −4/105 | 1/5 | −4/5 | 0 | 4/5 | −1/5 | 4/105 | −1/280 |  |
| 2 | 2 |  |  |  |  | 1 | −2 | 1 |  |  |  |  |
|  | 4 |  |  |  | −1/12 | 4/3 | −5/2 | 4/3 | −1/12 |  |  |  |
|  | 6 |  |  | 1/90 | −3/20 | 3/2 | −49/18 | 3/2 | −3/20 | 1/90 |  |  |
|  | 8 |  | −1/560 | 8/315 | −1/5 | 8/5 | −205/72 | 8/5 | −1/5 | 8/315 | −1/560 |  |
| 3 | 2 |  |  |  | −1/2 | 1 | 0 | −1 | 1/2 |  |  |  |
|  | 4 |  |  | 1/8 | −1 | 13/8 | 0 | −13/8 | 1 | −1/8 |  |  |
|  | 6 |  | −7/240 | 3/10 | −169/120 | 61/30 | 0 | −61/30 | 169/120 | −3/10 | 7/240 |  |
| 4 | 2 |  |  |  | 1 | −4 | 6 | −4 | 1 |  |  |  |
|  | 4 |  |  | −1/6 | 2 | −13/2 | 28/3 | −13/2 | 2 | −1/6 |  |  |
|  | 6 |  | 7/240 | −2/5 | 169/60 | −122/15 | 91/8 | −122/15 | 169/60 | −2/5 | 7/240 |  |
| 5 | 2 |  |  | −1/2 | 2 | −5/2 | 0 | 5/2 | −2 | 1/2 |  |  |
|  | 4 |  | 1/6 | −3/2 | 13/3 | −29/6 | 0 | 29/6 | −13/3 | 3/2 | −1/6 |  |
|  | 6 | −13/288 | 19/36 | −87/32 | 13/2 | −323/48 | 0 | 323/48 | −13/2 | 87/32 | −19/36 | 13/288 |
| 6 | 2 |  |  | 1 | −6 | 15 | −20 | 15 | −6 | 1 |  |  |
|  | 4 |  | −1/4 | 3 | −13 | 29 | −75/2 | 29 | −13 | 3 | −1/4 |  |
|  | 6 | 13/240 | −19/24 | 87/16 | −39/2 | 323/8 | −1023/20 | 323/8 | −39/2 | 87/16 | −19/24 | 13/240 |
For example, the third derivative with a second-order accuracy is

$$f'''(x_0) \approx \frac{-\tfrac{1}{2}f_{-2} + f_{-1} - f_{1} + \tfrac{1}{2}f_{2}}{h^{3}} + O\left(h^{2}\right),$$

where $h$ represents a uniform grid spacing between each finite difference interval, and $f_{n} = f(x_{0} + nh)$.
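The order of accuracy of this formula can be checked numerically. The following minimal Python sketch (illustrative, not from the cited implementations) applies it to $f = \sin$, whose third derivative is $-\cos$; halving $h$ should shrink the error by roughly a factor of four:

```python
from math import sin, cos

def third_derivative(f, x0, h):
    # central, second-order accurate third derivative:
    # coefficients (-1/2, 1, 0, -1, 1/2) at offsets -2..2, scaled by 1/h^3
    return (-0.5*f(x0 - 2*h) + f(x0 - h) - f(x0 + h) + 0.5*f(x0 + 2*h)) / h**3

exact = -cos(1.0)                      # (sin)''' = -cos
e1 = abs(third_derivative(sin, 1.0, 0.1) - exact)
e2 = abs(third_derivative(sin, 1.0, 0.05) - exact)
print(e1 / e2)  # close to 4, i.e. O(h^2)
```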
For the $d$-th derivative with accuracy $p$, there are $2\left\lfloor\frac{d+1}{2}\right\rfloor - 1 + p$ central coefficients $a_{-P}, \dots, a_{P}$, where $P = \left\lfloor\frac{d+1}{2}\right\rfloor - 1 + \frac{p}{2}$. These are given by the solution of the linear equation system

$$\begin{bmatrix} (-P)^{0} & \cdots & P^{0} \\ (-P)^{1} & \cdots & P^{1} \\ \vdots & & \vdots \\ (-P)^{2P} & \cdots & P^{2P} \end{bmatrix} \begin{bmatrix} a_{-P} \\ a_{-P+1} \\ \vdots \\ a_{P} \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ d! \\ \vdots \\ 0 \end{bmatrix},$$

where the only non-zero value on the right hand side, $d!$, appears in the $d$-th row (counting from zero).
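This system can be solved exactly in a few lines of Python. The sketch below uses rational arithmetic from the standard library and reproduces rows of the table above (the helper names are illustrative, not from a published API):

```python
# Exact central finite difference coefficients via the Vandermonde-type
# moment system: sum_k a_k * k^m = d! * delta(m, d), m = 0..2P.
from fractions import Fraction
from math import factorial

def solve_exact(A, b):
    """Gaussian elimination over the rationals (mutates A and b)."""
    n = len(b)
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / A[r][r]
    return x

def central_coefficients(d, p):
    """Coefficients a_{-P}..a_P for the d-th derivative, accuracy p (even)."""
    P = (d + 1) // 2 - 1 + p // 2
    offsets = list(range(-P, P + 1))
    n = len(offsets)
    A = [[Fraction(k) ** m for k in offsets] for m in range(n)]
    b = [Fraction(factorial(d)) if m == d else Fraction(0) for m in range(n)]
    return offsets, solve_exact(A, b)

print(central_coefficients(3, 2))
# ([-2, -1, 0, 1, 2], [Fraction(-1, 2), Fraction(1, 1), Fraction(0, 1), Fraction(-1, 1), Fraction(1, 2)])
```

The printed weights match the third-derivative, second-order row of the central table.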
An open source implementation for calculating finite difference coefficients of arbitrary derivatives and accuracy order in one dimension is available.[2]
Given that the left-hand side matrix is a transposed Vandermonde matrix, a rearrangement reveals that the coefficients are essentially computed by fitting a polynomial of degree $2P$ to a window of $2P + 1$ points and differentiating it. Consequently, the coefficients can also be computed as the $d$-th order derivative of a fully determined Savitzky–Golay filter with polynomial degree $2P$ and a window size of $2P + 1$. For this, open source implementations are also available.[3] There are two possible definitions which differ in the ordering of the coefficients: a filter for filtering via discrete convolution or via a matrix-vector product. The coefficients given in the table above correspond to the latter definition.
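This connection can be cross-checked with `scipy.signal.savgol_coeffs`, the companion of the `savgol_filter` routine cited above (a sketch assuming SciPy is installed): a fully determined filter with window size 5 and polynomial degree 4, differentiated once, reproduces the fourth-order central first-derivative coefficients, and the two orderings are reverses of each other.

```python
from scipy.signal import savgol_coeffs

# 'dot' ordering: weights multiply the window samples in increasing-x order,
# matching the matrix-vector definition used by the table above.
c_dot = savgol_coeffs(5, 4, deriv=1, use='dot')
# default 'conv' ordering: weights for use in a discrete convolution (reversed).
c_conv = savgol_coeffs(5, 4, deriv=1)

print(c_dot)   # approximately [ 1/12, -2/3, 0, 2/3, -1/12 ]
```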
The theory of Lagrange polynomials provides explicit formulas for the finite difference coefficients.[4] For the first derivative with accuracy $2P$ (stencil $-P, \dots, P$), the central coefficients are

$$a_k = \frac{(-1)^{k+1} (P!)^2}{k\,(P - k)!\,(P + k)!}, \qquad k = 1, \dots, P,$$

with $a_0 = 0$ and $a_{-k} = -a_k$. For the second derivative,

$$a_k = \frac{2\,(-1)^{k+1} (P!)^2}{k^2\,(P - k)!\,(P + k)!}, \qquad a_0 = -2 H_P^{(2)},$$

with $a_{-k} = a_k$, where $H_P^{(2)} = \sum_{k=1}^{P} k^{-2}$ is a generalized harmonic number; the closed forms for the higher derivatives likewise involve generalized harmonic numbers.
Forward finite difference
This table contains the coefficients of the forward differences, for several orders of accuracy and with uniform grid spacing:[1]
| Derivative | Accuracy | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | −1 | 1 |  |  |  |  |  |  |  |
|  | 2 | −3/2 | 2 | −1/2 |  |  |  |  |  |  |
|  | 3 | −11/6 | 3 | −3/2 | 1/3 |  |  |  |  |  |
|  | 4 | −25/12 | 4 | −3 | 4/3 | −1/4 |  |  |  |  |
|  | 5 | −137/60 | 5 | −5 | 10/3 | −5/4 | 1/5 |  |  |  |
|  | 6 | −49/20 | 6 | −15/2 | 20/3 | −15/4 | 6/5 | −1/6 |  |  |
| 2 | 1 | 1 | −2 | 1 |  |  |  |  |  |  |
|  | 2 | 2 | −5 | 4 | −1 |  |  |  |  |  |
|  | 3 | 35/12 | −26/3 | 19/2 | −14/3 | 11/12 |  |  |  |  |
|  | 4 | 15/4 | −77/6 | 107/6 | −13 | 61/12 | −5/6 |  |  |  |
|  | 5 | 203/45 | −87/5 | 117/4 | −254/9 | 33/2 | −27/5 | 137/180 |  |  |
|  | 6 | 469/90 | −223/10 | 879/20 | −949/18 | 41 | −201/10 | 1019/180 | −7/10 |  |
| 3 | 1 | −1 | 3 | −3 | 1 |  |  |  |  |  |
|  | 2 | −5/2 | 9 | −12 | 7 | −3/2 |  |  |  |  |
|  | 3 | −17/4 | 71/4 | −59/2 | 49/2 | −41/4 | 7/4 |  |  |  |
|  | 4 | −49/8 | 29 | −461/8 | 62 | −307/8 | 13 | −15/8 |  |  |
|  | 5 | −967/120 | 638/15 | −3929/40 | 389/3 | −2545/24 | 268/5 | −1849/120 | 29/15 |  |
|  | 6 | −801/80 | 349/6 | −18353/120 | 2391/10 | −1457/6 | 4891/30 | −561/8 | 527/30 | −469/240 |
| 4 | 1 | 1 | −4 | 6 | −4 | 1 |  |  |  |  |
|  | 2 | 3 | −14 | 26 | −24 | 11 | −2 |  |  |  |
|  | 3 | 35/6 | −31 | 137/2 | −242/3 | 107/2 | −19 | 17/6 |  |  |
|  | 4 | 28/3 | −111/2 | 142 | −1219/6 | 176 | −185/2 | 82/3 | −7/2 |  |
|  | 5 | 1069/80 | −1316/15 | 15289/60 | −2144/5 | 10993/24 | −4772/15 | 2803/20 | −536/15 | 967/240 |
For example, the first derivative with a third-order accuracy and the second derivative with a second-order accuracy are

$$f'(x_0) \approx \frac{-\tfrac{11}{6}f_{0} + 3f_{1} - \tfrac{3}{2}f_{2} + \tfrac{1}{3}f_{3}}{h} + O\left(h^{3}\right),$$

$$f''(x_0) \approx \frac{2f_{0} - 5f_{1} + 4f_{2} - f_{3}}{h^{2}} + O\left(h^{2}\right),$$

while the corresponding backward approximations are given by

$$f'(x_0) \approx \frac{\tfrac{11}{6}f_{0} - 3f_{-1} + \tfrac{3}{2}f_{-2} - \tfrac{1}{3}f_{-3}}{h} + O\left(h^{3}\right),$$

$$f''(x_0) \approx \frac{2f_{0} - 5f_{-1} + 4f_{-2} - f_{-3}}{h^{2}} + O\left(h^{2}\right),$$

where $f_{n} = f(x_{0} + nh)$.
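These four formulas can be exercised numerically. A minimal Python sketch using $f(x) = e^x$ at $x_0 = 0$, where every derivative is exactly 1:

```python
from math import exp

f, h = exp, 0.01

# forward: first derivative, third-order accuracy
d1_fwd = (-11/6*f(0) + 3*f(h) - 3/2*f(2*h) + 1/3*f(3*h)) / h
# forward: second derivative, second-order accuracy
d2_fwd = (2*f(0) - 5*f(h) + 4*f(2*h) - f(3*h)) / h**2
# backward counterparts: odd derivative flips sign, even derivative keeps it
d1_bwd = (11/6*f(0) - 3*f(-h) + 3/2*f(-2*h) - 1/3*f(-3*h)) / h
d2_bwd = (2*f(0) - 5*f(-h) + 4*f(-2*h) - f(-3*h)) / h**2

print(d1_fwd, d2_fwd, d1_bwd, d2_bwd)  # all close to 1
```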
Backward finite difference
To get the coefficients of the backward approximations from those of the forward ones, give all odd derivatives listed in the table in the previous section the opposite sign, whereas for even derivatives the signs stay the same. The following table illustrates this:[5]
| Derivative | Accuracy | −8 | −7 | −6 | −5 | −4 | −3 | −2 | −1 | 0 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 |  |  |  |  |  |  |  | −1 | 1 |
|  | 2 |  |  |  |  |  |  | 1/2 | −2 | 3/2 |
|  | 3 |  |  |  |  |  | −1/3 | 3/2 | −3 | 11/6 |
| 2 | 1 |  |  |  |  |  |  | 1 | −2 | 1 |
|  | 2 |  |  |  |  |  | −1 | 4 | −5 | 2 |
| 3 | 1 |  |  |  |  |  | −1 | 3 | −3 | 1 |
|  | 2 |  |  |  |  | 3/2 | −7 | 12 | −9 | 5/2 |
| 4 | 1 |  |  |  |  | 1 | −4 | 6 | −4 | 1 |
|  | 2 |  |  |  | −2 | 11 | −24 | 26 | −14 | 3 |
Arbitrary stencil points
For arbitrary stencil points $s_1, \dots, s_N$ (in units of the grid spacing $h$) and any derivative of order $d$ up to one less than the number of stencil points, the finite difference coefficients can be obtained by solving the linear equations[6]

$$\sum_{j=1}^{N} a_{j}\, s_{j}^{m} = d!\,\delta_{m,d}, \qquad m = 0, 1, \dots, N - 1,$$

where $\delta_{m,d}$ is the Kronecker delta, equal to one if $m = d$, and zero otherwise. The resulting approximation is

$$f^{(d)}(x_0) \approx \frac{1}{h^{d}} \sum_{j=1}^{N} a_{j}\, f(x_{0} + s_{j} h).$$
For example, for the stencil points $s = (-1, 0, 1)$ and order of differentiation $d = 2$, the equations read

$$\begin{bmatrix} 1 & 1 & 1 \\ -1 & 0 & 1 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 2 \end{bmatrix},$$

with solution $(a_1, a_2, a_3) = (1, -2, 1)$, recovering the standard central approximation $f''(x_0) \approx \left(f_{-1} - 2f_{0} + f_{1}\right)/h^{2}$.
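The Kronecker-delta system can be solved directly for any distinct stencil points. A minimal sketch (the function name `fd_weights` is illustrative, not a library routine):

```python
from math import factorial

def fd_weights(points, x0, d):
    """Weights a_j with f^(d)(x0) ~= sum_j a_j * f(points[j]), obtained
    from sum_j a_j * (x_j - x0)^m = d! * delta(m, d), m = 0..N-1."""
    s = [x - x0 for x in points]
    n = len(s)
    A = [[sj ** m for sj in s] for m in range(n)]            # moment matrix
    b = [float(factorial(d)) if m == d else 0.0 for m in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, n))) / A[r][r]
    return a

print(fd_weights([-1.0, 0.0, 1.0], 0.0, 2))  # [1.0, -2.0, 1.0]
```

The same routine handles uneven spacing, e.g. `fd_weights([-1.0, 0.0, 2.0], 0.0, 1)` for a first derivative on a nonuniform three-point stencil.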
The order of accuracy of the approximation takes the usual form $O\left(h^{N-d}\right)$ (or better in the case of central finite differences).[citation needed]
References
1. Fornberg, Bengt (1988), "Generation of Finite Difference Formulas on Arbitrarily Spaced Grids", Mathematics of Computation, 51 (184): 699–706, doi:10.1090/S0025-5718-1988-0935077-0, ISSN 0025-5718.
2. "A Python package for finite difference numerical derivatives in arbitrary number of dimensions". GitHub. 14 October 2021.
3. "scipy.signal.savgol_filter". SciPy online documentation. 2008–2024.
4. "Finite differences coefficients". StackExchange. 5 June 2023.
5. Taylor, Cameron (12 December 2019). "Finite Difference Coefficients Calculator". MIT.
6. "Finite Difference Coefficients Calculator".
Finite difference coefficient

Introduction
Definition and Purpose
Finite difference coefficients refer to the specific numerical weights applied to function evaluations at discrete stencil points to approximate the derivatives of a function in numerical analysis.[3] These coefficients transform sampled data into an estimate of the continuous derivative, leveraging the underlying structure of the function without requiring its explicit analytical derivative.[4]

The primary purpose of finite difference coefficients is to enable accurate approximations of derivatives, such as the first derivative $f'(x_0)$, from discrete data points when the exact functional form is unavailable, complex, or only known numerically.[3] This approach is fundamental in solving differential equations, simulating physical systems, and performing sensitivity analyses in fields like engineering and computational science, where direct differentiation is impractical.[5]

In standard notation, a stencil consists of points $x_0 + kh$ for integers $k$ within a finite set, where $h$ is the step size. The coefficients $a_k$ are chosen such that $\sum_k a_k f(x_0 + kh) \approx h^{d} f^{(d)}(x_0)$, providing the $d$-th order derivative approximation after scaling by $h^{-d}$.[3] For instance, a basic first-order approximation might employ coefficients $(-1, 1)$ on a two-point stencil to yield $f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h}$, illustrating how these weights balance the contributions from nearby points to mimic the derivative's behavior.[4] Common implementations include forward, backward, and central finite differences, each selecting stencil points asymmetrically or symmetrically around $x_0$ to optimize accuracy or stability.[3]

Historical Development
The development of finite difference coefficients has roots in the late 17th century with Isaac Newton's work on interpolation using finite differences.[6] It was further advanced in the early 18th century by Brook Taylor's 1715 treatise Methodus Incrementorum Directa et Inversa, which introduced the calculus of finite differences.[7] Key foundations were laid later in the 18th century by Leonhard Euler, whose 1755 work Institutiones calculi differentialis explored the calculus of finite differences and introduced notation such as Δy to denote increments.[8] Euler's contributions emphasized the analogy between finite differences and differential operations, providing a framework for later numerical approximations.[8] Joseph-Louis Lagrange further enriched this area in the late 18th century through his papers on interpolation methods from 1783 to 1793, developing formulas that facilitated the use of finite differences in polynomial approximation and difference equations.[9] In the 19th century, George Boole systematized these ideas in his 1860 treatise A Treatise on the Calculus of Finite Differences, which detailed applications of finite differences to interpolation, summation, and the solution of difference equations, building directly on Eulerian and Lagrangian foundations.[10]

The 20th century marked the transition of finite difference methods, including coefficient derivations, to practical numerical solutions for partial differential equations (PDEs), pioneered by Lewis Fry Richardson in his 1911 paper on approximate arithmetical solutions of physical problems, where he applied finite differences to two-dimensional stress analysis.[11] During the 1910s and 1920s, further refinements came from researchers like Richard Vynne Southwell, who extended Richardson's approaches to elasticity problems, and from the 1928 collaboration of Richard Courant, Kurt Friedrichs, and Hans Lewy, which introduced stability conditions essential for reliable PDE solvers.[12] The 1930s to 1950s, influenced by the emergence of digital computers, saw significant progress, with contributions from figures such as John von Neumann in developing numerical methods for PDEs using finite differences.[13] These efforts laid the groundwork for finite differences in computational fluid dynamics and broader numerical analysis.

In the post-1970s era, the proliferation of digital computing integrated finite difference coefficients into software libraries, enabling automated generation and application in simulations; for instance, MATLAB, developed by Cleve Moler in the late 1970s and first released in 1984, incorporated functions like `diff` for finite difference-based differentiation, reflecting the method's maturation in computational tools.[14]
Basic Approximations
Forward Finite Difference Coefficients
Forward finite difference coefficients are employed in numerical methods to approximate derivatives using a one-sided stencil that includes the evaluation point $x_0$ and subsequent points $x_0 + h, x_0 + 2h, \dots$, where $h$ is the step size. This approach is particularly useful for estimating derivatives at boundaries or where data is only available in the forward direction. The general form for the first derivative approximation is $f'(x_0) \approx \frac{1}{h} \sum_{k=0}^{N-1} a_k f(x_0 + kh)$, where the coefficients $a_k$ are chosen to achieve a desired order of accuracy by canceling lower-order terms in the Taylor expansion.[15]

For the first-order approximation of the first derivative, the coefficients are $(-1, 1)$, yielding

$$f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h},$$

with a truncation error of $O(h)$. This simple two-point formula arises directly from the definition of the derivative as a limit and provides a basic forward estimate suitable for introductory numerical differentiation. To derive the error, the Taylor expansion of $f(x_0 + h)$ around $x_0$ is $f(x_0 + h) = f(x_0) + h f'(x_0) + \frac{h^2}{2} f''(\xi)$ for some $\xi \in (x_0, x_0 + h)$, leading to the leading error term $-\frac{h}{2} f''(\xi)$.[3][15]

A second-order accurate example for the first derivative uses a three-point stencil with coefficients $\left(-\tfrac{3}{2}, 2, -\tfrac{1}{2}\right)$, giving

$$f'(x_0) \approx \frac{-\tfrac{3}{2} f(x_0) + 2 f(x_0 + h) - \tfrac{1}{2} f(x_0 + 2h)}{h},$$

and truncation error $O(h^2)$. These coefficients are determined by solving a system that matches the Taylor series up to the second order, ensuring the linear and quadratic terms are exactly reproduced while the cubic term contributes to the error. The leading error term from the Taylor expansion is $\frac{h^2}{3} f'''(\xi)$ for some $\xi$ in the interval.[15]

The primary advantage of forward finite difference coefficients lies in their simplicity and applicability at boundary points in simulations or data sets, where points to the left of $x_0$ may be unavailable, allowing straightforward implementation without requiring symmetric data. Compared to central differences, forward schemes generally offer lower accuracy for the same stencil size due to their one-sided nature.[3][15]

Backward Finite Difference Coefficients
Backward finite difference coefficients approximate derivatives at a point $x_0$ by employing function evaluations at $x_0$ and preceding points $x_0 - h, x_0 - 2h, \dots$, where $h$ is the uniform step size. This stencil configuration is asymmetric, relying solely on points to the left of or at the evaluation point, making it ideal for scenarios where data to the right is unavailable.[15]

The first-order approximation for the first derivative uses the two-point stencil with points $x_0 - h$ and $x_0$, yielding the formula

$$f'(x_0) \approx \frac{f(x_0) - f(x_0 - h)}{h},$$

corresponding to coefficients $1$ for $x_0$ and $-1$ for $x_0 - h$, or equivalently $(-1, 1)$ when ordered as $(x_0 - h, x_0)$. The truncation error for this approximation is $O(h)$.[16]

For the second derivative, a representative first-order accurate backward approximation employs the three-point stencil with points $x_0 - 2h$, $x_0 - h$, and $x_0$, given by

$$f''(x_0) \approx \frac{f(x_0 - 2h) - 2 f(x_0 - h) + f(x_0)}{h^2},$$

with coefficients $(1, -2, 1)$ ordered from $x_0 - 2h$ to $x_0$. Although this formula exhibits symmetry akin to central differences, it fits the backward framework when applied at boundaries using only leftward points, and its error is also $O(h)$.[15]

Error characteristics of backward finite differences mirror those of forward differences in magnitude but are particularly advantageous at the right endpoint of a computational domain, where forward stencils would extrapolate beyond available data. They are commonly applied in initial value problems, such as time-marching simulations, when future values cannot be accessed without solving the system implicitly.[3] These coefficients relate to forward difference counterparts through a sign change in the step size, $h \to -h$.[15]

Central Finite Difference Coefficients
Central finite difference coefficients approximate derivatives using function values at points symmetrically placed around the evaluation point $x_0$, such as $x_0 - h$, $x_0$, and $x_0 + h$ for the basic second-order schemes, where $h$ is the step size. This symmetry leverages the Taylor series expansions from both sides to cancel lower-order error terms, resulting in approximations with even-powered leading errors.[15]

For the first derivative, the central finite difference uses the coefficients $\left(-\tfrac{1}{2}, 0, \tfrac{1}{2}\right)$ applied to the function values at $(x_0 - h, x_0, x_0 + h)$, scaled by $\frac{1}{h}$, yielding

$$f'(x_0) \approx \frac{f(x_0 + h) - f(x_0 - h)}{2h},$$

with a truncation error of $O(h^2)$.[15] For the second derivative, the coefficients $(1, -2, 1)$, scaled by $\frac{1}{h^2}$, give

$$f''(x_0) \approx \frac{f(x_0 - h) - 2 f(x_0) + f(x_0 + h)}{h^2},$$

also with an error of $O(h^2)$. These formulations arise from equating the linear combination of Taylor expansions to the desired derivative, ensuring the constant and linear terms match while higher terms contribute to the error.[15]

Higher-order central approximations extend these patterns using wider symmetric stencils. For odd-order derivatives like the first, coefficients are antisymmetric ($a_{-k} = -a_k$, $a_0 = 0$); for even-order like the second, they are symmetric ($a_{-k} = a_k$). A fourth-order accurate first derivative, for instance, employs a five-point stencil with coefficients $\left(\tfrac{1}{12}, -\tfrac{2}{3}, 0, \tfrac{2}{3}, -\tfrac{1}{12}\right)$ at the points $x_0 - 2h$ through $x_0 + 2h$, scaled by $\frac{1}{h}$, or equivalently

$$f'(x_0) \approx \frac{f(x_0 - 2h) - 8 f(x_0 - h) + 8 f(x_0 + h) - f(x_0 + 2h)}{12h},$$

with error $O(h^4)$.[17] This symmetry ensures that error terms involve only even powers of $h$, enhancing accuracy by naturally suppressing odd-powered contributions that appear in asymmetric schemes.[15]

General Formulations
Arbitrary Stencil Configurations
In finite difference methods, arbitrary stencil configurations allow for the approximation of derivatives using sets of points that are not necessarily equally spaced or symmetrically arranged around the evaluation point, typically taken as one of the stencil points such as $x_0$. This flexibility is particularly useful in applications involving irregular grids, adaptive meshes, or boundaries where uniform spacing is impractical. The general approach involves expressing the $d$-th derivative as a linear combination $f^{(d)}(x_0) \approx \sum_{j=1}^{N} a_j f(x_j)$, where the coefficients $a_j$ are determined to achieve a desired order of accuracy by matching the Taylor series expansion up to the appropriate terms.[5]

To find the coefficients, a Vandermonde system is solved based on the condition that the approximation is exact for polynomials up to degree $N - 1$. Specifically, for an $N$-point stencil, the system is given by

$$\sum_{j=1}^{N} a_j (x_j - x_0)^m = d!\,\delta_{m,d}, \qquad m = 0, 1, \dots, N - 1,$$

yielding accuracy of order at least $N - d$. This linear system can be represented in matrix form as $V^{\mathsf{T}} a = d!\, e_d$, where $V^{\mathsf{T}}$ is the transposed Vandermonde matrix with entries $(x_j - x_0)^m$, and $e_d$ is the unit vector with 1 in the $d$-th position (0-indexed). The matrix is invertible provided the points are distinct, though numerical stability decreases for large $N$ due to the ill-conditioning of Vandermonde matrices.[5][18]

For example, consider approximating the first derivative ($d = 1$) using the uneven stencil points $x_0 - h$, $x_0$, and $x_0 + 2h$. The Vandermonde system for second-order accuracy ($N = 3$) becomes

$$\begin{bmatrix} 1 & 1 & 1 \\ -h & 0 & 2h \\ h^2 & 0 & 4h^2 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix},$$

with solution $a_1 = -\frac{2}{3h}$, $a_2 = \frac{1}{2h}$, $a_3 = \frac{1}{6h}$. Thus,

$$f'(x_0) \approx \frac{-4 f(x_0 - h) + 3 f(x_0) + f(x_0 + 2h)}{6h},$$

and the truncation error is $O(h^2)$, where $h$ scales the stencil width. This illustrates how the coefficients adapt to the point distribution, unlike the fixed formulas for uniform grids.[5]

Arbitrary stencils introduce challenges, including higher computational cost from solving the linear system at each evaluation point, especially on adaptive grids where stencils vary spatially.
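An uneven-stencil first-derivative formula on the points $x_0 - h$, $x_0$, $x_0 + 2h$ is, by construction, exact for quadratics. A minimal sketch checking this (the quadratic is an arbitrary choice):

```python
# Weights (-4, 3, 1)/(6h) on the uneven stencil (x0-h, x0, x0+2h)
# approximate f'(x0) with second-order accuracy: exact for quadratics.
h = 0.1

def f(x):
    return 3*x**2 - 2*x + 1  # arbitrary quadratic; f'(0) = -2

approx = (-4*f(-h) + 3*f(0) + f(2*h)) / (6*h)
print(approx)  # -2.0 up to roundoff
```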
Additionally, sensitivity to point clustering can lead to ill-conditioned systems and reduced accuracy if points are too closely spaced relative to the function's smoothness, potentially amplifying rounding errors in floating-point arithmetic. These configurations relate to generalized difference operators, which extend classical forward, backward, or central differences to irregular supports while preserving the core linear-combination structure for derivative approximation.[5][18]

Higher-Order Coefficient Generation
To achieve higher-order accuracy in finite difference approximations, the stencil size is increased to incorporate more points, allowing the coefficients to cancel additional terms in the Taylor series expansion beyond the leading-order derivative. This results in a smaller truncation error, typically of the form $O(h^p)$, where $p$ is the order of accuracy and $h$ is the grid spacing. For central finite differences approximating the first derivative on a uniform grid, a stencil with $2k + 1$ points can yield accuracy of order $2k$, as the symmetric arrangement naturally eliminates the odd-powered error terms up to that degree.[19][1]

A representative example is the fourth-order central approximation for the first derivative using a 5-point stencil ($k = 2$) at points $x_0 - 2h$, $x_0 - h$, $x_0$, $x_0 + h$, and $x_0 + 2h$:

$$f'(x_0) \approx \frac{f(x_0 - 2h) - 8 f(x_0 - h) + 8 f(x_0 + h) - f(x_0 + 2h)}{12h},$$

with coefficients $\tfrac{1}{12}$, $-\tfrac{2}{3}$, $0$, $\tfrac{2}{3}$, $-\tfrac{1}{12}$, respectively, and truncation error $O(h^4)$. This formulation cancels the lower-order terms in the Taylor expansion, leaving only the higher-order residual.[3][1]

In general, the order of accuracy for the first derivative relates to the number of stencil points minus the derivative order, with central schemes achieving even orders due to symmetry; for instance, a 7-point stencil ($k = 3$) supports sixth-order accuracy with error $O(h^6)$. Common coefficients for central first-derivative approximations up to sixth order (omitting the factor $\frac{1}{h}$ for brevity) are summarized in the following table, where the coefficients are listed from left to right for the points $x_0 - kh$ to $x_0 + kh$:

| Order | Stencil Points | Coefficients |
|---|---|---|
| 2 | 3 ($k = 1$) | $-\tfrac{1}{2}$, $0$, $\tfrac{1}{2}$ |
| 4 | 5 ($k = 2$) | $\tfrac{1}{12}$, $-\tfrac{2}{3}$, $0$, $\tfrac{2}{3}$, $-\tfrac{1}{12}$ |
| 6 | 7 ($k = 3$) | $-\tfrac{1}{60}$, $\tfrac{3}{20}$, $-\tfrac{3}{4}$, $0$, $\tfrac{3}{4}$, $-\tfrac{3}{20}$, $\tfrac{1}{60}$ |
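The sixth-order behavior can be checked numerically with the coefficients $(-\tfrac{1}{60}, \tfrac{3}{20}, -\tfrac{3}{4}, 0, \tfrac{3}{4}, -\tfrac{3}{20}, \tfrac{1}{60})$: halving $h$ should reduce the error by roughly $2^6 = 64$. A minimal Python sketch using $f = \sin$:

```python
from math import sin, cos

def d1_central6(f, x, h):
    # sixth-order central first derivative, coefficients
    # (-1/60, 3/20, -3/4, 0, 3/4, -3/20, 1/60) scaled by 1/h
    return (-f(x - 3*h)/60 + 3*f(x - 2*h)/20 - 3*f(x - h)/4
            + 3*f(x + h)/4 - 3*f(x + 2*h)/20 + f(x + 3*h)/60) / h

exact = cos(1.0)                       # (sin)' = cos
e1 = abs(d1_central6(sin, 1.0, 0.2) - exact)
e2 = abs(d1_central6(sin, 1.0, 0.1) - exact)
print(e1 / e2)  # close to 64, i.e. O(h^6)
```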
