Midpoint method
In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for numerically solving the differential equation
$$ y'(t) = f(t, y(t)), \qquad y(t_0) = y_0. $$
The explicit midpoint method is given by the formula
$$ y_{n+1} = y_n + h\, f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} f(t_n, y_n)\right), \tag{1e} $$
the implicit midpoint method by
$$ y_{n+1} = y_n + h\, f\!\left(t_n + \tfrac{h}{2},\; \tfrac{1}{2}(y_n + y_{n+1})\right), \tag{1i} $$
for $n = 0, 1, 2, \dots$ Here, $h$ is the step size (a small positive number), and $y_n$ is the computed approximate value of $y(t_n)$. The explicit midpoint method is sometimes also known as the modified Euler method;[1] the implicit method is the simplest collocation method and, applied to Hamiltonian dynamics, a symplectic integrator. Note that the modified Euler method can also refer to Heun's method;[2] for further clarity, see List of Runge–Kutta methods.
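As a concrete illustration, formula (1e) translates directly into code. The sketch below is a minimal Python version; the test problem $y' = y$, $y(0) = 1$ and the step size are illustrative choices, not from the article.

```python
# One step of the explicit midpoint method (1e) -- a minimal sketch.
# The test problem y' = y and the step size are illustrative choices.

def midpoint_step(f, t, y, h):
    """Advance the approximation y ~ y(t) by one step of size h."""
    slope_start = f(t, y)              # slope at (t_n, y_n)
    y_mid = y + (h / 2) * slope_start  # Euler half-step to the midpoint
    return y + h * f(t + h / 2, y_mid) # full step using the midpoint slope

# Example: y' = y, y(0) = 1, whose exact solution is e^t.
f = lambda t, y: y
y1 = midpoint_step(f, 0.0, 1.0, 0.1)
# y1 = 1 + 0.1 * (1 + 0.05) = 1.105; compare e^0.1 ≈ 1.10517
```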
The name of the method comes from the fact that in the formula above, the function $f$ giving the slope of the solution is evaluated at the midpoint between $t_n$, at which the value of $y(t)$ is known, and $t_{n+1}$, at which the value of $y(t)$ needs to be found.
A geometric interpretation may give a better intuitive understanding of the method (see figure at right). In the basic Euler's method, the tangent of the curve at $(t_n, y_n)$ is computed using $f(t_n, y_n)$. The next value $y_{n+1}$ is found where the tangent intersects the vertical line $t = t_{n+1}$. However, if the second derivative is only positive between $t_n$ and $t_{n+1}$, or only negative (as in the diagram), the curve will increasingly veer away from the tangent, leading to larger errors as $h$ increases. The diagram illustrates that the tangent at the midpoint (upper, green line segment) would most likely give a more accurate approximation of the curve in that interval. However, this midpoint tangent could not be accurately calculated because we do not know the curve (that is what is to be calculated). Instead, this tangent is estimated by using the original Euler's method to estimate the value of $y(t)$ at the midpoint, then computing the slope of the tangent with $f$. Finally, the improved tangent is used to calculate the value of $y_{n+1}$ from $y_n$. This last step is represented by the red chord in the diagram. Note that the red chord is not exactly parallel to the green segment (the true tangent), due to the error in estimating the value of $y(t)$ at the midpoint.
The local error at each step of the midpoint method is of order $O(h^3)$, giving a global error of order $O(h^2)$. Thus, while more computationally intensive than Euler's method, the midpoint method's error generally decreases faster as $h \to 0$.
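The $O(h^2)$ global-error claim can be checked empirically: halving the step size should cut the error at the endpoint by roughly a factor of $2^2 = 4$. A minimal sketch, using the illustrative problem $y' = y$ on $[0, 1]$ (not from the article):

```python
# Empirical check of the O(h^2) global error: halving the step size
# should reduce the endpoint error by about 2^2 = 4.
# The test problem y' = y on [0, 1] is an illustrative choice.

import math

def midpoint_integrate(f, t0, y0, T, h):
    """Integrate y' = f(t, y) from t0 to T with the explicit midpoint method."""
    t, y = t0, y0
    for _ in range(round((T - t0) / h)):
        y = y + h * f(t + h / 2, y + (h / 2) * f(t, y))
        t += h
    return y

f = lambda t, y: y
err = lambda h: abs(midpoint_integrate(f, 0.0, 1.0, 1.0, h) - math.e)
ratio = err(0.01) / err(0.005)
# ratio is close to 4, consistent with second-order convergence
```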
The methods are examples of a class of higher-order methods known as Runge–Kutta methods.
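Viewed as Runge–Kutta methods, two-stage second-order schemes share one template parameterized by Butcher coefficients; the midpoint method is the case $(c_2, a_{21}, b_1, b_2) = (\tfrac12, \tfrac12, 0, 1)$. The sketch below also evaluates Ralston's standard coefficients $(\tfrac23, \tfrac23, \tfrac14, \tfrac34)$ for comparison; that comparison is an added illustration, not from this article.

```python
# Generic two-stage explicit Runge-Kutta step parameterized by its
# Butcher coefficients. The midpoint method is c2 = 1/2, a21 = 1/2,
# b = (0, 1); Ralston's coefficients are shown only for comparison.

def rk2_step(f, t, y, h, c2, a21, b1, b2):
    k1 = f(t, y)
    k2 = f(t + c2 * h, y + a21 * h * k1)
    return y + h * (b1 * k1 + b2 * k2)

f = lambda t, y: y
midpoint = rk2_step(f, 0.0, 1.0, 0.1, 0.5, 0.5, 0.0, 1.0)
ralston  = rk2_step(f, 0.0, 1.0, 0.1, 2/3, 2/3, 0.25, 0.75)
# for this linear problem both give 1 + h + h^2/2 = 1.105
```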
Derivation of the midpoint method

The midpoint method is a refinement of the Euler method and is derived in a similar manner. The key to deriving Euler's method is the approximate equality
$$ y(t + h) \approx y(t) + h\, y'(t), \tag{2} $$
which is obtained from the slope formula
$$ y'(t) \approx \frac{y(t+h) - y(t)}{h} \tag{3} $$
and keeping in mind that $y'(t) = f(t, y(t))$.
For the midpoint methods, one replaces (3) with the more accurate
$$ y'\!\left(t + \tfrac{h}{2}\right) \approx \frac{y(t+h) - y(t)}{h}, $$
when instead of (2) we find
$$ y(t+h) \approx y(t) + h\, y'\!\left(t + \tfrac{h}{2}\right). \tag{4} $$
One cannot use this equation to find $y(t+h)$ as one does not know $y$ at $t + h/2$. The solution is then to use a Taylor series expansion exactly as if using the Euler method to solve for $y(t + h/2)$:
$$ y\!\left(t + \tfrac{h}{2}\right) \approx y(t) + \tfrac{h}{2}\, y'(t) = y(t) + \tfrac{h}{2}\, f(t, y(t)), $$
which, when plugged into (4), gives us
$$ y(t+h) \approx y(t) + h\, f\!\left(t + \tfrac{h}{2},\; y(t) + \tfrac{h}{2}\, f(t, y(t))\right) $$
and the explicit midpoint method (1e).
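The derivation predicts a local (one-step) error of order $O(h^3)$, which can be checked numerically: halving $h$ should shrink the one-step error by about $2^3 = 8$. A sketch under the illustrative choice $y' = y$, $y(0) = 1$:

```python
# One-step (local) error of the explicit midpoint method on y' = y,
# y(0) = 1: the defect against the exact value e^h should scale as h^3,
# so halving h divides it by about 8. Illustrative sketch.

import math

def one_step_error(h):
    f = lambda t, y: y
    y1 = 1.0 + h * f(h / 2, 1.0 + (h / 2) * f(0.0, 1.0))
    return abs(math.exp(h) - y1)

ratios = [one_step_error(h) / one_step_error(h / 2) for h in (0.1, 0.05)]
# each ratio is close to 8, confirming O(h^3) local error
```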
The implicit method (1i) is obtained by approximating the value at the half step $t + h/2$ by the midpoint of the line segment from $y(t)$ to $y(t+h)$,
$$ y\!\left(t + \tfrac{h}{2}\right) \approx \tfrac{1}{2}\bigl(y(t) + y(t+h)\bigr), $$
and thus
$$ \frac{y(t+h) - y(t)}{h} \approx y'\!\left(t + \tfrac{h}{2}\right) \approx f\!\left(t + \tfrac{h}{2},\; \tfrac{1}{2}\bigl(y(t) + y(t+h)\bigr)\right). $$
Inserting the approximation $y_{n+1} = y_n + h\,k$ for the value of $y(t_n + h)$ results in the implicit Runge-Kutta method
$$ k = f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2}\, k\right), \qquad y_{n+1} = y_n + h\, k, $$
which contains the implicit Euler method with step size $h/2$ as its first part.
Because of the time symmetry of the implicit method, all terms of even degree in $h$ of the local error cancel, so that the local error is automatically of order $O(h^3)$. Replacing the implicit with the explicit Euler method in the determination of $k$ results again in the explicit midpoint method.
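For small $h$, the implicit stage equation $k = f(t + h/2,\, y + \tfrac{h}{2}k)$ can be solved by simple fixed-point iteration (the contraction factor is roughly $Lh/2$ for Lipschitz constant $L$). A sketch; the test problem $y' = -y$ and the tolerance are illustrative choices:

```python
# Implicit midpoint step: solve k = f(t + h/2, y + (h/2) k) by
# fixed-point iteration, then set y_{n+1} = y_n + h k. The test
# problem y' = -y and the tolerance are illustrative choices.

def implicit_midpoint_step(f, t, y, h, tol=1e-12, max_iter=100):
    k = f(t, y)                          # explicit slope as initial guess
    for _ in range(max_iter):
        k_next = f(t + h / 2, y + (h / 2) * k)
        converged = abs(k_next - k) < tol
        k = k_next
        if converged:
            break
    return y + h * k

f = lambda t, y: -y
y1 = implicit_midpoint_step(f, 0.0, 1.0, 0.1)
# for this linear problem the step reduces to
# y1 = (1 - h/2)/(1 + h/2) ≈ 0.904762, close to e^(-0.1) ≈ 0.904837
```

For stiff problems a Newton iteration would replace the fixed-point loop, since the contraction condition fails for large $Lh$.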
Notes
- ^ Süli & Mayers 2003, p. 328
- ^ Burden & Faires 2010, p. 286
References
- Griffiths, D. V.; Smith, I. M. (1991). Numerical methods for engineers: a programming approach. Boca Raton: CRC Press. p. 218. ISBN 0-8493-8610-1.
- Süli, Endre; Mayers, David (2003). An Introduction to Numerical Analysis. Cambridge University Press. ISBN 0-521-00794-1.
- Burden, Richard; Faires, John (2010). Numerical Analysis. Richard Stratton. p. 286. ISBN 978-0-538-73351-9.
Mathematical formulation
Definition
The midpoint method is a specific numerical technique for approximating solutions to initial value problems in ordinary differential equations, serving as an explicit second-order Runge-Kutta method.[4] It improves upon basic approaches by evaluating the derivative at the midpoint of each time interval, providing a more accurate estimate of the solution's behavior over that step compared to first-order methods.[5] This method requires two evaluations of the right-hand side function per step, balancing computational efficiency with enhanced precision.[4] The midpoint method addresses initial value problems of the form $y'(t) = f(t, y)$, $y(t_0) = y_0$, where a fixed step size $h$ is used to generate a sequence of approximations $y_n \approx y(t_n)$ with $t_n = t_0 + nh$.[6] It incorporates a predictor step to estimate the solution at the interval's midpoint, then uses the slope there to advance the approximation, thereby capturing curvature effects that simpler linear extrapolations miss.[4] Historically, the midpoint method emerged as part of the early development of Runge-Kutta methods in the late 19th and early 20th centuries, initially introduced by Carl Runge in 1895 as an adaptation of midpoint quadrature for differential equations.[7] It was further refined by Karl Heun in 1900 and Wilhelm Kutta in 1901, who integrated midpoint evaluations into higher-order schemes, establishing its foundational role in numerical ODE solvers.[7] This evolution addressed the limitations of the forward Euler method, a simpler predecessor that relies solely on the initial slope and achieves only first-order accuracy.[5]

General form for ODEs
The midpoint method is applied to initial value problems for ordinary differential equations (ODEs) of the form $y'(t) = f(t, y(t))$, $y(t_0) = y_0$, where $f$ is a sufficiently smooth function.[3] For a scalar ODE, the explicit midpoint method advances the numerical solution from $t_n$ to $t_{n+1} = t_n + h$ using a step size $h$ via the following iterative formula:
$$ k_1 = f(t_n, y_n), \qquad y_{n+1} = y_n + h\, f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} k_1\right), $$
where $y_n + \tfrac{h}{2} k_1$ serves as a temporary predictor at the midpoint.[3][8] This formulation follows a predictor-corrector style, with the first stage predicting the midpoint value explicitly using the slope at the current point, and the second stage correcting the full step using the slope at that predicted midpoint. This explicit midpoint method is distinct from implicit variants, such as the implicit midpoint rule, which solve an equation involving the unknown $y_{n+1}$ at both stages and are not addressed here.[3] The method extends naturally to systems of first-order ODEs, where $\mathbf{y}$ is a vector, $\mathbf{f} : [t_0, T] \times \mathbb{R}^d \to \mathbb{R}^d$, and all operations apply component-wise. The iterative formula becomes
$$ \mathbf{y}_{n+1} = \mathbf{y}_n + h\, \mathbf{f}\!\left(t_n + \tfrac{h}{2},\; \mathbf{y}_n + \tfrac{h}{2}\, \mathbf{f}(t_n, \mathbf{y}_n)\right). $$
For well-posedness and uniqueness of solutions, $f$ must satisfy a Lipschitz condition in $y$ uniformly in $t$.[3]

Derivation
Taylor series expansion approach
The Taylor series expansion provides a foundational approach to deriving the midpoint method for solving the initial value problem $y' = f(t, y)$, $y(t_0) = y_0$, by matching the method's update formula to the exact solution's expansion up to second order. Consider the exact solution at $t_{n+1} = t_n + h$, where $h$ is the step size. Expanding around $t_n$, the solution satisfies
$$ y(t_n + h) = y(t_n) + h\, y'(t_n) + \frac{h^2}{2}\, y''(t_n) + O(h^3). $$
Since $y'(t_n) = f(t_n, y(t_n))$, the first-order term is $h\, f(t_n, y(t_n))$.[9] To incorporate the second-order term, differentiate $y' = f(t, y)$ using the chain rule, yielding
$$ y''(t) = f_t(t, y) + f_y(t, y)\, f(t, y). $$
This expression, evaluated at $t_n$, captures the quadratic contribution to the expansion. The forward Euler method approximates only up to the linear term, achieving first-order accuracy with local truncation error $O(h^2)$.[9] The midpoint method improves accuracy by approximating the slope at a midpoint to match the $h^2$ term. Specifically, it uses an intermediate step to estimate the solution at $t_n + h/2$, given by $y_n + \tfrac{h}{2} f(t_n, y_n)$, and evaluates the right-hand side there: $f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} f(t_n, y_n)\right)$. The update formula then becomes
$$ y_{n+1} = y_n + h\, f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} f(t_n, y_n)\right). $$
This substitution ensures the method's expansion aligns with the exact series up to $O(h^2)$.[9] To verify the order, expand the method's output using Taylor series for $f$ around $(t_n, y_n)$. The resulting series matches the exact expansion's constant, linear, and quadratic terms, leaving a local truncation error of $O(h^3)$, which confirms second-order accuracy. This derivation highlights the method's superiority over first-order schemes like Euler by systematically incorporating higher-order corrections via series matching.[9]

Runge-Kutta interpretation
The midpoint method can be viewed as a two-stage explicit Runge-Kutta method of order 2 for solving the initial value problem $y' = f(t, y)$, $y(t_0) = y_0$.[10] Explicit Runge-Kutta methods approximate the solution at $t_{n+1} = t_n + h$ via
$$ y_{n+1} = y_n + h \sum_{i=1}^{s} b_i k_i, $$
where the intermediate stages are computed as
$$ k_i = f\!\left(t_n + c_i h,\; y_n + h \sum_{j=1}^{i-1} a_{ij} k_j\right) $$
for $i = 1, \dots, s$, with the coefficients $b_i$, $c_i$, and $a_{ij}$ (for $j < i$) defining the method.[11] For the midpoint method, $s = 2$ and the coefficients form the following Butcher tableau:
$$ \begin{array}{c|cc} 0 & & \\ \tfrac{1}{2} & \tfrac{1}{2} & \\ \hline & 0 & 1 \end{array} $$
Thus, $c_2 = \tfrac{1}{2}$, $a_{21} = \tfrac{1}{2}$, and $(b_1, b_2) = (0, 1)$.[12] This configuration satisfies the order conditions for a second-order method, namely $b_1 + b_2 = 1$ and $b_2 c_2 = \tfrac{1}{2}$.[11] In contrast to Ralston's method, another two-stage second-order Runge-Kutta method that minimizes the local truncation error bound by setting $c_2 = a_{21} = \tfrac{2}{3}$ and $(b_1, b_2) = (\tfrac{1}{4}, \tfrac{3}{4})$, the midpoint method places all weight on the second stage.[13]

Error analysis
Local truncation error
The local truncation error for the midpoint method is defined as the difference between the exact solution value at the next time step and the value obtained by applying a single step of the method starting from the exact solution at the current time step, denoted as $\tau_{n+1} = y(t_{n+1}) - \tilde{y}_{n+1}$, where $y$ is exact and $\tilde{y}_{n+1}$ represents the numerical approximation after one step.[14] Assuming the right-hand side function $f$ and its partial derivatives up to third order are continuous, the local truncation error can be derived using Taylor series expansions around $(t_n, y(t_n))$. The expansions for $y(t_{n+1})$ and the intermediate terms in the method cancel out the constant, linear, and quadratic terms, leaving a remainder involving the third derivative.[14] Specifically, the error is a term of size $O(h^3)$ involving third-order derivatives of the solution evaluated at some $\xi \in (t_n, t_{n+1})$. This principal error term, involving the third derivative of the solution, demonstrates that the midpoint method achieves second-order accuracy.[14]

Global error and convergence
The global error in the midpoint method for solving the initial value problem $y' = f(t, y)$, $y(t_0) = y_0$, over a fixed time interval $[t_0, T]$ with steps of size $h = (T - t_0)/N$, is defined as $e_n = y(t_n) - y_n$, where $y(t_n)$ is the exact solution at the $n$-th time point and $y_n$ is the numerical approximation. Under suitable conditions on $f$, this error satisfies $e_n = O(h^2)$ as $h \to 0$ for each fixed $t_n$, and uniformly $\max_{0 \le n \le N} |e_n| = O(h^2)$.[15][16] To establish this bound, consider the error recursion derived from the method's update and the exact solution's Taylor expansion. Assuming $f$ satisfies a Lipschitz condition in $y$ with constant $L$, i.e., $|f(t, u) - f(t, v)| \le L|u - v|$ for all $t$ and relevant $u, v$, the method's increment function inherits a Lipschitz constant $\tilde{L}$ from $L$, and the global error satisfies
$$ |e_{n+1}| \le (1 + h\tilde{L})\, |e_n| + C h^3, $$
where the $C h^3$ term arises from the local truncation error of the method. Iterating this inequality from $n = 0$ (with $e_0 = 0$) yields a geometric series summation:
$$ |e_n| \le \frac{C h^2}{\tilde{L}} \left( e^{\tilde{L}(t_n - t_0)} - 1 \right) \le C' h^2 $$
for some constant $C'$ independent of $h$, using the bound $(1 + h\tilde{L})^n \le e^{\tilde{L}(t_n - t_0)}$. Thus, the error remains $O(h^2)$ over the fixed interval.[15] The convergence of the midpoint method follows from a general theorem for one-step methods: if the method is consistent (local truncation error $O(h^{p+1})$) and the problem is well-posed (Lipschitz continuous $f$), then the global error is $O(h^p)$. For the explicit midpoint method, which is a second-order Runge-Kutta scheme with $p = 2$ when $f$ is continuously differentiable and Lipschitz in $y$, the theorem implies $\max_n |e_n| \le C h^2$ for some $C$ independent of $h$, for $h$ sufficiently small.[15][16] In practice, this quadratic convergence means that halving the step size reduces the global error by a factor of approximately 4, enabling efficient accuracy control in simulations by refining the grid.[15]

Implementation and examples
Algorithm steps
The midpoint method approximates solutions to the initial value problem $y' = f(t, y)$, $y(t_0) = y_0$ over the interval $[t_0, T]$ using a fixed step size $h$. The algorithm initializes the current time $t = t_0$ and solution value $y = y_0$, then iteratively advances the solution until $t$ reaches or exceeds $T$, producing a sequence of approximation points $(t_n, y_n)$.[1] The required inputs are the right-hand side function $f$, initial time $t_0$, initial value $y_0$, step size $h$, and end time $T$; the output is the discrete solution trajectory where $y_n \approx y(t_n)$.[2] The core iteration follows this pseudocode structure:

initialize t = t_0, y = y_0
while t < T:
k1 = f(t, y)
temp = y + (h/2) * k1
k2 = f(t + h/2, temp)
y = y + h * k2
t = t + h
output sequence (t_n, y_n)
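The pseudocode above translates directly into Python. A minimal sketch; the demo problem $y' = -2ty$, $y(0) = 1$ (exact solution $e^{-t^2}$) is an illustrative choice, not from the text.

```python
# Direct Python translation of the pseudocode above. The demo problem
# y' = -2*t*y with y(0) = 1 (exact solution exp(-t^2)) is an
# illustrative choice.

import math

def midpoint_solve(f, t0, y0, T, h):
    """Return the list of points (t_n, y_n) on [t0, T]."""
    t, y = t0, y0
    points = [(t, y)]
    while t < T - 1e-12:          # small tolerance guards against float drift
        k1 = f(t, y)
        k2 = f(t + h / 2, y + (h / 2) * k1)
        y = y + h * k2
        t = t + h
        points.append((t, y))
    return points

pts = midpoint_solve(lambda t, y: -2 * t * y, 0.0, 1.0, 1.0, 0.1)
t_end, y_end = pts[-1]
# y_end approximates exp(-1) ≈ 0.3679 to within O(h^2)
```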