Midpoint method
from Wikipedia
Illustration of the midpoint method assuming that $y_n$ equals the exact value $y(t_n)$. The midpoint method computes $y_{n+1}$ so that the red chord is approximately parallel to the tangent line at the midpoint (the green line).

In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for numerically solving the differential equation

$$y'(t) = f(t, y(t)), \qquad y(t_0) = y_0.$$

The explicit midpoint method is given by the formula

$$y_{n+1} = y_n + h f\!\left(t_n + \frac{h}{2},\, y_n + \frac{h}{2} f(t_n, y_n)\right), \qquad (1e)$$

the implicit midpoint method by

$$y_{n+1} = y_n + h f\!\left(t_n + \frac{h}{2},\, \frac{1}{2}(y_n + y_{n+1})\right), \qquad (1i)$$

for $n = 0, 1, 2, \dots$ Here, $h$ is the step size (a small positive number), $t_n = t_0 + n h$, and $y_n$ is the computed approximate value of $y(t_n)$. The explicit midpoint method is sometimes also known as the modified Euler method,[1] the implicit method is the simplest collocation method, and, applied to Hamiltonian dynamics, a symplectic integrator. Note that the modified Euler method can refer to Heun's method,[2] for further clarity see List of Runge–Kutta methods.
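As a concrete illustration, one step of the explicit midpoint formula can be sketched in Python as follows; the function names and the test problem $y' = y$ are illustrative choices, not part of the article:

```python
def explicit_midpoint_step(f, t, y, h):
    """Advance y(t) to y(t + h) with one explicit midpoint step."""
    k = f(t + h / 2, y + (h / 2) * f(t, y))  # slope at the estimated midpoint
    return y + h * k

# Example: y' = y, y(0) = 1, one step of size h = 0.1
# gives 1 + 0.1 * (1 + 0.05 * 1) = 1.105, close to e^0.1 ≈ 1.10517
y1 = explicit_midpoint_step(lambda t, y: y, 0.0, 1.0, 0.1)
```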

The name of the method comes from the fact that in the formula above, the function $f$ giving the slope of the solution is evaluated at the midpoint between $t_n$, at which the value of $y(t)$ is known, and $t_{n+1}$, at which the value of $y(t)$ needs to be found.

A geometric interpretation may give a better intuitive understanding of the method (see figure at right). In the basic Euler's method, the tangent of the curve at $(t_n, y_n)$ is computed using $f(t_n, y_n)$. The next value $y_{n+1}$ is found where the tangent intersects the vertical line $t = t_{n+1}$. However, if the second derivative is only positive between $t_n$ and $t_{n+1}$, or only negative (as in the diagram), the curve will increasingly veer away from the tangent, leading to larger errors as $h$ increases. The diagram illustrates that the tangent at the midpoint (upper, green line segment) would most likely give a more accurate approximation of the curve in that interval. However, this midpoint tangent could not be accurately calculated because we do not know the curve (that is what is to be calculated). Instead, this tangent is estimated by using the original Euler's method to estimate the value of $y(t)$ at the midpoint, then computing the slope of the tangent with $f$. Finally, the improved tangent is used to calculate the value of $y_{n+1}$ from $y_n$. This last step is represented by the red chord in the diagram. Note that the red chord is not exactly parallel to the green segment (the true tangent), due to the error in estimating the value of $y(t)$ at the midpoint.

The local error at each step of the midpoint method is of order $O(h^3)$, giving a global error of order $O(h^2)$. Thus, while more computationally intensive than Euler's method, the midpoint method's error generally decreases faster as $h \to 0$.

The methods are examples of a class of higher-order methods known as Runge–Kutta methods.

Derivation of the midpoint method

[edit]
Illustration of numerical integration for the equation $y' = y$, $y(0) = 1$. Blue: the Euler method; green: the midpoint method; red: the exact solution $y = e^t$. The step size is $h = 1.0$.
The same illustration for $h = 0.25$. It is seen that the midpoint method converges faster than the Euler method.

The midpoint method is a refinement of the Euler method

$$y_{n+1} = y_n + h f(t_n, y_n),$$

and is derived in a similar manner. The key to deriving Euler's method is the approximate equality

$$y(t + h) \approx y(t) + h y'(t), \qquad (2)$$

which is obtained from the slope formula

$$y'(t) \approx \frac{y(t + h) - y(t)}{h} \qquad (3)$$

and keeping in mind that $y'(t) = f(t, y(t))$.

For the midpoint method, one replaces (3) with the more accurate

$$y'\!\left(t + \frac{h}{2}\right) \approx \frac{y(t + h) - y(t)}{h},$$

when instead of (2) we find

$$y(t + h) \approx y(t) + h y'\!\left(t + \frac{h}{2}\right). \qquad (4)$$

One cannot use this equation to find $y(t + h)$ as one does not know $y$ at $t + h/2$. The solution is then to use a Taylor series expansion exactly as if using the Euler method to solve for $y(t + h/2)$:

$$y\!\left(t + \frac{h}{2}\right) \approx y(t) + \frac{h}{2} y'(t) = y(t) + \frac{h}{2} f(t, y(t)),$$

which, when plugged into (4), gives us

$$y(t + h) \approx y(t) + h f\!\left(t + \frac{h}{2},\, y(t) + \frac{h}{2} f(t, y(t))\right)$$

and the explicit midpoint method (1e).

The implicit method (1i) is obtained by approximating the value at the half step $t_n + h/2$ by the midpoint of the line segment from $y_n$ to $y_{n+1}$,

$$y\!\left(t + \frac{h}{2}\right) \approx \frac{1}{2}\bigl(y(t) + y(t + h)\bigr),$$

and thus

$$\frac{y(t + h) - y(t)}{h} \approx y'\!\left(t + \frac{h}{2}\right) \approx f\!\left(t + \frac{h}{2},\, \frac{1}{2}\bigl(y(t) + y(t + h)\bigr)\right).$$

Inserting the approximation $y_n + h k$ for the unknown value $y(t_n + h)$ results in the implicit Runge–Kutta method

$$k = f\!\left(t_n + \frac{h}{2},\, y_n + \frac{h}{2} k\right), \qquad y_{n+1} = y_n + h k,$$

which contains the implicit Euler method with step size $h/2$ as its first part.
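When $h$ is small relative to the Lipschitz constant of $f$, the implicit stage equation $k = f(t + h/2,\, y + (h/2)k)$ can be solved by simple fixed-point iteration. A minimal Python sketch, with illustrative names and tolerances:

```python
def implicit_midpoint_step(f, t, y, h, tol=1e-12, max_iter=100):
    """One implicit midpoint step for a scalar ODE: solve the stage
    equation k = f(t + h/2, y + (h/2)*k) by fixed-point iteration,
    then advance y by h*k."""
    k = f(t, y)  # initial guess: the explicit Euler slope
    for _ in range(max_iter):
        k_next = f(t + h / 2, y + (h / 2) * k)
        if abs(k_next - k) < tol:
            k = k_next
            break
        k = k_next
    return y + h * k

# Example: y' = -y, y(0) = 1, h = 0.1.
# The exact one-step factor of the implicit midpoint rule here is
# (1 - h/2) / (1 + h/2) = 0.95 / 1.05.
y1 = implicit_midpoint_step(lambda t, y: -y, 0.0, 1.0, 0.1)
```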

Because of the time symmetry of the implicit method, all terms of even degree in $h$ of the local error cancel, so that the local error is automatically of order $h^3$. Replacing the implicit with the explicit Euler method in the determination of $k$ results again in the explicit midpoint method.

from Grokipedia
The midpoint method, also known as the modified or explicit midpoint rule, is a second-order explicit Runge-Kutta numerical technique for approximating solutions to initial value problems of ordinary differential equations (ODEs) of the form $\frac{du}{dt} = f(t, u)$ with $u(t_0) = u_0$. It improves upon the first-order Euler method by evaluating the derivative at an intermediate "midpoint" after a half-step, then using that slope to advance the full step, yielding a local truncation error of $\mathcal{O}(h^3)$ and global error of $\mathcal{O}(h^2)$, where $h$ is the step size. The algorithm proceeds iteratively: starting from $(t_i, u_i)$, compute the predictor $u_{i+1/2} = u_i + \frac{h}{2} f(t_i, u_i)$ and $t_{i+1/2} = t_i + \frac{h}{2}$, then update $u_{i+1} = u_i + h f(t_{i+1/2}, u_{i+1/2})$ and $t_{i+1} = t_i + h$. This two-stage process makes it computationally efficient while providing quadratic convergence, allowing for more accurate trajectory tracing than linear methods like Euler's, especially for nonlinear ODEs over finite intervals. As a member of the Runge-Kutta family, it serves as a foundational explicit integrator in scientific computing, often used in simulations of physical systems where higher-order methods would be overly costly.

Mathematical formulation

Definition

The midpoint method is a specific numerical technique for approximating solutions to initial value problems in ordinary differential equations, serving as an explicit second-order Runge-Kutta method. It improves upon basic approaches by evaluating the derivative at the midpoint of each time interval, providing a more accurate estimate of the solution's behavior over that step than first-order methods. This method requires two evaluations of the right-hand side function per step, balancing computational efficiency with enhanced precision. The midpoint method addresses initial value problems of the form $y' = f(t, y)$, $y(t_0) = y_0$, where a fixed step size $h$ is used to generate a sequence of approximations $y_n \approx y(t_n)$ with $t_n = t_0 + n h$. It incorporates a predictor step to estimate the solution at the interval's midpoint, then uses the slope there to advance the approximation, thereby capturing curvature effects that simpler linear extrapolations miss. Historically, the midpoint method emerged as part of the early development of Runge-Kutta methods in the late 19th and early 20th centuries, initially introduced by Carl Runge in 1895 as an adaptation of midpoint quadrature for differential equations. It was further refined by Karl Heun in 1900 and Wilhelm Kutta in 1901, who integrated midpoint evaluations into higher-order schemes, establishing its foundational role in numerical ODE solvers. This evolution addressed the limitations of the Euler method, a simpler predecessor that relies solely on the initial slope and achieves only first-order accuracy.

General form for ODEs

The midpoint method is applied to initial value problems for ordinary differential equations (ODEs) of the form $y' = f(t, y)$, $y(t_0) = y_0$, where $f$ is a sufficiently smooth function. For a scalar ODE, the explicit midpoint method advances the numerical solution from $(t_n, y_n)$ to $(t_{n+1}, y_{n+1})$ using a step size $h > 0$ via the following iterative formula:

$$k_1 = f(t_n, y_n), \quad \hat{y} = y_n + \frac{h}{2} k_1, \quad k_2 = f\!\left(t_n + \frac{h}{2}, \hat{y}\right), \quad y_{n+1} = y_n + h k_2,$$

where $t_{n+1} = t_n + h$ and $\hat{y}$ serves as a temporary predictor at the midpoint. This formulation follows a predictor-corrector style, with the first stage predicting the midpoint value explicitly using the slope at the current point, and the second stage correcting the full step using the slope at that predicted midpoint. This explicit midpoint method is distinct from implicit variants, such as the implicit midpoint rule, which solve an implicit equation involving the unknown $y_{n+1}$ and are not addressed here. The method extends naturally to systems of first-order ODEs, where $\mathbf{y} \in \mathbb{R}^m$ is a vector, $f: \mathbb{R} \times \mathbb{R}^m \to \mathbb{R}^m$, and all operations apply component-wise. The iterative formula becomes

$$\mathbf{k}_1 = f(t_n, \mathbf{y}_n), \quad \hat{\mathbf{y}} = \mathbf{y}_n + \frac{h}{2} \mathbf{k}_1, \quad \mathbf{k}_2 = f\!\left(t_n + \frac{h}{2}, \hat{\mathbf{y}}\right), \quad \mathbf{y}_{n+1} = \mathbf{y}_n + h \mathbf{k}_2.$$

For well-posedness and uniqueness of solutions, $f$ must satisfy a Lipschitz condition in $\mathbf{y}$ uniformly in $t$.
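The two-stage update for a system can be sketched directly in Python using plain lists; the helper name and the harmonic-oscillator example below are illustrative choices:

```python
def midpoint_step_system(f, t, y, h):
    """One explicit midpoint step for a system; y and f(t, y) are lists."""
    k1 = f(t, y)
    y_hat = [yi + (h / 2) * ki for yi, ki in zip(y, k1)]  # midpoint predictor
    k2 = f(t + h / 2, y_hat)
    return [yi + h * ki for yi, ki in zip(y, k2)]

# Example: harmonic oscillator x'' = -x written as y = [x, v], y' = [v, -x].
# One step from [1, 0] with h = 0.1 gives [0.995, -0.1],
# close to the exact [cos(0.1), -sin(0.1)] ≈ [0.99500, -0.09983].
y1 = midpoint_step_system(lambda t, y: [y[1], -y[0]], 0.0, [1.0, 0.0], 0.1)
```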

Derivation

Taylor series expansion approach

The Taylor series expansion provides a foundational approach to deriving the midpoint method for solving the initial value problem $y'(t) = f(t, y(t))$, $y(t_0) = y_0$, by matching the method's update formula to the exact solution's expansion up to second order. Consider the exact solution at $t_{n+1} = t_n + h$, where $h$ is the step size. Expanding around $t_n$, the solution satisfies

$$y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{h^2}{2} y''(t_n) + O(h^3).$$

Since $y'(t) = f(t, y(t))$, the first-order term is $h f(t_n, y(t_n))$. To incorporate the second-order term, differentiate $y'(t) = f(t, y(t))$ using the chain rule, yielding

$$y''(t) = \frac{\partial f}{\partial t}(t, y(t)) + \frac{\partial f}{\partial y}(t, y(t)) \cdot f(t, y(t)).$$

This expression, evaluated at $t_n$, captures the quadratic contribution to the expansion. The forward Euler method approximates only up to the linear term, achieving first-order accuracy with local truncation error $O(h^2)$. The midpoint method improves accuracy by evaluating the derivative at a midpoint so as to match the $h^2/2$ term. Specifically, it uses an intermediate step to estimate the solution at $t_n + h/2$, given by $y(t_n) + (h/2) f(t_n, y(t_n))$, and evaluates the right-hand side there: $f\bigl(t_n + h/2,\, y(t_n) + (h/2) f(t_n, y(t_n))\bigr)$. The update formula then becomes

$$y_{n+1} = y_n + h f\!\left(t_n + \frac{h}{2},\, y_n + \frac{h}{2} f(t_n, y_n)\right).$$

This substitution ensures the method's expansion aligns with the exact series up to $O(h^2)$. To verify the order, expand the method's output using Taylor series for $f$ around $(t_n, y_n)$. The resulting series matches the exact expansion's constant, linear, and quadratic terms, leaving a local truncation error of $O(h^3)$, which confirms second-order accuracy. This derivation highlights the method's improvement over first-order schemes like Euler's by systematically incorporating higher-order corrections via series matching.

Runge-Kutta interpretation

The midpoint method can be viewed as a two-stage explicit Runge-Kutta method of order 2 for solving the initial value problem $y' = f(t, y)$, $y(t_0) = y_0$. Explicit Runge-Kutta methods approximate the solution at $t_{n+1} = t_n + h$ via $y_{n+1} = y_n + h \sum_{i=1}^s b_i k_i$, where the intermediate stages are computed as

$$k_i = f\!\left(t_n + c_i h,\; y_n + h \sum_{j=1}^{i-1} a_{ij} k_j\right) \quad \text{for } i = 1, \dots, s,$$

with the coefficients $b_i$, $c_i$, and $a_{ij}$ (for $j < i$) defining the method. For the midpoint method, $s = 2$ and the coefficients form the following Butcher tableau:

$$\begin{array}{c|cc} 0 & 0 & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ \hline & 0 & 1 \end{array}$$

Thus $c = \begin{pmatrix} 0 \\ \frac{1}{2} \end{pmatrix}$, $A = \begin{pmatrix} 0 & 0 \\ \frac{1}{2} & 0 \end{pmatrix}$, and $b = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. This configuration satisfies the order conditions for a second-order method, namely $\sum_{i=1}^s b_i = 1$ and $\sum_{i=1}^s b_i c_i = \frac{1}{2}$. In contrast to Ralston's method, another two-stage second-order Runge-Kutta method, which minimizes the local truncation error bound by setting $b = \begin{pmatrix} \frac{1}{4} \\ \frac{3}{4} \end{pmatrix}$ and $a_{21} = \frac{2}{3}$, the midpoint method places all weight on the second stage.
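To make the correspondence concrete, a generic explicit Runge-Kutta stepper driven by a Butcher tableau reproduces the midpoint update when given these coefficients. This is a sketch; the helper name is an illustrative choice:

```python
def rk_explicit_step(f, t, y, h, A, b, c):
    """One step of the explicit RK method defined by Butcher tableau (A, b, c)."""
    k = []
    for i in range(len(b)):
        # Stage value uses only previously computed slopes (strictly lower A).
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Midpoint tableau: c = (0, 1/2), A = [[0, 0], [1/2, 0]], b = (0, 1)
A, b, c = [[0.0, 0.0], [0.5, 0.0]], [0.0, 1.0], [0.0, 0.5]

# For y' = y, y(0) = 1, h = 0.1 this matches the direct midpoint update
# 1 + 0.1 * (1 + 0.05 * 1) = 1.105.
y1 = rk_explicit_step(lambda t, y: y, 0.0, 1.0, 0.1, A, b, c)
```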

Error analysis

Local truncation error

The local truncation error for the midpoint method is defined as the difference between the exact solution value at the next time step and the value obtained by applying a single step of the method starting from the exact solution at the current time step, denoted $y(t_n + h) - z(t_n + h)$, where $y(t_n) = y_n$ is exact and $z$ represents the numerical approximation after one step. Assuming the right-hand side function $f(t, y)$ and its partial derivatives up to third order are continuous, the local truncation error can be derived using Taylor series expansions around $t_n$. The expansions for $y(t_{n+1})$ and the intermediate terms in the method cancel out the constant, linear, and quadratic terms, leaving a remainder of order $O(h^3)$ whose principal term involves third-order derivatives of the solution evaluated at some $\xi \in [t_n, t_n + h]$. This $O(h^3)$ local error demonstrates that the midpoint method achieves second-order accuracy.
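The $O(h^3)$ scaling of the one-step error can be checked numerically: doubling $h$ should multiply the local error by roughly $2^3 = 8$. A sketch on the illustrative test problem $y' = y$, whose exact one-step value is $e^h$:

```python
import math

def midpoint_step(f, t, y, h):
    """One explicit midpoint step."""
    return y + h * f(t + h / 2, y + (h / 2) * f(t, y))

def local_error(h):
    """One-step error for y' = y, y(0) = 1 (exact value: e^h)."""
    return abs(math.exp(h) - midpoint_step(lambda t, y: y, 0.0, 1.0, h))

# For an O(h^3) local error the ratio is close to 8 (here about 8.2,
# since higher-order terms are not negligible at these step sizes).
ratio = local_error(0.2) / local_error(0.1)
```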

Global error and convergence

The global error in the midpoint method for solving the initial value problem $y' = f(t, y)$, $y(t_0) = y_0$, over a fixed time interval $[t_0, T]$ with $N = (T - t_0)/h$ steps of size $h$, is defined as $e_n = y(t_n) - y_n$, where $y(t_n)$ is the exact solution at the $n$-th time point $t_n = t_0 + n h$ and $y_n$ is the numerical approximation. Under suitable conditions on $f$, this error satisfies $\|e_n\| = O(h^2)$ as $h \to 0$ for each fixed $n$, and uniformly $\max_{0 \leq n \leq N} \|e_n\| = O(h^2)$. To establish this bound, consider the error recursion derived from the method's update and the exact solution's Taylor expansion. Assuming $f$ satisfies a Lipschitz condition in $y$ with constant $L > 0$, i.e., $\|f(t, y_1) - f(t, y_2)\| \leq L \|y_1 - y_2\|$ for all $t \in [t_0, T]$ and relevant $y_1, y_2$, the global error satisfies

$$\|e_{n+1}\| \leq \|e_n\|(1 + L h) + O(h^3),$$

where the $O(h^3)$ term arises from the local truncation error of the method. Iterating this inequality from $n = 0$ (with $e_0 = 0$) yields a discrete Grönwall-type bound:

$$\|e_n\| \leq \sum_{k=0}^{n-1} O(h^3)\,(1 + L h)^{n-1-k} \leq C h^2 e^{L (t_n - t_0)}$$

for some constant $C > 0$ independent of $h$, using the bound $(1 + L h)^m \leq e^{L m h}$ and the fact that the sum has at most $N = O(1/h)$ terms. Thus, the error remains $O(h^2)$ over the fixed interval. The convergence of the midpoint method follows from a general theorem for one-step methods: if the method is consistent of order $p$ (local error $O(h^{p+1})$) and the problem is well-posed (Lipschitz continuous $f$), then the global error is $O(h^p)$. For the explicit midpoint method, a second-order Runge-Kutta scheme with $p = 2$ when $f$ is continuously differentiable ($C^1$) and Lipschitz in $y$, the theorem implies $\max_{0 \leq n \leq N} \|e_n\| \leq C h^2$ for some $C$ independent of $h > 0$ sufficiently small.
In practice, this quadratic convergence means that halving the step size $h$ reduces the global error by a factor of approximately 4, enabling efficient accuracy control in simulations by refining the grid.
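The factor-of-4 behavior is easy to observe numerically. A sketch on the illustrative test problem $y' = -y$, $y(0) = 1$ integrated to $T = 1$ (exact value $e^{-1}$):

```python
import math

def midpoint_solve(f, t0, y0, T, h):
    """Integrate y' = f(t, y) from t0 to T with fixed step h; return y(T)."""
    t, y = t0, y0
    for _ in range(round((T - t0) / h)):
        y = y + h * f(t + h / 2, y + (h / 2) * f(t, y))
        t += h
    return y

def err(h):
    return abs(midpoint_solve(lambda t, y: -y, 0.0, 1.0, 1.0, h) - math.exp(-1.0))

# Halving h from 0.02 to 0.01 should quarter the global error.
ratio = err(0.02) / err(0.01)
```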

Implementation and examples

Algorithm steps

The midpoint method approximates solutions to the initial value problem $y' = f(t, y)$, $y(t_0) = y_0$ over the interval $[t_0, T]$ using a fixed step size $h$. The algorithm initializes the current time $t = t_0$ and solution value $y = y_0$, then iteratively advances the solution until $t$ reaches or exceeds $T$, producing a sequence of points $(t_n, y_n)$. The required inputs are the right-hand side function $f(t, y)$, initial time $t_0$, initial value $y_0$, step size $h > 0$, and end time $T > t_0$; the output is the discrete solution trajectory $\{(t_n, y_n)\}_{n=0}^N$ where $t_N \approx T$. The core iteration follows this structure:

initialize t = t_0, y = y_0
while t < T:
    k1 = f(t, y)
    temp = y + (h/2) * k1
    k2 = f(t + h/2, temp)
    y = y + h * k2
    t = t + h
output sequence (t_n, y_n)

Each full step requires two evaluations of $f$. For systems of ODEs, where $y \in \mathbb{R}^d$ and $f: \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}^d$, the algorithm extends naturally by treating $y$, $k_1$, $k_2$, and $temp$ as vectors, with all operations (addition, scalar multiplication) performed element-wise or via vector arithmetic in the implementation language. The step size $h$ must balance accuracy, which improves with smaller $h$, against efficiency, as smaller steps increase the total number of function evaluations and computation time. Implementations should also handle potential singularities in $f$, such as points leading to division by zero or overflow, by validating inputs or using adaptive safeguards before evaluation.
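The pseudocode above, extended to systems, might look as follows in Python; this is a sketch using plain lists, and the function name and the harmonic-oscillator example are illustrative choices:

```python
def midpoint_solve_system(f, t0, y0, T, h):
    """Integrate the system y' = f(t, y) from t0 to T with fixed step h,
    returning the discrete trajectory [(t_0, y_0), ..., (t_N, y_N)]."""
    t, y = t0, list(y0)
    trajectory = [(t, list(y))]
    while t < T - 1e-12:  # small tolerance guards against float drift in t
        k1 = f(t, y)
        temp = [yi + (h / 2) * ki for yi, ki in zip(y, k1)]
        k2 = f(t + h / 2, temp)
        y = [yi + h * ki for yi, ki in zip(y, k2)]
        t += h
        trajectory.append((t, list(y)))
    return trajectory

# Example: harmonic oscillator y = [x, v], x' = v, v' = -x, x(0) = 1.
# At T = 1 the exact position is cos(1) ≈ 0.5403.
traj = midpoint_solve_system(lambda t, y: [y[1], -y[0]], 0.0, [1.0, 0.0], 1.0, 0.001)
x_final = traj[-1][1][0]
```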

Numerical example

To illustrate the application of the midpoint method, consider the initial value problem $y' = -y$, $y(0) = 1$, whose exact solution is $y(t) = e^{-t}$. This is a standard linear test equation for evaluating numerical solvers. We apply the method with step size $h = 0.1$ for the first two steps.

At $t_0 = 0$, $y_0 = 1$, compute $k_1 = f(t_0, y_0) = -1$. The intermediate point is $y^* = y_0 + \frac{h}{2} k_1 = 1 - 0.05 = 0.95$, so $k_2 = f(t_0 + \frac{h}{2}, y^*) = -0.95$. Thus, $y_1 = y_0 + h k_2 = 1 - 0.095 = 0.905$ at $t_1 = 0.1$. Next, at $t_1 = 0.1$, $y_1 = 0.905$, compute $k_1 = -0.905$. The intermediate point is $y^* = 0.905 + 0.05 \times (-0.905) = 0.85975$, so $k_2 = -0.85975$. Thus, $y_2 = 0.905 + 0.1 \times (-0.85975) \approx 0.819$ at $t_2 = 0.2$.

The results, including the exact values and absolute errors, are summarized in the following table:

| $t_n$ | $y_n$ (midpoint) | Exact $y(t_n)$ | $|y_n - y(t_n)|$ |
|-------|------------------|----------------|------------------|
| 0.0   | 1.000            | 1.000000       | 0.000000         |
| 0.1   | 0.905            | 0.904837       | 0.000163         |
| 0.2   | 0.819            | 0.818731       | 0.000294         |

The absolute errors are on the order of $10^{-4}$ after two steps. To demonstrate quadratic convergence, the method is applied with halved step size $h = 0.05$ (four steps to reach $t = 0.2$), yielding approximations $y(0.1) \approx 0.904877$ and $y(0.2) \approx 0.818802$. The errors reduce by a factor of approximately 4, as shown below, consistent with the second-order accuracy of the method.

| $t_n$ | $y_n$ (midpoint, $h = 0.05$) | Exact $y(t_n)$ | $|y_n - y(t_n)|$ |
|-------|------------------------------|----------------|------------------|
| 0.1   | 0.904877                     | 0.904837       | 0.000039         |
| 0.2   | 0.818802                     | 0.818731       | 0.000071         |
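The hand computation above can be checked mechanically; this short script (with illustrative names) reproduces the $h = 0.1$ table entries:

```python
import math

f = lambda t, y: -y  # the test equation y' = -y

def midpoint_step(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + (h / 2) * k1)
    return y + h * k2

h = 0.1
y1 = midpoint_step(0.0, 1.0, h)   # 0.905
y2 = midpoint_step(0.1, y1, h)    # 0.819025, rounded to 0.819 in the table
err1 = abs(y1 - math.exp(-0.1))   # ≈ 0.000163
err2 = abs(y2 - math.exp(-0.2))   # ≈ 0.000294
```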

Applications and extensions

Use in scientific computing

The midpoint method is widely applied in scientific computing for modeling oscillatory systems, such as the frictionless simple pendulum, where it solves the second-order equation of motion describing angular displacement to simulate periodic oscillation with good accuracy over moderate time scales. In trajectory modeling, it facilitates basic predictions, for instance of the paths of projectiles or particles under gravitational forces, by integrating the equations of motion step by step. The method's advantages lie in its simplicity and second-order accuracy, which provide a balance between computational efficiency and precision without requiring higher derivatives, making it ideal for initial prototyping in physics and engineering simulations. Its straightforward structure is easily implemented in software environments like MATLAB or Python, often via custom functions or wrappers around libraries such as SciPy's ODE solvers, for rapid deployment in educational and research settings. In practice, the explicit midpoint method has limitations, particularly its instability when applied to stiff systems, where rapid changes in solution behavior demand very small step sizes to maintain numerical stability. Consequently, it serves primarily as a foundational component in adaptive solvers, contributing to error estimation and step-size control rather than standalone use in complex, long-duration simulations. Historically, the midpoint method, as an early member of the Runge-Kutta family developed around the turn of the 20th century, played a role in 20th-century numerical analysis for solving differential equations in simulations before the dominance of higher-order and implicit methods with advancing computational power. Due to its second-order convergence properties, it offers reliable results for non-stiff oscillatory and mechanical problems when step sizes are appropriately selected.
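As a sketch of the pendulum use case, the nonlinear pendulum $\theta'' = -(g/L)\sin\theta$ can be written as a first-order system and integrated with the midpoint method; after one small-angle period the angle should return close to its initial value. The constants, step size, and tolerance below are illustrative choices:

```python
import math

g, L = 9.81, 1.0

def f(t, s):
    """Pendulum as a first-order system; state s = [theta, omega]."""
    return [s[1], -(g / L) * math.sin(s[0])]

def midpoint_solve(t0, s0, T, h):
    t, s = t0, list(s0)
    while t < T - 1e-12:
        k1 = f(t, s)
        mid = [si + (h / 2) * ki for si, ki in zip(s, k1)]
        k2 = f(t + h / 2, mid)
        s = [si + h * ki for si, ki in zip(s, k2)]
        t += h
    return s

period = 2 * math.pi * math.sqrt(L / g)  # small-angle period ≈ 2.006 s
theta_T, omega_T = midpoint_solve(0.0, [0.1, 0.0], period, 0.001)
# For the small amplitude 0.1 rad, theta_T should be close to 0.1
```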

Variants for stiff equations

The explicit midpoint method, being an explicit Runge-Kutta scheme, suffers from severe stability limitations when applied to stiff ordinary differential equations (ODEs), where the Lipschitz constant $L$ of the right-hand side function is large; specifically, the step size $h$ must satisfy $h < 2/L$ to ensure stability, often leading to prohibitively small time steps for practical computations. To address this, the implicit midpoint method serves as a key variant, defined by the update

$$\mathbf{y}_{n+1} = \mathbf{y}_n + h\, \mathbf{f}\!\left(t_n + \frac{h}{2},\, \frac{\mathbf{y}_n + \mathbf{y}_{n+1}}{2}\right),$$

which is a one-stage Gauss-Legendre Runge-Kutta method of order 2 and is A-stable, allowing larger step sizes for stiff problems without sacrificing qualitative behavior. This nonlinear equation for $\mathbf{y}_{n+1}$ is typically solved using fixed-point iteration for mildly nonlinear cases or Newton's method for stronger nonlinearity, with the explicit midpoint solution providing a good initial guess to accelerate convergence. For Hamiltonian systems, the implicit midpoint method is symplectic, preserving the symplectic structure and thus good long-term behavior of invariants, making it particularly suitable for simulations where stiffness arises from disparate timescales. A related implicit scheme is the two-stage Gauss-Legendre Runge-Kutta method, with Butcher tableau coefficients

$$A = \begin{bmatrix} 1/4 & 1/4 - \sqrt{3}/6 \\ 1/4 + \sqrt{3}/6 & 1/4 \end{bmatrix},$$

which achieves order 4 and is likewise A-stable.
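A scalar sketch contrasts the two methods on the stiff test problem $y' = -50y$ with $h = 0.1$: one explicit midpoint step amplifies the solution (factor $1 + h\lambda + (h\lambda)^2/2 = 8.5$), while the implicit midpoint step damps it (factor $(1 + h\lambda/2)/(1 - h\lambda/2) = -3/7$). The stage equation here is solved with a simple Newton iteration using a finite-difference derivative; names, tolerances, and the fallback iteration count are illustrative choices:

```python
def implicit_midpoint_step(f, t, y, h, tol=1e-12, eps=1e-8):
    """One implicit midpoint step for a scalar ODE, solving the stage
    equation Y = y + h*f(t + h/2, (y + Y)/2) by Newton's method with
    a finite-difference derivative."""
    def residual(Y):
        return Y - y - h * f(t + h / 2, (y + Y) / 2)
    Y = y  # initial guess: the current value
    for _ in range(50):
        r = residual(Y)
        if abs(r) < tol:
            break
        dr = (residual(Y + eps) - r) / eps  # finite-difference slope
        Y -= r / dr
    return Y

lam, h = -50.0, 0.1
f = lambda t, y: lam * y

y_implicit = implicit_midpoint_step(f, 0.0, 1.0, h)           # ≈ -3/7, |y| shrinks
y_explicit = 1.0 + h * f(h / 2, 1.0 + (h / 2) * f(0.0, 1.0))  # 8.5, |y| blows up
```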