Extended Kalman filter
from Wikipedia

In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. In the case of well-defined transition models, the EKF has been considered[1] the de facto standard in the theory of nonlinear state estimation, navigation systems, and GPS.[2]

History


The papers establishing the mathematical foundations of Kalman type filters were published between 1959 and 1961.[3][4][5] The Kalman filter is the optimal linear estimator for linear system models with additive independent white noise in both the transition and the measurement systems. Unfortunately, in engineering, most systems are nonlinear, so attempts were made to apply this filtering method to nonlinear systems; most of this work was done at NASA Ames.[6][7] The EKF adapted techniques from calculus, namely multivariate Taylor series expansions, to linearize a model about a working point. If the system model (as described below) is not well known or is inaccurate, then Monte Carlo methods, especially particle filters, are employed for estimation. Monte Carlo techniques predate the existence of the EKF but are more computationally expensive for any moderately dimensioned state-space.

Formulation


In the extended Kalman filter, the state transition and observation models need not be linear functions of the state but may instead be differentiable functions:

$\mathbf{x}_k = f(\mathbf{x}_{k-1}, \mathbf{u}_k) + \mathbf{w}_k$
$\mathbf{z}_k = h(\mathbf{x}_k) + \mathbf{v}_k$

Here $\mathbf{w}_k$ and $\mathbf{v}_k$ are the process and observation noises, which are both assumed to be zero-mean multivariate Gaussian noises with covariances $\mathbf{Q}_k$ and $\mathbf{R}_k$ respectively. $\mathbf{u}_k$ is the control vector.

The function $f$ can be used to compute the predicted state from the previous estimate, and similarly the function $h$ can be used to compute the predicted measurement from the predicted state. However, $f$ and $h$ cannot be applied to the covariance directly. Instead, a matrix of partial derivatives (the Jacobian) is computed.

At each time step, the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the non-linear function around the current estimate.
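A minimal sketch of this step in Python with NumPy, approximating the Jacobian by central finite differences when an analytic derivative is inconvenient (the helper name is illustrative, not from a specific library):

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Approximate the Jacobian of `func` at `x` by central differences.

    `func` maps an n-vector to an m-vector; the result is the (m, n)
    matrix of partial derivatives evaluated at the current estimate.
    """
    x = np.asarray(x, dtype=float)
    fx = np.asarray(func(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(func(x + dx)) - np.asarray(func(x - dx))) / (2 * eps)
    return J
```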

See the Kalman Filter article for notational remarks.

Discrete-time predict and update equations


The notation $\hat{\mathbf{x}}_{n\mid m}$ represents the estimate of $\mathbf{x}$ at time $n$ given observations up to and including time $m \le n$.

Predict

Predicted state estimate: $\hat{\mathbf{x}}_{k\mid k-1} = f(\hat{\mathbf{x}}_{k-1\mid k-1}, \mathbf{u}_k)$
Predicted covariance estimate: $\mathbf{P}_{k\mid k-1} = \mathbf{F}_k \mathbf{P}_{k-1\mid k-1} \mathbf{F}_k^\top + \mathbf{Q}_k$

Update

Innovation or measurement residual: $\tilde{\mathbf{y}}_k = \mathbf{z}_k - h(\hat{\mathbf{x}}_{k\mid k-1})$
Innovation (or residual) covariance: $\mathbf{S}_k = \mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\top + \mathbf{R}_k$
Near-optimal Kalman gain: $\mathbf{K}_k = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\top \mathbf{S}_k^{-1}$
Updated state estimate: $\hat{\mathbf{x}}_{k\mid k} = \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k \tilde{\mathbf{y}}_k$
Updated covariance estimate: $\mathbf{P}_{k\mid k} = (\mathbf{I} - \mathbf{K}_k \mathbf{H}_k) \mathbf{P}_{k\mid k-1}$

where the state transition and observation matrices are defined to be the following Jacobians:

$\mathbf{F}_k = \left.\frac{\partial f}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}_{k-1\mid k-1},\,\mathbf{u}_k}, \qquad \mathbf{H}_k = \left.\frac{\partial h}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}_{k\mid k-1}}$
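Put together, one predict/update cycle can be sketched as follows in Python with NumPy; the nonlinear functions and their Jacobian callbacks are assumed to be supplied by the user, and the additive-noise model above is assumed:

```python
import numpy as np

def ekf_step(x_est, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of the discrete-time EKF (additive noise).

    f, h   : nonlinear transition and observation functions
    F_jac  : Jacobian of f w.r.t. the state, evaluated at (x_est, u)
    H_jac  : Jacobian of h, evaluated at the predicted state
    """
    # Predict
    x_pred = f(x_est, u)                           # predicted state estimate
    F = F_jac(x_est, u)
    P_pred = F @ P @ F.T + Q                       # predicted covariance estimate

    # Update
    y = z - h(x_pred)                              # innovation (measurement residual)
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # near-optimal Kalman gain
    x_new = x_pred + K @ y                         # updated state estimate
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred  # updated covariance estimate
    return x_new, P_new
```

In practice, $\mathbf{S}_k^{-1}$ is usually applied via a linear solve rather than an explicit inverse, and the Joseph form of the covariance update is often preferred for numerical robustness.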

Disadvantages and alternatives


Unlike its linear counterpart, the extended Kalman filter in general is not an optimal estimator (it is optimal if the measurement and the state transition model are both linear, as in that case the extended Kalman filter is identical to the regular one). In addition, if the initial estimate of the state is wrong, or if the process is modeled incorrectly, the filter may quickly diverge, owing to its linearization. Another problem with the extended Kalman filter is that the estimated covariance matrix tends to underestimate the true covariance matrix and therefore risks becoming inconsistent in the statistical sense without the addition of "stabilising noise".[8]

More generally, one should consider the infinite-dimensional nature of the nonlinear filtering problem and the inadequacy of a simple mean and variance-covariance estimator to fully represent the optimal filter. The extended Kalman filter may also give poor performance even for very simple one-dimensional systems such as the cubic sensor,[9] where the optimal filter can be bimodal[10] and, having a rich structure, cannot be effectively represented by a single mean and variance estimator; the quadratic sensor behaves similarly.[11] In such cases projection filters have been studied as an alternative, and have also been applied to navigation.[12] Other general nonlinear filtering methods, such as full particle filters, may also be considered in this case.

That said, the extended Kalman filter can give reasonable performance and is arguably the de facto standard in navigation systems and GPS.[citation needed]

Generalizations


Continuous-time extended Kalman filter


Model

$\dot{\mathbf{x}}(t) = f(\mathbf{x}(t), \mathbf{u}(t)) + \mathbf{w}(t), \qquad \mathbf{w}(t) \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}(t))$
$\mathbf{z}(t) = h(\mathbf{x}(t)) + \mathbf{v}(t), \qquad \mathbf{v}(t) \sim \mathcal{N}(\mathbf{0}, \mathbf{R}(t))$

Initialize

$\hat{\mathbf{x}}(t_0) = E[\mathbf{x}(t_0)], \qquad \mathbf{P}(t_0) = \mathrm{Var}[\mathbf{x}(t_0)]$

Predict-Update

$\dot{\hat{\mathbf{x}}}(t) = f(\hat{\mathbf{x}}(t), \mathbf{u}(t)) + \mathbf{K}(t)\bigl(\mathbf{z}(t) - h(\hat{\mathbf{x}}(t))\bigr)$
$\dot{\mathbf{P}}(t) = \mathbf{F}(t)\mathbf{P}(t) + \mathbf{P}(t)\mathbf{F}(t)^\top + \mathbf{Q}(t) - \mathbf{K}(t)\mathbf{H}(t)\mathbf{P}(t)$
$\mathbf{K}(t) = \mathbf{P}(t)\mathbf{H}(t)^\top \mathbf{R}(t)^{-1}$
$\mathbf{F}(t) = \left.\frac{\partial f}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}(t),\,\mathbf{u}(t)}, \qquad \mathbf{H}(t) = \left.\frac{\partial h}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}(t)}$

Unlike the discrete-time extended Kalman filter, the prediction and update steps are coupled in the continuous-time extended Kalman filter.[13]

Discrete-time measurements


Most physical systems are represented as continuous-time models, while discrete-time measurements are frequently taken for state estimation via a digital processor. Therefore, the system model and measurement model are given by

$\dot{\mathbf{x}}(t) = f(\mathbf{x}(t), \mathbf{u}(t)) + \mathbf{w}(t), \qquad \mathbf{w}(t) \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}(t))$
$\mathbf{z}_k = h(\mathbf{x}_k) + \mathbf{v}_k, \qquad \mathbf{v}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_k)$

where $\mathbf{x}_k = \mathbf{x}(t_k)$.

Initialize

$\hat{\mathbf{x}}_{0\mid 0} = E[\mathbf{x}(t_0)], \qquad \mathbf{P}_{0\mid 0} = E\bigl[(\mathbf{x}(t_0) - \hat{\mathbf{x}}(t_0))(\mathbf{x}(t_0) - \hat{\mathbf{x}}(t_0))^\top\bigr]$

Predict

Solve the differential equations

$\dot{\hat{\mathbf{x}}}(t) = f(\hat{\mathbf{x}}(t), \mathbf{u}(t))$
$\dot{\mathbf{P}}(t) = \mathbf{F}(t)\mathbf{P}(t) + \mathbf{P}(t)\mathbf{F}(t)^\top + \mathbf{Q}(t)$

on $t \in [t_{k-1}, t_k]$ with initial conditions $\hat{\mathbf{x}}(t_{k-1}) = \hat{\mathbf{x}}_{k-1\mid k-1}$ and $\mathbf{P}(t_{k-1}) = \mathbf{P}_{k-1\mid k-1}$, giving $\hat{\mathbf{x}}_{k\mid k-1} = \hat{\mathbf{x}}(t_k)$ and $\mathbf{P}_{k\mid k-1} = \mathbf{P}(t_k)$,

where $\mathbf{F}(t) = \left.\frac{\partial f}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}(t),\,\mathbf{u}(t)}$.

Update

$\mathbf{K}_k = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\top \bigl(\mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\top + \mathbf{R}_k\bigr)^{-1}$
$\hat{\mathbf{x}}_{k\mid k} = \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\bigl(\mathbf{z}_k - h(\hat{\mathbf{x}}_{k\mid k-1})\bigr)$
$\mathbf{P}_{k\mid k} = (\mathbf{I} - \mathbf{K}_k \mathbf{H}_k) \mathbf{P}_{k\mid k-1}$

where $\mathbf{H}_k = \left.\frac{\partial h}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}_{k\mid k-1}}$.

The update equations are identical to those of the discrete-time extended Kalman filter.[14]
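The prediction between measurement times amounts to integrating the mean and covariance differential equations above; a brief sketch using fixed-step Euler integration (one possible choice of integrator, with illustrative names) is:

```python
import numpy as np

def ekf_predict_continuous(x_est, P, u, f, F_jac, Qc, dt, n_steps=10):
    """Propagate the mean and covariance between measurement times by
    Euler-integrating dx/dt = f(x, u) and dP/dt = F P + P F^T + Qc.
    Euler is the simplest choice; a Runge-Kutta scheme is common in practice.
    """
    step = dt / n_steps
    for _ in range(n_steps):
        F = F_jac(x_est, u)                    # Jacobian along the estimate
        x_est = x_est + step * f(x_est, u)     # mean propagation
        P = P + step * (F @ P + P @ F.T + Qc)  # covariance propagation
    return x_est, P
```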

Higher-order extended Kalman filters


The above recursion is a first-order extended Kalman filter (EKF). Higher-order EKFs may be obtained by retaining more terms of the Taylor series expansions; for example, second- and third-order EKFs have been described.[14] However, higher-order EKFs tend to provide performance benefits only when the measurement noise is small.

Non-additive noise formulation and equations


The typical formulation of the EKF involves the assumption of additive process and measurement noise. This assumption, however, is not necessary for EKF implementation.[15] Instead, consider a more general system of the form:

$\mathbf{x}_k = f(\mathbf{x}_{k-1}, \mathbf{u}_k, \mathbf{w}_k)$
$\mathbf{z}_k = h(\mathbf{x}_k, \mathbf{v}_k)$

Here $\mathbf{w}_k$ and $\mathbf{v}_k$ are the process and observation noises, which are both assumed to be zero-mean multivariate Gaussian noises with covariances $\mathbf{Q}_k$ and $\mathbf{R}_k$ respectively. Then the covariance prediction and innovation equations become

$\mathbf{P}_{k\mid k-1} = \mathbf{F}_k \mathbf{P}_{k-1\mid k-1} \mathbf{F}_k^\top + \mathbf{L}_k \mathbf{Q}_k \mathbf{L}_k^\top$
$\mathbf{S}_k = \mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\top + \mathbf{M}_k \mathbf{R}_k \mathbf{M}_k^\top$

where the matrices $\mathbf{L}_k$ and $\mathbf{M}_k$ are Jacobian matrices:

$\mathbf{L}_k = \left.\frac{\partial f}{\partial \mathbf{w}}\right|_{\hat{\mathbf{x}}_{k-1\mid k-1},\,\mathbf{u}_k}, \qquad \mathbf{M}_k = \left.\frac{\partial h}{\partial \mathbf{v}}\right|_{\hat{\mathbf{x}}_{k\mid k-1}}$

The predicted state estimate and measurement residual are evaluated at the mean of the process and measurement noise terms, which is assumed to be zero. Otherwise, the non-additive noise formulation is implemented in the same manner as the additive noise EKF.
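Under these assumptions, only the covariance propagation changes relative to the additive-noise filter; a brief sketch in NumPy, with matrix names following the Jacobian definitions above:

```python
import numpy as np

def predict_cov_nonadditive(P, F, L, Q):
    """Covariance prediction when process noise enters f nonlinearly.
    L = df/dw, evaluated at the previous estimate and zero noise."""
    return F @ P @ F.T + L @ Q @ L.T

def innovation_cov_nonadditive(P_pred, H, M, R):
    """Innovation covariance when measurement noise enters h nonlinearly.
    M = dh/dv, evaluated at the predicted state and zero noise."""
    return H @ P_pred @ H.T + M @ R @ M.T
```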

Implicit extended Kalman filter


In certain cases, the observation model of a nonlinear system cannot be solved for $\mathbf{z}_k$, but can be expressed by the implicit function:

$h(\mathbf{x}_k, \mathbf{z}_k) = \mathbf{0}$

where $\mathbf{z}_k$ are the noisy observations.

The conventional extended Kalman filter can be applied with the following substitutions:[16][17]

$\mathbf{R}_k \rightarrow \mathbf{J}_k \mathbf{R}_k \mathbf{J}_k^\top, \qquad \tilde{\mathbf{y}}_k \rightarrow -h(\hat{\mathbf{x}}_{k\mid k-1}, \mathbf{z}_k)$

where:

$\mathbf{J}_k = \left.\frac{\partial h}{\partial \mathbf{z}}\right|_{\hat{\mathbf{x}}_{k\mid k-1},\,\mathbf{z}_k}$

Here the original observation covariance matrix $\mathbf{R}_k$ is transformed, and the innovation $\tilde{\mathbf{y}}_k$ is defined differently. The Jacobian matrix $\mathbf{H}_k$ is defined as before, but determined from the implicit observation model $h(\mathbf{x}_k, \mathbf{z}_k)$.
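A sketch of the resulting update, assuming user-supplied callbacks for the implicit model and its two Jacobians (all names illustrative):

```python
import numpy as np

def implicit_ekf_update(x_pred, P_pred, z, h_impl, H_jac, J_jac, R):
    """EKF update for an implicit observation model h(x, z) = 0.

    H = dh/dx and J = dh/dz, both evaluated at (x_pred, z); the observation
    covariance is transformed to J R J^T and the innovation is -h(x_pred, z).
    """
    H = H_jac(x_pred, z)
    J = J_jac(x_pred, z)
    R_t = J @ R @ J.T                  # transformed observation covariance
    y = -h_impl(x_pred, z)             # innovation for the implicit model
    S = H @ P_pred @ H.T + R_t
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```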

Modifications and alternatives


Iterated extended Kalman filter


The iterated extended Kalman filter improves the linearization of the extended Kalman filter by recursively modifying the centre point of the Taylor expansion. This reduces the linearization error at the cost of increased computational requirements.[17]
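A sketch of the iterated measurement update in NumPy, assuming additive measurement noise; each pass relinearizes about the latest iterate, and the correction term accounts for the moved linearization point (a Gauss-Newton style iteration):

```python
import numpy as np

def iekf_update(x_pred, P_pred, z, h, H_jac, R, n_iter=5):
    """Iterated EKF measurement update: relinearize h about the latest
    iterate rather than only about the predicted state."""
    x_i = x_pred
    for _ in range(n_iter):
        H = H_jac(x_i)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        # Residual is corrected for the linearization point having
        # moved away from the predicted state x_pred.
        x_i = x_pred + K @ (z - h(x_i) - H @ (x_pred - x_i))
    P_new = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_i, P_new
```

A fixed iteration count is used here for simplicity; implementations commonly stop once successive iterates change by less than a tolerance.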

Robust extended Kalman filter


The robust extended Kalman filter arises by linearizing the signal model about the current state estimate and using the linear Kalman filter to predict the next estimate. This attempts to produce a locally optimal filter; however, it is not necessarily stable because the solutions of the underlying Riccati equation are not guaranteed to be positive definite. One way of improving performance is the faux algebraic Riccati technique,[18] which trades off optimality for stability. The familiar structure of the extended Kalman filter is retained, but stability is achieved by selecting a positive definite solution to a faux algebraic Riccati equation for the gain design.

Another way of improving extended Kalman filter performance is to employ the H-infinity results from robust control. Robust filters are obtained by adding a positive definite term to the design Riccati equation.[19] The additional term is parametrized by a scalar which the designer may tweak to achieve a trade-off between mean-square-error and peak error performance criteria.

Invariant extended Kalman filter


The invariant extended Kalman filter (IEKF) is a modified version of the EKF for nonlinear systems possessing symmetries (or invariances). It combines the advantages of both the EKF and the recently introduced symmetry-preserving filters. Instead of using a linear correction term based on a linear output error, the IEKF uses a geometrically adapted correction term based on an invariant output error; in the same way, the gain matrix is not updated from a linear state error but from an invariant state error. The main benefit is that the gain and covariance equations converge to constant values on a much larger set of trajectories than just equilibrium points, as is the case for the EKF, resulting in better convergence of the estimates.

Unscented Kalman filters


A nonlinear Kalman filter which shows promise as an improvement over the EKF is the unscented Kalman filter (UKF). In the UKF, the probability density is approximated by a deterministic sampling of points which represent the underlying distribution as a Gaussian. The nonlinear transformation of these points is intended to be an estimation of the posterior distribution, the moments of which can then be derived from the transformed samples. The transformation is known as the unscented transform. The UKF tends to be more robust and more accurate than the EKF in its estimation of error in all directions.

"The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, more than 35 years of experience in the estimation community has shown that is difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the time scale of the updates. Many of these difficulties arise from its use of linearization."[1]

A 2012 paper includes simulation results which suggest that some published variants of the UKF fail to be as accurate as the second-order extended Kalman filter (SOEKF), also known as the augmented Kalman filter.[20] The SOEKF predates the UKF by approximately 35 years, with the moment dynamics first described by Bass et al.[21] The difficulty in implementing any Kalman-type filter for nonlinear state transitions stems from the numerical stability issues required for precision;[22] however, the UKF does not escape this difficulty in that it uses linearization as well, namely linear regression. The stability issues for the UKF generally stem from the numerical approximation to the square root of the covariance matrix, whereas the stability issues for both the EKF and the SOEKF stem from possible issues in the Taylor series approximation along the trajectory.
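For comparison with the Jacobian-based linearization above, a basic (non-scaled) unscented transform can be sketched as follows; the sigma-point placement and weights follow the common $2n+1$ point scheme with parameter $\kappa$, and a Cholesky factor serves as the matrix square root:

```python
import numpy as np

def unscented_transform(f, x_mean, P, kappa=0.0):
    """Propagate a Gaussian (x_mean, P) through a nonlinear f using the
    basic (non-scaled) unscented transform with 2n+1 sigma points."""
    n = x_mean.size
    S_root = np.linalg.cholesky((n + kappa) * P)  # square root of (n+kappa) P
    sigmas = ([x_mean]
              + [x_mean + S_root[:, i] for i in range(n)]
              + [x_mean - S_root[:, i] for i in range(n)])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigmas])         # transformed sigma points
    y_mean = w @ ys
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean)
                for wi, yi in zip(w, ys))
    return y_mean, y_cov
```

Replacing the Jacobian-based propagation of the mean and covariance with this transform, in both the predict and update steps, yields a UKF.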

Ensemble Kalman Filter


The UKF was in fact predated by the Ensemble Kalman filter, invented by Evensen in 1994. It has the advantage over the UKF that the number of ensemble members used can be much smaller than the state dimension, allowing for applications in very high-dimensional systems, such as weather prediction, with state-space sizes of a billion or more.

Fuzzy Kalman Filter


A fuzzy Kalman filter with a new method to represent possibility distributions was recently proposed, replacing probability distributions with possibility distributions in order to obtain a genuine possibilistic filter. This enables the use of non-symmetric process and observation noises, as well as higher inaccuracies in both the process and observation models.[23]

from Grokipedia
The Extended Kalman filter (EKF) is a recursive algorithm for estimating the state of a nonlinear dynamic system from noisy measurements, extending the linear Kalman filter by locally linearizing the nonlinear process and measurement models using first-order Taylor approximations via Jacobian matrices. It operates under the assumption of additive, white Gaussian noise in both the process and measurement equations, providing an approximate minimum mean-squared error estimate by applying the Kalman filter's prediction and correction steps to these linearized models. This approach makes the EKF computationally efficient for real-time applications while handling systems where the state evolution follows a nonlinear function $x_k = f(x_{k-1}, u_{k-1}, w_{k-1})$ and measurements obey $z_k = h(x_k, v_k)$.

The EKF originated in the early 1960s at NASA's Ames Research Center, where engineer Stanley F. Schmidt adapted Rudolf E. Kalman's 1960 linear filtering framework to address the nonlinear dynamics encountered in spacecraft navigation for the Apollo program. Kalman's original work, published as "A New Approach to Linear Filtering and Prediction Problems," laid the foundation by introducing a recursive solution for linear Gaussian systems, but real-world aerospace problems required extensions for nonlinearity. Schmidt's implementation involved perturbation methods and relinearization around the current state estimate, marking the EKF's practical debut in circumlunar trajectory estimation and control during the space race era.

In operation, the EKF performs a prediction step by propagating the state estimate through the nonlinear function $f$ and approximating the covariance using the Jacobian $F_k = \partial f/\partial x\,|_{\hat{x}_{k-1}}$, followed by a measurement update that incorporates new observations via the Jacobian $H_k = \partial h/\partial x\,|_{\hat{x}_k^-}$ to compute the Kalman gain and refine the estimate. This linearization assumes that the Gaussian distribution of the state and noise remains approximately Gaussian after the nonlinear transformation, enabling the use of closed-form updates similar to the linear case. However, the approximation can lead to inaccuracies or divergence in highly nonlinear regimes, large uncertainties, or multimodal distributions, prompting variants like the unscented Kalman filter for better handling of these issues.

The EKF has become a cornerstone in fields requiring robust state estimation, including autonomous robotics for localization and mapping, aerospace for inertial navigation, and signal processing for target tracking in noisy environments. Its balance of simplicity and performance has sustained its widespread adoption since the 1970s, despite ongoing research into more accurate nonlinear alternatives.

Background and Fundamentals

Linear Kalman Filter Overview

The linear Kalman filter is an optimal recursive algorithm designed to estimate the state of a linear dynamic system from a sequence of noisy measurements, minimizing the mean squared error under Gaussian noise assumptions. Developed for applications in control and prediction, it processes data in two main steps, prediction and update, enabling real-time state estimation without storing the entire history of measurements.

The filter operates on a linear state-space model, where the state evolves according to the transition equation

$\mathbf{x}_k = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{w}_{k-1}$

and measurements are given by

$\mathbf{z}_k = \mathbf{H} \mathbf{x}_k + \mathbf{v}_k.$

Here, $\mathbf{x}_k$ is the state vector at time $k$, $\mathbf{F}$ is the state transition matrix, $\mathbf{H}$ is the measurement matrix, $\mathbf{w}_{k-1} \sim \mathcal{N}(\mathbf{0}, \mathbf{Q})$ is zero-mean Gaussian process noise with covariance $\mathbf{Q}$, and $\mathbf{v}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{R})$ is zero-mean Gaussian measurement noise with covariance $\mathbf{R}$, assumed to be white and uncorrelated.

In the prediction step, the state estimate and its covariance are propagated forward:

$\hat{\mathbf{x}}_{k|k-1} = \mathbf{F} \hat{\mathbf{x}}_{k-1|k-1},$
$\mathbf{P}_{k|k-1} = \mathbf{F} \mathbf{P}_{k-1|k-1} \mathbf{F}^\top + \mathbf{Q},$

where $\hat{\mathbf{x}}_{k|k-1}$ is the predicted state and $\mathbf{P}_{k|k-1}$ is the predicted error covariance.

The update step incorporates the new measurement $\mathbf{z}_k$ by first computing the innovation $\mathbf{y}_k = \mathbf{z}_k - \mathbf{H} \hat{\mathbf{x}}_{k|k-1}$ and its covariance $\mathbf{S}_k = \mathbf{H} \mathbf{P}_{k|k-1} \mathbf{H}^\top + \mathbf{R}$. The Kalman gain is then

$\mathbf{K}_k = \mathbf{P}_{k|k-1} \mathbf{H}^\top \mathbf{S}_k^{-1},$

yielding the updated state estimate $\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_k \mathbf{y}_k$ and covariance $\mathbf{P}_{k|k} = (\mathbf{I} - \mathbf{K}_k \mathbf{H}) \mathbf{P}_{k|k-1}.$ This gain $\mathbf{K}_k$ optimally balances the predicted estimate and the measurement based on their relative uncertainties.

Under the linearity and Gaussian noise assumptions, the linear Kalman filter produces the minimum mean square error (MMSE) estimator of the state, achieving the conditional mean $\mathbb{E}[\mathbf{x}_k | \mathbf{Z}_k]$, where $\mathbf{Z}_k$ denotes all measurements up to time $k$.
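These equations translate directly into code; a minimal sketch of one filter cycle in Python with NumPy:

```python
import numpy as np

def kf_step(x_est, P, z, F, H, Q, R):
    """One predict/update cycle of the linear Kalman filter."""
    # Predict
    x_pred = F @ x_est                       # state propagation
    P_pred = F @ P @ F.T + Q                 # covariance propagation
    # Update
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y                   # updated state estimate
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```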

Nonlinear Estimation Challenges

In nonlinear state estimation problems, the system is typically modeled with a nonlinear state transition function $\mathbf{x}_{k+1} = \mathbf{f}(\mathbf{x}_k, \mathbf{u}_k, \mathbf{v}_k)$ and a nonlinear measurement function $\mathbf{z}_k = \mathbf{h}(\mathbf{x}_k, \mathbf{u}_k) + \mathbf{w}_k$, where $\mathbf{x}_k$ denotes the state vector, $\mathbf{u}_k$ the control input, and $\mathbf{v}_k$, $\mathbf{w}_k$ the process and measurement noises, respectively. Unlike linear systems, exact minimum mean square error (MMSE) estimation here is computationally intractable, as the posterior distribution of the state given measurements becomes non-Gaussian, involving high-dimensional integrals without closed-form solutions. This intractability arises because nonlinear transformations distort the Gaussian assumptions inherent to optimal recursive estimators, leading to complex, potentially multimodal posteriors that cannot be analytically propagated or updated.

Direct application of the linear Kalman filter to such nonlinear systems introduces significant limitations. The filter assumes affine dynamics and linear observations, resulting in biased state estimates that deviate from the true posterior, especially when nonlinearities are pronounced. Moreover, the linear approximations cause filter divergence by failing to capture the full uncertainty, often underestimating the error covariance—sometimes by orders of magnitude in severe cases, such as reentry vehicle tracking where the estimated covariance was 100 times smaller than the actual mean squared error. These issues stem from the filter's inability to account for higher-order effects in the nonlinear mappings, leading to inconsistent performance and potential instability over time.

The challenges of nonlinear estimation motivated early adaptations in the 1960s, particularly in aerospace applications like the Apollo program's circumlunar navigation. Researchers at NASA Ames Research Center, led by Stanley F. Schmidt, extended the linear Kalman filter using first-order Taylor series expansions around nominal trajectories or current state estimates to approximate the nonlinear functions locally. These perturbation methods enabled practical on-board implementation, demonstrating accuracy comparable to batch least-squares estimators while reducing computational load, though they highlighted the need for careful linearization to avoid divergence in highly dynamic scenarios.

Illustrative examples underscore these difficulties in real-world contexts. In robotics, vehicle turning dynamics exhibit strong nonlinearities due to coupled lateral and yaw motions, causing linear Kalman filters to produce erroneous position estimates and map distortions during maneuvers. Similarly, range-bearing sensor models in tracking applications transform measurements nonlinearly into Cartesian coordinates, often yielding multimodal posterior distributions that linear methods cannot resolve, leading to poor observability and biased range estimates. The linear Kalman filter provides an optimal baseline only for affine systems with Gaussian noise, but fails to generalize here without approximations.

Historical Development

Origins in Linear Filtering

The foundations of the Extended Kalman Filter (EKF) lie in the development of the linear Kalman filter, which addressed the challenges of estimating the state of dynamic systems from noisy measurements. Rudolf E. Kálmán introduced this framework in his seminal 1960 paper, "A New Approach to Linear Filtering and Prediction Problems," published in the Transactions of the ASME Journal of Basic Engineering. This work built upon earlier extensions of Norbert Wiener's filtering theory from the 1940s, particularly by providing a recursive, state-space formulation suitable for discrete-time, time-varying linear systems, which overcame limitations in Wiener's frequency-domain approach for nonstationary processes. Kálmán's innovation enabled efficient real-time computation, marking a shift from batch processing to sequential estimation in control and signal processing applications.

In the early 1960s, the linear Kalman filter found immediate practical utility in aerospace engineering, particularly at NASA, where it was adapted for navigation and guidance systems. A pivotal application occurred during the Apollo program, where the filter was employed to estimate spacecraft trajectories and orientations amidst sensor noise and uncertainties in orbital mechanics. Stanley F. Schmidt, leading the Dynamic Analysis Branch at NASA Ames Research Center, spearheaded its implementation, demonstrating its effectiveness in simulations and onboard computations for lunar missions, which contributed to the success of Apollo 11 in 1969. These efforts highlighted the filter's robustness in high-stakes environments, spurring widespread adoption in avionics and missile guidance.

As aerospace challenges often involved nonlinear dynamics, such as those in missile tracking and reentry vehicles, early extensions of the Kalman filter emerged in 1961 at NASA Ames. Stanley F. Schmidt developed the extended Kalman filter by linearizing nonlinear measurement and process models around the current state estimate using Jacobian matrices, enabling the filter's use in estimating states for ballistic missiles and spacecraft under nonlinear constraints. These adaptations, tested in simulations for trajectory estimation, laid the groundwork for handling real-world nonlinearities without full statistical characterization.

A key milestone came in 1974 with Arthur Gelb's edited volume, Applied Optimal Estimation, which formalized the EKF by standardizing Jacobian-based linearization around the current state estimate, providing a practical blueprint for engineers and solidifying its role in optimal estimation theory.

Evolution to Nonlinear Extensions

The transition from linear Kalman filtering to its nonlinear extensions began in the early 1960s as engineers encountered systems with inherent nonlinearities, such as those in aerospace trajectory estimation, prompting the development of the extended Kalman filter (EKF) to approximate nonlinear dynamics through local linearization. Building on the linear framework established by Rudolf E. Kálmán in 1960, the EKF adapted the prediction and update steps using Taylor series expansions around the current state estimate, enabling application to mildly nonlinear problems without requiring full statistical characterization of non-Gaussian errors.

In the 1970s, significant theoretical advancements solidified the EKF's foundations, with Peter S. Maybeck's work on stochastic models providing rigorous frameworks for handling uncertainties in nonlinear estimation, as detailed in his comprehensive treatment of filter design for aerospace applications. Concurrently, H. W. Sorenson contributed key analyses on the consistency properties of the EKF, examining conditions under which the filter's error covariance remains a reliable bound on actual estimation errors, particularly through Monte Carlo simulations that demonstrated convergence even for large initial uncertainties.

Influential texts further propelled these developments; Andrew H. Jazwinski's 1970 book Stochastic Processes and Filtering Theory offered a unified mathematical foundation for nonlinear filtering, emphasizing the EKF's role in sequential estimation under stochastic disturbances, while Yaakov Bar-Shalom's 2001 Estimation with Applications to Tracking and Navigation synthesized decades of progress, highlighting practical implementations in navigation systems.

By the 1980s and 1990s, the EKF saw widespread adoption in integrated navigation systems, particularly for combining Global Positioning System (GPS) receivers with Inertial Navigation Systems (INS), where it fused position, velocity, and attitude data to mitigate GPS signal outages and INS drift. This integration became standard in aviation and military applications following GPS initial operational capability in 1993, with loosely and tightly coupled EKF architectures enabling real-time error correction and achieving positioning accuracies on the order of meters in dynamic environments.

Early recognition of the EKF's limitations emerged in these decades, as studies revealed filter divergence risks from severe nonlinearities that biased linearization and inflated covariance estimates, spurring initial modifications like fading memory techniques to enhance robustness.

Core Mathematical Formulation

System Model and Assumptions

The Extended Kalman filter (EKF) is formulated for discrete-time nonlinear dynamic systems, where the state evolution and measurement processes are described by the following stochastic difference equations:

$\mathbf{x}_k = f(\mathbf{x}_{k-1}, \mathbf{u}_{k-1}, \mathbf{w}_{k-1})$
$\mathbf{z}_k = h(\mathbf{x}_k, \mathbf{v}_k)$

Here, $\mathbf{x}_k \in \mathbb{R}^n$ denotes the state vector at discrete time step $k$, $f: \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}^n$ is the nonlinear state transition function incorporating the control input $\mathbf{u}_{k-1} \in \mathbb{R}^m$ and process noise $\mathbf{w}_{k-1} \in \mathbb{R}^p$, $\mathbf{z}_k \in \mathbb{R}^l$ is the observed measurement vector, and $h: \mathbb{R}^n \times \mathbb{R}^l \to \mathbb{R}^l$ is the nonlinear measurement function with measurement noise $\mathbf{v}_k \in \mathbb{R}^l$.

The process noise $\mathbf{w}_{k-1}$ and measurement noise $\mathbf{v}_k$ are modeled as zero-mean Gaussian white noise processes with covariances $\mathbf{Q}_{k-1}$ and $\mathbf{R}_k$, respectively, i.e., $\mathbf{w}_{k-1} \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}_{k-1})$ and $\mathbf{v}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_k)$. These noises are assumed to be uncorrelated with each other at all times, i.e., $E[\mathbf{w}_{k-1} \mathbf{v}_j^\top] = \mathbf{0}$ for all $j, k$. The system is further assumed to follow a Markov process, meaning the state $\mathbf{x}_k$ depends only on the immediate previous state $\mathbf{x}_{k-1}$ (conditional independence given the past). Additionally, the nonlinear functions $f$ and $h$ must be known and differentiable to enable local linear approximations, and the initial state is Gaussian distributed as $\mathbf{x}_0 \sim \mathcal{N}(\hat{\mathbf{x}}_0, \mathbf{P}_0)$.

The primary objective of the EKF is to recursively approximate the posterior probability density function of the state given all measurements up to time $k$, $p(\mathbf{x}_k | \mathbf{Z}_k) \approx \mathcal{N}(\hat{\mathbf{x}}_{k|k}, \mathbf{P}_{k|k})$, where $\mathbf{Z}_k = \{\mathbf{z}_1, \dots, \mathbf{z}_k\}$ is the set of accumulated measurements, $\hat{\mathbf{x}}_{k|k}$ is the estimated state mean, and $\mathbf{P}_{k|k}$ is the corresponding error covariance matrix. When $f$ and $h$ are affine functions (i.e., linear plus constant terms), the EKF formulation specializes to the linear Kalman filter.

A continuous-time analog of this model is given by the stochastic differential equation for state propagation,

$\dot{\mathbf{x}}(t) = f(\mathbf{x}(t), \mathbf{u}(t), \mathbf{w}(t), t),$

along with the measurement equation $\mathbf{z}(t) = h(\mathbf{x}(t), \mathbf{v}(t), t)$, where $\mathbf{w}(t)$ and $\mathbf{v}(t)$ are zero-mean Gaussian white noise processes with power spectral densities $\mathbf{Q}_c(t)$ and $\mathbf{R}_c(t)$, respectively, and the noises remain uncorrelated. This formulation is discretized for numerical implementation in the EKF algorithm.
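For testing a filter implementation, the assumed model is straightforward to simulate; the sketch below draws one trajectory and measurement sequence under the stated noise assumptions (function names are illustrative):

```python
import numpy as np

def simulate(f, h, x0, us, Q, R, seed=0):
    """Draw one state trajectory and measurement sequence from the model
    x_k = f(x_{k-1}, u_{k-1}, w_{k-1}),  z_k = h(x_k, v_k),
    with w ~ N(0, Q) and v ~ N(0, R) mutually uncorrelated."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    xs, zs = [], []
    for u in us:
        w = rng.multivariate_normal(np.zeros(Q.shape[0]), Q)
        x = f(x, u, w)                      # nonlinear state transition
        v = rng.multivariate_normal(np.zeros(R.shape[0]), R)
        zs.append(h(x, v))                  # nonlinear measurement
        xs.append(x)
    return np.array(xs), np.array(zs)
```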

Linearization via Jacobians

The extended Kalman filter addresses nonlinear system dynamics and measurements by locally linearizing the nonlinear functions around the current state estimate, employing first-order Taylor series expansions to approximate the behavior as affine transformations. This linearization enables the application of the standard linear Kalman filter machinery while propagating the Gaussian approximation of the state distribution through the nonlinearities.

Consider the nonlinear discrete-time state transition model $\mathbf{x}_k = f(\mathbf{x}_{k-1}, \mathbf{u}_{k-1}, \mathbf{w}_{k-1})$, where $f$ is the nonlinear function, $\mathbf{u}_{k-1}$ is the control input, and $\mathbf{w}_{k-1}$ is zero-mean Gaussian process noise. To linearize this around the previous state estimate $\hat{\mathbf{x}}_{k-1|k-1}$, perform a first-order Taylor expansion at the point $(\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{u}_{k-1}, \mathbf{0})$:

$$\begin{aligned} f(\mathbf{x}_{k-1}, \mathbf{u}_{k-1}, \mathbf{w}_{k-1}) &\approx f(\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{u}_{k-1}, \mathbf{0}) \\ &\quad + \left. \frac{\partial f}{\partial \mathbf{x}} \right|_{\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{u}_{k-1}} (\mathbf{x}_{k-1} - \hat{\mathbf{x}}_{k-1|k-1}) \\ &\quad + \left. \frac{\partial f}{\partial \mathbf{w}} \right|_{\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{u}_{k-1}} \mathbf{w}_{k-1}. \end{aligned}$$
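The quality of this first-order approximation degrades with distance from the expansion point, which is the root of the EKF's sensitivity to large estimation errors; a toy scalar check (purely illustrative, not from the source) makes this visible:

```python
import numpy as np

# Toy scalar transition f(x) = x + sin(x), linearized about x_ref = 1.0.
# The first-order model is exact at x_ref and degrades as |dx| grows,
# which is why large state errors can make the EKF diverge.
f = lambda x: x + np.sin(x)
x_ref = 1.0
F = 1.0 + np.cos(x_ref)            # analytic Jacobian df/dx at x_ref
for dx in (0.01, 0.1, 0.5):
    exact = f(x_ref + dx)
    linear = f(x_ref) + F * dx     # first-order Taylor approximation
    print(f"dx={dx}: exact={exact:.5f} linear={linear:.5f} "
          f"error={abs(exact - linear):.2e}")
```

The approximation error grows roughly quadratically in the deviation, consistent with the neglected second-order Taylor term.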