Particle filter
Particle filters, also known as sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions to filtering problems arising in nonlinear state-space systems, such as those in signal processing and Bayesian statistical inference.[1] The filtering problem consists of estimating the internal states in dynamical systems when partial observations are made and random perturbations are present in the sensors as well as in the dynamical system. The objective is to compute the posterior distributions of the states of a Markov process, given the noisy and partial observations. The term "particle filters" was first coined in 1996 by Pierre Del Moral in reference to mean-field interacting particle methods used in fluid mechanics since the beginning of the 1960s.[2] The term "Sequential Monte Carlo" was coined by Jun S. Liu and Rong Chen in 1998.[3]
Particle filtering uses a set of particles (also called samples) to represent the posterior distribution of a stochastic process given the noisy and/or partial observations. The state-space model can be nonlinear and the initial state and noise distributions can take any form required. Particle filter techniques provide a well-established methodology[2][4][5] for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions. However, these methods do not perform well when applied to very high-dimensional systems.
Particle filters update their prediction in an approximate (statistical) manner. The samples from the distribution are represented by a set of particles; each particle has a likelihood weight assigned to it that represents the probability of that particle being sampled from the probability density function. Weight disparity leading to weight collapse is a common issue encountered in these filtering algorithms. However, it can be mitigated by including a resampling step before the weights become too uneven. Several adaptive resampling criteria can be used, including the variance of the weights and the relative entropy with respect to the uniform distribution.[6] In the resampling step, the particles with negligible weights are replaced by new particles in the proximity of the particles with higher weights.
From the statistical and probabilistic point of view, particle filters may be interpreted as mean-field particle interpretations of Feynman-Kac probability measures.[7][8][9][10][11] These particle integration techniques were developed in molecular chemistry and computational physics by Theodore E. Harris and Herman Kahn in 1951, Marshall N. Rosenbluth and Arianna W. Rosenbluth in 1955,[12] and more recently by Jack H. Hetherington in 1984.[13] In computational physics, these Feynman-Kac type path particle integration methods are also used in Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods.[14][15][16] Feynman-Kac interacting particle methods are also strongly related to mutation-selection genetic algorithms currently used in evolutionary computation to solve complex optimization problems.
The particle filter methodology is used to solve hidden Markov model (HMM) and nonlinear filtering problems. With the notable exception of linear-Gaussian signal-observation models (Kalman filter) or wider classes of models (the Beneš filter[17]), Mireille Chaleyat-Maurel and Dominique Michel proved in 1984 that the sequence of posterior distributions of the random states of a signal, given the observations (a.k.a. the optimal filter), has no finite recursion.[18] Various other numerical methods based on fixed grid approximations, Markov Chain Monte Carlo techniques, conventional linearization, extended Kalman filters, or determining the best linear system (in the expected cost-error sense) are unable to cope with large-scale systems, unstable processes, or insufficiently smooth nonlinearities.
Particle filters and Feynman-Kac particle methodologies find application in signal and image processing, Bayesian inference, machine learning, risk analysis and rare event sampling, engineering and robotics, artificial intelligence, bioinformatics,[19] phylogenetics, computational science, economics and mathematical finance, molecular chemistry, computational physics, pharmacokinetics, quantitative risk and insurance[20][21] and other fields.
History
Heuristic-like algorithms
From a statistical and probabilistic viewpoint, particle filters belong to the class of branching/genetic type algorithms, and mean-field type interacting particle methodologies. The interpretation of these particle methods depends on the scientific discipline. In Evolutionary Computing, mean-field genetic type particle methodologies are often used as heuristic and natural search algorithms (a.k.a. Metaheuristic). In computational physics and molecular chemistry, they are used to solve Feynman-Kac path integration problems or to compute Boltzmann-Gibbs measures, top eigenvalues, and ground states of Schrödinger operators. In Biology and Genetics, they represent the evolution of a population of individuals or genes in some environment.
The origins of mean-field type evolutionary computational techniques can be traced back to 1950 and 1954 with Alan Turing's work on genetic type mutation-selection learning machines[22] and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.[23][24] The first trace of particle filters in statistical methodology dates back to the mid-1950s; the 'Poor Man's Monte Carlo'[25] proposed by John Hammersley et al. in 1954 contained hints of the genetic type particle filtering methods used today. In 1963, Nils Aall Barricelli simulated a genetic type algorithm to mimic the ability of individuals to play a simple game.[26] In the evolutionary computing literature, genetic-type mutation-selection algorithms became popular through the seminal work of John Holland in the early 1970s, particularly his book[27] published in 1975.
In Biology and Genetics, the Australian geneticist Alex Fraser also published in 1957 a series of papers on the genetic type simulation of artificial selection of organisms.[28] The computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)[29] and Crosby (1973).[30] Fraser's simulations included all of the essential elements of modern mutation-selection genetic particle algorithms.
From the mathematical viewpoint, the conditional distribution of the random states of a signal given some partial and noisy observations is described by a Feynman-Kac probability on the random trajectories of the signal weighted by a sequence of likelihood potential functions.[7][8] Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods, can also be interpreted as a mean-field genetic type particle approximation of Feynman-Kac path integrals.[7][8][9][13][14][31][32] The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron chain reactions,[33] but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984.[13] One can also cite the earlier seminal works of Theodore E. Harris and Herman Kahn in particle physics, published in 1951, using mean-field but heuristic-like genetic methods for estimating particle transmission energies.[34] In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.[12]
The use of genetic particle algorithms in advanced signal processing and Bayesian inference is more recent. In January 1993, Genshiro Kitagawa developed a "Monte Carlo filter",[35] a slightly modified version of which appeared in 1996.[36] In April 1993, Neil J. Gordon et al. published in their seminal work[37] an application of genetic type algorithms in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state space or the noise of the system. Independent work by Pierre Del Moral[2] and by Himilcon Carvalho, Pierre Del Moral, André Monin, and Gérard Salut[38] on particle filters was published in the mid-1990s. Particle filters were also developed in signal processing during 1989-1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut at LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems), in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales) and the IT company DIGILOG, on RADAR/SONAR and GPS signal processing problems.[39][40][41][42][43][44]
Mathematical foundations
From 1950 to 1996, all the publications on particle filters and genetic algorithms, including the pruning and resample Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations without a single proof of their consistency, nor any discussion on the bias of the estimates or on genealogical and ancestral tree-based algorithms.
The mathematical foundations and the first rigorous analysis of these particle algorithms are due to Pierre Del Moral[2][4] in 1996. The article[2] also contains proof of the unbiased properties of a particle approximation of likelihood functions and unnormalized conditional probability measures. The unbiased particle estimator of the likelihood functions presented in this article is used today in Bayesian statistical inference.
Dan Crisan, Jessica Gaines, and Terry Lyons,[45][46][47] as well as Pierre Del Moral and Terry Lyons,[48] created branching-type particle techniques with various population sizes around the end of the 1990s. P. Del Moral, A. Guionnet, and L. Miclo[8][49][50] made more advances in this subject in 2000. Pierre Del Moral and Alice Guionnet[51] proved the first central limit theorems in 1999, and Pierre Del Moral and Laurent Miclo[8] proved them in 2000. The first uniform convergence results concerning the time parameter for particle filters were developed at the end of the 1990s by Pierre Del Moral and Alice Guionnet.[49][50] The first rigorous analysis of genealogical tree-based particle filter smoothers is due to P. Del Moral and L. Miclo in 2001.[52]
The theory on Feynman-Kac particle methodologies and related particle filter algorithms was developed in 2000 and 2004 in two books.[8][5] These abstract probabilistic models encapsulate genetic type algorithms, particle and bootstrap filters, interacting Kalman filters (a.k.a. the Rao–Blackwellized particle filter[53]), and importance sampling and resampling style particle filter techniques, including genealogical tree-based and particle backward methodologies for solving filtering and smoothing problems. Other classes of particle filtering methodologies include genealogical tree-based models,[10][5][54] backward Markov particle models,[10][55] adaptive mean-field particle models,[6] island-type particle models,[56][57] particle Markov chain Monte Carlo methodologies,[58][59] Sequential Monte Carlo samplers,[60][61][62] Sequential Monte Carlo Approximate Bayesian Computation methods,[63] and Sequential Monte Carlo ABC-based Bayesian bootstrap.[64]
The filtering problem
Objective
A particle filter's goal is to estimate the posterior density of state variables given observation variables. The particle filter is intended for use with a hidden Markov model, in which the system includes both hidden and observable variables. The observable variables (observation process) are linked to the hidden variables (state process) via a known functional form. Similarly, the probabilistic description of the dynamical system defining the evolution of the state variables is known.
A generic particle filter estimates the posterior distribution of the hidden states using the observation measurement process. With respect to a state-space model such as the one below, in which the hidden states form a Markov chain and each observation depends only on the current state,

$$\begin{array}{ccccccc} X_0 & \longrightarrow & X_1 & \longrightarrow & X_2 & \longrightarrow & \cdots \\ \downarrow & & \downarrow & & \downarrow & & \\ Y_0 & & Y_1 & & Y_2 & & \cdots \end{array}$$

the filtering problem is to estimate sequentially the values of the hidden states $X_k$, given the values of the observation process $Y_0, \ldots, Y_k$, at any time step k.

All Bayesian estimates of $X_k$ follow from the posterior density $p(x_k \mid y_0, y_1, \ldots, y_k)$. The particle filter methodology provides an approximation of these conditional probabilities using the empirical measure associated with a genetic type particle algorithm. In contrast, the Markov Chain Monte Carlo or importance sampling approach would model the full posterior $p(x_0, x_1, \ldots, x_k \mid y_0, y_1, \ldots, y_k)$.
The Signal-Observation model
Particle methods often assume $X_k$ and the observations $Y_k$ can be modeled in this form:

- $X_0, X_1, \ldots$ is a Markov process on $\mathbb{R}^{d_x}$ (for some $d_x \geq 1$) that evolves according to the transition probability density $p(x_k \mid x_{k-1})$. This model is also often written in a synthetic way as
$$X_k \mid X_{k-1} = x_{k-1} \sim p(x_k \mid x_{k-1})$$
with an initial probability density $p(x_0)$.
- The observations $Y_0, Y_1, \ldots$ take values in some state space on $\mathbb{R}^{d_y}$ (for some $d_y \geq 1$) and are conditionally independent provided that $X_0, X_1, \ldots$ are known. In other words, each $Y_k$ only depends on $X_k$. In addition, we assume the conditional distributions of $Y_k$ given $X_k = x_k$ are absolutely continuous, and in a synthetic way we have
$$Y_k \mid X_k = x_k \sim p(y_k \mid x_k)$$
An example of a system with these properties is:

$$X_k = g(X_{k-1}) + W_{k-1}$$
$$Y_k = h(X_k) + V_k$$

where both $W_k$ and $V_k$ are mutually independent sequences with known probability density functions and g and h are known functions. These two equations can be viewed as state space equations and look similar to the state space equations for the Kalman filter. If the functions g and h in the above example are linear, and if both $W_k$ and $V_k$ are Gaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman filter-based methods are a first-order approximation (EKF) or a second-order approximation (UKF in general, but if the probability distribution is Gaussian a third-order approximation is possible).
The assumption that the initial distribution and the transitions of the Markov chain are absolutely continuous with respect to the Lebesgue measure can be relaxed. To design a particle filter we simply need to assume that we can sample the transitions of the Markov chain $X_k$ and compute the likelihood function $x_k \mapsto p(y_k \mid x_k)$ (see for instance the genetic selection-mutation description of the particle filter given below). The absolute continuity assumption on the Markov transitions of $X_k$ is only used to derive, in an informal (and rather abusive) way, different formulae between posterior distributions using Bayes' rule for conditional densities.
Approximate Bayesian computation models
In certain problems, the conditional distribution of observations, given the random states of the signal, may fail to have a density; the latter may be impossible or too complex to compute.[19] In this situation, an additional level of approximation is necessitated. One strategy is to replace the signal $X_k$ by the Markov chain $\mathcal{X}_k = (X_k, Y_k)$ and to introduce a virtual observation of the form

$$\widehat{Y}_k = Y_k + \epsilon \, \mathcal{V}_k \quad \text{for some parameter } \epsilon \geq 0$$

for some sequence of independent random variables $\mathcal{V}_k$ with known probability density functions. The central idea is to observe that

$$\operatorname{Law}\!\left(X_k \mid \widehat{Y}_0 = \widehat{y}_0, \ldots, \widehat{Y}_k = \widehat{y}_k\right) \approx_{\epsilon \downarrow 0} \operatorname{Law}\!\left(X_k \mid Y_0 = \widehat{y}_0, \ldots, Y_k = \widehat{y}_k\right)$$

The particle filter associated with the Markov process $\mathcal{X}_k = (X_k, Y_k)$ given the partial observations $\widehat{Y}_0, \ldots, \widehat{Y}_k$ is defined in terms of particles evolving in $\mathbb{R}^{d_x + d_y}$ with a likelihood function given, with some obvious abusive notation, by $p(\widehat{Y}_k \mid \mathcal{X}_k)$. These probabilistic techniques are closely related to Approximate Bayesian Computation (ABC). In the context of particle filters, these ABC particle filtering techniques were introduced in 1998 by P. Del Moral, J. Jacod and P. Protter.[65] They were further developed by P. Del Moral, A. Doucet and A. Jasra.[66][67]
The nonlinear filtering equation
Bayes' rule for conditional probability gives:

$$p(x_0, \ldots, x_k \mid y_0, \ldots, y_k) = \frac{p(y_0, \ldots, y_k \mid x_0, \ldots, x_k)\, p(x_0, \ldots, x_k)}{p(y_0, \ldots, y_k)}$$

where

$$p(y_0, \ldots, y_k \mid x_0, \ldots, x_k) = \prod_{l=0}^{k} p(y_l \mid x_l), \qquad p(x_0, \ldots, x_k) = p(x_0) \prod_{l=1}^{k} p(x_l \mid x_{l-1}).$$

Particle filters are also an approximation, but with enough particles they can be much more accurate.[2][4][5][49][50] The nonlinear filtering equation is given by the recursion

$$p(x_k \mid y_0, \ldots, y_{k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_0, \ldots, y_{k-1})\, dx_{k-1}$$
$$p(x_k \mid y_0, \ldots, y_k) = \frac{p(y_k \mid x_k)\, p(x_k \mid y_0, \ldots, y_{k-1})}{\int p(y_k \mid x'_k)\, p(x'_k \mid y_0, \ldots, y_{k-1})\, dx'_k} \quad \text{(Eq. 1)}$$

with the convention $p(x_0 \mid y_0, \ldots, y_{-1}) = p(x_0)$ for k = 0. The nonlinear filtering problem consists in computing these conditional distributions sequentially.
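For intuition, on a one-dimensional problem the recursion (Eq. 1) can be carried out numerically on a fixed grid. The sketch below is a hypothetical grid filter for the additive-Gaussian example above; g, h, and the noise scales are supplied by the caller.

```python
import numpy as np
from scipy.stats import norm

grid = np.linspace(-40, 40, 2001)   # discretized state space
dx = grid[1] - grid[0]

def grid_filter_step(posterior_pdf, y_k, g, h, sigma_w, sigma_v):
    """One prediction-update sweep of Eq. 1 on the grid."""
    # Prediction: integrate the transition density against the current posterior
    trans = norm.pdf(grid[:, None], loc=g(grid)[None, :], scale=sigma_w)
    predicted = trans @ posterior_pdf * dx
    # Update: multiply by the likelihood p(y_k | x_k) and renormalize
    likelihood = norm.pdf(y_k, loc=h(grid), scale=sigma_v)
    updated = likelihood * predicted
    return updated / (updated.sum() * dx)
```

The cost grows quadratically in the number of grid points and quickly becomes intractable as the state dimension grows, which is precisely the difficulty that particle methods are designed to avoid.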
Feynman-Kac formulation
We fix a time horizon n and a sequence of observations $Y_0 = y_0, \ldots, Y_n = y_n$, and for each k = 0, ..., n we set:

$$G_k(x_k) = p(y_k \mid x_k).$$

In this notation, for any bounded function F on the set of trajectories of the signal from the origin k = 0 up to time k = n, we have the Feynman-Kac formula

$$\frac{\int F(x_0, \ldots, x_n) \prod_{k=0}^{n} p(y_k \mid x_k)\, p(x_0, \ldots, x_n)\, dx_0 \cdots dx_n}{\int \prod_{k=0}^{n} p(y_k \mid x_k)\, p(x_0, \ldots, x_n)\, dx_0 \cdots dx_n} = E\left(F(X_0, \ldots, X_n) \mid Y_0 = y_0, \ldots, Y_n = y_n\right)$$

Feynman-Kac path integration models arise in a variety of scientific disciplines, including in computational physics, biology, information theory and computer sciences.[8][10][5] Their interpretations are dependent on the application domain. For instance, if we choose the indicator function $G_k(x_k) = 1_A(x_k)$ of some subset A of the state space, they represent the conditional distribution of a Markov chain given it stays in a given tube; that is, we have:

$$E\left(F(X_0, \ldots, X_n) \mid X_0 \in A, \ldots, X_n \in A\right) = \frac{E\left(F(X_0, \ldots, X_n) \prod_{k=0}^{n} G_k(X_k)\right)}{E\left(\prod_{k=0}^{n} G_k(X_k)\right)}$$

and

$$P\left(X_0 \in A, \ldots, X_n \in A\right) = E\left(\prod_{k=0}^{n} G_k(X_k)\right)$$

as soon as the normalizing constant is strictly positive.
Particle filters
A genetic type particle algorithm
Initially, such an algorithm starts with N independent random variables $(\xi_0^i)_{1 \leq i \leq N}$ with common probability density $p(x_0)$. The genetic algorithm selection-mutation transitions[2][4]

$$\xi_k := \left(\xi_k^i\right)_{1 \leq i \leq N} \xrightarrow{\text{selection}} \widehat{\xi}_k := \left(\widehat{\xi}_k^i\right)_{1 \leq i \leq N} \xrightarrow{\text{mutation}} \xi_{k+1} := \left(\xi_{k+1}^i\right)_{1 \leq i \leq N}$$

mimic/approximate the updating-prediction transitions of the optimal filter evolution (Eq. 1):

- During the selection-updating transition we sample N (conditionally) independent random variables $\widehat{\xi}_k := (\widehat{\xi}_k^i)_{1 \leq i \leq N}$ with common (conditional) distribution

$$\sum_{i=1}^{N} \frac{p(y_k \mid \xi_k^i)}{\sum_{j=1}^{N} p(y_k \mid \xi_k^j)}\, \delta_{\xi_k^i}(dx_k)$$

where $\delta_a$ stands for the Dirac measure at a given state a.

- During the mutation-prediction transition, from each selected particle $\widehat{\xi}_k^i$ we sample independently a transition

$$\xi_{k+1}^i \sim p(x_{k+1} \mid \widehat{\xi}_k^i), \qquad i = 1, \ldots, N.$$

In the above displayed formulae $p(y_k \mid \xi_k^i)$ stands for the likelihood function $x_k \mapsto p(y_k \mid x_k)$ evaluated at $x_k = \xi_k^i$, and $p(x_{k+1} \mid \widehat{\xi}_k^i)$ stands for the conditional density $p(x_{k+1} \mid x_k)$ evaluated at $x_k = \widehat{\xi}_k^i$.

At each time k, we have the particle approximations

$$\widehat{p}(dx_k \mid y_0, \ldots, y_k) := \frac{1}{N} \sum_{i=1}^{N} \delta_{\widehat{\xi}_k^i}(dx_k) \approx_{N \uparrow \infty} p(dx_k \mid y_0, \ldots, y_k)$$

and

$$\widehat{p}(dx_k \mid y_0, \ldots, y_{k-1}) := \frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_k^i}(dx_k) \approx_{N \uparrow \infty} p(dx_k \mid y_0, \ldots, y_{k-1})$$
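A minimal Python sketch of one selection-mutation transition for the additive-Gaussian example above; the names g, h and the noise scales are the illustrative assumptions introduced earlier.

```python
import numpy as np

def selection_mutation_step(particles, y_k, g, h, sigma_w, sigma_v, rng):
    """One genetic-type transition: selection by likelihood, then mutation."""
    N = len(particles)
    # Selection-updating: weights proportional to the likelihood p(y_k | xi_k^i)
    log_lik = -0.5 * ((y_k - h(particles)) / sigma_v) ** 2
    w = np.exp(log_lik - log_lik.max())        # stabilized unnormalized weights
    w /= w.sum()
    selected = particles[rng.choice(N, size=N, p=w)]
    # Mutation-prediction: propagate each selected particle through the transition
    return g(selected) + sigma_w * rng.normal(size=N)
```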
In the genetic algorithms and evolutionary computing community, the mutation-selection Markov chain described above is often called the genetic algorithm with proportional selection. Several branching variants, including variants with random population sizes, have also been proposed in the literature.[5][45][48]
Particle methods, like all sampling-based approaches (e.g., Markov Chain Monte Carlo), generate a set of samples that approximate the filtering density

$$p(x_k \mid y_0, \ldots, y_k).$$

For example, we may have N samples from the approximate posterior distribution of $X_k$, where the samples are labeled with superscripts as

$$\widehat{\xi}_k^1, \ldots, \widehat{\xi}_k^N.$$

Then, expectations with respect to the filtering distribution are approximated by

$$\int f(x_k)\, p(x_k \mid y_0, \ldots, y_k)\, dx_k \approx_{N \uparrow \infty} \frac{1}{N} \sum_{i=1}^{N} f\left(\widehat{\xi}_k^i\right) \quad \text{(Eq. 2)}$$

with

$$\widehat{p}(dx_k \mid y_0, \ldots, y_k) = \frac{1}{N} \sum_{i=1}^{N} \delta_{\widehat{\xi}_k^i}(dx_k)$$

where $\delta_a$ stands for the Dirac measure at a given state a. The function f, in the usual way for Monte Carlo, can give all the moments etc. of the distribution up to some approximation error. When the approximation equation (Eq. 2) is satisfied for any bounded function f we write

$$p(dx_k \mid y_0, \ldots, y_k) := p(x_k \mid y_0, \ldots, y_k)\, dx_k \approx_{N \uparrow \infty} \widehat{p}(dx_k \mid y_0, \ldots, y_k) = \frac{1}{N} \sum_{i=1}^{N} \delta_{\widehat{\xi}_k^i}(dx_k)$$
Particle filters can be interpreted as a genetic type particle algorithm evolving with mutation and selection transitions. We can keep track of the ancestral lines

$$\left(\widehat{\xi}_{0,k}^i, \widehat{\xi}_{1,k}^i, \ldots, \widehat{\xi}_{k-1,k}^i, \widehat{\xi}_{k,k}^i\right)$$

of the particles $i = 1, \ldots, N$. The random states $\widehat{\xi}_{l,k}^i$, with the lower indices l = 0, ..., k, stand for the ancestors of the individual $\widehat{\xi}_{k,k}^i = \widehat{\xi}_k^i$ at level l = 0, ..., k. In this situation, we have the approximation formula

$$\int F(x_0, \ldots, x_k)\, p(x_0, \ldots, x_k \mid y_0, \ldots, y_k)\, dx_0 \cdots dx_k \approx_{N \uparrow \infty} \frac{1}{N} \sum_{i=1}^{N} F\left(\widehat{\xi}_{0,k}^i, \ldots, \widehat{\xi}_{k,k}^i\right) \quad \text{(Eq. 3)}$$

with the empirical measure

$$\widehat{p}(d(x_0, \ldots, x_k) \mid y_0, \ldots, y_k) := \frac{1}{N} \sum_{i=1}^{N} \delta_{\left(\widehat{\xi}_{0,k}^i, \ldots, \widehat{\xi}_{k,k}^i\right)}(d(x_0, \ldots, x_k))$$

Here F stands for any bounded function on the path space of the signal. In a more synthetic form (Eq. 3) is equivalent to

$$p(d(x_0, \ldots, x_k) \mid y_0, \ldots, y_k) \approx_{N \uparrow \infty} \widehat{p}(d(x_0, \ldots, x_k) \mid y_0, \ldots, y_k)$$
Particle filters can be interpreted in many different ways. From the probabilistic point of view they coincide with a mean-field particle interpretation of the nonlinear filtering equation. The updating-prediction transitions of the optimal filter evolution can also be interpreted as the classical genetic type selection-mutation transitions of individuals. The sequential importance resampling technique provides another interpretation of the filtering transitions coupling importance sampling with the bootstrap resampling step. Last, but not least, particle filters can be seen as an acceptance-rejection methodology equipped with a recycling mechanism.[10][5]
The general probabilistic principle
The nonlinear filtering evolution can be interpreted as a dynamical system in the set of probability measures of the form $\eta_{k+1} = \Phi_{k+1}(\eta_k)$, where $\Phi_{k+1}$ stands for some mapping from the set of probability distributions into itself. For instance, the evolution of the one-step optimal predictor

$$\eta_k(dx_k) = p(x_k \mid y_0, \ldots, y_{k-1})\, dx_k$$

satisfies a nonlinear evolution starting with the probability distribution $\eta_0(dx_0) = p(x_0)\, dx_0$. One of the simplest ways to approximate these probability measures is to start with N independent random variables $(\xi_0^i)_{1 \leq i \leq N}$ with common probability distribution $\eta_0$. Suppose we have defined a sequence of N random variables $(\xi_k^i)_{1 \leq i \leq N}$ such that

$$\frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_k^i} \approx_{N \uparrow \infty} \eta_k.$$

At the next step we sample N (conditionally) independent random variables $\xi_{k+1} := (\xi_{k+1}^i)_{1 \leq i \leq N}$ with common law

$$\Phi_{k+1}\left(\frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_k^i}\right).$$
A particle interpretation of the filtering equation
We illustrate this mean-field particle principle in the context of the evolution of the one step optimal predictors

$$p(x_{k+1} \mid y_0, \ldots, y_k) = \int p(x_{k+1} \mid x_k)\, \frac{p(y_k \mid x_k)\, p(x_k \mid y_0, \ldots, y_{k-1})}{\int p(y_k \mid x'_k)\, p(x'_k \mid y_0, \ldots, y_{k-1})\, dx'_k}\, dx_k \quad \text{(Eq. 4)}$$

For k = 0 we use the convention $p(x_0 \mid y_0, \ldots, y_{-1}) := p(x_0)$.

By the law of large numbers, we have

$$\widehat{p}(dx_0) = \frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_0^i}(dx_0) \approx_{N \uparrow \infty} p(x_0)\, dx_0$$

in the sense that

$$\int f(x_0)\, \widehat{p}(dx_0) = \frac{1}{N} \sum_{i=1}^{N} f(\xi_0^i) \approx_{N \uparrow \infty} \int f(x_0)\, p(x_0)\, dx_0$$

for any bounded function f. We further assume that we have constructed a sequence of particles $(\xi_k^i)_{1 \leq i \leq N}$ at some rank k such that

$$\widehat{p}(dx_k \mid y_0, \ldots, y_{k-1}) := \frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_k^i}(dx_k) \approx_{N \uparrow \infty} p(x_k \mid y_0, \ldots, y_{k-1})\, dx_k$$

in the sense that for any bounded function f we have

$$\int f(x_k)\, \widehat{p}(dx_k \mid y_0, \ldots, y_{k-1}) = \frac{1}{N} \sum_{i=1}^{N} f(\xi_k^i) \approx_{N \uparrow \infty} \int f(x_k)\, p(x_k \mid y_0, \ldots, y_{k-1})\, dx_k$$

In this situation, replacing $p(x_k \mid y_0, \ldots, y_{k-1})\, dx_k$ by the empirical measure $\widehat{p}(dx_k \mid y_0, \ldots, y_{k-1})$ in the evolution equation of the one-step optimal filter stated in (Eq. 4) we find that

$$p(x_{k+1} \mid y_0, \ldots, y_k) \approx_{N \uparrow \infty} \int p(x_{k+1} \mid x_k)\, \frac{p(y_k \mid x_k)\, \widehat{p}(dx_k \mid y_0, \ldots, y_{k-1})}{\int p(y_k \mid x'_k)\, \widehat{p}(dx'_k \mid y_0, \ldots, y_{k-1})}$$

Notice that the right hand side in the above formula is a weighted probability mixture

$$\widehat{p}(x_{k+1} \mid y_0, \ldots, y_k) := \sum_{i=1}^{N} \frac{p(y_k \mid \xi_k^i)}{\sum_{j=1}^{N} p(y_k \mid \xi_k^j)}\, p(x_{k+1} \mid \xi_k^i)$$

where $p(y_k \mid \xi_k^i)$ stands for the density $p(y_k \mid x_k)$ evaluated at $x_k = \xi_k^i$, and $p(x_{k+1} \mid \xi_k^i)$ stands for the density $p(x_{k+1} \mid x_k)$ evaluated at $x_k = \xi_k^i$, for $i = 1, \ldots, N$.

Then, we sample N independent random variables $(\xi_{k+1}^i)_{1 \leq i \leq N}$ with common probability density $\widehat{p}(x_{k+1} \mid y_0, \ldots, y_k)$ so that

$$\widehat{p}(dx_{k+1} \mid y_0, \ldots, y_k) := \frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_{k+1}^i}(dx_{k+1}) \approx_{N \uparrow \infty} \widehat{p}(x_{k+1} \mid y_0, \ldots, y_k)\, dx_{k+1} \approx_{N \uparrow \infty} p(x_{k+1} \mid y_0, \ldots, y_k)\, dx_{k+1}$$

Iterating this procedure, we design a Markov chain such that

$$\widehat{p}(dx_k \mid y_0, \ldots, y_{k-1}) := \frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_k^i}(dx_k) \approx_{N \uparrow \infty} p(dx_k \mid y_0, \ldots, y_{k-1})$$

Notice that the optimal filter is approximated at each time step k using the Bayes' formulae

$$p(dx_k \mid y_0, \ldots, y_k) \approx_{N \uparrow \infty} \frac{p(y_k \mid x_k)\, \widehat{p}(dx_k \mid y_0, \ldots, y_{k-1})}{\int p(y_k \mid x'_k)\, \widehat{p}(dx'_k \mid y_0, \ldots, y_{k-1})} = \sum_{i=1}^{N} \frac{p(y_k \mid \xi_k^i)}{\sum_{j=1}^{N} p(y_k \mid \xi_k^j)}\, \delta_{\xi_k^i}(dx_k)$$

The terminology "mean-field approximation" comes from the fact that we replace at each time step the probability measure $p(dx_k \mid y_0, \ldots, y_{k-1})$ by the empirical approximation $\widehat{p}(dx_k \mid y_0, \ldots, y_{k-1})$. The mean-field particle approximation of the filtering problem is far from being unique. Several strategies are developed in the books.[10][5]
Some convergence results
The analysis of the convergence of particle filters was started in 1996[2][4] and continued in 2000 in the book[8] and the series of articles.[48][49][50][51][52][68][69] More recent developments can be found in the books.[10][5] When the filtering equation is stable (in the sense that it corrects any erroneous initial condition), the bias and the variance of the particle estimates
are controlled by the non-asymptotic uniform estimates
for any function f bounded by 1, and for some finite constants. In addition, for any :
for some finite constants related to the asymptotic bias and variance of the particle estimate, and some finite constant c. The same results are satisfied if we replace the one step optimal predictor by the optimal filter approximation.
Genealogical trees and unbiasedness properties
Genealogical tree based particle smoothing
Tracing back in time the ancestral lines

$$\left(\widehat{\xi}_{0,k}^i, \widehat{\xi}_{1,k}^i, \ldots, \widehat{\xi}_{k,k}^i\right), \qquad \left(\xi_{0,k}^i, \xi_{1,k}^i, \ldots, \xi_{k,k}^i\right)$$

of the individuals $\widehat{\xi}_k^i$ and $\xi_k^i$ at every time step k, we also have the particle approximations

$$\widehat{p}(d(x_0, \ldots, x_k) \mid y_0, \ldots, y_k) := \frac{1}{N} \sum_{i=1}^{N} \delta_{\left(\widehat{\xi}_{0,k}^i, \ldots, \widehat{\xi}_{k,k}^i\right)}(d(x_0, \ldots, x_k)) \approx_{N \uparrow \infty} p(d(x_0, \ldots, x_k) \mid y_0, \ldots, y_k)$$

These empirical approximations are equivalent to the particle integral approximations

$$\int F(x_0, \ldots, x_k)\, \widehat{p}(d(x_0, \ldots, x_k) \mid y_0, \ldots, y_k) = \frac{1}{N} \sum_{i=1}^{N} F\left(\widehat{\xi}_{0,k}^i, \ldots, \widehat{\xi}_{k,k}^i\right) \approx_{N \uparrow \infty} \int F(x_0, \ldots, x_k)\, p(d(x_0, \ldots, x_k) \mid y_0, \ldots, y_k)$$

for any bounded function F on the random trajectories of the signal. As shown in[54] the evolution of the genealogical tree coincides with a mean-field particle interpretation of the evolution equations associated with the posterior densities of the signal trajectories. For more details on these path space models, we refer to the books.[10][5]
Unbiased particle estimates of likelihood functions
We use the product formula

$$p(y_0, \ldots, y_n) = \prod_{k=0}^{n} p(y_k \mid y_0, \ldots, y_{k-1})$$

with

$$p(y_k \mid y_0, \ldots, y_{k-1}) = \int p(y_k \mid x_k)\, p(dx_k \mid y_0, \ldots, y_{k-1})$$

and the conventions $p(y_0 \mid y_0, \ldots, y_{-1}) = p(y_0)$ and $p(x_0 \mid y_0, \ldots, y_{-1}) = p(x_0)$ for k = 0. Replacing $p(x_k \mid y_0, \ldots, y_{k-1})\, dx_k$ by the empirical approximation

$$\widehat{p}(dx_k \mid y_0, \ldots, y_{k-1}) := \frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_k^i}(dx_k)$$

in the above displayed formula, we design the following unbiased particle approximation of the likelihood function

$$p(y_0, \ldots, y_n) \approx_{N \uparrow \infty} \widehat{p}(y_0, \ldots, y_n) = \prod_{k=0}^{n} \widehat{p}(y_k \mid y_0, \ldots, y_{k-1})$$

with

$$\widehat{p}(y_k \mid y_0, \ldots, y_{k-1}) = \int p(y_k \mid x_k)\, \widehat{p}(dx_k \mid y_0, \ldots, y_{k-1}) = \frac{1}{N} \sum_{i=1}^{N} p(y_k \mid \xi_k^i)$$

where $p(y_k \mid \xi_k^i)$ stands for the density $p(y_k \mid x_k)$ evaluated at $x_k = \xi_k^i$. The design of this particle estimate and the unbiasedness property has been proved in 1996 in the article.[2] Refined variance estimates can be found in[5] and.[10]
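A sketch of the estimator, stacked on the selection-mutation step above; the model interface (x0_sampler, g, h, and the noise scales) is an assumption made for illustration. The running product is accumulated on the log scale for numerical stability (the estimator itself, before taking logarithms, is the unbiased one described above).

```python
import numpy as np

def particle_log_likelihood(y, x0_sampler, g, h, sigma_w, sigma_v, N=1000, seed=2):
    """Particle estimate of log p(y_0, ..., y_n) for the additive-Gaussian model."""
    rng = np.random.default_rng(seed)
    particles = x0_sampler(rng, N)          # xi_0^i drawn from p(x_0)
    log_Z = 0.0
    for y_k in y:
        lik = np.exp(-0.5 * ((y_k - h(particles)) / sigma_v) ** 2) \
              / (np.sqrt(2 * np.pi) * sigma_v)
        log_Z += np.log(lik.mean())         # log of (1/N) sum_i p(y_k | xi_k^i)
        w = lik / lik.sum()                 # selection weights
        selected = particles[rng.choice(N, size=N, p=w)]
        particles = g(selected) + sigma_w * rng.normal(size=N)  # mutation
    return log_Z
```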
Backward particle smoothers
Using Bayes' rule, we have the formula
Notice that
This implies that
Replacing the one-step optimal predictors by the particle empirical measures
we find that
We conclude that
with the backward particle approximation
The probability measure
is the probability of the random paths of a Markov chain running backward in time from time k=n to time k=0, and evolving at each time step k in the state space associated with the population of particles
- Initially (at time k=n) the chain chooses randomly a state with the distribution
- From time k to time (k-1), the chain starting at some state for some at time k moves at time (k-1) to a random state chosen with the discrete weighted probability
In the above displayed formula, stands for the conditional distribution evaluated at . In the same vein, and stand for the conditional densities and evaluated at and . These models allow one to reduce integration with respect to the densities to matrix operations with respect to the Markov transitions of the chain described above.[55] For instance, for any function we have the particle estimates
where
This also shows that if
then
Particle smoothing can also be achieved in a single online pass through a fixed-lag approximation.[70]
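The sketch below draws one smoothed trajectory by simulating the backward chain described above over stored particle clouds. Since the displayed weight formulas are not reproduced here, the backward weights used (likelihood times transition density, the form appropriate for one-step-predictor clouds) are a stated assumption; trans_pdf and lik_pdf are caller-supplied, vectorized model densities.

```python
import numpy as np

def backward_sample_path(clouds, y, trans_pdf, lik_pdf, rng):
    """Sample one path of the backward chain over clouds[k][i] = xi_k^i.

    Assumed backward weights: lik_pdf(y[k-1], xi_{k-1}^j) * trans_pdf(xi_k, xi_{k-1}^j).
    """
    n = len(clouds) - 1
    # Time k = n: pick a terminal state with likelihood-proportional weights
    w = lik_pdf(y[n], clouds[n])
    path = [clouds[n][rng.choice(len(clouds[n]), p=w / w.sum())]]
    for k in range(n, 0, -1):
        # Backward move from time k to k-1 with discrete weighted probabilities
        w = lik_pdf(y[k - 1], clouds[k - 1]) * trans_pdf(path[-1], clouds[k - 1])
        path.append(clouds[k - 1][rng.choice(len(clouds[k - 1]), p=w / w.sum())])
    return np.array(path[::-1])
```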
Some convergence results
We shall assume that the filtering equation is stable, in the sense that it corrects any erroneous initial condition.
In this situation, the particle approximations of the likelihood functions are unbiased and the relative variance is controlled by
for some finite constant c. In addition, for any :
for some finite constants related to the asymptotic bias and variance of the particle estimate, and for some finite constant c.
The bias and the variance of the particle estimates based on the ancestral lines of the genealogical trees
are controlled by the non-asymptotic uniform estimates
for any function F bounded by 1, and for some finite constants. In addition, for any :
for some finite constants related to the asymptotic bias and variance of the particle estimate, and for some finite constant c. The same type of bias and variance estimates hold for the backward particle smoothers. For additive functionals of the form
with
with functions bounded by 1, we have
and
for some finite constants. More refined estimates, including exponentially small probabilities of error, are developed in.[10]
Sequential Importance Resampling (SIR)
Monte Carlo filter and bootstrap filter
Sequential Importance Resampling (SIR), Monte Carlo filtering (Kitagawa 1993[35]), the bootstrap filtering algorithm (Gordon et al. 1993[37]), and single distribution resampling (Bejuri W.M.Y.B et al. 2017[71]) are commonly applied filtering algorithms, which approximate the filtering probability density $p(x_k \mid y_0, \ldots, y_k)$ by a weighted set of N samples

$$\left\{ \left(w_k^{(i)}, x_k^{(i)}\right) \ : \ i = 1, \ldots, N \right\}$$
The importance weights $w_k^{(i)}$ are approximations to the relative posterior probabilities (or densities) of the samples such that

$$\sum_{i=1}^{N} w_k^{(i)} = 1.$$
Sequential importance sampling (SIS) is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function f can be approximated as a weighted average

$$\int f(x_k)\, p(x_k \mid y_0, \ldots, y_k)\, dx_k \approx \sum_{i=1}^{N} w_k^{(i)}\, f\left(x_k^{(i)}\right)$$
For a finite set of samples, the algorithm performance is dependent on the choice of the proposal distribution

$$\pi(x_k \mid x_{0:k-1}, y_{0:k}).$$

The "optimal" proposal distribution is given as the target distribution

$$\pi(x_k \mid x_{0:k-1}, y_{0:k}) = p(x_k \mid x_{k-1}, y_k).$$
This particular choice of proposal transition has been proposed by P. Del Moral in 1996 and 1998.[4] When it is difficult to sample transitions according to the distribution $p(x_k \mid x_{k-1}, y_k)$, one natural strategy is to use the following particle approximation

$$\frac{p(y_k \mid x_k)\, p(x_k \mid x_{k-1})\, dx_k}{\int p(y_k \mid x'_k)\, p(x'_k \mid x_{k-1})\, dx'_k} \approx_{N \uparrow \infty} \frac{p(y_k \mid x_k)\, \widehat{p}(dx_k \mid x_{k-1})}{\int p(y_k \mid x'_k)\, \widehat{p}(dx'_k \mid x_{k-1})}$$

with the empirical approximation

$$\widehat{p}(dx_k \mid x_{k-1}) = \frac{1}{N} \sum_{i=1}^{N} \delta_{X_k^i(x_{k-1})}(dx_k)$$

associated with N (or any other large number of samples) independent random samples $X_k^i(x_{k-1})$, $i = 1, \ldots, N$, with the conditional distribution of the random state $X_k$ given $X_{k-1} = x_{k-1}$. The consistency of the resulting particle filter of this approximation and other extensions are developed in.[4] In the above display $\delta_a$ stands for the Dirac measure at a given state a.
However, the transition prior probability distribution is often used as importance function, since it is easier to draw particles (or samples) and perform subsequent importance weight calculations:

$$\pi(x_k \mid x_{0:k-1}, y_{0:k}) = p(x_k \mid x_{k-1}).$$

Sequential Importance Resampling (SIR) filters with the transition prior probability distribution as importance function are commonly known as the bootstrap filter and the condensation algorithm.
Resampling is used to avoid the problem of the degeneracy of the algorithm, that is, avoiding the situation that all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the proper choice of resampling method. The stratified sampling proposed by Kitagawa (1993[35]) is optimal in terms of variance.
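A minimal sketch of stratified resampling, assuming the weights are already normalized: one uniform draw is placed in each of the N equal strata of [0, 1], and the cumulative weight function is inverted at those points.

```python
import numpy as np

def stratified_resample(weights, rng):
    """Return N ancestor indices drawn by stratified resampling.

    Placing exactly one point per stratum [i/N, (i+1)/N) reduces the variance
    of the resampled counts relative to plain multinomial resampling.
    """
    N = len(weights)
    positions = (np.arange(N) + rng.uniform(size=N)) / N   # one point per stratum
    return np.searchsorted(np.cumsum(weights), positions)  # invert the CDF
```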
A single step of sequential importance resampling is as follows; a consolidated code sketch follows the list:

- 1) For $i = 1, \ldots, N$, draw samples from the proposal distribution

$$x_k^{(i)} \sim \pi(x_k \mid x_{0:k-1}^{(i)}, y_{0:k})$$

- 2) For $i = 1, \ldots, N$, update the importance weights up to a normalizing constant:

$$\hat{w}_k^{(i)} = w_{k-1}^{(i)} \frac{p(y_k \mid x_k^{(i)})\, p(x_k^{(i)} \mid x_{k-1}^{(i)})}{\pi(x_k^{(i)} \mid x_{0:k-1}^{(i)}, y_{0:k})}$$

- Note that when we use the transition prior probability distribution as the importance function, $\pi(x_k^{(i)} \mid x_{0:k-1}^{(i)}, y_{0:k}) = p(x_k^{(i)} \mid x_{k-1}^{(i)})$, this simplifies to the following:

$$\hat{w}_k^{(i)} = w_{k-1}^{(i)}\, p(y_k \mid x_k^{(i)})$$

- 3) For $i = 1, \ldots, N$, compute the normalized importance weights:

$$w_k^{(i)} = \frac{\hat{w}_k^{(i)}}{\sum_{j=1}^{N} \hat{w}_k^{(j)}}$$

- 4) Compute an estimate of the effective number of particles as

$$\hat{N}_{\mathit{eff}} = \frac{1}{\sum_{i=1}^{N} \left(w_k^{(i)}\right)^2}$$

- This criterion reflects the variance of the weights. Other criteria can be found in the article,[6] including their rigorous analysis and central limit theorems.
- 5) If the effective number of particles is less than a given threshold $\hat{N}_{\mathit{eff}} < N_{thr}$, then perform resampling:
- a) Draw N particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.
- b) For $i = 1, \ldots, N$, set $w_k^{(i)} = 1/N$.
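Putting steps 1-5 together with the transition prior as importance function (the bootstrap choice), a minimal sketch; propagate and likelihood are caller-supplied model functions, an assumption made for illustration.

```python
import numpy as np

def sir_step(particles, weights, y_k, propagate, likelihood, rng, threshold=0.5):
    """One SIR step with an effective-sample-size resampling trigger."""
    N = len(particles)
    particles = propagate(particles, rng)           # step 1: x_k^i ~ p(x_k | x_{k-1}^i)
    weights = weights * likelihood(y_k, particles)  # step 2: w_k prop. to w_{k-1} p(y_k | x_k^i)
    weights = weights / weights.sum()               # step 3: normalize
    n_eff = 1.0 / np.sum(weights**2)                # step 4: effective number of particles
    if n_eff < threshold * N:                       # step 5: resample when depleted
        idx = rng.choice(N, size=N, p=weights)      # 5a: draw proportionally to weights
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)               # 5b: reset weights to 1/N
    return particles, weights
```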
The term "Sampling Importance Resampling" is also sometimes used when referring to SIR filters, but the term Importance Resampling is more accurate because the word "resampling" implies that the initial sampling has already been done.[72]
Sequential importance sampling (SIS)
Sequential importance sampling (SIS) is the same as the SIR algorithm but without the resampling stage. This version often exhibits particle weight collapse, where almost all the probability mass gets concentrated on one or two particles, and the remaining particle weights correspond to very small probability. The introduction of resampling alleviates this problem.
"Direct version" algorithm
[edit]This section may be confusing or unclear to readers. (October 2011) |
The "direct version" algorithm [citation needed] is rather simple (compared to other particle filtering algorithms) and it uses composition and rejection. To generate a single sample x at k from :
- 1) Set n = 0 (This will count the number of particles generated so far)
- 2) Uniformly choose an index i from the range
- 3) Generate a test from the distribution with
- 4) Generate the probability of using from where is the measured value
- 5) Generate another uniform u from where
- 6) Compare u and
- 6a) If u is larger then repeat from step 2
- 6b) If u is smaller then save as and increment n
- 7) If n == N then quit
The goal is to generate P "particles" at k using only the particles from . This requires that a Markov equation can be written (and computed) to generate a based only upon . This algorithm uses the composition of the P particles from to generate a particle at k and repeats (steps 2–6) until P particles are generated at k.
This can be more easily visualized if x is viewed as a two-dimensional array. One dimension is k and the other dimension is the particle number. For example, would be the ith particle at and can also be written (as done above in the algorithm). Step 3 generates a potential based on a randomly chosen particle () at time and rejects or accepts it in step 6. In other words, the values are generated using the previously generated .
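A sketch of this composition-and-rejection loop; transition_sample, obs_pdf, and the likelihood bound m_k are caller-supplied assumptions (m_k must dominate obs_pdf(y_k, x) over x for the rejection step to be valid).

```python
import numpy as np

def direct_version_sample(prev_particles, y_k, transition_sample, obs_pdf, m_k, rng, N):
    """Generate N particles at time k by composition and rejection."""
    out = np.empty(N)
    n = 0                                               # step 1: accepted-particle counter
    while n < N:                                        # step 7: stop once N are accepted
        i = rng.integers(len(prev_particles))           # step 2: uniform index
        x_test = transition_sample(prev_particles[i], rng)  # step 3: draw from p(x_k | x_{k-1}^(i))
        p = obs_pdf(y_k, x_test)                        # step 4: likelihood of the measurement
        u = rng.uniform(0.0, m_k)                       # step 5: uniform on [0, m_k]
        if u <= p:                                      # step 6: accept or reject
            out[n] = x_test                             # 6b: save and increment
            n += 1
    return out
```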
Applications
Particle filters and Feynman-Kac particle methodologies find application in several contexts, as an effective means for tackling noisy observations or strong nonlinearities, such as:
- Bayesian inference, machine learning, risk analysis and rare event sampling
- Bioinformatics[19]
- Computational science
- Economics, financial mathematics and mathematical finance: particle filters can perform simulations which are needed to compute the high-dimensional and/or complex integrals related to problems such as dynamic stochastic general equilibrium models in macro-economics and option pricing[73]
- Engineering
- Infectious disease epidemiology where they have been applied to a number of epidemic forecasting problems, for example predicting seasonal influenza epidemics[74]
- Fault detection and isolation: in observer-based schemes a particle filter can forecast the expected sensor output, enabling fault isolation[75][76][77]
- Molecular chemistry and computational physics
- Pharmacokinetics[78]
- Phylogenetics
- Robotics, artificial intelligence: Monte Carlo localization is a de facto standard in mobile robot localization[79][80][81]
- Signal and image processing: visual localization, tracking, feature recognition[82]
Other particle filters
- Auxiliary particle filter[83]
- Cost Reference particle filter
- Exponential Natural Particle Filter[84]
- Feynman-Kac and mean-field particle methodologies[2][10][5]
- Gaussian particle filter
- Gauss–Hermite particle filter
- Hierarchical/Scalable particle filter[85]
- Nudged particle filter[86]
- Particle Markov chain Monte Carlo, see e.g. the pseudo-marginal Metropolis–Hastings algorithm.
- Rao–Blackwellized particle filter[53]
- Regularized auxiliary particle filter[87]
- Rejection-sampling based optimal particle filter[88][89]
- Unscented particle filter
- Online particle smoother[70]
References
- ^ Wills, Adrian G.; Schön, Thomas B. (3 May 2023). "Sequential Monte Carlo: A Unified Review". Annual Review of Control, Robotics, and Autonomous Systems. 6 (1): 159–182. doi:10.1146/annurev-control-042920-015119. ISSN 2573-5144. S2CID 255638127.
- ^ a b c d e f g h i j Del Moral, Pierre (1996). "Non Linear Filtering: Interacting Particle Solution" (PDF). Markov Processes and Related Fields. 2 (4): 555–580.
- ^ Liu, Jun S.; Chen, Rong (1998-09-01). "Sequential Monte Carlo Methods for Dynamic Systems". Journal of the American Statistical Association. 93 (443): 1032–1044. doi:10.1080/01621459.1998.10473765. ISSN 0162-1459.
- ^ a b c d e f g Del Moral, Pierre (1998). "Measure Valued Processes and Interacting Particle Systems. Application to Non Linear Filtering Problems". Annals of Applied Probability. 8 (2) (Publications du Laboratoire de Statistique et Probabilités, 96-15 (1996) ed.): 438–495. doi:10.1214/aoap/1028903535.
- ^ a b c d e f g h i j k l Del Moral, Pierre (2004). Feynman-Kac formulae. Genealogical and interacting particle approximations. Springer. Series: Probability and Applications. p. 556. ISBN 978-0-387-20268-6.
- ^ a b c Del Moral, Pierre; Doucet, Arnaud; Jasra, Ajay (2012). "On Adaptive Resampling Procedures for Sequential Monte Carlo Methods" (PDF). Bernoulli. 18 (1): 252–278. doi:10.3150/10-bej335. S2CID 4506682.
- ^ a b c Del Moral, Pierre (2004). Feynman-Kac formulae. Genealogical and interacting particle approximations. Probability and its Applications. Springer. p. 575. ISBN 9780387202686.
- ^ a b c d e f g h Del Moral, Pierre; Miclo, Laurent (2000). "Branching and Interacting Particle Systems Approximations of Feynman-Kac Formulae with Applications to Non-Linear Filtering". In Jacques Azéma; Michel Ledoux; Michel Émery; Marc Yor (eds.). Séminaire de Probabilités XXXIV (PDF). Lecture Notes in Mathematics. Vol. 1729. pp. 1–145. doi:10.1007/bfb0103798. ISBN 978-3-540-67314-9.
- ^ a b Del Moral, Pierre; Miclo, Laurent (2000). "A Moran particle system approximation of Feynman-Kac formulae". Stochastic Processes and Their Applications. 86 (2): 193–216. doi:10.1016/S0304-4149(99)00094-0. S2CID 122757112.
- ^ a b c d e f g h i j k Del Moral, Pierre (2013). Mean field simulation for Monte Carlo integration. Monographs on Statistics & Applied Probability. Chapman & Hall/CRC Press. p. 626.
- ^ Del Moral, Pierre; Doucet, Arnaud (2014). "Particle methods: An introduction with applications". ESAIM: Proc. 44: 1–46. doi:10.1051/proc/201444001.
- ^ a b Rosenbluth, Marshall N.; Rosenbluth, Arianna W. (1955). "Monte-Carlo calculations of the average extension of macromolecular chains". J. Chem. Phys. 23 (2): 356–359. Bibcode:1955JChPh..23..356R. doi:10.1063/1.1741967. S2CID 89611599.
- ^ a b c Hetherington, Jack H. (1984). "Observations on the statistical iteration of matrices". Phys. Rev. A. 30 (5): 2713–2719. Bibcode:1984PhRvA..30.2713H. doi:10.1103/PhysRevA.30.2713.
- ^ a b Del Moral, Pierre (2003). "Particle approximations of Lyapunov exponents connected to Schrödinger operators and Feynman-Kac semigroups". ESAIM Probability & Statistics. 7: 171–208. doi:10.1051/ps:2003001.
- ^ Assaraf, Roland; Caffarel, Michel; Khelif, Anatole (2000). "Diffusion Monte Carlo Methods with a fixed number of walkers" (PDF). Phys. Rev. E. 61 (4): 4566–4575. Bibcode:2000PhRvE..61.4566A. doi:10.1103/physreve.61.4566. PMID 11088257. Archived from the original (PDF) on 2014-11-07.
- ^ Caffarel, Michel; Ceperley, David; Kalos, Malvin (1993). "Comment on Feynman-Kac Path-Integral Calculation of the Ground-State Energies of Atoms". Phys. Rev. Lett. 71 (13): 2159. Bibcode:1993PhRvL..71.2159C. doi:10.1103/physrevlett.71.2159. PMID 10054598.
- ^ Ocone, D. L. (January 1, 1999). "Asymptotic stability of beneš filters". Stochastic Analysis and Applications. 17 (6): 1053–1074. doi:10.1080/07362999908809648. ISSN 0736-2994.
- ^ Maurel, Mireille Chaleyat; Michel, Dominique (January 1, 1984). "Des resultats de non existence de filtre de dimension finie". Stochastics. 13 (1–2): 83–102. doi:10.1080/17442508408833312. ISSN 0090-9491.
- ^ a b c Hajiramezanali, Ehsan; Imani, Mahdi; Braga-Neto, Ulisses; Qian, Xiaoning; Dougherty, Edward R. (2019). "Scalable optimal Bayesian classification of single-cell trajectories under regulatory model uncertainty". BMC Genomics. 20 (Suppl 6): 435. arXiv:1902.03188. Bibcode:2019arXiv190203188H. doi:10.1186/s12864-019-5720-3. PMC 6561847. PMID 31189480.
- ^ Cruz, Marcelo G.; Peters, Gareth W.; Shevchenko, Pavel V. (2015-02-27). Fundamental Aspects of Operational Risk and Insurance Analytics: A Handbook of Operational Risk (1 ed.). Wiley. doi:10.1002/9781118573013. ISBN 978-1-118-11839-9.
- ^ Peters, Gareth W.; Shevchenko, Pavel V. (2015-02-20). Advances in Heavy Tailed Risk Modeling: A Handbook of Operational Risk (1 ed.). Wiley. doi:10.1002/9781118909560. ISBN 978-1-118-90953-9.
- ^ Turing, Alan M. (October 1950). "Computing machinery and intelligence". Mind. LIX (236): 433–460. doi:10.1093/mind/LIX.236.433.
- ^ Barricelli, Nils Aall (1954). "Esempi numerici di processi di evoluzione". Methodos: 45–68.
- ^ Barricelli, Nils Aall (1957). "Symbiogenetic evolution processes realized by artificial methods". Methodos: 143–182.
- ^ Hammersley, J. M.; Morton, K. W. (1954). "Poor Man's Monte Carlo". Journal of the Royal Statistical Society. Series B (Methodological). 16 (1): 23–38. doi:10.1111/j.2517-6161.1954.tb00145.x. JSTOR 2984008.
- ^ Barricelli, Nils Aall (1963). "Numerical testing of evolution theories. Part II. Preliminary tests of performance, symbiogenesis and terrestrial life". Acta Biotheoretica. 16 (3–4): 99–126. doi:10.1007/BF01556602. S2CID 86717105.
- ^ "Adaptation in Natural and Artificial Systems | The MIT Press". mitpress.mit.edu. Retrieved 2015-06-06.
- ^ Fraser, Alex (1957). "Simulation of genetic systems by automatic digital computers. I. Introduction". Aust. J. Biol. Sci. 10 (4): 484–491. doi:10.1071/BI9570484.
- ^ Fraser, Alex; Burnell, Donald (1970). Computer Models in Genetics. New York: McGraw-Hill. ISBN 978-0-07-021904-5.
- ^ Crosby, Jack L. (1973). Computer Simulation in Genetics. London: John Wiley & Sons. ISBN 978-0-471-18880-3.
- ^ Assaraf, Roland; Caffarel, Michel; Khelif, Anatole (2000). "Diffusion Monte Carlo Methods with a fixed number of walkers" (PDF). Phys. Rev. E. 61 (4): 4566–4575. Bibcode:2000PhRvE..61.4566A. doi:10.1103/physreve.61.4566. PMID 11088257. Archived from the original (PDF) on 2014-11-07.
- ^ Caffarel, Michel; Ceperley, David; Kalos, Malvin (1993). "Comment on Feynman-Kac Path-Integral Calculation of the Ground-State Energies of Atoms". Phys. Rev. Lett. 71 (13): 2159. Bibcode:1993PhRvL..71.2159C. doi:10.1103/physrevlett.71.2159. PMID 10054598.
- ^ Fermi, Enrico; Richtmyer, Robert D. (1948). "Note on census-taking in Monte Carlo calculations" (PDF). LAM. 805 (A). Declassified report, Los Alamos Archive.
- ^ Kahn, Herman; Harris, Theodore E. (1951). "Estimation of particle transmission by random sampling" (PDF). Natl. Bur. Stand. Appl. Math. Ser. 12: 27–30.
- ^ a b c Kitagawa, G. (January 1993). "A Monte Carlo Filtering and Smoothing Method for Non-Gaussian Nonlinear State Space Models" (PDF). Proceedings of the 2nd U.S.-Japan Joint Seminar on Statistical Time Series Analysis: 110–131.
- ^ Kitagawa, G. (1996). "Monte carlo filter and smoother for non-Gaussian nonlinear state space models". Journal of Computational and Graphical Statistics. 5 (1): 1–25. doi:10.2307/1390750. JSTOR 1390750.
- ^ a b Gordon, N.J.; Salmond, D.J.; Smith, A.F.M. (April 1993). "Novel approach to nonlinear/non-Gaussian Bayesian state estimation". IEE Proceedings F - Radar and Signal Processing. 140 (2): 107–113. doi:10.1049/ip-f-2.1993.0015. ISSN 0956-375X.
- ^ Carvalho, Himilcon; Del Moral, Pierre; Monin, André; Salut, Gérard (July 1997). "Optimal Non-linear Filtering in GPS/INS Integration" (PDF). IEEE Transactions on Aerospace and Electronic Systems. 33 (3): 835. Bibcode:1997ITAES..33..835C. doi:10.1109/7.599254. S2CID 27966240. Archived from the original (PDF) on 2022-11-10. Retrieved 2015-06-01.
- ^ P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control: An unified framework for particle solutions. LAAS-CNRS, Toulouse, Research Report no. 91137, DRET-DIGILOG-LAAS/CNRS contract, April (1991).
- ^ P. Del Moral, G. Rigal, and G. Salut. Nonlinear and non-Gaussian particle filters applied to inertial platform repositioning. LAAS-CNRS, Toulouse, Research Report no. 92207, STCAN/DIGILOG-LAAS/CNRS Convention STCAN no. A.91.77.013, (94p.), September (1991).
- ^ P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control: Particle resolution in filtering and estimation. Experimental results. Convention DRET no. 89.34.553.00.470.75.01, Research report no. 2 (54p.), January (1992).
- ^ P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control: Particle resolution in filtering and estimation. Theoretical results. Convention DRET no. 89.34.553.00.470.75.01, Research report no. 3 (123p.), October (1992).
- ^ P. Del Moral, J.-Ch. Noyer, G. Rigal, and G. Salut. Particle filters in radar signal processing: detection, estimation and air targets recognition. LAAS-CNRS, Toulouse, Research report no. 92495, December (1992).
- ^ P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control: Particle resolution in filtering and estimation. Studies on: Filtering, optimal control, and maximum likelihood estimation. Convention DRET no. 89.34.553.00.470.75.01, Research report no. 4 (210p.), January (1993).
- ^ a b Crisan, Dan; Gaines, Jessica; Lyons, Terry (1998). "Convergence of a branching particle method to the solution of the Zakai equation". SIAM Journal on Applied Mathematics. 58 (5): 1568–1590. doi:10.1137/s0036139996307371. S2CID 39982562.
- ^ Crisan, Dan; Lyons, Terry (1997). "Nonlinear filtering and measure-valued processes". Probability Theory and Related Fields. 109 (2): 217–244. doi:10.1007/s004400050131. S2CID 119809371.
- ^ Crisan, Dan; Lyons, Terry (1999). "A particle approximation of the solution of the Kushner–Stratonovitch equation". Probability Theory and Related Fields. 115 (4): 549–578. doi:10.1007/s004400050249. S2CID 117725141.
- ^ a b c Crisan, Dan; Del Moral, Pierre; Lyons, Terry (1999). "Discrete filtering using branching and interacting particle systems" (PDF). Markov Processes and Related Fields. 5 (3): 293–318.
- ^ a b c d Del Moral, Pierre; Guionnet, Alice (1999). "On the stability of Measure Valued Processes with Applications to filtering". C. R. Acad. Sci. Paris. 39 (1): 429–434.
- ^ a b c d Del Moral, Pierre; Guionnet, Alice (2001). "On the stability of interacting processes with applications to filtering and genetic algorithms". Annales de l'Institut Henri Poincaré. 37 (2): 155–194. Bibcode:2001AIHPB..37..155D. doi:10.1016/s0246-0203(00)01064-5. Archived from the original on 2014-11-07.
- ^ a b Del Moral, P.; Guionnet, A. (1999). "Central limit theorem for nonlinear filtering and interacting particle systems". The Annals of Applied Probability. 9 (2): 275–297. doi:10.1214/aoap/1029962742. ISSN 1050-5164.
- ^ a b Del Moral, Pierre; Miclo, Laurent (2001). "Genealogies and Increasing Propagation of Chaos For Feynman-Kac and Genetic Models". The Annals of Applied Probability. 11 (4): 1166–1198. doi:10.1214/aoap/1015345399. ISSN 1050-5164.
- ^ a b Doucet, A.; De Freitas, N.; Murphy, K.; Russell, S. (2000). Rao–Blackwellised particle filtering for dynamic Bayesian networks. Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence. pp. 176–183. CiteSeerX 10.1.1.137.5199.
- ^ a b Del Moral, Pierre; Miclo, Laurent (2001). "Genealogies and Increasing Propagations of Chaos for Feynman-Kac and Genetic Models". Annals of Applied Probability. 11 (4): 1166–1198.
- ^ a b Del Moral, Pierre; Doucet, Arnaud; Singh, Sumeetpal S. (2010). "A Backward Particle Interpretation of Feynman-Kac Formulae" (PDF). M2AN. 44 (5): 947–976. doi:10.1051/m2an/2010048. S2CID 14758161.
- ^ Vergé, Christelle; Dubarry, Cyrille; Del Moral, Pierre; Moulines, Eric (2013). "On parallel implementation of Sequential Monte Carlo methods: the island particle model". Statistics and Computing. 25 (2): 243–260. arXiv:1306.3911. Bibcode:2013arXiv1306.3911V. doi:10.1007/s11222-013-9429-x. S2CID 39379264.
- ^ Chopin, Nicolas; Jacob, Pierre E.; Papaspiliopoulos, Omiros (2011). "SMC^2: an efficient algorithm for sequential analysis of state-space models". arXiv:1101.1528v3 [stat.CO].
- ^ Andrieu, Christophe; Doucet, Arnaud; Holenstein, Roman (2010). "Particle Markov chain Monte Carlo methods". Journal of the Royal Statistical Society, Series B. 72 (3): 269–342. doi:10.1111/j.1467-9868.2009.00736.x.
- ^ Del Moral, Pierre; Patras, Frédéric; Kohn, Robert (2014). "On Feynman-Kac and particle Markov chain Monte Carlo models". arXiv:1404.5733 [math.PR].
- ^ Del Moral, Pierre; Doucet, Arnaud; Jasra, Ajay (2006). "Sequential Monte Carlo Samplers". Journal of the Royal Statistical Society. Series B (Statistical Methodology). 68 (3): 411–436. arXiv:cond-mat/0212648. doi:10.1111/j.1467-9868.2006.00553.x. ISSN 1369-7412. JSTOR 3879283.
- ^ Peters, Gareth (2005). "Topics in Sequential Monte Carlo Samplers". SSRN Electronic Journal. doi:10.2139/ssrn.3785582. ISSN 1556-5068.
- ^ Del Moral, Pierre; Doucet, Arnaud; Peters, Gareth (2004). "Sequential Monte Carlo Samplers CUED Technical Report". SSRN Electronic Journal. doi:10.2139/ssrn.3841065. ISSN 1556-5068.
- ^ Sisson, S. A.; Fan, Y.; Beaumont, M. A., eds. (2019). Handbook of approximate Bayesian computation. Boca Raton: CRC Press, Taylor and Francis Group. ISBN 978-1-315-11719-5.
- ^ Peters, Gareth W.; Wüthrich, Mario V.; Shevchenko, Pavel V. (2010-08-01). "Chain ladder method: Bayesian bootstrap versus classical bootstrap". Insurance: Mathematics and Economics. 47 (1): 36–51. arXiv:1004.2548. doi:10.1016/j.insmatheco.2010.03.007. ISSN 0167-6687.
- ^ Del Moral, Pierre; Jacod, Jean; Protter, Philip (2001-07-01). "The Monte-Carlo method for filtering with discrete-time observations". Probability Theory and Related Fields. 120 (3): 346–368. doi:10.1007/PL00008786. hdl:1813/9179. ISSN 0178-8051. S2CID 116274.
- ^ Del Moral, Pierre; Doucet, Arnaud; Jasra, Ajay (2011). "An adaptive sequential Monte Carlo method for approximate Bayesian computation". Statistics and Computing. 22 (5): 1009–1020. CiteSeerX 10.1.1.218.9800. doi:10.1007/s11222-011-9271-y. ISSN 0960-3174. S2CID 4514922.
- ^ Martin, James S.; Jasra, Ajay; Singh, Sumeetpal S.; Whiteley, Nick; Del Moral, Pierre; McCoy, Emma (May 4, 2014). "Approximate Bayesian Computation for Smoothing". Stochastic Analysis and Applications. 32 (3): 397–420. arXiv:1206.5208. doi:10.1080/07362994.2013.879262. ISSN 0736-2994. S2CID 17117364.
- ^ Del Moral, Pierre; Rio, Emmanuel (2011). "Concentration inequalities for mean field particle models". The Annals of Applied Probability. 21 (3): 1017–1052. arXiv:1211.1837. doi:10.1214/10-AAP716. ISSN 1050-5164. S2CID 17693884.
- ^ Del Moral, Pierre; Hu, Peng; Wu, Liming (2012). On the Concentration Properties of Interacting Particle Processes. Hanover, MA, USA: Now Publishers Inc. ISBN 978-1601985125.
- ^ a b Duffield, Samuel; Singh, Sumeetpal (2022). "Online Particle Smoothing With Application to Map-Matching". IEEE Transactions on Signal Processing. 70: 497–508. arXiv:2012.04602. Bibcode:2022ITSP...70..497D. doi:10.1109/TSP.2022.3141259. ISSN 1053-587X.
- ^ Bejuri, Wan Mohd Yaakob Wan; Mohamad, Mohd Murtadha; Raja Mohd Radzi, Raja Zahilah; Salleh, Mazleena; Yusof, Ahmad Fadhil (2017-10-18). "Adaptive memory-based single distribution resampling for particle filter". Journal of Big Data. 4 (1): 33. doi:10.1186/s40537-017-0094-3. ISSN 2196-1115. S2CID 256407088.
- ^ Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.; Vehtari, Aki; Rubin, Donald B. (2013). Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC. ISBN 978-1-4398-4095-5.
- ^ Creal, Drew (2012). "A Survey of Sequential Monte Carlo Methods for Economics and Finance". Econometric Reviews. 31 (2): 245–296. doi:10.1080/07474938.2011.607333. hdl:1871/15287. S2CID 2730761.
- ^ Moss, Robert; Zarebski, Alexander; Dawson, Peter; McCaw, James M. (2016). "Forecasting influenza outbreak dynamics in Melbourne from Internet search query surveillance data". Influenza and Other Respiratory Viruses. 10 (4): 314–323. doi:10.1111/irv.12376. PMC 4910172. PMID 26859411.
- ^ Shen, Yin; Xiangping, Zhu (2015). "Intelligent Particle Filter and Its Application to Fault Detection of Nonlinear System". IEEE Transactions on Industrial Electronics. 62 (6): 1. Bibcode:2015ITIE...62.3852Y. doi:10.1109/TIE.2015.2399396. S2CID 23951880.
- ^ D'Amato, Edigio; Notaro, Immacolata; Nardi, Vito Antonio; Scordamaglia, Valerio (2021). "A Particle Filtering Approach for Fault Detection and Isolation of UAV IMU Sensors: Design, Implementation and Sensitivity Analysis". Sensors. 21 (9): 3066. Bibcode:2021Senso..21.3066D. doi:10.3390/s21093066. PMC 8124649. PMID 33924891.
- ^ Kadirkamanathan, V.; Li, P.; Jaward, M. H.; Fabri, S. G. (2002). "Particle filtering-based fault detection in non-linear stochastic systems". International Journal of Systems Science. 33 (4): 259–265. Bibcode:2002IJSyS..33..259K. doi:10.1080/00207720110102566. S2CID 28634585.
- ^ Bonate, Peter L. (2011). Pharmacokinetic-Pharmacodynamic Modeling and Simulation. Berlin: Springer.
- ^ Dieter Fox, Wolfram Burgard, Frank Dellaert, and Sebastian Thrun, "Monte Carlo Localization: Efficient Position Estimation for Mobile Robots." Proc. of the Sixteenth National Conference on Artificial Intelligence John Wiley & Sons Ltd, 1999.
- ^ Sebastian Thrun, Wolfram Burgard, Dieter Fox. Probabilistic Robotics MIT Press, 2005. Ch. 8.3 ISBN 9780262201629.
- ^ Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert. "Robust monte carlo localization for mobile robots." Artificial Intelligence 128.1 (2001): 99–141.
- ^ Abbasi, Mahdi; Khosravi, Mohammad R. (2020). "A Robust and Accurate Particle Filter-Based Pupil Detection Method for Big Datasets of Eye Video". Journal of Grid Computing. 18 (2): 305–325. doi:10.1007/s10723-019-09502-1. S2CID 209481431.
- ^ Pitt, M.K.; Shephard, N. (1999). "Filtering Via Simulation: Auxiliary Particle Filters". Journal of the American Statistical Association. 94 (446): 590–591. doi:10.2307/2670179. JSTOR 2670179. Archived from the original on 2007-10-16. Retrieved 2008-05-06.
- ^ Zand, G.; Taherkhani, M.; Safabakhsh, R. (2015). "Exponential Natural Particle Filter". arXiv:1511.06603 [cs.LG].
- ^ Canton-Ferrer, C.; Casas, J.R.; Pardàs, M. (2011). "Human Motion Capture Using Scalable Body Models". Computer Vision and Image Understanding. 115 (10): 1363–1374. doi:10.1016/j.cviu.2011.06.001. hdl:2117/13393.
- ^ Akyildiz, Ömer Deniz; Míguez, Joaquín (2020-03-01). "Nudging the particle filter". Statistics and Computing. 30 (2): 305–330. doi:10.1007/s11222-019-09884-y. hdl:10044/1/100011. ISSN 1573-1375. S2CID 88515918.
- ^ Liu, J.; Wang, W.; Ma, F. (2011). "A Regularized Auxiliary Particle Filtering Approach for System State Estimation and Battery Life Prediction". Smart Materials and Structures. 20 (7): 1–9. Bibcode:2011SMaS...20g5021L. doi:10.1088/0964-1726/20/7/075021. S2CID 110670991.
- ^ Blanco, J.L.; Gonzalez, J.; Fernandez-Madrigal, J.A. (2008). An Optimal Filtering Algorithm for Non-Parametric Observation Models in Robot Localization. IEEE International Conference on Robotics and Automation (ICRA'08). pp. 461–466. CiteSeerX 10.1.1.190.7092.
- ^ Blanco, J.L.; Gonzalez, J.; Fernandez-Madrigal, J.A. (2010). "Optimal Filtering for Non-Parametric Observation Models: Applications to Localization and SLAM". The International Journal of Robotics Research. 29 (14): 1726–1742. CiteSeerX 10.1.1.1031.4931. doi:10.1177/0278364910364165. S2CID 453697.
External links
- Feynman–Kac models and interacting particle algorithms (a.k.a. Particle Filtering): theoretical aspects and a list of application domains of particle filters
- Sequential Monte Carlo Methods (Particle Filtering) homepage at the University of Cambridge
- Dieter Fox's MCL Animations
- Rob Hess' free software
- SMCTC: A Template Class for Implementing SMC algorithms in C++
- Java applet on particle filtering
- vSMC : Vectorized Sequential Monte Carlo
- Particle filter explained in the context of self-driving cars
History
Early heuristic approaches
Early heuristic approaches to particle filtering emerged from practical needs in physics and engineering, where probabilistic simulations were used to approximate complex systems without rigorous theoretical backing. In the 1950s and 1960s, Monte Carlo methods were developed at Los Alamos National Laboratory to model neutron transport and other nuclear processes, simulating thousands of individual particle paths to estimate aggregate behaviors in reactors and weapons design.[4] These techniques relied on random sampling to propagate particle trajectories heuristically, providing approximate solutions to high-dimensional integration problems in radiation shielding and fission chain reactions. By the 1970s, similar particle-based approximations extended to engineering simulations, such as reliability analysis in complex systems, where stochastic particle ensembles approximated failure probabilities under uncertainty. In parallel, the 1970s saw the introduction of genetic algorithms as another class of heuristic optimizers that influenced early filtering ideas. John Holland's foundational work framed these algorithms as evolutionary processes, using mutation, selection, and crossover to evolve populations of candidate solutions toward optimal states. Applied to estimation problems, genetic algorithms treated state variables as "genes" in a population, iteratively refining approximations to hidden system parameters through survival-of-the-fittest mechanics, as demonstrated in early control-theory applications for dynamic system identification. These methods provided ad hoc ways to handle nonlinear estimation without assuming Gaussian noise, foreshadowing population-based filtering strategies. A pivotal bridge to more structured approaches came in 1993 with the bootstrap filter proposed by Gordon, Salmond, and Smith, which adapted Monte Carlo sampling for sequential state estimation in nonlinear, non-Gaussian settings. This algorithm represented the posterior density by a set of weighted particles propagated through prediction and update steps, offering a practical heuristic for tracking targets in radar signals. Building on this, Isard and Blake's 1996 condensation algorithm applied particle propagation specifically to visual tracking, using conditional density estimation to handle occlusions and clutter in video sequences. These developments marked informal precursors that paved the way for rigorous probabilistic formulations in subsequent decades.
Mathematical and probabilistic foundations
The mathematical and probabilistic foundations of particle filters trace back to seminal developments in stochastic processes and measure theory, providing the rigorous framework for approximating complex filtering distributions through interacting particle systems. A key cornerstone is the Feynman-Kac formula, introduced by Mark Kac in 1947, which establishes a probabilistic representation for solutions to certain parabolic partial differential equations via expectations over Brownian motion paths. This formula laid the groundwork for path integral methods in quantum mechanics and probability, later extended to sequential Monte Carlo contexts. In 1996, Pierre Del Moral applied Feynman-Kac formulae to nonlinear filtering problems, demonstrating how branching and interacting particle systems could numerically approximate these solutions in state estimation tasks. Further theoretical support emerged from the study of interacting particle systems, pioneered by Henry P. McKean in 1966, who analyzed a class of Markov processes linked to nonlinear parabolic equations and established their mean-field limits. McKean's work introduced the concept of propagation of chaos, wherein the empirical distribution of a large number of weakly interacting particles converges to a deterministic nonlinear evolution, providing a limiting regime essential for justifying particle approximations in high-dimensional spaces. These mean-field limits ensure that particle systems behave predictably as the number of particles increases, bridging microscopic interactions to macroscopic probabilistic descriptions. Pivotal contributions included K. R. Parthasarathy's 1967 work on probability measures on metric spaces, which formalized the dynamics relevant to population evolutions under stochastic branching and selection, influencing later particle models for measure approximations. Building on this, in the 1990s, Pierre Del Moral and Alice Guionnet developed genetic-type algorithms within interacting particle frameworks, proving stability and convergence properties for these systems in filtering applications through large deviation principles. Their analyses highlighted how selection and mutation mechanisms in particle populations mimic evolutionary processes to maintain diversity and accuracy in approximations. A further milestone came in 1997 with Dan Crisan's work on measure-valued processes, which connected nonlinear filtering equations to superprocesses and demonstrated the convergence of interacting particle systems to these measures, solidifying the probabilistic backbone for practical implementations. This development, emerging from earlier heuristic approaches in the 1970s and 1980s, provided central limit theorems and consistency results that validated particle methods as unbiased estimators for filtering posteriors.
The Filtering Problem
Bayesian estimation objectives
The primary objective of Bayesian estimation in filtering problems is to compute the posterior distribution $p(x_t \mid y_{1:t})$, which represents the conditional probability density of the hidden state $x_t$ at time $t$ given the sequence of observations $y_{1:t} = (y_1, \ldots, y_t)$ up to that time.[5][6] This distribution encapsulates all available information about the state, enabling the derivation of optimal estimates such as the mean or mode for decision-making or inference tasks.[5] In the context of sequential data processing, this posterior serves as the foundation for characterizing uncertainty in dynamic systems.[6] Bayesian filtering distinguishes between three main estimation tasks based on the availability and use of observations. Filtering refers to the online estimation of the current state via $p(x_t \mid y_{1:t})$, updating recursively as new data arrives.[5][6] Smoothing, in contrast, is an offline process that refines estimates of past states $x_k$ (for $k < T$) using all observations up to a final time $T$, yielding $p(x_k \mid y_{1:T})$ with reduced uncertainty due to future data.[5][6] Prediction extends the framework to forecast future states, computing distributions like $p(x_{t+\ell} \mid y_{1:t})$ for $\ell \geq 1$ by propagating the current posterior forward.[5] The recursive structure of Bayesian filtering relies on Bayes' theorem to update the posterior sequentially. Specifically,

$$p(x_t \mid y_{1:t}) = \frac{p(y_t \mid x_t) \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, dx_{t-1}}{p(y_t \mid y_{1:t-1})},$$

where the integral performs prediction by marginalizing over the previous state, and the likelihood $p(y_t \mid x_t)$ incorporates the new observation.[5][6] This formulation alternates between a prediction step, which propagates uncertainty through the system dynamics, and an update step, which corrects the estimate based on the measurement.[5] In nonlinear and non-Gaussian settings, the multidimensional integrals in this recursion lack closed-form analytical solutions, rendering exact computation intractable even with modern numerical methods.[5][6] Such intractability arises because the state transition and observation densities do not preserve simple forms like Gaussianity under integration, motivating the development of approximate inference techniques.[5]
State-space signal models
In state-space signal models, the underlying system is typically formulated as a hidden Markov model (HMM), where the state evolves according to a Markov process and observations are conditionally independent given the current state. The state transition is described by the equation $x_t = f(x_{t-1}, v_t)$, where $x_t$ is the hidden state at time $t$, $f$ is a possibly nonlinear transition function, and $v_t$ is process noise, often assumed to be independent and identically distributed (i.i.d.) with a known distribution such as Gaussian. Similarly, the observation model is given by $y_t = h(x_t, w_t)$, where $y_t$ is the observed signal, $h$ is a possibly nonlinear measurement function, and $w_t$ is observation noise, also typically i.i.d. and independent of the process noise, often Gaussian. These models capture the dynamic evolution of hidden states and their partial observability through noisy measurements, forming the foundation for inference in sequential data processing. A special case arises when both $f$ and $h$ are linear and the noises $v_t$ and $w_t$ are Gaussian, leading to a linear Gaussian state-space model. In this scenario, the optimal filtering solution can be computed exactly using the Kalman filter, which recursively updates the state estimate and its covariance based on predictions and measurement corrections. For general nonlinear and/or non-Gaussian cases, however, no closed-form solution exists, necessitating approximate methods to handle the intractable integrals involved in state estimation. The model is initialized with a prior distribution $p(x_0)$ over the initial state, which encodes available knowledge about the system's starting condition, such as a Gaussian centered at an expected value. The joint density of the state trajectory and observations up to time $t$ is then

$$p(x_{0:t}, y_{1:t}) = p(x_0) \prod_{k=1}^{t} p(x_k \mid x_{k-1})\, p(y_k \mid x_k),$$

reflecting the Markov property and conditional independence assumptions. These formulations provide the probabilistic structure that supports Bayesian estimation objectives by defining the likelihood and prior dynamics for posterior inference. Practical examples illustrate the versatility of these models. In target tracking, a constant-velocity model assumes the state (position and velocity) evolves as $x_t = \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix} x_{t-1} + v_t$, with linear observations of position, capturing smooth motion under Gaussian process noise.[7] In financial time series analysis, stochastic volatility models treat log-volatility as a hidden state following an autoregressive process, such as $x_t = \phi x_{t-1} + v_t$ with Gaussian $v_t$, and observations as returns $y_t = e^{x_t/2} w_t$ where $w_t \sim \mathcal{N}(0, 1)$, enabling the capture of time-varying risk without assuming constant variance.
Nonlinear filtering equations
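The following minimal sketch simulates the stochastic volatility model just described; the parameter values ($\phi = 0.95$, $\sigma_v = 0.25$) and the stationary initialization of the state are illustrative choices rather than values prescribed by the sources.

```python
import numpy as np

def simulate_sv(T, phi=0.95, sigma_v=0.25, seed=None):
    """Simulate the stochastic-volatility state-space model:
    hidden log-volatility  x_t = phi * x_{t-1} + v_t,  v_t ~ N(0, sigma_v^2)
    observed return        y_t = exp(x_t / 2) * w_t,   w_t ~ N(0, 1)
    """
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    y = np.empty(T)
    # initialize from the stationary distribution of the AR(1) state
    x_prev = rng.normal(0.0, sigma_v / np.sqrt(1.0 - phi**2))
    for t in range(T):
        x[t] = phi * x_prev + rng.normal(0.0, sigma_v)
        y[t] = np.exp(x[t] / 2.0) * rng.normal(0.0, 1.0)
        x_prev = x[t]
    return x, y

states, returns = simulate_sv(500, seed=0)
```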
In the context of state-space models, nonlinear filtering addresses the problem of estimating the posterior distribution of a hidden state sequence given a sequence of observations, relying on recursive Bayesian updates.[6] The exact Bayesian filtering recursions consist of a prediction step followed by an update step. In the prediction step, the prior distribution of the state at time $t$ given observations up to time $t-1$ is obtained by marginalizing over the previous state:

$$p(x_t \mid y_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, dx_{t-1},$$

which follows from the Chapman-Kolmogorov equation for the evolution of marginal distributions in Markov processes.[8][6] The update step then incorporates the new observation via Bayes' theorem to yield the posterior:

$$p(x_t \mid y_{1:t}) = \frac{p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})}{p(y_t \mid y_{1:t-1})},$$

where the normalizing constant, or likelihood of the observation, is $p(y_t \mid y_{1:t-1}) = \int p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})\, dx_t$.[8][6] These recursions provide the optimal solution for nonlinear, non-Gaussian filtering problems under the Bayesian framework but are generally intractable to compute analytically, as the required integrals lack closed-form expressions except in special cases like linear Gaussian models.[6] The intractability intensifies with the curse of dimensionality: in high-dimensional state spaces, the volume of the integration domain grows exponentially, rendering exact evaluation computationally prohibitive and necessitating approximate numerical methods such as particle filters.[8][6]
Feynman-Kac probabilistic formulations
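To make the recursions concrete, the sketch below evaluates them numerically on a fixed one-dimensional grid; the function names and the Gaussian random-walk example are illustrative assumptions, and grids of this kind grow exponentially with the state dimension, which is precisely the intractability that motivates particle filters.

```python
import numpy as np

def grid_bayes_filter(y_seq, grid, trans_pdf, lik_pdf, prior_pdf):
    """Fixed-grid approximation of the exact Bayes recursions in 1-D.
    trans_pdf(x_new, x_old) -> p(x_t | x_{t-1}); lik_pdf(y, x) -> p(y_t | x_t);
    prior_pdf(x) -> p(x_0). All must accept NumPy arrays."""
    dx = grid[1] - grid[0]
    post = prior_pdf(grid)
    post /= post.sum() * dx
    K = trans_pdf(grid[:, None], grid[None, :])  # K[i, j] = p(grid[i] | grid[j])
    means = []
    for y in y_seq:
        pred = K @ post * dx            # prediction: Chapman-Kolmogorov integral
        post = lik_pdf(y, grid) * pred  # update: multiply by the likelihood
        post /= post.sum() * dx         # normalize; divides out p(y_t | y_1:t-1)
        means.append((grid * post).sum() * dx)
    return np.array(means)

# Example: Gaussian random-walk state with Gaussian observation noise.
gauss = lambda z, s: np.exp(-0.5 * (z / s) ** 2) / (s * np.sqrt(2 * np.pi))
grid = np.linspace(-10.0, 10.0, 401)
means = grid_bayes_filter(
    np.array([0.3, -0.1, 0.5]), grid,
    trans_pdf=lambda xn, xo: gauss(xn - xo, 1.0),
    lik_pdf=lambda y, x: gauss(y - x, 0.5),
    prior_pdf=lambda x: gauss(x, 2.0),
)
```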
The Feynman-Kac formula establishes a connection between solutions of parabolic partial differential equations (PDEs) and expectations involving diffusion processes. For a diffusion process $(X_s)_{s \geq 0}$ starting from $X_0 = x$ and governed by an Itô stochastic differential equation $dX_s = b(X_s)\, ds + \sigma(X_s)\, dW_s$, where $W$ is a Brownian motion, the formula states that the expectation

$$\mathbb{E}\left[ f(X_T) \exp\left( -\int_0^T V(X_s)\, ds \right) \right]$$

equals $u(0, x)$, the value at initial time and position of the solution to the PDE $\partial_t u + L u - V u = 0$ with terminal condition $u(T, \cdot) = f$, where $L$ is the infinitesimal generator of the diffusion and $V$ a potential function. In nonlinear filtering problems, the Feynman-Kac framework recasts the posterior distribution of the hidden state given observations as a normalized expectation under a change of probability measure. The unnormalized posterior measure at time $n$ is represented as

$$\gamma_n(\varphi) = \mathbb{E}\left[ \varphi(X_n) \prod_{k=1}^{n} G_k(X_k) \right]$$

for test functions $\varphi$, where the measure change incorporates the transition dynamics of the signal and the potentials $G_k(x) = p(y_k \mid x)$ corresponding to the observation likelihoods, with $y_{1:n}$ the observed data sequence.[9] This semigroup property allows the filtering recursion to be viewed as the evolution of expectations under the Feynman-Kac flow, providing a measure-theoretic foundation for sequential Monte Carlo approximations. Branching particle representations offer a genealogical interpretation of these expectations, modeling the diffusion paths as particle lineages in a branching process. In this setup, each particle simulates a potential state trajectory, with branching events (births and deaths) governed by the potentials $G_k$: particles "reproduce" with rates proportional to $G_k$ to amplify likely paths and may be pruned otherwise, ensuring the empirical measure of surviving particles approximates the target posterior.[10] This mechanism naturally handles the multiplicative structure of the expectations in the Feynman-Kac formula, facilitating unbiased approximations in high-dimensional or nonlinear settings. Pierre Del Moral's 2004 monograph serves as the foundational reference for integrating Feynman-Kac formulae with interacting and genealogical particle systems, particularly in filtering applications.
Core Principles of Particle Filters
Monte Carlo simulation basics
Monte Carlo methods provide a computational framework for approximating expectations and integrals involving complex probability distributions by leveraging random sampling. These techniques rely on the law of large numbers, which ensures that the sample average of a function evaluated at independent draws from a target distribution converges almost surely to the true expectation as the number of samples increases. This empirical approximation is particularly valuable in high-dimensional or intractable settings where analytical solutions are unavailable. A fundamental representation in Monte Carlo simulation is the empirical probability measure, given by $\hat{p}_N(dx) = \frac{1}{N} \sum_{i=1}^{N} \delta_{x^{(i)}}(dx)$, where $x^{(1)}, \ldots, x^{(N)}$ are independent and identically distributed (i.i.d.) samples from the target $p$, and $\delta_x$ denotes the Dirac delta measure at $x$. For a general weighted case, this extends to $\hat{p}_N(dx) = \sum_{i=1}^{N} w^{(i)} \delta_{x^{(i)}}(dx)$, where the weights $w^{(i)}$ sum to 1, providing an unbiased estimator for integrals of the form $\int \varphi(x)\, p(x)\, dx$. As $N \to \infty$, this measure converges weakly to the target distribution with probability 1, enabling reliable approximations for large sample sizes. When direct sampling from the target distribution is difficult, importance sampling addresses this by drawing samples from a proposal distribution $q$ that is easier to sample from, and reweighting each sample by the importance weight $w(x) = p(x)/q(x)$. The weighted empirical measure then approximates expectations under $p$, yielding $\mathbb{E}_p[\varphi(X)] \approx \sum_{i=1}^{N} \tilde{w}^{(i)} \varphi(x^{(i)})$, where $\tilde{w}^{(i)} = w(x^{(i)}) / \sum_{j=1}^{N} w(x^{(j)})$ are the normalized weights. This method, originally developed in the context of neutron transport simulations, reduces computational burden by aligning samples with regions of high relevance to the target.[11] To assess the efficiency of such approximations, particularly in importance sampling, the effective sample size quantifies the reduction in variance due to non-uniform weights, defined as $N_{\mathrm{eff}} = 1 / \sum_{i=1}^{N} (\tilde{w}^{(i)})^2$. This metric, which ranges from 1 (complete degeneracy, where one weight dominates) to $N$ (uniform weights, equivalent to direct sampling), indicates how many i.i.d. samples from $p$ would yield the same variance as the weighted set. Variance reduction techniques, including careful choice of $q$ to minimize weight variability, aim to maximize $N_{\mathrm{eff}}$ and thus improve estimator precision.[12] The statistical error in Monte Carlo estimators follows from the central limit theorem, which establishes that the normalized error $\sqrt{N}\,(\hat{\varphi}_N - \mathbb{E}_p[\varphi(X)])$ converges in distribution to a normal random variable with mean 0 and variance $\operatorname{Var}_p[\varphi(X)]$ as $N \to \infty$, provided the variance is finite. Consequently, the root-mean-square error scales as $O(N^{-1/2})$, highlighting the parametric convergence rate that governs the reliability of approximations in practice. This asymptotic normality underpins confidence interval construction and error assessment in Monte Carlo simulations.
Sequential importance sampling mechanisms
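A minimal self-normalized importance sampling sketch follows; the truncated-Gaussian target and the hand-picked proposal $q = \mathcal{N}(2.5, 0.7^2)$ are illustrative choices intended only to show the weight computation and the effective sample size diagnostic.

```python
import numpy as np

def normal_logpdf(x, mean=0.0, std=1.0):
    return -0.5 * np.log(2 * np.pi * std**2) - 0.5 * ((x - mean) / std) ** 2

rng = np.random.default_rng(0)
N = 10_000

# Target p: standard normal truncated to x > 2 (true mean ~2.373);
# proposal q: N(2.5, 0.7^2), chosen to cover the target's support.
x = rng.normal(2.5, 0.7, size=N)                      # draws from proposal q
log_w = np.where(x > 2.0, normal_logpdf(x), -np.inf)  # unnormalized target density
log_w -= normal_logpdf(x, 2.5, 0.7)                   # divide by proposal density
w = np.exp(log_w - log_w.max())                       # stabilize before normalizing
w_tilde = w / w.sum()                                 # self-normalized weights

estimate = np.sum(w_tilde * x)                        # estimate of E_p[X]
ess = 1.0 / np.sum(w_tilde**2)                        # effective sample size
print(f"E_p[X] ~= {estimate:.3f}, N_eff = {ess:.0f} of {N}")
```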
Sequential importance sampling (SIS) extends Monte Carlo simulation principles to sequentially approximate the posterior distributions in dynamic systems, where particles representing possible states are propagated and reweighted over time as new observations arrive. In particle filters, SIS targets the filtering distribution $p(x_t \mid y_{1:t})$ by drawing particles from an importance proposal distribution and adjusting their weights to account for the discrepancy between the proposal and the true posterior. This mechanism ensures that the weighted particle cloud remains an unbiased estimator of the evolving posterior, though it requires careful design to maintain estimation accuracy.[13] The core of SIS lies in the incremental update of particle weights at each time step $t$. Given a set of particles $\{x_{t-1}^{(i)}, \tilde{w}_{t-1}^{(i)}\}_{i=1}^{N}$ approximating $p(x_{t-1} \mid y_{1:t-1})$, new states are sampled from a proposal $x_t^{(i)} \sim q(x_t \mid x_{t-1}^{(i)}, y_t)$, and the unnormalized weights are updated as

$$w_t^{(i)} = \tilde{w}_{t-1}^{(i)} \, \frac{p(y_t \mid x_t^{(i)})\, p(x_t^{(i)} \mid x_{t-1}^{(i)})}{q(x_t^{(i)} \mid x_{t-1}^{(i)}, y_t)}.$$

The weights are then normalized such that $\sum_{i=1}^{N} \tilde{w}_t^{(i)} = 1$, yielding the approximation $\hat{p}(x_t \mid y_{1:t}) = \sum_{i=1}^{N} \tilde{w}_t^{(i)} \delta_{x_t^{(i)}}$. This recursive formulation leverages the Markov structure of state-space models, allowing efficient propagation without storing the full particle paths if not needed for smoothing.[13][14] The choice of proposal distribution critically affects the efficiency of SIS, as it determines the variance of the weight increments. The bootstrap proposal, $q(x_t \mid x_{t-1}, y_t) = p(x_t \mid x_{t-1})$, is the simplest, directly sampling from the transition density and simplifying the weight update to $w_t^{(i)} = \tilde{w}_{t-1}^{(i)}\, p(y_t \mid x_t^{(i)})$; it requires no conditioning on the current observation but can lead to high weight variance in low-signal scenarios.[13] In contrast, the optimal proposal minimizes this variance and is given by $q^{\ast}(x_t \mid x_{t-1}, y_t) = p(x_t \mid x_{t-1}, y_t)$, resulting in weights $w_t^{(i)} = \tilde{w}_{t-1}^{(i)}\, p(y_t \mid x_{t-1}^{(i)})$; however, computing $p(y_t \mid x_{t-1})$ often demands intractable integrals, making it impractical without approximations.[13][14] A key limitation of SIS is particle impoverishment, arising from the growth in weight variance over time, which causes degeneracy: after several steps, most particles receive near-zero weights, concentrating the approximation on a few dominant particles and reducing the effective sample size. This variance growth is theoretically bounded below by a factor involving the likelihood, leading to exponential degeneracy in high-dimensional or informative observation settings; the effective sample size $N_{\mathrm{eff}} = 1 / \sum_{i} (\tilde{w}_t^{(i)})^2$ serves as a diagnostic, and dropping below a threshold (e.g., $N/2$) signals severe impoverishment.[13] The SIS algorithm can be outlined in pseudocode as follows (a Python sketch appears after the pseudocode):

Initialization (at $t = 0$):
- For $i = 1$ to $N$: sample $x_0^{(i)} \sim p(x_0)$ and set $\tilde{w}_0^{(i)} = 1/N$.
For each time step $t \geq 1$:
- For $i = 1$ to $N$:
- Sample $x_t^{(i)} \sim q(x_t \mid x_{t-1}^{(i)}, y_t)$.
- Compute the unnormalized weight $w_t^{(i)} = \tilde{w}_{t-1}^{(i)}\, p(y_t \mid x_t^{(i)})\, p(x_t^{(i)} \mid x_{t-1}^{(i)}) / q(x_t^{(i)} \mid x_{t-1}^{(i)}, y_t)$.
- Normalize: $\tilde{w}_t^{(i)} = w_t^{(i)} / \sum_{j=1}^{N} w_t^{(j)}$.
- Output the weighted particles $\{x_t^{(i)}, \tilde{w}_t^{(i)}\}_{i=1}^{N}$ approximating $p(x_t \mid y_{1:t})$.
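The sketch below implements SIS with the bootstrap proposal for the stochastic-volatility model simulated earlier; it is an illustrative implementation rather than code from the cited references, and it reports the effective sample size at each step, whose steady collapse is the degeneracy described above.

```python
import numpy as np

def sis_filter(y_seq, N=1000, phi=0.95, sigma_v=0.25, seed=None):
    """SIS (no resampling) with the bootstrap proposal q = p(x_t | x_{t-1})
    for the stochastic-volatility model; returns filtered means and ESS."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_v / np.sqrt(1 - phi**2), size=N)  # x_0 ~ p(x_0)
    log_w = np.zeros(N)                                  # uniform initial weights
    means, ess = [], []
    for y in y_seq:
        x = phi * x + rng.normal(0.0, sigma_v, size=N)   # sample from transition
        # bootstrap weight update: multiply by p(y_t | x_t), y_t ~ N(0, exp(x_t))
        log_w += -0.5 * (np.log(2 * np.pi) + x + y**2 * np.exp(-x))
        w = np.exp(log_w - log_w.max())
        w_tilde = w / w.sum()
        means.append(np.sum(w_tilde * x))
        ess.append(1.0 / np.sum(w_tilde**2))             # degeneracy diagnostic
    return np.array(means), np.array(ess)
```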
Resampling techniques for particle rejuvenation
Resampling techniques address the degeneracy problem that arises in sequential importance sampling, where particle weights become highly uneven, leading to most particles contributing negligibly to estimates. By selectively replicating high-weight particles and discarding low-weight ones, resampling rejuvenates the particle system while maintaining an unbiased approximation of the posterior distribution. The simplest and most commonly used method is multinomial resampling, which independently draws $N$ new particle indices with replacement, where each original particle $i$ is selected with probability proportional to its normalized weight $\tilde{w}^{(i)}$. This approach, integral to the original bootstrap particle filter, is straightforward to implement but can introduce higher variance due to the randomness in selections, potentially leading to particle impoverishment in high dimensions. To mitigate this variance, systematic resampling employs a deterministic spacing mechanism. First, the cumulative weights $c_i = \sum_{j=1}^{i} \tilde{w}^{(j)}$ for $i = 1, \ldots, N$ are computed. Then, $N$ equally spaced points $u_i = \frac{(i-1) + u}{N}$ for $i = 1, \ldots, N$ are generated from a single uniform draw $u \sim U[0, 1)$, and each point $u_i$ is assigned to the smallest index $j$ such that $c_j \geq u_i$. This stratified procedure ensures better particle diversity and lower resampling variance compared to multinomial methods, making it widely adopted for its computational efficiency and ease of parallelization.[15] Residual resampling further reduces variance by combining deterministic and stochastic elements. For each particle $i$, $\lfloor N \tilde{w}^{(i)} \rfloor$ copies are assigned deterministically, accounting for the integer part of the expected multiplicity. The remaining $R = N - \sum_{i} \lfloor N \tilde{w}^{(i)} \rfloor$ particles are then drawn via multinomial resampling using the residual weights $\tilde{w}_r^{(i)} \propto N \tilde{w}^{(i)} - \lfloor N \tilde{w}^{(i)} \rfloor$. Introduced as an improvement over pure multinomial schemes, this method preserves more unique particles and exhibits lower variance, particularly when weights are uneven, though it requires slightly more computation for the residual calculation.[16] Resampling is not performed at every step to avoid unnecessary computational overhead and excessive particle depletion; instead, it is triggered based on a degeneracy measure such as the effective sample size (ESS), defined as $N_{\mathrm{eff}} = 1 / \sum_{i=1}^{N} (\tilde{w}^{(i)})^2$. A common criterion is to resample when $N_{\mathrm{eff}} < N/2$, as this threshold balances estimation accuracy and diversity while preventing severe weight collapse, with stratification in methods like systematic and residual resampling helping to further minimize variance in the resampled weights.[14]
Sequential Importance Resampling Algorithm
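The three schemes can be sketched as follows; these NumPy implementations are illustrative, and the input weights are assumed to be already normalized.

```python
import numpy as np

def multinomial_resample(weights, rng):
    """Draw N indices i.i.d. with probabilities equal to the weights."""
    N = len(weights)
    return rng.choice(N, size=N, p=weights)

def systematic_resample(weights, rng):
    """Stride through the weight CDF from a single uniform offset u ~ U[0, 1)."""
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    cum = np.cumsum(weights)
    cum[-1] = 1.0  # guard against floating-point rounding
    return np.searchsorted(cum, positions)

def residual_resample(weights, rng):
    """floor(N * w_i) deterministic copies; remainder drawn multinomially."""
    N = len(weights)
    counts = np.floor(N * weights).astype(int)
    idx = np.repeat(np.arange(N), counts)
    R = N - counts.sum()
    if R > 0:
        residual = N * weights - counts       # fractional parts
        residual /= residual.sum()
        idx = np.concatenate([idx, rng.choice(N, size=R, p=residual)])
    return idx
```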
Bootstrap filter implementation
The bootstrap filter, introduced by Gordon, Salmond, and Smith in 1993, represents a foundational implementation of the sequential importance resampling (SIR) framework for particle filtering, specifically employing the state transition prior as the proposal distribution.[17] This approach approximates the posterior distribution by propagating a set of weighted particles through the system dynamics and updating their weights based on the observation likelihood, followed by resampling to mitigate weight degeneracy.[17] The algorithm proceeds in three main stages at each time step $t$. First, each particle $x_{t-1}^{(i)}$ from the previous step is propagated forward by sampling a new state $\tilde{x}_t^{(i)} \sim p(x_t \mid x_{t-1}^{(i)})$, drawing directly from the transition density without incorporating the current observation.[17] Second, unnormalized importance weights are computed as $w_t^{(i)} = p(y_t \mid \tilde{x}_t^{(i)})$, reflecting the likelihood of the observation given the predicted state, and then normalized such that $\sum_{i=1}^{N} \tilde{w}_t^{(i)} = 1$.[17] Third, if weight degeneracy is detected (typically via a low effective sample size $N_{\mathrm{eff}} < \theta N$ for some threshold $\theta$, e.g., $\theta = 0.5$), resampling is performed by drawing $N$ new particles from the set with replacement, using probabilities proportional to $\tilde{w}_t^{(i)}$, and resetting all weights to $1/N$.[18] A pseudocode outline for the bootstrap filter initialization and iteration is as follows:

Initialization (t=1):
- For i = 1 to N:
- Sample x_1^{(i)} ~ p(x_1)
- Set w_1^{(i)} = p(y_1 | x_1^{(i)})
- Normalize weights: \tilde{w}_1^{(i)} = w_1^{(i)} / \sum_j w_1^{(j)}
Iteration (t >= 2):
- For i = 1 to N:
- Sample \tilde{x}_t^{(i)} ~ p(x_t | x_{t-1}^{(i)})
- Set w_t^{(i)} = p(y_t | \tilde{x}_t^{(i)})
- Normalize weights: \tilde{w}_t^{(i)} = w_t^{(i)} / \sum_j w_t^{(j)}
- If N_eff < \theta N:
- Resample: Draw x_t^{(i)} from \{\tilde{x}_t^{(j)}\} with prob. \tilde{w}_t^{(j)}
- Set \tilde{w}_t^{(i)} = 1/N for all i
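Putting the pieces together, the following sketch assembles a bootstrap filter for the stochastic-volatility model used in the earlier examples. It is a minimal illustration: systematic resampling is used here for its lower variance, whereas the original formulation relied on multinomial draws, and the model and threshold are assumptions carried over from the previous sketches.

```python
import numpy as np

def bootstrap_filter(y_seq, N=1000, phi=0.95, sigma_v=0.25, theta=0.5, seed=None):
    """Bootstrap particle filter (SIR with the transition prior as proposal)
    for the stochastic-volatility model, with adaptive systematic resampling."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_v / np.sqrt(1 - phi**2), size=N)   # x_0 ~ p(x_0)
    w_tilde = np.full(N, 1.0 / N)
    means = []
    for y in y_seq:
        x = phi * x + rng.normal(0.0, sigma_v, size=N)           # propagate
        log_lik = -0.5 * (np.log(2 * np.pi) + x + y**2 * np.exp(-x))
        w = w_tilde * np.exp(log_lik - log_lik.max())            # reweight
        w_tilde = w / w.sum()
        means.append(np.sum(w_tilde * x))
        if 1.0 / np.sum(w_tilde**2) < theta * N:                 # N_eff check
            cum = np.cumsum(w_tilde)
            cum[-1] = 1.0                                        # rounding guard
            positions = (rng.random() + np.arange(N)) / N        # systematic
            x = x[np.searchsorted(cum, positions)]
            w_tilde = np.full(N, 1.0 / N)                        # reset weights
    return np.array(means)

# e.g. with the series from simulate_sv above: means = bootstrap_filter(returns)
```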
Direct algorithmic steps
The sequential importance resampling (SIR) algorithm provides a Monte Carlo-based framework for approximating the posterior distribution in nonlinear, non-Gaussian state-space models through iterative particle propagation and weight adjustment. In its generic form, the procedure begins with initialization and proceeds recursively over time steps, incorporating a proposal distribution for sampling, importance weighting to correct for proposal-target mismatch, normalization, and resampling to mitigate weight degeneracy. The bootstrap filter represents a special case where the proposal distribution matches the state transition kernel. The full SIR procedure is outlined as follows (a short snippet for the estimation step appears after the list):
- Initialization (at time $t = 0$): Draw $N$ particles $x_0^{(i)}$ independently from the initial state distribution $p(x_0)$, and assign uniform weights $\tilde{w}_0^{(i)} = 1/N$ to each particle.
- For each time step $t = 1, 2, \ldots$:
- Sampling: For each particle $i = 1, \ldots, N$, draw a new state sample $x_t^{(i)} \sim q(x_t \mid x_{t-1}^{(i)}, y_t)$, where $q$ is the chosen proposal distribution (e.g., the transition kernel $p(x_t \mid x_{t-1})$ in the bootstrap case).
- Weight computation: Compute the unnormalized importance weights as $w_t^{(i)} = \tilde{w}_{t-1}^{(i)}\, p(y_t \mid x_t^{(i)})\, p(x_t^{(i)} \mid x_{t-1}^{(i)}) / q(x_t^{(i)} \mid x_{t-1}^{(i)}, y_t)$.
- Normalization: Normalize the weights to obtain $\tilde{w}_t^{(i)} = w_t^{(i)} / \sum_{j=1}^{N} w_t^{(j)}$. The normalizing constant provides an approximation to the predictive likelihood: $\hat{p}(y_t \mid y_{1:t-1}) = \sum_{i=1}^{N} w_t^{(i)}$.
- Resampling: Draw $N$ indices $a^{(1)}, \ldots, a^{(N)}$ with replacement from $\{1, \ldots, N\}$ according to the discrete distribution given by $\{\tilde{w}_t^{(j)}\}$, and set the resampled particles as $x_t^{(i)} \leftarrow x_t^{(a^{(i)})}$. Assign equal weights $\tilde{w}_t^{(i)} = 1/N$ to all resampled particles, effectively resetting them to uniform after this step. Multinomial resampling is commonly used for its simplicity and unbiasedness.
- State estimation: Approximate the filtered state posterior using the weighted particles, for example via the posterior mean $\hat{x}_t = \sum_{i=1}^{N} \tilde{w}_t^{(i)} x_t^{(i)}$ or a maximum a posteriori estimate.
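For the final estimation step, a small helper can compute the weighted summaries from one step's particle set; the function name and the MAP proxy (the highest-weight particle) are illustrative conveniences rather than part of the standard algorithm statement.

```python
import numpy as np

def sir_estimates(particles, weights):
    """Posterior summaries from one SIR step.
    particles: shape (N,) or (N, d); weights: normalized, shape (N,)."""
    mean = np.average(particles, weights=weights, axis=0)  # posterior mean
    map_proxy = particles[np.argmax(weights)]              # crude MAP estimate
    return mean, map_proxy

# The predictive-likelihood byproducts accumulate across steps:
#   log p_hat(y_{1:T}) = sum_t log( sum_i w_t^{(i)} )   (unnormalized weights)
```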
