The goal of temporal filtering is essentially to estimate “optimally” the pdf of the hidden state of a temporally evolving stochastic process. (The case of producing a concrete state estimate is relatively simple once the pdf is estimated.) This page provides an overview of the very generic problem and gives a rough indication of how various assumptions lead to different filters (i.e., algorithms for “filtering” the signal from the noisy data).
This page primarily deals with filtering of discrete time probabilistic systems. For continuous time filtering see stochastic filter.
We first outline the basic model and some notation, before giving “mathematical” expressions for the solution. We then discuss how various modelling assumptions and computational constraints lead to particular filters.
Note: in full generality filtering considers a process into which control signals are fed (often based upon the previous state estimates) and incorporates their expected results into the estimation process. This is beyond the scope of this text.
Our basic model is that we are tracking a hidden state variable $x_t$ that evolves in discrete time, using a model of its observable consequences. (The unobservable state is generally referred to interchangeably as hidden or latent.) There are two components:
Probabilistic dynamics: The value of $x_t$ given $x_{t-1}$ is drawn from the dynamics distribution $p(x_t \mid x_{t-1})$. This distribution typically has a deterministic component (which may be either linear or nonlinear) and a random component. For a given problem this decomposition may or may not be explicitly known.
Data support: $d_t(x) = p(y_t \mid x_t = x)$, i.e., the support that the data observed at timestep $t$ lends to each candidate state $x$. In the most general formulation $d_t$ can be any blackbox function, although sometimes it has a known structure (e.g., a Gaussian centred on a point). We call this data rather than measurements to highlight that it is not restricted to be a single “instrument reading”. (Note that often $d_t$ needs to be calculated from the raw sensor outputs using Bayes’ Theorem, but we assume this has already been done.)
Note that the process model is often specified in terms of more explicit equations, but these are not general enough to highlight the similarity between the various filters.
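For instance (an illustrative example, not part of the general formulation above), a one-dimensional linear-Gaussian tracking model might be written explicitly as

$$x_t = a\,x_{t-1} + w_t, \quad w_t \sim \mathcal{N}(0, q), \qquad y_t = x_t + v_t, \quad v_t \sim \mathcal{N}(0, r),$$

which corresponds to the dynamics distribution $p(x_t \mid x_{t-1}) = \mathcal{N}(x_t;\, a\,x_{t-1},\, q)$ and the data support $d_t(x) = \mathcal{N}(y_t;\, x,\, r)$.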
There is some common notation that will make it easier to see what the equations denote:
- $x_t$ denotes the hidden state at timestep $t$, taking values in the state space $X$;
- $y_t$ denotes the data at timestep $t$, and $y_{1:t} = (y_1, \dots, y_t)$ all data up to and including timestep $t$;
- $p(x_t \mid y_{1:t-1})$ is the predicted state pdf before incorporating the data at timestep $t$, and $p(x_t \mid y_{1:t})$ the filtered state pdf after incorporating it.
Note that the process model assumes the evolution is Markov, in the sense that $x_t$ is independent of the earlier states $x_1, \dots, x_{t-2}$ given $x_{t-1}$. (The standard trick of tupling past values into the state variable can be used to turn an $n$-th order Markov process into a first-order Markov process.) Thus the probabilistic dynamics can be represented by a set of distributions $p(x_t \mid x_{t-1})$ (possibly different for each $t$).
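As a brief illustration of the tupling trick for the second-order case (a standard construction, sketched here for concreteness): define the augmented state $\tilde{x}_t = (x_t, x_{t-1})$; then

$$p(\tilde{x}_t \mid \tilde{x}_{t-1}) = p\big((x_t, x_{t-1}) \mid (x_{t-1}, x_{t-2})\big) = p(x_t \mid x_{t-1}, x_{t-2}),$$

so a process that is second-order Markov in $x_t$ is first-order Markov in $\tilde{x}_t$.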
There are two consecutive output stages in filtering:
The first step is to estimate $p(x_t \mid y_{1:t-1})$, namely the current state’s pdf using only the inferences drawn from previous timesteps’ data along with knowledge of the dynamics. Simple probability leads to the commonly shown backward formulation:

$$p(x_t \mid y_{1:t-1}) = \sum_{x_{t-1} \in X} p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1}),$$

which can be seen as a convolution of $p(x_{t-1} \mid y_{1:t-1})$ with the “probabilistic dynamics” $p(x_t \mid x_{t-1})$. It will turn out to be clearer to consider the forward formulation:

$$\text{foreach } x_{t-1} \in X:\quad \text{foreach } x_t \in X:\quad p(x_t \mid y_{1:t-1}) \mathrel{+}= p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1}),$$

where += is borrowed from computer languages to denote accumulation, with each $p(x_t \mid y_{1:t-1})$ initialised to zero before the loops. (A series of assignments to a “variable” $v$, namely $v \mathrel{+}= a_1$, …, $v \mathrel{+}= a_n$, is equivalent to $v = a_1 + \dots + a_n$.) Providing the foreach loops range over the entirety of $X$ (and are interpreted appropriately when $X$ is a continuous space), this forward formulation is exactly correct.
These two formulations actually have a much more similar structure than it initially appears, because the backward formulation implicitly contains the same two loops: the sum is a loop over $x_{t-1}$, and the equation is evaluated separately for each $x_t$. The advantage of the forward formulation is that, when only a limited number of evaluations is computationally possible, having chosen an $x_{t-1}$ by sampling, a “good” value of $x_t$ – in the sense of making a large contribution to the probability mass of the new pdf – can be found simply by drawing an element from $p(x_t \mid x_{t-1})$. In contrast, it is not clear how to use the backward formulation to pick a “good” value of $x_t$ even with knowledge of $p(x_{t-1} \mid y_{1:t-1})$.
The second step in filtering is to compute $p(x_t \mid y_{1:t})$, namely the state pdf estimate using all data including that for the current timestep. This is much simpler:

$$p(x_t \mid y_{1:t}) \propto d_t(x_t)\, p(x_t \mid y_{1:t-1}),$$

where the constant of proportionality is independent of the state $x_t$ (it is essentially $1/p(y_t \mid y_{1:t-1})$).
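As a concrete sketch of these two steps (assuming, purely for illustration, a finite state space with made-up transition and data-support arrays; nothing here is prescribed by the text beyond the equations above):

```python
import numpy as np

# Toy finite state space with |X| = 3 states; the numbers are illustrative only.
p_prev = np.array([0.5, 0.3, 0.2])          # p(x_{t-1} | y_{1:t-1})
dyn = np.array([[0.8, 0.2, 0.0],            # dyn[i, j] = p(x_t = j | x_{t-1} = i)
                [0.1, 0.8, 0.1],
                [0.0, 0.2, 0.8]])
d = np.array([0.9, 0.05, 0.05])             # data support d_t(x) = p(y_t | x_t = x)

# Backward formulation: for each x_t, sum over where it could have come from.
p_pred_backward = np.array([sum(dyn[i, j] * p_prev[i] for i in range(3))
                            for j in range(3)])

# Forward formulation: push each x_{t-1}'s mass forward, accumulating with +=.
p_pred_forward = np.zeros(3)
for i in range(3):                          # foreach x_{t-1}
    for j in range(3):                      # foreach x_t
        p_pred_forward[j] += dyn[i, j] * p_prev[i]

assert np.allclose(p_pred_backward, p_pred_forward)

# Data update: multiply by the data support and renormalise;
# the normalisation is essentially the 1 / p(y_t | y_{1:t-1}) factor.
p_filt = d * p_pred_forward
p_filt /= p_filt.sum()
print(p_filt)
```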
Now consider how to approximate this forward formulation in the case where we have limited computational resources. Clearly it is in general impractical to perform all the “choices” implied by the foreach loops. There are two choices at extreme ends of the spectrum:
For state distributions that are not believed to be close to parametric forms, or probabilistic dynamics which have nonlinear deterministic parts or non-Gaussian stochastic components, one could choose a pdf representation as a set of particles, each consisting of a position $x^{(i)}$ and a weight $w^{(i)}$. A formal mathematical way of viewing a set of particles is as a sum of weighted delta functions $\sum_i w^{(i)} \delta(x - x^{(i)})$ on $X$, and as with this sum it is convenient to allow multiple particles at the same position, corresponding to the pdf having a value there equal to the sum of their weights. One big advantage of using particles is that the number of particles can be adjusted to trade off between computational effort and accuracy.
The particle representation clearly does not capture some characteristics of the pdfs such as continuity, but importantly gives good approximations to various statistics (e.g., mean, etc) of the pdf and can be used to generate random samples from the pdf.
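A small sketch of how statistics and random samples can be read off a weighted particle set (the positions and weights below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative weighted particle set (positions and normalised weights).
positions = np.array([0.1, 0.4, 0.5, 0.9])
weights = np.array([0.1, 0.2, 0.4, 0.3])

# Statistics of the represented pdf are weighted sums over the particles.
mean = np.sum(weights * positions)
var = np.sum(weights * (positions - mean) ** 2)

# Drawing a random sample from the represented pdf just means picking a
# particle with probability proportional to its weight.
samples = rng.choice(positions, size=5, p=weights)
print(mean, var, samples)
```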
This is very useful because, in its simplest form, we can simply approximate the forward formulation by direct random simulation: initially set the new particle set to empty and apply
1. sample a position $x_{t-1}$ from the current particle set representing $p(x_{t-1} \mid y_{1:t-1})$ (i.e., pick a particle with probability proportional to its weight);
2. draw a new position $x_t$ from the probabilistic dynamics $p(x_t \mid x_{t-1})$;
3. add a particle at $x_t$ with weight $d_t(x_t)$ to the new particle set
$N$ times, finally normalising the particle representation of $p(x_t \mid y_{1:t})$. A minor technicality is that the sampling in step 1 can be moved after step 3, so that the representation is actually maintained as a set of constant-weight particles containing many duplicates. It is important to note that this resampling, wherever it occurs, means that at each timestep computational effort is probabilistically concentrated on regions of the pdf according to their likelihood: this is very different from, and has much greater accuracy than, following an initial set of particles over multiple timesteps without resampling.
This procedure is very simple to implement and is the basic particle filter.
In terms of the forward formulation, this corresponds to replacing the foreach loop over $x_{t-1}$ with samplings from the particle set and replacing the foreach loop over $x_t$ with the choice of one sample drawn from the probabilistic dynamics for each particle.
It is also interesting that these steps just require the ability to sample from the probabilistic dynamics, not any form of analytic representation. Thus this algorithm is very widely applicable. However it is very inefficient if the problem has some known analytical structure that could be used to replace some of the sampling.
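A minimal sketch of one timestep of this basic particle filter; the `dynamics_sample` and `data_support` functions below are hypothetical stand-ins for the problem-specific model, not anything prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def dynamics_sample(x_prev):
    """Draw x_t ~ p(x_t | x_{t-1}); here an illustrative random walk."""
    return x_prev + rng.normal(0.0, 0.5)

def data_support(x, y):
    """d_t(x) = p(y_t | x_t = x); here an illustrative Gaussian observation."""
    return np.exp(-0.5 * ((y - x) / 0.3) ** 2)

def particle_filter_step(positions, weights, y, n_particles):
    """One timestep of the basic (bootstrap) particle filter."""
    new_positions = np.empty(n_particles)
    new_weights = np.empty(n_particles)
    for i in range(n_particles):
        # Step 1: sample a particle from the current set (by weight).
        x_prev = rng.choice(positions, p=weights)
        # Step 2: propagate it through the probabilistic dynamics.
        x_new = dynamics_sample(x_prev)
        # Step 3: weight the new particle by the data support.
        new_positions[i] = x_new
        new_weights[i] = data_support(x_new, y)
    # Normalise to obtain the particle representation of p(x_t | y_{1:t}).
    new_weights /= new_weights.sum()
    return new_positions, new_weights

# Usage: start from an initial particle cloud and filter one observation.
N = 500
pos = rng.normal(0.0, 1.0, size=N)
w = np.full(N, 1.0 / N)
pos, w = particle_filter_step(pos, w, y=0.7, n_particles=N)
print("posterior mean estimate:", np.sum(w * pos))
```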
TODO
We can take the representation of $p(x_t \mid y_{1:t-1})$ and $p(x_t \mid y_{1:t})$ to be some parametric probability distribution. Then we can look for restrictions on the dynamical model and data distribution which guarantee that the distributions for timestep $t$ will be in the representation whenever those for timestep $t-1$ are (reminiscent of the closedness condition for a group). This is very, very computationally efficient and is accurate when the actual process fits the model well.
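For instance (a standard closedness fact, included here as an illustration): if the representation is Gaussian and the dynamics are linear with additive Gaussian noise, the predicted distribution remains Gaussian,

$$x_{t-1} \mid y_{1:t-1} \sim \mathcal{N}(m, P), \quad x_t = A\,x_{t-1} + w_t,\ w_t \sim \mathcal{N}(0, Q) \;\Longrightarrow\; x_t \mid y_{1:t-1} \sim \mathcal{N}\big(A m,\; A P A^{\mathsf T} + Q\big),$$

and a Gaussian data support keeps the updated distribution Gaussian as well.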
The case where the representation and data distribution are Gaussian and the dynamics are linear plus Gaussian noise is optimally solved by the Kalman filter.
In terms of the forward formulation, this corresponds to replacing the foreach loop over $x_{t-1}$ with a single particle at the mean of the parametric Gaussian state distribution and replacing the foreach loop over $x_t$ with the deterministic linear part of the probabilistic dynamics. However, in this case the covariance of the distribution can also be propagated exactly analytically, so that a new Gaussian state distribution is obtained.
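A minimal sketch of the resulting predict/update step for the linear-Gaussian model $x_t = A x_{t-1} + w_t$, $y_t = H x_t + v_t$ (the constant-velocity matrices at the bottom are placeholders chosen only to make the example runnable):

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One Kalman filter timestep: predict with the linear dynamics, then
    update with the Gaussian data support."""
    # Predict: p(x_t | y_{1:t-1}) is Gaussian with this mean and covariance.
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update: multiply by the Gaussian data support and renormalise.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# Illustrative 2-D constant-velocity model observed in position only.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
m, P = np.zeros(2), np.eye(2)
m, P = kalman_step(m, P, y=np.array([0.8]), A=A, Q=Q, H=H, R=R)
print(m, P)
```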
The Kalman filter is only “exactly correct” in the case that the deterministic part of the dynamics is linear. The Extended Kalman Filter is the simplest extension to non-linear deterministic dynamics and is obtained by linearising the dynamics around the current mean (using Jacobian matrices) and applying the updates for a linear function. How successful this is depends on the degree of nonlinearity relative to the current covariance.
In terms of the forward formulation, this corresponds to replacing the foreach loop over $x_{t-1}$ with a single particle at the mean of the parametric Gaussian state distribution and replacing the foreach loop over $x_t$ with the deterministic part of the probabilistic dynamics. However, by linearising, the covariance of the distribution can also be propagated analytically to give an approximate new Gaussian state distribution.
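A minimal sketch of the EKF prediction for dynamics $x_t = f(x_{t-1}) + w_t$; the function `f` and its Jacobian below are illustrative choices, not from the text:

```python
import numpy as np

def f(x):
    """Illustrative nonlinear deterministic dynamics."""
    return np.array([x[0] + 0.1 * np.sin(x[1]), 0.9 * x[1]])

def f_jacobian(x):
    """Jacobian of f evaluated at x, used to linearise about the mean."""
    return np.array([[1.0, 0.1 * np.cos(x[1])],
                     [0.0, 0.9]])

def ekf_predict(m, P, Q):
    """EKF prediction: push the mean through f, propagate the covariance
    through the linearisation of f about the current mean."""
    F = f_jacobian(m)
    m_pred = f(m)                 # single "particle" at the mean
    P_pred = F @ P @ F.T + Q      # covariance via the linearised dynamics
    return m_pred, P_pred

m, P, Q = np.array([0.0, 1.0]), np.eye(2), 0.01 * np.eye(2)
print(ekf_predict(m, P, Q))
```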
In filtering one often thinks that the state distribution ought to be unimodal and well approximated by a Gaussian, but highly non-linear dynamics cause the Kalman filter to perform poorly. In this case one can attempt to propagate a Gaussian pdf through the non-linear function more accurately using the following procedure:
1. deterministically place a small set of “sigma points” symmetrically about the mean of the current Gaussian, so that their mean and covariance match those of the Gaussian;
2. pass each sigma point through the deterministic part of the dynamics;
3. fit a new Gaussian to the transformed points, adding the covariance of the random component of the dynamics.
This process is known as the Unscented Kalman Filter or the Sigma-point Kalman Filter and is generally much more accurate than the Extended Kalman Filter for nonlinear dynamics. (One can show that the symmetry causes certain terms in the deterministic update function’s Taylor expansion to cancel, providing some explanation.)
In terms of the forward formulation, this corresponds to replacing the foreach loop over $x_{t-1}$ with particles (the sigma points) placed deterministically from the parametric Gaussian state representation and replacing the foreach loop over $x_t$ with selection of the deterministic component of the probabilistic dynamics for each particle, finally converting the particles into a parametric Gaussian with the same mean and covariance.
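A minimal sketch of the sigma-point propagation, using the simplest symmetric scheme with equal weights (practical UKF implementations typically add scaling parameters, which are omitted here):

```python
import numpy as np

def sigma_points(m, P):
    """Symmetrically placed sigma points that exactly reproduce mean m and
    covariance P (simplest scheme: 2n points with equal weights)."""
    n = len(m)
    L = np.linalg.cholesky(n * P)                 # n*P = L @ L.T
    pts = np.vstack([m + L[:, i] for i in range(n)] +
                    [m - L[:, i] for i in range(n)])
    weights = np.full(2 * n, 1.0 / (2 * n))
    return pts, weights

def unscented_predict(m, P, f, Q):
    """Propagate a Gaussian through nonlinear dynamics f by transforming the
    sigma points and refitting a Gaussian to their mean and covariance."""
    pts, w = sigma_points(m, P)
    fx = np.array([f(p) for p in pts])            # deterministic dynamics only
    m_pred = w @ fx
    diff = fx - m_pred
    P_pred = diff.T @ (w[:, None] * diff) + Q     # add dynamics noise covariance
    return m_pred, P_pred

# Illustrative nonlinear dynamics, just to exercise the transform.
f = lambda x: np.array([x[0] + 0.1 * np.sin(x[1]), 0.9 * x[1]])
m, P, Q = np.array([0.0, 1.0]), np.eye(2), 0.01 * np.eye(2)
print(unscented_predict(m, P, f, Q))
```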
One can take this idea further by considering the Iterated Sigma-point Filter, which attempts to use a more sophisticated iterative Gaussian fitting technique to determine the new Gaussian from the transformed sigma-points.
In terms of the forward formulation, this corresponds to replacing the foreach loop over $x_{t-1}$ with particles placed deterministically from the parametric Gaussian state representation and replacing the foreach loop over $x_t$ with selection of the deterministic component of the probabilistic dynamics for each particle, finally converting the particles into a parametric Gaussian using an iterative fitting scheme.
Here’s a schematic illustration of the accuracy of various “transforms” for propagating the representation of a pdf through non-linear dynamics: