Bayesian Filters

This module contains Bayesian filters.

All classes from this module are imported into the top-level pybayes module, so instead of from pybayes.filters import KalmanFilter you can simply write from pybayes import KalmanFilter.

Filter prototype

class pybayes.filters.Filter[source]

Abstract prototype of a Bayesian filter.

bayes(yt, cond=None)[source]

Perform approximate or exact Bayes rule.

Parameters:
  • yt (1D numpy.ndarray) – observation at time t
  • cond (1D numpy.ndarray) – condition at time t. Exact meaning is defined by each filter
Returns:

always returns True (see posterior() to get posterior density)

posterior()[source]

Return posterior probability density function (CPdf).

Returns:posterior density
Return type:Pdf

Filter implementations may decide to return a reference to their internal working pdf; it is not safe to modify it in any way, as doing so may leave the filter in an undefined state.

evidence_log(yt)[source]

Return the logarithm of the evidence function (also known as the marginal likelihood) evaluated at point yt.

Parameters:yt (numpy.ndarray) – point at which to evaluate the evidence
Return type:double

This is typically computed after bayes() with the same observation:

>>> filter.bayes(yt)
>>> log_likelihood = filter.evidence_log(yt)

Kalman Filter

class pybayes.filters.KalmanFilter(A, B=None, C=None, D=None, Q=None, R=None, state_pdf=None)[source]

Implementation of the standard Kalman filter. cond in bayes() is interpreted as the control (intervention) input u_t to the system.

The Kalman filter forms the optimal Bayesian solution for the following system:

x_t &= A_t x_{t-1} + B_t u_t + v_{t-1} \quad \quad
    A_t \in \mathbb{R}^{n,n} \;\;
    B_t \in \mathbb{R}^{n,k} \;\;
    \;\; n \in \mathbb{N}
    \;\; k \in \mathbb{N}_0 \text{ (may be zero)}
    \\
y_t &= C_t x_t + D_t u_t + w_t \quad \quad \quad \;\;
    C_t \in \mathbb{R}^{j,n} \;\;
    D_t \in \mathbb{R}^{j,k} \;\;
    j \in \mathbb{N} \;\; j \leq n

where x_t \in \mathbb{R}^n is the hidden state vector, y_t \in \mathbb{R}^j is the observation vector and u_t \in \mathbb{R}^k is the control vector. v_t is normally distributed zero-mean process noise with covariance matrix Q_t, and w_t is normally distributed zero-mean observation noise with covariance matrix R_t. Additionally, the initial pdf (state_pdf) has to be Gaussian.
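For intuition, one Bayes step for this system corresponds to the standard Kalman predict and update equations. The following is a minimal NumPy sketch of that computation under the notation above (an illustration of the textbook equations, not of PyBayes internals; the function name is hypothetical):

```python
import numpy as np

def kalman_bayes(mu, P, yt, ut, A, B, C, D, Q, R):
    # predict: x_t = A x_{t-1} + B u_t + v_{t-1},  v ~ N(0, Q)
    mu = A @ mu + B @ ut
    P = A @ P @ A.T + Q
    # update with observation y_t = C x_t + D u_t + w_t,  w ~ N(0, R)
    S = C @ P @ C.T + R              # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)   # Kalman gain
    mu = mu + K @ (yt - C @ mu - D @ ut)
    P = P - K @ C @ P
    return mu, P                     # parameters of the Gaussian posterior

# scalar control-less system (the k = 0 case modelled with zero matrices)
A = np.array([[1.]]); B = np.zeros((1, 1))
C = np.array([[1.]]); D = np.zeros((1, 1))
Q = np.array([[0.7]]); R = np.array([[0.3]])
mu, P = np.zeros(1), np.array([[1.]])
mu, P = kalman_bayes(mu, P, np.array([0.5]), np.zeros(1), A, B, C, D, Q, R)
```

The posterior after each step stays Gaussian, which is why state_pdf is required to be Gaussian in the first place.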

__init__(A, B=None, C=None, D=None, Q=None, R=None, state_pdf=None)[source]

Initialise Kalman filter.

Parameters:
  • A (2D numpy.ndarray) – process model matrix A_t
  • B (2D numpy.ndarray) – (optional) control model matrix B_t
  • C (2D numpy.ndarray) – observation model matrix C_t
  • D (2D numpy.ndarray) – (optional) observation control model matrix D_t
  • Q (2D numpy.ndarray) – process noise covariance matrix Q_t
  • R (2D numpy.ndarray) – observation noise covariance matrix R_t
  • state_pdf (GaussPdf) – initial state pdf

All matrices can be time-varying - you can modify or replace any of the matrices stated above, provided that you don't change their shape and that all constraints still hold. On the other hand, you should not modify state_pdf unless you really know what you are doing.

>>> # initialise control-less Kalman filter:
>>> kf = pb.KalmanFilter(A=np.array([[1.]]),
                         C=np.array([[1.]]),
                         Q=np.array([[0.7]]), R=np.array([[0.3]]),
                         state_pdf=pb.GaussPdf(...))
bayes(yt, cond=None)[source]

Perform exact Bayes rule.

Parameters:
  • yt (1D numpy.ndarray) – observation at time t
  • cond (1D numpy.ndarray) – control (intervention) vector at time t. May be unspecified if the filter is control-less.
Returns:

always returns True (see posterior() to get posterior density)

Particle Filter Family

class pybayes.filters.ParticleFilter(n, init_pdf, p_xt_xtp, p_yt_xt, proposal=None)[source]

Standard particle filter (also known as the SIR filter, an SMC method) implementation with resampling and optional support for a proposal density.

The posterior pdf is represented using EmpPdf and takes the following form:

p(x_t|y_{1:t}) = \sum_{i=1}^n \omega_i \delta ( x_t - x_t^{(i)} )

__init__(n, init_pdf, p_xt_xtp, p_yt_xt, proposal=None)[source]

Initialise particle filter.

Parameters:
  • n (int) – number of particles
  • init_pdf (Pdf) – either an EmpPdf instance that will be used directly as the posterior (and should already have initial particles sampled) or any other probability density from which the initial particles are sampled
  • p_xt_xtp (CPdf) – p(x_t|x_{t-1}) cpdf of state in t given state in t-1
  • p_yt_xt (CPdf) – p(y_t|x_t) cpdf of observation in t given state in t
  • proposal (Filter) – (optional) a filter whose posterior will be used to sample particles in bayes() (and to correct their weights accordingly). More specifically, its bayes \left(y_t, x_{t-1}^{(i)}\right) method is called before sampling the i-th particle. Each call to bayes() should therefore reset any effects of the previous call.
bayes(yt, cond=None)[source]

Perform Bayes rule for new measurement y_t.

Parameters:cond (numpy.ndarray) – optional condition that is passed to p(x_t|x_{t-1}) after x_{t-1}, so that it can be rewritten as p(x_t|x_{t-1}, c)

The algorithm is as follows:

  1. generate new particles: x_t^{(i)} = \text{sample from }
p(x_t^{(i)}|x_{t-1}^{(i)}) \quad \forall i
  2. recompute weights: \omega_i = p(y_t|x_t^{(i)})
\omega_i \quad \forall i
  3. normalise weights
  4. resample particles
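The four steps above can be sketched in plain NumPy roughly as follows (a hedged illustration, not the PyBayes implementation; sample_p_xt_xtp and p_yt_xt here are hypothetical callables standing in for the p_xt_xtp and p_yt_xt cpdf objects):

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_bayes(particles, weights, yt, sample_p_xt_xtp, p_yt_xt):
    # 1. generate new particles from the process model p(x_t|x_{t-1})
    particles = np.array([sample_p_xt_xtp(x) for x in particles])
    # 2. recompute weights using the observation likelihood p(y_t|x_t)
    weights = weights * np.array([p_yt_xt(yt, x) for x in particles])
    # 3. normalise weights
    weights = weights / weights.sum()
    # 4. multinomial resampling back to uniform weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# toy 1D random-walk model with a Gaussian observation likelihood
sample_proc = lambda x: x + rng.normal(0.0, 0.1)
obs_lik = lambda y, x: np.exp(-0.5 * (y - x) ** 2 / 0.04)
parts = rng.normal(0.0, 1.0, size=500)
ws = np.full(500, 1.0 / 500)
parts, ws = pf_bayes(parts, ws, 0.3, sample_proc, obs_lik)
estimate = float(parts @ ws)  # posterior mean under the empirical pdf
```

With the optional proposal filter, step 1 samples from the proposal's posterior instead and step 2 additionally corrects the weights for the mismatch between proposal and process model.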
class pybayes.filters.MarginalizedParticleFilter(n, init_pdf, p_bt_btp, kalman_args, kalman_class=<class 'pybayes.filters.KalmanFilter'>)[source]

Simple marginalized particle filter implementation. Assume that the state vector x_t can be divided into two parts x_t = (a_t, b_t) and that the pdf representing the process model can be factorised as follows:

p(x_t|x_{t-1}) = p(a_t|a_{t-1}, b_t) p(b_t | b_{t-1})

and that the a_t part (given b_t) can be estimated with (a subclass of) the KalmanFilter. Such a system may be suitable for the marginalized particle filter, whose posterior pdf takes the form

p(x_t | y_{1:t}) &= \sum_{i=1}^n \omega_i p(a_t | y_{1:t}, b_{1:t}^{(i)}) \delta(b_t - b_t^{(i)}) \\
\text{where } \quad \quad
p(a_t | y_{1:t}, b_{1:t}^{(i)}) &\text{ is the posterior pdf of the i}^{th} \text{ Kalman filter} \\
b_t^{(i)} &\text{ is the value of the (b part of the) i}^{th} \text{ particle} \\
\omega_i \geq 0 &\text{ is the weight of the i}^{th} \text{ particle,} \quad \sum_i \omega_i = 1

Note: currently b_t is hard-coded to be process and observation noise covariance of the a_t part. This will be changed soon and b_t will be passed as condition to KalmanFilter.bayes().

__init__(n, init_pdf, p_bt_btp, kalman_args, kalman_class=<class 'pybayes.filters.KalmanFilter'>)[source]

Initialise marginalized particle filter.

Parameters:
  • n (int) – number of particles
  • init_pdf (Pdf) – probability density from which the initial particles (both the a_t and b_t parts) are sampled
  • p_bt_btp (CPdf) – p(b_t|b_{t-1}) cpdf of the (b part of the) state in t given state in t-1
  • kalman_args (dict) – arguments for the Kalman filter, passed as a dictionary; the state_pdf key should not be specified as it is supplied by the marginalized particle filter
  • kalman_class (class) – class of the filter used for the a_t part of the system; defaults to KalmanFilter
bayes(yt, cond=None)[source]

Perform Bayes rule for new measurement y_t. Uses the following algorithm:

  1. generate new b parts of particles: b_t^{(i)} = \text{sample from }
p(b_t^{(i)}|b_{t-1}^{(i)}) \quad \forall i
  2. \text{set } Q_i = b_t^{(i)} \quad R_i = b_t^{(i)}, where Q_i, R_i are the covariances of the process (respectively observation) noise in the i-th Kalman filter.
  3. perform Bayes rule for each Kalman filter using passed observation y_t
  4. recompute weights: \omega_i = p(y_t | y_{1:t-1}, b_t^{(i)}) \omega_i, where p(y_t | y_{1:t-1}, b_t^{(i)}) is the evidence (marginal likelihood) pdf of the i-th Kalman filter.
  5. normalise weights
  6. resample particles
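For intuition, the six steps can be sketched for a toy scalar system a_t = a_{t-1} + v_t, y_t = a_t + w_t with Q_i = R_i = b_t^{(i)} as in the note above (a hedged illustration, not the PyBayes implementation; the log-random-walk model for b_t and all names are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

def mpf_bayes(b, mu, P, w, yt):
    # 1. sample new b parts of particles (kept positive via a log random walk)
    b = b * np.exp(rng.normal(0.0, 0.1, size=b.shape))
    # 2.-3. scalar Kalman predict + update per particle, with Q_i = R_i = b_i
    P_pred = P + b                 # predict covariance (A = 1, Q_i = b_i)
    e = yt - mu                    # innovation (C = 1)
    S = P_pred + b                 # innovation covariance (R_i = b_i)
    K = P_pred / S                 # Kalman gain
    mu = mu + K * e
    P = P_pred - K * P_pred
    # 4. reweight by each filter's marginal likelihood N(e; 0, S)
    w = w * np.exp(-0.5 * e ** 2 / S) / np.sqrt(2.0 * np.pi * S)
    # 5. normalise weights
    w = w / w.sum()
    # 6. multinomial resampling back to uniform weights
    idx = rng.choice(len(w), size=len(w), p=w)
    n = len(w)
    return b[idx], mu[idx], P[idx], np.full(n, 1.0 / n)

n = 200
b = np.abs(rng.normal(1.0, 0.2, n))   # initial noise-covariance particles
mu = np.zeros(n); P = np.ones(n); w = np.full(n, 1.0 / n)
b, mu, P, w = mpf_bayes(b, mu, P, w, yt=0.8)
```

Each particle thus carries both a sampled noise covariance b_t^{(i)} and the Gaussian posterior (mu_i, P_i) of its attached Kalman filter; the weights are driven by the filters' marginal likelihoods, exactly as in step 4.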
