Kalman Filter

Recall that the key steps in implementing a filter are:
represent the probability distribution (analytically or using sampling)
model the motion (or state transition): $p(x_t \mid u_t, x_{t-1})$
model the measurement: $p(z_t \mid x_t)$
Kalman Filter
In the standard Kalman filter algorithm, the state transition is modeled as:

$$x_t = A_t x_{t-1} + B_t u_t + \epsilon_t$$

where $\epsilon_t \sim \mathcal{N}(0, R_t)$. The covariance matrix $R_t$ represents the uncertainty or noise of the motion. To obtain the analytic form of $p(x_t \mid u_t, x_{t-1})$, we just need to observe that $x_t$ follows the normal distribution $\mathcal{N}(A_t x_{t-1} + B_t u_t, R_t)$.
Similarly, the measurement in the standard Kalman filter is modeled as:

$$z_t = C_t x_t + \delta_t$$

where $\delta_t \sim \mathcal{N}(0, Q_t)$ describes the measurement noise.
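As a quick sanity check of these two models, the sketch below simulates one motion step and one measurement in numpy. The specific matrices ($A$, $B$, $C$, $R$, $Q$ for a 1D constant-velocity system) are made-up values for illustration only, not something defined in this note.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1D constant-velocity example (all matrices assumed, not from the text):
# state x = [position, velocity], control u = acceleration, measurement z = position.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])          # state transition matrix A_t
B = np.array([[0.5 * dt**2],
              [dt]])                # control matrix B_t
C = np.array([[1.0, 0.0]])          # measurement matrix C_t
R = 0.01 * np.eye(2)                # motion noise covariance R_t
Q = np.array([[0.1]])               # measurement noise covariance Q_t

x_prev = np.array([0.0, 1.0])       # x_{t-1}
u = np.array([0.2])                 # u_t

# x_t = A_t x_{t-1} + B_t u_t + eps_t,  eps_t ~ N(0, R_t)
eps = rng.multivariate_normal(np.zeros(2), R)
x = A @ x_prev + B @ u + eps

# z_t = C_t x_t + delta_t,  delta_t ~ N(0, Q_t)
delta = rng.multivariate_normal(np.zeros(1), Q)
z = C @ x + delta
```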
It can be shown that in the standard Kalman filter, the state $x_t$ follows a normal distribution, i.e.:

$$bel(x_t) = \mathcal{N}(x_t;\, \mu_t, \Sigma_t)$$

and the algorithm is given as:

\begin{algorithm}
\renewcommand{\thealgorithm}{}
\begin{algorithmic}[1]
\Function{Kalman\_Filter}{$\mu_{t-1}, \Sigma_{t-1}, u_t, z_t$}
\State $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$
\State $\bar{\Sigma}_t = A_t\Sigma_{t-1}A_t^T + R_t$
\State $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$ \Comment{This is called Kalman gain}
\State $\mu_t = \bar{\mu}_t + K_t(z_t - C_t \bar{\mu}_t)$
\State $\Sigma_t = (I - K_tC_t) \bar{\Sigma}_t$
\State $\textbf{return} \;\;\; \mu_t, \Sigma_t$
\EndFunction
\end{algorithmic}
\end{algorithm}
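Translated almost line for line into numpy, one filter step might look like the sketch below; the function signature and variable names are my own, and only the five update equations come from the pseudocode above.

```python
import numpy as np

def kalman_filter(mu_prev, Sigma_prev, u, z, A, B, C, R, Q):
    """One Kalman filter step, mirroring the pseudocode line by line."""
    # Prediction: propagate the mean and covariance through the motion model.
    mu_bar = A @ mu_prev + B @ u
    Sigma_bar = A @ Sigma_prev @ A.T + R

    # Kalman gain.
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)

    # Correction: incorporate the measurement z.
    mu = mu_bar + K @ (z - C @ mu_bar)
    Sigma = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu, Sigma

# Example call with the matrices from the earlier simulation sketch:
# mu, Sigma = kalman_filter(np.zeros(2), np.eye(2), u, z, A, B, C, R, Q)
```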
Extended Kalman Filter
In the EKF, the state transition and measurement models take a more general form:

$$x_t = g(u_t, x_{t-1}) + \epsilon_t$$

$$z_t = h(x_t) + \delta_t$$

The problem is that for arbitrary functions $g$ and $h$, it may not be possible to obtain an analytic form of the distribution of the state variable $x_t$. One way to get around this problem is linearization using a Taylor expansion.
Before we look into the details of the Taylor expansion, let's take one step back and review what we already have. One important thing is not to confuse parameters with known values.
Recall that the ultimate goal is to calculate $p(x_t \mid u_t, x_{t-1})$ and $p(z_t \mid x_t)$. Although they are conditional probabilities and $(u_t, x_{t-1})$ and $x_t$ are the conditions in the two expressions respectively, all of them are parameters (or function arguments). If we forget about the probability context for a moment, it's quite obvious that $p(x_t \mid u_t, x_{t-1})$ is a mapping from $(x_t, u_t, x_{t-1})$ to a value.
We also recall that in the standard Kalman filter, the distribution of the state is tracked by $(\mu_t, \Sigma_t)$, so at time $t$, $\mu_{t-1}$ and $\Sigma_{t-1}$ are known values.
Now, we can get back to the Taylor expansion. For the motion model, we perform the linearization around $\mu_{t-1}$ because this is our estimate of the state at $t-1$ and it should be close to $x_{t-1}$. Therefore, we have

$$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + G_t (x_{t-1} - \mu_{t-1})$$

where $G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}$ means the partial derivative (Jacobian) with respect to the second variable, evaluated at $(u_t, \mu_{t-1})$.
Similarly, we can write

$$h(x_t) \approx h(\bar{\mu}_t) + H_t (x_t - \bar{\mu}_t)$$

where $H_t = \frac{\partial h(\bar{\mu}_t)}{\partial x_t}$ is the Jacobian of $h$ evaluated at $\bar{\mu}_t$.
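To make the Jacobians concrete, here is a sketch for one hypothetical nonlinear model: a planar robot with state $(x, y, \theta)$, velocity/turn-rate control, and a range-bearing measurement of a landmark at a known position. Neither this model nor the landmark appears in the text; it is only meant to show what $g$, $h$, $G_t$, and $H_t$ might look like.

```python
import numpy as np

dt = 0.1
landmark = np.array([2.0, 3.0])      # known landmark position (assumed)

def g(u, x_prev):
    """Motion model g(u_t, x_{t-1}): state (x, y, theta), control (v, w)."""
    x, y, th = x_prev
    v, w = u
    return np.array([x + v * dt * np.cos(th),
                     y + v * dt * np.sin(th),
                     th + w * dt])

def G_jacobian(u, x_prev):
    """G_t = dg/dx_{t-1}, evaluated at (u_t, mu_{t-1})."""
    _, _, th = x_prev
    v, _ = u
    return np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                     [0.0, 1.0,  v * dt * np.cos(th)],
                     [0.0, 0.0,  1.0]])

def h(x):
    """Measurement model h(x_t): range and bearing to the landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    r = np.hypot(dx, dy)
    return np.array([r, np.arctan2(dy, dx) - x[2]])

def H_jacobian(x):
    """H_t = dh/dx_t, evaluated at mu_bar_t."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    return np.array([[-dx / r, -dy / r,  0.0],
                     [ dy / q, -dx / q, -1.0]])
```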
The extended Kalman filter algorithm is given as:

\begin{algorithm}
\renewcommand{\thealgorithm}{}
\begin{algorithmic}[1]
\Function{EKF}{$\mu_{t-1}, \Sigma_{t-1}, u_t, z_t$}
\State $\bar{\mu}_t = g(u_t, \mu_{t-1})$
\State $\bar{\Sigma}_t = G_t\Sigma_{t-1}G_t^T + R_t$
\State $K_t = \bar{\Sigma}_t H_t^T (H_t \bar{\Sigma}_t H_t^T + Q_t)^{-1}$
\State $\mu_t = \bar{\mu}_t + K_t(z_t - h(\bar{\mu}_t))$
\State $\Sigma_t = (I - K_tH_t) \bar{\Sigma}_t$
\State $\textbf{return} \;\;\; \mu_t, \Sigma_t$
\EndFunction
\end{algorithmic}
\end{algorithm}
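As with the linear case, the pseudocode maps almost line for line onto numpy. In the sketch below, `g`, `h`, and the two Jacobian functions are passed in as arguments (for instance, the hypothetical ones sketched earlier); only the five update equations come from the algorithm above.

```python
import numpy as np

def ekf(mu_prev, Sigma_prev, u, z, g, h, G_jacobian, H_jacobian, R, Q):
    """One extended Kalman filter step, mirroring the pseudocode."""
    # Prediction: push the mean through g, propagate covariance with the Jacobian G_t.
    mu_bar = g(u, mu_prev)
    G = G_jacobian(u, mu_prev)
    Sigma_bar = G @ Sigma_prev @ G.T + R

    # Kalman gain, using the measurement Jacobian H_t at the predicted mean.
    H = H_jacobian(mu_bar)
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)

    # Correction with the nonlinear measurement function h.
    mu = mu_bar + K @ (z - h(mu_bar))
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu, Sigma
```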