# Introduction

Suppose we have a particle moving at constant velocity in one dimension, where the velocity is sampled from a distribution. We can observe the position of the particle at fixed intervals, and we wish to estimate its initial velocity. For generality, let us assume that the position and the velocity can be perturbed at each interval and that our measurements are noisy.

A point of Haskell interest: using type-level literals caught a bug in the mathematical description (one of the dimensions of a matrix was incorrect). Of course, this would have become apparent at run-time, but proof checking of this nature is surely the future for mathematicians. One could conceive of writing an implementation of an algorithm or proof and compiling it, but never actually running it, purely to check that some aspects of the algorithm or proof are correct.
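The flavour of compile-time dimension checking involved can be sketched without any linear-algebra library at all. The following standalone example is an illustration only: the `Vec` type, `dot` and `mkVec` are invented for this sketch using GHC's type-level naturals, and are not part of hmatrix.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE ScopedTypeVariables #-}

import GHC.TypeLits (Nat, KnownNat, natVal)
import Data.Proxy (Proxy(..))

-- A vector whose length is tracked at the type level.
newtype Vec (n :: Nat) = Vec [Double] deriving Show

-- Both arguments must have the same type-level length, so a
-- dimension mismatch is a compile-time error, not a run-time one.
dot :: Vec n -> Vec n -> Double
dot (Vec xs) (Vec ys) = sum (zipWith (*) xs ys)

-- Smart constructor: checks the run-time length against the type.
mkVec :: forall n . KnownNat n => [Double] -> Maybe (Vec n)
mkVec xs
  | fromIntegral (length xs) == natVal (Proxy :: Proxy n) = Just (Vec xs)
  | otherwise                                             = Nothing

main :: IO ()
main = do
  let Just u = mkVec [1, 2] :: Maybe (Vec 2)
      Just v = mkVec [3, 4] :: Maybe (Vec 2)
  print (dot u v)  -- prints 11.0
  -- dot u w, where w :: Vec 3, would be rejected by the type checker.
```

Calling `dot` on a `Vec 2` and a `Vec 3` fails at compile time, which is exactly the class of error (a wrong matrix dimension) mentioned above.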

# The Mathematical Model

We take the position as $x_i$ and the velocity as $v_i$:

$$
\begin{aligned}
x_{i+1} &= x_i + \Delta T v_i + \psi^{x}_{i+1} \\
v_{i+1} &= v_i + \psi^{v}_{i+1} \\
y_{i+1} &= a x_{i+1} + \upsilon_{i+1}
\end{aligned}
$$

where $\psi^{x}_i$, $\psi^{v}_i$ and $\upsilon_i$ are all IID normal with means of 0 and variances of $\sigma^2_\psi$ (for the state noises) and $\sigma^2_\upsilon$ (for the observation noise).

We can re-write this as

$$
\begin{aligned}
\boldsymbol{x}_{i+1} &= \boldsymbol{A} \boldsymbol{x}_i + \boldsymbol{\psi}_{i+1} \\
\boldsymbol{y}_{i+1} &= \boldsymbol{H} \boldsymbol{x}_{i+1} + \boldsymbol{\upsilon}_{i+1}
\end{aligned}
$$

where

$$
\boldsymbol{x}_i = \begin{bmatrix} x_i \\ v_i \end{bmatrix}, \quad
\boldsymbol{A} = \begin{bmatrix} 1 & \Delta T \\ 0 & 1 \end{bmatrix}, \quad
\boldsymbol{H} = \begin{bmatrix} a & 0 \end{bmatrix}, \quad
\boldsymbol{\psi}_i \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{\Sigma}_X), \quad
\boldsymbol{\upsilon}_i \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{\Sigma}_Y)
$$

Let us denote the mean and variance of the predicted ("flat") distribution $\boldsymbol{x}_i \mid \boldsymbol{y}_1, \ldots, \boldsymbol{y}_{i-1}$ as $\hat{\boldsymbol{x}}^\flat_i$ and $\hat{\boldsymbol{\Sigma}}^\flat_i$ respectively and note that

$$
\operatorname{cov}(\boldsymbol{x}_i, \boldsymbol{y}_i \mid \boldsymbol{y}_1, \ldots, \boldsymbol{y}_{i-1}) = \hat{\boldsymbol{\Sigma}}^\flat_i \boldsymbol{H}^\top
$$

Since $\boldsymbol{x}_i$ and $\boldsymbol{y}_i$ are jointly Gaussian and recalling that $\boldsymbol{\Sigma}_{yx} = \boldsymbol{\Sigma}_{xy}^\top$ as covariance matrices are symmetric, we can calculate their mean and covariance matrix as

$$
\begin{bmatrix} \boldsymbol{x}_i \\ \boldsymbol{y}_i \end{bmatrix}
\,\Bigg|\, \boldsymbol{y}_1, \ldots, \boldsymbol{y}_{i-1}
\sim \mathcal{N}\left(
\begin{bmatrix} \hat{\boldsymbol{x}}^\flat_i \\ \boldsymbol{H}\hat{\boldsymbol{x}}^\flat_i \end{bmatrix},
\begin{bmatrix}
\hat{\boldsymbol{\Sigma}}^\flat_i & \hat{\boldsymbol{\Sigma}}^\flat_i \boldsymbol{H}^\top \\
\boldsymbol{H}\hat{\boldsymbol{\Sigma}}^\flat_i & \boldsymbol{H}\hat{\boldsymbol{\Sigma}}^\flat_i \boldsymbol{H}^\top + \boldsymbol{\Sigma}_Y
\end{bmatrix}
\right)
$$

We can now use standard formulæ which say that if

$$
\begin{bmatrix} \boldsymbol{x} \\ \boldsymbol{y} \end{bmatrix} \sim
\mathcal{N}\left(
\begin{bmatrix} \boldsymbol{\mu}_x \\ \boldsymbol{\mu}_y \end{bmatrix},
\begin{bmatrix} \boldsymbol{\Sigma}_{xx} & \boldsymbol{\Sigma}_{xy} \\ \boldsymbol{\Sigma}_{yx} & \boldsymbol{\Sigma}_{yy} \end{bmatrix}
\right)
$$

then

$$
\boldsymbol{x} \mid \boldsymbol{y} \sim
\mathcal{N}\big(\boldsymbol{\mu}_x + \boldsymbol{\Sigma}_{xy}\boldsymbol{\Sigma}_{yy}^{-1}(\boldsymbol{y} - \boldsymbol{\mu}_y),\;
\boldsymbol{\Sigma}_{xx} - \boldsymbol{\Sigma}_{xy}\boldsymbol{\Sigma}_{yy}^{-1}\boldsymbol{\Sigma}_{yx}\big)
$$

and apply this to

$$
\begin{bmatrix} \boldsymbol{x}_i \\ \boldsymbol{y}_i \end{bmatrix}
\,\Big|\, \boldsymbol{y}_1, \ldots, \boldsymbol{y}_{i-1}
$$

to give

$$
\boldsymbol{x}_i \mid \boldsymbol{y}_1, \ldots, \boldsymbol{y}_i \sim \mathcal{N}(\hat{\boldsymbol{x}}_i, \hat{\boldsymbol{\Sigma}}_i)
$$

This is called the measurement update; more explicitly

$$
\begin{aligned}
\hat{\boldsymbol{x}}_i &= \hat{\boldsymbol{x}}^\flat_i + \hat{\boldsymbol{\Sigma}}^\flat_i \boldsymbol{H}^\top \big(\boldsymbol{H}\hat{\boldsymbol{\Sigma}}^\flat_i \boldsymbol{H}^\top + \boldsymbol{\Sigma}_Y\big)^{-1} \big(\boldsymbol{y}_i - \boldsymbol{H}\hat{\boldsymbol{x}}^\flat_i\big) \\
\hat{\boldsymbol{\Sigma}}_i &= \hat{\boldsymbol{\Sigma}}^\flat_i - \hat{\boldsymbol{\Sigma}}^\flat_i \boldsymbol{H}^\top \big(\boldsymbol{H}\hat{\boldsymbol{\Sigma}}^\flat_i \boldsymbol{H}^\top + \boldsymbol{\Sigma}_Y\big)^{-1} \boldsymbol{H}\hat{\boldsymbol{\Sigma}}^\flat_i
\end{aligned}
$$

Sometimes the measurement residual $\boldsymbol{v}_i = \boldsymbol{y}_i - \boldsymbol{H}\hat{\boldsymbol{x}}^\flat_i$, the measurement prediction covariance $\boldsymbol{S}_i = \boldsymbol{H}\hat{\boldsymbol{\Sigma}}^\flat_i \boldsymbol{H}^\top + \boldsymbol{\Sigma}_Y$ and the filter gain $\boldsymbol{K}_i = \hat{\boldsymbol{\Sigma}}^\flat_i \boldsymbol{H}^\top \boldsymbol{S}_i^{-1}$ are defined and the measurement update is written as

$$
\begin{aligned}
\hat{\boldsymbol{x}}_i &= \hat{\boldsymbol{x}}^\flat_i + \boldsymbol{K}_i \boldsymbol{v}_i \\
\hat{\boldsymbol{\Sigma}}_i &= \hat{\boldsymbol{\Sigma}}^\flat_i - \boldsymbol{K}_i \boldsymbol{S}_i \boldsymbol{K}_i^\top
\end{aligned}
$$

We further have that

$$
\boldsymbol{x}_{i+1} \mid \boldsymbol{y}_1, \ldots, \boldsymbol{y}_i \sim
\mathcal{N}\big(\boldsymbol{A}\hat{\boldsymbol{x}}_i,\; \boldsymbol{A}\hat{\boldsymbol{\Sigma}}_i\boldsymbol{A}^\top + \boldsymbol{\Sigma}_X\big)
$$

We thus obtain the Kalman filter prediction step:

$$
\begin{aligned}
\hat{\boldsymbol{x}}^\flat_{i+1} &= \boldsymbol{A}\hat{\boldsymbol{x}}_i \\
\hat{\boldsymbol{\Sigma}}^\flat_{i+1} &= \boldsymbol{A}\hat{\boldsymbol{\Sigma}}_i\boldsymbol{A}^\top + \boldsymbol{\Sigma}_X
\end{aligned}
$$
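The measurement update and prediction step can be checked on a scalar sketch in which every matrix is 1×1. The numbers and the helper names `measurementUpdate` and `predict` are invented for this illustration; they are not part of the implementation that follows.

```haskell
-- Scalar (1-D) Kalman step with hypothetical parameters:
-- h = 1, sigmaY = 1, bigA = 1, sigmaX = 1e-6.
measurementUpdate :: (Double, Double) -> Double -> (Double, Double)
measurementUpdate (xHatFlat, pFlat) y = (xHat, pHat)
  where
    h      = 1.0
    sigmaY = 1.0
    v    = y - h * xHatFlat        -- measurement residual
    s    = h * pFlat * h + sigmaY  -- measurement prediction covariance
    k    = pFlat * h / s           -- filter gain
    xHat = xHatFlat + k * v
    pHat = pFlat - k * s * k

predict :: (Double, Double) -> (Double, Double)
predict (xHat, pHat) = (bigA * xHat, bigA * pHat * bigA + sigmaX)
  where
    bigA   = 1.0
    sigmaX = 1.0e-6

main :: IO ()
main = print (predict (measurementUpdate (0.0, 10.0) 2.0))
```

Starting from a diffuse prior variance of 10 and a single observation $y = 2$, the gain is $10/11$, so the posterior mean $20/11$ sits almost on the observation, and the prediction step then inflates the variance slightly by $\sigma^2_X$.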

Further information can be found in (Boyd 2008), (Kleeman 1996) and (Särkkä 2013).

# A Haskell Implementation

The hmatrix package now uses type-level literals via the DataKinds extension in GHC to enforce compatibility of matrix and vector operations at the type level. See here for more details. Sadly, a bug in the hmatrix implementation means we can't currently use this excellent feature, and we content ourselves with comments describing what the types would be were it possible to use it.

```
> {-# OPTIONS_GHC -Wall #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind #-}
> {-# OPTIONS_GHC -fno-warn-missing-methods #-}
> {-# OPTIONS_GHC -fno-warn-orphans #-}
```

```
> {-# LANGUAGE DataKinds #-}
> {-# LANGUAGE ScopedTypeVariables #-}
> {-# LANGUAGE RankNTypes #-}
```

```
> module FunWithKalmanPart1a where
```

```
> import Numeric.LinearAlgebra.HMatrix hiding ( outer )
```

```
> import Data.Random.Source.PureMT
> import Data.Random hiding ( gamma )
> import Control.Monad.State
> import qualified Control.Monad.Writer as W
> import Control.Monad.Loops
```

Let us make our model almost deterministic but with noisy observations.

```
> stateVariance :: Double
> stateVariance = 1e-6
```

```
> obsVariance :: Double
> obsVariance = 1.0
```

And let us start with a prior normal distribution with a mean position and velocity of 0 with moderate variances and no correlation.

```
> -- muPrior :: R 2
> muPrior :: Vector Double
> muPrior = vector [0.0, 0.0]
```

```
> -- sigmaPrior :: Sq 2
> sigmaPrior :: Matrix Double
> sigmaPrior = (2 >< 2) [ 1e1, 0.0
>                       , 0.0, 1e1
>                       ]
```

We now set up the parameters for our model as outlined in the preceding section.

```
> deltaT :: Double
> deltaT = 0.001
```

```
> -- bigA :: Sq 2
> bigA :: Matrix Double
> bigA = (2 >< 2) [ 1, deltaT
>                 , 0, 1
>                 ]
```

```
> a :: Double
> a = 1.0
```

```
> -- bigH :: L 1 2
> bigH :: Matrix Double
> bigH = (1 >< 2) [ a, 0
>                 ]
```

```
> -- bigSigmaY :: Sq 1
> bigSigmaY :: Matrix Double
> bigSigmaY = (1 >< 1) [ obsVariance ]
```

```
> -- bigSigmaX :: Sq 2
> bigSigmaX :: Matrix Double
> bigSigmaX = (2 >< 2) [ stateVariance, 0.0
>                      , 0.0, stateVariance
>                      ]
```

The implementation of the Kalman filter using the hmatrix package is straightforward.

```
> -- outer :: forall m n . (KnownNat m, KnownNat n) =>
> --          R n -> Sq n -> L m n -> Sq m -> Sq n -> Sq n -> [R m] -> [(R n, Sq n)]
> outer :: Vector Double
>       -> Matrix Double
>       -> Matrix Double
>       -> Matrix Double
>       -> Matrix Double
>       -> Matrix Double
>       -> [Vector Double]
>       -> [(Vector Double, Matrix Double)]
> outer muPrior sigmaPrior bigH bigSigmaY bigA bigSigmaX ys = result
>   where
>     result = scanl update (muPrior, sigmaPrior) ys
>
>     -- update :: (R n, Sq n) -> R m -> (R n, Sq n)
>     update (xHatFlat, bigSigmaHatFlat) y =
>       (xHatFlatNew, bigSigmaHatFlatNew)
>       where
>         -- v :: R m
>         v = y - bigH #> xHatFlat
>         -- bigS :: Sq m
>         bigS = bigH <> bigSigmaHatFlat <> (tr bigH) + bigSigmaY
>         -- bigK :: L n m
>         bigK = bigSigmaHatFlat <> (tr bigH) <> (inv bigS)
>         -- xHat :: R n
>         xHat = xHatFlat + bigK #> v
>         -- bigSigmaHat :: Sq n
>         bigSigmaHat = bigSigmaHatFlat - bigK <> bigS <> (tr bigK)
>         -- xHatFlatNew :: R n
>         xHatFlatNew = bigA #> xHat
>         -- bigSigmaHatFlatNew :: Sq n
>         bigSigmaHatFlatNew = bigA <> bigSigmaHat <> (tr bigA) + bigSigmaX
```

We create some random data using our model parameters.

```
> singleSample :: (Double, Double) ->
>                 RVarT (W.Writer [(Double, (Double, Double))]) (Double, Double)
> singleSample (xPrev, vPrev) = do
>   psiX <- rvarT (Normal 0.0 stateVariance)
>   let xNew = xPrev + deltaT * vPrev + psiX
>   psiV <- rvarT (Normal 0.0 stateVariance)
>   let vNew = vPrev + psiV
>   upsilon <- rvarT (Normal 0.0 obsVariance)
>   let y = a * xNew + upsilon
>   lift $ W.tell [(y, (xNew, vNew))]
>   return (xNew, vNew)
```

```
> streamSample :: RVarT (W.Writer [(Double, (Double, Double))]) (Double, Double)
> streamSample = iterateM_ singleSample (1.0, 1.0)
```

```
> samples :: ((Double, Double), [(Double, (Double, Double))])
> samples = W.runWriter (evalStateT (sample streamSample) (pureMT 2))
```

Here are the actual values of the randomly generated positions.

```
> actualXs :: [Double]
> actualXs = map (fst . snd) $ take nObs $ snd samples
```

```
> test :: [(Vector Double, Matrix Double)]
> test = outer muPrior sigmaPrior bigH bigSigmaY bigA bigSigmaX
>              (map (\x -> vector [x]) $ map fst $ snd samples)
```

And using the Kalman filter we can estimate the positions.

```
> estXs :: [Double]
> estXs = map (!!0) $ map toList $ map fst $ take nObs test
```

```
> nObs :: Int
> nObs = 1000
```

And we can see that the estimates track the actual positions quite nicely.

Of course we really wanted to estimate the velocity.

```
> actualVs :: [Double]
> actualVs = map (snd . snd) $ take nObs $ snd samples
```

```
> estVs :: [Double]
> estVs = map (!!1) $ map toList $ map fst $ take nObs test
```

# Bibliography

Boyd, Stephen. 2008. “EE363 Linear Dynamical Systems.” http://stanford.edu/class/ee363.

Kleeman, Lindsay. 1996. “Understanding and Applying Kalman Filtering.” In *Proceedings of the Second Workshop on Perceptive Systems, Curtin University of Technology, Perth Western Australia (25-26 January 1996)*.

Särkkä, Simo. 2013. *Bayesian Filtering and Smoothing*. Vol. 3. Cambridge University Press.
