Gibbs Sampling in R, Haskell, Jags and Stan

Introduction

It’s possible to do Gibbs sampling in most languages and since I am doing some work in R and some work in Haskell, I thought I’d present a simple example in both languages: estimating the mean from a normal distribution with unknown mean and variance. Although one can do Gibbs sampling directly in R, it is more common to use a specialised language such as JAGS or STAN for the actual sampling, with pre-processing and post-processing done in R. This blog post presents implementations in native R, JAGS and STAN as well as Haskell.

Preamble

> {-# OPTIONS_GHC -Wall                      #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing   #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults    #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind   #-}
> {-# OPTIONS_GHC -fno-warn-missing-methods  #-}
> {-# OPTIONS_GHC -fno-warn-orphans          #-}
> {-# LANGUAGE NoMonomorphismRestriction     #-}
> module Gibbs (
>     main
>   , m
>   , Moments(..)
>   ) where
> 
> import qualified Data.Vector.Unboxed as V
> import qualified Control.Monad.Loops as ML
> import Data.Random.Source.PureMT
> import Data.Random
> import Control.Monad.State
> import Data.Histogram ( asList )
> import Data.Histogram.Fill
> import Data.Histogram.Generic ( Histogram )
> import Data.List
> import qualified Control.Foldl as L
> 
> import Diagrams.Backend.Cairo.CmdLine
> 
> import LinRegAux
> 
> import Diagrams.Backend.CmdLine
> import Diagrams.Prelude hiding ( sample, render )

The length of our chain and the burn-in.

> nrep, nb :: Int
> nb   = 5000
> nrep = 105000

Data generated from {\cal{N}}(10.0, 5.0).

> xs :: [Double]
> xs = [
>     11.0765808082301
>   , 10.918739177542
>   , 15.4302462747137
>   , 10.1435649220266
>   , 15.2112705014697
>   , 10.441327659703
>   , 2.95784054883142
>   , 10.2761068139607
>   , 9.64347295100318
>   , 11.8043359297675
>   , 10.9419989262713
>   , 7.21905367667346
>   , 10.4339807638017
>   , 6.79485294803006
>   , 11.817248658832
>   , 6.6126710570584
>   , 12.6640920214508
>   , 8.36604701073303
>   , 12.6048485320333
>   , 8.43143879537592
>   ]

A Bit of Theory

Gibbs Sampling

For a multi-parameter situation, Gibbs sampling is a special case of Metropolis-Hastings in which the proposal distributions are the posterior conditional distributions.

Referring back to the explanation of the Metropolis algorithm, let us describe the state at step i by its parameters \boldsymbol{\theta}^{(i)} \triangleq (\theta^{(i)}_1,\ldots, \theta^{(i)}_n) and the conditional posteriors by \pi\big({\theta}_{k}^{(j)} \,\big|\, {\boldsymbol{\theta}}_{-k}^{(i)}\big) where {\boldsymbol{\theta}}^{(i)}_{-k} = \big(\theta_1^{(i)},\ldots,\theta_{k-1}^{(i)},\theta_{k+1}^{(i)},\ldots,\theta_n^{(i)}\big); then

\displaystyle   \begin{aligned}  \frac{\pi\big(\boldsymbol{\theta}^{(j)}\big)q\big(\boldsymbol{\theta}^{(j)}, \boldsymbol{\theta}^{(i)}\big)}  {\pi(\boldsymbol{\theta}^{(i)})q(\boldsymbol{\theta}^{(i)}, \boldsymbol{\theta}^{(j)})}  &=  \frac{  \pi\big({\theta}_{k}^{(j)} \,\big|\, {\boldsymbol{\theta}}_{-k}^{(j)}\big)\pi\big({\boldsymbol{\theta}}_{-k}^{(j)}\big)\pi\big({\theta}_{k}^{(i)} \,\big|\, {\boldsymbol{\theta}}_{-k}^{(j)}\big)  }  {  \pi\big({\theta}_{k}^{(i)} \,\big|\, {\boldsymbol{\theta}}_{-k}^{(i)}\big)\pi\big({\boldsymbol{\theta}}_{-k}^{(i)}\big)\pi\big({\theta}_{k}^{(j)} \,\big|\, {\boldsymbol{\theta}}_{-k}^{(i)}\big)  } \\  &=  \frac{  \pi\big({\theta}_{k}^{(j)} \,\big|\, {\boldsymbol{\theta}}_{-k}^{(j)}\big)\pi\big({\boldsymbol{\theta}}_{-k}^{(j)}\big)\pi\big({\theta}_{k}^{(i)} \,\big|\, {\boldsymbol{\theta}}_{-k}^{(j)}\big)  }  {  \pi\big({\theta}_{k}^{(i)} \,\big|\, {\boldsymbol{\theta}}_{-k}^{(j)}\big)\pi\big({\boldsymbol{\theta}}_{-k}^{(j)}\big)\pi\big({\theta}_{k}^{(j)} \,\big|\, {\boldsymbol{\theta}}_{-k}^{(j)}\big)  } \\  &= 1  \end{aligned}

where we have used the rules of conditional probability and the fact that {\boldsymbol{\theta}}_{-k}^{(i)} = {\boldsymbol{\theta}}_{-k}^{(j)}, since the proposal only changes the k-th component.

Thus we always accept the proposed jump. Note that the chain is not in general reversible as the order in which the updates are done matters.

Normal Distribution with Unknown Mean and Variance

It is fairly standard to use an improper prior

\displaystyle   \begin{aligned}  \pi(\mu, \tau) \propto \frac{1}{\tau} & & -\infty < \mu < \infty\, \textrm{and}\, 0 < \tau < \infty  \end{aligned}

The likelihood is

\displaystyle   p(\boldsymbol{x}\,|\,\mu, \sigma) = \prod_{i=1}^n \bigg(\frac{1}{\sigma\sqrt{2\pi}}\bigg)\exp{\bigg( -\frac{(x_i - \mu)^2}{2\sigma^2}\bigg)}

re-writing in terms of precision

\displaystyle   p(\boldsymbol{x}\,|\,\mu, \tau) \propto \prod_{i=1}^n \sqrt{\tau}\exp{\bigg( -\frac{\tau}{2}{(x_i - \mu)^2}\bigg)} = \tau^{n/2}\exp{\bigg( -\frac{\tau}{2}\sum_{i=1}^n{(x_i - \mu)^2}\bigg)}

Thus, multiplying by the prior \pi(\mu, \tau) \propto 1/\tau, the posterior is

\displaystyle   p(\mu, \tau \,|\, \boldsymbol{x}) \propto \tau^{n/2 - 1}\exp{\bigg( -\frac{\tau}{2}\sum_{i=1}^n{(x_i - \mu)^2}\bigg)}

We can re-write the sum in terms of the sample mean \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i and variance s^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})^2 using

\displaystyle   \begin{aligned}  \sum_{i=1}^n (x_i - \mu)^2 &= \sum_{i=1}^n (x_i - \bar{x} + \bar{x} - \mu)^2 \\  &= \sum_{i=1}^n (x_i - \bar{x})^2 + 2\sum_{i=1}^n (x_i - \bar{x})(\bar{x} - \mu) + \sum_{i=1}^n (\bar{x} - \mu)^2 \\  &= \sum_{i=1}^n (x_i - \bar{x})^2 + 2(\bar{x} - \mu)\sum_{i=1}^n (x_i - \bar{x}) + \sum_{i=1}^n (\bar{x} - \mu)^2 \\  &= (n - 1)s^2 + n(\bar{x} - \mu)^2  \end{aligned}

where the cross term vanishes because \sum_{i=1}^n (x_i - \bar{x}) = 0.

Thus the conditional posterior for \mu is

\displaystyle   \begin{aligned}  p(\mu \,|\, \tau, \boldsymbol{x}) &\propto \exp{\bigg( -\frac{\tau}{2}\big((n - 1)s^2 + n(\mu - \bar{x})^2\big)\bigg)} \\  &\propto \exp{\bigg( -\frac{n\tau}{2}{(\mu - \bar{x})^2}\bigg)} \\  \end{aligned}

which we recognise as a normal distribution with mean of \bar{x} and a variance of (n\tau)^{-1}.

The conditional posterior for \tau is

\displaystyle   \begin{aligned}  p(\tau \,|\, \mu, \boldsymbol{x}) &\propto \tau^{n/2 -1}\exp\bigg(-\tau\frac{1}{2}\sum_{i=1}^n{(x_i - \mu)^2}\bigg)  \end{aligned}

which we recognise as a gamma distribution with a shape of n/2 and a rate of \frac{1}{2}\sum_{i=1}^n{(x_i - \mu)^2}.

In this particular case, we can calculate the marginal posterior of \mu analytically. Writing z = \frac{\tau}{2}\sum_{i=1}^n{(x_i - \mu)^2} we have

\displaystyle   \begin{aligned}  p(\mu \,|\, \boldsymbol{x}) &= \int_0^\infty p(\mu, \tau \,|\, \boldsymbol{x}) \textrm{d}\tau \\  &\propto \int_0^\infty \tau^{n/2 - 1}\exp{\bigg( -\frac{\tau}{2}\sum_{i=1}^n{(x_i - \mu)^2}\bigg)} \textrm{d}\tau \\  &\propto \bigg( \sum_{i=1}^n{(x_i - \mu)^2} \bigg)^{-n/2} \int_0^\infty z^{n/2 - 1}\exp{(-z)}\,\textrm{d}z \\  &\propto \bigg( \sum_{i=1}^n{(x_i - \mu)^2} \bigg)^{-n/2} \\  \end{aligned}

Finally we can calculate

\displaystyle   \begin{aligned}  p(\mu \,|\, \boldsymbol{x}) &\propto \bigg( (n - 1)s^2 + n(\bar{x} - \mu)^2 \bigg)^{-n/2} \\  &\propto \bigg( 1 + \frac{n(\mu - \bar{x})^2}{(n - 1)s^2} \bigg)^{-n/2} \\  \end{aligned}

This is the non-standardized Student’s t-distribution t_{n-1}(\bar{x}, s^2/n).

Alternatively the marginal posterior of \mu is

\displaystyle   \frac{\mu - \bar{x}}{s/\sqrt{n}}\bigg|\, x \sim t_{n-1}

where t_{n-1} is the standard t distribution with n - 1 degrees of freedom.
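
For the sample of size n = 20 used in this post this specialises to

\displaystyle   \mu \,|\, \boldsymbol{x} \sim t_{19}(\bar{x}, s^2/20)

giving an analytic distribution against which to check the Gibbs samples below.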

The Model in Haskell

Following up on a comment from a previous blog post, let us try using the foldl package to calculate the length, the sum and the sum of squares while traversing the list only once. This is an improvement on creating your own strict record and using foldl’, although it may not be suitable for some methods, e.g. calculating the skewness and kurtosis incrementally; see below.

> x2Sum, xSum, n :: Double
> (x2Sum, xSum, n) = L.fold stats xs
>   where
>     stats = (,,) <$>
>             (L.premap (\x -> x * x) L.sum) <*>
>             L.sum <*>
>             L.genericLength
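
The Applicative interface also lets us compute derived statistics in the same single pass; for example the mean directly (a small aside, not part of the module above):

foldMean :: [Double] -> Double
foldMean = L.fold ((/) <$> L.sum <*> L.genericLength)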

And re-writing the sample variance s^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})^2 using

\displaystyle   \begin{aligned}  \sum_{i=1}^n (x_i - \bar{x})^2 &= \sum_{i=1}^n (x_i^2 - 2x_i\bar{x} + \bar{x}^2) \\  &= \sum_{i=1}^n x_i^2 - 2\bar{x}\sum_{i=1}^n x_i + \sum_{i=1}^n \bar{x}^2 \\  &= \sum_{i=1}^n x_i^2 - 2n\bar{x}^2 + n\bar{x}^2 \\  &= \sum_{i=1}^n x_i^2 - n\bar{x}^2 \\  \end{aligned}

we can then calculate the sample mean and variance using the sums we have just calculated.

> xBar, varX :: Double
> xBar = xSum / n
> varX = n * (m2Xs - xBar * xBar) / (n - 1)
>   where m2Xs = x2Sum / n

In random-fu, the Gamma distribution is specified by the rate parameter, \beta.

> beta, initTau :: Double
> beta = 0.5 * n * varX
> initTau = evalState (sample (Gamma (n / 2) beta)) (pureMT 1)

Our sampler takes an old value of \tau and creates new values of \mu and \tau.

> gibbsSampler :: MonadRandom m => Double -> m (Maybe ((Double, Double), Double))
> gibbsSampler oldTau = do
>   -- random-fu's Normal takes a standard deviation, hence the recip . sqrt
>   newMu <- sample (Normal xBar (recip (sqrt (n * oldTau))))
>   let shape = 0.5 * n
>       -- this is (1/2) * sum (x_i - newMu)^2, expanded via the sums above
>       scale = 0.5 * (x2Sum + n * newMu^2 - 2 * n * newMu * xBar)
>   newTau <- sample (Gamma shape (recip scale))
>   -- Just, so that unfoldrM below produces an unbounded stream
>   return $ Just ((newMu, newTau), newTau)

From which we can create an infinite stream of samples.

> gibbsSamples :: [(Double, Double)]
> gibbsSamples = evalState (ML.unfoldrM gibbsSampler initTau) (pureMT 1)

As our chains might be very long, we calculate the mean, variance, skewness and kurtosis using an incremental method.

> data Moments = Moments { mN :: !Double
>                        , m1 :: !Double
>                        , m2 :: !Double
>                        , m3 :: !Double
>                        , m4 :: !Double
>                        }
>   deriving Show
> moments :: [Double] -> Moments
> moments xs = foldl' f (Moments 0.0 0.0 0.0 0.0 0.0) xs
>   where
>     f :: Moments -> Double -> Moments
>     f m x = Moments n' m1' m2' m3' m4'
>       where
>         n = mN m
>         n'  = n + 1
>         delta = x - (m1 m)
>         delta_n = delta / n'
>         delta_n2 = delta_n * delta_n
>         term1 = delta * delta_n * n
>         m1' = m1 m + delta_n
>         m4' = m4 m +
>               term1 * delta_n2 * (n'*n' - 3*n' + 3) +
>               6 * delta_n2 * m2 m - 4 * delta_n * m3 m
>         m3' = m3 m + term1 * delta_n * (n' - 2) - 3 * delta_n * m2 m
>         m2' = m2 m + term1
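
The ghci session below evaluates the sample skewness and kurtosis directly from these raw central moments; packaged as functions (an aside, not exported above) they would read

skewness, kurtosis :: Moments -> Double
skewness m = sqrt (mN m) * m3 m / (m2 m)**1.5
kurtosis m = mN m * m4 m / (m2 m)**2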

In order to examine the posterior, we create a histogram.

> numBins :: Int
> numBins = 400
> hb :: HBuilder Double (Data.Histogram.Generic.Histogram V.Vector BinD Double)
> hb = forceDouble -<< mkSimple (binD lower numBins upper)
>   where
>     lower = xBar - 2.0 * sqrt varX
>     upper = xBar + 2.0 * sqrt varX

And fill it with the specified number of samples preceded by a burn-in.

> hist :: Histogram V.Vector BinD Double
> hist = fillBuilder hb (take (nrep - nb) $ drop nb $ map fst gibbsSamples)

Now we can plot this.

And calculate the skewness and kurtosis.

> m :: Moments
> m = moments (take (nrep - nb) $ drop nb $ map fst gibbsSamples)
ghci> import Gibbs
ghci> putStrLn $ show $ (sqrt (mN m)) * (m3 m) / (m2 m)**1.5
  8.733959917065126e-4

ghci> putStrLn $ show $ (mN m) * (m4 m) / (m2 m)**2
  3.451374739494607

We expect a skewness of 0 and a kurtosis of 3 + 6 / (\nu - 4) = 3.4 for \nu = 19. Not too bad.

The Model in JAGS

JAGS is a mature, declarative, domain specific language for building Bayesian statistical models using Gibbs sampling.

Here is our model as expressed in JAGS. Somewhat terse.

model {
	for (i in 1:N) {
		x[i] ~ dnorm(mu, tau)  # JAGS parameterises the normal by its precision tau
	}
	mu ~ dnorm(0, 1.0E-6)          # a very diffuse prior for the mean
	tau <- pow(sigma, -2)          # precision in terms of the standard deviation
	sigma ~ dunif(0, 1000)         # a vague prior for the standard deviation
}

To run it and examine its results, we wrap it up in some R

## Import the library that allows R to inter-work with jags.
library(rjags)

## Read the simulated data into a data frame.
fn <- read.table("example1.data", header=FALSE)

jags <- jags.model('example1.bug',
                   data = list('x' = fn[,1], 'N' = 20),
                   n.chains = 4,
                   n.adapt = 100)

## Burnin for 10000 samples
update(jags, 10000);

mcmc_samples <- coda.samples(jags, variable.names=c("mu", "sigma"), n.iter=20000)

png(file="diagrams/jags.png",width=400,height=350)
plot(mcmc_samples)
dev.off()

And now we can look at the posterior for \mu.

The Model in STAN

STAN is a domain specific language for building Bayesian statistical models, similar to JAGS but newer; it allows variables to be re-assigned and so cannot really be described as declarative.

Here is our model as expressed in STAN. Again, somewhat terse.

data {
  int<lower=0> N;
  real x[N];
}

parameters {
  real mu;
  real<lower=0,upper=1000> sigma;  // the bounds give sigma an implicit uniform prior
}

model {
  x     ~ normal(mu, sigma);       // Stan parameterises the normal by its standard deviation
  mu    ~ normal(0, 1000);
}

Just as with JAGS, to run it and examine its results, we wrap it up in some R.

library(rstan)

## Read the simulated data into a data frame.
fn <- read.table("example1.data", header=FALSE)

## Running the model
fit1 <- stan(file = 'Stan.stan',
             data = list('x' = fn[,1], 'N' = 20),
             pars=c("mu", "sigma"),
             chains=3,
             iter=30000,
             warmup=10000)

png(file="diagrams/stan.png",width=400,height=350)
plot(fit1)
dev.off()

Again we can look at the posterior although we only seem to get medians and 80% intervals.

PostAmble

Write the histogram produced by the Haskell code to a file.

> displayHeader :: FilePath -> Diagram B R2 -> IO ()
> displayHeader fn =
>   mainRender ( DiagramOpts (Just 900) (Just 700) fn
>              , DiagramLoopOpts False Nothing 0
>              )
> main :: IO ()
> main = do
>   displayHeader "diagrams/DataScienceHaskPost.png"
>     (barDiag
>      (zip (map fst $ asList hist) (map snd $ asList hist)))

The code can be downloaded from github.


Student’s T and Space Leaks

Introduction

The other speaker at the Machine Learning Meetup at which I gave my talk on automatic differentiation gave a very interesting talk on A/B testing. Apparently this is big business these days, as attested by the fact that I got 3 ads above the Wikipedia entry when I googled for it.

It seems that people tend to test with small sample sizes and to do so very often, resulting in spurious results. Of course readers of XKCD will be well aware of some of the pitfalls.

I thought a Bayesian approach might circumvent some of the problems and set out to write a blog article, only to discover that there was no Haskell library for sampling from Student’s t. Actually there was one, but it is currently an unreleased part of random-fu. So I set about fixing this shortfall.

I thought I had better run a few tests so I calculated the sampled mean, variance, skewness and kurtosis.

I wasn’t really giving this my full attention and as a result ran into a few problems with space. I thought these were worth sharing and that is what this blog post is about. Hopefully, I will have time soon to actually blog about the Bayesian equivalent of A/B testing.

Preamble

> {-# OPTIONS_GHC -Wall                      #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing   #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults    #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind   #-}
> {-# OPTIONS_GHC -fno-warn-missing-methods  #-}
> {-# OPTIONS_GHC -fno-warn-orphans          #-}
> 
> {-# LANGUAGE NoMonomorphismRestriction     #-}
> 
> module StudentTest (
>     main
>   ) where
> 
> import qualified Data.Vector.Unboxed as V
> import Data.Random.Source.PureMT
> import Data.Random
> import Data.Random.Distribution.T
> import Control.Monad.State
> import Data.Histogram.Fill
> import Data.Histogram.Generic ( Histogram )
> import Data.List

Space Analysis

Let’s create a reasonable number of samples as the higher moments converge quite slowly.

> nSamples :: Int
> nSamples = 1000000

An arbitrary seed for creating the samples.

> arbSeed :: Int
> arbSeed = 8

Student’s t only has one parameter, the number of degrees of freedom.

> nu :: Integer
> nu = 6

Now we can do our tests by calculating the sampled values.

> ts :: [Double]
> ts =
>   evalState (replicateM nSamples (sample (T nu)))
>             (pureMT $ fromIntegral arbSeed)
> mean, variance, skewness, kurtosis :: Double
> mean = (sum ts) / fromIntegral nSamples
> variance = (sum (map (**2) ts)) / fromIntegral nSamples
> skewness = (sum (map (**3) ts) / fromIntegral nSamples) / variance**1.5
> kurtosis = (sum (map (**4) ts) / fromIntegral nSamples) / variance**2

This works fine for small sample sizes but not for the number we have chosen.

./StudentTest +RTS -hc
Stack space overflow: current size 8388608 bytes.
Use `+RTS -Ksize -RTS' to increase it.

It seems a shame that sum in the Prelude has this behaviour, but never mind; let us ensure that we consume the values strictly (they are being produced lazily).

> mean' = (foldl' (+) 0 ts) / fromIntegral nSamples
> variance' = (foldl' (+) 0 (map (**2) ts)) / fromIntegral nSamples
> skewness' = (foldl' (+) 0 (map (**3) ts) / fromIntegral nSamples) / variance'**1.5
> kurtosis' = (foldl' (+) 0 (map (**4) ts) / fromIntegral nSamples) / variance'**2

We now have a space leak on the heap, as the ghc profiler output below shows. What went wrong?

If we only calculate the mean using foldl’ then all is well. Instead of 35M we only use 45K.

Well that gives us a clue. The garbage collector cannot reclaim the samples as they are needed for the other calculations. What we need to do is calculate all the moments together in a single strict pass.

Let’s create a strict record to do this.

> data Moments = Moments { m1 :: !Double
>                        , m2 :: !Double
>                        , m3 :: !Double
>                        , m4 :: !Double
>                        }
>   deriving Show

And calculate the results strictly.

> 
> m  = foldl' (\m x -> Moments { m1 = m1 m + x
>                              , m2 = m2 m + x**2
>                              , m3 = m3 m + x**3
>                              , m4 = m4 m + x**4
>                              }) (Moments 0.0 0.0 0.0 0.0) ts
> 
> mean''       = m1 m / fromIntegral nSamples
> variance''   = m2 m / fromIntegral nSamples
> skewness''   = (m3 m / fromIntegral nSamples) / variance''**1.5
> kurtosis'' = (m4 m / fromIntegral nSamples) / variance''**2

Now we have what we want; the program runs in small constant space.
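
An alternative one-pass formulation, using the foldl package as in the Gibbs post above, would also run in constant space (a sketch, not part of this module):

import qualified Control.Foldl as L

powerSums :: [Double] -> (Double, Double, Double, Double)
powerSums = L.fold ((,,,) <$> p 1 <*> p 2 <*> p 3 <*> p 4)
  where p k = L.premap (** k) L.sum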

> main :: IO ()
> main = do
>   putStrLn $ show mean''
>   putStrLn $ show variance''
>   putStrLn $ show skewness''
>   putStrLn $ show kurtosis''

Oh and the moments give more or less the expected answers: for \nu = 6 the theoretical mean is 0, the variance is \nu / (\nu - 2) = 1.5 and the kurtosis is 3 + 6 / (\nu - 4) = 6; as noted above, the higher moments converge slowly.

ghci> mean''
  3.9298418844289093e-4

ghci> variance''
  1.4962681916693004

ghci> skewness''
  1.0113188204317015e-2

ghci> kurtosis''
  5.661776268997382

Running the Code

To run this you will need my version of random-fu. The code for this article is here. You will need to compile everything with profiling, something like

ghc -O2 -main-is StudentTest StudentTest.lhs -prof
    -package-db=.cabal-sandbox/x86_64-osx-ghc-7.6.2-packages.conf.d

Since you need all the packages to be built with profiling, you will probably want to build using a sandbox as above. The only slightly tricky aspect is building random-fu so it is in your sandbox.

runghc Setup.lhs configure --enable-library-profiling
--package-db=/HasBayes/.cabal-sandbox/x86_64-osx-ghc-7.6.2-packages.conf.d
--libdir=/HasBayes/.cabal-sandbox/lib

Bayesian Analysis: A Conjugate Prior and Markov Chain Monte Carlo

Introduction

This is meant to be a shorter blog post than normal, with the expectation that the material will be developed further in future blog posts.

A Bayesian will have a prior view of the distribution of some data and then, based on observations, update that view. Mostly the updated distribution, the posterior, will not be expressible as an analytic function and sampling via Markov Chain Monte Carlo (MCMC) is the only way to determine it.

In some special cases, when the posterior is of the same family of distributions as the prior, then the posterior is available analytically and we call the posterior and prior conjugate. It turns out that the normal or Gaussian distribution is conjugate with respect to a normal likelihood distribution.

This gives us the opportunity to compare MCMC against the analytic solution and give ourselves more confidence that MCMC really does deliver the goods.

Some points of note:

  • Since we want to display the posterior (and the prior for that matter), for histograms we use the histogram-fill package.

  • Since we are using Monte Carlo we can use all the cores on our computer via one of Haskell’s parallelization mechanisms.

Preamble

> {-# OPTIONS_GHC -Wall                      #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing   #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults    #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind   #-}
> {-# OPTIONS_GHC -fno-warn-missing-methods  #-}
> {-# OPTIONS_GHC -fno-warn-orphans          #-}
> {-# LANGUAGE NoMonomorphismRestriction     #-}
> module ConjMCMCSimple where
> 
> import qualified Data.Vector.Unboxed as V
> import Data.Random.Source.PureMT
> import Data.Random
> import Control.Monad.State
> import Data.Histogram ( asList )
> import qualified Data.Histogram as H
> import Data.Histogram.Fill
> import Data.Histogram.Generic ( Histogram )
> import Data.List
> import Control.Parallel.Strategies
> 
> import Diagrams.Backend.Cairo.CmdLine
> 
> import Diagrams.Backend.CmdLine
> import Diagrams.Prelude hiding ( sample, render )
> 
> import LinRegAux

A Simple Example

Analytically

Suppose the prior is \mu \sim \cal{N}(\mu_0, \sigma_0), that is

\displaystyle   \pi(\mu) \propto \exp{\bigg( -\frac{(\mu - \mu_0)^2}{2\sigma_0^2}\bigg)}

Our data is IID normal, x_i \sim \cal{N}(\mu, \sigma), where \sigma is known, so the likelihood is

\displaystyle   p(x\,|\,\mu, \sigma) \propto \prod_{i=1}^n \exp{\bigg( -\frac{(x_i - \mu)^2}{2\sigma^2}\bigg)}

The assumption that \sigma is known is unrealistic, but the point of this post is to demonstrate MCMC matching an analytic formula.

This gives a posterior of

\displaystyle   \begin{aligned}  p(\mu\,|\, \boldsymbol{x}) &\propto \exp{\bigg(  -\frac{(\mu - \mu_0)^2}{2\sigma_0^2}  - \frac{\sum_{i=1}^n(x_i - \mu)^2}{2\sigma^2}\bigg)} \\  &\propto \exp{\bigg[-\frac{1}{2}\bigg(\frac{\mu^2 \sigma^2 -2\sigma^2\mu\mu_0 - 2\sigma_0^2n\bar{x}\mu + \sigma_0^2 n\mu^2}{\sigma^2\sigma_0^2}\bigg)\bigg]} \\  &=  \exp{\bigg[-\frac{1}{2}\bigg(\frac{ (n\sigma_0^2 + \sigma^2)\mu^2 - 2(\sigma^2\mu_0 + \sigma_0^2n\bar{x})\mu}{\sigma^2\sigma_0^2}\bigg)\bigg]} \\  &=  \exp{\Bigg[-\frac{1}{2}\Bigg(\frac{ \mu^2 - 2\mu\frac{(\sigma^2\mu_0 + \sigma_0^2n\bar{x})}{(n\sigma_0^2 + \sigma^2)}}{\frac{\sigma^2\sigma_0^2}{(n\sigma_0^2 + \sigma^2)}}\Bigg)\Bigg]} \\  &\propto  \exp{\Bigg[-\frac{1}{2}\Bigg(\frac{\big(\mu - \frac{(\sigma^2\mu_0 + \sigma_0^2n\bar{x})}{(n\sigma_0^2 + \sigma^2)}\big)^2}{\frac{\sigma^2\sigma_0^2}{(n\sigma_0^2 + \sigma^2)}}\Bigg)\Bigg]}  \end{aligned}

In other words

\displaystyle   \mu\,|\, \boldsymbol{x} \sim {\cal{N}}\bigg(\frac{\sigma^2\mu_0 + n\sigma_0^2\bar{x}}{n\sigma_0^2 + \sigma^2}, \frac{\sigma^2\sigma_0^2}{n\sigma_0^2 + \sigma^2} \bigg)

Writing

\displaystyle   \sigma_n^2 = \frac{\sigma^2\sigma_0^2}{n\sigma_0^2 + \sigma^2}

we get

\displaystyle   \frac{1}{\sigma_n^2} = \frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}

Thus the precision (the inverse of the variance) of the posterior is the precision of the prior plus the precision of the data scaled by the number of observations. This gives a nice illustration of how Bayesian statistics improves our beliefs.
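
For example, with the values used later in this post (\sigma_0 = 2, \sigma = 1 and n = 10 observations) the posterior precision is

\displaystyle   \frac{1}{\sigma_n^2} = \frac{10}{1^2} + \frac{1}{2^2} = 10.25

so that \sigma_n \approx 0.31, considerably sharper than the prior’s \sigma_0 = 2.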

Writing

\displaystyle   \mu_n = \frac{\sigma^2\mu_0 + n\sigma_0^2\bar{x}}{n\sigma_0^2 + \sigma^2}

and

\displaystyle   \lambda = 1 / \sigma^2, \, \lambda_0 = 1 / \sigma_0^2, \, \lambda_n = 1 / \sigma_n^2

we see that

\displaystyle   \mu_n = \frac{n\bar{x}\lambda + \mu_0\lambda_0}{\lambda_n}

Thus the mean of the posterior is a weighted sum of the mean of the prior and the sample mean, the weights being the precision of the prior and the precision of the data scaled by the number of observations.

Rather arbitrarily let us pick a prior mean of

> mu0 :: Double
> mu0 = 11.0

and express our uncertainty about it with a largish prior standard deviation

> sigma_0 :: Double
> sigma_0 = 2.0

And also arbitrarily let us pick the known standard deviation for the samples as

> sigma :: Double
> sigma = 1.0

We can sample from this in a way that looks very similar to STAN and JAGS:

> hierarchicalSample :: MonadRandom m => m Double
> hierarchicalSample = do
>   mu <- sample (Normal mu0 sigma_0)
>   x  <- sample (Normal mu sigma)
>   return x

and we didn’t need to write a new language for this.

Again arbitrarily let us take

> nSamples :: Int
> nSamples = 10

and use

> arbSeed :: Int
> arbSeed = 2

And then actually generate the samples.

> simpleXs :: [Double]
> simpleXs =
>   evalState (replicateM nSamples hierarchicalSample)
>             (pureMT $ fromIntegral arbSeed)

Using the formulae we derived above we can calculate the posterior

> mu_1, sigma1, simpleNumerator :: Double
> simpleNumerator = fromIntegral nSamples * sigma_0**2 + sigma**2
> mu_1 = (sigma**2 * mu0 + sigma_0**2 * sum simpleXs) / simpleNumerator
> sigma1 = sigma**2 * sigma_0**2 / simpleNumerator
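
As a quick sanity check (my addition, not part of the derivation above), we can confirm numerically that the posterior precision is the prior precision plus the data precision scaled by the number of observations:

-- Should evaluate to True given the definitions above.
checkPrecision :: Bool
checkPrecision =
  abs (recip sigma1 - (fromIntegral nSamples / sigma**2 + 1 / sigma_0**2)) < 1e-9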

and then compare it against the prior

The red posterior shows we are a lot more certain now we have some evidence.

Via Markov Chain Monte Carlo

The theory behind MCMC is described in a previous post. We need to generate some proposed steps for the chain. We sample from the normal distribution but we could have used e.g. the gamma.

> normalisedProposals :: Int -> Double -> Int -> [Double]
> normalisedProposals seed sigma nIters =
>   evalState (replicateM nIters (sample (Normal 0.0 sigma)))
>   (pureMT $ fromIntegral seed)

We also need samples from the uniform distribution

> acceptOrRejects :: Int -> Int -> [Double]
> acceptOrRejects seed nIters =
>   evalState (replicateM nIters (sample stdUniform))
>   (pureMT $ fromIntegral seed)

And now we can calculate the (un-normalised) prior, likelihood and posterior

> prior :: Double -> Double
> prior mu = exp (-(mu - mu0)**2 / (2 * sigma_0**2))
> 
> likelihood :: Double -> [Double] -> Double
> likelihood mu xs = exp (-sum (map (\x -> (x - mu)**2 / (2 * sigma**2)) xs))
> 
> posterior :: Double -> [Double] -> Double
> posterior mu xs = likelihood mu xs * prior mu
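
With only 10 data points the product of exponentials is harmless, but for larger datasets it would underflow; a log-space variant (a sketch only, not used in the chain below) avoids this:

-- Unnormalised log posterior; the acceptance probability would then be
-- min 1 (exp (logPosterior mu' xs - logPosterior mu xs)).
logPosterior :: Double -> [Double] -> Double
logPosterior mu xs =
  negate ((mu - mu0)**2 / (2 * sigma_0**2)) -
  sum (map (\x -> (x - mu)**2 / (2 * sigma**2)) xs)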

The Metropolis algorithm tells us that we always jump to a better place but only sometimes jump to a worse place. We count the number of acceptances as we go.

> acceptanceProb :: Double -> Double -> [Double] -> Double
> acceptanceProb mu mu' xs = min 1.0 ((posterior mu' xs) / (posterior mu xs))
> oneStep :: (Double, Int) -> (Double, Double) -> (Double, Int)
> oneStep (mu, nAccs) (proposedJump, acceptOrReject) =
>   if acceptOrReject < acceptanceProb mu (mu + proposedJump) simpleXs
>   then (mu + proposedJump, nAccs + 1)
>   else (mu, nAccs)

Now we can actually run our simulation. We set the number of jumps and a burn in but do not do any thinning.

> nIters, burnIn :: Int
> nIters = 300000
> burnIn = nIters `div` 10

Let us start our chain at

> startMu :: Double
> startMu = 10.0

and set the variance of the jumps to

> jumpVar :: Double
> jumpVar = 0.4
> test :: Int -> [(Double, Int)]
> test seed =
>   drop burnIn $
>   scanl oneStep (startMu, 0) $
>   zip (normalisedProposals seed jumpVar nIters)
>       (acceptOrRejects seed nIters)
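
Since oneStep counts acceptances as it goes, we can also monitor the acceptance ratio (a hypothetical helper, not in the original code); for a random-walk sampler one would typically tune jumpVar so that this lands somewhere around 0.2 to 0.5:

acceptRatio :: Int -> Double
acceptRatio seed = fromIntegral nAccepts / fromIntegral nIters
  where nAccepts = snd (last (test seed))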

We put the data into a histogram

> numBins :: Int
> numBins = 400
> hb :: HBuilder Double (Data.Histogram.Generic.Histogram V.Vector BinD Double)
> hb = forceDouble -<< mkSimple (binD lower numBins upper)
>   where
>     lower = startMu - 1.5*sigma_0
>     upper = startMu + 1.5*sigma_0
> 
> hist :: Int -> Histogram V.Vector BinD Double
> hist seed = fillBuilder hb (map fst $ test seed)

Not bad but a bit lumpy. Let’s try a few runs and see if we can smooth things out.

> hists :: [Histogram V.Vector BinD Double]
> hists = parMap rpar hist [3,4..102]
> emptyHist :: Histogram V.Vector BinD Double
> emptyHist = fillBuilder hb (replicate numBins 0)
> 
> smoothHist :: Histogram V.Vector BinD Double
> smoothHist = foldl' (H.zip (+)) emptyHist hists

Quite nice and had my machine running at 750% with +RTS -N8.

Comparison

Let’s create the same histogram but from the posterior created analytically.

> analPosterior :: [Double]
> analPosterior =
>   evalState (replicateM 100000 (sample (Normal mu_1 (sqrt sigma1))))
>   (pureMT $ fromIntegral 5)
> 
> histAnal :: Histogram V.Vector BinD Double
> histAnal = fillBuilder hb analPosterior

And then compare them. Because they overlap so well, we show the MCMC result, the two together, and the analytic result on separate charts.

PostAmble

Normally with BlogLiteratelyD, we can generate diagrams on the fly. However, here we want to run the simulations in parallel so we need to actually compile something.

ghc -O2 ConjMCMCSimple.lhs -main-is ConjMCMCSimple -threaded -fforce-recomp
> displayHeader :: FilePath -> Diagram B R2 -> IO ()
> displayHeader fn =
>   mainRender ( DiagramOpts (Just 900) (Just 700) fn
>              , DiagramLoopOpts False Nothing 0
>              )
> main :: IO ()
> main = do
>   displayHeader "http://idontgetoutmuch.files.wordpress.com/2014/03/HistMCMC.png"
>     (barDiag MCMC
>      (zip (map fst $ asList (hist 2)) (map snd $ asList (hist 2)))
>      (zip (map fst $ asList histAnal) (map snd $ asList histAnal)))
> 
>   displayHeader "http://idontgetoutmuch.files.wordpress.com/2014/03/HistMCMCAnal.png"
>     (barDiag MCMCAnal
>      (zip (map fst $ asList (hist 2)) (map snd $ asList (hist 2)))
>      (zip (map fst $ asList histAnal) (map snd $ asList histAnal)))
> 
>   displayHeader "http://idontgetoutmuch.files.wordpress.com/2014/03/HistAnal.png"
>     (barDiag Anal
>      (zip (map fst $ asList (hist 2)) (map snd $ asList (hist 2)))
>      (zip (map fst $ asList histAnal) (map snd $ asList histAnal)))
> 
>   displayHeader "http://idontgetoutmuch.files.wordpress.com/2014/03/SmoothHistMCMC.png"
>     (barDiag MCMC
>      (zip (map fst $ asList smoothHist) (map snd $ asList smoothHist))
>      (zip (map fst $ asList histAnal) (map snd $ asList histAnal)))

Laplace’s Equation in Haskell: Using a DSL for Stencils

Introduction

Suppose we have a square thin plate of metal and we hold each of its edges at a temperature which may vary along the edge but is fixed for all time. After some period depending on the conductivity of the metal, the temperature at every point on the plate will have stabilised. What is the temperature at any point?

We can calculate this by solving Laplace’s equation \nabla^2 \phi = 0 in 2 dimensions. Apart from the preceding motivation, a more compelling reason for doing so is that it is a moderately simple equation, in so far as partial differential equations are simple, that has been well studied for centuries.

In Haskell terms this gives us the opportunity to use the repa library and also hmatrix, which is based on LAPACK (amongst other libraries), albeit the latter only for illustrative purposes.

I had originally intended this blog to contain a comparison of repa’s performance against an equivalent C program, even though this has already been undertaken by the repa team in their various publications. And indeed it is still my intention to produce such a comparison. However, as I investigated further, it turned out that a fair amount of comparison work has already been done by a team from Intel, which suggests there is currently a performance gap but one which is not so large that it outweighs the other benefits of Haskell.

To be more specific, one way in which using repa stands out from the equivalent C implementation is that it gives a language in which we can specify the stencil being used to solve the equation. As an illustration we substitute the nine point method for the five point method merely by changing the stencil.

A Motivating Example: The Steady State Heat Equation

Fourier’s law states that the rate of heat transfer or the flux \boldsymbol{\sigma} is proportional to the negative temperature gradient, as heat flows from hot to cold, and further that it flows in the direction of greatest temperature change. We can write this as

\displaystyle  \boldsymbol{\sigma} = -k\nabla \phi

where \phi : \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R} is the temperature at any given point on the plate and k is the conductivity of the metal.

Moreover, we know that for any region on the plate, the total amount of heat flowing in must be balanced by the amount of heat flowing out. We can write this as

\displaystyle  \nabla \cdot \boldsymbol{\sigma} = 0

Substituting the first equation into the second we obtain Laplace’s equation

\displaystyle  \nabla^2 \phi = 0

For example, suppose we hold the temperature of the edges of the plate as follows

\displaystyle  \begin{matrix}  \phi(x, 0) = 1 & \phi(x, 1) = 2 & \phi(0, y) = 1 & \phi(1, y) = 2  \end{matrix}

then after some time the temperature of the plate will be as shown in the heatmap below.

Notes:

  1. Red is hot.

  2. Blue is cold.

  3. The heatmap is created by a finite difference method described below.

  4. The y-axis points down (not up) i.e. \phi(x,1) is at the bottom, reflecting the fact that we are using an array in the finite difference method and rows go down not up.

  5. The corners are grey because in the five point finite difference method these play no part in determining temperatures in the interior of the plate.

Colophon

Since the book I am writing contains C code (for performance comparisons), I need a way of being able to compile and run this code and include it “as is” in the book. Up until now, all my blog posts have contained Haskell and so I have been able to use BlogLiteratelyD which allows me to include really nice diagrams. But clearly this tool wasn’t really designed to handle other languages (although I am sure it could be made to do so).

Using pandoc’s scripting capability with the small script provided

#!/usr/bin/env runhaskell
import Text.Pandoc.JSON

doInclude :: Block -> IO Block
doInclude cb@(CodeBlock ("verbatim", classes, namevals) contents) =
  case lookup "include" namevals of
       Just f     -> return . (\x -> Para [Math DisplayMath x]) =<< readFile f
       Nothing    -> return cb
doInclude cb@(CodeBlock (id, classes, namevals) contents) =
  case lookup "include" namevals of
       Just f     -> return . (CodeBlock (id, classes, namevals)) =<< readFile f
       Nothing    -> return cb
doInclude x = return x

main :: IO ()
main = toJSONFilter doInclude

I can then include C code blocks like this

~~~~ {.c include="Chap1a.c"}
~~~~

And format the whole document like this

pandoc -s Chap1.lhs --filter=./Include -t markdown+lhs > Chap1Expanded.lhs
BlogLiteratelyD Chap1Expanded.lhs > Chap1.html

Sadly, the C doesn’t get syntax highlighting but this will do for now.

PS Sadly, WordPress doesn’t seem to be able to handle \color{red} and \color{blue} in LaTeX so there are some references to blue and red which do not render.

Acknowledgements

A lot of the code for this post is taken from the repa package itself. Many thanks to the repa team for providing the package and the example code.

Haskell Preamble

> {-# OPTIONS_GHC -Wall                      #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing   #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults    #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind   #-}
> {-# OPTIONS_GHC -fno-warn-missing-methods  #-}
> {-# OPTIONS_GHC -fno-warn-orphans          #-}
> {-# LANGUAGE BangPatterns                  #-}
> {-# LANGUAGE TemplateHaskell               #-}
> {-# LANGUAGE QuasiQuotes                   #-}
> {-# LANGUAGE NoMonomorphismRestriction     #-}
> module Chap1 (
>     module Control.Applicative
>   , solveLaplaceStencil
>   , useBool
>   , boundMask
>   , boundValue
>   , bndFnEg1
>   , fivePoint
>   , ninePoint
>   , testStencil5
>   , testStencil9
>   , analyticValue
>   , slnHMat4
>   , slnHMat5
>   , testJacobi4
>   , testJacobi6
>   , bndFnEg3
>   , runSolver
>   , s5
>   , s9
>   ) where
>
> import Data.Array.Repa                   as R
> import Data.Array.Repa.Unsafe            as R
> import Data.Array.Repa.Stencil           as A
> import Data.Array.Repa.Stencil.Dim2      as A
> import Prelude                           as P
> import Data.Packed.Matrix
> import Numeric.LinearAlgebra.Algorithms
> import Chap1Aux
> import Control.Applicative

Laplace’s Equation: The Five Point Formula

We show how to apply finite difference methods to Laplace’s equation:

\displaystyle  \nabla^2 u = 0

where

\displaystyle  \nabla^2 = \frac{\partial^2}{\partial x^2} +\frac{\partial^2}{\partial y^2}

For a sufficiently smooth function (see (Iserles 2009, chap. 8)) we have

\displaystyle  \begin{aligned}  \frac{\partial^2 u}{\partial x^2}\mathop{\Bigg|_{x = x_0 + k\Delta x}}_{y = y_0 + l\Delta x} &= \frac{1}{(\Delta x)^2}\Delta_{0,x}^2 u_{k,l} + \mathcal{O}((\Delta x)^2) \\  \frac{\partial^2 u}{\partial y^2}\mathop{\Bigg|_{x = x_0 + k\Delta x}}_{y = y_0 + l\Delta x} &= \frac{1}{(\Delta x)^2}\Delta_{0,y}^2 u_{k,l} + \mathcal{O}((\Delta x)^2)  \end{aligned}

where the central difference operator \Delta_0 is defined as

\displaystyle  (\Delta_0 z)_k \triangleq z_{k + \frac{1}{2}} - z_{k - \frac{1}{2}}
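
Applying the operator twice gives the familiar second difference

\displaystyle  (\Delta_0^2 z)_k = z_{k+1} - 2z_k + z_{k-1}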

We are therefore led to consider the five point difference scheme.

\displaystyle  \frac{1}{(\Delta x)^2}(\Delta_{0,x}^2 + \Delta_{0,y}^2) u_{k,l} = 0

We can re-write this explicitly as

\displaystyle  u_{k-1,l} + u_{k+1,l} + u_{k,l-1} + u_{k,l+1} - 4u_{k,l} = 0

Specifically for the grid point (2,1) in a 4 \times 4 grid we have

\displaystyle  {{u_{1,1}}} + {{u_{3,1}}} + {{u_{2,0}}} + {{u_{2,2}}} - 4{{u_{2,1}}} = 0

where blue indicates that the point is an interior point and red indicates that the point is a boundary point. For Dirichlet boundary conditions (which is all we consider in this post), the values at the boundary points are known.

We can write the entire set of equations for this grid as

\displaystyle  \begin{bmatrix}  -4.0 & 1.0 & 1.0 & 0.0 \\  1.0 & -4.0 & 0.0 & 1.0 \\  1.0 & 0.0 & -4.0 & 1.0 \\  0.0 & 1.0 & 1.0 & -4.0  \end{bmatrix}  \begin{bmatrix}  {{u_{11}}} \\  {{u_{21}}} \\  {{u_{12}}} \\  {{u_{22}}}  \end{bmatrix}  =  \begin{bmatrix}  -{{u_{10}}} + -{{u_{01}}} \\  -{{u_{20}}} + -{{u_{31}}} \\  -{{u_{02}}} + -{{u_{13}}} \\  -{{u_{23}}} + -{{u_{32}}}  \end{bmatrix}

A Very Simple Example

Let us take the boundary conditions to be

\displaystyle  \begin{matrix}  u(x, 0) = 1 & u(x, 1) = 2 & u(0, y) = 1 & u(1, y) = 2  \end{matrix}

With our 4 \times 4 grid we can solve this exactly using the hmatrix package which has a binding to LAPACK.

First we create a 4 \times 4 matrix in hmatrix form

> simpleEgN :: Int
> simpleEgN = 4 - 1
>
> matHMat4 :: IO (Matrix Double)
> matHMat4 = do
>   matRepa <- computeP $ mkJacobiMat simpleEgN :: IO (Array U DIM2 Double)
>   return $ ((simpleEgN - 1) * (simpleEgN - 1)) >< ((simpleEgN - 1) * (simpleEgN - 1)) $ toList matRepa
ghci> matHMat4
  (4><4)
   [ -4.0,  1.0,  1.0,  0.0
   ,  1.0, -4.0,  0.0,  1.0
   ,  1.0,  0.0, -4.0,  1.0
   ,  0.0,  1.0,  1.0, -4.0 ]

Next we create the column vector as prescribed by the boundary conditions

> bndFnEg1 :: Int -> Int -> (Int, Int) -> Double
> bndFnEg1 _ m (0, j) |           j > 0 && j < m = 1.0
> bndFnEg1 n m (i, j) | i == n && j > 0 && j < m = 2.0
> bndFnEg1 n _ (i, 0) |           i > 0 && i < n = 1.0
> bndFnEg1 n m (i, j) | j == m && i > 0 && i < n = 2.0
> bndFnEg1 _ _ _                                 = 0.0
> bnd1 :: Int -> [(Int, Int)] -> Double
> bnd1 n = negate .
>          sum .
>          P.map (bndFnEg1 n n)
> bndHMat4 :: Matrix Double
> bndHMat4 = ((simpleEgN - 1) * (simpleEgN - 1)) >< 1 $
>            mkJacobiBnd fromIntegral bnd1 3
ghci>  bndHMat4
  (4><1)
   [ -2.0
   , -3.0
   , -3.0
   , -4.0 ]
> slnHMat4 :: IO (Matrix Double)
> slnHMat4 = matHMat4 >>= return . flip linearSolve bndHMat4
ghci> slnHMat4
  (4><1)
   [               1.25
   ,                1.5
   , 1.4999999999999998
   , 1.7499999999999998 ]

The Jacobi Method

Inverting a matrix is expensive so instead we use the (possibly most) classical of all iterative methods, Jacobi iteration. Given A\boldsymbol{x} = \boldsymbol{b} and an estimated solution \boldsymbol{x}_i^{[k]}, we can generate an improved estimate \boldsymbol{x}_i^{[k+1]}. See (Iserles 2009, chap. 12) for the details on convergence and convergence rates.

\displaystyle  \boldsymbol{x}_i^{[k+1]} = \frac{1}{A_{i,i}}\Bigg[\boldsymbol{b}_i - \sum_{j \neq i} A_{i,j}\boldsymbol{x}_j^{[k]}\Bigg]
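
Before specialising to the grid, here is the generic update written as a sketch in plain Haskell (purely illustrative and my own aside; the repa version below never forms the matrix explicitly):

-- One Jacobi sweep for A x = b: x_i' = (b_i - sum over j /= i of A_ij x_j) / A_ii
jacobiStep :: [[Double]] -> [Double] -> [Double] -> [Double]
jacobiStep a b x =
  [ (bi - sum [ aij * xj | (j, (aij, xj)) <- zip [0..] (zip row x), j /= i ])
    / (row !! i)
  | (i, (row, bi)) <- zip [0..] (zip a b) ]

Iterating this from any starting vector converges for (strictly or irreducibly) diagonally dominant matrices such as the one above.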

The simple example above does not really give a clear picture of what happens in general during the update of the estimate. Here is a larger example

Sadly, WordPress does not seem to be able to render 16 \times 16 matrices written in LaTeX so you will have to look at the output from hmatrix in the larger example below. You can see that this matrix is sparse and has a very clear pattern.

Expanding the matrix equation for a point not on the boundary we get

\displaystyle  x_{i,j}^{[k+1]} = \frac{1}{4}(x^{[k]}_{i-1,j} + x^{[k]}_{i,j-1} + x^{[k]}_{i+1,j} + x^{[k]}_{i,j+1})

Clearly the values of the points on the boundary are fixed and must remain at those values for every iteration.

Here is the method using repa. To produce an improved estimate, we define a function relaxLaplace and we pass in a repa matrix representing our original estimate \boldsymbol{x}_i^{[k]} and receive the one step update \boldsymbol{x}_i^{[k+1]} also as a repa matrix.

We pass in a boundary condition mask which specifies which points are boundary points; a point is a boundary point if its value is 1.0 and not if its value is 0.0.

> boundMask :: Monad m => Int -> Int -> m (Array U DIM2 Double)
> boundMask gridSizeX gridSizeY = computeP $
>                                 fromFunction (Z :. gridSizeX + 1 :. gridSizeY + 1) f
>   where
>     f (Z :. _ix :.  iy) | iy == 0         = 0
>     f (Z :. _ix :.  iy) | iy == gridSizeY = 0
>     f (Z :.  ix :. _iy) | ix == 0         = 0
>     f (Z :.  ix :. _iy) | ix == gridSizeX = 0
>     f _                                   = 1

It would be better to use at least a Bool, as the example below shows, but we wish to modify the code from the repa git repo as little as possible.

> useBool :: IO (Array U DIM1 Double)
> useBool = computeP $
>           R.map (fromIntegral . fromEnum) $
>           fromFunction (Z :. (3 :: Int)) (const True)
ghci> useBool
  AUnboxed (Z :. 3) (fromList [1.0,1.0,1.0])

We further pass in the boundary conditions. We construct these by using a function which takes the grid size in the x direction, the grid size in the y direction and a given pair of co-ordinates in the grid and returns a value at this position.

> boundValue :: Monad m =>
>               Int ->
>               Int ->
>               (Int -> Int -> (Int, Int) -> Double) ->
>               m (Array U DIM2 Double)
> boundValue gridSizeX gridSizeY bndFn =
>   computeP $
>   fromFunction (Z :. gridSizeX + 1 :. gridSizeY + 1) g
>   where
>     g (Z :. ix :. iy) = bndFn gridSizeX gridSizeY (ix, iy)

Note that we only update an element in the repa matrix representation of the vector if it is not on the boundary.

> relaxLaplace
>   :: Monad m
>      => Array U DIM2 Double
>      -> Array U DIM2 Double
>      -> Array U DIM2 Double
>      -> m (Array U DIM2 Double)
>
> relaxLaplace arrBoundMask arrBoundValue arr
>   = computeP
>     $ R.zipWith (+) arrBoundValue   -- then restore the prescribed boundary values
>     $ R.zipWith (*) arrBoundMask    -- zero out the boundary points
>     $ unsafeTraverse arr id elemFn  -- five-point average in the interior
>   where
>     _ :. height :. width
>       = extent arr
>
>     elemFn !get !d@(sh :. i :. j)
>       = if isBorder i j
>         then  get d
>         else (get (sh :. (i-1) :. j)
>               +   get (sh :. i     :. (j-1))
>               +   get (sh :. (i+1) :. j)
>               +   get (sh :. i     :. (j+1))) / 4
>     isBorder !i !j
>       =  (i == 0) || (i >= width  - 1)
>          || (j == 0) || (j >= height - 1)

We can use this to iterate as many times as we like.

> solveLaplace
>   :: Monad m
>         => Int
>         -> Array U DIM2 Double
>         -> Array U DIM2 Double
>         -> Array U DIM2 Double
>         -> m (Array U DIM2 Double)
>
> solveLaplace steps arrBoundMask arrBoundValue arrInit
>  = go steps arrInit
>   where
>     go !i !arr
>       | i == 0
>       = return     arr
>
>       | otherwise
>       = do arr' <- relaxLaplace arrBoundMask arrBoundValue arr
>            go (i - 1) arr'

For our small example, we set the initial array to 0 at every point. Note that the function which updates the grid, relaxLaplace, will immediately over-write the points on the boundary with values given by the boundary condition.

> mkInitArrM :: Monad m => Int -> m (Array U DIM2 Double)
> mkInitArrM n = computeP $ fromFunction (Z :. (n + 1) :. (n + 1)) (const 0.0)

We can now test the Jacobi method

> testJacobi4 :: Int -> IO (Array U DIM2 Double)
> testJacobi4 nIter = do
>   mask    <- boundMask simpleEgN simpleEgN
>   val     <- boundValue simpleEgN simpleEgN bndFnEg1
>   initArr <- mkInitArrM simpleEgN
>   solveLaplace nIter mask val initArr

After 55 iterations, we obtain convergence up to the limit of accuracy of double precision floating point numbers. Note that this only provides a solution of the matrix equation, which is itself only an approximation to Laplace’s equation. To obtain a more accurate result for the latter we need to use a smaller grid size.

ghci> testJacobi4 55 >>= return . pPrint
  [0.0, 1.0, 1.0, 0.0]
  [1.0, 1.25, 1.5, 2.0]
  [1.0, 1.5, 1.75, 2.0]
  [0.0, 2.0, 2.0, 0.0]

A Larger Example

Armed with Jacobi, let us now solve a large example.

> largerEgN, largerEgN2 :: Int
> largerEgN = 6 - 1
> largerEgN2 = (largerEgN - 1) * (largerEgN - 1)

First let us use hmatrix.

> matHMat5 :: IO (Matrix Double)
> matHMat5 = do
>   matRepa <- computeP $ mkJacobiMat largerEgN :: IO (Array U DIM2 Double)
>   return $ largerEgN2 >< largerEgN2 $ toList matRepa
ghci> matHMat5
  (16><16)
   [ -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
   ,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
   ,  0.0,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
   ,  0.0,  0.0,  1.0, -4.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
   ,  1.0,  0.0,  0.0,  0.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
   ,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
   ,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0
   ,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0
   ,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0
   ,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0
   ,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0
   ,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  0.0,  0.0,  0.0,  1.0
   ,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0, -4.0,  1.0,  0.0,  0.0
   ,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0,  0.0
   ,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0
   ,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0 ]
> bndHMat5 :: Matrix Double
> bndHMat5 = largerEgN2 >< 1 $ mkJacobiBnd fromIntegral bnd1 5
ghci>  bndHMat5
  (16><1)
   [ -2.0
   , -1.0
   , -1.0
   , -3.0
   , -1.0
   ,  0.0
   ,  0.0
   , -2.0
   , -1.0
   ,  0.0
   ,  0.0
   , -2.0
   , -3.0
   , -2.0
   , -2.0
   , -4.0 ]
> slnHMat5 :: IO (Matrix Double)
> slnHMat5 = matHMat5 >>= return . flip linearSolve bndHMat5
ghci> slnHMat5
  (16><1)
   [ 1.0909090909090908
   , 1.1818181818181817
   , 1.2954545454545454
   ,                1.5
   , 1.1818181818181817
   , 1.3409090909090906
   , 1.4999999999999996
   , 1.7045454545454544
   , 1.2954545454545459
   ,                1.5
   , 1.6590909090909092
   ,  1.818181818181818
   , 1.5000000000000004
   , 1.7045454545454548
   , 1.8181818181818186
   , 1.9090909090909092 ]

And for comparison, let us use the Jacobi method.

> testJacobi6 :: Int -> IO (Array U DIM2 Double)
> testJacobi6 nIter = do
>   mask    <- boundMask largerEgN largerEgN
>   val     <- boundValue largerEgN largerEgN bndFnEg1
>   initArr <- mkInitArrM largerEgN
>   solveLaplace nIter mask val initArr
ghci> testJacobi6 178 >>= return . pPrint
  [0.0, 1.0, 1.0, 1.0, 1.0, 0.0]
  [1.0, 1.0909090909090908, 1.1818181818181817, 1.2954545454545454, 1.5, 2.0]
  [1.0, 1.1818181818181817, 1.3409090909090908, 1.5, 1.7045454545454546, 2.0]
  [1.0, 1.2954545454545454, 1.5, 1.6590909090909092, 1.8181818181818183, 2.0]
  [1.0, 1.5, 1.7045454545454546, 1.8181818181818181, 1.9090909090909092, 2.0]
  [0.0, 2.0, 2.0, 2.0, 2.0, 0.0]

Note that with a larger grid we need more iterations (178) before the Jacobi method converges.

Stencils

Since we are functional programmers, our natural inclination is to see if we can find an abstraction for (at least some) numerical methods. We notice that we are updating each grid element (except the boundary elements) by taking the North, East, South and West surrounding squares and calculating a linear combination of these.

Repa provides this abstraction and we can describe the update calculation as a stencil. (Lippmeier and Keller 2011) gives full details of stencils in repa.

> fivePoint :: Stencil DIM2 Double
> fivePoint = [stencil2|  0 1 0
>                         1 0 1
>                         0 1 0 |]

Using stencils allows us to modify our numerical method with a very simple change. For example, suppose we wish to use the nine point method (which is \mathcal{O}((\Delta x)^4)!); then we need only write down the stencil for it, which additionally includes the North West, North East, South East and South West neighbours.

> ninePoint :: Stencil DIM2 Double
> ninePoint = [stencil2| 1 4 1
>                        4 0 4
>                        1 4 1 |]

We modify our solver above to take a stencil and also an Int which is used to normalise the factors in the stencil. For example, in the five point method this is 4.

> solveLaplaceStencil :: Monad m
>                        => Int
>                        -> Stencil DIM2 Double
>                        -> Int
>                        -> Array U DIM2 Double
>                        -> Array U DIM2 Double
>                        -> Array U DIM2 Double
>                        -> m (Array U DIM2 Double)
> solveLaplaceStencil !steps !st !nF !arrBoundMask !arrBoundValue !arrInit
>  = go steps arrInit
>  where
>    go 0 !arr = return arr
>    go n !arr
>      = do arr' <- relaxLaplace arr
>           go (n - 1) arr'
>
>    relaxLaplace arr
>      = computeP
>      $ R.szipWith (+) arrBoundValue
>      $ R.szipWith (*) arrBoundMask
>      $ R.smap (/ (fromIntegral nF))
>      $ mapStencil2 (BoundConst 0)
>      st arr

We can then test both methods.

> testStencil5 :: Int -> Int -> IO (Array U DIM2 Double)
> testStencil5 gridSize nIter = do
>   mask    <- boundMask gridSize gridSize
>   val     <- boundValue gridSize gridSize bndFnEg1
>   initArr <- mkInitArrM gridSize
>   solveLaplaceStencil nIter fivePoint 4 mask val initArr
ghci> testStencil5 5 178 >>= return . pPrint
  [0.0, 1.0, 1.0, 1.0, 1.0, 0.0]
  [1.0, 1.0909090909090908, 1.1818181818181817, 1.2954545454545454, 1.5, 2.0]
  [1.0, 1.1818181818181817, 1.3409090909090908, 1.5, 1.7045454545454546, 2.0]
  [1.0, 1.2954545454545454, 1.5, 1.6590909090909092, 1.8181818181818183, 2.0]
  [1.0, 1.5, 1.7045454545454546, 1.8181818181818181, 1.9090909090909092, 2.0]
  [0.0, 2.0, 2.0, 2.0, 2.0, 0.0]
> testStencil9 :: Int -> Int -> IO (Array U DIM2 Double)
> testStencil9 gridSize nIter = do
>   mask    <- boundMask gridSize gridSize
>   val     <- boundValue gridSize gridSize bndFnEg1
>   initArr <- mkInitArrM gridSize
>   solveLaplaceStencil nIter ninePoint 20 mask val initArr
ghci> testStencil9 5 178 >>= return . pPrint
  [0.0, 1.0, 1.0, 1.0, 1.0, 0.0]
  [1.0, 1.0222650172207302, 1.1436086139049304, 1.2495750646811328, 1.4069077172153264, 2.0]
  [1.0, 1.1436086139049304, 1.2964314331751594, 1.4554776038855908, 1.6710941204241017, 2.0]
  [1.0, 1.2495750646811328, 1.455477603885591, 1.614523774596022, 1.777060571200304, 2.0]
  [1.0, 1.4069077172153264, 1.671094120424102, 1.777060571200304, 1.7915504172099226, 2.0]
  [0.0, 2.0, 2.0, 2.0, 2.0, 0.0]

We note that the methods give different answers. Before explaining this, let us examine one more example where the exact solution is known.

We take the example from (Iserles 2009, chap. 8) where the boundary conditions are:

\displaystyle  \begin{aligned}  \phi(x, 0) &= 0 \\  \phi(x, 1) &= \frac{1}{(1 + x)^2 + 1} \\  \phi(0, y) &= \frac{y}{1 + y^2} \\  \phi(1, y) &= \frac{y}{4 + y^2}  \end{aligned}

This has the exact solution (it is the imaginary part of -1/(1+z) with z = x + iy, a function holomorphic away from z = -1, and hence harmonic)

\displaystyle  u(x, y) = \frac{y}{(1 + x)^2 + y^2}

And we can calculate the values of this function on a grid.

> analyticValue :: Monad m => Int -> m (Array U DIM2 Double)
> analyticValue gridSize = computeP $ fromFunction (Z :. gridSize + 1 :. gridSize + 1) f
>   where
>     f (Z :. ix :. iy) = y / ((1 + x)^2 + y^2)
>       where
>         y = fromIntegral iy / fromIntegral gridSize
>         x = fromIntegral ix / fromIntegral gridSize

Let us also solve it using the Jacobi method with a five point stencil and a nine point stencil. Here is the encoding of the boundary values.

> bndFnEg3 :: Int -> Int -> (Int, Int) -> Double
> bndFnEg3 _ m (0, j) |           j >= 0 && j <  m = y / (1 + y^2)
>   where y = (fromIntegral j) / (fromIntegral m)
> bndFnEg3 n m (i, j) | i == n && j >  0 && j <= m = y / (4 + y^2)
>   where y = fromIntegral j / fromIntegral m
> bndFnEg3 n _ (i, 0) |           i >  0 && i <= n = 0.0
> bndFnEg3 n m (i, j) | j == m && i >= 0 && i <  n = 1 / ((1 + x)^2 + 1)
>   where x = fromIntegral i / fromIntegral n
> bndFnEg3 _ _ _                                   = 0.0

We create a function to run a solver.

> runSolver ::
>   Monad m =>
>   Int ->
>   Int ->
>   (Int -> Int -> (Int, Int) -> Double) ->
>   (Int ->
>    Array U DIM2 Double ->
>    Array U DIM2 Double ->
>    Array U DIM2 Double ->
>    m (Array U DIM2 Double)) ->
>   m (Array U DIM2 Double)
> runSolver nGrid nIter boundaryFn solver = do
>   mask    <- boundMask nGrid nGrid
>   val     <- boundValue nGrid nGrid boundaryFn
>   initArr <- mkInitArrM nGrid
>   solver nIter mask val initArr

And put the five point and nine point solvers in the appropriate form.

> s5, s9 :: Monad m =>
>           Int ->
>           Array U DIM2 Double ->
>           Array U DIM2 Double ->
>           Array U DIM2 Double ->
>           m (Array U DIM2 Double)
> s5 n = solveLaplaceStencil n fivePoint 4
> s9 n = solveLaplaceStencil n ninePoint 20

And now we can see that the errors between the analytic solution and the five point method with a grid size of 8 are \cal{O}(10^{-4}).

ghci> liftA2 (-^) (analyticValue 7) (runSolver 7 200 bndFnEg3 s5) >>= return . pPrint
  [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
  [0.0, -3.659746856576884e-4, -5.792613003869074e-4, -5.919333582729558e-4, -4.617020226472812e-4, -2.7983716661839075e-4, -1.1394184484148084e-4, 0.0]
  [0.0, -4.0566163490589335e-4, -6.681826442424543e-4, -7.270498771604073e-4, -6.163531890425178e-4, -4.157604876017795e-4, -1.9717865146007263e-4, 0.0]
  [0.0, -3.4678314565880775e-4, -5.873627029994999e-4, -6.676042377350699e-4, -5.987527967581119e-4, -4.318102416048242e-4, -2.2116263241278578e-4, 0.0]
  [0.0, -2.635436147627873e-4, -4.55055831294085e-4, -5.329636937312088e-4, -4.965786933938399e-4, -3.7401874422060555e-4, -2.0043638973538114e-4, 0.0]
  [0.0, -1.7773949138776696e-4, -3.1086347862371855e-4, -3.714478154303591e-4, -3.5502855035249303e-4, -2.7528200465845587e-4, -1.5207424182367424e-4, 0.0]
  [0.0, -9.188482657347674e-5, -1.6196970595228066e-4, -1.9595925291693295e-4, -1.903987061394885e-4, -1.5064155667735002e-4, -8.533752030373543e-5, 0.0]
  [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

But using the nine point method significantly improves this.

ghci> liftA2 (-^) (analyticValue 7) (runSolver 7 200 bndFnEg3 s9) >>= return . pPrint
  [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
  [0.0, -2.7700522166329566e-7, -2.536751151638317e-7, -5.5431452705700934e-8, 7.393573120406671e-8, 8.403487600228132e-8, 4.188249685954659e-8, 0.0]
  [0.0, -2.0141002235463112e-7, -2.214645128950643e-7, -9.753369634157849e-8, 2.1887763435035623e-8, 6.305346988977334e-8, 4.3482495659663556e-8, 0.0]
  [0.0, -1.207601019737048e-7, -1.502713803391842e-7, -9.16850228516175e-8, -1.4654435886995998e-8, 2.732932558036083e-8, 2.6830928867571657e-8, 0.0]
  [0.0, -6.883445567013036e-8, -9.337114890983766e-8, -6.911451747027009e-8, -2.6104150896433254e-8, 4.667329939200826e-9, 1.1717137371469732e-8, 0.0]
  [0.0, -3.737430460254432e-8, -5.374955715231611e-8, -4.483740087546373e-8, -2.299792309368165e-8, -4.122571728437663e-9, 3.330287268177301e-9, 0.0]
  [0.0, -1.6802381437586167e-8, -2.5009212159532446e-8, -2.229028683853329e-8, -1.3101905282919546e-8, -4.1197137368165215e-9, 3.909041701444238e-10, 0.0]
  [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

Bibliography

Iserles, A. 2009. A First Course in the Numerical Analysis of Differential Equations. Cambridge University Press. http://books.google.co.uk/books?id=M0tkw4oUucoC.

Lippmeier, Ben, and Gabriele Keller. 2011. “Efficient Parallel Stencil Convolution in Haskell.” In Proceedings of the 4th ACM Symposium on Haskell, 59–70. Haskell ’11. New York, NY, USA: ACM. doi:10.1145/2034675.2034684. http://doi.acm.org/10.1145/2034675.2034684.
