# Introduction

Tribbles originate from the planet Iota Geminorum IV and, according to Dr. McCoy, are born pregnant. No further details are given but we can follow Gurtin and MacCamy (1974) and perhaps recover some of what happens on the Enterprise.

Of course, age-dependent population models are of more than fictional use and can be applied, for example, to modelling the progression of malaria in infected hosts. We roughly follow J. J. Thibodeaux and Schlittenhardt (2011), who themselves reference Belair, Mackey, and Mahaffy (1995).

Aside from the models themselves, two things may be of interest:

• The use of the hmatrix package, which now contains functions to solve the tridiagonal systems used in this post. You will need to use HEAD until a new Hackage / Stackage release is made. My future plan is to use CUDA via accelerate and compare.

• The use of dimensions in a medium-sized example. It would have been nice to have tried the units package but it seemed harder work to use and, as ever, “Time’s wingèd chariot” was the enemy.

# Age-Dependent Populations

## McKendrick / von Foerster

McKendrick and von Foerster independently derived a model of age-dependent population growth.

Let $n(a,t)$ be the density of females of age $a$ at time $t$. The number of females between ages $a$ and $a + \delta a$ is thus $n(a, t)\delta a$. Assuming individuals are born at age $0$, we have

$\displaystyle \frac{\partial}{\partial t}(n(a, t)\delta a) = J(a, t) - J(a + \delta a, t) - \mu(a, t)n(a, t)\delta a$

where $\mu(a, t)$ is the death rate density and $J(a, t)$ denotes the rate of entry to the cohort of age $a$. Dividing by $\delta a$ we obtain

$\displaystyle \frac{\partial}{\partial t}n(a, t) = - \frac{J(a + \delta a, t) - J(a, t)}{\delta a} - \mu(a, t)n(a, t)$

which in the limit becomes

$\displaystyle \frac{\partial}{\partial t}n(a, t) = - \frac{\partial J(a, t)}{\partial a} - \mu(a, t)n(a, t)$

We can further assume that the rate of entry to a cohort is proportional to the density of individuals times a velocity of aging $v(a, t)$.

$\displaystyle J(a, t) = n(a, t)v(a, t)$

Occasionally there is some reason to assume that aging one year is different from experiencing one year, but here we further assume $v = 1$.

We thus obtain

$\displaystyle \frac{\partial n(a, t)}{\partial t} + \frac{\partial n(a, t)}{\partial a} = - \mu(a, t)n(a, t)$
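With a constant death rate $\mu$ and $v = 1$, this equation can be solved along characteristics: for $a \ge t$, $n(a, t) = n_0(a - t)e^{-\mu t}$. A quick plain-Haskell sketch checking the transport property of that solution; the initial density `n0` and the rate `mu` below are arbitrary illustrative choices, not taken from the post:

```haskell
-- Closed-form solution of the McKendrick / von Foerster equation along
-- characteristics, for constant death rate mu and aging velocity v = 1.
mu :: Double
mu = 0.1

-- An arbitrary illustrative initial age distribution.
n0 :: Double -> Double
n0 a = exp (negate a)

n :: Double -> Double -> Double
n a t
  | a >= t    = n0 (a - t) * exp (negate mu * t)
  | otherwise = 0  -- individuals born after t = 0 also need the birth law

-- Advancing time by h shifts along the characteristic and applies the
-- survival factor exp (-mu * h).
transportHolds :: Double -> Double -> Double -> Bool
transportHolds a t h =
  abs (n (a + h) (t + h) - n a t * exp (negate mu * h)) < 1e-12
```

The check is exact up to floating point because both sides reduce to $n_0(a - t)e^{-\mu(t+h)}$.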

## Gurtin / MacCamy

To solve any PDE we need boundary and initial conditions. The number of births at time $t$ is

$\displaystyle n(0, t) = \int_0^\infty n(a, t) m(a, N(t))\, \mathrm{d}a$

where $m$ is the natality aka birth-modulus and

$\displaystyle N(t) = \int_0^\infty n(a, t)\, \mathrm{d}a$

and we further assume that the initial condition

$\displaystyle n(a, 0) = n_0(a)$

for some given $n_0$.

Gurtin and MacCamy (1974) focus on the situation where

$\displaystyle m(a, N(t)) = \beta(N)e^{-\alpha a}$

and we can also assume that the birth rate of Tribbles decreases exponentially with age and further that Tribbles can live forever. Gurtin and MacCamy (1974) then transform the PDE to obtain a pair of linked ODEs which can then be solved numerically.

Of course, we know what happens on the Enterprise and, rather than continue with this example, let us turn our attention to the more serious subject of malaria.

# Malaria

I realise now that I went a bit overboard with references. Hopefully they don’t interrupt the flow too much.

The World Health Organisation (WHO) estimated that in 2015 there were 214 million new cases of malaria resulting in 438,000 deaths (source: Wikipedia).

The lifecycle of the plasmodium parasite that causes malaria is extremely ingenious. J. J. Thibodeaux and Schlittenhardt (2011) model the human segment of the plasmodium lifecycle and further propose a way of determining an optimal treatment for an infected individual. Hall et al. (2013) also model the effect of an anti-malarial. Let us content ourselves with reproducing part of the paper by J. J. Thibodeaux and Schlittenhardt (2011).

At one part of its sojourn in humans, plasmodium infects erythrocytes aka red blood cells. These latter contain haemoglobin (aka hemoglobin). The process by which red blood cells are produced, erythropoiesis, is modulated in a feedback loop by erythropoietin. The plasmodium parasite severely disrupts this process. Presumably the resulting loss of haemoglobin is one reason that an infected individual feels ill.

As can be seen in the overview by Torbett and Friedman (2009), the full feedback loop is complex. So as not to lose ourselves in the details and following J. J. Thibodeaux and Schlittenhardt (2011) and Belair, Mackey, and Mahaffy (1995), we consider a model with two compartments.

• Precursors: prototype erythrocytes developing in the bone marrow with $p(\mu, t)$ being the density of such cells of age $\mu$ at time $t$.

• Erythrocytes: mature red blood cells circulating in the blood with $m(\nu, t)$ being the density of such cells of age $\nu$ at time $t$.

The dynamics of the two compartments are modelled by

$\displaystyle \begin{aligned} \frac{\partial p(\mu, t)}{\partial t} + g(E(t))\frac{\partial p(\mu, t)}{\partial \mu} &= \sigma(\mu, t, E(t))p(\mu, t) & 0 < \mu < \mu_F & & 0 < t < T \\ \frac{\partial m(\nu, t)}{\partial t} + \phantom{g(E(t))}\frac{\partial m(\nu, t)}{\partial \nu} &= -\gamma(\nu, t, M(t))m(\nu, t) & 0 < \nu < \nu_F & & 0 < t < T \end{aligned}$

where $\sigma(\mu, t, E(t))$ is the birth rate of precursors, $\gamma(\nu, t, M(t))$ is the death rate of erythrocytes, $g(E(t))$ is the maturation rate of precursors, and where

$\displaystyle M(t) = \int_0^{\nu_F} m(\nu, t) \,\mathrm{d}\nu$

As boundary conditions, we have that the rate at which precursors mature must equal the rate at which erythrocytes are produced

$\displaystyle m(0, t) = g(E(t))p(\mu_F, t)$

and the production of precursors depends on the level of erythropoietin

$\displaystyle g(E(t))p(0, t) = \phi(t)E(t)$

where $\phi(t)$ is some proportionality function.

As initial conditions, we have

$\displaystyle \begin{aligned} p(\mu, 0) &= p_0(\mu) \\ m(\nu, 0) &= m_0(\nu) \end{aligned}$

We can further model the erythropoietin dynamics as

$\displaystyle \frac{\mathrm{d}E(t)}{\mathrm{d}t} = f(M(t), t) - a_E(P(t))E(t)$

where $f$ is the feedback function from the kidneys, and the decay rate $a_E$ depends on the total precursor population $P(t)$ (Sawyer, Krantz, and Goldwasser (1987)), although this is often taken to be a constant (I would feel more comfortable with a more recent citation), and where

$\displaystyle P(t) = \int_0^{\mu_F} p(\mu, t) \,\mathrm{d}\mu$
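Later in the post, both $P(t)$ and $M(t)$ are approximated by sums of grid values times the grid spacing. The quadrature involved is just a left-endpoint Riemann sum, sketched here in plain Haskell (the function name `riemann` is mine, for illustration):

```haskell
-- Left-endpoint Riemann sum: the quadrature implicitly used when the
-- integrals P(t) and M(t) are replaced by sums over the age grid.
riemann :: (Double -> Double) -> Double -> Double -> Int -> Double
riemann f a b n = h * sum [ f (a + fromIntegral i * h) | i <- [0 .. n - 1] ]
  where
    h = (b - a) / fromIntegral n
```

For example, `riemann (\x -> x) 0 1 n` tends to $1/2$ as the number of sub-intervals grows, with error $1/(2n)$.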

As initial condition we have

$\displaystyle E(0) = E_0$

## A Finite Difference Attempt

Let us try solving the above model using a finite difference scheme, observing that we currently have no basis for believing that the model has a solution, nor that the finite difference scheme approximates one! We follow J. J. Thibodeaux and Schlittenhardt (2011), who give a proof of convergence, presumably under some conditions; any failure of the scheme is entirely mine.

Divide up the age and time ranges, $[0, \mu_F]$, $[0, \nu_F]$ and $[0, T]$ into equal sub-intervals, $[\mu_i, \mu_{i+1}]$, $[\nu_j, \nu_{j+1}]$ and $[t_k, t_{k+1}]$ where

$\displaystyle \begin{aligned} \mu_i &= i\Delta\mu & & \mathrm{for} & i = 1 \ldots n_1 \\ \nu_j &= j\Delta\nu & & \mathrm{for} & j = 1 \ldots n_2 \\ t_k &= k\Delta t & & \mathrm{for} & k = 1 \ldots K \end{aligned}$

where $\Delta\mu = \mu_F / n_1$, $\Delta\nu = \nu_F / n_2$ and $\Delta t = T / K$.

Writing $p_i^k = p(\mu_i, t_k)$, and similarly for the other quantities, we obtain

$\displaystyle \begin{aligned} \frac{p_i^{k+1} - p_i^k}{\Delta t} + g^k\frac{p_i^{k+1} - p_{i-1}^{k+1}}{\Delta\mu} &= \sigma_i^k p_i^{k+1} \\ \frac{m_j^{k+1} - m_j^k}{\Delta t} + \phantom{g^k}\frac{m_j^{k+1} - m_{j-1}^{k+1}}{\Delta\nu} &= -\gamma_j^k m_j^{k+1} \end{aligned}$

and

$\displaystyle \begin{aligned} \frac{E^{k+1} - E^k}{\Delta t} &= f^k - a_E^k E^{k+1} \\ g^k p_0^{k+1} &= \phi^k E^k \\ m_0^{k+1} &= g^k p_{n_1}^{k+1} \end{aligned}$

Re-arranging we get

$\displaystyle \begin{aligned} -g^k\frac{\Delta t}{\Delta \mu}p_{i-1}^{k+1} + \bigg(1 + g^k\frac{\Delta t}{\Delta \mu} - \Delta t \sigma_i^k\bigg)p_i^{k+1} &= p_i^k \\ -\frac{\Delta t}{\Delta \nu}m_{j-1}^{k+1} + \bigg(1 + \frac{\Delta t}{\Delta \nu} + \Delta t \gamma_j^k\bigg)m_j^{k+1} &= m_j^k \end{aligned}$

Writing

$\displaystyle \begin{aligned} d_{1,i}^k &= 1 + g^k\frac{\Delta t}{\Delta \mu} - \Delta t \sigma_i^k \\ d_{2,i}^k &= 1 + \frac{\Delta t}{\Delta \nu} + \Delta t \gamma_i^k \end{aligned}$

We can express the above in matrix form

$\displaystyle \begin{bmatrix} g^k & 0 & 0 & \ldots & 0 & 0 \\ -g^k\frac{\Delta t}{\Delta \mu} & d_{1,1}^k & 0 & \ldots & 0 & 0\\ 0 & -g^k\frac{\Delta t}{\Delta \mu} & d_{1,2}^k & \ldots & 0 & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ 0 & 0 & 0 & \ldots &\ -g^k\frac{\Delta t}{\Delta \mu} & d_{1,n_1}^k \\ \end{bmatrix} \begin{bmatrix} p_0^{k+1} \\ p_1^{k+1} \\ p_2^{k+1} \\ \ldots \\ p_{n_1}^{k+1} \end{bmatrix} = \begin{bmatrix} \phi^k E^k \\ p_1^k \\ p_2^k \\ \ldots \\ p_{n_1}^k \\ \end{bmatrix}$

$\displaystyle \begin{bmatrix} 1 & 0 & 0 & \ldots & 0 & 0 \\ -\frac{\Delta t}{\Delta \nu} & d_{2,1}^k & 0 & \ldots & 0 & 0\\ 0 & -\frac{\Delta t}{\Delta \nu} & d_{2,2}^k & \ldots & 0 & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ 0 & 0 & 0 & \ldots & -\frac{\Delta t}{\Delta \nu} & d_{2,n_2}^k \\ \end{bmatrix} \begin{bmatrix} m_0^{k+1} \\ m_1^{k+1} \\ m_2^{k+1} \\ \ldots \\ m_{n_2}^{k+1} \end{bmatrix} = \begin{bmatrix} g^k p_{n_1}^{k+1} \\ m_1^k \\ m_2^k \\ \ldots \\ m_{n_2}^k \\ \end{bmatrix}$

Finally we can write

$\displaystyle E^{k+1} = \frac{E^k + \Delta t f^k}{1 + a_E^k\Delta t}$
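The two matrix systems above are tridiagonal, so they can be solved in $O(n)$ with the Thomas algorithm rather than by general Gaussian elimination; this is what hmatrix's `triDiagSolve`, used below, provides. As a sketch of what such a solver does, here is a list-based toy version (names and representation are mine, purely for exposition, not the post's actual implementation):

```haskell
import Data.List (zip4)

-- Thomas algorithm: solve a tridiagonal system in O(n).
-- as: subdiagonal (first element unused), bs: diagonal,
-- cs: superdiagonal (last element unused), ds: right-hand side.
triSolve :: [Double] -> [Double] -> [Double] -> [Double] -> [Double]
triSolve as bs cs ds = backSub (forward (zip4 as bs cs ds))
  where
    -- Forward sweep: eliminate the subdiagonal, keeping the modified
    -- superdiagonal c' and right-hand side d' for each row.
    forward ((_, b0, c0, d0) : rest) = go (c0 / b0, d0 / b0) rest
    forward []                       = []
    go p []                             = [p]
    go p@(c', d') ((a, b, c, d) : rest) =
      let m = b - a * c'
      in  p : go (c / m, (d - a * d') / m) rest
    -- Back substitution from the last row upwards.
    backSub = foldr step []
      where
        step (_,  d') []          = [d']
        step (c', d') xs@(x : _)  = (d' - c' * x) : xs
```

For the $3 \times 3$ system with diagonal $[4,4,4]$, off-diagonals of ones and right-hand side $[5,6,5]$, the solution is $[1,1,1]$.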

> {-# OPTIONS_GHC -Wall #-}
> {-# LANGUAGE TypeFamilies #-}
> {-# LANGUAGE NoImplicitPrelude #-}
> {-# LANGUAGE FlexibleContexts #-}
> {-# LANGUAGE DataKinds #-}
> {-# LANGUAGE TypeOperators #-}

> module Tribbles where

> import qualified Prelude as P

> import Numeric.Units.Dimensional.Prelude hiding (Unit)
> import Numeric.Units.Dimensional

> import Numeric.LinearAlgebra
> import Numeric.Integration.TanhSinh

> import Control.Monad.Writer
> import Control.Monad.Loops (iterateM_)  -- from monad-loops, for the solver loop below


Substances like erythropoietin (EPO) are measured in International Units and these cannot be converted to Moles (see Jelkmann (2009) for much more detail), so we have to pretend EPO really is measured in Moles, as there seems to be no easy way to define what the dimensional package calls a base dimension. A typical amount for a person is 15 milli-IU / milli-litre but this can reach much higher levels after loss of blood.

> muPerMl :: (Fractional a, Num a) => Unit 'NonMetric DConcentration a
> muPerMl = (milli mole) / (milli litre)

> bigE'0 :: Concentration Double
> bigE'0 = 15.0 *~ muPerMl


Let’s set up our grid. We take the grid parameters from Ackleh et al. (2006), but note that J. J. Thibodeaux and Schlittenhardt (2011) seem to have $T = 20$.

> deltaT, deltaMu, deltaNu :: Time Double

> deltaT = 0.05 *~ day
> deltaMu = 0.01 *~ day
> deltaNu = 0.05 *~ day

> bigT :: Time Double
> bigT = 100.0 *~ day

> muF, nuF :: Time Double
> muF = 5.9 *~ day
> nuF = 120.0 *~ day

> bigK :: Int
> bigK = floor (bigT / deltaT /~ one)

> n1 :: Int
> n1 = floor (muF / deltaMu /~ one)

> n2 :: Int
> n2 = floor (nuF / deltaNu /~ one)

> ts :: [Time Double]
> ts = take bigK $ 0.0 *~ day : (map (+ deltaT) ts)

The birth rate for precursors

> betaThibodeaux :: Time Double ->
>                   Frequency Double
> betaThibodeaux mu
>   | mu < (0 *~ day) = error "betaThibodeaux: negative age"
>   | mu < (3 *~ day) = (2.773 *~ (one / day))
>   | otherwise       = (0.0 *~ (one / day))

> alphaThibodeaux :: Concentration Double ->
>                    Frequency Double
> alphaThibodeaux e = (0.5 *~ (muPerMl / day)) / ((1 *~ muPerMl) + e)

> sigmaThibodeaux :: Time Double ->
>                    Time Double ->
>                    Concentration Double ->
>                    Frequency Double
> sigmaThibodeaux mu _t e = gThibodeaux e * (betaThibodeaux mu - alphaThibodeaux e)

and an alternative birth rate

> betaAckleh :: Time Double -> Frequency Double
> betaAckleh mu
>   | mu < (0 *~ day) = error "betaAckleh: negative age"
>   | mu < (3 *~ day) = 2.773 *~ (one / day)
>   | otherwise       = 0.000 *~ (one / day)

> sigmaAckleh :: Time Double ->
>                Time Double ->
>                Concentration Double ->
>                Frequency Double
> sigmaAckleh mu _t e = betaAckleh mu * gAckleh e

J. J. Thibodeaux and Schlittenhardt (2011) give the maturation rate of precursors $g$ as

> gThibodeaux :: Concentration Double -> Dimensionless Double
> gThibodeaux e = d / n
>   where
>     n = ((3.02 *~ one) * e + (0.31 *~ muPerMl))
>     d = (30.61 *~ muPerMl) + e

and Ackleh et al. (2006) give this as

> gAckleh :: Concentration Double -> Dimensionless Double
> gAckleh _e = 1.0 *~ one

As in J. J. Thibodeaux and Schlittenhardt (2011) we give quantities in terms of cells per kilogram of body weight. Note that these really are moles on this occasion.

> type CellDensity = Quantity (DAmountOfSubstance / DTime / DMass)

Let’s set the initial conditions.
> p'0 :: Time Double -> CellDensity Double
> p'0 mu' = (1e11 *~ one) * pAux mu'
>   where
>     pAux mu
>       | mu < (0 *~ day) = error "p'0: negative age"
>       | mu < (3 *~ day) = 8.55e-6 *~ (mole / day / kilo gram) *
>                           exp ((2.7519 *~ (one / day)) * mu)
>       | otherwise       = 8.55e-6 *~ (mole / day / kilo gram) *
>                           exp (8.319 *~ one - (0.0211 *~ (one / day)) * mu)

> m_0 :: Time Double -> CellDensity Double
> m_0 nu' = (1e11 *~ one) * mAux nu'
>   where
>     mAux nu
>       | nu < (0 *~ day) = error "m_0: age less than zero"
>       | otherwise       = 0.039827 *~ (mole / day / kilo gram) *
>                           exp (((-0.0083) *~ (one / day)) * nu)

And check that these give plausible results.

> m_0Untyped :: Double -> Double
> m_0Untyped nu = m_0 (nu *~ day) /~ (mole / day / kilo gram)

> p'0Untyped :: Double -> Double
> p'0Untyped mu = p'0 (mu *~ day) /~ (mole / day / kilo gram)

ghci> import Numeric.Integration.TanhSinh
ghci> result $ relative 1e-6 $ parTrap m_0Untyped 0.001 (nuF /~ day)
3.0260736659043414e11

ghci> result $ relative 1e-6 $ parTrap p'0Untyped 0.001 (muF /~ day)
1.0453999900927126e10

We can now create the components for the first matrix equation.

> g'0 :: Dimensionless Double
> g'0 = gThibodeaux bigE'0

> d_1'0 :: Int -> Dimensionless Double
> d_1'0 i = (1 *~ one) + (g'0 * deltaT / deltaMu)
>           - deltaT * sigmaThibodeaux ((fromIntegral i *~ one) * deltaMu) undefined bigE'0

> lowers :: [Dimensionless Double]
> lowers = replicate n1 (negate $ g'0 * deltaT / deltaMu)

> diags :: [Dimensionless Double]
> diags = g'0 : map d_1'0 [1..n1]

> uppers :: [Dimensionless Double]
> uppers = replicate n1 (0.0 *~ one)


J. J. Thibodeaux and Schlittenhardt (2011) does not give a definition for $\phi$ so we use the equivalent $s_0$ from Ackleh et al. (2006) which references Banks et al. (2003): “$\times 10^{11}$ erythrocytes/kg body weight $\times$ mL plasma/mU Epo/day”

> s_0 :: Time Double ->
>        Quantity (DAmountOfSubstance / DTime / DMass / DConcentration) Double
> s_0 = const ((1e11 *~ one) * (4.45e-7 *~ (mole / day / kilo gram / muPerMl)))

> b'0 :: [CellDensity Double]
> b'0 = (s_0 (0.0 *~ day) * bigE'0) : (take n1 $ map p'0 (iterate (+ deltaMu) deltaMu))

With these components in place we can now solve the implicit scheme and get the age distribution of precursors after one time step.

> p'1 :: Matrix Double
> p'1 = triDiagSolve (fromList (map (/~ one) lowers))
>                    (fromList (map (/~ one) diags))
>                    (fromList (map (/~ one) uppers))
>                    (((n1 P.+ 1) >< 1) (map (/~ (mole / second / kilo gram)) b'0))

In order to create the components for the second matrix equation, we need the death rates of mature erythrocytes

> gammaThibodeaux :: Time Double ->
>                    Time Double ->
>                    Quantity (DAmountOfSubstance / DMass) Double ->
>                    Frequency Double
> gammaThibodeaux _nu _t _bigM = 0.0083 *~ (one / day)

We note an alternative for the death rate

> gammaAckleh :: Time Double ->
>                Time Double ->
>                Quantity (DAmountOfSubstance / DMass) Double ->
>                Frequency Double
> gammaAckleh _nu _t bigM = (0.01 *~ (kilo gram / mole / day)) * bigM + 0.0001 *~ (one / day)

For the initial mature erythrocyte population we can either use the integral of the initial distribution

> bigM'0 :: Quantity (DAmountOfSubstance / DMass) Double
> bigM'0 = r *~ (mole / kilo gram)
>   where
>     r = result $ relative 1e-6 $ parTrap m_0Untyped 0.001 (nuF /~ day)

ghci> bigM'0
3.0260736659043414e11 kg^-1 mol

or we can use the sum of the values used in the finite difference approximation

> bigM'0' :: Quantity (DAmountOfSubstance / DMass) Double
> bigM'0' = (* deltaNu) $ sum $ map m_0 $ take n2 $ iterate (+ deltaNu) (0.0 *~ day)

ghci> bigM'0'
3.026741454719976e11 kg^-1 mol

Finally we can create the components

> d_2'0 :: Int -> Dimensionless Double
> d_2'0 i = (1 *~ one) + (deltaT / deltaNu)
>           + deltaT * gammaThibodeaux ((fromIntegral i *~ one) * deltaNu) undefined bigM'0

> lowers2 :: [Dimensionless Double]
> lowers2 = replicate n2 (negate $ deltaT / deltaNu)

> diags2 :: [Dimensionless Double]
> diags2 = (1.0 *~ one) : map d_2'0 [1..n2]

> uppers2 :: [Dimensionless Double]
> uppers2 = replicate n2 (0.0 *~ one)

> b_2'0 :: [CellDensity Double]
> b_2'0 = (g'0 * ((p'1 `atIndex` (n1,0)) *~ (mole / second / kilo gram))) :
>         (take n2 $ map m_0 (iterate (+ deltaNu) deltaNu))

and then solve the implicit scheme to get the distribution of mature erythrocytes one time step ahead

> m'1 :: Matrix Double
> m'1 = triDiagSolve (fromList (map (/~ one) lowers2))
>                    (fromList (map (/~ one) diags2))
>                    (fromList (map (/~ one) uppers2))
>                    (((n2 P.+ 1) >< 1) (map (/~ (mole / second / kilo gram)) b_2'0))

We need to complete the homeostatic loop by implementing the feedback from the kidneys to the bone marrow. Ackleh and Thibodeaux (2013) and Ackleh et al. (2006) give $f$ as

> fAckleh :: Time Double ->
>            Quantity (DAmountOfSubstance / DMass) Double ->
>            Quantity (DConcentration / DTime) Double
> fAckleh _t bigM = a / ((1.0 *~ one) + k * (bigM' ** r))
>   where
>     a = 15600 *~ (muPerMl / day)
>     k = 0.0382 *~ one
>     r = 6.96 *~ one
>     bigM' = ((bigM /~ (mole / kilo gram)) *~ one) * (1e-11 *~ one)

The much older Belair, Mackey, and Mahaffy (1995) gives $f$ as

> fBelair :: Time Double ->
>            Quantity (DAmountOfSubstance / DMass) Double ->
>            Quantity (DConcentration / DTime) Double
> fBelair _t bigM = a / ((1.0 *~ one) + k * (bigM' ** r))
>   where
>     a = 6570 *~ (muPerMl / day)
>     k = 0.0382 *~ one
>     r = 6.96 *~ one
>     bigM' = ((bigM /~ (mole / kilo gram)) *~ one) * (1e-11 *~ one)

For the initial precursor population we can either use the integral of the initial distribution

> bigP'0 :: Quantity (DAmountOfSubstance / DMass) Double
> bigP'0 = r *~ (mole / kilo gram)
>   where
>     r = result $ relative 1e-6 $ parTrap p'0Untyped 0.001 (muF /~ day)

ghci> bigP'0
1.0453999900927126e10 kg^-1 mol

or we can use the sum of the values used in the finite difference approximation

> bigP'0' :: Quantity (DAmountOfSubstance / DMass) Double
> bigP'0' = (* deltaMu) $ sum $ map p'0 $ take n1 $ iterate (+ deltaMu) (0.0 *~ day)

ghci> bigP'0'
1.0438999930030743e10 kg^-1 mol

J. J. Thibodeaux and Schlittenhardt (2011) give the following for $a_E$

> a_E :: Quantity (DAmountOfSubstance / DMass) Double -> Frequency Double
> a_E bigP = ((n / d) /~ one) *~ (one / day)
>   where
>     n :: Dimensionless Double
>     n = bigP * (13.8 *~ (kilo gram / mole)) + 0.04 *~ one
>     d :: Dimensionless Double
>     d = (bigP /~ (mole / kilo gram)) *~ one + 0.08 *~ one

but from Ackleh et al. (2006): “The only biological basis for the latter is that the decay rate of erythropoietin should be an increasing function of the precursor population and this function remains in the range 0.50–6.65 $\mathrm{days}^{-1}$”, and, given this is at variance with their given function, it may be safer to use their alternative of

> a_E' :: Quantity (DAmountOfSubstance / DMass) Double -> Frequency Double
> a_E' _bigP = 6.65 *~ (one / day)

We now further calculate the concentration of EPO one time step ahead.

> f'0 :: Quantity (DConcentration / DTime) Double
> f'0 = fAckleh undefined bigM'0

> bigE'1 :: Concentration Double
> bigE'1 = (bigE'0 + deltaT * f'0) / (1.0 *~ one + deltaT * a_E' bigP'0)

Having done this for one time step starting at $t=0$, it’s easy to generalize this to an arbitrary time step.
> d_1 :: Dimensionless Double ->
>        Concentration Double ->
>        Int ->
>        Dimensionless Double
> d_1 g e i = (1 *~ one) + (g * deltaT / deltaMu)
>             - deltaT * sigmaThibodeaux ((fromIntegral i *~ one) * deltaMu) undefined e

> d_2 :: Quantity (DAmountOfSubstance / DMass) Double ->
>        Int ->
>        Dimensionless Double
> d_2 bigM i = (1 *~ one) + deltaT / deltaNu
>              + deltaT * gammaThibodeaux ((fromIntegral i *~ one) * deltaNu) undefined bigM

> oneStepM :: (Matrix Double, Matrix Double, Concentration Double, Time Double) ->
>             Writer [(Quantity (DAmountOfSubstance / DMass) Double,
>                      Quantity (DAmountOfSubstance / DMass) Double,
>                      Concentration Double)]
>                    (Matrix Double, Matrix Double, Concentration Double, Time Double)
> oneStepM (psPrev, msPrev, ePrev, tPrev) = do
>   let
>     g = gThibodeaux ePrev
>     ls = replicate n1 (negate $ g * deltaT / deltaMu)
>     ds = g : map (d_1 g ePrev)  [1..n1]
>     us = replicate n1 (0.0 *~ one)
>     b1'0 = (s_0 tPrev * ePrev) /~ (mole / second / kilo gram)
>     b1 = asColumn $vjoin [scalar b1'0, subVector 1 n1$ flatten psPrev]
>     psNew :: Matrix Double
>     psNew = triDiagSolve (fromList (map (/~ one) ls))
>                          (fromList (map (/~ one) ds))
>                          (fromList (map (/~ one) us))
>                          b1
>     ls2 = replicate n2 (negate $ deltaT / deltaNu)
>     bigM :: Quantity (DAmountOfSubstance / DMass) Double
>     bigM = (* deltaNu) $ ((sumElements msPrev) *~ (mole / kilo gram / second))
>     ds2 = (1.0 *~ one) : map (d_2 bigM) [1..n2]
>     us2 = replicate n2 (0.0 *~ one)
>     b2'0 = (g * ((psNew `atIndex` (n1, 0)) *~ (mole / second / kilo gram))) /~
>            (mole / second / kilo gram)
>     b2 = asColumn $vjoin [scalar b2'0, subVector 1 n2$ flatten msPrev]
>     msNew :: Matrix Double
>     msNew = triDiagSolve (fromList (map (/~ one) ls2))
>                          (fromList (map (/~ one) ds2))
>                          (fromList (map (/~ one) us2))
>                          b2
>     bigP :: Quantity (DAmountOfSubstance / DMass) Double
>     bigP = (* deltaMu) $ sumElements psPrev *~ (mole / kilo gram / second)
>     f :: Quantity (DConcentration / DTime) Double
>     f = fAckleh undefined bigM
>     eNew :: Concentration Double
>     eNew = (ePrev + deltaT * f) / (1.0 *~ one + deltaT * a_E' bigP)
>     tNew = tPrev + deltaT
>   tell [(bigP, bigM, ePrev)]
>   return (psNew, msNew, eNew, tNew)

We can now run the model for 100 days.

> ys :: [(Quantity (DAmountOfSubstance / DMass) Double,
>         Quantity (DAmountOfSubstance / DMass) Double,
>         Concentration Double)]
> ys = take 2000 $
>      snd $
>      runWriter $
>      iterateM_ oneStepM ((((n1 P.+ 1) >< 1) (map (/~ (mole / second / kilo gram)) b'0)),
>                          (((n2 P.+ 1) >< 1) (map (/~ (mole / second / kilo gram)) b_2'0)),
>                          bigE'0,
>                          (0.0 *~ day))

And now we can plot what happens for a period of 100 days.

# References

Ackleh, Azmy S., and Jeremy J. Thibodeaux. 2013. “A Second-Order Finite Difference Approximation for a Mathematical Model of Erythropoiesis.” Numerical Methods for Partial Differential Equations. doi:10.1002/num.21778.

Ackleh, Azmy S., Keng Deng, Kazufumi Ito, and Jeremy Thibodeaux. 2006. “A Structured Erythropoiesis Model with Nonlinear Cell Maturation Velocity and Hormone Decay Rate.” Mathematical Biosciences 204 (1): 21–48. doi:10.1016/j.mbs.2006.08.004.

Banks, H T, Cammey E Cole, Paul M Schlosser, and Hien T Tran. 2003. “Modeling and Optimal Regulation of Erythropoiesis Subject to Benzene Intoxication.” https://www.ncsu.edu/crsc/reports/ftp/pdf/crsc-tr03-49.pdf.

Belair, Jacques, Michael C. Mackey, and Joseph M. Mahaffy. 1995. “Age-Structured and Two-Delay Models for Erythropoiesis.” Mathematical Biosciences 128 (1): 317–46. doi:10.1016/0025-5564(94)00078-E.

Gurtin, Morton E, and Richard C MacCamy. 1974. “Non-Linear Age-Dependent Population Dynamics.” Archive for Rational Mechanics and Analysis 54 (3): 281–300.

Hall, Adam J, Michael J Chappell, John AD Aston, and Stephen A Ward. 2013. “Pharmacokinetic Modelling of the Anti-Malarial Drug Artesunate and Its Active Metabolite Dihydroartemisinin.” Computer Methods and Programs in Biomedicine 112 (1): 1–15.

Jelkmann, Wolfgang. 2009. “Efficacy of Recombinant Erythropoietins: Is There Unity of International Units?” Nephrology Dialysis Transplantation 24 (5): 1366. doi:10.1093/ndt/gfp058.

Sawyer, Stephen T, SB Krantz, and E Goldwasser. 1987. “Binding and Receptor-Mediated Endocytosis of Erythropoietin in Friend Virus-Infected Erythroid Cells.” Journal of Biological Chemistry 262 (12): 5554–62.

Thibodeaux, Jeremy J., and Timothy P. Schlittenhardt. 2011. “Optimal Treatment Strategies for Malaria Infection.” Bulletin of Mathematical Biology 73 (11): 2791–2808. doi:10.1007/s11538-011-9650-8.

Torbett, Bruce E., and Jeffrey S. Friedman. 2009. “Erythropoiesis: An Overview.” In Erythropoietins, Erythropoietic Factors, and Erythropoiesis: Molecular, Cellular, Preclinical, and Clinical Biology, edited by Steven G. Elliott, Mary Ann Foote, and Graham Molineux, 3–18. Basel: Birkhäuser Basel. doi:10.1007/978-3-7643-8698-6_1.

# Laplace’s Equation in Haskell: Using a DSL for Stencils

# Introduction

Suppose we have a square thin plate of metal and we hold each of its edges at a temperature which may vary along the edge but is fixed for all time. After some period, depending on the conductivity of the metal, the temperature at every point on the plate will have stabilised. What is the temperature at any point? We can calculate this by solving Laplace’s equation $\nabla^2 \phi = 0$ in 2 dimensions.

Apart from the preceding motivation, a more compelling reason for doing so is that it is a moderately simple equation, in so far as partial differential equations are simple, that has been well studied for centuries. In Haskell terms this gives us the opportunity to use the repa library and to use hmatrix, which is based on LAPACK (as well as other libraries), albeit hmatrix only for illustrative purposes.

I had originally intended this blog to contain a comparison of repa’s performance against an equivalent C program, even though this has already been undertaken by the repa team in their various publications. And indeed it is still my intention to produce such a comparison. However, as I investigated further, it turned out a fair amount of comparison work has already been done by a team from Intel, which suggests there is currently a performance gap but one which is not so large that it outweighs the other benefits of Haskell.
To be more specific, one way in which using repa stands out from the equivalent C implementation is that it gives a language in which we can specify the stencil being used to solve the equation. As an illustration we substitute the nine point method for the five point method merely by changing the stencil.

## A Motivating Example: The Steady State Heat Equation

Fourier’s law states that the rate of heat transfer, or the flux $\boldsymbol{\sigma}$, is proportional to the negative temperature gradient, as heat flows from hot to cold, and further that it flows in the direction of greatest temperature change. We can write this as

$\displaystyle \boldsymbol{\sigma} = -k\nabla \phi$

where $\phi : \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ is the temperature at any given point on the plate and $k$ is the conductivity of the metal. Moreover, we know that for any region on the plate, the total amount of heat flowing in must be balanced by the amount of heat flowing out. We can write this as

$\displaystyle \nabla \cdot \boldsymbol{\sigma} = 0$

Substituting the first equation into the second we obtain Laplace’s equation

$\displaystyle \nabla^2 \phi = 0$

For example, suppose we hold the temperature of the edges of the plate as follows

$\displaystyle \begin{matrix} \phi(x, 0) = 1 & \phi(x, 1) = 2 & \phi(0, y) = 1 & \phi(1, y) = 2 \end{matrix}$

then after some time the temperature of the plate will be as shown in the heatmap below.

Notes:

1. Red is hot.

2. Blue is cold.

3. The heatmap is created by a finite difference method described below.

4. The $y$-axis points down (not up), i.e. $\phi(x,1)$ is at the bottom, reflecting the fact that we are using an array in the finite difference method and rows go down not up.

5. The corners are grey because in the five point finite difference method these play no part in determining temperatures in the interior of the plate.
# Colophon

Since the book I am writing contains C code (for performance comparisons), I need a way of being able to compile and run this code and include it “as is” in the book. Up until now, all my blog posts have contained Haskell and so I have been able to use BlogLiteratelyD, which allows me to include really nice diagrams. But clearly this tool wasn’t really designed to handle other languages (although I am sure it could be made to do so).

Using pandoc’s scripting capability with the small script provided

    #!/usr/bin/env runhaskell

    import Text.Pandoc.JSON

    doInclude :: Block -> IO Block
    doInclude cb@(CodeBlock ("verbatim", classes, namevals) contents) =
      case lookup "include" namevals of
        Just f  -> return . (\x -> Para [Math DisplayMath x]) =<< readFile f
        Nothing -> return cb
    doInclude cb@(CodeBlock (id, classes, namevals) contents) =
      case lookup "include" namevals of
        Just f  -> return . (CodeBlock (id, classes, namevals)) =<< readFile f
        Nothing -> return cb
    doInclude x = return x

    main :: IO ()
    main = toJSONFilter doInclude

I can then include C code blocks like this

    ~~~~ {.c include="Chap1a.c"}
    ~~~~

And format the whole document like this

    pandoc -s Chap1.lhs --filter=./Include -t markdown+lhs > Chap1Expanded.lhs
    BlogLiteratelyD Chap1Expanded.lhs > Chap1.html

Sadly, the C doesn’t get syntax highlighting but this will do for now.

PS Sadly, WordPress doesn’t seem to be able to handle \color{red} and \color{blue} in LaTeX so there are some references to blue and red which do not render.

# Acknowledgements

A lot of the code for this post is taken from the repa package itself. Many thanks to the repa team for providing the package and the example code.
# Haskell Preamble

> {-# OPTIONS_GHC -Wall                     #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing  #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults   #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind  #-}
> {-# OPTIONS_GHC -fno-warn-missing-methods #-}
> {-# OPTIONS_GHC -fno-warn-orphans         #-}

> {-# LANGUAGE BangPatterns              #-}
> {-# LANGUAGE TemplateHaskell           #-}
> {-# LANGUAGE QuasiQuotes               #-}
> {-# LANGUAGE NoMonomorphismRestriction #-}

> module Chap1 (
>     module Control.Applicative
>   , solveLaplaceStencil
>   , useBool
>   , boundMask
>   , boundValue
>   , bndFnEg1
>   , fivePoint
>   , ninePoint
>   , testStencil5
>   , testStencil9
>   , analyticValue
>   , slnHMat4
>   , slnHMat5
>   , testJacobi4
>   , testJacobi6
>   , bndFnEg3
>   , runSolver
>   , s5
>   , s9
>   ) where

> import Data.Array.Repa              as R
> import Data.Array.Repa.Unsafe       as R
> import Data.Array.Repa.Stencil      as A
> import Data.Array.Repa.Stencil.Dim2 as A

> import Prelude                      as P

> import Data.Packed.Matrix
> import Numeric.LinearAlgebra.Algorithms

> import Chap1Aux

> import Control.Applicative

# Laplace’s Equation: The Five Point Formula

We show how to apply finite difference methods to Laplace’s equation:

$\displaystyle \nabla^2 u = 0$

where

$\displaystyle \nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$

For a sufficiently smooth function (see (Iserles 2009, chap. 8)) we have

$\displaystyle \begin{aligned} \frac{\partial^2 u}{\partial x^2}\mathop{\Bigg|_{x = x_0 + k\Delta x}}_{y = y_0 + l\Delta x} &= \frac{1}{(\Delta x)^2}\Delta_{0,x}^2 u_{k,l} + \mathcal{O}((\Delta x)^2) \\ \frac{\partial^2 u}{\partial y^2}\mathop{\Bigg|_{x = x_0 + k\Delta x}}_{y = y_0 + l\Delta x} &= \frac{1}{(\Delta x)^2}\Delta_{0,y}^2 u_{k,l} + \mathcal{O}((\Delta x)^2) \end{aligned}$

where the central difference operator $\Delta_0$ is defined as

$\displaystyle (\Delta_0 z)_k \triangleq z_{k + \frac{1}{2}} - z_{k - \frac{1}{2}}$

We are therefore led to consider the five point difference scheme.
$\displaystyle \frac{1}{(\Delta x)^2}(\Delta_{0,x}^2 + \Delta_{0,y}^2) u_{k,l} = 0$

We can re-write this explicitly as

$\displaystyle u_{k-1,l} + u_{k+1,l} + u_{k,l-1} + u_{k,l+1} - 4u_{k,l} = 0$

Specifically for the grid point $(2,1)$ in a $4 \times 4$ grid we have

$\displaystyle {{u_{1,1}}} + {{u_{3,1}}} + {{u_{2,0}}} + {{u_{2,2}}} - 4{{u_{2,1}}} = 0$

where blue indicates that the point is an interior point and red indicates that the point is a boundary point. For Dirichlet boundary conditions (which is all we consider in this post), the values at the boundary points are known.

We can write the entire set of equations for this grid as

$\displaystyle \begin{bmatrix} -4.0 & 1.0 & 1.0 & 0.0 \\ 1.0 & -4.0 & 0.0 & 1.0 \\ 1.0 & 0.0 & -4.0 & 1.0 \\ 0.0 & 1.0 & 1.0 & -4.0 \end{bmatrix} \begin{bmatrix} {{u_{11}}} \\ {{u_{21}}} \\ {{u_{12}}} \\ {{u_{22}}} \end{bmatrix} = \begin{bmatrix} -{{u_{10}}} + -{{u_{01}}} \\ -{{u_{20}}} + -{{u_{31}}} \\ -{{u_{02}}} + -{{u_{13}}} \\ -{{u_{23}}} + -{{u_{32}}} \end{bmatrix}$

## A Very Simple Example

Let us take the boundary conditions to be

$\displaystyle \begin{matrix} u(x, 0) = 1 & u(x, 1) = 2 & u(0, y) = 1 & u(1, y) = 2 \end{matrix}$

With our $4 \times 4$ grid we can solve this exactly using the hmatrix package, which has a binding to LAPACK. First we create the $4 \times 4$ matrix above in hmatrix form.

> simpleEgN :: Int
> simpleEgN = 4 - 1
>
> matHMat4 :: IO (Matrix Double)
> matHMat4 = do
>   matRepa <- computeP $ mkJacobiMat simpleEgN :: IO (Array U DIM2 Double)
>   return $ ((simpleEgN - 1) * (simpleEgN - 1)) >< ((simpleEgN - 1) * (simpleEgN - 1)) $ toList matRepa

ghci> matHMat4
(4><4)
[ -4.0,  1.0,  1.0,  0.0
,  1.0, -4.0,  0.0,  1.0
,  1.0,  0.0, -4.0,  1.0
,  0.0,  1.0,  1.0, -4.0 ]


Next we create the column vector as prescribed by the boundary conditions.

> bndFnEg1 :: Int -> Int -> (Int, Int) -> Double
> bndFnEg1 _ m (0, j) |           j > 0 && j < m = 1.0
> bndFnEg1 n m (i, j) | i == n && j > 0 && j < m = 2.0
> bndFnEg1 n _ (i, 0) |           i > 0 && i < n = 1.0
> bndFnEg1 n m (i, j) | j == m && i > 0 && i < n = 2.0
> bndFnEg1 _ _ _                                 = 0.0

> bnd1 :: Int -> [(Int, Int)] -> Double
> bnd1 n = negate .
>          sum .
>          P.map (bndFnEg1 n n)

> bndHMat4 :: Matrix Double
> bndHMat4 = ((simpleEgN - 1) * (simpleEgN - 1)) >< 1 $
>            mkJacobiBnd fromIntegral bnd1 3

ghci> bndHMat4
(4><1)
[ -2.0
, -3.0
, -3.0
, -4.0 ]

> slnHMat4 :: IO (Matrix Double)
> slnHMat4 = matHMat4 >>= return . flip linearSolve bndHMat4

ghci> slnHMat4
(4><1)
[ 1.25
, 1.5
, 1.4999999999999998
, 1.7499999999999998 ]

# The Jacobi Method

Inverting a matrix is expensive, so instead we use the (possibly most) classical of all iterative methods, Jacobi iteration. Given $A\boldsymbol{x} = \boldsymbol{b}$ and an estimated solution $\boldsymbol{x}_i^{[k]}$, we can generate an improved estimate $\boldsymbol{x}_i^{[k+1]}$. See (Iserles 2009, chap. 12) for the details on convergence and convergence rates.

$\displaystyle \boldsymbol{x}_i^{[k+1]} = \frac{1}{A_{i,i}}\Bigg[\boldsymbol{b}_i - \sum_{j \neq i} A_{i,j}\boldsymbol{x}_j^{[k]}\Bigg]$

The simple example above does not really give a clear picture of what happens in general during the update of the estimate, so here is a larger example. Sadly, WordPress does not seem to be able to render $16 \times 16$ matrices written in LaTeX, so you will have to look at the output from hmatrix in the larger example below. You can see that this matrix is sparse and has a very clear pattern.

Expanding the matrix equation for a ${\text{point}}$ not in the ${\text{boundary}}$ we get

$\displaystyle x_{i,j}^{[k+1]} = \frac{1}{4}(x^{[k]}_{i-1,j} + x^{[k]}_{i,j-1} + x^{[k]}_{i+1,j} + x^{[k]}_{i,j+1})$

Clearly the values of the points in the boundary are fixed and must remain at those values for every iteration.

Here is the method using repa. To produce an improved estimate, we define a function relaxLaplace: we pass in a repa matrix representing our original estimate $\boldsymbol{x}_i^{[k]}$ and receive the one step update $\boldsymbol{x}_i^{[k+1]}$, also as a repa matrix.

We pass in a boundary condition mask which specifies which points are boundary points: a point is a boundary point if the mask value there is 0.0 and an interior point if it is 1.0.
> boundMask :: Monad m => Int -> Int -> m (Array U DIM2 Double)
> boundMask gridSizeX gridSizeY = computeP $
>                                 fromFunction (Z :. gridSizeX + 1 :. gridSizeY + 1) f
>   where
>     f (Z :. _ix :.  iy) | iy == 0         = 0
>     f (Z :. _ix :.  iy) | iy == gridSizeY = 0
>     f (Z :.  ix :. _iy) | ix == 0         = 0
>     f (Z :.  ix :. _iy) | ix == gridSizeX = 0
>     f _                                   = 1


It would be better to use at least a Bool, as the example below shows, but we wish to modify the code from the repa git repo as little as possible.

> useBool :: IO (Array U DIM1 Double)
> useBool = computeP $
>           R.map (fromIntegral . fromEnum) $
>           fromFunction (Z :. (3 :: Int)) (const True)

ghci> useBool
AUnboxed (Z :. 3) (fromList [1.0,1.0,1.0])


We further pass in the boundary conditions. We construct these by using a function which takes the grid size in the $x$ direction, the grid size in the $y$ direction and a given pair of co-ordinates in the grid and returns a value at this position.

> boundValue :: Monad m =>
>               Int ->
>               Int ->
>               (Int -> Int -> (Int, Int) -> Double) ->
>               m (Array U DIM2 Double)
> boundValue gridSizeX gridSizeY bndFn =
>   computeP $ fromFunction (Z :. gridSizeX + 1 :. gridSizeY + 1) g
>   where
>     g (Z :. ix :. iy) = bndFn gridSizeX gridSizeY (ix, iy)

Note that we only update an element in the repa matrix representation of the vector if it is not on the boundary.

> relaxLaplace
>   :: Monad m
>   => Array U DIM2 Double
>   -> Array U DIM2 Double
>   -> Array U DIM2 Double
>   -> m (Array U DIM2 Double)
>
> relaxLaplace arrBoundMask arrBoundValue arr
>   = computeP
>   $ R.zipWith (+) arrBoundValue
>   $ R.zipWith (*) arrBoundMask
>   $ unsafeTraverse arr id elemFn
>   where
>     _ :. height :. width
>       = extent arr
>
>     elemFn !get !d@(sh :. i :. j)
>       = if isBorder i j
>         then  get d
>         else (get (sh :. (i-1) :. j)
>               +   get (sh :. i     :. (j-1))
>               +   get (sh :. (i+1) :. j)
>               +   get (sh :. i     :. (j+1))) / 4
>     isBorder !i !j
>       =  (i == 0) || (i >= width  - 1)
>          || (j == 0) || (j >= height - 1)


We can use this to iterate as many times as we like.

> solveLaplace
>         :: Monad m
>         => Int
>         -> Array U DIM2 Double
>         -> Array U DIM2 Double
>         -> Array U DIM2 Double
>         -> m (Array U DIM2 Double)
>
> solveLaplace steps arrBoundMask arrBoundValue arrInit
>  = go steps arrInit
>   where
>     go !i !arr
>       | i == 0
>       = return     arr
>
>       | otherwise
>       = do arr' <- relaxLaplace arrBoundMask arrBoundValue arr
>            go (i - 1) arr'

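To make the Jacobi update rule concrete, here it is applied to the $4 \times 4$ system from the very simple example, written with plain Haskell lists rather than repa (a pedagogical sketch, not the code used in this post; jacobiStep, a4 and b4 are names invented for this illustration):

```haskell
-- One Jacobi update: x'_i = (b_i - sum over j /= i of A_ij * x_j) / A_ii
jacobiStep :: [[Double]] -> [Double] -> [Double] -> [Double]
jacobiStep a b x =
  [ (b !! i - sum [ a !! i !! j * x !! j | j <- [0 .. n - 1], j /= i ])
    / (a !! i !! i)
  | i <- [0 .. n - 1] ]
  where n = length b

-- The matrix and right hand side of the very simple example above
a4 :: [[Double]]
a4 = [ [-4,  1,  1,  0]
     , [ 1, -4,  0,  1]
     , [ 1,  0, -4,  1]
     , [ 0,  1,  1, -4] ]

b4 :: [Double]
b4 = [-2, -3, -3, -4]

-- iterate (jacobiStep a4 b4) (replicate 4 0) !! 100
-- converges (to within rounding) to [1.25, 1.5, 1.5, 1.75],
-- agreeing with the hmatrix solution above
```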

For our small example, we set the initial array to $0$ at every point. Note that the function which updates the grid, relaxLaplace, will immediately overwrite the points on the boundary with the values given by the boundary condition.

> mkInitArrM :: Monad m => Int -> m (Array U DIM2 Double)
> mkInitArrM n = computeP $
>                fromFunction (Z :. (n + 1) :. (n + 1)) (const 0.0)

We can now test the Jacobi method.

> testJacobi4 :: Int -> IO (Array U DIM2 Double)
> testJacobi4 nIter = do
>   mask    <- boundMask simpleEgN simpleEgN
>   val     <- boundValue simpleEgN simpleEgN bndFnEg1
>   initArr <- mkInitArrM simpleEgN
>   solveLaplace nIter mask val initArr

After 55 iterations, we obtain convergence up to the limit of accuracy of double precision floating point numbers. Note this only provides a solution of the matrix equation, which is itself an approximation to Laplace’s equation; to obtain a more accurate result for the latter we need to use a smaller grid size.

ghci> testJacobi4 55 >>= return . pPrint
[0.0, 1.0, 1.0, 0.0]
[1.0, 1.25, 1.5, 2.0]
[1.0, 1.5, 1.75, 2.0]
[0.0, 2.0, 2.0, 0.0]

## A Larger Example

Armed with Jacobi, let us now solve a large example.

> largerEgN, largerEgN2 :: Int
> largerEgN  = 6 - 1
> largerEgN2 = (largerEgN - 1) * (largerEgN - 1)

First let us use hmatrix.

> matHMat5 :: IO (Matrix Double)
> matHMat5 = do
>   matRepa <- computeP $ mkJacobiMat largerEgN :: IO (Array U DIM2 Double)
>   return $ largerEgN2 >< largerEgN2 $ toList matRepa

ghci> matHMat5
(16><16)
[ -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
,  0.0,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
,  0.0,  0.0,  1.0, -4.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
,  1.0,  0.0,  0.0,  0.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0
,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0
,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0
,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0
,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0,  0.0
,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0,  0.0,  0.0,  1.0,  0.0
,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  0.0,  0.0,  0.0,  1.0
,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  0.0, -4.0,  1.0,  0.0,  0.0
,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0,  0.0
,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0,  1.0
,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  1.0,  0.0,  0.0,  1.0, -4.0 ]

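The function mkJacobiMat which built this matrix lives in Chap1Aux and is not shown in this post, but the pattern visible above is easy to generate directly. Here is a hypothetical, list-based sketch (poissonMat is my name for it, not the post's) of the discrete Laplacian for $k$ interior points per side, with the unknowns ordered row by row:

```haskell
-- Matrix of the five point scheme on the interior points of a grid:
-- -4 on the diagonal, 1 for each horizontally or vertically adjacent
-- interior point, 0 otherwise.
poissonMat :: Int -> [[Double]]
poissonMat k =
  [ [ entry (i, j) (i', j') | i' <- [0 .. k - 1], j' <- [0 .. k - 1] ]
  | i <- [0 .. k - 1], j <- [0 .. k - 1] ]
  where
    entry (i, j) (i', j')
      | i == i' && j == j'           = -4
      | i == i' && abs (j - j') == 1 =  1
      | j == j' && abs (i - i') == 1 =  1
      | otherwise                    =  0
```

poissonMat 2 reproduces the $4 \times 4$ matrix of the very simple example and poissonMat 4 the $16 \times 16$ matrix printed above.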
> bndHMat5 :: Matrix Double
> bndHMat5 = largerEgN2>< 1 $mkJacobiBnd fromIntegral bnd1 5  ghci> bndHMat5 (16><1) [ -2.0 , -1.0 , -1.0 , -3.0 , -1.0 , 0.0 , 0.0 , -2.0 , -1.0 , 0.0 , 0.0 , -2.0 , -3.0 , -2.0 , -2.0 , -4.0 ]  > slnHMat5 :: IO (Matrix Double) > slnHMat5 = matHMat5 >>= return . flip linearSolve bndHMat5  ghci> slnHMat5 (16><1) [ 1.0909090909090908 , 1.1818181818181817 , 1.2954545454545454 , 1.5 , 1.1818181818181817 , 1.3409090909090906 , 1.4999999999999996 , 1.7045454545454544 , 1.2954545454545459 , 1.5 , 1.6590909090909092 , 1.818181818181818 , 1.5000000000000004 , 1.7045454545454548 , 1.8181818181818186 , 1.9090909090909092 ]  And for comparison, let us use the Jacobi method. > testJacobi6 :: Int -> IO (Array U DIM2 Double) > testJacobi6 nIter = do > mask <- boundMask largerEgN largerEgN > val <- boundValue largerEgN largerEgN bndFnEg1 > initArr <- mkInitArrM largerEgN > solveLaplace nIter mask val initArr  ghci> testJacobi6 178 >>= return . pPrint [0.0, 1.0, 1.0, 1.0, 1.0, 0.0] [1.0, 1.0909090909090908, 1.1818181818181817, 1.2954545454545454, 1.5, 2.0] [1.0, 1.1818181818181817, 1.3409090909090908, 1.5, 1.7045454545454546, 2.0] [1.0, 1.2954545454545454, 1.5, 1.6590909090909092, 1.8181818181818183, 2.0] [1.0, 1.5, 1.7045454545454546, 1.8181818181818181, 1.9090909090909092, 2.0] [0.0, 2.0, 2.0, 2.0, 2.0, 0.0]  Note that with a larger grid we need more points (178) before the Jacobi method converges. # Stencils Since we are functional programmers, our natural inclination is to see if we can find an abstraction for (at least some) numerical methods. We notice that we are updating each grid element (except the boundary elements) by taking the North, East, South and West surrounding squares and calculating a linear combination of these. Repa provides this abstraction and we can describe the update calculation as a stencil. (Lippmeier and Keller 2011) gives full details of stencils in repa. 
> fivePoint :: Stencil DIM2 Double
> fivePoint = [stencil2| 0 1 0
>                        1 0 1
>                        0 1 0 |]

Using stencils allows us to modify our numerical method with a very simple change. For example, suppose we wish to use the nine point method (which is $\mathcal{O}((\Delta x)^4)$!) then we only need write down the stencil for it, which additionally takes a linear combination of the North West, North East, South East and South West points.

> ninePoint :: Stencil DIM2 Double
> ninePoint = [stencil2| 1 4 1
>                        4 0 4
>                        1 4 1 |]

We modify our solver above to take a stencil and also an Int which is used to normalise the factors in the stencil. For example, in the five point method this is 4.

> solveLaplaceStencil :: Monad m
>                     => Int
>                     -> Stencil DIM2 Double
>                     -> Int
>                     -> Array U DIM2 Double
>                     -> Array U DIM2 Double
>                     -> Array U DIM2 Double
>                     -> m (Array U DIM2 Double)
> solveLaplaceStencil !steps !st !nF !arrBoundMask !arrBoundValue !arrInit
>   = go steps arrInit
>   where
>     go 0 !arr = return arr
>     go n !arr
>       = do arr' <- relaxLaplace arr
>            go (n - 1) arr'
>
>     relaxLaplace arr
>       = computeP
>       $ R.szipWith (+) arrBoundValue
>       $ R.szipWith (*) arrBoundMask
>       $ R.smap (/ (fromIntegral nF))
>       $ mapStencil2 (BoundConst 0) st arr

We can then test both methods.

> testStencil5 :: Int -> Int -> IO (Array U DIM2 Double)
> testStencil5 gridSize nIter = do
>   mask    <- boundMask gridSize gridSize
>   val     <- boundValue gridSize gridSize bndFnEg1
>   initArr <- mkInitArrM gridSize
>   solveLaplaceStencil nIter fivePoint 4 mask val initArr

ghci> testStencil5 5 178 >>= return . pPrint
[0.0, 1.0, 1.0, 1.0, 1.0, 0.0]
[1.0, 1.0909090909090908, 1.1818181818181817, 1.2954545454545454, 1.5, 2.0]
[1.0, 1.1818181818181817, 1.3409090909090908, 1.5, 1.7045454545454546, 2.0]
[1.0, 1.2954545454545454, 1.5, 1.6590909090909092, 1.8181818181818183, 2.0]
[1.0, 1.5, 1.7045454545454546, 1.8181818181818181, 1.9090909090909092, 2.0]
[0.0, 2.0, 2.0, 2.0, 2.0, 0.0]

> testStencil9 :: Int -> Int -> IO (Array U DIM2 Double)
> testStencil9 gridSize nIter = do
>   mask    <- boundMask gridSize gridSize
>   val     <- boundValue gridSize gridSize bndFnEg1
>   initArr <- mkInitArrM gridSize
>   solveLaplaceStencil nIter ninePoint 20 mask val initArr

ghci> testStencil9 5 178 >>= return . pPrint
[0.0, 1.0, 1.0, 1.0, 1.0, 0.0]
[1.0, 1.0222650172207302, 1.1436086139049304, 1.2495750646811328, 1.4069077172153264, 2.0]
[1.0, 1.1436086139049304, 1.2964314331751594, 1.4554776038855908, 1.6710941204241017, 2.0]
[1.0, 1.2495750646811328, 1.455477603885591, 1.614523774596022, 1.777060571200304, 2.0]
[1.0, 1.4069077172153264, 1.671094120424102, 1.777060571200304, 1.7915504172099226, 2.0]
[0.0, 2.0, 2.0, 2.0, 2.0, 0.0]

We note that the methods give different answers. Before explaining this, let us examine one more example where the exact solution is known. We take the example from (Iserles 2009, chap. 8) where the boundary conditions are:

\displaystyle \begin{aligned} \phi(x, 0) &= 0 \\ \phi(x, 1) &= \frac{1}{(1 + x)^2 + 1} \\ \phi(0, y) &= \frac{y}{1 + y^2} \\ \phi(1, y) &= \frac{y}{4 + y^2} \end{aligned}

This has the exact solution

$\displaystyle u(x, y) = \frac{y}{(1 + x)^2 + y^2}$

And we can calculate the values of this function on a grid.

> analyticValue :: Monad m => Int -> m (Array U DIM2 Double)
> analyticValue gridSize = computeP $
>   fromFunction (Z :. gridSize + 1 :. gridSize + 1) f
>   where
>     f (Z :. ix :. iy) = y / ((1 + x)^2 + y^2)
>       where
>         y = fromIntegral iy / fromIntegral gridSize
>         x = fromIntegral ix / fromIntegral gridSize

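As a quick sanity check (an addition of mine, not part of the original post), we can confirm numerically that $u$ really is harmonic: the five point approximation to the Laplacian at an interior point should give a residual that shrinks like $\mathcal{O}((\Delta x)^2)$, i.e. by roughly a factor of four each time the spacing is halved.

```haskell
-- The claimed exact solution
uExact :: Double -> Double -> Double
uExact x y = y / ((1 + x)^2 + y^2)

-- Five point approximation to the Laplacian of f at (x, y) with spacing h
discreteLaplacian :: (Double -> Double -> Double)
                  -> Double -> Double -> Double -> Double
discreteLaplacian f h x y =
  (f (x - h) y + f (x + h) y + f x (y - h) + f x (y + h) - 4 * f x y) / h^2

-- Since uExact is harmonic, the residual at, say, (0.5, 0.5) is already
-- small, and halving h divides it by roughly four:
-- discreteLaplacian uExact (1/8) 0.5 0.5 / discreteLaplacian uExact (1/16) 0.5 0.5
-- is close to 4
```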

Let us also solve it using the Jacobi method with a five point stencil and a nine point stencil. Here is the encoding of the boundary values.

> bndFnEg3 :: Int -> Int -> (Int, Int) -> Double
> bndFnEg3 _ m (0, j) |           j >= 0 && j <  m = y / (1 + y^2)
>   where y = (fromIntegral j) / (fromIntegral m)
> bndFnEg3 n m (i, j) | i == n && j >  0 && j <= m = y / (4 + y^2)
>   where y = fromIntegral j / fromIntegral m
> bndFnEg3 n _ (i, 0) |           i >  0 && i <= n = 0.0
> bndFnEg3 n m (i, j) | j == m && i >= 0 && i <  n = 1 / ((1 + x)^2 + 1)
>   where x = fromIntegral i / fromIntegral n
> bndFnEg3 _ _ _                                   = 0.0

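On the boundary, bndFnEg3 should agree exactly with the analytic solution. Here is a self-contained version of that comparison (again an addition of mine, with bndFnEg3 copied verbatim from above and boundaryMatches a name invented for this check):

```haskell
-- The exact solution from the text
uExact :: Double -> Double -> Double
uExact x y = y / ((1 + x)^2 + y^2)

-- bndFnEg3, copied from the post
bndFnEg3 :: Int -> Int -> (Int, Int) -> Double
bndFnEg3 _ m (0, j) |           j >= 0 && j <  m = y / (1 + y^2)
  where y = fromIntegral j / fromIntegral m
bndFnEg3 n m (i, j) | i == n && j >  0 && j <= m = y / (4 + y^2)
  where y = fromIntegral j / fromIntegral m
bndFnEg3 n _ (i, 0) |           i >  0 && i <= n = 0.0
bndFnEg3 n m (i, j) | j == m && i >= 0 && i <  n = 1 / ((1 + x)^2 + 1)
  where x = fromIntegral i / fromIntegral n
bndFnEg3 _ _ _                                   = 0.0

-- True when every boundary point of an n x n grid matches uExact
boundaryMatches :: Int -> Bool
boundaryMatches n = and
  [ abs (bndFnEg3 n n (i, j) - uExact x y) < 1e-12
  | i <- [0 .. n], j <- [0 .. n]
  , i == 0 || i == n || j == 0 || j == n
  , let x = fromIntegral i / fromIntegral n
  , let y = fromIntegral j / fromIntegral n ]
```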

We create a function to run a solver.

> runSolver :: Monad m =>
>   Int ->
>   Int ->
>   (Int -> Int -> (Int, Int) -> Double) ->
>   (Int ->
>    Array U DIM2 Double ->
>    Array U DIM2 Double ->
>    Array U DIM2 Double ->
>    m (Array U DIM2 Double)) ->
>   m (Array U DIM2 Double)
> runSolver nGrid nIter boundaryFn solver = do
>   mask    <- boundMask nGrid nGrid
>   val     <- boundValue nGrid nGrid boundaryFn
>   initArr <- mkInitArrM nGrid
>   solver nIter mask val initArr


And put the five point and nine point solvers in the appropriate form.

> s5, s9 :: Monad m =>
>           Int ->
>           Array U DIM2 Double ->
>           Array U DIM2 Double ->
>           Array U DIM2 Double ->
>           m (Array U DIM2 Double)
> s5 n = solveLaplaceStencil n fivePoint 4
> s9 n = solveLaplaceStencil n ninePoint 20


And now we can see that the errors between the analytic solution and the five point method with a grid size of 8 are $\mathcal{O}(10^{-4})$.

ghci> liftA2 (-^) (analyticValue 7) (runSolver 7 200 bndFnEg3 s5) >>= return . pPrint
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
[0.0, -3.659746856576884e-4, -5.792613003869074e-4, -5.919333582729558e-4, -4.617020226472812e-4, -2.7983716661839075e-4, -1.1394184484148084e-4, 0.0]
[0.0, -4.0566163490589335e-4, -6.681826442424543e-4, -7.270498771604073e-4, -6.163531890425178e-4, -4.157604876017795e-4, -1.9717865146007263e-4, 0.0]
[0.0, -3.4678314565880775e-4, -5.873627029994999e-4, -6.676042377350699e-4, -5.987527967581119e-4, -4.318102416048242e-4, -2.2116263241278578e-4, 0.0]
[0.0, -2.635436147627873e-4, -4.55055831294085e-4, -5.329636937312088e-4, -4.965786933938399e-4, -3.7401874422060555e-4, -2.0043638973538114e-4, 0.0]
[0.0, -1.7773949138776696e-4, -3.1086347862371855e-4, -3.714478154303591e-4, -3.5502855035249303e-4, -2.7528200465845587e-4, -1.5207424182367424e-4, 0.0]
[0.0, -9.188482657347674e-5, -1.6196970595228066e-4, -1.9595925291693295e-4, -1.903987061394885e-4, -1.5064155667735002e-4, -8.533752030373543e-5, 0.0]
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]


But using the nine point method significantly improves this.

ghci> liftA2 (-^) (analyticValue 7) (runSolver 7 200 bndFnEg3 s9) >>= return . pPrint
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
[0.0, -2.7700522166329566e-7, -2.536751151638317e-7, -5.5431452705700934e-8, 7.393573120406671e-8, 8.403487600228132e-8, 4.188249685954659e-8, 0.0]
[0.0, -2.0141002235463112e-7, -2.214645128950643e-7, -9.753369634157849e-8, 2.1887763435035623e-8, 6.305346988977334e-8, 4.3482495659663556e-8, 0.0]
[0.0, -1.207601019737048e-7, -1.502713803391842e-7, -9.16850228516175e-8, -1.4654435886995998e-8, 2.732932558036083e-8, 2.6830928867571657e-8, 0.0]
[0.0, -6.883445567013036e-8, -9.337114890983766e-8, -6.911451747027009e-8, -2.6104150896433254e-8, 4.667329939200826e-9, 1.1717137371469732e-8, 0.0]
[0.0, -3.737430460254432e-8, -5.374955715231611e-8, -4.483740087546373e-8, -2.299792309368165e-8, -4.122571728437663e-9, 3.330287268177301e-9, 0.0]
[0.0, -1.6802381437586167e-8, -2.5009212159532446e-8, -2.229028683853329e-8, -1.3101905282919546e-8, -4.1197137368165215e-9, 3.909041701444238e-10, 0.0]
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]


# Bibliography

Iserles, A. 2009. A First Course in the Numerical Analysis of Differential Equations. Cambridge University Press. http://books.google.co.uk/books?id=M0tkw4oUucoC.

Lippmeier, Ben, and Gabriele Keller. 2011. “Efficient Parallel Stencil Convolution in Haskell.” In Proceedings of the 4th ACM Symposium on Haskell, 59–70. Haskell ’11. New York, NY, USA: ACM. doi:10.1145/2034675.2034684. http://doi.acm.org/10.1145/2034675.2034684.