# Naperian Functors meet surface sea temperatures

Blog post on my new site.

# A Monoidal Category Example

I have never felt entirely comfortable with Haskell’s arrows, and skimming the literature for their categorical basis didn’t reveal anything as straightforward as monads or applicatives. It did, however, lead me to start thinking about monoidal categories, and since I always want an example, I thought I would write up Hilbert spaces.

Let $H_1$ and $H_2$ be Hilbert spaces; then as vector spaces we can form the tensor product $H_1 \otimes H_2$. The tensor product can be defined as the free vector space on the set $H_1 \times H_2$ (that is, purely formal sums of pairs $(u,v)$) modulo the relation $\sim$ defined by

$$\begin{aligned} (u_1,v) + (u_2,v) &\sim (u_1 + u_2,v) \\ (u,v_1) + (u,v_2) &\sim (u,v_1 + v_2) \\ c(u,v) &\sim (cu,v) \\ c(u,v) &\sim (u,cv) \end{aligned}$$

Slightly overloading notation, we can define an inner product on the tensored space by $\displaystyle \langle u_1 \otimes v_1, u_2 \otimes v_2\rangle = \langle u_1, u_2 \rangle \langle v_1, v_2\rangle$

Of course the resulting space might not be complete, so we define the tensor product of Hilbert spaces to be the completion of this inner product space.

For Hilbert spaces to form a monoidal category, we take the arrows (in the categorical sense) to be continuous linear maps and the bifunctor to be the tensor product. We also need an identity object $I$, which we take to be $\mathbb{R}$ considered as a Hilbert space. We should check the coherence conditions, but the associativity of the tensor product and the fact that our Hilbert spaces are over $\mathbb{R}$ make this straightforward.

Now for some slightly interesting properties of this category.

• The tensor product is not the product in the categorical sense. If $\{u_i\}$ and $\{v_i\}$ are (orthonormal) bases for $H_1$ and $H_2$ then $\{u_i \otimes v_j\}$ is an (orthonormal) basis for $H_1 \otimes H_2$. A general linear combination of these basis vectors cannot, however, be expressed as a single tensor $u \otimes v$ of vectors in the component spaces.

• There is no diagonal arrow $\Delta : X \rightarrow X \otimes X$. Suppose there were such a diagonal; then for an arbitrary scalar $\lambda$ we would have $\Delta(\lambda u) = (\lambda u) \otimes (\lambda u) = \lambda^2 (u \otimes u) = \lambda^2 \Delta(u)$, and since $\Delta$ must be linear this is not possible.

Presumably the latter is equivalent to the statement in quantum mechanics of “no cloning”.
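To make the first of these points concrete, suppose the combination $u_1 \otimes v_1 + u_2 \otimes v_2$ (with the $u_i$ and the $v_i$ linearly independent) could be written as a single tensor. Expanding a candidate decomposition shows it cannot:

```latex
(\alpha_1 u_1 + \alpha_2 u_2) \otimes (\beta_1 v_1 + \beta_2 v_2)
  = \sum_{i,j} \alpha_i \beta_j \, u_i \otimes v_j
```

Comparing coefficients against $u_1 \otimes v_1 + u_2 \otimes v_2$ requires $\alpha_1\beta_1 = \alpha_2\beta_2 = 1$ and $\alpha_1\beta_2 = \alpha_2\beta_1 = 0$, which is impossible since $(\alpha_1\beta_1)(\alpha_2\beta_2)$ and $(\alpha_1\beta_2)(\alpha_2\beta_1)$ are the same product. In quantum-mechanical language, such an element is entangled.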

# The Implicit Euler Method

We can solve our differential equation using the Implicit Euler method which is unconditionally stable. We can also take this opportunity to use the Vector Package rather than Arrays as it has a richer set of combinators and to tidy up the code to make the payoff explicit (thanks to suggestions by Ben Moseley).

First let’s get our imports arranged so the code isn’t too cluttered with qualified names.

```haskell
{-# LANGUAGE RecordWildCards #-}

import Prelude hiding
  ( length
  , scanl
  , scanr
  , zip
  , tail
  , init
  , last
  )
import qualified Data.Vector as V
import Data.Vector
  ( (!)
  , generate
  , length
  , Vector
  , prescanl
  , scanl
  , scanr
  , zip
  , tail
  , init
  , last
  , unfoldr
  )
```

Define a type synonym for the payoff function and put all the parameters for the trade in one place.

```haskell
type Payoff = Double -> Double

data Config = Config {
    r      :: Double,
    sigma  :: Double,
    t      :: Double,
    m      :: Int, -- Space steps
    n      :: Int, -- Time steps
    xMax   :: Double,
    deltaX :: Double,
    deltaT :: Double
  }

defaultConf :: Config
defaultConf = Config {
    r      = 0.05,
    sigma  = 0.20,
    t      = 3.0,
    m      = 80,
    n      = 800,
    xMax   = 150,
    deltaX = xMax defaultConf / fromIntegral (m defaultConf),
    deltaT = t defaultConf / fromIntegral (n defaultConf)
  }
```
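The self-referential fields deltaX and deltaT work because Haskell is lazy: defaultConf can mention itself while it is being defined. Here is a minimal check of the derived spacings, as a standalone restatement of the record so it compiles on its own:

```haskell
-- Standalone restatement of Config, to check the derived grid spacings.
-- Laziness lets deltaX and deltaT refer back to defaultConf while it is
-- being defined; they are only evaluated on demand.
data Config = Config
  { r      :: Double
  , sigma  :: Double
  , t      :: Double
  , m      :: Int     -- Space steps
  , n      :: Int     -- Time steps
  , xMax   :: Double
  , deltaX :: Double
  , deltaT :: Double
  }

defaultConf :: Config
defaultConf = Config
  { r      = 0.05
  , sigma  = 0.20
  , t      = 3.0
  , m      = 80
  , n      = 800
  , xMax   = 150
  , deltaX = xMax defaultConf / fromIntegral (m defaultConf)  -- 150 / 80 = 1.875
  , deltaT = t defaultConf / fromIntegral (n defaultConf)     -- 3 / 800 = 0.00375
  }
```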

```haskell
class Comonad c where
  coreturn :: c a -> a
  (=>>)    :: c a -> (c a -> b) -> c b
```

Instead of using an Array to represent a "time slice" we use a Vector. Notice we can use the Vector library functions to implement the cobind.

```haskell
data PointedArray a = PointedArray Int (Vector a)
  deriving Show

instance Functor PointedArray where
  fmap f (PointedArray i x) = PointedArray i (fmap f x)

instance Comonad PointedArray where
  coreturn (PointedArray i x) = x!i
  (PointedArray i x) =>> f =
    PointedArray i (V.map (f . flip PointedArray x) (generate (length x) id))
```

Recall the Black-Scholes equation:

$$\frac{\partial z}{\partial t} + \frac{1}{2}\sigma^2 x^2\frac{\partial^2 z}{\partial x^2} + rx\frac{\partial z}{\partial x} - rz = 0$$

Again we can approximate this by a difference equation, but in this case we approximate the time derivative by $\frac{z^{n-1}_i - z^n_i}{\Delta t}$:

$$\frac{z^{n-1}_i - z^n_i}{\Delta t} + \frac{1}{2} \sigma^2 (i\Delta x)^2\frac{z^n_{i+1} - 2z^n_i + z^n_{i-1}}{\Delta x^2} + r(i\Delta x)\frac{z^n_{i+1} - z^n_{i-1}}{2\Delta x} - rz^n_i = 0$$

Re-arranging (and not forgetting we are going backwards in time, so that $n$ is earlier than $n-1$), we obtain an implicit equation for $z^n$ in terms of $z^{n-1}$:

$$z^{n-1}_i = A_i z^n_{i-1} + B_i z^n_i + C_i z^n_{i+1}$$

where

$$\begin{aligned} A_i &= \frac{\Delta t}{2}(ri - \sigma^2 i^2) \\ B_i &= 1 + \Delta t(r + \sigma^2 i^2) \\ C_i &= -\frac{\Delta t}{2}(ri + \sigma^2 i^2)\end{aligned}$$
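A quick sanity check on these coefficients: the $\sigma^2$ terms cancel in the sum, so each row should satisfy $A_i + B_i + C_i = 1 + r\Delta t$, i.e. one time step of pure discounting. A standalone sketch, using the defaultConf parameter values:

```haskell
-- Row sums of the implicit scheme: A_i + B_i + C_i should equal
-- 1 + r * deltaT for every i, because the sigma^2 terms cancel.
r0, sigma0, deltaT0 :: Double
r0      = 0.05
sigma0  = 0.20
deltaT0 = 3.0 / 800

a0, b0, c0 :: Int -> Double
a0 i = deltaT0 * (r0 * fromIntegral i - sigma0^2 * fromIntegral i ^ 2) / 2
b0 i = 1 + deltaT0 * (r0 + sigma0^2 * fromIntegral i ^ 2)
c0 i = negate $ deltaT0 * (r0 * fromIntegral i + sigma0^2 * fromIntegral i ^ 2) / 2

rowSum :: Int -> Double
rowSum i = a0 i + b0 i + c0 i
```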

Thus we have a matrix equation which we need to invert. Fortunately the matrix is tridiagonal so we can use the Thomas Algorithm to invert it.

Let’s represent our tridiagonal matrix as a Vector of triples

```haskell
data TriDiMatrix a = TriDiMatrix (Vector (a, a, a))
  deriving Show
```

Here’s our tridiagonal matrix. Note that we have encoded two of the boundary conditions in it (the cases $z(t, 0)$ and $z(t, x)$ as $x \rightarrow \infty$).

```haskell
triDi =
  TriDiMatrix $ unfoldr h 0
  where
    h :: Int -> Maybe ((Double, Double, Double), Int)
    h k | k == m' + 1 = Nothing
    h k | k == m'     = Just ((0.0, 1.0, 0.0), k + 1)
    h k | k == 0      = Just ((0.0, 1.0, 0.0), k + 1)
    h k               = Just ((afn k, bfn k, cfn k), k + 1)

    afn i = deltaT' * (r' * (fromIntegral i) - sigma'^2 * (fromIntegral i)^2) / 2
    bfn i = 1 + deltaT' * (r' + sigma'^2 * (fromIntegral i)^2)
    cfn i = negate $ deltaT' * (r' * (fromIntegral i) + sigma'^2 * (fromIntegral i)^2) / 2

m'      = m      defaultConf
deltaT' = deltaT defaultConf
r'      = r      defaultConf
sigma'  = sigma  defaultConf
```

Next we implement the Thomas Algorithm. solveTriDi takes a (tridiagonal) matrix and a vector and returns the application of the inverse of the matrix to the vector.

```haskell
solveTriDi :: TriDiMatrix Double -> Vector Double -> Vector Double
solveTriDi (TriDiMatrix m) d = z
  where
    -- Forward sweep
    c' = prescanl f (let (a1, b1, c1) = m!0 in c1 / b1) (tail m)
      where
        f ci' (ai, bi, ci) = ci / (bi - ci'*ai)

    d' = scanl g d1' $ zip (tail m) (zip c' (tail d))
      where
        g di' ((ai, bi, ci), (ci', di)) = (di - di'*ai) / (bi - ci'*ai)
        d1         = d!0
        (_, b1, _) = m!0
        d1'        = d1 / b1

    -- Backward substitution
    z = scanr h xn $ zip c' (init d')
      where
        h (ci', di') xi1 = di' - ci'*xi1
        xn = last d'
```
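To sanity-check solveTriDi it helps to have an independent reference. Here is a throwaway list-based Thomas algorithm together with a tridiagonal multiply, verified on a small system; the function names and the test matrix are mine, not part of the pricer:

```haskell
import Data.List (zipWith4)

-- Rows of the tridiagonal matrix as (a_i, b_i, c_i) triples for the
-- sub-, main- and super-diagonal; a_1 and c_n are ignored.
thomas :: [(Double, Double, Double)] -> [Double] -> [Double]
thomas ((_, b1, c1) : rows) (d1 : ds) =
    back (scanl sweep (c1 / b1, d1 / b1) (zip rows ds))
  where
    -- Forward sweep: c'_i = c_i / (b_i - a_i c'_{i-1}),
    --                d'_i = (d_i - a_i d'_{i-1}) / (b_i - a_i c'_{i-1})
    sweep (c', d') ((a, b, c), d) =
      let denom = b - a * c' in (c / denom, (d - a * d') / denom)
    -- Backward substitution: x_n = d'_n, x_i = d'_i - c'_i x_{i+1}
    back = foldr step []
      where
        step (_,  d') []         = [d']
        step (c', d') xs@(x : _) = (d' - c' * x) : xs
thomas _ _ = []

-- Multiply a tridiagonal matrix by a vector, for checking solutions.
mulTriDi :: [(Double, Double, Double)] -> [Double] -> [Double]
mulTriDi rows xs =
  zipWith4 (\(a, b, c) xm x xp -> a * xm + b * x + c * xp)
           rows (0 : xs) xs (tail xs ++ [0])
```

Applying thomas to mulTriDi rows x should recover x, which is exactly the property we want of solveTriDi.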

Now set up the boundary condition for our equation which is now a function of the payoff.

```haskell
pricePArrAtT :: Config -> Payoff -> Int -> PointedArray Double
pricePArrAtT cfg@(Config {..}) fn i =
  PointedArray i (V.fromList [ fn (deltaX * (fromIntegral j)) | j <- [0..m] ])
```

Finally we can price our call option without having to worry about the stability of the solution.

```haskell
callPrices :: Int -> [Double]
callPrices i = map coreturn $ iterate (=>> oneStep) $ pricePArrAtT defaultConf fn i
  where
    oneStep (PointedArray j x) = (solveTriDi triDi x)!j
    fn x                       = max 0 (x - k) -- A call
    k                          = 50.0          -- The strike

callPrice i = (callPrices i)!!(n defaultConf - 1)
```
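As an independent check on the grid prices, the call also has a closed-form Black-Scholes price. Here is a sketch; the normal CDF uses the Abramowitz and Stegun polynomial approximation 26.2.17, and the function names are mine:

```haskell
-- Closed-form Black-Scholes call price for cross-checking the
-- finite-difference result.  normCdf is the Abramowitz & Stegun
-- 26.2.17 approximation (absolute error below 1e-7).
normCdf :: Double -> Double
normCdf x
  | x < 0     = 1 - normCdf (negate x)
  | otherwise = 1 - pdf * poly
  where
    pdf  = exp (negate x * x / 2) / sqrt (2 * pi)
    u    = 1 / (1 + 0.2316419 * x)
    poly = u * (0.319381530 + u * (-0.356563782 + u * (1.781477937
             + u * (-1.821255978 + u * 1.330274429))))

-- Call price for spot s, strike k, rate r, volatility sigma, expiry t.
bsCall :: Double -> Double -> Double -> Double -> Double -> Double
bsCall s k r sigma t =
    s * normCdf d1 - k * exp (negate r * t) * normCdf d2
  where
    d1 = (log (s / k) + (r + sigma * sigma / 2) * t) / (sigma * sqrt t)
    d2 = d1 - sigma * sqrt t
```

With the parameters above (s = 50, k = 50, r = 0.05, sigma = 0.2, t = 3) this gives a price of about 10.46, which the grid price at the strike should approach as the grid is refined.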

# Solving a Partial Differential Equation Comonadically

Suppose we want to find the price of a European call option. Then we need to solve the Black-Scholes equation:

$$\frac{\partial z}{\partial t} + \frac{1}{2}\sigma^2 x^2\frac{\partial^2 z}{\partial x^2} + rx\frac{\partial z}{\partial x} - rz = 0$$

Although this particular equation can be solved explicitly, under more realistic assumptions we have to rely on numerical methods. We can approximate the partial differential equation by a difference equation (the minus sign on the left hand side is because we are stepping backwards in time):

$$\frac{z^{n+1}_i - z^n_i}{-\Delta t} + \frac{1}{2} \sigma^2 (i\Delta x)^2\frac{z^n_{i+1} - 2z^n_i + z^n_{i-1}}{\Delta x^2} + r(i\Delta x)\frac{z^n_{i+1} - z^n_{i-1}}{2\Delta x} - rz^n_i = 0$$

Re-arranging we obtain:

$$z^{n+1}_i = z^n_{i-1}\frac{\Delta t}{2}(\sigma^2 i^2 - ri) + z^n_i\left(1 - \Delta t(r + \sigma^2 i^2)\right) + z^n_{i+1}\frac{\Delta t}{2}(\sigma^2 i^2 + ri)$$

```haskell
import Data.Array

class Comonad c where
  coreturn :: c a -> a
  (=>>)    :: c a -> (c a -> b) -> c b
```

We make each "time slice" of the differential equation an array with a distinguished element.

```haskell
data PointedArray i a = PointedArray i (Array i a)
  deriving Show
```

We make this into a functor by using the functor instance of the underlying array.

```haskell
instance Ix i => Functor (PointedArray i) where
  fmap f (PointedArray i a) = PointedArray i (fmap f a)
```

An array with a distinguished element is a comonad in which the cobind updates each element of the array simultaneously.

```haskell
instance Ix i => Comonad (PointedArray i) where
  coreturn (PointedArray i a) = a!i
  (PointedArray i a) =>> f =
    PointedArray i (listArray (bounds a)
                    (map (f . flip PointedArray a) (range $ bounds a)))
```

Now let’s set up the parameters for our equation.

• The interest rate — 5% seems a bit high these days

```haskell
r = 0.05
```

• The volatility — 20% seems a bit low these days

```haskell
sigma = 0.2
```

• The strike

```haskell
k = 50.0
```

• The time horizon in years

```haskell
t = 3.0
```

• The granularity of the space component of the grid — 3 times the strike should be good enough

```haskell
m = 80

xMax = 150
deltaX = xMax / (fromIntegral m)
```

• The granularity of the time component of the grid. A necessary condition for the explicit Euler method (which we are using) to be stable is $\Delta T \le \frac{1}{\sigma^2 n^2}$

```haskell
n = 800

deltaT = t / (fromIntegral n)
```

Now we can encode the backwards step of the difference equation and two of the boundary conditions:

```haskell
f (PointedArray j x) | j == 0 = 0.0
f (PointedArray j x) | j == m = xMax - k
f (PointedArray j x) =
    a * x!(j-1) + b * x!j + c * x!(j+1)
  where
    a = deltaT * (sigma^2 * (fromIntegral j)^2 - r * (fromIntegral j)) / 2
    b = 1 - deltaT * (r + sigma^2 * (fromIntegral j)^2)
    c = deltaT * (sigma^2 * (fromIntegral j)^2 + r * (fromIntegral j)) / 2
```

Finally we can set the boundary condition at maturity time to be the payoff of a call.

```haskell
priceAtT :: PointedArray Int Double
priceAtT = PointedArray 0 (listArray (0, m)
             [ max 0 (deltaX * (fromIntegral j) - k) | j <- [0..m] ])
```

And now we can iterate backwards in time to find the price today (but only at points on the grid).

```haskell
prices = iterate (=>> f) priceAtT
```

If we want the price of the option for a given stock price today then we can just pull out the required value.

```haskell
pricesAtM m (PointedArray _ x) = x!m
price m = last $ map (pricesAtM m) $ take n $ iterate (=>> f) priceAtT
```

# Anamorphism Example

There don’t seem to be many examples of anamorphisms around. Here’s one to make up the lack.

Let’s start off with our target language.

> {-# LANGUAGE
>     DeriveFunctor,
>     DeriveFoldable,
>     DeriveTraversable #-}
>
> import Data.Foldable
> import Data.Traversable
>
> data TermF a = PlusF a a
>              | MultF a a
>              | IntConstF Int
>              deriving (Show, Ord, Eq, Functor, Foldable, Traversable)
>
> newtype Mu f = In {out :: f (Mu f)}
>
> type Term = Mu TermF
>
> type Algebra f a   = f a -> a
> type CoAlgebra f a = a -> f a
>
> cata :: Functor f => Algebra f a -> (Mu f -> a)
> cata f = f . fmap (cata f) . out
>
> ana :: Functor f => CoAlgebra f a -> (a -> Mu f)
> ana f = In . fmap (ana f) . f
>
> showF :: Term -> String
> showF = cata alg
>   where
>     alg (IntConstF x) = show x
>     alg (PlusF x y) = "(" ++ x ++ "+" ++ y ++ ")"
>     alg (MultF x y) = "(" ++ x ++ "*" ++ y ++ ")"


Now we can convert ints into a representation in our language just using 2’s and 1’s.

> toBin :: Int -> Term
> toBin = ana coAlg
>   where
>     coAlg :: Int -> TermF Int
>     coAlg 0 = IntConstF 0
>     coAlg 1 = IntConstF 1
>     coAlg 2 = IntConstF 2
>     coAlg n | n `mod` 2 == 0 = MultF 2 (n `div` 2)
>     coAlg n | otherwise      = PlusF 1 (n - 1)


Here’s an example:

*Main> showF $ toBin 31
"(1+(2*(1+(2*(1+(2*(1+2)))))))"

# Monadic Caching Folds

One of my colleagues (Roland Zumkeller) posted some nifty functions to count the number of expressions in an AST for the DSL we work on. This led to an email and chat discussion that I have summarised in this post. Any errors are entirely mine.

Let’s start off with our target language. It’s easy to write an interpreter for it and also a function to count its nodes.

> {-# LANGUAGE
>     DeriveFunctor,
>     DeriveFoldable,
>     DeriveTraversable,
>     RankNTypes,
>     FlexibleContexts,
>     NoMonomorphismRestriction,
>     UndecidableInstances #-}
>
> import Prelude hiding (mapM)
> import Data.Foldable
> import Data.Traversable
> import Data.Monoid
> import Control.Monad.Writer hiding (mapM)
> import Control.Monad.State hiding (mapM)
> import qualified Data.Map as Map
>
> data Term = Plus Term Term
>           | Mult Term Term
>           | IntConst Int
>           deriving Show
>
> interp :: Term -> Int
> interp (Plus x y)   = interp x + interp y
> interp (Mult x y)   = interp x * interp y
> interp (IntConst x) = x
>
> numTerms :: Term -> Int
> numTerms (Plus x y)   = numTerms x + numTerms y
> numTerms (Mult x y)   = numTerms x + numTerms y
> numTerms (IntConst _) = 1

Obviously there is a pattern here and we can abstract it by using a catamorphism.

> data TermF a = PlusF a a
>              | MultF a a
>              | IntConstF Int
>              deriving (Show, Ord, Eq, Functor, Foldable, Traversable)
>
> newtype Mu f = In {in_ :: f (Mu f)}
>
> instance Eq (f (Mu f)) => Eq (Mu f) where
>   In x == In y = x == y
>
> instance Ord (f (Mu f)) => Ord (Mu f) where
>   In x `compare` In y = x `compare` y
>
> type Term' = Mu TermF
>
> type Algebra f a = f a -> a
>
> cata :: Functor f => Algebra f a -> (Mu f -> a)
> cata f (In x) = f (fmap (cata f) x)
>
> countAlgebra :: Num a => Algebra TermF (Sum a)
> countAlgebra = \t -> fold t `mappend` (Sum 1)
>
> countTerms = getSum . cata countAlgebra
>
> testTerm :: Term'
> testTerm = let x = (In $ MultF (In $ IntConstF 2) (In $ IntConstF 3))
>                y = (In $ MultF (In $ IntConstF 5) (In $ IntConstF 7))
>            in In $ PlusF x y


But perhaps we would like a monadic catamorphism; in other words we would like to work with algebras in the Kleisli category (of the given monad).

Perhaps Mendler style folds will help. We can capture the naturality condition in the definition of a Mendler algebra by using a rank n type.

> type MendlerAlgebra f c = forall a. (a -> c) -> f a -> c
>
> mcata :: MendlerAlgebra f c -> (Mu f -> c)
> mcata phi = phi (mcata phi) . in_


Ordinary folds and Mendler style folds are equivalent. One of the advantages of using Mendler style folds is that they can be generalized to adjoint folds.

> toMCata :: Functor f => Algebra f a -> MendlerAlgebra f a
> toMCata phi = \gamma -> phi . fmap gamma
>
> fromMCata :: MendlerAlgebra f a -> Algebra f a
> fromMCata phi = phi id
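To see the equivalence concretely, here is a small self-contained example, with my own NatF functor rather than the TermF of the literate code above, checking that a fold and its Mendler form agree:

```haskell
{-# LANGUAGE RankNTypes, DeriveFunctor #-}

-- A self-contained check that an ordinary fold and the Mendler-style
-- fold obtained from it via toMCata agree.  The NatF example is mine,
-- not from the post.
newtype Mu f = In { in_ :: f (Mu f) }

type Algebra f a        = f a -> a
type MendlerAlgebra f c = forall a. (a -> c) -> f a -> c

cata :: Functor f => Algebra f a -> Mu f -> a
cata f (In x) = f (fmap (cata f) x)

mcata :: MendlerAlgebra f c -> Mu f -> c
mcata phi = phi (mcata phi) . in_

toMCata :: Functor f => Algebra f a -> MendlerAlgebra f a
toMCata phi = \gamma -> phi . fmap gamma

fromMCata :: MendlerAlgebra f a -> Algebra f a
fromMCata phi = phi id

-- Peano naturals as a fixed point, with an algebra reading them back
-- as Ints.
data NatF a = Z | S a deriving Functor

toInt :: Algebra NatF Int
toInt Z     = 0
toInt (S n) = n + 1

fromInt :: Int -> Mu NatF
fromInt 0 = In Z
fromInt k = In (S (fromInt (k - 1)))
```

Running cata toInt and mcata (toMCata toInt) over fromInt n gives the same answer for every n, and fromMCata . toMCata gives back an extensionally equal algebra.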


And now I can create a "monadic" algebra over which I can fold (Mendler style).

> countTermsMAlgebra :: (MonadWriter (Sum t) m, Num t) =>
>                       MendlerAlgebra TermF (m ())
> countTermsMAlgebra h (PlusF x y)   = h x >> h y >> tell (Sum 1)
> countTermsMAlgebra h (MultF x y)   = h x >> h y >> tell (Sum 1)
> countTermsMAlgebra h (IntConstF _) = tell (Sum 1)
>
> countTerms' :: Num a => Term' -> a
> countTerms' t = getSum $ execWriter $ mcata countTermsMAlgebra t


Sadly we are still not working on an algebra in the Kleisli category which is obvious if we use the isomorphism from Mendler style algebras.

> countTermsAlgebra ::  (MonadWriter (Sum t) m, Num t) =>
>                       Algebra TermF (m ())
> countTermsAlgebra = fromMCata countTermsMAlgebra


However if you make some more assumptions about the algebra functor (that it is Traversable) then one can do this. It’s not clear (to me at least) what this requirement is in categorical terms.

> type MonadicAlgebra f m a = f a -> m a
>
> cataM :: (Traversable f, Monad m) =>
>          MonadicAlgebra f m a -> (Mu f -> m a)
> cataM f = f <=< mapM (cataM f) . in_


One can go further and memoise monadic catamorphisms.

> type CacheT a b = StateT (Map.Map a b)
>
> memoise :: (Monad m, Ord a) =>
>            ((a -> CacheT a b m b) -> (a -> CacheT a b m b)) ->
>            (a -> CacheT a b m b)
> memoise f x = gets (Map.lookup x) >>= (`maybe` return)
>   (do y <- f (memoise f) x; modify (Map.insert x y); return y)
>
> cataMemoM :: (Traversable f, Monad m, Ord (Mu f)) =>
>              MonadicAlgebra f m a -> (Mu f -> m a)
> cataMemoM f = (`evalStateT` Map.empty)
>   . memoise (\cataM_f -> lift . f <=< mapM cataM_f . in_)
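The memoise combinator is independent of the catamorphism machinery. To see what it buys, here is the same open-recursion trick applied to Fibonacci in the Writer monad, counting how many times the underlying action runs; the fib example is mine, not from the post:

```haskell
import Control.Monad.State
import Control.Monad.Writer
import Data.Monoid (Sum(..))
import qualified Data.Map as Map

type CacheT a b = StateT (Map.Map a b)

-- The memoise combinator from the post: take the open-recursive form of
-- a monadic function and close it over a cache, so the underlying action
-- runs at most once per distinct argument.
memoise :: (Monad m, Ord a) =>
           ((a -> CacheT a b m b) -> (a -> CacheT a b m b)) ->
           (a -> CacheT a b m b)
memoise f x = gets (Map.lookup x) >>= (`maybe` return)
  (do y <- f (memoise f) x; modify (Map.insert x y); return y)

-- Open-recursive Fibonacci that logs one tick per invocation of the body.
fibOpen :: (Int -> CacheT Int Integer (Writer (Sum Int)) Integer)
        ->  Int -> CacheT Int Integer (Writer (Sum Int)) Integer
fibOpen _   0 = lift (tell (Sum 1)) >> return 0
fibOpen _   1 = lift (tell (Sum 1)) >> return 1
fibOpen rec n = do
  lift (tell (Sum 1))
  a <- rec (n - 1)
  b <- rec (n - 2)
  return (a + b)

-- Returns the value and the number of times the body actually ran.
fibMemo :: Int -> (Integer, Int)
fibMemo n = let (v, Sum k) = runWriter (evalStateT (memoise fibOpen n) Map.empty)
            in (v, k)
```

Without the cache the body would run exponentially often; with it, once per distinct argument, so fibMemo 20 runs the body 21 times.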


And now we can count unique nodes in our DSL.

> countTermsMonadicAlgebra :: MonadicAlgebra TermF (Writer (Sum Integer)) ()
> countTermsMonadicAlgebra = const $ tell $ Sum 1
>
> countTermsSharedM :: Mu TermF -> Integer
> countTermsSharedM = getSum .
>                     execWriter .
>                     cataMemoM countTermsMonadicAlgebra


You need caching to avoid counting the shared terms. The point is that the monadic action (tell) should not be repeated when the same term is encountered again.

Here’s an example of our program counting the unique nodes in an expression in our language.

> t = let x = In $ MultF (In $ IntConstF 2) (In $ IntConstF 3)
>     in In $ PlusF x x

*Main> countTermsSharedM t
4
*Main> countTerms t
7


# Isomorphic Types

Nothing to do with differential geometry but there was a question on the haskell-cafe mailing list asking for “a proof that initial algebras & final coalgebras of a CPO coincide”. I presume that means a CPO-category.

A category is a CPO-category iff

• There is a terminator, 1.
• Each hom-set, ${\rm Hom}(A,B)$, is a CPO with a bottom.
• For any three objects, arrow composition ${\rm Hom}(A,B) \times {\rm Hom}(B,C) \longrightarrow {\rm Hom}(A,C)$ is strict and continuous.
• Idempotents split, that is, if $f\circ f = f$ then $f= u\circ d$ and $d\circ u = 1$.

The following lemma answers the poster’s question.

Lemma
If $T$ is a covariant endofunctor on a CPO-category, then $a : TA \longrightarrow A$ is an initial $T$-algebra with respect to the sub-category of strict maps if and only if $a^{-1} : A \longrightarrow TA$ is a final $T$-co-algebra.

Proof
See Freyd.

We can do a bit more. We borrow some notation from Meijer, Fokkinga and Paterson. If, for a given functor, the initial algebra exists and $\phi : FA \longrightarrow A$ is an algebra then we denote the unique arrow from the initial algebra to this algebra as $(\![\phi]\!)$. Dually, if for a given functor, the final co-algebra exists and $\phi : A \longrightarrow FA$ is a co-algebra then we denote the unique arrow to the final co-algebra as ${[\!(\phi)\!]}$.

Theorem
In a CPO-category, if $\phi : TA \longrightarrow A$ is an isomorphism then $(\![\phi]\!) \circ {[\!({\phi^{-1}})\!]} = 1$ and ${[\!({\phi^{-1}})\!]} \circ (\![\phi]\!) = 1$.

Proof
Let $a : TA \longrightarrow A$ be the initial $T$-algebra and let $b : B \longrightarrow TB$ be the final $T$-co-algebra. Then by the above lemma, we have that $A=B$ and $b=a^{-1}$. We also have that ${[\!({\phi^{-1}})\!]} \circ (\![\phi]\!)$ is an algebra morphism from the initial algebra to itself (since $\phi$ is an isomorphism, ${[\!({\phi^{-1}})\!]}$ is an algebra morphism as well as a co-algebra morphism). Since there is only one arrow from the initial algebra to itself, namely $1_A$, we must have ${[\!({\phi^{-1}})\!]} \circ (\![\phi]\!) = 1$.

For the other way, note that ${[\!({\phi^{-1}})\!]} \circ (\![\phi]\!) \circ {[\!({\phi^{-1}})\!]}$ is a co-algebra morphism into the final co-algebra, so by finality it equals ${[\!({\phi^{-1}})\!]}$. Since ${[\!({\phi^{-1}})\!]}$ is split monic by the first part, it follows that $(\![\phi]\!) \circ {[\!({\phi^{-1}})\!]} = 1$. In particular, any algebra $\phi : TC \longrightarrow C$ whose structure map is an isomorphism is isomorphic to the initial algebra.