Backpropagation is Just Steepest Descent with Automatic Differentiation

Preface

The intended audience of this article is someone who knows something about Machine Learning and Artificial Neural Networks (ANNs) in particular, and who recalls that fitting an ANN requires a technique called backpropagation. The goal of this post is to refresh the reader’s knowledge of ANNs and backpropagation and to show that the latter is merely a specialised version of automatic differentiation, a tool that all Machine Learning practitioners should know about and have in their toolkit.

Introduction

The problem is simple to state: we have a (highly) non-linear function, the cost function of an Artificial Neural Network (ANN), and we wish to minimise it so as to estimate the parameters / weights of the function.

In order to minimise the function, one obvious approach is to use steepest descent: start with random values for the parameters to be estimated, find the direction in which the function decreases most quickly, step a small amount in that direction and repeat until close enough.

But we have two problems:

  • We have an algorithm or a computer program that calculates the non-linear function rather than a closed-form expression for the function itself.

  • The function has a very large number of parameters, hundreds if not thousands.

One thing we could try is bumping each parameter by a small amount to get partial derivatives numerically

\displaystyle   \frac{\partial E(\ldots, w, \ldots)}{\partial w} \approx \frac{E(\ldots, w + \epsilon, \ldots) - E(\ldots, w, \ldots)}{\epsilon}

But this would mean evaluating our function many times, once for each parameter, and moreover we could easily get numerical errors as a result of the vagaries of floating point arithmetic.

As an alternative we could turn our algorithm or computer program into a function more recognisable as a mathematical function and then compute the differential itself as a function either by hand or by using a symbolic differentiation package. For the complicated expression which is our mathematical function, the former would be error prone and the latter could easily generate something which would be even more complex and costly to evaluate than the original expression.

The standard approach is to use a technique called backpropagation and the understanding and application of this technique forms a large part of many machine learning lecture courses.

Since at least the 1960s, techniques for automatically differentiating computer programs have been discovered and re-discovered. Anyone who knows about these techniques and reads about backpropagation quickly realises that backpropagation is just steepest descent combined with automatic differentiation.

This article is divided into

  • Refresher on neural networks and backpropagation;

  • Methods for differentiation;

  • Backward and forward automatic differentiation and

  • Concluding thoughts.

The only thing important to remember throughout is the chain rule

\displaystyle   (g \circ f)'(a) = g'(f(a))\cdot f'(a)

in alternative notation

\displaystyle   \frac{\mathrm{d} (g \circ f)}{\mathrm{d} x}(a) =  \frac{\mathrm{d} g}{\mathrm{d} y}(f(a)) \frac{\mathrm{d} f}{\mathrm{d} x}(a)

where y = f(x). More suggestively we can write

\displaystyle   \frac{\mathrm{d} g}{\mathrm{d} x} =  \frac{\mathrm{d} g}{\mathrm{d} y} \frac{\mathrm{d} y}{\mathrm{d} x}

where it is understood that \mathrm{d} g / \mathrm{d} x and \mathrm{d} y / \mathrm{d} x are evaluated at a and \mathrm{d} g / \mathrm{d} y is evaluated at f(a).

For example,

\displaystyle   \frac{\mathrm{d}}{\mathrm{d} x} \sqrt{3 \sin(x)} =  \frac{\mathrm{d}}{\mathrm{d} x} (3 \sin(x)) \cdot \frac{\mathrm{d}}{\mathrm{d} y} \sqrt{y} =  3 \cos(x) \cdot \frac{1}{2\sqrt{y}} =  \frac{3\cos(x)}{2\sqrt{3\sin(x)}}

Acknowledgements

Sadly I cannot recall all the sources I looked at in order to produce this article but I have made heavy use of the following.

Neural Network Refresher

Here is our model, with \boldsymbol{x} the input, \hat{\boldsymbol{y}} the predicted output, \boldsymbol{y} the actual output and w^{(k)} the weights in the k-th layer. We have concretised the transfer function as \tanh but it is quite popular to use the logistic (sigmoid) function.

\displaystyle   \begin{aligned}  a_i^{(1)} &= \sum_{j=0}^{N^{(1)}} w_{ij}^{(1)} x_j \\  z_i^{(1)} &= \tanh(a_i^{(1)}) \\  a_i^{(2)} &= \sum_{j=0}^{N^{(2)}} w_{ij}^{(2)} z_j^{(1)} \\  \dots     &= \ldots \\  a_i^{(L-1)} &= \sum_{j=0}^{N^{(L-1)}} w_{ij}^{(L-1)} z_j^{(L-2)} \\  z_i^{(L-1)} &= \tanh(a_i^{(L-1)}) \\  \hat{y}_i &= \sum_{j=0}^{N^{(L)}} w_{ij}^{(L)} z_j^{(L-1)}  \end{aligned}

with the loss or cost function

\displaystyle   E(\boldsymbol{w}; \boldsymbol{x}, \boldsymbol{y}) = \frac{1}{2}\|(\hat{\boldsymbol{y}} - \boldsymbol{y})\|^2

The diagram below depicts a neural network with a single hidden layer.

In order to apply the steepest descent algorithm we need to calculate the differentials of this latter function with respect to the weights, that is, we need to calculate

\displaystyle   \Delta w_{ij} = \frac{\partial E}{\partial w_{ij}}

Applying the chain rule

\displaystyle   \Delta w_{ij} =  \frac{\partial E}{\partial w_{ij}} =  \frac{\partial E}{\partial a_j}\frac{\partial a_j}{\partial w_{ij}}

Since

\displaystyle   a_j^{(l)} = \sum_{i=0}^N w_{ij}^{(l)}z_i^{(l-1)}

we have

\displaystyle   \frac{\partial a_j^{(l)}}{\partial w_{ij}^{(l)}} =  \frac{\partial}{\partial w_{ij}^{(l)}}\sum_{k=0}^M w_{kj}^{(l)}z_k^{(l-1)} =  z_i^{(l-1)}

Defining

\displaystyle   \delta_j^{(l)} \equiv  \frac{\partial E}{\partial a_j^{(l)}}

we obtain

\displaystyle   \Delta w_{ij}^{(l)} =  \frac{\partial E}{\partial w_{ij}^{(l)}} =  \delta_j^{(l)} z_i^{(l-1)}

Finding the z_i for each layer is straightforward: we start with the inputs and propagate forward. In order to find the \delta_j we need to start with the outputs and propagate backwards:

For the output layer we have (since \hat{y}_j = a_j)

\displaystyle   \delta_j = \frac{\partial E}{\partial a_j} = \frac{\partial E}{\partial \hat{y}_j} = \frac{\partial}{\partial \hat{y}_j}\bigg(\frac{1}{2}\sum_{i=0}^M (\hat{y}_i - y_i)^2\bigg) = \hat{y}_j - y_j

For a hidden layer using the chain rule

\displaystyle   \delta_j^{(l-1)} = \frac{\partial E}{\partial a_j^{(l-1)}} =  \sum_k \frac{\partial E}{\partial a_k^{(l)}}\frac{\partial a_k^{(l)}}{\partial a_j^{(l-1)}}

Now

\displaystyle   a_k^{(l)} = \sum_i w_{ki}^{(l)}z_i^{(l-1)} = \sum_i w_{ki}^{(l)} f(a_i^{(l-1)})

so that

\displaystyle   \frac{\partial a_k^{(l)}}{\partial a_j^{(l-1)}} =  \frac{\partial}{\partial a_j^{(l-1)}}\sum_i w_{ki}^{(l)} f(a_i^{(l-1)}) =  w_{kj}^{(l)}\,f'(a_j^{(l-1)})

and thus

\displaystyle   \delta_j^{(l-1)} =  \sum_k \frac{\partial E}{\partial a_k^{(l)}}\frac{\partial a_k^{(l)}}{\partial a_j^{(l-1)}} =  \sum_k \delta_k^{(l)} w_{kj}^{(l)}\, f'(a_j^{(l-1)}) =  f'(a_j^{(l-1)}) \sum_k \delta_k^{(l)} w_{kj}^{(l)}

Summarising

  1. We calculate all a_j and z_j for each layer starting with the input layer and propagating forward.

  2. We evaluate \delta_j^{(L)} in the output layer using \delta_j = \hat{y}_j - y_j.

  3. We evaluate \delta_j in each layer using \delta_j^{(l-1)} = f'(a_j^{(l-1)})\sum_k \delta_k^{(l)} w_{kj}^{(l)} starting with the output layer and propagating backwards.

  4. Use \partial E / \partial w_{ij}^{(l)} = \delta_j^{(l)} z_i^{(l-1)} to obtain the required derivatives in each layer.

For the particular activation function \tanh we have f'(a) = \tanh' (a) = 1 - \tanh^2(a). And finally we can use the partial derivatives to step in the right direction using steepest descent

\displaystyle   w' = w - \gamma\nabla E(w)

where \gamma is the step size aka the learning rate.
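
To make the four steps and the final update rule concrete, here is a minimal standalone Haskell sketch (not part of the literate code later in this post; the names forward, grads and descend are mine) for a single hidden layer with a \tanh activation and a linear output layer. Biases are assumed to be absorbed into the weights via a constant input, as in the sums over j = 0 above.

import Data.List (transpose)

type Vec = [Double]
type Mat = [[Double]]              -- one row of weights per output neuron

dot :: Vec -> Vec -> Double
dot us vs = sum (zipWith (*) us vs)

mv :: Mat -> Vec -> Vec
mv m v = map (`dot` v) m

-- Step 1: the forward sweep, keeping the activations a and outputs z.
forward :: Mat -> Mat -> Vec -> (Vec, Vec, Vec)
forward w1 w2 x = (a1, z1, yHat)
  where
    a1   = mv w1 x
    z1   = map tanh a1
    yHat = mv w2 z1                -- linear output layer

-- Steps 2 to 4: the deltas and the partial derivatives of E.
grads :: Mat -> Mat -> Vec -> Vec -> (Mat, Mat)
grads w1 w2 x y = (dW1, dW2)
  where
    (a1, z1, yHat) = forward w1 w2 x
    deltaOut = zipWith (-) yHat y                          -- step 2
    deltaHid = zipWith (\a s -> (1 - tanh a ^ 2) * s) a1   -- step 3
                       (map (`dot` deltaOut) (transpose w2))
    dW2 = [ map (* d) z1 | d <- deltaOut ]                 -- step 4
    dW1 = [ map (* d) x  | d <- deltaHid ]

-- One steepest descent step: w' = w - gamma * dE/dw
descend :: Double -> Mat -> Mat -> Mat
descend gamma = zipWith (zipWith (\w dw -> w - gamma * dw))

The point to notice is that the backward sweep reuses the activations computed in the forward sweep, which is exactly what the summary above says.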

Differentiation

So now we have an efficient algorithm for differentiating the cost function for an ANN and thus estimating its parameters but it seems quite complex. In the introduction we alluded to other methods of differentiation. Let us examine those in a bit more detail before moving on to a general technique for differentiating programs of which backpropagation turns out to be a specialisation.

Numerical Differentiation

Consider the function f(x) = e^x; its derivative is f'(x) = e^x, so we can easily compare a numerical approximation of the derivative with the exact result. The numerical approximation is given by

\displaystyle   f'(x) \approx \frac{f(x + \epsilon) - f(x)}{\epsilon}

In theory we should get a closer and closer approximation as \epsilon decreases, but as the chart below shows, at some point (with \epsilon \approx 2^{-26}) the approximation worsens as a result of the fact that we are using floating point arithmetic. For a complex function such as one which calculates the cost function of an ANN, there is a risk that we may end up getting a poor approximation for the derivative and thus a poor estimate for the parameters of the model.
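
The behaviour shown in the chart is easy to reproduce with a short standalone Haskell sketch (forwardDiff and the choice of step sizes are mine, for illustration only):

import Text.Printf (printf)

-- Forward difference approximation to the derivative of g at x.
forwardDiff :: (Double -> Double) -> Double -> Double -> Double
forwardDiff g eps x = (g (x + eps) - g x) / eps

-- Absolute error against the exact derivative of exp at x = 1
-- for step sizes 2^-4, 2^-8, ..., 2^-48.
main :: IO ()
main = mapM_ report [4, 8 .. 48 :: Int]
  where
    report k = printf "eps = 2^-%d, error = %e\n" k err
      where
        err = abs (forwardDiff exp (2 ** negate (fromIntegral k)) 1 - exp 1)

The error initially shrinks as \epsilon does, but grows again once \epsilon approaches the square root of the machine epsilon for double precision, which is where the \epsilon \approx 2^{-26} figure comes from.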

Symbolic Differentiation

Suppose we have the following program (written in Python)

import numpy as np

def many_sines(x):
    y = x
    for _ in range(6):  # apply y -> sin(x + y) six times
        y = np.sin(x + y)
    return y

When we unroll the loop we are actually evaluating

\displaystyle   f(x) = \sin(x + \sin(x + \sin(x + \sin(x + \sin(x + \sin(x + x))))))

Now suppose we want to get the differential of this function. Symbolically this would be

\displaystyle   \begin{aligned}  f'(x) &=           (((((2\cdot \cos(2x)+1)\cdot \\        &\phantom{=} \cos(\sin(2x)+x)+1)\cdot \\        &\phantom{=} \cos(\sin(\sin(2x)+x)+x)+1)\cdot \\        &\phantom{=} \cos(\sin(\sin(\sin(2x)+x)+x)+x)+1)\cdot \\        &\phantom{=} \cos(\sin(\sin(\sin(\sin(2x)+x)+x)+x)+x)+1)\cdot \\        &\phantom{=} \cos(\sin(\sin(\sin(\sin(\sin(2x)+x)+x)+x)+x)+x)  \end{aligned}

Typically the non-linear function computed by an ANN is much more complex than the simple function given above. Thus its derivative will be correspondingly more complex and therefore expensive to compute. Moreover, calculating this derivative by hand could easily introduce errors. And in order to have a computer perform the symbolic calculation we would have to encode our cost function somehow so that it is amenable to this form of manipulation.
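
For comparison, here is a hypothetical Haskell transcription of many_sines (the name manySines is mine). Because it is polymorphic in any Floating type, the dual numbers defined in the forward mode section below can be pushed straight through it, giving the derivative without ever constructing the expression above:

-- Six nested applications of \y -> sin (x + y), starting from y = x.
manySines :: Floating a => a -> a
manySines x = iterate (\y -> sin (x + y)) x !! 6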

Automatic Differentiation

Reverse Mode

Traditionally, forward mode is introduced first as it is considered easier to understand. We introduce reverse mode first as it can be seen to be a generalisation of backpropagation.

Consider the function

\displaystyle   f(x) = \exp(\exp(x) + (\exp(x))^2) + \sin(\exp(x) + (\exp(x))^2)

Let us write this as a data flow graph.

We can thus re-write our function as a sequence of simpler functions in which each function only depends on variables earlier in the sequence.

\displaystyle   \begin{aligned}  u_7    &= f_7(u_6, u_5, u_4, u_3, u_2, u_1) \\  u_6    &= f_6(u_5, u_4, u_3, u_2, u_1) \\  \ldots &= \ldots \\  u_1    &= f_1(x)  \end{aligned}
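
For concreteness, one decomposition of our particular function (the labelling is my own, chosen to match the derivatives computed below) is

\displaystyle   \begin{aligned}  u_1 &= x \\  u_2 &= \exp(u_1) \\  u_3 &= u_2^2 \\  u_4 &= u_2 + u_3 \\  u_5 &= \sin(u_4) \\  u_6 &= \exp(u_4) \\  u_7 &= u_5 + u_6 = f(x)  \end{aligned}

Taking differentials of the general form above gives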

\displaystyle   \begin{aligned}  \mathrm{d}u_7    &= \frac{\partial f_7}{\partial u_6} \mathrm{d} u_6 +                      \frac{\partial f_7}{\partial u_5} \mathrm{d} u_5 +                      \frac{\partial f_7}{\partial u_4} \mathrm{d} u_4 +                      \frac{\partial f_7}{\partial u_3} \mathrm{d} u_3 +                      \frac{\partial f_7}{\partial u_2} \mathrm{d} u_2 +                      \frac{\partial f_7}{\partial u_1} \mathrm{d} u_1 \\  \mathrm{d}u_6    &= \frac{\partial f_6}{\partial u_5} \mathrm{d} u_5 +                      \frac{\partial f_6}{\partial u_4} \mathrm{d} u_4 +                      \frac{\partial f_6}{\partial u_3} \mathrm{d} u_3 +                      \frac{\partial f_6}{\partial u_2} \mathrm{d} u_2 +                      \frac{\partial f_6}{\partial u_1} \mathrm{d} u_1 \\  \ldots           &= \ldots \\  \mathrm{d}u_1    &= \frac{\partial f_1}{\partial x} \mathrm{d} x  \end{aligned}

In our particular example, since u_1, \dots, u_5 do not depend on u_6

\displaystyle   \begin{aligned}  \frac{\mathrm{d}u_7}{\mathrm{d}u_6} &= 1  \end{aligned}

Further u_6 does not depend on u_5 so we also have

\displaystyle   \begin{aligned}  \frac{\mathrm{d}u_7}{\mathrm{d}u_5} &= 1 \\  \end{aligned}

Now things become more interesting as u_6 and u_5 both depend on u_4 and so the chain rule makes an explicit appearance

\displaystyle   \begin{aligned}  \frac{\mathrm{d}u_7}{\mathrm{d}u_4} &=   \frac{\mathrm{d}u_7}{\mathrm{d}u_6}\frac{\mathrm{d}u_6}{\mathrm{d}u_4} +   \frac{\mathrm{d}u_7}{\mathrm{d}u_5}\frac{\mathrm{d}u_5}{\mathrm{d}u_4} \\  &= \frac{\mathrm{d}u_7}{\mathrm{d}u_6}\exp{u_4} +     \frac{\mathrm{d}u_7}{\mathrm{d}u_5}\cos{u_4}  \end{aligned}

Carrying on

\displaystyle   \begin{aligned}  \frac{\mathrm{d}u_7}{\mathrm{d}u_3} &=   \frac{\mathrm{d}u_7}{\mathrm{d}u_4}\frac{\mathrm{d}u_4}{\mathrm{d}u_3} \\  &= \frac{\mathrm{d}u_7}{\mathrm{d}u_4} \\  \frac{\mathrm{d}u_7}{\mathrm{d}u_2} &=   \frac{\mathrm{d}u_7}{\mathrm{d}u_4}\frac{\mathrm{d}u_4}{\mathrm{d}u_2} +   \frac{\mathrm{d}u_7}{\mathrm{d}u_3}\frac{\mathrm{d}u_3}{\mathrm{d}u_2} \\  &= \frac{\mathrm{d}u_7}{\mathrm{d}u_4} + 2u_2\frac{\mathrm{d}u_7}{\mathrm{d}u_4} \\  \frac{\mathrm{d}u_7}{\mathrm{d}u_1} &=   \frac{\mathrm{d}u_7}{\mathrm{d}u_2}\frac{\mathrm{d}u_2}{\mathrm{d}u_1} \\  &=\frac{\mathrm{d}u_7}{\mathrm{d}u_2}\exp{u_1}  \end{aligned}

Note that having worked from top to bottom (the forward sweep) in the graph to calculate the function itself, we have to work backwards from bottom to top (the backward sweep) to calculate the derivative.
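
In code, the two sweeps for this particular example might look as follows (a standalone Haskell sketch with hypothetical names; a real reverse mode AD tool builds and traverses the graph for us rather than requiring it to be written out by hand):

-- Forward sweep: compute the intermediate values u1, ..., u7.
-- Backward sweep: accumulate the adjoints du7/du_i from the top of the graph down.
reverseModeExample :: Double -> (Double, Double)
reverseModeExample x = (u7, du7du1)
  where
    u1 = x
    u2 = exp u1
    u3 = u2 * u2
    u4 = u2 + u3
    u5 = sin u4
    u6 = exp u4
    u7 = u5 + u6
    du7du6 = 1
    du7du5 = 1
    du7du4 = du7du6 * exp u4 + du7du5 * cos u4
    du7du3 = du7du4
    du7du2 = du7du4 + du7du3 * (2 * u2)
    du7du1 = du7du2 * exp u1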

So provided we can translate our program into a data flow graph, we can apply this procedure to calculate the derivative at a cost which is a small constant multiple of the cost of evaluating the original program.

The pictorial representation of an ANN is effectively the data flow graph of the cost function (without the final cost calculation itself), and calculating its derivative in this way is exactly backpropagation.

Forward Mode

An alternative method for automatic differentiation is called forward mode and has a simple implementation. Let us illustrate this using Haskell 98. The actual implementation is about 20 lines of code.

First some boilerplate declarations that need not concern us further.

> {-# LANGUAGE NoMonomorphismRestriction #-}
> 
> module AD (
>     Dual(..)
>   , f
>   , idD
>   , cost
>   , zs
>   ) where
> 
> default ()

Let us define dual numbers

> data Dual = Dual Double Double
>   deriving (Eq, Show)

We can think of these pairs as first order polynomials x + \epsilon x' in the indeterminate \epsilon, where \epsilon^2 = 0.

Thus, for example, we have

\displaystyle   \begin{aligned}  (x + \epsilon x') + (y + \epsilon y') &= ((x + y) + \epsilon (x' + y')) \\  (x + \epsilon x')(y + \epsilon y') &= xy + \epsilon (xy' + x'y) \\  \log (x + \epsilon x') &=  \log \big(x (1 + \epsilon \frac {x'}{x})\big) =  \log x + \epsilon\frac{x'}{x} \\  \sqrt{(x + \epsilon x')} &=  \sqrt{x(1 + \epsilon\frac{x'}{x})} =  \sqrt{x}(1 + \epsilon\frac{1}{2}\frac{x'}{x}) =  \sqrt{x} + \epsilon\frac{1}{2}\frac{x'}{\sqrt{x}} \\  \ldots &= \ldots  \end{aligned}

Notice that these equations implicitly encode the chain rule. For example, we know, using the chain rule, that

\displaystyle   \frac{\mathrm{d}}{\mathrm{d} x}\log(\sqrt x) =  \frac{1}{\sqrt x}\frac{1}{2}x^{-1/2} =  \frac{1}{2x}

And using the example equations above we have

\displaystyle   \begin{aligned}  \log(\sqrt {x + \epsilon x'}) &= \log (\sqrt{x} + \epsilon\frac{1}{2}\frac{x'}{\sqrt{x}}) \\                                &= \log (\sqrt{x}) + \epsilon\frac{\frac{1}{2}\frac{x'}{\sqrt{x}}}{\sqrt{x}} \\                                &= \log (\sqrt{x}) + \epsilon x'\frac{1}{2x}  \end{aligned}

Notice that dual numbers carry around the calculation and the derivative of the calculation. To actually evaluate \log(\sqrt{x}) at a particular value, say 2, we plug in 2 for x and 1 for x'

\displaystyle   \log \big(\sqrt{2 + \epsilon\, 1}\big) = \log(\sqrt{2}) + \epsilon\frac{1}{4}

Thus the derivative of \log(\sqrt{x}) at 2 is 1/4.

With a couple of helper functions we can implement this rule (\epsilon^2 = 0) by making Dual an instance of Num, Fractional and Floating.

> constD :: Double -> Dual
> constD x = Dual x 0
> 
> idD :: Double -> Dual
> idD x = Dual x 1.0

Let us implement the rules above by declaring Dual to be an instance of Num. A Haskell class such as Num simply states that it is possible to perform a (usually) related collection of operations on any type which is declared as an instance of that class. For example, Integer and Double are both types which are instances of Num and thus one can add, multiply, etc. values of these types (but note one cannot add an Integer to a Double without first converting a value of the former to a value of the latter).

As an aside, we will never need the functions signum and abs, so we declare them as undefined; in a robust implementation we would raise an error if they were ever accidentally used.

> instance Num Dual where
>   fromInteger n             = constD $ fromInteger n
>   (Dual x x') + (Dual y y') = Dual (x + y) (x' + y')
>   (Dual x x') * (Dual y y') = Dual (x * y) (x * y' + y * x')
>   negate (Dual x x')        = Dual (negate x) (negate x')
>   signum _                  = undefined
>   abs _                     = undefined

We need to be able to perform division on Dual so we further declare it to be an instance of Fractional.

> instance Fractional Dual where
>   fromRational p = constD $ fromRational p
>   recip (Dual x x') = Dual (1.0 / x) (-x' / (x * x))
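
The rule used here for recip follows from \epsilon^2 = 0 in the same way as the rules above:

\displaystyle   \frac{1}{x + \epsilon x'} = \frac{1}{x}\cdot\frac{1}{1 + \epsilon \frac{x'}{x}} = \frac{1}{x}\Big(1 - \epsilon \frac{x'}{x}\Big) = \frac{1}{x} - \epsilon \frac{x'}{x^2}

which is exactly what the second component of recip computes.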

We want to be able to perform the same operations on Dual as we can on Float and Double. Thus we make Dual an instance of Floating which means we can now operate on values of this type as though, in some sense, they are the same as values of Float or Double (in Haskell 98 only instances for Float and Double are defined for the class Floating).

> instance Floating Dual where
>   pi = constD pi
>   exp   (Dual x x') = Dual (exp x)   (x' * exp x)
>   log   (Dual x x') = Dual (log x)   (x' / x)
>   sqrt  (Dual x x') = Dual (sqrt x)  (x' / (2 * sqrt x))
>   sin   (Dual x x') = Dual (sin x)   (x' * cos x)
>   cos   (Dual x x') = Dual (cos x)   (x' * (- sin x))
>   sinh  (Dual x x') = Dual (sinh x)  (x' * cosh x)
>   cosh  (Dual x x') = Dual (cosh x)  (x' * sinh x)
>   asin  (Dual x x') = Dual (asin x)  (x' / sqrt (1 - x*x))
>   acos  (Dual x x') = Dual (acos x)  (x' / (-sqrt (1 - x*x)))
>   atan  (Dual x x') = Dual (atan x)  (x' / (1 + x*x))
>   asinh (Dual x x') = Dual (asinh x) (x' / sqrt (1 + x*x))
>   acosh (Dual x x') = Dual (acosh x) (x' / (sqrt (x*x - 1)))
>   atanh (Dual x x') = Dual (atanh x) (x' / (1 - x*x))
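
Each line of this instance is an application of the same general rule: for a primitive g with derivative g',

\displaystyle   g(x + \epsilon x') = g(x) + \epsilon\, x'\, g'(x)

which is just the chain rule expressed with dual numbers.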

That’s all we need to do. Let us implement the function we considered earlier.

> f =  sqrt . (* 3) . sin

The compiler can infer its type

ghci> :t f
  f :: Floating c => c -> c

We know the derivative of the function and can also implement it directly in Haskell.

> f' x = 3 * cos x / (2 * sqrt (3 * sin x))

Now we can evaluate the function along with its automatically calculated derivative and compare that with the derivative we calculated symbolically by hand.

ghci> f $ idD 2
  Dual 1.6516332160855343 (-0.3779412091869595)

ghci> f' 2
  -0.3779412091869595

To see that we are not doing symbolic differentiation (it’s easy to see we are not doing numerical differentiation) let us step through the actual evaluation.

\displaystyle   \begin{aligned}  f\,\$\,\mathrm{idD}\,2 &\longrightarrow \mathrm{sqrt} \cdot \lambda x \rightarrow 3x \cdot \sin \$\,\mathrm{idD}\,2 \\  &\longrightarrow \mathrm{sqrt} \cdot \lambda x \rightarrow 3x \cdot \sin \$\,\mathrm{Dual}\,2\,1 \\  &\longrightarrow \mathrm{sqrt} \cdot \lambda x \rightarrow 3x\,\$\, \mathrm{Dual}\,\sin(2)\,\cos(2) \\  &\longrightarrow \mathrm{sqrt} \,\$\, \mathrm{Dual}\,3\,0 \times \mathrm{Dual}\,\sin(2)\,\cos(2) \\  &\longrightarrow \mathrm{sqrt} \,\$\, \mathrm{Dual}\,(3\sin(2))\, (3\cos(2) + 0\sin(2)) \\  &\longrightarrow \mathrm{sqrt} \,\$\, \mathrm{Dual}\,(3\sin(2))\, (3\cos(2)) \\  &\longrightarrow \mathrm{Dual}\,(\mathrm{sqrt} (3\sin(2)))\, \big(3\cos(2) / (2\,\mathrm{sqrt}(3\sin(2)))\big) \\  &\longrightarrow \mathrm{Dual}\,1.6516332160855343\,(-0.3779412091869595)  \end{aligned}

A Simple Application

In order not to make this blog post too long, let us apply AD to finding the parameters of a simple regression. The application to ANNs is described in a previous blog post. Note that in a real application we would use the Haskell ad package and, since there would be many parameters, we would use reverse mode AD as it would be more efficient.

First our cost function

\displaystyle   L(\boldsymbol{x}, \boldsymbol{y}, m, c) = \frac{1}{2n}\sum_{i=1}^n (y_i - (m x_i + c))^2

> cost m c xs ys = (/ (2 * (fromIntegral $ length xs))) $
>                  sum $
>                  zipWith errSq xs ys
>   where
>     errSq x y = z * z
>       where
>         z = y - (m * x + c)

ghci> :t cost
  cost :: Fractional a => a -> a -> [a] -> [a] -> a

Some test data

> xs = [1,2,3,4,5,6,7,8,9,10]
> ys = [3,5,7,9,11,13,15,17,19,21]

and a learning rate

> gamma = 0.04

Now we create a function of the two parameters in our model by applying the cost function to the data. We need the (partial) derivatives of the cost with respect to both the slope and the offset.

> g m c = cost m c xs ys

Now we can use our Dual numbers to calculate the required partial derivatives and update our estimates of the parameters. We create a stream of estimates.

> zs = (0.1, 0.1) : map f zs
>   where
> 
>     deriv (Dual _ x') = x'
> 
>     f (c, m) = (c - gamma * cDeriv, m - gamma * mDeriv)
>       where
>         cDeriv = deriv $ g (constD m) $ idD c
>         mDeriv = deriv $ flip g (constD c) $ idD m

And we can calculate the cost of each estimate to check that our algorithm converges, and then take the estimated parameters when the change in cost per iteration has reached an acceptable level.

ghci> take 2 $ drop 1000 $ map (\(c, m) -> cost m c xs ys) zs
  [1.9088215184565296e-9,1.876891490619424e-9]

ghci> take 2 $ drop 1000 zs
  [(0.9998665320141327,2.0000191714150106),(0.999867653022265,2.0000190103927853)]

Concluding Thoughts

Efficiency

Perhaps AD is underused because of concerns about its efficiency?

It seems that the Financial Services industry is aware that AD is more efficient than current practice, although the technique is only slowly permeating. Order of magnitude improvements have been reported.

Perhaps AD is slowly permeating into Machine Learning as well, but benchmarks seem hard to find.

Automatic Differentiation Tools

If it were only possible to implement automatic differentiation in Haskell then its applicability would be somewhat limited. Fortunately this is not the case and it can be used in many languages.

In general, there are three different approaches:

  • Operator overloading: available for Haskell and C++. See the Haskell ad package and the C++ FADBAD approach using templates.

  • Source to source translators: available for Fortran, C and other languages, e.g. ADIFOR and TAPENADE; see the Wikipedia entry for a more comprehensive list.

  • New languages with built-in AD primitives. I have not listed any as it seems unlikely that anyone practicing Machine Learning would want to transfer their existing code to a research language. Maybe AD researchers could invest time in understanding what language feature improvements are needed to support AD natively in existing languages.
