Floating Point: A Faustian Bargain?

Every so often, someone bitten by floating point arithmetic behaving in an unexpected way is tempted to suggest that the calculation should be done exactly, with rounding only at the end. With floating point, rounding is done at every step.
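As a minimal illustration of rounding at every step (my example, not from the original post), compare Double with exact Rational arithmetic:

```haskell
import Data.Ratio ((%))

-- With Double, the literals 0.1, 0.2 and 0.3 are each rounded to the
-- nearest representable binary fraction, and the rounding errors do
-- not cancel, so the comparison fails.
inexact :: Bool
inexact = (0.1 + 0.2 :: Double) == 0.3  -- False

-- With Rational, the arithmetic is exact and the comparison succeeds.
exact :: Bool
exact = (1 % 10 + 2 % 10 :: Rational) == 3 % 10  -- True
```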

Here’s an example of why floating point might really be the best option for numerical calculations.

Suppose you wish to find the roots of a quintic equation.

> import Data.List
> import Data.Ratio
> import Numeric.AD
> p :: Num a => a -> a
> p x = x^5 - 2*x^4 - 3*x^3 + 3*x^2 - 2*x - 1

We can do so using Newton-Raphson, calculating the derivative by automatic differentiation (the grad function from the ad package), even though for polynomials this is trivial.

> nr :: Fractional a => [a]
> nr = unfoldr g 0
>   where
>     g z = let u = z - (p z) / (h z) in Just (u, u)
>     h z = let [y] = grad (\[x] -> p x) [z] in y
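If you would rather avoid the dependency on the ad package, the derivative of a quintic is easy enough to write by hand. A minimal equivalent sketch (my code, with a hand-written p'):

```haskell
import Data.List (unfoldr)

p, p' :: Num a => a -> a
p  x = x^5 - 2*x^4 - 3*x^3 + 3*x^2 - 2*x - 1
p' x = 5*x^4 - 8*x^3 - 9*x^2 + 6*x - 2  -- derivative of p, by hand

-- Newton-Raphson iterates starting from 0, as in nr above.
nr' :: Fractional a => [a]
nr' = unfoldr step 0
  where
    step z = let u = z - p z / p' z in Just (u, u)
```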

After 7 iterations with exact rational arithmetic, we see that the denominator is quite large (33308 digits) and the calculation takes many seconds.

ghci> length $ show $ denominator (nr!!7)
33308
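The growth is easy to reproduce: each Newton step forms p z / p' z, which raises the previous denominator to roughly the fifth power (the degree of p), so the digit count multiplies by about five per step. A small sketch (my code, using a hand-written derivative to stay self-contained) counting denominator digits:

```haskell
import Data.List (unfoldr)
import Data.Ratio (denominator)

p, p' :: Num a => a -> a
p  x = x^5 - 2*x^4 - 3*x^3 + 3*x^2 - 2*x - 1
p' x = 5*x^4 - 8*x^3 - 9*x^2 + 6*x - 2

-- Exact Newton iterates, as rationals.
iters :: [Rational]
iters = unfoldr (\z -> let u = z - p z / p' z in Just (u, u)) 0

-- Digit counts of the denominators of the first few iterates.
digitCounts :: [Int]
digitCounts = map (length . show . denominator) (take 6 iters)
```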

On the other hand, if we use floating point we get an answer accurate to about 1 part in $2^{53}$ after 7 iterations, very quickly.

ghci> mapM_ putStrLn $ map show $ take 7 nr
-0.5
-0.3368421052631579
-0.31572844839628944
-0.31530116270327685
-0.31530098645936266
-0.3153009864593327
-0.3153009864593327

The example is taken from here, which refers the reader to Nick Higham’s book, Accuracy and Stability of Numerical Algorithms.

Of course we should check that we have found a correct answer.

ghci> p $ nr!!6
0.0

3 thoughts on “Floating Point: A Faustian Bargain?”

1. AG

There’s a third option: use continued fractions for exact arithmetic via lazy lists.
There are potential efficiency concerns here too, but the work is bounded by the accuracy actually required of the result.
See:
http://dl.acm.org/citation.cfm?id=62681
Exact real computer arithmetic with continued fractions

2. Dominic Steinitz

I don’t see how that would work here. I start with a (non-continued) fraction and get a new (non-continued) fraction. These fractions contain more and more digits.

• AG

The idea (about which I am not certain) is that instead of computing it exactly and then rounding to the desired accuracy, one uses exact arithmetic via lazy lists to perform (describe really) the computation (i.e. create a thunk) but the list is only evaluated to the desired degree of accuracy at the ending. There are bounds on the accuracy of continued fraction representations of decimals which should make it feasible to know how many digits is sufficient.