fast-mult-0.1.0.0: Numeric type with asymptotically faster multiplications.

Safe Haskell: None
Language: Haskell2010

Data.FastMult


Documentation

data FastMult n Source #

FastMult is a numeric type that can be used anywhere a 'Num a' is required. It represents a standard integer using three components which, multiplied together, represent the stored number:

  1. The number's sign
  2. An unsigned machine word.
  3. A (possibly empty) list of BigNats, the internal type used for Integers too large to fit in a machine word.
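As a purely illustrative sketch of that decomposition (the real constructor and field names are not exported; sign here is simply -1 or 1):

 -- Illustrative only; the actual internal representation is hidden.
 value :: Integer -> Word -> [Integer] -> Integer
 value sign word bigNats = sign * toInteger word * product bigNats
 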

Each BigNat in the list has a scale. Its scale is the ceiling of the base-2 logarithm of the number of machine words needed to store it, minus 1.

Note that we never store BigNats only one machine word long in this list; instead we convert them to an ordinary unsigned machine word and multiply them into component 2 above. Only if that product overflows does the result go into the BigNat list.

Here are a few examples of the mapping from the number of machine words to the scale:

 2      -> 0
 3      -> 1
 4      -> 1
 5      -> 2
 6..8   -> 2
 9..16  -> 3
 17..32 -> 4

etc.
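In other words, for a BigNat of w machine words the scale is ceiling(log2 w) - 1. A minimal sketch of that calculation (scaleOfWords is a hypothetical name, not part of this package's API):

 import Data.Bits (countLeadingZeros, finiteBitSize)

 -- Hypothetical helper: the scale of a BigNat occupying w machine
 -- words (w >= 2), i.e. ceiling (logBase 2 w) - 1.
 scaleOfWords :: Int -> Int
 scaleOfWords w = finiteBitSize w - countLeadingZeros (w - 1) - 1
 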

Note that this "scale" has the very nice property that multiplying two BigNats of scale x always results in a BigNat of scale x+1. (A BigNat of scale x occupies between 2^x + 1 and 2^(x+1) machine words, so the product of two such BigNats occupies between 2^(x+1) + 1 and 2^(x+2) words, which is exactly scale x+1.)

The list of BigNats only ever contains at most one BigNat of each "scale". As the size of a BigNat increases exponentially with its scale, this list should always be relatively small. The list is also kept sorted, smallest to largest.

When we multiply two FastMults, we merge their BigNat lists. This is basically a simple merge of sorted lists, but with one significant change. Recall that the BigNat list cannot contain two BigNats of the same scale. So if we find that a BigNat in the left-hand list of the multiplication has the same scale as a BigNat in the right-hand list, we multiply these two BigNats to create a BigNat one "scale" larger. We then continue the merge, including this new BigNat.

As a result, we only ever multiply numbers of the same "scale", that is, no more than double the length of one another.
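A rough sketch of that merge, using plain Integers tagged with their scale rather than the package's internal BigNat representation (mergeScaled and insertScaled are hypothetical names, not part of the exported API):

 type Scale = Int

 -- Merge two lists sorted by scale, each containing at most one entry
 -- per scale. When the heads collide on the same scale, their product
 -- is one scale larger and is carried into the rest of the merge.
 mergeScaled :: [(Scale, Integer)] -> [(Scale, Integer)] -> [(Scale, Integer)]
 mergeScaled xs [] = xs
 mergeScaled [] ys = ys
 mergeScaled xs@((sx, x) : xs') ys@((sy, y) : ys')
   | sx < sy   = (sx, x) : mergeScaled xs' ys
   | sy < sx   = (sy, y) : mergeScaled xs ys'
   | otherwise = mergeScaled (insertScaled (sx + 1) (x * y) xs') ys'

 -- Insert a new entry, again combining on scale collisions, much like
 -- propagating a carry in binary addition.
 insertScaled :: Scale -> Integer -> [(Scale, Integer)] -> [(Scale, Integer)]
 insertScaled s v [] = [(s, v)]
 insertScaled s v xs@((sx, x) : xs')
   | s < sx    = (s, v) : xs
   | s > sx    = (sx, x) : insertScaled s v xs'
   | otherwise = insertScaled (s + 1) (v * x) xs'
 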

Why do we do this? Well, an ordinary product, say product [1..1000000], towards the end of the list involves multiplying a very large number by a single machine word. Each such multiplication takes O(n) time in the size of the large number, so the whole product takes O(n^2) time.

If we instead did the following:

   product x y                          -- the product of [x..y]
     | x >  y    = 1
     | x == y    = x
     | otherwise = product x mid * product (mid + 1) y
     where
       mid = (x + y) `div` 2
 

We find that this runs a lot faster. The reason is that this approach minimises products involving very large numbers, and, importantly, multiplying two n-word numbers doesn't take O(n^2) time but more like O(n*log(n)). For this reason it's better to do a few multiplications of large numbers by large numbers instead of lots of multiplications of large numbers by small numbers.

But to do this I've had to redefine product. What if you don't want to change the algorithm, but just want to use one that's already been written, perhaps inefficiently? Well, this is where FastMult is useful. Instead of making the algorithm smarter, FastMult just makes the numbers smarter. The numbers themselves reorder the multiplications so you don't have to.

As well as speeding up existing algorithms, FastMult dynamically adapts to whatever numbers it's actually multiplying, and always maintains the invariant that multiplications are never performed between numbers more than twice the size of each other.

At this point I haven't mentioned the meaning of the FastMult type parameter n. FastMult can also add parallelism to your multiplication algorithms. However, sparking new GHC threads has a cost, so we only want to do it for large multiplications. Multiplications of scale > n will spark a new thread, so n = 0 will spark new threads for any multiplication involving at least 3 machine words. This is probably too small; you can experiment with different values. Note that n represents the scale, not the size, so for example setting n = 4 will only spark threads for multiplications involving at least 33 machine words.
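For example, if you only wanted sparks for multiplications involving operands of at least 33 machine words, you could define an alias along these lines (FastMultPar33 is just an illustrative name):

 -- Spark a thread only for multiplications of scale > 4,
 -- i.e. operands of at least 33 machine words.
 type FastMultPar33 = FastMult 4
 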

However, how well the parallelism works (or whether it works at all) hasn't been tested yet.

We include an ordinary machine word in the type as an optimisation for single machine word numbers. This is because multiplying BigNats involves calling GMP using a C call, which is a large overhead for small multiplications.
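For illustration, a single-word multiplication can stay entirely in Haskell and detect overflow using GHC's timesWord2# primitive, which returns the high and low words of the product. This is only a sketch of how such a check might look; mulWord is a hypothetical helper, not something this package exports:

 {-# LANGUAGE MagicHash, UnboxedTuples #-}

 import GHC.Exts (Word (W#), timesWord2#)

 -- Multiply two machine words: Left result if it fits in one word,
 -- otherwise Right (high, low).
 mulWord :: Word -> Word -> Either Word (Word, Word)
 mulWord (W# x) (W# y) =
   case timesWord2# x y of
     (# hi, lo #)
       | W# hi == 0 -> Left (W# lo)
       | otherwise  -> Right (W# hi, W# lo)
 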

To use FastMult, all you have to do is import its type, not its implementation. If you're not interested in parallelism, just import FastMultSeq.
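Concretely, for the sequential variant an import along these lines should be all you need:

 import Data.FastMult (FastMultSeq)
 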

For example, just compare in GHCi:

 product [1..100000]
 

and:

 product [1 :: FastMultSeq .. 100000]
 

and you should find the latter completes much faster.

Converting to and from Integers can be done with the toInteger and fromInteger class methods from Integral and Num respectively.
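For example, something like this should work:

 n :: FastMultSeq
 n = fromInteger 123456789

 back :: Integer
 back = toInteger n
 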

Instances

KnownNat n => Enum (FastMult n) Source # 
KnownNat n => Eq (FastMult n) Source # 

Methods

(==) :: FastMult n -> FastMult n -> Bool #

(/=) :: FastMult n -> FastMult n -> Bool #

KnownNat n => Integral (FastMult n) Source # 
KnownNat n => Num (FastMult n) Source # 
KnownNat n => Ord (FastMult n) Source # 

Methods

compare :: FastMult n -> FastMult n -> Ordering #

(<) :: FastMult n -> FastMult n -> Bool #

(<=) :: FastMult n -> FastMult n -> Bool #

(>) :: FastMult n -> FastMult n -> Bool #

(>=) :: FastMult n -> FastMult n -> Bool #

max :: FastMult n -> FastMult n -> FastMult n #

min :: FastMult n -> FastMult n -> FastMult n #

KnownNat n => Read (FastMult n) Source # 
KnownNat n => Real (FastMult n) Source # 

Methods

toRational :: FastMult n -> Rational #

KnownNat n => Show (FastMult n) Source # 

Methods

showsPrec :: Int -> FastMult n -> ShowS #

show :: FastMult n -> String #

showList :: [FastMult n] -> ShowS #

type FastMultSeq = FastMult 4294967295 Source #

A type synonym for a fully sequential FastMult. The parameter is supposed to be WORD_MAX, but I couldn't find that defined anywhere. In any case, what matters is that anything of scale smaller than 0xFFFFFFFF is multiplied sequentially, which in practice is everything.

simplify :: KnownNat n => FastMult n -> FastMult n Source #

simplify returns a FastMult equal to its argument, but "simplified".

To explain this, consider the following for x :: FastMult:

 f x = (show x, x + 1)
 

It will multiply out x twice: once for the addition, and once for show. Note that the list of BigNats in x is generally short, as only one BigNat is stored for each scale and the sizes of the scales increase exponentially, but some multiplications may be required nevertheless. A better way to write this is as follows:

 f x = let y = simplify x in (show y, y + 1)
 

This will ensure that x is multiplied out only once.

Unfortunately, using simplify stops your algorithms from being generic, so it might be better to define simplify as id together with a rewrite rule. I'll think about this.