FastMultSeq is a type synonym for a fully sequential FastMult. The parameter is supposed to be WORD_MAX, but I couldn't find that defined; anyway, what's important is that anything of scale smaller than 0xFFFFFFFF will be sequential, which is everything.

FastMult is a numeric type that can be used in any place a 'Num a' is required. It represents a standard integer using three components, which multiplied together give the stored number:

1. The number's sign.
2. An unsigned machine word.
3. A (possibly empty) list of BigNats (the internal type for Integers which are too large to fit in a machine word), each paired with its scale (BigNatWithScale).

Each BigNat in the list has a scale: the log base 2 of the number of machine words needed to store it, rounded up, minus 1. Note that we never store BigNats that fit in a single machine word in this list; instead we convert them to an ordinary unsigned machine word and multiply them into component 2 above. Only if that result overflows do we place it in this list.

Here are a few examples of "machine words -> scale":

  2      -> 0
  3      -> 1
  4      -> 1
  5      -> 2
  6..8   -> 2
  9..16  -> 3
  17..32 -> 4
  etc.

Note that this "scale" has the very nice property that multiplying two BigNats of scale x always results in a BigNat of scale x+1. The list of BigNats only ever contains one BigNat of each scale. As the size of a BigNat increases exponentially with its scale, this list should always be relatively small. The list is also always sorted by scale, smallest to largest.

When we multiply two FastMults, we merge their BigNat lists. This is basically a simple merge of sorted lists, but with one significant change. Recall that the list cannot contain two BigNats of the same scale. So if we find that a BigNat in the left-hand list of the multiplication has the same scale as a BigNat in the right-hand list, we multiply these two BigNats to create a BigNat one scale larger. We then continue the merge, including this new BigNat.

As a result, we only ever multiply numbers of the same scale, that is, numbers no more than double the length of one another.

Why do we do this? Well, an ordinary product, say product [1..1000000], towards the end of the list involves multiplications of a very large number by a machine word. These take O(n) time, so the whole product takes O(n^2) time. If we instead did the following:

  product' x y
    | x >  y    = 1                                 -- suitable base cases
    | x == y    = x
    | otherwise = product' x mid * product' (mid + 1) y
    where mid = (x + y) `div` 2

we would find that this runs a lot faster. The reason is that with this approach we're minimising products involving very large numbers, and, importantly, multiplying two n-word numbers doesn't take O(n^2) but more like O(n*log(n)) time. For this reason it's better to do a few multiplications of large numbers by large numbers, instead of lots of multiplications of large numbers by small numbers.

But to do this I've had to redefine product. What if you don't want to change the algorithm, but just want to use one that's already been written, perhaps inefficiently? This is where FastMult is useful. Instead of making the algorithm smarter, FastMult just makes the numbers smarter. The numbers themselves reorder the multiplications so you don't have to.

As well as having the advantage of speeding up existing algorithms, FastMult dynamically behaves differently based on what numbers it's actually multiplying, and always maintains the invariant that multiplications will not be performed between numbers greater than twice the size of each other.
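To make the scale and merging behaviour above concrete, here is a small, self-contained sketch. It is only an illustration, not the library's code: the names scaleOfWords, Scaled, mergeByScale and insertCarry are invented for this example, and plain Integers stand in for BigNats.

  -- Scale of a number occupying w machine words (w >= 2), matching the
  -- table above: 2 -> 0, 3..4 -> 1, 5..8 -> 2, 9..16 -> 3, and so on.
  scaleOfWords :: Int -> Int
  scaleOfWords w = length (takeWhile (< w) (iterate (* 2) 2))

  -- A big number tagged with its scale (Integer standing in for BigNat).
  data Scaled = Scaled { scale :: Int, value :: Integer }

  -- Merge two lists that are sorted by ascending scale and contain at most
  -- one entry per scale, preserving that invariant. Multiplications only
  -- ever happen between entries of equal scale, and such a product is
  -- given scale + 1, per the property noted above.
  mergeByScale :: [Scaled] -> [Scaled] -> [Scaled]
  mergeByScale xs [] = xs
  mergeByScale [] ys = ys
  mergeByScale (x:xs) (y:ys)
    | scale x < scale y = x : mergeByScale xs (y:ys)
    | scale x > scale y = y : mergeByScale (x:xs) ys
    | otherwise         = insertCarry combined (mergeByScale xs ys)
    where
      combined = Scaled (scale x + 1) (value x * value y)

  -- Insert a freshly combined entry into a merged list, combining again on
  -- a scale collision, much like carrying in binary addition.
  insertCarry :: Scaled -> [Scaled] -> [Scaled]
  insertCarry c [] = [c]
  insertCarry c (z:zs)
    | scale c < scale z = c : z : zs
    | scale c > scale z = z : insertCarry c zs
    | otherwise         = insertCarry (Scaled (scale c + 1) (value c * value z)) zs

FastMult's actual merge additionally has to deal with the sign and machine-word components described above; this sketch only shows the list handling.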
At this point I haven't mentioned the meaning of the FastMult type parameter n. FastMult can also add parallelism to your multiplication algorithms. However, sparking new GHC threads has a cost, so we only want to do it for large multiplications. Multiplications of scale > n will spark a new thread, so n = 0 will spark new threads for any multiplication involving at least 3 machine words. This is probably too small; you can experiment with different numbers. Note that n represents the scale, not the size, so for example setting n = 4 will only spark threads for multiplications involving at least 33 machine words. How well parallelism works (or whether it works at all) hasn't been tested yet, however.

We include an ordinary machine word in the type as an optimisation for single-machine-word numbers. This is because multiplying BigNats involves calling GMP via a C call, which is a large overhead for small multiplications.

To use FastMult, all you have to do is import its type, not its implementation. If you're not interested in parallelism, just import FastMultSeq. For example, just compare in GHCi:

  product [1..100000]

and:

  product [1 :: FastMultSeq .. 100000]

and you should find the latter completes much faster.

Converting to and from Integers can be done with the fromInteger and toInteger class methods from Num and Integral respectively.

simplify returns a FastMult, the same as its argument but "simplified". To explain this, consider the following, where x is a FastMult:

  f x = (show x, x + 1)

This will multiply out x twice: once for the addition, and once for show. Note that the list of BigNats in x is generally short, as only one BigNat is stored for each scale and the sizes of scales increase exponentially, but there may be some multiplications required nevertheless. A better way to write this is as follows:

  f x = let y = simplify x in (show y, y + 1)

This ensures that x is multiplied out only once.

Unfortunately, using simplify stops your algorithms from being generic, so it might be better to define simplify as id, with a rewrite rule. I'll think about this.
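As a rough end-to-end sketch of the usage described above (not an excerpt from the package), here is a small program. It assumes only what this page states: that Data.FastMult exports FastMultSeq and simplify, and that FastMultSeq has the usual Num, Enum, Show and Integral instances; adjust the import if the actual interface differs.

  module Main where

  import Data.FastMult (FastMultSeq, simplify)  -- assumed export list

  main :: IO ()
  main = do
    -- Prelude's product multiplies the elements one at a time; with
    -- FastMultSeq the numbers themselves reorder those multiplications so
    -- that operands stay within a factor of two of each other in size.
    let n  = product [1 :: FastMultSeq .. 100000]
        -- simplify forces the pending multiplications once, so the two
        -- uses of n' below don't each multiply everything out again.
        n' = simplify n
    -- Show instance: print the number of decimal digits of 100000!.
    print (length (show n'))
    -- Integral instance: convert back to an ordinary Integer.
    print (toInteger n' `mod` 10)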