!! UPDATE TODO !!
!! UPDATE BENCHMARKS !!

* custom serialization functions for lists of 'WordX's
  - benchmark the chunk-size speedup for more complicated computations of
    list elements => it is to be expected that we get no speedup anymore, or
    even a slowdown => adapt Blaze.ByteString.Builder.Word accordingly.

* fast serialization for 'Text' values (currently unpacking to 'String' is
  the fastest :-/)

* implementation
  - further encodings for 'Char'
  - think about end-of-buffer wrapping when copying bytestrings
  - toByteStringIO with accumulator capability => provide 'toByteStringIO_'
  - allow build/foldr deforestation to happen for input to 'fromWriteList'
    (or whatever stream fusion framework is in place for lists)
  - implement 'toByteString' with an amortized O(n) runtime using the
    exponential scaling trick (see the sketch in the appendix below). If the
    start size is chosen wisely, this may even be faster than 'S.pack', as
    one copy per element is cheaper than one list thunk per element. It is
    even likely that we can amortize three copies per element, which allows
    us to avoid spilling any buffer space by doing a last compaction copy.
  - we could provide builders that honor alignment restrictions, either as
    builder transformers or as specialized write-to-builder converters. The
    trick is for the driver to ensure that the beginning of the buffer is
    aligned to the largest required alignment (8 or 16 bytes?). This is
    probably the case by default. Then we can always align a pointer in the
    buffer by appropriately aligning the write pointer (see the alignment
    sketch in the appendix below).

* extend tests to the new functions

* benchmarks
  - understand why the declarative blaze-builder version is the fastest
    serializer for Word64, both little-endian and big-endian
  - check the cost of using 'mappend' on builders instead of writes.
  - show that using toByteStringIO has an advantage over toLazyByteString
  - check the performance of toByteStringIO
  - compare the speed of 'L.pack' to the speed of
    'toLazyByteString . fromWord8s' (see the criterion sketch in the
    appendix below)

* documentation
  - sort out the formulation: "serialization" vs. "encoding"

* check portability to Hugs

* performance
  - check if reordering 'pe' and 'pf' changes performance; it seems that
    'pe' is only a reader argument while 'pf' is a state argument.
  - perhaps we could improve performance by taking page size, page
    alignment, and memory access alignment into account.
  - detect machine endianness and use host-order writes for the supported
    endianness (see the endianness sketch in the appendix below).
  - introduce a type 'BoundedWrite' that encapsulates a 'Write' generator
    with a bound on the maximal number of bytes written by the write. This
    way we can make the size check data-independent by sacrificing just a
    little bit of buffer space at buffer ends (see the 'BoundedWrite' sketch
    in the appendix below).
  - investigate where we would profit from static bounds on the number of
    bytes written (e.g. to make the control flow more linear)

* testing
  - port tests from 'Data.Binary.Builder' to ensure that the word writes and
    builders are working correctly. I may have missed some pitfalls about
    word types in Haskell when porting the functions from
    'Data.Binary.Builder'.

* portability
  - port to Hugs
  - test lower versions of GHC

* deployment
  - add a source-repository section to the 'blaze-html' and 'blaze-builder'
    cabal files
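
* appendix: sketches referenced above (illustrative only; all names
  introduced here are hypothetical unless noted)

  - exponential scaling for 'toByteString': a minimal sketch of the doubling
    idea, using a plain list of bytes instead of the real 'Builder'
    machinery. 'packExponential' and 'fillChunk' are not part of the
    blaze-builder API; they only demonstrate the amortization argument.

      import qualified Data.ByteString          as S
      import qualified Data.ByteString.Internal as S (unsafeCreate)
      import           Data.Word                (Word8)
      import           Foreign.Ptr              (plusPtr)
      import           Foreign.Storable         (poke)

      -- Fill chunks whose sizes grow geometrically (size, 2*size, 4*size, ...)
      -- and concatenate them with one final compaction copy.  Every byte is
      -- copied at most twice, so the total work is amortized O(n).
      -- 'firstSize' must be positive.
      packExponential :: Int -> [Word8] -> S.ByteString
      packExponential firstSize = S.concat . go firstSize
        where
          go _    [] = []
          go size ws = chunk : go (2 * size) rest
            where (chunk, rest) = fillChunk size ws

      -- Write at most 'size' bytes into a freshly allocated chunk.
      fillChunk :: Int -> [Word8] -> (S.ByteString, [Word8])
      fillChunk size ws0 = (S.unsafeCreate n (\p -> pokeAll p taken), rest)
        where
          (taken, rest) = splitAt size ws0
          n             = length taken
          pokeAll _ []     = return ()
          pokeAll p (w:ws) = poke p w >> pokeAll (p `plusPtr` 1) ws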
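  - alignment: a sketch of how a write could align its write pointer inside
    the buffer, assuming the driver guarantees that the buffer itself starts
    at the largest required alignment. 'writeAligned' is a hypothetical
    helper; the round-up itself is already provided by 'Foreign.Ptr.alignPtr'.

      import Data.Word        (Word8)
      import Foreign.Ptr      (Ptr, alignPtr, minusPtr, plusPtr)
      import Foreign.Storable (Storable, alignment, pokeByteOff, sizeOf)

      -- Poke a storable value at the next properly aligned position after
      -- 'p', zero-filling the padding bytes, and return the pointer just
      -- past the value.  A driver would have to reserve
      -- sizeOf x + alignment x - 1 bytes for such a write.
      writeAligned :: Storable a => a -> Ptr Word8 -> IO (Ptr Word8)
      writeAligned x p = do
          let p'  = alignPtr p (alignment x)
              pad = p' `minusPtr` p
          -- zero the padding so the output stays deterministic
          mapM_ (\i -> pokeByteOff p i (0 :: Word8)) [0 .. pad - 1]
          pokeByteOff p' 0 x
          return (p' `plusPtr` sizeOf x)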
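  - 'L.pack' vs. 'toLazyByteString . fromWord8s': a hedged criterion sketch
    for the comparison; it assumes the current module layout
    ('Blaze.ByteString.Builder' and 'Blaze.ByteString.Builder.Word') and
    forces the lazy result via 'L.length'.

      import           Blaze.ByteString.Builder      (toLazyByteString)
      import           Blaze.ByteString.Builder.Word (fromWord8s)
      import           Criterion.Main                (bench, bgroup, defaultMain, whnf)
      import qualified Data.ByteString.Lazy          as L
      import           Data.Word                     (Word8)

      main :: IO ()
      main = defaultMain
          [ bgroup "packing 100000 Word8s"
              [ bench "L.pack" $
                  whnf (L.length . L.pack) input
              , bench "toLazyByteString . fromWord8s" $
                  whnf (L.length . toLazyByteString . fromWord8s) input
              ]
          ]
        where
          -- the input list is shared across both benchmarks
          input :: [Word8]
          input = take 100000 (cycle [0 .. 255])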
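  - endianness detection: one possible runtime check (the decision could
    equally be made at compile time via CPP); 'isLittleEndian' is a
    hypothetical helper.

      import Data.Word             (Word8, Word32)
      import Foreign.Marshal.Alloc (alloca)
      import Foreign.Storable      (peekByteOff, poke)
      import System.IO.Unsafe      (unsafePerformIO)

      -- Store a known Word32 and inspect its lowest-addressed byte.  On a
      -- little-endian host that byte is the least significant one (0x04).
      isLittleEndian :: Bool
      isLittleEndian = unsafePerformIO $ alloca $ \p -> do
          poke p (0x01020304 :: Word32)
          b <- peekByteOff p 0 :: IO Word8
          return (b == 0x04)
      {-# NOINLINE isLittleEndian #-}

    With such a flag, the host-order writes could dispatch to the
    little-endian or big-endian code path for the supported endianness.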
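  - 'BoundedWrite': a sketch of the proposed type, using a simplified
    stand-in for the current 'Write' representation (byte count plus a poke
    action); the real internals may differ.

      import Data.Word        (Word8)
      import Foreign.Ptr      (Ptr, plusPtr)
      import Foreign.Storable (poke)

      -- Simplified stand-in for blaze-builder's 'Write': the number of
      -- bytes it writes and an action that writes them at the given pointer.
      data Write = Write !Int (Ptr Word8 -> IO ())

      -- A 'Write' together with a static upper bound on the bytes it may
      -- emit.  The driver only compares the bound against the remaining
      -- buffer space, so the size check no longer depends on the data.
      data BoundedWrite = BoundedWrite
          { bwBound :: !Int     -- maximal number of bytes written
          , bwWrite :: Write    -- the (possibly data-dependent) write itself
          }

      -- Example: a bounded write for a single byte.
      boundedWriteWord8 :: Word8 -> BoundedWrite
      boundedWriteWord8 w = BoundedWrite 1 (Write 1 (\p -> poke p w))

      -- Bounds (and actual lengths) simply add under composition.
      appendBounded :: BoundedWrite -> BoundedWrite -> BoundedWrite
      appendBounded (BoundedWrite b1 (Write n1 io1))
                    (BoundedWrite b2 (Write n2 io2)) =
          BoundedWrite (b1 + b2)
                       (Write (n1 + n2) (\p -> io1 p >> io2 (p `plusPtr` n1)))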