bzlib-0.3: Compression and decompression in the bzip2 format
Portability: portable (H98 + FFI)

Pure stream-based interface to the lower-level bzlib wrapper.

Compression and decompression
compressDefault :: BlockSize -> ByteString -> ByteString
decompressDefault :: ByteString -> ByteString
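For example, a whole file can be compressed and decompressed with the default
settings. The following is only a minimal sketch: it assumes the exposed module
is named Codec.Compression.BZip and that ByteString above means the lazy
Data.ByteString.Lazy.ByteString, neither of which is stated on this page.

    import qualified Data.ByteString.Lazy as BL
    import Codec.Compression.BZip  -- assumed module name

    main :: IO ()
    main = do
      original <- BL.readFile "input.txt"
      -- Compress with the default (maximum) block size.
      let compressed = compressDefault DefaultBlockSize original
      BL.writeFile "input.txt.bz2" compressed
      -- Decompressing the result recovers the original data.
      print (decompressDefault compressed == original)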
data BlockSize
The block size affects both the compression ratio achieved and the amount
of memory needed for compression and decompression.
BlockSize 1 through BlockSize 9 specify the block size to be 100,000
bytes through 900,000 bytes respectively. The default is to use the maximum
block size.
Larger block sizes give rapidly diminishing marginal returns. Most of the
compression comes from the first two or three hundred k of block size, a
fact worth bearing in mind when using bzip2 on small machines. It is also
important to appreciate that the decompression memory requirement is set at
compression time by the choice of block size.
- In general, try to use the largest block size memory constraints allow,
since that maximises the compression achieved.
- Compression and decompression speed are virtually unaffected by block size.
Another significant point applies to files which fit in a single block -
that means most files you'd encounter using a large block size. The amount
of real memory touched is proportional to the size of the file, since the
file is smaller than a block. For example, compressing a file 20,000 bytes
long with the flag BlockSize 9 will cause the compressor to allocate
around 7600k of memory, but only touch 400k + 20000 * 8 = 560 kbytes of it.
Similarly, the decompressor will allocate 3700k but only touch 100k + 20000
* 4 = 180 kbytes.
DefaultBlockSize
  The default block size is also the maximum.
BlockSize Int
  A specific block size between 1 and 9.
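The memory figures quoted above follow from simple arithmetic. The sketch
below is plain Haskell, not part of the library API; it just reproduces the
numbers for a 20,000 byte file compressed with BlockSize 9.

    -- Approximate memory touched (in bytes) when the file is smaller than
    -- one block, per the figures quoted above.  Illustrative only.
    compressorTouched, decompressorTouched :: Int -> Int
    compressorTouched fileSize   = 400 * 1000 + fileSize * 8
    decompressorTouched fileSize = 100 * 1000 + fileSize * 4

    -- compressorTouched 20000   == 560000  (about 560 kbytes)
    -- decompressorTouched 20000 == 180000  (about 180 kbytes)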
The same but with the full set of parameters
compressFull :: BlockSize -> Verbosity -> WorkFactor -> ByteString -> ByteString
decompressFull :: Verbosity -> MemoryLevel -> ByteString -> ByteString
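As a sketch of how the full interface relates to the default one (under the
same assumptions as above about the module name and lazy ByteStrings), the
block size can be given explicitly while the monitoring output and work
factor are left at their defaults:

    import qualified Data.ByteString.Lazy as BL
    import Codec.Compression.BZip  -- assumed module name

    -- Compress with the maximum block size, no monitoring output and the
    -- default work factor.
    compressQuiet :: BL.ByteString -> BL.ByteString
    compressQuiet = compressFull DefaultBlockSize Silent DefaultWorkFactor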
data WorkFactor
The WorkFactor parameter controls how the compression phase behaves when
presented with worst case, highly repetitive, input data. If compression
runs into difficulties caused by repetitive data, the library switches from
the standard sorting algorithm to a fallback algorithm. The fallback is
slower than the standard algorithm by perhaps a factor of three, but always
behaves reasonably, no matter how bad the input.
Lower values of WorkFactor reduce the amount of effort the standard
algorithm will expend before resorting to the fallback. You should set this
parameter carefully; too low, and many inputs will be handled by the
fallback algorithm and so compress rather slowly; too high, and your
average-to-worst case compression times can become very large. The default
value of 30 gives reasonable behaviour over a wide range of circumstances.
- Note that the compressed output generated is the same regardless of
whether or not the fallback algorithm is used.
DefaultWorkFactor
  The default work factor is 30.
WorkFactor Int
  Allowable values range from 1 to 250 inclusive.
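If the input is expected to be highly repetitive, a different work factor can
be passed to compressFull. The value 100 below is an arbitrary illustrative
choice within the allowed range, not a recommendation from this documentation;
the module-name and ByteString assumptions are the same as in the sketches
above.

    import qualified Data.ByteString.Lazy as BL
    import Codec.Compression.BZip  -- assumed module name

    -- Spend more effort in the standard sorting algorithm before falling
    -- back to the slower algorithm.
    compressStubborn :: BL.ByteString -> BL.ByteString
    compressStubborn = compressFull DefaultBlockSize Silent (WorkFactor 100)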
data MemoryLevel
For files compressed with the default 900k block size, decompression
requires about 3700k of memory. To support decompression of any file in
less than 4Mb there is the option to decompress using approximately half
this amount of memory, about 2300k. Decompression speed is also halved,
so you should use this option only where necessary.
MinMemoryLevel
  Use minimum memory during decompression. This halves the memory needed
  but also halves the decompression speed.
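To use the reduced-memory mode described above, the MemoryLevel is passed to
decompressFull. A minimal sketch, under the same module-name and lazy
ByteString assumptions as the examples above:

    import qualified Data.ByteString.Lazy as BL
    import Codec.Compression.BZip  -- assumed module name

    -- Decompress using roughly half the usual memory, at roughly half the
    -- usual speed.
    decompressSmall :: BL.ByteString -> BL.ByteString
    decompressSmall = decompressFull Silent MinMemoryLevel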
data Verbosity
The Verbosity parameter is a number between 0 and 4. 0 is silent, and
greater numbers give increasingly verbose monitoring/debugging output.
Silent
  No output. This is the default.
Verbosity Int
  A specific level between 0 and 4.