| | |
|---|---|
| Portability | portable (H98 + FFI) |
| Stability | provisional |
| Maintainer | duncan.coutts@worc.ox.ac.uk |
Codec.Compression.BZip.Internal
Description
Pure stream-based interface to the lower-level bzlib wrapper
- compressDefault :: BlockSize -> ByteString -> ByteString
- decompressDefault :: ByteString -> ByteString
- data BlockSize
- compressFull :: BlockSize -> Verbosity -> WorkFactor -> ByteString -> ByteString
- decompressFull :: Verbosity -> MemoryLevel -> ByteString -> ByteString
- data WorkFactor
- data MemoryLevel
- data Verbosity
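To give a feel for the interface summarised above, here is a minimal round-trip sketch. It assumes the ByteString in these signatures is the lazy Data.ByteString.Lazy.ByteString used by the bzlib package; the helper name is illustrative only.

```haskell
import qualified Data.ByteString.Lazy as L

import Codec.Compression.BZip.Internal

-- Compress with the default (maximum) block size, then decompress again;
-- the result should be the original input.
roundTrip :: L.ByteString -> L.ByteString
roundTrip = decompressDefault . compressDefault DefaultBlockSize
```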
Compression and decompression
compressDefault :: BlockSize -> ByteString -> ByteString Source
decompressDefault :: ByteString -> ByteString Source
data BlockSize Source
The block size affects both the compression ratio achieved and the amount of memory needed for compression and decompression.
BlockSize 1 through BlockSize 9 specify the block size to be 100,000 bytes through 900,000 bytes respectively.
Larger block sizes give rapidly diminishing marginal returns. Most of the compression comes from the first two or three hundred k of block size, a fact worth bearing in mind when using bzip2 on small machines. It is also important to appreciate that the decompression memory requirement is set at compression time by the choice of block size.
- In general, try to use the largest block size that memory constraints allow, since that maximises the compression achieved.
- Compression and decompression speed are virtually unaffected by block size.
Another significant point applies to files which fit in a single block - that means most files you'd encounter using a large block size. The amount of real memory touched is proportional to the size of the file, since the file is smaller than a block. For example, compressing a file 20,000 bytes long with the flag BlockSize 9 will cause the compressor to allocate around 7600k of memory but only touch 400k + 20000 * 8 = 560 kbytes of it. Similarly, the decompressor will allocate 3700k but only touch 100k + 20000 * 4 = 180 kbytes.
Constructors
| DefaultBlockSize | The default block size is also the maximum. | 
| BlockSize Int | A specific block size between 1 and 9. | 
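As an illustration of the trade-off described above, a hedged sketch of picking a block size (again assuming lazy ByteStrings; the helper names are not part of the library):

```haskell
import qualified Data.ByteString.Lazy as L

import Codec.Compression.BZip.Internal

-- 300k blocks: lower compression ratio, but a smaller memory footprint for
-- the compressor and for anyone who later decompresses the output.
compressSmallBlocks :: L.ByteString -> L.ByteString
compressSmallBlocks = compressDefault (BlockSize 3)

-- The default is the maximum 900k block size: best compression, most memory.
compressMaxBlocks :: L.ByteString -> L.ByteString
compressMaxBlocks = compressDefault DefaultBlockSize
```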
The same but with the full set of parameters
compressFull :: BlockSize -> Verbosity -> WorkFactor -> ByteString -> ByteString Source
decompressFull :: Verbosity -> MemoryLevel -> ByteString -> ByteString Source
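A small sketch of the full-parameter variants, using only the default constructors documented in this module; Verbosity is taken as an argument because its constructors are not listed in this section, and the helper names are illustrative.

```haskell
import qualified Data.ByteString.Lazy as L

import Codec.Compression.BZip.Internal

-- Equivalent in spirit to compressDefault/decompressDefault, spelled out
-- through the full-parameter interface.
compressWith :: Verbosity -> L.ByteString -> L.ByteString
compressWith verbosity = compressFull DefaultBlockSize verbosity DefaultWorkFactor

decompressWith :: Verbosity -> L.ByteString -> L.ByteString
decompressWith verbosity = decompressFull verbosity DefaultMemoryLevel
```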
data WorkFactor Source
The WorkFactor parameter controls how the compression phase behaves when presented with worst case, highly repetitive, input data. If compression runs into difficulties caused by repetitive data, the library switches from the standard sorting algorithm to a fallback algorithm. The fallback is slower than the standard algorithm by perhaps a factor of three, but always behaves reasonably, no matter how bad the input.
Lower values of WorkFactor reduce the amount of effort the standard algorithm will expend before resorting to the fallback. You should set this parameter carefully; too low, and many inputs will be handled by the fallback algorithm and so compress rather slowly; too high, and your average-to-worst case compression times can become very large. The default value of 30 gives reasonable behaviour over a wide range of circumstances.
- Note that the compressed output generated is the same regardless of whether or not the fallback algorithm is used.
Constructors
| DefaultWorkFactor | The default work factor is 30. | 
| WorkFactor Int | Allowable values range from 1 to 250 inclusive. | 
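For illustration only (not a recommendation), a hypothetical helper that raises the work factor above the default of 30, giving the standard sorting algorithm more effort before the fallback is used; as noted above, the compressed output is the same either way.

```haskell
import qualified Data.ByteString.Lazy as L

import Codec.Compression.BZip.Internal

-- Hypothetical tuning for inputs known to be highly repetitive: spend more
-- effort in the standard algorithm before falling back. Only compression
-- time is affected, never the compressed output. Allowable values are 1-250.
compressHighEffort :: Verbosity -> L.ByteString -> L.ByteString
compressHighEffort verbosity =
  compressFull DefaultBlockSize verbosity (WorkFactor 100)
```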
data MemoryLevel Source
For files compressed with the default 900k block size, decompression will require about 3700k of memory. To support decompression of any file in less than 4Mb, there is the option to decompress using approximately half this amount of memory, about 2300k. Decompression speed is also halved, so you should use this option only where necessary.
Constructors
| DefaultMemoryLevel | The default. | 
| MinMemoryLevel | Use minimum memory during decompression. This halves the memory needed but also halves the decompression speed. | 
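A minimal sketch of the low-memory decompression option, again passing Verbosity in rather than assuming its constructors (the helper name is illustrative):

```haskell
import qualified Data.ByteString.Lazy as L

import Codec.Compression.BZip.Internal

-- Decompress within roughly 2300k instead of about 3700k for files compressed
-- with the default 900k block size, at about half the decompression speed.
decompressLowMemory :: Verbosity -> L.ByteString -> L.ByteString
decompressLowMemory verbosity = decompressFull verbosity MinMemoryLevel
```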