bzlib-0.5.0.1: Compression and decompression in the bzip2 format

Portability: portable (H98 + FFI)
Stability: provisional
Maintainer: duncan@haskell.org

Codec.Compression.BZip.Internal


Description

Pure stream-based interface to the lower-level bzlib wrapper


Compression

data CompressParams

The full set of parameters for compression. The defaults are defaultCompressParams.

The compressBufferSize is the size of the first output buffer, containing the compressed data. If you know an approximate upper bound on the size of the compressed data then setting this parameter can save memory. The default compression output buffer size is 16k. If your estimate is wrong it does not matter too much: the default buffer size will be used for the remaining chunks.

defaultCompressParams :: CompressParams

The default set of parameters for compression. This is typically used with the compressWith function with specific parameters overridden.
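
For instance, a minimal sketch of overriding one field of the defaults, assuming that compressWith and the CompressParams record (with a compressBufferSize field, as described above) are exported from Codec.Compression.BZip:

  import qualified Data.ByteString.Lazy as BL
  import Codec.Compression.BZip
           (compressWith, defaultCompressParams, CompressParams(..))

  -- Shrink the first output buffer when the compressed result is
  -- known to be small; later chunks fall back to the default size.
  compressSmall :: BL.ByteString -> BL.ByteString
  compressSmall =
    compressWith defaultCompressParams { compressBufferSize = 4096 }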

Decompression

data DecompressParams

The full set of parameters for decompression. The defaults are defaultDecompressParams.

The decompressBufferSize is the size of the first output buffer, containing the uncompressed data. If you know an exact or approximate upper bound on the size of the decompressed data then setting this parameter can save memory. The default decompression output buffer size is 32k. If your estimate is wrong it does not matter too much: the default buffer size will be used for the remaining chunks.

One particular use case for setting the decompressBufferSize is if you know the exact size of the decompressed data and want to produce a strict Data.ByteString.ByteString. The compression and decompression functions use lazy Data.ByteString.Lazy.ByteStrings, but if you set the decompressBufferSize correctly then you can generate a lazy Data.ByteString.Lazy.ByteString with exactly one chunk, which can be converted to a strict Data.ByteString.ByteString in O(1) time using Data.ByteString.concat . Data.ByteString.Lazy.toChunks.
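
A hedged sketch of that use case, assuming decompressWith and the DecompressParams record (with the decompressBufferSize field from above) are exported from Codec.Compression.BZip:

  import qualified Data.ByteString      as BS
  import qualified Data.ByteString.Lazy as BL
  import Codec.Compression.BZip
           (decompressWith, defaultDecompressParams, DecompressParams(..))

  -- When the exact decompressed size is known in advance, a buffer of
  -- exactly that size yields a single-chunk lazy ByteString, so the
  -- final conversion to a strict ByteString is O(1).
  decompressStrict :: Int -> BL.ByteString -> BS.ByteString
  decompressStrict exactSize =
    BS.concat . BL.toChunks .
    decompressWith defaultDecompressParams { decompressBufferSize = exactSize }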

defaultDecompressParams :: DecompressParams

The default set of parameters for decompression. This is typically used with the decompressWith function with specific parameters overridden.

The compression parameter types

data BlockSize

The block size affects both the compression ratio achieved and the amount of memory needed for compression and decompression.

BlockSize 1 through BlockSize 9 specify the block size to be 100,000 bytes through 900,000 bytes respectively. The default is to use the maximum block size.

Larger block sizes give rapidly diminishing marginal returns. Most of the compression comes from the first two or three hundred k of block size, a fact worth bearing in mind when using bzip2 on small machines. It is also important to appreciate that the decompression memory requirement is set at compression time by the choice of block size.

  • In general, try to use the largest block size that memory constraints allow, since that maximises the compression achieved.
  • Compression and decompression speed are virtually unaffected by block size.

Another significant point applies to files which fit in a single block - that means most files you'd encounter using a large block size. The amount of real memory touched is proportional to the size of the file, since the file is smaller than a block. For example, compressing a file 20,000 bytes long with the flag BlockSize 9 will cause the compressor to allocate around 7600k of memory, but only touch 400k + 20000 * 8 = 560 kbytes of it. Similarly, the decompressor will allocate 3700k but only touch 100k + 20000 * 4 = 180 kbytes.

Constructors

DefaultBlockSize

The default block size is also the maximum.

BlockSize Int

A specific block size between 1 and 9.
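
As an illustrative sketch of choosing a smaller block size to cap the decompressor's memory requirement (assuming a compressBlockSize field on CompressParams and that the constructors are exported from Codec.Compression.BZip):

  import qualified Data.ByteString.Lazy as BL
  import Codec.Compression.BZip
           (compressWith, defaultCompressParams, CompressParams(..), BlockSize(..))

  -- A 300k block captures most of the achievable compression while
  -- keeping the decompression memory requirement modest.
  compressSmallBlocks :: BL.ByteString -> BL.ByteString
  compressSmallBlocks =
    compressWith defaultCompressParams { compressBlockSize = BlockSize 3 }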

data WorkFactor

The WorkFactor parameter controls how the compression phase behaves when presented with worst case, highly repetitive, input data. If compression runs into difficulties caused by repetitive data, the library switches from the standard sorting algorithm to a fallback algorithm. The fallback is slower than the standard algorithm by perhaps a factor of three, but always behaves reasonably, no matter how bad the input.

Lower values of WorkFactor reduce the amount of effort the standard algorithm will expend before resorting to the fallback. You should set this parameter carefully: too low, and many inputs will be handled by the fallback algorithm and so compress rather slowly; too high, and your average-to-worst case compression times can become very large. The default value of 30 gives reasonable behaviour over a wide range of circumstances.

  • Note that the compressed output generated is the same regardless of whether or not the fallback algorithm is used.

Constructors

DefaultWorkFactor

The default work factor is 30.

WorkFactor Int

Allowable values range from 1 to 250 inclusive.
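
A sketch under the same assumptions as above (compressWorkFactor is an assumed field name of CompressParams):

  import Codec.Compression.BZip
           (defaultCompressParams, CompressParams(..), WorkFactor(..))

  -- Let the standard sorting algorithm work harder before falling
  -- back; the compressed output is identical either way, only the
  -- compression time differs.
  patientParams :: CompressParams
  patientParams =
    defaultCompressParams { compressWorkFactor = WorkFactor 100 }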

data MemoryLevel

For files compressed with the default 900k block size, decompression requires about 3700k of memory. To support decompression of any file on machines with less than 4Mb of memory, there is the option to decompress using approximately half this amount, about 2300k. Decompression speed is also halved, so you should use this option only where necessary.

Constructors

DefaultMemoryLevel

The default.

MinMemoryLevel

Use minimum memory during decompression. This halves the memory needed but also halves the decompression speed.
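
A final hedged sketch (decompressMemoryLevel is an assumed field name of DecompressParams):

  import qualified Data.ByteString.Lazy as BL
  import Codec.Compression.BZip
           (decompressWith, defaultDecompressParams, DecompressParams(..), MemoryLevel(..))

  -- Trade roughly half the decompression speed for roughly half the
  -- memory, e.g. for 900k-block files on small machines.
  decompressLowMem :: BL.ByteString -> BL.ByteString
  decompressLowMem =
    decompressWith defaultDecompressParams { decompressMemoryLevel = MinMemoryLevel }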