bzlib-0.2: Compression and decompression in the bzip2 format
Codec.Compression.BZip
Portability: portable (H98 + FFI)
Stability: experimental
Maintainer: duncan.coutts@worc.ox.ac.uk
Contents
Compression
Decompression
Description

Compression and decompression of data streams in the bzip2 format.

bzip2 is a freely available, patent free, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), whilst being around twice as fast at compression and six times faster at decompression.

http://www.bzip.org/

Synopsis
compress :: ByteString -> ByteString
compressWith :: BlockSize -> ByteString -> ByteString
data BlockSize
= DefaultBlockSize
| BlockSize Int
decompress :: ByteString -> ByteString
Documentation

This module provides pure functions for compressing and decompressing streams of data represented by lazy ByteStrings. This makes it easy to use either in memory or with disk or network IO.

For example, a simple bzip2 compression program is just:

 import qualified Data.ByteString.Lazy as ByteString
 import qualified Codec.Compression.BZip as BZip

 main = ByteString.interact BZip.compress

Or you could lazily read in and decompress a .bz2 file using:

 content <- fmap BZip.decompress (ByteString.readFile file)
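
For instance, a complete program along those lines, decompressing a file named on the command line (a sketch; the lack of error handling is purely for brevity):

 import qualified Data.ByteString.Lazy as ByteString
 import qualified Codec.Compression.BZip as BZip
 import System.Environment (getArgs)

 main = do
   [file] <- getArgs
   content <- fmap BZip.decompress (ByteString.readFile file)
   ByteString.putStr content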
Compression
compress :: ByteString -> ByteString

Compress a stream of data into the bzip2 format.

This uses the default block size, which is also the largest and therefore gives the highest compression ratio. Use compressWith to adjust the compression block size.

compressWith :: BlockSize -> ByteString -> ByteString
Like compress but with an extra parameter to specify the block size used for compression.
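
For example, a sketch of a compression filter that uses the smallest block size, trading some compression ratio for a much smaller memory footprint (the choice of BlockSize 1 here is just for illustration):

 import qualified Data.ByteString.Lazy as ByteString
 import qualified Codec.Compression.BZip as BZip

 main = ByteString.interact (BZip.compressWith (BZip.BlockSize 1))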
data BlockSize

The block size affects both the compression ratio achieved, and the amount of memory needed for compression and decompression.

BlockSize 1 through BlockSize 9 specify the block size to be 100,000 bytes through 900,000 bytes respectively. The default is to use the maximum block size.

Larger block sizes give rapidly diminishing marginal returns. Most of the compression comes from the first two or three hundred k of block size, a fact worth bearing in mind when using bzip2 on small machines. It is also important to appreciate that the decompression memory requirement is set at compression time by the choice of block size.

  • In general, try to use the largest block size that memory constraints allow, since that maximises the compression achieved.
  • Compression and decompression speed are virtually unaffected by block size.

Another significant point applies to files which fit in a single block - that means most files you'd encounter using a large block size. The amount of real memory touched is proportional to the size of the file, since the file is smaller than a block. For example, compressing a file 20,000 bytes long with the flag BlockSize 9 will cause the compressor to allocate around 7600k of memory, but only touch 400k + 20000 * 8 = 560 kbytes of it. Similarly, the decompressor will allocate 3700k but only touch 100k + 20000 * 4 = 180 kbytes.
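
As a sanity check, the figures above follow directly from the formulas quoted in this paragraph; a back-of-the-envelope helper (not part of the library API) makes the arithmetic explicit:

 -- Approximate memory touched when handling a file smaller than
 -- one block, in bytes, per the formulas above.
 compressTouched, decompressTouched :: Int -> Int
 compressTouched   fileSize = 400 * 1000 + fileSize * 8
 decompressTouched fileSize = 100 * 1000 + fileSize * 4

 -- compressTouched   20000 == 560000  (560 kbytes)
 -- decompressTouched 20000 == 180000  (180 kbytes)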

Constructors
DefaultBlockSize
  The default block size is also the maximum.
BlockSize Int
  A specific block size between 1 and 9.
Decompression
decompress :: ByteString -> ByteString

Decompress a stream of data in the bzip2 format.

There are a number of errors that can occur. In each case an exception will be thrown. The possible error conditions are:

  • if the stream does not start with a valid bzip2 header
  • if the compressed stream is corrupted
  • if the compressed stream ends prematurely

Note that the decompression is performed lazily. Errors in the data stream may not be detected until the end of the stream is demanded (since it is only at the end that the final checksum can be checked). If this is important to you, you must make sure to consume the whole decompressed stream before doing any IO action that depends on it.
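
For example, one way to force the whole stream up front is to demand its length before doing any dependent IO (a sketch; evaluate is from Control.Exception, and the file names are placeholders):

 import qualified Data.ByteString.Lazy as ByteString
 import qualified Codec.Compression.BZip as BZip
 import Control.Exception (evaluate)

 main = do
   compressed <- ByteString.readFile "input.bz2"
   let plain = BZip.decompress compressed
   -- Demanding the length forces the entire decompressed stream,
   -- so any corruption or truncation error is raised here rather
   -- than during the write below.
   _ <- evaluate (ByteString.length plain)
   ByteString.writeFile "output" plain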
