| Portability | portable (H98 + FFI) |
|---|---|
| Stability | provisional |
| Maintainer | duncan@haskell.org |
Compression and decompression of data streams in the bzip2 format.
bzip2 is a freely available, patent free, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), whilst being around twice as fast at compression and six times faster at decompression.
- compress :: ByteString -> ByteString
- decompress :: ByteString -> ByteString
- compressWith :: CompressParams -> ByteString -> ByteString
- decompressWith :: DecompressParams -> ByteString -> ByteString
- data CompressParams = CompressParams {}
- defaultCompressParams :: CompressParams
- data DecompressParams = DecompressParams {}
- defaultDecompressParams :: DecompressParams
- data BlockSize
- data WorkFactor
- data MemoryLevel
Documentation
This module provides pure functions for compressing and decompressing streams of data in the bzip2 format, represented by lazy ByteStrings.
This makes it easy to use either in memory or with disk or network IO.
For example, a simple bzip2 compression program is just:
```haskell
import qualified Data.ByteString.Lazy as ByteString
import qualified Codec.Compression.BZip as BZip

main = ByteString.interact BZip.compress
```
Or you could lazily read in and decompress a .bz2 file using:

content <- fmap BZip.decompress (ByteString.readFile file)
Simple compression and decompression
compress :: ByteString -> ByteString
Compress a stream of data into the bzip2 format.
This uses the default compression level, which uses the largest compression block size and so gives the highest compression. Use compressWith to adjust the compression block size.
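As a small sketch of using compress with ordinary file IO (the file names are only illustrative):

```haskell
import qualified Data.ByteString.Lazy as ByteString
import qualified Codec.Compression.BZip as BZip

-- Read a file lazily, compress it, and write the compressed result out.
main :: IO ()
main = do
  contents <- ByteString.readFile "input.txt"
  ByteString.writeFile "input.txt.bz2" (BZip.compress contents)
```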
decompress :: ByteString -> ByteString
Decompress a stream of data in the bzip2 format.
There are a number of errors that can occur. In each case an exception will be thrown. The possible error conditions are:
- if the stream does not start with a valid bzip2 header
- if the compressed stream is corrupted
- if the compressed stream ends prematurely
Note that the decompression is performed lazily. Errors in the data stream may not be detected until the end of the stream is demanded (since it is only at the end that the final checksum can be checked). If this is important to you, you must make sure to consume the whole decompressed stream before doing any IO action that depends on it.
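One way to do this, sketched below rather than prescribed by the library, is to force the length of the decompressed stream with Control.Exception.evaluate before performing the IO that depends on it (file names are illustrative):

```haskell
import qualified Data.ByteString.Lazy as ByteString
import qualified Codec.Compression.BZip as BZip
import Control.Exception (evaluate)

-- Decompress a file and force the entire result before writing it out,
-- so corruption in the input surfaces here rather than mid-write.
main :: IO ()
main = do
  compressed <- ByteString.readFile "input.txt.bz2"
  let plain = BZip.decompress compressed
  _ <- evaluate (ByteString.length plain)  -- demands the whole stream
  ByteString.writeFile "input.txt" plain
```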
Extended API with control over compression parameters
compressWith :: CompressParams -> ByteString -> ByteString
Like compress but with the ability to specify compression parameters.
Typical usage:
compressWith defaultCompressParams { ... }
In particular you can set the compression block size:
compressWith defaultCompressParams { compressBlockSize = BlockSize 1 }
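As a fuller sketch, the earlier stdin-to-stdout program can be adapted to use the smallest block size, trading some compression ratio for a much smaller memory footprint:

```haskell
import qualified Data.ByteString.Lazy as ByteString
import Codec.Compression.BZip

-- Compress standard input to standard output using the smallest block
-- size (100,000 bytes), which keeps memory use low at the cost of some
-- compression ratio.
main :: IO ()
main = ByteString.interact
         (compressWith defaultCompressParams { compressBlockSize = BlockSize 1 })
```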
decompressWith :: DecompressParams -> ByteString -> ByteString
Like decompress but with the ability to specify various decompression parameters. Typical usage:
decompressWith defaultDecompressParams { ... }
data CompressParams
The full set of parameters for compression. The defaults are defaultCompressParams.
The compressBufferSize is the size of the first output buffer containing the compressed data. If you know an approximate upper bound on the size of the compressed data then setting this parameter can save memory. The default compression output buffer size is 16k. If your estimate is wrong it does not matter too much; the default buffer size will be used for the remaining chunks.
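As an illustrative sketch, where the 4096-byte buffer below is just an example of a known approximate upper bound:

```haskell
import qualified Data.ByteString.Lazy as ByteString
import Codec.Compression.BZip

-- Compress with a smaller first output buffer, on the assumption that
-- the compressed result fits in roughly 4k. If the guess is wrong, the
-- remaining chunks simply use the default buffer size.
compressSmall :: ByteString.ByteString -> ByteString.ByteString
compressSmall = compressWith defaultCompressParams { compressBufferSize = 4096 }
```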
defaultCompressParams :: CompressParams
The default set of parameters for compression. This is typically used with the compressWith function with specific parameters overridden.
data DecompressParams
The full set of parameters for decompression. The defaults are defaultDecompressParams.
The decompressBufferSize is the size of the first output buffer, containing the uncompressed data. If you know an exact or approximate upper bound on the size of the decompressed data then setting this parameter can save memory. The default decompression output buffer size is 32k. If your estimate is wrong it does not matter too much; the default buffer size will be used for the remaining chunks.
One particular use case for setting the decompressBufferSize is if you know the exact size of the decompressed data and want to produce a strict Data.ByteString.ByteString. The compression and decompression functions use lazy Data.ByteString.Lazy.ByteStrings, but if you set the decompressBufferSize correctly then you can generate a lazy Data.ByteString.Lazy.ByteString with exactly one chunk, which can be converted to a strict Data.ByteString.ByteString in O(1) time using Data.ByteString.concat . Data.ByteString.Lazy.toChunks.
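A sketch of that use case, assuming the caller knows the exact decompressed size in advance:

```haskell
import qualified Data.ByteString as Strict
import qualified Data.ByteString.Lazy as Lazy
import Codec.Compression.BZip

-- Decompress into a single chunk sized to the known output length, then
-- convert to a strict ByteString. With a correct size the concat sees
-- just one chunk, so the conversion is effectively O(1).
decompressKnownSize :: Int -> Lazy.ByteString -> Strict.ByteString
decompressKnownSize size =
  Strict.concat
    . Lazy.toChunks
    . decompressWith defaultDecompressParams { decompressBufferSize = size }
```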
defaultDecompressParams :: DecompressParams
The default set of parameters for decompression. This is typically used with the decompressWith function with specific parameters overridden.
The compression parameter types
data BlockSize
The block size affects both the compression ratio achieved, and the amount of memory needed for compression and decompression.
BlockSize 1 through BlockSize 9 specify the block size to be 100,000 bytes through 900,000 bytes respectively. The default is to use the maximum block size.
Larger block sizes give rapidly diminishing marginal returns. Most of the compression comes from the first two or three hundred k of block size, a fact worth bearing in mind when using bzip2 on small machines. It is also important to appreciate that the decompression memory requirement is set at compression time by the choice of block size.
- In general, try and use the largest block size memory constraints allow, since that maximises the compression achieved.
- Compression and decompression speed are virtually unaffected by block size.
Another significant point applies to files which fit in a single block, which means most files you'd encounter using a large block size. The amount of real memory touched is proportional to the size of the file, since the file is smaller than a block. For example, compressing a file 20,000 bytes long with BlockSize 9 will cause the compressor to allocate around 7600k of memory, but only touch 400k + 20000 * 8 = 560 kbytes of it. Similarly, the decompressor will allocate 3700k but only touch 100k + 20000 * 4 = 180 kbytes.
Constructors:
- DefaultBlockSize: The default block size is also the maximum.
- BlockSize Int: A specific block size between 1 and 9.
data WorkFactor
The WorkFactor parameter controls how the compression phase behaves when presented with worst case, highly repetitive, input data. If compression runs into difficulties caused by repetitive data, the library switches from the standard sorting algorithm to a fallback algorithm. The fallback is slower than the standard algorithm by perhaps a factor of three, but always behaves reasonably, no matter how bad the input.
Lower values of WorkFactor reduce the amount of effort the standard algorithm will expend before resorting to the fallback. You should set this parameter carefully: too low, and many inputs will be handled by the fallback algorithm and so compress rather slowly; too high, and your average-to-worst case compression times can become very large. The default value of 30 gives reasonable behaviour over a wide range of circumstances.
- Note that the compressed output generated is the same regardless of whether or not the fallback algorithm is used.
Constructors:
- DefaultWorkFactor: The default work factor is 30.
- WorkFactor Int: Allowable values range from 1 to 250 inclusive.
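As an illustrative sketch of lowering the work factor for input known to be highly repetitive. The field name compressWorkFactor is an assumption here, so check the fields of CompressParams in your version of the library:

```haskell
import qualified Data.ByteString.Lazy as ByteString
import Codec.Compression.BZip

-- Lower the work factor so highly repetitive input falls back to the
-- slower-but-predictable algorithm sooner. The field name
-- compressWorkFactor is assumed, not taken from this page.
compressRepetitive :: ByteString.ByteString -> ByteString.ByteString
compressRepetitive =
  compressWith defaultCompressParams { compressWorkFactor = WorkFactor 10 }
```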
data MemoryLevel
For files compressed with the default 900k block size, decompression will require about 3700k of memory. To support decompression of any file in less than 4Mb there is the option to decompress using approximately half this amount of memory, about 2300k. Decompression speed is also halved, so you should use this option only where necessary.
Constructors:
- DefaultMemoryLevel: The default.
- MinMemoryLevel: Use minimum memory during decompression. This halves the memory needed but also halves the decompression speed.
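As an illustrative sketch of opting into the reduced-memory mode. The field name decompressMemoryLevel is an assumption here, so check the fields of DecompressParams in your version of the library:

```haskell
import qualified Data.ByteString.Lazy as ByteString
import Codec.Compression.BZip

-- Decompress using the reduced-memory setting, for environments with
-- less than about 4 MB available. The field name decompressMemoryLevel
-- is assumed, not taken from this page.
decompressLowMemory :: ByteString.ByteString -> ByteString.ByteString
decompressLowMemory =
  decompressWith defaultDecompressParams { decompressMemoryLevel = MinMemoryLevel }
```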