-- Hoogle documentation, generated by Haddock
-- See Hoogle, http://www.haskell.org/hoogle/
-- | Compression and decompression in the bzip2 format
--
-- Compression and decompression in the bzip2 format
@package bzlib
@version 0.4
-- | Pure stream based interface to lower level bzlib wrapper
module Codec.Compression.BZip.Internal
compressDefault :: BlockSize -> ByteString -> ByteString
decompressDefault :: ByteString -> ByteString
-- | The block size affects both the compression ratio achieved, and the
-- amount of memory needed for compression and decompression.
--
-- BlockSize 1 through BlockSize 9
-- specify the block size to be 100,000 bytes through 900,000 bytes
-- respectively. The default is to use the maximum block size.
--
-- Larger block sizes give rapidly diminishing marginal returns. Most of
-- the compression comes from the first two or three hundred k of block
-- size, a fact worth bearing in mind when using bzip2 on small machines.
-- It is also important to appreciate that the decompression memory
-- requirement is set at compression time by the choice of block size.
--
--
-- - In general, try to use the largest block size memory constraints
-- allow, since that maximises the compression achieved.
-- - Compression and decompression speed are virtually unaffected by
-- block size.
--
--
-- Another significant point applies to files which fit in a single block
-- - that means most files you'd encounter using a large block size. The
-- amount of real memory touched is proportional to the size of the file,
-- since the file is smaller than a block. For example, compressing a
-- file 20,000 bytes long with the flag BlockSize 9 will
-- cause the compressor to allocate around 7600k of memory, but only
-- touch 400k + 20000 * 8 = 560 kbytes of it. Similarly, the decompressor
-- will allocate 3700k but only touch 100k + 20000 * 4 = 180 kbytes.
data BlockSize
-- | The default block size is also the maximum.
DefaultBlockSize :: BlockSize
-- | A specific block size between 1 and 9.
BlockSize :: Int -> BlockSize
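As a sketch of how the defaults above fit together (assuming the lazy `ByteString` type this package uses throughout; the helper names are illustrative, not part of the API):

```haskell
-- Hypothetical usage sketch; assumes the bzlib package is installed.
import qualified Data.ByteString.Lazy.Char8 as BL
import Codec.Compression.BZip.Internal
         (compressDefault, decompressDefault, BlockSize (..))

-- Compress with the smallest (100k) block: adequate for small inputs
-- and cheapest in memory at both ends.
compressSmall :: BL.ByteString -> BL.ByteString
compressSmall = compressDefault (BlockSize 1)

-- Round-tripping should give back the original stream.
roundTrips :: BL.ByteString -> Bool
roundTrips bs = decompressDefault (compressSmall bs) == bs
```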
compressFull :: BlockSize -> Verbosity -> WorkFactor -> ByteString -> ByteString
decompressFull :: Verbosity -> MemoryLevel -> ByteString -> ByteString
-- | The WorkFactor parameter controls how the compression phase
-- behaves when presented with worst case, highly repetitive, input data.
-- If compression runs into difficulties caused by repetitive data, the
-- library switches from the standard sorting algorithm to a fallback
-- algorithm. The fallback is slower than the standard algorithm by
-- perhaps a factor of three, but always behaves reasonably, no matter
-- how bad the input.
--
-- Lower values of WorkFactor reduce the amount of effort the
-- standard algorithm will expend before resorting to the fallback. You
-- should set this parameter carefully; too low, and many inputs will be
-- handled by the fallback algorithm and so compress rather slowly, too
-- high, and your average-to-worst case compression times can become very
-- large. The default value of 30 gives reasonable behaviour over a wide
-- range of circumstances.
--
--
-- - Note that the compressed output generated is the same regardless
-- of whether or not the fallback algorithm is used.
--
data WorkFactor
-- | The default work factor is 30.
DefaultWorkFactor :: WorkFactor
-- | Allowable values range from 1 to 250 inclusive.
WorkFactor :: Int -> WorkFactor
-- | For files compressed with the default 900k block size, decompression
-- requires about 3700k of memory. To support decompression of
-- any file in less than 4Mb there is the option to decompress using
-- approximately half this amount of memory, about 2300k. Decompression
-- speed is also halved, so you should use this option only where
-- necessary.
data MemoryLevel
-- | The default.
DefaultMemoryLevel :: MemoryLevel
-- | Use minimum memory during decompression. This halves the memory needed
-- but also halves the decompression speed.
MinMemoryLevel :: MemoryLevel
-- | The Verbosity parameter is a number between 0 and 4. 0 is
-- silent, and greater numbers give increasingly verbose
-- monitoring/debugging output.
data Verbosity
-- | No output. This is the default.
Silent :: Verbosity
-- | A specific level between 0 and 4.
Verbosity :: Int -> Verbosity
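Putting the tuning knobs above together (a sketch; the particular parameter choices are illustrative, not recommendations):

```haskell
-- Hypothetical usage sketch; assumes the bzlib package is installed.
import qualified Data.ByteString.Lazy.Char8 as BL
import Codec.Compression.BZip.Internal

-- Maximum-compression settings: largest block, default work factor,
-- no monitoring output.
compressMax :: BL.ByteString -> BL.ByteString
compressMax = compressFull (BlockSize 9) Silent DefaultWorkFactor

-- Decompress in roughly half the usual memory, at roughly half speed.
decompressSmall :: BL.ByteString -> BL.ByteString
decompressSmall = decompressFull Silent MinMemoryLevel
```

Note that a stream compressed with any `BlockSize` can be decompressed with `MinMemoryLevel`; the memory level only trades decompression speed for space.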
-- | Compression and decompression of data streams in the bzip2 format.
--
-- bzip2 is a freely available, patent free, high-quality data
-- compressor. It typically compresses files to within 10% to 15% of the
-- best available techniques (the PPM family of statistical compressors),
-- whilst being around twice as fast at compression and six times faster
-- at decompression.
--
-- http://www.bzip.org/
module Codec.Compression.BZip
-- | Compress a stream of data into the bzip2 format.
--
-- This uses the default compression parameters, which select the largest
-- block size and hence the highest compression level. Use
-- compressWith to adjust the compression block size.
compress :: ByteString -> ByteString
-- | Like compress but with an extra parameter to specify the block
-- size used for compression.
compressWith :: BlockSize -> ByteString -> ByteString
-- | The block size affects both the compression ratio achieved, and the
-- amount of memory needed for compression and decompression.
--
-- BlockSize 1 through BlockSize 9
-- specify the block size to be 100,000 bytes through 900,000 bytes
-- respectively. The default is to use the maximum block size.
--
-- Larger block sizes give rapidly diminishing marginal returns. Most of
-- the compression comes from the first two or three hundred k of block
-- size, a fact worth bearing in mind when using bzip2 on small machines.
-- It is also important to appreciate that the decompression memory
-- requirement is set at compression time by the choice of block size.
--
--
-- - In general, try to use the largest block size memory constraints
-- allow, since that maximises the compression achieved.
-- - Compression and decompression speed are virtually unaffected by
-- block size.
--
--
-- Another significant point applies to files which fit in a single block
-- - that means most files you'd encounter using a large block size. The
-- amount of real memory touched is proportional to the size of the file,
-- since the file is smaller than a block. For example, compressing a
-- file 20,000 bytes long with the flag BlockSize 9 will
-- cause the compressor to allocate around 7600k of memory, but only
-- touch 400k + 20000 * 8 = 560 kbytes of it. Similarly, the decompressor
-- will allocate 3700k but only touch 100k + 20000 * 4 = 180 kbytes.
data BlockSize
-- | The default block size is also the maximum.
DefaultBlockSize :: BlockSize
-- | A specific block size between 1 and 9.
BlockSize :: Int -> BlockSize
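A sketch of choosing the block size at the call site, trading decompressor memory for compression ratio (the helper name is illustrative):

```haskell
-- Hypothetical usage sketch; assumes the bzlib package is installed.
import qualified Data.ByteString.Lazy.Char8 as BL
import Codec.Compression.BZip (compressWith, BlockSize (..))

-- For small machines: cap the block size at 100k, which bounds the
-- memory needed by whoever later decompresses the stream.
compressFrugal :: BL.ByteString -> BL.ByteString
compressFrugal = compressWith (BlockSize 1)

-- compress itself is equivalent to compressWith DefaultBlockSize.
```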
-- | Decompress a stream of data in the bzip2 format.
--
-- There are a number of errors that can occur. In each case an exception
-- will be thrown. The possible error conditions are:
--
--
-- - if the stream does not start with a valid bzip2 header
-- - if the compressed stream is corrupted
-- - if the compressed stream ends prematurely
--
--
-- Note that the decompression is performed lazily. Errors in the
-- data stream may not be detected until the end of the stream is
-- demanded (since it is only at the end that the final checksum can be
-- checked). If this is important to you, you must make sure to consume
-- the whole decompressed stream before doing any IO action that depends
-- on it.
decompress :: ByteString -> ByteString
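Because decompression is lazy, a corrupt stream may only raise its exception when the tail is demanded. One way to force any error to surface up front is to realise the whole result before returning (a sketch; `toStrict`/`fromStrict` need bytestring >= 0.9.2):

```haskell
-- Hypothetical usage sketch; assumes the bzlib package is installed.
import qualified Data.ByteString.Lazy.Char8 as BL
import Codec.Compression.BZip (compress, decompress)

-- Force the entire decompressed stream (and hence the final checksum
-- check) before returning, so a stream error is raised here rather
-- than later, in the middle of unrelated IO.
decompressStrictly :: BL.ByteString -> BL.ByteString
decompressStrictly = BL.fromStrict . BL.toStrict . decompress
```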