-- Hoogle documentation, generated by Haddock
-- See Hoogle, http://www.haskell.org/hoogle/


-- | Compression and decompression in the bzip2 format
--   
--   This package provides a pure interface for compressing and
--   decompressing streams of data represented as lazy ByteStrings. It
--   uses the bz2 C library so it has high performance.
--   
--   It provides a convenient high level API suitable for most tasks,
--   and for the few cases where more control is needed it provides
--   access to the full bzip2 feature set.
@package bzlib
@version 0.5.0.2


-- | Pure stream based interface to lower level bzlib wrapper
module Codec.Compression.BZip.Internal
compress :: CompressParams -> ByteString -> ByteString

-- | The full set of parameters for compression. The defaults are
--   defaultCompressParams.
--   
--   The compressBufferSize is the size of the first output buffer
--   containing the compressed data. If you know an approximate upper
--   bound on the size of the compressed data then setting this
--   parameter can save memory. The default compression output buffer
--   size is 16k. If your estimate is wrong it does not matter too much,
--   the default buffer size will be used for the remaining chunks.
data CompressParams
CompressParams :: BlockSize -> WorkFactor -> Int -> CompressParams
compressBlockSize :: CompressParams -> BlockSize
compressWorkFactor :: CompressParams -> WorkFactor
compressBufferSize :: CompressParams -> Int

-- | The default set of parameters for compression. This is typically
--   used with the compressWith function with specific parameters
--   overridden.
defaultCompressParams :: CompressParams

decompress :: DecompressParams -> ByteString -> ByteString

-- | The full set of parameters for decompression. The defaults are
--   defaultDecompressParams.
--   
--   The decompressBufferSize is the size of the first output buffer,
--   containing the uncompressed data. If you know an exact or
--   approximate upper bound on the size of the decompressed data then
--   setting this parameter can save memory. The default decompression
--   output buffer size is 32k. If your estimate is wrong it does not
--   matter too much, the default buffer size will be used for the
--   remaining chunks.
--   
--   One particular use case for setting the decompressBufferSize is if
--   you know the exact size of the decompressed data and want to
--   produce a strict Data.ByteString.ByteString. The compression and
--   decompression functions use lazy Data.ByteString.Lazy.ByteStrings,
--   but if you set the decompressBufferSize correctly then you can
--   generate a lazy Data.ByteString.Lazy.ByteString with exactly one
--   chunk, which can be converted to a strict
--   Data.ByteString.ByteString in O(1) time using
--   Data.ByteString.concat . Data.ByteString.Lazy.toChunks.
data DecompressParams
DecompressParams :: MemoryLevel -> Int -> DecompressParams
decompressMemoryLevel :: DecompressParams -> MemoryLevel
decompressBufferSize :: DecompressParams -> Int

-- | The default set of parameters for decompression. This is typically
--   used with the decompressWith function with specific parameters
--   overridden.
defaultDecompressParams :: DecompressParams

-- | The block size affects both the compression ratio achieved, and the
--   amount of memory needed for compression and decompression.
--   
--   BlockSize 1 through BlockSize 9 specify the block size to be
--   100,000 bytes through 900,000 bytes respectively. The default is to
--   use the maximum block size.
--   
--   Larger block sizes give rapidly diminishing marginal returns.
--   Most of the compression comes from the first two or three hundred k
--   of block size, a fact worth bearing in mind when using bzip2 on
--   small machines. It is also important to appreciate that the
--   decompression memory requirement is set at compression time by the
--   choice of block size.
--   
--   Another significant point applies to files which fit in a single
--   block - that means most files you'd encounter using a large block
--   size. The amount of real memory touched is proportional to the size
--   of the file, since the file is smaller than a block. For example,
--   compressing a file 20,000 bytes long with the flag BlockSize 9 will
--   cause the compressor to allocate around 7600k of memory, but only
--   touch 400k + 20000 * 8 = 560 kbytes of it. Similarly, the
--   decompressor will allocate 3700k but only touch
--   100k + 20000 * 4 = 180 kbytes.
data BlockSize

-- | The default block size is also the maximum.
DefaultBlockSize :: BlockSize

-- | A specific block size between 1 and 9.
BlockSize :: Int -> BlockSize

-- | The WorkFactor parameter controls how the compression phase behaves
--   when presented with worst case, highly repetitive, input data. If
--   compression runs into difficulties caused by repetitive data, the
--   library switches from the standard sorting algorithm to a fallback
--   algorithm. The fallback is slower than the standard algorithm by
--   perhaps a factor of three, but always behaves reasonably, no matter
--   how bad the input.
--   
--   Lower values of WorkFactor reduce the amount of effort the standard
--   algorithm will expend before resorting to the fallback. You should
--   set this parameter carefully; too low, and many inputs will be
--   handled by the fallback algorithm and so compress rather slowly;
--   too high, and your average-to-worst case compression times can
--   become very large. The default value of 30 gives reasonable
--   behaviour over a wide range of circumstances.
data WorkFactor

-- | The default work factor is 30.
DefaultWorkFactor :: WorkFactor

-- | Allowable values range from 1 to 250 inclusive.
WorkFactor :: Int -> WorkFactor

-- | For files compressed with the default 900k block size,
--   decompression will require about 3700k to decompress. To support
--   decompression of any file in less than 4Mb there is the option to
--   decompress using approximately half this amount of memory, about
--   2300k. Decompression speed is also halved, so you should use this
--   option only where necessary.
data MemoryLevel

-- | The default.
DefaultMemoryLevel :: MemoryLevel

-- | Use minimum memory during decompression. This halves the memory
--   needed but also halves the decompression speed.
MinMemoryLevel :: MemoryLevel


-- | Compression and decompression of data streams in the bzip2 format.
--   
--   bzip2 is a freely available, patent free, high-quality data
--   compressor. It typically compresses files to within 10% to 15% of
--   the best available techniques (the PPM family of statistical
--   compressors), whilst being around twice as fast at compression and
--   six times faster at decompression.
--   
--   http://www.bzip.org/
module Codec.Compression.BZip

-- | Compress a stream of data into the bzip2 format.
--   
--   This uses the default compression level, which uses the largest
--   compression block size for the highest level of compression. Use
--   compressWith to adjust the compression block size.
compress :: ByteString -> ByteString

-- | Decompress a stream of data in the bzip2 format.
--   
--   There are a number of errors that can occur. In each case an
--   exception will be thrown.
--   The possible error conditions are:
--   
--   Note that the decompression is performed lazily. Errors in the data
--   stream may not be detected until the end of the stream is demanded
--   (since it is only at the end that the final checksum can be
--   checked). If this is important to you, you must make sure to
--   consume the whole decompressed stream before doing any IO action
--   that depends on it.
decompress :: ByteString -> ByteString

-- | Like compress but with the ability to specify compression
--   parameters. Typical usage:
--   
--   compressWith defaultCompressParams { ... }
--   
--   In particular you can set the compression block size:
--   
--   compressWith defaultCompressParams { compressBlockSize = BlockSize 1 }
--   
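--   As a slightly fuller sketch (the file names, the helper name and
--   the particular block size below are illustrative assumptions, not
--   part of this package), a custom block size combines with ordinary
--   lazy ByteString file IO like this:
--   
--   import qualified Data.ByteString.Lazy as BL
--   import Codec.Compression.BZip
--   
--   -- Compress "input.txt" with a 500k block and write "input.txt.bz2".
--   -- The block size chosen here also fixes the decompression memory
--   -- requirement, since that is set at compression time.
--   compressSmallBlock :: IO ()
--   compressSmallBlock = do
--     contents <- BL.readFile "input.txt"
--     BL.writeFile "input.txt.bz2"
--       (compressWith defaultCompressParams { compressBlockSize = BlockSize 5 } contents)
--   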
compressWith :: CompressParams -> ByteString -> ByteString

-- | Like decompress but with the ability to specify various
--   decompression parameters. Typical usage:
--   
--   decompressWith defaultDecompressParams { ... }
--   
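--   As further sketches (the helper names and the assumption that the
--   exact decompressed size is known are illustrative, not part of this
--   package), the memory level and the output buffer size can be
--   overridden in the same way:
--   
--   import qualified Data.ByteString      as B
--   import qualified Data.ByteString.Lazy as BL
--   import Codec.Compression.BZip
--   
--   -- Decompress using roughly half the usual memory, at half the speed.
--   lowMemDecompress :: BL.ByteString -> BL.ByteString
--   lowMemDecompress =
--     decompressWith defaultDecompressParams { decompressMemoryLevel = MinMemoryLevel }
--   
--   -- When the exact decompressed size is known, a matching buffer size
--   -- yields a single chunk, so converting the result to a strict
--   -- ByteString does not copy the data.
--   decompressToStrict :: Int -> BL.ByteString -> B.ByteString
--   decompressToStrict size =
--     B.concat . BL.toChunks
--       . decompressWith defaultDecompressParams { decompressBufferSize = size }
--   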
decompressWith :: DecompressParams -> ByteString -> ByteString

-- | The full set of parameters for compression. The defaults are
--   defaultCompressParams.
--   
--   The compressBufferSize is the size of the first output buffer
--   containing the compressed data. If you know an approximate upper
--   bound on the size of the compressed data then setting this
--   parameter can save memory. The default compression output buffer
--   size is 16k. If your estimate is wrong it does not matter too much,
--   the default buffer size will be used for the remaining chunks.
data CompressParams
CompressParams :: BlockSize -> WorkFactor -> Int -> CompressParams
compressBlockSize :: CompressParams -> BlockSize
compressWorkFactor :: CompressParams -> WorkFactor
compressBufferSize :: CompressParams -> Int

-- | The default set of parameters for compression. This is typically
--   used with the compressWith function with specific parameters
--   overridden.
defaultCompressParams :: CompressParams

-- | The full set of parameters for decompression. The defaults are
--   defaultDecompressParams.
--   
--   The decompressBufferSize is the size of the first output buffer,
--   containing the uncompressed data. If you know an exact or
--   approximate upper bound on the size of the decompressed data then
--   setting this parameter can save memory. The default decompression
--   output buffer size is 32k. If your estimate is wrong it does not
--   matter too much, the default buffer size will be used for the
--   remaining chunks.
--   
--   One particular use case for setting the decompressBufferSize is if
--   you know the exact size of the decompressed data and want to
--   produce a strict Data.ByteString.ByteString. The compression and
--   decompression functions use lazy Data.ByteString.Lazy.ByteStrings,
--   but if you set the decompressBufferSize correctly then you can
--   generate a lazy Data.ByteString.Lazy.ByteString with exactly one
--   chunk, which can be converted to a strict
--   Data.ByteString.ByteString in O(1) time using
--   Data.ByteString.concat . Data.ByteString.Lazy.toChunks.
data DecompressParams
DecompressParams :: MemoryLevel -> Int -> DecompressParams
decompressMemoryLevel :: DecompressParams -> MemoryLevel
decompressBufferSize :: DecompressParams -> Int

-- | The default set of parameters for decompression. This is typically
--   used with the decompressWith function with specific parameters
--   overridden.
defaultDecompressParams :: DecompressParams

-- | The block size affects both the compression ratio achieved, and the
--   amount of memory needed for compression and decompression.
--   
--   BlockSize 1 through BlockSize 9 specify the block size to be
--   100,000 bytes through 900,000 bytes respectively. The default is to
--   use the maximum block size.
--   
--   Larger block sizes give rapidly diminishing marginal returns. Most
--   of the compression comes from the first two or three hundred k of
--   block size, a fact worth bearing in mind when using bzip2 on small
--   machines. It is also important to appreciate that the decompression
--   memory requirement is set at compression time by the choice of
--   block size.
--   
--   Another significant point applies to files which fit in a single
--   block - that means most files you'd encounter using a large block
--   size. The amount of real memory touched is proportional to the size
--   of the file, since the file is smaller than a block.
--   For example, compressing a file 20,000 bytes long with the flag
--   BlockSize 9 will cause the compressor to allocate around 7600k of
--   memory, but only touch 400k + 20000 * 8 = 560 kbytes of it.
--   Similarly, the decompressor will allocate 3700k but only touch
--   100k + 20000 * 4 = 180 kbytes.
data BlockSize

-- | The default block size is also the maximum.
DefaultBlockSize :: BlockSize

-- | A specific block size between 1 and 9.
BlockSize :: Int -> BlockSize

-- | The WorkFactor parameter controls how the compression phase behaves
--   when presented with worst case, highly repetitive, input data. If
--   compression runs into difficulties caused by repetitive data, the
--   library switches from the standard sorting algorithm to a fallback
--   algorithm. The fallback is slower than the standard algorithm by
--   perhaps a factor of three, but always behaves reasonably, no matter
--   how bad the input.
--   
--   Lower values of WorkFactor reduce the amount of effort the standard
--   algorithm will expend before resorting to the fallback. You should
--   set this parameter carefully; too low, and many inputs will be
--   handled by the fallback algorithm and so compress rather slowly;
--   too high, and your average-to-worst case compression times can
--   become very large. The default value of 30 gives reasonable
--   behaviour over a wide range of circumstances.
data WorkFactor

-- | The default work factor is 30.
DefaultWorkFactor :: WorkFactor

-- | Allowable values range from 1 to 250 inclusive.
WorkFactor :: Int -> WorkFactor

-- | For files compressed with the default 900k block size,
--   decompression will require about 3700k to decompress. To support
--   decompression of any file in less than 4Mb there is the option to
--   decompress using approximately half this amount of memory, about
--   2300k. Decompression speed is also halved, so you should use this
--   option only where necessary.
data MemoryLevel

-- | The default.
DefaultMemoryLevel :: MemoryLevel

-- | Use minimum memory during decompression. This halves the memory
--   needed but also halves the decompression speed.
MinMemoryLevel :: MemoryLevel
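
-- A minimal end-to-end sketch of using module Codec.Compression.BZip
-- (not part of the package itself; the file names and the main action
-- are illustrative assumptions):
--
--   import qualified Data.ByteString.Lazy as BL
--   import Codec.Compression.BZip (compress, decompress)
--
--   main :: IO ()
--   main = do
--     original <- BL.readFile "report.txt"
--     -- compress with the default parameters, i.e. the maximum block size
--     BL.writeFile "report.txt.bz2" (compress original)
--     -- decompression is lazy: errors are only detected once the end of
--     -- the stream is demanded, so force the result (here via BL.length)
--     -- before relying on it
--     restored <- decompress <$> BL.readFile "report.txt.bz2"
--     print (BL.length restored == BL.length original)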