-- Hoogle documentation, generated by Haddock
-- See Hoogle, http://www.haskell.org/hoogle/

-- | Compression and decompression in the gzip and zlib formats
--
-- This package provides a pure interface for compressing and
-- decompressing streams of data represented as lazy ByteStrings. It
-- uses the zlib C library so it has high performance. It supports the
-- "zlib", "gzip" and "raw" compression formats.
--
-- It provides a convenient high level API suitable for most tasks and,
-- for the few cases where more control is needed, it provides access to
-- the full zlib feature set.
@package zlib
@version 0.7.0.0

-- | Pure and IO stream based interfaces to lower level zlib wrapper
module Codec.Compression.Zlib.Internal
-- | Compress a data stream provided as a lazy ByteString.
--
-- There are no expected error conditions. All input data streams are
-- valid. It is possible for unexpected errors to occur, such as running
-- out of memory, or finding the wrong version of the zlib C library;
-- these are thrown as exceptions.
compress :: Format -> CompressParams -> ByteString -> ByteString
-- | Decompress a data stream provided as a lazy ByteString.
--
-- It will throw an exception if any error is encountered in the input
-- data. If you need more control over error handling then use one of
-- the incremental versions, decompressST or decompressIO.
decompress :: Format -> DecompressParams -> ByteString -> ByteString
-- | The unfolding of the compression process, where you provide a
-- sequence of uncompressed data chunks as input and receive a sequence
-- of compressed data chunks as output. The process is incremental, in
-- that the demand for input and provision of output are interleaved.
data CompressStream m
CompressInputRequired :: (ByteString -> m (CompressStream m)) -> CompressStream m
[compressSupplyInput] :: CompressStream m -> ByteString -> m (CompressStream m)
CompressOutputAvailable :: !ByteString -> m (CompressStream m) -> CompressStream m
[compressOutput] :: CompressStream m -> !ByteString
[compressNext] :: CompressStream m -> m (CompressStream m)
CompressStreamEnd :: CompressStream m
-- | Incremental compression in the ST monad. Using ST makes it
-- possible to write pure lazy functions while making use of incremental
-- compression.
--
-- Chunk size must fit into CUInt.
compressST :: Format -> CompressParams -> CompressStream (ST s)
-- | Incremental compression in the IO monad.
--
-- Chunk size must fit into CUInt.
compressIO :: Format -> CompressParams -> CompressStream IO
-- | A fold over the CompressStream in the given monad.
--
-- One way to look at this is that it runs the stream, using callback
-- functions for the three stream events.
foldCompressStream :: Monad m => ((ByteString -> m a) -> m a) -> (ByteString -> m a -> m a) -> m a -> CompressStream m -> m a
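-- A minimal sketch of driving the incremental compressor by hand, as an
-- alternative to foldCompressStream. The helper name compressChunksIO is
-- illustrative, and it assumes that supplying an empty chunk marks the
-- end of the input, as documented for DecompressStream below:
--
--   import qualified Data.ByteString as BS
--   import Codec.Compression.Zlib.Internal
--
--   compressChunksIO :: [BS.ByteString] -> IO [BS.ByteString]
--   compressChunksIO = go (compressIO zlibFormat defaultCompressParams)
--     where
--       go (CompressInputRequired supply) chunks =
--         case chunks of
--           []     -> supply BS.empty >>= \s -> go s []   -- assumed: empty chunk ends input
--           (c:cs) -> supply c        >>= \s -> go s cs
--       go (CompressOutputAvailable out next) chunks = do
--         s    <- next
--         rest <- go s chunks
--         return (out : rest)
--       go CompressStreamEnd _ =
--         return []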
-- | A variant on foldCompressStream that is pure rather than
-- operating in a monad and where the input is provided by a lazy
-- ByteString. So we only have to deal with the output and end
-- parts, making it just like a foldr on a list of output chunks.
--
-- For example:
--
-- toChunks = foldCompressStreamWithInput (:) []
foldCompressStreamWithInput :: (ByteString -> a -> a) -> a -> (forall s. CompressStream (ST s)) -> ByteString -> a
-- | The unfolding of the decompression process, where you provide a
-- sequence of compressed data chunks as input and receive a sequence of
-- uncompressed data chunks as output. The process is incremental, in
-- that the demand for input and provision of output are interleaved.
--
-- To indicate the end of the input supply an empty input chunk. Note
-- that for gzipFormat with the default decompressAllMembers True
-- you will have to do this, as the decompressor will look for any
-- following members. With decompressAllMembers False the
-- decompressor knows when the data ends and will produce
-- DecompressStreamEnd without you having to supply an empty chunk to
-- indicate the end of the input.
data DecompressStream m
DecompressInputRequired :: (ByteString -> m (DecompressStream m)) -> DecompressStream m
[decompressSupplyInput] :: DecompressStream m -> ByteString -> m (DecompressStream m)
DecompressOutputAvailable :: !ByteString -> m (DecompressStream m) -> DecompressStream m
[decompressOutput] :: DecompressStream m -> !ByteString
[decompressNext] :: DecompressStream m -> m (DecompressStream m)
-- | Includes any trailing unconsumed input data.
DecompressStreamEnd :: ByteString -> DecompressStream m
[decompressUnconsumedInput] :: DecompressStream m -> ByteString
-- | An error code
DecompressStreamError :: DecompressError -> DecompressStream m
[decompressStreamError] :: DecompressStream m -> DecompressError
-- | The possible error cases when decompressing a stream.
--
-- This can be shown to give a human readable error message.
data DecompressError
-- | The compressed data stream ended prematurely. This may happen if the
-- input data stream was truncated.
TruncatedInput :: DecompressError
-- | It is possible to do zlib compression with a custom dictionary. This
-- allows slightly higher compression ratios for short files. However
-- such compressed streams require the same dictionary when
-- decompressing. This error is for when we encounter a compressed stream
-- that needs a dictionary, and it's not provided.
DictionaryRequired :: DecompressError
-- | If the stream requires a dictionary and you provide one with the
-- wrong DictionaryHash then you will get this error.
DictionaryMismatch :: DecompressError
-- | If the compressed data stream is corrupted in any way then you will
-- get this error, for example if the input data just isn't a compressed
-- zlib data stream. In particular if the data checksum turns out to be
-- wrong then you will get all the decompressed data but this error at
-- the end, instead of the normal successful StreamEnd.
DataFormatError :: String -> DecompressError
-- | Incremental decompression in the ST monad. Using ST makes it
-- possible to write pure lazy functions while making use of incremental
-- decompression.
--
-- Chunk size must fit into CUInt.
decompressST :: Format -> DecompressParams -> DecompressStream (ST s)
-- | Incremental decompression in the IO monad.
--
-- Chunk size must fit into CUInt.
decompressIO :: Format -> DecompressParams -> DecompressStream IO
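-- A hedged sketch of running the incremental decompressor in IO. The
-- helper name decompressChunksIO is illustrative; it supplies an empty
-- chunk to signal the end of input, as described above, and returns
-- either an error or the output chunks plus any unconsumed trailing
-- input:
--
--   import qualified Data.ByteString as BS
--   import Codec.Compression.Zlib.Internal
--
--   decompressChunksIO :: [BS.ByteString]
--                      -> IO (Either DecompressError ([BS.ByteString], BS.ByteString))
--   decompressChunksIO = go (decompressIO zlibFormat defaultDecompressParams)
--     where
--       go (DecompressInputRequired supply) chunks =
--         case chunks of
--           []     -> supply BS.empty >>= \s -> go s []   -- empty chunk signals end of input
--           (c:cs) -> supply c        >>= \s -> go s cs
--       go (DecompressOutputAvailable out next) chunks = do
--         s    <- next
--         rest <- go s chunks
--         return (fmap (\(outs, trailing) -> (out : outs, trailing)) rest)
--       go (DecompressStreamEnd trailing) _ =
--         return (Right ([], trailing))
--       go (DecompressStreamError err) _ =
--         return (Left err)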
-- | A fold over the DecompressStream in the given monad.
--
-- One way to look at this is that it runs the stream, using callback
-- functions for the four stream events.
foldDecompressStream :: Monad m => ((ByteString -> m a) -> m a) -> (ByteString -> m a -> m a) -> (ByteString -> m a) -> (DecompressError -> m a) -> DecompressStream m -> m a
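-- A sketch of using foldDecompressStream to copy between two handles,
-- one callback per stream event. The helper name decompressHandle and
-- the 32k read size are illustrative, not part of the API:
--
--   import qualified Data.ByteString as BS
--   import Control.Exception (throwIO)
--   import System.IO (Handle)
--   import Codec.Compression.Zlib.Internal
--
--   decompressHandle :: Handle -> Handle -> IO ()
--   decompressHandle hIn hOut =
--       foldDecompressStream input output end err
--         (decompressIO gzipFormat defaultDecompressParams)
--     where
--       input k          = BS.hGetSome hIn 32768 >>= k  -- empty chunk at EOF ends the input
--       output chunk out = BS.hPut hOut chunk >> out
--       end _trailing    = return ()                    -- ignore unconsumed trailing data
--       err              = throwIO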
-- | A variant on foldDecompressStream that is pure rather than
-- operating in a monad and where the input is provided by a lazy
-- ByteString. So we only have to deal with the output, end and
-- error parts, making it like a foldr on a list of output chunks.
--
-- For example:
--
-- toChunks = foldDecompressStreamWithInput (:) [] throw
foldDecompressStreamWithInput :: (ByteString -> a -> a) -> (ByteString -> a) -> (DecompressError -> a) -> (forall s. DecompressStream (ST s)) -> ByteString -> a
-- | The full set of parameters for compression. The defaults are
-- defaultCompressParams.
--
-- The compressBufferSize is the size of the first output buffer
-- containing the compressed data. If you know an approximate upper
-- bound on the size of the compressed data then setting this parameter
-- can save memory. The default compression output buffer size is
-- 16k. If your estimate is wrong it does not matter too much, the
-- default buffer size will be used for the remaining chunks.
data CompressParams
CompressParams :: !CompressionLevel -> !Method -> !WindowBits -> !MemoryLevel -> !CompressionStrategy -> !Int -> Maybe ByteString -> CompressParams
[compressLevel] :: CompressParams -> !CompressionLevel
[compressMethod] :: CompressParams -> !Method
[compressWindowBits] :: CompressParams -> !WindowBits
[compressMemoryLevel] :: CompressParams -> !MemoryLevel
[compressStrategy] :: CompressParams -> !CompressionStrategy
[compressBufferSize] :: CompressParams -> !Int
[compressDictionary] :: CompressParams -> Maybe ByteString
-- | The default set of parameters for compression. This is typically
-- used with compressWith, with specific parameters overridden.
defaultCompressParams :: CompressParams
-- | The full set of parameters for decompression. The defaults are
-- defaultDecompressParams.
--
-- The decompressBufferSize is the size of the first output buffer,
-- containing the uncompressed data. If you know an exact or approximate
-- upper bound on the size of the decompressed data then setting this
-- parameter can save memory. The default decompression output buffer
-- size is 32k. If your estimate is wrong it does not matter too much,
-- the default buffer size will be used for the remaining chunks.
--
-- One particular use case for setting the decompressBufferSize is if
-- you know the exact size of the decompressed data and want to produce
-- a strict ByteString. The compression and decompression functions use
-- lazy ByteStrings but if you set the decompressBufferSize correctly
-- then you can generate a lazy ByteString with exactly one chunk, which
-- can be converted to a strict ByteString in O(1) time using
-- concat . toChunks.
data DecompressParams
DecompressParams :: !WindowBits -> !Int -> Maybe ByteString -> Bool -> DecompressParams
[decompressWindowBits] :: DecompressParams -> !WindowBits
[decompressBufferSize] :: DecompressParams -> !Int
[decompressDictionary] :: DecompressParams -> Maybe ByteString
[decompressAllMembers] :: DecompressParams -> Bool
-- | The default set of parameters for decompression. This is typically
-- used with decompressWith, with specific parameters overridden.
defaultDecompressParams :: DecompressParams
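-- The strict ByteString use case described above, as a sketch: if the
-- exact decompressed size is known in advance (the expectedSize argument
-- here is hypothetical), the output is a single chunk and the final
-- conversion is O(1):
--
--   import qualified Data.ByteString as BS
--   import qualified Data.ByteString.Lazy as BL
--   import Codec.Compression.Zlib.Internal
--
--   decompressStrict :: Int -> BL.ByteString -> BS.ByteString
--   decompressStrict expectedSize input =
--       BS.concat (BL.toChunks output)   -- O(1) when output is a single chunk
--     where
--       params = defaultDecompressParams { decompressBufferSize = expectedSize }
--       output = decompress zlibFormat params input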
-- | The format used for compression or decompression. There are three
-- variations.
data Format
-- | The gzip format uses a header with a checksum and some optional
-- meta-data about the compressed file. It is intended primarily for
-- compressing individual files but is also sometimes used for network
-- protocols such as HTTP. The format is described in detail in RFC
-- #1952 http://www.ietf.org/rfc/rfc1952.txt
gzipFormat :: Format
-- | The zlib format uses a minimal header with a checksum but no other
-- meta-data. It is especially designed for use in network protocols.
-- The format is described in detail in RFC #1950
-- http://www.ietf.org/rfc/rfc1950.txt
zlibFormat :: Format
-- | The 'raw' format is just the compressed data stream without any
-- additional header, meta-data or data-integrity checksum. The format
-- is described in detail in RFC #1951
-- http://www.ietf.org/rfc/rfc1951.txt
rawFormat :: Format
-- | This is not a format as such. It enables zlib or gzip decoding with
-- automatic header detection. This only makes sense for decompression.
gzipOrZlibFormat :: Format
-- | The compression level parameter controls the amount of compression.
-- This is a trade-off between the amount of compression and the time
-- required to do the compression.
newtype CompressionLevel
CompressionLevel :: Int -> CompressionLevel
-- | The default CompressionLevel.
defaultCompression :: CompressionLevel
-- | No compression, just a block copy.
noCompression :: CompressionLevel
-- | The fastest compression method (less compression).
bestSpeed :: CompressionLevel
-- | The slowest compression method (best compression).
bestCompression :: CompressionLevel
-- | A specific compression level in the range 0..9. Throws an
-- error for arguments outside of this range.
compressionLevel :: Int -> CompressionLevel
-- | The MemoryLevel parameter controls how much memory is used for the
-- internal compression state. The total amount of memory used depends
-- on the windowBits and memLevel parameters for compression, and on
-- windowBits alone for decompression:
--
-- compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel
-- decompressTotal windowBits = 2^windowBits
--
-- For example, compression with the default windowBits = 15 and
-- memLevel = 8 uses 256Kb, so a network server with 100 concurrent
-- compressed streams would use 25Mb. The memory per stream can be
-- halved (at the cost of somewhat degraded and slower compression) by
-- reducing the windowBits and memLevel by one.
--
-- Decompression takes less memory, the default windowBits = 15
-- corresponds to just 32Kb.
newtype MemoryLevel
MemoryLevel :: Int -> MemoryLevel
-- | The default MemoryLevel. Equivalent to memoryLevel 8.
defaultMemoryLevel :: MemoryLevel
-- | Use minimum memory. This is slow and reduces the compression ratio.
-- Equivalent to memoryLevel 1.
minMemoryLevel :: MemoryLevel
-- | Use maximum memory for optimal compression speed. Equivalent to
-- memoryLevel 9.
maxMemoryLevel :: MemoryLevel
-- | A specific memory level in the range 1..9. Throws an error
-- for arguments outside of this range.
memoryLevel :: Int -> MemoryLevel
-- | The strategy parameter is used to tune the compression algorithm.
--
-- The strategy parameter only affects the compression ratio but not the
-- correctness of the compressed output even if it is not set
-- appropriately.
data CompressionStrategy
-- | Use this default compression strategy for normal data.
defaultStrategy :: CompressionStrategy
-- | Use the filtered compression strategy for data produced by a filter
-- (or predictor). Filtered data consists mostly of small values with a
-- somewhat random distribution. In this case, the compression algorithm
-- is tuned to compress them better. The effect of this strategy is to
-- force more Huffman coding and less string matching; it is somewhat
-- intermediate between defaultStrategy and huffmanOnlyStrategy.
filteredStrategy :: CompressionStrategy
-- | Use the Huffman-only compression strategy to force Huffman encoding
-- only (no string match).
huffmanOnlyStrategy :: CompressionStrategy
-- | Use rleStrategy to limit match distances to one (run-length
-- encoding). rleStrategy is designed to be almost as fast as
-- huffmanOnlyStrategy, but give better compression for PNG image data.
rleStrategy :: CompressionStrategy
-- | fixedStrategy prevents the use of dynamic Huffman codes, allowing
-- for a simpler decoder for special applications.
fixedStrategy :: CompressionStrategy
instance GHC.Generics.Generic Codec.Compression.Zlib.Internal.CompressParams
instance GHC.Show.Show Codec.Compression.Zlib.Internal.CompressParams
instance GHC.Classes.Ord Codec.Compression.Zlib.Internal.CompressParams
instance GHC.Classes.Eq Codec.Compression.Zlib.Internal.CompressParams
instance GHC.Generics.Generic Codec.Compression.Zlib.Internal.DecompressParams
instance GHC.Show.Show Codec.Compression.Zlib.Internal.DecompressParams
instance GHC.Classes.Ord Codec.Compression.Zlib.Internal.DecompressParams
instance GHC.Classes.Eq Codec.Compression.Zlib.Internal.DecompressParams
instance GHC.Generics.Generic Codec.Compression.Zlib.Internal.DecompressError
instance GHC.Classes.Ord Codec.Compression.Zlib.Internal.DecompressError
instance GHC.Classes.Eq Codec.Compression.Zlib.Internal.DecompressError
instance GHC.Show.Show Codec.Compression.Zlib.Internal.DecompressError
instance GHC.Exception.Type.Exception Codec.Compression.Zlib.Internal.DecompressError
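-- Referring back to the MemoryLevel figures above, a small sketch that
-- reproduces the quoted per-stream memory totals (the function names are
-- taken from the formula, not from this module's API):
--
--   compressTotal :: Int -> Int -> Int
--   compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel
--
--   decompressTotal :: Int -> Int
--   decompressTotal windowBits = 2^windowBits
--
--   -- With the defaults windowBits = 15 and memLevel = 8:
--   --   compressTotal 15 8 == 262144   (256Kb per compressing stream)
--   --   decompressTotal 15 == 32768    (32Kb per decompressing stream)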

-- | Compression and decompression of data streams in the raw deflate
-- format.
--
-- The format is described in detail in RFC #1951:
-- http://www.ietf.org/rfc/rfc1951.txt
--
-- See also the zlib home page: http://zlib.net/
module Codec.Compression.Zlib.Raw
-- | Compress a stream of data into the raw deflate format.
compress :: ByteString -> ByteString
-- | Decompress a stream of data in the raw deflate format.
decompress :: ByteString -> ByteString
-- | The possible error cases when decompressing a stream.
--
-- This can be shown to give a human readable error message.
data DecompressError
-- | The compressed data stream ended prematurely. This may happen if the
-- input data stream was truncated.
TruncatedInput :: DecompressError
-- | It is possible to do zlib compression with a custom dictionary. This
-- allows slightly higher compression ratios for short files. However
-- such compressed streams require the same dictionary when
-- decompressing. This error is for when we encounter a compressed stream
-- that needs a dictionary, and it's not provided.
DictionaryRequired :: DecompressError
-- | If the stream requires a dictionary and you provide one with the
-- wrong DictionaryHash then you will get this error.
DictionaryMismatch :: DecompressError
-- | If the compressed data stream is corrupted in any way then you will
-- get this error, for example if the input data just isn't a compressed
-- zlib data stream. In particular if the data checksum turns out to be
-- wrong then you will get all the decompressed data but this error at
-- the end, instead of the normal successful StreamEnd.
DataFormatError :: String -> DecompressError
-- | Like compress but with the ability to specify various
-- compression parameters.
compressWith :: CompressParams -> ByteString -> ByteString
-- | Like decompress but with the ability to specify various
-- decompression parameters.
decompressWith :: DecompressParams -> ByteString -> ByteString
-- | The full set of parameters for compression. The defaults are
-- defaultCompressParams.
--
-- The compressBufferSize is the size of the first output buffer
-- containing the compressed data. If you know an approximate upper
-- bound on the size of the compressed data then setting this parameter
-- can save memory. The default compression output buffer size is
-- 16k. If your estimate is wrong it does not matter too much, the
-- default buffer size will be used for the remaining chunks.
data CompressParams
CompressParams :: !CompressionLevel -> !Method -> !WindowBits -> !MemoryLevel -> !CompressionStrategy -> !Int -> Maybe ByteString -> CompressParams
[compressLevel] :: CompressParams -> !CompressionLevel
[compressMethod] :: CompressParams -> !Method
[compressWindowBits] :: CompressParams -> !WindowBits
[compressMemoryLevel] :: CompressParams -> !MemoryLevel
[compressStrategy] :: CompressParams -> !CompressionStrategy
[compressBufferSize] :: CompressParams -> !Int
[compressDictionary] :: CompressParams -> Maybe ByteString
-- | The default set of parameters for compression. This is typically
-- used with compressWith, with specific parameters overridden.
defaultCompressParams :: CompressParams
-- | The full set of parameters for decompression. The defaults are
-- defaultDecompressParams.
--
-- The decompressBufferSize is the size of the first output buffer,
-- containing the uncompressed data. If you know an exact or approximate
-- upper bound on the size of the decompressed data then setting this
-- parameter can save memory. The default decompression output buffer
-- size is 32k. If your estimate is wrong it does not matter too much,
-- the default buffer size will be used for the remaining chunks.
--
-- One particular use case for setting the decompressBufferSize is if
-- you know the exact size of the decompressed data and want to produce
-- a strict ByteString. The compression and decompression functions use
-- lazy ByteStrings but if you set the decompressBufferSize correctly
-- then you can generate a lazy ByteString with exactly one chunk, which
-- can be converted to a strict ByteString in O(1) time using
-- concat . toChunks.
data DecompressParams
DecompressParams :: !WindowBits -> !Int -> Maybe ByteString -> Bool -> DecompressParams
[decompressWindowBits] :: DecompressParams -> !WindowBits
[decompressBufferSize] :: DecompressParams -> !Int
[decompressDictionary] :: DecompressParams -> Maybe ByteString
[decompressAllMembers] :: DecompressParams -> Bool
-- | The default set of parameters for decompression. This is typically
-- used with decompressWith, with specific parameters overridden.
defaultDecompressParams :: DecompressParams
-- | The compression level parameter controls the amount of compression.
-- This is a trade-off between the amount of compression and the time
-- required to do the compression.
newtype CompressionLevel
CompressionLevel :: Int -> CompressionLevel
-- | The default CompressionLevel.
defaultCompression :: CompressionLevel
-- | No compression, just a block copy.
noCompression :: CompressionLevel
-- | The fastest compression method (less compression).
bestSpeed :: CompressionLevel
-- | The slowest compression method (best compression).
bestCompression :: CompressionLevel
-- | A specific compression level in the range 0..9. Throws an
-- error for arguments outside of this range.
compressionLevel :: Int -> CompressionLevel
-- | The MemoryLevel parameter controls how much memory is used for the
-- internal compression state. The total amount of memory used depends
-- on the windowBits and memLevel parameters for compression, and on
-- windowBits alone for decompression:
--
-- compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel
-- decompressTotal windowBits = 2^windowBits
--
-- For example, compression with the default windowBits = 15 and
-- memLevel = 8 uses 256Kb, so a network server with 100 concurrent
-- compressed streams would use 25Mb. The memory per stream can be
-- halved (at the cost of somewhat degraded and slower compression) by
-- reducing the windowBits and memLevel by one.
--
-- Decompression takes less memory, the default windowBits = 15
-- corresponds to just 32Kb.
newtype MemoryLevel
MemoryLevel :: Int -> MemoryLevel
-- | The default MemoryLevel. Equivalent to memoryLevel 8.
defaultMemoryLevel :: MemoryLevel
-- | Use minimum memory. This is slow and reduces the compression ratio.
-- Equivalent to memoryLevel 1.
minMemoryLevel :: MemoryLevel
-- | Use maximum memory for optimal compression speed. Equivalent to
-- memoryLevel 9.
maxMemoryLevel :: MemoryLevel
-- | A specific memory level in the range 1..9. Throws an error
-- for arguments outside of this range.
memoryLevel :: Int -> MemoryLevel
-- | The strategy parameter is used to tune the compression algorithm.
--
-- The strategy parameter only affects the compression ratio but not the
-- correctness of the compressed output even if it is not set
-- appropriately.
data CompressionStrategy
-- | Use this default compression strategy for normal data.
defaultStrategy :: CompressionStrategy
-- | Use the filtered compression strategy for data produced by a filter
-- (or predictor). Filtered data consists mostly of small values with a
-- somewhat random distribution. In this case, the compression algorithm
-- is tuned to compress them better. The effect of this strategy is to
-- force more Huffman coding and less string matching; it is somewhat
-- intermediate between defaultStrategy and huffmanOnlyStrategy.
filteredStrategy :: CompressionStrategy
-- | Use the Huffman-only compression strategy to force Huffman encoding
-- only (no string match).
huffmanOnlyStrategy :: CompressionStrategy
-- | Use rleStrategy to limit match distances to one (run-length
-- encoding). rleStrategy is designed to be almost as fast as
-- huffmanOnlyStrategy, but give better compression for PNG image data.
rleStrategy :: CompressionStrategy
-- | fixedStrategy prevents the use of dynamic Huffman codes, allowing
-- for a simpler decoder for special applications.
fixedStrategy :: CompressionStrategy

-- | Compression and decompression of data streams in the zlib format.
--
-- The format is described in detail in RFC #1950:
-- http://www.ietf.org/rfc/rfc1950.txt
--
-- See also the zlib home page: http://zlib.net/
module Codec.Compression.Zlib
-- | Compress a stream of data into the zlib format.
--
-- This uses the default compression parameters. In particular it uses
-- the default compression level which favours a higher compression
-- ratio over compression speed, though it does not use the maximum
-- compression level.
--
-- Use compressWith to adjust the compression level or other
-- compression parameters.
compress :: ByteString -> ByteString
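-- A minimal usage sketch for this module: compress a file into the zlib
-- format. The file names are purely illustrative:
--
--   import qualified Data.ByteString.Lazy as BL
--   import qualified Codec.Compression.Zlib as Zlib
--
--   main :: IO ()
--   main = do
--     raw <- BL.readFile "input.dat"
--     BL.writeFile "input.dat.z" (Zlib.compress raw)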
-- | Decompress a stream of data in the zlib format, throw
-- DecompressError on failure.
--
-- Note that the decompression is performed lazily. Errors in the
-- data stream may not be detected until the end of the stream is
-- demanded (since it is only at the end that the final checksum can be
-- checked). If this is important to you, you must make sure to consume
-- the whole decompressed stream before doing any IO action that depends
-- on it.
decompress :: ByteString -> ByteString
-- | The possible error cases when decompressing a stream.
--
-- This can be shown to give a human readable error message.
data DecompressError
-- | The compressed data stream ended prematurely. This may happen if the
-- input data stream was truncated.
TruncatedInput :: DecompressError
-- | It is possible to do zlib compression with a custom dictionary. This
-- allows slightly higher compression ratios for short files. However
-- such compressed streams require the same dictionary when
-- decompressing. This error is for when we encounter a compressed stream
-- that needs a dictionary, and it's not provided.
DictionaryRequired :: DecompressError
-- | If the stream requires a dictionary and you provide one with the
-- wrong DictionaryHash then you will get this error.
DictionaryMismatch :: DecompressError
-- | If the compressed data stream is corrupted in any way then you will
-- get this error, for example if the input data just isn't a compressed
-- zlib data stream. In particular if the data checksum turns out to be
-- wrong then you will get all the decompressed data but this error at
-- the end, instead of the normal successful StreamEnd.
DataFormatError :: String -> DecompressError
-- | Like compress but with the ability to specify various
-- compression parameters. Typical usage:
--
-- compressWith defaultCompressParams { ... }
--
--
-- In particular you can set the compression level:
--
--
-- compressWith defaultCompressParams { compressLevel = bestCompression }
--
compressWith :: CompressParams -> ByteString -> ByteString
-- | Like decompress but with the ability to specify various
-- decompression parameters. Typical usage:
--
--
-- decompressWith defaultDecompressParams { ... }
--
decompressWith :: DecompressParams -> ByteString -> ByteString
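-- As noted in the decompress documentation above, errors surface lazily.
-- A sketch (the helper name decompressEither is illustrative) that forces
-- the whole stream so a DecompressError is caught here rather than later,
-- inside unrelated IO:
--
--   import qualified Data.ByteString.Lazy as BL
--   import Control.Exception (evaluate, try)
--   import Codec.Compression.Zlib (DecompressError, decompress)
--
--   decompressEither :: BL.ByteString -> IO (Either DecompressError BL.ByteString)
--   decompressEither input = try $ do
--     let output = decompress input
--     _ <- evaluate (BL.length output)  -- demand the whole stream, checksum included
--     return output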
-- | The full set of parameters for compression. The defaults are
-- defaultCompressParams.
--
-- The compressBufferSize is the size of the first output buffer
-- containing the compressed data. If you know an approximate upper bound
-- on the size of the compressed data then setting this parameter can
-- save memory. The default compression output buffer size is
-- 16k. If your estimate is wrong it does not matter too much,
-- the default buffer size will be used for the remaining chunks.
data CompressParams
CompressParams :: !CompressionLevel -> !Method -> !WindowBits -> !MemoryLevel -> !CompressionStrategy -> !Int -> Maybe ByteString -> CompressParams
[compressLevel] :: CompressParams -> !CompressionLevel
[compressMethod] :: CompressParams -> !Method
[compressWindowBits] :: CompressParams -> !WindowBits
[compressMemoryLevel] :: CompressParams -> !MemoryLevel
[compressStrategy] :: CompressParams -> !CompressionStrategy
[compressBufferSize] :: CompressParams -> !Int
[compressDictionary] :: CompressParams -> Maybe ByteString
-- | The default set of parameters for compression. This is typically used
-- with compressWith, with specific parameters overridden.
defaultCompressParams :: CompressParams
-- | The full set of parameters for decompression. The defaults are
-- defaultDecompressParams.
--
-- The decompressBufferSize is the size of the first output
-- buffer, containing the uncompressed data. If you know an exact or
-- approximate upper bound on the size of the decompressed data then
-- setting this parameter can save memory. The default decompression
-- output buffer size is 32k. If your estimate is wrong it does
-- not matter too much, the default buffer size will be used for the
-- remaining chunks.
--
-- One particular use case for setting the decompressBufferSize is
-- if you know the exact size of the decompressed data and want to
-- produce a strict ByteString. The compression and decompression
-- functions use lazy ByteStrings but if you set the
-- decompressBufferSize correctly then you can generate a lazy
-- ByteString with exactly one chunk, which can be converted to a
-- strict ByteString in O(1) time using concat
-- . toChunks.
data DecompressParams
DecompressParams :: !WindowBits -> !Int -> Maybe ByteString -> Bool -> DecompressParams
[decompressWindowBits] :: DecompressParams -> !WindowBits
[decompressBufferSize] :: DecompressParams -> !Int
[decompressDictionary] :: DecompressParams -> Maybe ByteString
[decompressAllMembers] :: DecompressParams -> Bool
-- | The default set of parameters for decompression. This is typically
-- used with decompressWith, with specific parameters overridden.
defaultDecompressParams :: DecompressParams
-- | The compression level parameter controls the amount of compression.
-- This is a trade-off between the amount of compression and the time
-- required to do the compression.
newtype CompressionLevel
CompressionLevel :: Int -> CompressionLevel
-- | The default CompressionLevel.
defaultCompression :: CompressionLevel
-- | No compression, just a block copy.
noCompression :: CompressionLevel
-- | The fastest compression method (less compression).
bestSpeed :: CompressionLevel
-- | The slowest compression method (best compression).
bestCompression :: CompressionLevel
-- | A specific compression level in the range 0..9. Throws an
-- error for arguments outside of this range.
compressionLevel :: Int -> CompressionLevel
-- | The MemoryLevel parameter controls how much memory is used for the
-- internal compression state. The total amount of memory used depends
-- on the windowBits and memLevel parameters for compression, and on
-- windowBits alone for decompression:
--
-- compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel
-- decompressTotal windowBits = 2^windowBits
--
-- For example, compression with the default windowBits = 15 and
-- memLevel = 8 uses 256Kb, so a network server with 100 concurrent
-- compressed streams would use 25Mb. The memory per stream can be
-- halved (at the cost of somewhat degraded and slower compression) by
-- reducing the windowBits and memLevel by one.
--
-- Decompression takes less memory, the default windowBits = 15
-- corresponds to just 32Kb.
newtype MemoryLevel
MemoryLevel :: Int -> MemoryLevel
-- | The default MemoryLevel. Equivalent to memoryLevel 8.
defaultMemoryLevel :: MemoryLevel
-- | Use minimum memory. This is slow and reduces the compression ratio.
-- Equivalent to memoryLevel 1.
minMemoryLevel :: MemoryLevel
-- | Use maximum memory for optimal compression speed. Equivalent to
-- memoryLevel 9.
maxMemoryLevel :: MemoryLevel
-- | A specific memory level in the range 1..9. Throws an error
-- for arguments outside of this range.
memoryLevel :: Int -> MemoryLevel
-- | The strategy parameter is used to tune the compression algorithm.
--
-- The strategy parameter only affects the compression ratio but not the
-- correctness of the compressed output even if it is not set
-- appropriately.
data CompressionStrategy
-- | Use this default compression strategy for normal data.
defaultStrategy :: CompressionStrategy
-- | Use the filtered compression strategy for data produced by a filter
-- (or predictor). Filtered data consists mostly of small values with a
-- somewhat random distribution. In this case, the compression algorithm
-- is tuned to compress them better. The effect of this strategy is to
-- force more Huffman coding and less string matching; it is somewhat
-- intermediate between defaultStrategy and huffmanOnlyStrategy.
filteredStrategy :: CompressionStrategy
-- | Use the Huffman-only compression strategy to force Huffman encoding
-- only (no string match).
huffmanOnlyStrategy :: CompressionStrategy
-- | Use rleStrategy to limit match distances to one (run-length
-- encoding). rleStrategy is designed to be almost as fast as
-- huffmanOnlyStrategy, but give better compression for PNG image data.
rleStrategy :: CompressionStrategy
-- | fixedStrategy prevents the use of dynamic Huffman codes, allowing
-- for a simpler decoder for special applications.
fixedStrategy :: CompressionStrategy

-- | Compression and decompression of data streams in the gzip format.
--
-- The format is described in detail in RFC #1952:
-- http://www.ietf.org/rfc/rfc1952.txt
--
-- See also the zlib home page: http://zlib.net/
module Codec.Compression.GZip
-- | Compress a stream of data into the gzip format.
--
-- This uses the default compression parameters. In particular it uses
-- the default compression level which favours a higher compression
-- ratio over compression speed, though it does not use the maximum
-- compression level.
--
-- Use compressWith to adjust the compression level or other
-- compression parameters.
compress :: ByteString -> ByteString
-- | Decompress a stream of data in the gzip format, throw
-- DecompressError on failure.
--
-- Note that the decompression is performed lazily. Errors in the
-- data stream may not be detected until the end of the stream is
-- demanded (since it is only at the end that the final checksum can be
-- checked). If this is important to you, you must make sure to consume
-- the whole decompressed stream before doing any IO action that depends
-- on it.
decompress :: ByteString -> ByteString
-- | The possible error cases when decompressing a stream.
--
-- This can be shown to give a human readable error message.
data DecompressError
-- | The compressed data stream ended prematurely. This may happen if the
-- input data stream was truncated.
TruncatedInput :: DecompressError
-- | It is possible to do zlib compression with a custom dictionary. This
-- allows slightly higher compression ratios for short files. However
-- such compressed streams require the same dictionary when
-- decompressing. This error is for when we encounter a compressed stream
-- that needs a dictionary, and it's not provided.
DictionaryRequired :: DecompressError
-- | If the stream requires a dictionary and you provide one with the
-- wrong DictionaryHash then you will get this error.
DictionaryMismatch :: DecompressError
-- | If the compressed data stream is corrupted in any way then you will
-- get this error, for example if the input data just isn't a compressed
-- zlib data stream. In particular if the data checksum turns out to be
-- wrong then you will get all the decompressed data but this error at
-- the end, instead of the normal successful StreamEnd.
DataFormatError :: String -> DecompressError
-- | Like compress but with the ability to specify various
-- compression parameters. Typical usage:
--
-- compressWith defaultCompressParams { ... }
--
--
-- In particular you can set the compression level:
--
--
-- compressWith defaultCompressParams { compressLevel = bestCompression }
--
compressWith :: CompressParams -> ByteString -> ByteString
-- | Like decompress but with the ability to specify various
-- decompression parameters. Typical usage:
--
--
-- decompressWith defaultDecompressParams { ... }
--
decompressWith :: DecompressParams -> ByteString -> ByteString
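-- A sketch of typical whole-file usage of this module; the file names
-- are purely illustrative:
--
--   import qualified Data.ByteString.Lazy as BL
--   import qualified Codec.Compression.GZip as GZip
--
--   main :: IO ()
--   main = do
--     raw <- BL.readFile "report.csv"
--     BL.writeFile "report.csv.gz" (GZip.compress raw)
--     roundTripped <- GZip.decompress <$> BL.readFile "report.csv.gz"
--     print (BL.length roundTripped)   -- forcing the length also checks the gzip checksum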
-- | The full set of parameters for compression. The defaults are
-- defaultCompressParams.
--
-- The compressBufferSize is the size of the first output buffer
-- containing the compressed data. If you know an approximate upper bound
-- on the size of the compressed data then setting this parameter can
-- save memory. The default compression output buffer size is
-- 16k. If your estimate is wrong it does not matter too much,
-- the default buffer size will be used for the remaining chunks.
data CompressParams
CompressParams :: !CompressionLevel -> !Method -> !WindowBits -> !MemoryLevel -> !CompressionStrategy -> !Int -> Maybe ByteString -> CompressParams
[compressLevel] :: CompressParams -> !CompressionLevel
[compressMethod] :: CompressParams -> !Method
[compressWindowBits] :: CompressParams -> !WindowBits
[compressMemoryLevel] :: CompressParams -> !MemoryLevel
[compressStrategy] :: CompressParams -> !CompressionStrategy
[compressBufferSize] :: CompressParams -> !Int
[compressDictionary] :: CompressParams -> Maybe ByteString
-- | The default set of parameters for compression. This is typically used
-- with compressWith, with specific parameters overridden.
defaultCompressParams :: CompressParams
-- | The full set of parameters for decompression. The defaults are
-- defaultDecompressParams.
--
-- The decompressBufferSize is the size of the first output
-- buffer, containing the uncompressed data. If you know an exact or
-- approximate upper bound on the size of the decompressed data then
-- setting this parameter can save memory. The default decompression
-- output buffer size is 32k. If your estimate is wrong it does
-- not matter too much, the default buffer size will be used for the
-- remaining chunks.
--
-- One particular use case for setting the decompressBufferSize is
-- if you know the exact size of the decompressed data and want to
-- produce a strict ByteString. The compression and decompression
-- functions use lazy ByteStrings but if you set the
-- decompressBufferSize correctly then you can generate a lazy
-- ByteString with exactly one chunk, which can be converted to a
-- strict ByteString in O(1) time using concat
-- . toChunks.
data DecompressParams
DecompressParams :: !WindowBits -> !Int -> Maybe ByteString -> Bool -> DecompressParams
[decompressWindowBits] :: DecompressParams -> !WindowBits
[decompressBufferSize] :: DecompressParams -> !Int
[decompressDictionary] :: DecompressParams -> Maybe ByteString
[decompressAllMembers] :: DecompressParams -> Bool
-- | The default set of parameters for decompression. This is typically
-- used with decompressWith, with specific parameters overridden.
defaultDecompressParams :: DecompressParams
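-- The decompressAllMembers field controls whether, for the gzip format,
-- decompression continues past the first member of the stream (see the
-- DecompressStream notes in Codec.Compression.Zlib.Internal). A hedged
-- sketch, on the assumption that switching it off decodes only the first
-- member; the helper name is illustrative:
--
--   import qualified Data.ByteString.Lazy as BL
--   import Codec.Compression.GZip
--
--   decompressFirstMember :: BL.ByteString -> BL.ByteString
--   decompressFirstMember =
--       decompressWith defaultDecompressParams { decompressAllMembers = False }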
-- | The compression level parameter controls the amount of compression.
-- This is a trade-off between the amount of compression and the time
-- required to do the compression.
newtype CompressionLevel
CompressionLevel :: Int -> CompressionLevel
-- | The default CompressionLevel.
defaultCompression :: CompressionLevel
-- | No compression, just a block copy.
noCompression :: CompressionLevel
-- | The fastest compression method (less compression).
bestSpeed :: CompressionLevel
-- | The slowest compression method (best compression).
bestCompression :: CompressionLevel
-- | A specific compression level in the range 0..9. Throws an
-- error for arguments outside of this range.
compressionLevel :: Int -> CompressionLevel
-- | The MemoryLevel parameter controls how much memory is used for the
-- internal compression state. The total amount of memory used depends
-- on the windowBits and memLevel parameters for compression, and on
-- windowBits alone for decompression:
--
-- compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel
-- decompressTotal windowBits = 2^windowBits
--
-- For example, compression with the default windowBits = 15 and
-- memLevel = 8 uses 256Kb, so a network server with 100 concurrent
-- compressed streams would use 25Mb. The memory per stream can be
-- halved (at the cost of somewhat degraded and slower compression) by
-- reducing the windowBits and memLevel by one.
--
-- Decompression takes less memory, the default windowBits = 15
-- corresponds to just 32Kb.
newtype MemoryLevel
MemoryLevel :: Int -> MemoryLevel
-- | The default MemoryLevel. Equivalent to memoryLevel 8.
defaultMemoryLevel :: MemoryLevel
-- | Use minimum memory. This is slow and reduces the compression ratio.
-- Equivalent to memoryLevel 1.
minMemoryLevel :: MemoryLevel
-- | Use maximum memory for optimal compression speed. Equivalent to
-- memoryLevel 9.
maxMemoryLevel :: MemoryLevel
-- | A specific memory level in the range 1..9. Throws an error
-- for arguments outside of this range.
memoryLevel :: Int -> MemoryLevel
-- | The strategy parameter is used to tune the compression algorithm.
--
-- The strategy parameter only affects the compression ratio but not the
-- correctness of the compressed output even if it is not set
-- appropriately.
data CompressionStrategy
-- | Use this default compression strategy for normal data.
defaultStrategy :: CompressionStrategy
-- | Use the filtered compression strategy for data produced by a filter
-- (or predictor). Filtered data consists mostly of small values with a
-- somewhat random distribution. In this case, the compression algorithm
-- is tuned to compress them better. The effect of this strategy is to
-- force more Huffman coding and less string matching; it is somewhat
-- intermediate between defaultStrategy and huffmanOnlyStrategy.
filteredStrategy :: CompressionStrategy
-- | Use the Huffman-only compression strategy to force Huffman encoding
-- only (no string match).
huffmanOnlyStrategy :: CompressionStrategy
-- | Use rleStrategy to limit match distances to one (run-length
-- encoding). rleStrategy is designed to be almost as fast as
-- huffmanOnlyStrategy, but give better compression for PNG image data.
rleStrategy :: CompressionStrategy
-- | fixedStrategy prevents the use of dynamic Huffman codes, allowing
-- for a simpler decoder for special applications.
fixedStrategy :: CompressionStrategy
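-- A closing sketch of overriding the strategy documented above, for
-- instance rleStrategy for run-length-friendly data such as filtered PNG
-- style image rows. The settings shown are illustrative, not recommended
-- defaults:
--
--   import qualified Data.ByteString.Lazy as BL
--   import Codec.Compression.GZip
--
--   compressScanlines :: BL.ByteString -> BL.ByteString
--   compressScanlines =
--       compressWith defaultCompressParams
--         { compressLevel    = bestSpeed
--         , compressStrategy = rleStrategy
--         }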