zlib-0.6.1.0: Compression and decompression in the gzip and zlib formats

Copyright: (c) 2006-2014 Duncan Coutts
License: BSD-style
Maintainer: duncan@community.haskell.org
Safe Haskell: Safe-Inferred
Language: Haskell2010

Codec.Compression.GZip


Description

Compression and decompression of data streams in the gzip format.

The format is described in detail in RFC #1952: http://www.ietf.org/rfc/rfc1952.txt

See also the zlib home page: http://zlib.net/


Documentation

This module provides pure functions for compressing and decompressing streams of data in the gzip format, represented by lazy ByteStrings. This makes it easy to use either in memory or with disk or network IO.

For example a simple gzip compression program is just:

import qualified Data.ByteString.Lazy as ByteString
import qualified Codec.Compression.GZip as GZip

main = ByteString.interact GZip.compress

Or you could lazily read in and decompress a .gz file using:

content <- fmap GZip.decompress (ByteString.readFile file)
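Or, as a sketch going the other way (the file names here are just illustrative placeholders), a small program that compresses a whole file into a .gz file:

import qualified Data.ByteString.Lazy as ByteString
import qualified Codec.Compression.GZip as GZip

-- Read "hello.txt" lazily, compress it, and write "hello.txt.gz".
main :: IO ()
main = do
  content <- ByteString.readFile "hello.txt"
  ByteString.writeFile "hello.txt.gz" (GZip.compress content)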

Simple compression and decompression

compress :: ByteString -> ByteString Source

Compress a stream of data into the gzip format.

This uses the default compression parameters. In particular it uses the default compression level, which favours a higher compression ratio over compression speed, though it does not use the maximum compression level.

Use compressWith to adjust the compression level or other compression parameters.
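For example, a minimal in-memory sketch (the string content is just an illustrative placeholder):

import qualified Data.ByteString.Lazy.Char8 as Char8
import qualified Codec.Compression.GZip as GZip

-- Compress an in-memory lazy ByteString; decompress recovers the input.
compressed = GZip.compress (Char8.pack "hello, hello, hello, hello")
roundTrip  = GZip.decompress compressed == Char8.pack "hello, hello, hello, hello"

Because both functions are pure, the round trip can be checked without any IO.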

decompress :: ByteString -> ByteString Source

Decompress a stream of data in the gzip format.

There are a number of errors that can occur. In each case an exception will be thrown. The possible error conditions are:

  • if the stream does not start with a valid gzip header
  • if the compressed stream is corrupted
  • if the compressed stream ends prematurely

Note that the decompression is performed lazily. Errors in the data stream may not be detected until the end of the stream is demanded (since it is only at the end that the final checksum can be checked). If this is important to you, you must make sure to consume the whole decompressed stream before doing any IO action that depends on it.
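For example, one way to force the whole stream first is to demand its length inside evaluate and catch any exception there; this sketch uses Control.Exception rather than anything from this module:

import qualified Data.ByteString.Lazy as ByteString
import qualified Codec.Compression.GZip as GZip
import Control.Exception (evaluate, try, SomeException)

-- Decompress and force the entire stream so that a corrupt stream or bad
-- checksum is reported here, before the result is used for further IO.
decompressStrictly :: ByteString.ByteString
                   -> IO (Either SomeException ByteString.ByteString)
decompressStrictly input = try $ do
  let plain = GZip.decompress input
  _ <- evaluate (ByteString.length plain)  -- demands every chunk
  return plain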

Extended API with control over compression parameters

compressWith :: CompressParams -> ByteString -> ByteString Source

Like compress but with the ability to specify various compression parameters. Typical usage:

compressWith defaultCompressParams { ... }

In particular you can set the compression level:

compressWith defaultCompressParams { compressLevel = bestCompression }

decompressWith :: DecompressParams -> ByteString -> ByteString Source

Like decompress but with the ability to specify various decompression parameters. Typical usage:

decompressWith defaultDecompressParams { ... }

data CompressParams Source

The full set of parameters for compression. The defaults are defaultCompressParams.

The compressBufferSize is the size of the first output buffer containing the compressed data. If you know an approximate upper bound on the size of the compressed data then setting this parameter can save memory. The default compression output buffer size is 16k. If your estimate is wrong it does not matter too much; the default buffer size will be used for the remaining chunks.
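For example (the 64k buffer size below is just an illustrative estimate), both of these documented fields can be overridden together:

import qualified Codec.Compression.GZip as GZip

-- Favour compression ratio and start with a larger first output buffer.
params :: GZip.CompressParams
params = GZip.defaultCompressParams
           { GZip.compressLevel      = GZip.bestCompression
           , GZip.compressBufferSize = 64 * 1024  -- illustrative estimate
           }

compressBig = GZip.compressWith params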


defaultCompressParams :: CompressParams Source

The default set of parameters for compression. This is typically used with the compressWith function with specific parameters overridden.

data DecompressParams Source

The full set of parameters for decompression. The defaults are defaultDecompressParams.

The decompressBufferSize is the size of the first output buffer, containing the uncompressed data. If you know an exact or approximate upper bound on the size of the decompressed data then setting this parameter can save memory. The default decompression output buffer size is 32k. If your estimate is wrong it does not matter too much; the default buffer size will be used for the remaining chunks.

One particular use case for setting the decompressBufferSize is if you know the exact size of the decompressed data and want to produce a strict ByteString. The compression and decompression functions use lazy ByteStrings, but if you set the decompressBufferSize correctly then you can generate a lazy ByteString with exactly one chunk, which can be converted to a strict ByteString in O(1) time using concat . toChunks.
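A sketch of that trick, assuming the exact decompressed size is already known by some other means (knownSize below is a hypothetical value supplied by the caller):

import qualified Data.ByteString as Strict
import qualified Data.ByteString.Lazy as Lazy
import qualified Codec.Compression.GZip as GZip

-- Decompress into a single chunk of exactly knownSize bytes and convert to
-- a strict ByteString in O(1). knownSize must match the real decompressed
-- size for the single-chunk guarantee to hold.
toStrict :: Int -> Lazy.ByteString -> Strict.ByteString
toStrict knownSize compressed =
    Strict.concat (Lazy.toChunks plain)
  where
    plain = GZip.decompressWith
              GZip.defaultDecompressParams { GZip.decompressBufferSize = knownSize }
              compressed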

defaultDecompressParams :: DecompressParams Source

The default set of parameters for decompression. This is typically used with the decompressWith function with specific parameters overridden.

The compression parameter types

data CompressionLevel Source

The compression level parameter controls the amount of compression. This is a trade-off between the amount of compression and the time required to do the compression.

Constructors

DefaultCompression

Deprecated: Use defaultCompression. CompressionLevel constructors will be hidden in version 0.7

NoCompression

Deprecated: Use noCompression. CompressionLevel constructors will be hidden in version 0.7

BestSpeed

Deprecated: Use bestSpeed. CompressionLevel constructors will be hidden in version 0.7

BestCompression

Deprecated: Use bestCompression. CompressionLevel constructors will be hidden in version 0.7

CompressionLevel Int 

defaultCompression :: CompressionLevel Source

The default compression level is 6 (that is, biased towards higher compression at the expense of speed).

noCompression :: CompressionLevel Source

No compression, just a block copy.

bestSpeed :: CompressionLevel Source

The fastest compression method (less compression).

bestCompression :: CompressionLevel Source

The slowest compression method (best compression).

compressionLevel :: Int -> CompressionLevel Source

A specific compression level between 0 and 9.
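For example, a numeric level can be used anywhere a named level could be (level 9 is the same level that bestCompression names, level 1 the same as bestSpeed):

import qualified Codec.Compression.GZip as GZip

-- Request the maximum compression level by number rather than by name.
smallest = GZip.compressWith
             GZip.defaultCompressParams { GZip.compressLevel = GZip.compressionLevel 9 }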

data Method Source

The compression method

Constructors

Deflated

Deprecated: Use deflateMethod. Method constructors will be hidden in version 0.7

deflateMethod :: Method Source

'Deflate' is the only method supported in this version of zlib. Indeed it is likely to be the only method that ever will be supported.

data WindowBits Source

This specifies the size of the compression window. Larger values of this parameter result in better compression at the expense of higher memory usage.

The compression window size is 2 raised to the power of the window bits. The window bits must be in the range 8..15, which corresponds to compression window sizes of 256 bytes to 32KB. The default is 15, which is also the maximum size.

The total amount of memory used depends on the window bits and the MemoryLevel. See MemoryLevel for the details.

Constructors

WindowBits Int 
DefaultWindowBits

Deprecated: Use defaultWindowBits. WindowBits constructors will be hidden in version 0.7

defaultWindowBits :: WindowBits Source

The default WindowBits is 15 which is also the maximum size.

windowBits :: Int -> WindowBits Source

A specific compression window size, specified in bits, in the range 8..15.
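As a sketch, assuming the corresponding CompressParams field is named compressWindowBits (that field name is not shown on this page):

import qualified Codec.Compression.GZip as GZip

-- Trade some compression ratio for a smaller window (2^12 = 4KB here);
-- compressWindowBits is assumed to be the relevant CompressParams field.
smallWindowParams :: GZip.CompressParams
smallWindowParams = GZip.defaultCompressParams
                      { GZip.compressWindowBits = GZip.windowBits 12 }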

data MemoryLevel Source

The MemoryLevel parameter specifies how much memory should be allocated for the internal compression state. It is a trade-off between memory usage, compression ratio and compression speed. Using more memory allows faster compression and a better compression ratio.

The total amount of memory used for compression depends on the WindowBits and the MemoryLevel. For decompression it depends only on the WindowBits. The totals are given by the functions:

compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel
decompressTotal windowBits = 2^windowBits

For example, compression with the default windowBits = 15 and memLevel = 8 uses 256KB. So a network server with 100 concurrent compressed streams would use about 25MB. The memory per stream can be halved (at the cost of somewhat degraded and slower compression) by reducing the windowBits and memLevel by one.

Decompression takes less memory; the default windowBits = 15 corresponds to just 32KB.
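Transcribing the formulas above directly makes it easy to check these figures (4 * 2^15 + 512 * 2^8 = 262144 bytes, i.e. 256KB, and 2^15 = 32768 bytes, i.e. 32KB):

-- Memory totals in bytes, taken straight from the formulas above.
compressTotal :: Int -> Int -> Int
compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel

decompressTotal :: Int -> Int
decompressTotal windowBits = 2^windowBits

-- compressTotal 15 8  == 262144   (256KB per compressing stream)
-- decompressTotal 15  == 32768    (32KB per decompressing stream)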

Constructors

DefaultMemoryLevel

Deprecated: Use defaultMemoryLevel. MemoryLevel constructors will be hidden in version 0.7

MinMemoryLevel

Deprecated: Use minMemoryLevel. MemoryLevel constructors will be hidden in version 0.7

MaxMemoryLevel

Deprecated: Use maxMemoryLevel. MemoryLevel constructors will be hidden in version 0.7

MemoryLevel Int 

defaultMemoryLevel :: MemoryLevel Source

The default memory level. (Equivalent to memoryLevel 8)

minMemoryLevel :: MemoryLevel Source

Use minimum memory. This is slow and reduces the compression ratio. (Equivalent to memoryLevel 1)

maxMemoryLevel :: MemoryLevel Source

Use maximum memory for optimal compression speed. (Equivalent to memoryLevel 9)

memoryLevel :: Int -> MemoryLevel Source

A specific memory level in the range 1..9.

data CompressionStrategy Source

The strategy parameter is used to tune the compression algorithm.

The strategy parameter affects only the compression ratio; the compressed output remains correct even if the strategy is not set appropriately.

Constructors

DefaultStrategy

Deprecated: Use defaultStrategy. CompressionStrategy constructors will be hidden in version 0.7

Filtered

Deprecated: Use filteredStrategy. CompressionStrategy constructors will be hidden in version 0.7

HuffmanOnly

Deprecated: Use huffmanOnlyStrategy. CompressionStrategy constructors will be hidden in version 0.7

defaultStrategy :: CompressionStrategy Source

Use this default compression strategy for normal data.

filteredStrategy :: CompressionStrategy Source

Use the filtered compression strategy for data produced by a filter (or predictor). Filtered data consists mostly of small values with a somewhat random distribution. In this case, the compression algorithm is tuned to compress them better. The effect of this strategy is to force more Huffman coding and less string matching; it is somewhat intermediate between defaultStrategy and huffmanOnlyStrategy.

huffmanOnlyStrategy :: CompressionStrategy Source

Use the Huffman-only compression strategy to force Huffman encoding only (no string match).
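As a sketch, assuming the corresponding CompressParams field is named compressStrategy (that field name is not shown on this page):

import qualified Codec.Compression.GZip as GZip

-- Force Huffman-only coding, e.g. for data with little string repetition;
-- compressStrategy is assumed to be the relevant CompressParams field.
huffmanParams :: GZip.CompressParams
huffmanParams = GZip.defaultCompressParams
                  { GZip.compressStrategy = GZip.huffmanOnlyStrategy }

compressHuffmanOnly = GZip.compressWith huffmanParams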