shannon-fano: Shannon-Fano compression algorithm in Haskell

[ codec, library, mit, program ]

Shannon-Fano compression algorithm in Haskell: program and API



Versions
Change log
Dependencies base (>= && <5), bytestring, optparse-generic, shannon-fano
License MIT
Copyright 2020 Armando Santos
Author Armando Santos
Maintainer Armando Santos <>
Category Codec
Home page
Bug tracker
Source repo head: git clone
Uploaded by bolt12 at 2020-06-02T17:43:59Z
Distributions NixOS:
Executables shannon-fano
Downloads 1114 total (2 in the last 30 days)
Rating (no votes yet)
Status Docs uploaded by user
Build status unknown [no reports yet]

Readme for shannon-fano-


Shannon-Fano compression algorithm library

Haskell implementation


This library offers a set of functions to compress files into binary code by applying the Shannon-Fano compression algorithm.
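The idea behind the algorithm: sort the symbols by frequency, split the list into two halves of roughly equal total weight, prefix one half with a 0 bit and the other with a 1 bit, and recurse. The following is a minimal illustrative sketch of that procedure, not the library's actual implementation (names like `shannonFano` and `splitBalanced` are invented here):

```haskell
module Main where

import Data.List (group, sort, sortBy)
import Data.Ord (Down (..), comparing)

-- Frequency table of the input, most frequent symbol first.
frequencies :: String -> [(Char, Int)]
frequencies xs =
  sortBy (comparing (Down . snd)) [(head g, length g) | g <- group (sort xs)]

-- Shannon-Fano coding: split the frequency-sorted symbols into two
-- groups of roughly equal total weight, prefix '0' / '1', and recurse.
shannonFano :: [(Char, Int)] -> [(Char, String)]
shannonFano []       = []
shannonFano [(c, _)] = [(c, "0")] -- degenerate one-symbol alphabet
shannonFano xs       = assign xs
  where
    assign [(c, _)] = [(c, "")]
    assign ys =
      let (l, r) = splitBalanced ys
      in  [(c, '0' : w) | (c, w) <- assign l]
       ++ [(c, '1' : w) | (c, w) <- assign r]

-- Split after the last position whose running total stays below half
-- of the overall weight; both halves are guaranteed non-empty.
splitBalanced :: [(Char, Int)] -> ([(Char, Int)], [(Char, Int)])
splitBalanced ys = splitAt k ys
  where
    total    = sum (map snd ys)
    partials = scanl1 (+) (map snd ys)
    k        = max 1 (length (takeWhile (< total `div` 2) partials))

main :: IO ()
main = print (shannonFano (frequencies "aaabbc"))
-- [('a',"0"),('b',"10"),('c',"11")]
```

Because each recursive split only ever extends existing prefixes, no code word is a prefix of another, which is what makes the output decodable.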

Installing / Getting started


This package is now available on Hackage at

So you just need to:

$ stack install shannon-fano


Or, to build from source:

$ git clone
$ cd shannon-fano/
$ stack install

Build documentation

$ cd shannon-fano/
$ stack haddock

See for yourself

You can check that it's correctly installed by calling the program in the terminal. You should see the following:

> shannon-fano -h
Compress contents using the Shannon-Fano algorithm

Usage: shannon-fano [--decompress STRING] [--input STRING] [--output STRING]

Available options:
  -h,--help                Show this help text
  --decompress STRING      Decompress with decode table file name.
  --input STRING           Input content file name. If not specified will be
                           read from stdin.
  --output STRING          Output result file name. If not specified will be
                           'out.bin' or 'out.bin.dat' depending on the action

Use examples

The program can read from stdin if no input file is provided, like so:

> shannon-fano

This will create an 'out.bin' file and a '' file (which contains the decode table), which you can then decompress:

> shannon-fano --decompress --input out.bin

If no output file name is provided, this should create a new file called 'out.dat':

> cat out.dat
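Decompression relies on the decode table and on the code being prefix-free: reading bits left to right, the first table entry that matches is always the right one. A small sketch of that idea (the `decode` function and its table format are hypothetical, not the package's API):

```haskell
module Main where

-- Decode a bit string with a prefix-code table (code -> symbol).
-- Because no code is a prefix of another, we can greedily grow a
-- candidate code until it matches a table entry. A truncated
-- trailing code is silently dropped in this sketch.
decode :: [(String, Char)] -> String -> String
decode table = go ""
  where
    go _   []     = []
    go acc (b:bs) =
      case lookup (acc ++ [b]) table of
        Just c  -> c : go "" bs
        Nothing -> go (acc ++ [b]) bs

main :: IO ()
main = putStrLn (decode [("0", 'a'), ("10", 'b'), ("11", 'c')] "0100")
-- aba
```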

Performance and compression

Testing the compressor program for a lorem ipsum text file of 1000 words:

> time shannon-fano --input test.txt

real    0m0.074s
user    0m0.060s
sys     0m0.025s


> ls -lh out.bin test.txt | cut -d " " -f5

Total ~ 47%

Testing the compressor program with 1M of random data:

> base64 /dev/urandom | head -c 1000000 > test.txt
> time shannon-fano --input test.txt

real    0m2.648s
user    0m2.321s
sys     0m1.305s


> ls -lh out.bin test.txt | cut -d " " -f5

Total ~ 15%

Testing the compressor program with a 2.1M file containing repetitions of 5 characters:

> time shannon-fano --input test.txt

real    0m2.356s
user    0m2.069s
sys     0m1.499s


> ls -lh out.bin test.txt | cut -d " " -f5

Total ~ 65%


Decompressing it back:

> time shannon-fano --decompress --input out.bin

real    0m6.374s
user    0m6.252s
sys     0m1.394s


As you can see, the algorithm generally achieves worse compression ratios on random data and on large inputs.
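This matches what entropy predicts: a symbol-by-symbol code like Shannon-Fano can never use fewer bits per symbol than the entropy of the symbol distribution. Random base64 data is close to uniform over 64 symbols (about 6 bits per 8-bit character), capping the possible savings near 25% before table overhead, while a 5-character alphabet permits far more. A small illustrative calculation (not part of the library):

```haskell
module Main where

import Data.List (group, sort)
import Text.Printf (printf)

-- Shannon entropy of a string, in bits per symbol.
entropy :: String -> Double
entropy xs = negate (sum [p * logBase 2 p | p <- probs])
  where
    n     = fromIntegral (length xs)
    probs = [fromIntegral (length g) / n | g <- group (sort xs)]

-- Upper bound on the savings any symbol-by-symbol code can achieve
-- over a plain 8-bit encoding (ignores decode-table overhead and the
-- fact that real code lengths are whole numbers of bits).
maxSavings :: String -> Double
maxSavings xs = 1 - entropy xs / 8

main :: IO ()
main = do
  printf "5 equiprobable symbols: %.2f\n" (maxSavings "abcdeabcdeabcde")
  printf "2 equiprobable symbols: %.2f\n" (maxSavings "abababab")
```

For five equiprobable symbols the entropy is log2 5 ≈ 2.32 bits, so roughly 71% savings are achievable in principle, in the same ballpark as the ~65% measured above.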


Contributing

If you'd like to contribute, please fork the repository and use a feature branch. Pull requests are warmly welcome.


Licensing

The code in this project is licensed under the GPL-3 license.