The hyperion package

[ Tags: benchmarking, bsd3, library, program ]


Dependencies aeson (>=0.11), ansi-wl-pprint, base (>=4.9 && <5), bytestring (>=0.10), clock (>=0.7.2), containers (>=0.5), deepseq (>=1.4), directory, exceptions (>=0.8), filepath, generic-deriving (>=1.11), hashable, hyperion, lens (>=4.0), mtl (>=2.2), optparse-applicative (>=0.12), process, random (>=1.1), random-shuffle (>=0.0.4), statistics (>=0.13), text (>=1.2), time (>=1.0), unordered-containers (>=0.2), vector (>=0.11)
License BSD3
Author Tweag I/O
Category Benchmarking
Home page
Source repository head: git clone
Uploaded Wed Sep 6 11:46:14 UTC 2017 by MathieuBoespflug
Distributions NixOS:
Executables hyperion-end-to-end-benchmark-example, hyperion-micro-benchmark-example
Downloads 27 total (15 in the last 30 days)
Rating 0.0 (0 ratings)
Status Docs available
Last success reported on 2017-09-06 [all 1 reports]





Readme for hyperion


Hyperion: Haskell-based systems benchmarking

Hyperion is a DSL for writing benchmarks to measure and analyze software performance. It is a lab for future [Criterion][criterion] features.

Getting started


You can build the micro-benchmark example using stack:

$ stack build
$ stack exec hyperion-micro-benchmark-example

Example usage

The Hyperion DSL is a backwards-compatible extension to [Criterion][criterion]'s DSL (except for the rarely used env combinator, which has a safer type). Here is an example:

benchmarks :: [Benchmark]
benchmarks =
    [ bench "id" (nf id ())
    , series [0,5..20] $ \n ->
        bgroup "pure-functions"
          [ bench "fact" (nf fact n)
          , bench "fib" (nf fib n)
          ]
    , series [1..4] $ \n ->
        series [1..n] $ \k ->
          bench "n choose k" $ nf (uncurry choose) (n, k)
    ]

main :: IO ()
main = defaultMain "hyperion-example-micro-benchmarks" benchmarks
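The example assumes that fact, fib, and choose are in scope; they are not part of the hyperion package. A minimal sketch of naive definitions that would typecheck against the benchmarks above (the implementations here are illustrative, not taken from the package):

```haskell
-- Naive reference implementations (assumed, for illustration only).

-- Factorial via a product over [1..n].
fact :: Integer -> Integer
fact n = product [1 .. n]

-- Naive exponential-time Fibonacci; deliberately slow, which makes it
-- an interesting benchmark subject.
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

-- Binomial coefficient "n choose k", defined via factorials.
choose :: Integer -> Integer -> Integer
choose n k = fact n `div` (fact k * fact (n - k))
```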

By default Hyperion runs your benchmarks and pretty-prints the results. There are several command-line options that you can pass to the executable, such as writing the results to a JSON file or including individual raw measurements. To see the full set of options, run the executable with --help:

$ stack exec hyperion-micro-benchmark-example -- --help
Usage: hyperion-micro-benchmark-example ([--pretty] | [-j|--json PATH] |
                                        [-f|--flat PATH]) ([-l|--list] | [--run]
                                        | [--no-analyze]) [--raw]
                                        [--arg KEY:VAL] [NAME...]

Available options:
  -h,--help                Show this help text
  --pretty                 Pretty prints the measurements on stdout.
  -j,--json PATH           Where to write the json benchmarks output. Can be a
                           file name, a directory name or '-' for stdout.
  -f,--flat PATH           Where to write the json benchmarks output. Can be a
                           file name, a directory name or '-' for stdout.
  --version                Display version information
  -l,--list                List benchmark names
  --run                    Run benchmarks and analyze them (default)
  --no-analyze             Only run the benchmarks
  --raw                    Include raw measurement data in report.
  --arg KEY:VAL            Extra metadata to include in the report, in the
                           format key:value.
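For instance, the flags listed above can be combined to write a JSON report that includes raw measurement data and a piece of extra metadata (the file name results.json and the commit:abc123 key-value pair are arbitrary examples, not defaults):

```shell
# Run the benchmarks, write a JSON report with raw measurements,
# and tag the report with an extra metadata key.
stack exec hyperion-micro-benchmark-example -- \
  --json results.json --raw --arg commit:abc123
```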