# testbench: Create tests and benchmarks together

[ library, mit, testing ]

It's too easy to accidentally try and benchmark apples and oranges together. Wouldn't it be nice if you could somehow guarantee that your benchmarks satisfy some simple tests (e.g. a group of comparisons all return the same value)?

Furthermore, trying to compare multiple inputs/functions against each other requires a lot of boilerplate, making it even easier to accidentally compare the wrong things (e.g. using whnf instead of nf).

testbench aims to help solve these problems and more by making it easier to write unit tests and benchmarks together by stating up-front what requirements are needed and then using simple functions to state the next parameter to be tested/benchmarked.

**Versions:** 0.1.0.0, 0.2.0.0, 0.2.0.1, 0.2.1.0, 0.2.1.1, 0.2.1.2, 0.2.1.3 (Changelog.md)

**Dependencies:** base (>=4.8 && <5), bytestring, cassava (==0.5.*), containers, criterion (==1.5.*), criterion-measurement (==0.1.*), deepseq (>=1.1.0.0 && <1.5), dlist (==0.8.*), HUnit (>=1.1 && <1.7), optparse-applicative (>=0.11.0.0 && <0.15), process (>=1.1.0.0 && <1.7), statistics (>=0.14 && <0.16), streaming (==0.2.*), streaming-cassava (==0.1.*), streaming-with (>=0.1.0.0 && <0.3), temporary (>=1.1 && <1.4), testbench, transformers (==0.5.*), weigh (>=0.0.4 && <0.1)

**License:** MIT

**Author/Maintainer:** Ivan Lazar Miljenovic <Ivan.Miljenovic@gmail.com>

**Category:** Testing

**Source repository:** head: git clone https://github.com/ivan-m/testbench.git

**Uploaded:** by IvanMiljenovic at Tue May 7 15:15:04 UTC 2019

**Distributions:** NixOS:0.2.1.3

**Downloads:** 1421 total (62 in the last 30 days)

**Rating:** (no votes yet) [estimated by rule of succession]

**Status:** Docs available; last success reported on 2019-05-07


## Flags

| Name | Description | Default | Type |
| --- | --- | --- | --- |
| examples | Build example executable | Disabled | Automatic |

Use `-f <flag>` to enable a flag, or `-f -<flag>` to disable that flag.


# testbench


This uses HUnit and criterion to create the tests and benchmarks respectively, and it's possible to obtain these explicitly to embed them within existing test- or benchmark-suites. Alternatively, you can use the provided testBench function directly to first run the tests and then -- if the tests all succeeded -- run the benchmarks.
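As a rough sketch of what this looks like in practice (in the style of the example shipped in the examples/ directory; the exact names `testWith` and `benchNormalForm` are assumptions to be checked against the library's documentation):

```haskell
module Main (main) where

import Test.HUnit ((@?))
import TestBench

main :: IO ()
main = testBench $
  -- State the name, the function under test and how to evaluate it once...
  compareFunc "List length"
              (\n -> length (replicate n ()) == n)
              (testWith (@? "Not as long as specified") <> benchNormalForm)
              -- ...then each 'comp' only needs to supply the next input.
              (mapM_ (\n -> comp ("len == " ++ show n) n)
                     [1, 10, 100, 1000 :: Int])
```

Running this first checks (via HUnit) that every comparison satisfies the stated predicate, and only then benchmarks them (via criterion) under the same evaluation strategy.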

## Examples

Please see the provided examples/ directory.

## Limitations

• There is currently no way to specify an environment in which to run benchmarks.

• To be able to display the tree-like structure more readily for comparisons, the following limitation (currently) applies:

  • No detailed output, including no reports. In practice, however, the detailed outputs produced by criterion don't lend themselves well to comparisons.

## Fortuitously Anticipated Queries

### Why write this library?

The idea behind testbench came about because of two related dissatisfactions with criterion that I found:

1. Even when the bcompare function was still available, it seemed very difficult/clumsy to write comparison benchmarks, since so much needed to be duplicated for each comparison.

2. When trying to find examples of benchmarks that performed comparisons between different implementations, I came across some that seemingly did the same calculation on different inputs/implementations, but upon closer analysis the implementation that "won" was actually doing less work than the others (not by a large amount, but the difference was non-negligible in my opinion). This would have been easy to pick up if even a simple test had been performed (e.g. using == would have given rise to a type mismatch, making it obvious they did different things).

testbench aims to solve these problems by making it easier to write comparisons up-front: by using the compareFunc function to specify what you are benchmarking and how, then using comp just to specify the input (without needing to also re-specify the function, evaluation type, etc.).

### Do I need to know HUnit or criterion to be able to use this?

No, for basic/default usage this library handles all that for you.

There are two overall hints for good benchmarks though:

• Use the NFData variants (e.g. normalForm) where possible: this ensures the calculation is actually completed, rather than laziness biting you.

• If the variance is high, make the benchmark do more work to decrease it.
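To see why the normal-form variants matter, here is a small self-contained illustration (using only deepseq, which testbench already depends on) of how much evaluation each strategy actually forces:

```haskell
module Main (main) where

import Control.DeepSeq (deepseq)

result :: [Int]
result = map (* 2) [1 .. 1000000]

main :: IO ()
main = do
  -- Weak-head normal form (what a whnf-style benchmark forces) stops at the
  -- first (:) constructor, so almost none of the mapping work is measured.
  result `seq` putStrLn "whnf: only the outermost constructor was evaluated"
  -- Normal form (what an nf/normalForm-style benchmark forces) evaluates
  -- every element, so the full cost of the computation is measured.
  result `deepseq` putStrLn "nf: every element was evaluated"
```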

### Why not use hspec/tasty/some-other-testing-framework?

Hopefully the nature of this question makes it obvious why I did not pick one framework over the others. HUnit is low-level enough that it can be utilised by any of them if so required, whilst keeping the required dependencies minimal.

Not to mention that these tests are more aimed at checking that the benchmarks are valid and are thus typically equality/predicate-based tests on the result from a simple function; as such it is more intended that they are quickly run as a verification stage rather than the basis for a large test-suite.

### Why not use criterion directly for running benchmarks?

criterion currently does not lend itself well to visualising the results from comparison-style benchmarks:

• A very limited internal tree-like structure which is not really apparent when results are displayed.

• No easy way to actually compare benchmark values: there used to be a bcompare function but it hasn't been available since version 1.0.0.0 came out in August 2014. As such, comparisons must be done by hand by comparing the results visually.

• Having more than a few benchmarks together produces a lot of output (either to the terminal or a resulting report): combined with the previous point, having more than a few benchmarks is discouraged.

Note, however, that if you wish to use criterion more directly (either for configurability or to be able to have reports), a combination of getTestBenches and flattenBenchForest will provide you with a Benchmark value that is accepted by criterion.
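A sketch of that embedding might look as follows. The exact result type of getTestBenches is an assumption here (taken to pair the HUnit tests with a forest of benchmarks), and myTestBench is a hypothetical placeholder for your own comparisons; check the module documentation for the precise signatures:

```haskell
module Main (main) where

import Criterion.Main (defaultMain)
import Test.HUnit (runTestTT)
import TestBench

main :: IO ()
main = do
  -- Assumption: getTestBenches yields the HUnit tests alongside the
  -- forest of benchmarks built up by compareFunc/comp.
  (tests, forest) <- getTestBenches myTestBench
  _counts <- runTestTT tests
  -- flattenBenchForest converts the forest into a criterion Benchmark,
  -- so criterion's own runner (with full reports) can take over.
  defaultMain [flattenBenchForest forest]

-- Hypothetical placeholder for your own comparisons.
myTestBench :: TestBench
myTestBench = undefined
```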