Copyright | (c) 2009-2014 Bryan O'Sullivan
License | BSD-style
Maintainer | bos@serpentine.com
Stability | experimental
Portability | GHC
Safe Haskell | Trustworthy
Language | Haskell2010
Types for benchmarking.

The core type is Benchmarkable, which admits both pure functions and IO actions.

For a pure function of type a -> b, the benchmarking harness calls this function repeatedly, each time with a different Int64 argument (the number of times to run the function in a loop), and reduces the result the function returns to weak head normal form.

For an action of type IO a, the benchmarking harness calls the action repeatedly, but does not reduce the result.
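A hedged sketch of both cases, assuming the criterion package is installed; `fib` and the file path are illustrative stand-ins, not part of the library:

```haskell
import Criterion.Main

-- A deliberately slow pure workload (an assumption for illustration).
fib :: Int -> Int
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

main :: IO ()
main = defaultMain
  [ bench "fib/20"   $ whnf fib 20                     -- pure: result reduced to WHNF
  , bench "readFile" $ whnfIO (readFile "/etc/hosts")  -- IO: action run repeatedly
  ]
```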
Synopsis
- data Config = Config {}
- data Verbosity
- data Benchmarkable where
- data Benchmark where
- data Measured = Measured {
- measTime :: !Double
- measCpuTime :: !Double
- measCycles :: !Int64
- measIters :: !Int64
- measAllocated :: !Int64
- measNumGcs :: !Int64
- measBytesCopied :: !Int64
- measMutatorWallSeconds :: !Double
- measMutatorCpuSeconds :: !Double
- measGcWallSeconds :: !Double
- measGcCpuSeconds :: !Double
- fromInt :: Int64 -> Maybe Int64
- toInt :: Maybe Int64 -> Int64
- fromDouble :: Double -> Maybe Double
- toDouble :: Maybe Double -> Double
- measureAccessors :: Map String (Measured -> Maybe Double, String)
- measureKeys :: [String]
- measure :: Unbox a => (Measured -> a) -> Vector Measured -> Vector a
- rescale :: Measured -> Measured
- env :: NFData env => IO env -> (env -> Benchmark) -> Benchmark
- envWithCleanup :: NFData env => IO env -> (env -> IO a) -> (env -> Benchmark) -> Benchmark
- perBatchEnv :: (NFData env, NFData b) => (Int64 -> IO env) -> (env -> IO b) -> Benchmarkable
- perBatchEnvWithCleanup :: (NFData env, NFData b) => (Int64 -> IO env) -> (Int64 -> env -> IO ()) -> (env -> IO b) -> Benchmarkable
- perRunEnv :: (NFData env, NFData b) => IO env -> (env -> IO b) -> Benchmarkable
- perRunEnvWithCleanup :: (NFData env, NFData b) => IO env -> (env -> IO ()) -> (env -> IO b) -> Benchmarkable
- toBenchmarkable :: (Int64 -> IO ()) -> Benchmarkable
- bench :: String -> Benchmarkable -> Benchmark
- bgroup :: String -> [Benchmark] -> Benchmark
- addPrefix :: String -> String -> String
- benchNames :: Benchmark -> [String]
- nf :: NFData b => (a -> b) -> a -> Benchmarkable
- whnf :: (a -> b) -> a -> Benchmarkable
- nfIO :: NFData a => IO a -> Benchmarkable
- whnfIO :: IO a -> Benchmarkable
- nfAppIO :: NFData b => (a -> IO b) -> a -> Benchmarkable
- whnfAppIO :: (a -> IO b) -> a -> Benchmarkable
- data Outliers = Outliers {
- samplesSeen :: !Int64
- lowSevere :: !Int64
- lowMild :: !Int64
- highMild :: !Int64
- highSevere :: !Int64
- data OutlierEffect
- = Unaffected
- | Slight
- | Moderate
- | Severe
- data OutlierVariance = OutlierVariance {}
- data Regression = Regression {}
- data KDE = KDE {}
- data Report = Report {}
- data SampleAnalysis = SampleAnalysis {}
- data DataRecord
Configuration
data Config
Top-level benchmarking configuration.
Instances
data Verbosity
Control the amount of information displayed.
Instances
Benchmark descriptions
data Benchmarkable where #
A pure function or impure action that can be benchmarked. The Int64 parameter indicates the number of times to run the given function or action.
data Benchmark where #
Specification of a collection of benchmarks and environments. A benchmark may consist of:
- An environment that creates input data for benchmarks, created with env.
- A single Benchmarkable item with a name, created with bench.
- A (possibly nested) group of Benchmarks, created with bgroup.
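A minimal sketch of this structure; the names and workloads are illustrative:

```haskell
import Criterion.Main

main :: IO ()
main = defaultMain
  [ bench "reverse" $ nf reverse [1 .. 1000 :: Int]  -- a single named Benchmarkable
  , bgroup "folds"                                   -- a (possibly nested) group
      [ bench "sum"     $ whnf sum     [1 .. 1000 :: Int]
      , bench "product" $ whnf product [1 .. 1000 :: Int]
      ]
  ]
```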
Measurements
data Measured #
A collection of measurements made while benchmarking.

Measurements related to garbage collection are tagged with GC. They will only be available if a benchmark is run with "+RTS -T".

Packed storage. When GC statistics cannot be collected, GC values will be set to huge negative values. If a field is labeled with "GC" below, use fromInt and fromDouble to safely convert to "real" values.
Instances
fromInt :: Int64 -> Maybe Int64 #
Convert a (possibly unavailable) GC measurement to a true value. If the measurement is a huge negative number that corresponds to "no data", this will return Nothing.
toInt :: Maybe Int64 -> Int64 #
Convert from a true value back to the packed representation used for GC measurements.
fromDouble :: Double -> Maybe Double #
Convert a (possibly unavailable) GC measurement to a true value. If the measurement is a huge negative number that corresponds to "no data", this will return Nothing.
toDouble :: Maybe Double -> Double #
Convert from a true value back to the packed representation used for GC measurements.
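The packing scheme can be sketched as a reimplementation. The sentinel values used here (minBound for Int64, negative infinity for Double) are assumptions about the representation, not criterion's exact definitions:

```haskell
import Data.Int (Int64)

-- Hypothetical reimplementation of the packed-storage helpers described
-- above; criterion's actual sentinel values may differ.
fromInt' :: Int64 -> Maybe Int64
fromInt' i | i == minBound = Nothing   -- "huge negative value" means no data
           | otherwise     = Just i

toInt' :: Maybe Int64 -> Int64
toInt' Nothing  = minBound
toInt' (Just i) = i

fromDouble' :: Double -> Maybe Double
fromDouble' d | isInfinite d && d < 0 = Nothing
              | otherwise             = Just d

toDouble' :: Maybe Double -> Double
toDouble' Nothing  = -1 / 0            -- negative infinity
toDouble' (Just d) = d
```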
measureAccessors :: Map String (Measured -> Maybe Double, String) #
Field names and accessors for a Measured record.
measureKeys :: [String] #
Field names in a Measured record, in the order in which they appear.
Benchmark construction
env
:: NFData env | |
=> IO env | Create the environment. The environment will be evaluated to normal form before being passed to the benchmark. |
-> (env -> Benchmark) | Take the newly created environment and make it available to the given benchmarks. |
-> Benchmark |
Run a benchmark (or collection of benchmarks) in the given environment. The purpose of an environment is to lazily create input data to pass to the functions that will be benchmarked.
A common example of environment data is input that is read from a file. Another is a large data structure constructed in-place.
Motivation. In earlier versions of criterion, all benchmark inputs were always created when a program started running. By deferring the creation of an environment until its associated benchmarks need it, we avoid two problems that this strategy caused:
- Memory pressure distorted the results of unrelated benchmarks. If one benchmark needed e.g. a gigabyte-sized input, it would force the garbage collector to do extra work when running some other benchmark that had no use for that input. Since the data created by an environment is only available when it is in scope, it should be garbage collected before other benchmarks are run.
- The time cost of generating all needed inputs could be significant in cases where no inputs (or just a few) were really needed. This occurred often, for instance when just one out of a large suite of benchmarks was run, or when a user would list the collection of benchmarks without running any.
Creation. An environment is created right before its related benchmarks are run. The IO action that creates the environment is run, then the newly created environment is evaluated to normal form (hence the NFData constraint) before being passed to the function that receives the environment.
Complex environments. If you need to create an environment that contains multiple values, simply pack the values into a tuple.
Lazy pattern matching. In situations where a "real" environment is not needed, e.g. if a list of benchmark names is being generated, a value which throws an exception will be passed to the function that receives the environment. This avoids the overhead of generating an environment that will not actually be used.
The function that receives the environment must use lazy pattern matching to deconstruct the tuple (e.g., ~(x, y), not (x, y)), as use of strict pattern matching will cause a crash if an exception-throwing value is passed in.
Example. This program runs benchmarks in an environment that contains two values. The first value is the contents of a text file; the second is a string. Pay attention to the use of a lazy pattern to deconstruct the tuple in the function that returns the benchmarks to be run.
setupEnv = do
  let small = replicate 1000 (1 :: Int)
  big <- map length . words <$> readFile "/usr/dict/words"
  return (small, big)

main = defaultMain
  [ -- notice the lazy pattern match here!
    env setupEnv $ \ ~(small, big) -> bgroup "main"
      [ bgroup "small"
          [ bench "length"          $ whnf length small
          , bench "length . filter" $ whnf (length . filter (== 1)) small
          ]
      , bgroup "big"
          [ bench "length"          $ whnf length big
          , bench "length . filter" $ whnf (length . filter (== 1)) big
          ]
      ]
  ]
Discussion. The environment created in the example above is intentionally not ideal. As Haskell's scoping rules suggest, the variable big is in scope for the benchmarks that use only small. It would be better to create a separate environment for big, so that it will not be kept alive while the unrelated benchmarks are being run.
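A sketch of that improved structure, giving each input its own environment so that neither is kept alive for the other's benchmarks (the file path is the same illustrative one used above):

```haskell
import Criterion.Main

main :: IO ()
main = defaultMain
  [ env (return (replicate 1000 (1 :: Int))) $ \small ->
      bgroup "small"
        [ bench "length"          $ whnf length small
        , bench "length . filter" $ whnf (length . filter (== 1)) small
        ]
    -- big is created, used, and collected independently of small
  , env (map length . words <$> readFile "/usr/dict/words") $ \big ->
      bgroup "big"
        [ bench "length"          $ whnf length big
        , bench "length . filter" $ whnf (length . filter (== 1)) big
        ]
  ]
```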
perBatchEnv
:: (NFData env, NFData b) | |
=> (Int64 -> IO env) | Create an environment for a batch of N runs. The environment will be evaluated to normal form before running. |
-> (env -> IO b) | Function returning the IO action that should be benchmarked with the newly generated environment. |
-> Benchmarkable |
Create a Benchmarkable where a fresh environment is allocated for every batch of runs of the benchmarkable.
The environment is evaluated to normal form before the benchmark is run.
When using whnf, whnfIO, etc., criterion creates a Benchmarkable which runs a batch of N repeated runs of the expression. Criterion may run any number of these batches to get accurate measurements. Environments created by env and envWithCleanup are shared across all these batches of runs.
This is fine for simple benchmarks on static input, but when benchmarking IO operations that can modify (and especially grow) the environment, later batches may have their accuracy affected by, for example, longer garbage collection pauses.
An example: suppose we want to benchmark writing to a Chan. If we allocate the Chan using an environment and our benchmark consists of writeChan env (), the contents, and thus the size, of the Chan will grow with every repeat. If criterion runs 1,000 batches of 1,000 repeats, the channel will have 999,000 items in it by the time the last batch is run. Since GHC's garbage collector has to copy the live set on every major GC, our last set of writes will suffer a lot of noise from the previous repeats.
By allocating a fresh environment for every batch of runs this function should eliminate this effect.
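A sketch of the fix for the Chan scenario above. Chan has no NFData instance, so this sketch wraps it in a hypothetical newtype with a trivial instance; the wrapper is not part of criterion:

```haskell
import Control.Concurrent.Chan (Chan, newChan, writeChan)
import Control.DeepSeq (NFData (..))
import Criterion.Main

-- Hypothetical wrapper: the channel is just a heap reference, so a
-- shallow seq is enough to satisfy the NFData constraint.
newtype ChanEnv = ChanEnv (Chan ())
instance NFData ChanEnv where
  rnf (ChanEnv ch) = ch `seq` ()

-- A fresh channel per batch, so its contents never accumulate across batches.
writeChanBench :: Benchmarkable
writeChanBench = perBatchEnv (\_batchSize -> ChanEnv <$> newChan)
                             (\(ChanEnv ch) -> writeChan ch ())
```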
perBatchEnvWithCleanup
:: (NFData env, NFData b) | |
=> (Int64 -> IO env) | Create an environment for a batch of N runs. The environment will be evaluated to normal form before running. |
-> (Int64 -> env -> IO ()) | Clean up the created environment. |
-> (env -> IO b) | Function returning the IO action that should be benchmarked with the newly generated environment. |
-> Benchmarkable |
Same as perBatchEnv, but allows for an additional callback to clean up the environment. Resource cleanup is exception safe, that is, it runs even if the Benchmark throws an exception.
perRunEnv
:: (NFData env, NFData b) | |
=> IO env | Action that creates the environment for a single run. |
-> (env -> IO b) | Function returning the IO action that should be benchmarked with the newly generated environment. |
-> Benchmarkable |
Create a Benchmarkable where a fresh environment is allocated for every run of the operation to benchmark. This is useful for benchmarking mutable operations that need a fresh environment, such as sorting a mutable Vector.
As with env and perBatchEnv, the environment is evaluated to normal form before the benchmark is run.
This introduces extra noise and results in reduced accuracy compared to other criterion benchmarks, but it allows easier benchmarking of mutable operations than was previously possible.
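A sketch of the mutable-Vector use case mentioned above, assuming the vector and vector-algorithms packages and the NFData instance vector provides for mutable vectors:

```haskell
import qualified Data.Vector.Unboxed as V
import qualified Data.Vector.Algorithms.Intro as Intro  -- from vector-algorithms
import Criterion.Main

-- Each run sorts a fresh mutable copy, since sorting mutates in place.
sortBench :: V.Vector Int -> Benchmarkable
sortBench xs = perRunEnv (V.thaw xs) Intro.sort

main :: IO ()
main = defaultMain
  [ bench "introsort/1000" $ sortBench (V.fromList [1000, 999 .. 1]) ]
```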
toBenchmarkable :: (Int64 -> IO ()) -> Benchmarkable #
Construct a Benchmarkable value from an impure action, where the Int64 parameter indicates the number of times to run the action.
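A hedged sketch of writing such an action by hand; the workload inside the loop is illustrative:

```haskell
import Control.Exception (evaluate)
import Data.Int (Int64)
import Criterion.Main

-- The harness passes the iteration count; the action must perform the
-- work that many times. Replace the evaluate call with the real work.
myBenchmark :: Benchmarkable
myBenchmark = toBenchmarkable go
  where
    go :: Int64 -> IO ()
    go 0 = return ()
    go k = evaluate (sum [1 .. 1000 :: Int]) >> go (k - 1)
```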
bench
:: String | A name to identify the benchmark. |
-> Benchmarkable | An activity to be benchmarked. |
-> Benchmark |
Create a single benchmark.
bgroup
:: String | A name to identify the group of benchmarks. |
-> [Benchmark] | Benchmarks to group under this name. |
-> Benchmark |
Group several benchmarks together under a common name.
addPrefix :: String -> String -> String #
Add the given prefix to a name. If the prefix is empty, the name is returned unmodified. Otherwise, the prefix and name are separated by a '/' character.
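The described behavior can be sketched as a reimplementation (not criterion's own source):

```haskell
-- Hypothetical reimplementation of the prefixing behavior described above.
addPrefix' :: String -> String -> String
addPrefix' ""     name = name
addPrefix' prefix name = prefix ++ '/' : name
```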
benchNames :: Benchmark -> [String] #
Retrieve the names of all benchmarks. Grouped benchmarks are prefixed with the name of the group they're in.
Evaluation control
nf :: NFData b => (a -> b) -> a -> Benchmarkable #
Apply an argument to a function, and evaluate the result to normal form (NF).
whnf :: (a -> b) -> a -> Benchmarkable #
Apply an argument to a function, and evaluate the result to weak head normal form (WHNF).
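The difference matters for lazy results. In this sketch, whnf forces only the outermost constructor of the result list, while nf forces every element:

```haskell
import Criterion.Main

main :: IO ()
main = defaultMain
  [ bench "map/whnf" $ whnf (map (+ 1)) [1 .. 1000 :: Int]  -- forces only the first (:) cell
  , bench "map/nf"   $ nf   (map (+ 1)) [1 .. 1000 :: Int]  -- forces the whole result list
  ]
```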
nfIO :: NFData a => IO a -> Benchmarkable #
Perform an action, then evaluate its result to normal form (NF).
whnfIO :: IO a -> Benchmarkable #
Perform an action, then evaluate its result to weak head normal form (WHNF). This is useful for forcing an IO action whose result is an expression to be evaluated down to a more useful value. If the construction of the IO a value is an important factor in the benchmark, it is best to use whnfAppIO instead.
nfAppIO :: NFData b => (a -> IO b) -> a -> Benchmarkable #
Apply an argument to a function which performs an action, then evaluate its result to normal form (NF). This function constructs the IO b value on each iteration, similar to nf. This is particularly useful for IO actions where the bulk of the work is not bound by IO, but by pure computations that may optimize away if the argument is known statically, as in nfIO.
whnfAppIO :: (a -> IO b) -> a -> Benchmarkable #
Perform an action, then evaluate its result to weak head normal form (WHNF). This function constructs the IO b value on each iteration, similar to whnf. This is particularly useful for IO actions where the bulk of the work is not bound by IO, but by pure computations that may optimize away if the argument is known statically, as in nfIO.
Result types
data Outliers Source #
Outliers from sample data, calculated using the boxplot technique.
Instances
data OutlierEffect Source #
A description of the extent to which outliers in the sample data affect the sample mean and standard deviation.
Unaffected | Less than 1% effect. |
Slight | Between 1% and 10%. |
Moderate | Between 10% and 50%. |
Severe | Above 50% (i.e. measurements are useless). |
Instances
data OutlierVariance Source #
Analysis of the extent to which outliers in a sample affect its standard deviation (and to some extent, its mean).
Instances
data Regression Source #
Results of a linear regression.
Instances
data KDE Source #
Data for a KDE chart of performance.
Instances
Eq KDE Source # | |
Data KDE Source # | |
Read KDE Source # | |
Show KDE Source # | |
Generic KDE Source # | |
NFData KDE Source # | |
Defined in Criterion.Types | |
ToJSON KDE Source # | |
Defined in Criterion.Types | |
FromJSON KDE Source # | |
Binary KDE Source # | |
type Rep KDE Source # | |
data Report Source #
Report of a sample analysis.
Instances
data SampleAnalysis Source #
Result of a bootstrap analysis of a non-parametric sample.
Instances
data DataRecord Source #