Criterion.Types
(c) 2009-2014 Bryan O'Sullivan; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

Report: a report of a sample analysis.
  reportNumber: a simple index indicating that this is the nth report.
  reportName: the name of this report.
  reportKeys: see measureKeys.
  reportMeasured: raw measurements. These are not corrected for the estimated measurement overhead that can be found via the anOverhead field of SampleAnalysis.
  reportAnalysis: report analysis.
  reportOutliers: analysis of outliers.
  reportKDEs: data for a KDE of times.

KDE: data for a KDE chart of performance.

SampleAnalysis: result of a bootstrap analysis of a non-parametric sample.
  anRegress: estimates calculated via linear regression.
  anOverhead: estimated measurement overhead, in seconds. Estimation is performed via linear regression.
  anMean: estimated mean.
  anStdDev: estimated standard deviation.
  anOutlierVar: description of the effects of outliers on the estimated variance.

Regression: results of a linear regression.
  regResponder: name of the responding variable.
  regCoeffs: map from name to value of predictor coefficients.
  regRSquare: R² goodness-of-fit estimate.

OutlierVariance: analysis of the extent to which outliers in a sample affect its standard deviation (and, to some extent, its mean).
  ovEffect: qualitative description of effect.
  ovDesc: brief textual description of effect.
  ovFraction: quantitative description of effect (a fraction between 0 and 1).

OutlierEffect: a description of the extent to which outliers in the sample data affect the sample mean and standard deviation.
  Severe: above 50% (i.e. measurements are useless).
  Moderate: between 10% and 50%.
  Slight: between 1% and 10%.
  Unaffected: less than 1% effect.

Outliers: outliers from sample data, calculated using the boxplot technique.
  lowSevere: more than 3 times the interquartile range (IQR) below the first quartile.
  lowMild: between 1.5 and 3 times the IQR below the first quartile.
  highMild: between 1.5 and 3 times the IQR above the third quartile.
  highSevere: more than 3 times the IQR above the third quartile.

Benchmark: specification of a collection of benchmarks and environments. A benchmark may consist of:
  * an environment that creates input data for benchmarks, created with env;
  * a single Benchmarkable item with a name, created with bench;
  * a (possibly nested) group of Benchmarks, created with bgroup.

Measured: a collection of measurements made while benchmarking. Measurements related to garbage collection are tagged with "(GC)". They will only be available if a benchmark is run with "+RTS -T".

Packed storage: when GC statistics cannot be collected, GC values will be set to huge negative values. If a field is labeled with "(GC)" below, use fromInt and fromDouble to safely convert to "real" values.
  measTime: total wall-clock time elapsed, in seconds.
  measCpuTime: total CPU time elapsed, in seconds. Includes both user and kernel (system) time.
  measCycles: cycles, in unspecified units that may be CPU cycles. (On i386 and x86_64, this is measured using the rdtsc instruction.)
  measIters: number of loop iterations measured.
  measAllocated: (GC) number of bytes allocated. Access using fromInt.
  measNumGcs: (GC) number of garbage collections performed. Access using fromInt.
  measBytesCopied: (GC) number of bytes copied during garbage collection. Access using fromInt.
  measMutatorWallSeconds: (GC) wall-clock time spent doing real work ("mutation"), as distinct from garbage collection. Access using fromDouble.
  measMutatorCpuSeconds: (GC) CPU time spent doing real work ("mutation"), as distinct from garbage collection. Access using fromDouble.
  measGcWallSeconds: (GC) wall-clock time spent doing garbage collection. Access using fromDouble.
  measGcCpuSeconds: (GC) CPU time spent doing garbage collection. Access using fromDouble.

Benchmarkable: a pure function or impure action that can be benchmarked. The Int64 parameter indicates the number of times to run the given function or action.

Config: top-level benchmarking configuration.
  confInterval: confidence interval for bootstrap estimation (greater than 0, less than 1).
  forceGC: force garbage collection between every benchmark run. This leads to more stable results.
  timeLimit: number of seconds to run a single benchmark. (In practice, execution time will very slightly exceed this limit.)
  resamples: number of resamples to perform when bootstrapping.
  regressions: regressions to perform.
  rawDataFile: file to write binary measurement and analysis data to.
If not specified, this will be a temporary file.
  reportFile: file to write report output to, with template expanded.
  csvFile: file to write CSV summary to.
  junitFile: file to write JUnit-compatible XML results to.
  verbosity: verbosity level to use when running and analysing benchmarks.
  template: template file to use if writing a report.

Verbosity: control the amount of information displayed.

measureKeys: field names in a Measured record, in the order in which they appear.
measureAccessors: field names and accessors for a Measured record.
rescale: normalise every measurement as if measIters was 1. (measIters itself is left unaffected.)
fromInt: convert a (possibly unavailable) GC measurement to a true value. If the measurement is a huge negative number that corresponds to "no data", this will return Nothing.
toInt: convert from a true value back to the packed representation used for GC measurements.
fromDouble: convert a (possibly unavailable) GC measurement to a true value. If the measurement is a huge negative number that corresponds to "no data", this will return Nothing.
toDouble: convert from a true value back to the packed representation used for GC measurements.
whnf: apply an argument to a function, and evaluate the result to weak head normal form (WHNF).
nf: apply an argument to a function, and evaluate the result to head normal form (NF).
nfIO: perform an action, then evaluate its result to head normal form. This is particularly useful for forcing a lazy IO action to be completely performed.
whnfIO: perform an action, then evaluate its result to weak head normal form (WHNF). This is useful for forcing an IO action whose result is an expression to be evaluated down to a more useful value.

env: run a benchmark (or collection of benchmarks) in the given environment. The purpose of an environment is to lazily create input data to pass to the functions that will be benchmarked.

A common example of environment data is input that is read from a file. Another is a large data structure constructed in-place.

Motivation: in earlier versions of criterion, all benchmark inputs were always created when a program started running.
By deferring the creation of an environment until its associated benchmarks need it, we avoid two problems that this strategy caused:

  * Memory pressure distorted the results of unrelated benchmarks. If one benchmark needed e.g. a gigabyte-sized input, it would force the garbage collector to do extra work when running some other benchmark that had no use for that input. Since the data created by an environment is only available when it is in scope, it should be garbage collected before other benchmarks are run.

  * The time cost of generating all needed inputs could be significant in cases where no inputs (or just a few) were really needed. This occurred often, for instance when just one out of a large suite of benchmarks was run, or when a user would list the collection of benchmarks without running any.

Creation: an environment is created right before its related benchmarks are run. The IO action that creates the environment is run, then the newly created environment is evaluated to normal form (hence the NFData constraint) before being passed to the function that receives the environment.

Complex environments: if you need to create an environment that contains multiple values, simply pack the values into a tuple.

Lazy pattern matching: in situations where a "real" environment is not needed, e.g. if a list of benchmark names is being generated, undefined will be passed to the function that receives the environment. This avoids the overhead of generating an environment that will not actually be used. The function that receives the environment must use lazy pattern matching to deconstruct the tuple, as use of strict pattern matching will cause a crash if undefined is passed in.

Example: this program runs benchmarks in an environment that contains two values. The first value is the contents of a text file; the second is a string. Pay attention to the use of a lazy pattern to deconstruct the tuple in the function that returns the benchmarks to be run.
  setupEnv = do
    let small = replicate 1000 (1 :: Int)
    big <- map length . words <$> readFile "/usr/dict/words"
    return (small, big)

  main = defaultMain [
      -- notice the lazy pattern match here!
      env setupEnv $ \ ~(small, big) -> bgroup "main"
      [ bgroup "small"
        [ bench "length" $ whnf length small
        , bench "length . filter" $ whnf (length . filter (==1)) small
        ]
      , bgroup "big"
        [ bench "length" $ whnf length big
        , bench "length . filter" $ whnf (length . filter (==1)) big
        ]
      ]
    ]

Discussion: the environment created in the example above is intentionally not ideal. As Haskell's scoping rules suggest, the variable big is in scope for the benchmarks that use only small. It would be better to create a separate environment for big, so that it will not be kept alive while the unrelated benchmarks are being run.

bench: create a single benchmark.
bgroup: group several benchmarks together under a common name.
benchNames: retrieve the names of all benchmarks. Grouped benchmarks are prefixed with the name of the group they're in.

Argument documentation:
  env: the action that creates the environment (the environment will be evaluated to normal form before being passed to the benchmark); a function that takes the newly created environment and makes it available to the given benchmarks.
  bench: a name to identify the benchmark; an activity to be benchmarked.
  bgroup: a name to identify the group of benchmarks; benchmarks to group under this name.

Criterion.Monad.Internal
(c) 2009 Neil Brown; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

Criterion: the monad in which most criterion code executes.
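Following the discussion above, the same benchmarks can be given better-scoped inputs by creating one environment per group, so that big is not kept alive while the small benchmarks run. This is a sketch, assuming the criterion package is installed; the inputs mirror the example above.

```haskell
-- Sketch (assumes the criterion package): one environment per group,
-- so each input is garbage-collected once its benchmarks finish.
import Criterion.Main

main :: IO ()
main = defaultMain
  [ env (return (replicate 1000 (1 :: Int))) $ \ small ->
      bgroup "small"
        [ bench "length" $ whnf length small
        , bench "length . filter" $ whnf (length . filter (==1)) small
        ]
  , env (map length . words <$> readFile "/usr/dict/words") $ \ big ->
      bgroup "big"
        [ bench "length" $ whnf length big
        , bench "length . filter" $ whnf (length . filter (==1)) big
        ]
  ]
```

Each env call scopes its data to exactly the group that uses it, which is the structure the Discussion section recommends.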
Criterion.Measurement
(c) 2009-2014 Bryan O'Sullivan; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

getCPUTime: return the amount of elapsed CPU time, combining user and kernel (system) time into a single measure.
getTime: return the current wallclock time, in seconds since some arbitrary time. You must call initializeTime once before calling this function!
getCycles: read the CPU cycle counter.
initializeTime: set up time measurement.
getGCStats: try to get GC statistics, bearing in mind that the GHC runtime will throw an exception if statistics collection was not enabled using "+RTS -T".
measure: measure the execution of a benchmark a given number of times.
threshold: the amount of time a benchmark must run for in order for us to have some trust in the raw measurement. We set this threshold so that we can generate enough data to later perform meaningful statistical analyses. The threshold is 30 milliseconds. One use of runBenchmark must accumulate more than 300 milliseconds of total measurements above this threshold before it will finish.
runBenchmark: run a single benchmark, and return measurements collected while executing it, along with the amount of time the measurement process took.
measured: an empty structure.
applyGCStats: apply the difference between two sets of GC statistics to a measurement.
secs: convert a number of seconds to a string. The string will consist of four decimal places, followed by a short description of the time units.

Argument documentation:
  measure: operation to benchmark; number of iterations.
  runBenchmark: lower bound on how long the benchmarking process should take.
In practice, this time limit may be exceeded in order to generate enough data to perform meaningful statistical analyses.
  applyGCStats: statistics gathered at the end of a run; statistics gathered at the beginning of a run; value to "modify".

Criterion.Monad
(c) 2009 Neil Brown; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

withConfig: run a Criterion action with the given Config.
getGen: return a random number generator, creating one if necessary. This is not currently thread-safe, but in a harmless way (we might call createSystemRandom more than once if multiple threads race).
getOverhead: return an estimate of the measurement overhead.
memoise: memoise the result of an IO action. This is not currently thread-safe, but hopefully in a harmless way. We might call the given action more than once if multiple threads race, so our caller's job is to write actions that can be run multiple times safely.

Criterion.Report
(c) 2009-2014 Bryan O'Sullivan; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

TemplateException: a problem arose with a template.
TemplateNotFound: the template could not be found.
tidyTails: trim long flat tails from a KDE plot.
getTemplateDir: return the path to the template and other files used for generating reports.
report: write out a series of Report values to a single file, if configured to do so.
formatReport: format a series of Report values using the given Hastache template.
vector: render the elements of a vector. For example, given this piece of Haskell:

  mkStrContext $ \name -> case name of
    "foo" -> vector "x" foo

it will substitute each value in the vector for x in the following Hastache template:

  {{#foo}} {{x}} {{/foo}}

vector2: render the elements of two vectors.
includeFile: attempt to include the contents of a file based on a search path. Returns an empty value if the search fails or the file could not be read. Intended for use with Hastache's MuLambdaM, for example:

  context "include" = MuLambdaM $ includeFile [templateDir]

Hastache template expansion is not performed within the included file. No attempt is made to ensure that the included file path is safe, i.e.
that it does not refer to an unexpected file such as "/etc/passwd".

loadTemplate: load a Hastache template file. If the name is an absolute or relative path, the search path is not used, and the name is treated as a literal path. This function throws a TemplateException if the template could not be found, or an IOException if no template could be loaded.

Argument documentation:
  formatReport: Hastache template.
  vector: name to use when substituting.
  vector2: name for elements from the first vector; name for elements from the second vector; first vector; second vector.
  includeFile: directories to search; name of the file to search for.
  loadTemplate: search path; name of template file.

Criterion.IO.Printf
(c) 2009-2014 Bryan O'Sullivan; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

CritHPrintfType: an internal class that acts like Printf/HPrintf. The implementation is visible to the rest of the program, but the details of the class are not.
note: print a "normal" note.
prolix: print verbose output.
printError: print an error message.
writeCsv: write a record to a CSV file.

Criterion.Analysis
(c) 2009-2014 Bryan O'Sullivan; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

classifyOutliers: classify outliers in a data set, using the boxplot technique.
outlierVariance: compute the extent to which outliers in the sample data affect the sample mean and standard deviation.
countOutliers: count the total number of outliers in a sample.
analyseMean: display the mean of a Sample, and characterise the outliers present in the sample.
scale: multiply the Estimates in an analysis by the given value.
analyseSample: perform an analysis of a measurement.
regress: regress the given predictors against the responder. Errors may be returned under various circumstances, such as invalid names or lack of needed data. See olsRegress for details of the regression performed.
resolveAccessors: given a list of accessor names (see measureKeys), return either a mapping from accessor name to function or an error message if any names are wrong.
validateAccessors: given predictor and responder names, do some basic validation, then hand back the relevant accessors.
noteOutliers: display a report of the Outliers present in a Sample.
Argument documentation:
  outlierVariance: bootstrap estimate of sample mean; bootstrap estimate of sample standard deviation; number of original iterations.
  analyseMean: number of iterations used to compute the sample.
  scale: value to multiply by.
  analyseSample: experiment number; experiment name; sample data.
  regress: predictor names; responder name.
  validateAccessors: predictor names; responder name.

Criterion.Main.Options
(c) 2014 Bryan O'Sullivan; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

Mode: execution mode for a benchmark program.
  Run: run and analyse the given benchmarks.
  RunOnly: run the given benchmarks, without collecting or analysing performance numbers.
  Version: print the version.
  List: list all benchmarks.

MatchType: how to match a benchmark name.
  Glob: match by Unix-style glob pattern.
  Prefix: match by prefix. For example, a prefix of "foo" will match "foobar".

defaultConfig: default benchmarking configuration.
parseWith: parse a command line.
describe: flesh out a command line parser.
versionInfo: a string describing the version of this benchmark (really, the version of criterion that was used to build it).

Argument documentation:
  parseWith: default configuration to use if options are not explicitly specified.

Criterion.IO
(c) 2009-2014 Bryan O'Sullivan; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

header: the header identifies a criterion data file. This contains version information; there is no expectation of cross-version compatibility.
hGetReports: read all reports from the given Handle.
hPutReports: write reports to the given Handle.
readReports: read all reports from the given file.
writeReports: write reports to the given file.

Criterion.Internal
(c) 2009-2014 Bryan O'Sullivan; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

runAndAnalyseOne: run a single benchmark and analyse its performance.
runAndAnalyse: run, and analyse, one or more benchmarks.
runNotAnalyse: run a benchmark without analysing its performance.
addPrefix: add the given prefix to a name. If the prefix is empty, the name is returned unmodified. Otherwise, the prefix and name are separated by a '/' character.
junit: write summary JUnit file (if applicable).

Argument documentation:
  runAndAnalyse: a predicate that chooses whether to run a benchmark by its name.
  runNotAnalyse: number of loop iterations to run; a predicate that chooses whether to run a benchmark by its name.
  addPrefix: prefix; name.
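The documented behaviour of addPrefix is simple enough to illustrate as a standalone function. This is a sketch of the described behaviour, not the library's own code:

```haskell
-- Illustration of the documented addPrefix behaviour: an empty prefix
-- leaves the name unchanged; otherwise the prefix and name are joined
-- with a '/' character. This mirrors how bgroup names prefix benchmarks.
addPrefix :: String -> String -> String
addPrefix ""  name = name
addPrefix pfx name = pfx ++ '/' : name

main :: IO ()
main = do
  putStrLn (addPrefix "" "length")   -- prints "length"
  putStrLn (addPrefix "fib" "10")    -- prints "fib/10"
```

This is also why benchNames reports grouped benchmarks with their group name prefixed, e.g. "fib/10".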
Criterion.Main
(c) 2009-2014 Bryan O'Sullivan; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

defaultMain: an entry point that can be used as a main function.

  import Criterion.Main

  fib :: Int -> Int
  fib 0 = 0
  fib 1 = 1
  fib n = fib (n-1) + fib (n-2)

  main = defaultMain [
      bgroup "fib" [ bench "10" $ whnf fib 10
                   , bench "35" $ whnf fib 35
                   , bench "37" $ whnf fib 37
                   ]
    ]

makeMatcher: create a function that can tell if a name given on the command line matches a benchmark.
defaultMainWith: an entry point that can be used as a main function, with configurable defaults. Example:

  import Criterion.Main.Options
  import Criterion.Main

  myConfig = defaultConfig {
               -- Do not GC between runs.
               forceGC = False
             }

  main = defaultMainWith myConfig [
           bench "fib 30" $ whnf fib 30
         ]

If you save the above example as "Fib.hs", you should be able to compile it as follows:

  ghc -O --make Fib

Run "Fib --help" on the command line to get a list of command line options.

parseError: display an error message from a command line parsing failure, and exit.

Argument documentation:
  parseError: command line arguments.

Criterion
(c) 2009-2014 Bryan O'Sullivan; license: BSD-style; maintainer: bos@serpentine.com; stability: experimental; portability: GHC

benchmark: run a benchmark interactively, and analyse its performance.
benchmark': run a benchmark interactively, analyse its performance, and return the analysis.
benchmarkWith: run a benchmark interactively, and analyse its performance.
benchmarkWith': run a benchmark interactively, analyse its performance, and return the analysis.
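The interactive entry points above are convenient from GHCi or small scripts, where no command-line parsing is wanted. A sketch, assuming the criterion package is installed:

```haskell
-- Sketch (assumes the criterion package): benchmarking a single
-- Benchmarkable interactively, without defaultMain's CLI handling.
import Criterion (benchmark, benchmarkWith)
import Criterion.Main (defaultConfig, whnf)
import Criterion.Types (Config (..))

fib :: Int -> Int
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

main :: IO ()
main = do
  -- Run and analyse one benchmark with the default configuration.
  benchmark (whnf fib 25)
  -- The *With variants accept an explicit Config, e.g. a shorter
  -- time limit for quicker interactive feedback.
  benchmarkWith defaultConfig { timeLimit = 1 } (whnf fib 25)
```

benchmark' and benchmarkWith' behave the same but return the resulting analysis for further inspection.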