Criterion.Types.Internal
(c) 2017 Ryan Scott, BSD-style, bos@serpentine.com, experimental, GHC, Safe

fakeEnvironment: A dummy environment that is passed to functions that create benchmarks from environments when no concrete environment is available.

Criterion.Types
(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, Trustworthy

Report: Report of a sample analysis.
  reportNumber: A simple index indicating that this is the n-th report.
  reportName: The name of this report.
  reportKeys: See measureKeys.
  reportMeasured: Raw measurements. These are not corrected for the estimated measurement overhead that can be found via the anOverhead field of reportAnalysis.
  reportAnalysis: Report analysis.
  reportOutliers: Analysis of outliers.
  reportKDEs: Data for a KDE of times.

KDE: Data for a KDE chart of performance.

SampleAnalysis: Result of a bootstrap analysis of a non-parametric sample.
  anRegress: Estimates calculated via linear regression.
  anOverhead: Estimated measurement overhead, in seconds. Estimation is performed via linear regression.
  anMean: Estimated mean.
  anStdDev: Estimated standard deviation.
  anOutlierVar: Description of the effects of outliers on the estimated variance.

Regression: Results of a linear regression.
  regResponder: Name of the responding variable.
  regCoeffs: Map from name to value of predictor coefficients.
  regRSquare: R-squared goodness-of-fit estimate.

OutlierVariance: Analysis of the extent to which outliers in a sample affect its standard deviation (and, to some extent, its mean).
  ovEffect: Qualitative description of the effect.
  ovDesc: Brief textual description of the effect.
  ovFraction: Quantitative description of the effect (a fraction between 0 and 1).

OutlierEffect: A description of the extent to which outliers in the sample data affect the sample mean and standard deviation.
  Unaffected: Less than 1% effect.
  Slight: Between 1% and 10%.
  Moderate: Between 10% and 50%.
  Severe: Above 50% (i.e. measurements are useless).

Outliers: Outliers from sample data, calculated using the boxplot technique.
  lowSevere: More than 3 times the interquartile range (IQR) below the first quartile.
  lowMild: Between 1.5 and 3 times the IQR below the first quartile.
  highMild: Between 1.5 and 3 times the IQR above the third quartile.
  highSevere: More than 3 times the IQR above the third quartile.

Benchmark: Specification of a collection of benchmarks and environments. A benchmark may consist of:
  - an environment that creates input data for benchmarks, created with env;
  - a single Benchmarkable item with a name, created with bench;
  - a (possibly nested) group of Benchmarks, created with bgroup.

Measured: A collection of measurements made while benchmarking. Measurements related to garbage collection are tagged with (GC); they will only be available if a benchmark is run with "+RTS -T". Packed storage: when GC statistics cannot be collected, GC values will be set to huge negative values. If a field is labeled with (GC) below, use fromInt or fromDouble to safely convert it to a "real" value.
  measTime: Total wall-clock time elapsed, in seconds.
  measCpuTime: Total CPU time elapsed, in seconds. Includes both user and kernel (system) time.
  measCycles: Cycles, in unspecified units that may be CPU cycles. (On i386 and x86_64, this is measured using the rdtsc instruction.)
  measIters: Number of loop iterations measured.
  measAllocated: (GC) Number of bytes allocated. Access using fromInt.
  measNumGcs: (GC) Number of garbage collections performed. Access using fromInt.
  measBytesCopied: (GC) Number of bytes copied during garbage collection. Access using fromInt.
  measMutatorWallSeconds: (GC) Wall-clock time spent doing real work ("mutation"), as distinct from garbage collection. Access using fromDouble.
  measMutatorCpuSeconds: (GC) CPU time spent doing real work ("mutation"), as distinct from garbage collection. Access using fromDouble.
  measGcWallSeconds: (GC) Wall-clock time spent doing garbage collection. Access using fromDouble.
  measGcCpuSeconds: (GC) CPU time spent doing garbage collection. Access using fromDouble.
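Because the GC-tagged fields are packed, reading them goes through fromInt or fromDouble. A minimal sketch (the accessor names here are hypothetical, not part of criterion):

    import Criterion.Types (Measured(..), fromDouble, fromInt)
    import Data.Int (Int64)

    -- Hypothetical accessors: unpack GC-tagged fields, which hold a huge
    -- negative sentinel when the benchmark ran without "+RTS -T".
    allocatedBytes :: Measured -> Maybe Int64
    allocatedBytes m = fromInt (measAllocated m)    -- Nothing if GC stats were unavailable

    gcSeconds :: Measured -> Maybe Double
    gcSeconds m = fromDouble (measGcWallSeconds m)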
Benchmarkable: A pure function or impure action that can be benchmarked. The Int64 parameter indicates the number of times to run the given function or action.

Config: Top-level benchmarking configuration.
  confInterval: Confidence interval for bootstrap estimation (greater than 0, less than 1).
  timeLimit: Number of seconds to run a single benchmark. (In practice, execution time will very slightly exceed this limit.)
  resamples: Number of resamples to perform when bootstrapping.
  regressions: Regressions to perform.
  rawDataFile: File to write binary measurement and analysis data to. If not specified, this will be a temporary file.
  reportFile: File to write report output to, with the template expanded.
  csvFile: File to write the CSV summary to.
  jsonFile: File to write JSON-formatted results to.
  junitFile: File to write JUnit-compatible XML results to.
  verbosity: Verbosity level to use when running and analysing benchmarks.
  template: Template file to use if writing a report.

Verbosity: Control the amount of information displayed.

toBenchmarkable: Construct a Benchmarkable value from an impure action, where the Int64 parameter indicates the number of times to run the action.

measureKeys: Field names in a Measured record, in the order in which they appear.

measureAccessors: Field names and accessors for a Measured record.

rescale: Normalise every measurement as if measIters was 1. (measIters itself is left unaffected.)

fromInt: Convert a (possibly unavailable) GC measurement to a true value. If the measurement is a huge negative number that corresponds to "no data", this will return Nothing.

toInt: Convert from a true value back to the packed representation used for GC measurements.

fromDouble: Convert a (possibly unavailable) GC measurement to a true value. If the measurement is a huge negative number that corresponds to "no data", this will return Nothing.

toDouble: Convert from a true value back to the packed representation used for GC measurements.

whnf: Apply an argument to a function, and evaluate the result to weak head normal form (WHNF).

nf: Apply an argument to a function, and evaluate the result to normal form (NF).

nfIO: Perform an action, then evaluate its result to normal form. This is particularly useful for forcing a lazy IO action to be completely performed.

whnfIO: Perform an action, then evaluate its result to weak head normal form (WHNF). This is useful for forcing an IO action whose result is an expression to be evaluated down to a more useful value.
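The choice between the WHNF and NF combinators matters whenever a benchmarked function returns a lazy structure. A minimal sketch contrasting the two (the benchmark names are made up; the API is as documented above):

    import Criterion.Main

    suite :: [Benchmark]
    suite =
      [ -- whnf stops at the outermost constructor: only the first cons
        -- cell of the result list is forced, so most of the map goes unpaid.
        bench "map to WHNF" $ whnf (map (+ 1)) [1 .. 1000 :: Int]
        -- nf forces the entire result, so the benchmark also pays for
        -- computing every element of the list.
      , bench "map to NF" $ nf (map (+ 1)) [1 .. 1000 :: Int]
      ]

    main :: IO ()
    main = defaultMain suite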
env: Run a benchmark (or collection of benchmarks) in the given environment. The purpose of an environment is to lazily create input data to pass to the functions that will be benchmarked.

A common example of environment data is input that is read from a file. Another is a large data structure constructed in place.

Motivation. In earlier versions of criterion, all benchmark inputs were always created when a program started running. By deferring the creation of an environment until its associated benchmarks actually need it, we avoid two problems that this strategy caused:

  - Memory pressure distorted the results of unrelated benchmarks. If one benchmark needed e.g. a gigabyte-sized input, it would force the garbage collector to do extra work when running some other benchmark that had no use for that input. Since the data created by an environment is only available when it is in scope, it should be garbage collected before other benchmarks are run.

  - The time cost of generating all needed inputs could be significant in cases where no inputs (or just a few) were really needed. This occurred often, for instance when just one out of a large suite of benchmarks was run, or when a user would list the collection of benchmarks without running any.

Creation. An environment is created right before its related benchmarks are run. The IO action that creates the environment is run, then the newly created environment is evaluated to normal form (hence the NFData constraint) before being passed to the function that receives the environment.

Complex environments. If you need to create an environment that contains multiple values, simply pack the values into a tuple.

Lazy pattern matching. In situations where a "real" environment is not needed, e.g. if a list of benchmark names is being generated, a value which throws an exception will be passed to the function that receives the environment. This avoids the overhead of generating an environment that will not actually be used. The function that receives the environment must use lazy pattern matching to deconstruct the tuple (e.g., ~(x, y), not (x, y)), as use of strict pattern matching will cause a crash if an exception-throwing value is passed in.

Example. This program runs benchmarks in an environment that contains two values. The first value is the contents of a text file; the second is a string. Pay attention to the use of a lazy pattern to deconstruct the tuple in the function that returns the benchmarks to be run.

    setupEnv = do
      let small = replicate 1000 (1 :: Int)
      big <- map length . words <$> readFile "/usr/dict/words"
      return (small, big)

    main = defaultMain [
        -- notice the lazy pattern match here!
        env setupEnv $ \ ~(small, big) -> bgroup "main"
          [ bgroup "small"
              [ bench "length" $ whnf length small
              , bench "length . filter" $ whnf (length . filter (== 1)) small
              ]
          , bgroup "big"
              [ bench "length" $ whnf length big
              , bench "length . filter" $ whnf (length . filter (== 1)) big
              ]
          ]
      ]

Discussion. The environment created in the example above is intentionally not ideal. As Haskell's scoping rules suggest, the variable big is in scope for the benchmarks that use only small. It would be better to create a separate environment for big, so that it will not be kept alive while the unrelated benchmarks are being run.

envWithCleanup: Same as env, but allows for an additional callback to clean up the environment. Resource cleanup is exception safe, that is, it runs even if the Benchmark throws an exception.

perBatchEnv: Create a Benchmarkable where a fresh environment is allocated for every batch of runs of the benchmarkable. The environment is evaluated to normal form before the benchmark is run.

When using whnf, whnfIO, etc., Criterion creates a Benchmarkable which runs a batch of N repeated runs of that expression. Criterion may run any number of these batches to get accurate measurements. Environments created by env and envWithCleanup are shared across all these batches of runs.

This is fine for simple benchmarks on static input, but when benchmarking IO operations that can modify (and especially grow) the environment, it means that later batches might have their accuracy affected by, for example, longer garbage collection pauses.

An example: suppose we want to benchmark writing to a Chan. If we allocate the Chan using env and our benchmark consists of writeChan env (), the contents and thus the size of the Chan will grow with every repeat. If Criterion runs 1,000 batches of 1,000 repeats, the channel will have 999,000 items in it by the time the last batch is run. Since GHC's garbage collector has to copy the live set on every major GC, our last set of writes will suffer a lot of noise from the previous repeats. By allocating a fresh environment for every batch of runs, this function should eliminate this effect. (A sketch of this scenario follows below.)
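A minimal sketch of that Chan scenario. Chan has no NFData instance, so the sketch wraps it in an illustrative newtype (not part of criterion) whose instance only forces the Chan to WHNF:

    import Control.Concurrent.Chan (Chan, newChan, writeChan)
    import Control.DeepSeq (NFData (..))
    import Criterion.Main

    -- Illustrative wrapper: a Chan can only be forced to WHNF.
    newtype ChanEnv = ChanEnv (Chan ())

    instance NFData ChanEnv where
      rnf (ChanEnv c) = c `seq` ()

    main :: IO ()
    main = defaultMain
      [ -- Each batch gets an empty channel, so no batch pays for the
        -- garbage left behind by earlier batches.
        bench "writeChan" $
          perBatchEnv (\_batchSize -> ChanEnv <$> newChan)
                      (\(ChanEnv c) -> writeChan c ())
      ]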
perBatchEnvWithCleanup: Same as perBatchEnv, but allows for an additional callback to clean up the environment. Resource cleanup is exception safe, that is, it runs even if the Benchmark throws an exception.

perRunEnv: Create a Benchmarkable where a fresh environment is allocated for every run of the operation to benchmark. This is useful for benchmarking mutable operations that need a fresh environment, such as sorting a mutable Vector. (See the sketch at the end of this section.)

As with env and perBatchEnv, the environment is evaluated to normal form before the benchmark is run.

This introduces extra noise and results in reduced accuracy compared to other Criterion benchmarks, but it allows easier benchmarking of mutable operations than was previously possible.

perRunEnvWithCleanup: Same as perRunEnv, but allows for an additional callback to clean up the environment. Resource cleanup is exception safe, that is, it runs even if the Benchmark throws an exception.

bench: Create a single benchmark.

bgroup: Group several benchmarks together under a common name.

addPrefix: Add the given prefix to a name. If the prefix is empty, the name is returned unmodified. Otherwise, the prefix and name are separated by a '/' character.

benchNames: Retrieve the names of all benchmarks. Grouped benchmarks are prefixed with the name of the group they're in.

Arguments:
  env:
    1. Create the environment. The environment will be evaluated to normal form before being passed to the benchmark.
    2. Take the newly created environment and make it available to the given benchmarks.
  envWithCleanup:
    1. Create the environment. The environment will be evaluated to normal form before being passed to the benchmark.
    2. Clean up the created environment.
    3. Take the newly created environment and make it available to the given benchmarks.
  perBatchEnv:
    1. Create an environment for a batch of N runs. The environment will be evaluated to normal form before running.
    2. Function returning the IO action that should be benchmarked with the newly generated environment.
  perBatchEnvWithCleanup:
    1. Create an environment for a batch of N runs. The environment will be evaluated to normal form before running.
    2. Clean up the created environment.
    3. Function returning the IO action that should be benchmarked with the newly generated environment.
  perRunEnv:
    1. Action that creates the environment for a single run.
    2. Function returning the IO action that should be benchmarked with the newly generated environment.
  perRunEnvWithCleanup:
    1. Action that creates the environment for a single run.
    2. Clean up the created environment.
    3. Function returning the IO action that should be benchmarked with the newly generated environment.
  bench:
    1. A name to identify the benchmark.
    2. An activity to be benchmarked.
  bgroup:
    1. A name to identify the group of benchmarks.
    2. Benchmarks to group under this name.
  addPrefix:
    1. Prefix.
    2. Name.
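The mutable-sort sketch referenced under perRunEnv above, assuming the vector and vector-algorithms packages (note that the thaw inside the benchmarked action is measured along with the sort):

    import qualified Data.Vector.Algorithms.Intro as Intro
    import qualified Data.Vector.Unboxed as U
    import Criterion.Main

    main :: IO ()
    main = defaultMain
      [ -- A fresh, immutable (and thus NFData-friendly) input per run;
        -- each run thaws its own copy and sorts it in place, so no run
        -- ever sees an already-sorted vector.
        bench "introsort" $
          perRunEnv (return (U.fromList [1000, 999 .. 1 :: Int]))
                    (\v -> U.thaw v >>= Intro.sort)
      ]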
Criterion.Monad.Internal
(c) 2009 Neil Brown, BSD-style, bos@serpentine.com, experimental, GHC, None

Criterion: The monad in which most criterion code executes.

Criterion.Measurement
(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, Trustworthy

GCStatistics: Statistics about memory usage and the garbage collector. Apart from gcStatsCurrentBytesUsed and gcStatsCurrentBytesSlop, all are cumulative values since the program started. GCStatistics is cargo-culted from the GCStats data type that GHC.Stats used to export. Since GCStats was removed in GHC 8.4, criterion uses GHC's newer statistics interface to provide a backwards-compatible view of GC statistics.
  gcStatsBytesAllocated: Total number of bytes allocated.
  gcStatsNumGcs: Number of garbage collections performed (any generation, major and minor).
  gcStatsMaxBytesUsed: Maximum number of live bytes seen so far.
  gcStatsNumByteUsageSamples: Number of byte usage samples taken, or equivalently the number of major GCs performed.
  gcStatsCumulativeBytesUsed: Sum of all byte usage samples; can be used with gcStatsNumByteUsageSamples to calculate averages with arbitrary weighting (if you are sampling this record multiple times).
  gcStatsBytesCopied: Number of bytes copied during GC.
  gcStatsCurrentBytesUsed: Number of live bytes at the end of the last major GC.
  gcStatsCurrentBytesSlop: Current number of bytes lost to slop.
  gcStatsMaxBytesSlop: Maximum number of bytes lost to slop at any one time so far.
  gcStatsPeakMegabytesAllocated: Maximum number of megabytes allocated.
  gcStatsMutatorCpuSeconds: CPU time spent running mutator threads. This does not include any profiling overhead or initialization.
  gcStatsMutatorWallSeconds: Wall clock time spent running mutator threads. This does not include initialization.
  gcStatsGcCpuSeconds: CPU time spent running GC.
  gcStatsGcWallSeconds: Wall clock time spent running GC.
  gcStatsCpuSeconds: Total CPU time elapsed since program start.
  gcStatsWallSeconds: Total wall clock time elapsed since program start.

getCPUTime: Return the amount of elapsed CPU time, combining user and kernel (system) time into a single measure.

getTime: Return the current wallclock time, in seconds since some arbitrary time. You must call initializeTime once before calling this function!

getCycles: Read the CPU cycle counter.

initializeTime: Set up time measurement.

getGCStatistics: Try to get GC statistics, bearing in mind that the GHC runtime will throw an exception if statistics collection was not enabled using "+RTS -T".

measure: Measure the execution of a benchmark a given number of times.
  Arguments: the operation to benchmark; the number of iterations.

threshold: The amount of time a benchmark must run for in order for us to have some trust in the raw measurement. We set this threshold so that we can generate enough data to later perform meaningful statistical analyses. The threshold is 30 milliseconds. One use of runBenchmark must accumulate more than 300 milliseconds of total measurements above this threshold before it will finish.

runBenchmark: Run a single benchmark, and return measurements collected while executing it, along with the amount of time the measurement process took.
  Argument: a lower bound on how long the benchmarking process should take. In practice, this time limit may be exceeded in order to generate enough data to perform meaningful statistical analyses.

measured: An empty structure.

applyGCStatistics: Apply the difference between two sets of GC statistics to a measurement.
  Arguments: statistics gathered at the end of a run; statistics gathered at the beginning of a run; the value to "modify".

secs: Convert a number of seconds to a string. The string will consist of four decimal places, followed by a short description of the time units.
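A minimal sketch of manual timing with these primitives, assuming criterion-1.3's Criterion.Measurement exports:

    import Criterion.Measurement (getTime, initializeTime, secs)

    main :: IO ()
    main = do
      initializeTime                      -- required before any getTime call
      t0 <- getTime
      print (sum [1 .. 10000000 :: Int])  -- some work to time
      t1 <- getTime
      putStrLn ("elapsed: " ++ secs (t1 - t0))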
Criterion.Monad
(c) 2009 Neil Brown, BSD-style, bos@serpentine.com, experimental, GHC, Trustworthy

withConfig: Run a Criterion action with the given Config.

getGen: Return a random number generator, creating one if necessary. This is not currently thread-safe, but in a harmless way (we might call createSystemRandom more than once if multiple threads race).

getOverhead: Return an estimate of the measurement overhead.

memoise: Memoise the result of an IO action. This is not currently thread-safe, but hopefully in a harmless way: we might call the given action more than once if multiple threads race, so our caller's job is to write actions that can be run multiple times safely.

Criterion.IO.Printf
(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, Trustworthy

CritHPrintfType: An internal class that acts like Printf/HPrintf. The implementation is visible to the rest of the program, but the details of the class are not.

note: Print a "normal" note.

prolix: Print verbose output.

printError: Print an error message.

writeCsv: Write a record to a CSV file.

Criterion.Analysis
(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, Trustworthy

classifyOutliers: Classify outliers in a data set, using the boxplot technique.

outlierVariance: Compute the extent to which outliers in the sample data affect the sample mean and standard deviation.
  Arguments: bootstrap estimate of the sample mean; bootstrap estimate of the sample standard deviation; number of original iterations.

countOutliers: Count the total number of outliers in a sample.

analyseMean: Display the mean of a Sample, and characterise the outliers present in the sample.
  Argument: the number of iterations used to compute the sample.

scale: Multiply the Estimates in an analysis by the given value.
  Argument: the value to multiply by.

analyseSample: Perform an analysis of a measurement.
  Arguments: experiment number; experiment name; sample data.

regress: Regress the given predictors against the responder. Errors may be returned under various circumstances, such as invalid names or lack of needed data. See olsRegress for details of the regression performed.
  Arguments: predictor names; responder name.

resolveAccessors: Given a list of accessor names (see measureKeys), return either a mapping from accessor name to function, or an error message if any names are wrong.

validateAccessors: Given predictor and responder names, do some basic validation, then hand back the relevant accessors.
  Arguments: predictor names; responder name.

noteOutliers: Display a report of the Outliers present in a Sample.

Criterion.Report
(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, Trustworthy

TemplateException: A problem arose with a template.
  TemplateNotFound: The template could not be found.

tidyTails: Trim long flat tails from a KDE plot.

getTemplateDir: Return the path to the template and other files used for generating reports. When the -fembed-data-files Cabal flag is enabled, this simply returns the empty path.

report: Write out a series of Report values to a single file, if configured to do so.

formatReport: Format a series of Report values using the given Mustache template.
  Argument: the Mustache template.

vector: Render the elements of a vector. It will substitute each value in the vector for x in the following Mustache template:

    {{#foo}}
     {{x}}
    {{/foo}}

  Argument: the name to use when substituting.

vector2: Render the elements of two vectors.
  Arguments: name for elements from the first vector; name for elements from the second vector; first vector; second vector.

includeFile: Attempt to include the contents of a file based on a search path. Returns Nothing if the search fails or the file could not be read. Intended for preprocessing Mustache files, e.g. replacing sections {{#include}}file.txt{{/include}} with file contents.
  Arguments: directories to search; name of the file to search for.

loadTemplate: Load a Mustache template file. If the name is an absolute or relative path, the search path is not used, and the name is treated as a literal path. If the -fembed-data-files Cabal flag is enabled, this also checks the embedded data-files from criterion.cabal. This function throws a TemplateNotFound exception if the template could not be found, or an IOException if no template could be loaded.
  Arguments: search path; name of the template file.
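A minimal sketch of the pure outlier entry points above (the sample values are made up; a Sample is an unboxed vector of Doubles):

    import Criterion.Analysis (classifyOutliers, countOutliers)
    import qualified Data.Vector.Unboxed as U

    main :: IO ()
    main = do
      -- Timings in seconds, with one obvious high outlier.
      let sample = U.fromList [0.101, 0.102, 0.100, 0.103, 0.420]
      print (countOutliers (classifyOutliers sample))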
Criterion.Main.Options
(c) 2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, None

Mode: Execution mode for a benchmark program.
  List: List all benchmarks.
  Version: Print the version.
  RunIters: Run the given benchmarks, without collecting or analysing performance numbers.
  Run: Run and analyse the given benchmarks.

MatchType: How to match a benchmark name.
  Prefix: Match by prefix. For example, a prefix of "foo" will match "foobar".
  Glob: Match by Unix-style glob pattern. When using this match type, benchmark names are treated as if they were file paths. For example, the glob patterns "*/ba*" and "*/*" will match "foo/bar", but "*" or "*bar" will not.
  Pattern: Match by searching for the given substring in benchmark paths.
  IPattern: Same as Pattern, but case insensitive.

defaultConfig: Default benchmarking configuration.

parseWith: Parse a command line.
  Argument: the default configuration to use if options are not explicitly specified.

config: Parse a configuration.

describe: Flesh out a command line parser.

versionInfo: A string describing the version of this benchmark (really, the version of criterion that was used to build it).

Criterion.IO
(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, Trustworthy

On disk we store (name, version, reports), where version is the version of criterion used to generate the file.

header: The header identifies a criterion data file. This contains version information; there is no expectation of cross-version compatibility.

headerRoot: The magic string we expect to start off the header.

critVersion: The current version of criterion, encoded into a string that is used in files.

hGetRecords: Read all records from the given Handle.

hPutRecords: Write records to the given Handle.

readRecords: Read all records from the given file.

writeRecords: Write records to the given file.

readJSONReports: Alternative file IO with JSON instances. Read a list of reports from a .json file produced by criterion. If the version does not match exactly, this issues a warning.

writeJSONReports: Write a list of reports to a JSON file. Includes a header, which includes the current criterion version number. This should be the inverse of readJSONReports.

Criterion.Internal
(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, None

runOne: Run a single benchmark.

analyseOne: Analyse a single benchmark.

runAndAnalyseOne: Run a single benchmark and analyse its performance.

runAndAnalyse: Run, and analyse, one or more benchmarks.
  Argument: a predicate that chooses whether to run a benchmark by its name.

rawReport: Write out raw binary report files. This has some bugs, including and not limited to #68, and may be slated for deprecation.

runFixedIters: Run a benchmark without analysing its performance.
  Arguments: number of loop iterations to run; a predicate that chooses whether to run a benchmark by its name.

for: Iterate over benchmarks.

json: Write a summary JSON file (if applicable).

junit: Write a summary JUnit file (if applicable).
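Since the options parser above is built on optparse-applicative, describe composes with execParser. A minimal sketch of driving the parser by hand with a tweaked default configuration:

    import Criterion.Main.Options (Mode, defaultConfig, describe)
    import Criterion.Types (Config (..))
    import Options.Applicative (execParser)

    main :: IO ()
    main = do
      -- Parse criterion's standard command line, with a non-default
      -- resample count baked in as the fallback.
      mode <- execParser (describe defaultConfig { resamples = 10000 })
      print (mode :: Mode)

In a real driver, the parsed Mode would then be handed to runMode (documented below) together with the benchmarks.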
Criterion.Main
(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, Trustworthy

defaultMain: An entry point that can be used as a main function.

    import Criterion.Main

    fib :: Int -> Int
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n-1) + fib (n-2)

    main = defaultMain [
        bgroup "fib" [ bench "10" $ whnf fib 10
                     , bench "35" $ whnf fib 35
                     , bench "37" $ whnf fib 37
                     ]
      ]

makeMatcher: Create a function that can tell if a name given on the command line matches a benchmark.

defaultMainWith: An entry point that can be used as a main function, with configurable defaults. Example:

    import Criterion.Main.Options
    import Criterion.Main

    myConfig = defaultConfig {
        -- Resample 10 times for bootstrapping
        resamples = 10
      }

    main = defaultMainWith myConfig [
        bench "fib 30" $ whnf fib 30
      ]

If you save the above example as "Fib.hs", you should be able to compile it as follows:

    ghc -O --make Fib

Run "Fib --help" on the command line to get a list of command line options.

runMode: Run a set of Benchmarks with the given Mode. This can be useful if you have a Mode from some other source (e.g. from one in your benchmark driver's command-line parser).

parseError: Display an error message from a command line parsing failure, and exit.
  Argument: command line arguments.

Criterion
(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, None

benchmark: Run a benchmark interactively, and analyse its performance.

benchmark': Run a benchmark interactively, analyse its performance, and return the analysis.

benchmarkWith: Run a benchmark interactively with the given Config, and analyse its performance.

benchmarkWith': Run a benchmark interactively with the given Config, analyse its performance, and return the analysis.
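A minimal sketch of interactive use (handy in GHCi), with a stand-in fib like the one in the defaultMain example:

    import Criterion (benchmark, benchmarkWith, whnf)
    import Criterion.Main (defaultConfig)
    import Criterion.Types (Config (..))

    fib :: Int -> Int
    fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = do
      benchmark (whnf fib 25)                                      -- run and analyse one Benchmarkable
      benchmarkWith defaultConfig { timeLimit = 1 } (whnf fib 25)  -- same, with a one-second time budget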