gauge-0.2.2

Gauge.CSV ((c) 2017 Vincent Hanquez)

  Escaping   Possible modes of escaping a field:
               * no escaping needed
               * normal quote escaping
               * the content needs doubling, because of a double quote in the content
  Row        A row of fields.
  Field      A CSV field (numerical or string); the content inside is properly escaped.
  float      Create a field from a Double.
  integral   Create a field from an integral number.
  string     Create a field from a String.
  outputRow  Output a row to a String.
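The escaping rule described above (wrap the field in double quotes and double any
embedded double quote) can be sketched as a standalone helper. escapeField below is
illustrative only and is not Gauge.CSV's own implementation.

  -- Illustrative sketch of the CSV escaping rule described above;
  -- not Gauge.CSV's own code.
  escapeField :: String -> String
  escapeField s
    | needsEscape = '"' : concatMap doubleQuote s ++ "\""
    | otherwise   = s
    where
      needsEscape     = any (`elem` ",\"\n") s
      doubleQuote '"' = "\"\""   -- a double quote in the content is doubled
      doubleQuote c   = [c]

  -- escapeField "plain"      == "plain"
  -- escapeField "a,b"        == "\"a,b\""
  -- escapeField "say \"hi\"" == "\"say \"\"hi\"\"\""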
Gauge.Optional ((c) 2017-2018 Vincent Hanquez)

  Optional    A type representing a sum-type-free 'Maybe a', where a specific tag
              value represents Nothing.
  toOptional  Create an optional value.

Gauge.Source.Time ((c) 2017 Vincent Hanquez)

  Records clock, cpu and cycles in one structure.

  getCPUTime  Return the amount of elapsed CPU time, combining user and kernel
              (system) time into a single measure.
  getTime     Return the current wallclock time, in seconds since some arbitrary
              time. You must call initializeTime once before calling this function.
  getCycles   Read the CPU cycle counter.
  initialize  Set up time measurement.

Gauge.Time

  PicoSeconds100  Represents a number of hundreds of picoseconds.
  NanoSeconds     Represents a number of nanoseconds.
  MicroSeconds    Represents a number of microseconds.
  MilliSeconds    Represents a number of milliseconds.

  Conversions between these units and Double are provided (picosecondsToNanoSeconds,
  microSecondsToDouble, milliSecondsToDouble, nanoSecondsToDouble, doubleToNanoSeconds,
  doubleToPicoseconds100); one of them returns the number of integral nanoseconds
  followed by the number of hundreds of picoseconds (1 digit).

Gauge.Source.RUsage ((c) 2017 Vincent Hanquez)

  get        Gather RUsage.
  with       Call a function f, gathering RUsage before and after the call.
  supported  On operating systems that do not support getrusage this is False,
             otherwise True.

Gauge.Source.GC ((c) 2017 Vincent Hanquez)

  Metrics  Differential metrics related to the RTS/GC:
             allocated       number of bytes allocated
             numGCs          number of GCs
             copied          number of bytes copied
             mutWallSeconds  mutator wall time measurement
             mutCpuSeconds   mutator cpu time measurement
             gcWallSeconds   gc wall time measurement
             gcCpuSeconds    gc cpu time measurement

  The module can also check whether RTS/GC metrics gathering is enabled, and return
  the RTS/GC metrics differential around a call to the function to measure
  (getMetrics, withMetrics).

Gauge.Measurement ((c) 2009-2014 Bryan O'Sullivan, BSD-style)

  Measured  A collection of measurements made while benchmarking. Measurements
            related to garbage collection are tagged with GC; they will only be
            available if a benchmark is run with "+RTS -T". Packed storage: when GC
            statistics cannot be collected, the GC values will be set to huge
            negative values. If a field is labeled with "GC" below, use fromInt and
            fromDouble to safely convert to "real" values.

    measIters               Number of loop iterations measured.
    measTime                Total wall-clock time elapsed, in seconds.
    measCycles              Cycles, in unspecified units that may be CPU cycles. (On
                            i386 and x86_64, this is measured using the rdtsc
                            instruction.)
    measCpuTime             Total CPU time elapsed, in seconds. Includes both user
                            and kernel (system) time.
    measUtime               User time.
    measStime               System time.
    measMaxrss              Maximum resident set size.
    measMinflt              Minor page faults.
    measMajflt              Major page faults.
    measNvcsw               Number of voluntary context switches.
    measNivcsw              Number of involuntary context switches.
    measAllocated           (GC) Number of bytes allocated. Access using fromInt.
    measNumGcs              (GC) Number of garbage collections performed. Access
                            using fromInt.
    measBytesCopied         (GC) Number of bytes copied during garbage collection.
                            Access using fromInt.
    measMutatorWallSeconds  (GC) Wall-clock time spent doing real work ("mutation"),
                            as distinct from garbage collection. Access using
                            fromDouble.
    measMutatorCpuSeconds   (GC) CPU time spent doing real work ("mutation"), as
                            distinct from garbage collection. Access using fromDouble.
    measGcWallSeconds       (GC) Wall-clock time spent doing garbage collection.
                            Access using fromDouble.
    measGcCpuSeconds        (GC) CPU time spent doing garbage collection. Access
                            using fromDouble.

  secs               Convert a number of seconds to a string. The string will consist
                     of four decimal places, followed by a short description of the
                     time units.
  measureKeys        Field names in a Measured record, in the order in which they
                     appear.
  measureAccessors   Field names and accessors for a Measured record.
  resolveAccessors   Given a list of accessor names (see measureKeys), return either
                     a mapping from accessor name to function, or an error message if
                     any names are wrong.
  validateAccessors  Given predictor and responder names, do some basic validation,
                     then hand back the relevant accessors.
  rescale            Normalise every measurement as if measIters was 1. (measIters
                     itself is left unaffected.)
  measure            Invokes the supplied benchmark runner function (for a given
                     number of iterations) with a combiner and a measurer that
                     returns the measurement of a single iteration of an IO action.
  measured           An empty structure.
  applyGCStatistics  Apply the difference between two sets of GC statistics (gathered
                     at the end and at the beginning of a run) to a measurement.
  applyRUStatistics  Apply the difference between two sets of rusage statistics
                     (gathered at the end and at the beginning of a run) to a
                     measurement.

Gauge.Format ((c) 2017 Vincent Hanquez)

  printNanoseconds  Print a NanoSeconds quantity with a human-friendly format that
                    makes it easy to compare different values. Given a separator
                    character of '_':

                      0           ->  " 0"
                      1000        ->  " 1_000"
                      1234567     ->  " 1_234_567"
                      10200300400 ->  "10_200_300_400"

                    Note that the seconds part is aligned assuming a maximum of 2
                    characters (i.e. 99 seconds).
  tableMarkdown     Produce a table in markdown. This is handy when you want to copy
                    and paste into a markdown-flavoured destination. Arguments: the
                    top-left corner label, the column labels, and a list of row
                    labels followed by content rows; the result is the table as a
                    string.
  reset, green, red, yellow
                    Reset, green, red and yellow ANSI escapes.
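The underscore grouping shown for printNanoseconds can be sketched as a standalone
helper. groupThousands below is illustrative only and assumes nothing about
Gauge.Format's actual implementation.

  import Data.List (intercalate)

  -- Illustrative sketch of the digit grouping shown above (separator '_');
  -- not Gauge.Format's own code.
  groupThousands :: Char -> Integer -> String
  groupThousands sep n =
    reverse . intercalate [sep] . chunksOf 3 . reverse . show $ n
    where
      chunksOf k xs = case splitAt k xs of
        (c, [])   -> [c]
        (c, rest) -> c : chunksOf k rest

  -- groupThousands '_' 1234567     == "1_234_567"
  -- groupThousands '_' 10200300400 == "10_200_300_400"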
Numeric.MathFunctions.Comparison ((c) 2011 Bryan O'Sullivan, BSD3)

  ulpDistance  Measure the distance between two Doubles in ULPs (units of least
               precision). Note that it is different from abs (ulpDelta a b), since
               it returns a correct result even when ulpDelta overflows.
  within       Compare two Double values for approximate equality, using Dawson's
               method. The required accuracy is specified in ULPs (units of least
               precision). If the two numbers differ by the given number of ULPs or
               less, this function returns True. The first argument is the number of
               ULPs of accuracy desired.

Numeric.MathFunctions.Constants ((c) 2009, 2011 Bryan O'Sullivan, BSD3)

  m_huge             Largest representable finite value.
  m_tiny             The smallest representable positive normalized value.
  m_max_exp          The largest Int x such that 2**(x-1) is approximately
                     representable as a Double.
  m_pos_inf          Positive infinity.
  m_neg_inf          Negative infinity.
  m_NaN              Not a number.
  m_max_log          Maximum possible finite value of log x.
  m_min_log          Logarithm of the smallest normalized double (m_tiny).
  m_sqrt_2           sqrt 2
  m_sqrt_2_pi        sqrt (2 * pi)
  m_2_sqrt_pi        2 / sqrt pi
  m_1_sqrt_2         1 / sqrt 2
  m_epsilon          The smallest Double e such that 1 + e /= 1.
  m_ln_sqrt_2_pi     log (sqrt (2 * pi))
  m_eulerMascheroni  Euler-Mascheroni constant (gamma = 0.57721...).

Numeric.SpecFunctions ((c) 2009, 2011, 2012 Bryan O'Sullivan, BSD3)

  erf      Error function.

             \[ \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\,dt \]

           Function limits are:
             erf(-inf) = -1,  erf(0) = 0,  erf(+inf) = 1.
  erfc     Complementary error function.

             \[ \operatorname{erfc}(x) = 1 - \operatorname{erf}(x) \]

           Function limits are:
             erfc(-inf) = 2,  erfc(0) = 1,  erfc(+inf) = 0.
  invErf   Inverse of erf; the argument lies in [-1, 1].
  invErfc  Inverse of erfc; the argument lies in [0, 2].
  log2     O(log n) Compute the logarithm in base 2 of the given value.
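A quick sanity check of the limits and the complement identity quoted above, assuming
the usual math-functions signatures erf, erfc :: Double -> Double for this vendored
module:

  import Numeric.SpecFunctions (erf, erfc)

  -- Sanity checks for the identities quoted above.
  main :: IO ()
  main = do
    print (erf 0)                                    -- ~0.0
    print (erf 3, erfc 3)                            -- erf near 1, erfc near 0
    print (abs (erfc 0.5 - (1 - erf 0.5)) < 1e-15)   -- complement identity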
Numeric.Sum ((c) 2014 Bryan O'Sullivan, BSD-style)

  KBNSum     Kahan-Babuska-Neumaier summation. This is a little more computationally
             costly than plain Kahan summation, but is always at least as accurate.
  Summation  A class for summation of floating point numbers:
               zero  The identity for summation.
               add   Add a value to a sum.
  sum        Sum a collection of values. Example: foo = sum kbn [1,2,3]
  kbn        Return the result of a Kahan-Babuska-Neumaier sum.
  sumVector  O(n) Sum a vector of values.
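Why compensated summation matters, as a small example (assuming the Numeric.Sum API
listed above, with sumVector kbn applied to an unboxed vector):

  import qualified Data.Vector.Unboxed as U
  import Numeric.Sum (kbn, sumVector)

  -- Naive summation loses the small terms next to the huge ones;
  -- the KBN sum recovers them.
  main :: IO ()
  main = do
    let xs = U.fromList [1, 1e100, 1, -1e100] :: U.Vector Double
    print (U.sum xs)          -- naive: 0.0
    print (sumVector kbn xs)  -- compensated: 2.0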
Gauge.Main.Options ((c) 2014 Bryan O'Sullivan, BSD-style)

  Config  Top-level benchmarking configuration.

    confInterval      Confidence interval for bootstrap estimation (greater than 0,
                      less than 1).
    forceGC           Obsolete, unused. This option used to force garbage collection
                      between every benchmark run, but it no longer has an effect (we
                      now unconditionally force garbage collection). It remains
                      solely for backwards API compatibility.
    timeLimit         Number of seconds to run a single benchmark. In practice,
                      execution time may exceed this limit to honor the minimum
                      number of samples or the minimum duration of each sample. An
                      increased time limit allows more samples to be taken. Use 0 for
                      a single-sample benchmark.
    minSamples        Minimum number of samples to be taken.
    minDuration       Minimum duration of each sample; an increased duration allows
                      more iterations to be performed in each sample. To enforce a
                      single iteration per sample, use a duration of 0.
    includeFirstIter  Whether to discard the very first iteration of a benchmark. The
                      first iteration includes the potentially extra cost of one-time
                      evaluations, introducing large variance.
    quickMode         Quickly measure and report raw measurements.
    measureOnly       Just measure the given benchmark and place the raw output in
                      this file; do not analyse it or generate a report.
    measureWith       Specify the path of the benchmarking program to use (this
                      program itself) for measuring the benchmarks in a separate
                      process.
    resamples         Number of resamples to perform when bootstrapping.
    regressions       Regressions to perform.
    rawDataFile       File to write binary measurement and analysis data to. If not
                      specified, this will be a temporary file.
    reportFile        File to write report output to, with the template expanded.
    csvFile           File to write the CSV summary to.
    csvRawFile        File to write CSV measurements to.
    jsonFile          File to write JSON-formatted results to.
    junitFile         File to write JUnit-compatible XML results to.
    verbosity         Verbosity level to use when running and analysing benchmarks.
    template          Template file to use if writing a report.
    iters             Number of iterations.
    match             Type of matching to use, if any.
    mode              Mode of operation.

  Mode           Execution mode for a benchmark program: List (list all benchmarks),
                 Version (print the version), Help (print help), DefaultMode (default
                 benchmark mode).
  MatchType      How to match a benchmark name: Prefix (match by prefix; for example,
                 a prefix of "foo" will match "foobar"), Pattern (match by searching
                 for the given substring in benchmark paths), IPattern (same as
                 Pattern, but case-insensitive).
  Verbosity      Control the amount of information displayed (Quiet, Normal,
                 Verbose).
  defaultConfig  Default benchmarking configuration.
  makeSelector   Create a benchmark selector function that can tell whether a name
                 given on the command line matches a defined benchmark.
  versionInfo    A string describing the version of this benchmark (really, the
                 version of gauge that was used to build it).
  parseWith      Parse the command line arguments, given a default configuration to
                 use and the program arguments.

Statistics.Distribution ((c) 2009 Bryan O'Sullivan, BSD3)

  ContDistr  Continuous probability distribution. The minimal complete definition is
             quantile and either density or logDensity.

    density        Probability density function. The probability that the random
                   variable X lies in the infinitesimal interval [x, x+dx) equals
                   density(x) * dx.
    quantile       Inverse of the cumulative distribution function. The value x for
                   which P(X <= x) = p. If the probability is outside the [0,1]
                   range, the function should call error.
    complQuantile  1-complement of quantile: complQuantile x = quantile (1 - x).
    logDensity     Natural logarithm of the density.

  Distribution  Type class common to all distributions. Only the c.d.f. can be
                defined for both discrete and continuous distributions.

    cumulative       Cumulative distribution function. The probability that a random
                     variable X is less than or equal to x, i.e. P(X <= x).
                     cumulative should be defined for infinities as well:
                       cumulative d +inf = 1
                       cumulative d -inf = 0
    complCumulative  One's complement of the cumulative distribution:
                       complCumulative d x = 1 - cumulative d x
                     It is useful when one is interested in P(X > x) and the
                     expression on the right-hand side begins to lose precision. This
                     function has a default implementation, but implementors are
                     encouraged to provide a more precise one.

Statistics.Function ((c) 2009, 2010, 2011 Bryan O'Sullivan, BSD3)

  sort                   Sort a vector.
  indices                Return the indices of a vector.
  minMax                 Compute the minimum and maximum of a vector in one pass.
  nextHighestPowerOfTwo  Efficiently compute the next highest power of two for a
                         non-negative integer. If the given value is already a power
                         of two, it is returned unchanged. If negative, zero is
                         returned.
  square                 Multiply a number by itself.
  for                    Simple for loop. Counts from start to end-1.
  rfor                   Simple reverse-for loop. Counts from start-1 to end (which
                         must be less than start).

Statistics.Distribution.Normal ((c) 2009 Bryan O'Sullivan, BSD3)

  NormalDistribution  The normal distribution.
  standard            Standard normal distribution, with mean equal to 0 and variance
                      equal to 1.
  normalDistr         Create a normal distribution from its mean and standard
                      deviation. IMPORTANT: prior to the 0.10 release the second
                      parameter was the variance, not the standard deviation.
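A small usage sketch tying the distribution classes and the normal distribution
together, assuming the usual statistics-style signatures for these vendored modules:

  import Statistics.Distribution        (complCumulative, cumulative, quantile)
  import Statistics.Distribution.Normal (normalDistr, standard)

  -- Usage sketch for the classes above (signatures assumed from the
  -- standalone statistics package).
  main :: IO ()
  main = do
    let d = normalDistr 100 15        -- mean 100, standard deviation 15
    print (cumulative d 100)          -- 0.5: half the mass lies below the mean
    print (complCumulative d 130)     -- P(X > 130), two std devs above the mean
    print (quantile standard 0.975)   -- ~1.96, the familiar 95% two-sided bound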
Statistics.Math.RootFinding ((c) 2011 Bryan O'Sullivan, BSD3)

  Root      The result of searching for a root of a mathematical function:
              NotBracketed  The function does not have opposite signs when evaluated
                            at the lower and upper bounds of the search.
              SearchFailed  The search failed to converge to within the given error
                            tolerance after the given number of iterations.
              Root          A root was successfully found.
  fromRoot  Returns either the result of a search for a root, or the default value if
            the search failed.
  ridders   Use the method of Ridders to compute a root of a function. The function
            must have opposite signs when evaluated at the lower and upper bounds of
            the search (i.e. the root must be bracketed). Arguments: the absolute
            error tolerance, the lower and upper bounds for the search, and the
            function to find the roots of.

Statistics.Matrix.Types (2014 Bryan O'Sullivan, BSD3)

  MMatrix  Two-dimensional mutable matrix, stored in row-major order.
  Matrix   Two-dimensional matrix, stored in row-major order:
             rows       Rows of the matrix.
             cols       Columns of the matrix.
             exponent_  In order to avoid overflows during matrix multiplication, a
                        large exponent is stored separately.
             vector     Matrix data.

Statistics.Matrix.Mutable ((c) 2014 Bryan O'Sullivan, BSD3)

  Mutable-matrix operations (thaw, unsafeFreeze, unsafeRead, unsafeWrite, immutably),
  plus an unchecked helper that, given row and column numbers, calculates the offset
  into the flat row-major vector.

Statistics.Quantile ((c) 2009 Bryan O'Sullivan, BSD3)

  weightedAvgSorted  O(n log n) Estimate the k-th q-quantile of a sample, using the
                     weighted average method. The following properties should hold:
                       * the length of the input is greater than 0
                       * the input does not contain NaN
                       * 0 <= k <= q
                     otherwise an error will be thrown. Arguments: k, the desired
                     quantile; q, the number of quantiles; x, the sample data.

Statistics.Sample.Histogram ((c) 2011 Bryan O'Sullivan, BSD3)

  histogram_  O(n) Compute a histogram over a data set. Interval (bin) sizes are
              uniform, based on the supplied upper and lower bounds. Arguments: the
              number of bins (this value must be positive; a zero or negative value
              will cause an error); the lower bound on the interval range (sample
              data less than this will cause an error); the upper bound on the
              interval range (this value must not be less than the lower bound;
              sample data that falls above the upper bound will cause an error); and
              the sample data.

Statistics.Sample ((c) 2008 Don Stewart, 2009 Bryan O'Sullivan, BSD3)

  mean              O(n) Arithmetic mean. This uses Kahan-Babuska-Neumaier summation,
                    so it is more accurate than welfordMean unless the input values
                    are very large.
  variance          Maximum likelihood estimate of a sample's variance. Also known as
                    the population variance, where the denominator is n.
  varianceUnbiased  Unbiased estimate of a sample's variance. Also known as the
                    sample variance, where the denominator is n-1.
  stdDev            Standard deviation. This is simply the square root of the
                    unbiased estimate of the variance.
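Usage sketch for the estimators above; these take an unboxed vector of Doubles (the
Sample type used throughout this package):

  import qualified Data.Vector.Unboxed as U
  import Statistics.Sample (mean, stdDev, variance, varianceUnbiased)

  main :: IO ()
  main = do
    let xs = U.fromList [2, 4, 4, 4, 5, 5, 7, 9] :: U.Vector Double
    print (mean xs)              -- 5.0
    print (variance xs)          -- population variance, denominator n   (4.0)
    print (varianceUnbiased xs)  -- sample variance, denominator n-1     (~4.571)
    print (stdDev xs)            -- sqrt of the unbiased variance        (~2.138)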
Statistics.Matrix (2011 Aleksey Khudyakov, 2014 Bryan O'Sullivan, BSD3)

  fromVector   Convert from a row-major vector, given the number of rows, the number
               of columns, and the flat list of values in row-major order.
  dimension    Return the dimensions of this matrix, as a (row, column) pair.
  multiplyV    Matrix-vector multiplication.
  norm         Calculate the Euclidean norm of a vector.
  column       Return the given column.
  row          Return the given row.
  unsafeIndex  Given row and column numbers, calculate the offset into the flat
               row-major vector, without checking.

Statistics.Matrix.Algorithms ((c) 2014 Bryan O'Sullivan, BSD3)

  qr  O(r*c) Compute the QR decomposition of a matrix. The result returned is the
      pair of matrices (q, r).

Statistics.Transform ((c) 2011 Bryan O'Sullivan, BSD3)

  dct   Discrete cosine transform (DCT-II).
  idct  Inverse discrete cosine transform (DCT-III). It is the inverse of dct only up
        to a scale factor: (idct . dct) x multiplies every element of x by the length
        of x.
  ifft  Inverse fast Fourier transform.
  fft   Radix-2 decimation-in-time fast Fourier transform.

Statistics.Sample.KernelDensity ((c) 2011 Bryan O'Sullivan, BSD3)

  kde   Gaussian kernel density estimator for one-dimensional data, using the method
        of Botev et al. The result is a pair of vectors, containing:
          * the coordinates of each mesh point (the mesh interval is chosen to be 20%
            larger than the range of the sample; to specify the mesh interval, use
            kde_);
          * density estimates at each mesh point.
        The first argument is the number of mesh points to use in the uniform
        discretization of the interval (min, max); if this value is not a power of
        two, it is rounded up to the next power of two.
  kde_  The same Gaussian kernel density estimator, with an explicit mesh range.
        Arguments: the number of mesh points (rounded up to the next power of two if
        necessary), the lower bound (min) of the mesh range, and the upper bound
        (max) of the mesh range.
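Usage sketch for kde, assuming the statistics-style signature
kde :: Int -> U.Vector Double -> (U.Vector Double, U.Vector Double) for this vendored
module:

  import qualified Data.Vector.Unboxed as U
  import Statistics.Sample.KernelDensity (kde)

  main :: IO ()
  main = do
    let sample = U.fromList [0.1, 0.2, 0.2, 0.3, 0.8, 0.9, 1.1] :: U.Vector Double
        (mesh, density) = kde 64 sample   -- 64 mesh points (a power of two)
    print (U.length mesh, U.length density)   -- (64, 64)
    print (U.take 3 mesh)                     -- first few mesh coordinates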
Statistics.Types ((c) 2009 Bryan O'Sullivan, BSD3)

  Sample    Sample data.
  Scale     Data types which can be multiplied by a constant.
  ConfInt   Confidence interval. It is assumed that the confidence interval forms a
            single interval and is not a set of disjoint intervals.
              confIntLDX  Lower error estimate, or the distance between the point
                          estimate and the lower bound of the confidence interval.
              confIntUDX  Upper error estimate, or the distance between the point
                          estimate and the upper bound of the confidence interval.
              confIntCL   Confidence level corresponding to the given confidence
                          interval.
  Estimate  A point estimate and its confidence interval. It is parametrized by both
            the error type e and the value type a. This module provides two types of
            error: NormalErr for normally distributed errors, and ConfInt for errors
            with a confidence interval. See their documentation for more details.
            For example, 144 +/- 5 (assuming normality) could be expressed as

              Estimate { estPoint = 144
                       , estError = NormalErr 5 }

            Or, if we want to express 144 +6 -4 at CL95, we could write:

              Estimate { estPoint = 144
                       , estError = ConfInt { confIntLDX = 4
                                            , confIntUDX = 6
                                            , confIntCL  = cl95 } }

            Prior to statistics 0.14 the Estimate data type used the following
            definition:

              data Estimate = Estimate
                { estPoint           :: {-# UNPACK #-} !Double
                , estLowerBound      :: {-# UNPACK #-} !Double
                , estUpperBound      :: {-# UNPACK #-} !Double
                , estConfidenceLevel :: {-# UNPACK #-} !Double
                }

            Now the type Estimate ConfInt Double should be used instead. The function
            estimateFromInterval allows an estimate to be constructed easily from the
            same inputs.

              estPoint  Point estimate.
              estError  Confidence interval for the estimate.

  PValue  Newtype wrapper for a p-value.
  CL      Confidence level. In the context of confidence intervals it is the
          probability of said interval covering the true value of the measured
          quantity. In the context of statistical tests it is 1-a, where a is the
          significance of the test. Since confidence levels are usually close to 1,
          they are stored as 1-CL internally. There are two smart constructors for
          CL: mkCL and mkCLFromSignificance (and corresponding variants returning
          Maybe). The first creates a CL from a confidence level, the second from
          1-CL, i.e. the significance level.

            >>> cl95
            mkCLFromSignificance 0.05

          Prior to 0.14, confidence levels were passed to functions as plain Doubles.
          Use mkCL to convert them to CL.

  mkCL                   Create a confidence level from the probability that the
                         confidence interval contains the true value of the estimate.
                         Throws an exception if the parameter is out of the [0,1]
                         range.

                           >>> mkCL 0.95    -- same as cl95
                           mkCLFromSignificance 0.05

  mkCLE                  Same as mkCL, but returns Nothing instead of an error if the
                         parameter is out of the [0,1] range.

                           >>> mkCLE 0.95   -- same as cl95
                           Just (mkCLFromSignificance 0.05)

  mkCLFromSignificanceE  Same as mkCLFromSignificance, but returns Nothing instead of
                         an error if the parameter is out of the [0,1] range.

                           >>> mkCLFromSignificanceE 0.05   -- same as cl95
                           Just (mkCLFromSignificance 0.05)

  confidenceLevel        Get the confidence level. This function is subject to
                         rounding errors; if 1-CL is needed, use significanceLevel
                         instead.
  significanceLevel      Get the significance level.
  cl95                   95% confidence level.

                           >>> cl95 > cl90
                           True

  mkPValueE              Construct a PValue. Returns Nothing if the argument is out
                         of the [0,1] range.
  estimateFromErr        Create an estimate with asymmetric error, from a central
                         estimate, lower and upper errors (both should be positive,
                         but this is not checked) and a confidence level.
  estimateFromInterval   Create an estimate with asymmetric error, from a point
                         estimate, the lower and upper bounds of the interval (the
                         point estimate should lie within the interval, but this is
                         not checked) and a confidence level.
  confidenceInterval     Get the confidence interval.
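A small hedged sketch tying these together, using the names documented above; the
exact signatures of estimateFromErr and confidenceInterval are assumed:

  import Statistics.Types (cl95, confidenceInterval, estimateFromErr)

  -- Hedged sketch: construct an asymmetric estimate and read the interval back.
  example :: (Double, Double)
  example =
    let est = estimateFromErr 144 (4, 6) cl95   -- 144 (-4, +6) at 95% confidence
    in confidenceInterval est                   -- expected to be (140, 150)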
System.Random.MWC ((c) 2009-2012 Bryan O'Sullivan, BSD3)

  GenIO    A shorter name for the PRNG state in the IO monad.
  Gen      State of the pseudo-random number generator. It uses mutable state, so the
           same generator shouldn't be used from different threads simultaneously.
  Variate  The class of types for which we can generate uniformly distributed random
           variates.

           The uniform PRNG uses Marsaglia's MWC256 (also known as MWC8222)
           multiply-with-carry generator, which has a period of 2^8222 and fares well
           in tests of randomness. It is also extremely fast, between 2 and 3 times
           faster than the Mersenne Twister.

           Note: Marsaglia's PRNG is not known to be cryptographically secure, so you
           should not use it for cryptographic operations.

    uniform   Generate a single uniformly distributed random variate. The range of
              values produced varies by type:
                * for fixed-width integral types, the type's entire range is used;
                * for floating point numbers, the range (0,1] is used. Zero is
                  explicitly excluded, to allow variates to be used in statistical
                  calculations that require non-zero values (e.g. uses of logFloat).
              To generate a Float variate with a range of [0,1), subtract 2**(-33).
              To do the same with Double variates, subtract 2**(-53).
    uniformR  Generate a single uniformly distributed random variable in a given
              range. For integral types an inclusive range is used. For floating
              point numbers the range (a,b] is used, if one ignores rounding errors.

  initialize          Create a generator for variates using the given seed of 256
                      elements:

                        gen' <- initialize . fromSeed =<< save

  acquireSeedSystem   Acquire a seed from the system entropy source. On Unix
                      machines, this will attempt to use /dev/urandom. On Windows, it
                      will internally use RtlGenRandom.
  createSystemRandom  Seed a PRNG with data from the system's fast source of
                      pseudo-random numbers.
  nextIndex           Compute the next index into the state pool. This is simply
                      addition modulo 256.
  uniformVector       Generate a vector of pseudo-random variates. This is not
                      necessarily faster than invoking uniform repeatedly in a loop,
                      but it may be more convenient to use in some situations.
  splitGen            Split a generator into several that can run independently.
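Usage sketch for the generator API above; signatures are assumed to match the
upstream mwc-random package from which this module is vendored:

  import Control.Monad (replicateM)
  import System.Random.MWC (createSystemRandom, uniform, uniformR)

  main :: IO ()
  main = do
    gen <- createSystemRandom            -- seeded from the system entropy source
    x   <- uniform gen :: IO Double      -- uniform in (0,1]
    die <- uniformR (1, 6 :: Int) gen    -- inclusive range for integral types
    xs  <- replicateM 5 (uniform gen :: IO Double)
    print (x, die, xs)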
Statistics.Resampling ((c) 2009, 2010 Bryan O'Sullivan, BSD3)

  Estimator             An estimator of a property of a sample, such as its mean. The
                        use of an algebraic data type here allows functions such as
                        jackknife and bootstrapBCA to use more efficient algorithms
                        when possible. (Constructors: Mean, Variance,
                        VarianceUnbiased, StdDev, Function.)
  estimate              Run an Estimator over a sample.
  resample              O(e*r*s) Resample a data set repeatedly, with replacement,
                        computing each estimate over the resampled data. This
                        function is expensive; it has to do work proportional to
                        e*r*s, where e is the number of estimation functions, r is
                        the number of resamples to compute, and s is the number of
                        original samples. To improve performance, this function will
                        make use of all available CPUs. At least with GHC 7.0,
                        parallel performance seems best if the parallel garbage
                        collector is disabled (RTS option -qg). Arguments: the
                        estimation functions, the number of resamples to compute, and
                        the original sample.
  resampleVector        Create a vector using resamples.
  jackknife             O(n) or O(n^2) Compute a statistical estimate repeatedly over
                        a sample, each time omitting a successive element.
  jackknifeMean         O(n) Compute the jackknife mean of a sample.
  jackknifeVariance_    O(n) Compute the jackknife variance of a sample with a
                        correction factor c, so we can get either the regular or
                        "unbiased" variance.
  jackknifeVarianceUnb  O(n) Compute the unbiased jackknife variance of a sample.
  jackknifeVariance     O(n) Compute the jackknife variance of a sample.
  jackknifeStdDev       O(n) Compute the jackknife standard deviation of a sample.
  dropAt                Drop the k-th element of a vector.

Statistics.Resampling.Bootstrap ((c) 2009, 2011 Bryan O'Sullivan, BSD3)

  bootstrapBCA  Bias-corrected accelerated (BCA) bootstrap. This adjusts for both
                bias and skewness in the resampled distribution. The BCA algorithm is
                described in ch. 5 of Davison and Hinkley, "Confidence intervals", in
                section 5.3, "Percentile method". Arguments: the confidence level,
                the full data sample, and the estimates obtained from the resampled
                data together with the estimator used for them.
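A hedged sketch of the jackknife API above: each output element is the estimate
computed with one input element left out. The signature is assumed to be
jackknife :: Estimator -> U.Vector Double -> U.Vector Double, as in the standalone
statistics package.

  import qualified Data.Vector.Unboxed as U
  import Statistics.Resampling (Estimator (..), jackknife)

  main :: IO ()
  main = do
    let xs = U.fromList [1, 2, 3, 4, 5] :: U.Vector Double
    print (jackknife Mean xs)    -- five leave-one-out means
    print (jackknife StdDev xs)  -- five leave-one-out standard deviations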
Statistics.Regression (2014 Bryan O'Sullivan, BSD3)

  olsRegress        Perform an ordinary least-squares regression on a set of
                    predictors, and calculate the goodness-of-fit of the regression.
                    The returned pair consists of:
                      * a vector of regression coefficients. This vector has one more
                        element than the list of predictors; the last element is the
                        y-intercept value.
                      * R^2, the coefficient of determination (see rSquare for
                        details).
                    Arguments: a non-empty list of predictor (regressor) vectors,
                    which must all have the same length and become the columns of the
                    matrix A solved by ols; and the responder vector, which must have
                    the same length as the predictor vectors.
  ols               Compute the ordinary least-squares solution to A x = b, where A
                    has at least as many rows as columns and b has the same length as
                    the columns of A.
  solve             Solve the equation R x = b, where R is an upper-triangular square
                    matrix and b is of the same length as the rows/columns of R.
  rSquare           Compute R^2, the coefficient of determination that indicates the
                    goodness-of-fit of a regression. This value will be 1 if the
                    predictors fit perfectly, dropping to 0 if they have no
                    explanatory power.
  bootstrapRegress  Bootstrap a regression function. Returns both the results of the
                    regression (the regression coefficients) and the requested
                    confidence interval values. Arguments: the number of resamples to
                    compute, the confidence level, the regression function, the
                    predictor vectors, and the responder vector.
  balance           Balance units of work across workers.
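Usage sketch for olsRegress as documented above: one predictor column and a responder
y = 2x + 1, so the returned coefficients should be approximately [2, 1] (slope, then
y-intercept) with R^2 close to 1.

  import qualified Data.Vector.Unboxed as U
  import Statistics.Regression (olsRegress)

  main :: IO ()
  main = do
    let xs = U.fromList [1, 2, 3, 4, 5]   :: U.Vector Double
        ys = U.fromList [3, 5, 7, 9, 11]  :: U.Vector Double
        (coeffs, r2) = olsRegress [xs] ys
    print coeffs   -- last element is the y-intercept
    print r2       -- coefficient of determination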
Gauge.Monad ((c) 2009 Neil Brown, BSD-style)

  Gauge       Essentially a reader monad that makes the benchmark configuration
              available throughout the code.
  askConfig   Retrieve the configuration from the Gauge monad.
  gaugeIO     Lift an IO action into the Gauge monad.
  withConfig  Run a Gauge action with the given Config.

Gauge.IO.Printf ((c) 2009-2014 Bryan O'Sullivan, BSD-style)

  HPrintfType      An internal class that acts like Printf/HPrintf. The
                   implementation is visible to the rest of the program, but the
                   details of the class are not.
  note             Print a "normal" note.
  prolix           Print verbose output.
  printError       Print an error message.
  rewindClearLine  ANSI escape (on Unix) to rewind and clear the line to the end.

Gauge.Benchmark ((c) 2009-2014 Bryan O'Sullivan, BSD-style)

  Benchmark      Specification of a collection of benchmarks and environments. A
                 benchmark may consist of:
                   * an environment that creates input data for benchmarks, created
                     with env;
                   * a single Benchmark item with a name, created with bench;
                   * a (possibly nested) group of Benchmarks, created with bgroup.

  Benchmarkable  A pure function or impure action that can be benchmarked. The
                 function to be benchmarked is wrapped into a function
                 (runRepeatedly) that takes an Int64 parameter indicating the number
                 of times to run the given function or action. The wrapper is
                 constructed automatically by the APIs provided in this library to
                 construct a Benchmarkable.

                 When perRun is not set, runRepeatedly is invoked to perform all
                 iterations in one measurement interval. When perRun is set,
                 runRepeatedly is always invoked with 1 iteration per measurement
                 interval; allocEnv is invoked before a measurement and cleanEnv (the
                 function to run after measurement) is invoked after it. The
                 performance counters for each iteration are then added together for
                 all iterations.

  toBenchmarkable  A low-level function to construct a Benchmarkable value from an
                   impure wrapper action, where the Int64 parameter dictates the
                   number of times the wrapped action would run. You would normally
                   use the other, higher-level APIs rather than this function to
                   construct a benchmarkable.

  whnf    Apply an argument to a function, and evaluate the result to weak head
          normal form (WHNF).
  nf      Apply an argument to a function, and evaluate the result to normal form
          (NF).
  nfIO    Perform an action, then evaluate its result to normal form. This is
          particularly useful for forcing a lazy IO action to be completely
          performed.
  whnfIO  Perform an action, then evaluate its result to weak head normal form
          (WHNF). This is useful for forcing an IO action whose result is an
          expression to be evaluated down to a more useful value.

  perBatchEnv  Create a Benchmarkable where a fresh environment is allocated for
               every batch of runs of the benchmarkable. The environment is evaluated
               to normal form before the benchmark is run.

               When using whnf, whnfIO, etc., Gauge creates a Benchmarkable which
               runs a batch of N repeated runs of that expression. Gauge may run any
               number of these batches to get accurate measurements. Environments
               created by env and envWithCleanup are shared across all these batches
               of runs.

               This is fine for simple benchmarks on static input, but when
               benchmarking IO operations that can modify (and especially grow) the
               environment, later batches might have their accuracy affected due to,
               for example, longer garbage collection pauses.

               An example: suppose we want to benchmark writing to a Chan. If we
               allocate the Chan using env and our benchmark consists of
               writeChan env (), the contents, and thus the size, of the Chan will
               grow with every repeat. If Gauge runs 1,000 batches of 1,000 repeats,
               the channel will have 999,000 items in it by the time the last batch
               is run. Since GHC's GC has to copy the live set for every major GC,
               our last set of writes will suffer a lot of noise from the previous
               repeats.

               By allocating a fresh environment for every batch of runs, this
               function should eliminate this effect.

               Arguments: a function that creates an environment for a batch of N
               runs (the environment will be evaluated to normal form before
               running), and a function returning the IO action that should be
               benchmarked with the newly generated environment.

  perBatchEnvWithCleanup  Same as perBatchEnv, but allows for an additional callback
                          to clean up the environment. Resource clean-up is exception
                          safe, that is, it runs even if the Benchmark throws an
                          exception.

  perRunEnv  Create a Benchmarkable where a fresh environment is allocated for every
             run of the operation to benchmark. This is useful for benchmarking
             mutable operations that need a fresh environment, such as sorting a
             mutable Vector.

             As with env and perBatchEnv, the environment is evaluated to normal form
             before the benchmark is run.

             This introduces extra noise and reduces accuracy compared to other Gauge
             benchmarks, but allows easier benchmarking of mutable operations than
             was previously possible. Arguments: an action that creates the
             environment for a single run, and a function returning the IO action
             that should be benchmarked with the newly generated environment.

  perRunEnvWithCleanup  Same as perRunEnv, but allows for an additional callback to
                        clean up the environment. Resource clean-up is exception
                        safe, that is, it runs even if the Benchmark throws an
                        exception.

  bench   Create a single benchmark, from a name to identify the benchmark and an
          activity to be benchmarked.
  bgroup  Group several benchmarks together under a common name, given a name to
          identify the group and the benchmarks to group under this name.

  env  Run a benchmark (or collection of benchmarks) in the given environment. The
       purpose of an environment is to lazily create input data to pass to the
       functions that will be benchmarked.

       A common example of environment data is input that is read from a file.
       Another is a large data structure constructed in place.

       By deferring the creation of an environment until its associated benchmarks
       need it, we avoid two problems that the eager strategy caused:

         * Memory pressure distorted the results of unrelated benchmarks. If one
           benchmark needed e.g. a gigabyte-sized input, it would force the garbage
           collector to do extra work when running some other benchmark that had no
           use for that input. Since the data created by an environment is only
           available when it is in scope, it should be garbage collected before other
           benchmarks are run.

         * The time cost of generating all needed inputs could be significant in
           cases where no inputs (or just a few) were really needed. This occurred
           often, for instance when just one out of a large suite of benchmarks was
           run, or when a user would list the collection of benchmarks without
           running any.

       Creation. An environment is created right before its related benchmarks are
       run. The IO action that creates the environment is run, then the newly created
       environment is evaluated to normal form (hence the NFData constraint) before
       being passed to the function that receives the environment.

       Complex environments. If you need to create an environment that contains
       multiple values, simply pack the values into a tuple.

       Lazy pattern matching. In situations where a "real" environment is not needed,
       e.g. if a list of benchmark names is being generated, undefined will be passed
       to the function that receives the environment. This avoids the overhead of
       generating an environment that will not actually be used. The function that
       receives the environment must use lazy pattern matching to deconstruct the
       tuple, as use of strict pattern matching will cause a crash if undefined is
       passed in.

       Example. This program runs benchmarks in an environment that contains two
       values. The first value is the contents of a text file; the second is a
       string. Pay attention to the use of a lazy pattern to deconstruct the tuple in
       the function that returns the benchmarks to be run.

         setupEnv = do
           let small = replicate 1000 (1 :: Int)
           big <- map length . words <$> readFile "/usr/dict/words"
           return (small, big)

         main = defaultMain [
             -- notice the lazy pattern match here!
             env setupEnv $ \ ~(small,big) -> bgroup "main"
             [ bgroup "small"
               [ bench "length" $ whnf length small
               , bench "length . filter" $ whnf (length . filter (==1)) small
               ]
             , bgroup "big"
               [ bench "length" $ whnf length big
               , bench "length . filter" $ whnf (length . filter (==1)) big
               ]
             ]
           ]

       Discussion. The environment created in the example above is intentionally not
       ideal. As Haskell's scoping rules suggest, the variable big is in scope for
       the benchmarks that use only small. It would be better to create a separate
       environment for big, so that it will not be kept alive while the unrelated
       benchmarks are being run.

       Arguments: an action that creates the environment (it will be evaluated to
       normal form before being passed to the benchmark), and a function that takes
       the newly created environment and makes it available to the given benchmarks.

  envWithCleanup  Same as env, but allows for an additional callback to clean up the
                  environment. Resource clean-up is exception safe, that is, it runs
                  even if the Benchmark throws an exception. Arguments: create the
                  environment, clean up the created environment, and take the newly
                  created environment and make it available to the given benchmarks.

  addPrefix   Add the given prefix to a name. If the prefix is empty, the name is
              returned unmodified. Otherwise, the prefix and name are separated by a
              '/' character.
  benchNames  Retrieve the names of all benchmarks. Grouped benchmarks are prefixed
              with the name of the group they're in.

  iterateBenchmarkable  Take a Benchmarkable, a number of iterations, a function to
                        combine the results of multiple iterations, and a measurement
                        function to measure the stats over a number of iterations.
  runBenchmarkable      Run a single benchmark, and return the measurements collected
                        while executing it, along with the amount of time the
                        measurement process took. The benchmark will not terminate
                        until all of the specified minimum bounds (minimum sample
                        duration in ms, minimum number of samples) are reached. Once
                        the minimum bounds are satisfied, the benchmark terminates as
                        soon as any of the maximums (the upper bound on how long the
                        benchmarking process should take) is reached.
  runBenchmarkIsolated  Run a single benchmark measurement in a separate process.
  runBenchmarkable'     Run a single benchmarkable and return the result.
  runBenchmark          Run benchmarkables, selected by a given selector function
                        (select benchmarks by name), under a given benchmark, and
                        analyse the output using the given analysis function.
  withSystemTempFile    Arguments: a file name template and a callback that can use
                        the file.
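A minimal sketch of the call shape of perRunEnv, assuming the signature
perRunEnv :: (NFData env, NFData b) => IO env -> (env -> IO b) -> Benchmarkable. The
environment action runs before every measured run, and the measured action thaws and
destructively reverses its own copy, so no state leaks between runs; reverseInPlace
is an illustrative helper.

  import qualified Data.Vector.Unboxed         as U
  import qualified Data.Vector.Unboxed.Mutable as M
  import Gauge.Benchmark (bench, perRunEnv)
  import Gauge.Main (defaultMain)

  main :: IO ()
  main = defaultMain
    [ bench "destructive reverse" $
        perRunEnv (return (U.enumFromN 0 10000 :: U.Vector Double)) $ \v -> do
          mv <- U.thaw v     -- fresh mutable copy for this run
          reverseInPlace mv  -- the mutable operation under test
    ]

  -- In-place reversal of a mutable vector (illustrative helper).
  reverseInPlace :: M.IOVector Double -> IO ()
  reverseInPlace mv = go 0 (M.length mv - 1)
    where
      go i j | i >= j    = return ()
             | otherwise = M.swap mv i j >> go (i + 1) (j - 1)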
Gauge.Analysis ((c) 2009-2014 Bryan O'Sullivan, BSD-style)

  Report  Report of a sample analysis:
            reportName      The name of this report.
            reportKeys      See measureKeys.
            reportMeasured  Raw measurements.
            reportAnalysis  Report analysis.
            reportOutliers  Analysis of outliers.
            reportKDEs      Data for a KDE of times.
  KDE     Data for a KDE chart of performance.

  SampleAnalysis  Result of a bootstrap analysis of a non-parametric sample:
                    anRegress     Estimates calculated via linear regression.
                    anMean        Estimated mean.
                    anStdDev      Estimated standard deviation.
                    anOutlierVar  Description of the effects of outliers on the
                                  estimated variance.

  Regression  Results of a linear regression:
                regResponder  Name of the responding variable.
                regCoeffs     Map from name to value of predictor coefficients.
                regRSquare    R^2 goodness-of-fit estimate.

  OutlierVariance  Analysis of the extent to which outliers in a sample affect its
                   standard deviation (and to some extent, its mean):
                     ovEffect    Qualitative description of the effect.
                     ovDesc      Brief textual description of the effect.
                     ovFraction  Quantitative description of the effect (a fraction
                                 between 0 and 1).

  OutlierEffect  A description of the extent to which outliers in the sample data
                 affect the sample mean and standard deviation:
                   Unaffected  Less than 1% effect.
                   Slight      Between 1% and 10%.
                   Moderate    Between 10% and 50%.
                   Severe      Above 50% (i.e. the measurements are useless).

  Outliers  Outliers from sample data, calculated using the boxplot technique:
              lowSevere   More than 3 times the interquartile range (IQR) below the
                          first quartile.
              lowMild     Between 1.5 and 3 times the IQR below the first quartile.
              highMild    Between 1.5 and 3 times the IQR above the third quartile.
              highSevere  More than 3 times the IQR above the third quartile.

  classifyOutliers  Classify outliers in a data set, using the boxplot technique.
  outlierVariance   Compute the extent to which outliers in the sample data affect
                    the sample mean and standard deviation. Arguments: the bootstrap
                    estimate of the sample mean, the bootstrap estimate of the sample
                    standard deviation, and the number of original iterations.
  countOutliers     Count the total number of outliers in a sample.
  analyseMean       Display the mean of a sample, and characterise the outliers
                    present in the sample. Argument: the number of iterations used to
                    compute the sample.
  scale             Multiply the Estimates in an analysis by the given value.
  getGen            Return a random number generator, creating one if necessary. This
                    is not currently thread-safe, but in a harmless way (we might
                    call createSystemRandom more than once if multiple threads race).
  memoise           Memoise the result of an IO action. This is not currently
                    thread-safe, but hopefully in a harmless way. We might call the
                    given action more than once if multiple threads race, so our
                    caller's job is to write actions that can be run multiple times
                    safely.
  analyseSample     Perform an analysis of a measurement, given an experiment name
                    and the sample data.
  regress           Regress the given predictors (by name) against the responder (by
                    name). Errors may be returned under various circumstances, such
                    as invalid names or a lack of needed data. See olsRegress for
                    details of the regression performed.
  noteOutliers      Display a report of the Outliers present in a sample.
  analyseBenchmark  Analyse a single benchmark.
  benchmarkWith'    Run a benchmark interactively and analyse its performance.
  benchmark'        Run a benchmark interactively and analyse its performance.

Gauge.Main ((c) 2009-2014 Bryan O'Sullivan, BSD-style)

  defaultMain  An entry point that can be used as a main function.

                 import Gauge.Main

                 fib :: Int -> Int
                 fib 0 = 0
                 fib 1 = 1
                 fib n = fib (n-1) + fib (n-2)

                 main = defaultMain [
                     bgroup "fib" [ bench "10" $ whnf fib 10
                                  , bench "35" $ whnf fib 35
                                  , bench "37" $ whnf fib 37
                                  ]
                   ]

  parseError     Display an error message from a command line parsing failure, and
                 exit.
  quickAnalyse   Analyse a single benchmark, printing just the time by default and
                 all stats in verbose mode.
  benchmarkWith  Run a benchmark interactively with the supplied config, and analyse
                 its performance.
  benchmark      Run a benchmark interactively with the default config, and analyse
                 its performance.

  defaultMainWith  An entry point that can be used as a main function, with
                   configurable defaults. Example:

                     import Gauge.Main.Options
                     import Gauge.Main

                     myConfig = defaultConfig {
                                  -- Do not GC between runs.
                                  forceGC = False
                                }

                     main = defaultMainWith myConfig [
                               bench "fib 30" $ whnf fib 30
                             ]

                   If you save the above example as "Fib.hs", you should be able to
                   compile it as follows:

                     ghc -O --make Fib

                   Run "Fib --help" on the command line to get a list of command line
                   options.

  runMode  Run a set of Benchmarks with the given Mode. This can be useful if you
           have a Mode from some other source (e.g. from one in your benchmark
           driver's command-line parser).
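For one-off measurements from GHCi, the interactive entry points above can be used
directly. A hedged sketch follows; the signatures of benchmark and benchmarkWith and
the exact Config field types are assumed from the documentation above.

  import Gauge.Benchmark (whnf)
  import Gauge.Main (benchmark, benchmarkWith)
  import Gauge.Main.Options (Config (..), defaultConfig)

  -- Hedged sketch of the interactive entry points documented above.
  fib :: Int -> Int
  fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

  main :: IO ()
  main = do
    benchmark (whnf fib 25)                             -- default configuration
    benchmarkWith (defaultConfig { quickMode = True })  -- quick raw measurement
                  (whnf fib 25)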