CSV output  (c) 2017 Vincent Hanquez

Possible modes of escaping a CSV field:
* none
* normal quote escaping
* content needs doubling because of a double quote in the content

A Row of fields. A CSV Field (numerical or string). The content inside is properly escaped. Create a field from a Double. Create a field from an integral number. Create a field from a String. Output a row to a String.

Optional values  (c) 2017-2018 Vincent Hanquez

A type representing a sum-type-free "Maybe a", where a specific tag represents Nothing. Create an optional value from an "a".

Time measurement  (c) 2017 Vincent Hanquez

Record clock, CPU and cycles in one structure. Return the amount of elapsed CPU time, combining user and kernel (system) time into a single measure. Return the current wallclock time, in seconds since some arbitrary time. You must call the initialization function once before calling this function. Read the CPU cycle counter. Set up time measurement.

Time units:
* Represent a number of hundreds of picoseconds.
* Represent a number of nanoseconds.
* Represent a number of microseconds.
* Represent a number of milliseconds.
Return the number of integral nanoseconds followed by the number of hundreds of picoseconds (1 digit).

Resource usage  (c) 2017 Vincent Hanquez

Gather RUsage. Call a function f, gathering RUsage before and after the call. On operating systems not supporting getrusage this will be False, otherwise True.
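The "gather a counter before and after the call" pattern described above can be sketched generically in plain Haskell; `measureAround` and `snapshot` are illustrative names, not gauge's actual API:

```haskell
-- Generic sketch of measuring a resource counter around an action:
-- snapshot before and after, return the action's result plus the delta.
import Data.IORef

measureAround :: IO Int -> IO a -> IO (a, Int)
measureAround snapshot act = do
  before <- snapshot
  r      <- act
  after  <- snapshot
  pure (r, after - before)

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)          -- stand-in for an OS counter
  (_, delta) <- measureAround (readIORef counter)
                              (modifyIORef counter (+3))
  print delta  -- 3
```

The real implementation would snapshot getrusage (or the cycle counter) instead of an IORef, but the before/after structure is the same.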
RTS/GC metrics  (c) 2017 Vincent Hanquez

Differential metrics related to the RTS/GC:
* number of bytes allocated
* number of GCs
* number of bytes copied
* mutator wall time measurement
* mutator CPU time measurement
* GC wall time measurement
* GC CPU time measurement
Check whether RTS/GC metrics gathering is enabled. Return the RTS/GC metrics differential around a call to f, the function to measure.

Measurements  (c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com

A collection of measurements made while benchmarking. Measurements related to garbage collection are tagged with GC. They will only be available if a benchmark is run with "+RTS -T". Packed storage. When GC statistics cannot be collected, GC values will be set to huge negative values. If a field is labeled with "GC" below, use fromInt and fromDouble to safely convert to "real" values.

Fields:
* Number of loop iterations measured.
* Total wall-clock time elapsed, in seconds.
* Cycles, in unspecified units that may be CPU cycles. (On i386 and x86_64, this is measured using the rdtsc instruction.)
* Total CPU time elapsed, in seconds. Includes both user and kernel (system) time.
* User time.
* System time.
* Maximum resident set size.
* Minor page faults.
* Major page faults.
* Number of voluntary context switches.
* Number of involuntary context switches.
* (GC) Number of bytes allocated. Access using fromInt.
* (GC) Number of garbage collections performed. Access using fromInt.
* (GC) Number of bytes copied during garbage collection. Access using fromInt.
* (GC) Wall-clock time spent doing real work ("mutation"), as distinct from garbage collection. Access using fromDouble.
* (GC) CPU time spent doing real work ("mutation"), as distinct from garbage collection. Access using fromDouble.
* (GC) Wall-clock time spent doing garbage collection. Access using fromDouble.
* (GC) CPU time spent doing garbage collection. Access using fromDouble.

Convert a number of seconds to a string.
The string will consist of four decimal places, followed by a short description of the time units. Field names in a Measured record, in the order in which they appear. Field names and accessors for a Measured record. Given a list of accessor names, return either a mapping from accessor name to function or an error message if any names are wrong. Given predictor and responder names, do some basic validation, then hand back the relevant accessors. Normalise every measurement as if the iteration count were 1 (the iteration count itself is left unaffected). Invoke the supplied benchmark runner function with a combiner and a measurer that returns the measurement of a single iteration of an IO action. An empty structure. Apply the difference between two sets of GC statistics to a measurement. Apply the difference between two sets of rusage statistics to a measurement.

Parameters: predictor names; responder name; number of iterations; statistics gathered at the end of a run; statistics gathered at the beginning of a run; value to "modify".

Configuration  (c) 2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com

Top-level benchmarking configuration:
* Confidence interval for bootstrap estimation (greater than 0, less than 1).
* Obsolete, unused. This option used to force garbage collection between every benchmark run, but it no longer has an effect (we now unconditionally force garbage collection). It remains solely for backwards API compatibility.
* Number of seconds to run a single benchmark. In practice, execution time may exceed this limit to honor the minimum number of samples or the minimum duration of each sample. An increased time limit allows us to take more samples. Use 0 for a single-sample benchmark.
* Minimum number of samples to be taken.
* Minimum duration of each sample; an increased duration allows us to perform more iterations in each sample. To enforce a single iteration per sample, use duration 0.
* Discard the very first iteration of a benchmark.
The first iteration includes the potentially extra cost of one-time evaluations, introducing large variance.
* Quickly measure and report raw measurements.
* Just measure the given benchmark and place the raw output in this file; do not analyse or generate a report.
* Specify the path of the benchmarking program to use (this program itself) for measuring the benchmarks in a separate process.
* Number of resamples to perform when bootstrapping.
* Regressions to perform.
* File to write binary measurement and analysis data to. If not specified, this will be a temporary file.
* File to write report output to, with template expanded.
* File to write CSV summary to.
* File to write CSV measurements to.
* File to write JSON-formatted results to.
* File to write JUnit-compatible XML results to.
* Verbosity level to use when running and analysing benchmarks.
* Template file to use if writing a report.

Parameters: number of iterations; type of matching to use, if any; mode of operation.

Execution mode for a benchmark program: list all benchmarks; print the version; print help; default benchmark mode.

How to match a benchmark name:
* Match by prefix. For example, a prefix of "foo" will match "foobar".
* Match by searching for the given substring in benchmark paths.
* Same as the above, but case insensitive.

Control the amount of information displayed. Default benchmarking configuration. Create a benchmark selector function that can tell whether a name given on the command line matches a defined benchmark. A string describing the version of this benchmark (really, the version of gauge that was used to build it).

Parameters: command line arguments; default configuration to use; program argument.

The Gauge monad  (c) 2009 Neil Brown, BSD-style, bos@serpentine.com

Gauge is essentially a reader monad that makes the benchmark configuration available throughout the code. Retrieve the configuration from the Gauge monad. Lift an IO action into the Gauge monad. Run a Gauge action with the given configuration.
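The configuration options described above live in a record of defaults that can be overridden per program. A sketch, assuming gauge's criterion-style `Config` record (the field name `csvFile` matches the "File to write CSV summary to" option above, but should be checked against the installed version):

```haskell
-- Sketch: override the default configuration to also emit a CSV summary.
-- Field names are assumptions based on the documented options.
import Gauge.Main         (defaultMainWith, bench, whnf)
import Gauge.Main.Options (defaultConfig, Config (..))

main :: IO ()
main = defaultMainWith
         defaultConfig { csvFile = Just "results.csv" }
         [ bench "succ" $ whnf (succ :: Int -> Int) 41 ]
```

Any option not overridden keeps its value from defaultConfig.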
Output helpers  (c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com

An internal class that acts like Printf/HPrintf. The implementation is visible to the rest of the program, but the details of the class are not. Print a "normal" note. Print verbose output. Print an error message. ANSI escape on Unix to rewind and clear the line to the end.

Benchmark construction  (c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com

Specification of a collection of benchmarks and environments. A benchmark may consist of:
* An environment that creates input data for benchmarks, created with env.
* A single Benchmarkable item with a name, created with bench.
* A (possibly nested) group of Benchmarks, created with bgroup.

A pure function or impure action that can be benchmarked. The function to be benchmarked is wrapped into a function that takes an Int64 parameter indicating the number of times to run the given function or action. The wrapper is constructed automatically by the APIs provided in this library to construct a Benchmarkable.

When per-run setup is not set, the wrapper is invoked to perform all iterations in one measurement interval. When per-run setup is set, the wrapper is always invoked with 1 iteration in one measurement interval; the setup runs before each measurement and the cleanup runs after it. The performance counters for each iteration are then added together for all iterations.

This is a low-level function to construct a Benchmarkable value from an impure wrapper action, where the Int64 parameter dictates the number of times the wrapped action would run. You would normally use the other, higher-level APIs rather than this function to construct a benchmarkable.

Apply an argument to a function, and evaluate the result to weak head normal form (WHNF). Apply an argument to a function, and evaluate the result to normal form (NF). Perform an action, then evaluate its result to normal form.
This is particularly useful for forcing a lazy IO action to be completely performed. Perform an action, then evaluate its result to weak head normal form (WHNF). This is useful for forcing an IO action whose result is an expression to be evaluated down to a more useful value.

Same as the batch-environment combinator below, but allows for an additional callback to clean up the environment. Resource clean-up is exception safe; that is, it runs even if the benchmark throws an exception.

Create a Benchmarkable where a fresh environment is allocated for every batch of runs of the benchmarkable. The environment is evaluated to normal form before the benchmark is run. When using whnf, whnfIO, etc., Gauge creates a Benchmarkable which runs a batch of N repeated runs of that expression. Gauge may run any number of these batches to get accurate measurements. Environments created by env and envWithCleanup are shared across all these batches of runs.

This is fine for simple benchmarks on static input, but when benchmarking IO operations that can modify (and especially grow) the environment, later batches might have their accuracy affected due to, for example, longer garbage collection pauses.

An example: suppose we want to benchmark writing to a Chan. If we allocate the Chan using an environment and our benchmark consists of writeChan env (), the contents and thus the size of the Chan will grow with every repeat. If Gauge runs 1,000 batches of 1,000 repeats, the channel will have 999,000 items in it by the time the last batch is run. Since the GHC GC has to copy the live set for every major GC, our last set of writes will suffer a lot of noise from the previous repeats.

By allocating a fresh environment for every batch of runs, this function should eliminate this effect.

Same as the per-run-environment combinator below, but allows for an additional callback to clean up the environment.
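The WHNF/NF distinction described above matters for lazy results: a sketch in the standard gauge bench/bgroup style, showing how much of a lazy list each strategy forces.

```haskell
-- Sketch of the evaluation strategies described above.
import Gauge.Main (defaultMain, bench, bgroup, whnf, nf, nfIO)

main :: IO ()
main = defaultMain
  [ bgroup "map"
    [ -- whnf forces only the outermost constructor (the first (:)),
      -- so most of the mapped list is never evaluated.
      bench "whnf" $ whnf (map (+1)) [1 .. 1000 :: Int]
      -- nf forces the entire result list, element by element.
    , bench "nf"   $ nf   (map (+1)) [1 .. 1000 :: Int]
    ]
    -- nfIO runs the action and forces its result to normal form,
    -- useful for lazy IO such as readFile.
  , bench "readFile" $ nfIO (readFile "/etc/hosts")
  ]
```

Benchmarking with whnf where nf was intended is a classic source of misleadingly fast numbers, since the work being measured is never actually performed.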
Resource clean-up is exception safe; that is, it runs even if the benchmark throws an exception.

Create a Benchmarkable where a fresh environment is allocated for every run of the operation to benchmark. This is useful for benchmarking mutable operations that need a fresh environment, such as sorting a mutable Vector. As with env and the batch-environment combinator, the environment is evaluated to normal form before the benchmark is run. This introduces extra noise and results in reduced accuracy compared to other Gauge benchmarks, but allows easier benchmarking of mutable operations than was previously possible.

Create a single benchmark. Group several benchmarks together under a common name.

Run a benchmark (or collection of benchmarks) in the given environment. The purpose of an environment is to lazily create input data to pass to the functions that will be benchmarked. A common example of environment data is input that is read from a file. Another is a large data structure constructed in place. By deferring the creation of an environment until its associated benchmarks need it, we avoid two problems that eager creation would cause:

* Memory pressure distorting the results of unrelated benchmarks. If one benchmark needed e.g. a gigabyte-sized input, it would force the garbage collector to do extra work when running some other benchmark that had no use for that input. Since the data created by an environment is only available when it is in scope, it should be garbage collected before other benchmarks are run.
* The time cost of generating all needed inputs could be significant in cases where no inputs (or just a few) were really needed. This occurred often, for instance when just one out of a large suite of benchmarks was run, or when a user listed the collection of benchmarks without running any.

Creation. An environment is created right before its related benchmarks are run.
The IO action that creates the environment is run, then the newly created environment is evaluated to normal form (hence the NFData constraint) before being passed to the function that receives the environment.

Complex environments. If you need to create an environment that contains multiple values, simply pack the values into a tuple.

Lazy pattern matching. In situations where a "real" environment is not needed, e.g. if a list of benchmark names is being generated, undefined will be passed to the function that receives the environment. This avoids the overhead of generating an environment that will not actually be used. The function that receives the environment must use lazy pattern matching to deconstruct the tuple, as use of strict pattern matching will cause a crash if undefined is passed in.

Example. This program runs benchmarks in an environment that contains two values. The first value is the contents of a text file; the second is a string. Pay attention to the use of a lazy pattern to deconstruct the tuple in the function that returns the benchmarks to be run.

  setupEnv = do
    let small = replicate 1000 (1 :: Int)
    big <- map length . words <$> readFile "/usr/dict/words"
    return (small, big)

  main = defaultMain [
     -- notice the lazy pattern match here!
     env setupEnv $ \ ~(small, big) -> bgroup "main"
       [ bgroup "small"
         [ bench "length" $ whnf length small
         , bench "length . filter" $ whnf (length . filter (==1)) small
         ]
       , bgroup "big"
         [ bench "length" $ whnf length big
         , bench "length . filter" $ whnf (length . filter (==1)) big
         ]
       ]
     ]

Discussion. The environment created in the example above is intentionally not ideal. As Haskell's scoping rules suggest, the variable big is also in scope for the benchmarks that use only small. It would be better to create a separate environment for big, so that it will not be kept alive while the unrelated benchmarks are being run.

Same as env, but allows for an additional callback to clean up the environment.
Resource clean-up is exception safe; that is, it runs even if the benchmark throws an exception.

Add the given prefix to a name. If the prefix is empty, the name is returned unmodified. Otherwise, the prefix and name are separated by a '/' character.

Retrieve the names of all benchmarks. Grouped benchmarks are prefixed with the name of the group they're in.

Take a Benchmarkable, a number of iterations, a function to combine the results of multiple iterations, and a measurement function to measure the stats over a number of iterations.

Run a single benchmark, and return the measurements collected while executing it, along with the amount of time the measurement process took. The benchmark will not terminate until we reach all the minimum bounds specified. Once the minimum bounds are satisfied, the benchmark will terminate as soon as we reach any of the maximums.

Run a single benchmark measurement in a separate process. Run a single benchmarkable and return the result.

Run benchmarkables, selected by a given selector function, under a given benchmark, and analyse the output using the given analysis function. Run a benchmark without analysing its performance. Iterate over benchmarks.

Parameters: create an environment for a batch of N runs (the environment will be evaluated to normal form before running); clean up the created environment; function returning the IO action that should be benchmarked with the newly generated environment. Create an environment for a batch of N runs.
The environment will be evaluated to normal form before running. Function returning the IO action that should be benchmarked with the newly generated environment. Action that creates the environment for a single run. Clean up the created environment. A name to identify the benchmark. An activity to be benchmarked. A name to identify the group of benchmarks. Benchmarks to group under this name. Create the environment (it will be evaluated to normal form before being passed to the benchmark). Take the newly created environment and make it available to the given benchmarks. Prefix. Name. Minimum sample duration in ms. Minimum number of samples. Upper bound on how long the benchmarking process should take. File name template. Callback that can use the file. Select benchmarks by name. Analysis function. Number of iterations to run.

Probability distributions  (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com

Continuous probability distribution. Minimal complete definition is quantile and either density or logDensity.

* Probability density function. The probability that random variable X lies in the infinitesimal interval [x, x+dx) is equal to density(x) * dx.
* Inverse of the cumulative distribution function. The value x for which P(X <= x) = p. If the probability is outside the [0,1] range, the function should call error.
* 1-complement of quantile: complQuantile x = quantile (1 - x).
* Natural logarithm of density.

Type class common to all distributions.
Only the c.d.f. can be defined for both discrete and continuous distributions.

* Cumulative distribution function. The probability that a random variable X is less than or equal to x, i.e. P(X <= x). The c.d.f. should be defined for infinities as well: cumulative d +inf = 1, cumulative d -inf = 0.
* One's complement of the cumulative distribution: complCumulative d x = 1 - cumulative d x. It is useful when one is interested in P(X > x) and the expression on the right-hand side begins to lose precision. This function has a default implementation, but implementors are encouraged to provide a more precise one.

Vector utilities  (c) 2009, 2010, 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com

Sort a vector. Return the indices of a vector. Compute the minimum and maximum of a vector in one pass. Efficiently compute the next highest power of two for a non-negative integer; if the given value is already a power of two it is returned unchanged, and if negative, zero is returned. Multiply a number by itself. Simple for loop: counts from start to end-1. Simple reverse for loop: counts from start-1 down to end (which must be less than start).
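The precision argument for complCumulative above can be seen with plain Doubles: when the c.d.f. is very close to 1, computing 1 - cumulative cancels nearly all significant digits.

```haskell
-- Demonstrates the catastrophic cancellation motivating complCumulative.
-- For a standard normal, P(X > 10) is roughly 7.6e-24, but computing it
-- as 1 - cumulative underflows to 0 because cumulative rounds to 1.0
-- (Double has only about 16 significant decimal digits).
main :: IO ()
main = do
  let cdf = 1 - 7.6e-24 :: Double  -- what `cumulative d 10` would return
  print cdf        -- 1.0: the tail mass is already lost
  print (1 - cdf)  -- 0.0: all precision gone
```

A direct implementation of the complement computes the tail mass without ever forming the near-1 intermediate, preserving those 24 orders of magnitude.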
Normal distribution  (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com

The normal distribution. Standard normal distribution, with mean equal to 0 and variance equal to 1. Create a normal distribution from parameters. IMPORTANT: prior to the 0.10 release, the second parameter was the variance, not the standard deviation. Parameters: mean of the distribution; standard deviation of the distribution.

Root finding  (c) 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com

The result of searching for a root of a mathematical function:
* The function does not have opposite signs when evaluated at the lower and upper bounds of the search.
* The search failed to converge to within the given error tolerance after the given number of iterations.
* A root was successfully found.

Return either the result of a search for a root, or the default value if the search failed. Use the method of Ridders to compute a root of a function. The function must have opposite signs when evaluated at the lower and upper bounds of the search (i.e. the root must be bracketed). Parameters: default value; result of search for a root; absolute error tolerance; lower and upper bounds for the search; function to find the roots of.

Matrices  2014 Bryan O'Sullivan, BSD3

Two-dimensional mutable matrix, stored in row-major order. Two-dimensional matrix, stored in row-major order. Rows of matrix. Columns of matrix. In order to avoid overflows during matrix multiplication, a large exponent is stored separately. Matrix data. Given row and column numbers, calculate the offset into the flat row-major vector, without checking.

Quantiles  (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com

O(n log n). Estimate the kth q-quantile of a sample, using the weighted average method. The following properties should hold:
* the length of the input is greater than 0
* the input does not contain NaN
* k >= 0 and k <= q
Otherwise an error will be thrown. Parameters: k, the desired quantile; q, the number of quantiles; x, the sample data.

Histograms  (c) 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com

O(n). Compute a histogram over a data set. Interval (bin) sizes are uniform, based on the supplied upper and lower bounds. Parameters: number of bins (must be positive; a zero or negative value will cause an error); lower bound on the interval range (sample data less than this will cause an error); upper bound on the interval range (must not be less than the lower bound; sample data that falls above it will cause an error); sample data.

Sample statistics  (c) 2008 Don Stewart, 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com

O(n). Arithmetic mean. This uses Kahan-Babuska-Neumaier summation, so it is more accurate than welfordMean unless the input values are very large. Maximum likelihood estimate of a sample's variance, also known as the population variance, where the denominator is n. Unbiased estimate of a sample's variance, also known as the sample variance, where the denominator is n-1. Standard deviation: simply the square root of the unbiased estimate of the variance.

Matrix operations  2011 Aleksey Khudyakov, 2014 Bryan O'Sullivan, BSD3

Convert from a row-major vector. Return the dimensions of this matrix, as a (row, column) pair. Matrix-vector multiplication. Calculate the Euclidean norm of a vector. Return the given column. Return the given row. Given row and column numbers, calculate the offset into the flat row-major vector, without checking. Parameters: number of rows; number of columns; flat list of values, in row-major order; row; column.

QR decomposition  2014 Bryan O'Sullivan, BSD3

O(r*c). Compute the QR decomposition of a matrix.
The result returned is the pair of matrices (q, r).

Transforms  (c) 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com

Discrete cosine transform (DCT-II). Inverse discrete cosine transform (DCT-III); it is the inverse of the DCT only up to a scale parameter: (idct . dct) x = (* length x). Inverse fast Fourier transform. Radix-2 decimation-in-time fast Fourier transform.

Kernel density estimation  (c) 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com

Gaussian kernel density estimator for one-dimensional data, using the method of Botev et al. The result is a pair of vectors, containing:
* The coordinates of each mesh point. The mesh interval is chosen to be 20% larger than the range of the sample. (To specify the mesh interval, use the variant that takes explicit bounds.)
* Density estimates at each mesh point.

Parameters: the number of mesh points to use in the uniform discretization of the interval (min, max) (if this value is not a power of two, it is rounded up to the next power of two); lower bound (min) of the mesh range; upper bound (max) of the mesh range; sample data.

Estimate types  (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com

Data types which can be multiplied by a constant. Confidence interval.
It assumes that the confidence interval forms a single interval and is not a set of disjoint intervals. Lower error estimate, or distance between the point estimate and the lower bound of the confidence interval. Upper error estimate, or distance between the point estimate and the upper bound of the confidence interval. Confidence level corresponding to the given confidence interval.

A point estimate and its confidence interval. It is parametrized by both an error type e and a value type a. This module provides two types of error: NormalErr for normally distributed errors, and ConfInt for errors carrying a confidence interval. See their documentation for more details. For example, 144 +/- 5 (assuming normality) could be expressed as:

  Estimate { estPoint = 144
           , estError = NormalErr 5 }

Or, to express 144 +6/-4 at CL95:

  Estimate { estPoint = 144
           , estError = ConfInt { confIntLDX = 4
                                , confIntUDX = 6
                                , confIntCL  = cl95 } }

Prior to statistics 0.14 the Estimate data type used the following definition:

  data Estimate = Estimate
    { estPoint           :: {-# UNPACK #-} !Double
    , estLowerBound      :: {-# UNPACK #-} !Double
    , estUpperBound      :: {-# UNPACK #-} !Double
    , estConfidenceLevel :: {-# UNPACK #-} !Double
    }

Now the type Estimate ConfInt Double should be used instead. A helper function allows an estimate to be constructed easily from the same inputs. Fields: point estimate; confidence interval for the estimate.

Newtype wrapper for a p-value.

Confidence level. In the context of confidence intervals it is the probability of the interval covering the true value of the measured quantity. In the context of statistical tests it is 1-a, where a is the significance of the test. Since confidence levels are usually close to 1, they are stored as 1-CL internally. There are two smart constructors for CL: mkCL and mkCLFromSignificance (and corresponding variants returning Maybe). The first creates a CL from a confidence level, the second from 1 - CL, i.e. the significance level.

  cl95 = mkCLFromSignificance 0.05

Prior to 0.14, confidence levels were passed to functions as plain Doubles.
Use mkCL to convert them to CL. Create a confidence level from the probability that the confidence interval contains the true value of the estimate; throws an exception if the parameter is outside the [0,1] range.

  mkCL 0.95  -- same as cl95

Same as mkCL, but returns Nothing instead of an error if the parameter is outside the [0,1] range.

  mkCLE 0.95  -- Just cl95

Same as mkCLFromSignificance, but returns Nothing instead of an error if the parameter is outside the [0,1] range.

  mkCLFromSignificanceE 0.05  -- Just cl95

Get the confidence level. This function is subject to rounding errors; if 1 - CL is needed, use the significance accessor instead. Get the significance level. 95% confidence level. Construct a PValue; returns Nothing if the argument is outside the [0,1] range. Create an estimate with asymmetric error. Get the confidence interval. Note that the Ord instance gives cl95 > cl90 == True. Parameters: central estimate; lower and upper errors (both should be positive, but this is not checked); confidence level for the interval; point estimate (should lie within the interval, but this is not checked); lower and upper bounds of the interval; confidence level for the interval.

Resampling  (c) 2009, 2010 Bryan O'Sullivan, BSD3, bos@serpentine.com

An estimator of a property of a sample, such as its variance. The use of an algebraic data type here allows functions such as jackknife and bootstrapBCA to use more efficient algorithms when possible. Run an estimator over a sample.

O(e*r*s). Resample a data set repeatedly, with replacement, computing each estimate over the resampled data. This function is expensive; it has to do work proportional to e*r*s, where e is the number of estimation functions, r is the number of resamples to compute, and s is the number of original samples. To improve performance, this function will make use of all available CPUs.
At least with GHC 7.0, parallel performance seems best if the parallel garbage collector is disabled (RTS option -qg).

Create a vector using resamples. O(n) or O(n^2). Compute a statistical estimate repeatedly over a sample, each time omitting a successive element. O(n). Compute the jackknife mean of a sample. O(n). Compute the jackknife variance of a sample with a correction factor c, so we can get either the regular or "unbiased" variance. O(n). Compute the unbiased jackknife variance of a sample. O(n). Compute the jackknife variance of a sample. O(n). Compute the jackknife standard deviation of a sample. Drop the kth element of a vector. Split a generator into several that can run independently. Parameters: estimation functions; number of resamples to compute; original sample.

Bootstrap  (c) 2009, 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com

Bias-corrected accelerated (BCA) bootstrap. This adjusts for both bias and skewness in the resampled distribution. The BCA algorithm is described in ch. 5 of Davison and Hinkley, "Confidence intervals", section 5.3, "Percentile method". Parameters: confidence level; full data sample; estimates obtained from resampled data and the estimator used for them.

Regression  2014 Bryan O'Sullivan, BSD3

Perform an ordinary least-squares regression on a set of predictors, and calculate the goodness of fit of the regression. The returned pair consists of:
* A vector of regression coefficients. This vector has one more element than the list of predictors; the last element is the y-intercept value.
* R^2, the coefficient of determination.

Compute the ordinary least-squares solution to A x = b. Solve the equation R x = b. Compute R^2, the coefficient of determination that indicates the goodness of fit of a regression; this value will be 1 if the predictors fit perfectly, dropping to 0 if they have no explanatory power.

Bootstrap a regression function.
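The R^2 described above is 1 - SS_res/SS_tot; a naive reference version in plain Haskell (not the library's optimized vector implementation, and ignoring the no-intercept subtleties a real regression must handle):

```haskell
-- Reference computation of the coefficient of determination:
-- 1 when predictions are perfect, 0 when the model explains nothing
-- beyond the mean of the observations.
rSquare :: [Double] -> [Double] -> Double
rSquare observed predicted = 1 - ssRes / ssTot
  where
    m     = sum observed / fromIntegral (length observed)
    ssRes = sum [ (o - p) ^ (2 :: Int) | (o, p) <- zip observed predicted ]
    ssTot = sum [ (o - m) ^ (2 :: Int) | o <- observed ]

main :: IO ()
main = do
  print (rSquare [1,2,3] [1,2,3])  -- 1.0: perfect fit
  print (rSquare [1,2,3] [2,2,2])  -- 0.0: no better than the mean
```

In gauge this statistic is what tells you how much of the iteration-time variance the chosen predictors (iteration count, allocation, GC time, and so on) actually explain.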
Returns both the results of the regression and the requested confidence interval values.%Balance units of work across workers.tNon-empty list of predictor vectors. Must all have the same length. These will become the columns of the matrix A solved by .GResponder vector. Must have the same length as the predictor vectors.A& has at least as many rows as columns.b# has the same length as columns in A.R& is an upper-triangular square matrix.b* is of the same length as rows/columns in R.Predictors (regressors). Responders.Regression coefficients.Number of resamples to compute.Confidence level.Regression function.Predictor vectors.Responder vector.(c) 2009-2014 Bryan O'Sullivan BSD-stylebos@serpentine.com experimentalGHC Trustworthy "#16FT>,hReport of a sample analysis.The name of this report.See -.Raw measurements.Report analysis.Analysis of outliers.Data for a KDE of times.$Data for a KDE chart of performance.i:Result of a bootstrap analysis of a non-parametric sample.k+Estimates calculated via linear regression.lEstimated mean.mEstimated standard deviation.nBDescription of the effects of outliers on the estimated variance.Results of a linear regression. Name of the responding variable.1Map from name to value of predictor coefficients.R goodness-of-fit estimate.osAnalysis of the extent to which outliers in a sample affect its standard deviation (and to some extent, its mean).q"Qualitative description of effect.r$Brief textual description of effect.s@Quantitative description of effect (a fraction between 0 and 1).tpA description of the extent to which outliers in the sample data affect the sample mean and standard deviation.uLess than 1% effect.vBetween 1% and 10%.wBetween 10% and 50%.x+Above 50% (i.e. 
measurements are useless).

Outliers: Outliers from sample data, calculated using the boxplot technique.
  lowSevere: More than 3 times the interquartile range (IQR) below the first quartile.
  lowMild: Between 1.5 and 3 times the IQR below the first quartile.
  highMild: Between 1.5 and 3 times the IQR above the third quartile.
  highSevere: More than 3 times the IQR above the third quartile.

classifyOutliers: Classify outliers in a data set, using the boxplot technique.

outlierVariance: Compute the extent to which outliers in the sample data affect the sample mean and standard deviation.

countOutliers: Count the total number of outliers in a sample.

analyseMean: Display the mean of a Sample, and characterise the outliers present in the sample.

scale: Multiply the Estimates in an analysis by the given value.

getGen: Return a random number generator, creating one if necessary. This is not currently thread-safe, but in a harmless way (we might call createSystemRandom more than once if multiple threads race).

memoise: Memoise the result of an IO action. This is not currently thread-safe, but hopefully in a harmless way. We might call the given action more than once if multiple threads race, so our caller's job is to write actions that can be run multiple times safely.

analyseSample: Perform an analysis of a measurement.

regress: Regress the given predictors against the responder. Errors may be returned under various circumstances, such as invalid names or lack of needed data. See olsRegress for details of the regression performed.

noteOutliers: Display a report of the Outliers present in a Sample.

analyseBenchmark: Analyse a single benchmark.

benchmarkWith': Run a benchmark interactively and analyse its performance.

benchmark': Run a benchmark interactively and analyse its performance.

Parameters: bootstrap estimate of sample mean; bootstrap estimate of sample standard deviation; number of original iterations; number of iterations used to compute the sample; value to multiply by; experiment name; sample data; predictor names; responder name.

(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC

defaultMain: An entry point that can be used as a main function.
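The boxplot rule behind the Outliers buckets can be sketched as follows. This is a simplification with crude quartiles: the library uses a weighted-average quantile estimator and returns counts in an Outliers record, and Severity, quartiles, and severity below are illustrative names.

```haskell
-- Sketch of boxplot-style outlier classification. Illustrative only:
-- the library's classifyOutliers works on samples, not lists.
import Data.List (sort)

-- Severities mirroring the library's lowSevere/lowMild/highMild/highSevere.
data Severity = LowSevere | LowMild | NotOutlier | HighMild | HighSevere
  deriving (Eq, Show)

-- Crude quartiles: the elements at ranks n/4 and 3n/4 of the sorted sample.
quartiles :: [Double] -> (Double, Double)
quartiles xs = (s !! (n `div` 4), s !! (3 * n `div` 4))
  where
    s = sort xs
    n = length xs

-- Boxplot rule: mild beyond 1.5 * IQR from a quartile, severe beyond 3 * IQR.
severity :: Double -> Double -> Double -> Severity
severity q1 q3 x
  | x < q1 - 3.0 * iqr = LowSevere
  | x < q1 - 1.5 * iqr = LowMild
  | x > q3 + 3.0 * iqr = HighSevere
  | x > q3 + 1.5 * iqr = HighMild
  | otherwise          = NotOutlier
  where
    iqr = q3 - q1

main :: IO ()
main = do
  let xs       = [10, 11, 12, 11, 10, 11, 12, 50]
      (q1, q3) = quartiles xs
  mapM_ (\x -> print (x, severity q1 q3 x)) xs  -- 50 is a high-severe outlier
```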
    import Gauge.Main

    fib :: Int -> Int
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n-1) + fib (n-2)

    main = defaultMain [
           bgroup "fib" [ bench "10" $ whnf fib 10
                        , bench "35" $ whnf fib 35
                        , bench "37" $ whnf fib 37
                        ]
           ]

parseError: Display an error message from a command line parsing failure, and exit.

quickAnalyse: Analyse a single benchmark, printing just the time by default and all stats in verbose mode.

benchmarkWith: Run a benchmark interactively with the supplied config, and analyse its performance.

benchmark: Run a benchmark interactively with the default config, and analyse its performance.

defaultMainWith: An entry point that can be used as a main function, with configurable defaults. Example:

    import Gauge.Main.Options
    import Gauge.Main

    myConfig = defaultConfig {
                 -- Do not GC between runs.
                 forceGC = False
               }

    main = defaultMainWith myConfig [
             bench "fib 30" $ whnf fib 30
           ]

If you save the above example as "Fib.hs", you should be able to compile it as follows:

    ghc -O --make Fib

Run "Fib --help" on the command line to get a list of command line options.

runMode: Run a set of Benchmarks with the given Mode. This can be useful if you have a Mode from some other source (e.g. from one in your benchmark driver's command-line parser).