Statistics.Sample
(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  Sample data.

Statistics.Transform
(c) 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  dct:  Discrete cosine transform (DCT-II).
  idct: Inverse discrete cosine transform (DCT-III). It is the inverse of dct
        only up to a scale parameter: (idct . dct) x = (* length x)
  ifft: Inverse fast Fourier transform.
  fft:  Radix-2 decimation-in-time fast Fourier transform.

Statistics.Sample.Histogram
(c) 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  histogram_: O(n) Compute a histogram over a data set. Interval (bin) sizes
  are uniform, based on the supplied upper and lower bounds. Arguments:
    * Number of bins. This value must be positive; a zero or negative value
      will cause an error.
    * Lower bound on the interval range. Sample data less than this will
      cause an error.
    * Upper bound on the interval range. This value must not be less than
      the lower bound. Sample data that falls above the upper bound will
      cause an error.
    * Sample data.

Statistics.Quantile
(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  weightedAvg: O(n log n) Estimate the k-th q-quantile of a sample, using
  the weighted average method. The following properties should hold,
  otherwise an error will be thrown:
    * the length of the input is greater than 0
    * the input does not contain NaN
    * 0 <= k <= q
  Arguments: k, the desired quantile; q, the number of quantiles; x, the
  sample data.

Statistics.Matrix.Types
2014 Bryan O'Sullivan; BSD3

  Two-dimensional mutable matrix, stored in row-major order.
  Two-dimensional immutable matrix, stored in row-major order, with fields
  for the rows of the matrix, the columns of the matrix, a large exponent
  stored separately (in order to avoid overflows during matrix
  multiplication), and the matrix data.

Statistics.Matrix.Mutable
(c) 2014 Bryan O'Sullivan; BSD3

  Given row and column numbers, calculate the offset into the flat
  row-major vector, without bounds checking.

Statistics.Math.RootFinding
(c) 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  Root: the result of searching for a root of a mathematical function.
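The histogram description above can be illustrated with a small sketch. This is not from the original docs; it assumes the `statistics` package's `Statistics.Sample.Histogram.histogram_` with the argument order described above (bins, lower bound, upper bound, data).

```haskell
-- Sketch: four uniform bins of width 1 over [0,4).
import qualified Data.Vector.Unboxed as U
import Statistics.Sample.Histogram (histogram_)

main :: IO ()
main = do
  let xs     = U.fromList [0.5, 1.5, 1.7, 3.2 :: Double]
      counts = histogram_ 4 0 4 xs :: U.Vector Int
  -- 0.5 falls in bin 0; 1.5 and 1.7 in bin 1; 3.2 in bin 3
  print counts
```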
  NotBracketed: the function does not have opposite signs when evaluated at
  the lower and upper bounds of the search.
  SearchFailed: the search failed to converge to within the given error
  tolerance after the given number of iterations.
  Root: a root was successfully found.
  fromRoot: return either the result of a search for a root, or the default
  value if the search failed.
  ridders: use the method of Ridders to compute a root of a function. The
  function must have opposite signs when evaluated at the lower and upper
  bounds of the search (i.e. the root must be bracketed). Arguments: the
  absolute error tolerance; the lower and upper bounds for the search; the
  function to find a root of.

Statistics.Types
(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  Scale: data types that can be multiplied by a constant.
  ConfInt: a confidence interval. It assumes that the confidence interval
  forms a single interval and is not a set of disjoint intervals. Fields:
    confIntLDX: lower error estimate, i.e. the distance between the point
    estimate and the lower bound of the confidence interval.
    confIntUDX: upper error estimate, i.e. the distance between the point
    estimate and the upper bound of the confidence interval.
    confIntCL: confidence level corresponding to the given confidence
    interval.
  Estimate: a point estimate and its confidence interval. It is
  parametrized by both the error type e and the value type a. This module
  provides two types of error: NormalErr for normally distributed errors
  and ConfInt for errors described by a confidence interval. See their
  documentation for more details.
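The Ridders search described above can be sketched as follows. This is an illustrative example, not from the original docs; it assumes the pre-0.15 `statistics` signature `ridders tolerance (lo, hi) f`.

```haskell
-- Sketch: find the positive root of x^2 - 2 (i.e. sqrt 2) on [0, 2].
import Statistics.Math.RootFinding (Root (..), ridders)

main :: IO ()
main =
  case ridders 1e-12 (0, 2) (\x -> x * x - 2) of
    Root x       -> print x                  -- close to 1.4142135623730951
    NotBracketed -> putStrLn "root not bracketed by the bounds"
    SearchFailed -> putStrLn "did not converge within tolerance"
```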
  For example, 144 ± 5 (assuming normality) could be expressed as

    Estimate { estPoint = 144
             , estError = NormalErr 5 }

  Or, if we want to express 144 +6/-4 at CL95, we could write:

    Estimate { estPoint = 144
             , estError = ConfInt { confIntLDX = 4
                                  , confIntUDX = 6
                                  , confIntCL  = cl95 } }

  Prior to statistics 0.14, the Estimate data type used the following
  definition:

    data Estimate = Estimate
      { estPoint           :: {-# UNPACK #-} !Double
      , estLowerBound      :: {-# UNPACK #-} !Double
      , estUpperBound      :: {-# UNPACK #-} !Double
      , estConfidenceLevel :: {-# UNPACK #-} !Double
      }

  Now the type Estimate ConfInt Double should be used instead. The function
  estimateFromInterval allows an estimate to be constructed easily from the
  same inputs. Fields:
    estPoint: point estimate.
    estError: confidence interval for the estimate.
  PValue: newtype wrapper for a p-value.
  CL: confidence level. In the context of confidence intervals it is the
  probability of said interval covering the true value of the measured
  quantity. In the context of statistical tests it is 1-a, where a is the
  significance of the test.
  Since confidence levels are usually close to 1, they are stored as 1-CL
  internally. There are two smart constructors for CL: mkCL and
  mkCLFromSignificance (and corresponding variants returning Maybe). The
  first creates a CL from a confidence level, and the second from 1-CL,
  i.e. the significance level:

    >>> cl95
    mkCLFromSignificance 0.05

  Prior to 0.14, confidence levels were passed to functions as plain
  Doubles. Use mkCL to convert them to CL.
  mkCL: create a confidence level from the probability that the confidence
  interval contains the true value of the estimate. Will throw an exception
  if the parameter is outside the [0,1] range.

    >>> mkCL 0.95    -- same as cl95
    mkCLFromSignificance 0.05

  mkCLE: same as mkCL, but returns Nothing instead of an error if the
  parameter is outside the [0,1] range.

    >>> mkCLE 0.95   -- same as cl95
    Just (mkCLFromSignificance 0.05)

  mkCLFromSignificanceE: same as mkCLFromSignificance, but returns Nothing
  instead of an error if the parameter is outside the [0,1] range.

    >>> mkCLFromSignificanceE 0.05   -- same as cl95
    Just (mkCLFromSignificance 0.05)

  confidenceLevel: get the confidence level. This function is subject to
  rounding errors.
  If 1-CL is needed, use significanceLevel instead.
  significanceLevel: get the significance level.
  cl95: the 95% confidence level.
  Construct a PValue; returns Nothing if the argument is outside the [0,1]
  range.
  estimateFromErr: create an estimate with asymmetric error. Arguments: the
  central estimate; the lower and upper errors (both should be positive,
  but this is not checked); the confidence level for the interval.
  estimateFromInterval: create an estimate with asymmetric error from an
  interval. Arguments: the point estimate (it should lie within the
  interval, but this is not checked); the lower and upper bounds of the
  interval; the confidence level for the interval.
  confidenceInterval: get the confidence interval.

    >>> cl95 > cl90
    True

Statistics.Function
(c) 2009, 2010, 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  Sort a vector.
  Return the indices of a vector.
  minMax: compute the minimum and maximum of a vector in one pass.
  nextHighestPowerOfTwo: efficiently compute the next highest power of two
  for a non-negative integer. If the given value is already a power of two,
  it is returned unchanged. If negative, zero is returned.
  square: multiply a number by itself.
  A simple for loop, counting from start to end-1.
  A simple reverse for loop, counting from start-1 to end (which must be
  less than start).

Statistics.Matrix
2011 Aleksey Khudyakov, 2014 Bryan O'Sullivan; BSD3

  fromVector: convert from a row-major vector. Arguments: the number of
  rows; the number of columns; a flat list of values, in row-major order.
  dimension: return the dimensions of this matrix, as a (row, column) pair.
  Matrix-vector multiplication.
  norm: calculate the Euclidean norm of a vector.
  column: return the given column.
  row: return the given row.
  Given row and column numbers, calculate the offset into the flat
  row-major vector, without bounds checking.

Statistics.Matrix.Algorithms
2014 Bryan O'Sullivan; BSD3

  qr: O(r*c) Compute the QR decomposition of a matrix. The result returned
  is the pair of matrices (q, r).

Statistics.Sample
(c) 2008 Don Stewart, 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  mean: O(n) Arithmetic mean.
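A small sketch of the `Statistics.Function` helpers mentioned above (an illustrative example, assuming the `statistics` package's exported `minMax` and `nextHighestPowerOfTwo`):

```haskell
-- minMax makes one pass; nextHighestPowerOfTwo rounds up to a power of two.
import qualified Data.Vector.Unboxed as U
import Statistics.Function (minMax, nextHighestPowerOfTwo)

main :: IO ()
main = do
  print (minMax (U.fromList [3, 1, 4, 1, 5 :: Double]))  -- (1.0, 5.0)
  print (nextHighestPowerOfTwo 100)                      -- 128
  print (nextHighestPowerOfTwo 64)   -- 64: already a power of two
```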
  This uses Kahan-Babuška-Neumaier summation, so it is more accurate than
  welfordMean unless the input values are very large.
  variance: maximum likelihood estimate of a sample's variance. Also known
  as the population variance, where the denominator is n.
  varianceUnbiased: unbiased estimate of a sample's variance. Also known as
  the sample variance, where the denominator is n-1.
  stdDev: standard deviation. This is simply the square root of the
  unbiased estimate of the variance.

Statistics.Resampling
(c) 2009, 2010 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  Estimator: an estimator of a property of a sample, such as its mean. The
  use of an algebraic data type here allows functions such as jackknife and
  bootstrapBCA to use more efficient algorithms when possible.
  Run an Estimator over a sample.
  resample: O(e*r*s) Resample a data set repeatedly, with replacement,
  computing each estimate over the resampled data. This function is
  expensive; it has to do work proportional to e*r*s, where e is the number
  of estimation functions, r is the number of resamples to compute, and s
  is the number of original samples. To improve performance, this function
  will make use of all available CPUs.
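The summary statistics above can be demonstrated with a short sketch (illustrative, assuming the `Statistics.Sample` exports named above and unboxed vectors as the sample type):

```haskell
-- mean, population variance (/n), sample variance (/(n-1)), and stdDev.
import qualified Data.Vector.Unboxed as U
import Statistics.Sample (mean, stdDev, variance, varianceUnbiased)

main :: IO ()
main = do
  let xs = U.fromList [1, 2, 3, 4, 5 :: Double]
  print (mean xs)              -- 3.0
  print (variance xs)          -- denominator n:   2.0
  print (varianceUnbiased xs)  -- denominator n-1: 2.5
  print (stdDev xs)            -- sqrt of the unbiased variance
```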
  At least with GHC 7.0, parallel performance seems best if the parallel
  garbage collector is disabled (RTS option -qg).
  Arguments: the estimation functions; the number of resamples to compute;
  the original sample.
  jackknife: O(n) or O(n^2) Compute a statistical estimate repeatedly over
  a sample, each time omitting a successive element.
  jackknifeMean: O(n) Compute the jackknife mean of a sample.
  O(n) Compute the jackknife variance of a sample with a correction factor
  c, so we can get either the regular or "unbiased" variance.
  O(n) Compute the unbiased jackknife variance of a sample.
  O(n) Compute the jackknife variance of a sample.
  jackknifeStdDev: O(n) Compute the jackknife standard deviation of a
  sample.
  Drop the k-th element of a vector.
  splitGen: split a generator into several that can run independently.

Statistics.Regression
2014 Bryan O'Sullivan; BSD3

  olsRegress: perform an ordinary least-squares regression on a set of
  predictors, and calculate the goodness-of-fit of the regression. The
  returned pair consists of:
    * a vector of regression coefficients. This vector has one more element
      than the list of predictors; the last element is the y-intercept
      value.
    * R², the coefficient of determination (see rSquare for details).
  Compute the ordinary least-squares solution to A x = b.
  Solve the equation R x = b.
  rSquare: compute R², the coefficient of determination that indicates the
  goodness-of-fit of a regression. This value will be 1 if the predictors
  fit perfectly, dropping to 0 if they have no explanatory power.
  bootstrapRegress: bootstrap a regression function. Returns both the
  results of the regression and the requested confidence interval values.
  Balance units of work across workers.
  Arguments (olsRegress): a non-empty list of predictor vectors, which must
  all have the same length; these will become the columns of the matrix A
  that is solved. A responder vector.
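The jackknife described above (recompute the estimate with each element omitted in turn) can be sketched as follows. Illustrative only; it assumes the statistics-0.13-era `jackknife :: Estimator -> Sample -> Vector Double` and the `Mean` constructor of `Estimator`.

```haskell
-- Each output element is the mean of the sample with one element dropped.
import qualified Data.Vector.Unboxed as U
import Statistics.Resampling (Estimator (Mean), jackknife)

main :: IO ()
main = do
  let xs = U.fromList [1, 2, 3, 4 :: Double]
  -- omitting 1,2,3,4 in turn gives means 3, 8/3, 7/3, 2 of the rest
  print (jackknife Mean xs)
```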
  It must have the same length as the predictor vectors.
  Preconditions: A has at least as many rows as columns; b has the same
  length as the columns in A; R is an upper-triangular square matrix; b is
  of the same length as the rows/columns in R.
  Arguments (rSquare): the predictors (regressors); the responders; the
  regression coefficients.
  Arguments (bootstrapRegress): the number of resamples to compute; the
  confidence level; the regression function; the predictor vectors; the
  responder vector.

Statistics.Sample.KernelDensity
(c) 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  kde: Gaussian kernel density estimator for one-dimensional data, using
  the method of Botev et al. The result is a pair of vectors, containing:
    * the coordinates of each mesh point. The mesh interval is chosen to be
      20% larger than the range of the sample. (To specify the mesh
      interval, use kde_.)
    * density estimates at each mesh point.
  kde_: Gaussian kernel density estimator for one-dimensional data, using
  the method of Botev et al. The result is a pair of vectors, containing
  the coordinates of each mesh point and the density estimates at each mesh
  point. Arguments: the number of mesh points to use in the uniform
  discretization of the interval (min, max) — if this value is not a power
  of two, it is rounded up to the next power of two; the lower bound (min)
  of the mesh range; the upper bound (max) of the mesh range.

Statistics.Distribution
(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  ContDistr: continuous probability distribution. The minimal complete
  definition is quantile, plus either density or logDensity.
  density: probability density function. The probability that a random
  variable X lies in the infinitesimal interval [x, x+dx) is equal to
  density(x)·dx.
  quantile: inverse of the cumulative distribution function; the value x
  for which P(X <= x) = p. If the probability is outside of the [0,1]
  range, the function should call error.
  complQuantile: 1-complement of quantile: complQuantile x ~ quantile (1 - x).
  logDensity: natural logarithm of the density.
  Distribution: type class common to all distributions.
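A short sketch of `olsRegress` on perfectly linear data, where the coefficient vector should recover the slope and intercept and R² should be 1. Illustrative; it assumes `olsRegress :: [Vector Double] -> Vector Double -> (Vector Double, Double)` over unboxed vectors.

```haskell
-- Fit y = 2x + 1; the last coefficient is the y-intercept.
import qualified Data.Vector.Unboxed as U
import Statistics.Regression (olsRegress)

main :: IO ()
main = do
  let x            = U.fromList [1, 2, 3, 4, 5 :: Double]
      y            = U.map (\v -> 2 * v + 1) x
      (coeffs, r2) = olsRegress [x] y
  print coeffs  -- approximately [2.0, 1.0]: slope, then intercept
  print r2      -- approximately 1.0: a perfect fit
```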
  Only the c.d.f. can be defined for both discrete and continuous
  distributions.
  cumulative: cumulative distribution function. The probability that a
  random variable X is less than or equal to x, i.e. P(X <= x). cumulative
  should be defined for infinities as well:

    cumulative d +inf = 1
    cumulative d -inf = 0

  complCumulative: one's complement of the cumulative distribution:

    complCumulative d x = 1 - cumulative d x

  This is useful when one is interested in P(X > x) and the expression on
  the right-hand side begins to lose precision. The function has a default
  implementation, but implementors are encouraged to provide a more precise
  one.

Statistics.Distribution.Normal
(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  NormalDistribution: the normal distribution.
  standard: the standard normal distribution, with mean equal to 0 and
  variance equal to 1.
  normalDistr: create a normal distribution from its parameters. IMPORTANT:
  prior to the 0.10 release, the second parameter was the variance, not the
  standard deviation. Arguments: the mean of the distribution; the standard
  deviation of the distribution.

Statistics.Resampling.Bootstrap
(c) 2009, 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

  bootstrapBCA: bias-corrected accelerated (BCA) bootstrap. This adjusts
  for both bias and skewness in the resampled distribution. The BCA
  algorithm is described in ch. 5 of Davison and Hinkley, "Confidence
  intervals", in section 5.3, "Percentile method". Arguments: the
  confidence level; the full data sample; the estimates obtained from the
  resampled data and the estimator used for them.

Gauge.Types
(c) 2009-2014 Bryan O'Sullivan; BSD-style; bos@serpentine.com; experimental; GHC

  Report: report of a sample analysis. Fields:
    reportNumber: a simple index indicating that this is the n-th report.
    reportName: the name of this report.
    reportKeys: see measureKeys.
    reportMeasured: raw measurements. These are not corrected for the
    estimated measurement overhead that can be found via the anOverhead
    field of SampleAnalysis.
    reportAnalysis: report analysis.
    reportOutliers: analysis of outliers.
    reportKDEs: data for a KDE of times.
  KDE: data for a KDE chart of performance.
  SampleAnalysis: result of a bootstrap analysis of a non-parametric
  sample. Fields:
    anRegress: estimates calculated via linear regression.
    anOverhead: estimated measurement overhead, in seconds.
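The distribution classes above can be exercised with the normal distribution. This is an illustrative sketch assuming `cumulative`, `complCumulative`, and `quantile` are exported from `Statistics.Distribution`, and `standard`/`normalDistr` from `Statistics.Distribution.Normal`.

```haskell
-- cumulative/complCumulative come from Distribution, quantile from ContDistr.
import Statistics.Distribution (complCumulative, cumulative, quantile)
import Statistics.Distribution.Normal (normalDistr, standard)

main :: IO ()
main = do
  print (cumulative standard 0)       -- 0.5: P(X <= 0) for N(0,1)
  print (complCumulative standard 0)  -- 0.5: P(X > 0)
  let d = normalDistr 10 2            -- mean 10, standard deviation 2
  print (quantile d 0.5)              -- 10.0: the median equals the mean
```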
  Estimation is performed via linear regression.
    anMean: estimated mean.
    anStdDev: estimated standard deviation.
    anOutlierVar: description of the effects of outliers on the estimated
    variance.
  Regression: results of a linear regression. Fields:
    regResponder: the name of the responding variable.
    regCoeffs: a map from name to value of predictor coefficients.
    regRSquare: R² goodness-of-fit estimate.
  OutlierVariance: analysis of the extent to which outliers in a sample
  affect its standard deviation (and to some extent, its mean). Fields:
    ovEffect: qualitative description of the effect.
    ovDesc: brief textual description of the effect.
    ovFraction: quantitative description of the effect (a fraction between
    0 and 1).
  OutlierEffect: a description of the extent to which outliers in the
  sample data affect the sample mean and standard deviation. Constructors:
    Unaffected: less than 1% effect.
    Slight: between 1% and 10%.
    Moderate: between 10% and 50%.
    Severe: above 50% (i.e. the measurements are useless).
  Outliers: outliers from sample data, calculated using the boxplot
  technique. Fields:
    lowSevere: more than 3 times the interquartile range (IQR) below the
    first quartile.
    lowMild: between 1.5 and 3 times the IQR below the first quartile.
    highMild: between 1.5 and 3 times the IQR above the third quartile.
    highSevere: more than 3 times the IQR above the third quartile.
  Benchmark: specification of a collection of benchmarks and environments.
  A benchmark may consist of:
    * an environment that creates input data for benchmarks, created with
      env;
    * a single Benchmarkable item with a name, created with bench;
    * a (possibly nested) group of Benchmarks, created with bgroup.
  Measured: a collection of measurements made while benchmarking.
  Measurements related to garbage collection are tagged with GC. They will
  only be available if a benchmark is run with "+RTS -T".
  Packed storage: when GC statistics cannot be collected, GC values will be
  set to huge negative values. If a field is labeled with "GC" below, use
  the fromInt and fromDouble accessors to safely convert to "real" values.
  Fields:
    * total wall-clock time elapsed, in seconds.
    * total CPU time elapsed, in seconds. Includes both user and kernel
      (system) time.
    * cycles, in unspecified units that may be CPU cycles.
  (On i386 and x86_64 this is measured using the rdtsc instruction.)
    * number of loop iterations measured.
    * (GC) number of bytes allocated. Access using fromInt.
    * (GC) number of garbage collections performed. Access using fromInt.
    * (GC) number of bytes copied during garbage collection. Access using
      fromInt.
    * (GC) wall-clock time spent doing real work ("mutation"), as distinct
      from garbage collection. Access using fromDouble.
    * (GC) CPU time spent doing real work ("mutation"), as distinct from
      garbage collection. Access using fromDouble.
    * (GC) wall-clock time spent doing garbage collection. Access using
      fromDouble.
    * (GC) CPU time spent doing garbage collection. Access using
      fromDouble.
  Benchmarkable: a pure function or impure action that can be benchmarked.
  The numeric parameter indicates the number of times to run the given
  function or action.
  Config: top-level benchmarking configuration. Fields:
    * confidence interval for bootstrap estimation (greater than 0, less
      than 1).
    * forceGC: obsolete, unused. This option used to force garbage
      collection between every benchmark run, but it no longer has an
      effect (we now unconditionally force garbage collection). It remains
      solely for backwards API compatibility.
    * number of seconds to run a single benchmark. (In practice, execution
      time will very slightly exceed this limit.)
    * number of resamples to perform when bootstrapping.
    * regressions to perform.
    * file to write binary measurement and analysis data to. If not
      specified, this will be a temporary file.
    * file to write report output to, with template expanded.
    * file to write a CSV summary to.
    * file to write JSON-formatted results to.
    * file to write JUnit-compatible XML results to.
    * verbosity level to use when running and analysing benchmarks.
    * template file to use if writing a report.
    * number of iterations.
    * type of matching to use, if any.
    * mode of operation.
  Mode: execution mode for a benchmark program. Constructors cover: listing
  all benchmarks; printing the version; printing help; and the default
  benchmark mode.
  MatchType: how to match a benchmark name.
  Match by prefix.
  For example, a prefix of "foo" will match "foobar". The other match types
  are: match by searching for the given substring in benchmark paths; and
  the same, but case-insensitive.
  Verbosity: control the amount of information displayed.
  toBenchmarkable: construct a Benchmarkable value from an impure action,
  where the numeric parameter indicates the number of times to run the
  action.
  measureKeys: field names in a Measured record, in the order in which they
  appear.
  measureAccessors: field names and accessors for a Measured record.
  rescale: normalise every measurement as if the iteration count was 1.
  (The iteration count itself is left unaffected.)
  fromInt: convert a (possibly unavailable) GC measurement to a true value.
  If the measurement is a huge negative number that corresponds to "no
  data", this will return Nothing.
  toInt: convert from a true value back to the packed representation used
  for GC measurements.
  fromDouble: convert a (possibly unavailable) GC measurement to a true
  value. If the measurement is a huge negative number that corresponds to
  "no data", this will return Nothing.
  toDouble: convert from a true value back to the packed representation
  used for GC measurements.
  whnf: apply an argument to a function, and evaluate the result to weak
  head normal form (WHNF).
  nf: apply an argument to a function, and evaluate the result to normal
  form (NF).
  nfIO: perform an action, then evaluate its result to normal form. This is
  particularly useful for forcing a lazy IO action to be completely
  performed.
  whnfIO: perform an action, then evaluate its result to weak head normal
  form (WHNF). This is useful for forcing an IO action whose result is an
  expression to be evaluated down to a more useful value.
  env: run a benchmark (or collection of benchmarks) in the given
  environment. The purpose of an environment is to lazily create input data
  to pass to the functions that will be benchmarked.
  A common example of environment data is input that is read from a file.
  Another is a large data structure constructed in place.
  By deferring the creation of an environment until its associated
  benchmarks need it, we avoid two problems that an eager strategy caused:
    * Memory pressure distorted the results of unrelated benchmarks.
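The difference between `whnf` and `nf` above is worth a concrete sketch: WHNF stops at the outermost constructor, while NF forces the entire result. This is an illustrative example assuming `Gauge.Main` exports `defaultMain`, `bench`, `whnf`, and `nf`.

```haskell
-- whnf evaluates only to the first (:) cell of the mapped list, so most of
-- the map's cost goes unmeasured; nf forces every element.
import Gauge.Main (bench, defaultMain, nf, whnf)

main :: IO ()
main = defaultMain
  [ bench "map/whnf" $ whnf (map (+ 1)) [1 .. 1000 :: Int]
  , bench "map/nf"   $ nf   (map (+ 1)) [1 .. 1000 :: Int]
  ]
```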
      If one benchmark needed e.g. a gigabyte-sized input, it would force
      the garbage collector to do extra work when running some other
      benchmark that had no use for that input. Since the data created by
      an environment is only available when it is in scope, it should be
      garbage collected before other benchmarks are run.
    * The time cost of generating all needed inputs could be significant in
      cases where no inputs (or just a few) were really needed. This
      occurred often, for instance when just one out of a large suite of
      benchmarks was run, or when a user would list the collection of
      benchmarks without running any.
  Creation. An environment is created right before its related benchmarks
  are run. The IO action that creates the environment is run, then the
  newly created environment is evaluated to normal form (hence the NFData
  constraint) before being passed to the function that receives the
  environment.
  Complex environments. If you need to create an environment that contains
  multiple values, simply pack the values into a tuple.
  Lazy pattern matching. In situations where a "real" environment is not
  needed, e.g. if a list of benchmark names is being generated, undefined
  will be passed to the function that receives the environment. This avoids
  the overhead of generating an environment that will not actually be used.
  The function that receives the environment must use lazy pattern matching
  to deconstruct the tuple, as use of strict pattern matching will cause a
  crash if undefined is passed in.
  Example. This program runs benchmarks in an environment that contains two
  values. The first value is the contents of a text file; the second is a
  string. Pay attention to the use of a lazy pattern to deconstruct the
  tuple in the function that returns the benchmarks to be run.

    setupEnv = do
      let small = replicate 1000 (1 :: Int)
      big <- map length . words <$> readFile "/usr/dict/words"
      return (small, big)

    main = defaultMain [
        -- notice the lazy pattern match here!
        env setupEnv $ \ ~(small,big) -> bgroup "main"
        [ bgroup "small"
          [ bench "length" $ whnf length small
          , bench "length . filter" $ whnf (length . filter (==1)) small
          ]
        , bgroup "big"
          [ bench "length" $ whnf length big
          , bench "length . filter" $ whnf (length . filter (==1)) big
          ]
        ]
      ]

  Discussion. The environment created in the example above is intentionally
  not ideal. As Haskell's scoping rules suggest, the variable big is in
  scope for the benchmarks that use only small. It would be better to
  create a separate environment for big, so that it will not be kept alive
  while the unrelated benchmarks are being run.
  envWithCleanup: same as env, but allows for an additional callback to
  clean up the environment. Resource cleanup is exception safe; that is, it
  runs even if the benchmark throws an exception.
  perBatchEnv: create a Benchmarkable where a fresh environment is
  allocated for every batch of runs of the benchmarkable. The environment
  is evaluated to normal form before the benchmark is run.
  When using whnf, whnfIO, etc., Gauge creates a Benchmarkable which runs a
  batch of N repeated runs of that expression. Gauge may run any number of
  these batches to get accurate measurements. Environments created by env
  and envWithCleanup are shared across all these batches of runs.
  This is fine for simple benchmarks on static input, but when benchmarking
  IO operations where these operations can modify (and especially grow) the
  environment, it means that later batches might have their accuracy
  affected due to, for example, longer garbage collection pauses.
  An example: suppose we want to benchmark writing to a Chan. If we
  allocate the Chan in an environment and our benchmark consists of
  writeChan env (), the contents and thus the size of the Chan will grow
  with every repeat. If Gauge runs 1,000 batches of 1,000 repeats, the
  result is that the channel will have 999,000 items in it by the time the
  last batch is run.
  Since the GHC GC has to copy the live set for every major GC, this means
  our last set of writes will suffer a lot of noise from the previous
  repeats.
  By allocating a fresh environment for every batch of runs, this function
  should eliminate this effect.
  perBatchEnvWithCleanup: same as perBatchEnv, but allows for an additional
  callback to clean up the environment. Resource cleanup is exception safe;
  that is, it runs even if the benchmark throws an exception.
  perRunEnv: create a Benchmarkable where a fresh environment is allocated
  for every run of the operation to benchmark. This is useful for
  benchmarking mutable operations that need a fresh environment, such as
  sorting a mutable Vector.
  As with env and perBatchEnv, the environment is evaluated to normal form
  before the benchmark is run.
  This introduces extra noise and results in reduced accuracy compared to
  other Gauge benchmarks, but it allows easier benchmarking of mutable
  operations than was previously possible.
  perRunEnvWithCleanup: same as perRunEnv, but allows for an additional
  callback to clean up the environment. Resource cleanup is exception safe;
  that is, it runs even if the benchmark throws an exception.
  bench: create a single benchmark.
  bgroup: group several benchmarks together under a common name.
  addPrefix: add the given prefix to a name. If the prefix is empty, the
  name is returned unmodified. Otherwise, the prefix and name are separated
  by a '/' character.
  benchNames: retrieve the names of all benchmarks. Grouped benchmarks are
  prefixed with the name of the group they are in.
  Arguments (env): create the environment (it will be evaluated to normal
  form before being passed to the benchmark); take the newly created
  environment and make it available to the given benchmarks.
  Arguments (envWithCleanup): create the environment (evaluated to normal
  form before being passed to the benchmark); clean up the created
  environment; take the newly created environment and make it available to
  the given benchmarks.
  Arguments (perBatchEnv): create an environment for a batch of N runs.
  The environment will be evaluated to normal form before running. A
  function returning the IO action that should be benchmarked with the
  newly generated environment.
  Arguments (perBatchEnvWithCleanup): create an environment for a batch of
  N runs (evaluated to normal form before running); clean up the created
  environment; a function returning the IO action that should be
  benchmarked with the newly generated environment.
  Arguments (perRunEnv): an action that creates the environment for a
  single run; a function returning the IO action that should be benchmarked
  with the newly generated environment.
  Arguments (perRunEnvWithCleanup): an action that creates the environment
  for a single run; clean up the created environment; a function returning
  the IO action that should be benchmarked with the newly generated
  environment.
  Arguments (bench): a name to identify the benchmark; the activity to be
  benchmarked.
  Arguments (bgroup): a name to identify the group of benchmarks; the
  benchmarks to group under this name.
  Arguments (addPrefix): the prefix; the name.

Gauge.Monad.Internal
(c) 2009 Neil Brown; BSD-style; bos@serpentine.com; experimental; GHC

  Gauge: the monad in which most gauge code executes.

Gauge.Measurement
(c) 2009-2014 Bryan O'Sullivan; BSD-style; bos@serpentine.com; experimental; GHC

  GCStatistics: statistics about memory usage and the garbage collector.
  Apart from the two current-value fields noted below, all are cumulative
  values since the program started. GCStatistics is cargo-culted from the
  GCStats data type that GHC.Stats has.
  Since GCStats was marked as deprecated and will be removed in GHC 8.4, we
  use GCStatistics to provide a backwards-compatible view of GC statistics.
  Fields:
    * total number of bytes allocated
    * number of garbage collections performed (any generation, major and
      minor)
    * maximum number of live bytes seen so far
    * number of byte usage samples taken, or equivalently the number of
      major GCs performed
    * sum of all byte usage samples; can be used with the sample count to
      calculate averages with arbitrary weighting (if you are sampling this
      record multiple times)
    * number of bytes copied during GC
    * number of live bytes at the end of the last major GC (current value)
    * current number of bytes lost to slop (current value)
    * maximum number of bytes lost to slop at any one time so far
    * maximum number of megabytes allocated
    * CPU time spent running mutator threads; this does not include any
      profiling overhead or initialization
    * wall-clock time spent running mutator threads; this does not include
      initialization
    * CPU time spent running GC
    * wall-clock time spent running GC
    * total CPU time elapsed since program start
    * total wall-clock time elapsed since program start
  getCPUTime: return the amount of elapsed CPU time, combining user and
  kernel (system) time into a single measure.
  getTime: return the current wall-clock time, in seconds since some
  arbitrary time. You must call initializeTime once before calling this
  function!
  getCycles: read the CPU cycle counter.
  initializeTime: set up time measurement.
  getGCStatistics: try to get GC statistics, bearing in mind that the GHC
  runtime will throw an exception if statistics collection was not enabled
  using "+RTS -T".
  measure: measure the execution of a benchmark a given number of times.
  threshold: the amount of time a benchmark must run for in order for us to
  have some trust in the raw measurement. We set this threshold so that we
  can generate enough data to later perform meaningful statistical
  analyses. The threshold is 30 milliseconds.
  One benchmark run must accumulate more than 300 milliseconds of total
  measurements above this threshold before it will finish.
  runBenchmark: run a single benchmark, and return the measurements
  collected while executing it, along with the amount of time the
  measurement process took.
  measured: an empty Measured structure.
  applyGCStatistics: apply the difference between two sets of GC statistics
  to a measurement.
  secs: convert a number of seconds to a string. The string will consist of
  four decimal places, followed by a short description of the time units.
  Arguments (measure): the operation to benchmark; the number of
  iterations.
  Arguments (runBenchmark): a lower bound on how long the benchmarking
  process should take. In practice, this time limit may be exceeded in
  order to generate enough data to perform meaningful statistical analyses.
  Arguments (applyGCStatistics): statistics gathered at the end of a run;
  statistics gathered at the beginning of a run; the value to "modify".

Gauge.Monad
(c) 2009 Neil Brown; BSD-style; bos@serpentine.com; experimental; GHC

  withConfig: run a Gauge action with the given Config.
  getGen: return a random number generator, creating one if necessary. This
  is not currently thread-safe, but in a harmless way (we might create the
  generator more than once if multiple threads race).
  getOverhead: return an estimate of the measurement overhead.
  memoise: memoise the result of an IO action. This is not currently
  thread-safe, but hopefully in a harmless way.
  We might call the given action more than once if multiple threads race,
  so our caller's job is to write actions that can be run multiple times
  safely.

Gauge.IO.Printf
(c) 2009-2014 Bryan O'Sullivan; BSD-style; bos@serpentine.com; experimental; GHC

  An internal class that acts like Printf/HPrintf. The implementation is
  visible to the rest of the program, but the details of the class are not.
  note: print a "normal" note.
  prolix: print verbose output.
  printError: print an error message.
  An ANSI escape, on Unix, to rewind and clear the line to the end.

Gauge.Analysis
(c) 2009-2014 Bryan O'Sullivan; BSD-style; bos@serpentine.com; experimental; GHC

  classifyOutliers: classify outliers in a data set, using the boxplot
  technique.
  outlierVariance: compute the extent to which outliers in the sample data
  affect the sample mean and standard deviation. Arguments: the bootstrap
  estimate of the sample mean; the bootstrap estimate of the sample
  standard deviation; the number of original iterations.
  countOutliers: count the total number of outliers in a sample.
  Display the mean of a sample, and characterise the outliers present in
  it. Argument: the number of iterations used to compute the sample.
  Multiply the Estimates in an analysis by the given value, using scale.
  Argument: the value to multiply by.
  analyseSample: perform an analysis of a measurement. Arguments: the
  experiment number; the experiment name; the sample data.
  regress: regress the given predictors against the responder. Errors may
  be returned under various circumstances, such as invalid names or lack of
  needed data. See olsRegress for details of the regression performed.
  Arguments: the predictor names; the responder name.
  Given a list of accessor names (see measureKeys), return either a mapping
  from accessor name to function, or an error message if any names are
  wrong.
  Given predictor and responder names, do some basic validation, then hand
  back the relevant accessors.
  Display a report of the Outliers present in a sample.
(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, None

runOne: Run a single benchmark.

analyseOne: Analyse a single benchmark.

runAndAnalyseOne: Run a single benchmark and analyse its performance.

runAndAnalyse: Run, and analyse, one or more benchmarks. Argument: a predicate that chooses whether to run a benchmark by its name.

runFixedIters: Run a benchmark without analysing its performance. Iterate over benchmarks. Arguments: the number of loop iterations to run; a predicate that chooses whether to run a benchmark by its name.

(c) 2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, None

defaultConfig: Default benchmarking configuration.

versionInfo: A string describing the version of this benchmark (really, the version of gauge that was used to build it).

parseWith: Arguments: the default configuration to use; the program arguments.

(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, Trustworthy

defaultMain: An entry point that can be used as a main function.

    import Gauge.Main

    fib :: Int -> Int
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n-1) + fib (n-2)

    main = defaultMain [
             bgroup "fib" [ bench "10" $ whnf fib 10
                          , bench "35" $ whnf fib 35
                          , bench "37" $ whnf fib 37
                          ]
           ]

makeMatcher: Create a function that can tell if a name given on the command line matches a benchmark.

defaultMainWith: An entry point that can be used as a main function, with configurable defaults. Example:

    import Gauge.Main.Options
    import Gauge.Main

    myConfig = defaultConfig {
                 -- Do not GC between runs.
                 forceGC = False
               }

    main = defaultMainWith myConfig [
             bench "fib 30" $ whnf fib 30
           ]

If you save the above example as "Fib.hs", you should be able to compile it as follows:

    ghc -O --make Fib

Run "Fib --help" on the command line to get a list of command line options.

runMode: Run a set of Benchmarks with the given Mode. This can be useful if you have a Mode from some other source (e.g.
from one in your benchmark driver's command-line parser).

parseError: Display an error message from a command line parsing failure, and exit. Argument: the command line arguments.

(c) 2009-2014 Bryan O'Sullivan, BSD-style, bos@serpentine.com, experimental, GHC, None

benchmark: Run a benchmark interactively, and analyse its performance.

benchmark': Run a benchmark interactively, analyse its performance, and return the analysis.

benchmarkWith: Run a benchmark interactively, and analyse its performance.

benchmarkWith': Run a benchmark interactively, analyse its performance, and return the analysis.
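The secs conversion documented earlier (a number of seconds rendered with a short unit suffix) can be sketched like this. This is a simplified, hypothetical reimplementation, not gauge's actual secs: it scales the value into a convenient unit and prints four decimal places.

```haskell
import Text.Printf (printf)

-- Hypothetical secs-style formatter: choose seconds, milliseconds,
-- microseconds, or nanoseconds so the number stays readable, then
-- print it to four decimal places with a unit suffix.
formatSecs :: Double -> String
formatSecs t
  | t >= 1    = printf "%.4f s"  t
  | t >= 1e-3 = printf "%.4f ms" (t * 1e3)
  | t >= 1e-6 = printf "%.4f us" (t * 1e6)
  | otherwise = printf "%.4f ns" (t * 1e9)
```

For instance, formatSecs 0.5 yields "500.0000 ms" and formatSecs 2 yields "2.0000 s".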