statistics-0.4.1: API documentation

Every module below is flagged as portable and experimental, and is
maintained by bos@serpentine.com.

Statistics.Internal

  inlinePerformIO
    Just like unsafePerformIO, but we inline it.  Big performance gains,
    as it exposes lots of things to further inlining.  /Very unsafe/.  In
    particular, you should do no memory allocation inside an
    inlinePerformIO block.  On Hugs this is just unsafePerformIO.

Statistics.Function

  sort
    Sort an array.

  partialSort
    Partially sort an array, such that the least k elements will be at
    the front.
    Argument: the number k of least elements.

  indices
    Return the indices of an array.

  minMax
    Compute the minimum and maximum of an array in one pass.

  createU
    Create an array, using the given ST action to populate each element.

  createIO
    Create an array, using the given IO action to populate each element.

Statistics.Types

  Weights
    Weights for affecting the importance of elements of a sample.

  Estimator
    A function that estimates a property of a sample, such as its mean.

  Sample
    Sample data.

Statistics.Resampling

  Resample
    A resample drawn randomly, with replacement, from a set of data
    points.  Distinct from a normal array to make it harder for your
    humble author's brain to go wrong.

  resample
    Resample a data set repeatedly, with replacement, computing each
    estimate over the resampled data.

  jackknife
    Compute a statistical estimate repeatedly over a sample, each time
    omitting a successive element.

  dropAt
    Drop the k-th element of a vector.

Statistics.Distribution

  Distribution
    The interface shared by all probability distributions.

  density
    Probability density function.  The probability that a random
    variable X has the value x, i.e. P(X=x).

  cumulative
    Cumulative distribution function.  The probability that a random
    variable X is less than or equal to x, i.e. P(X ≤ x).

  quantile
    Inverse of the cumulative distribution function.  The value x for
    which P(X ≤ x) = p.

  findRoot
    Approximate the value of X for which P(x > X) = p.
    This method uses a combination of Newton-Raphson iteration and
    bisection, with the given guess as a starting point.  The upper and
    lower bounds specify the interval in which the probability
    distribution reaches the value p.
    Arguments: probability p; initial guess; lower bound on interval;
    upper bound on interval.

Statistics.Distribution.Geometric

  GeometricDistribution, pdSuccess, fromSuccess

Statistics.Constants

  m_huge
    A very large number.

  m_max_exp
    The largest Int x such that 2**(x-1) is approximately representable
    as a Double.

  m_sqrt_2
    sqrt 2

  m_sqrt_2_pi
    sqrt (2 * pi)

  m_2_sqrt_pi
    2 / sqrt pi

  m_1_sqrt_2
    1 / sqrt 2

  m_epsilon
    The smallest Double larger than 1.

Statistics.Quantile

  ContParam
    Parameters a and b to the continuousBy function.

  weightedAvg
    O(n log n).  Estimate the k-th q-quantile of a sample, using the
    weighted average method.
    Arguments: k, the desired quantile; q, the number of quantiles;
    x, the sample data.

  continuousBy
    O(n log n).  Estimate the k-th q-quantile of a sample x, using the
    continuous sample method with the given parameters.  This is the
    method used by most statistical software, such as R, Mathematica,
    SPSS, and S.
    Arguments: parameters a and b; k, the desired quantile; q, the
    number of quantiles; x, the sample data.

  midspread
    O(n log n).  Estimate the range between q-quantiles 1 and q-1 of a
    sample x, using the continuous sample method with the given
    parameters.  For instance, the interquartile range (IQR) can be
    estimated as follows:

      midspread medianUnbiased 4 (toU [1,1,2,2,3])  ==> 1.333333

    Arguments: parameters a and b; q, the number of quantiles; x, the
    sample data.

  cadpw
    California Department of Public Works definition, a=0, b=1.  Gives a
    linear interpolation of the empirical CDF.  This corresponds to
    method 4 in R and Mathematica.

  hazen
    Hazen's definition, a=0.5, b=0.5.  This is claimed to be popular
    among hydrologists.  This corresponds to method 5 in R and
    Mathematica.

  spss
    Definition used by the SPSS statistics application, with a=0, b=0
    (also known as Weibull's definition).  This corresponds to method 6
    in R and Mathematica.

  s
    Definition used by the S statistics application, with a=1, b=1.  The
    interpolation points divide the sample range into n-1 intervals.
    This corresponds to method 7 in R and Mathematica.

  medianUnbiased
    Median unbiased definition, a=1/3, b=1/3.  The resulting quantile
    estimates are approximately median unbiased regardless of the
    distribution of x.  This corresponds to method 8 in R and
    Mathematica.

  normalUnbiased
    Normal unbiased definition, a=3/8, b=3/8.  An approximately unbiased
    estimate if the empirical distribution approximates the normal
    distribution.  This corresponds to method 9 in R and Mathematica.
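The following sketch shows the two quantile estimators in use.  It is
only a sketch: it assumes the uvector-based API of this release, with
samples built by toU from Data.Array.Vector, and the argument order
documented above (parameters, then k, then q, then the sample).  Only
the midspread result is quoted from the documentation itself.

    import Data.Array.Vector (toU)
    import Statistics.Quantile (continuousBy, medianUnbiased, midspread)

    main :: IO ()
    main = do
      let xs = toU [1, 1, 2, 2, 3]
      -- first quartile: k = 1 of q = 4 quantiles, method 8 in R
      print (continuousBy medianUnbiased 1 4 xs)
      -- interquartile range, as in the midspread example above
      print (midspread medianUnbiased 4 xs)   -- ==> 1.333333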
Statistics.Sample

  mean
    Arithmetic mean.  This uses Welford's algorithm to provide numerical
    stability, using a single pass over the sample data.

  harmonicMean
    Harmonic mean.  This algorithm performs a single pass over the
    sample.

  geometricMean
    Geometric mean of a sample containing no negative values.

  centralMoment
    Compute the k-th central moment of a sample.  The central moment is
    also known as the moment about the mean.
    This function performs two passes over the sample, so is not subject
    to stream fusion.
    For samples containing many values very close to the mean, this
    function is subject to inaccuracy due to catastrophic cancellation.

  centralMoments
    Compute the k-th and j-th central moments of a sample.
    This function performs two passes over the sample, so is not subject
    to stream fusion.
    For samples containing many values very close to the mean, this
    function is subject to inaccuracy due to catastrophic cancellation.

  skewness
    Compute the skewness of a sample.  This is a measure of the
    asymmetry of its distribution.
    A sample with negative skew is said to be left-skewed.  Most of its
    mass is on the right of the distribution, with the tail on the left.

      skewness $ toU [1,100,101,102,103]  ==> -1.497681449918257

    A sample with positive skew is said to be right-skewed.

      skewness $ toU [1,2,3,4,100]  ==> 1.4975367033335198

    A sample's skewness is not defined if its variance is zero.
    This function performs two passes over the sample, so is not subject
    to stream fusion.
    For samples containing many values very close to the mean, this
    function is subject to inaccuracy due to catastrophic cancellation.

  kurtosis
    Compute the excess kurtosis of a sample.  This is a measure of the
    "peakedness" of its distribution.  A high kurtosis indicates that
    more of the sample's variance is due to infrequent severe
    deviations, rather than more frequent modest deviations.
    A sample's excess kurtosis is not defined if its variance is zero.
    This function performs two passes over the sample, so is not subject
    to stream fusion.
    For samples containing many values very close to the mean, this
    function is subject to inaccuracy due to catastrophic cancellation.

  variance
    Maximum likelihood estimate of a sample's variance.  Also known as
    the population variance, where the denominator is n.

  varianceUnbiased
    Unbiased estimate of a sample's variance.  Also known as the sample
    variance, where the denominator is n-1.

  stdDev
    Standard deviation.  This is simply the square root of the maximum
    likelihood estimate of the variance.

  fastVariance
    Maximum likelihood estimate of a sample's variance.

  fastVarianceUnbiased
    Unbiased estimate of a sample's variance.

  fastStdDev
    Standard deviation.  This is simply the square root of the maximum
    likelihood estimate of the variance.
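A small sketch of the summary statistics above, assuming each of these
functions maps a Sample (an unboxed array built with toU from
Data.Array.Vector) to a Double; only the skewness result is quoted from
the documentation.

    import Data.Array.Vector (toU)
    import Statistics.Sample (mean, skewness, stdDev, variance)

    main :: IO ()
    main = do
      let xs = toU [1, 2, 3, 4, 100]
      print (mean xs)
      print (variance xs)   -- population variance, denominator n
      print (stdDev xs)     -- square root of the ML variance estimate
      print (skewness xs)   -- ==> 1.4975367033335198 (right-skewed)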
Statistics.Distribution.Normal

  NormalDistribution
    The normal distribution.

  standard, fromParams, fromSample

Statistics.Math

  chebyshev
    Evaluate a series of Chebyshev polynomials.  Uses Clenshaw's
    algorithm.
    Arguments: parameter of each function; coefficients of each
    polynomial term, in increasing order.

  choose
    The binomial coefficient.

      7 `choose` 3 == 35

  factorial
    Compute the factorial function n!.  Returns ∞ if the input is above
    170 (above which the result cannot be represented by a 64-bit
    Double).

  logFactorial
    Compute the natural logarithm of the factorial function.  Gives 16
    decimal digits of precision.

  incompleteGamma
    Compute the incomplete gamma integral function γ(s,x).  Uses
    Algorithm AS 239 by Shea.
    Arguments: s; x.

  logGamma
    Compute the logarithm of the gamma function Γ(x).  Uses Algorithm
    AS 245 by Macleod.  Gives an accuracy of 10-12 significant decimal
    digits, except for small regions around x = 1 and x = 2, where the
    function goes to zero.  For greater accuracy, use logGammaL.
    Returns ∞ if the input is outside of the range (0 < x ≤ 1e305).

  logGammaL
    Compute the logarithm of the gamma function, Γ(x).  Uses a Lanczos
    approximation.  This function is slower than logGamma, but gives 14
    or more significant decimal digits of accuracy, except around x = 1
    and x = 2, where the function goes to zero.
    Returns ∞ if the input is outside of the range (0 < x ≤ 1e305).

Statistics.Distribution.Binomial

  BinomialDistribution
    The binomial distribution.

  bdTrials
    Number of trials.

  bdProbability
    Probability.

  fromParams
    Arguments: number of trials; probability.

Statistics.Distribution.Gamma

  GammaDistribution
    The gamma distribution.

  gdShape
    Shape parameter, k.

  gdScale
    Scale parameter, θ.

Statistics.Distribution.Hypergeometric

  HypergeometricDistribution, hdM, hdL, hdK

  fromParams
    Arguments: m; l; k.

Statistics.Distribution.Poisson

  PoissonDistribution, fromLambda

Statistics.Sample.Powers

  powers
    O(n).  Collect the n simple powers of a sample.
    Functions computed over a sample's simple powers require at least a
    certain number (or order) of powers to be collected.  To compute the
    k-th centralMoment, at least k simple powers must be collected.  For
    the variance, at least 2 simple powers are needed.  For skewness, we
    need at least 3 simple powers.  For kurtosis, at least 4 simple
    powers are required.
    This function is subject to stream fusion.
    Argument: n, the number of powers, where n >= 2.

  order
    The order (number) of simple powers collected from a sample.

  centralMoment
    Compute the k-th central moment of a sample.  The central moment is
    also known as the moment about the mean.

  variance
    Maximum likelihood estimate of a sample's variance.  Also known as
    the population variance, where the denominator is n.  This is the
    second central moment of the sample.
    This is less numerically robust than the variance function in the
    Statistics.Sample module, but the number is essentially free to
    compute if you have already collected a sample's simple powers.
    Requires Powers with order at least 2.

  stdDev
    Standard deviation.  This is simply the square root of the maximum
    likelihood estimate of the variance.

  varianceUnbiased
    Unbiased estimate of a sample's variance.  Also known as the sample
    variance, where the denominator is n-1.
    Requires Powers with order at least 2.

  skewness
    Compute the skewness of a sample.  This is a measure of the
    asymmetry of its distribution.
    A sample with negative skew is said to be left-skewed.  Most of its
    mass is on the right of the distribution, with the tail on the left.

      skewness . powers 3 $ toU [1,100,101,102,103]  ==> -1.497681449918257

    A sample with positive skew is said to be right-skewed.

      skewness . powers 3 $ toU [1,2,3,4,100]  ==> 1.4975367033335198

    A sample's skewness is not defined if its variance is zero.
    Requires Powers with order at least 3.

  kurtosis
    Compute the excess kurtosis of a sample.  This is a measure of the
    "peakedness" of its distribution.  A high kurtosis indicates that
    the sample's variance is due more to infrequent severe deviations
    than to frequent modest deviations.
    A sample's excess kurtosis is not defined if its variance is zero.
    Requires Powers with order at least 4.

  count
    The number of elements in the original Sample.  This is the sample's
    zeroth simple power.

  sum
    The sum of elements in the original Sample.  This is the sample's
    first simple power.

  mean
    The arithmetic mean of elements in the original Sample.
    This is less numerically robust than the mean function in the
    Statistics.Sample module, but the number is essentially free to
    compute if you have already collected a sample's simple powers.
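A sketch of the intended workflow: collect the simple powers once, then
derive several statistics from them.  It assumes the accessors take the
collected Powers value, as the skewness example above suggests, and that
samples are built with toU from Data.Array.Vector; the qualified import
avoids the name clash with Statistics.Sample.

    import Data.Array.Vector (toU)
    import qualified Statistics.Sample.Powers as P

    main :: IO ()
    main = do
      -- collect the first four simple powers in a single pass...
      let p = P.powers 4 (toU [1, 2, 3, 4, 100])
      -- ...then each statistic is essentially free to compute
      print (P.mean p)
      print (P.variance p)   -- requires order >= 2
      print (P.skewness p)   -- requires order >= 3; ==> 1.4975367033335198
      print (P.kurtosis p)   -- requires order >= 4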
Statistics.Distribution.Exponential

  ExponentialDistribution

  edLambda
    λ (scale) parameter.

Statistics.KernelDensity

  Kernel
    The convolution kernel.  Its parameters are as follows: the scaling
    factor, 1/nh; the bandwidth, h; a point at which to sample the
    input, p; one sample value, v.

  Bandwidth
    The width of the convolution kernel used.

  Points
    Points from the range of a Sample.

  epanechnikovBW
    Bandwidth estimator for an Epanechnikov kernel.

  gaussianBW
    Bandwidth estimator for a Gaussian kernel.

  bandwidth
    Compute the optimal bandwidth from the observed data for the given
    kernel.

  choosePoints
    Choose a uniform range of points at which to estimate a sample's
    probability density function.
    If you are using a Gaussian kernel, multiply the sample's bandwidth
    by 3 before passing it to this function.
    If this function is passed an empty vector, it returns values of
    positive and negative infinity.
    Arguments: number of points to select, n; sample bandwidth, h;
    input data.

  epanechnikovKernel
    Epanechnikov kernel for probability density function estimation.

  gaussianKernel
    Gaussian kernel for probability density function estimation.

  estimatePDF
    Kernel density estimator, providing a non-parametric way of
    estimating the PDF of a random variable.
    Arguments: kernel function; bandwidth, h; sample data; points at
    which to estimate.

  simplePDF
    A helper for creating a simple kernel density estimation function
    with automatically chosen bandwidth and estimation points.
    Arguments: bandwidth function; kernel function; bandwidth scaling
    factor (3 for a Gaussian kernel, 1 for all others); number of points
    at which to estimate; sample data.

  epanechnikovPDF
    Simple Epanechnikov kernel density estimator.  Returns the uniformly
    spaced points from the sample range at which the density function
    was estimated, and the estimates at those points.
    Argument: number of points at which to estimate.

  gaussianPDF
    Simple Gaussian kernel density estimator.  Returns the uniformly
    spaced points from the sample range at which the density function
    was estimated, and the estimates at those points.
    Argument: number of points at which to estimate.
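A sketch of the simple Gaussian estimator.  It assumes gaussianPDF
returns a pair of the estimation points and the density estimates, as
described above, that the Points wrapper is unwrapped with fromPoints,
and that fromU (from Data.Array.Vector) converts the unboxed results
back to ordinary lists.

    import Data.Array.Vector (fromU, toU)
    import Statistics.KernelDensity (fromPoints, gaussianPDF)

    main :: IO ()
    main = do
      let sample = toU [1.0, 1.2, 1.9, 2.3, 2.4, 3.1]
          -- estimate the density at 64 uniformly spaced points
          (points, densities) = gaussianPDF 64 sample
      mapM_ print (zip (fromU (fromPoints points)) (fromU densities))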
Statistics.Resampling.Bootstrap

  Estimate
    A point and interval estimate computed via an Estimator.

  estPoint
    Point estimate.

  estLowerBound
    Lower bound of the estimate interval (i.e. the lower bound of the
    confidence interval).

  estUpperBound
    Upper bound of the estimate interval (i.e. the upper bound of the
    confidence interval).

  estConfidenceLevel
    Confidence level of the confidence intervals.

  bootstrapBCA
    Bias-corrected accelerated (BCA) bootstrap.  This adjusts for both
    bias and skewness in the resampled distribution.
    Arguments: confidence level; sample data; estimators; resampled
    data.

Statistics.Autocorrelation

  autocovariance
    Compute the autocovariance of a sample, i.e. the covariance of the
    sample against a shifted version of itself.

  autocorrelation
    Compute the autocorrelation function of a sample, and the upper and
    lower bounds of confidence intervals for each element.
    Note: the calculation of the 95% confidence interval assumes a
    stationary Gaussian process.
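A minimal sketch of inspecting serial dependence in a sample.  It
assumes autocovariance maps a Sample (built with toU from
Data.Array.Vector) to an unboxed array with one covariance per lag, and
that fromU converts that array back to a list.

    import Data.Array.Vector (fromU, toU)
    import Statistics.Autocorrelation (autocovariance)

    main :: IO ()
    main = do
      -- a sample with an obvious up-then-down pattern
      let xs = toU [1, 2, 3, 4, 5, 4, 3, 2, 1]
      mapM_ print (fromU (autocovariance xs))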