statistics-0.2.2: API documentation

All modules below share the same metadata: portability portable,
stability experimental, maintainer bos@serpentine.com.

Statistics.Internal

inlinePerformIO
  Just like unsafePerformIO, but we inline it.  Big performance gains,
  as it exposes lots of things to further inlining.  /Very unsafe/.
  In particular, you should do no memory allocation inside an
  inlinePerformIO block.  On Hugs this is just unsafePerformIO.

Statistics.Function

sort
  Sort an array.

partialSort
  Partially sort an array, such that the least k elements will be at
  the front.  Argument: the number k of least elements.

minMax
  Compute the minimum and maximum of an array in one pass.

createU
  Create an array, using the given action to populate each element.

Statistics.Types

Weights
  Weights for affecting the importance of elements of a sample.

Estimator
  A function that estimates a property of a sample, such as its mean.

Sample
  Sample data.

Statistics.Resampling

Resample
  A resample drawn randomly, with replacement, from a set of data
  points.  Distinct from a normal array to make it harder for your
  humble author's brain to go wrong.

resample
  Resample a data set repeatedly, with replacement, computing each
  estimate over the resampled data.

jackknife
  Compute a statistical estimate repeatedly over a sample, each time
  omitting a successive element.

dropAt
  Drop the kth element of a vector.

Statistics.Distribution

Distribution
  The interface shared by all probability distributions.

probability
  Probability density function.  The probability that a stochastic
  variable x has the value X, i.e. P(x=X).

cumulative
  Cumulative distribution function.  The probability that a stochastic
  variable x is less than X, i.e. P(x<X).

inverse
  Inverse of the cumulative distribution function.  The value X for
  which P(x<X).

findRoot
  Approximate the value of X for which P(x>X)=p.  This method uses a
  combination of Newton-Raphson iteration and bisection, with the
  given guess as a starting point.  The upper and lower bounds specify
  the interval in which the probability distribution reaches the
  value p.  Arguments: probability p; initial guess; lower bound on
  interval; upper bound on interval.

Statistics.Distribution.Geometric

  Exports: GeometricDistribution, pdSuccess, fromSuccess.

Statistics.Constants

m_huge       A very large number.
m_max_exp    The largest x such that 2**(x-1) is approximately
             representable as a Double.
m_sqrt_2     sqrt 2
m_sqrt_2_pi  sqrt (2 * pi)
m_2_sqrt_pi  2 / sqrt pi
m_1_sqrt_2   1 / sqrt 2
m_epsilon    The smallest Double larger than 1.

Statistics.Quantile

ContParam
  Parameters a and b to the continuousBy function.

weightedAvg
  O(n log n).  Estimate the kth q-quantile of a sample, using the
  weighted average method.  Arguments: k, the desired quantile; q, the
  number of quantiles; x, the sample data.

continuousBy
  O(n log n).  Estimate the kth q-quantile of a sample x, using the
  continuous sample method with the given parameters.  This is the
  method used by most statistical software, such as R, Mathematica,
  SPSS, and S.  Arguments: parameters a and b; k, the desired
  quantile; q, the number of quantiles; x, the sample data.

midspread
  O(n log n).  Estimate the range between q-quantiles 1 and q-1 of a
  sample x, using the continuous sample method with the given
  parameters.  For instance, the interquartile range (IQR) can be
  estimated as follows:

    midspread medianUnbiased 4 (toU [1,1,2,2,3])
    ==> 1.333333

  Arguments: parameters a and b; q, the number of quantiles; x, the
  sample data.

cadpw
  California Department of Public Works definition, a=0, b=1.  Gives a
  linear interpolation of the empirical CDF.  This corresponds to
  method 4 in R and Mathematica.

hazen
  Hazen's definition, a=0.5, b=0.5.  This is claimed to be popular
  among hydrologists.  This corresponds to method 5 in R and
  Mathematica.

spss
  Definition used by the SPSS statistics application, with a=0, b=0
  (also known as Weibull's definition).  This corresponds to method 6
  in R and Mathematica.

s
  Definition used by the S statistics application, with a=1, b=1.  The
  interpolation points divide the sample range into n-1 intervals.
  This corresponds to method 7 in R and Mathematica.

medianUnbiased
  Median unbiased definition, a=1/3, b=1/3.  The resulting quantile
  estimates are approximately median unbiased regardless of the
  distribution of x.  This corresponds to method 8 in R and
  Mathematica.

normalUnbiased
  Normal unbiased definition, a=3/8, b=3/8.  An approximately unbiased
  estimate if the empirical distribution approximates the normal
  distribution.  This corresponds to method 9 in R and Mathematica.
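To make the a/b parameterisation above concrete, here is a small,
self-contained sketch of the continuous quantile method over a plain
Haskell list.  It illustrates the formula the ContParam definitions
describe; it is not the library's own implementation, and the name
continuousQuantile is invented for this example.

    import Data.List (sort)

    -- The k-th q-quantile corresponds to the probability p = k/q.
    -- The index t = a + p*(n + 1 - a - b) selects (and interpolates
    -- between) 1-based order statistics of the sorted sample.
    continuousQuantile :: Double -> Double   -- parameters a and b
                       -> Int -> Int         -- k and q
                       -> [Double] -> Double
    continuousQuantile a b k q xs = (1 - g) * item j + g * item (j + 1)
      where
        sorted = sort xs
        n      = length xs
        p      = fromIntegral k / fromIntegral q
        t      = a + p * (fromIntegral n + 1 - a - b)
        j      = floor t
        g      = t - fromIntegral j
        -- 1-based order statistic, clamped to the sample range
        item i = sorted !! max 0 (min (n - 1) (i - 1))

With the medianUnbiased parameters (a = b = 1/3), the difference
continuousQuantile (1/3) (1/3) 3 4 xs - continuousQuantile (1/3) (1/3) 1 4 xs
reproduces the midspread example above, giving roughly 1.333333 for
the sample [1,1,2,2,3].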
Statistics.Sample

mean
  Arithmetic mean.  This uses Welford's algorithm to provide numerical
  stability, using a single pass over the sample data.

harmonicMean
  Harmonic mean.  This algorithm performs a single pass over the
  sample.

geometricMean
  Geometric mean of a sample containing no negative values.

centralMoment
  Compute the kth central moment of a sample.  This function performs
  two passes over the sample, so is not subject to stream fusion.

centralMoments
  Compute the kth and jth central moments of a sample.  This function
  performs two passes over the sample, so is not subject to stream
  fusion.  For samples containing many values very close to the mean,
  this function is subject to inaccuracy due to catastrophic
  cancellation.

skewness
  Compute the skewness of a sample.  This is a measure of the
  asymmetry of its distribution.

  A sample with negative skew is said to be left-skewed.  Most of its
  mass is on the right of the distribution, with the tail on the left.

    skewness . powers 3 $ toU [1,100,101,102,103]
    ==> -1.497681449918257

  A sample with positive skew is said to be right-skewed.

    skewness . powers 5 $ toU [1,2,3,4,100]
    ==> 1.4975367033335198

  A sample's skewness is not defined if its variance is zero.  This
  function performs two passes over the sample, so is not subject to
  stream fusion.

kurtosis
  Compute the excess kurtosis of a sample.  This is a measure of the
  "peakedness" of its distribution.  A high kurtosis indicates that
  more of the sample's variance is due to infrequent severe
  deviations, rather than more frequent modest deviations.  A sample's
  excess kurtosis is not defined if its variance is zero.  This
  function performs two passes over the sample, so is not subject to
  stream fusion.

variance
  Maximum likelihood estimate of a sample's variance.  Also known as
  the population variance, where the denominator is n.

varianceUnbiased
  Unbiased estimate of a sample's variance.  Also known as the sample
  variance, where the denominator is n-1.

stdDev
  Standard deviation.  This is simply the square root of the maximum
  likelihood estimate of the variance.

fastVariance
  Maximum likelihood estimate of a sample's variance.

fastVarianceUnbiased
  Unbiased estimate of a sample's variance.

fastStdDev
  Standard deviation.  This is simply the square root of the maximum
  likelihood estimate of the variance.

Other exports: range.
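The Welford update mentioned above for mean is compact enough to spell
out.  The following is an illustrative, self-contained sketch over a
plain list (the library itself works on unboxed arrays); the running
mean after k elements is m_k = m_{k-1} + (x_k - m_{k-1}) / k.

    import Data.List (foldl')

    -- One-pass arithmetic mean using Welford's update, which gives
    -- better numerical stability than summing first and dividing.
    welfordMean :: [Double] -> Double
    welfordMean = fst . foldl' step (0, 0 :: Int)
      where
        step (m, n) x = (m + (x - m) / fromIntegral n', n')
          where n' = n + 1

For example, welfordMean [1,2,3,4] evaluates to 2.5.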
Statistics.Distribution.Exponential

edLambda
  λ (scale) parameter.

Other exports: ExponentialDistribution, fromLambda, fromSample.

Statistics.Distribution.Normal

NormalDistribution
  The normal distribution.

Other exports: standard, fromParams.

Statistics.Math

chebyshev
  Evaluate a series of Chebyshev polynomials.  Uses Clenshaw's
  algorithm.  Arguments: parameter of each function; coefficients of
  each polynomial term, in increasing order.

choose
  The binomial coefficient.

    7 `choose` 3 == 35

factorial
  Compute the factorial function n!.  Returns ∞ if the input is above
  170 (above which the result cannot be represented by a 64-bit
  Double).

logFactorial
  Compute the natural logarithm of the factorial function.  Gives 16
  decimal digits of precision.

incompleteGamma
  Compute the incomplete gamma integral function γ(s,x).  Uses
  Algorithm AS 239 by Shea.  Arguments: s; x.

logGamma
  Compute the logarithm of the gamma function Γ(x).  Uses Algorithm
  AS 245 by Macleod.  Gives an accuracy of 10-12 significant decimal
  digits, except for small regions around x = 1 and x = 2, where the
  function goes to zero.  For greater accuracy, use logGammaL.
  Returns ∞ if the input is outside of the range (0 < x ≤ 1e305).

logGammaL
  Compute the logarithm of the gamma function, Γ(x).  Uses a Lanczos
  approximation.  This function is slower than logGamma, but gives 14
  or more significant decimal digits of accuracy, except around x = 1
  and x = 2, where the function goes to zero.  Returns ∞ if the input
  is outside of the range (0 < x ≤ 1e305).
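Clenshaw's algorithm, as used by chebyshev above, is the backward
recurrence b_k = c_k + 2*x*b_{k+1} - b_{k+2} over the coefficients,
finishing with c_0 + x*b_1 - b_2.  The sketch below is a standalone
illustration over a plain list; the library's chebyshev may use a
slightly different coefficient convention (for example, halving the
first coefficient), so treat this as the textbook form rather than a
drop-in replacement.

    -- Evaluate f(x) = c0*T0(x) + c1*T1(x) + ... + cn*Tn(x)
    -- by Clenshaw's recurrence, given the coefficients in
    -- increasing order.
    chebyshevSeries :: Double -> [Double] -> Double
    chebyshevSeries _ []        = 0
    chebyshevSeries x (c0 : cs) = c0 + x * b1 - b2
      where
        (b1, b2)          = foldr step (0, 0) cs
        step c (bk1, bk2) = (c + 2 * x * bk1 - bk2, bk1)

As a check, the coefficients [1,2,3] describe 1 + 2x + 3(2x^2 - 1),
which the recurrence evaluates to 0.5 at x = 0.5.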
Statistics.Distribution.Binomial

BinomialDistribution
  The binomial distribution.

bdTrials
  Number of trials.

bdProbability
  Probability.

binomial
  Create a binomial distribution.  Arguments: number of trials;
  probability.

Statistics.Distribution.Gamma

GammaDistribution
  The gamma distribution.

gdShape
  Shape parameter, k.

gdScale
  Scale parameter, θ.

Statistics.Distribution.Hypergeometric

hdM, hdL, hdK
  Parameters m, l, and k of the distribution.

Statistics.Distribution.Poisson

  Exports: PoissonDistribution, pdLambda.

Statistics.Sample.Powers

powers
  O(n) Collect the n simple powers of a sample.

  Functions computed over a sample's simple powers require at least a
  certain number (or order) of powers to be collected.  To compute the
  kth centralMoment, at least k simple powers must be collected.  For
  the variance, at least 2 simple powers are needed.  For skewness, we
  need at least 3 simple powers.  For kurtosis, at least 4 simple
  powers are required.

  This function is subject to stream fusion.  Argument: n, the number
  of powers, where n >= 2.

order
  The order (number) of simple powers collected from a sample.

centralMoment
  Compute the kth central moment of a sample.  The central moment is
  also known as the moment about the mean.

variance
  Maximum likelihood estimate of a sample's variance.  Also known as
  the population variance, where the denominator is n.  This is the
  second central moment of the sample.  This is less numerically
  robust than the variance function in the Statistics.Sample module,
  but the number is essentially free to compute if you have already
  collected a sample's simple powers.  Requires powers with order at
  least 2.

stdDev
  Standard deviation.  This is simply the square root of the maximum
  likelihood estimate of the variance.

varianceUnbiased
  Unbiased estimate of a sample's variance.  Also known as the sample
  variance, where the denominator is n-1.  Requires powers with order
  at least 2.

skewness
  Compute the skewness of a sample.  This is a measure of the
  asymmetry of its distribution.

  A sample with negative skew is said to be left-skewed.  Most of its
  mass is on the right of the distribution, with the tail on the left.

    skewness . powers 3 $ toU [1,100,101,102,103]
    ==> -1.497681449918257

  A sample with positive skew is said to be right-skewed.

    skewness . powers 3 $ toU [1,2,3,4,100]
    ==> 1.4975367033335198

  A sample's skewness is not defined if its variance is zero.
  Requires powers with order at least 3.

kurtosis
  Compute the excess kurtosis of a sample.  This is a measure of the
  "peakedness" of its distribution.  A high kurtosis indicates that
  the sample's variance is due more to infrequent severe deviations
  than to frequent modest deviations.  A sample's excess kurtosis is
  not defined if its variance is zero.  Requires powers with order at
  least 4.

count
  The number of elements in the original Sample.  This is the sample's
  zeroth simple power.

sum
  The sum of elements in the original Sample.  This is the sample's
  first simple power.

mean
  The arithmetic mean of elements in the original Sample.  This is
  less numerically robust than the mean function in the
  Statistics.Sample module, but the number is essentially free to
  compute if you have already collected a sample's simple powers.

Statistics.KernelDensity

Kernel
  The convolution kernel.  Its parameters are as follows: scaling
  factor, 1/nh; bandwidth, h; a point at which to sample the input, p;
  one sample value, v.

Bandwidth
  The width of the convolution kernel used.

Points
  Points from the range of a Sample.

epanechnikovBW
  Bandwidth estimator for an Epanechnikov kernel.

gaussianBW
  Bandwidth estimator for a Gaussian kernel.

bandwidth
  Compute the optimal bandwidth from the observed data for the given
  kernel.

choosePoints
  Choose a uniform range of points at which to estimate a sample's
  probability density function.  If you are using a Gaussian kernel,
  multiply the sample's bandwidth by 3 before passing it to this
  function.  If this function is passed an empty vector, it returns
  values of positive and negative infinity.  Arguments: number of
  points to select, n; sample bandwidth, h; input data.

epanechnikovKernel
  Epanechnikov kernel for probability density function estimation.

gaussianKernel
  Gaussian kernel for probability density function estimation.

estimatePDF
  Kernel density estimator, providing a non-parametric way of
  estimating the PDF of a random variable.  Arguments: kernel
  function; bandwidth, h; sample data; points at which to estimate.
  (A standalone sketch of a Gaussian estimator appears after the
  Statistics.Autocorrelation entries below.)

simplePDF
  A helper for creating a simple kernel density estimation function
  with automatically chosen bandwidth and estimation points.
  Arguments: bandwidth function; kernel function; bandwidth scaling
  factor (3 for a Gaussian kernel, 1 for all others); number of points
  at which to estimate; sample data.

epanechnikovPDF
  Simple Epanechnikov kernel density estimator.  Returns the uniformly
  spaced points from the sample range at which the density function
  was estimated, and the estimates at those points.  Argument: number
  of points at which to estimate.

gaussianPDF
  Simple Gaussian kernel density estimator.  Returns the uniformly
  spaced points from the sample range at which the density function
  was estimated, and the estimates at those points.  Argument: number
  of points at which to estimate.

Other exports: fromPoints.

Statistics.Resampling.Bootstrap

Estimate
  A point and interval estimate computed via an Estimator.

estPoint
  Point estimate.

estLowerBound
  Lower bound of the estimate interval (i.e. the lower bound of the
  confidence interval).

estUpperBound
  Upper bound of the estimate interval (i.e. the upper bound of the
  confidence interval).

estConfidenceLevel
  Confidence level of the confidence intervals.

bootstrapBCA
  Bias-corrected accelerated (BCA) bootstrap.  This adjusts for both
  bias and skewness in the resampled distribution.  Arguments:
  confidence level; sample data; estimators; resampled data.

Statistics.Autocorrelation

autocovariance
  Compute the autocovariance of a sample, i.e. the covariance of the
  sample against a shifted version of itself.

autocorrelation
  Compute the autocorrelation function of a sample, and the upper and
  lower bounds of confidence intervals for each element.  Note: the
  calculation of the 95% confidence interval assumes a stationary
  Gaussian process.
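The autocovariance just described can be illustrated directly.  The
sketch below is a self-contained list version of the usual biased
estimator (dividing by n at every lag); it is for illustration and is
not necessarily identical to the library's output.

    -- Autocovariance of a sample at lags 0 .. n-1: the covariance of
    -- the mean-centred sample against a shifted copy of itself.
    autocov :: [Double] -> [Double]
    autocov xs = [ cov k | k <- [0 .. n - 1] ]
      where
        n     = length xs
        m     = sum xs / fromIntegral n
        ds    = map (subtract m) xs
        cov k = sum (zipWith (*) ds (drop k ds)) / fromIntegral n

Dividing every element of the result by its lag-0 value gives the
autocorrelation function.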
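Returning to the Statistics.KernelDensity section above, the shape of
a kernel density estimate is easy to see in a self-contained sketch.
The name gaussianKDE and the list-based types are invented for this
example; the library instead works on unboxed arrays and can choose
the bandwidth and evaluation points for you.

    -- Gaussian kernel density estimate: at each point p the density
    -- is (1 / (n*h)) * sum of K((p - v) / h) over sample values v,
    -- where K is the standard Gaussian kernel.
    gaussianKDE :: Double     -- bandwidth, h
                -> [Double]   -- sample data
                -> [Double]   -- points at which to estimate
                -> [Double]
    gaussianKDE h xs = map density
      where
        n         = fromIntegral (length xs)
        kernel u  = exp (-0.5 * u * u) / sqrt (2 * pi)
        density p = sum [ kernel ((p - v) / h) | v <- xs ] / (n * h)

A larger bandwidth smooths the estimate more aggressively; the
bandwidth and gaussianBW functions above exist to pick a reasonable
value from the observed data instead of guessing.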
Module index (statistics-0.2.2):

  Statistics.Autocorrelation
  Statistics.Constants
  Statistics.Distribution
  Statistics.Distribution.Binomial
  Statistics.Distribution.Exponential
  Statistics.Distribution.Gamma
  Statistics.Distribution.Geometric
  Statistics.Distribution.Hypergeometric
  Statistics.Distribution.Normal
  Statistics.Distribution.Poisson
  Statistics.Function
  Statistics.Internal
  Statistics.KernelDensity
  Statistics.Math
  Statistics.Quantile
  Statistics.Resampling
  Statistics.Resampling.Bootstrap
  Statistics.Sample
  Statistics.Sample.Powers
  Statistics.Types