statistics-0.5.0.0

Statistics.Internal (portable, experimental, bos@serpentine.com)

  inlinePerformIO: Just like unsafePerformIO, but we inline it. This gives
    big performance gains, as it exposes lots of things to further inlining.
    /Very unsafe/. In particular, you should do no memory allocation inside
    an inlinePerformIO block. On Hugs this is just unsafePerformIO.

Statistics.Function (portable, experimental, bos@serpentine.com)

  sort: Sort an array.
  partialSort: Partially sort an array, such that the least k elements will
    be at the front. Argument: the number k of least elements.
  indices: Return the indices of an array.
  minMax: Compute the minimum and maximum of an array in one pass.
  createU, createIO: Create an array, using the given action to populate
    each element.

Statistics.Types (portable, experimental, bos@serpentine.com)

  Weights: Weights for affecting the importance of elements of a sample.
  Estimator: A function that estimates a property of a sample, such as its
    mean.
  WeightedSample: Sample with weights. The first element of the sample is
    the data, the second is the weight.
  Sample: Sample data.

Statistics.Resampling (portable, experimental, bos@serpentine.com)

  Resample: A resample drawn randomly, with replacement, from a set of data
    points. Distinct from a normal array to make it harder for your humble
    author's brain to go wrong.
  resample: Resample a data set repeatedly, with replacement, computing each
    estimate over the resampled data.
  jackknife: Compute a statistical estimate repeatedly over a sample, each
    time omitting a successive element.
  dropAt: Drop the kth element of a vector.

Statistics.Distribution (portable, experimental, bos@serpentine.com)

  Distribution: The interface shared by all probability distributions.
  density: Probability density function. The probability that the random
    variable X has the value x, i.e. P(X=x).
  cumulative: Cumulative distribution function. The probability that a
    random variable X is less than or equal to x, i.e. P(X <= x).
  quantile: Inverse of the cumulative distribution function. The value x for
    which P(X <= x) = p.
  findRoot: Approximate the value of X for which P(x>X)=p. This method uses
    a combination of Newton-Raphson iteration and bisection, with the given
    guess as a starting point. The upper and lower bounds specify the
    interval in which the probability distribution reaches the value p.
    Arguments: probability p; initial guess; lower bound on interval; upper
    bound on interval.

Statistics.Distribution.Geometric (portable, experimental,
  bos@serpentine.com): GeometricDistribution, pdSuccess, fromSuccess.

Statistics.Constants (portable, experimental, bos@serpentine.com)

  m_huge: A very large number.
  m_max_exp: The largest Int x such that 2**(x-1) is approximately
    representable as a Double.
  m_sqrt_2: sqrt 2
  m_sqrt_2_pi: sqrt (2 * pi)
  m_2_sqrt_pi: 2 / sqrt pi
  m_1_sqrt_2: 1 / sqrt 2
  m_epsilon: The smallest Double larger than 1.

Statistics.Quantile (portable, experimental, bos@serpentine.com)

  ContParam: Parameters a and b to the continuousBy function.
  weightedAvg: O(n log n). Estimate the kth q-quantile of a sample, using
    the weighted average method. Arguments: k, the desired quantile; q, the
    number of quantiles; x, the sample data.
  continuousBy: O(n log n). Estimate the kth q-quantile of a sample x, using
    the continuous sample method with the given parameters. This is the
    method used by most statistical software, such as R, Mathematica, SPSS,
    and S. Arguments: parameters a and b; k, the desired quantile; q, the
    number of quantiles; x, the sample data.
  midspread: O(n log n). Estimate the range between q-quantiles 1 and q-1 of
    a sample x, using the continuous sample method with the given
    parameters. For instance, the interquartile range (IQR) can be estimated
    as follows:
      midspread medianUnbiased 4 (U.to [1,1,2,2,3])  ==> 1.333333
    Arguments: parameters a and b; q, the number of quantiles; x, the sample
    data.

  Predefined a and b parameter sets for continuousBy (a sketch of the
  general interpolation scheme follows this list):

  cadpw: California Department of Public Works definition, a=0, b=1. Gives a
    linear interpolation of the empirical CDF. This corresponds to method 4
    in R and Mathematica.
  hazen: Hazen's definition, a=0.5, b=0.5. This is claimed to be popular
    among hydrologists. This corresponds to method 5 in R and Mathematica.
  spss: Definition used by the SPSS statistics application, with a=0, b=0
    (also known as Weibull's definition). This corresponds to method 6 in R
    and Mathematica.
  s: Definition used by the S statistics application, with a=1, b=1. The
    interpolation points divide the sample range into n-1 intervals. This
    corresponds to method 7 in R and Mathematica.
  medianUnbiased: Median unbiased definition, a=1/3, b=1/3. The resulting
    quantile estimates are approximately median unbiased regardless of the
    distribution of x. This corresponds to method 8 in R and Mathematica.
  normalUnbiased: Normal unbiased definition, a=3/8, b=3/8. An approximately
    unbiased estimate if the empirical distribution approximates the normal
    distribution. This corresponds to method 9 in R and Mathematica.
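  To make the a/b parameterisation concrete, here is a minimal,
  self-contained sketch of a continuous-sample quantile estimator over a
  plain sorted list. The name quantileBy and the list-based interface are
  illustrative assumptions, not the package's own API, which works over its
  array-based Sample type via continuousBy.

    import Data.List (sort)

    -- Continuous-sample quantile estimation, parameterised by the
    -- plotting-position constants a and b: compute an interpolation
    -- position t within the sorted sample and blend the two surrounding
    -- order statistics.
    quantileBy :: Double   -- parameter a
               -> Double   -- parameter b
               -> Int      -- k, the desired quantile
               -> Int      -- q, the number of quantiles
               -> [Double] -- sample data
               -> Double
    quantileBy a b k q xs = (1 - h) * item (j - 1) + h * item j
      where
        sorted = sort xs
        n      = length xs
        p      = fromIntegral k / fromIntegral q
        t      = a + p * (fromIntegral n + 1 - a - b)
        j      = floor t :: Int
        h      = t - fromIntegral j
        -- clamp indices so positions beyond the sample use its end values
        item i = sorted !! max 0 (min (n - 1) i)

  With the median-unbiased parameters a = b = 1/3, this sketch reproduces
  the IQR example above:

    quantileBy (1/3) (1/3) 3 4 [1,1,2,2,3] - quantileBy (1/3) (1/3) 1 4 [1,1,2,2,3]
      ==> 1.3333...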
Statistics.Sample (portable, experimental, bos@serpentine.com)

  mean: Arithmetic mean. This uses Welford's algorithm to provide numerical
    stability, using a single pass over the sample data. (A sketch of the
    algorithm appears after this module's entries.)
  meanWeighted: Arithmetic mean for a weighted sample. It uses an algorithm
    analogous to the one in mean.
  harmonicMean: Harmonic mean. This algorithm performs a single pass over
    the sample.
  geometricMean: Geometric mean of a sample containing no negative values.
  centralMoment: Compute the kth central moment of a sample. The central
    moment is also known as the moment about the mean. This function
    performs two passes over the sample, so is not subject to stream fusion.
    For samples containing many values very close to the mean, this function
    is subject to inaccuracy due to catastrophic cancellation.
  centralMoments: Compute the kth and jth central moments of a sample. This
    function performs two passes over the sample, so is not subject to
    stream fusion. For samples containing many values very close to the
    mean, this function is subject to inaccuracy due to catastrophic
    cancellation.
  skewness: Compute the skewness of a sample. This is a measure of the
    asymmetry of its distribution. A sample with negative skew is said to be
    left-skewed; most of its mass is on the right of the distribution, with
    the tail on the left.
      skewness $ U.to [1,100,101,102,103]  ==> -1.497681449918257
    A sample with positive skew is said to be right-skewed.
      skewness $ U.to [1,2,3,4,100]  ==> 1.4975367033335198
    A sample's skewness is not defined if its variance is zero. This
    function performs two passes over the sample, so is not subject to
    stream fusion. For samples containing many values very close to the
    mean, this function is subject to inaccuracy due to catastrophic
    cancellation.
  kurtosis: Compute the excess kurtosis of a sample. This is a measure of
    the "peakedness" of its distribution. A high kurtosis indicates that
    more of the sample's variance is due to infrequent severe deviations,
    rather than more frequent modest deviations. A sample's excess kurtosis
    is not defined if its variance is zero. This function performs two
    passes over the sample, so is not subject to stream fusion. For samples
    containing many values very close to the mean, this function is subject
    to inaccuracy due to catastrophic cancellation.
  variance: Maximum likelihood estimate of a sample's variance. Also known
    as the population variance, where the denominator is n.
  varianceUnbiased: Unbiased estimate of a sample's variance. Also known as
    the sample variance, where the denominator is n-1.
  stdDev: Standard deviation. This is simply the square root of the maximum
    likelihood estimate of the variance.
  varianceWeighted: Weighted variance. This is biased estimation.
  fastVariance: Maximum likelihood estimate of a sample's variance.
  fastVarianceUnbiased: Unbiased estimate of a sample's variance.
  fastStdDev: Standard deviation. This is simply the square root of the
    maximum likelihood estimate of the variance.
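  As an illustration of the single-pass, numerically stable mean mentioned
  for mean above, here is a minimal Welford-style sketch over a plain list.
  The name welfordMean and the list interface are assumptions for
  illustration, not the package's array-based API.

    -- Welford's single-pass algorithm for the arithmetic mean: rather than
    -- summing all values and dividing (which can lose precision when the
    -- running sum grows large), nudge the running mean toward each new
    -- value by a fraction 1/n.
    welfordMean :: [Double] -> Double
    welfordMean = go 0 0
      where
        go :: Double -> Int -> [Double] -> Double
        go m _ []       = m            -- returns 0 for an empty sample
        go m n (x:rest) = go m' n' rest
          where
            n' = n + 1
            m' = m + (x - m) / fromIntegral n'

  For example, welfordMean [1,2,3] evaluates to 2.0 after three incremental
  updates, never forming the full sum explicitly.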
Statistics.Distribution.Normal (portable, experimental, bos@serpentine.com)

  NormalDistribution: The normal distribution.
  standard: Standard normal distribution, with mean equal to 0 and variance
    equal to 1.
  fromParams: Create a normal distribution from parameters. Arguments: mean
    of distribution; variance of distribution.
  fromSample: Create a distribution using parameters estimated from a
    sample. The variance is estimated using the maximum likelihood method
    (biased estimation).

Statistics.Math (portable, experimental, bos@serpentine.com)

  chebyshev: Evaluate a series of Chebyshev polynomials. Uses Clenshaw's
    algorithm (sketched after this module's entries). Arguments: parameter
    of each function; coefficients of each polynomial term, in increasing
    order.
  choose: The binomial coefficient.
      7 `choose` 3 == 35
  factorial: Compute the factorial function n!. Returns infinity if the
    input is above 170 (above which the result cannot be represented by a
    64-bit Double).
  logFactorial: Compute the natural logarithm of the factorial function.
    Gives 16 decimal digits of precision.
  incompleteGamma: Compute the incomplete gamma integral function γ(s,x).
    Uses Algorithm AS 239 by Shea. Arguments: s; x.
  logGamma: Compute the logarithm of the gamma function Γ(x). Uses Algorithm
    AS 245 by Macleod. Gives an accuracy of 10-12 significant decimal
    digits, except for small regions around x = 1 and x = 2, where the
    function goes to zero. For greater accuracy, use logGammaL. Returns
    infinity if the input is outside of the range (0 < x <= 1e305).
  logGammaL: Compute the logarithm of the gamma function, Γ(x). Uses a
    Lanczos approximation. This function is slower than logGamma, but gives
    14 or more significant decimal digits of accuracy, except around x = 1
    and x = 2, where the function goes to zero. Returns infinity if the
    input is outside of the range (0 < x <= 1e305).
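  To make the Clenshaw recurrence used by chebyshev concrete, here is a
  small self-contained sketch. The name clenshaw and the plain list of
  coefficients are illustrative assumptions, not the package's exact
  signature.

    -- Evaluate sum_k c_k * T_k(x), where T_k is the kth Chebyshev
    -- polynomial of the first kind, using Clenshaw's backward recurrence
    -- b_k = c_k + 2*x*b_{k+1} - b_{k+2}. Coefficients are supplied in
    -- increasing order: [c0, c1, c2, ...].
    clenshaw :: Double -> [Double] -> Double
    clenshaw _ []      = 0
    clenshaw x (c0:cs) = c0 + x * b1 - b2
      where
        -- fold from the highest-order coefficient down to c1,
        -- carrying the last two recurrence values
        (b1, b2) = foldr step (0, 0) cs
        step c (bk1, bk2) = (c + 2 * x * bk1 - bk2, bk1)

  For instance, clenshaw 0.5 [1,2,3] evaluates 1 + 2*T1(0.5) + 3*T2(0.5)
  = 1 + 1 - 1.5 = 0.5 without constructing the polynomials explicitly.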
Statistics.Distribution.Binomial (portable, experimental, bos@serpentine.com)

  BinomialDistribution: The binomial distribution.
  bdTrials: Number of trials.
  bdProbability: Probability.
  binomial: Arguments: number of trials; probability.

Statistics.Distribution.Gamma (portable, experimental, bos@serpentine.com)

  GammaDistribution: The gamma distribution.
  gdShape: Shape parameter, k.
  gdScale: Scale parameter, θ.

Statistics.Distribution.Hypergeometric (portable, experimental,
  bos@serpentine.com): HypergeometricDistribution, with parameters hdM, hdL
  and hdK, and a constructor taking m, l and k.

Statistics.Distribution.Poisson (portable, experimental,
  bos@serpentine.com): PoissonDistribution, fromLambda.

Statistics.Sample.Powers (portable, experimental, bos@serpentine.com)

  powers: O(n) Collect the n simple powers of a sample. Functions computed
    over a sample's simple powers require at least a certain number (or
    order) of powers to be collected: to compute the kth centralMoment, at
    least k simple powers must be collected; for the variance, at least 2
    simple powers are needed; for skewness, we need at least 3 simple
    powers; for kurtosis, at least 4 simple powers are required. This
    function is subject to stream fusion. Argument: n, the number of
    powers, where n >= 2.
  order: The order (number) of simple powers collected from a sample.
  centralMoment: Compute the kth central moment of a sample. The central
    moment is also known as the moment about the mean.
  variance: Maximum likelihood estimate of a sample's variance. Also known
    as the population variance, where the denominator is n. This is the
    second central moment of the sample. This is less numerically robust
    than the variance function in the Statistics.Sample module, but the
    number is essentially free to compute if you have already collected a
    sample's simple powers. Requires powers with order at least 2.
  stdDev: Standard deviation. This is simply the square root of the maximum
    likelihood estimate of the variance.
  varianceUnbiased: Unbiased estimate of a sample's variance. Also known as
    the sample variance, where the denominator is n-1. Requires powers with
    order at least 2.
  skewness: Compute the skewness of a sample. This is a measure of the
    asymmetry of its distribution. A sample with negative skew is said to be
    left-skewed; most of its mass is on the right of the distribution, with
    the tail on the left.
      skewness . powers 3 $ U.to [1,100,101,102,103]  ==> -1.497681449918257
    A sample with positive skew is said to be right-skewed.
      skewness . powers 3 $ U.to [1,2,3,4,100]  ==> 1.4975367033335198
    A sample's skewness is not defined if its variance is zero. Requires
    powers with order at least 3.
  kurtosis: Compute the excess kurtosis of a sample. This is a measure of
    the "peakedness" of its distribution. A high kurtosis indicates that the
    sample's variance is due more to infrequent severe deviations than to
    frequent modest deviations. A sample's excess kurtosis is not defined if
    its variance is zero. Requires powers with order at least 4.
  count: The number of elements in the original sample. This is the sample's
    zeroth simple power.
  sum: The sum of elements in the original sample. This is the sample's
    first simple power.
  mean: The arithmetic mean of elements in the original sample. This is less
    numerically robust than the mean function in the Statistics.Sample
    module, but the number is essentially free to compute if you have
    already collected a sample's simple powers.

Statistics.Distribution.Exponential (portable, experimental,
  bos@serpentine.com): ExponentialDistribution, with edLambda, the λ (scale)
  parameter.

Statistics.KernelDensity (portable, experimental, bos@serpentine.com)

  Kernel: The convolution kernel. Its parameters are as follows: scaling
    factor, 1/nh; bandwidth, h; a point at which to sample the input, p; one
    sample value, v.
  Bandwidth: The width of the convolution kernel used.
  Points: Points from the range of a sample.
  epanechnikovBW: Bandwidth estimator for an Epanechnikov kernel.
  gaussianBW: Bandwidth estimator for a Gaussian kernel.
  bandwidth: Compute the optimal bandwidth from the observed data for the
    given kernel.
  choosePoints: Choose a uniform range of points at which to estimate a
    sample's probability density function. If you are using a Gaussian
    kernel, multiply the sample's bandwidth by 3 before passing it to this
    function. If this function is passed an empty vector, it returns values
    of positive and negative infinity. Arguments: number of points to
    select, n; sample bandwidth, h; input data.
  epanechnikovKernel: Epanechnikov kernel for probability density function
    estimation.
  gaussianKernel: Gaussian kernel for probability density function
    estimation.
  estimatePDF: Kernel density estimator, providing a non-parametric way of
    estimating the PDF of a random variable. (A sketch of the idea follows
    this module's entries.) Arguments: kernel function; bandwidth, h; sample
    data; points at which to estimate.
  simplePDF: A helper for creating a simple kernel density estimation
    function with automatically chosen bandwidth and estimation points.
    Arguments: bandwidth function; kernel function; bandwidth scaling factor
    (3 for a Gaussian kernel, 1 for all others); number of points at which
    to estimate; sample data.
  epanechnikovPDF: Simple Epanechnikov kernel density estimator. Returns the
    uniformly spaced points from the sample range at which the density
    function was estimated, and the estimates at those points. Argument:
    number of points at which to estimate.
  gaussianPDF: Simple Gaussian kernel density estimator. Returns the
    uniformly spaced points from the sample range at which the density
    function was estimated, and the estimates at those points. Argument:
    number of points at which to estimate.
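  As an illustration of the kernel density estimation described above, here
  is a minimal, self-contained Gaussian KDE over plain lists. The name
  gaussianKDE, the list interface, and the caller-supplied fixed bandwidth
  are assumptions for illustration; the package's own API also chooses the
  evaluation points and bandwidth for you.

    -- Gaussian kernel density estimate: at each evaluation point p, average
    -- a Gaussian bump of width h centred on every sample value v, giving
    -- f(p) ~ (1 / (n*h)) * sum_v K((p - v) / h).
    gaussianKDE :: Double    -- bandwidth h (assumed > 0)
                -> [Double]  -- sample data (assumed non-empty)
                -> [Double]  -- points at which to estimate the density
                -> [Double]  -- density estimates at those points
    gaussianKDE h samples points = map densityAt points
      where
        n = fromIntegral (length samples)
        densityAt p = sum [ kernel ((p - v) / h) | v <- samples ] / (n * h)
        -- standard normal density, used as the smoothing kernel
        kernel u = exp (-0.5 * u * u) / sqrt (2 * pi)

  For example, gaussianKDE 0.5 [1,2,2,3] [0,0.5 .. 4] estimates the density
  of the four-point sample on a uniform grid from 0 to 4.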
Statistics.Resampling.Bootstrap (portable, experimental, bos@serpentine.com)

  Estimate: A point and interval estimate computed via an Estimator.
  estPoint: Point estimate.
  estLowerBound: Lower bound of the estimate interval (i.e. the lower bound
    of the confidence interval).
  estUpperBound: Upper bound of the estimate interval (i.e. the upper bound
    of the confidence interval).
  estConfidenceLevel: Confidence level of the confidence intervals.
  bootstrapBCA: Bias-corrected accelerated (BCA) bootstrap. This adjusts for
    both bias and skewness in the resampled distribution. Arguments:
    confidence level; sample data; estimators; resampled data.

Statistics.Autocorrelation (portable, experimental, bos@serpentine.com)

  autocovariance: Compute the autocovariance of a sample, i.e. the
    covariance of the sample against a shifted version of itself. (See the
    sketch after this module.)
  autocorrelation: Compute the autocorrelation function of a sample, and the
    upper and lower bounds of confidence intervals for each element. Note:
    the calculation of the 95% confidence interval assumes a stationary
    Gaussian process.
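  To make the autocovariance definition concrete, here is a small list-based
  sketch using one common convention (mean-centre the sample, then take the
  lagged dot product with a 1/n normalisation). The name autocovarianceList
  and the list interface are illustrative assumptions, not the package's
  vector-based API, whose normalisation may differ.

    -- Autocovariance of a non-empty sample at every lag k: the covariance
    -- of the mean-centred sample against a copy of itself shifted by k.
    autocovarianceList :: [Double] -> [Double]
    autocovarianceList xs = [ covAtLag k | k <- [0 .. n - 1] ]
      where
        n       = length xs
        m       = sum xs / fromIntegral n
        centred = map (subtract m) xs
        covAtLag k = sum (zipWith (*) centred (drop k centred))
                     / fromIntegral n

  At lag 0 this reduces to the (biased, denominator n) variance; dividing
  each element by that lag-0 value gives the corresponding autocorrelations.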