!"#$%&'()*+,-./0123456789:;<=>?@A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [\]^_`abcdefghijklmnopqrstuvwxyz{|}~portable experimentalbos@serpentine.com=Just like unsafePerformIO, but we inline it. Big performance 9 gains as it exposes lots of things to further inlining. /Very  unsafe/;. In particular, you should do no memory allocation inside  an  block. On Hugs this is just unsafePerformIO. portable experimentalbos@serpentine.comSort a vector. -Partially sort a vector, such that the least k elements will be  at the front.  The number k of least elements.  Return the indices of a vector. Zip a vector with its indices. 9Compute the minimum and maximum of a vector in one pass. 9Create a vector, using the given action to populate each  element. portable experimentalbos@serpentine.com>Weights for affecting the importance of elements of a sample. >A function that estimates a property of a sample, such as its  mean. GSample with weights. First element of sample is data, second is weight  Sample data.    portable experimentalbos@serpentine.com @A resample drawn randomly, with replacement, from a set of data B points. Distinct from a normal array to make it harder for your  humble author's brain to go wrong. AResample a data set repeatedly, with replacement, computing each # estimate over the resampled data. >Compute a statistical estimate repeatedly over a sample, each % time omitting a successive element.  Drop the kth element of a vector.      portable experimentalbos@serpentine.com 7The interface shared by all probability distributions. 5Probability density function. The probability that a  the random variable X has the value x , i.e. P(X=x). :Cumulative distribution function. The probability that a  random variable X is less than x , i.e. P(X"dx). <Inverse of the cumulative distribution function. The value  x for which P(X"dx). Approximate the value of X for which P(x>X)=p. 
Statistics.Distribution.Geometric
---------------------------------

GeometricDistribution
  The geometric distribution. Constructed with fromSuccess; its
  parameter is accessed with pdSuccess.

Statistics.Constants
--------------------

m_huge
  A very large number.

m_max_exp
  The largest x such that 2**(x-1) is approximately representable as a
  Double.

m_sqrt_2
  sqrt 2

m_sqrt_2_pi
  sqrt (2 * pi)

m_2_sqrt_pi
  2 / sqrt pi

m_1_sqrt_2
  1 / sqrt 2

m_epsilon
  The smallest Double eps such that 1 + eps /= 1.

m_ln_sqrt_2_pi
  log (sqrt (2 * pi))

m_pos_inf
  Positive infinity.

m_neg_inf
  Negative infinity.

m_NaN
  Not a number.

Statistics.Quantile
-------------------

ContParam
  Parameters a and b to the continuousBy function.

weightedAvg
  O(n log n). Estimate the kth q-quantile of a sample, using the
  weighted average method. Arguments: k, the desired quantile; q, the
  number of quantiles; x, the sample data.

continuousBy
  O(n log n). Estimate the kth q-quantile of a sample x, using the
  continuous sample method with the given parameters. This is the
  method used by most statistical software, such as R, Mathematica,
  SPSS, and S. Arguments: parameters a and b; k, the desired quantile;
  q, the number of quantiles; x, the sample data.

midspread
  O(n log n). Estimate the range between q-quantiles 1 and q-1 of a
  sample x, using the continuous sample method with the given
  parameters. For instance, the interquartile range (IQR) can be
  estimated as follows:

  > midspread medianUnbiased 4 (U.fromList [1,1,2,2,3])
  > ==> 1.333333

  Arguments: parameters a and b; q, the number of quantiles; x, the
  sample data.

cadpw
  California Department of Public Works definition, a=0, b=1. Gives a
  linear interpolation of the empirical CDF. This corresponds to
  method 4 in R and Mathematica.

hazen
  Hazen's definition, a=0.5, b=0.5. This is claimed to be popular
  among hydrologists. This corresponds to method 5 in R and
  Mathematica.

spss
  Definition used by the SPSS statistics application, with a=0, b=0
  (also known as Weibull's definition). This corresponds to method 6
  in R and Mathematica.
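To make the (a,b) parameterisation concrete, here is a sketch of the continuous sample method on plain sorted lists. quantileBy is an illustrative name, and the interpolation formula is the common (a,b) formulation; the library works on unboxed vectors and handles floating-point edge cases more carefully.

```haskell
import Data.List (sort)

-- Sketch of the continuous quantile method: interpolate between the
-- two order statistics that bracket position t = a + p*(n + 1 - a - b).
quantileBy :: (Double, Double)  -- parameters a and b
           -> Int               -- k, the desired quantile
           -> Int               -- q, the number of quantiles
           -> [Double]          -- x, the sample data
           -> Double
quantileBy (a, b) k q xs = (1 - h) * item (j - 1) + h * item j
  where
    p    = fromIntegral k / fromIntegral q
    t    = a + p * (fromIntegral n + 1 - a - b)
    j    = floor t
    h    = t - fromIntegral j
    n    = length xs
    sx   = sort xs
    item = (sx !!) . max 0 . min (n - 1)  -- clamp index into range
```

With a = b = 1/3 this reproduces the midspread example above: the difference of the 3rd and 1st 4-quantiles of [1,1,2,2,3] is 4/3.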
s
  Definition used by the S statistics application, with a=1, b=1. The
  interpolation points divide the sample range into n-1 intervals.
  This corresponds to method 7 in R and Mathematica.

medianUnbiased
  Median unbiased definition, a=1/3, b=1/3. The resulting quantile
  estimates are approximately median unbiased regardless of the
  distribution of x. This corresponds to method 8 in R and
  Mathematica.

normalUnbiased
  Normal unbiased definition, a=3/8, b=3/8. An approximately unbiased
  estimate if the empirical distribution approximates the normal
  distribution. This corresponds to method 9 in R and Mathematica.

Statistics.Sample
-----------------

mean
  Arithmetic mean. This uses Welford's algorithm to provide numerical
  stability, using a single pass over the sample data.

meanWeighted
  Arithmetic mean for a weighted sample. Uses an algorithm analogous
  to the one in mean.

harmonicMean
  Harmonic mean. This algorithm performs a single pass over the
  sample.

geometricMean
  Geometric mean of a sample containing no negative values.

centralMoment
  Compute the kth central moment of a sample. The central moment is
  also known as the moment about the mean. This function performs two
  passes over the sample, so it is not subject to stream fusion. For
  samples containing many values very close to the mean, this function
  is subject to inaccuracy due to catastrophic cancellation.

centralMoments
  Compute the kth and jth central moments of a sample. This function
  performs two passes over the sample, so it is not subject to stream
  fusion. For samples containing many values very close to the mean,
  this function is subject to inaccuracy due to catastrophic
  cancellation.

skewness
  Compute the skewness of a sample. This is a measure of the asymmetry
  of its distribution. A sample with negative skew is said to be
  left-skewed. Most of its mass is on the right of the distribution,
  with the tail on the left.

  > skewness $ U.to [1,100,101,102,103]
  > ==> -1.497681449918257

  A sample with positive skew is said to be right-skewed.

  > skewness $ U.to [1,2,3,4,100]
  > ==> 1.4975367033335198

  A sample's skewness is not defined if its variance is zero. This
  function performs two passes over the sample, so it is not subject
  to stream fusion. For samples containing many values very close to
  the mean, it is subject to inaccuracy due to catastrophic
  cancellation.
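The two-pass central-moment computation that centralMoment and skewness describe can be sketched on plain lists as follows. centralMomentOf and skewnessOf are illustrative names, not the library functions.

```haskell
-- Sketch: pass one computes the mean, pass two the moment about it.
centralMomentOf :: Int -> [Double] -> Double
centralMomentOf k xs = sum [ (x - m) ^ k | x <- xs ] / n
  where
    n = fromIntegral (length xs)
    m = sum xs / n

-- Skewness is the third central moment standardised by the variance.
skewnessOf :: [Double] -> Double
skewnessOf xs = centralMomentOf 3 xs / centralMomentOf 2 xs ** 1.5
```

`skewnessOf [1,100,101,102,103]` reproduces the left-skewed example above (approximately -1.4977).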
kurtosis
  Compute the excess kurtosis of a sample. This is a measure of the
  "peakedness" of its distribution. A high kurtosis indicates that
  more of the sample's variance is due to infrequent severe
  deviations, rather than more frequent modest deviations. A sample's
  excess kurtosis is not defined if its variance is zero. This
  function performs two passes over the sample, so it is not subject
  to stream fusion. For samples containing many values very close to
  the mean, it is subject to inaccuracy due to catastrophic
  cancellation.

variance
  Maximum likelihood estimate of a sample's variance. Also known as
  the population variance, where the denominator is n.

varianceUnbiased
  Unbiased estimate of a sample's variance. Also known as the sample
  variance, where the denominator is n-1.

stdDev
  Standard deviation. This is simply the square root of the unbiased
  estimate of the variance.

varianceWeighted
  Weighted variance. This is a biased estimate.

fastVariance
  Maximum likelihood estimate of a sample's variance.

fastVarianceUnbiased
  Unbiased estimate of a sample's variance.

fastStdDev
  Standard deviation. This is simply the square root of the maximum
  likelihood estimate of the variance.

Statistics.Distribution.Normal
------------------------------

NormalDistribution
  The normal distribution.

standard
  Standard normal distribution, with mean equal to 0 and variance
  equal to 1.

fromParams
  Create a normal distribution from its parameters. Arguments: mean of
  the distribution; variance of the distribution.

fromSample
  Create a distribution using parameters estimated from a sample.
  Variance is estimated using the maximum likelihood method (biased
  estimation).
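The parameter estimation behind fromSample can be sketched directly: the sample mean and the biased (denominator n) variance, which then determine the normal density. normalFromSample and normalDensity are illustrative stand-ins on plain lists, not the library API.

```haskell
-- Sketch: maximum likelihood estimates of mean and variance, the
-- quantities the fromSample description above refers to.
normalFromSample :: [Double] -> (Double, Double)
normalFromSample xs = (m, v)
  where
    n = fromIntegral (length xs)
    m = sum xs / n
    v = sum [ (x - m) ^ 2 | x <- xs ] / n  -- denominator n: biased/ML

-- The density of a normal distribution with the given mean and variance.
normalDensity :: (Double, Double) -> Double -> Double
normalDensity (m, v) x =
  exp (negate ((x - m) ^ 2) / (2 * v)) / sqrt (2 * pi * v)
```

For example, `normalFromSample [1,2,3,4]` gives mean 2.5 and variance 1.25 (not the unbiased 5/3).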
Statistics.Math
---------------

chebyshev
  Evaluate a Chebyshev polynomial of the first kind. Uses Clenshaw's
  algorithm. Arguments: parameter of each function; coefficients of
  each polynomial term, in increasing order.

chebyshevBroucke
  Evaluate a Chebyshev polynomial of the first kind. Uses Broucke's
  ECHEB algorithm, and his convention for coefficient handling, and so
  gives different results than chebyshev for the same inputs.
  Arguments: parameter of each function; coefficients of each
  polynomial term, in increasing order.

logChooseFast
  Quickly compute the natural logarithm of n `choose` k, with no
  checking.

choose
  Compute the binomial coefficient n `choose` k. For values of k > 30,
  this uses an approximation for performance reasons. The
  approximation is accurate to 7 decimal places in the worst case, but
  is typically accurate to 9 decimal places or better. Example:

  > 7 `choose` 3 == 35

factorial
  Compute the factorial function n!. Returns positive infinity if the
  input is above 170 (above which the result cannot be represented by
  a 64-bit Double).

logFactorial
  Compute the natural logarithm of the factorial function. Gives 16
  decimal digits of precision.

incompleteGamma
  Compute the normalized lower incomplete gamma function P(s,x).
  Normalization means that P(s,∞)=1. Uses Algorithm AS 239 by Shea.
  Arguments: s; x.

logGamma
  Compute the logarithm of the gamma function, Γ(x). Uses Algorithm
  AS 245 by Macleod. Gives an accuracy of 10-12 significant decimal
  digits, except for small regions around x = 1 and x = 2, where the
  function goes to zero. For greater accuracy, use logGammaL. Returns
  positive infinity if the input is outside of the range
  (0 < x ≤ 1e305).

logGammaL
  Compute the logarithm of the gamma function, Γ(x). Uses a Lanczos
  approximation. This function is slower than logGamma, but gives 14
  or more significant decimal digits of accuracy, except around x = 1
  and x = 2, where the function goes to zero. Returns positive
  infinity if the input is outside of the range (0 < x ≤ 1e305).

logGammaCorrection
  Compute the log gamma correction factor for x ≥ 10. This correction
  factor is suitable for an alternate (but less numerically accurate)
  definition of logGamma:

  > lgg x = 0.5 * log (2*pi) + (x-0.5) * log x - x
  >         + logGammaCorrection x
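For small k, the binomial coefficient that choose computes has an exact multiplicative form, sketched here. choose' is an illustrative name; per the documentation above, the library switches to a log-gamma based approximation for k > 30.

```haskell
-- Sketch: C(n,k) as the product over i of (n - k' + i) / i, using the
-- symmetry k' = min k (n - k) to keep the loop short.
choose' :: Int -> Int -> Double
choose' n k
  | k < 0 || k > n = 0
  | otherwise      = foldl step 1 [1 .. k']
  where
    k'         = min k (n - k)
    step acc i = acc * fromIntegral (n - k' + i) / fromIntegral i
```

This matches the documented example: `choose' 7 3 == 35`.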
logBeta
  Compute the natural logarithm of the beta function.

log1p
  Compute the natural logarithm of 1 + x. This is accurate even for
  values of x near zero, where use of log (1+x) would lose precision.

Statistics.Distribution.Binomial
--------------------------------

BinomialDistribution
  The binomial distribution.

bdTrials
  Number of trials.

bdProbability
  Probability of success.

binomial
  Create a binomial distribution. Arguments: number of trials;
  probability.

Statistics.Distribution.Gamma
-----------------------------

GammaDistribution
  The gamma distribution.

gdShape
  Shape parameter, k.

gdScale
  Scale parameter, θ.

Statistics.Distribution.Hypergeometric
--------------------------------------

HypergeometricDistribution
  The hypergeometric distribution, with parameters m, l, and k
  (accessed with hdM, hdL, and hdK).

Statistics.Distribution.Poisson
-------------------------------

PoissonDistribution
  The Poisson distribution; constructed with fromLambda.

Statistics.Sample.Powers
------------------------

powers
  O(n) Collect the n simple powers of a sample. Functions computed
  over a sample's simple powers require at least a certain number (or
  order) of powers to be collected: to compute the kth centralMoment,
  at least k simple powers must be collected; for variance, at least 2
  simple powers are needed; for skewness, we need at least 3 simple
  powers; for kurtosis, at least 4 simple powers are required. This
  function is subject to stream fusion. Argument: n, the number of
  powers, where n >= 2.

order
  The order (number) of simple powers collected from a sample.

centralMoment
  Compute the kth central moment of a sample. The central moment is
  also known as the moment about the mean.

variance
  Maximum likelihood estimate of a sample's variance. Also known as
  the population variance, where the denominator is n. This is the
  second central moment of the sample. This is less numerically robust
  than the variance function in the Statistics.Sample module, but the
  number is essentially free to compute if you have already collected
  a sample's simple powers. Requires Powers with order at least 2.

stdDev
  Standard deviation. This is simply the square root of the maximum
  likelihood estimate of the variance.
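The simple-powers idea above can be sketched on plain lists: collect the sums of x^0 .. x^k once, then derive statistics by cheap arithmetic on those sums. powersOf and varianceOf are illustrative names; the library stores the sums in an unboxed vector and collects them in a single fused pass, whereas this sketch traverses the list once per power for clarity.

```haskell
-- Sketch: the sums sum(x^0), sum(x^1), ..., sum(x^k) of a sample.
powersOf :: Int -> [Double] -> [Double]
powersOf k xs = [ sum [ x ^ i | x <- xs ] | i <- [0 .. k] ]

-- ML variance from the first three sums: E[x^2] - (E[x])^2.
-- Needs order >= 2, i.e. at least the sums of x^0, x^1 and x^2.
varianceOf :: [Double] -> Double
varianceOf (s0 : s1 : s2 : _) = s2 / s0 - (s1 / s0) ^ 2
varianceOf _                  = error "varianceOf: order must be >= 2"
```

For example, `powersOf 2 [1,2,3]` is `[3,6,14]`, from which the biased variance 2/3 falls out with no further pass over the data.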
varianceUnbiased
  Unbiased estimate of a sample's variance. Also known as the sample
  variance, where the denominator is n-1. Requires Powers with order
  at least 2.

skewness
  Compute the skewness of a sample. This is a measure of the asymmetry
  of its distribution. A sample with negative skew is said to be
  left-skewed. Most of its mass is on the right of the distribution,
  with the tail on the left.

  > skewness . powers 3 $ U.to [1,100,101,102,103]
  > ==> -1.497681449918257

  A sample with positive skew is said to be right-skewed.

  > skewness . powers 3 $ U.to [1,2,3,4,100]
  > ==> 1.4975367033335198

  A sample's skewness is not defined if its variance is zero. Requires
  Powers with order at least 3.

kurtosis
  Compute the excess kurtosis of a sample. This is a measure of the
  "peakedness" of its distribution. A high kurtosis indicates that the
  sample's variance is due more to infrequent severe deviations than
  to frequent modest deviations. A sample's excess kurtosis is not
  defined if its variance is zero. Requires Powers with order at
  least 4.

count
  The number of elements in the original Sample. This is the sample's
  zeroth simple power.

sum
  The sum of elements in the original Sample. This is the sample's
  first simple power.

mean
  The arithmetic mean of elements in the original Sample. This is less
  numerically robust than the mean function in the Statistics.Sample
  module, but the number is essentially free to compute if you have
  already collected a sample's simple powers.

Statistics.Distribution.Exponential
-----------------------------------

ExponentialDistribution
  The exponential distribution, parameterized by edLambda, the λ
  (scale) parameter.

Statistics.KernelDensity
------------------------

Kernel
  The convolution kernel. Its parameters are as follows: scaling
  factor, 1/nh; bandwidth, h; a point at which to sample the input, p;
  one sample value, v.

Bandwidth
  The width of the convolution kernel used.

Points
  Points from the range of a Sample.

epanechnikovBW
  Bandwidth estimator for an Epanechnikov kernel.

gaussianBW
  Bandwidth estimator for a Gaussian kernel.

bandwidth
  Compute the optimal bandwidth from the observed data for the given
  kernel.
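The convolution described by Kernel above (each kernel evaluation scaled by 1/nh) amounts to the following estimator, sketched with the textbook Epanechnikov kernel. The names and the exact kernel scaling are this sketch's own; the library's conventions differ slightly.

```haskell
-- Textbook Epanechnikov kernel: 0.75 * (1 - u^2) on [-1, 1].
epanechnikov' :: Double -> Double
epanechnikov' u
  | abs u <= 1 = 0.75 * (1 - u * u)
  | otherwise  = 0

-- Sketch of a kernel density estimator:
-- f(p) = 1/(n*h) * sum over sample values v of K((p - v) / h).
estimateDensity :: (Double -> Double)  -- kernel K
                -> Double              -- bandwidth h
                -> [Double]            -- sample data
                -> [Double]            -- points at which to estimate
                -> [Double]
estimateDensity kern h xs ps =
  [ sum [ kern ((p - v) / h) | v <- xs ] / (n * h) | p <- ps ]
  where n = fromIntegral (length xs)
```

With a single sample value at 0 and bandwidth 1, the estimate at 0 is just the kernel's peak, 0.75, and the estimate at any point outside [-1, 1] is 0.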
choosePoints
  Choose a uniform range of points at which to estimate a sample's
  probability density function. If you are using a Gaussian kernel,
  multiply the sample's bandwidth by 3 before passing it to this
  function. If this function is passed an empty vector, it returns
  values of positive and negative infinity. Arguments: number of
  points to select, n; sample bandwidth, h; input data.

epanechnikovKernel
  Epanechnikov kernel for probability density function estimation.

gaussianKernel
  Gaussian kernel for probability density function estimation.

estimatePDF
  Kernel density estimator, providing a non-parametric way of
  estimating the PDF of a random variable. Arguments: kernel function;
  bandwidth, h; sample data; points at which to estimate.

simplePDF
  A helper for creating a simple kernel density estimation function
  with automatically chosen bandwidth and estimation points.
  Arguments: bandwidth function; kernel function; bandwidth scaling
  factor (3 for a Gaussian kernel, 1 for all others); number of points
  at which to estimate; sample data.

epanechnikovPDF
  Simple Epanechnikov kernel density estimator. Returns the uniformly
  spaced points from the sample range at which the density function
  was estimated, and the estimates at those points. Arguments: number
  of points at which to estimate; data sample.

gaussianPDF
  Simple Gaussian kernel density estimator. Returns the uniformly
  spaced points from the sample range at which the density function
  was estimated, and the estimates at those points. Arguments: number
  of points at which to estimate; data sample.

Statistics.Resampling.Bootstrap
-------------------------------

Estimate
  A point and interval estimate computed via an Estimator.

estPoint
  Point estimate.

estLowerBound
  Lower bound of the estimate interval (i.e. the lower bound of the
  confidence interval).

estUpperBound
  Upper bound of the estimate interval (i.e. the upper bound of the
  confidence interval).

estConfidenceLevel
  Confidence level of the confidence intervals.

bootstrapBCA
  Bias-corrected accelerated (BCA) bootstrap. This adjusts for both
  bias and skewness in the resampled distribution. Arguments:
  confidence level; sample data; estimators; resampled data.
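To show how resampled estimates become a point-and-interval result like Estimate, here is a plain percentile-interval sketch. It performs none of the bias or acceleration corrections that bootstrapBCA applies, and Est and percentileEstimate are illustrative names, not the library API.

```haskell
import Data.List (sort)

-- A point-and-interval result, mirroring the Estimate fields above.
data Est = Est
  { estPt :: Double   -- point estimate
  , estLo :: Double   -- lower bound of the confidence interval
  , estHi :: Double   -- upper bound of the confidence interval
  , estCL :: Double   -- confidence level
  } deriving (Eq, Show)

-- Sketch: take the empirical alpha/2 and 1 - alpha/2 percentiles of
-- the resampled estimates as the interval (no BCA corrections).
percentileEstimate :: Double    -- confidence level, e.g. 0.95
                   -> Double    -- point estimate from the full sample
                   -> [Double]  -- estimates over the resamples
                   -> Est
percentileEstimate cl pt res = Est pt (sorted !! lo) (sorted !! hi) cl
  where
    sorted = sort res
    n      = length res
    alpha  = (1 - cl) / 2
    lo     = max 0 (floor (alpha * fromIntegral n))
    hi     = min (n - 1) (ceiling ((1 - alpha) * fromIntegral n) - 1)
```

For instance, with resample estimates [1..10] and a 0.9 confidence level, the interval spans the 1st through 10th sorted values.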
Statistics.Autocorrelation
--------------------------

autocovariance
  Compute the autocovariance of a sample, i.e. the covariance of the
  sample against a shifted version of itself.

autocorrelation
  Compute the autocorrelation function of a sample, and the upper and
  lower bounds of confidence intervals for each element. Note: the
  calculation of the 95% confidence interval assumes a stationary
  Gaussian process.
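The autocovariance described above has a direct sketch on plain lists: centre the sample, then average the products of the series with its lag-k shift. autocov is an illustrative name; normalisation conventions vary, and this sketch divides by n.

```haskell
-- Sketch: covariance of the sample against its lag-k shifted self.
autocov :: Int -> [Double] -> Double
autocov k xs =
  sum (zipWith (\a b -> (a - m) * (b - m)) xs (drop k xs))
    / fromIntegral n
  where
    n = length xs
    m = sum xs / fromIntegral n
```

At lag 0 this is just the population variance of the sample.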