statistics-0.3.6
Maintainer: bos@serpentine.com. Stability: experimental. Portability: portable.

Statistics.Internal

inlinePerformIO: Just like unsafePerformIO, but we inline it. Big performance gains, as it exposes lots of things to further inlining. Very unsafe: in particular, you should do no memory allocation inside an inlinePerformIO block. On Hugs this is just unsafePerformIO.

Statistics.RandomVariate

Seed: An immutable snapshot of the state of a Gen.

Gen: State of the pseudo-random number generator.

Variate: The class of types for which we can generate uniformly distributed random variates.

The uniform PRNG uses Marsaglia's MWC256 (also known as MWC8222) multiply-with-carry generator, which has a period of 2^8222 and fares well in tests of randomness. It is also extremely fast, between 2 and 3 times faster than the Mersenne Twister. Note: Marsaglia's PRNG is not known to be cryptographically secure, so you should not use it for cryptographic operations.

uniform: Generate a single uniformly distributed random variate. The range of values produced varies by type:

* For fixed-width integral types, the type's entire range is used.

* For floating-point numbers, the range (0,1] is used. Zero is explicitly excluded, to allow variates to be used in statistical calculations that require non-zero values (e.g. uses of the log function).

The range of random Float variates is the same as for Double. To generate a Float variate with a range of [0,1), subtract 2**(-33). To do the same with Double variates, subtract 2**(-53).

create: Create a generator for variates using a fixed seed.

initialize: Create a generator for variates using the given seed, of which up to 256 elements will be used. For arrays of fewer than 256 elements, part of the default seed will be used to finish initializing the generator's state. Examples:

  initialize (singletonU 42)
  initialize (toU [4, 8, 15, 16, 23, 42])

If a seed contains fewer than 256 elements, it is first used verbatim, then its elements are xor'ed against elements of the default seed until 256 elements are reached.

save: Save the state of a Gen, for later use by restore.

restore: Create a new Gen that mirrors the state of a saved Seed.

withTime: Using the current time as a seed, perform an action that uses a random variate generator. This is a horrible fallback for Windows systems.

withSystemRandom: Seed a PRNG with data from the system's fast source of pseudo-random numbers ("/dev/urandom" on Unix-like systems), then run the given action. Note: on Windows, this code does not yet use the native Cryptographic API as a source of random numbers (it uses the system clock instead). As a result, the sequences it generates may not be highly independent.

shiftL: Unchecked 64-bit left shift.

shiftR: Unchecked 64-bit right shift.

nextIndex: Compute the next index into the state pool. This is simply addition modulo 256.

uniformArray: Generate an array of pseudo-random variates. This is not necessarily faster than invoking uniform repeatedly in a loop, but it may be more convenient to use in some situations.

normal: Generate a normally distributed random variate. The implementation uses Doornik's modified ziggurat algorithm. Compared to the ziggurat algorithm usually used, this is slower, but it generates more independent variates that pass stringent tests of randomness.

Statistics.Function

sort: Sort an array.

partialSort: Partially sort an array, such that the least k elements will be at the front. Takes the number k of least elements as an argument.

indices: Return the indices of an array.

minMax: Compute the minimum and maximum of an array in one pass.

createU, createIO: Create an array, using the given action to populate each element.
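As a minimal sketch of the Statistics.RandomVariate workflow described above: the ST-based shapes assumed here (create :: ST s (Gen s); uniform :: Variate a => Gen s -> ST s a; normal :: Gen s -> ST s Double) are inferred from the documentation, not verified against this release.

  -- Generate one uniform and one normal variate from the fixed default seed.
  -- Assumption: create, uniform and normal run in ST, as suggested above.
  import Control.Monad.ST (runST)
  import Statistics.RandomVariate (create, normal, uniform)

  fixedSeedVariates :: (Double, Double)
  fixedSeedVariates = runST (do
    gen <- create        -- generator with the fixed default seed (reproducible)
    u   <- uniform gen   -- uniformly distributed in (0,1]
    n   <- normal gen    -- normally distributed, Doornik ziggurat
    return (u, n))

For a non-reproducible stream, withSystemRandom can be used instead of create, at the cost of seeding from the system's pseudo-random source.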
Statistics.Types

Weights: Weights for affecting the importance of elements of a sample.

Estimator: A function that estimates a property of a sample, such as its mean.

Sample: Sample data.

Statistics.Resampling

Resample: A resample drawn randomly, with replacement, from a set of data points. Distinct from a normal array to make it harder for your humble author's brain to go wrong.

resample: Resample a data set repeatedly, with replacement, computing each estimate over the resampled data.

jackknife: Compute a statistical estimate repeatedly over a sample, each time omitting a successive element.

dropAt: Drop the kth element of a vector.

Statistics.Distribution

Distribution: The interface shared by all probability distributions.

density: Probability density function. The probability that the random variable X has the value x, i.e. P(X=x).

cumulative: Cumulative distribution function. The probability that a random variable X is less than or equal to x, i.e. P(X ≤ x).

quantile: Inverse of the cumulative distribution function. The value x for which P(X ≤ x) = p.

findRoot: Approximate the value of X for which P(x>X)=p. This method uses a combination of Newton-Raphson iteration and bisection, with the given guess as a starting point. The upper and lower bounds specify the interval in which the probability distribution reaches the value p. Arguments: probability p; initial guess; lower bound on interval; upper bound on interval.

Statistics.Distribution.Geometric

Exports GeometricDistribution, pdSuccess, and fromSuccess.

Statistics.Constants

m_huge: A very large number.

m_max_exp: The largest Int x such that 2**(x-1) is approximately representable as a Double.

m_sqrt_2: sqrt 2

m_sqrt_2_pi: sqrt (2 * pi)

m_2_sqrt_pi: 2 / sqrt pi

m_1_sqrt_2: 1 / sqrt 2

m_epsilon: The smallest Double larger than 1.

Statistics.Quantile

ContParam: Parameters a and b to the continuousBy function.

weightedAvg: O(n log n). Estimate the kth q-quantile of a sample, using the weighted average method. Arguments: k, the desired quantile; q, the number of quantiles; x, the sample data.

continuousBy: O(n log n). Estimate the kth q-quantile of a sample x, using the continuous sample method with the given parameters. This is the method used by most statistical software, such as R, Mathematica, SPSS, and S. Arguments: parameters a and b; k, the desired quantile; q, the number of quantiles; x, the sample data.

midspread: O(n log n). Estimate the range between q-quantiles 1 and q-1 of a sample x, using the continuous sample method with the given parameters. For instance, the interquartile range (IQR) can be estimated as follows:

  midspread medianUnbiased 4 (toU [1,1,2,2,3])
  ==> 1.333333

Arguments: parameters a and b; q, the number of quantiles; x, the sample data.

cadpw: California Department of Public Works definition, a=0, b=1. Gives a linear interpolation of the empirical CDF. This corresponds to method 4 in R and Mathematica.

hazen: Hazen's definition, a=0.5, b=0.5. This is claimed to be popular among hydrologists. This corresponds to method 5 in R and Mathematica.

spss: Definition used by the SPSS statistics application, with a=0, b=0 (also known as Weibull's definition). This corresponds to method 6 in R and Mathematica.

s: Definition used by the S statistics application, with a=1, b=1. The interpolation points divide the sample range into n-1 intervals. This corresponds to method 7 in R and Mathematica.

medianUnbiased: Median unbiased definition, a=1/3, b=1/3. The resulting quantile estimates are approximately median unbiased regardless of the distribution of x. This corresponds to method 8 in R and Mathematica.

normalUnbiased: Normal unbiased definition, a=3/8, b=3/8. An approximately unbiased estimate if the empirical distribution approximates the normal distribution. This corresponds to method 9 in R and Mathematica.
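The sketch below applies the continuous sample method to the small sample from the midspread example above, estimating the first quartile and the interquartile range. It assumes toU comes from the uvector package's Data.Array.Vector (an assumption about this release's array API), and it follows the argument order given in the parameter lists above.

  import Data.Array.Vector (toU)
  import Statistics.Quantile (continuousBy, medianUnbiased, midspread)

  -- First quartile and interquartile range of a small sample.
  quartileExample :: (Double, Double)
  quartileExample =
    let xs = toU [1, 1, 2, 2, 3]
    in ( continuousBy medianUnbiased 1 4 xs   -- k = 1, q = 4: first quartile
       , midspread medianUnbiased 4 xs )      -- ==> 1.333333, as above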
Statistics.Sample

mean: Arithmetic mean. This uses Welford's algorithm to provide numerical stability, using a single pass over the sample data.

harmonicMean: Harmonic mean. This algorithm performs a single pass over the sample.

geometricMean: Geometric mean of a sample containing no negative values.

centralMoment: Compute the kth central moment of a sample. The central moment is also known as the moment about the mean. This function performs two passes over the sample, so it is not subject to stream fusion. For samples containing many values very close to the mean, this function is subject to inaccuracy due to catastrophic cancellation.

centralMoments: Compute the kth and jth central moments of a sample. This function performs two passes over the sample, so it is not subject to stream fusion. For samples containing many values very close to the mean, this function is subject to inaccuracy due to catastrophic cancellation.

skewness: Compute the skewness of a sample. This is a measure of the asymmetry of its distribution. A sample with negative skew is said to be left-skewed. Most of its mass is on the right of the distribution, with the tail on the left.

  skewness $ toU [1,100,101,102,103]
  ==> -1.497681449918257

A sample with positive skew is said to be right-skewed.

  skewness $ toU [1,2,3,4,100]
  ==> 1.4975367033335198

A sample's skewness is not defined if its variance is zero. This function performs two passes over the sample, so it is not subject to stream fusion. For samples containing many values very close to the mean, this function is subject to inaccuracy due to catastrophic cancellation.

kurtosis: Compute the excess kurtosis of a sample. This is a measure of the "peakedness" of its distribution. A high kurtosis indicates that more of the sample's variance is due to infrequent severe deviations, rather than more frequent modest deviations. A sample's excess kurtosis is not defined if its variance is zero. This function performs two passes over the sample, so it is not subject to stream fusion. For samples containing many values very close to the mean, this function is subject to inaccuracy due to catastrophic cancellation.

variance: Maximum likelihood estimate of a sample's variance. Also known as the population variance, where the denominator is n.

varianceUnbiased: Unbiased estimate of a sample's variance. Also known as the sample variance, where the denominator is n-1.

stdDev: Standard deviation. This is simply the square root of the maximum likelihood estimate of the variance.

fastVariance: Maximum likelihood estimate of a sample's variance.

fastVarianceUnbiased: Unbiased estimate of a sample's variance.

fastStdDev: Standard deviation. This is simply the square root of the maximum likelihood estimate of the variance.

Statistics.Distribution.Exponential

ExponentialDistribution: The exponential distribution.

edLambda: The lambda (scale) parameter.

Statistics.Distribution.Normal

NormalDistribution: The normal distribution.

Statistics.Math

chebyshev: Evaluate a series of Chebyshev polynomials, using Clenshaw's algorithm. Arguments: the parameter of each function; the coefficients of each polynomial term, in increasing order.

choose: The binomial coefficient.

  7 `choose` 3 == 35

factorial: Compute the factorial function n!. Returns positive infinity if the input is above 170 (above which the result cannot be represented by a 64-bit Double).

logFactorial: Compute the natural logarithm of the factorial function. Gives 16 decimal digits of precision.

incompleteGamma: Compute the incomplete gamma integral function. Uses Algorithm AS 239 by Shea. Arguments: s; x.

logGamma: Compute the logarithm of the gamma function, Γ(x). Uses Algorithm AS 245 by Macleod. Gives an accuracy of 10 to 12 significant decimal digits, except for small regions around x = 1 and x = 2, where the function goes to zero. For greater accuracy, use logGammaL. Returns positive infinity if the input is outside of the range 0 < x ≤ 1e305.

logGammaL: Compute the logarithm of the gamma function, Γ(x). Uses a Lanczos approximation. This function is slower than logGamma, but gives 14 or more significant decimal digits of accuracy, except around x = 1 and x = 2, where the function goes to zero. Returns positive infinity if the input is outside of the range 0 < x ≤ 1e305.
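Returning to the Statistics.Sample functions above, the sketch below collects several descriptive statistics for the left-skewed example sample. As elsewhere in these docs, toU from the uvector package's Data.Array.Vector is assumed.

  import Data.Array.Vector (toU)
  import Statistics.Sample (mean, skewness, variance, varianceUnbiased)

  -- Descriptive statistics for the left-skewed example sample above.
  describe :: (Double, Double, Double, Double)
  describe =
    let xs = toU [1, 100, 101, 102, 103]
    in ( mean xs               -- single-pass Welford mean
       , variance xs           -- maximum likelihood estimate, denominator n
       , varianceUnbiased xs   -- sample variance, denominator n-1
       , skewness xs )         -- ==> -1.497681449918257, as above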
Returns " if the E input is above 170 (above which the result cannot be represented by  a 64-bit ). P@Compute the natural logarithm of the factorial function. Gives ! 16 decimal digits of precision. Q/Compute the incomplete gamma integral function (s,x).  Uses Algorithm AS 239 by Shea. s x R,Compute the logarithm of the gamma function (x ). Uses  Algorithm AS 245 by Macleod. Gives an accuracy of 10 &12 significant decimal digits, except  for small regions around x = 1 and x = 2, where the function * goes to zero. For greater accuracy, use S. Returns ") if the input is outside of the range (0 < x  "d 1e305). S-Compute the logarithm of the gamma function, (x ). Uses a  Lanczos approximation. This function is slower than R, but gives 14 or more 7 significant decimal digits of accuracy, except around x = 1 and  x' = 2, where the function goes to zero. Returns ") if the input is outside of the range (0 < x  "d 1e305). MNOPQRSMNOPQRSMNOPQRS portable experimentalbos@serpentine.com TThe binomial distribution. UNumber of trials. V Probability. WNumber of trials.  Probability. TUVWTUVWUVTUVUVWportable experimentalbos@serpentine.comXThe gamma distribution. YShape parameter, k. ZScale parameter, . XYZXYZYZXYZYZportable experimentalbos@serpentine.com [\]^_m l k [\]^_[\]^_\]^[\]^\]^_portable experimentalbos@serpentine.com`a`a`a`aportable experimentalbos@serpentine.com bcO(n) Collect the n simple powers of a sample.  Functions computed over a sample'#s simple powers require at least a  certain number (or order) of powers to be collected.  To compute the kth e , at least k simple powers  must be collected.  For the f', at least 2 simple powers are needed.  For i$, we need at least 3 simple powers.  For j), at least 4 simple powers are required. +This function is subject to stream fusion. n, the number of powers, where n >= 2. d5The order (number) of simple powers collected from a . e Compute the kth central moment of a . The central 4 moment is also known as the moment about the mean. f'Maximum likelihood estimate of a sample's variance. Also known 6 as the population variance, where the denominator is n . This is * the second central moment of the sample. BThis is less numerically robust than the variance function in the  Statistics.Sample/ module, but the number is essentially free to / compute if you have already collected a sample's simple powers.  Requires b with d at least 2. g;Standard deviation. This is simply the square root of the . maximum likelihood estimate of the variance. hUnbiased estimate of a sample's variance. Also known as the + sample variance, where the denominator is n-1.  Requires b with d at least 2. i;Compute the skewness of a sample. This is a measure of the  asymmetry of its distribution. *A sample with negative skew is said to be  left-skewed . Most of D its mass is on the right of the distribution, with the tail on the  left.  / skewness . powers 3 $ toU [1,100,101,102,103]  ==> -1.497681449918257 *A sample with positive skew is said to be  right-skewed.  ) skewness . powers 3 $ toU [1,2,3,4,100]  ==> 1.4975367033335198 A sample'!s skewness is not defined if its f is zero.  Requires b with d at least 3. j?Compute the excess kurtosis of a sample. This is a measure of  the " peakedness"1 of its distribution. A high kurtosis indicates  that the sample',s variance is due more to infrequent severe 0 deviations than to frequent modest deviations. A sample'(s excess kurtosis is not defined if its f is  zero.  Requires b with d at least 4. k'The number of elements in the original . 
Statistics.KernelDensity

Kernel: The convolution kernel. Its parameters are as follows: scaling factor, 1/nh; bandwidth, h; a point at which to sample the input, p; one sample value, v.

Bandwidth: The width of the convolution kernel used.

Points: Points from the range of a Sample.

epanechnikovBW: Bandwidth estimator for an Epanechnikov kernel.

gaussianBW: Bandwidth estimator for a Gaussian kernel.

bandwidth: Compute the optimal bandwidth from the observed data for the given kernel.

choosePoints: Choose a uniform range of points at which to estimate a sample's probability density function. If you are using a Gaussian kernel, multiply the sample's bandwidth by 3 before passing it to this function. If this function is passed an empty vector, it returns values of positive and negative infinity. Arguments: number of points to select, n; sample bandwidth, h; input data.

epanechnikovKernel: Epanechnikov kernel for probability density function estimation.

gaussianKernel: Gaussian kernel for probability density function estimation.

estimatePDF: Kernel density estimator, providing a non-parametric way of estimating the PDF of a random variable. Arguments: kernel function; bandwidth, h; sample data; points at which to estimate.

simplePDF: A helper for creating a simple kernel density estimation function with automatically chosen bandwidth and estimation points. Arguments: bandwidth function; kernel function; bandwidth scaling factor (3 for a Gaussian kernel, 1 for all others); number of points at which to estimate; sample data.

epanechnikovPDF: Simple Epanechnikov kernel density estimator. Returns the uniformly spaced points from the sample range at which the density function was estimated, and the estimates at those points. Argument: number of points at which to estimate.

gaussianPDF: Simple Gaussian kernel density estimator. Returns the uniformly spaced points from the sample range at which the density function was estimated, and the estimates at those points. Argument: number of points at which to estimate.

Statistics.Resampling.Bootstrap

Estimate: A point and interval estimate computed via an Estimator.

estPoint: Point estimate.

estLowerBound: Lower bound of the estimate interval (i.e. the lower bound of the confidence interval).

estUpperBound: Upper bound of the estimate interval (i.e. the upper bound of the confidence interval).

estConfidenceLevel: Confidence level of the confidence intervals.

bootstrapBCA: Bias-corrected accelerated (BCA) bootstrap. This adjusts for both bias and skewness in the resampled distribution. Arguments: confidence level; sample data; estimators; resampled data.

Statistics.Autocorrelation

autocovariance: Compute the autocovariance of a sample, i.e. the covariance of the sample against a shifted version of itself.

autocorrelation: Compute the autocorrelation function of a sample, and the upper and lower bounds of confidence intervals for each element. Note: the calculation of the 95% confidence interval assumes a stationary Gaussian process.
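A minimal sketch of the simple density estimators in Statistics.KernelDensity above. The assumptions here: gaussianPDF takes the number of estimation points and the sample, returning the estimation points paired with the density values; fromPoints unwraps the points; and toU comes from Data.Array.Vector. None of these shapes were verified against this release.

  import Data.Array.Vector (toU)
  import Statistics.KernelDensity (fromPoints, gaussianPDF)

  -- Estimate the PDF of a small sample at 64 uniformly spaced points.
  -- Assumption: the result pairs the estimation points with the estimates.
  densitySketch = let (pts, estimates) = gaussianPDF 64 (toU [1, 1, 2, 2, 3, 3, 3, 4, 5])
                  in (fromPoints pts, estimates)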
!"#$%&'()**+,-./0123456789:;<=>?@AABCDEFGHIJ K 1 L M N O P Q / R S T U V W X Y Z [ \ ] Z ^ _ ` a b c d e f g hijklmno]pYqrsN/SRPQtu1vwxxyz{|}~345         1 / 3 4 5        3 4 5 1 /345/1345345qstatistics-0.3.6Statistics.RandomVariateStatistics.FunctionStatistics.TypesStatistics.ResamplingStatistics.Distribution!Statistics.Distribution.GeometricStatistics.ConstantsStatistics.QuantileStatistics.Sample#Statistics.Distribution.ExponentialStatistics.Distribution.NormalStatistics.Math Statistics.Distribution.BinomialStatistics.Distribution.Gamma&Statistics.Distribution.HypergeometricStatistics.Distribution.PoissonStatistics.Sample.PowersStatistics.KernelDensityStatistics.Resampling.BootstrapStatistics.AutocorrelationStatistics.InternalSeedGenVariateuniformcreate initializesaverestorewithSystemRandom uniformArraynormalsort partialSortindicesminMaxcreateUcreateIOWeights EstimatorSampleResample fromResampleresample jackknifeVariancevarianceMeanmean Distributiondensity cumulativequantilefindRootGeometricDistribution pdSuccess fromSuccessm_huge m_max_expm_sqrt_2 m_sqrt_2_pi m_2_sqrt_pi m_1_sqrt_2 m_epsilon ContParam weightedAvg continuousBy midspreadcadpwhazenspsssmedianUnbiasednormalUnbiasedrange harmonicMean geometricMean centralMomentcentralMomentsskewnesskurtosisvarianceUnbiasedstdDev fastVariancefastVarianceUnbiased fastStdDevExponentialDistributionedLambda fromLambda fromSampleNormalDistributionstandard fromParams chebyshevchoose factorial logFactorialincompleteGammalogGamma logGammaLBinomialDistributionbdTrials bdProbabilitybinomialGammaDistributiongdShapegdScaleHypergeometricDistributionhdMhdLhdKPoissonDistributionPowerspowersordercountsumKernel BandwidthPoints fromPointsepanechnikovBW gaussianBW bandwidth choosePointsepanechnikovKernelgaussianKernel estimatePDF simplePDFepanechnikovPDF gaussianPDFEstimateestPoint estLowerBound estUpperBoundestConfidenceLevel bootstrapBCAautocovarianceautocorrelationinlinePerformIObase GHC.Floatlog integer-gmpGHC.Integer.TypeIntegerghc-prim GHC.TypesIntFloatDouble wordsTo64Bit wordToBool wordToFloat wordsToDoubleioffcoff Data.BitsxorwithTimeshiftLshiftR nextIndex uniformWord32uniform1uniform2 defaultSeedMMGHC.STSTIOdropAtPGDT1TV robustVarfastVarEDND ndPdfDenom ndCdfDenomLFCBDHDPDpdLambda errorShort:<estimate