statistics-0.16.0.2: API documentation

Statistics.Correlation.Kendall

  kendall: O(n log n). Compute Kendall's tau from a vector of paired data.
    Returns NaN when the number of pairs is <= 1.

Statistics.Distribution.Poisson.Internal

  An unchecked, non-integer-valued version of Loader's saddle point algorithm.
  poissonEntropy: Compute the entropy of a Poisson distribution using the best
    available method.

Statistics.Function

  sort, gsort: Sort a vector.
  sortBy: Sort a vector using a custom ordering.
  partialSort: Partially sort a vector, such that the least k elements will be
    at the front.  The first argument is the number k of least elements.
  indices: Return the indices of a vector.
  indexed: Zip a vector with its indices.
  minMax: Compute the minimum and maximum of a vector in one pass.
  nextHighestPowerOfTwo: Efficiently compute the next highest power of two for
    a non-negative integer.  If the given value is already a power of two, it
    is returned unchanged.  If negative, zero is returned.
  square: Multiply a number by itself.
  for: Simple for loop.  Counts from start to end-1.
  rfor: Simple reverse-for loop.  Counts from start-1 to end (which must be
    less than start).
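A minimal usage sketch of these helpers, assuming the unboxed-vector instantiation and that partialSort takes the count k before the vector, as the parameter note above suggests:

  import qualified Data.Vector.Unboxed as U
  import Statistics.Function (sort, partialSort, minMax, nextHighestPowerOfTwo)

  main :: IO ()
  main = do
    let xs = U.fromList [3, 1, 4, 1, 5, 9, 2, 6 :: Double]
    print (sort xs)                    -- fully sorted copy
    print (partialSort 3 xs)           -- the 3 least elements are at the front
    print (minMax xs)                  -- (1.0, 9.0) in one pass
    print (nextHighestPowerOfTwo 100)  -- 128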
Statistics.Quantile

  ContParam: Parameters a and b of the continuous quantile estimators.  The
    exact meaning of the parameters is described in [Hyndman1996], section
    "Piecewise linear functions".  The default value is the same as R's
    default.
  weightedAvg: O(n log n). Estimate the kth q-quantile of a sample using the
    weighted average method.  Up to rounding errors it is the same as
    quantile s.  The following properties must hold, otherwise an error is
    thrown: the length of the input is greater than 0; the input does not
    contain NaN; 0 <= k <= q.
  quantile: O(n log n). Estimate the kth q-quantile of a sample x, using the
    continuous sample method with the given parameters.  The following
    properties must hold, otherwise an error is thrown: the input sample is
    nonempty; the input does not contain NaN; 0 <= k <= q.
  quantiles: O(k n log n). Estimate a set of kth q-quantiles of a sample x,
    using the continuous sample method with the given parameters.  This is
    faster than calling quantile repeatedly, since the sample needs to be
    sorted only once.  The same properties must hold, and every k in the set
    of quantiles must satisfy 0 <= k <= q.
  quantilesVec: O(k n log n). Same as quantiles, but takes the quantile
    indices in a vector container instead of a Foldable one.

  Estimation parameters (values of ContParam):
    cadpw: California Department of Public Works definition, a=0, b=1.  Gives
      a linear interpolation of the empirical CDF.  This corresponds to
      method 4 in R and Mathematica.
    hazen: Hazen's definition, a=0.5, b=0.5.  This is claimed to be popular
      among hydrologists.  This corresponds to method 5 in R and Mathematica.
    spss: Definition used by the SPSS statistics application, with a=0, b=0
      (also known as Weibull's definition).  This corresponds to method 6 in
      R and Mathematica.
    s: Definition used by the S statistics application, with a=1, b=1.  The
      interpolation points divide the sample range into n-1 intervals.  This
      corresponds to method 7 in R and Mathematica and is the default in R.
    medianUnbiased: Median unbiased definition, a=1/3, b=1/3.  The resulting
      quantile estimates are approximately median unbiased regardless of the
      distribution of x.  This corresponds to method 8 in R and Mathematica.
    normalUnbiased: Normal unbiased definition, a=3/8, b=3/8.  An
      approximately unbiased estimate if the empirical distribution
      approximates the normal distribution.  This corresponds to method 9 in
      R and Mathematica.

  median: O(n log n). Estimate the median of a sample.
  midspread: O(n log n). Estimate the range between q-quantiles 1 and q-1 of a
    sample x, using the continuous sample method with the given parameters.
    For instance, the interquartile range (IQR) can be estimated as follows:
      midspread medianUnbiased 4 (U.fromList [1,1,2,2,3]) ==> 1.333333
  mad: O(n log n). Estimate the median absolute deviation (MAD) of a sample x.
    It is a robust estimate of variability in a sample, defined as
      MAD = median(|X_i - median(X)|).

  Each estimator takes the ContParam pair (a and b), the desired quantile k,
  the number of quantiles q, and the sample data x, as applicable.

Statistics.Sample.Histogram

  histogram: O(n). Compute a histogram over a data set.  The result consists
    of a pair of vectors: the lower bound of each interval, and the number of
    samples within the interval.  Interval (bin) sizes are uniform, and the
    upper and lower bounds are chosen automatically using the range function.
    To specify these parameters directly, use histogram_.  The number of bins
    must be positive and the sample data cannot be empty.
  histogram_: O(n). Compute a histogram over a data set.  Interval (bin)
    sizes are uniform, based on the supplied upper and lower bounds.  The
    number of bins must be positive; a zero or negative value causes an
    error.  The upper bound must not be less than the lower bound, and sample
    data falling below the lower bound or above the upper bound causes an
    error.
  range: O(n). Compute decent defaults for the lower and upper bounds of a
    histogram, based on the desired number of bins and the range of the
    sample data.  The bounds used are (lo-d, hi+d), where
      d = (maximum sample - minimum sample) / ((bins - 1) * 2).
    If all elements in the sample are equal to some x, the range is set to
    (x - |x|/10, x + |x|/10), and if x is 0 the range is set to (-1, 1).
    This is needed to avoid creating a histogram with zero bin size.

Statistics.Sample.Internal

  Internal helpers (robustSumVar, sum) used by the sample statistics modules.
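A minimal sketch of the quantile and histogram estimators above, assuming the argument order ContParam, k, q, sample described above; the annotation on histogram merely picks unboxed result vectors:

  import qualified Data.Vector.Unboxed as U
  import Statistics.Quantile (medianUnbiased, median, midspread, quantile)
  import Statistics.Sample.Histogram (histogram)

  main :: IO ()
  main = do
    let xs = U.fromList [1, 1, 2, 2, 3, 4, 4, 5, 9 :: Double]
    -- median and interquartile range with the "median unbiased" parameters
    print (median medianUnbiased xs)
    print (midspread medianUnbiased 4 xs)
    -- third quartile: the 3rd 4-quantile
    print (quantile medianUnbiased 3 4 xs)
    -- a 4-bin histogram: lower bounds of the bins and counts per bin
    print (histogram 4 xs :: (U.Vector Double, U.Vector Int))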
Statistics.Distribution

  FromSample: Estimate a distribution from a sample.  The class's first
    parameter is the distribution type and the second is the element type.
  fromSample: Estimate a distribution from a sample.  Returns Nothing if
    there is not enough data to estimate, or the sample clearly does not come
    from the distribution in question, for example if there are negative
    values in a sample for an exponential distribution.
  DiscreteGen, genDiscreteVar: Generate discrete random variates which have
    the given distribution.  ContGen is a superclass because it is always
    possible to generate real-valued variates from integer values.
  ContGen, genContVar: Generate continuous random variates which have the
    given distribution.
  Entropy, entropy: Type class for distributions with entropy, meaning
    Shannon entropy in the case of a discrete distribution, or differential
    entropy in the case of a continuous one.  If the distribution has
    well-defined entropy for all valid parameter values then it should be an
    instance of this type class.  entropy returns the entropy of a
    distribution, in nats.
  MaybeEntropy, maybeEntropy: As Entropy, but maybeEntropy should return
    Nothing if entropy is undefined for the chosen parameter values.
  Variance, variance, stdDev: Type class for distributions with variance.  If
    a distribution has finite variance for all valid parameter values it
    should be an instance of this type class.  The minimal complete
    definition is variance or stdDev.
  MaybeVariance, maybeVariance, maybeStdDev: Type class for distributions
    with variance.  If variance is undefined for some parameter values, both
    maybeVariance and maybeStdDev should return Nothing.  The minimal
    complete definition is maybeVariance or maybeStdDev.
  Mean, mean: Type class for distributions with mean.  If a distribution has
    a finite mean for all valid parameter values it should be an instance of
    this type class.
  MaybeMean, maybeMean: Type class for distributions with mean.  maybeMean
    should return Nothing if the mean is undefined for the current parameter
    values.
  Distribution, DiscreteDistr: The base classes, providing the cumulative
    distribution function (and its complement) and, for discrete
    distributions, probability and logProbability.
  ContDistr: Continuous probability distribution.  The minimal complete
    definition is quantile and either density or logDensity.
  density: Probability density function.  The probability that a random
    variable X lies in the infinitesimal interval [x, x+dx) equals
    density(x) * dx.
  logDensity: Natural logarithm of the density.
  complCumulative: Complement of the cumulative distribution function.  It is
    useful when one is interested in P(X > x) and the expression
    1 - cumulative x begins to lose precision.  This function has a default
    implementation, but implementors are encouraged to provide a more precise
    one.
  genContinuous: Generate variates from a continuous distribution using the
    inverse transform rule.
  findRoot: Approximate the value of X for which P(x > X) = p.  This method
    uses a combination of Newton-Raphson iteration and bisection with the
    given guess as a starting point.  The upper and lower bounds specify the
    interval in which the probability distribution reaches the value p.
    Arguments: distribution, probability p, initial guess, lower bound on the
    interval, upper bound on the interval.
  sumProbabilities: Sum probabilities in an inclusive interval.

Statistics.Distribution.Uniform

  UniformDistribution: Uniform distribution from A to B.
  uniformA: Lower boundary of the distribution.
  uniformB: Upper boundary of the distribution.
  uniformDistr, uniformDistrE: Create a uniform distribution.

Statistics.Distribution.Transform

  LinearTransform: Linear transformation applied to a distribution:
    LinearTransform mu sigma d  transforms  x' = mu + sigma * x.
  linTransLocation: Location parameter.
  linTransScale: Scale parameter.
  linTransDistr: Distribution being transformed.
  scaleAround: Apply a linear transformation to a distribution.  Arguments:
    fixed point, scale parameter, distribution.
  linTransFixedPoint: Get the fixed point of a linear transformation.

Statistics.Distribution.StudentT

  StudentT: Student-T distribution.
  studentT, studentTE: Create a Student-T distribution.  The number of
    degrees of freedom must be positive.
  studentTUnstandardized: Create an unstandardized Student-T distribution.
    Arguments: number of degrees of freedom, central value (0 for the
    standard Student-T distribution), scale parameter.

Statistics.Distribution.Poisson

  poisson, poissonE: Create a Poisson distribution.

Statistics.Distribution.Hypergeometric

  hypergeometric, hypergeometricE: Create a hypergeometric distribution from
    the parameters m, l and k.

Statistics.Distribution.Geometric

  GeometricDistribution0: Distribution over [0..].
  GeometricDistribution: Distribution over [1..].
  geometric, geometricE, geometric0, geometric0E: Create a geometric
    distribution from the success rate.

Statistics.Distribution.Gamma

  GammaDistribution: The gamma distribution.
  gdShape: Shape parameter, k.
  gdScale: Scale parameter, theta.
  gammaDistr, gammaDistrE: Create a gamma distribution.  Both shape and scale
    parameters must be positive.
  improperGammaDistr, improperGammaDistrE: Create a gamma distribution.  Both
    shape and scale parameters must be non-negative.

Statistics.Distribution.FDistribution

  FDistribution: F distribution.

Statistics.Distribution.DiscreteUniform

  DiscreteUniform: The discrete uniform distribution.
  rangeFrom: a, the lower bound of the support {a, ..., b}.
  rangeTo: b, the upper bound of the support {a, ..., b}.
  discreteUniform: Construct a discrete uniform distribution on the support
    {1, ..., n}.  The range n must be > 0.
  discreteUniformAB: Construct a discrete uniform distribution on the support
    {a, ..., b}, given the lower and upper boundaries (inclusive).

Statistics.Distribution.ChiSquared

  ChiSquared: Chi-squared distribution.
  chiSquaredNDF: Get the number of degrees of freedom.
  chiSquared, chiSquaredE: Construct a chi-squared distribution.  The number
    of degrees of freedom must be positive.
Statistics.Distribution.CauchyLorentz

  CauchyDistribution: Cauchy-Lorentz distribution.
  cauchyDistribMedian: Central value of the Cauchy-Lorentz distribution,
    which is its mode and median.  The distribution does not have a mean, so
    the accessor is named after the median.
  cauchyDistribScale: Scale parameter of the Cauchy-Lorentz distribution.  It
    is different from the variance and specifies the half width at half
    maximum (HWHM).
  cauchyDistribution, cauchyDistributionE: Create a Cauchy distribution from
    the central point and scale parameter.
  standardCauchy: Standard Cauchy distribution.  It is centered at 0 and has
    a FWHM of 1.

Statistics.Distribution.Binomial

  BinomialDistribution: The binomial distribution.
  bdTrials: Number of trials.
  bdProbability: Probability of success.
  binomial, binomialE: Construct a binomial distribution.  The number of
    trials must be non-negative and the probability must be in the [0,1]
    range.

Statistics.Distribution.Beta

  BetaDistribution: The beta distribution.
  bdAlpha: Alpha shape parameter.
  bdBeta: Beta shape parameter.
  betaDistr, betaDistrE: Create a beta distribution.  Both shape parameters
    must be positive.
  improperBetaDistr, improperBetaDistrE: Create a beta distribution.  Both
    shape parameters must be non-negative, so this allows constructing an
    improper beta distribution, which can be used as an improper prior.
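A minimal sketch of the discrete-distribution interface using the binomial distribution; probability takes an integer outcome while cumulative takes a Double:

  import Statistics.Distribution (probability, cumulative, mean, variance)
  import Statistics.Distribution.Binomial (binomial)

  main :: IO ()
  main = do
    let d = binomial 10 0.3          -- 10 trials, success probability 0.3
    print (probability d 4)          -- P(X = 4)
    print (cumulative d 4)           -- P(X <= 4)
    print (mean d, variance d)       -- 3.0 and 2.1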
Statistics.Sample.Powers

  powers: O(n). Collect the n simple powers of a sample.  Functions computed
    over a sample's simple powers require at least a certain number (or
    order) of powers to be collected: to compute the kth centralMoment, at
    least k simple powers must be collected; for the variance, at least 2;
    for skewness, at least 3; for kurtosis, at least 4.  This function is
    subject to stream fusion.  The number of powers n must be at least 2.
  order: The order (number) of simple powers collected from a sample.
  centralMoment: Compute the kth central moment of a sample.  The central
    moment is also known as the moment about the mean.
  variance: Maximum likelihood estimate of a sample's variance, also known as
    the population variance, where the denominator is n.  This is the second
    central moment of the sample.  This is less numerically robust than the
    variance function in the Statistics.Sample module, but the number is
    essentially free to compute if you have already collected a sample's
    simple powers.  Requires Powers with order at least 2.
  stdDev: Standard deviation.  This is simply the square root of the maximum
    likelihood estimate of the variance.
  varianceUnbiased: Unbiased estimate of a sample's variance, also known as
    the sample variance, where the denominator is n-1.  Requires Powers with
    order at least 2.
  skewness: Compute the skewness of a sample.  This is a measure of the
    asymmetry of its distribution.  A sample with negative skew is said to be
    left-skewed: most of its mass is on the right of the distribution, with
    the tail on the left.
      skewness . powers 3 $ U.fromList [1,100,101,102,103] ==> -1.497681449918257
    A sample with positive skew is said to be right-skewed.
      skewness . powers 3 $ U.fromList [1,2,3,4,100] ==> 1.4975367033335198
    A sample's skewness is not defined if its variance is zero.  Requires
    Powers with order at least 3.
  kurtosis: Compute the excess kurtosis of a sample.  This is a measure of
    the "peakedness" of its distribution.  A high kurtosis indicates that the
    sample's variance is due more to infrequent severe deviations than to
    frequent modest deviations.  A sample's excess kurtosis is not defined if
    its variance is zero.  Requires Powers with order at least 4.
  count: The number of elements in the original Sample.  This is the sample's
    zeroth simple power.
  sum: The sum of elements in the original Sample.  This is the sample's
    first simple power.
  mean: The arithmetic mean of elements in the original Sample.  This is less
    numerically robust than the mean function in the Statistics.Sample
    module, but the number is essentially free to compute if you have already
    collected a sample's simple powers.

Statistics.Test.Internal

  rank: Calculate the rank of every element of a sample.  In case of ties,
    ranks are averaged.  The sample should already be sorted in ascending
    order.  The rank is the index of an element in the sample, with numbering
    starting from 1; in case of ties, the average of the ranks of the equal
    elements is assigned to each of them.  The arguments are an equivalence
    relation and the vector to rank.
      >>> rank (==) (fromList [10,20,30::Int])
      fromList [1.0,2.0,3.0]
      >>> rank (==) (fromList [10,10,10,30::Int])
      fromList [2.0,2.0,2.0,4.0]
  rankUnsorted: Compute the rank of every element of a vector.  Unlike rank,
    it does not require the sample to be sorted.
  splitByTags: Split a tagged vector.

Statistics.Transform

  dct: Discrete cosine transform (DCT-II).
  dct_: Discrete cosine transform (DCT-II).  Only the real part of the vector
    is transformed; the imaginary part is ignored.
  idct: Inverse discrete cosine transform (DCT-III).  It is the inverse of
    dct only up to a scale factor of the length of the input.
  idct_: Inverse discrete cosine transform (DCT-III).  Only the real part of
    the vector is transformed; the imaginary part is ignored.
  ifft: Inverse fast Fourier transform.
  fft: Radix-2 decimation-in-time fast Fourier transform.

Statistics.Sample.KernelDensity

  kde: Gaussian kernel density estimator for one-dimensional data, using the
    method of Botev et al.  The result is a pair of vectors containing the
    coordinates of each mesh point and the density estimates at each mesh
    point.  The mesh interval is chosen to be 20% larger than the range of
    the sample; to specify the mesh interval, use kde_.
  kde_: Gaussian kernel density estimator for one-dimensional data, using the
    method of Botev et al., with explicit lower and upper bounds (min, max)
    of the mesh range.
  Both functions take the number of mesh points to use in the uniform
  discretization of the interval (min, max); if this value is not a power of
  two, it is rounded up to the next power of two.
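A minimal sketch of the kde estimator above; the choice of 64 mesh points and the sample values are arbitrary:

  import qualified Data.Vector.Unboxed as U
  import Statistics.Sample.KernelDensity (kde)

  main :: IO ()
  main = do
    let xs = U.fromList [1.2, 1.9, 2.1, 2.4, 3.0, 3.1, 4.5 :: Double]
        -- the result is (mesh coordinates, density estimates)
        (mesh, densities) = kde 64 xs
    print (U.take 5 mesh)
    print (U.take 5 densities)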
Statistics.Types (samples and weights)

  Weights: Weights for affecting the importance of elements of a sample.
  WeightedSample: Sample with weights.  The first element of each pair is the
    data, the second is the weight.
  Sample: Sample data.

Statistics.Sample

  range: O(n). The difference between the largest and smallest elements of a
    sample.
  mean: O(n). Arithmetic mean.  This uses Kahan-Babuska-Neumaier summation,
    so it is more accurate than welfordMean unless the input values are very
    large.
  welfordMean: O(n). Arithmetic mean.  This uses Welford's algorithm to
    provide numerical stability, using a single pass over the sample data.
    Compared to mean, this loses a surprising amount of precision unless the
    inputs are very large.
  meanWeighted: O(n). Arithmetic mean for a weighted sample.  It uses a
    single-pass algorithm analogous to the one used by welfordMean.
  harmonicMean: O(n). Harmonic mean.  This algorithm performs a single pass
    over the sample.
  geometricMean: O(n). Geometric mean of a sample containing no negative
    values.
  centralMoment: Compute the kth central moment of a sample.  The central
    moment is also known as the moment about the mean.  This function
    performs two passes over the sample, so it is not subject to stream
    fusion.  For samples containing many values very close to the mean, this
    function is subject to inaccuracy due to catastrophic cancellation.
  centralMoments: Compute the kth and jth central moments of a sample.  Two
    passes over the sample; the same catastrophic cancellation caveat
    applies.
  skewness: Compute the skewness of a sample.  This is a measure of the
    asymmetry of its distribution.  A sample with negative skew is said to be
    left-skewed: most of its mass is on the right of the distribution, with
    the tail on the left.
      skewness $ U.fromList [1,100,101,102,103] ==> -1.497681449918257
    A sample with positive skew is said to be right-skewed.
      skewness $ U.fromList [1,2,3,4,100] ==> 1.4975367033335198
    A sample's skewness is not defined if its variance is zero.  Two passes
    over the sample; the same catastrophic cancellation caveat applies.
  kurtosis: Compute the excess kurtosis of a sample.  This is a measure of
    the "peakedness" of its distribution.  A high kurtosis indicates that
    more of the sample's variance is due to infrequent severe deviations,
    rather than more frequent modest deviations.  Not defined if the variance
    is zero.  Two passes over the sample; the same catastrophic cancellation
    caveat applies.
  variance: Maximum likelihood estimate of a sample's variance, also known as
    the population variance, where the denominator is n.
  varianceUnbiased: Unbiased estimate of a sample's variance, also known as
    the sample variance, where the denominator is n-1.
  meanVariance: Calculate the mean and the maximum likelihood estimate of the
    variance.  This function should be used if both are required, since it
    calculates the mean only once.
  meanVarianceUnb: Calculate the mean and the unbiased estimate of the
    variance.  This function should be used if both are required, since it
    calculates the mean only once.
  stdDev: Standard deviation.  This is simply the square root of the unbiased
    estimate of the variance.
  stdErrMean: Standard error of the mean.  This is the standard deviation
    divided by the square root of the sample size.
  varianceWeighted: Weighted variance.  This is a biased estimation.
  fastVariance: Maximum likelihood estimate of a sample's variance.
  fastVarianceUnbiased: Unbiased estimate of a sample's variance.
  fastStdDev: Standard deviation.  This is simply the square root of the
    maximum likelihood estimate of the variance.
  covariance: Covariance of a sample of pairs.  For an empty sample it is set
    to zero.
  correlation: Correlation coefficient for a sample of pairs, also known as
    Pearson's correlation.  For an empty sample it is set to zero.
  pair: Pair two samples.  It is like zip but requires that both samples have
    equal size.

Statistics.Sample.Normalize

  standardize: O(n). Normalize a sample using standard scores
    z = (x - mu) / sigma, where mu is the sample mean and sigma is the
    standard deviation computed from the unbiased variance estimation.  If
    the sample is too small to compute sigma, or sigma is 0, Nothing is
    returned.
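A minimal sketch of the descriptive statistics and standardize, reusing the sample from the skewness example above:

  import qualified Data.Vector.Unboxed as U
  import Statistics.Sample
  import Statistics.Sample.Normalize (standardize)

  main :: IO ()
  main = do
    let xs = U.fromList [1, 100, 101, 102, 103 :: Double]
    print (mean xs, stdDev xs)
    print (meanVarianceUnb xs)   -- mean and unbiased variance in one pass
    print (skewness xs)          -- ==> -1.497681449918257
    print (standardize xs)       -- Just a vector of z-scores (Nothing if sigma is 0)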
Statistics.Sample.KernelDensity.Simple

  Kernel: The convolution kernel.  Its parameters are: a scaling factor
    (1/nh), the bandwidth h, a point p at which to sample the input, and one
    sample value v.
  Bandwidth: The width of the convolution kernel used.
  Points: Points from the range of a Sample.
  epanechnikovBW: Bandwidth estimator for an Epanechnikov kernel.
  gaussianBW: Bandwidth estimator for a Gaussian kernel.
  bandwidth: Compute the optimal bandwidth from the observed data for the
    given kernel.  This function uses an estimate based on the standard
    deviation of a sample (due to Deheuvels), which performs reasonably well
    for unimodal distributions but leads to oversmoothing for more complex
    ones.
  choosePoints: Choose a uniform range of points at which to estimate a
    sample's probability density function.  If you are using a Gaussian
    kernel, multiply the sample's bandwidth by 3 before passing it to this
    function.  If this function is passed an empty vector, it returns values
    of positive and negative infinity.
  epanechnikovKernel: Epanechnikov kernel for probability density function
    estimation.

Statistics.Types (confidence levels, p-values and estimates)

  Confidence levels cl90, cl95 and cl99 are provided, along with mkCL,
  confidenceLevel and significanceLevel; p-values are built with mkPValue.
  estimateNormErr takes a point estimate and its one-sigma error;
  estimateFromErr takes a central estimate, the lower and upper errors (both
  should be positive, but this is not checked) and a confidence level for the
  interval; estimateFromInterval takes a point estimate (it should lie within
  the interval, but this is not checked), the lower and upper bounds of the
  interval, and a confidence level for the interval.

Statistics.Test.Types

  PositionTest: Test type for tests which compare positional (mean, median,
    etc.) information of samples.  Constructors:
    SamplesDiffer: Test whether the samples differ in position.  The null
      hypothesis is that the samples are not different.
    AGreater: Test whether the first sample (A) is larger than the second
      (B).  The null hypothesis is that the first sample is not larger than
      the second.
    BGreater: Test whether the second sample is larger than the first.
  Test: Result of a statistical test.
    testSignificance: Probability of getting a value of the test statistic at
      least as extreme as the one measured.
    testStatistics: Statistic used for the test.
    testDistribution: Distribution of the test statistic if the null
      hypothesis is correct.
  TestResult: Result of hypothesis testing.
    Significant: The null hypothesis should be rejected.
    NotSignificant: The data is compatible with the hypothesis.
  isSignificant: Check whether a test is significant for a given p-value.
  significant: Significant if the parameter is True, not significant
    otherwise.

Statistics.Test.StudentT

  studentTTest: Two-sample Student's t-test.  It assumes that both samples
    are normally distributed and have the same variance.  Returns Nothing if
    the sample sizes are not sufficient.
  welchTTest: Two-sample Welch's t-test.  It assumes that both samples are
    normally distributed but does not assume that they have the same
    variance.  Returns Nothing if the sample sizes are not sufficient.
  pairedTTest: Paired two-sample t-test.  The two samples are paired, as in a
    within-subject design.  Returns Nothing if the sample size is not
    sufficient.
  Each test takes a one- or two-tailed test selector and the samples (sample
  A and sample B, or a sample of pairs for pairedTTest).
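A minimal sketch of Welch's t-test checked at the 5% level; studentTTest and pairedTTest follow the same shape, and the Maybe result reflects the "returns Nothing" notes above:

  import qualified Data.Vector.Unboxed as U
  import Statistics.Test.StudentT (welchTTest)
  import Statistics.Test.Types (PositionTest(..), isSignificant)
  import Statistics.Types (mkPValue)

  main :: IO ()
  main = do
    let a = U.fromList [25.1, 24.9, 26.0, 25.4, 25.2 :: Double]
        b = U.fromList [24.2, 24.0, 24.8, 23.9, 24.3 :: Double]
    case welchTTest SamplesDiffer a b of
      Nothing   -> putStrLn "samples too small"
      Just test -> print (isSignificant (mkPValue 0.05) test)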
Statistics.Test.KruskalWallis

  kruskalWallisRank: Kruskal-Wallis ranking.  All values are replaced by
    their absolute rank in the combined samples.  The samples and values need
    not be ordered, but the values in the result are ordered.  Ties are given
    their average rank.
  kruskalWallis: The Kruskal-Wallis test.  In textbooks the output value is
    usually represented by K or H.  This function already does the ranking.
  kruskalWallisTest: Perform the Kruskal-Wallis test for the given samples
    and required significance.  For additional information check
    kruskalWallis; this is just a helper function.  It uses the chi-squared
    distribution for approximation as long as the sample sizes are larger
    than 5.  Otherwise the test returns Nothing.

Statistics.Test.KolmogorovSmirnov

  kolmogorovSmirnovTest: Check that a sample could be described by a
    distribution.  Returns Nothing if the sample is empty.  This test uses
    the Marsaglia-Tsang-Wang exact algorithm for calculation of the p-value.
  kolmogorovSmirnovTestCdf: Variant of kolmogorovSmirnovTest which takes the
    CDF in the form of a function.
  kolmogorovSmirnovTest2: Two-sample Kolmogorov-Smirnov test.  It tests
    whether two data samples could be described by the same distribution
    without making any assumptions about it.  If either of the samples is
    empty, returns Nothing.  This test uses an approximate formula for
    computing the p-value.
  kolmogorovSmirnovCdfD: Calculate Kolmogorov's statistic D for a given
    cumulative distribution function (CDF) and data sample.  If the sample is
    empty, returns 0.
  kolmogorovSmirnovD: Calculate Kolmogorov's statistic D for a given
    distribution and data sample.  If the sample is empty, returns 0.
  kolmogorovSmirnov2D: Calculate Kolmogorov's statistic D for two data
    samples.  If either of the samples is empty, returns 0.
  kolmogorovSmirnovProbability: Calculate the cumulative probability function
    for Kolmogorov's distribution with n parameters, i.e. the probability of
    getting a value smaller than d with an n-element sample.  It uses the
    algorithm by Marsaglia et al. and provides at least 7-digit accuracy.
    Arguments: the size of the sample and the D value.
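A minimal sketch of the D statistics above, assuming the sample type is an unboxed vector of Doubles:

  import qualified Data.Vector.Unboxed as U
  import Statistics.Distribution.Normal (standard)
  import Statistics.Test.KolmogorovSmirnov (kolmogorovSmirnovD, kolmogorovSmirnov2D)

  main :: IO ()
  main = do
    let xs = U.fromList [-0.8, -0.3, 0.1, 0.4, 0.9, 1.7 :: Double]
        ys = U.fromList [0.2, 0.5, 0.6, 1.1, 1.3, 2.0 :: Double]
    -- D statistic of xs against the standard normal distribution
    print (kolmogorovSmirnovD standard xs)
    -- D statistic between the two samples
    print (kolmogorovSmirnov2D xs ys)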
Statistics.Test.ChiSquared

  chi2test: Generic form of the Pearson chi-squared test for binned data.
    The data sample is supplied in the form of tuples (observed quantity,
    expected number of events).  Both must be positive.  This test should be
    used only if all bins have expected values of at least 5.
  chi2testCont: Chi-squared test for data with normal errors.  Data is
    supplied in the form of pairs (observation with error, expectation).
  Both tests take the number of additional degrees of freedom; one degree of
  freedom, due to the fact that there are N observations in total, is
  accounted for automatically.

Statistics.Resampling

  Estimator: An estimator of a property of a sample, such as its mean.  The
    use of an algebraic data type here allows functions such as jackknife and
    bootstrapBCA to use more efficient algorithms when possible.
  Resample: A resample drawn randomly, with replacement, from a set of data
    points.  Distinct from a normal array to make it harder for your humble
    author's brain to go wrong.
  estimate: Run an Estimator over a sample.
  resampleST: Single-threaded and deterministic version of resample.
  resample: O(e*r*s). Resample a data set repeatedly, with replacement,
    computing each estimate over the resampled data.  This function is
    expensive; it has to do work proportional to e*r*s, where e is the number
    of estimation functions, r is the number of resamples to compute, and s
    is the number of original samples.  To improve performance, this function
    will make use of all available CPUs.  At least with GHC 7.0, parallel
    performance seems best if the parallel garbage collector is disabled (RTS
    option -qg).  Arguments: estimation functions, number of resamples to
    compute, original sample.
  resampleVector: Create a vector using resamples.
  jackknife: O(n) or O(n^2). Compute a statistical estimate repeatedly over a
    sample, each time omitting a successive element.
  jackknifeMean: O(n). Compute the jackknife mean of a sample.
  jackknifeVarianceUnb: O(n). Compute the unbiased jackknife variance of a
    sample.
  jackknifeVariance: O(n). Compute the jackknife variance of a sample.
  jackknifeStdDev: O(n). Compute the jackknife standard deviation of a
    sample.
  splitGen: Split a generator into several that can run independently.

Statistics.Regression

  olsRegress: Perform an ordinary least-squares regression on a set of
    predictors, and calculate the goodness-of-fit of the regression.  The
    arguments are a non-empty list of predictor vectors, which must all have
    the same length and become the columns of the matrix A solved by ols, and
    a responder vector of the same length.  The returned pair consists of a
    vector of regression coefficients (this vector has one more element than
    the list of predictors; the last element is the y-intercept value) and
    R^2, the coefficient of determination (see rSquare for details).
  ols: Compute the ordinary least-squares solution to A x = b, where A has at
    least as many rows as columns and b has the same length as the columns of
    A.
  rSquare: Compute R^2, the coefficient of determination that indicates the
    goodness-of-fit of a regression, given the predictors (regressors), the
    responders and the regression coefficients.  This value will be 1 if the
    predictors fit perfectly, dropping to 0 if they have no explanatory
    power.
  bootstrapRegress: Bootstrap a regression function.  Returns both the
    results of the regression and the requested confidence interval values.
    Arguments: the number of resamples to compute, the confidence level, the
    regression function, the predictor vectors and the responder vector.
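A minimal sketch of the jackknife and olsRegress, assuming olsRegress takes a list of unboxed predictor vectors and returns (coefficients, R^2) as described; the data values are made up:

  import qualified Data.Vector.Unboxed as U
  import Statistics.Resampling (jackknifeMean, jackknifeStdDev)
  import Statistics.Regression (olsRegress)

  main :: IO ()
  main = do
    let ys = U.fromList [1.0, 2.1, 2.9, 4.2, 5.1 :: Double]
    -- leave-one-out estimates of the mean and standard deviation
    print (jackknifeMean ys)
    print (jackknifeStdDev ys)
    -- regress ys against a single predictor; the last coefficient is the y-intercept
    let x1 = U.fromList [1, 2, 3, 4, 5 :: Double]
        (coeffs, r2) = olsRegress [x1] ys
    print coeffs
    print r2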
Statistics.ConfidenceInt

  poissonNormalCI: Calculate a confidence interval for a Poisson-distributed
    value using the normal approximation.
  poissonCI: Calculate a confidence interval for a Poisson-distributed value
    from a single measurement.  These are exact confidence intervals.
  naiveBinomialCI: Calculate a binomial confidence interval using the normal
    approximation, from the number of trials and the number of successes.
    Note that this approximation breaks down when p is either close to 0 or
    to 1; in particular, if n*p < 5 or n*(1 - p) < 5 this approximation
    should not be used.
  binomialCI: Clopper-Pearson confidence interval, also known as the exact
    confidence interval, from the number of trials and the number of
    successes.
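A minimal sketch of the exact intervals above, assuming binomialCI takes a confidence level, the trial count and the success count, poissonCI a confidence level and a single count, and that confidenceInterval from Statistics.Types extracts the (lower, upper) bounds:

  import Statistics.ConfidenceInt (binomialCI, poissonCI)
  import Statistics.Types (cl95, confidenceInterval)

  main :: IO ()
  main = do
    -- exact (Clopper-Pearson) 95% interval for 7 successes out of 20 trials
    print (confidenceInterval (binomialCI cl95 20 7))
    -- exact 95% interval for a single Poisson count of 12
    print (confidenceInterval (poissonCI cl95 12))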
Statistics.Test.WilcoxonT

  wilcoxonMatchedPairSignedRank: Calculate the (n, T+, T-) values for both
    samples, where n is the size of the reduced sample after equal pairs have
    been removed.
  wilcoxonMatchedPairSignificant: Tests whether a given result from a
    Wilcoxon signed-rank matched-pairs test is significant at the given
    level.  This function can perform a one-tailed or two-tailed test.  If
    the test is two-tailed, it checks whether the two samples differ
    significantly.  If it is one-tailed, the check decides whether the first
    sample (i.e. the first sample you passed to
    wilcoxonMatchedPairSignedRank) is greater than the second sample (i.e.
    the second sample you passed to wilcoxonMatchedPairSignedRank).  If you
    wish to perform a one-tailed test in the opposite direction, you can
    either pass the parameters in a different order to
    wilcoxonMatchedPairSignedRank, or simply swap the values in the resulting
    pair before passing them to this function.  Arguments: how to compare the
    two samples, the p-value at which to test (e.g. mkPValue 0.05), and the
    (n, T+, T-) values from wilcoxonMatchedPairSignedRank.  Returns Nothing
    if the sample was too small to make a decision.
  wilcoxonMatchedPairCriticalValue: Obtains the critical value of T to
    compare against, given a sample size and a p-value (significance level).
    Your T value must be less than or equal to the return of this function in
    order for the test to work out significant.  If Nothing is returned, the
    sample size is too small to make a decision.
    wilcoxonMatchedPairSignificant tests the return value of
    wilcoxonMatchedPairSignedRank for you, so you should use it for
    determining test results.  However, this function is useful, for example,
    for generating lookup tables for Wilcoxon signed-rank critical values.
    The return values of this function are generated using the method
    detailed in Mitic's paper.  According to that paper, the results may
    differ from other published lookup tables, but (Mitic claims) the values
    obtained by this function will be the correct ones.
  wilcoxonMatchedPairSignificance: Works out the significance level (p-value)
    of a T value, given a sample size and a T value from the Wilcoxon
    signed-rank matched-pairs test.  See the notes on
    wilcoxonMatchedPairCriticalValue for how this is calculated.
  wilcoxonMatchedPairTest: The Wilcoxon matched-pairs signed-rank test.  The
    samples are zipped together: if one is longer than the other, both are
    truncated to the length of the shorter sample.  For a one-tailed test it
    tests whether the first sample is significantly greater than the second;
    for a two-tailed test it checks whether they differ significantly.  Check
    wilcoxonMatchedPairSignedRank and wilcoxonMatchedPairSignificant for
    additional information.  Arguments: the kind of test to perform and the
    sample of pairs.  Returns Nothing if the sample was too small to make a
    decision.

Statistics.Test.MannWhitneyU

  wilcoxonRankSums: The Wilcoxon rank-sum test.  This test calculates the sum
    of ranks for the given two samples.  The samples are ordered and assigned
    ranks (ties are given their average rank), then these ranks are summed
    for each sample.  The return value is (W1, W2), where W1 is the sum of
    ranks of the first sample and W2 is the sum of ranks of the second
    sample.  This test is trivially transformed into the Mann-Whitney U test.
    You will probably want to use mannWhitneyU and the related functions for
    testing significance, but this function is exposed for completeness.
  mannWhitneyU: The Mann-Whitney U test.  This is sometimes known as the
    Mann-Whitney-Wilcoxon U test, and confusingly many sources state that the
    Mann-Whitney U test is the same as Wilcoxon's rank-sum test (which is
    provided as wilcoxonRankSums).  The Mann-Whitney U is a simple transform
    of Wilcoxon's rank-sum test.  Again confusingly, different sources state
    reversed definitions for U1 and U2, so it is worth being explicit about
    what this function returns.  Given two samples, the first, xs1, of size
    n1 and the second, xs2, of size n2, this function returns (U1, U2) where
    U1 = W1 - (n1(n1+1))/2 and U2 = W2 - (n2(n2+1))/2, where (W1, W2) is the
    return value of wilcoxonRankSums xs1 xs2.  Some sources instead state
    that U1 and U2 should be the other way round, often expressing this using
    U1' = n1*n2 - U1 (since U1 + U2 = n1*n2).  All of which you probably
    don't care about if you just feed this into mannWhitneyUSignificant.
  mannWhitneyUCriticalValue: Calculates the critical value of Mann-Whitney U
    for the given sample sizes and significance level.  This function returns
    the exact calculated value of U for all sample sizes; it does not use the
    normal approximation at all.  Above sample size 20 it is generally
    recommended to use the normal approximation instead, but this function
    will calculate the higher critical values if you need them.  The
    algorithm used to generate these values is a faster, memoised version of
    the simple unoptimised generating function given in section 2 of "The
    Mann Whitney Wilcoxon Distribution Using Linked Lists".
  mannWhitneyUSignificant: Calculates whether the Mann-Whitney U test is
    significant.  If both sample sizes are less than or equal to 20, the
    exact U critical value (as calculated by mannWhitneyUCriticalValue) is
    used.  If either sample is larger than 20, the normal approximation is
    used instead.  If you use a one-tailed test, the test indicates whether
    the first sample is significantly larger than the second.  If you want
    the opposite, simply reverse the order in both the sample sizes and the
    (U1, U2) pair.  Arguments: whether to perform a one-tailed test, the
    sample sizes from which the (U1, U2) values were derived, the p-value at
    which to test (e.g. 0.05), and the (U1, U2) values from mannWhitneyU.
    Returns Nothing if the sample was too small to make a decision.
  mannWhitneyUtest: Perform the Mann-Whitney U test for two samples and a
    required significance.  For additional information check the
    documentation of mannWhitneyU and mannWhitneyUSignificant; this is just a
    helper function.  The one-tailed test checks whether the first sample is
    significantly larger than the second; the two-tailed test checks whether
    they are significantly different.  Returns Nothing if the sample was too
    small to make a decision.
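A minimal sketch of the rank-sum and U statistics above; significance checking via mannWhitneyUSignificant or mannWhitneyUtest is omitted:

  import qualified Data.Vector.Unboxed as U
  import Statistics.Test.MannWhitneyU (wilcoxonRankSums, mannWhitneyU)

  main :: IO ()
  main = do
    let xs = U.fromList [4.6, 4.9, 5.2, 5.5, 5.8 :: Double]
        ys = U.fromList [3.9, 4.1, 4.3, 4.8, 5.0 :: Double]
    print (wilcoxonRankSums xs ys)  -- (W1, W2): rank sums of each sample
    print (mannWhitneyU xs ys)      -- (U1, U2) derived from the rank sums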
Statistics.Resampling.Bootstrap

  bootstrapBCA: Bias-corrected accelerated (BCA) bootstrap.  This adjusts for
    both bias and skewness in the resampled distribution.  The BCA algorithm
    is described in ch. 5 of Davison and Hinkley, "Confidence intervals",
    section 5.3 "Percentile method".  Arguments: the confidence level, the
    full data sample, and the estimates obtained from the resampled data
    together with the estimator used to produce them.
  basicBootstrap: Basic bootstrap.  This method simply uses empirical
    quantiles for the confidence interval.  Arguments: the confidence level
    and, for each estimator, the estimate from the full sample together with
    the vector of estimates obtained from resamples.

Statistics.Distribution.Lognormal

  LognormalDistribution: The lognormal distribution.
  lognormalStandard: Standard lognormal distribution with mu 0 and sigma 1.
    The mean is sqrt e and the variance is (e - 1) * e.
  lognormalDistr, lognormalDistrErr: Create a lognormal distribution from the
    parameters mu and sigma.
  lognormalDistrMeanStddevErr: Create a lognormal distribution from a mean
    and standard deviation.
  fromSample: The variance is estimated using the maximum likelihood method
    (biased estimation) over the logarithm of the data.  Returns Nothing if
    the sample contains less than one element or the variance is zero (all
    elements are equal).

Statistics.Distribution.Laplace

  ldLocation: Location parameter.
  ldScale: Scale parameter.
  laplace, laplaceE: Create a Laplace distribution from location and scale.
  fromSample: Create a Laplace distribution from a sample.  No tests are made
    to check whether it truly is Laplace.  The location of the distribution
    is estimated as the median of the sample.

Statistics.Distribution.Exponential

  exponential, exponentialE: Create an exponential distribution from the rate
    parameter.
  fromSample: Create an exponential distribution from a sample.  Returns
    Nothing if the sample is empty or contains negative elements.  No other
    tests are made to check whether it truly is exponential.

Statistics.Correlation

  pearson: Pearson correlation for a sample of pairs.  Exactly the same as
    correlation in Statistics.Sample.
  pearsonMatByRow: Compute pairwise Pearson correlation between the rows of a
    matrix.
  spearman: Compute Spearman correlation between two samples.
  spearmanMatByRow: Compute pairwise Spearman correlation between the rows of
    a matrix.
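A minimal sketch of the correlation functions above on a small paired sample:

  import qualified Data.Vector.Unboxed as U
  import Statistics.Correlation (pearson, spearman)

  main :: IO ()
  main = do
    let xy = U.fromList [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0), (5, 9.8)]
               :: U.Vector (Double, Double)
    print (pearson xy)   -- linear (Pearson) correlation of the pairs
    print (spearman xy)  -- rank (Spearman) correlation of the same pairs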
Statistics.Autocorrelation

  autocovariance: Compute the autocovariance of a sample, i.e. the covariance
    of the sample against a shifted version of itself.
  autocorrelation: Compute the autocorrelation function of a sample, and the
    upper and lower bounds of confidence intervals for each element.  Note:
    the calculation of the 95% confidence interval assumes a stationary
    Gaussian process.
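A minimal sketch of the autocorrelation functions above; the exact shape of autocorrelation's result (the ACF bundled with its confidence bounds) is assumed from the description:

  import qualified Data.Vector.Unboxed as U
  import Statistics.Autocorrelation (autocovariance, autocorrelation)

  main :: IO ()
  main = do
    let xs = U.fromList [1.0, 1.2, 0.9, 1.4, 1.1, 1.6, 1.3, 1.8 :: Double]
    print (autocovariance xs)   -- covariance of the sample against shifted copies
    print (autocorrelation xs)  -- ACF together with its 95% confidence bounds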