- Calculate rank of sample. The sample should already be sorted. Parameters: equivalence relation; vector to rank.
- Split tagged vector.

- Result of hypothesis testing: either "data is compatible with hypothesis" or "null hypothesis should be rejected".
- Test type. Exact meaning depends on a specific test, but generally it is tested whether some statistic is too big (or too small) for a one-tailed test, or whether it is too big or too small for a two-tailed test.
- Significant if the parameter is True, not significant otherwise.

--- (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- Just like unsafePerformIO, but we inline it. Big performance gains, as it exposes lots of things to further inlining. /Very unsafe/. In particular, you should do no memory allocation inside such a block. On Hugs this is just unsafePerformIO.

--- (c) 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- Discrete cosine transform (DCT-II).
- Discrete cosine transform (DCT-II). Only the real part of the vector is transformed; the imaginary part is ignored.
- Inverse discrete cosine transform (DCT-III). It is the inverse of the DCT only up to a scale parameter: (idct . dct) x = map (* n) x, where n is the length of x.
- Inverse discrete cosine transform (DCT-III). Only the real part of the vector is transformed; the imaginary part is ignored.
- Inverse fast Fourier transform.
- Radix-2 decimation-in-time fast Fourier transform.

--- 2014 Bryan O'Sullivan, BSD3 ---

- Two-dimensional mutable matrix, stored in row-major order.
- Two-dimensional matrix, stored in row-major order. Fields: rows of matrix; columns of matrix; a large exponent, stored separately in order to avoid overflows during matrix multiplication; matrix data.

--- (c) 2014 Bryan O'Sullivan, BSD3 ---

- Given row and column numbers, calculate the offset into the flat row-major vector.
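The row-major offset just described can be sketched as a one-liner (the names here are hypothetical, not the library's):

```haskell
-- Element (r, c) of a matrix with `ncols` columns lives at index
-- r * ncols + c in the flat row-major vector.
rowMajorIndex :: Int -> Int -> Int -> Int
rowMajorIndex ncols r c = r * ncols + c

main :: IO ()
main = print (rowMajorIndex 4 2 3)  -- row 2, column 3 of a 4-column matrix: 11
```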
- Given row and column numbers, calculate the offset into the flat row-major vector, without checking.

--- (c) 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- Compare two Double values for approximate equality, using Dawson's method. The required accuracy is specified in ULPs (units of least precision). If the two numbers differ by the given number of ULPs or less, this function returns True. Parameter: number of ULPs of accuracy desired.

--- (c) 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- The result of searching for a root of a mathematical function: either a root was successfully found; or the search failed to converge to within the given error tolerance after the given number of iterations; or the function does not have opposite signs when evaluated at the lower and upper bounds of the search.
- Returns either the result of a search for a root, or the default value if the search failed.
- Use the method of Ridders to compute a root of a function. The function must have opposite signs when evaluated at the lower and upper bounds of the search (i.e. the root must be bracketed). Parameters: absolute error tolerance; lower and upper bounds for the search; function to find the roots of.

--- (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- Weights for affecting the importance of elements of a sample.
- An estimator of a property of a sample, such as its mean. The use of an algebraic data type here allows functions such as jackknife and bootstrapBCA to use more efficient algorithms when possible.
- Sample with weights. The first element of the sample is the data, the second is the weight.
- Sample data.

--- (c) 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- An unchecked, non-integer-valued version of Loader's saddle point algorithm.
- Compute entropy using Theorem 1 from "Sharp Bounds on the Entropy of the Poisson Law".
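The Ridders search described in the root-finding section above can be sketched as follows. This is a minimal sketch with a fixed iteration cap, and all names are hypothetical; the library version also reports why a search failed.

```haskell
-- Ridders' method: bisect, then use an exponential correction to place
-- the next point, keeping the root bracketed at every step.
ridders :: Double -> (Double -> Double) -> (Double, Double) -> Maybe Double
ridders tol f = go (100 :: Int)
  where
    go 0 _ = Nothing                      -- failed to converge
    go n (a, b)
      | f a * f b > 0 = Nothing           -- root not bracketed
      | b - a < tol   = Just m
      | s == 0        = Just m
      | fm * f3 < 0   = go (n - 1) (min m x3, max m x3)
      | f a * f3 < 0  = go (n - 1) (a, x3)
      | otherwise     = go (n - 1) (x3, b)
      where
        m  = 0.5 * (a + b)
        fm = f m
        s  = sqrt (fm * fm - f a * f b)
        -- x3 always falls inside (a, b), so the bracket never grows.
        x3 = m + (m - a) * signum (f a - f b) * fm / s
        f3 = f x3

main :: IO ()
main = print (ridders 1e-12 (\x -> x * x - 2) (1, 2))  -- converges to sqrt 2
```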
This function is unused because computing the entropy directly from its definition is just as accurate and is faster by about a factor of 4.
- Returns [x, x^2, x^3, x^4, ...].
- Returns an upper bound according to Theorem 2 of "Sharp Bounds on the Entropy of the Poisson Law".
- Returns the average of the upper and lower bounds according to Theorem 2.
- Compute entropy directly from its definition. This is just as accurate as the Theorem 1 computation for lambda <= 1 and is faster, but is slow for large lambda, and produces some underestimation due to accumulation of floating-point error.
- Compute the entropy of a Poisson distribution using the best available method.

- O(n log n). Compute Kendall's tau from a vector of paired data. Returns NaN when the number of pairs is <= 1.

--- (c) 2009, 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable, Safe-Inferred ---

--- (c) 2009, 2010, 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- Sort a vector.
- Sort a vector.
- Sort a vector using a custom ordering.
- Partially sort a vector, such that the least k elements will be at the front. Parameter: the number k of least elements.
- Return the indices of a vector.
- Zip a vector with its indices.
- Compute the minimum and maximum of a vector in one pass.
- Efficiently compute the next highest power of two for a non-negative integer. If the given value is already a power of two, it is returned unchanged. If negative, zero is returned.
- Multiply a number by itself.
- Simple for loop. Counts from start to end-1.
- Simple reverse-for loop. Counts from start-1 to end (which must be less than start).

--- (c) 2013 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

--- (c) 2008 Don Stewart, 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- O(n) Range. The difference between the largest and smallest elements of a sample.
- O(n) Arithmetic mean.
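The next-highest-power-of-two utility described in the sorting/utility section above can be sketched with the classic bit-smearing trick (the name is hypothetical):

```haskell
import Data.Bits ((.|.), shiftR)

-- Smear the high bit of (n - 1) rightward, then add one.  Already-exact
-- powers of two map to themselves; non-positive inputs map to zero.
nextPowerOfTwo :: Int -> Int
nextPowerOfTwo n
  | n <= 0    = 0
  | otherwise = 1 + foldl (\x i -> x .|. (x `shiftR` i)) (n - 1) [1, 2, 4, 8, 16, 32]

main :: IO ()
main = print (map nextPowerOfTwo [-3, 1, 5, 8, 100])  -- [0,1,8,8,128]
```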
This uses Kahan-Babuška-Neumaier summation, so it is more accurate than the Welford-based mean unless the input values are very large.
- O(n) Arithmetic mean. This uses Welford's algorithm to provide numerical stability, using a single pass over the sample data. Compared to the summation-based mean, this loses a surprising amount of precision unless the inputs are very large.
- O(n) Arithmetic mean for a weighted sample. It uses a single-pass algorithm analogous to Welford's.
- O(n) Harmonic mean. This algorithm performs a single pass over the sample.
- O(n) Geometric mean of a sample containing no negative values.
- Compute the k-th central moment of a sample. The central moment is also known as the moment about the mean. This function performs two passes over the sample, so is not subject to stream fusion. For samples containing many values very close to the mean, this function is subject to inaccuracy due to catastrophic cancellation.
- Compute the k-th and j-th central moments of a sample. This function performs two passes over the sample, so is not subject to stream fusion. For samples containing many values very close to the mean, it is subject to inaccuracy due to catastrophic cancellation.
- Compute the skewness of a sample. This is a measure of the asymmetry of its distribution. A sample with negative skew is said to be left-skewed: most of its mass is on the right of the distribution, with the tail on the left.
      skewness $ U.to [1,100,101,102,103] ==> -1.497681449918257
  A sample with positive skew is said to be right-skewed:
      skewness $ U.to [1,2,3,4,100] ==> 1.4975367033335198
  A sample's skewness is not defined if its variance is zero. This function performs two passes over the sample, so is not subject to stream fusion, and for samples containing many values very close to the mean it is subject to inaccuracy due to catastrophic cancellation.
- Compute the excess kurtosis of a sample. This is a measure of the "peakedness" of its distribution.
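The compensated summation behind the mean, and the two-pass formula behind skewness, can be sketched as hypothetical list-based versions (the library operates on unboxed vectors):

```haskell
-- Kahan-Babuška-Neumaier summation: carry a compensation term that
-- recovers the low-order bits lost by each addition.
kbnMean :: [Double] -> Double
kbnMean xs = go 0 0 xs / fromIntegral (length xs)
  where
    go total c []     = total + c          -- fold the compensation back in
    go total c (x:ys)
      | abs total >= abs x = go t (c + ((total - t) + x)) ys
      | otherwise          = go t (c + ((x - t) + total)) ys
      where t = total + x

-- Two-pass k-th central moment: one pass for the mean, one for the
-- k-th powers of the deviations.
centralMoment :: Int -> [Double] -> Double
centralMoment k xs = sum [ (x - m) ^ k | x <- xs ] / fromIntegral (length xs)
  where m = kbnMean xs

skewness :: [Double] -> Double
skewness xs = centralMoment 3 xs / centralMoment 2 xs ** 1.5

main :: IO ()
main = print (skewness [1, 100, 101, 102, 103])  -- ≈ -1.4976814, as in the docs
```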
A high kurtosis indicates that more of the sample's variance is due to infrequent severe deviations, rather than more frequent modest deviations. A sample's excess kurtosis is not defined if its variance is zero. This function performs two passes over the sample, so is not subject to stream fusion, and for samples containing many values very close to the mean it is subject to inaccuracy due to catastrophic cancellation.
- Maximum likelihood estimate of a sample's variance. Also known as the population variance, where the denominator is n.
- Unbiased estimate of a sample's variance. Also known as the sample variance, where the denominator is n-1.
- Calculate the mean and the maximum likelihood estimate of the variance. This function should be used if both the mean and the variance are required, since it calculates the mean only once.
- Calculate the mean and the unbiased estimate of the variance. This function should be used if both the mean and the variance are required, since it calculates the mean only once.
- Standard deviation. This is simply the square root of the unbiased estimate of the variance.
- Weighted variance. This is a biased estimation.
- Maximum likelihood estimate of a sample's variance.
- Unbiased estimate of a sample's variance.
- Standard deviation. This is simply the square root of the maximum likelihood estimate of the variance.

--- (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- Generate discrete random variates which have a given distribution. The continuous-generation class is a superclass because it is always possible to generate real-valued variates from integer-valued ones.
- Generate discrete random variates which have a given distribution.
- Type class for distributions with entropy, meaning Shannon entropy in the case of a discrete distribution, or differential entropy in the case of a continuous one.
If the distribution has well-defined entropy for all valid parameter values, then it should be an instance of this type class.
- Returns the entropy of a distribution, in nats.
- Type class for distributions with entropy, meaning Shannon entropy in the case of a discrete distribution, or differential entropy in the case of a continuous one. The entropy method should return Nothing if entropy is undefined for the chosen parameter values.
- Returns the entropy of a distribution, in nats, if such is defined.
- Type class for distributions with variance. If a distribution has finite variance for all valid parameter values, it should be an instance of this type class. The minimal complete definition is either the variance or the standard deviation method.
- Type class for distributions whose variance may be undefined for some parameter values, in which case both methods should return Nothing. The minimal complete definition is either method.
- Type class for distributions with mean. If a distribution has a finite mean for all valid parameter values, it should be an instance of this type class.
- Type class for distributions with mean. The mean method should return Nothing if the mean is undefined for the current parameter values.
- Continuous probability distribution. The minimal complete definition is the quantile function and either the density or its logarithm.
- Probability density function. The probability that a random variable X lies in the infinitesimal interval [x, x+dx) is equal to density(x) * dx.
- Inverse of the cumulative distribution function. The value x for which P(X <= x) = p. If the probability is outside the [0,1] range, the function should signal an error.
- Natural logarithm of the density.
- Discrete probability distribution.
- Probability of the n-th outcome.
- Logarithm of the probability of the n-th outcome.
- Type class common to all distributions. Only the c.d.f. can be defined for both discrete and continuous distributions.
- Cumulative distribution function. The probability that a random variable X is less than or equal to x, i.e. P(X <= x).
The cumulative function should be defined for infinities as well: cumulative d +inf = 1 and cumulative d -inf = 0.
- One's complement of the cumulative distribution: complCumulative d x = 1 - cumulative d x. It is useful when one is interested in P(X > x) and the expression on the right side begins to lose precision. This function has a default implementation, but implementors are encouraged to provide a more precise one.
- Generate variates from a continuous distribution using the inverse transform rule.
- Approximate the value of X for which P(x > X) = p. This method uses a combination of Newton-Raphson iteration and bisection, with the given guess as a starting point. The upper and lower bounds specify the interval in which the probability distribution reaches the value p. Parameters: distribution; probability p; initial guess; lower bound on interval; upper bound on interval.
- Sum probabilities in an inclusive interval.

--- (C) 2012 Edward Kmett, BSD-style (see the file LICENSE), Edward Kmett <ekmett@gmail.com>, provisional, DeriveDataTypeable ---

- The beta distribution. Fields: alpha shape parameter; beta shape parameter.
- Create a beta distribution. Both shape parameters must be positive.
- Create a beta distribution. This constructor doesn't check its parameters. Parameters: shape parameter alpha; shape parameter beta.

--- (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- The binomial distribution. Fields: number of trials; probability.
- Construct a binomial distribution. The number of trials must be non-negative and the probability must be in the [0,1] range.
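The binomial probability mass function can be sketched directly from its definition (a naive version with hypothetical names; a real implementation would work in log space to stay stable for large n):

```haskell
-- P(X = k) = C(n, k) * p^k * (1-p)^(n-k) for k successes in n trials.
binomialProb :: Int -> Double -> Int -> Double
binomialProb n p k = fromIntegral (choose n k) * p ^ k * (1 - p) ^ (n - k)
  where
    -- Binomial coefficient via the falling-factorial quotient.
    choose m j = product [m - j + 1 .. m] `div` product [1 .. j]

main :: IO ()
main = print (binomialProb 4 0.5 2)  -- 0.375
```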
--- (c) 2009, 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- Create a Poisson distribution.

--- (c) 2011 Aleksey Khudyakov, BSD3, bos@serpentine.com, experimental, portable ---

- Cauchy-Lorentz distribution. Fields: the central value of the distribution, which is its mode and median (the distribution doesn't have a mean, so the accessor is named after the median); the scale parameter of the distribution, which is different from the variance and specifies the half width at half maximum (HWHM).
- Cauchy distribution. Parameters: central point; scale parameter (FWHM).

--- (c) 2010 Alexey Khudyakov, BSD3, bos@serpentine.com, experimental, portable ---

- Chi-squared distribution.
- Get the number of degrees of freedom.
- Construct a chi-squared distribution. The number of degrees of freedom must be positive.

--- (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- Create an exponential distribution.
- Create an exponential distribution from a sample. No tests are made to check whether it truly is exponential.
- The λ (scale) parameter.

--- (c) 2009, 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- The gamma distribution. Fields: shape parameter, k; scale parameter, θ.
- Create a gamma distribution. Both the shape and scale parameters must be positive.
- Create a gamma distribution. This constructor does not check whether the parameters are valid. Parameters: shape parameter, k; scale parameter, θ.
--- (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- Create a geometric distribution. Parameter: success rate.
- Create a geometric distribution. Parameter: success rate.

--- (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

--- (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- The normal distribution.
- Standard normal distribution, with mean equal to 0 and variance equal to 1.
- Create a normal distribution from parameters. IMPORTANT: prior to the 0.10 release the second parameter was the variance, not the standard deviation. Parameters: mean of distribution; standard deviation of distribution.
- Create a distribution using parameters estimated from a sample. The variance is estimated using the maximum likelihood method (biased estimation).

--- (c) 2013 John McDonnell, BSD3, bos@serpentine.com, experimental, portable ---

- Linear transformation applied to a distribution: LinearTransform mu sigma _ corresponds to x' = mu + sigma * x. Fields: location parameter; scale parameter; distribution being transformed.
- Apply a linear transformation to a distribution.
- Get the fixed point of a linear transformation. Parameters: fixed point; scale parameter; distribution.

--- (c) 2011 Aleksey Khudyakov, BSD3, bos@serpentine.com, experimental, portable ---

- Student-T distribution.
- Create a Student-T distribution. The number of degrees of freedom must be positive.
- Create an unstandardized Student-T distribution. Parameters: number of degrees of freedom; central value (0 for the standard Student-T distribution); scale parameter.

--- (c) 2011 Aleksey Khudyakov, BSD3, bos@serpentine.com, experimental, portable ---

- Uniform distribution from A to B. Fields: lower boundary of distribution; upper boundary of distribution.
- Create a uniform distribution.

--- (c) 2011 Aleksey Khudyakov, BSD3, bos@serpentine.com, experimental, portable ---

- F distribution.

--- 2011 Aleksey Khudyakov, 2014 Bryan O'Sullivan, BSD3 ---

- Convert from a row-major list.
- Convert from a row-major vector.
- Convert to a row-major flat vector.
- Convert to a row-major flat list.
- Return the dimensions of this matrix, as a (row, column) pair.
- Avoid overflow in the matrix.
- Matrix-matrix multiplication. The matrices must be of compatible sizes (note: not checked).
- Matrix-vector multiplication.
- Raise a matrix to the n-th power. The power must be positive (note: not checked).
- Element in the center of the matrix (not corrected for exponent).
- Calculate the Euclidean norm of a vector.
- Return the given column.
- Return the given row.
- Indicate whether any element of the matrix is NaN.
- Given row and column numbers, calculate the offset into the flat row-major vector.
- Given row and column numbers, calculate the offset into the flat row-major vector, without checking.
Parameters: number of rows; number of columns; flat list of values, in row-major order; row; column.

--- 2014 Bryan O'Sullivan, BSD3 ---

- O(r*c) Compute the QR decomposition of a matrix. The result returned is the pair of matrices (q, r).

--- (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- Parameters a and b to the continuous quantile estimation function.
- O(n log n). Estimate the k-th q-quantile of a sample, using the weighted average method.
- O(n log n). Estimate the k-th q-quantile of a sample x, using the continuous sample method with the given parameters. This is the method used by most statistical software, such as R, Mathematica, SPSS, and S.
- O(n log n). Estimate the range between q-quantiles 1 and q-1 of a sample x, using the continuous sample method with the given parameters. For instance, the interquartile range (IQR) can be estimated as follows:
      midspread medianUnbiased 4 (U.fromList [1,1,2,2,3]) ==> 1.333333
- California Department of Public Works definition, a=0, b=1. Gives a linear interpolation of the empirical CDF. This corresponds to method 4 in R and Mathematica.
- Hazen's definition, a=0.5, b=0.5. This is claimed to be popular among hydrologists.
This corresponds to method 5 in R and Mathematica.
- Definition used by the SPSS statistics application, with a=0, b=0 (also known as Weibull's definition). This corresponds to method 6 in R and Mathematica.
- Definition used by the S statistics application, with a=1, b=1. The interpolation points divide the sample range into n-1 intervals. This corresponds to method 7 in R and Mathematica.
- Median unbiased definition, a=1/3, b=1/3. The resulting quantile estimates are approximately median unbiased regardless of the distribution of x. This corresponds to method 8 in R and Mathematica.
- Normal unbiased definition, a=3/8, b=3/8. An approximately unbiased estimate if the empirical distribution approximates the normal distribution. This corresponds to method 9 in R and Mathematica.
Parameters: k, the desired quantile; q, the number of quantiles; x, the sample data; parameters a and b.

--- (c) 2009, 2010 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- A resample drawn randomly, with replacement, from a set of data points. Distinct from a normal array to make it harder for your humble author's brain to go wrong.
- O(e*r*s) Resample a data set repeatedly, with replacement, computing each estimate over the resampled data. This function is expensive; it has to do work proportional to e*r*s, where e is the number of estimation functions, r is the number of resamples to compute, and s is the number of original samples. To improve performance, this function will make use of all available CPUs.
At least with GHC 7.0, parallel performance seems best if the parallel garbage collector is disabled (RTS option -qg).
- Run an estimator over a sample.
- O(n) or O(n^2) Compute a statistical estimate repeatedly over a sample, each time omitting a successive element.
- O(n) Compute the jackknife mean of a sample.
- O(n) Compute the jackknife variance of a sample with a correction factor c, so we can get either the regular or "unbiased" variance.
- O(n) Compute the unbiased jackknife variance of a sample.
- O(n) Compute the jackknife variance of a sample.
- O(n) Compute the jackknife standard deviation of a sample.
- Drop the k-th element of a vector.
- Split a generator into several that can run independently.
Parameters: estimation functions; number of resamples to compute; original sample.

--- (c) 2009, 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- A point and interval estimate computed via an estimator. Fields: point estimate; lower bound of the estimate interval (i.e. the lower bound of the confidence interval); upper bound of the estimate interval (i.e. the upper bound of the confidence interval); confidence level of the confidence intervals.
- Multiply the point, lower bound, and upper bound in an estimate by the given value.
- Bias-corrected accelerated (BCA) bootstrap. This adjusts for both bias and skewness in the resampled distribution. Parameters: value to multiply by; confidence level; sample data; estimators; resampled data.

--- 2014 Bryan O'Sullivan, BSD3 ---

- Perform an ordinary least-squares regression on a set of predictors, and calculate the goodness-of-fit of the regression. The returned pair consists of a vector of regression coefficients and R², the coefficient of determination.
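The leave-one-out jackknife described earlier in this section can be sketched via its O(n^2) definition; the library's jackknifed mean and variance use O(n) specialisations, and the names here are hypothetical:

```haskell
-- Evaluate the estimator on each copy of the sample with the i-th
-- element removed.
jackknife :: ([Double] -> Double) -> [Double] -> [Double]
jackknife est xs = [ est (dropAt i xs) | i <- [0 .. length xs - 1] ]
  where dropAt i ys = take i ys ++ drop (i + 1) ys

mean :: [Double] -> Double
mean ys = sum ys / fromIntegral (length ys)

main :: IO ()
main = print (jackknife mean [1, 2, 3, 4])  -- four leave-one-out means
```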
The coefficient vector has one more element than the list of predictors; the last element is the y-intercept value.
- Compute the ordinary least-squares solution to A x = b.
- Solve the equation R x = b.
- Compute R², the coefficient of determination that indicates the goodness-of-fit of a regression. This value will be 1 if the predictors fit perfectly, dropping to 0 if they have no explanatory power.
- Bootstrap a regression function. Returns both the results of the regression and the requested confidence interval values.
- Balance units of work across workers.
Parameters: non-empty list of predictor vectors, which must all have the same length (these become the columns of the matrix A that is solved); responder vector, which must have the same length as the predictor vectors; A has at least as many rows as columns; b has the same length as the columns in A; R is an upper-triangular square matrix; b is of the same length as the rows/columns in R; predictors (regressors); responders; regression coefficients; number of resamples to compute; confidence interval; regression function; predictor vectors; responder vector.

--- (c) 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- O(n) Compute a histogram over a data set. The result consists of a pair of vectors: the lower bound of each interval, and the number of samples within the interval. Interval (bin) sizes are uniform, and the upper and lower bounds are chosen automatically using the default range function.
To specify these parameters directly, use the more general histogram function.
- O(n) Compute a histogram over a data set. Interval (bin) sizes are uniform, based on the supplied upper and lower bounds.
- O(n) Compute decent defaults for the lower and upper bounds of a histogram, based on the desired number of bins and the range of the sample data. The upper and lower bounds used are (lo-d, hi+d), where d = (maximum sample - minimum sample) / ((bins - 1) * 2). If all elements in the sample are the same and equal to x, the range is set to (x - |x|/10, x + |x|/10). And if x is equal to 0, the range is set to (-1, 1). This is needed to avoid creating a histogram with zero bin size.
Parameters: number of bins (must be positive); sample data (cannot be empty); number of bins, which must be positive (a zero or negative value will cause an error); lower bound on the interval range (sample data less than this will cause an error); upper bound on the interval range, which must not be less than the lower bound (sample data that falls above the upper bound will cause an error); sample data.

--- (c) 2011 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- Gaussian kernel density estimator for one-dimensional data, using the method of Botev et al. The result is a pair of vectors, containing: the coordinates of each mesh point (the mesh interval is chosen to be 20% larger than the range of the sample; to specify the mesh interval, use the variant that takes explicit bounds); and the density estimates at each mesh point.
- Gaussian kernel density estimator for one-dimensional data, using the method of Botev et al. The result is a pair of vectors, containing: the coordinates of each mesh point; and the density estimates at each mesh point.
- The number of mesh points to use in the uniform discretization of the interval (min, max). If this value is not a power of two, then it is rounded up to the next power of two.
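The default histogram bounds described above can be sketched as follows (the name is hypothetical):

```haskell
-- Widen the sample range by d = (hi - lo) / ((bins - 1) * 2), handling
-- the degenerate single-value cases as described in the docs.
histogramRange :: Int -> [Double] -> (Double, Double)
histogramRange bins xs
  | lo < hi   = (lo - d, hi + d)
  | lo == 0   = (-1, 1)
  | otherwise = (lo - abs lo / 10, lo + abs lo / 10)
  where
    lo = minimum xs
    hi = maximum xs
    d  = (hi - lo) / ((fromIntegral bins - 1) * 2)

main :: IO ()
main = print (histogramRange 5 [0, 2, 8])  -- d = 8/8 = 1, so (-1.0, 9.0)
```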
If this value is not a power of two, then it is rounded up to the next power of two.
- Lower bound (min) of the mesh range.
- Upper bound (max) of the mesh range.

--- (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- The convolution kernel. Its parameters are as follows: scaling factor, 1/nh; bandwidth, h; a point at which to sample the input, p; one sample value, v.
- The width of the convolution kernel used.
- Points from the range of a Sample.
- Bandwidth estimator for an Epanechnikov kernel.
- Bandwidth estimator for a Gaussian kernel.
- Compute the optimal bandwidth from the observed data for the given kernel. This function uses an estimate based on the standard deviation of a sample (due to Deheuvels), which performs reasonably well for unimodal distributions but leads to oversmoothing for more complex ones.
- Choose a uniform range of points at which to estimate a sample's probability density function. If you are using a Gaussian kernel, multiply the sample's bandwidth by 3 before passing it to this function. If this function is passed an empty vector, it returns values of positive and negative infinity.
- Epanechnikov kernel for probability density function estimation.
- Gaussian kernel for probability density function estimation.
- Kernel density estimator, providing a non-parametric way of estimating the PDF of a random variable.
- A helper for creating a simple kernel density estimation function with automatically chosen bandwidth and estimation points.
- Simple Epanechnikov kernel density estimator. Returns the uniformly spaced points from the sample range at which the density function was estimated, and the estimates at those points.
- Simple Gaussian kernel density estimator.
Returns the uniformly spaced points from the sample range at which the density function was estimated, and the estimates at those points.
Parameters: number of points to select, n; sample bandwidth, h; input data; kernel function; bandwidth, h; sample data; points at which to estimate; bandwidth function; kernel function; bandwidth scaling factor (3 for a Gaussian kernel, 1 for all others); number of points at which to estimate; sample data; number of points at which to estimate; data sample; number of points at which to estimate; data sample.

--- (c) 2009, 2010 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable ---

- O(n) Collect the n simple powers of a sample. Functions computed over a sample's simple powers require at least a certain number (or order) of powers to be collected: to compute the k-th central moment, at least k simple powers must be collected; for the variance, at least 2 simple powers are needed; for skewness, we need at least 3 simple powers; and for kurtosis, at least 4 simple powers are required. This function is subject to stream fusion.
- The order (number) of simple powers collected from a sample.
- Compute the k-th central moment of a sample. The central moment is also known as the moment about the mean.
- Maximum likelihood estimate of a sample's variance. Also known as the population variance, where the denominator is n. This is the second central moment of the sample. This is less numerically robust than the variance function in the main sample-statistics module, but the number is essentially free to compute if you have already collected a sample's simple powers. Requires powers of order at least 2.
- Standard deviation. This is simply the square root of the maximum likelihood estimate of the variance.
- Unbiased estimate of a sample's variance. Also known as the sample variance, where the denominator is n-1. Requires powers of order at least 2.
- Compute the skewness of a sample. This is a measure of the asymmetry of its distribution. A sample with negative skew is said to be left-skewed.
Most of its mass is on the right of the distribution, with the tail on the left.
      skewness . powers 3 $ U.to [1,100,101,102,103] ==> -1.497681449918257
A sample with positive skew is said to be right-skewed:
      skewness . powers 3 $ U.to [1,2,3,4,100] ==> 1.4975367033335198
A sample's skewness is not defined if its variance is zero. Requires powers of order at least 3.
- Compute the excess kurtosis of a sample. This is a measure of the "peakedness" of its distribution. A high kurtosis indicates that the sample's variance is due more to infrequent severe deviations than to frequent modest deviations. A sample's excess kurtosis is not defined if its variance is zero. Requires powers of order at least 4.
- The number of elements in the original Sample. This is the sample's zeroth simple power.
- The sum of elements in the original Sample. This is the sample's first simple power.
- The arithmetic mean of elements in the original Sample. This is less numerically robust than the mean function in the main sample-statistics module, but the number is essentially free to compute if you have already collected a sample's simple powers.
Parameter: n, the number of powers, where n >= 2.

- Generic form of the Pearson chi-squared test for binned data. The data sample is supplied in the form of tuples (observed quantity, expected number of events). Both must be positive. Parameters: p-value; number of additional degrees of freedom (one degree of freedom is due to the fact that there are N observations in total, and is accounted for automatically); observation and expectation.

--- (c) 2011 Aleksey Khudyakov, BSD3, bos@serpentine.com, experimental, portable ---

- Check whether a sample could be described by a distribution. A significant result means the distribution is not compatible with the data for the given p-value. This test uses the Marsaglia-Tsang-Wang exact algorithm for the calculation of the p-value.
- Variant of the above which takes the CDF in the form of a function.
- Two-sample Kolmogorov-Smirnov test.
It tests whether two data samples could be described by the same distribution without making any assumptions about it. This test uses an approximate formula for computing the p-value.
- Calculate Kolmogorov's statistic D for a given cumulative distribution function (CDF) and data sample. If the sample is empty, returns 0.
- Calculate Kolmogorov's statistic D for a given cumulative distribution function (CDF) and data sample. If the sample is empty, returns 0.
- Calculate Kolmogorov's statistic D for two data samples. If either of the samples is empty, returns 0.
- Calculate the cumulative probability function for Kolmogorov's distribution with n parameters, or the probability of getting a value smaller than d with an n-element sample. It uses the algorithm by Marsaglia et al. and provides at least 7-digit accuracy.
Parameters: distribution; p-value; data sample; CDF of distribution; p-value; data sample; p-value; sample 1; sample 2; CDF function; sample; distribution; sample; first sample; second sample; size of the sample; D value.

--- (c) 2014 Danny Navarro, BSD3, bos@serpentine.com, experimental, portable ---

- Kruskal-Wallis ranking. All values are replaced by their absolute rank in the combined samples. The samples and values need not be ordered, but the values in the result are ordered. Ranks are assigned with ties given their average rank.
- The Kruskal-Wallis test. In textbooks the output value is usually represented by K or H. This function already does the ranking.
- Calculates whether the Kruskal-Wallis test is significant. It uses the chi-squared distribution for approximation as long as the sizes are larger than 5. Otherwise the test returns Nothing.
- Perform the Kruskal-Wallis test for the given samples and required significance. For additional information, check the test description above. This is just a helper function. Parameters: the samples' size; the p-value at which to test (e.g. 0.05); the K value from the Kruskal-Wallis test.

--- (c) 2010 Neil Brown, BSD3, bos@serpentine.com, experimental, portable ---

- The Wilcoxon rank-sum test. This test calculates the sum of ranks for the given two samples.
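Kolmogorov's statistic D for a CDF and a sample, as described in the Kolmogorov-Smirnov section above, can be sketched as the largest deviation between the empirical and theoretical distribution functions (the name is hypothetical):

```haskell
import Data.List (sort)

-- For the sorted sample, compare the CDF value at each point against
-- the empirical step function from both sides and take the maximum gap.
kolmogorovD :: (Double -> Double) -> [Double] -> Double
kolmogorovD _   [] = 0                       -- empty sample gives 0
kolmogorovD cdf xs = maximum
  [ max (fromIntegral i / n - c) (c - fromIntegral (i - 1) / n)
  | (i, x) <- zip [1 :: Int ..] (sort xs)
  , let c = cdf x ]
  where n = fromIntegral (length xs) :: Double

main :: IO ()
main = print (kolmogorovD id [0.5])  -- uniform CDF, single point: D = 0.5
```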
The samples are ordered and assigned ranks (ties are given their average rank); these ranks are then summed for each sample. The return value is (W1, W2), where W1 is the sum of ranks of the first sample and W2 is the sum of ranks of the second. This test is trivially transformed into the Mann-Whitney U test; you will probably want to use mannWhitneyUSignificant and the related functions for testing significance, but this function is exposed for completeness.

mannWhitneyU: the Mann-Whitney U test. This is sometimes known as the Mann-Whitney-Wilcoxon U test, and confusingly many sources state that the Mann-Whitney U test is the same as Wilcoxon's rank-sum test (which is provided as wilcoxonRankSums). The Mann-Whitney U is a simple transform of Wilcoxon's rank-sum test.

Again confusingly, different sources state reversed definitions for U1 and U2, so it is worth being explicit about what this function returns. Given two samples, the first, xs1, of size n1 and the second, xs2, of size n2, this function returns (U1, U2), where U1 = W1 - n1(n1+1)/2 and U2 = W2 - n2(n2+1)/2, and (W1, W2) is the return value of wilcoxonRankSums xs1 xs2. Some sources instead state that U1 and U2 should be the other way round, often expressing this using U1' = n1*n2 - U1 (since U1 + U2 = n1*n2). All of which you probably don't care about if you just feed this into mannWhitneyUSignificant.

mannWhitneyUCriticalValue: calculates the critical value of Mann-Whitney U for the given sample sizes and significance level. This function returns the exact calculated value of U for all sample sizes; it does not use the normal approximation at all.
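The rank-sum-to-U transform can be sketched directly. This is an illustrative O(n^2) version (mannWhitneyUSketch and rankOf are not the library's functions): pool the samples, give tied values their average rank, sum ranks per sample, and subtract n(n+1)/2:

```haskell
import Data.List (sort)

-- Illustrative O(n^2) sketch of wilcoxonRankSums / mannWhitneyU.
-- Average rank of value v in the pooled, sorted samples (ties averaged).
rankOf :: [Double] -> Double -> Double
rankOf pool v = sum rs / fromIntegral (length rs)
  where rs = [ r | (r, u) <- zip [1 ..] (sort pool), u == v ]

-- (U1, U2) where Ui = Wi - ni(ni+1)/2 and Wi is the rank sum of sample i.
mannWhitneyUSketch :: [Double] -> [Double] -> (Double, Double)
mannWhitneyUSketch xs ys =
    (w1 - n1 * (n1 + 1) / 2, w2 - n2 * (n2 + 1) / 2)
  where
    pool = xs ++ ys
    w1   = sum (map (rankOf pool) xs)
    w2   = sum (map (rankOf pool) ys)
    n1   = fromIntegral (length xs)
    n2   = fromIntegral (length ys)
```

Note that U1 + U2 always equals n1*n2, as stated above, which makes a handy sanity check.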
Above sample size 20 it is generally recommended to use the normal approximation instead, but this function will calculate the higher critical values if you need them. Its parameters are the sample sizes and the p-value (e.g. 0.05) for which you want the critical value; it returns the critical value (of U). The algorithm used to generate these values is a faster, memoised version of the simple unoptimised generating function given in section 2 of "The Mann Whitney Wilcoxon Distribution Using Linked Lists".

mannWhitneyUSignificant: calculates whether the Mann-Whitney U test is significant. If both sample sizes are less than or equal to 20, the exact U critical value (as calculated by mannWhitneyUCriticalValue) is used. If either sample is larger than 20, the normal approximation is used instead. For a one-tailed test, the result indicates whether the first sample is significantly larger than the second; if you want the opposite, simply reverse the order in both the sample sizes and the (U1, U2) pair. Its parameters are: whether to perform a one-tailed test (see description above); the samples' sizes from which the (U1, U2) values were derived; the p-value at which to test (e.g. 0.05); and the (U1, U2) values from mannWhitneyU. Returns Nothing if the sample was too small to make a decision.

mannWhitneyUtest: perform the Mann-Whitney U test for two samples at the required significance. For additional information check the documentation of mannWhitneyU and mannWhitneyUSignificant; this is just a helper function. A one-tailed test checks whether the first sample is significantly larger than the second; a two-tailed test checks whether they are significantly different. Its parameters are: whether to perform a one-tailed test; the p-value at which to test (e.g. 0.05); the first sample; and the second sample. Returns Nothing if the samples were too small to make a decision.

(c) 2010 Neil Brown, BSD3, bos@serpentine.com

coefficients: the coefficients for x^0, x^1, x^2, etc., in the expression prod_{r=1}^s (1 + x^r).
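The exact null distribution behind these critical values can be generated with the textbook recurrence f(m, n, u) = f(m-1, n, u-n) + f(m, n-1, u): the largest pooled value either comes from the first sample (beating all n values of the second, contributing n to U) or from the second. This naive, unmemoised sketch counts orderings rather than probabilities (countU is an illustrative name; the library memoises an equivalent generating function):

```haskell
-- Illustrative, unmemoised count of orderings of m first-sample values
-- and n second-sample values whose Mann-Whitney statistic equals u.
countU :: Int -> Int -> Int -> Integer
countU 0 _ 0 = 1
countU _ 0 0 = 1
countU _ _ u | u < 0 = 0
countU 0 _ _ = 0
countU _ 0 _ = 0
countU m n u = countU (m - 1) n (u - n) + countU m (n - 1) u
```

Summing the counts over all u recovers the binomial coefficient C(m+n, m), and dividing each count by that total gives the exact null probabilities from which critical values are read off.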
See the Mitic paper for details. We can define: f(1) = 1 + x; f(r) = (1 + x^r) * f(r-1) = f(r-1) + x^r * f(r-1). The effect of multiplying the equation by x^r is to shift all the coefficients by r down the list. This list will be processed lazily from the head.

wilcoxonMatchedPairSignificant: tests whether a given result from a Wilcoxon signed-rank matched-pairs test is significant at the given level. This function can perform a one-tailed or two-tailed test. If the first parameter is TwoTailed, the test is performed two-tailed to check whether the two samples differ significantly. If the first parameter is OneTailed, the check is performed one-tailed to decide whether the first sample (i.e. the first sample you passed to wilcoxonMatchedPairSignedRank) is greater than the second sample (i.e. the second sample you passed to wilcoxonMatchedPairSignedRank). If you wish to perform a one-tailed test in the opposite direction, you can either pass the parameters in a different order to wilcoxonMatchedPairSignedRank, or simply swap the values in the resulting pair before passing them to this function.

wilcoxonMatchedPairCriticalValue: obtains the critical value of T to compare against, given a sample size and a p-value (significance level). Your T value must be less than or equal to the return of this function in order for the test to work out significant. If there is a Nothing return, the sample size is too small to make a decision. wilcoxonSignificant tests the return value of wilcoxonMatchedPairSignedRank for you, so you should use wilcoxonSignificant for determining test results; however, this function is useful, for example, for generating lookup tables for Wilcoxon signed-rank critical values.

The return values of this function are generated using the method detailed in the paper "Critical Values for the Wilcoxon Signed Rank Statistic", Peter Mitic, The Mathematica Journal, volume 6, issue 3, 1996, which can be found at http://www.mathematica-journal.com/issue/v6i3/article/mitic/contents/63mitic.pdf.
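The shift-and-add recurrence for the coefficients of prod_{r=1}^s (1 + x^r) can be sketched in a few lines (coeffs and zipWithLong are illustrative names; unlike the library's lazily-consumed infinite list, this version is strict and finite):

```haskell
-- Illustrative, strict version of the shift-and-add recurrence:
-- f(r) = f(r-1) + x^r * f(r-1); multiplying by x^r shifts the
-- coefficient list down by r places, here via a prefix of r zeros.
coeffs :: Int -> [Integer]
coeffs s = foldl step [1] [1 .. s]
  where
    step f r = zipWithLong (+) f (replicate r 0 ++ f)

-- zipWith that keeps the tail of the longer list.
zipWithLong :: (a -> a -> a) -> [a] -> [a] -> [a]
zipWithLong op (a : as) (b : bs) = op a b : zipWithLong op as bs
zipWithLong _  as       []       = as
zipWithLong _  []       bs       = bs
```

For s = 3 this yields [1,1,1,2,1,1,1], the coefficients of (1+x)(1+x^2)(1+x^3), and the coefficients always sum to 2^s (the polynomial evaluated at x = 1).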
According to that paper, the results may differ from other published lookup tables, but (Mitic claims) the values obtained by this function will be the correct ones.

wilcoxonMatchedPairSignificance: works out the significance level (p-value) of a T value, given a sample size and a T value from the Wilcoxon signed-rank matched-pairs test. See the notes on wilcoxonCriticalValue for how this is calculated.

wilcoxonMatchedPairTest: the Wilcoxon matched-pairs signed-rank test. The samples are zipped together; if one is longer than the other, both are truncated to the length of the shorter sample. For a one-tailed test it tests whether the first sample is significantly greater than the second; for a two-tailed test it checks whether they differ significantly. Check wilcoxonMatchedPairSignedRank and wilcoxonMatchedPairSignificant for additional information.

Parameter summaries: wilcoxonMatchedPairSignificant takes whether to perform a one- or two-tailed test (see description above), the sample size from which the (T+, T-) values were derived, the p-value at which to test (e.g. 0.05), and the (T+, T-) values from wilcoxonMatchedPairSignedRank; it returns Nothing if the sample was too small to make a decision. wilcoxonMatchedPairCriticalValue takes the sample size and the p-value (e.g. 0.05) for which you want the critical value, returning the critical value (of T), or Nothing if the sample is too small to make a decision. wilcoxonMatchedPairSignificance takes the sample size and the value of T for which you want the significance, returning the significance (p-value). wilcoxonMatchedPairTest takes whether to perform a one-tailed test, the p-value at which to test (e.g. 0.05), the first sample, and the second sample; it returns Nothing if the sample was too small to make a decision.

(c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com

autocovariance: compute the autocovariance of a sample, i.e. the covariance of the sample against a shifted version of itself.

autocorrelation: compute the autocorrelation function of a sample, and the upper and lower bounds of confidence intervals for each element. Note: the calculation of the 95% confidence interval assumes a stationary Gaussian process.
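Under the usual biased-estimator convention (centre the series, multiply against a lag-k shifted copy, divide by the full length n), autocovariance and autocorrelation can be sketched as follows (autocovSketch and autocorrSketch are illustrative names, and this omits the library's confidence bounds):

```haskell
-- Illustrative sketch: centre the series, multiply against a lag-k
-- shifted copy, and normalise by the full length n (biased estimator).
autocovSketch :: [Double] -> Int -> Double
autocovSketch xs k = sum (zipWith (*) c (drop k c)) / n
  where
    n = fromIntegral (length xs)
    m = sum xs / n
    c = map (subtract m) xs

-- Autocorrelation is the autocovariance normalised by the lag-0 value.
autocorrSketch :: [Double] -> Int -> Double
autocorrSketch xs k = autocovSketch xs k / autocovSketch xs 0
```

By construction the lag-0 autocorrelation is 1, and a perfectly alternating series has strongly negative lag-1 autocorrelation.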