Result of hypothesis testing: either the null hypothesis should be rejected, or the data is compatible with the hypothesis.

Test type. Exact meaning depends on a specific test, but generally a one-tailed test checks whether some statistic is too big (or too small), while a two-tailed test checks whether it is either too big or too small.

Significant if the Boolean parameter is True, not significant otherwise.

(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

Just like unsafePerformIO, but we inline it. Big performance gains as it exposes lots of things to further inlining. /Very unsafe/. In particular, you should do no memory allocation inside such a block. On Hugs this is just unsafePerformIO.

(c) 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

Discrete cosine transform (DCT-II).

Discrete cosine transform (DCT-II). Only the real part of the vector is transformed; the imaginary part is ignored.

Inverse discrete cosine transform (DCT-III). It is the inverse of the DCT only up to a scale parameter: (idct . dct) x = map (* length x) x.

Inverse discrete cosine transform (DCT-III). Only the real part of the vector is transformed; the imaginary part is ignored.

Inverse fast Fourier transform.

Radix-2 decimation-in-time fast Fourier transform.

(c) 2014 Bryan O'Sullivan; BSD3

Two-dimensional mutable matrix, stored in row-major order.

Two-dimensional matrix, stored in row-major order. Fields: rows of the matrix; columns of the matrix; a large exponent, stored separately in order to avoid overflows during matrix multiplication; matrix data.

(c) 2014 Bryan O'Sullivan; BSD3

Allocate a new matrix. Matrix content is not initialized, hence unsafe. Parameters: number of rows; number of columns.

Given row and column numbers, calculate the offset into the flat row-major vector.

Given row and column numbers, calculate the offset into the flat row-major vector, without checking.

(c) 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

Compare two Double values for approximate equality, using Dawson's method. The required accuracy is specified in ULPs (units of least precision). If the two numbers differ by the given number of ULPs or less, this function returns True. Parameter: number of ULPs of accuracy desired.

(c) 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

The result of searching for a root of a mathematical function: either the function does not have opposite signs when evaluated at the lower and upper bounds of the search; or the search failed to converge to within the given error tolerance after the given number of iterations; or a root was successfully found.

Returns either the result of a search for a root, or the default value if the search failed. Parameters: default value; result of search for a root.

Use the method of Ridders to compute a root of a function. The function must have opposite signs when evaluated at the lower and upper bounds of the search (i.e. the root must be bracketed). Parameters: absolute error tolerance; lower and upper bounds for the search; function to find the roots of.

(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

Weights for affecting the importance of elements of a sample.

An estimator of a property of a sample, such as its mean. The use of an algebraic data type here allows functions such as jackknife and the BCA bootstrap to use more efficient algorithms when possible.

Sample with weights. The first element of each pair is the data, the second is the weight.

Sample data.
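The bracketing search by Ridders' method described above can be sketched as follows. This is an illustrative, list-free implementation with an assumed signature (`ridders` here is our own stand-in, not the library's exact API): it returns Nothing when the root is not bracketed or the search fails to converge, mirroring the result type described above.

```haskell
-- A sketch of Ridders' bracketing root search. `ridders tol (lo,hi) f`
-- requires that f lo and f hi have opposite signs (the root is bracketed).
ridders :: Double -> (Double, Double) -> (Double -> Double) -> Maybe Double
ridders tol (lo, hi) f
  | f lo * f hi > 0 = Nothing                 -- root not bracketed
  | otherwise       = go lo (f lo) hi (f hi) (0 :: Int)
  where
    go a fa b fb n
      | n > 100           = Nothing           -- failed to converge
      | abs (b - a) < tol = Just m
      | fx == 0           = Just x
      | fm * fx < 0       = go m fm x fx (n + 1)   -- keep the sub-bracket
      | fa * fx < 0       = go a fa x fx (n + 1)   -- that still contains
      | otherwise         = go x fx b fb (n + 1)   -- a sign change
      where
        m  = 0.5 * (a + b)
        fm = f m
        -- Ridders' exponential interpolation step; |fm / s| <= 1 while
        -- the root stays bracketed, so x never leaves [a, b]
        s  = sqrt (fm * fm - fa * fb)
        x  = m + (m - a) * signum (fa - fb) * fm / s
        fx = f x
```

For example, `ridders 1e-9 (0, 1) (\x -> cos x - x)` converges to roughly 0.739085.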
(c) 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

An unchecked, non-integer-valued version of Loader's saddle point algorithm.

Compute entropy using Theorem 1 from "Sharp Bounds on the Entropy of the Poisson Law". This function is unused because directEntropy is just as accurate and is faster by about a factor of 4.

Returns [x, x^2, x^3, x^4, ...].

Returns an upper bound according to Theorem 2 of "Sharp Bounds on the Entropy of the Poisson Law".

Returns the average of the upper and lower bounds according to Theorem 2.

Compute entropy directly from its definition. This is just as accurate as the bound-based method for lambda <= 1 and is faster, but is slow for large lambda, and produces some underestimation due to accumulation of floating-point error.

Compute the entropy of a Poisson distribution using the best available method.

O(n log n). Compute Kendall's tau from a vector of paired data. Returns NaN when the number of pairs <= 1.

(c) 2009, 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

(c) 2009, 2010, 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

Sort a vector.

Sort a vector.

Sort a vector using a custom ordering.

Partially sort a vector, such that the least k elements will be at the front. Parameter: the number k of least elements.

Return the indices of a vector.

Zip a vector with its indices.

Compute the minimum and maximum of a vector in one pass.

Efficiently compute the next highest power of two for a non-negative integer. If the given value is already a power of two, it is returned unchanged. If negative, zero is returned.

Multiply a number by itself.

Simple for loop. Counts from start to end-1.

Simple reverse-for loop. Counts from start-1 down to end (which must be less than start).

(c) 2013 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

(c) 2008 Don Stewart, 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

O(n) Range. The difference between the largest and smallest elements of a sample.

O(n) Arithmetic mean. This uses Kahan-Babuška-Neumaier summation, so is more accurate than the Welford-based mean unless the input values are very large.

O(n) Arithmetic mean. This uses Welford's algorithm to provide numerical stability, using a single pass over the sample data. Compared to the Kahan-Babuška-Neumaier mean, this loses a surprising amount of precision unless the inputs are very large.

O(n) Arithmetic mean for a weighted sample. It uses a single-pass algorithm analogous to the one used by Welford's mean.

O(n) Harmonic mean. This algorithm performs a single pass over the sample.

O(n) Geometric mean of a sample containing no negative values.

Compute the kth central moment of a sample. The central moment is also known as the moment about the mean. This function performs two passes over the sample, so is not subject to stream fusion. For samples containing many values very close to the mean, this function is subject to inaccuracy due to catastrophic cancellation.

Compute the kth and jth central moments of a sample. This function performs two passes over the sample, so is not subject to stream fusion. For samples containing many values very close to the mean, this function is subject to inaccuracy due to catastrophic cancellation.

Compute the skewness of a sample. This is a measure of the asymmetry of its distribution. A sample with negative skew is said to be left-skewed. Most of its mass is on the right of the distribution, with the tail on the left:

  skewness $ U.to [1,100,101,102,103] ==> -1.497681449918257

A sample with positive skew is said to be right-skewed:
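The compensated summation behind the more accurate mean described above can be sketched with lists. This is illustrative only (`kbnAdd`, `kbnSum`, and `meanKBN` are our names, not the library's vector-based API): Kahan-Babuška-Neumaier summation carries a running compensation term that captures the low-order bits lost by each addition.

```haskell
-- Kahan-Babuska-Neumaier summation state: running sum and compensation.
data KBN = KBN !Double !Double

kbnAdd :: KBN -> Double -> KBN
kbnAdd (KBN s c) x = KBN t c'
  where
    t  = s + x
    -- the operand smaller in magnitude is the one that loses bits;
    -- recover them into the compensation term
    c' | abs s >= abs x = c + ((s - t) + x)
       | otherwise      = c + ((x - t) + s)

kbnSum :: [Double] -> Double
kbnSum xs = case foldl kbnAdd (KBN 0 0) xs of KBN s c -> s + c

meanKBN :: [Double] -> Double
meanKBN xs = kbnSum xs / fromIntegral (length xs)
```

As a usage example, `kbnSum [1, 1e100, 1, -1e100]` recovers 2, where naive left-to-right summation returns 0.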
  skewness $ U.to [1,2,3,4,100] ==> 1.4975367033335198

A sample's skewness is not defined if its variance is zero. This function performs two passes over the sample, so is not subject to stream fusion. For samples containing many values very close to the mean, this function is subject to inaccuracy due to catastrophic cancellation.

Compute the excess kurtosis of a sample. This is a measure of the "peakedness" of its distribution. A high kurtosis indicates that more of the sample's variance is due to infrequent severe deviations, rather than more frequent modest deviations. A sample's excess kurtosis is not defined if its variance is zero. This function performs two passes over the sample, so is not subject to stream fusion. For samples containing many values very close to the mean, this function is subject to inaccuracy due to catastrophic cancellation.

Maximum likelihood estimate of a sample's variance. Also known as the population variance, where the denominator is n.

Unbiased estimate of a sample's variance. Also known as the sample variance, where the denominator is n-1.

Calculate the mean and the maximum likelihood estimate of the variance. This function should be used if both mean and variance are required, since it will calculate the mean only once.

Calculate the mean and the unbiased estimate of the variance. This function should be used if both mean and variance are required, since it will calculate the mean only once.

Standard deviation. This is simply the square root of the unbiased estimate of the variance.

Weighted variance. This is a biased estimation.

Maximum likelihood estimate of a sample's variance.

Unbiased estimate of a sample's variance.

Standard deviation. This is simply the square root of the maximum likelihood estimate of the variance.

Covariance of a sample of pairs. For an empty sample it is set to zero.

Correlation coefficient for a sample of pairs. Also known as Pearson's correlation. For an empty sample it is set to zero.

Pair two samples. It is like zip, but requires that both samples have equal size.

(c) 2011 Aleksey Khudyakov, 2014 Bryan O'Sullivan; BSD3

Convert from a row-major list.

Create a matrix from a list of lists, as rows.

Convert from a row-major vector.

Create a matrix from a list of vectors, as rows.

Create a matrix from a list of vectors, as columns.

Convert to a row-major flat vector.

Convert to a row-major flat list.

Convert to a list of lists, as rows.

Convert to a list of vectors, as rows.

Convert to a list of vectors, as columns.

Generate a matrix using a function.

Generate a symmetric square matrix using a function.

Create the square identity matrix with given dimensions.

Create a square matrix with a given diagonal; other entries default to 0.

Return the dimensions of this matrix, as a (row,column) pair.

Avoid overflow in the matrix.

Matrix-matrix multiplication. Matrices must be of compatible sizes (note: not checked).

Matrix-vector multiplication.

Raise a matrix to the nth power. The power must be positive (note: not checked).

Element in the center of the matrix (not corrected for exponent).

Calculate the Euclidean norm of a vector.

Return the given column.

Return the given row.

Apply a function to every element of a matrix.

Indicate whether any element of the matrix is NaN.

Given row and column numbers, calculate the offset into the flat row-major vector.

Given row and column numbers, calculate the offset into the flat row-major vector, without checking.

Parameters for the list and vector constructors: number of rows; number of columns; flat list (or vector) of values, in row-major order.

Parameters for the generators: number of rows; number of columns; a function which takes row and column as arguments. For the symmetric case: number of rows and columns; a function which takes row and column as arguments.
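The two-pass central-moment scheme described above (mean first, then the moment about it) can be sketched with lists. These are illustrative stand-ins (`listMean`, `centralMoment`, `skewness'`), not the library's vector-based functions:

```haskell
-- Two-pass kth central moment: the first pass computes the mean,
-- the second averages the kth powers of the deviations from it.
listMean :: [Double] -> Double
listMean xs = sum xs / fromIntegral (length xs)

centralMoment :: Int -> [Double] -> Double
centralMoment k xs = listMean [ (x - m) ^ k | x <- xs ]
  where m = listMean xs

-- skewness = m3 / m2^1.5; undefined when the variance m2 is zero
skewness' :: [Double] -> Double
skewness' xs = centralMoment 3 xs / centralMoment 2 xs ** 1.5
```

On the documented example, `skewness' [1,100,101,102,103]` reproduces the quoted value of about -1.4977.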
It must be symmetric in its arguments: f i j == f j i.

Index parameters: row; column.

(c) 2014 Bryan O'Sullivan; BSD3

O(r*c) Compute the QR decomposition of a matrix. The result returned is the pair of matrices (q,r).

Calculate the rank of every element of a sample. In case of ties, ranks are averaged. The sample should already be sorted in ascending order.

  rank (==) (fromList [10,20,30::Int])
  > fromList [1.0,2.0,3.0]

  rank (==) (fromList [10,10,10,30::Int])
  > fromList [2.0,2.0,2.0,4.0]

Compute the rank of every element of a vector. Unlike rank, it doesn't require the sample to be sorted.

Split a tagged vector. Parameters: equivalence relation; vector to rank.

Pearson correlation for a sample of pairs.

Compute pairwise Pearson correlation between the rows of a matrix.

Compute Spearman correlation between two samples.

Compute pairwise Spearman correlation between the rows of a matrix.

(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

Generate discrete random variates which have a given distribution. The continuous-generation class is a superclass because it's always possible to generate real-valued variates from integer-valued ones.

Generate random variates which have a given distribution.

Type class for distributions with entropy, meaning Shannon entropy in the case of a discrete distribution, or differential entropy in the case of a continuous one. If the distribution has well-defined entropy for all valid parameter values, then it should be an instance of this type class. Returns the entropy of a distribution, in nats.

Type class for distributions with entropy, meaning Shannon entropy in the case of a discrete distribution, or differential entropy in the case of a continuous one. Its method should return Nothing if entropy is undefined for the chosen parameter values. Returns the entropy of a distribution, in nats, if such is defined.

Type class for distributions with variance.
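The tie-averaged ranking described earlier can be sketched over an already-sorted list. This is illustrative only (`rankSorted` is our name; the library works on vectors and takes an arbitrary equivalence relation rather than an Eq constraint):

```haskell
import Data.List (group)

-- A run of n equal values occupies ranks i .. i+n-1; each element of
-- the run receives the average of those ranks.
rankSorted :: Eq a => [a] -> [Double]
rankSorted = concat . go 1 . group
  where
    go :: Int -> [[a]] -> [[Double]]
    go _ []       = []
    go i (g : gs) = replicate n avg : go (i + n) gs
      where
        n   = length g
        avg = (fromIntegral i + fromIntegral (i + n - 1)) / 2
```

This reproduces the documented examples: `rankSorted [10,20,30]` is `[1,2,3]` and `rankSorted [10,10,10,30]` is `[2,2,2,4]`.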
If a distribution has finite variance for all valid parameter values, it should be an instance of this type class. The minimal complete definition is the variance or the standard deviation.

Type class for distributions with variance. If the variance is undefined for some parameter values, both the variance and the standard deviation methods should return Nothing. The minimal complete definition is one of the two.

Type class for distributions with mean. If a distribution has finite mean for all valid parameter values, it should be an instance of this type class.

Type class for distributions with mean. Its method should return Nothing if the mean is undefined for the current parameter values.

Continuous probability distribution. The minimal complete definition is the quantile function and either the density or its logarithm.

Probability density function. The probability that the random variable X lies in the infinitesimal interval [x, x+dx) is equal to density(x) * dx.

Inverse of the cumulative distribution function. The value x for which P(X <= x) = p. If the probability is outside of the [0,1] range, the function should call error.

Natural logarithm of the density.

Discrete probability distribution. Probability of the nth outcome. Logarithm of the probability of the nth outcome.

Type class common to all distributions. Only the c.d.f. can be defined for both discrete and continuous distributions. Cumulative distribution function: the probability that a random variable X is less than or equal to x, i.e. P(X <= x). It should be defined for infinities as well:

  cumulative d +inf = 1
  cumulative d -inf = 0

One's complement of the cumulative distribution:

  complCumulative d x = 1 - cumulative d x

It's useful when one is interested in P(X > x) and the expression on the right-hand side begins to lose precision. This function has a default implementation, but implementors are encouraged to provide a more precise one.

Generate variates from a continuous distribution using the inverse transform rule.

Approximate the value of X for which P(x > X) = p. This method uses a combination of Newton-Raphson iteration and bisection with the given guess as a starting point. The upper and lower bounds specify the interval in which the probability distribution reaches the value p.

Sum probabilities in an inclusive interval.

Parameters: distribution; probability p; initial guess; lower bound on interval; upper bound on interval.

(C) 2012 Edward Kmett, BSD-style (see the file LICENSE); Edward Kmett <ekmett@gmail.com>; provisional

The beta distribution. Alpha shape parameter. Beta shape parameter.

Create a beta distribution. Both shape parameters must be positive.

Create a beta distribution. This constructor doesn't check parameters. Parameters: shape parameter alpha; shape parameter beta.

(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

The binomial distribution. Number of trials. Probability.

Construct a binomial distribution. The number of trials must be non-negative and the probability must be in the [0,1] range. Parameters: number of trials; probability.

(c) 2009, 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

Create a Poisson distribution.

(c) 2011 Aleksey Khudyakov; BSD3; bos@serpentine.com; experimental; portable

Cauchy-Lorentz distribution.

Central value of the Cauchy-Lorentz distribution, which is its mode and median. The distribution doesn't have a mean, so the accessor is named after the median.

Scale parameter of the Cauchy-Lorentz distribution. It is different from the variance and specifies the half width at half maximum (HWHM).

Create a Cauchy distribution. Parameters: central point; scale parameter (HWHM).

(c) 2010 Alexey Khudyakov; BSD3; bos@serpentine.com; experimental; portable

Chi-squared distribution. Get the number of degrees of freedom.

Construct a chi-squared distribution. The number of degrees of freedom must be positive.

(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

Create an exponential distribution.

Create an exponential distribution from a sample. No tests are made to check whether it truly is exponential.

Rate parameter.
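The continuous-distribution interface described above (density, cumulative, quantile) can be sketched for the exponential distribution with rate lambda. These are standalone illustrative functions, not the library's type-class methods:

```haskell
expDensity, expCumulative, expQuantile :: Double -> Double -> Double

-- probability density function
expDensity lambda x
  | x < 0     = 0
  | otherwise = lambda * exp (-lambda * x)

-- cumulative distribution function P(X <= x); 0 at -inf, 1 at +inf
expCumulative lambda x
  | x <= 0    = 0
  | otherwise = 1 - exp (-lambda * x)

-- inverse c.d.f.: the x with P(X <= x) = p; errors outside [0,1]
expQuantile lambda p
  | p < 0 || p > 1 = error "expQuantile: p must be in [0,1]"
  | otherwise      = negate (log (1 - p)) / lambda
```

The inverse transform rule mentioned above then draws a uniform u in [0,1] and returns `expQuantile lambda u` as the variate.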
(c) 2009, 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

The gamma distribution. Shape parameter, k. Scale parameter, theta.

Create a gamma distribution. Both shape and scale parameters must be positive.

Create a gamma distribution. This constructor does not check whether the parameters are valid. Parameters: shape parameter k; scale parameter theta.

(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

Create a geometric distribution. Parameter: success rate.

Create a geometric distribution. Parameter: success rate.

(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

The normal distribution.

Standard normal distribution with mean equal to 0 and variance equal to 1.

Create a normal distribution from parameters. IMPORTANT: prior to the 0.10 release the second parameter was the variance, not the standard deviation.

Create a distribution using parameters estimated from a sample. The variance is estimated using the maximum likelihood method (biased estimation). Parameters: mean of distribution; standard deviation of distribution.

(c) 2013 John McDonnell; BSD3; bos@serpentine.com; experimental; portable

Linear transformation applied to a distribution:

  LinearTransform mu sigma x' = mu + sigma * x

Parameters: location parameter; scale parameter; distribution being transformed.

Apply a linear transformation to a distribution.

Get the fixed point of a linear transformation. Parameters: fixed point; scale parameter; distribution.

(c) 2011 Aleksey Khudyakov; BSD3; bos@serpentine.com; experimental; portable

Student-T distribution.

Create a Student-T distribution. The number of degrees of freedom must be positive.

Create an unstandardized Student-T distribution. Parameters: number of degrees of freedom; central value (0 for the standard Student-T distribution); scale parameter.

(c) 2011 Aleksey Khudyakov; BSD3; bos@serpentine.com; experimental; portable

Uniform distribution from a to b. Low boundary of distribution. Upper boundary of distribution. Create a uniform distribution.

(c) 2011 Aleksey Khudyakov; BSD3; bos@serpentine.com; experimental; portable

F distribution.

(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

Parameters a and b to the continuous quantile estimation function.

O(n log n). Estimate the kth q-quantile of a sample, using the weighted average method.

O(n log n). Estimate the kth q-quantile of a sample x, using the continuous sample method with the given parameters. This is the method used by most statistical software, such as R, Mathematica, SPSS, and S.

O(n log n). Estimate the range between q-quantiles 1 and q-1 of a sample x, using the continuous sample method with the given parameters. For instance, the interquartile range (IQR) can be estimated as follows:

  midspread medianUnbiased 4 (U.fromList [1,1,2,2,3]) ==> 1.333333

California Department of Public Works definition, a=0, b=1. Gives a linear interpolation of the empirical CDF. This corresponds to method 4 in R and Mathematica.

Hazen's definition, a=0.5, b=0.5. This is claimed to be popular among hydrologists. This corresponds to method 5 in R and Mathematica.

Definition used by the SPSS statistics application, with a=0, b=0 (also known as Weibull's definition). This corresponds to method 6 in R and Mathematica.

Definition used by the S statistics application, with a=1, b=1. The interpolation points divide the sample range into n-1 intervals. This corresponds to method 7 in R and Mathematica.

Median unbiased definition, a=1/3, b=1/3. The resulting quantile estimates are approximately median unbiased regardless of the distribution of x. This corresponds to method 8 in R and Mathematica.

Normal unbiased definition, a=3/8, b=3/8. An approximately unbiased estimate if the empirical distribution approximates the normal distribution. This corresponds to method 9 in R and Mathematica.

Parameters: k, the desired quantile; q, the number of quantiles; x, the sample data. For the continuous method, additionally: parameters a and b. For midspread: parameters a and b; q, the number of quantiles; x, the sample data.

(c) 2015 Mihai Maruseac; BSD3; mihai.maruseac@maruseac.com; experimental; portable

Location. Scale.

Create a Laplace distribution.

Create a Laplace distribution from a sample. No tests are made to check whether it truly is Laplace. The location of the distribution is estimated as the median of the sample. Parameters: location; scale.

(c) 2009, 2010 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

A resample drawn randomly, with replacement, from a set of data points. Distinct from a normal array to make it harder for your humble author's brain to go wrong.

O(e*r*s) Resample a data set repeatedly, with replacement, computing each estimate over the resampled data. This function is expensive; it has to do work proportional to e*r*s, where e is the number of estimation functions, r is the number of resamples to compute, and s is the number of original samples. To improve performance, this function will make use of all available CPUs. At least with GHC 7.0, parallel performance seems best if the parallel garbage collector is disabled (RTS option -qg).

Run an estimator over a sample.

O(n) or O(n^2) Compute a statistical estimate repeatedly over a sample, each time omitting a successive element.

O(n) Compute the jackknife mean of a sample.

O(n) Compute the jackknife variance of a sample with a correction factor c, so we can get either the regular or the "unbiased" variance.

O(n) Compute the unbiased jackknife variance of a sample.

O(n) Compute the jackknife variance of a sample.

O(n) Compute the jackknife standard deviation of a sample.
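The leave-one-out scheme described above can be sketched with lists. These are illustrative names (`dropAt'`, `jackknife'`, `listMean`), not the library's vector-based API: the estimator is applied once per subsample, each time omitting a successive element.

```haskell
-- Drop the kth element of a list.
dropAt' :: Int -> [a] -> [a]
dropAt' k xs = take k xs ++ drop (k + 1) xs

-- Apply the estimator to every leave-one-out subsample.
jackknife' :: ([Double] -> Double) -> [Double] -> [Double]
jackknife' est xs = [ est (dropAt' k xs) | k <- [0 .. length xs - 1] ]

listMean :: [Double] -> Double
listMean ys = sum ys / fromIntegral (length ys)
```

For example, `jackknife' listMean [1,2,3,4]` yields the four leave-one-out means `[3, 8/3, 7/3, 2]`.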
Drop the kth element of a vector.

Split a generator into several that can run independently.

Parameters: estimation functions; number of resamples to compute; original sample.

(c) 2009, 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

A point and interval estimate computed via an estimator. Fields: point estimate; lower bound of the estimate interval (i.e. the lower bound of the confidence interval); upper bound of the estimate interval (i.e. the upper bound of the confidence interval); confidence level of the confidence intervals.

Multiply the point, lower bound, and upper bound in an estimate by the given value. Parameter: value to multiply by.

Bias-corrected accelerated (BCA) bootstrap. This adjusts for both bias and skewness in the resampled distribution. Parameters: confidence level; sample data; estimators; resampled data.

(c) 2014 Bryan O'Sullivan; BSD3

Perform an ordinary least-squares regression on a set of predictors, and calculate the goodness-of-fit of the regression. The returned pair consists of: a vector of regression coefficients (this vector has one more element than the list of predictors; the last element is the y-intercept value); and R, the coefficient of determination (described below).

Compute the ordinary least-squares solution to A x = b.

Solve the equation R x = b.

Compute R, the coefficient of determination that indicates the goodness-of-fit of a regression. This value will be 1 if the predictors fit perfectly, dropping to 0 if they have no explanatory power.

Bootstrap a regression function. Returns both the results of the regression and the requested confidence interval values.

Balance units of work across workers.

Parameters: a non-empty list of predictor vectors, which must all have the same length (these will become the columns of the matrix A solved by the least-squares function); a responder vector, which must have the same length as the predictor vectors. A has at least as many rows as columns; b has the same length as the columns in A. R is an upper-triangular square matrix; b is of the same length as the rows/columns in R. For the goodness-of-fit: predictors (regressors); responders; regression coefficients. For the bootstrap: number of resamples to compute; confidence interval; regression function; predictor vectors; responder vector.

(c) 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

O(n) Compute a histogram over a data set. The result consists of a pair of vectors: the lower bound of each interval; and the number of samples within the interval. Interval (bin) sizes are uniform, and the upper and lower bounds are chosen automatically using the default range function. To specify these parameters directly, use the explicit-bounds variant.

O(n) Compute a histogram over a data set. Interval (bin) sizes are uniform, based on the supplied upper and lower bounds.

O(n) Compute decent defaults for the lower and upper bounds of a histogram, based on the desired number of bins and the range of the sample data. The upper and lower bounds used are (lo-d, hi+d), where

  d = (maximum sample - minimum sample) / ((bins - 1) * 2)

If all elements in the sample are the same and equal to x, the range is set to (x - |x|/10, x + |x|/10). And if x is equal to 0, the range is set to (-1,1). This is needed to avoid creating a histogram with zero bin size.

Parameters: number of bins (must be positive); sample data (cannot be empty).

For the explicit-bounds variant: number of bins (this value must be positive; a zero or negative value will cause an error); lower bound on interval range (sample data less than this will cause an error); upper bound on interval range (this value must not be less than the lower bound, and sample data that falls above the upper bound will cause an error).
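The default-bounds rule quoted above can be sketched directly (`histoRange` is an illustrative name; the library works on vectors): pad the sample range by d on each side, with the special cases for constant samples.

```haskell
histoRange :: Int -> [Double] -> (Double, Double)
histoRange bins xs
  | lo == hi  = if lo == 0
                then (-1, 1)                              -- all zeros
                else (lo - abs lo / 10, hi + abs hi / 10) -- constant x
  | otherwise = (lo - d, hi + d)
  where
    lo = minimum xs
    hi = maximum xs
    -- half a bin's worth of padding on each side
    d  = (hi - lo) / fromIntegral ((bins - 1) * 2)
```

For example, `histoRange 11 [0, 10]` pads by d = 10 / 20 = 0.5, giving (-0.5, 10.5).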
Sample data.

For the default-bounds calculation: number of bins (must be positive); sample data (cannot be empty).

(c) 2011 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

Gaussian kernel density estimator for one-dimensional data, using the method of Botev et al. The result is a pair of vectors, containing: the coordinates of each mesh point (the mesh interval is chosen to be 20% larger than the range of the sample; to specify the mesh interval, use the explicit-bounds variant); and the density estimates at each mesh point.

Gaussian kernel density estimator for one-dimensional data, using the method of Botev et al. The result is a pair of vectors, containing: the coordinates of each mesh point; and the density estimates at each mesh point.

Parameters: the number of mesh points to use in the uniform discretization of the interval (min,max); if this value is not a power of two, then it is rounded up to the next power of two. For the explicit-bounds variant, additionally: lower bound (min) of the mesh range; upper bound (max) of the mesh range.

(c) 2009 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

The convolution kernel. Its parameters are as follows: scaling factor, 1/nh; bandwidth, h; a point at which to sample the input, p; one sample value, v.

The width of the convolution kernel used.

Points from the range of a Sample.

Bandwidth estimator for an Epanechnikov kernel.

Bandwidth estimator for a Gaussian kernel.

Compute the optimal bandwidth from the observed data for the given kernel. This function uses an estimate based on the standard deviation of a sample (due to Deheuvels), which performs reasonably well for unimodal distributions but leads to oversmoothing for more complex ones.

Choose a uniform range of points at which to estimate a sample's probability density function. If you are using a Gaussian kernel, multiply the sample's bandwidth by 3 before passing it to this function. If this function is passed an empty vector, it returns values of positive and negative infinity.

Epanechnikov kernel for probability density function estimation.

Gaussian kernel for probability density function estimation.

Kernel density estimator, providing a non-parametric way of estimating the PDF of a random variable.

A helper for creating a simple kernel density estimation function with automatically chosen bandwidth and estimation points.

Simple Epanechnikov kernel density estimator. Returns the uniformly spaced points from the sample range at which the density function was estimated, and the estimates at those points.

Simple Gaussian kernel density estimator. Returns the uniformly spaced points from the sample range at which the density function was estimated, and the estimates at those points.

Parameters: number of points to select, n; sample bandwidth, h; input data. For the estimator: kernel function; bandwidth, h; sample data; points at which to estimate. For the helper: bandwidth function; kernel function; bandwidth scaling factor (3 for a Gaussian kernel, 1 for all others); number of points at which to estimate; sample data.

(c) 2009, 2010 Bryan O'Sullivan; BSD3; bos@serpentine.com; experimental; portable

O(n) Collect the n simple powers of a sample. Functions computed over a sample's simple powers require at least a certain number (or order) of powers to be collected: to compute the kth central moment, at least k simple powers must be collected; for the variance, at least 2 simple powers are needed; for skewness, we need at least 3 simple powers; for kurtosis, at least 4 simple powers are required. This function is subject to stream fusion.

The order (number) of simple powers collected from a sample.

Compute the kth central moment of a sample. The central moment is also known as the moment about the mean.

Maximum likelihood estimate of a sample's variance. Also known as the population variance, where the denominator is n. This is the second central moment of the sample. This is less numerically robust than the variance function in the Statistics.Sample module, but the number is essentially free to compute if you have already collected a sample's simple powers. Requires the powers to be collected with order at least 2.

Standard deviation. This is simply the square root of the maximum likelihood estimate of the variance.

Unbiased estimate of a sample's variance. Also known as the sample variance, where the denominator is n-1. Requires the powers to be collected with order at least 2.

Compute the skewness of a sample. This is a measure of the asymmetry of its distribution. A sample with negative skew is said to be left-skewed. Most of its mass is on the right of the distribution, with the tail on the left:

  skewness . powers 3 $ U.to [1,100,101,102,103] ==> -1.497681449918257

A sample with positive skew is said to be right-skewed:

  skewness . powers 3 $ U.to [1,2,3,4,100] ==> 1.4975367033335198

A sample's skewness is not defined if its variance is zero. Requires the powers to be collected with order at least 3.

Compute the excess kurtosis of a sample. This is a measure of the "peakedness" of its distribution. A high kurtosis indicates that the sample's variance is due more to infrequent severe deviations than to frequent modest deviations. A sample's excess kurtosis is not defined if its variance is zero. Requires the powers to be collected with order at least 4.

The number of elements in the original Sample. This is the sample's zeroth simple power.

The sum of elements in the original Sample. This is the sample's first simple power.

The arithmetic mean of elements in the original Sample. This is less numerically robust than the mean function in the Statistics.Sample module, but the number is essentially free to compute if you have already collected a sample's simple powers.

Parameter: n, the number of powers, where n >= 2.

Generic form of Pearson chi-squared tests for binned data. The data sample is supplied in the form of tuples (observed quantity, expected number of events). Both must be positive. Parameters: p-value; number of additional degrees of freedom (one degree of freedom is due to the fact that there are N observations in total, and is accounted for automatically); observations and expectations.

(c) 2011 Aleksey Khudyakov; BSD3; bos@serpentine.com; experimental; portable

Check that a sample could be described by a distribution.
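The Pearson chi-squared statistic over binned (observed, expected) data described above can be sketched as follows. These are illustrative names (`chi2stat`, `chi2DoF`); the library additionally converts the statistic into a p-value via the chi-squared distribution, which is omitted here:

```haskell
-- Sum of (O - E)^2 / E over all bins.
chi2stat :: [(Double, Double)] -> Double
chi2stat bins = sum [ (o - e) ^ (2 :: Int) / e | (o, e) <- bins ]

-- Degrees of freedom: one is lost automatically because the N
-- observations are constrained to a fixed total; `extra` accounts
-- for any additional degrees of freedom supplied by the caller.
chi2DoF :: Int -> [(Double, Double)] -> Int
chi2DoF extra bins = length bins - 1 - extra
```

For example, bins that exactly match their expectations give a statistic of 0.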
Significant means the distribution is not compatible with the data for the given p-value. This test uses the Marsaglia-Tsang-Wang exact algorithm for calculation of the p-value.

kolmogorovSmirnovTestCdf: Variant of kolmogorovSmirnovTest which takes the CDF in the form of a function.

kolmogorovSmirnovTest2: Two-sample Kolmogorov-Smirnov test. It tests whether two data samples could be described by the same distribution, without making any assumptions about it. This test uses an approximate formula for computing the p-value.

kolmogorovSmirnovCdfD: Calculate Kolmogorov's statistic D for a given cumulative distribution function (CDF) and data sample. If the sample is empty, returns 0.

kolmogorovSmirnovD: Calculate Kolmogorov's statistic D for a given distribution and data sample. If the sample is empty, returns 0.

kolmogorovSmirnov2D: Calculate Kolmogorov's statistic D for two data samples. If either of the samples is empty, returns 0.

kolmogorovSmirnovProbability: Calculate the cumulative probability function for Kolmogorov's distribution with n parameters, i.e. the probability of getting a value smaller than d with an n-element sample. It uses the algorithm by Marsaglia et al. and provides at least 7-digit accuracy. Parameters: size of the sample; D value.

Parameter summaries: kolmogorovSmirnovTest takes a distribution, a p-value, and a data sample; kolmogorovSmirnovTestCdf takes the CDF of a distribution, a p-value, and a data sample; kolmogorovSmirnovTest2 takes a p-value and two samples.

Statistics.Test.KruskalWallis — (c) 2014 Danny Navarro, BSD3, bos@serpentine.com, experimental, portable

kruskalWallisRank: Kruskal-Wallis ranking. All values are replaced by their absolute rank in the combined samples. The samples and values need not be ordered, but the values in the result are ordered. Returns the assigned ranks (ties are given their average rank).

kruskalWallis: The Kruskal-Wallis test. In textbooks the output value is usually represented by K or H. This function already does the ranking.

kruskalWallisSignificant: Calculates whether the Kruskal-Wallis test is significant. It uses the chi-squared distribution for approximation as long as the sample sizes are larger than 5. Otherwise the test returns Nothing.

kruskalWallisTest: Perform the Kruskal-Wallis test for the given samples and required significance. For additional information check kruskalWallis.
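Kolmogorov's statistic D described above — the largest distance between the empirical CDF of the sample and the reference CDF — can be sketched standalone as:

```haskell
import Data.List (sort)

-- Hypothetical sketch: D is the largest gap between the empirical CDF
-- and the reference CDF, checked on both sides of each step of the
-- empirical CDF.  An empty sample gives D = 0, as documented.
kolmogorovD :: (Double -> Double) -> [Double] -> Double
kolmogorovD _   [] = 0
kolmogorovD cdf xs = maximum
    [ max (abs (cdf x - i / n)) (abs (cdf x - (i - 1) / n))
    | (i, x) <- zip [1 ..] (sort xs) ]
  where
    n = fromIntegral (length xs)

main :: IO ()
main = print (kolmogorovD id [0.1, 0.2, 0.9])  -- against the uniform CDF on [0,1]
```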
This is just a helper function. Parameters: the samples' sizes; the p-value at which to test (e.g. 0.05); the K value from kruskalWallis.

Statistics.Test.MannWhitneyU — (c) 2010 Neil Brown, BSD3, bos@serpentine.com, experimental, portable

wilcoxonRankSums: The Wilcoxon rank-sums test. This test calculates the sum of ranks for the given two samples. The samples are ordered and assigned ranks (ties are given their average rank), then these ranks are summed for each sample. The return value is (W1, W2), where W1 is the sum of ranks of the first sample and W2 is the sum of ranks of the second sample. This test is trivially transformed into the Mann-Whitney U test. You will probably want to use mannWhitneyU and the related functions for testing significance, but this function is exposed for completeness.

mannWhitneyU: The Mann-Whitney U test. This is sometimes known as the Mann-Whitney-Wilcoxon U test, and confusingly many sources state that the Mann-Whitney U test is the same as the Wilcoxon rank-sum test (which is provided as wilcoxonRankSums). The Mann-Whitney U is a simple transform of Wilcoxon's rank-sum test. Again confusingly, different sources state reversed definitions for U1 and U2, so it is worth being explicit about what this function returns. Given two samples, the first, xs1, of size n1 and the second, xs2, of size n2, this function returns (U1, U2), where U1 = W1 - (n1(n1+1))/2 and U2 = W2 - (n2(n2+1))/2, where (W1, W2) is the return value of wilcoxonRankSums xs1 xs2. Some sources instead state that U1 and U2 should be the other way round, often expressing this using U1' = n1*n2 - U1 (since U1 + U2 = n1*n2). All of which you probably don't care about if you just feed this into mannWhitneyUSignificant.

mannWhitneyUCriticalValue: Calculates the critical value of Mann-Whitney U for the given sample sizes and significance level. This function returns the exact calculated value of U for all sample sizes; it does not use the normal approximation at all.
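A hypothetical standalone sketch of the rank-sum / U relationship just described (ignoring tie handling; the library gives tied values their average rank):

```haskell
import Data.List (sort)

-- Sum of combined-sample ranks for each sample (no ties assumed here).
rankSums :: [Double] -> [Double] -> (Double, Double)
rankSums xs ys = (sumRanks xs, sumRanks ys)
  where
    combined = sort (xs ++ ys)
    rankOf v = fromIntegral (length (takeWhile (< v) combined)) + 1
    sumRanks = sum . map rankOf

-- U1 = W1 - n1 (n1 + 1) / 2, and symmetrically for U2;
-- note that U1 + U2 = n1 * n2.
mannWhitneyU :: [Double] -> [Double] -> (Double, Double)
mannWhitneyU xs ys = (w1 - adj xs, w2 - adj ys)
  where
    (w1, w2) = rankSums xs ys
    adj s    = let n = fromIntegral (length s) in n * (n + 1) / 2

main :: IO ()
main = print (mannWhitneyU [1, 3, 5] [2, 4, 6])
```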
Above sample size 20 it is generally recommended to use the normal approximation instead, but this function will calculate the higher critical values if you need them. The algorithm used to generate these values is a faster, memoised version of the simple unoptimised generating function given in section 2 of "The Mann Whitney Wilcoxon Distribution Using Linked Lists".

mannWhitneyUSignificant: Calculates whether the Mann-Whitney U test is significant. If both sample sizes are less than or equal to 20, the exact U critical value (as calculated by mannWhitneyUCriticalValue) is used. If either sample is larger than 20, the normal approximation is used instead. If you use a one-tailed test, the test indicates whether the first sample is significantly larger than the second. If you want the opposite, simply reverse the order in both the sample sizes and the (U1, U2) pair.

mannWhitneyUtest: Perform the Mann-Whitney U test for two samples and the required significance. For additional information check the documentation of mannWhitneyU and mannWhitneyUSignificant. This is just a helper function. A one-tailed test checks whether the first sample is significantly larger than the second; a two-tailed test checks whether they are significantly different.

Parameter summaries:
- mannWhitneyUCriticalValue: the sample sizes; the p-value (e.g. 0.05) for which you want the critical value. Returns the critical value (of U).
- mannWhitneyUSignificant: whether to perform a one-tailed test (see description above); the samples' sizes from which the (U1, U2) values were derived; the p-value at which to test (e.g. 0.05); the (U1, U2) values from mannWhitneyU. Returns Nothing if the sample was too small to make a decision.
- mannWhitneyUtest: whether to perform a one-tailed test (see description above); the p-value at which to test (e.g. 0.05); first sample; second sample. Returns Nothing if the sample was too small to make a decision.

Statistics.Test.WilcoxonT — (c) 2010 Neil Brown, BSD3, bos@serpentine.com, experimental, portable

coefficients: The coefficients for x^0, x^1, x^2, etc., in the expression prod_{r=1}^s (1 + x^r).
See the Mitic paper for details. We can define:

  f(1) = 1 + x
  f(r) = (1 + x^r) * f(r-1) = f(r-1) + x^r * f(r-1)

The effect of multiplying the equation by x^r is to shift all the coefficients by r down the list. This list will be processed lazily from the head.

wilcoxonMatchedPairSignificant: Tests whether a given result from a Wilcoxon signed-rank matched-pairs test is significant at the given level. This function can perform a one-tailed or two-tailed test. If the first parameter to this function is TwoTailed, the test is performed two-tailed to check if the two samples differ significantly. If the first parameter is OneTailed, the check is performed one-tailed to decide whether the first sample (i.e. the first sample you passed to wilcoxonMatchedPairSignedRank) is greater than the second sample (i.e. the second sample you passed to wilcoxonMatchedPairSignedRank). If you wish to perform a one-tailed test in the opposite direction, you can either pass the parameters in a different order to wilcoxonMatchedPairSignedRank, or simply swap the values in the resulting pair before passing them to this function.

wilcoxonMatchedPairCriticalValue: Obtains the critical value of T to compare against, given a sample size and a p-value (significance level). Your T value must be less than or equal to the return of this function in order for the test to work out significant. If there is a Nothing return, the sample size is too small to make a decision. wilcoxonSignificant tests the return value of wilcoxonMatchedPairSignedRank for you, so you should use wilcoxonSignificant for determining test results. However, this function is useful, for example, for generating lookup tables for Wilcoxon signed-rank critical values. The return values of this function are generated using the method detailed in the paper "Critical Values for the Wilcoxon Signed Rank Statistic", Peter Mitic, The Mathematica Journal, volume 6, issue 3, 1996, which can be found here: http://www.mathematica-journal.com/issue/v6i3/article/mitic/contents/63mitic.pdf.
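The generating-function recurrence above can be sketched directly (a standalone illustration; the library's memoised version differs):

```haskell
-- Coefficients of prod_{r=1}^{s} (1 + x^r): multiplying f(r-1) by
-- (1 + x^r) adds f(r-1) to a copy of itself shifted down by r places.
coefficients :: Int -> [Integer]
coefficients s = foldl step [1] [1 .. s]
  where
    step f r = addLong f (replicate r 0 ++ f)
    addLong (a:as) (b:bs) = a + b : addLong as bs
    addLong as     bs     = as ++ bs   -- one list exhausted

main :: IO ()
main = print (coefficients 3)
```

For s = 3 this yields the coefficients of (1+x)(1+x^2)(1+x^3) = 1 + x + x^2 + 2x^3 + x^4 + x^5 + x^6.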
According to that paper, the results may differ from other published lookup tables, but (Mitic claims) the values obtained by this function will be the correct ones.

wilcoxonMatchedPairSignificance: Works out the significance level (p-value) of a T value, given a sample size and a T value from the Wilcoxon signed-rank matched-pairs test. See the notes on wilcoxonCriticalValue for how this is calculated.

wilcoxonMatchedPairTest: The Wilcoxon matched-pairs signed-rank test. The samples are zipped together: if one is longer than the other, both are truncated to the length of the shorter sample. For a one-tailed test it tests whether the first sample is significantly greater than the second. For a two-tailed test it checks whether they significantly differ. Check wilcoxonMatchedPairSignedRank and wilcoxonMatchedPairSignificant for additional information.

Parameter summaries:
- wilcoxonMatchedPairSignificant: whether to perform a one- or two-tailed test (see description above); the sample size from which the (T+, T-) values were derived; the p-value at which to test (e.g. 0.05); the (T+, T-) values from wilcoxonMatchedPairSignedRank. Returns Nothing if the sample was too small to make a decision.
- wilcoxonMatchedPairCriticalValue: the sample size; the p-value (e.g. 0.05) for which you want the critical value. Returns the critical value (of T), or Nothing if the sample is too small to make a decision.
- wilcoxonMatchedPairSignificance: the sample size; the value of T for which you want the significance. Returns the significance (p-value).
- wilcoxonMatchedPairTest: whether to perform a one-tailed test; the p-value at which to test (e.g. 0.05); first sample; second sample. Returns Nothing if the sample was too small to make a decision.

Statistics.Autocorrelation — (c) 2009 Bryan O'Sullivan, BSD3, bos@serpentine.com, experimental, portable

autocovariance: Compute the autocovariance of a sample, i.e.
the covariance of the sample against a shifted version of itself.

autocorrelation: Compute the autocorrelation function of a sample, and the upper and lower bounds of confidence intervals for each element. Note: the calculation of the 95% confidence interval assumes a stationary Gaussian process.
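The autocovariance and autocorrelation definitions above can be sketched standalone as follows (the confidence bounds are omitted, and the scaling convention here is an assumption of the sketch):

```haskell
-- Autocovariance at lag k: the mean-centred sample multiplied
-- elementwise against itself shifted by k, summed, divided by n.
autocovariance :: [Double] -> Int -> Double
autocovariance xs k = sum (zipWith (*) c (drop k c)) / n
  where
    n = fromIntegral (length xs)
    m = sum xs / n
    c = map (subtract m) xs

-- Autocorrelation function: each lag normalised by the lag-0 value,
-- so the first element is always 1.
autocorrelation :: [Double] -> [Double]
autocorrelation xs = map (/ autocovariance xs 0)
                         [ autocovariance xs k | k <- [0 .. length xs - 1] ]

main :: IO ()
main = print (take 2 (autocorrelation [1, 2, 3, 4]))
```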