"A      !"#$%&'()*+,-./012345678 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ ` a bcdefghijklmnopqrstuvwxyz{|}~& ;Calculate rank of sample. Sample should be already sorted. Equivalence relation Vector to rank Split tagged vector Result of hypothesis testing #Data is compatible with hypothesis #Null hypothesis should be rejected 9Test type. Exact meaning depends on a specific test. But  generally it'4s tested whether some statistics is too big (small)  for ( or whether it too big or too small for  Significant if parameter is , not significant otherwiser !portable experimentalbos@serpentine.com=Just like unsafePerformIO, but we inline it. Big performance 9 gains as it exposes lots of things to further inlining. /Very  unsafe/;. In particular, you should do no memory allocation inside  an  block. On Hugs this is just unsafePerformIO. portable experimentalbos@serpentine.com $Discrete cosine transform (DCT-II). ?Discrete cosine transform, with complex coefficients (DCT-II). /Inverse discrete cosine transform (DCT-III). It' s inverse of   only up to scale parameter:  (idct . dct) x = (* lenngth x) =Inverse discrete cosine transform, with complex coefficients  (DCT-III).  Inverse fast Fourier transform. 3Radix-2 decimation-in-time fast Fourier transform.    "portable experimentalbos@serpentine.com#portable experimentalbos@serpentine.com Compare two ( values for approximate equality, using  Dawson' s method. ;The required accuracy is specified in ULPs (units of least D precision). If the two numbers differ by the given number of ULPs  or less, this function returns True. $Number of ULPs of accuracy desired. portable experimentalbos@serpentine.com?The result of searching for a root of a mathematical function. A root was successfully found. 2The search failed to converge to within the given 7 error tolerance after the given number of iterations. /The function does not have opposite signs when 8 evaluated at the lower and upper bounds of the search. AReturns either the result of a search for a root, or the default  value if the search failed. Default value. Result of search for a root. ;Use the method of Ridders to compute a root of a function. BThe function must have opposite signs when evaluated at the lower C and upper bounds of the search (i.e. the root must be bracketed). Absolute error tolerance. 'Lower and upper bounds for the search. Function to find the roots of. portable experimentalbos@serpentine.com Sort a vector. 'Sort a vector using a custom ordering. -Partially sort a vector, such that the least k elements will be  at the front.  The number k of least elements.  Return the indices of a vector. Zip a vector with its indices. 9Compute the minimum and maximum of a vector in one pass. 8Efficiently compute the next highest power of two for a A non-negative integer. If the given value is already a power of @ two, it is returned unchanged. If negative, zero is returned. portable experimentalbos@serpentine.com  Parameters a and b to the  function. O(n log n). Estimate the kth q-quantile of a sample, $ using the weighted average method. k, the desired quantile. q, the number of quantiles. x, the sample data. O(n log n). Estimate the kth q-quantile of a sample x, E using the continuous sample method with the given parameters. This = is the method used by most statistical software, such as R,  Mathematica, SPSS, and S.  Parameters a and b. k, the desired quantile. q, the number of quantiles. x, the sample data. O(n log n). 
midspread
  O(n log n). Estimate the range between q-quantiles 1 and q-1 of a
  sample x, using the continuous sample method with the given
  parameters. For instance, the interquartile range (IQR) can be
  estimated as follows:
    midspread medianUnbiased 4 (U.fromList [1,1,2,2,3]) ==> 1.333333
  Arguments: parameters a and b; q, the number of quantiles; x, the
  sample data.

cadpw
  California Department of Public Works definition, a=0, b=1. Gives a
  linear interpolation of the empirical CDF. This corresponds to
  method 4 in R and Mathematica.

hazen
  Hazen's definition, a=0.5, b=0.5. This is claimed to be popular
  among hydrologists. This corresponds to method 5 in R and
  Mathematica.

spss
  Definition used by the SPSS statistics application, with a=0, b=0
  (also known as Weibull's definition). This corresponds to method 6
  in R and Mathematica.

s
  Definition used by the S statistics application, with a=1, b=1. The
  interpolation points divide the sample range into n-1 intervals.
  This corresponds to method 7 in R and Mathematica.

medianUnbiased
  Median unbiased definition, a=1/3, b=1/3. The resulting quantile
  estimates are approximately median unbiased regardless of the
  distribution of x. This corresponds to method 8 in R and
  Mathematica.

normalUnbiased
  Normal unbiased definition, a=3/8, b=3/8. An approximately unbiased
  estimate if the empirical distribution approximates the normal
  distribution. This corresponds to method 9 in R and Mathematica.

Statistics.Sample.Histogram

histogram
  O(n). Compute a histogram over a data set. The result consists of a
  pair of vectors:
    * the lower bound of each interval;
    * the number of samples within the interval.
  Interval (bin) sizes are uniform, and the upper and lower bounds
  are chosen automatically using the range function. To specify these
  parameters directly, use histogram_.
  Arguments: number of bins (must be positive); sample data (cannot
  be empty).

histogram_
  O(n). Compute a histogram over a data set. Interval (bin) sizes are
  uniform, based on the supplied upper and lower bounds.
  Arguments: number of bins (must be positive; a zero or negative
  value will cause an error); lower bound on the interval range
  (sample data less than this will cause an error); upper bound on
  the interval range (must not be less than the lower bound; sample
  data that falls above it will cause an error); sample data.

range
  O(n). Compute decent defaults for the lower and upper bounds of a
  histogram, based on the desired number of bins and the range of the
  sample data. The bounds used are (lo-d, hi+d), where
    d = (maximum sample - minimum sample) / ((bins - 1) * 2)
  Arguments: number of bins (must be positive); sample data (cannot
  be empty).
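A minimal sketch of histogram (0.10-era API; the exact element types
of the result are an assumption here, since the counts are returned
through a numeric type class):

    import qualified Data.Vector.Unboxed as U
    import Statistics.Sample.Histogram (histogram)

    -- Five uniform bins over a small sample; the result pairs the lower
    -- bound of each bin with the number of samples that fall into it.
    bins :: (U.Vector Double, U.Vector Double)
    bins = histogram 5 (U.fromList [1, 2, 2, 3, 3, 3, 4, 4, 5])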
Statistics.Sample.KernelDensity

kde
  Gaussian kernel density estimator for one-dimensional data, using
  the method of Botev et al. The result is a pair of vectors,
  containing:
    * the coordinates of each mesh point (the mesh interval is chosen
      to be 20% larger than the range of the sample; to specify the
      mesh interval, use kde_);
    * density estimates at each mesh point.
  Argument: the number of mesh points to use in the uniform
  discretization of the interval (min,max); if this value is not a
  power of two, it is rounded up to the next power of two.

kde_
  Gaussian kernel density estimator for one-dimensional data, using
  the method of Botev et al. The result is a pair of vectors,
  containing:
    * the coordinates of each mesh point;
    * density estimates at each mesh point.
  Arguments: the number of mesh points to use in the uniform
  discretization of the interval (min,max), rounded up to the next
  power of two if it is not one already; the lower bound (min) of the
  mesh range; the upper bound (max) of the mesh range.

Statistics.Sample.Powers

powers
  O(n). Collect the n simple powers of a sample. Functions computed
  over a sample's simple powers require at least a certain number (or
  order) of powers to be collected:
    * to compute the k-th centralMoment, at least k simple powers
      must be collected;
    * for the variance, at least 2 simple powers are needed;
    * for skewness, we need at least 3 simple powers;
    * for kurtosis, at least 4 simple powers are required.
  This function is subject to stream fusion.
  Argument: n, the number of powers, where n >= 2.

order
  The order (number) of simple powers collected from a sample.

centralMoment
  Compute the k-th central moment of a sample. The central moment is
  also known as the moment about the mean.

variance
  Maximum likelihood estimate of a sample's variance. Also known as
  the population variance, where the denominator is n. This is the
  second central moment of the sample. This is less numerically
  robust than the variance function in the Statistics.Sample module,
  but the number is essentially free to compute if you have already
  collected a sample's simple powers. Requires Powers with order at
  least 2.

stdDev
  Standard deviation. This is simply the square root of the maximum
  likelihood estimate of the variance.

varianceUnbiased
  Unbiased estimate of a sample's variance. Also known as the sample
  variance, where the denominator is n-1. Requires Powers with order
  at least 2.

skewness
  Compute the skewness of a sample. This is a measure of the
  asymmetry of its distribution. A sample with negative skew is said
  to be left-skewed: most of its mass is on the right of the
  distribution, with the tail on the left.
    skewness . powers 3 $ U.fromList [1,100,101,102,103] ==> -1.497681449918257
  A sample with positive skew is said to be right-skewed:
    skewness . powers 3 $ U.fromList [1,2,3,4,100] ==> 1.4975367033335198
  A sample's skewness is not defined if its variance is zero.
  Requires Powers with order at least 3.

kurtosis
  Compute the excess kurtosis of a sample. This is a measure of the
  "peakedness" of its distribution. A high kurtosis indicates that
  the sample's variance is due more to infrequent severe deviations
  than to frequent modest deviations. A sample's excess kurtosis is
  not defined if its variance is zero. Requires Powers with order at
  least 4.

count
  The number of elements in the original Sample. This is the sample's
  zeroth simple power.

sum
  The sum of elements in the original Sample. This is the sample's
  first simple power.

mean
  The arithmetic mean of elements in the original Sample. This is
  less numerically robust than the mean function in the
  Statistics.Sample module, but the number is essentially free to
  compute if you have already collected a sample's simple powers.
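A minimal sketch of the Powers workflow (0.10-era API; exact
signatures may differ): collect the simple powers once, then read
several statistics from them cheaply.

    import qualified Data.Vector.Unboxed as U
    import qualified Statistics.Sample.Powers as P

    -- Collect the first four simple powers, then compute statistics
    -- that need orders 1, 2 and 4 respectively.
    stats :: (Double, Double, Double)
    stats = (P.mean p, P.variance p, P.kurtosis p)
      where p = P.powers 4 (U.fromList [1, 2, 3, 4, 100])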
Statistics.Distribution.Poisson.Internal

  An unchecked, non-integer-valued version of Loader's saddle point
  algorithm.

Statistics.Types

Weights
  Weights for affecting the importance of elements of a sample.

Estimator
  A function that estimates a property of a sample, such as its mean.

WeightedSample
  Sample with weights. The first element of each pair is the datum,
  the second is its weight.

Sample
  Sample data.

Statistics.Resampling

Resample
  A resample drawn randomly, with replacement, from a set of data
  points. Distinct from a normal array to make it harder for your
  humble author's brain to go wrong.

resample
  O(e*r*s). Resample a data set repeatedly, with replacement,
  computing each estimate over the resampled data. This function is
  expensive; it has to do work proportional to e*r*s, where e is the
  number of estimation functions, r is the number of resamples to
  compute, and s is the number of original samples. To improve
  performance, this function will make use of all available CPUs. At
  least with GHC 7.0, parallel performance seems best if the parallel
  garbage collector is disabled (RTS option -qg).
  Arguments: estimation functions; number of resamples to compute;
  original sample.

jackknife
  Compute a statistical estimate repeatedly over a sample, each time
  omitting a successive element.

dropAt
  Drop the k-th element of a vector.
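A minimal sketch of jackknife (0.10-era API; exact signatures may
differ), using the mean from Statistics.Sample as the estimator:

    import qualified Data.Vector.Unboxed as U
    import Statistics.Resampling (jackknife)
    import Statistics.Sample (mean)

    -- Leave-one-out estimates of the mean; element k is the mean of
    -- the sample with its k-th element omitted.
    jackknifedMeans :: U.Vector Double
    jackknifedMeans = jackknife mean (U.fromList [1, 2, 3, 4, 5])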
Statistics.Test.WilcoxonT

wilcoxonMatchedPairSignedRank
  The Wilcoxon matched-pairs signed-rank test. The value returned is
  the pair (T+, T-). T+ is the sum of positive ranks (the ranks of
  the differences where the first parameter is higher), whereas T- is
  the sum of negative ranks (the ranks of the differences where the
  second parameter is higher). These values mean little by
  themselves, and should be combined with
  wilcoxonMatchedPairSignificant in this module to get a meaningful
  result. The samples are zipped together: if one is longer than the
  other, both are truncated to the length of the shorter sample.
  Note that:
    wilcoxonMatchedPairSignedRank == (\(x, y) -> (y, x)) . flip wilcoxonMatchedPairSignedRank

coefficients (internal)
  The coefficients for x^0, x^1, x^2, etc., in the expression
  prod_{r=1}^s (1 + x^r). See the Mitic paper for details. We can
  define:
    f(1) = 1 + x
    f(r) = (1 + x^r) * f(r-1)
         = f(r-1) + x^r * f(r-1)
  The effect of multiplying the equation by x^r is to shift all the
  coefficients by r down the list. This list will be processed lazily
  from the head.

wilcoxonMatchedPairSignificant
  Tests whether a given result from a Wilcoxon signed-rank
  matched-pairs test is significant at the given level. This function
  can perform a one-tailed or two-tailed test. If the first parameter
  is TwoTailed, the test is performed two-tailed, to check whether
  the two samples differ significantly. If the first parameter is
  OneTailed, the check is performed one-tailed, to decide whether the
  first sample (i.e. the first sample you passed to
  wilcoxonMatchedPairSignedRank) is greater than the second sample
  (i.e. the second sample you passed to
  wilcoxonMatchedPairSignedRank). If you wish to perform a one-tailed
  test in the opposite direction, you can either pass the parameters
  in a different order to wilcoxonMatchedPairSignedRank, or simply
  swap the values in the resulting pair before passing them to this
  function.
  Arguments: whether to perform a one- or two-tailed test (see
  description above); the sample size from which the (T+, T-) values
  were derived; the p-value at which to test (e.g. 0.05); the
  (T+, T-) values from wilcoxonMatchedPairSignedRank. Returns Nothing
  if the sample was too small to make a decision.

wilcoxonMatchedPairCriticalValue
  Obtains the critical value of T to compare against, given a sample
  size and a p-value (significance level). Your T value must be less
  than or equal to the return of this function in order for the test
  to work out significant. If Nothing is returned, the sample size is
  too small to make a decision. wilcoxonMatchedPairSignificant tests
  the return value of wilcoxonMatchedPairSignedRank for you, so you
  should use it for determining test results. However, this function
  is useful, for example, for generating lookup tables for Wilcoxon
  signed rank critical values.
  The return values of this function are generated using the method
  detailed in the paper "Critical Values for the Wilcoxon Signed Rank
  Statistic", Peter Mitic, The Mathematica Journal, volume 6, issue
  3, 1996, which can be found here:
  http://www.mathematica-journal.com/issue/v6i3/article/mitic/contents/63mitic.pdf.
  According to that paper, the results may differ from other
  published lookup tables, but (Mitic claims) the values obtained by
  this function will be the correct ones.
  Arguments: the sample size; the p-value (e.g. 0.05) for which you
  want the critical value. Returns the critical value (of T), or
  Nothing if the sample is too small to make a decision.

wilcoxonMatchedPairSignificance
  Works out the significance level (p-value) of a T value, given a
  sample size and a T value from the Wilcoxon signed-rank
  matched-pairs test. See the notes on
  wilcoxonMatchedPairCriticalValue for how this is calculated.
  Arguments: the sample size; the value of T for which you want the
  significance. Returns the significance (p-value).

wilcoxonMatchedPairTest
  The Wilcoxon matched-pairs signed-rank test. The samples are zipped
  together: if one is longer than the other, both are truncated to
  the length of the shorter sample. A one-tailed test checks whether
  the first sample is significantly greater than the second; a
  two-tailed test checks whether they differ significantly. See
  wilcoxonMatchedPairSignedRank and wilcoxonMatchedPairSignificant
  for additional information.
  Arguments: whether to perform a one-tailed test; the p-value at
  which to test (e.g. 0.05); first sample; second sample. Returns
  Nothing if the sample was too small to make a decision.
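A minimal sketch of the high-level matched-pairs test (0.10-era API;
the exact argument order is an assumption based on the parameter
descriptions above):

    import qualified Data.Vector.Unboxed as U
    import Statistics.Test.Types (TestResult, TestType(..))
    import Statistics.Test.WilcoxonT (wilcoxonMatchedPairTest)

    -- Two-tailed matched-pairs test at p = 0.05 between paired
    -- measurements; Nothing means the sample is too small to decide.
    result :: Maybe TestResult
    result = wilcoxonMatchedPairTest TwoTailed 0.05
               (U.fromList [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
               (U.fromList [1.1, 2.3, 2.9, 4.4, 5.6, 6.2])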
Statistics.Distribution

DiscreteGen
  Generate discrete random variates with the given distribution.
  ContGen is a superclass because it is always possible to generate
  real-valued variates from integer ones.

ContGen
  Generate continuous random variates with the given distribution.

Variance
  Type class for distributions with a variance. If a distribution has
  a finite variance for all valid parameter values, it should be an
  instance of this type class. The minimal complete definition is
  variance or stdDev.

MaybeVariance
  Type class for distributions with a variance. If the variance is
  undefined for some parameter values, both maybeVariance and
  maybeStdDev should return Nothing. The minimal complete definition
  is maybeVariance or maybeStdDev.

Mean
  Type class for distributions with a mean. If a distribution has a
  finite mean for all valid parameter values, it should be an
  instance of this type class.

MaybeMean
  Type class for distributions with a mean. maybeMean should return
  Nothing if the mean is undefined for the current value of the data.

ContDistr
  Continuous probability distribution.
    density  -- probability density function. The probability that
                the random variable X lies in the infinitesimal
                interval [x, x+dx) is equal to density(x) * dx.
    quantile -- inverse of the cumulative distribution function: the
                value x for which P(X ≤ x) = p. If the probability is
                outside the [0,1] range, the function should call
                error.

DiscreteDistr
  Discrete probability distribution.
    probability -- probability of the n-th outcome.

Distribution
  Type class common to all distributions. Only the c.d.f. can be
  defined for both discrete and continuous distributions.
    cumulative      -- cumulative distribution function: the
                       probability that a random variable X is less
                       than or equal to x, i.e. P(X ≤ x).
    complCumulative -- one's complement of the cumulative
                       distribution:
                         complCumulative d x = 1 - cumulative d x
                       It is useful when one is interested in
                       P(X > x) and the expression on the right-hand
                       side begins to lose precision. This function
                       has a default implementation, but implementors
                       are encouraged to provide a more precise one.

findRoot
  Approximate the value of X for which P(x > X) = p. This method uses
  a combination of Newton-Raphson iteration and bisection, with the
  given guess as a starting point. The upper and lower bounds specify
  the interval in which the probability distribution reaches the
  value p.
  Arguments: distribution; probability p; initial guess; lower bound
  on the interval; upper bound on the interval.

sumProbabilities
  Sum the probabilities in an inclusive interval.

Statistics.Distribution.Binomial

BinomialDistribution
  The binomial distribution. Accessors: bdTrials (number of trials)
  and bdProbability (probability).

binomial
  Construct a binomial distribution. The number of trials must be
  positive and the probability must be in the [0,1] range.
  Arguments: number of trials; probability.

Statistics.Distribution.CauchyLorentz

CauchyDistribution
  The Cauchy-Lorentz distribution.
    cauchyDistribMedian -- the central value of the distribution,
      which is its mode and median. The distribution does not have a
      mean, so the function is named after the median.
    cauchyDistribScale  -- the scale parameter. It is different from
      the variance and specifies the half width at half maximum
      (HWHM).

cauchyDistribution
  Construct a Cauchy distribution.
  Arguments: central point; scale parameter (HWHM).

standardCauchy
  The standard Cauchy distribution.

Statistics.Distribution.ChiSquared

ChiSquared
  The chi-squared distribution. chiSquaredNDF gets the number of
  degrees of freedom.

chiSquared
  Construct a chi-squared distribution. The number of degrees of
  freedom must be positive.

Statistics.Distribution.FDistribution

FDistribution
  The F distribution. fDistribution constructs it; fDistributionNDF1
  and fDistributionNDF2 return its two degrees-of-freedom parameters.

Statistics.Distribution.Gamma

GammaDistribution
  The gamma distribution. Accessors: gdShape (shape parameter k) and
  gdScale (scale parameter theta).

gammaDistr
  Create a gamma distribution. Both the shape and scale parameters
  must be positive.
  Arguments: shape parameter k; scale parameter theta.

Statistics.Distribution.Poisson

poisson
  Create a Poisson distribution. poissonLambda returns its rate
  parameter.

Statistics.Distribution.Geometric

geometric
  Create a geometric distribution. gdSuccess returns its success
  rate.
  Argument: success rate.

Statistics.Distribution.Hypergeometric

hypergeometric
  Create a hypergeometric distribution, with accessors hdM, hdL and
  hdK.

Statistics.Distribution.StudentT

StudentT
  The Student-T distribution. studentTndf returns its number of
  degrees of freedom.

studentT
  Create a Student-T distribution. The number of degrees of freedom
  must be positive.

Statistics.Distribution.Uniform

uniformDistr
  Create a uniform distribution.

Statistics.Test.ChiSquared

chi2test
  Generic form of Pearson's chi-squared test for binned data. The
  data sample is supplied in the form of tuples (observed quantity,
  expected number of events). Both must be positive.
  Arguments: p-value; number of additional degrees of freedom (one
  degree of freedom, due to the fact that there are N observations in
  total, is accounted for automatically); vector of observations and
  expectations.

Statistics.Test.KolmogorovSmirnov

kolmogorovSmirnovTest
  Check whether a sample could be described by a distribution.
  Significant means the distribution is not compatible with the data
  for the given p-value. This test uses the Marsaglia-Tsang-Wang
  exact algorithm for calculation of the p-value.
  Arguments: distribution; p-value; data sample.

kolmogorovSmirnovTestCdf
  Variant of kolmogorovSmirnovTest which takes the CDF in the form of
  a function.
  Arguments: CDF of the distribution; p-value; data sample.

kolmogorovSmirnovTest2
  Two-sample Kolmogorov-Smirnov test. It tests whether two data
  samples could be described by the same distribution, without making
  any assumptions about it. This test uses an approximate formula for
  computing the p-value.
  Arguments: p-value; sample 1; sample 2.

kolmogorovSmirnovCdfD
  Calculate Kolmogorov's statistic D for a given cumulative
  distribution function (CDF) and data sample. If the sample is
  empty, returns 0.
  Arguments: CDF function; sample.

kolmogorovSmirnovD
  Calculate Kolmogorov's statistic D for a given distribution and
  data sample. If the sample is empty, returns 0.
  Arguments: distribution; sample.
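A minimal sketch of the Kolmogorov-Smirnov tests described above
(0.10-era API; exact signatures may differ), using the standard
normal distribution as the reference distribution:

    import qualified Data.Vector.Unboxed as U
    import Statistics.Distribution.Normal (standard)
    import Statistics.Test.Types (TestResult)
    import Statistics.Test.KolmogorovSmirnov
      (kolmogorovSmirnovTest, kolmogorovSmirnovTest2)

    xs, ys :: U.Vector Double
    xs = U.fromList [0.1, -0.4, 1.2, 0.7, -1.0, 0.3]
    ys = U.fromList [5.2, 4.8, 6.1, 5.5, 4.9, 5.7]

    -- Goodness of fit: could xs have been drawn from a standard normal?
    goodnessOfFit :: TestResult
    goodnessOfFit = kolmogorovSmirnovTest standard 0.05 xs

    -- Two-sample test: could xs and ys come from the same distribution?
    twoSample :: TestResult
    twoSample = kolmogorovSmirnovTest2 0.05 xs ys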
kolmogorovSmirnov2D
  Calculate Kolmogorov's statistic D for two data samples. If either
  of the samples is empty, returns 0.
  Arguments: first sample; second sample.

kolmogorovSmirnovProbability
  Calculate the cumulative probability function of Kolmogorov's
  distribution with parameter n, i.e. the probability of getting a
  value smaller than d with an n-element sample. It uses the
  algorithm by Marsaglia et al. and provides at least 7-digit
  accuracy.
  Arguments: size of the sample; the D value.

Statistics.Sample

range
  O(n). Range: the difference between the largest and smallest
  elements of a sample.

mean
  O(n). Arithmetic mean. This uses Welford's algorithm to provide
  numerical stability, using a single pass over the sample data.

meanWeighted
  O(n). Arithmetic mean for a weighted sample. It uses a single-pass
  algorithm analogous to the one used by mean.

harmonicMean
  O(n). Harmonic mean. This algorithm performs a single pass over the
  sample.

geometricMean
  O(n). Geometric mean of a sample containing no negative values.

centralMoment
  Compute the k-th central moment of a sample. The central moment is
  also known as the moment about the mean. This function performs two
  passes over the sample, so it is not subject to stream fusion. For
  samples containing many values very close to the mean, this
  function is subject to inaccuracy due to catastrophic cancellation.

centralMoments
  Compute the k-th and j-th central moments of a sample. This
  function performs two passes over the sample, so it is not subject
  to stream fusion. For samples containing many values very close to
  the mean, this function is subject to inaccuracy due to
  catastrophic cancellation.

skewness
  Compute the skewness of a sample. This is a measure of the
  asymmetry of its distribution. A sample with negative skew is said
  to be left-skewed: most of its mass is on the right of the
  distribution, with the tail on the left.
    skewness $ U.fromList [1,100,101,102,103] ==> -1.497681449918257
  A sample with positive skew is said to be right-skewed:
    skewness $ U.fromList [1,2,3,4,100] ==> 1.4975367033335198
  A sample's skewness is not defined if its variance is zero. This
  function performs two passes over the sample, so it is not subject
  to stream fusion. For samples containing many values very close to
  the mean, it is subject to inaccuracy due to catastrophic
  cancellation.

kurtosis
  Compute the excess kurtosis of a sample. This is a measure of the
  "peakedness" of its distribution. A high kurtosis indicates that
  more of the sample's variance is due to infrequent severe
  deviations, rather than more frequent modest deviations. A sample's
  excess kurtosis is not defined if its variance is zero. This
  function performs two passes over the sample, so it is not subject
  to stream fusion. For samples containing many values very close to
  the mean, it is subject to inaccuracy due to catastrophic
  cancellation.

variance
  Maximum likelihood estimate of a sample's variance. Also known as
  the population variance, where the denominator is n.

varianceUnbiased
  Unbiased estimate of a sample's variance. Also known as the sample
  variance, where the denominator is n-1.

meanVariance
  Calculate the mean and the maximum likelihood estimate of the
  variance. This function should be used if both mean and variance
  are required, since it will calculate the mean only once.

meanVarianceUnb
  Calculate the mean and the unbiased estimate of the variance. This
  function should be used if both mean and variance are required,
  since it will calculate the mean only once.

stdDev
  Standard deviation. This is simply the square root of the unbiased
  estimate of the variance.

varianceWeighted
  Weighted variance. This is a biased estimate.

fastVariance
  Maximum likelihood estimate of a sample's variance.

fastVarianceUnbiased
  Unbiased estimate of a sample's variance.

fastStdDev
  Standard deviation. This is simply the square root of the maximum
  likelihood estimate of the variance.
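A minimal sketch of the basic sample statistics above (0.10-era API;
exact signatures may differ):

    import qualified Data.Vector.Unboxed as U
    import Statistics.Sample (mean, meanVariance, skewness, stdDev)

    xs :: U.Vector Double
    xs = U.fromList [1, 2, 3, 4, 100]

    -- Mean, (mean, ML variance), standard deviation and skewness of xs.
    summary :: (Double, (Double, Double), Double, Double)
    summary = (mean xs, meanVariance xs, stdDev xs, skewness xs)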
Statistics.Distribution.Exponential

ExponentialDistribution
  The exponential distribution. edLambda returns the λ (scale)
  parameter.

exponential
  Create an exponential distribution.
  Argument: the λ (scale) parameter.

exponentialFromSample
  Create an exponential distribution from a sample. No tests are made
  to check whether it truly is exponential.

Statistics.Distribution.Normal

NormalDistribution
  The normal distribution.

standard
  Standard normal distribution, with mean equal to 0 and variance
  equal to 1.

normalDistr
  Create a normal distribution from parameters. IMPORTANT: prior to
  the 0.10 release the second parameter was the variance, not the
  standard deviation.
  Arguments: mean of the distribution; standard deviation of the
  distribution.

normalFromSample
  Create a distribution using parameters estimated from a sample. The
  variance is estimated using the maximum likelihood method (biased
  estimation).
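A minimal sketch showing how the distribution type classes described
earlier apply to the normal distribution (0.10-era API; exact
signatures may differ):

    import Statistics.Distribution (cumulative, density, quantile)
    import Statistics.Distribution.Normal
      (NormalDistribution, normalDistr, standard)

    -- A normal distribution with mean 10 and standard deviation 2
    -- (since the 0.10 release the second argument is the standard
    -- deviation, not the variance).
    d :: NormalDistribution
    d = normalDistr 10 2

    example :: (Double, Double, Double)
    example = ( density d 10            -- PDF at the mean
              , cumulative d 12         -- probability that X is at most 12
              , quantile standard 0.975 -- roughly 1.96 for the standard normal
              )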
Statistics.Test.MannWhitneyU

wilcoxonRankSums
  The Wilcoxon rank-sum test. This test calculates the sum of ranks
  for the given two samples. The samples are ordered and assigned
  ranks (ties are given their average rank), then these ranks are
  summed for each sample. The return value is (W1, W2), where W1 is
  the sum of ranks of the first sample and W2 is the sum of ranks of
  the second sample. This test is trivially transformed into the
  Mann-Whitney U test. You will probably want to use mannWhitneyU and
  the related functions for testing significance, but this function
  is exposed for completeness.

mannWhitneyU
  The Mann-Whitney U test. This is sometimes known as the
  Mann-Whitney-Wilcoxon U test, and confusingly many sources state
  that the Mann-Whitney U test is the same as Wilcoxon's rank-sum
  test (which is provided as wilcoxonRankSums). The Mann-Whitney U is
  a simple transform of Wilcoxon's rank-sum test. Again confusingly,
  different sources state reversed definitions for U1 and U2, so it
  is worth being explicit about what this function returns. Given two
  samples, the first, xs1, of size n1 and the second, xs2, of size
  n2, this function returns (U1, U2), where
    U1 = W1 - n1*(n1+1)/2
    U2 = W2 - n2*(n2+1)/2
  and (W1, W2) is the return value of wilcoxonRankSums xs1 xs2. Some
  sources instead state that U1 and U2 should be the other way round,
  often expressing this using U1' = n1*n2 - U1 (since U1 + U2 =
  n1*n2). All of which you probably don't care about if you just feed
  this into mannWhitneyUSignificant.

mannWhitneyUCriticalValue
  Calculates the critical value of Mann-Whitney U for the given
  sample sizes and significance level. This function returns the
  exact calculated value of U for all sample sizes; it does not use
  the normal approximation at all. Above sample size 20 it is
  generally recommended to use the normal approximation instead, but
  this function will calculate the higher critical values if you need
  them. The algorithm used to generate these values is a faster,
  memoised version of the simple unoptimised generating function
  given in section 2 of "The Mann Whitney Wilcoxon Distribution Using
  Linked Lists".
  Arguments: the sample sizes; the p-value (e.g. 0.05) for which you
  want the critical value. Returns the critical value (of U).

mannWhitneyUSignificant
  Calculates whether the Mann-Whitney U test is significant. If both
  sample sizes are less than or equal to 20, the exact U critical
  value (as calculated by mannWhitneyUCriticalValue) is used. If
  either sample is larger than 20, the normal approximation is used
  instead. If you use a one-tailed test, the test indicates whether
  the first sample is significantly larger than the second. If you
  want the opposite, simply reverse the order in both the sample
  sizes and the (U1, U2) pair.
  Arguments: whether to perform a one-tailed test (see description
  above); the sizes of the samples from which the (U1, U2) values
  were derived; the p-value at which to test (e.g. 0.05); the
  (U1, U2) values from mannWhitneyU. Returns Nothing if the sample
  was too small to make a decision.

mannWhitneyUtest
  Perform the Mann-Whitney U test for two samples and the required
  significance. For additional information, check the documentation
  of mannWhitneyU and mannWhitneyUSignificant. This is just a helper
  function. A one-tailed test checks whether the first sample is
  significantly larger than the second; a two-tailed test checks
  whether they are significantly different.
  Arguments: whether to perform a one-tailed test (see description
  above); the p-value at which to test (e.g. 0.05); first sample;
  second sample. Returns Nothing if the sample was too small to make
  a decision.

Statistics.Resampling.Bootstrap

Estimate
  A point and interval estimate computed via an Estimator.
    estPoint           -- point estimate.
    estLowerBound      -- lower bound of the estimate interval (i.e.
                          the lower bound of the confidence interval).
    estUpperBound      -- upper bound of the estimate interval (i.e.
                          the upper bound of the confidence interval).
    estConfidenceLevel -- confidence level of the confidence
                          intervals.

scale
  Multiply the point, lower bound, and upper bound in an Estimate by
  the given value.
  Argument: the value to multiply by.

bootstrapBCA
  Bias-corrected accelerated (BCA) bootstrap. This adjusts for both
  bias and skewness in the resampled distribution.
  Arguments: confidence level; sample data; estimators; resampled
  data.
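A hedged sketch of the full bootstrap pipeline (0.10-era API; the
random generator plumbing comes from the mwc-random package, the
exact signatures of resample and bootstrapBCA may differ, and a Show
instance for Estimate is assumed):

    import qualified Data.Vector.Unboxed as U
    import System.Random.MWC (asGenIO, withSystemRandom)
    import Statistics.Resampling (resample)
    import Statistics.Resampling.Bootstrap (bootstrapBCA)
    import Statistics.Sample (mean)

    main :: IO ()
    main = withSystemRandom . asGenIO $ \gen -> do
      let sample = U.fromList [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
      -- 1000 resamples, estimating the mean over each resample.
      resamples <- resample gen [mean] 1000 sample
      -- 95% BCA confidence interval for the mean.
      print (bootstrapBCA 0.95 sample [mean] resamples)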
Statistics.Sample.KernelDensity.Simple

Kernel
  The convolution kernel. Its parameters are as follows: the scaling
  factor, 1/nh; the bandwidth, h; a point at which to sample the
  input, p; one sample value, v.

Bandwidth
  The width of the convolution kernel used.

Points
  Points from the range of a Sample.

epanechnikovBW
  Bandwidth estimator for an Epanechnikov kernel.

gaussianBW
  Bandwidth estimator for a Gaussian kernel.

bandwidth
  Compute the optimal bandwidth from the observed data for the given
  kernel. This function uses an estimate based on the standard
  deviation of a sample (due to Deheuvels), which performs reasonably
  well for unimodal distributions but leads to oversmoothing for more
  complex ones.

choosePoints
  Choose a uniform range of points at which to estimate a sample's
  probability density function. If you are using a Gaussian kernel,
  multiply the sample's bandwidth by 3 before passing it to this
  function. If this function is passed an empty vector, it returns
  values of positive and negative infinity.
  Arguments: number of points to select, n; sample bandwidth, h;
  input data.

epanechnikovKernel
  Epanechnikov kernel for probability density function estimation.

gaussianKernel
  Gaussian kernel for probability density function estimation.

estimatePDF
  Kernel density estimator, providing a non-parametric way of
  estimating the PDF of a random variable.
  Arguments: kernel function; bandwidth, h; sample data; points at
  which to estimate.

simplePDF
  A helper for creating a simple kernel density estimation function
  with automatically chosen bandwidth and estimation points.
  Arguments: bandwidth function; kernel function; bandwidth scaling
  factor (3 for a Gaussian kernel, 1 for all others); number of
  points at which to estimate; sample data.

epanechnikovPDF
  Simple Epanechnikov kernel density estimator. Returns the uniformly
  spaced points from the sample range at which the density function
  was estimated, and the estimates at those points.
  Arguments: number of points at which to estimate; data sample.

gaussianPDF
  Simple Gaussian kernel density estimator. Returns the uniformly
  spaced points from the sample range at which the density function
  was estimated, and the estimates at those points.
  Arguments: number of points at which to estimate; data sample.

Statistics.Autocorrelation

autocovariance
  Compute the autocovariance of a sample, i.e. the covariance of the
  sample against a shifted version of itself.

autocorrelation
  Compute the autocorrelation function of a sample, and the upper and
  lower bounds of confidence intervals for each element. Note: the
  calculation of the 95% confidence interval assumes a stationary
  Gaussian process.
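A minimal sketch of the autocorrelation functions (0.10-era API;
exact signatures may differ, and the assumption that autocorrelation
returns a triple of vectors is based only on the description above):

    import qualified Data.Vector.Unboxed as U
    import Statistics.Autocorrelation (autocorrelation, autocovariance)

    xs :: U.Vector Double
    xs = U.fromList [1, 2, 1, 3, 2, 4, 3, 5]

    -- Autocovariance of the sample at each lag.
    acov :: U.Vector Double
    acov = autocovariance xs

    -- Assumed to return the autocorrelation function together with
    -- the lower and upper confidence bounds described above.
    (acf, lower, upper) = autocorrelation xs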