Package: math-functions-0.3.4.2

Numeric.MathFunctions.Comparison
(c) 2011 Bryan O'Sullivan. License: BSD3. Maintainer: bos@serpentine.com. Stability: experimental. Portability: portable.

relativeError
  Calculate the relative error of two numbers:

    \frac{|a - b|}{\max(|a|,|b|)}

  It lies in the [0,1) interval for numbers with the same sign and in (1,2]
  for numbers with different signs. If both arguments are zero or negative
  zero, the function returns 0. If at least one argument is transfinite, it
  returns NaN.

eqRelErr
  Check that the relative error between two numbers a and b does not exceed
  eps. If relativeError returns NaN, this function returns False.
  Parameters: eps (the relative error, should be in the [0,1) range), a, b.

addUlps
  Add N ULPs (units of least precision) to a Double.

ulpDistance
  Measure the distance between two Doubles in ULPs (units of least
  precision). Note that it is different from abs (ulpDelta a b), since it
  returns the correct result even when ulpDelta overflows.

ulpDelta
  Measure the signed distance between two Doubles in ULPs (units of least
  precision). Note that unlike ulpDistance it can overflow.

  >>> ulpDelta 1 (1 + m_epsilon)
  1

within
  Compare two Double values for approximate equality, using Dawson's
  method. The required accuracy is specified in ULPs (units of least
  precision). If the two numbers differ by the given number of ULPs or
  less, this function returns True.
  Parameter: number of ULPs of accuracy desired.

Numeric.MathFunctions.Constants
(c) 2009, 2011 Bryan O'Sullivan. License: BSD3. Maintainer: bos@serpentine.com. Stability: experimental. Portability: portable.

m_huge             Largest representable finite value.
m_tiny             The smallest representable positive normalized value.
m_max_exp          The largest Int x such that 2**(x-1) is approximately
                   representable as a Double.
m_pos_inf          Positive infinity.
m_neg_inf          Negative infinity.
m_NaN              Not a number.
m_max_log          Maximum possible finite value of log x.
m_min_log          Logarithm of the smallest normalized double (m_tiny).
m_sqrt_2           sqrt 2
m_sqrt_2_pi        sqrt (2 * pi)
m_2_sqrt_pi        2 / sqrt pi
m_1_sqrt_2         1 / sqrt 2
m_epsilon          The smallest ε such that 1 + ε ≠ 1.
m_sqrt_eps         sqrt m_epsilon
m_ln_sqrt_2_pi     log (sqrt (2 * pi))
m_eulerMascheroni  Euler-Mascheroni constant (γ = 0.57721...).

Numeric.Polynomial
(c) 2012 Aleksey Khudyakov. License: BSD3. Maintainer: bos@serpentine.com. Stability: experimental. Portability: portable.

evaluatePolynomial
  Evaluate a polynomial using Horner's method. Coefficients start from the
  lowest order. In pseudocode:

    evaluatePolynomial x [1,2,3] = 1 + 2*x + 3*x^2

evaluateEvenPolynomial
  Evaluate a polynomial with only even powers using Horner's method.
  Coefficients start from the lowest order. In pseudocode:

    evaluateEvenPolynomial x [1,2,3] = 1 + 2*x^2 + 3*x^4

evaluateOddPolynomial
  Evaluate a polynomial with only odd powers using Horner's method.
  Coefficients start from the lowest order. In pseudocode:

    evaluateOddPolynomial x [1,2,3] = 1*x + 2*x^3 + 3*x^5

  Parameters for each of the above (and for the L-suffixed variants):
  x, and the coefficients.
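To make the coefficient ordering concrete, here is a small illustrative
sketch using the list-taking L variants from the export list. It assumes
(as the pseudocode above suggests) that evaluatePolynomialL and
evaluateOddPolynomialL take the point x first and the list of coefficients
second; this is a usage sketch, not part of the package's own documentation.

  import Numeric.Polynomial (evaluateOddPolynomialL, evaluatePolynomialL)

  main :: IO ()
  main = do
    -- 1 + 2*x + 3*x^2 at x = 2:  1 + 4 + 12 = 17
    print (evaluatePolynomialL 2 [1, 2, 3])
    -- 1*x + 2*x^3 + 3*x^5 at x = 2:  2 + 16 + 96 = 114
    print (evaluateOddPolynomialL 2 [1, 2, 3])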
Numeric.Polynomial.Chebyshev
(c) 2009, 2011 Bryan O'Sullivan. License: BSD3. Maintainer: bos@serpentine.com. Stability: experimental. Portability: portable.

chebyshev
  Evaluate a Chebyshev polynomial of the first kind. Uses Clenshaw's
  algorithm.

chebyshevBroucke
  Evaluate a Chebyshev polynomial of the first kind. Uses Broucke's ECHEB
  algorithm, and his convention for coefficient handling. It treats the 0th
  coefficient differently, so

    chebyshev x [a0,a1,a2...] == chebyshevBroucke x [2*a0,a1,a2...]

  Parameters for both functions: the point at which the polynomial is
  evaluated, and the coefficients of each polynomial term, in increasing
  order.

Numeric.RootFinding
(c) 2011 Bryan O'Sullivan, 2018 Alexey Khudyakov. License: BSD3. Maintainer: bos@serpentine.com. Stability: experimental. Portability: portable.

NewtonStep
  Steps for Newton iterations.
    NewtonStep       Normal Newton-Raphson update. Parameters are the old
                     guess and the new guess.
    NewtonBisection  Bisection fallback when the Newton-Raphson iteration
                     does not work. Parameters are the bracket on the root.
    NewtonRoot       Root is found.
    NewtonNoBracket  Root is not bracketed.

NewtonParam
  Parameters for newtonRaphson root finding.
    newtonMaxIter  Maximum number of iterations. Default = 50.
    newtonTol      Error tolerance for the root approximation. The default
                   is a relative error of 4·ε, where ε is machine precision.

RiddersStep
  A single Ridders step; it is a bracket of the root.
    RiddersStep       Ridders step. Parameters are the bracket for the root.
    RiddersBisect     Bisection step. It is the fallback taken when a
                      Ridders update takes us out of the bracket.
    RiddersRoot       Root found.
    RiddersNoBracket  Root is not bracketed.

RiddersParam
  Parameters for ridders root finding.
    riddersMaxIter  Maximum number of iterations. Default = 100.
    riddersTol      Error tolerance for the root approximation. The default
                    is a relative error of 4·ε, where ε is machine precision.

Root
  The result of searching for a root of a mathematical function.
    NotBracketed  The function does not have opposite signs when evaluated
                  at the lower and upper bounds of the search.
    SearchFailed  The search failed to converge to within the given error
                  tolerance after the given number of iterations.
    Root          A root was successfully found.

fromRoot
  Returns either the result of a search for a root, or the default value
  if the search failed. Parameters: default value, result of the search.

withinTolerance
  Check that two values are approximately equal. In addition to the
  tolerance specification, values are considered equal if they are within
  1 ULP of precision; no further improvement could be achieved anyway.

findRoot
  Find a root in a lazy list of iterations. Parameters: maximum number of
  iterations, error tolerance.

ridders
  Use the method of Ridders [Ridders1979] to compute a root of a function.
  It does not require the derivative and provides quadratic convergence
  (the number of significant digits grows quadratically with the number of
  iterations).

  The function must have opposite signs when evaluated at the lower and
  upper bounds of the search (i.e. the root must be bracketed). If there is
  more than one root in the bracket, the iteration will converge to some
  root in the bracket.

  Parameters: parameters for the algorithm (def provides reasonable
  defaults), the bracket for the root, and the function whose root is
  sought. A short usage sketch follows this module's entries.

riddersIterations
  List of iterations for the Ridders method. See ridders for documentation
  of the parameters.

newtonRaphson
  Solve an equation using Newton-Raphson iterations.

  This method requires both an initial guess and bounds for the root. If a
  Newton step takes us out of bounds on the root, the function reverts to
  bisection.

  Parameters: parameters for the algorithm (def provides reasonable
  defaults); a triple of (lower bound, initial guess, upper bound), where
  the middle of the bracket is taken as the approximation if the initial
  guess is out of the bracket; and the function to find the root of, which
  returns a pair of the function value and its first derivative.

newtonRaphsonIterations
  List of iterations for the Newton-Raphson algorithm. See the
  documentation for newtonRaphson for the meaning of the parameters.
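As a brief usage sketch of the root-finding API described above (not taken
from the package's documentation): it assumes the argument order implied by
the parameter notes, and that def comes from the data-default-class
package, since the notes say "def provides reasonable defaults".

  import Data.Default.Class (def)
  import Numeric.RootFinding (Root (..), fromRoot, ridders)

  -- Find the root of f x = cos x - x on the bracket (0, 1).
  -- f 0 > 0 and f 1 < 0, so the root is bracketed as required.
  main :: IO ()
  main = do
    let f x = cos x - x
        r   = ridders def (0, 1) f       -- r :: Root Double
    case r of
      Root x       -> print x            -- approximately 0.7390851332151607
      NotBracketed -> putStrLn "root not bracketed"
      SearchFailed -> putStrLn "failed to converge"
    -- fromRoot supplies a fallback value when the search did not succeed:
    print (fromRoot (0 / 0) r)           -- 0/0 is NaN, used here as the fallback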
Numeric.Series
(c) 2016 Alexey Khudyakov. License: BSD3. Maintainer: alexey.skladnoy@gmail.com, bos@serpentine.com. Stability: experimental. Portability: portable.

Sequence
  An infinite series, represented as an opaque state and a step function.
  The Num and Fractional instances provide elementwise operations on
  sequences.

enumSequenceFrom
  enumSequenceFrom x generates the sequence a_n = x + n.

enumSequenceFromStep
  enumSequenceFromStep x d generates the sequence a_n = x + n·d.

scanSequence
  Analog of scanl for sequences.

sumSeries
  Calculate the sum of a series \sum_{i=0}^\infty a_i. Summation is stopped
  when

    a_{n+1} < \varepsilon \sum_{i=0}^n a_i

  where ε is machine precision (m_epsilon).

sumPowerSeries
  Calculate the sum of a series \sum_{i=0}^\infty x^i a_i. Calculation is
  stopped when the next value in the series is less than ε times the sum.

sequenceToList
  Convert a series to an infinite list.

evalContFractionB
  Evaluate a continued fraction using the modified Lentz algorithm. The
  sequence contains pairs (a[i], b[i]) which form the expression

    b_0 + \frac{a_1}{b_1+\frac{a_2}{b_2+\frac{a_3}{b_3 + \cdots}}}

  The modified Lentz algorithm is described in Numerical Recipes 5.2,
  "Evaluation of Continued Fractions".

Numeric.SpecFunctions
(c) 2009, 2011, 2012 Bryan O'Sullivan. License: BSD3. Maintainer: bos@serpentine.com. Stability: experimental. Portability: portable.

erf
  Error function.

    \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} \exp(-t^2)\,dt

  Function limits are:

    \operatorname{erf}(-\infty) = -1, \quad \operatorname{erf}(0) = 0, \quad \operatorname{erf}(+\infty) = 1

erfc
  Complementary error function.

    \operatorname{erfc}(x) = 1 - \operatorname{erf}(x)

  Function limits are:

    \operatorname{erfc}(-\infty) = 2, \quad \operatorname{erfc}(0) = 1, \quad \operatorname{erfc}(+\infty) = 0

invErf
  Inverse of erf. Parameter: p ∈ [-1,1].

invErfc
  Inverse of erfc. Parameter: p ∈ [0,2].

logGamma
  Compute the logarithm of the gamma function, Γ(x).

    \Gamma(x) = \int_0^{\infty}t^{x-1}e^{-t}\,dt = (x - 1)!

  This implementation uses the Lanczos approximation. It gives 14 or more
  significant decimal digits, except around x = 1 and x = 2, where the
  function goes to zero. Returns +∞ if the input is outside of the range
  (0 < x ≤ 1e305). Parameter: x ∈ (0,∞).

logGammaL
  Synonym for logGamma. Retained for compatibility.

logGammaCorrection
  Compute the log gamma correction factor for the Stirling approximation
  for x ≥ 10. This correction factor is suitable for an alternate (but less
  numerically accurate) definition of logGamma:

    \log\Gamma(x) = \frac{1}{2}\log(2\pi) + (x-\frac{1}{2})\log x - x + \operatorname{logGammaCorrection}(x)

incompleteGamma
  Compute the normalized lower incomplete gamma function γ(z,x).
  Normalization means that γ(z,∞) = 1.

    \gamma(z,x) = \frac{1}{\Gamma(z)}\int_0^{x}t^{z-1}e^{-t}\,dt

  Uses Algorithm AS 239 by Shea. Parameters: z ∈ (0,∞), x ∈ (0,∞).

invIncompleteGamma
  Inverse incomplete gamma function. It is approximately the inverse of
  incompleteGamma for the same z, so the following equality approximately
  holds:

    invIncompleteGamma z . incompleteGamma z ≈ id

  Parameters: z ∈ (0,∞), p ∈ [0,1].
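The approximate identities noted above for erfc and invIncompleteGamma can
be checked numerically. A minimal, illustrative sketch (not from the
package's documentation), assuming the argument order shown above, with the
shape parameter z passed first:

  import Numeric.SpecFunctions (erf, erfc, incompleteGamma, invIncompleteGamma)

  main :: IO ()
  main = do
    -- erfc x = 1 - erf x
    print (erfc 0.5, 1 - erf 0.5)
    -- invIncompleteGamma z . incompleteGamma z ≈ id, here with z = 3.5, x = 1.2
    let z = 3.5
        x = 1.2
    print (invIncompleteGamma z (incompleteGamma z x))   -- ≈ 1.2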
logBeta
  Compute the natural logarithm of the beta function.

    B(a,b) = \int_0^1 t^{a-1}(1-t)^{b-1}\,dt = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}

  Parameters: a > 0, b > 0.

incompleteBeta
  Regularized incomplete beta function.

    I(x;a,b) = \frac{1}{B(a,b)} \int_0^x t^{a-1}(1-t)^{b-1}\,dt

  Uses algorithm AS63 by Majumder and Bhattacharjee, and a quadrature
  approximation for large p and q. Parameters: a > 0, b > 0, and x, which
  must lie in the [0,1] range.

incompleteBeta_
  Regularized incomplete beta function. Same as incompleteBeta, but also
  takes the logarithm of the beta function as a parameter.
  Parameters: the logarithm of the beta function for the given a and b;
  a > 0; b > 0; x, which must lie in the [0,1] range.

invIncompleteBeta
  Compute the inverse of the regularized incomplete beta function. Uses an
  initial approximation from AS109 and AS64, and Halley's method to solve
  the equation. Parameters: a > 0, b > 0, x ∈ [0,1].

sinc
  Compute the sinc function, sin(x)/x.

log1pmx
  Compute log(1+x) - x.

log2
  O(log n). Compute the logarithm in base 2 of the given value.

factorial
  Compute the factorial function n!. Returns +∞ if the input is above 170
  (above which the result cannot be represented by a 64-bit Double).

logFactorial
  Compute the natural logarithm of the factorial function. Gives 16 decimal
  digits of precision.

stirlingError
  Calculate the error term of the Stirling approximation. This is only
  defined for non-negative values.

    \operatorname{stirlingError}(n) = \log(n!) - \log\left(\sqrt{2\pi n}\left(\frac{n}{e}\right)^n\right)

logChooseFast
  Quickly compute the natural logarithm of n `choose` k, with no checking.
  Less numerically stable:

    exp $ lg (n+1) - lg (k+1) - lg (n-k+1) where lg = logGamma . fromIntegral

chooseExact
  Calculate the binomial coefficient using the exact formula.

logChoose
  Compute the logarithm of the binomial coefficient.

choose
  Compute the binomial coefficient n `choose` k. For values of k > 50 this
  uses an approximation for performance reasons. The approximation is
  accurate to 12 decimal places in the worst case.

  Example:

    7 `choose` 3 == 35

digamma
  Compute ψ(x), the first logarithmic derivative of the gamma function.

    \psi(x) = \frac{d}{dx} \ln \left(\Gamma(x)\right) = \frac{\Gamma'(x)}{\Gamma(x)}

  Uses Algorithm AS 103 by Bernardo, based on Minka's C implementation.
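To make the factorial and binomial-coefficient behaviour above concrete,
here is a small illustrative sketch (assuming Int arguments and Double
results, which this listing does not state explicitly):

  import Numeric.SpecFunctions (choose, factorial, logChoose)

  main :: IO ()
  main = do
    print (choose 7 3)            -- 35.0
    print (exp (logChoose 7 3))   -- also ~35, computed via logarithms
    print (factorial 170)         -- ~7.26e306, still representable as a Double
    print (factorial 171)         -- Infinity, as described above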
Numeric.SpecFunctions.Extra
(c) 2009, 2011 Bryan O'Sullivan. License: BSD3. Maintainer: bos@serpentine.com. Stability: experimental. Portability: portable.

bd0
  Evaluate the deviance term

    x \log(x/np) + np - x

  Parameters: x, np.

logGammaAS245
  Compute the logarithm of the gamma function, Γ(x). Uses Algorithm AS 245
  by Macleod. Gives an accuracy of 10-12 significant decimal digits, except
  for small regions around x = 1 and x = 2, where the function goes to
  zero. For greater accuracy, use logGammaL. Returns +∞ if the input is
  outside of the range (0 < x ≤ 1e305).

Numeric.SpecFunctions.Compat
(c) 2009, 2011, 2012 Bryan O'Sullivan. License: BSD3. Maintainer: bos@serpentine.com. Stability: experimental. Portability: portable.
  No documented definitions of its own.

Numeric.Sum
(c) 2014 Bryan O'Sullivan. License: BSD3. Maintainer: bos@serpentine.com. Stability: experimental. Portability: portable.

KB2Sum
  Second-order Kahan-Babuška summation. This is more computationally costly
  than Kahan-Babuška-Neumaier summation, running at about a third of the
  speed. Its advantage is that it can lose less precision (in admittedly
  obscure cases).

  This method compensates for error in both the sum and the first-order
  compensation term, hence the use of "second order" in the name.

KBNSum
  Kahan-Babuška-Neumaier summation. This is a little more computationally
  costly than plain Kahan summation, but is always at least as accurate.

KahanSum
  Kahan summation. This is the least accurate of the compensated summation
  methods. In practice, it only beats naive summation for inputs with large
  magnitude. Kahan summation can be less accurate than naive summation for
  small-magnitude inputs.

  This summation method is included for completeness. Its use is not
  recommended. In practice, KBNSum is both 30% faster and more accurate.

Summation
  A class for summation of floating point numbers.
    zero  The identity for summation.
    add   Add a value to a sum.

sum
  Sum a collection of values. Example:

    foo = sum kbn [1,2,3]

kahan
  Return the result of a Kahan sum.

kbn
  Return the result of a Kahan-Babuška-Neumaier sum.

kb2
  Return the result of an order-2 Kahan-Babuška sum.

sumVector
  O(n). Sum a vector of values.

pairwiseSum
  O(n). Sum a vector of values using pairwise summation.

  This approach is perhaps 10% faster than KBNSum, but has poorer bounds on
  its error growth. Instead of having roughly constant error regardless of
  the size of the input vector, in the worst case its accumulated error
  grows with O(log n).
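The difference between naive and compensated summation is easy to
demonstrate with the classic cancellation example below. This is an
illustrative sketch, relying only on sum and kbn as documented above.

  import qualified Numeric.Sum as Sum

  main :: IO ()
  main = do
    let xs = [1, 1e100, 1, -1e100] :: [Double]
    print (sum xs)              -- 0.0: the naive Prelude sum loses both 1s
    print (Sum.sum Sum.kbn xs)  -- 2.0: Kahan-Babuška-Neumaier summation recovers them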