regression-simple 0.2.1
=======================

Math.Regression.Simple.LinAlg
-----------------------------

V2
    2d vector; a strict pair of Doubles. Also used to represent a linear
    polynomial: V2 a b = a x + b.

M22
    2x2 matrix.

SM22
    Symmetric 2x2 matrix.

V3
    3d vector; a strict triple of Doubles. Also used to represent a
    quadratic polynomial: V3 a b c = a x^2 + b x + c.

M33
    3x3 matrix.

SM33
    Symmetric 3x3 matrix.

Add
    Addition (zero, add).

Eye
    Identity (eye).

Mult
    Multiplication of different things (mult).

    >>> M22 1 2 3 4 `mult` eye @M22
    M22 1.0 2.0 3.0 4.0

Det
    Determinant (det).

Inv
    Inverse (inv).

zerosLin
    Solve a linear equation.

    >>> zerosLin (V2 1 2)
    -2.0

zerosQuad
    Solve a quadratic equation.

    >>> zerosQuad (V3 2 0 (-1))
    Right (-0.7071067811865476,0.7071067811865476)

    >>> zerosQuad (V3 2 0 1)
    Left ((-0.0) :+ (-0.7071067811865476),(-0.0) :+ 0.7071067811865476)

    A double root is not treated separately:

    >>> zerosQuad (V3 1 0 0)
    Right (-0.0,0.0)

    >>> zerosQuad (V3 1 (-2) 1)
    Right (1.0,1.0)

optimaQuad
    Find the optimum point of a quadratic.

    >>> optimaQuad (V3 1 (-2) 0)
    1.0

    Compare to the zeros; the optimum lies midway between them:

    >>> zerosQuad (V3 1 (-2) 0)
    Right (0.0,2.0)

Numeric.KBN
-----------

KBN
    KBN summation accumulator.

sumKBN
    KBN (Kahan-Babuška-Neumaier) summation algorithm: compensated
    summation that is markedly more accurate than naive left-to-right
    addition.

    >>> sumKBN (replicate 10 0.1)
    1.0

    >>> Data.List.foldl' (+) 0 (replicate 10 0.1) :: Double
    0.9999999999999999

    >>> sumKBN [1, 1e100, 1, -1e100]
    2.0

    >>> Data.List.foldl' (+) 0 [1, 1e100, 1, -1e100]
    0.0

sumKBNWith
    Generalized version of sumKBN.

addKBN
    Add a Double to the KBN accumulator.
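For intuition, here is a minimal self-contained sketch of the KBN
compensation step; this illustrates the algorithm, and is not
necessarily the package's internal representation:

    import Data.List (foldl')

    -- Running sum plus a separately accumulated compensation term.
    data Acc = Acc !Double !Double

    -- One KBN step: after computing s + x, recover the rounding error
    -- of the smaller-magnitude operand exactly and add it to the
    -- compensation term.
    step :: Acc -> Double -> Acc
    step (Acc s c) x = Acc s' c'
      where
        s' = s + x
        c' | abs s >= abs x = c + ((s - s') + x)
           | otherwise      = c + ((x - s') + s)

    -- The compensation is folded back in only once, at the very end.
    kbnSum :: [Double] -> Double
    kbnSum xs = case foldl' step (Acc 0 0) xs of Acc s c -> s + c

With this sketch, kbnSum [1, 1e100, 1, -1e100] is 2.0: both 1s vanish
from the running sum but survive in the compensation term.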
Math.Regression.Simple
----------------------

(The PP helper used throughout the examples pretty-prints values to a
few significant digits.)

QuadRegAcc
    Quadratic regression accumulator.
    Fields: qra_n = n, qra_w = \sum w_i, qra_x = \sum x_i,
    qra_x2 = \sum x_i^2, qra_x3 = \sum x_i^3, qra_x4 = \sum x_i^4,
    qra_y = \sum y_i, qra_xy = \sum x_i y_i, qra_x2y = \sum x_i^2 y_i,
    qra_y2 = \sum y_i^2.

LinRegAcc
    Linear regression accumulator.
    Fields: lra_n = n, lra_w = \sum w_i, lra_x = \sum x_i,
    lra_x2 = \sum x_i^2, lra_y = \sum y_i, lra_xy = \sum x_i y_i,
    lra_y2 = \sum y_i^2.

Fit
    Result of a curve fit.
    Fields: fitParams = fit parameters; fitErrors = asymptotic standard
    errors, assuming a good fit; fitNDF = number of degrees of freedom;
    fitWSSR = sum of squares of residuals.

linear
    Linear regression.

    >>> let input1 = [(0, 1), (1, 3), (2, 5)]
    >>> PP $ linear id input1
    V2 2.0000 1.00000

    >>> let input2 = [(0.1, 1.2), (1.3, 3.1), (1.9, 4.9), (3.0, 7.1), (4.1, 9.0)]
    >>> PP $ linear id input2
    V2 2.0063 0.88685

linearFit
    Like linear but returns a complete Fit.

    To get confidence intervals, multiply the errors by
    quantile (studentT (n - 2)) ci' from the statistics package or
    similar. For large n, the value 1 gives a 68% interval and the
    value 2 a 95% confidence interval. See
    https://en.wikipedia.org/wiki/Student%27s_t-distribution#Table_of_selected_values
    (quantile calculates one-sided values; you need two-sided, so
    adjust the ci value accordingly). A worked sketch follows the
    linearWithYerrors entry below.

    The first input is a perfect fit:

    >>> let fit = linearFit id input1
    >>> PP fit
    Fit (V2 2.0000 1.00000) (V2 0.00000 0.00000) 1 0.00000

    The second input is quite good:

    >>> PP $ linearFit id input2
    Fit (V2 2.0063 0.88685) (V2 0.09550 0.23826) 3 0.25962

    The third is not: the standard error of the slope parameter is
    about 20%.

    >>> let input3 = [(0, 2), (1, 3), (2, 6), (3, 11)]
    >>> PP $ linearFit id input3
    Fit (V2 3.0000 1.00000) (V2 0.63246 1.1832) 2 4.0000

linearWithWeights
    Weighted linear regression. Compare the unweighted fit:

    >>> PP $ linearFit id input2
    Fit (V2 2.0063 0.88685) (V2 0.09550 0.23826) 3 0.25962

    >>> let input2w = [(0.1, 1.2, 1), (1.3, 3.1, 1), (1.9, 4.9, 1), (3.0, 7.1, 1/4), (4.1, 9.0, 1/4)]
    >>> PP $ linearWithWeights id input2w
    Fit (V2 2.0060 0.86993) (V2 0.12926 0.23696) 3 0.22074

linearWithYerrors
    Linear regression with y-errors.

    >>> let input2y = [(0.1, 1.2, 0.12), (1.3, 3.1, 0.31), (1.9, 4.9, 0.49), (3.0, 7.1, 0.71), (4.1, 9.0, 1.9)]
    >>> let fit = linearWithYerrors id input2y
    >>> PP fit
    Fit (V2 1.9104 0.98302) (V2 0.13006 0.10462) 3 2.0930

    When we know the actual y-errors, we can calculate the Q-value
    using the statistics package:

    >>> import qualified Statistics.Distribution as S
    >>> import qualified Statistics.Distribution.ChiSquared as S
    >>> S.cumulative (S.chiSquared (fitNDF fit)) (fitWSSR fit)
    0.446669639443138

    or using math-functions:

    >>> import Numeric.SpecFunctions (incompleteGamma)
    >>> incompleteGamma (fromIntegral (fitNDF fit) / 2) (fitWSSR fit / 2)
    0.446669639443138

    It is not uncommon to deem acceptable, on equal terms, any model
    with, say, Q > 0.001. If Q is too large (too near to 1), that is
    most likely caused by overestimating the y-errors.
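To make the confidence-interval note under linearFit concrete, here is
a small sketch using the statistics package; tMultiplier is a
hypothetical helper, not part of regression-simple. Since quantile is
one-sided, a two-sided interval ci needs the (1 + ci) / 2 quantile:

    import Statistics.Distribution (quantile)
    import Statistics.Distribution.StudentT (studentT)

    -- Two-sided Student-t multiplier for a fit with ndf degrees of
    -- freedom. For a 95% interval pass ci = 0.95.
    tMultiplier :: Int -> Double -> Double
    tMultiplier ndf ci = quantile (studentT (fromIntegral ndf)) ((1 + ci) / 2)

The interval for each parameter is then fitParams plus or minus
tMultiplier (fitNDF fit) ci * fitErrors, component-wise; for large ndf
the multiplier approaches the familiar 1.96 at ci = 0.95.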
linearWithXYerrors
    Iterative linear regression with x and y errors, following
    Orear, J. (1982), "Least squares when both variables have
    uncertainties", American Journal of Physics 50(10), 912-916,
    doi:10.1119/1.12972.

    >>> let input2xy = [(0.1, 1.2, 0.01, 0.12), (1.3, 3.1, 0.13, 0.31), (1.9, 4.9, 0.19, 0.49), (3.0, 7.1, 0.3, 0.71), (4.1, 9.0, 0.41, 1.9)]
    >>> let fit :| fits = linearWithXYerrors id input2xy

    The first fit is done using linearWithYerrors:

    >>> PP fit
    Fit (V2 1.9104 0.98302) (V2 0.13006 0.10462) 3 2.0930

    After that, the effective variance is used to refine the fit; a
    few iterations are often enough:

    >>> PP $ take 3 fits
    Fit (V2 1.9092 0.99251) (V2 0.12417 0.08412) 3 1.2992
    Fit (V2 1.9092 0.99250) (V2 0.12418 0.08414) 3 1.2998
    Fit (V2 1.9092 0.99250) (V2 0.12418 0.08414) 3 1.2998

linearFit'
    Calculate a linear fit from a LinRegAcc.

quadratic
    Quadratic regression.

    >>> quadratic id input1
    V3 0.0 2.0 1.0

    >>> PP $ quadratic id input2
    V3 (-0.00589) 2.0313 0.87155

    >>> PP $ quadratic id input3
    V3 1.00000 0.00000 2.0000

quadraticFit
    Like quadratic but returns a complete Fit.

    >>> PP $ quadraticFit id input2
    Fit (V3 (-0.00589) 2.0313 0.87155) (V3 0.09281 0.41070 0.37841) 2 0.25910

    >>> PP $ quadraticFit id input3
    Fit (V3 1.00000 0.00000 2.0000) (V3 0.00000 0.00000 0.00000) 1 0.00000

quadraticWithWeights
    Weighted quadratic regression.

    >>> PP $ quadraticWithWeights id input2w
    Fit (V3 0.02524 1.9144 0.91792) (V3 0.10775 0.42106 0.35207) 2 0.21484

quadraticWithYerrors
    Quadratic regression with y-errors.

    >>> let input2y = [(0.1, 1.2, 0.12), (1.3, 3.1, 0.31), (1.9, 4.9, 0.49), (3.0, 7.1, 0.71), (4.1, 9.0, 0.9)]
    >>> PP $ quadraticWithYerrors id input2y
    Fit (V3 0.08776 1.6667 1.0228) (V3 0.10131 0.31829 0.11917) 2 1.5398
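To use a fitted V3 as a curve (for residuals or plotting), recall that
V3 a b c represents a x^2 + b x + c. A tiny helper for that; evalV3 is
hypothetical, not an export of this package:

    -- Evaluate the quadratic polynomial a x^2 + b x + c represented
    -- by a V3, in Horner form.
    evalV3 :: V3 -> Double -> Double
    evalV3 (V3 a b c) x = (a * x + b) * x + c

For instance, with the perfect fit above, evalV3 (V3 1 0 2) 3 gives
11.0, reproducing input3's last point exactly, consistent with its
zero WSSR.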
quadraticWithXYerrors
    Iterative quadratic regression with x and y errors, following
    Orear, J. (1982), "Least squares when both variables have
    uncertainties", American Journal of Physics 50(10), 912-916,
    doi:10.1119/1.12972.

quadraticFit'
    Calculate a quadratic fit from a QuadRegAcc.

levenbergMarquardt1
    Levenberg-Marquardt for functions with one parameter. See
    levenbergMarquardt2 for examples; this is very similar.

    For example, we can fit f = x \mapsto \beta x + 1, whose derivative
    is \partial_\beta f = x \mapsto x:

    >>> let scale a (x, y) = (y, a * x + 1, x)
    >>> PP $ NE.last $ levenbergMarquardt1 scale 1 input2
    Fit 1.9685 0.04735 4 0.27914

    Not bad, but worse than the linear fit, which fits the intercept
    too.

levenbergMarquardt1WithWeights
    levenbergMarquardt1 with weights.

levenbergMarquardt1WithYerrors
    levenbergMarquardt1 with y-errors.

levenbergMarquardt1WithXYerrors
    levenbergMarquardt1 with x and y errors.

levenbergMarquardt2
    Levenberg-Marquardt for functions with two parameters. You can use
    this sledgehammer to do a linear fit:

    >>> let lin (V2 a b) (x, y) = (y, a * x + b, V2 x 1)

    We can then use it to find a fit:

    >>> PP $ levenbergMarquardt2 lin (V2 1 1) input2
    Fit (V2 1.00000 1.00000) (V2 1.0175 2.5385) 3 29.470
    Fit (V2 1.2782 1.4831) (V2 0.57784 1.4416) 3 9.5041
    Fit (V2 1.7254 1.4730) (V2 0.18820 0.46952) 3 1.0082
    Fit (V2 1.9796 0.95226) (V2 0.09683 0.24157) 3 0.26687
    Fit (V2 2.0060 0.88759) (V2 0.09550 0.23826) 3 0.25962
    Fit (V2 2.0063 0.88685) (V2 0.09550 0.23826) 3 0.25962

    This is the same result that linearFit returns:

    >>> PP $ linearFit id input2
    Fit (V2 2.0063 0.88685) (V2 0.09550 0.23826) 3 0.25962

    Using AD

    You can use the ad package to calculate the derivatives for you.

    >>> import qualified Numeric.AD.Mode.Reverse.Double as AD

    We need a (Traversable) homogeneous triple to represent the two
    parameters and x:

    >>> data H3 a = H3 a a a deriving (Functor, Foldable, Traversable)

    Then we define a function ad can operate with:

    >>> let linearF (H3 a b x) = a * x + b

    which we can use to fit the curve generically:

    >>> let lin' (V2 a b) (x, y) = case AD.grad' linearF (H3 a b x) of (f, H3 da db _f') -> (y, f, V2 da db)
    >>> PP $ levenbergMarquardt2 lin' (V2 1 1) input2
    Fit (V2 1.00000 1.00000) (V2 1.0175 2.5385) 3 29.470
    Fit (V2 1.2782 1.4831) (V2 0.57784 1.4416) 3 9.5041
    Fit (V2 1.7254 1.4730) (V2 0.18820 0.46952) 3 1.0082
    Fit (V2 1.9796 0.95226) (V2 0.09683 0.24157) 3 0.26687
    Fit (V2 2.0060 0.88759) (V2 0.09550 0.23826) 3 0.25962
    Fit (V2 2.0063 0.88685) (V2 0.09550 0.23826) 3 0.25962

    Non-polynomial example

    We can fit other curves too, for example the Michaelis-Menten rate
    example from Wikipedia,
    https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm#Example:

    >>> let rateF (H3 vmax km s) = (vmax * s) / (km + s)
    >>> let rateF' (V2 vmax km) (x, y) = case AD.grad' rateF (H3 vmax km x) of (f, H3 vmax' km' _) -> (y, f, V2 vmax' km')
    >>> let input = zip [0.038,0.194,0.425,0.626,1.253,2.500,3.740] [0.050,0.127,0.094,0.2122,0.2729,0.2665,0.3317]
    >>> PP $ levenbergMarquardt2 rateF' (V2 0.9 0.2) input
    Fit (V2 0.90000 0.20000) (V2 0.43304 0.43936) 5 1.4455
    Fit (V2 0.61786 0.36360) (V2 0.23270 0.50259) 5 0.26730
    Fit (V2 0.39270 0.49787) (V2 0.05789 0.24170) 5 0.01237
    Fit (V2 0.36121 0.54525) (V2 0.04835 0.23315) 5 0.00785
    Fit (V2 0.36168 0.55530) (V2 0.04880 0.23790) 5 0.00784
    Fit (V2 0.36182 0.55620) (V2 0.04885 0.23826) 5 0.00784
    Fit (V2 0.36184 0.55626) (V2 0.04885 0.23829) 5 0.00784

    We get the same result as in the article: 0.362 and 0.556.

    The algorithm terminates when the scaling parameter \lambda becomes
    larger than 1e20 or smaller than 1e-20, when the relative WSSR
    change is smaller than 1e-10, or when the sum-of-squared-residuals
    candidate becomes NaN (i.e. when it would start to produce
    garbage). You may want to terminate sooner: Numerical Recipes
    suggests stopping when WSSR decreases by a negligible amount,
    absolutely or fractionally.
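A sketch of such an earlier cut-off; takeUntilConverged is
hypothetical, assuming only fitWSSR, the non-empty list of iteration
results these functions return, and that Fit is parameterized by its
parameter type as the examples suggest:

    import Data.List.NonEmpty (NonEmpty (..))

    -- Walk the iteration results, stopping once the fractional WSSR
    -- improvement between consecutive fits falls below tol.
    takeUntilConverged :: Double -> NonEmpty (Fit v) -> Fit v
    takeUntilConverged tol (f0 :| fs) = go f0 fs
      where
        go prev []         = prev
        go prev (f : rest)
          | fitWSSR prev - fitWSSR f < tol * fitWSSR prev = f
          | otherwise                                     = go f rest

For example, takeUntilConverged 1e-6 (levenbergMarquardt2 lin (V2 1 1)
input2) picks the first fit whose fractional improvement over its
predecessor is below 1e-6.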
levenbergMarquardt2WithWeights
    levenbergMarquardt2 with weights. Because levenbergMarquardt2 is an
    iterative algorithm, we can use it not only to fit curves with
    known y-errors (levenbergMarquardt2WithYerrors) but also with both
    x and y errors (levenbergMarquardt2WithXYerrors).

levenbergMarquardt2WithYerrors
    levenbergMarquardt2 with y-errors.

levenbergMarquardt2WithXYerrors
    levenbergMarquardt2 with x and y errors.

levenbergMarquardt3
    Levenberg-Marquardt for functions with three parameters. See
    levenbergMarquardt2 for examples; this is very similar.

    >>> let quad (V3 a b c) (x, y) = (y, a * x * x + b * x + c, V3 (x * x) x 1)
    >>> PP $ NE.last $ levenbergMarquardt3 quad (V3 2 2 2) input3
    Fit (V3 1.00000 (-0.00000) 2.0000) (V3 0.00000 0.00000 0.00000) 1 0.00000

    Same as the quadratic fit, just less direct:

    >>> PP $ quadraticFit id input3
    Fit (V3 1.00000 0.00000 2.0000) (V3 0.00000 0.00000 0.00000) 1 0.00000

levenbergMarquardt3WithWeights
    levenbergMarquardt3 with weights.

levenbergMarquardt3WithYerrors
    levenbergMarquardt3 with y-errors.

levenbergMarquardt3WithXYerrors
    levenbergMarquardt3 with x and y errors.

zeroLinRegAcc
    All-zeroes LinRegAcc.

addLinReg
    Add a point to the linear regression accumulator.

addLinRegW
    Add a weighted point to the linear regression accumulator.

zeroQuadRegAcc
    All-zeroes QuadRegAcc.

addQuadReg
    Add a point to the quadratic regression accumulator.

addQuadRegW
    Add a weighted point to the quadratic regression accumulator.

quadRegAccToLin
    Convert a QuadRegAcc to a LinRegAcc. Using this we can try
    quadratic and linear fits with a single scan of the data (see the
    sketch below).

lmStop
    Levenberg-Marquardt stop condition.
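A sketch of that single-scan idea; bothFits is hypothetical, and
assumes addQuadReg takes the point as two Double arguments and that
quadraticFit' and linearFit' produce Fit V3 and Fit V2 respectively:

    import Data.List (foldl')

    -- Accumulate every point once, then read off both the quadratic
    -- and the linear fit from the same pass over the data.
    bothFits :: [(Double, Double)] -> (Fit V3, Fit V2)
    bothFits pts = (quadraticFit' acc, linearFit' (quadRegAccToLin acc))
      where
        acc = foldl' (\a (x, y) -> addQuadReg a x y) zeroQuadRegAcc pts

This mirrors what quadraticFit and linearFit compute separately, but
scans the data only once.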
Argument documentation
----------------------

linearWithXYerrors and quadraticWithXYerrors take their data as points
(x_i, y_i, \delta x_i, \delta y_i).

The levenbergMarquardt1 family takes, in order (for levenbergMarquardt2
and levenbergMarquardt3 read \nabla_\beta in place of \partial_\beta,
and "initial parameters" in place of "initial parameter"):

  levenbergMarquardt1:
    a function \beta, d_i \mapsto (y_i, f(\beta, x_i), \partial_\beta f(\beta, x_i)),
    the initial parameter \beta_0,
    and the data d,
    returning a non-empty list of iteration results.

  ...WithWeights: the function additionally returns the weight w_i.
  ...WithYerrors: the function additionally returns \delta y_i.
  ...WithXYerrors: the function additionally returns
    \partial_x f(\beta, x_i), \delta x_i and \delta y_i.

addLinReg and addQuadReg take a point as x, y; addLinRegW and
addQuadRegW take a weighted point as x, y, w.