Maintainer | Kiet Lam <ktklam9@gmail.com>
---|---
Safe Haskell | Safe-Inferred

The Levenberg-Marquardt algorithm is a minimization algorithm for functions expressed as a sum of squared errors.

This can be used for curve fitting, multidimensional function optimization, or training neural networks.

# Documentation

type Function = Vector Double -> Vector Double

Type of a function that calculates the residues at a given parameter vector

type Jacobian = Vector Double -> Matrix Double

Type of a function that calculates the Jacobian matrix of the residues with respect to each parameter
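For instance, fitting the model y = a * exp(b * t) gives one residue per data point and one Jacobian row of partial derivatives per data point. A minimal sketch, using plain lists in place of hmatrix's `Vector` and `Matrix` so it stays self-contained; the data and model here are illustrative, not part of this module:

```haskell
-- Sample data lying exactly on y = 1 * exp(0.5 * t).
ts, ys :: [Double]
ts = [0, 1, 2]
ys = map (\t -> exp (0.5 * t)) ts

-- Residues: e_i [a, b] = y_i - a * exp(b * t_i)
residues :: [Double] -> [Double]
residues [a, b] = zipWith (\t y -> y - a * exp (b * t)) ts ys
residues _      = error "expected exactly two parameters"

-- Jacobian row i: [d e_i / d a, d e_i / d b]
--               = [-exp(b * t_i), -a * t_i * exp(b * t_i)]
jacobian :: [Double] -> [[Double]]
jacobian [a, b] = map (\t -> [-exp (b * t), -a * t * exp (b * t)]) ts
jacobian _      = error "expected exactly two parameters"
```

At the true parameters `[1, 0.5]` every residue is zero; the Jacobian stays nonzero, which is what drives the parameter updates.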

:: Function | Multi-dimensional function that returns the vector of residues |

-> Jacobian | Function that calculates the Jacobian matrix of each residue with respect to each parameter |

-> Vector Double | The initial guess for the parameters |

-> Double | Damping constant (usually lambda in most literature) |

-> Double | Damping update value (usually beta in most literature) |

-> Double | The desired precision |

-> Int | The maximum number of iterations |

-> (Vector Double, Matrix Double) | Returns the optimal parameters and the path of parameter estimates, collected as a matrix |

Evolves the parameter x so that f(x) = sum-square(e(x)) is minimized, where:

f is the real-valued error function and e(x) = {e1(x), e2(x), .., eN(x)} is the vector of residues, with ei(x) the i-th residue at the vector x

NOTE: each residue ei(x) is usually of the form (sample - hypothesis(x))

e.g.: in training neural networks, hypothesis(x) would be the network's output for a training example, and sample would be the expected output for that example

NOTE: The damping constant (lambda) should be set to 0.01 and the damping update value (beta) should be set to 10
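The role of these two constants can be sketched for a single-parameter model: solve the damped normal equation (J'J + lambda) * delta = -J'e, then divide lambda by beta after a step that reduces the error and multiply it by beta otherwise. This is a generic Levenberg-Marquardt sketch with illustrative data and names, not this module's actual implementation:

```haskell
import Data.List (foldl')

-- Sample data lying on y = 2 * t, so the optimal parameter is a = 2.
ts, ys :: [Double]
ts = [1, 2, 3]
ys = map (2 *) ts

-- Residues e_i(a) = y_i - a * t_i and their derivatives d e_i / d a = -t_i.
residues :: Double -> [Double]
residues a = zipWith (\t y -> y - a * t) ts ys

jac :: [Double]
jac = map negate ts

sumSq :: [Double] -> Double
sumSq = foldl' (\s e -> s + e * e) 0

-- One Levenberg-Marquardt step on the state (parameter, lambda):
-- solve (J'J + lambda) * delta = -J'e, keep the step only if it helps.
step :: (Double, Double) -> (Double, Double)
step (a, lambda) =
  let e     = residues a
      jtj   = sumSq jac                      -- scalar J'J
      jte   = sum (zipWith (*) jac e)        -- scalar J'e
      delta = negate jte / (jtj + lambda)
      a'    = a + delta
  in if sumSq (residues a') < sumSq e
       then (a', lambda / 10)                -- good step: relax damping (beta = 10)
       else (a,  lambda * 10)                -- bad step: increase damping

main :: IO ()
main = print (fst (iterate step (0, 0.01) !! 20))
```

Starting from a = 0 with lambda = 0.01, each accepted step shrinks lambda, so the update approaches the pure Gauss-Newton step and the parameter converges to 2 within a few iterations.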