Maintainer: Kiet Lam <ktklam9@gmail.com>

This module provides algorithms for training a neural network on a set of training data.

Users should prefer LBFGS, however, because it uses a custom binding to the C library liblbfgs.

GSL's multivariate minimization algorithms are known to be inefficient (see http://www.alglib.net/optimization/lbfgsandcg.php#header6), and LBFGS outperforms them in many of my tests.

- `data TrainingAlgorithm = GradientDescent | ConjugateGradient | BFGS | LBFGS`

- `trainNetwork :: TrainingAlgorithm -> Cost -> GradientFunction -> Network -> Double -> Int -> Matrix Double -> Matrix Double -> Network`

# Documentation

data TrainingAlgorithm

The types of training algorithm to use

NOTE: These are all batch training algorithms

| Constructor | Implementation |
|---|---|
| `GradientDescent` | hmatrix's binding to GSL |
| `ConjugateGradient` | hmatrix's binding to GSL |
| `BFGS` | hmatrix's binding to GSL |
| `LBFGS` | home-made binding to liblbfgs |

trainNetwork

| Argument | Description |
|---|---|
| `TrainingAlgorithm` | The training algorithm to use |
| `Cost` | The cost model of the neural network |
| `GradientFunction` | The function that calculates the gradient vector |
| `Network` | The network to be trained |
| `Double` | The precision of the training with regard to the cost function |
| `Int` | The maximum number of iterations |
| `Matrix Double` | The input matrix |
| `Matrix Double` | The expected output matrix |
| `Network` (result) | The trained network |

Trains the neural network given a training algorithm, the training parameters, and the training data.
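A minimal sketch of how a call to `trainNetwork` might look, assuming the types documented above. The placeholder bindings (`myCost`, `myGradients`, `initialNet`, `inputs`, `expected`) and the module import path are illustrative assumptions, not part of the documented API:

```haskell
-- Hypothetical module path; adjust to the actual package layout.
-- import AI.Network.Trainer (trainNetwork, TrainingAlgorithm (..))

-- Placeholders standing in for values built elsewhere with this package's API:
myCost      :: Cost              -- some cost model, e.g. mean squared error
myCost      = undefined
myGradients :: GradientFunction  -- computes the gradient vector for myCost
myGradients = undefined
initialNet  :: Network           -- an untrained (e.g. randomly initialized) network
initialNet  = undefined
inputs, expected :: Matrix Double
inputs      = undefined          -- training inputs
expected    = undefined          -- corresponding expected outputs

-- LBFGS is the recommended algorithm (custom liblbfgs binding).
trained :: Network
trained =
  trainNetwork LBFGS myCost myGradients initialNet
               1e-9   -- precision with regard to the cost function
               1000   -- maximum number of iterations
               inputs expected
```

Being a batch algorithm, each iteration uses the entire input and expected-output matrices rather than single examples.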