| Safe Haskell | None |
|---|---|
| Language | Haskell2010 |
Stochastic gradient descent implementation using mutable vectors for efficient, in-place updates of the parameter vector. The user is given an immutable snapshot of the parameter vector, so the gradient can be computed outside the IO/ST monad. Currently only Gaussian priors are implemented.

This is a preliminary version of the SGD library and the API may change in future versions.
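The pattern described above (mutable parameters for cheap in-place updates, an immutable snapshot for the user's gradient computation) can be sketched with the `vector` package. This is a minimal illustration, not the library's code; the gradient and gain here are made up for the example.

```haskell
import qualified Data.Vector.Unboxed as U
import qualified Data.Vector.Unboxed.Mutable as UM
import Control.Monad (forM_)

-- One SGD step: freeze the mutable parameter vector into an immutable
-- snapshot, compute a (hypothetical) gradient against it in pure code,
-- then apply the update in place.
stepDemo :: IO (U.Vector Double)
stepDemo = do
  para   <- UM.replicate 3 (1.0 :: Double)  -- mutable parameter vector
  frozen <- U.freeze para                   -- immutable snapshot for the user
  let grad = U.map (* 2) frozen             -- hypothetical gradient: 2 * theta
  forM_ [0 .. UM.length para - 1] $ \i ->   -- in-place update, gain = 0.1
    UM.modify para (\p -> p - 0.1 * grad U.! i) i
  U.freeze para

main :: IO ()
main = stepDemo >>= print
```

`U.freeze` copies the mutable vector, so the snapshot stays valid while the mutable copy keeps being updated.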
Synopsis
Documentation
SGD parameters controlling the learning process.
sgdArgsDefault :: SgdArgs
Default SGD parameter values.
| Argument | Description |
|---|---|
| :: SgdArgs | SGD parameter values |
| -> (Para -> x -> Grad) | Gradient for dataset element |
| -> Dataset x | Dataset |
| -> Para | Starting point |
| -> Para | SGD result |
Pure version of the stochastic gradient descent method.
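A hedged sketch of what a pure SGD driver with this shape does, with the library's Para/Grad/Dataset types replaced by plain lists and functions, and a constant gain standing in for SgdArgs (all of these substitutions are assumptions for illustration):

```haskell
-- Fold one update per dataset element over the starting point.
sgdSketch
  :: Double                       -- gain (stands in for SgdArgs)
  -> ([Double] -> x -> [Double])  -- gradient for one dataset element
  -> [x]                          -- dataset
  -> [Double]                     -- starting point
  -> [Double]                     -- result
sgdSketch gain grad dataset p0 = foldl step p0 dataset
  where
    step para x = zipWith (\p g -> p - gain * g) para (grad para x)

main :: IO ()
main = print $
  -- toy objective (p - x)^2 per element, so the gradient is 2 * (p - x);
  -- the parameter is driven toward the data value 2.0
  sgdSketch 0.25 (\[p] x -> [2 * (p - x)]) (replicate 50 (2.0 :: Double)) [0]
```

The sketch makes a single pass over the dataset with a fixed gain; a real implementation would also sample or shuffle the data and decay the gain over time.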
| Argument | Description |
|---|---|
| :: PrimMonad m | |
| => SgdArgs | SGD parameter values |
| -> (Para -> Int -> m ()) | Notification run every update |
| -> (Para -> x -> Grad) | Gradient for dataset element |
| -> Dataset x | Dataset |
| -> Para | Starting point |
| -> m Para | SGD result |
Monadic version of the stochastic gradient descent method. A notification function can be used to inform the user about the progress of the learning process.
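A sketch of such a monadic loop, specialised to IO rather than a general PrimMonad and again using list-based stand-ins for the library's types (names and types here are assumptions): the notification action receives the current parameters and the update counter after every step.

```haskell
import Control.Monad (foldM)
import Data.IORef

-- Monadic SGD loop: like the pure fold, but each step runs in a monad
-- so a notification action can be invoked after every update.
sgdMSketch
  :: Double                       -- gain
  -> ([Double] -> Int -> IO ())   -- notification run every update
  -> ([Double] -> x -> [Double])  -- gradient for one dataset element
  -> [x]                          -- dataset
  -> [Double]                     -- starting point
  -> IO [Double]
sgdMSketch gain notify grad dataset p0 =
  foldM step p0 (zip [1 ..] dataset)
  where
    step para (k, x) = do
      let para' = zipWith (\p g -> p - gain * g) para (grad para x)
      notify para' k   -- e.g. log the objective every n-th update
      return para'

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  let notify _ _ = modifyIORef' counter (+ 1)
  _ <- sgdMSketch 0.25 notify
         (\[p] x -> [2 * (p - x)]) (replicate 10 (2.0 :: Double)) [0]
  readIORef counter >>= print  -- one notification per update
```

Keeping the loop in a monad is also what lets the real implementation update the parameter vector destructively while still handing the gradient function an immutable view.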
module Numeric.SGD.Grad