Safe Haskell: None
Stochastic gradient descent implementation using mutable vectors for efficient updates of the parameter vector. The user is given an immutable view of the parameter vector, so the gradient can be computed outside the IO/ST monad. Currently only Gaussian priors are implemented.
This is a preliminary version of the SGD library and the API may change in future versions.
- data SgdArgs = SgdArgs {}
- sgdArgsDefault :: SgdArgs
- type Dataset x = Vector x
- type Para = Vector Double
- sgd :: SgdArgs -> (Para -> x -> Grad) -> Dataset x -> Para -> Para
- sgdM :: (Applicative m, PrimMonad m) => SgdArgs -> (Para -> Int -> m ()) -> (Para -> x -> Grad) -> Dataset x -> Para -> m Para
- module Numeric.SGD.Grad
Documentation
data SgdArgs = SgdArgs {}
SGD parameters controlling the learning process.
sgdArgsDefault :: SgdArgs
Default SGD parameter values.
sgd
  :: SgdArgs              -- ^ SGD parameter values
  -> (Para -> x -> Grad)  -- ^ Gradient for dataset element
  -> Dataset x            -- ^ Dataset
  -> Para                 -- ^ Starting point
  -> Para                 -- ^ SGD result
Pure version of the stochastic gradient descent method.
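As an illustration, here is a minimal sketch (not taken from the package documentation) that estimates a single parameter, the mean of a few observations, with the pure interface. It assumes that the Vector in Para and Dataset is the boxed Data.Vector type, that Numeric.SGD.Grad exports a sparse-gradient constructor fromList :: [(Int, Double)] -> Grad, and that parameters are moved in the direction of the supplied gradient (i.e. the gradient of the objective being maximised); all three are assumptions to verify against the actual modules.

```haskell
import qualified Data.Vector as V

import Numeric.SGD (sgd, sgdArgsDefault)
import Numeric.SGD.Grad (Grad)
import qualified Numeric.SGD.Grad as Grad

-- One dataset element is an observed value y; the model is simply y ~ theta,
-- so the maximiser of -(y - theta)^2 / 2 over the data is the sample mean.
-- The element gradient with respect to theta is (y - theta), stored as a
-- sparse gradient over parameter index 0.
grad :: V.Vector Double -> Double -> Grad
grad theta y = Grad.fromList [(0, y - theta V.! 0)]  -- assumed constructor

main :: IO ()
main = do
    let dataset = V.fromList [1.0, 2.0, 3.0, 4.0 :: Double]
        theta0  = V.fromList [0.0]
        -- Pure call: the gradient function runs outside the IO/ST monad.
        result  = sgd sgdArgsDefault grad dataset theta0
    print (V.toList result)
```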
sgdM
  :: (Applicative m, PrimMonad m)
  => SgdArgs               -- ^ SGD parameter values
  -> (Para -> Int -> m ()) -- ^ Notification run after every update
  -> (Para -> x -> Grad)   -- ^ Gradient for dataset element
  -> Dataset x             -- ^ Dataset
  -> Para                  -- ^ Starting point
  -> m Para                -- ^ SGD result
Monadic version of the stochastic gradient descent method. A notification function can be used to provide the user with information about the progress of the learning process.
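A corresponding sketch for the monadic interface, run in the IO monad (an instance of PrimMonad), with a notification that prints the current estimate. It relies on the same assumptions as the pure example above (boxed vectors, a fromList constructor in Numeric.SGD.Grad), plus the assumption that the Int passed to the notification is the update number.

```haskell
import qualified Data.Vector as V

import Numeric.SGD (sgdM, sgdArgsDefault)
import Numeric.SGD.Grad (Grad)
import qualified Numeric.SGD.Grad as Grad

-- Sparse, single-parameter gradient for an observation y (model: y ~ theta).
grad :: V.Vector Double -> Double -> Grad
grad theta y = Grad.fromList [(0, y - theta V.! 0)]  -- assumed constructor

-- Notification run by sgdM; the Int is assumed to be the update number.
notify :: V.Vector Double -> Int -> IO ()
notify theta k =
    putStrLn $ "update " ++ show k ++ ": theta = " ++ show (V.toList theta)

main :: IO ()
main = do
    let dataset = V.fromList [1.0, 2.0, 3.0, 4.0 :: Double]
        theta0  = V.fromList [0.0]
    result <- sgdM sgdArgsDefault notify grad dataset theta0
    putStrLn $ "final theta = " ++ show (V.toList result)
```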
module Numeric.SGD.Grad