| Safe Haskell | None |
|---|---|
| Language | Haskell98 |
Stochastic gradient descent using mutable vectors for efficient parameter updates. This module is intended for use with sparse features. If you use dense feature vectors (as arise, e.g., in deep learning), have a look at Numeric.SGD.
Currently, only Gaussian regularization is implemented.
SGD with momentum is known to converge faster than vanilla SGD. Its implementation can be found in Numeric.SGD.Sparse.Momentum.
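The core idea behind sparse SGD, updating only the coordinates touched by a non-zero gradient entry and folding a Gaussian (L2) penalty into each touched coordinate, can be sketched as follows. This is an illustrative toy, not the module's actual implementation: the real code uses mutable vectors, while this sketch uses a `Map` for clarity, and all names (`Params`, `SparseGrad`, `sgdStep`) are hypothetical.

```haskell
import qualified Data.Map.Strict as M

-- Hypothetical stand-ins: parameters and a sparse gradient,
-- both represented as index -> value maps.
type Params     = M.Map Int Double
type SparseGrad = M.Map Int Double

-- One sparse SGD step. Gaussian (L2) regularization is applied
-- only to the coordinates touched by the gradient, a common
-- approximation when working with sparse features.
sgdStep
  :: Double      -- ^ learning rate (gain)
  -> Double      -- ^ regularization coefficient
  -> Params      -- ^ current parameters
  -> SparseGrad  -- ^ gradient for one dataset element
  -> Params
sgdStep gain regCoef params grad =
  M.foldrWithKey update params grad
  where
    update i g ps =
      let w = M.findWithDefault 0 i ps
      in  M.insert i (w - gain * (g + regCoef * w)) ps
```

Because the fold visits only the entries of the sparse gradient, the cost of a step is proportional to the number of active features, not the total parameter count.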
Documentation
SGD parameters controlling the learning process.
sgdArgsDefault :: SgdArgs
Default SGD parameter values.
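Assuming `SgdArgs` is an ordinary record (the field names below, `iterNum` and `gain0`, are assumptions not confirmed by this page), the defaults could be tweaked with record-update syntax rather than built from scratch:

```haskell
-- Hypothetical sketch: start from the defaults and override
-- selected fields (field names are assumed, not documented here).
myArgs :: SgdArgs
myArgs = sgdArgsDefault { iterNum = 20, gain0 = 0.1 }
```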
:: SgdArgs | SGD parameter values |
-> (Para -> Int -> IO ()) | Notification run every update |
-> (Para -> x -> Grad) | Gradient for dataset element |
-> DataSet x | Dataset |
-> Para | Starting point |
-> IO Para | SGD result |
A stochastic gradient descent method. A notification function can be used to provide the user with information about the progress of the learning.
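A hedged usage sketch, wiring the arguments above together. It assumes the function documented here is exported as `sgd`; `myGrad`, `dataSet`, and `para0` are hypothetical user-supplied values, and the notification function simply prints the update counter.

```haskell
import Numeric.SGD.Sparse

-- Print a progress message after every update.
notify :: Para -> Int -> IO ()
notify _para k = putStrLn ("SGD update #" ++ show k)

-- Run SGD with default parameters over a user-supplied
-- gradient function, dataset, and starting point.
train :: (Para -> x -> Grad) -> DataSet x -> Para -> IO Para
train myGrad dataSet para0 =
  sgd sgdArgsDefault notify myGrad dataSet para0
```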
module Numeric.SGD.Sparse.Grad
module Numeric.SGD.DataSet