sgd-0.8.0.3: Stochastic gradient descent library

Safe Haskell: None
Language: Haskell2010

Numeric.SGD.Sparse

Description

Stochastic gradient descent using mutable vectors for efficient parameter updates. This module is intended for use with sparse features. If you use dense feature vectors (as arise, e.g., in deep learning), have a look at Numeric.SGD instead.

Currently, only Gaussian regularization is implemented.

SGD with momentum is known to converge faster than vanilla SGD. Its implementation can be found in Numeric.SGD.Sparse.Momentum.

Synopsis

Documentation

data SgdArgs Source #

SGD parameters controlling the learning process.

Constructors

SgdArgs 

Fields

sgdArgsDefault :: SgdArgs Source #

Default SGD parameter values.
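The defaults can be adjusted with ordinary record-update syntax. A minimal sketch; the field names iterNum and batchSize used below are assumptions and should be checked against the fields of the SgdArgs constructor, which are not reproduced on this page:

import Numeric.SGD.Sparse (SgdArgs (..), sgdArgsDefault)

-- Hypothetical field names; verify them against the SgdArgs constructor.
myArgs :: SgdArgs
myArgs = sgdArgsDefault
  { iterNum   = 20  -- assumed: number of passes over the dataset
  , batchSize = 50  -- assumed: number of elements per batch
  }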

type Para = Vector Double Source #

Vector of parameters.
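A common starting point for sgd (documented below) is an all-zero parameter vector. The sketch below uses the generic vector interface, so it works whether the Vector in this alias is a boxed or an unboxed vector:

import qualified Data.Vector.Generic as G
import Numeric.SGD.Sparse (Para)

-- All-zero parameter vector for a model with n parameters.
zeroPara :: Int -> Para
zeroPara n = G.replicate n 0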

sgd Source #

Arguments

:: SgdArgs                 -- ^ SGD parameter values
-> (Para -> Int -> IO ())  -- ^ Notification action run after every update
-> (Para -> x -> Grad)     -- ^ Gradient for a single dataset element
-> DataSet x               -- ^ Dataset
-> Para                    -- ^ Starting point
-> IO Para                 -- ^ SGD result

A stochastic gradient descent method. A notification function can be used to provide the user with information about the progress of learning.
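A minimal usage sketch. Only Para, sgd, and sgdArgsDefault are taken from this module; the per-element gradient function and the dataset are assumed to be built with the package's companion gradient and dataset modules, which are not documented on this page:

import Numeric.SGD.Sparse (Para, sgd, sgdArgsDefault)

-- Notification action run after every update: report progress on stdout.
notify :: Para -> Int -> IO ()
notify _para k = putStrLn $ "SGD: " ++ show k ++ " updates done"

-- Wiring it together.  `gradOn` (gradient for one dataset element),
-- `dataSet` (the training data) and `p0` (the starting point) are assumed
-- to be constructed elsewhere.
learn gradOn dataSet p0 = sgd sgdArgsDefault notify gradOn dataSet p0

Replacing sgdArgsDefault with a customized SgdArgs value (see above) controls the batch size, regularization, and the number of passes over the data in the same call.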