sgd-0.2.2: Stochastic gradient descent

Safe HaskellNone

Numeric.SGD

Description

Stochastic gradient descent implementation using mutable vectors for efficient updates of the parameter vector. The user is given an immutable view of the parameter vector, so the gradient can be computed outside the IO/ST monad. Currently only Gaussian priors are implemented.

This is a preliminary version of the SGD library; the API may change in future versions.

Synopsis

Documentation

data SgdArgs

SGD parameters controlling the learning process.

Constructors

SgdArgs 

Fields

batchSize :: Int

Size of the batch

regVar :: Double

Regularization variance

iterNum :: Double

Number of iterations

gain0 :: Double

Initial gain parameter

tau :: Double

Number of passes over the entire dataset after which the gain parameter is halved
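Taken together, gain0 and tau determine the gain schedule. One plausible reading of the field descriptions, as a sketch only (the exact formula used by the library may differ), is exponential decay that halves the gain every tau passes:

```haskell
-- Hypothetical gain schedule, assuming exponential halving:
-- after every `tau` passes over the dataset the gain is cut in half.
-- This is an illustration of the field descriptions, not library code.
gainAt
  :: Double  -- ^ Initial gain (gain0)
  -> Double  -- ^ Halving period in dataset passes (tau)
  -> Double  -- ^ Number of passes done so far
  -> Double
gainAt gain0 tau done = gain0 * 0.5 ** (done / tau)
-- gainAt 1.0 5.0 5.0 is roughly 0.5: halved after tau passes
```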

sgdArgsDefault :: SgdArgs

Default SGD parameter values.
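Since SgdArgs is a plain record, selected fields of the defaults can be overridden with record-update syntax; for instance (the field values below are chosen for illustration only):

```haskell
import Numeric.SGD

-- Smaller batches and more passes over the data than the defaults.
myArgs :: SgdArgs
myArgs = sgdArgsDefault { batchSize = 10, iterNum = 50 }
```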

type Dataset x = Vector x

Dataset with elements of type x.

type Para = Vector Double

Vector of parameters.

sgd

Arguments

:: SgdArgs

SGD parameter values

-> (Para -> x -> Grad)

Gradient for dataset element

-> Dataset x

Dataset

-> Para

Starting point

-> Para

SGD result

Pure version of the stochastic gradient descent method.
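A sketch of how sgd might be called for a one-parameter least-squares fit. Several details here are assumptions: the Grad type and its fromList constructor are presumed to come from a companion Numeric.SGD.Grad module, boxed Data.Vector is assumed (the library may use unboxed vectors), and the gradient sign assumes the method follows the supplied gradient upward. Treat this as an outline, not working code:

```haskell
import qualified Data.Vector as V
import Numeric.SGD
import Numeric.SGD.Grad (Grad, fromList)  -- assumed companion module

-- Each dataset element is an (x, y) observation.
type Obs = (Double, Double)

-- Gradient of the negated squared error 0.5 * (a*x - y)^2 with
-- respect to the single parameter a, stored at index 0.
grad :: Para -> Obs -> Grad
grad para (x, y) =
  let a = para V.! 0
  in  fromList [(0, negate (x * (a * x - y)))]

-- Fit a from noise-free observations y = 2 * x, starting at a = 0.
fit :: Para
fit = sgd sgdArgsDefault grad obs (V.singleton 0)
  where
    obs = V.fromList [(x, 2 * x) | x <- [1 .. 100]]
```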

sgdM

Arguments

:: PrimMonad m 
=> SgdArgs

SGD parameter values

-> (Para -> Int -> m ())

Notification action run after every update

-> (Para -> x -> Grad)

Gradient for dataset element

-> Dataset x

Dataset

-> Para

Starting point

-> m Para

SGD result

Monadic version of the stochastic gradient descent method. A notification function can be used to inform the user about the progress of the learning process.
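For example, since IO is a PrimMonad, sgdM can log progress through a notification that prints its Int argument (assumed here to count processed updates; the documentation above does not pin down its meaning). The Grad import is again an assumption about a companion module:

```haskell
import Numeric.SGD
import Numeric.SGD.Grad (Grad)  -- assumed companion module

-- Print the update counter after every parameter update.
notify :: Para -> Int -> IO ()
notify _para k = putStrLn ("updates done: " ++ show k)

-- Run SGD in IO with progress logging, given any gradient function.
trainIO :: (Para -> x -> Grad) -> Dataset x -> Para -> IO Para
trainIO grad dataset para0 =
  sgdM sgdArgsDefault notify grad dataset para0
```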