sgd-0.3: Stochastic gradient descent

Safe Haskell: None

Numeric.SGD

Description

Stochastic gradient descent implementation using mutable vectors for efficient in-place updates of the parameter vector. The user is given an immutable snapshot of the parameters, so the gradient can be computed outside of the IO monad. Currently only Gaussian priors are implemented.

This is a preliminary version of the SGD library; the API may change in future versions.

Synopsis

Documentation

data SgdArgs

SGD parameters controlling the learning process.

Constructors

SgdArgs 

Fields

batchSize :: Int

Size of the batch

regVar :: Double

Regularization variance

iterNum :: Double

Number of iterations

gain0 :: Double

Initial gain parameter

tau :: Double

Number of passes over the entire dataset after which the gain parameter is halved
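The gain0 and tau fields together describe a decaying gain (learning-rate) schedule. The library's exact formula is not shown in this documentation; the sketch below is one schedule consistent with the description above (exponential decay that halves the gain every tau passes over the data), not necessarily the one the library uses.

```haskell
-- Hedged sketch of a gain schedule matching the field descriptions:
-- `gain0` is the initial gain, `tau` the number of dataset passes after
-- which the gain is halved, `t` the number of passes done so far.
-- The actual schedule used by Numeric.SGD may differ.
gain :: Double -> Double -> Double -> Double
gain gain0 tau t = gain0 * 0.5 ** (t / tau)

main :: IO ()
main = do
  print (gain 1.0 5.0 0.0)   -- full gain at the start
  print (gain 1.0 5.0 5.0)   -- halved after tau passes
```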

sgdArgsDefault :: SgdArgs

Default SGD parameter values.
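Individual parameters are typically overridden with Haskell record-update syntax starting from the defaults. To keep the example self-contained, the snippet below re-declares a record mirroring the documented SgdArgs fields with illustrative default values (assumptions, not the library's actual defaults); in real code you would import SgdArgs and sgdArgsDefault from Numeric.SGD instead.

```haskell
-- Local stand-in mirroring the documented SgdArgs fields (assumption:
-- in real code, import these from Numeric.SGD).
data SgdArgs = SgdArgs
  { batchSize :: Int     -- size of the batch
  , regVar    :: Double  -- regularization variance
  , iterNum   :: Double  -- number of iterations
  , gain0     :: Double  -- initial gain parameter
  , tau       :: Double  -- passes after which the gain is halved
  }

-- Illustrative defaults (assumed values, not the library's).
sgdArgsDefault :: SgdArgs
sgdArgsDefault = SgdArgs
  { batchSize = 30, regVar = 10, iterNum = 10, gain0 = 1, tau = 5 }

-- Override only the parameters you care about.
myArgs :: SgdArgs
myArgs = sgdArgsDefault { batchSize = 64, iterNum = 20 }

main :: IO ()
main = print (batchSize myArgs)
```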

type Para = Vector Double

Vector of parameters.

sgd

Arguments

:: SgdArgs

SGD parameter values

-> (Para -> Int -> IO ())

Notification run after every update

-> (Para -> x -> Grad)

Gradient for dataset element

-> Dataset x

Dataset

-> Para

Starting point

-> IO Para

SGD result

A stochastic gradient descent method. The notification function can be used to inform the user about the progress of learning.
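To show how the documented pieces fit together, here is a minimal, self-contained sketch of the same interface. It is emphatically not the library implementation: the real sgd uses mutable unboxed vectors, batching (batchSize), regularization (regVar), and a decaying gain, while this sketch uses plain lists for Para, a fixed gain, and full-dataset gradient steps. The names sgdSketch and the toy least-squares problem are invented for illustration.

```haskell
import Data.List (foldl')

-- Simplified stand-ins for the library's Para and Grad (assumption: the
-- real Para is an unboxed Vector Double, not a list).
type Para = [Double]
type Grad = [Double]

-- Hypothetical driver mirroring the documented sgd argument order:
-- notification, per-element gradient, dataset, starting point.
sgdSketch
  :: Int                      -- number of iterations
  -> Double                   -- fixed gain (learning rate)
  -> (Para -> Int -> IO ())   -- notification run after every update
  -> (Para -> x -> Grad)      -- gradient for a dataset element
  -> [x]                      -- dataset
  -> Para                     -- starting point
  -> IO Para                  -- SGD result
sgdSketch n gainV notify grad xs = go 0
  where
    go k para
      | k >= n = return para
      | otherwise = do
          -- Sum the per-element gradients, then take a descent step.
          let zeros = map (const 0) para
              g     = foldl' (zipWith (+)) zeros (map (grad para) xs)
              para' = zipWith (\p gi -> p - gainV * gi) para g
          notify para' k
          go (k + 1) para'

-- Example: fit y = a * x by least squares on a toy dataset.
main :: IO ()
main = do
  let dataset = [(1, 2), (2, 4), (3, 6)] :: [(Double, Double)]
      -- Gradient of (a*x - y)^2 with respect to the single parameter a.
      grad [a] (x, y) = [2 * (a * x - y) * x]
      notify _ _ = return ()
  para <- sgdSketch 100 0.01 notify grad dataset [0]
  print para  -- converges towards [2.0]
```

The mutable-vector design the library describes serves the same loop: the parameters are updated in place inside IO, while the gradient function only ever sees an immutable snapshot, so user code stays pure.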