sgd-0.2.3: Stochastic gradient descent

Safe Haskell: None
Language: Haskell2010

Numeric.SGD

Description

Stochastic gradient descent implementation using mutable vectors for efficient updates of the parameter vector. The user is given an immutable view of the parameter vector, so the gradient can be computed outside the IO/ST monad. Currently only Gaussian priors are implemented.

This is a preliminary version of the SGD library and the API may change in future versions.

Synopsis

data SgdArgs
sgdArgsDefault :: SgdArgs
type Dataset x = Vector x
type Para = Vector Double
sgd :: SgdArgs -> (Para -> x -> Grad) -> Dataset x -> Para -> Para
sgdM :: PrimMonad m => SgdArgs -> (Para -> Int -> m ()) -> (Para -> x -> Grad) -> Dataset x -> Para -> m Para

Documentation

data SgdArgs Source #

SGD parameters controlling the learning process.

Constructors

SgdArgs 

Fields

sgdArgsDefault :: SgdArgs Source #

Default SGD parameter values.
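
For illustration, a hedged sketch of adjusting the defaults with ordinary record update syntax. The field names of SgdArgs are not reproduced on this page, so batchSize below is only an assumed example field; substitute the actual fields of SgdArgs before using it.

  import Numeric.SGD (SgdArgs (..), sgdArgsDefault)

  -- Start from the defaults and override selected fields with record
  -- update syntax.  NOTE: batchSize is an assumed field name used purely
  -- for illustration; consult the Fields list of SgdArgs for the real
  -- names before compiling this.
  myArgs :: SgdArgs
  myArgs = sgdArgsDefault { batchSize = 50 }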

type Dataset x = Vector x Source #

Dataset with elements of type x.

type Para = Vector Double Source #

Vector of parameters.
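
A hedged construction sketch for these synonyms: it assumes, as in other sgd-0.2.x releases, that Dataset is a boxed vector from Data.Vector and Para an unboxed vector from Data.Vector.Unboxed; adjust the imports if this build resolves Vector differently. The values defined here are reused in the examples below.

  import qualified Data.Vector as V
  import qualified Data.Vector.Unboxed as U

  import Numeric.SGD (Dataset, Para)

  -- Toy dataset: observations whose mean will be estimated in the
  -- examples further down the page.
  ys :: Dataset Double
  ys = V.fromList [1.0, 2.0, 3.0, 4.0]

  -- Starting point: a single parameter, initialised to zero.
  para0 :: Para
  para0 = U.fromList [0]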

sgd Source #

Arguments

:: SgdArgs                SGD parameter values
-> (Para -> x -> Grad)    Gradient for dataset element
-> Dataset x              Dataset
-> Para                   Starting point
-> Para                   SGD result

Pure version of the stochastic gradient descent method.
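
A worked sketch of the pure interface, reusing ys and para0 from the sketch above to estimate a single parameter (the sample mean). Building the Grad value with fromList :: [(Int, Double)] -> Grad from Numeric.SGD.Grad is an assumption, as is the gradient-ascent sign convention; check both against the actual modules.

  import qualified Data.Vector.Unboxed as U

  import Numeric.SGD
  import qualified Numeric.SGD.Grad as Grad

  -- Per-element objective: -(theta - y)^2 / 2, whose derivative with
  -- respect to the single parameter (index 0) is (y - theta).  This
  -- assumes a gradient-ascent convention; negate the value if the
  -- library minimises a loss instead.
  grad :: Para -> Double -> Grad.Grad
  grad para y = Grad.fromList [(0, y - para U.! 0)]  -- fromList is assumed, see above

  -- Run SGD with the default settings; the result should approach the
  -- sample mean of ys, i.e. 2.5.
  fitted :: Para
  fitted = sgd sgdArgsDefault grad ys para0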

sgdM Source #

Arguments

:: PrimMonad m
=> SgdArgs                SGD parameter values
-> (Para -> Int -> m ())  Notification run on every update
-> (Para -> x -> Grad)    Gradient for dataset element
-> Dataset x              Dataset
-> Para                   Starting point
-> m Para                 SGD result

Monadic version of the stochastic gradient descent method. A notification function can be used to provide the user with information about the progress of the learning process.
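
A hedged sketch of the monadic version run in IO (an instance of PrimMonad), reusing grad, ys and para0 from the sketches above; the notification prints the update counter and the current parameter value.

  import qualified Data.Vector.Unboxed as U

  import Numeric.SGD

  -- Notification invoked with the current parameters and an update
  -- counter; IO is an instance of PrimMonad, so it can be used directly.
  notify :: Para -> Int -> IO ()
  notify para k =
    putStrLn $ "update " ++ show k ++ ": theta = " ++ show (para U.! 0)

  -- The same training run as the pure example, but with progress
  -- reporting on every update.
  trainIO :: IO Para
  trainIO = sgdM sgdArgsDefault notify grad ys para0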