sgd: Stochastic gradient descent library

[ bsd3, library, math ]

Import Numeric.SGD to use the library.


Versions 0.1.0, 0.2.0, 0.2.1, 0.2.2, 0.3, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.3.6, 0.3.7, 0.4.0, 0.4.0.1, 0.5.0.0, 0.6.0.0, 0.7.0.0, 0.7.0.1, 0.8.0.0, 0.8.0.1, 0.8.0.2
Change log changelog
Dependencies base (>=4.7 && <5), binary (>=0.5 && <0.9), bytestring (>=0.9 && <0.11), containers (>=0.4 && <0.7), data-default (==0.7.*), deepseq (>=1.3 && <1.5), filepath (>=1.3 && <1.5), hmatrix (==0.19.*), logfloat (>=0.12 && <0.14), monad-par (>=0.3.4 && <0.4), mtl (>=2.0 && <2.3), parallel (==3.2.*), pipes (==4.3.*), primitive (>=0.5 && <0.7), random (>=1.0 && <1.2), random-shuffle (>=0.0.4 && <0.1), temporary (>=1.1 && <1.4), vector (>=0.10 && <0.13) [details]
License BSD-3-Clause
Copyright 2012-2019 Jakub Waszczuk
Author Jakub Waszczuk
Maintainer waszczuk.kuba@gmail.com
Category Math
Home page https://github.com/kawu/sgd#readme
Bug tracker https://github.com/kawu/sgd/issues
Source repo head: git clone https://github.com/kawu/sgd
Uploaded by JakubWaszczuk at Fri May 24 11:26:33 UTC 2019
Distributions NixOS:0.8.0.2
Downloads 7120 total (317 in the last 30 days)
Status Hackage Matrix CI
Docs available [build log]
Last success reported on 2019-05-24 [all 1 reports]

Readme for sgd-0.8.0.2


Haskell stochastic gradient descent library

Stochastic gradient descent (SGD) is a method for optimizing a global objective function defined as a sum of smaller, differentiable functions. In each iteration of SGD, the gradient is calculated based on a subset of the training dataset. In Haskell, this process can be simply represented as a fold over a sequence of dataset subsets (singleton elements in the extreme case).
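
To make the "SGD as a fold" view concrete, here is a minimal sketch in plain Haskell (not the sgd library's API); the `Params`, `Batch`, `gradOn`, `step` and `sgd` names are illustrative, and the example fits a toy linear model:

```haskell
import Data.List (foldl')

type Params = [Double]
type Batch  = [(Double, Double)]   -- (input, target) pairs

-- Gradient of a mean-squared-error objective for the model y = a*x + b,
-- computed on a single mini-batch.
gradOn :: Batch -> Params -> Params
gradOn batch [a, b] =
  let n = fromIntegral (length batch)
      residual (x, y) = a * x + b - y
      da = 2 / n * sum [ residual p * fst p | p <- batch ]
      db = 2 / n * sum [ residual p         | p <- batch ]
  in  [da, db]
gradOn _ ps = ps

-- One SGD step with a fixed learning rate.
step :: Double -> Params -> Batch -> Params
step eta ps batch = zipWith (\p g -> p - eta * g) ps (gradOn batch ps)

-- SGD over a sequence of mini-batches is just a left fold.
sgd :: Double -> Params -> [Batch] -> Params
sgd eta = foldl' (step eta)

main :: IO ()
main = do
  let batches = replicate 2000 [(1, 3), (2, 5), (3, 7)]   -- fit y = 2*x + 1
  print (sgd 0.05 [0, 0] batches)   -- converges towards [2.0, 1.0]
```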

However, it can be beneficial to select the subsequent subsets randomly (e.g., shuffle the entire dataset before each pass). Moreover, the dataset can be large enough to make it impractical to store it all in memory. Hence, the sgd library adopts a pipe-based interface in which SGD takes the form of a process consuming dataset subsets (the so-called mini-batches) and producing a stream of output parameter values.
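
A conceptual sketch of this pipe-based view, written directly against the pipes package (the sgd library's actual types and functions may differ; `sgdPipe` and the counting "update step" below are purely illustrative):

```haskell
import Pipes
import qualified Pipes.Prelude as P

-- A process that consumes mini-batches and emits the parameter values
-- obtained after each update.
sgdPipe
  :: Monad m
  => (params -> batch -> params)   -- ^ one SGD update step
  -> params                        -- ^ initial parameter values
  -> Pipe batch params m r
sgdPipe step = go
  where
    go ps = do
      batch <- await               -- wait for the next mini-batch
      let ps' = step ps batch
      yield ps'                    -- emit the updated parameters downstream
      go ps'

-- Toy usage: the "update step" just counts the examples seen so far.
main :: IO ()
main = runEffect $
      each [[1, 2], [3], [4, 5, 6 :: Int]]                 -- stream of mini-batches
  >-> sgdPipe (\n batch -> n + length batch) (0 :: Int)
  >-> P.print
```

In this shape, the upstream producer is free to shuffle the data, read it lazily from disk, or cycle over it several times, while the SGD process itself stays a simple stream transformer.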

The sgd library implements several SGD variants (SGD with momentum, AdaDelta, Adam) and handles heterogeneous parameter representations (vectors, maps, custom records, etc.). It can be used in combination with automatic differentiation libraries (ad, backprop), which can automatically calculate the gradient of the objective function.
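
For illustration, here is a small sketch of pairing a hand-rolled SGD step with the ad package's `Numeric.AD.grad` (again, not the sgd library's own integration; `objective` and `step` are assumed names, and the data is a toy linear-regression batch):

```haskell
import Numeric.AD (grad)

-- Squared-error objective for the model y = a*x + b on one mini-batch,
-- written polymorphically so that ad can differentiate it.
objective :: Floating a => [(a, a)] -> [a] -> a
objective batch [a, b] = sum [ (a * x + b - y) ^ 2 | (x, y) <- batch ]
objective _ _ = 0

-- One SGD step: move the parameters against the gradient computed by ad.
step :: Double -> [(Double, Double)] -> [Double] -> [Double]
step eta batch ps = zipWith (\p g -> p - eta * g) ps gs
  where
    gs = grad (objective (map embed batch)) ps
    embed (x, y) = (realToFrac x, realToFrac y)   -- embed the data into the AD number type

main :: IO ()
main = do
  let batch = [(1, 3), (2, 5), (3, 7)]            -- toy data for y = 2*x + 1
  print (iterate (step 0.02 batch) [0, 0] !! 2000)
```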

See the package's Hackage page for the library documentation.