| Copyright | (c) Alexander Ignatyev 2016-2017 |
|---|---|
| License | BSD-3 |
| Stability | experimental |
| Portability | POSIX |
| Safe Haskell | None |
| Language | Haskell2010 |
MachineLearning.Classification.OneVsAll
Description
One-vs-All Classification.
- data MinimizeMethod
- predict :: Matrix -> [Vector] -> Vector
- learn :: MinimizeMethod -> R -> Int -> Regularization -> Int -> Matrix -> Vector -> [Vector] -> ([Vector], [Matrix])
- calcAccuracy :: Vector -> Vector -> R
- data Regularization
Documentation
data MinimizeMethod Source #
Constructors
| GradientDescent R | Gradient descent, takes alpha. Requires feature normalization. |
| MinibatchGradientDescent Int Int R | Minibatch Gradient Descent, takes seed, batch size and alpha. |
| ConjugateGradientFR R R | Fletcher-Reeves conjugate gradient algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine). |
| ConjugateGradientPR R R | Polak-Ribiere conjugate gradient algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine). |
| BFGS2 R R | Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine). |
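For illustration, each constructor might be instantiated like this (the numeric values are the suggested defaults from the descriptions above; the seed and batch size are arbitrary):

```haskell
import MachineLearning.Classification.OneVsAll

gd, mbgd, cgFR, cgPR, bfgs :: MinimizeMethod
gd   = GradientDescent 0.01                 -- alpha = 0.01 (features must be normalized)
mbgd = MinibatchGradientDescent 42 64 0.01  -- seed = 42, batch size = 64, alpha = 0.01
cgFR = ConjugateGradientFR 0.1 0.1          -- first trial step = 0.1, tol = 0.1
cgPR = ConjugateGradientPR 0.1 0.1
bfgs = BFGS2 0.1 0.1
```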
predict :: Matrix -> [Vector] -> Vector Source #
One-vs-All Classification prediction function. Takes a matrix of features X and a list of vectors theta, returns predicted class number assuming that class numbers start at 0.
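A minimal sketch of calling predict, assuming Matrix and Vector are the hmatrix types over Double that mltool re-exports, and that X already carries a bias column. The concrete numbers are made up for illustration:

```haskell
import qualified Numeric.LinearAlgebra as LA
import MachineLearning.Classification.OneVsAll (predict)

-- Two samples with three features each (first column is the bias term),
-- and one theta vector per class (here: two classes, 0 and 1).
classes :: LA.Vector Double
classes = predict x thetas
  where
    x      = LA.matrix 3 [ 1, 0.5, 1.2
                         , 1, 2.0, 0.3 ]
    thetas = [ LA.vector [0,  1, -1]   -- theta for class 0
             , LA.vector [0, -1,  1] ] -- theta for class 1
```

For each row of X, the result is the index of the theta whose hypothesis scores highest, which is why class numbers are assumed to start at 0.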
learn Source #
Arguments
| :: MinimizeMethod | minimization method, e.g. BFGS2 0.1 0.1 |
| -> R | epsilon, desired precision of the solution |
| -> Int | maximum number of iterations allowed |
| -> Regularization | regularization parameter lambda |
| -> Int | number of labels (classes) |
| -> Matrix | matrix X |
| -> Vector | vector y |
| -> [Vector] | initial theta list, one vector per label |
| -> ([Vector], [Matrix]) | theta list (the solution) and optimization path for every label |
Learns One-vs-All Classification: fits one binary classifier per label and returns the learned theta list together with the optimization path of each classifier.
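The pieces above can be combined into an end-to-end sketch. This assumes the Regularization type exposes an L2 constructor taking lambda (as in mltool's regularization module), that X includes a bias column, and that y holds labels 0..numLabels-1; epsilon, the iteration cap and lambda are placeholder values:

```haskell
import qualified Numeric.LinearAlgebra as LA
import MachineLearning.Classification.OneVsAll

-- Train one classifier per label, then predict and score on the training set.
trainAndScore :: LA.Matrix Double -> LA.Vector Double -> Int
              -> ([LA.Vector Double], Double)
trainAndScore x y numLabels =
  let -- Start every classifier from the zero vector (one theta per label).
      initialThetas    = replicate numLabels (LA.konst 0 (LA.cols x))
      -- BFGS2 0.1 0.1: first trial step and tol; 0.0001: epsilon;
      -- 50: max iterations; L2 1: assumed lambda = 1 regularization.
      (thetas, _paths) = learn (BFGS2 0.1 0.1) 0.0001 50 (L2 1)
                               numLabels x y initialThetas
      accuracy         = calcAccuracy y (predict x thetas)
  in (thetas, accuracy)
```

calcAccuracy compares the predicted class vector against y and returns the fraction of correctly classified samples.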