mltool-0.1.0.1: Machine Learning Toolbox

Copyright: (c) Alexander Ignatyev, 2016-2017
License: BSD-3
Stability: experimental
Portability: POSIX
Safe Haskell: None
Language: Haskell2010

MachineLearning.Classification.OneVsAll

Description

One-vs-All Classification.

Synopsis

Documentation

data MinimizeMethod Source #

Constructors

GradientDescent R

Gradient descent, takes the learning rate alpha. Requires feature normalization.

MinibatchGradientDescent Int Int R

Minibatch Gradient Descent, takes seed, batch size and alpha.

ConjugateGradientFR R R

Fletcher-Reeves conjugate gradient algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine).

ConjugateGradientPR R R

Polak-Ribiere conjugate gradient algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine).

BFGS2 R R

Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine).
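As a rough illustration, each method is constructed as follows; the numeric values below are only example starting points, not recommendations baked into the library:

    gd   = GradientDescent 0.01                 -- learning rate alpha
    mbgd = MinibatchGradientDescent 42 64 0.01  -- seed, batch size, alpha
    cgfr = ConjugateGradientFR 0.1 0.1          -- first trial step, tol
    cgpr = ConjugateGradientPR 0.1 0.1          -- first trial step, tol
    bfgs = BFGS2 0.1 0.1                        -- first trial step, tol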

predict :: Matrix -> [Vector] -> Vector Source #

One-vs-All Classification prediction function. Takes a matrix of features X and a list of theta vectors (one per class); returns the predicted class number for each sample, assuming that class numbers start at 0.
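For example (a sketch: x is assumed to be a preprocessed feature matrix and thetas the list of per-class theta vectors returned by learn below):

    predictedY = predict x thetas  -- one 0-based class number per row of x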

learn Source #

Arguments

:: MinimizeMethod

(e.g. BFGS2 0.1 0.1)

-> R

epsilon, desired precision of the solution;

-> Int

maximum number of iterations allowed;

-> Regularization

regularization parameter lambda;

-> Int

number of labels;

-> Matrix

matrix X;

-> Vector

vector y;

-> [Vector]

initial theta list;

-> ([Vector], [Matrix])

list of solution vectors (theta, one per class) and list of optimization paths.

Learns One-vs-All Classification
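A minimal end-to-end sketch of a call, where x, y, initialThetas and numLabels are assumed to be prepared by the caller (with features normalized if the chosen method requires it), and the parameter values are illustrative only:

    (thetas, optPaths) =
      learn (BFGS2 0.1 0.1)  -- minimization method
            0.0001           -- epsilon, desired precision of the solution
            1500             -- maximum number of iterations allowed
            (L2 0.5)         -- L2 regularization, lambda = 0.5
            numLabels        -- number of labels (classes)
            x                -- feature matrix X
            y                -- label vector y, class numbers starting at 0
            initialThetas    -- one initial theta vector per class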

calcAccuracy :: Vector -> Vector -> R Source #

Calculates the accuracy of classification predictions. Takes the vector of expected y and the vector of predicted y. Returns a number from 0 to 1; the closer to 1, the better the accuracy. Suitable for both classification types: binary and multiclass.
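For example, combined with predict (a sketch reusing the hypothetical names from the examples above):

    accuracy = calcAccuracy y (predict x thetas)  -- value between 0 and 1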

data Regularization Source #

Regularization

Constructors

RegNone

No regularization

L2 R

L2 regularization, takes the regularization parameter lambda.
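One common use is to sweep the regularization strength and keep the value that scores best on a held-out set. A sketch, where xTrain, yTrain, xVal, yVal, numLabels and initialThetas are hypothetical names supplied by the caller:

    best = maximum
      [ (calcAccuracy yVal (predict xVal thetas), lambda)
      | lambda <- [0.01, 0.1, 1, 10]
      , let (thetas, _) = learn (BFGS2 0.1 0.1) 0.0001 1500 (L2 lambda)
                                numLabels xTrain yTrain initialThetas
      ]
      -- best pairs the highest validation accuracy with its lambda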