mltool-0.2.0.1: Machine Learning Toolbox

Copyright    (c) Alexander Ignatyev 2016-2018
License      BSD-3
Stability    experimental
Portability  POSIX
Safe Haskell None
Language     Haskell2010

MachineLearning.Regression


Documentation

class Model a where Source #

Minimal complete definition

hypothesis, cost, gradient

Methods

hypothesis :: a -> Matrix -> Vector -> Vector Source #

Hypothesis function, a.k.a. score function (for classification problems). Takes X (m x n) and theta (n x 1), returns y (m x 1).

cost :: a -> Regularization -> Matrix -> Vector -> Vector -> R Source #

Cost function J(Theta), a.k.a. loss function. It takes the regularization parameter, matrix X (m x n), vector y (m x 1) and vector theta (n x 1).

gradient :: a -> Regularization -> Matrix -> Vector -> Vector -> Vector Source #

Gradient function. It takes the regularization parameter, X (m x n), y (m x 1) and theta (n x 1). Returns a vector of gradients (n x 1).
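
To make the shapes concrete, here is a minimal sketch of what a least-squares instance of Model could look like. This is an illustration only, not the library's actual instance; it assumes Matrix, Vector and R are the usual hmatrix aliases (Matrix Double, Vector Double, Double) and it ignores the Regularization argument.

  import Numeric.LinearAlgebra

  data LeastSquares = LeastSquares

  instance Model LeastSquares where
    -- h(theta) = X * theta
    hypothesis _ x theta = x #> theta
    -- J(theta) = (1 / 2m) * ||X theta - y||^2 (regularization term omitted)
    cost _ _ x y theta =
      let m = fromIntegral (rows x)
          r = x #> theta - y
      in (r <.> r) / (2 * m)
    -- grad J = (1/m) * X' * (X theta - y)
    gradient _ _ x y theta =
      let m = fromIntegral (rows x)
      in scale (1 / m) (tr x #> (x #> theta - y))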

data MinimizeMethod Source #

Constructors

GradientDescent R

Gradient descent, takes alpha. Requires feature normalization.

MinibatchGradientDescent Int Int R

Minibatch Gradient Descent, takes seed, batch size and alpha.

ConjugateGradientFR R R

Fletcher-Reeves conjugate gradient algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine).

ConjugateGradientPR R R

Polak-Ribiere conjugate gradient algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine).

BFGS2 R R

Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine).
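
For illustration, each constructor might be instantiated along these lines (the numeric values are either the suggestions above or plausible placeholders, not prescriptions):

  gd   = GradientDescent 0.01                 -- alpha = 0.01
  mbgd = MinibatchGradientDescent 42 64 0.01  -- seed 42, batch size 64, alpha 0.01
  cgFR = ConjugateGradientFR 0.1 0.1          -- first trial step 0.1, tol 0.1
  cgPR = ConjugateGradientPR 0.1 0.1
  bfgs = BFGS2 0.1 0.1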

minimize Source #

Arguments

:: Model a
=> MinimizeMethod
-> a                   model (Least Squares, Logistic Regression etc.)
-> R                   epsilon, desired precision of the solution
-> Int                 maximum number of iterations allowed
-> Regularization      regularization parameter
-> Matrix              X
-> Vector              y
-> Vector              initial solution, theta
-> (Vector, Matrix)    solution vector and optimization path

Returns solution vector (theta) and optimization path. Optimization path's row format: [iter number, cost function value, theta values...]
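
A hedged end-to-end sketch of a call. LeastSquares stands in for any Model instance; x, y and the feature count n are assumed to be in scope, and konst comes from hmatrix:

  (theta, path) = minimize (BFGS2 0.1 0.1)  -- first trial step and tol
                           LeastSquares     -- any Model instance
                           0.0001           -- epsilon, desired precision
                           1500             -- maximum number of iterations
                           (L2 1)           -- L2 regularization, lambda = 1
                           x                -- design matrix (m x n)
                           y                -- targets (m x 1)
                           (konst 0 n)      -- initial theta: zeros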

normalEquation :: Matrix -> Vector -> Vector Source #

Normal equation using matrix inverse; does not require feature normalization. It takes X and y, returns theta.

normalEquation_p :: Matrix -> Vector -> Vector Source #

Normal equation using pseudo-inverse; requires feature normalization. It takes X and y, returns theta.
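
Both functions compute the closed-form least-squares solution. In hmatrix terms the two variants are roughly as follows (function names are illustrative; the library's own implementations may differ in detail):

  import Numeric.LinearAlgebra

  -- normalEquation: theta = (X'X)^-1 X'y, via the plain inverse
  neInv :: Matrix Double -> Vector Double -> Vector Double
  neInv x y = inv (tr x <> x) #> (tr x #> y)

  -- normalEquation_p: theta = pinv(X) y, via the Moore-Penrose pseudo-inverse
  nePinv :: Matrix Double -> Vector Double -> Vector Double
  nePinv x y = pinv x #> y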

data Regularization Source #

Regularization

Constructors

RegNone

No regularization

L2 R

L2 Regularization
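
As a sketch of what the L2 constructor's parameter (lambda) typically contributes to the cost, with the bias term theta_0 conventionally excluded; this reflects the standard formulation, not necessarily the library's exact code:

  import Numeric.LinearAlgebra

  -- J_reg(theta) = J(theta) + (lambda / 2m) * sum of theta_j^2 for j >= 1
  l2Penalty :: Double -> Int -> Vector Double -> Double
  l2Penalty lambda m theta =
    let theta' = subVector 1 (size theta - 1) theta  -- drop the bias term
    in (lambda / (2 * fromIntegral m)) * (theta' <.> theta')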