hfann-0.4.2: Haskell binding to the FANN library

Portability: portable
Stability: experimental
Maintainer: olivier.boudry@gmail.com
Safe Haskell: None

HFANN.Train

Contents

Description

The Fast Artificial Neural Network Library (FANN) is a free open source neural network library written in C with support for both fully connected and sparsely connected networks (http://leenissen.dk/fann/).

HFANN is a Haskell interface to this library.

Synopsis

Training

train

Arguments

:: FannPtr

The ANN to be trained

-> [FannType]

The input

-> [FannType]

The expected output

-> IO () 

Train the Neural Network on the given input and output values
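
As a sketch (assuming `ann` is a FannPtr obtained from the library's network-creation functions, which live outside this module), a single XOR pattern can be presented like this:

```haskell
-- Sketch: present one XOR pattern (input [0,1], expected output [1])
-- to an existing network. `ann` is an assumed FannPtr handle.
trainXorPattern :: FannPtr -> IO ()
trainXorPattern ann = train ann [0, 1] [1]
```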

trainEpoch :: FannPtr -> TrainDataPtr -> IO Float

Train one epoch with a set of training data

Train one epoch with the given training data. One epoch is where all the training data is considered exactly once.

The function returns the MSE as calculated either before or during the actual training. This is not the actual MSE after the training epoch, but since calculating that would require going through the entire training set once more, this value is adequate to use during training.

The training algorithm used by this function is chosen by the setTrainingAlgorithm function.

See also: trainOnData, testData
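
For example, a hand-rolled training loop over trainEpoch could stop as soon as the reported MSE drops below a desired error (a sketch; everything except trainEpoch is an assumption):

```haskell
-- Sketch: train epoch by epoch until the MSE reported by trainEpoch
-- drops below the desired error, or maxEpochs is reached.
trainUntil :: FannPtr -> TrainDataPtr -> Int -> Float -> IO ()
trainUntil ann tdata maxEpochs desiredError = go 1
  where
    go epoch
      | epoch > maxEpochs = return ()
      | otherwise = do
          mse <- trainEpoch ann tdata
          if mse < desiredError
            then return ()
            else go (epoch + 1)
```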

trainOnFile

Arguments

:: FannPtr

The ANN to be trained

-> String

The path to the training data file

-> Int

The max number of epochs to train

-> Int

The number of epochs between reports

-> Double

The desired error

-> IO () 

Train the Neural Network on the given data file

trainOnData

Arguments

:: FannPtr

The ANN to be trained

-> TrainDataPtr

The training data

-> Int

The max number of epochs to train

-> Int

The number of epochs between reports

-> Double

The desired error

-> IO () 

Train the Neural Network on a training dataset.

Instead of printing out reports every "epochs between reports", a callback function can be called (see setCallback)

A value of zero in the epochs between reports means no reports should be printed.
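
Putting the arguments together, a typical call could look like this sketch (`ann` and `tdata` are assumed handles):

```haskell
-- Sketch: at most 1000 epochs, a report every 100 epochs,
-- stop once the MSE reaches 0.001.
runTraining :: FannPtr -> TrainDataPtr -> IO ()
runTraining ann tdata = trainOnData ann tdata 1000 100 0.001
```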

testData

Arguments

:: FannPtr

The ANN to be used

-> TrainDataPtr

The training data

-> IO CFloat

The error value

Test ANN on training data

This function will run the ANN on the training data and return the error value. It can be used to check the quality of the ANN on some test data.

test

Arguments

:: FannPtr

The ANN to be tested

-> [FannType]

The input

-> [FannType]

The expected output

-> IO [FannType] 

Test the Neural Network on the given input and output values

getMSE

Arguments

:: FannPtr

The ANN

-> IO Float

The mean square error

Get the mean square error from the ANN

This value is calculated during training or testing, and can therefore sometimes be a bit off if the weights have been changed since the last calculation of the value.

resetMSE :: FannPtr -> IO ()

Reset the mean square error from the network.

This function also resets the number of bits that fail.

getBitFail

Arguments

:: FannPtr

The ANN

-> IO Int

The number of fail bits

Get the number of fail bits

The number of fail bits is the number of output neurons which differ from the desired output by more than the bit fail limit (see getBitFailLimit, setBitFailLimit).

This value is reset by resetMSE and updated by the same functions that update the MSE value (testData, trainEpoch).

Training data manipulation

withTrainData

Arguments

:: String

The path to the training data file

-> (TrainDataPtr -> IO a)

A function using the training data

-> IO a

The return value

Read training data from file and run the given function on that data.
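
Because withTrainData brackets loading and deallocation, destroyTrainData need not be called manually. A sketch ("xor.data" is a hypothetical file; `ann` an assumed FannPtr):

```haskell
-- Sketch: load "xor.data", shuffle it, train on it; the training data
-- is deallocated automatically when the inner action returns.
trainFromFile :: FannPtr -> IO ()
trainFromFile ann =
  withTrainData "xor.data" $ \tdata -> do
    shuffleTrainData tdata
    trainOnData ann tdata 1000 100 0.001
```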

loadTrainData

Arguments

:: String

Path to the data file

-> IO TrainDataPtr

The loaded training data

Reads training data from a file.

The file must be formatted like:

 num_records num_input num_output
 inputdata separated by space
 outputdata separated by space

 ...
 ...

 inputdata separated by space
 outputdata separated by space

See also: trainOnData, destroyTrainData, saveTrainData
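
For instance, a hypothetical training file for the XOR function, with 4 records, 2 inputs and 1 output, would look like:

```
4 2 1
0 0
0
0 1
1
1 0
1
1 1
0
```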

destroyTrainData

Arguments

:: TrainDataPtr

The data to destroy

-> IO () 

Destroy training data

Destroys the training data and properly deallocates the memory.

Be sure to use this function when you have finished using the training data, unless the training data was obtained through a withTrainData call.

shuffleTrainData

Arguments

:: TrainDataPtr

The data to randomly reorder

-> IO () 

Shuffles training data, randomizing the order.

This is recommended for incremental training; it has no influence on batch training.

scaleInputTrainData

Arguments

:: TrainDataPtr

The data to be scaled

-> FannType

The minimum bound

-> FannType

The maximum bound

-> IO () 

Scales the inputs in the training data to the specified range.

See also: scaleOutputData, scaleTrainData

scaleOutputTrainData

Arguments

:: TrainDataPtr

The data to be scaled

-> FannType

The minimum bound

-> FannType

The maximum bound

-> IO () 

Scales the output in the training data to the specified range.

See also: scaleInputData, scaleTrainData

scaleTrainData

Arguments

:: TrainDataPtr

The data to be scaled

-> FannType

The minimum bound

-> FannType

The maximum bound

-> IO () 

Scales the inputs and outputs in the training data to the specified range.

See also: scaleOutputData, scaleInputData

mergeTrainData

Arguments

:: TrainDataPtr

training data set 1

-> TrainDataPtr

training data set 2

-> IO TrainDataPtr

a copy of the merged data sets 1 and 2

Merges two training data sets into a new one.

duplicateTrainData

Arguments

:: TrainDataPtr

The training data

-> IO TrainDataPtr

A new copy

Returns an exact copy of a training data set.

subsetTrainData :: TrainDataPtr -> Int -> Int -> IO TrainDataPtr

Returns a copy of a subset of the training data, starting at the given offset and taking the given count of elements.

 len <- trainDataLength tdata
 newtdata <- subsetTrainData tdata 0 len

This does the same as duplicateTrainData.

See also: trainDataLength

trainDataLength :: TrainDataPtr -> IO Int

Returns the number of training patterns in the training data.

getTrainDataInputNodesCount :: TrainDataPtr -> IO Int

Returns the number of input nodes in the training data

getTrainDataOutputNodesCount :: TrainDataPtr -> IO Int

Returns the number of output nodes in the training data

saveTrainData :: TrainDataPtr -> String -> IO ()

Save the training data structure to a file, in the format described in loadTrainData.

See also loadTrainData

Parameters

getTrainingAlgorithm

Arguments

:: FannPtr

The ANN

-> IO TrainAlgorithm

The training algorithm

Return the training algorithm. This training algorithm is used by trainOnData and associated functions.

Note that this algorithm is also used during cascadeTrainOnData, although only fannTrainRPROP and fannTrainQuickProp are allowed during cascade training.

See also: setTrainingAlgorithm, TrainAlgorithm

setTrainingAlgorithm

Arguments

:: FannPtr

The ANN

-> TrainAlgorithm

The training algorithm

-> IO () 

Set the training algorithm.

See also: getTrainingAlgorithm, TrainAlgorithm

getLearningRate :: FannPtr -> IO Float

Return the learning rate.

The learning rate is used to determine how aggressive the training should be for some of the training algorithms (fannTrainIncremental, fannTrainBatch, fannTrainQuickProp).

Note that it is not used in fannTrainRPROP.

The default learning rate is 0.7.

See also: setLearningRate, setTrainingAlgorithm

setLearningRate :: FannPtr -> Float -> IO ()

Set the learning rate.

See getLearningRate for more information about the learning rate.

See also: getLearningRate

getLearningMomentum :: FannPtr -> IO Float

Return the learning momentum.

The learning momentum can be used to speed up the fannTrainIncremental training algorithm.

Too high a momentum will, however, not benefit training. Setting the momentum to 0 is the same as not using the momentum parameter. The recommended value for this parameter is between 0.0 and 1.0.

The default momentum is 0.

See also: setLearningMomentum, setTrainingAlgorithm

setLearningMomentum :: FannPtr -> Float -> IO ()

Set the learning momentum.

More info available in getLearningMomentum.

setActivationFunction

Arguments

:: FannPtr

The ANN

-> ActivationFunction

The activation function

-> Int

The layer

-> Int

The neuron

-> IO () 

Set the activation function for the specified neuron in the specified layer, counting the input layer as layer 0.

It is not possible to set activation functions for the neurons in the input layer.

When choosing an activation function it is important to note that the activation functions have different output ranges. fannSigmoid is in the 0 .. 1 range, fannSigmoidSymmetric is in the -1 .. 1 range, and fannLinear is unbounded.

The default activation function is fannSigmoidStepwise.

See also: setActivationFunctionLayer, setActivationFunctionHidden, setActivationFunctionOutput, setActivationSteepness
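
As an illustrative sketch, the per-neuron setter can be mapped over a layer (here layer 1, with an assumed size of 4 neurons; `ann` is an assumed FannPtr):

```haskell
-- Sketch: give every neuron of layer 1 (the first hidden layer,
-- counting the input layer as 0) a symmetric sigmoid activation.
symmetricLayerOne :: FannPtr -> IO ()
symmetricLayerOne ann =
  mapM_ (setActivationFunction ann fannSigmoidSymmetric 1) [0 .. 3]
```

In practice setActivationFunctionLayer does this in a single call; the sketch only illustrates the per-neuron argument order.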

setActivationFunctionLayer

Arguments

:: FannPtr

The ANN

-> ActivationFunction

The activation function

-> Int

The layer

-> IO () 

Set the activation function for all neurons of a given layer, counting the input layer as layer 0.

It is not possible to set an activation function for the neurons in the input layer.

See also: setActivationFunction, setActivationFunctionHidden, setActivationFunctionOutput, setActivationSteepnessLayer

setActivationFunctionHidden

Arguments

:: FannPtr

The ANN

-> ActivationFunction

The Activation Function

-> IO () 

Set the activation function for all the hidden layers.

See also: setActivationFunction, setActivationFunctionLayer, setActivationFunctionOutput

setActivationFunctionOutput

Arguments

:: FannPtr

The ANN

-> ActivationFunction

The Activation Function

-> IO () 

Set the activation function for the output layer.

See also: setActivationFunction, setActivationFunctionLayer, setActivationFunctionHidden

setActivationSteepness

Arguments

:: FannPtr

The ANN

-> FannType

The steepness

-> Int

The layer

-> Int

The neuron

-> IO () 

Set the activation steepness of the specified neuron in the specified layer, counting the input layer as 0.

It is not possible to set activation steepness for the neurons in the input layer.

The steepness of an activation function determines how fast the activation function goes from its minimum to its maximum. A high steepness value will also give more aggressive training.

When training networks where the output values should be at the extremes (usually 0 and 1, depending on the activation function), a steep activation can be used (e.g. 1.0).

The default activation steepness is 0.5

See also: setActivationSteepnessLayer, setActivationSteepnessHidden, setActivationSteepnessOutput, setActivationFunction

setActivationSteepnessLayer

Arguments

:: FannPtr

The ANN

-> FannType

The steepness

-> Int

The layer

-> IO () 

Set the activation steepness for all of the neurons in the given layer, counting the input layer as layer 0.

It is not possible to set the activation steepness for the neurons in the input layer.

See also: setActivationSteepness, setActivationSteepnessHidden, setActivationSteepnessOutput, setActivationFunction.

setActivationSteepnessHidden

Arguments

:: FannPtr

The ANN

-> FannType

The steepness

-> IO () 

Set the activation steepness of all the nodes in all hidden layers.

See also: setActivationSteepness, setActivationSteepnessLayer, setActivationSteepnessOutput, setActivationFunction

setActivationSteepnessOutput

Arguments

:: FannPtr

The ANN

-> FannType

The steepness

-> IO () 

Set the activation steepness of all the nodes in the output layer.

See also: setActivationSteepness, setActivationSteepnessLayer, setActivationSteepnessHidden, setActivationFunction

getTrainErrorFunction

Arguments

:: FannPtr

The ANN

-> IO ErrorFunction

The error function

Return the error function used during training.

The error function is described in ErrorFunction

The default error function is errorFunctionTanH

See also: setTrainErrorFunction

setTrainErrorFunction

Arguments

:: FannPtr

The ANN

-> ErrorFunction

The error function

-> IO () 

Set the error function used during training.

The error function is described in ErrorFunction

See also: getTrainErrorFunction

getTrainStopFunction :: FannPtr -> IO StopFunction

Returns the stop function used during training.

The stop function is described in StopFunction

The default stop function is stopFunctionMSE

See also: setTrainStopFunction, setBitFailLimit

setTrainStopFunction :: FannPtr -> StopFunction -> IO ()

Set the stop function used during training.

The stop function is described in StopFunction

The default stop function is stopFunctionMSE

See also: getTrainStopFunction, getBitFailLimit

getBitFailLimit :: FannPtr -> IO FannType

Returns the bit fail limit used during training.

The bit fail limit is used during training when the StopFunction is set to stopFunctionBit.

The limit is the maximum accepted difference between the desired output and the actual output during training. Each output that diverges more than this is counted as an error bit.

This difference is divided by two when dealing with symmetric activation functions, so that symmetric and non-symmetric activation functions can use the same limit.

The default bit fail limit is 0.35.

See also: setBitFailLimit

setBitFailLimit :: FannPtr -> FannType -> IO ()

Set the bit fail limit used during training.

See also: getBitFailLimit

setCallback :: FannPtr -> CallbackType -> IO ()

Set the callback function to be used for reporting and to stop training

The callback function will be called at the frequency defined by the epochs between reports parameter.

The type of the callback function is:

 callback :: FannPtr      -- The ANN being trained
          -> TrainDataPtr -- The training data in use
          -> Int          -- Max number of epochs
          -> Int          -- Number of epochs between reports
          -> Float        -- Desired error
          -> Int          -- Current epoch
          -> Bool         -- True to terminate training, False to continue
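
A minimal callback of this shape could, for example, force training to stop after a fixed epoch count (a sketch; whether CallbackType matches this pure shape exactly is an assumption based on the type shown above):

```haskell
-- Sketch: terminate training once the current epoch reaches 500,
-- regardless of the desired error.
stopAt500 :: FannPtr -> TrainDataPtr -> Int -> Int -> Float -> Int -> Bool
stopAt500 _ann _tdata _maxEpochs _reportInterval _desiredError epoch =
  epoch >= 500

-- Installed with: setCallback ann stopAt500
```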

getQuickPropDecay :: FannPtr -> IO Float

Returns the quickprop decay

The decay is a small negative number; it is the factor by which the weights are made smaller in each iteration of quickprop training.

This is used to make sure that the weights do not become too high during training.

The default decay is -0.0001

See also: setQuickPropDecay

setQuickPropDecay :: FannPtr -> Float -> IO ()

Sets the quickprop decay factor

See also: getQuickPropDecay

getQuickPropMu :: FannPtr -> IO Float

Returns the quickprop mu factor

The mu factor is used to increase and decrease the step-size during quickprop training. The mu factor should always be above 1, since it would otherwise decrease the step-size when it was supposed to increase it.

The default mu factor is 1.75

See also: setQuickPropMu

setQuickPropMu :: FannPtr -> Float -> IO ()

Sets the quickprop mu factor

See also: getQuickPropMu

getRPROPIncreaseFactor :: FannPtr -> IO Float

Returns the RPROP increase factor

The RPROP increase factor is a value larger than 1, which is used to increase the step-size during RPROP training.

The default increase factor is 1.2

See also: setRPROPIncreaseFactor

setRPROPIncreaseFactor :: FannPtr -> Float -> IO ()

Sets the RPROP increase factor

See also: getRPROPIncreaseFactor

getRPROPDecreaseFactor :: FannPtr -> IO Float

Returns the RPROP decrease factor

The RPROP decrease factor is a value smaller than 1, which is used to decrease the step-size during RPROP training.

The default decrease factor is 0.5

See also: setRPROPDecreaseFactor

setRPROPDecreaseFactor :: FannPtr -> Float -> IO ()

Sets the RPROP decrease factor

See also: getRPROPDecreaseFactor

getRPROPDeltaMin :: FannPtr -> IO Float

Returns the RPROP delta min factor

The delta min factor is a small positive number determining how small the minimum step-size may be.

The default value delta min is 0.0

See also: setRPROPDeltaMin

setRPROPDeltaMin :: FannPtr -> Float -> IO ()

Sets the RPROP delta min

See also: getRPROPDeltaMin

getRPROPDeltaMax :: FannPtr -> IO Float

Returns the RPROP delta max factor

The delta max factor is a positive number determining how large the maximum step-size may be.

The default value delta max is 50.0

See also: setRPROPDeltaMax

setRPROPDeltaMax :: FannPtr -> Float -> IO ()

Sets the RPROP delta max

See also: getRPROPDeltaMax