hfann-0.3 - Haskell bindings to the FANN (Fast Artificial Neural Network) library

Module HFANN.Data
Portability: portable
Stability: experimental
Maintainer: olivier.boudry@gmail.com

ActivationFunction
  The type of the Activation Function enumeration.

StopFunction
  Stop function used during training.

  stopFunctionMSE - the stop criterion is the Mean Square Error value.

  stopFunctionBit - the stop criterion is the number of bits that fail. See
  getBitFailLimit, setBitFailLimit. The bits are counted over all of the
  training data, so this number can be higher than the number of training
  patterns.

ErrorFunction
  Error function used during training.

  errorFunctionLinear - standard linear error function.

  errorFunctionTanH - tanh error function, usually better but it can require
  a lower learning rate. This error function aggressively targets outputs
  that differ much from the desired values, while not targeting outputs that
  only differ a little. The tanh function is not recommended for cascade
  training and incremental training.

TrainAlgorithm
  The type of the Training Algorithm enumeration.

CCallbackType
  The C callback function type.

CallbackType
  The Haskell callback function type.

TrainDataPtr
  A pointer to the training data structure type.

TrainData
  Data type of the training data structure.

FannPtr
  A pointer to an ANN structure.

Fann
  The ANN structure.

CFannTypePtr
  A pointer to the C input/output type.

CFannType
  The C input/output type. This is the data type used in the C library to
  represent the input/output data.

FannType
  The Haskell input/output type. This is the data type used in Haskell to
  represent the input/output data.

mkCallback
  Create a callback function to be used during training for reporting and
  to stop the training.

Module HFANN.IO
Portability: portable
Stability: experimental
Maintainer: olivier.boudry@gmail.com

Low-level bindings:

f_fann_create_from_file - load a saved ANN from a file.
f_fann_save - save an ANN to a file.

destroyFann
  Destroy the Neural Network, releasing memory. (fann_destroy belongs to
  HFANN.Base but is duplicated here so it does not have to be exported.)

loadSavedFann
  Load a saved ANN from a file.

saveFann
  Save an Artificial Neural Network (ANN) to a file.
  Arguments: the ANN to be saved; the path of the file to be created.

withSavedFann
  Load an ANN from a file and call the given function with the ANN as
  argument. Once finished, destroy the ANN.
  Arguments: the path to the file containing the ANN; a function to be run
  on the ANN. Returns the return value of the given function.
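As an illustration of the save/load round trip, here is a minimal sketch.
The signatures are assumed from the argument descriptions above
(saveFann :: FannPtr -> String -> IO (), withSavedFann :: String ->
(FannPtr -> IO a) -> IO a); the file name "xor.net" is hypothetical.

    import HFANN

    -- Restore a previously saved network from disk, run it on one input
    -- pattern and print the outputs. The ANN is destroyed automatically
    -- when withSavedFann's action finishes.
    reloadAndRun :: IO ()
    reloadAndRun =
      withSavedFann "xor.net" $ \ann -> do
        out <- runFann ann [0, 1]  -- runFann is described in HFANN.Base below
        print out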
Module HFANN.Base
Portability: portable
Stability: experimental
Maintainer: olivier.boudry@gmail.com

Low-level bindings:

f_fann_get_num_output - return the number of output nodes.
f_fann_get_total_connections - return the total number of connections.
f_fann_get_total_neurons - return the total number of nodes.
f_fann_get_num_input - return the number of input nodes.
f_fann_randomize_weights - randomize the weights to values in the given range.
f_fann_run - run the trained Neural Network with a specific input.
f_fann_create_shortcut_array - create a sparse, not fully connected Neural Network with shortcuts between layers.
f_fann_create_sparse_array - create a sparse, not fully connected Neural Network.
f_fann_create_standard_array - create a standard fully connected Neural Network.

destroyFann
  Destroy the Neural Network, releasing memory.
  Arguments: the ANN to destroy.

printParameters
  Print the ANN parameters.

printConnections
  Print the ANN connections.

initWeights
  Initialize the weights using Widrow and Nguyen's algorithm.

  This function behaves similarly to fann_randomize_weights. It will use the
  algorithm developed by Derrick Nguyen and Bernard Widrow to set the weights
  in such a way as to speed up training. This technique is not always
  successful, and in some cases can be less efficient than a purely random
  initialization.

  The algorithm requires access to the range of the input data (i.e. the
  largest and smallest input) and therefore accepts a second argument, the
  training data that will be used to train the network.

  Arguments: the ANN; the training data used to calibrate the weights.

runFann
  Run the trained Neural Network on the provided input.
  Arguments: the ANN; a list of inputs. Returns a list of outputs.

withStandardFann
  Create a new standard fully connected Neural Network and call the given
  function with the ANN as argument. When finished, destroy the Neural
  Network.

  The structure of the ANN is given by the first parameter. It is an Int
  list giving the number of nodes per layer, from input layer to output
  layer. Example: [2,3,1] would describe an ANN with 2 nodes in the input
  layer, one hidden layer of 3 nodes and 1 node in the output layer.

  The function provided as second argument will be called with the created
  ANN as parameter.

  Arguments: the ANN structure; a function using the ANN. Returns the
  return value of the given function.

withSparseFann
  Create a new sparse, not fully connected Neural Network and call the given
  function with the ANN as argument. When finished, destroy the ANN.
  Arguments: the ratio of connections; the ANN structure; a function using
  the ANN. Returns the return value of the given function.

withShortcutFann
  Create a new sparse, not fully connected Neural Network with shortcut
  connections between layers and call the given function with the ANN as
  argument. When finished, destroy the Neural Network.
  Arguments: the ANN structure; a function using the ANN. Returns the
  return value of the given function.

randomizeWeights
  Randomize weights to values in the given range.

  Weights in a newly created ANN are already initialized to random values.
  You can use this function if you want to customize the random weights'
  upper and lower bounds.

  Arguments: the ANN; the min and max bounds for weight initialization.

getInputNodesCount
  Return the number of input nodes of the Neural Network.
  Arguments: the ANN. Returns the number of input nodes.

getOutputNodesCount
  Return the number of output nodes of the Neural Network.
  Arguments: the ANN. Returns the number of output nodes.

getTotalNodesCount
  Return the total number of nodes of the Neural Network.
  Arguments: the ANN. Returns the number of nodes.

getConnectionsCount
  Return the total number of connections of the Neural Network.
  Arguments: the ANN. Returns the number of connections.

createStandardFann
  Create a new standard fully connected Neural Network.

createSparseFann
  Create a sparse, not fully connected Neural Network.

createShortcutFann
  Create a sparse, not fully connected Neural Network with shortcut
  connections between layers.
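A minimal sketch of creating and exercising a network with
withStandardFann, assuming the signatures implied by the descriptions
above (withStandardFann :: [Int] -> (FannPtr -> IO a) -> IO a and
runFann :: FannPtr -> [FannType] -> IO [FannType]); whether
randomizeWeights takes its bounds as a pair is an assumption.

    import HFANN

    -- Build a 2-3-1 fully connected network, customize the initial
    -- weight bounds and run it on one input pattern. Training is
    -- covered in HFANN.Train below.
    main :: IO ()
    main =
      withStandardFann [2, 3, 1] $ \ann -> do
        randomizeWeights ann (-0.5, 0.5)  -- assumed (min, max) pair
        out <- runFann ann [1, 0]         -- run the (still untrained) network
        print out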
Module HFANN.Train
Portability: portable
Stability: experimental
Maintainer: olivier.boudry@gmail.com

Low-level bindings:

f_fann_get_bit_fail - get the number of fail bits.
f_fann_get_MSE - read the mean square error from the ANN.
f_fann_set_callback - set the callback function used in training.
f_fann_train_on_data - train the ANN on loaded training data.
f_fann_read_train_from_file - load training data from a file.
f_fann_get_rprop_delta_max / f_fann_set_rprop_delta_max - get/set the RPROP delta max factor.
f_fann_get_rprop_delta_min / f_fann_set_rprop_delta_min - get/set the RPROP delta min factor.
f_fann_get_rprop_decrease_factor / f_fann_set_rprop_decrease_factor - get/set the RPROP decrease factor.
f_fann_get_rprop_increase_factor / f_fann_set_rprop_increase_factor - get/set the RPROP increase factor.
f_fann_set_quickprop_mu / f_fann_get_quickprop_mu - set/get the quickprop mu factor.
f_fann_set_quickprop_decay / f_fann_get_quickprop_decay - set/get the quickprop decay.
f_fann_set_bit_fail_limit / f_fann_get_bit_fail_limit - set/get the bit fail limit.
f_fann_set_activation_steepness_output - set the activation steepness for all the nodes in the output layer.
f_fann_set_activation_steepness_hidden - set the activation steepness for all the nodes in the hidden layers.
f_fann_set_activation_steepness_layer - set the activation steepness for all the nodes in a given layer.
f_fann_set_activation_steepness - set the activation steepness of a given neuron in a given layer.
f_fann_set_activation_function_output - set the output nodes' group activation function.
f_fann_set_activation_function_hidden - set the hidden nodes' group activation function.
f_fann_set_activation_function_layer - set the activation function for all neurons in a layer.
f_fann_set_activation_function - set the activation function for one neuron.
f_fann_set_learning_momentum / f_fann_get_learning_momentum - set/return the learning momentum.
f_fann_set_learning_rate / f_fann_get_learning_rate - set/return the learning rate.
f_fann_save_train - save the training data to a file.
f_fann_num_output_train_data - return the number of output nodes in the training data.
f_fann_num_input_train_data - return the number of input nodes in the training data.
f_fann_length_train_data - return the number of training patterns in the training data.
f_fann_subset_train_data - return a copy of a subset of the training data.
f_fann_scale_train_data - scale the inputs and outputs in the training data to the specified range.
f_fann_scale_output_train_data - scale the outputs in the training data to the specified range.
f_fann_scale_input_train_data - scale the inputs in the training data to the specified range.
f_fann_train_on_file - train the Neural Network on the given data file.
f_fann_train_epoch - train one epoch.
f_fann_test - test one iteration.
f_fann_train - train one iteration.

resetMSE
  Reset the mean square error from the network. This function also resets
  the number of bits that fail.

testData
  Test the ANN on training data.

  This function will run the ANN on the training data and return the error
  value. It can be used to check the quality of the ANN on some test data.

  Arguments: the ANN to be used; the training data. Returns the error value.

setTrainStopFunction
  Set the stop function used during training.

  The stop function is described in StopFunction. The default stop function
  is stopFunctionMSE.

  See also: getTrainStopFunction, getBitFailLimit.

getTrainStopFunction
  Return the stop function used during training.

  The stop function is described in StopFunction. The default stop function
  is stopFunctionMSE.

  See also: setTrainStopFunction, setBitFailLimit.

setTrainErrorFunction
  Set the error function used during training.

  The error function is described in ErrorFunction.

  See also: getTrainErrorFunction.
  Arguments: the ANN; the error function.

getTrainErrorFunction
  Return the error function used during training.

  The error function is described in ErrorFunction. The default error
  function is errorFunctionTanH.

  See also: setTrainErrorFunction.
  Arguments: the ANN. Returns the error function.

setTrainingAlgorithm
  Set the training algorithm.

  See also: getTrainingAlgorithm, TrainAlgorithm.
  Arguments: the ANN; the training algorithm.

getTrainingAlgorithm
  Return the training algorithm. This training algorithm is used by
  trainOnData and associated functions.

  Note that this algorithm is also used during cascadeTrainOnData, although
  only fannTrainRPROP and fannTrainQuickProp are allowed during cascade
  training.

  See also: setTrainingAlgorithm.
  Arguments: the ANN. Returns the training algorithm.

duplicateTrainData
  Return an exact copy of a training data set.
  Arguments: the training data. Returns a new copy.

mergeTrainData
  Merge two training data sets into a new one.
  Arguments: training data set 1; training data set 2. Returns a copy of
  the merged data sets 1 and 2.

shuffleTrainData
  Shuffle training data, randomizing the order.

  This is recommended for incremental training, while it has no influence
  during batch training.

  Arguments: the data to randomly reorder.

destroyTrainData
  Destroy training data, properly deallocating the memory.

  Be sure to use this function when you have finished using the training
  data, unless the training data is part of a withTrainData call.

  Arguments: the data to destroy.

train
  Train the Neural Network on the given input and output values.
  Arguments: the ANN to be trained; the input; the expected output.

test
  Test the Neural Network on the given input and output values.
  Arguments: the ANN to be tested; the input; the expected output.

trainOnFile
  Train the Neural Network on the given data file.
  Arguments: the ANN to be trained; the path to the training data file; the
  max number of epochs to train; the number of epochs between reports; the
  desired error.

trainOnData
  Train the Neural Network on a training data set.

  Instead of printing out reports every "epochs between reports", a
  callback function can be called (see setCallback). A value of zero in the
  epochs between reports means no reports should be printed.

  Arguments: the ANN to be trained; the training data; the max number of
  epochs to train; the number of epochs between reports; the desired error.

trainEpoch
  Train one epoch with a set of training data. One epoch is where all the
  training data is considered exactly once.

  The function returns the MSE error as it is calculated either before or
  during the actual training. This is not the actual MSE after the training
  epoch, but since calculating it would require going through the entire
  training set once more, it is more adequate to use this value during
  training.

  The training algorithm used by this function is chosen by the
  setTrainingAlgorithm function.

  See also: trainOnData, testData.
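Since trainEpoch returns the MSE computed during the epoch, it can be used
to build a custom training loop. A minimal sketch, assuming
trainEpoch :: FannPtr -> TrainDataPtr -> IO Float as the description above
implies:

    import Control.Monad (when)
    import HFANN

    -- Train until the per-epoch MSE drops below the desired error or the
    -- epoch budget is exhausted, reporting progress every 100 epochs.
    trainUntil :: FannPtr -> TrainDataPtr -> Float -> Int -> IO ()
    trainUntil ann tdata desiredError maxEpochs = go 1
      where
        go epoch = do
          mse <- trainEpoch ann tdata
          if mse <= desiredError || epoch >= maxEpochs
            then putStrLn ("stopped after " ++ show epoch ++ " epochs, MSE " ++ show mse)
            else do
              when (epoch `mod` 100 == 0) $
                putStrLn ("epoch " ++ show epoch ++ ", MSE " ++ show mse)
              go (epoch + 1)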
withTrainData
  Read training data from a file and run the given function on that data.
  Arguments: the path to the training data file; a function using the
  training data. Returns the return value of the given function.

loadTrainData
  Read training data from a file.

  The file must be formatted like:

    num_records num_input num_output
    inputdata separated by space
    outputdata separated by space
    ...
    inputdata separated by space
    outputdata separated by space

  See also: trainOnData, destroyTrainData, saveTrainData.
  Arguments: the path to the data file. Returns the loaded training data.

scaleInputTrainData
  Scale the inputs in the training data to the specified range.
  See also: scaleOutputTrainData, scaleTrainData.
  Arguments: the data to be scaled; the minimum bound; the maximum bound.

scaleOutputTrainData
  Scale the outputs in the training data to the specified range.
  See also: scaleInputTrainData, scaleTrainData.
  Arguments: the data to be scaled; the minimum bound; the maximum bound.

scaleTrainData
  Scale the inputs and outputs in the training data to the specified range.
  See also: scaleOutputTrainData, scaleInputTrainData.
  Arguments: the data to be scaled; the minimum bound; the maximum bound.

subsetTrainData
  Return a copy of a subset of the training data, starting at the given
  offset and taking the given count of elements.

    len <- trainDataLength tdata
    newtdata <- subsetTrainData tdata 0 len

  will do the same as duplicateTrainData.

  See also: trainDataLength.

trainDataLength
  Return the number of training patterns in the training data.

getTrainDataInputNodesCount
  Return the number of input nodes in the training data.

getTrainDataOutputNodesCount
  Return the number of output nodes in the training data.

saveTrainData
  Save the training structure to a file with the format specified in
  loadTrainData.

  See also: loadTrainData.
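Putting the loading functions together: a hedged sketch that trains a
network on a data file in the format shown above. The file contents in the
comment and the signatures (withTrainData :: String -> (TrainDataPtr ->
IO a) -> IO a, trainOnData :: FannPtr -> TrainDataPtr -> Int -> Int ->
Float -> IO ()) follow the parameter descriptions; the file name
"xor.data" is illustrative.

    import HFANN

    -- "xor.data" is assumed to contain, in the format described above:
    --   4 2 1
    --   0 0
    --   0
    --   0 1
    --   1
    --   1 0
    --   1
    --   1 1
    --   0
    trainXor :: IO ()
    trainXor =
      withStandardFann [2, 3, 1] $ \ann ->
        withTrainData "xor.data" $ \tdata -> do
          shuffleTrainData tdata        -- recommended for incremental training
          trainOnData ann tdata 10000 1000 0.001
          err <- testData ann tdata     -- error value on the same data
          print err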
getLearningRate
  Return the learning rate.

  The learning rate is used to determine how aggressive the training should
  be for some of the training algorithms (fannTrainIncremental,
  fannTrainBatch, fannTrainQuickProp). Note that it is not used in
  fannTrainRPROP.

  The default learning rate is 0.7.

  See also: setLearningRate, setTrainingAlgorithm.

setLearningRate
  Set the learning rate.

  See getLearningRate for more information about the learning rate.

  See also: getLearningRate.

getLearningMomentum
  Return the learning momentum.

  The learning momentum can be used to speed up the fannTrainIncremental
  training algorithm. A too high momentum will however not benefit
  training. Setting momentum to 0 will be the same as not using the
  momentum parameter. The recommended value for this parameter is between
  0.0 and 1.0. The default momentum is 0.

  See also: setLearningMomentum, setTrainingAlgorithm.

setLearningMomentum
  Set the learning momentum. More information is available in
  getLearningMomentum.

setActivationFunction
  Set the activation function for the specified neuron in the specified
  layer, counting the input layer as layer 0.

  It is not possible to set activation functions for the neurons in the
  input layer.

  When choosing an activation function it is important to note that the
  activation functions have different ranges. fannSigmoid is in the 0..1
  range, while fannSigmoidSymmetric is in the -1..1 range, and fannLinear
  is unbounded.

  The default activation function is fannSigmoidStepwise.

  See also: setActivationFunctionLayer, setActivationFunctionHidden,
  setActivationFunctionOutput, setActivationSteepness.
  Arguments: the ANN; the activation function; the layer; the neuron.

setActivationFunctionLayer
  Set the activation function for all neurons of a given layer, counting
  the input layer as layer 0.

  It is not possible to set an activation function for the neurons in the
  input layer.

  See also: setActivationFunction, setActivationFunctionHidden,
  setActivationFunctionOutput, setActivationSteepnessLayer.
  Arguments: the ANN; the activation function; the layer.

setActivationFunctionHidden
  Set the activation function for all the hidden layers.

  See also: setActivationFunction, setActivationFunctionLayer,
  setActivationFunctionOutput.
  Arguments: the ANN; the activation function.

setActivationFunctionOutput
  Set the activation function for the output layer.

  See also: setActivationFunction, setActivationFunctionLayer,
  setActivationFunctionHidden.
  Arguments: the ANN; the activation function.

setActivationSteepness
  Set the activation steepness of the specified neuron in the specified
  layer, counting the input layer as 0.

  It is not possible to set the activation steepness for the neurons in the
  input layer.

  The steepness of an activation function says something about how fast the
  activation function goes from the minimum to the maximum. A high value
  for the activation function will also give a more aggressive training.

  When training networks where the output values should be at the extremes
  (usually 0 and 1, depending on the activation function), a steep
  activation can be used (e.g. 1.0).

  The default activation steepness is 0.5.

  See also: setActivationSteepnessLayer, setActivationSteepnessHidden,
  setActivationSteepnessOutput, setActivationFunction.
  Arguments: the ANN; the steepness; the layer; the neuron.

setActivationSteepnessLayer
  Set the activation steepness for all of the neurons in the given layer,
  counting the input layer as layer 0.

  It is not possible to set the activation steepness for the neurons in the
  input layer.

  See also: setActivationSteepness, setActivationSteepnessHidden,
  setActivationSteepnessOutput, setActivationFunction.
  Arguments: the ANN; the steepness; the layer.

setActivationSteepnessHidden
  Set the activation steepness of all the nodes in all hidden layers.

  See also: setActivationSteepness, setActivationSteepnessLayer,
  setActivationSteepnessOutput, setActivationFunction.
  Arguments: the ANN; the steepness.

setActivationSteepnessOutput
  Set the activation steepness of all the nodes in the output layer.

  See also: setActivationSteepness, setActivationSteepnessLayer,
  setActivationSteepnessHidden, setActivationFunction.
  Arguments: the ANN; the steepness.

getBitFailLimit
  Return the bit fail limit used during training.

  The bit fail limit is used during training when the stop function is set
  to stopFunctionBit.

  The limit is the maximum accepted difference between the desired output
  and the actual output during training. Each output that diverges more
  than this is counted as an error bit.

  This difference is divided by two when dealing with symmetric activation
  functions, so that symmetric and non-symmetric activation functions can
  use the same limit.

  The default bit fail limit is 0.35.

  See also: setBitFailLimit.

setBitFailLimit
  Set the bit fail limit used during training.

  See also: getBitFailLimit.

getQuickPropDecay
  Return the quickprop decay.

  The decay is a small negative number which is the factor by which the
  weights should become smaller in each iteration during quickprop
  training. This is used to make sure that the weights do not become too
  high during training.

  The default decay is -0.0001.

  See also: setQuickPropDecay.

setQuickPropDecay
  Set the quickprop decay factor.

  See also: getQuickPropDecay.

getQuickPropMu
  Return the quickprop mu factor.

  The mu factor is used to increase and decrease the step-size during
  quickprop training. The mu factor should always be above 1, since it
  would otherwise decrease the step-size when it was supposed to increase
  it.

  The default mu factor is 1.75.

  See also: setQuickPropMu.

setQuickPropMu
  Set the quickprop mu factor.

  See also: getQuickPropMu.

getRPROPIncreaseFactor
  Return the RPROP increase factor.

  The RPROP increase factor is a value larger than 1, which is used to
  increase the step-size during RPROP training.

  The default increase factor is 1.2.

  See also: setRPROPIncreaseFactor.

setRPROPIncreaseFactor
  Set the RPROP increase factor.

  See also: getRPROPIncreaseFactor.

getRPROPDecreaseFactor
  Return the RPROP decrease factor.

  The RPROP decrease factor is a value smaller than 1, which is used to
  decrease the step-size during RPROP training.

  The default decrease factor is 0.5.

  See also: setRPROPDecreaseFactor.

setRPROPDecreaseFactor
  Set the RPROP decrease factor.

  See also: getRPROPDecreaseFactor.

getRPROPDeltaMin
  Return the RPROP delta min factor.

  The delta min factor is a small positive number determining how small the
  minimum step-size may be.

  The default delta min is 0.0.

  See also: setRPROPDeltaMin.

setRPROPDeltaMin
  Set the RPROP delta min.

  See also: getRPROPDeltaMin.

getRPROPDeltaMax
  Return the RPROP delta max factor.

  The delta max factor is a positive number determining how large the
  maximum step-size may be.

  The default delta max is 50.0.

  See also: setRPROPDeltaMax.

setRPROPDeltaMax
  Set the RPROP delta max.

  See also: getRPROPDeltaMax.
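The setters above are typically called between creating a network and
training it. A hedged sketch, assuming each setter takes the ANN followed
by the value, as in the argument lists above:

    import HFANN

    -- Configure a network for RPROP training with symmetric sigmoid
    -- units. The values shown are the documented defaults, except for
    -- the algorithm and activation function choices.
    configureTraining :: FannPtr -> IO ()
    configureTraining ann = do
      setTrainingAlgorithm ann trainRPROP    -- RPROP ignores the learning rate
      setActivationFunctionHidden ann activationSigmoidSymmetric
      setActivationFunctionOutput ann activationSigmoidSymmetric
      setActivationSteepnessHidden ann 0.5   -- default steepness
      setActivationSteepnessOutput ann 0.5
      setBitFailLimit ann 0.35               -- default bit fail limit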
setCallback
  Set the callback function to be used for reporting and to stop training.

  The callback function will be called at the frequency defined by the
  "epochs between reports" parameter.

  The type of the callback function is:

    callback :: FannPtr      -- The ANN being trained
             -> TrainDataPtr -- The training data in use
             -> Int          -- Max number of epochs
             -> Int          -- Number of epochs between reports
             -> Float        -- Desired error
             -> Int          -- Current epoch
             -> Bool         -- True to terminate training, False to continue

getMSE
  Get the mean square error from the ANN.

  This value is calculated during training or testing, and can therefore
  sometimes be a bit off if the weights have been changed since the last
  calculation of the value.

  Arguments: the ANN. Returns the mean square error.

getBitFail
  Get the number of fail bits.

  The number of fail bits means the number of output neurons which differ
  more than the bit fail limit (see getBitFailLimit, setBitFailLimit). This
  value is reset by resetMSE and updated by the same functions which also
  update the MSE value, such as testData and trainEpoch.

  Arguments: the ANN. Returns the number of fail bits.
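A hedged sketch of a callback with the type shown above; whether
setCallback accepts the Haskell function directly or requires wrapping it
with mkCallback first is an assumption based on their descriptions.

    import HFANN

    -- A pure callback matching CallbackType: stop as soon as the current
    -- epoch reaches half of the maximum, continue otherwise. Returning
    -- True terminates training, False continues.
    stopHalfway :: CallbackType
    stopHalfway _ann _tdata maxEpochs _reportInterval _desiredError currentEpoch =
      currentEpoch >= maxEpochs `div` 2

    -- installCallback ann = setCallback ann stopHalfway  -- assumed usage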
Module HFANN
Portability: portable
Stability: experimental
Maintainer: olivier.boudry@gmail.com

  Re-exports the contents of HFANN.Data, HFANN.IO, HFANN.Base and
  HFANN.Train.

Index (hfann-0.3)

Modules: HFANN, HFANN.Base, HFANN.Data, HFANN.IO, HFANN.Train.

Types and constructors:
  ActivationFunction, StopFunction, ErrorFunction, TrainAlgorithm,
  CCallbackType, CallbackType, TrainDataPtr, TrainData, FannPtr, Fann,
  CFannTypePtr, CFannType, FannType;
  trainIncremental, trainBatch, trainRPROP, trainQuickProp;
  errorFunctionLinear, errorFunctionTanH;
  stopFunctionMSE, stopFunctionBit;
  activationLinear, activationThreshold, activationThresholdSymmetric,
  activationSigmoid, activationSigmoidStepwise, activationSigmoidSymmetric,
  activationSigmoidSymmetricStepwise, activationGaussian,
  activationGaussianSymmetric, activationGaussianStepwise,
  activationElliot, activationElliotSymmetric, activationLinearPiece,
  activationLinearPieceSymmetric

Functions:
  fannCallback, mkCallback, saveFann, loadSavedFann, withSavedFann,
  destroyFann, printParameters, printConnections, initWeights, runFann,
  withStandardFann, withSparseFann, withShortcutFann, createStandardFann,
  createSparseFann, createShortcutFann, randomizeWeights,
  getInputNodesCount, getOutputNodesCount, getTotalNodesCount,
  getConnectionsCount, resetMSE, testData, setTrainStopFunction,
  getTrainStopFunction, setTrainErrorFunction, getTrainErrorFunction,
  setTrainingAlgorithm, getTrainingAlgorithm, duplicateTrainData,
  mergeTrainData, shuffleTrainData, destroyTrainData, train, test,
  trainOnFile, trainOnData, trainEpoch, withTrainData, loadTrainData,
  scaleInputTrainData, scaleOutputTrainData, scaleTrainData,
  subsetTrainData, trainDataLength, getTrainDataInputNodesCount,
  getTrainDataOutputNodesCount, saveTrainData, getLearningRate,
  setLearningRate, getLearningMomentum, setLearningMomentum,
  setActivationFunction, setActivationFunctionLayer,
  setActivationFunctionHidden, setActivationFunctionOutput,
  setActivationSteepness, setActivationSteepnessLayer,
  setActivationSteepnessHidden, setActivationSteepnessOutput,
  getBitFailLimit, setBitFailLimit, getQuickPropDecay, setQuickPropDecay,
  getQuickPropMu, setQuickPropMu, getRPROPIncreaseFactor,
  setRPROPIncreaseFactor, getRPROPDecreaseFactor, setRPROPDecreaseFactor,
  getRPROPDeltaMin, setRPROPDeltaMin, getRPROPDeltaMax, setRPROPDeltaMax,
  setCallback, getMSE, getBitFail

Low-level FFI bindings:
  f_fann_create_from_file, f_fann_save, f_fann_get_num_output,
  f_fann_get_total_connections, f_fann_get_total_neurons,
  f_fann_get_num_input, f_fann_randomize_weights, f_fann_run,
  f_fann_create_shortcut_array, f_fann_create_sparse_array,
  f_fann_create_standard_array, f_fann_get_bit_fail, f_fann_get_MSE,
  f_fann_set_callback, f_fann_train_on_data, f_fann_read_train_from_file,
  f_fann_get_rprop_delta_max, f_fann_set_rprop_delta_max,
  f_fann_get_rprop_delta_min, f_fann_set_rprop_delta_min,
  f_fann_get_rprop_decrease_factor, f_fann_set_rprop_decrease_factor,
  f_fann_get_rprop_increase_factor, f_fann_set_rprop_increase_factor,
  f_fann_set_quickprop_mu, f_fann_get_quickprop_mu,
  f_fann_set_quickprop_decay, f_fann_get_quickprop_decay,
  f_fann_set_bit_fail_limit, f_fann_get_bit_fail_limit,
  f_fann_set_activation_steepness_output,
  f_fann_set_activation_steepness_hidden,
  f_fann_set_activation_steepness_layer, f_fann_set_activation_steepness,
  f_fann_set_activation_function_output,
  f_fann_set_activation_function_hidden,
  f_fann_set_activation_function_layer, f_fann_set_activation_function,
  f_fann_set_learning_momentum, f_fann_get_learning_momentum,
  f_fann_set_learning_rate, f_fann_get_learning_rate, f_fann_save_train,
  f_fann_num_output_train_data, f_fann_num_input_train_data,
  f_fann_length_train_data, f_fann_subset_train_data,
  f_fann_scale_train_data, f_fann_scale_output_train_data,
  f_fann_scale_input_train_data, f_fann_train_on_file, f_fann_train_epoch,
  f_fann_test, f_fann_train