HaskellNN-0.1.3
Kiet Lam <ktklam9@gmail.com>

AI.Network

Network
  The representation of an Artificial Neural Network, with the fields:
    activation   -- the activation function for each neuron
    derivative   -- the derivative of the activation function
    lambda       -- the regularization constant
    weights      -- the vector of the weights between each layer of the network
    architecture -- the architecture of the network; e.g., a network with a
                    2-3-1 architecture has the representation [2,3,1]
  NOTE: The library will automatically create a bias neuron in each layer, so
  you do not need to state bias neurons explicitly.

toWeightMatrices
  Get the list of matrices of weights between each layer. This can be more
  useful than the bare vector representation of the weights.

AI.Signatures

GradientFunction
  The type of a function that calculates the gradient vector of the weights
  of the neural network.
  NOTE: It must be supplied with the cost function, the cost derivative of
  the output neurons, the neural network, the input matrix and the expected
  output matrix; it returns the gradient vector of the weights.

CostDerivative
  The type of the derivative of the cost function on the output nodes. Given
  the neural network of interest, the matrix of inputs (where the ith row is
  the ith training example), the matrix of calculated outputs (where the ith
  row is the ith calculated output) and the matrix of expected outputs (where
  the ith row is the ith expected output of the training set), it returns the
  matrix of the derivatives of the cost at the output nodes compared to the
  expected matrix.

CostFunction
  The type of a function that calculates the total cost of the neural network
  given the network, the input matrix (where the ith row is the input vector
  of a training example) and the expected output matrix (where the ith row is
  the expected output vector of a training example). It returns the cost of
  the calculated outputs against the given expected outputs.

ErrorFunction
  The type of the error function between the calculated output vector and the
  expected output vector: given the two vectors, it returns how far off the
  calculated vector is from the expected vector.

DerivativeFunction
  The type of the derivative of the activation function.
  NOTE: The derivative can be non-trivial and must be continuous.

ActivationFunction
  The type of the activation function.

AI.Calculation.NetworkOutput

networkOutput
  Forward propagate to get the network's output. Given the neural network of
  interest and the input vector, it returns the output vector of the output
  neurons.

AI.Calculation.Gradients

backpropagation
  Calculate the analytical gradients of the weights of the network by using
  backpropagation.

numericalGradients
  Given an epsilon value, returns a gradient function that calculates the
  numerical gradients of the weights (see the sketch below).
  NOTE: This should only be used as a last resort, if for some reason (bugs?)
  the backpropagation algorithm does not give you good gradients. The
  numerical algorithm requires two forward propagations where the
  backpropagation algorithm requires only one, so it is more costly; and
  analytical gradients almost always perform better than numerical gradients.
  Make sure to use a very small epsilon for more accurate gradients.
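To make the cost of the numerical method concrete, here is a minimal,
self-contained sketch of numerical gradients, assuming a central-difference
scheme over a flat weight vector represented as a plain list. The names
numericalGradientsSketch and perturbAt are hypothetical illustrations, not
part of the library's API.

> module Main where
>
> -- Perturb the ith component of the weight vector by delta.
> -- (Hypothetical helper, for illustration only.)
> perturbAt :: Int -> Double -> [Double] -> [Double]
> perturbAt i delta ws =
>   [ if j == i then w + delta else w | (j, w) <- zip [0 ..] ws ]
>
> -- Central difference: dC/dw_i ~ (C(w + eps*e_i) - C(w - eps*e_i)) / (2*eps).
> -- Note the two cost evaluations (two forward propagations) per weight.
> numericalGradientsSketch :: Double -> ([Double] -> Double) -> [Double] -> [Double]
> numericalGradientsSketch eps costF ws =
>   [ (costF (perturbAt i eps ws) - costF (perturbAt i (-eps) ws)) / (2 * eps)
>   | i <- [0 .. length ws - 1] ]
>
> main :: IO ()
> main = do
>   -- Toy quadratic cost; the true gradient at [1,2,3] is [2,4,6].
>   let costF = sum . map (^ 2)
>   print (numericalGradientsSketch 1e-6 costF [1, 2, 3])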
AI.Calculation.Cost

Cost
  Represents the cost model of the neural network:
    Logistic    -- the logistic cost
    MeanSquared -- the mean-squared cost

getCostFunction
  Gets the cost function associated with the cost model.

getCostDerivative
  Gets the cost derivative associated with the cost model.

AI.Calculation.Activation

Activation
  Represents the activation of each neuron in the neural network:
    HyperbolicTangent -- the hyperbolic tangent activation function
    Sigmoid           -- the sigmoid activation function

getActivation
  Get the activation function associated with an activation.

getDerivative
  Get the derivative function associated with an activation.

AI.Training

TrainingAlgorithm
  The type of training algorithm to use:
    LBFGS             -- home-made binding to liblbfgs
    BFGS              -- hmatrix's binding to GSL
    ConjugateGradient -- hmatrix's binding to GSL
    GradientDescent   -- hmatrix's binding to GSL
  NOTE: These are all batch training algorithms.

trainNetwork
  Train the neural network given a training algorithm, the training
  parameters and the training data. It takes the training algorithm to use,
  the cost model of the neural network, the function that calculates the
  gradient vector, the network to be trained, the precision of the training
  with respect to the cost function, the maximum number of iterations, the
  input matrix and the expected output matrix, and returns the trained
  network.

AI.Model.GenericModel

GenericModel
  Generic neural network model, meant as a base for extension, with the
  fields:
    cost -- the cost model of the model
    net  -- the neural network to be used for modeling

initializeModel
  Initialize a neural network model with the weights randomized within
  [-1.0,1.0]. It takes the activation model of each neuron, the cost model of
  the output neurons compared to the expected output, the architecture of the
  network (e.g., a 2-3-1 architecture would be [2,3,1]), the regularization
  constant (which should be 0 if you do not want regularization) and the
  random generator, and returns the initialized model.

getOutput
  Get the output of the model. Given the model of interest and the input
  vector to the input layer, it returns the output of the network model.

trainModel
  Train the model given the parameters and the training algorithm. It takes
  the model to be trained, the training algorithm to be used, the precision
  to train to with respect to the cost function, the maximum number of epochs
  to train, the input matrix and the expected output matrix, and returns the
  trained model.

AI.Model.General

initializeGeneral
  A general neural network model that can be used for classification or
  regression, using HyperbolicTangent as the activation model and MeanSquared
  as the cost model. It takes the architecture of the neural network, the
  regularization constant and the random generator, and returns the
  initialized general model.

AI.Model.Classification

initializeClassification
  Make a neural network model that should be used for classification, using
  Sigmoid as the activation model and Logistic as the cost model. It takes
  the architecture of the neural network, the regularization constant and the
  random generator, and returns the initialized classification model (see the
  usage sketch below).
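A hedged end-to-end usage sketch, learning XOR with the classification model.
The argument orders follow the parameter lists documented above; the matrix
and vector types are assumed to be hmatrix's, and the exact imports and
signatures may differ from the library's actual API.

> -- Sketch only: argument orders and types are assumptions from the docs above.
> import AI.Model
> import Numeric.LinearAlgebra (fromList, fromLists)
> import System.Random (mkStdGen)
>
> main :: IO ()
> main = do
>   let gen     = mkStdGen 42
>       -- 2-2-1 architecture, regularization constant 0 (no regularization)
>       model   = initializeClassification [2, 2, 1] 0.0 gen
>       xs      = fromLists [[0,0],[0,1],[1,0],[1,1]]  -- input matrix, one example per row
>       ys      = fromLists [[0],[1],[1],[0]]          -- expected output matrix
>       -- train with LBFGS to precision 1e-4, for at most 1000 epochs
>       trained = trainModel model LBFGS 1e-4 1000 xs ys
>   mapM_ (print . getOutput trained . fromList)
>         [[0,0], [0,1], [1,0], [1,1]]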
Modules

  AI.Network
  AI.Signatures
  AI.Calculation
  AI.Calculation.NetworkOutput
  AI.Calculation.Gradients
  AI.Calculation.Cost
  AI.Calculation.Activation
  AI.Training
  AI.Training.Internal
  AI.Training.Internal.LBFGSAux
  AI.Model
  AI.Model.GenericModel
  AI.Model.General
  AI.Model.Classification

Index

  Network, activation, derivative, lambda, weights, architecture,
  toActivation, toDerivative, toLambda, toWeights, toWeightMatrices,
  toArchitecture, setActivation, setDerivative, setLambda, setWeights,
  setArchitecture, GradientFunction, CostDerivative, CostFunction,
  ErrorFunction, DerivativeFunction, ActivationFunction, networkOutput,
  backpropagation, numericalGradients, Cost, Logistic, MeanSquared,
  getCostFunction, getCostDerivative, Activation, HyperbolicTangent, Sigmoid,
  getActivation, getDerivative, TrainingAlgorithm, LBFGS, BFGS,
  ConjugateGradient, GradientDescent, trainNetwork, GenericModel, cost, net,
  initializeModel, getOutput, trainModel, initializeGeneral,
  initializeClassification

Internal definitions (the activation helpers are sketched below)

  c_minimizeLBFGS, mkListListFun, mkListFun, aux_LToL, aux_LToD,
  vecFuncToLFunc, vecFuncToFunc, minimizeLBFGS_aux, minimizeLBFGS,
  mapElementToVector, modifyElementAt, if', generalCost,
  generalErrorFunction, getErrorFunction, meanSquaredError,
  meanSquaredDerivative, logisticError, logisticDerivative, sigmoid,
  sigmoidDeriv, hTangent, hTangentDeriv, vectorWeightToCost,
  vectorWeightToGradients, getTrainAlgo
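The standard definitions behind the activation helpers in the internal index
are worth stating. Here is a hedged sketch of what sigmoid, sigmoidDeriv,
hTangent and hTangentDeriv denote mathematically; the library's actual
implementations may differ in detail.

> -- sigmoid(x) = 1 / (1 + e^(-x))
> sigmoid :: Double -> Double
> sigmoid x = 1 / (1 + exp (negate x))
>
> -- sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x))
> sigmoidDeriv :: Double -> Double
> sigmoidDeriv x = let s = sigmoid x in s * (1 - s)
>
> -- hyperbolic tangent activation
> hTangent :: Double -> Double
> hTangent = tanh
>
> -- tanh'(x) = 1 - tanh(x)^2
> hTangentDeriv :: Double -> Double
> hTangentDeriv x = 1 - tanh x ^ 2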