AI.HNN.Recurrent.Network
Copyright: (c) 2012 Gatlin Johnson
License: LGPL
Maintainer: rokenrol@gmail.com
Stability: experimental
Portability: GHC
Safe Haskell: None

Our recurrent neural network.

createNetwork
    Creates a network with an adjacency matrix of your choosing, specified as an unboxed vector. You must also supply a vector of threshold values.
    Arguments: the number of total neurons (input and otherwise), the number of inputs, the flat weight matrix, and the threshold (bias) values for each neuron. Returns a new network.

computeStep
    Evaluates a network with the specified activation function and list of inputs for precisely one time step. This is used by evalNet, which is probably a more convenient interface for client applications.
    Arguments: the network to evaluate, the vector of pre-existing state, the activation function, and the list of inputs. Returns the new state vector.

evalNet
    Iterates over a list of input vectors in sequence and computes one time step for each.
    Arguments: the network to evaluate, the list of input lists, and the activation function. Returns the output state vector.

sigmoid
    A simple, differentiable sigmoid function.
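To make the recurrent interface concrete, here is a minimal sketch of building and stepping a tiny network. It assumes the argument order documented above (total neurons, number of inputs, flat weight matrix, thresholds), that createNetwork and evalNet run in IO, and uses arbitrary illustrative weights and thresholds; check the signatures shipped with your version of hnn before copying this verbatim.

    import AI.HNN.Recurrent.Network

    main :: IO ()
    main = do
      let numNeurons = 3                 -- total neurons, inputs included
          numInputs  = 1
          -- Flat 3x3 weight matrix, given row by row; all values here
          -- are made up for illustration.
          adj        = [ 0.0, 0.0, 0.0
                       , 0.9, 0.0, 0.8
                       , 0.2, 0.7, 0.0 ]
          thresholds = replicate numNeurons 0.5
      net <- createNetwork numNeurons numInputs adj thresholds
               :: IO (Network Double)
      -- One single-element input list per time step; evalNet carries
      -- the state vector from one step to the next via computeStep.
      out <- evalNet net [[0.38], [0.75]] sigmoid
      putStrLn $ "Output: " ++ show out

Because the state persists between time steps, feeding the same input at two different steps need not produce the same output, which is the point of a recurrent network.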
AI.HNN.FF.Network
Copyright: (c) 2012 Alp Mestanogullari
License: BSD3
Maintainer: alpmestan@gmail.com
Stability: experimental
Portability: GHC
Safe Haskell: None

Samples
    A list of Samples.

Sample
    An input vector and its expected output vector.

ActivationFunction
    The type of an activation function, mostly used for clarity in signatures.

ActivationFunctionDerivative
    The type of an activation function's derivative, mostly used for clarity in signatures.

Network
    Our feed-forward neural network type, wrapping the weight matrices (the matrices field). Note the Binary instance, which means you can use encode and decode in case you need to serialize your neural nets somewhere other than a file (e.g. over the network).

createNetwork
    createNetwork n l k creates a neural network with n inputs; if l is [n1, n2, ...], the net will have n1 neurons on the first layer, n2 neurons on the second, and so on, ending with k neurons on the output layer. The random weight matrices are a courtesy of System.Random.MWC.

fromWeightMatrices
    Creates a neural network with exactly the weight matrices given as input. We don't check that the numbers of rows/columns are compatible, etc.

output
    Computes the output of the network on the given input vector with the given activation function.

outputs
    Computes and keeps the outputs of all the layers of the neural network with the given activation function.

(-->)
    Handy operator to describe your learning set, avoiding unnecessary parentheses: it's just a synonym for (,). Generally you'll load your learning set from a file, a database or something like that, but it can be nice for quickly playing with hnn or for simple problems where you specify your learning set by hand. That is, instead of writing:

        samples :: Samples Double
        samples = [ (fromList [0, 0], fromList [0])
                  , (fromList [0, 1], fromList [1])
                  , (fromList [1, 0], fromList [1])
                  , (fromList [1, 1], fromList [0])
                  ]

    you can write:

        samples :: Samples Double
        samples = [ fromList [0, 0] --> fromList [0]
                  , fromList [0, 1] --> fromList [1]
                  , fromList [1, 0] --> fromList [1]
                  , fromList [1, 1] --> fromList [0]
                  ]

trainUntil
    Generic training function. The first argument is a predicate that tells the backpropagation algorithm when to stop: its first argument is the epoch, i.e. the number of times backprop has been executed on the samples; its second argument is the current network; and its third is the list of samples. You can thus combine these arguments to create your own stopping criterion. For example, to stop learning either when the network's quadratic error on the samples (using the tanh function) drops below 0.01, or after 1000 epochs, whichever comes first, you could use the following predicate:

        pred epochs net samples = epochs == 1000 || quadError tanh net samples < 0.01

    You could even use trace to print the error, to see how it evolves while the network is learning, or redirect that output to a file from your shell to plot a pretty graph and whatnot.
    The second argument (after the predicate) is the learning rate. Then come the activation function you want, its derivative, the initial neural network, and your training set. Note that we provide trainNTimes and trainUntilErrorBelow for common use cases.

trainNTimes
    Trains the neural network with backpropagation the number of times specified by the first argument (an Int), using the given learning rate (second argument).

quadError
    Quadratic error on the given training set using the given activation function. Useful for writing your own predicates for trainUntil.

trainUntilErrorBelow
    Trains the neural network until the quadratic error (quadError) comes below the given value (first argument), using the given learning rate (second argument).
    Note: this can loop pretty much forever if you're using a bad architecture for the problem, or inappropriate activation functions.

sigmoid
    The sigmoid function: 1 / (1 + exp (-x)).

sigmoid'
    Derivative of the sigmoid function: sigmoid x * (1 - sigmoid x).

tanh'
    Derivative of the tanh function from the Prelude.

loadNetwork
    Loads a neural network from a file (uses zlib compression on top of serialization with the binary package). Throws an exception if the file isn't there.

saveNetwork
    Saves a neural network to a file (uses zlib compression on top of serialization with the binary package).
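As an end-to-end illustration of the training API above, here is a sketch that learns the XOR samples from the (-->) documentation with trainNTimes, then round-trips the result through saveNetwork and loadNetwork. The hidden-layer size, epoch count, learning rate, and the "xor.nn" file name are illustrative guesses rather than tuned values, and fromList is assumed to come from the vector library hnn builds on (hmatrix's Numeric.LinearAlgebra here); adjust the import to match your installation.

    import AI.HNN.FF.Network
    import Numeric.LinearAlgebra (fromList)

    -- The XOR learning set, written with the (-->) operator.
    samples :: Samples Double
    samples = [ fromList [0, 0] --> fromList [0]
              , fromList [0, 1] --> fromList [1]
              , fromList [1, 0] --> fromList [1]
              , fromList [1, 1] --> fromList [0]
              ]

    main :: IO ()
    main = do
      -- 2 inputs, one hidden layer of 2 neurons, 1 output neuron,
      -- with random initial weights from System.Random.MWC.
      net <- createNetwork 2 [2] 1 :: IO (Network Double)
      -- Illustrative hyperparameters: 1000 epochs, learning rate 0.8.
      let smartNet = trainNTimes 1000 0.8 tanh tanh' net samples
      -- Print the trained network's answer for each training input.
      mapM_ (print . output smartNet tanh . fst) samples
      -- Round-trip through the zlib-compressed binary format.
      saveNetwork "xor.nn" smartNet
      net' <- loadNetwork "xor.nn" :: IO (Network Double)
      print $ quadError tanh net' samples

Using tanh/tanh' keeps activations in (-1, 1); sigmoid/sigmoid' would work the same way for outputs in (0, 1). Swapping trainNTimes 1000 for trainUntilErrorBelow 0.01 would instead train until the quadratic error drops below 0.01, subject to the looping caveat noted above.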