The Fast Artificial Neural Network Library (FANN) is a free open source neural network library written in C with support for both fully connected and sparsely connected networks (http://leenissen.dk/fann/).
HFANN is a Haskell interface to this library.
- withStandardFann :: [Int] -> (FannPtr -> IO a) -> IO a
- withSparseFann :: Float -> [Int] -> (FannPtr -> IO a) -> IO a
- withShortcutFann :: [Int] -> (FannPtr -> IO a) -> IO a
- randomizeWeights :: FannPtr -> (FannType, FannType) -> IO ()
- initWeights :: FannPtr -> TrainDataPtr -> IO ()
- runFann :: FannPtr -> [FannType] -> IO [FannType]
- printConnections :: FannPtr -> IO ()
- printParameters :: FannPtr -> IO ()
- getOutputNodesCount :: FannPtr -> IO Int
- getInputNodesCount :: FannPtr -> IO Int
- getTotalNodesCount :: FannPtr -> IO Int
- getConnectionsCount :: FannPtr -> IO Int
Create a new standard fully connected Neural Network and call the given function with the ANN as argument. When finished, the Neural Network is destroyed.
The structure of the ANN is given by the first parameter: an Int list giving the number of nodes per layer, from the input layer to the output layer.
For example, [2,3,1] describes an ANN with 2 nodes in the input layer,
one hidden layer of 3 nodes, and 1 node in the output layer.
The function provided as second argument will be called with the created ANN as parameter.
Arguments (shown here for withSparseFann):
- Float: The ratio of connections
- [Int]: The ANN structure
- (FannPtr -> IO a): A function using the ANN
- IO a: The return value
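A minimal usage sketch of withStandardFann, assuming the library is imported through a top-level HFANN module (the module name is not stated in this section):

```haskell
import HFANN

main :: IO ()
main =
  -- 2 input nodes, one hidden layer of 3 nodes, 1 output node
  withStandardFann [2, 3, 1] $ \fann -> do
    printParameters fann   -- dump the ANN's configuration
    printConnections fann  -- show the connection layout
```

The bracket-style API means the ANN is destroyed automatically when the inner action returns, even if it throws an exception.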
Create a new sparse (not fully connected) Neural Network and call the given function with the ANN as argument. When finished, the ANN is destroyed.
Create a new sparse (not fully connected) Neural Network with shortcut connections between layers and call the given function with the ANN as argument. When finished, the Neural Network is destroyed.
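The sparse and shortcut variants are used the same way; only the construction differs. A sketch, again assuming an HFANN top-level module:

```haskell
import HFANN

main :: IO ()
main = do
  -- Sparse network: a connection ratio of 0.5 means only half of the
  -- possible connections between layers are created.
  withSparseFann 0.5 [2, 3, 1] $ \fann ->
    printConnections fann

  -- Shortcut network: connections may skip over intermediate layers.
  withShortcutFann [2, 3, 1] $ \fann ->
    printConnections fann
```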
Randomize weights to values in the given range
Weights in a newly created ANN are already initialized to random values. Use this function if you want to customize the upper and lower bounds of those random weights.
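For instance, to narrow the initial weights to a small symmetric range (the module import is assumed as above):

```haskell
import HFANN

main :: IO ()
main =
  withStandardFann [2, 3, 1] $ \fann -> do
    -- Re-randomize all weights to values in [-0.1, 0.1]
    randomizeWeights fann (-0.1, 0.1)
    printConnections fann
```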
Initialize the weights using Widrow and Nguyen's algorithm.
This function behaves similarly to fann_randomize_weights. It will use the algorithm developed by Derrick Nguyen and Bernard Widrow to set the weights in such a way as to speed up training. This technique is not always successful, and in some cases can be less efficient than a purely random initialization.
The algorithm requires access to the range of the input data (i.e., the largest and smallest input values), and therefore accepts a second argument, data, which is the training data that will be used to train the network.
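A sketch of how this might look in practice. Note that loadTrainData is hypothetical: this section only shows that initWeights takes a TrainDataPtr, not how training data is obtained.

```haskell
import HFANN

main :: IO ()
main =
  withStandardFann [2, 3, 1] $ \fann -> do
    -- 'loadTrainData' is a hypothetical loader, used here only to
    -- illustrate where the TrainDataPtr comes from.
    trainData <- loadTrainData "xor.train"
    -- Widrow-Nguyen initialization, scaled to the input range
    -- found in the training data
    initWeights fann trainData
```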
Run the trained Neural Network on the provided input.
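runFann maps one input vector to one output vector. A sketch (module import assumed; the network here is untrained, so the output values are not meaningful yet):

```haskell
import HFANN

main :: IO ()
main =
  withStandardFann [2, 3, 1] $ \fann -> do
    -- The input list length must match the input layer (2 nodes here).
    out <- runFann fann [1.0, 0.0]
    -- 'out' has one value per output node (1 here)
    print out
```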
Return the number of output nodes of the Neural Network
Return the number of input nodes of the Neural Network
Return the total number of nodes of the Neural Network
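The counting functions can be combined to inspect a network's size. A sketch, with the same assumed import; note that FANN's total node count may include bias nodes, so it can exceed the sum of the layer sizes:

```haskell
import HFANN

main :: IO ()
main =
  withStandardFann [2, 3, 1] $ \fann -> do
    nIn  <- getInputNodesCount fann    -- 2 for this structure
    nOut <- getOutputNodesCount fann   -- 1 for this structure
    nTot <- getTotalNodesCount fann    -- all nodes, possibly plus bias nodes
    nCon <- getConnectionsCount fann   -- total number of connections
    mapM_ print [nIn, nOut, nTot, nCon]
```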