nn: A tiny neural network

[ ai, library, mit ]

Please see the README on Github at https://github.com/saschagrunert/nn#readme



Modules

AI.Nn

Versions 0.2.0
Dependencies base (>=4.7 && <5), random, split
License MIT
Copyright 2018 Sascha Grunert
Author Sascha Grunert
Maintainer mail@saschagrunert.de
Category AI
Home page https://github.com/saschagrunert/nn#readme
Bug tracker https://github.com/saschagrunert/nn/issues
Source repo head: git clone https://github.com/saschagrunert/nn
Uploaded by deepinside at 2018-04-06T05:30:17Z
Distributions
Reverse Dependencies 1 direct, 0 indirect
Downloads 802 total (8 in the last 30 days)
Rating (no votes yet) [estimated by Bayesian average]
Status Docs available [build log]
Last success reported on 2018-04-14 [all 1 reports]

Readme for nn-0.2.0


ηn

A tiny neural network 🧠

This small neural network is based on the backpropagation algorithm.
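To get a feeling for what that means, here is a minimal sketch of the idea behind backpropagation: a single sigmoid neuron whose weight is moved against the gradient of the squared error. This is illustrative only and not the code inside AI.Nn; the names sigmoid and step are made up for this example.

module Main where

-- Sigmoid activation used by the toy neuron below
sigmoid :: Double -> Double
sigmoid x = 1 / (1 + exp (-x))

-- One training step: compute the output, the error term and
-- nudge each weight against the gradient of ½(out - target)²
step :: Double -> [Double] -> ([Double], Double) -> [Double]
step rate ws (xs, target) =
  let out   = sigmoid (sum (zipWith (*) ws xs))
      delta = (out - target) * out * (1 - out)
   in zipWith (\w x -> w - rate * delta * x) ws xs

main :: IO ()
main = print (iterate (\ws -> step 0.5 ws ([1, 1], 1)) [0.1, 0.1] !! 1000)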

Usage

A minimal usage example would look like this:

import AI.Nn (new
             ,predict
             ,train)
import Text.Printf (printf)

main :: IO ()
main = do
  {- Creates a new network with two inputs,
     one hidden layer of two neurons and one output -}
  network <- new [2, 2, 1]

  {- Train the network for a common logical AND,
     until the maximum error of 0.01 is reached -}
  let trainedNetwork = train 0.01 network [([0, 0], [0])
                                          ,([0, 1], [0])
                                          ,([1, 0], [0])
                                          ,([1, 1], [1])]

  {- Predict the learned values -}
  let r00 = predict trainedNetwork [0, 0]
  let r01 = predict trainedNetwork [0, 1]
  let r10 = predict trainedNetwork [1, 0]
  let r11 = predict trainedNetwork [1, 1]

  {- Print the results -}
  putStrLn $ printf "0 0 -> %.2f" (head r00)
  putStrLn $ printf "0 1 -> %.2f" (head r01)
  putStrLn $ printf "1 0 -> %.2f" (head r10)
  putStrLn $ printf "1 1 -> %.2f" (head r11)

The result should be something like:

0 0 -> -0.02
0 1 -> -0.02
1 0 -> -0.01
1 1 -> 1.00
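
The same pattern, using only new, train and predict as above, should also cover other logic gates. Below is a hedged sketch for a logical XOR, which, unlike AND, is not linearly separable and therefore actually needs the hidden layer; whether training converges with this small layout can depend on the random initial weights.

import AI.Nn (new
             ,predict
             ,train)
import Text.Printf (printf)

main :: IO ()
main = do
  {- Same layout as above: two inputs, one hidden
     layer of two neurons and one output -}
  network <- new [2, 2, 1]

  {- Train the network for a logical XOR -}
  let trainedNetwork = train 0.01 network [([0, 0], [0])
                                          ,([0, 1], [1])
                                          ,([1, 0], [1])
                                          ,([1, 1], [0])]

  {- Predict and print all four inputs -}
  mapM_ (\xs -> putStrLn $ printf "%.0f %.0f -> %.2f"
                                  (head xs) (xs !! 1)
                                  (head (predict trainedNetwork xs)))
        [[0, 0], [0, 1], [1, 0], [1, 1]]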

Hacking

To start hacking, clone this repository and make sure that stack is installed. Then hack around and continuously rebuild the project with:

> stack build --file-watch
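
If the project ships a test suite (check the package description to be sure), it should be runnable the same way with:

> stack test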

Contributing

You want to contribute to this project? Wow, thanks! Then please fork it and send me a pull request.