neural-0.1.1.0: Neural Networks in native Haskell

Copyright: (c) Lars Brünjes, 2016
License: MIT
Maintainer: brunjlar@gmail.com
Stability: experimental
Portability: portable
Safe Haskell: None
Language: Haskell2010
Extensions
  • ScopedTypeVariables
  • DataKinds
  • TypeOperators
  • ExplicitNamespaces
  • ExplicitForAll

Numeric.Neural.Layer

Description

This module defines special "layer" components and convenience functions for creating such layers.

Documentation

type Layer i o = Component (Vector i Analytic) (Vector o Analytic)

A Layer i o is a component that maps a vector of length i to a vector of length o.

linearLayer :: forall i o. (KnownNat i, KnownNat o) => Layer i o

Creates a linear Layer, i.e. a layer that multiplies the input by a weight matrix and adds a bias vector to produce the output.

Random initialization follows the recommendation from chapter 3 of the online book Neural Networks and Deep Learning by Michael Nielsen.
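As a library-independent sketch of the computation (plain lists instead of the package's fixed-length vectors; the names affine, weights and bias are illustrative, not part of the API):

  -- Affine map: output = weights * input + bias.
  affine :: Num a => [[a]] -> [a] -> [a] -> [a]
  affine weights bias input =
      zipWith (+) bias [sum (zipWith (*) row input) | row <- weights]

Nielsen's recommendation amounts, roughly, to drawing each weight from a Gaussian with mean 0 and standard deviation 1/sqrt i, so that the initial weighted sums do not saturate the activation.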

layer :: (KnownNat i, KnownNat o) => (Analytic -> Analytic) -> Layer i o

Creates a Layer as a combination of a linear layer and a non-linear activation function.
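Conceptually (again a plain-Haskell sketch with hypothetical names; the real Layer operates on the package's Vector and Analytic types):

  -- Pointwise activation applied after the affine map of a linear layer.
  applyLayer :: Num a => (a -> a) -> [[a]] -> [a] -> [a] -> [a]
  applyLayer f weights bias input =
      map f (zipWith (+) bias [sum (zipWith (*) row input) | row <- weights])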

tanhLayer :: (KnownNat i, KnownNat o) => Layer i o

This is simply layer, specialized to tanh activation. Output values all lie in the interval [-1,1].
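In line with that description, the definition is presumably the one-line specialization below (a sketch, not taken from the source):

  tanhLayer = layer tanh  -- tanh squashes each output into [-1,1]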

logisticLayer :: (KnownNat i, KnownNat o) => Layer i o

This is simply layer, specialized to the logistic function as activation. Output values all lie in the interval [0,1].
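The logistic function is the standard sigmoid; as a sketch (the helper name logistic is illustrative, and the presumed definition is not verified against the source):

  -- Standard logistic sigmoid; squashes each output into (0,1).
  logistic :: Floating a => a -> a
  logistic x = 1 / (1 + exp (-x))

  logisticLayer = layer logistic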

reLULayer :: (KnownNat i, KnownNat o) => Layer i o

This is simply layer, specialized to the rectified linear unit activation function. Output values are all non-negative.
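The rectifier itself is just max 0; here is a plain-Haskell sketch on ordinary ordered numbers (the library applies it to its Analytic type):

  relu :: (Num a, Ord a) => a -> a
  relu = max 0  -- 0 for negative inputs, the identity otherwise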

softmax :: (Floating a, Functor f, Foldable f) => f a -> f a

The softmax function normalizes a vector so that all entries lie in [0,1] and sum to 1, which means the output entries can be interpreted as probabilities.
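One implementation consistent with this signature is the following sketch (the library's actual definition may differ, e.g. by subtracting the maximum entry first for numerical stability, which would additionally require Ord):

  softmax :: (Floating a, Functor f, Foldable f) => f a -> f a
  softmax xs =
      let es = exp <$> xs  -- exponentiate every entry (all positive)
          s  = sum es      -- normalizing constant
      in  (/ s) <$> es     -- entries now lie in (0,1] and sum to 1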