Copyright  (c) Amy de Buitléir 2012-2016

License  BSD-style
Maintainer  amy@nualeargais.ie 
Stability  experimental 
Portability  portable 
Safe Haskell  Safe 
Language  Haskell2010 
A Kohonen Self-organising Map (SOM). A SOM maps input patterns onto a regular grid (usually two-dimensional) where each node in the grid is a model of the input data, and does so using a method which ensures that any topological relationships within the input data are also represented in the grid. This implementation supports the use of non-numeric patterns.
In layman's terms, a SOM can be useful when you want to discover the underlying structure of some data. A tutorial is available at https://github.com/mhwombat/som/wiki.
NOTES:
- Version 5.0 fixed a bug in the decayingGaussian function. If you use defaultSOM (which uses this function), your SOM should now learn more quickly.
- The gaussian function has been removed because it is not as useful for SOMs as I originally thought. It was originally designed to be used as a factor in a learning function. However, in most cases the user will want to introduce a time decay into the exponent, rather than simply multiply by a factor.
References:
- Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biological Cybernetics, 43 (1), 59–69.
data SOM t d gm x k p = SOM {
  gridMap :: gm p,
  learningRate :: t -> d -> x,
  difference :: p -> p -> x,
  makeSimilar :: p -> x -> p -> p,
  counter :: t
}
toGridMap :: GridMap gm p => SOM t d gm x k p -> gm p
decayingGaussian :: Floating x => x -> x -> x -> x -> x -> x -> x -> x
stepFunction :: (Num d, Fractional x, Eq d) => x -> t -> d -> x
constantFunction :: x -> t -> d -> x
trainNeighbourhood :: (Grid (gm p), GridMap gm p, Index (BaseGrid gm p) ~ Index (gm p), Num t, Num x, Num d) => SOM t d gm x k p -> Index (gm p) -> p -> SOM t d gm x k p
Construction
data SOM t d gm x k p
A Self-Organising Map (SOM).
Although SOM implements GridMap, most users will only need the interface provided by Data.Datamining.Clustering.Classifier. If you choose to use the GridMap functions, please note:
- The functions adjust and adjustWithKey do not increment the counter. You can do so manually with incrementCounter.
- The functions map and mapWithKey are not implemented (they just return an error). It would be problematic to implement them because the input SOM and the output SOM would have to have the same Metric type.
Constructors:
- SOM

Instances:
- (GridMap gm p, k ~ Index (BaseGrid gm p), Grid (gm p), GridMap gm x, k ~ Index (gm p), k ~ Index (BaseGrid gm x), Num t, Ord x, Num x, Num d) => Classifier (SOM t d gm) x k p
- Foldable gm => Foldable (SOM t d gm x k)
- (Foldable gm, GridMap gm p, Grid (BaseGrid gm p)) => GridMap (SOM t d gm x k) p
- Generic (SOM t d gm x k p)
- (NFData (gm p), NFData t) => NFData (SOM t d gm x k p)
- Grid (gm p) => Grid (SOM t d gm x k p)
- type BaseGrid (SOM t d gm x k) p
- type Rep (SOM t d gm x k p)
- type Direction (SOM t d gm x k p)
- type Index (SOM t d gm x k p)
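The difference and makeSimilar fields are what let a SOM work with non-numeric patterns: the map only ever compares patterns and nudges one pattern towards another through these two functions. As an illustration, here is a hypothetical pair of such functions for bit-vector patterns (these names are ours, not part of this library's API), showing the shapes `p -> p -> x` and `p -> x -> p -> p`:

```haskell
-- Hypothetical example: pattern functions for bit vectors, with the
-- shapes required by the 'difference' (p -> p -> x) and 'makeSimilar'
-- (p -> x -> p -> p) fields. Not part of this library's API.
type BitPattern = [Bool]

-- Fraction of positions where two patterns disagree (a metric in [0, 1]).
bitDifference :: Fractional x => BitPattern -> BitPattern -> x
bitDifference a b =
  fromIntegral (length (filter id (zipWith (/=) a b)))
    / fromIntegral (length a)

-- Nudge a pattern towards a target: a crude deterministic rule that
-- adopts the target only when the learning rate exceeds 1/2.
bitMakeSimilar :: (Ord x, Fractional x)
               => BitPattern -> x -> BitPattern -> BitPattern
bitMakeSimilar target r pattern
  | r > 0.5   = target   -- strong learning: snap to the target
  | otherwise = pattern  -- weak learning: leave the model unchanged
```

A real makeSimilar for bit vectors would usually flip a fraction of the disagreeing bits in proportion to the rate; the all-or-nothing rule above just keeps the sketch short.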
Deconstruction
toGridMap :: GridMap gm p => SOM t d gm x k p -> gm p
Extracts the grid and current models from the SOM.
A synonym for gridMap.
Learning functions
decayingGaussian :: Floating x => x -> x -> x -> x -> x -> x -> x -> x
A typical learning function for classifiers. decayingGaussian r0 rf w0 wf tf returns a bell curve-shaped function. At time zero, the maximum learning rate (applied to the BMU) is r0, and the neighbourhood width is w0. Over time the bell curve shrinks and the learning rate tapers off, until at time tf, the maximum learning rate (applied to the BMU) is rf, and the neighbourhood width is wf. Normally the parameters should be chosen such that:
- 0 < rf << r0 < 1
- 0 < wf << w0
- 0 < tf
where << means "is much smaller than" (not the Haskell << operator!)
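The decay described above can be written down directly. The sketch below is one plausible formula consistent with that description (the rate and width decay exponentially from r0 and w0 to rf and wf over tf, modulating a Gaussian in the grid distance d); the library's actual formula may differ in detail:

```haskell
-- A plausible decaying-Gaussian learning function matching the documented
-- behaviour: at t = 0 the BMU rate is r0 and the width is w0; by t = tf
-- they have decayed to rf and wf. The library's exact formula may differ.
decayingGaussianSketch :: Floating x
                       => x -> x -> x -> x -> x  -- r0 rf w0 wf tf
                       -> x -> x                 -- time t, grid distance d
                       -> x
decayingGaussianSketch r0 rf w0 wf tf t d =
    r * exp (-(d * d) / (2 * w * w))
  where
    a = t / tf               -- fraction of the training time elapsed
    r = r0 * (rf / r0) ** a  -- learning rate decays from r0 to rf
    w = w0 * (wf / w0) ** a  -- neighbourhood width decays from w0 to wf
```

With r0 = 0.5, rf = 0.01, w0 = 3, wf = 0.5, tf = 1000, the BMU's rate starts at 0.5 and decays to 0.01 at t = 1000, and at any time, nodes further from the BMU (larger d) learn less.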
stepFunction :: (Num d, Fractional x, Eq d) => x -> t -> d -> x
A learning function that only updates the BMU and has a constant learning rate.
constantFunction :: x -> t -> d -> x
A learning function that updates all nodes with the same, constant learning rate. This can be useful for testing.
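Both of these functions are simple enough that their documented behaviour can be sketched directly. The definitions below are our reconstruction, not the library's source:

```haskell
-- Reconstructions of the documented behaviour; the library's actual
-- definitions may differ in detail.

-- Only the BMU (grid distance 0) learns, always at rate r.
stepFunctionSketch :: (Num d, Fractional x, Eq d) => x -> t -> d -> x
stepFunctionSketch r _ d = if d == 0 then r else 0

-- Every node learns at the same constant rate, regardless of time
-- or distance from the BMU. Handy for testing.
constantFunctionSketch :: x -> t -> d -> x
constantFunctionSketch r _ _ = r
```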
Advanced control
trainNeighbourhood :: (Grid (gm p), GridMap gm p, Index (BaseGrid gm p) ~ Index (gm p), Num t, Num x, Num d) => SOM t d gm x k p -> Index (gm p) -> p -> SOM t d gm x k p
Trains the specified node and the neighbourhood around it to better match a target.
Most users should use train, which automatically determines the BMU and trains it and its neighbourhood.
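The relationship between train and trainNeighbourhood can be illustrated with a self-contained toy model (a plain list of numeric models instead of a grid map; every name here is hypothetical): train first finds the BMU by classifying the input, then hands off to the neighbourhood-training step.

```haskell
-- Toy illustration (hypothetical names, plain lists instead of grid maps)
-- of how 'train' relates to 'trainNeighbourhood'.
type Models = [Double]

-- Train the node at index bmu towards the target, and its immediate
-- neighbours at a lower rate; other nodes are unchanged.
trainNeighbourhoodToy :: Models -> Int -> Double -> Models
trainNeighbourhoodToy ms bmu target =
  [ m + rateFor (abs (i - bmu)) * (target - m) | (m, i) <- zip ms [0 ..] ]
  where
    rateFor :: Int -> Double
    rateFor 0 = 0.5   -- the specified node itself
    rateFor 1 = 0.25  -- immediate neighbours
    rateFor _ = 0.0   -- everything else is left alone

-- 'train' = find the BMU (the closest model), then train its neighbourhood.
trainToy :: Models -> Double -> Models
trainToy ms target = trainNeighbourhoodToy ms bmu target
  where
    bmu = snd (minimum [ (abs (target - m), i) | (m, i) <- zip ms [0 ..] ])
```

Here training [0, 5, 10] towards 4 picks the middle node as the BMU (its model, 5, is closest to 4) and pulls it and its two neighbours towards the target, which mirrors the documented division of labour between the two library functions.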