Copyright  (c) Amy de Buitléir 2012-2014
License  BSD-style
Maintainer  amy@nualeargais.ie
Stability  experimental
Portability  portable
Safe Haskell  Safe-Inferred
Language  Haskell98
A module containing private SOM internals. Most developers should use SOM instead. This module is subject to change without notice.
 class LearningFunction f where
  type LearningRate f
  rate :: f -> LearningRate f -> LearningRate f -> LearningRate f
 data DecayingGaussian a = DecayingGaussian a a a a a
 data StepFunction a = StepFunction a
 data ConstantFunction a = ConstantFunction a
 data SOM f t gm k p = SOM {
  gridMap :: gm p,
  learningFunction :: f,
  counter :: t
  }
 currentLearningFunction :: (LearningFunction f, Metric p ~ LearningRate f, Num (LearningRate f), Integral t) => SOM f t gm k p -> LearningRate f -> Metric p
 toGridMap :: GridMap gm p => SOM f t gm k p -> gm p
 adjustNode :: (Pattern p, Grid g, k ~ Index g, Num t) => g -> (t -> Metric p) -> p -> k -> k -> p -> p
 trainNeighbourhood :: (Pattern p, Grid (gm p), GridMap gm p, Index (BaseGrid gm p) ~ Index (gm p), LearningFunction f, Metric p ~ LearningRate f, Num (LearningRate f), Integral t) => SOM f t gm k p -> Index (gm p) -> p -> SOM f t gm k p
 incrementCounter :: Num t => SOM f t gm k p -> SOM f t gm k p
 justTrain :: (Ord (Metric p), Pattern p, Grid (gm p), GridMap gm (Metric p), GridMap gm p, Index (BaseGrid gm (Metric p)) ~ Index (gm p), Index (BaseGrid gm p) ~ Index (gm p), LearningFunction f, Metric p ~ LearningRate f, Num (LearningRate f), Integral t) => SOM f t gm k p -> p -> SOM f t gm k p
Documentation
class LearningFunction f where Source
A function used to adjust the models in a classifier.
type LearningRate f Source
rate :: f -> LearningRate f -> LearningRate f -> LearningRate f Source
rate f t d returns the learning rate for a node.
The parameter f is the learning function.
The parameter t indicates how many patterns (or pattern batches) have previously been presented to the classifier. Typically this is used to make the learning rate decay over time.
The parameter d is the grid distance from the node being updated to the BMU (Best Matching Unit).
The output is the learning rate for that node (the amount by which the node's model should be updated to match the target). The learning rate should be between zero and one.
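As a hedged, self-contained illustration of this contract, the sketch below re-declares a simplified version of the class locally and gives it a step-function instance. The primed names (LearningFunction', rate', StepFunction') are stand-ins for this example only, not part of the library.

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Local stand-in for the class in the synopsis above.
class LearningFunction' f where
  type LearningRate' f
  rate' :: f -> LearningRate' f -> LearningRate' f -> LearningRate' f

-- A step function: full learning rate r at the BMU (distance 0),
-- zero everywhere else, regardless of time t.
newtype StepFunction' a = StepFunction' a

instance (Fractional a, Eq a) => LearningFunction' (StepFunction' a) where
  type LearningRate' (StepFunction' a) = a
  rate' (StepFunction' r) _t d = if d == 0 then r else 0

main :: IO ()
main = do
  print (rate' (StepFunction' 0.25) 3 0 :: Double)  -- BMU: prints 0.25
  print (rate' (StepFunction' 0.25) 3 2 :: Double)  -- neighbour: prints 0.0
```

Note that the rate stays within the documented bounds of zero and one for any sensible r.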
Fractional a => LearningFunction (ConstantFunction a)  
(Fractional a, Eq a) => LearningFunction (StepFunction a)  
(Floating a, Fractional a, Num a) => LearningFunction (DecayingGaussian a) 
data DecayingGaussian a Source
A typical learning function for classifiers.
DecayingGaussian r0 rf w0 wf tf returns a bell curve-shaped function. At time zero, the maximum learning rate (applied to the BMU) is r0, and the neighbourhood width is w0. Over time the bell curve shrinks and the learning rate tapers off, until at time tf, the maximum learning rate (applied to the BMU) is rf, and the neighbourhood width is wf. Normally the parameters should be chosen such that:
 0 < rf << r0 < 1
 0 < wf << w0
 0 < tf
where << means "is much smaller than" (not the Haskell << operator!)
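One plausible shape for such a function, sketched as plain Haskell, is an exponential decay of both the BMU rate and the neighbourhood width, combined with a Gaussian in the grid distance. This is an assumption about the formula for illustration; the library's actual implementation may differ in detail.

```haskell
-- Sketch of a decaying-Gaussian learning rate consistent with the
-- description above (hypothetical formula, not the library's code).
-- r0, rf: initial/final rate at the BMU; w0, wf: initial/final
-- neighbourhood width; tf: time at which rf and wf are reached;
-- t: current time; d: grid distance from the BMU.
decayingGaussianRate
  :: Floating a => a -> a -> a -> a -> a -> a -> a -> a
decayingGaussianRate r0 rf w0 wf tf t d = r * exp (-(d*d) / (2*w*w))
  where
    a = t / tf              -- fraction of the training run elapsed
    r = r0 * (rf / r0) ** a -- rate at the BMU decays from r0 to rf
    w = w0 * (wf / w0) ** a -- neighbourhood width shrinks from w0 to wf

main :: IO ()
main =
  -- at time zero, the BMU (d = 0) learns at the full rate r0
  print (decayingGaussianRate 0.9 0.01 3 0.5 100 0 0 :: Double)  -- prints 0.9
```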
DecayingGaussian a a a a a 
Eq a => Eq (DecayingGaussian a)  
Show a => Show (DecayingGaussian a)  
Generic (DecayingGaussian a)  
(Floating a, Fractional a, Num a) => LearningFunction (DecayingGaussian a)  
type Rep (DecayingGaussian a)  
type LearningRate (DecayingGaussian a) = a 
data StepFunction a Source
A learning function that only updates the BMU and has a constant learning rate.
Eq a => Eq (StepFunction a)  
Show a => Show (StepFunction a)  
Generic (StepFunction a)  
(Fractional a, Eq a) => LearningFunction (StepFunction a)  
type Rep (StepFunction a)  
type LearningRate (StepFunction a) = a 
data ConstantFunction a Source
A learning function that updates all nodes with the same, constant learning rate. This can be useful for testing.
Eq a => Eq (ConstantFunction a)  
Show a => Show (ConstantFunction a)  
Generic (ConstantFunction a)  
Fractional a => LearningFunction (ConstantFunction a)  
type Rep (ConstantFunction a)  
type LearningRate (ConstantFunction a) = a 
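As a hedged illustration (a hypothetical stand-in, not the library's code), the contract amounts to ignoring both the time and distance arguments:

```haskell
-- Hypothetical stand-in for ConstantFunction's behaviour: the
-- learning rate ignores both time (t) and distance (d), so every
-- node in the map is nudged toward the target by the same amount.
constantRate :: a -> t -> d -> a
constantRate r _t _d = r

main :: IO ()
main = print [constantRate 0.1 t d | t <- [0, 10 :: Int], d <- [0, 5 :: Int]]
-- prints [0.1,0.1,0.1,0.1]
```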
A Self-Organising Map (SOM).
Although SOM implements GridMap, most users will only need the interface provided by Data.Datamining.Clustering.Classifier. If you choose to use the GridMap functions, please note:
 The functions adjust and adjustWithKey do not increment the counter. You can do so manually with incrementCounter.
 The functions map and mapWithKey are not implemented (they just return an error). It would be problematic to implement them because the input SOM and the output SOM would have to have the same Metric type.
SOM  

(GridMap gm p, (~) * k (Index (BaseGrid gm p)), Pattern p, Grid (gm p), GridMap gm (Metric p), (~) * k (Index (gm p)), (~) * k (Index (BaseGrid gm (Metric p))), Ord (Metric p), LearningFunction f, (~) * (Metric p) (LearningRate f), Num (LearningRate f), Integral t) => Classifier (SOM f t gm) k p  
Foldable gm => Foldable (SOM f t gm k)  
(Foldable gm, GridMap gm p, Grid (BaseGrid gm p)) => GridMap (SOM f t gm k) p  
(Eq f, Eq t, Eq (gm p)) => Eq (SOM f t gm k p)  
(Show f, Show t, Show (gm p)) => Show (SOM f t gm k p)  
Generic (SOM f t gm k p)  
Grid (gm p) => Grid (SOM f t gm k p)  
type BaseGrid (SOM f t gm k) p = BaseGrid gm p  
type Rep (SOM f t gm k p)  
type Index (SOM f t gm k p) = Index (gm p)  
type Direction (SOM f t gm k p) = Direction (gm p) 
currentLearningFunction :: (LearningFunction f, Metric p ~ LearningRate f, Num (LearningRate f), Integral t) => SOM f t gm k p -> LearningRate f -> Metric p Source
toGridMap :: GridMap gm p => SOM f t gm k p -> gm p Source
Extracts the grid and current models from the SOM.
A synonym for gridMap.
adjustNode :: (Pattern p, Grid g, k ~ Index g, Num t) => g -> (t -> Metric p) -> p -> k -> k -> p -> p Source
trainNeighbourhood :: (Pattern p, Grid (gm p), GridMap gm p, Index (BaseGrid gm p) ~ Index (gm p), LearningFunction f, Metric p ~ LearningRate f, Num (LearningRate f), Integral t) => SOM f t gm k p -> Index (gm p) -> p -> SOM f t gm k p Source
Trains the specified node and the neighbourhood around it to better match a target.
Most users should use train, which automatically determines the BMU and trains it and its neighbourhood.
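The idea can be sketched in self-contained form, using a Map of one-dimensional positions as a stand-in for the real grid and a simple linear blend as a stand-in for the pattern's makeSimilar operation. All names here are illustrative, not the library's.

```haskell
import qualified Data.Map as Map

-- Sketch of the idea behind trainNeighbourhood: every node's model
-- moves toward the target, weighted by the learning rate for its
-- grid distance from the BMU.
trainNeighbourhoodSketch
  :: (Int -> Double)    -- learning rate by distance from the BMU
  -> Int                -- index of the BMU
  -> Double             -- target pattern
  -> Map.Map Int Double -- models, keyed by grid position
  -> Map.Map Int Double
trainNeighbourhoodSketch rateAt bmu target = Map.mapWithKey adjust
  where
    -- blend each model toward the target by its node's learning rate
    adjust k model = model + rateAt (abs (k - bmu)) * (target - model)

main :: IO ()
main = print (Map.toList (trainNeighbourhoodSketch rateAt 0 1.0 models))
  where
    rateAt d = if d == 0 then 1.0 else 0.5  -- BMU fully, neighbours half
    models = Map.fromList [(0, 0.0), (1, 0.0), (2, 0.0)]
-- prints [(0,1.0),(1,0.5),(2,0.5)]
```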
incrementCounter :: Num t => SOM f t gm k p -> SOM f t gm k p Source
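Its behaviour can be illustrated with a simplified record; the primed names below are hypothetical stand-ins mirroring the synopsis, not the library's types.

```haskell
-- Simplified stand-in for SOM, keeping only the fields listed in the
-- synopsis; incrementCounter' just bumps the training-step counter.
data SOM' gm f t = SOM'
  { gridMap' :: gm
  , learningFunction' :: f
  , counter' :: t
  }

incrementCounter' :: Num t => SOM' gm f t -> SOM' gm f t
incrementCounter' s = s { counter' = counter' s + 1 }

main :: IO ()
main = print (counter' (incrementCounter' (SOM' () () (0 :: Int))))  -- prints 1
```

This is why, after calling adjust or adjustWithKey directly, the caller is expected to bump the counter by hand.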
justTrain :: (Ord (Metric p), Pattern p, Grid (gm p), GridMap gm (Metric p), GridMap gm p, Index (BaseGrid gm (Metric p)) ~ Index (gm p), Index (BaseGrid gm p) ~ Index (gm p), LearningFunction f, Metric p ~ LearningRate f, Num (LearningRate f), Integral t) => SOM f t gm k p -> p -> SOM f t gm k p Source