som-6.4: Self-Organising Maps

Portability: portable
Stability: experimental
Maintainer: amy@nualeargais.ie
Safe Haskell: Safe-Inferred

Data.Datamining.Clustering.SOMInternal

Description

A module containing private SOM internals. Most developers should use SOM instead. This module is subject to change without notice.


Documentation

data SOM gm k p

A Self-Organising Map (SOM).

Although SOM implements GridMap, most users will only need the interface provided by Data.Datamining.Clustering.Classifier. If you choose to use the GridMap functions, please note:

  1. The functions adjust and adjustWithKey do not increment the counter. You can do so manually with incrementCounter, as in the sketch after this list.
  2. The functions map and mapWithKey are not implemented (they just return an error). It would be problematic to implement them because the input SOM and the output SOM would have to have the same Metric type.
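A minimal sketch of point 1, assuming an existing SOM s, a valid node index k, and a hypothetical model-editing function tweak (GM is Math.Geometry.GridMap):

  import qualified Math.Geometry.GridMap as GM

  -- GridMap's adjust edits the model at k but leaves the training
  -- counter untouched, so bump the counter manually afterwards.
  s' = incrementCounter (GM.adjust tweak k s)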

Constructors

SOM 

Fields

sGridMap :: gm p

sLearningFunction :: Int -> Int -> Metric p

sCounter :: Int

Instances

(GridMap gm p, k ~ Index (BaseGrid gm p), Pattern p, Grid (gm p), GridMap gm (Metric p), k ~ Index (gm p), k ~ Index (BaseGrid gm (Metric p)), Ord (Metric p)) => Classifier (SOM gm) k p 
Foldable gm => Foldable (SOM gm k) 
(Foldable gm, GridMap gm p, Grid (BaseGrid gm p)) => GridMap (SOM gm k) p 
Generic (SOM gm k p) 
Grid (gm p) => Grid (SOM gm k p) 

toGridMap :: GridMap gm p => SOM gm k p -> gm p

Extracts the grid and current models from the SOM.
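For example, a sketch that lists each node index alongside its current model (s is an existing SOM; GM is Math.Geometry.GridMap):

  import qualified Math.Geometry.GridMap as GM

  -- Pair every node index with the model it currently holds.
  nodeModels = GM.toList (toGridMap s)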

adjustNode :: (Pattern p, Grid g, k ~ Index g) => g -> (Int -> Metric p) -> p -> k -> k -> p -> p

trainNeighbourhood :: (Pattern p, Grid (gm p), GridMap gm p, Index (BaseGrid gm p) ~ Index (gm p)) => SOM gm k p -> Index (gm p) -> p -> SOM gm k p

Trains the specified node and the neighbourhood around it to better match a target. Most users should use train, which automatically determines the BMU and trains it and its neighbourhood.
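A sketch of direct use, with s, k and target as hypothetical placeholders:

  -- Pull the model at node k, and its neighbours' models, toward
  -- target, bypassing the BMU search that train performs.
  s' = trainNeighbourhood s k target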

incrementCounter :: SOM gm k p -> SOM gm k p

counter :: SOM gm k p -> Int

setCounter :: Int -> SOM gm k p -> SOM gm k p

justTrain :: (Ord (Metric p), Pattern p, Grid (gm p), GridMap gm (Metric p), GridMap gm p, Index (BaseGrid gm (Metric p)) ~ Index (gm p), Index (BaseGrid gm p) ~ Index (gm p)) => SOM gm k p -> p -> SOM gm k p
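No description accompanies justTrain here; the name, and the fact that incrementCounter exists separately, suggest it performs a training step without advancing the counter. A hedged sketch of that reading (it is an assumption), counting several presentations as one batch:

  -- Assumption: justTrain updates the map but not the counter, so
  -- two presentations can be counted as a single time step.
  s' = incrementCounter (justTrain (justTrain s p1) p2)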

defaultSOM :: Floating (Metric p) => gm p -> Metric p -> Metric p -> Metric p -> Metric p -> Int -> SOM gm k p

Creates a classifier with a default (bell-shaped) learning function. Usage is defaultSOM gm r0 rf w0 wf tf, where:

gm
The geometry and initial models for this classifier. A reasonable choice here is lazyGridMap g ps, where g is a HexHexGrid, and ps is a set of random patterns.
r0
See description in decayingGaussian2.
rf
See description in decayingGaussian2.
w0
See description in decayingGaussian2.
wf
See description in decayingGaussian2.
tf
See description in decayingGaussian2.
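Putting this together, a minimal sketch. It assumes the grid package's hexHexGrid and lazyGridMap (their module locations vary between grid versions), a hypothetical one-dimensional Weight pattern written to the Pattern class's shape, and purely illustrative numeric parameters:

  {-# LANGUAGE TypeFamilies #-}
  import Data.Datamining.Clustering.SOM (defaultSOM)
  import Data.Datamining.Pattern (Pattern(..))
  import Math.Geometry.Grid (hexHexGrid)
  import Math.Geometry.GridMap.Lazy (lazyGridMap)

  -- Hypothetical pattern: the metric is absolute difference, and
  -- makeSimilar nudges a model toward the target by rate r.
  newtype Weight = Weight Double deriving Show

  instance Pattern Weight where
    type Metric Weight = Double
    difference (Weight a) (Weight b) = abs (a - b)
    makeSimilar (Weight target) r (Weight x)
      = Weight (x + r * (target - x))

  -- A hexHexGrid of size 2 has 7 tiles, so supply 7 initial models.
  som = defaultSOM
          (lazyGridMap (hexHexGrid 2)
                       (map Weight [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]))
          0.5 0.1   -- r0, rf
          2.0 0.5   -- w0, wf
          1000      -- tf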

customSOM :: gm p -> (Int -> Int -> Metric p) -> SOM gm k p

Creates a classifier with a custom learning function. Usage is customSOM gm f, where:

gm
The geometry and initial models for this classifier. A reasonable choice here is lazyGridMap g ps, where g is a HexHexGrid, and ps is a set of random patterns.
f
A function used to adjust the models in the classifier. This function will be invoked with two parameters. The first parameter will indicate how many patterns (or pattern batches) have previously been presented to this classifier. Typically this is used to make the learning rate decay over time. The second parameter to the function is the grid distance from the node being updated to the BMU (Best Matching Unit). The output is the learning rate for that node (the amount by which the node's model should be updated to match the target). The learning rate should be between zero and one.
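For instance, decayingGaussian (documented below), partially applied to its first three parameters, has exactly the shape f requires (gm and the numbers here are placeholders):

  -- decayingGaussian r0 w0 tMax, partially applied, leaves the
  -- (time, distance) arguments that the learning function takes.
  som = customSOM gm (decayingGaussian 0.5 2.0 1000)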

decayingGaussian :: Floating a => a -> a -> Int -> Int -> Int -> a

Configures one possible learning function for classifiers. decayingGaussian r0 w0 tMax returns a bell curve-shaped function. At time zero, the maximum learning rate (applied to the BMU) is r0, and the neighbourhood width is w0. Over time the neighbourhood width shrinks and the learning rate tapers off.

decayingGaussian2 :: Floating a => a -> a -> a -> a -> Int -> Int -> Int -> a

Configures a typical learning function for classifiers. decayingGaussian r0 rf w0 wf tf returns a bell curve-shaped function. At time zero, the maximum learning rate (applied to the BMU) is r0, and the neighbourhood width is w0. Over time the bell curve shrinks and the learning rate tapers off, until at time tf, the maximum learning rate (applied to the BMU) is rf, and the neighbourhood width is wf. Normally the parameters should be chosen such that:

  • 0 < rf << r0 < 1
  • 0 < wf << w0

where << means "is much smaller than" (it is not the Haskell << operator).
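To make the shape concrete, a small sketch (the values are approximate and follow from the description above; the remaining two arguments are the time step and the grid distance from the BMU):

  f = decayingGaussian2 0.5 0.1 2.0 0.5 1000
  -- f 0 0    ~ 0.5 : at time zero the BMU learns at rate r0
  -- f 0 3    < 0.5 : a node three steps from the BMU learns less
  -- f 1000 0 ~ 0.1 : by time tf the BMU rate has decayed to rf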