som-8.2.2: Self-Organising Maps.

Copyright: (c) Amy de Buitléir 2012-2015
License: BSD-style
Maintainer: amy@nualeargais.ie
Stability: experimental
Portability: portable
Safe Haskell: Safe
Language: Haskell98

Data.Datamining.Clustering.SSOM

Description

A Simplified Self-organising Map (SSOM). An SSOM maps input patterns onto a set, where each element of the set is a model of the input data. An SSOM is like a Kohonen Self-organising Map (SOM), except that instead of a grid, it uses a simple set of unconnected models. Since the models are unconnected, only the model that best matches the input is ever updated. This makes an SSOM faster than a SOM; however, topological relationships within the input data are not preserved. This implementation supports the use of non-numeric patterns.

In layman's terms, an SSOM can be useful when you want to build a set of models of some data. A tutorial is available at https://github.com/mhwombat/som/wiki.

References:

  • de Buitléir, Amy, Russell, Michael and Daly, Mark. (2012). Wains: A pattern-seeking artificial life species. Artificial Life, 18 (4), 399-423.
  • Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biological Cybernetics, 43 (1), 59–69.

Construction

data SSOM t x k p Source

A Simplified Self-Organising Map (SSOM). x is the type of the learning rate and the difference metric. t is the type of the counter. k is the type of the model indices. p is the type of the input patterns and models.

Constructors

SSOM 

Fields

sMap :: Map k p

Maps node indices to models.

learningRate :: t -> x

A function which determines the learning rate for a node. The input parameter indicates how many patterns (or pattern batches) have previously been presented to the classifier. Typically this is used to make the learning rate decay over time. The output is the learning rate for that node (the amount by which the node's model should be updated to match the target). The learning rate should be between zero and one.

difference :: p -> p -> x

A function which compares two patterns and returns a non-negative number representing how different the patterns are. A result of 0 indicates that the patterns are identical.
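For instance, with scalar (Double) patterns, a suitable difference function could be the absolute difference. (This is a hypothetical example for illustration; it is not part of this package.)

```haskell
-- Hypothetical difference metric for scalar (Double) patterns:
-- non-negative, and zero exactly when the patterns are identical.
absDifference :: Double -> Double -> Double
absDifference a b = abs (a - b)

main :: IO ()
main = do
  print (absDifference 3.0 5.0)  -- prints 2.0
  print (absDifference 4.0 4.0)  -- prints 0.0
```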

makeSimilar :: p -> x -> p -> p

A function which updates models. For example, if this function is f, then f target amount pattern returns a modified copy of pattern that is more similar to target than pattern is. The magnitude of the adjustment is controlled by the amount parameter, which should be a number between 0 and 1. Larger values for amount permit greater adjustments. If amount=1, the result should be identical to the target. If amount=0, the result should be the unmodified pattern.
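As a concrete (hypothetical) example for scalar (Double) patterns, makeSimilar could be linear interpolation towards the target, which satisfies all of the conditions above:

```haskell
-- Hypothetical makeSimilar for scalar (Double) patterns: move the
-- pattern a fraction `amount` of the way towards the target.
adjust :: Double -> Double -> Double -> Double
adjust target amount pattern = pattern + amount * (target - pattern)

main :: IO ()
main = do
  print (adjust 10 0 2)    -- amount=0: pattern unchanged; prints 2.0
  print (adjust 10 1 2)    -- amount=1: result equals target; prints 10.0
  print (adjust 10 0.5 2)  -- halfway between pattern and target; prints 6.0
```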

counter :: t

A counter used as a "time" parameter. If you create the SSOM with a counter value of 0, and don't modify the counter directly, then it will equal the number of patterns that this SSOM has classified.

Instances

(Num t, Ord x, Num x, Ord k) => Classifier (SSOM t) x k p Source 
Generic (SSOM t x k p) Source 
(NFData t, NFData k, NFData p) => NFData (SSOM t x k p) Source 
type Rep (SSOM t x k p) Source 

Deconstruction

toMap :: SSOM t x k p -> Map k p Source

Extracts the current models from the SSOM. A synonym for sMap.

Learning functions

exponential :: Floating a => a -> a -> a -> a Source

A typical learning function for classifiers. exponential r0 d t returns the learning rate at time t. When t = 0, the learning rate is r0. Over time the learning rate decays exponentially; the decay rate is d. Normally the parameters are chosen such that:

  • 0 < r0 < 1
  • 0 < d
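A definition consistent with this description is shown below as a sketch (the library's own exponential should behave equivalently, but this local definition is only an illustration):

```haskell
-- Sketch of an exponentially decaying learning rate: the rate is r0
-- when t = 0 and decays exponentially over time with decay rate d.
exponential :: Floating a => a -> a -> a -> a
exponential r0 d t = r0 * exp (-d * t)

main :: IO ()
main = do
  print (exponential 0.5 0.1 0 :: Double)          -- prints 0.5
  print (exponential 0.5 0.1 10 < (0.5 :: Double)) -- prints True
```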

Advanced control

trainNode :: (Num t, Ord k) => SSOM t x k p -> k -> p -> SSOM t x k p Source

Trains the specified node to better match a target. Most users should use train, which automatically determines the BMU and trains it.
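The following self-contained sketch illustrates what training a single node involves, assuming (per the field descriptions above) that the node's model is adjusted towards the target by makeSimilar at the current learning rate. The name trainNodeSketch and its simplified signature are hypothetical; counter handling is omitted, and types are specialised to Double for brevity.

```haskell
import qualified Data.Map as M

-- Hypothetical sketch: adjust one node's model towards a target.
trainNodeSketch
  :: (Double -> Double -> Double -> Double)  -- makeSimilar: target -> amount -> pattern -> pattern'
  -> Double                                  -- current learning rate
  -> M.Map Int Double                        -- models, indexed by node
  -> Int                                     -- index of the node to train
  -> Double                                  -- target pattern
  -> M.Map Int Double                        -- updated models
trainNodeSketch makeSim r m k target = M.adjust (makeSim target r) k m

main :: IO ()
main = do
  let makeSim target amount p = p + amount * (target - p)
      m' = trainNodeSketch makeSim 0.5 (M.fromList [(0, 2), (1, 8)]) 0 10
  print (M.lookup 0 m')  -- prints Just 6.0
  print (M.lookup 1 m')  -- prints Just 8.0 (the untrained node is unchanged)
```

Note that only the specified node is updated; this mirrors the SSOM behaviour described above, where only the best-matching model is ever adjusted.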