Copyright | (c) Amy de Buitléir 2012-2014
---|---
License | BSD-style
Maintainer | amy@nualeargais.ie
Stability | experimental
Portability | portable
Safe Haskell | Safe-Inferred
Language | Haskell98
A Simplified Self-organising Map (SSOM). An SSOM maps input patterns onto a set, where each element in the set is a model of the input data. An SSOM is like a Kohonen Self-organising Map (SOM), except that instead of a grid, it uses a simple set of unconnected models. Since the models are unconnected, only the model that best matches the input is ever updated. This makes it faster; however, topological relationships within the input data are not preserved. This implementation supports the use of non-numeric patterns.
In layman's terms, an SSOM can be useful when you want to build a set of models from some data. A tutorial is available at https://github.com/mhwombat/som/wiki.
References:
- de Buitléir, Amy, Russell, Michael and Daly, Mark. (2012). Wains: A pattern-seeking artificial life species. Artificial Life, 18 (4), 399-423.
- Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biological Cybernetics, 43 (1), 59–69.
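The idea described above can be sketched with a small toy example. This is not this library's API: here the "map" is just a `Map` from keys to numeric models, the difference between a pattern and a model is absolute distance, and training adjusts only the best-matching model, exactly as the module description says.

```haskell
import qualified Data.Map.Strict as M
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Toy stand-in for an SSOM: keys mapped to numeric models.
type Models k = M.Map k Double

-- | Key of the model that best matches the input pattern.
bestMatch :: Ord k => Models k -> Double -> k
bestMatch m x = fst $ minimumBy (comparing (abs . subtract x . snd)) (M.toList m)

-- | Move the winning model towards the input by learning rate r;
-- all other models are left untouched.
trainStep :: Ord k => Double -> Models k -> Double -> Models k
trainStep r m x = M.adjust (\p -> p + r * (x - p)) (bestMatch m x) m
```

For example, `foldl (trainStep 0.5) (M.fromList [(0, 0.0), (1, 10.0)]) [1, 2, 9]` ends with the first model near the small inputs and the second model near 9, because each input updates only its closest model.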
- data SSOM f t k p = SSOM { sMap :: Map k p, learningFunction :: f, counter :: t }
- data Exponential a = Exponential a a a
- toMap :: SSOM f t k p -> Map k p
- trainNode :: (Pattern p, LearningFunction f, Metric p ~ LearningRate f, Num (LearningRate f), Ord k, Integral t) => SSOM f t k p -> k -> p -> SSOM f t k p
Construction
data SSOM f t k p Source

A Simplified Self-Organising Map (SSOM).

Constructors:

- SSOM { sMap :: Map k p, learningFunction :: f, counter :: t }

Instances:

- (Pattern p, Ord (Metric p), LearningFunction f, Metric p ~ LearningRate f, Num (LearningRate f), Ord k, Integral t) => Classifier (SSOM f t) k p
- (Eq f, Eq t, Eq k, Eq p) => Eq (SSOM f t k p)
- (Show f, Show t, Show k, Show p) => Show (SSOM f t k p)
- Generic (SSOM f t k p)
- type Rep (SSOM f t k p)
data Exponential a Source
A typical learning function for classifiers. `Exponential r0 rf tf` returns a gaussian function. At time zero, the learning rate is r0. Over time the learning rate tapers off, until at time tf, the learning rate is rf. Normally the parameters should be chosen such that:

- 0 < rf << r0 < 1
- 0 < tf

where << means "is much smaller than" (not the Haskell << operator!)
Constructors:

- Exponential a a a

Instances:

- Eq a => Eq (Exponential a)
- Show a => Show (Exponential a)
- Generic (Exponential a)
- (Floating a, Fractional a, Num a) => LearningFunction (Exponential a)
- type Rep (Exponential a)
- type LearningRate (Exponential a) = a
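The description above pins down only the endpoints of the curve: r0 at time zero and rf at time tf. One formula with exactly that behaviour is exponential interpolation between r0 and rf. The sketch below is an illustrative assumption, not necessarily the formula the library uses; check the package source for the exact definition.

```haskell
-- Hypothetical decay curve matching the documented endpoints:
-- rate r0 rf tf 0 == r0, rate r0 rf tf tf ~= rf, decaying
-- exponentially in between. (Assumed formula, for illustration only.)
rate :: Double -> Double -> Double -> Double -> Double
rate r0 rf tf t = r0 * (rf / r0) ** (t / tf)
```

For instance, with r0 = 0.9, rf = 0.01 and tf = 100, `rate 0.9 0.01 100 0` is 0.9 and `rate 0.9 0.01 100 100` is (up to rounding) 0.01.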
Deconstruction
toMap :: SSOM f t k p -> Map k p Source
Extracts the current models from the SSOM. A synonym for `sMap`.
Advanced control
trainNode :: (Pattern p, LearningFunction f, Metric p ~ LearningRate f, Num (LearningRate f), Ord k, Integral t) => SSOM f t k p -> k -> p -> SSOM f t k p Source
Trains the specified node to better match a target. Most users should use `train`, which automatically determines the BMU and trains it.
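As a toy illustration (using plain numeric models rather than this library's types and signatures), training a specified node moves only the model stored under that key towards the target, even when another model is a closer match:

```haskell
import qualified Data.Map.Strict as M

-- Toy stand-in for trainNode (not the library's signature): nudge the
-- model stored under key k towards target x by learning rate r.
trainNodeToy :: Ord k => Double -> M.Map k Double -> k -> Double -> M.Map k Double
trainNodeToy r s k x = M.adjust (\p -> p + r * (x - p)) k s
```

For example, `trainNodeToy 0.5 (M.fromList [(0, 0.0), (1, 10.0)]) 1 0.0` moves the model at key 1 to 5.0, even though the model at key 0 already matches the target exactly; `train` would instead have picked key 0 as the BMU.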