Copyright | (c) Amy de Buitléir 2012-2015 |
---|---|
License | BSD-style |
Maintainer | amy@nualeargais.ie |
Stability | experimental |
Portability | portable |
Safe Haskell | Safe-Inferred |
Language | Haskell98 |
A Simplified Self-organising Map (SSOM). An SSOM maps input patterns onto a set, where each element in the set is a model of the input data. An SSOM is like a Kohonen Self-organising Map (SOM), except that instead of a grid, it uses a simple set of unconnected models. Since the models are unconnected, only the model that best matches the input is ever updated. This makes it faster; however, topological relationships within the input data are not preserved. This implementation supports the use of non-numeric patterns.
In layman's terms, an SSOM can be useful when you want to build a set of models of some data. A tutorial is available at https://github.com/mhwombat/som/wiki.
References:
- de Buitléir, Amy, Russell, Michael and Daly, Mark. (2012). Wains: A pattern-seeking artificial life species. Artificial Life, 18 (4), 399-423.
- Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biological Cybernetics, 43 (1), 59–69.
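A minimal standalone sketch of the winner-take-all behaviour described above. The names (models, diff, adjust, classify, trainStep) and the numeric patterns are illustrative assumptions, not part of this module; the point is that only the best-matching model is updated.

```haskell
import qualified Data.Map as M
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Hypothetical toy models: three numeric patterns indexed by Int.
models :: M.Map Int Double
models = M.fromList [(0, 1.0), (1, 5.0), (2, 9.0)]

-- A simple difference metric for numeric patterns.
diff :: Double -> Double -> Double
diff a b = abs (a - b)

-- Move a model a fraction r of the way towards the target pattern.
adjust :: Double -> Double -> Double -> Double
adjust target r model = model + r * (target - model)

-- Find the index of the model that best matches the input pattern.
classify :: Double -> Int
classify p = fst . minimumBy (comparing snd) . M.toList $ M.map (diff p) models

-- Train on one pattern: only the winning model is updated.
trainStep :: Double -> Double -> M.Map Int Double
trainStep r p = M.adjust (adjust p r) (classify p) models

main :: IO ()
main = print (trainStep 0.5 8.0)
-- Model 2 (value 9.0) wins for input 8.0 and moves halfway towards it.
```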
- data SSOM t x k p = SSOM
    { sMap :: Map k p
    , learningRate :: t -> x
    , difference :: p -> p -> x
    , makeSimilar :: p -> x -> p -> p
    , counter :: t
    }
- toMap :: SSOM t x k p -> Map k p
- exponential :: Floating a => a -> a -> a -> a
- trainNode :: (Num t, Ord k) => SSOM t x k p -> k -> p -> SSOM t x k p
Construction
A Simplified Self-Organising Map (SSOM).
x
is the type of the learning rate and the difference metric.
t
is the type of the counter.
k
is the type of the model indices.
p
is the type of the input patterns and models.
Constructors:

SSOM

- sMap :: Map k p, the current models, indexed by key
- learningRate :: t -> x, a function that returns the learning rate for a given value of the counter
- difference :: p -> p -> x, a function that measures how dissimilar two patterns are
- makeSimilar :: p -> x -> p -> p, a function that, given a target pattern, an amount, and a model, adjusts the model to be more similar to the target
- counter :: t, a counter incremented with each training step
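To keep this example self-contained, it mirrors the SSOM record locally (in practice the type comes from this module) and builds a value with hypothetical numeric models, a constant learning rate, absolute difference, and linear interpolation towards the target. All concrete choices here are illustrative assumptions.

```haskell
import qualified Data.Map as M

-- Standalone mirror of the SSOM record, so this sketch compiles on its own.
data SSOM t x k p = SSOM
  { sMap         :: M.Map k p
  , learningRate :: t -> x
  , difference   :: p -> p -> x
  , makeSimilar  :: p -> x -> p -> p
  , counter      :: t
  }

-- An SSOM with three numeric models indexed by Char.
mySSOM :: SSOM Int Double Char Double
mySSOM = SSOM
  { sMap         = M.fromList [('a', 1.0), ('b', 5.0), ('c', 9.0)]
  , learningRate = const 0.1                               -- constant rate
  , difference   = \x y -> abs (x - y)                     -- absolute difference
  , makeSimilar  = \target r model -> model + r * (target - model)
  , counter      = 0
  }

main :: IO ()
main = print (M.toList (sMap mySSOM))
```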
Deconstruction
toMap :: SSOM t x k p -> Map k p Source
Extracts the current models from the SSOM.
A synonym for sMap.
Learning functions
exponential :: Floating a => a -> a -> a -> a Source
A typical learning function for classifiers.
exponential r0 d t returns the learning rate at time t. When t = 0, the learning rate is r0. Over time the learning rate decays exponentially; the decay rate is d.
Normally the parameters are chosen such that:
- 0 < r0 < 1
- 0 < d
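One plausible implementation consistent with the description above (a sketch, not necessarily the exact definition used by this module): the rate is r0 at time 0 and decays exponentially with rate d.

```haskell
-- Exponentially decaying learning rate: r0 at t = 0, decay rate d.
exponential :: Floating a => a -> a -> a -> a
exponential r0 d t = r0 * exp (-d * t)

main :: IO ()
main = print (exponential 0.8 0.1 0 :: Double)
```

For example, exponential 0.8 0.1 0 yields 0.8, and the rate shrinks towards 0 as t grows.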