som-8.0.5: Self-Organising Maps.

Copyright: (c) Amy de Buitléir 2012-2015
License: BSD-style
Maintainer: amy@nualeargais.ie
Stability: experimental
Portability: portable
Safe Haskell: Safe
Language: Haskell98

Data.Datamining.Clustering.DSOMInternal

Description

A module containing private DSOM internals. Most developers should use DSOM instead. This module is subject to change without notice.

Synopsis

Documentation

data DSOM gm x k p Source

A Dynamic Self-Organising Map (DSOM).

Although DSOM implements GridMap, most users will only need the interface provided by Data.Datamining.Clustering.Classifier. If you choose to use the GridMap functions, please note:

  1. The functions adjust and adjustWithKey do not increment the counter. You can do so manually with incrementCounter.
  2. The functions map and mapWithKey are not implemented (they just return an error). It would be problematic to implement them because the input DSOM and the output DSOM would have to have the same Metric type.

Constructors

DSOM 

Fields

gridMap :: gm p

Maps patterns to tiles in a regular grid. In the context of a SOM, the tiles are called "nodes".

learningRate :: x -> x -> x -> x

A function which determines how quickly the SOM learns.

difference :: p -> p -> x

A function which compares two patterns and returns a non-negative number representing how different the patterns are. A result of 0 indicates that the patterns are identical.

makeSimilar :: p -> x -> p -> p

A function which updates models. If this function is f, then f target amount pattern returns a modified copy of pattern that is more similar to target than pattern is. The magnitude of the adjustment is controlled by the amount parameter, which should be a number between 0 and 1. Larger values for amount permit greater adjustments. If amount=1, the result should be identical to the target. If amount=0, the result should be the unmodified pattern.
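As an illustration of the difference and makeSimilar contracts, here is a sketch (not part of this module) of a suitable pair of functions for patterns represented as lists of Doubles. The names euclideanDifference and interpolate are hypothetical:

```haskell
-- Euclidean distance between two equal-length vectors; non-negative,
-- and 0 exactly when the patterns are identical.
euclideanDifference :: [Double] -> [Double] -> Double
euclideanDifference xs ys =
  sqrt . sum $ zipWith (\a b -> (a - b) ^ (2 :: Int)) xs ys

-- Move each component of the pattern a fraction `amount` of the way
-- towards the corresponding component of the target.
-- amount = 0 leaves the pattern unchanged; amount = 1 yields the target.
interpolate :: [Double] -> Double -> [Double] -> [Double]
interpolate target amount = zipWith (\t p -> p + amount * (t - p)) target
```

With these, euclideanDifference plays the role of the difference field and interpolate the role of makeSimilar.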

Instances

(GridMap gm p, k ~ Index (BaseGrid gm p), FiniteGrid (gm p), GridMap gm x, k ~ Index (gm p), k ~ Index (gm x), k ~ Index (BaseGrid gm x), Ord k, Ord x, Num x, Fractional x) => Classifier (DSOM gm) x k p Source 
Foldable gm => Foldable (DSOM gm x k) Source 
(Foldable gm, GridMap gm p, FiniteGrid (BaseGrid gm p)) => GridMap (DSOM gm x k) p Source 
Grid (gm p) => Grid (DSOM gm x k p) Source 
type BaseGrid (DSOM gm x k) p = BaseGrid gm p Source 
type Index (DSOM gm x k p) = Index (gm p) Source 
type Direction (DSOM gm x k p) = Direction (gm p) Source 

withGridMap :: (gm p -> gm p) -> DSOM gm x k p -> DSOM gm x k p Source

toGridMap :: GridMap gm p => DSOM gm x k p -> gm p Source

Extracts the grid and current models from the DSOM.

adjustNode :: (FiniteGrid (gm p), GridMap gm p, k ~ Index (gm p), k ~ Index (BaseGrid gm p), Ord k, Num x, Fractional x) => gm p -> (p -> x -> p -> p) -> (p -> p -> x) -> (x -> x -> x) -> p -> k -> k -> p -> p Source

scaleDistance :: (Num a, Fractional a) => Int -> Int -> a Source

trainNeighbourhood :: (FiniteGrid (gm p), GridMap gm p, k ~ Index (gm p), k ~ Index (BaseGrid gm p), Ord k, Num x, Fractional x) => DSOM gm x t p -> k -> p -> DSOM gm x k p Source

Trains the specified node and the neighbourhood around it to better match a target. Most users should use train, which automatically determines the BMU and trains it and its neighbourhood.

justTrain :: (FiniteGrid (gm p), GridMap gm p, GridMap gm x, k ~ Index (gm p), k ~ Index (gm x), k ~ Index (BaseGrid gm p), k ~ Index (BaseGrid gm x), Ord k, Ord x, Num x, Fractional x) => DSOM gm x t p -> p -> DSOM gm x k p Source

rougierLearningFunction :: (Eq a, Ord a, Floating a) => a -> a -> a -> a -> a -> a Source

Configures a learning function that depends not on the time, but on how good a model we already have for the target. If the BMU is an exact match for the target, no learning occurs. Usage is rougierLearningFunction r p, where r is the maximal learning rate (0 <= r <= 1), and p is the elasticity.
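To make the behaviour concrete, here is a plausible sketch of such a learning function (the actual implementation in this package may differ; the argument order after r and p is an assumption):

```haskell
-- Sketch of a DSOM-style learning rate in the spirit of
-- Rougier & Boniface: the rate depends on how well the BMU already
-- matches the target, not on elapsed time.
--   r       maximal learning rate (0 <= r <= 1)
--   p       elasticity
--   bmuDiff difference between the BMU's model and the target
--   diff    difference between this node's model and the target
--   dist    grid distance from this node to the BMU
rougierSketch :: (Eq a, Floating a) => a -> a -> a -> a -> a -> a
rougierSketch r p bmuDiff diff dist
  | bmuDiff == 0 = 0                    -- BMU is an exact match: no learning
  | otherwise    = r * abs diff * exp (-k * k)
  where k = dist / (p * abs bmuDiff)
```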

NOTE: When using this learning function, ensure that abs . difference is always between 0 and 1, inclusive. Otherwise you may get invalid learning rates.
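One way to satisfy this requirement is to wrap a raw difference function so its result is scaled and clamped into [0, 1]. This is a sketch, not part of this module; maxDiff is a hypothetical upper bound on the raw difference for your pattern type:

```haskell
-- Scale a raw difference by a known maximum and clamp it to [0, 1],
-- making it safe to combine with rougierLearningFunction.
normalisedDifference :: Double                                -- ^ maxDiff
                     -> ([Double] -> [Double] -> Double)      -- ^ raw difference
                     -> [Double] -> [Double] -> Double
normalisedDifference maxDiff rawDiff a b =
  min 1 (rawDiff a b / maxDiff)
```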